Q:
How can I pass data from an aspx page to an ascx modal popup?
I'm fairly new to ASP.NET and trying to learn how things are done. I come from a C# background so the code-behind portion is easy, but thinking like a web developer is unfamiliar.
I have an aspx page that contains a grid of checkboxes. I have a button that is coded via a Button_Click event to collect a list of which rows are checked and create a session variable out of that list. The same button is referenced (via TargetControlID) by my ascx page's ModalPopupExtender which controls the panel on the ascx page.
When the button is clicked, the modal popup opens but the Button_Click event is never fired, so the modal doesn't get its session data.
Since the two pages are separate, I can't call the ModalPopupExtender from the aspx.cs code, I can't reach the list of checkboxes from the ascx.cs code, and I don't see a way to populate my session variable and then programmatically activate some other hidden button or control which will then open my modal popup.
Any thoughts?
A:
A usercontrol (.ascx) file is just a set of controls that you have grouped together to provide some reusable functionality. The controls defined in it are still added to the page's (.aspx) control collection during the page lifecycle. The ModalPopupExtender uses JavaScript and DHTML to show and hide the controls in the usercontrol client-side. What you are seeing is that the click event is being handled client-side by the ModalPopupExtender, and it is canceling the post-back to the server. This is the default behavior by design. You certainly can access the page's control collection from the code-behind of your usercontrol, though, because it is all part of the same control tree. Just use the FindControl(xxx) method of any control to search for the child you need.
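As a minimal sketch of that approach from the usercontrol's code-behind (the GridView ID "chkGrid" used here is hypothetical):
// In the .ascx code-behind: walk up to the hosting page, then search
// its control tree for the grid declared in the .aspx markup.
protected void Page_Load(object sender, EventArgs e)
{
    GridView grid = (GridView)this.Page.FindControl("chkGrid");
    if (grid != null)
    {
        // enumerate grid.Rows here to read the checkbox states
    }
}
Keep in mind that FindControl only searches a single naming container; if the grid sits inside another container you may need to search recursively.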
A:
After some research following DancesWithBamboo's answer, I figured out how to make it work.
An example reference to my ascx page within my aspx page:
<uc1:ChildPage ID="MyModalPage" runat="server" />
The aspx code-behind to grab and open the ModalPopupExtender (named modalPopup) would look like this:
AjaxControlToolkit.ModalPopupExtender mpe =
(AjaxControlToolkit.ModalPopupExtender)
MyModalPage.FindControl("modalPopup");
mpe.Show();
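Putting the pieces together, the button's server-side click handler can populate the session variable and then open the popup from code. A sketch, assuming the ModalPopupExtender's TargetControlID has been pointed at a dummy control so the real button still posts back (GetCheckedRowIds and the other names are hypothetical):
protected void btnOpen_Click(object sender, EventArgs e)
{
    // collect the checked rows first...
    Session["CheckedRows"] = GetCheckedRowIds();

    // ...then show the modal explicitly
    AjaxControlToolkit.ModalPopupExtender mpe =
        (AjaxControlToolkit.ModalPopupExtender)
        MyModalPage.FindControl("modalPopup");
    mpe.Show();
}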
A:
Sorry, but I'm confused. You can't call an ascx directly, so...
Is the modal code you are calling within the same page, like a hidden panel, etc.?
Or is it another aspx page that you are calling on a click event?
Q:
What is a good way to denormalize a mysql database?
I have a large database of normalized order data that is becoming very slow to query for reporting. Many of the queries that I use in reports join five or six tables and are having to examine tens or hundreds of thousands of lines.
There are lots of queries and most have been optimized as much as possible to reduce server load and increase speed. I think it's time to start keeping a copy of the data in a denormalized format.
Any ideas on an approach? Should I start with a couple of my worst queries and go from there?
A:
I know more about mssql than mysql, but I don't think the number of joins or the number of rows you are talking about should cause you too many problems with the correct indexes in place. Have you analyzed the query plan to see if you are missing any?
http://dev.mysql.com/doc/refman/5.0/en/explain.html
That being said, once you are satisfied with your indexes and have exhausted all other avenues, de-normalization might be the right answer. If you just have one or two queries that are problems, a manual approach is probably appropriate, whereas some sort of data warehousing tool might be better for creating a platform to develop data cubes.
Here's a site I found that touches on the subject:
http://www.meansandends.com/mysql-data-warehouse/?link_body%2Fbody=%7Bincl%3AAggregation%7D
Here's a simple technique that you can use to keep denormalizing queries simple, if you're just doing a few at a time (and I'm not replacing your OLTP tables, just creating a new one for reporting purposes). Let's say you have this query in your application:
select a.name, b.address from tbla a
join tblb b on b.fk_a_id = a.id where a.id=1
You could create a denormalized table and populate with almost the same query:
create table tbl_ab (a_id, a_name, b_address);
-- (types elided)
Notice the underscores match the table aliases you use
insert tbl_ab select a.id, a.name, b.address from tbla a
join tblb b on b.fk_a_id = a.id
-- no where clause because you want everything
Then to fix your app to use the new denormalized table, switch the dots for underscores.
select a_name as name, b_address as address
from tbl_ab where a_id = 1;
For huge queries this can save a lot of time and makes it clear where the data came from, and you can re-use the queries you already have.
Remember, I'm only advocating this as the last resort. I bet there's a few indexes that would help you. And when you de-normalize, don't forget to account for the extra space on your disks, and figure out when you will run the query to populate the new tables. This should probably be at night, or whenever activity is low. And the data in that table, of course, will never exactly be up to date.
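As a sketch of that populate step, the refresh can be a short script run nightly from cron or a scheduler, reusing the tables from the example above:
-- refresh_tbl_ab.sql: run when activity is low
truncate table tbl_ab;
insert tbl_ab select a.id, a.name, b.address from tbla a
join tblb b on b.fk_a_id = a.id;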
[Yet another edit] Don't forget that the new tables you create need to be indexed too! The good part is that you can index to your heart's content and not worry about update lock contention, since aside from your bulk insert the table will only see selects.
A:
MySQL 5 does support views, which may be helpful in this scenario. It sounds like you've already done a lot of optimizing, but if not you can use MySQL's EXPLAIN syntax to see what indexes are actually being used and what is slowing down your queries.
As far as going about denormalizing data (whether you're using views or just duplicating data in a more efficient manner), I think starting with the slowest queries and working your way through is a good approach to take.
A:
I know this is a bit tangential, but have you tried seeing if there are more indexes you can add?
I don't have a lot of DB background, but I am working with databases a lot recently, and I've been finding that a lot of the queries can be improved just by adding indexes.
We are using DB2, and there are two commands called db2expln and db2advis: the first will indicate whether table scans vs index scans are being used, and the second will recommend indexes you can add to improve performance. I'm sure MySQL has similar tools...
Anyways, if this is something you haven't considered yet, it has been helping a lot with me... but if you've already gone this route, then I guess it's not what you are looking for.
Another possibility is a "materialized view" (or as they call it in DB2), which lets you specify a table that is essentially built of parts from multiple tables. Thus, rather than denormalizing the actual columns, you could provide this view to access the data... but I don't know if this has severe performance impacts on inserts/updates/deletes (but if it is "materialized", then it should help with selects since the values are physically stored separately).
A:
In line with some of the other comments, I would definitely have a look at your indexing.
One thing I discovered earlier this year on our MySQL databases was the power of composite indexes. For example, if you are reporting on order numbers over date ranges, a composite index on the order number and order date columns could help. I believe MySQL can only use one index for the query, so if you just had separate indexes on the order number and order date it would have to decide on just one of them to use. Using the EXPLAIN command can help determine this.
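As a sketch of that idea against a hypothetical orders table:
create index idx_order_num_date on orders (order_number, order_date);

-- EXPLAIN shows whether the optimizer actually picks the composite index
explain select * from orders
where order_number = 12345
and order_date between '2008-01-01' and '2008-01-31';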
To give an indication of the performance with good indexes (including numerous composite indexes), I can run queries joining 3 tables in our database and get almost instant results in most cases. For more complex reporting, most of the queries run in under 10 seconds. These 3 tables have 33 million, 110 million and 140 million rows respectively. Note that we had also already normalised these slightly to speed up our most common query on the database.
More information regarding your tables and the types of reporting queries may allow further suggestions.
A:
For MySQL I like this talk: Real World Web: Performance & Scalability, MySQL Edition. This contains a lot of different pieces of advice for getting more speed out of MySQL.
A:
You might also want to consider selecting into a temporary table and then performing queries on that temporary table. This would avoid the need to rejoin your tables for every single query you issue (assuming that you can use the temporary table for numerous queries, of course). This basically gives you denormalized data, but if you are only doing select calls, there's no concern about data consistency.
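A minimal sketch of that approach (the table and column names are hypothetical):
-- build the joined result once...
create temporary table report_base as
select o.id, o.order_date, c.name, c.region
from orders o
join customers c on c.id = o.customer_id;

-- ...then run several reporting queries against it without re-joining
select region, count(*) from report_base group by region;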
A:
Further to my previous answer, another approach we have taken in some situations is to store key reporting data in separate summary tables. There are certain reporting queries which are just going to be slow even after denormalising and optimisations and we found that creating a table and storing running totals or summary information throughout the month as it came in made the end of month reporting much quicker as well.
We found this approach easy to implement as it didn't break anything that was already working - it's just additional database inserts at certain points.
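A rough sketch of such a summary table in MySQL, using ON DUPLICATE KEY UPDATE to maintain running totals as each order arrives (the names and columns are hypothetical):
create table monthly_order_totals (
    period char(7) primary key, -- e.g. '2008-08'
    order_count int not null,
    total_amount decimal(12,2) not null
);

-- the extra insert at the point where an order is recorded
insert into monthly_order_totals (period, order_count, total_amount)
values ('2008-08', 1, 19.99)
on duplicate key update
    order_count = order_count + 1,
    total_amount = total_amount + values(total_amount);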
A:
I've been toying with composite indexes and have seen some real benefits...maybe I'll setup some tests to see if that can save me here..at least for a little longer.
Q:
How can I stop MATLAB from returning until after a command-line script completes?
I see in the MATLAB help (matlab -h) that I can use the -r flag to specify an m-file to run. I notice when I do this, MATLAB seems to start the script, but immediately return. The script processes fine, but the main app has already returned.
Is there any way to get MATLAB to only return once the command is finished? If you're calling it from a separate program it seems like it's easier to wait on the process than to use a file or sockets to confirm completion.
To illustrate, here's a sample function waitHello.m:
function waitHello
disp('Waiting...');
pause(3); %pauses 3 seconds
disp('Hello World');
quit;
And I try to run this using:
matlab -nosplash -nodesktop -r waitHello
A:
Quick answer:
matlab -wait -nosplash -nodesktop -r waitHello
In Matlab 7.1 (the version I have) there is an undocumented command line option -wait in matlab.bat. If it doesn't work for your version, you could probably add it in. Here's what I found. The command at the bottom that finally launches matlab is (line 153):
start "MATLAB" %START_WAIT% "%MATLAB_BIN_DIR%\%MATLAB_ARCH%\matlab" %MATLAB_ARGS%
The relevant syntax of the start command (see "help start" in cmd.exe) in this case is:
start ["window title"] [/wait] myprogram.exe args ...
A bit higher, among all of the documented command line options, I found (line 60):
) else if (%opt%) == (-wait) (
set START_WAIT=/wait
) else (
So specifying -wait should do what you want, as long as you're also exiting matlab from your script (otherwise it will wait for you to terminate it interactively).
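A minimal sketch of a caller batch file, assuming the -wait option is present in your matlab.bat (how the exit code propagates may depend on the MATLAB version):
rem runAndWait.bat - blocks until the MATLAB session exits
matlab -wait -nosplash -nodesktop -r waitHello
echo MATLAB finished, errorlevel is %ERRORLEVEL%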
Q:
Is "include file" in shtml the best method to keep non-database changing data
We have a website that uses #include file command to roll info into some web pages. The authors can access the text files to update things like the occasional class or contact information for the department.
My question is this: I don't see anyone using this method, and I wonder if it is a good idea to keep using it. If not, what method should I transition to instead?
A:
If you are using ASP.NET then you could bundle that code into a nice little UserControl that will display all of the important information.
Other platforms should allow you to bundle the logic into a class object, and display it using that.
It really depends on the platform that you are using to deploy the application in. The include file could be your best solution if you are deploying in a more limited platform.
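For reference, the SSI directive the question describes looks like this; the fragment filename here is hypothetical, and the web server must have server-side includes enabled for .shtml pages:
<!--#include file="department-contacts.html" -->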
A:
I don't think there is anything wrong with it, and I've done some similar things with PHP. If the people that make the changes are comfortable with how they do it and they can do what they need to, I think you should stick with it. There could be some retraining involved and/or a fair amount of work involved for changing the system.
| Is "include file" in shtml the best method to keep non-database changing data | We have a website that uses #include file command to roll info into some web pages. The authors can access the text files to update things like the occasional class or contact information for the department.
My question is this, I don't see anyone using this method and wonder if it is a good idea to keep using it. If not, what method should I transition to instead?
| [
"If you are using ASP.NET then you could bundle that code into a nice little UserControl that will display all of the important information. \nOther platforms should allow you to bundle the logic into a class object, and display it using that. \nIt really depends on the platform that you are using to deploy the application in. The include file could be your best solution if you are deploying in a more limited platform.\n",
"I don't think there is anything wrong with it, and I've done some similar things with PHP. If the people that make the changes are comfortable with how they do it and they can do what they need to, I think you should stick with it. There could be some retraining involved and/or a fair amount of work involved for changing the system.\n"
] | [
1,
1
] | [] | [] | [
"html",
"include",
"shtml"
] | stackoverflow_0000014791_html_include_shtml.txt |
Q:
Best way to let users download a file from my website: http or ftp
We have some files on our website that users of our software can download. Some of the files are in virtual folders on the website while others are on our ftp. The files on the ftp are generally accessed by clicking on an ftp:// link in a browser - most of our customers do not have an ftp client. The other files are accessed by clicking an http:// link in a browser.
Should I move all the files to the ftp? Or does it not matter? What's the difference?
A:
HTTP has many advantages over FTP:
it is available in more places (think workplaces which block anything other than HTTP/S)
it works nicely with proxies (FTP requires extra settings for the proxy - like making sure that it allows the CONNECT method)
it provides built-in compression (with GZIP) which almost all browsers can handle (as opposed to FTP which has a non-official "MODE Z" extension)
NAT gateways must be configured in a special mode to support active FTP connections, while passive FTP connections require them to allow access to all ports (if the gateway doesn't have connection tracking)
some FTP clients insist on opening a new data connection for each data transfer, which can leave you with a lot of "TIME_WAIT" sockets
A:
If speed matters to your users, and they are technically inclined, http allows multiple connections for one file (if the client supports it; I use DownThemAll). Most browsers should handle ftp links just fine, though.
A:
I think most users, even today, are more familiar with http than ftp and for that reason you should stick with http by default unless there's a compelling reason to use ftp. It's nit-picking, though.
A:
I think it doesn't matter really, because ftp is also transparent nowadays. You don't have to know anything special; the browser handles it all.
I suggest that if they are downloading one file at a time, you can go with http.
However, if they have to download several files in one go, I prefer ftp, because it's much easier to manage.
There are some nice browser extensions as _l0ser mentioned, but I prefer ftp for mass file transfer.
A:
Both FTP and HTTP seem sufficient for your needs, so I would definitely recommend choosing the simplest approach, which is either to leave things as they currently are or consolidate on HTTP.
Personally, I would put everything on HTTP. If nothing else, it eliminates an extra server. There is no compelling reason to choose FTP over HTTP anymore, and there are a few small advantages to HTTP (as others have pointed out).
Q:
Web page field validation
I need to validate a date/time field on a webpage but want it to do it without reloading the page and would like 'instant' feedback for the users.
What's the best/easiest solution?
BTW: easiest scores 65% of total points
Edit:
What if best was 65% of total points?
A:
If you would like to use JavaScript, it has built-in date validation functions. However, if you do not want to go the JavaScript route, you could change the UI to dropdown controls, which would limit the user's ability to enter invalid data. You would still need to check server side to ensure nobody submits Feb 30th.
A:
Check out this javascript date validation function.
It uses javascript, regular expressions and the 'onblur' event of a text input.
A:
@David H. Aust
Using onblur for validation is problematic, because some folks use the enter key, not the mouse, to submit a form. Using onblur and the form's onsubmit event in conjunction could be a better solution. Back when I did JS validation for forms a lot more, I would run against keyup events. This gave the user instant feedback on whether or not their entry was correct. You can (and I did) also put checks in place so that the user doesn't receive an "incorrect" message until they've left the field (since you shouldn't tell them they're incorrect if they aren't done yet).
A:
I would recommend using drop-downs for dates, as indicated above. I can't really think of any reason not to--you want the user to choose from pre-defined data, not give you something unique that you can't anticipate.
You can avoid February 30 with a little bit of Javascript (make the days field populate dynamically based on the month).
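A small sketch of that dynamic population, using the JavaScript Date rollover trick (the element IDs are assumptions):
// month is 1-12; day 0 of the following month is the last day of this one
function daysInMonth(month, year) {
    return new Date(year, month, 0).getDate();
}

function repopulateDays() {
    var m = +document.getElementById('month').value;
    var y = +document.getElementById('year').value;
    var daySelect = document.getElementById('day');
    daySelect.options.length = 0;
    for (var d = 1; d <= daysInMonth(m, y); d++) {
        daySelect.options[daySelect.options.length] = new Option(d, d);
    }
}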
A:
@Brian Warshaw
That is a really good point you make about not forgetting the users who navigate via the keyboard (uh, me).
Thanks for bringing our attention to that.
A:
A simple javascript method that reads what's in the input field on submit and validates it. If it's not valid, return false so that the form is not submitted to the server.
... onSubmit="return validateForm();" ...
Make sure you validate on the server side too; it's easy to bypass javascript validation.
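A minimal validateForm sketch along those lines, checking one MM/DD/YYYY field (the field ID and the date format are assumptions):
function validateForm() {
    var value = document.getElementById('orderDate').value;
    var m = value.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/);
    if (!m) { alert('Please enter a date as MM/DD/YYYY.'); return false; }
    // Date silently rolls invalid values over (Feb 30 becomes Mar 1 or 2),
    // so build a Date and compare it back against the input
    var d = new Date(+m[3], +m[1] - 1, +m[2]);
    if (d.getMonth() !== +m[1] - 1 || d.getDate() !== +m[2]) {
        alert('That date does not exist.');
        return false;
    }
    return true;
}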
A:
If you're using ASP.NET, it has validator controls that you can point to textboxes which you can then use to validate proper date/time formats.
A:
There are a couple of date widgets available out in the aether. Then you can allow only valid input.
A:
Looks like there's a great video showing how the ASP.NET AJAX Control Toolkit's MaskedEdit control and MaskedEditValidator control work; they do the job very well. Not easy for beginners, but VERY good, with instant feedback.
Thanks for all the answers though!
asp.net
Unfortunately I can't accept this answer.
A:
I've used this small bit of js code in a few projects; it'll do dates quickly and easily, along with a few other things.
Link
Q:
What program can I use to generate diagrams of SQL view/table structure?
I've been tasked with redesigning part of a ms-sql database structure which currently involves a lot of views, some of which contain joins to other views.
Anyway, I wonder if anyone here could recommend a utility to automatically generate diagrams to help me visualise the whole structure.
What's the best program you've used for such problems?
A:
I am a big fan of Embarcadero's ER/Studio. It is very powerful and produces excellent on-screen as well as printed results. They have a free trial as well, so you should be able to get in and give it a shot without too much strife.
Good luck!
A:
Toad Data Modeler from Quest does a nice job on this and is reasonably priced. Embarcadero ER/Studio is good too, as Bruce mentioned.
A:
The OP asked about diagramming views and view dependencies; SQL Management Studio and Enterprise Manager don't allow you to diagram views. I can't vouch for the other tools.
The LINQ to SQL designer for Visual Studio does allow you to drop views on the design surface, but there isn't an easy way to model the dependencies between the views. I'm not sure which tool has this type of diagramming functionality. You could take a look at Red Gate's SQLDoc tool, but it just provides text-based output.
A:
If you are talking about MS SQL Server tables, I like the diagram support in SQL Server Management Studio. You just drag the tables from the explorer onto the canvas, and they are laid out for you along with lines for relationships. You'll have to do some adjusting by hand for the best looking diagrams, but it is a decent way to get diagrams.
A:
I upmodded Mark's post about Toad Data Modeler and wanted to point out that they have a beta version that is fully functional and free. The only downsides are the occasional bug and built in expiration (typically around the time a new beta is available), but for this poor bloke it does wonders until I can get my boss to chip in for a license.
Q:
Identifying SQL Server Performance Problems
We're having sporadic, random query timeouts on our SQL Server 2005 cluster. I own a few apps that use it, so I'm helping out in the investigation. When watching the % CPU time in regular ol' Perfmon, you can certainly see it pegging out. However, SQL activity monitor only gives cumulative CPU and IO time used by a process, not what it's using right then, or over a specific timeframe. Perhaps I could use the profiler and run a trace, but this cluster is very heavily used and I'm afraid I'd be looking for a needle in a haystack. Am I barking up the wrong tree?
Does anyone have some good methods for tracking down expensive queries/processes in this environment?
A:
This will give you the top 50 statements by average CPU time, check here for other scripts: http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/default.mspx?mfr=true
SELECT TOP 50
qs.total_worker_time/qs.execution_count as [Avg CPU Time],
SUBSTRING(qt.text,qs.statement_start_offset/2,
(case when qs.statement_end_offset = -1
then len(convert(nvarchar(max), qt.text)) * 2
else qs.statement_end_offset end -qs.statement_start_offset)/2)
as query_text,
qt.dbid, dbname=db_name(qt.dbid),
qt.objectid
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY
[Avg CPU Time] DESC
A:
I've found the Performance Dashboard Reports to be very helpful. They are a set of custom RS reports supplied by Microsoft. You just have to run the installer on your client PC and then run the setup.sql on the SQL Server instance.
After that, right click on a database (it does not matter which one) in SSMS and go to Reports -> Custom Reports. Navigate to and select the performance_dashboard_main.rdl, which is located in the \Program Files\Microsoft SQL Server\90\Tools\PerformanceDashboard folder by default. You only need to do this once. After the first time, it will show up in the reports list.
The main dashboard view will show CPU utilization over time, among other things. You can refresh it occasionally. When you see a spike, just click on the bar in the graph to get the detail data behind it.
A:
We use Quest's Spotlight product. Obviously it's an investment in time and money so it might not help you out in the short term, but if you have a large SQL environment it's pretty useful.
A:
As Yaakov says, run profiler for a few minutes under typical load and save the results to a table which will allow you to run queries against the results making it much easier to spot any resource hogging queries.
A:
Profiler may seem like a "needle in a haystack" approach, but it may turn up something useful. Try running it for a couple of minutes while the databases are under typical load, and see if any queries stand out as taking way too much time or hogging resources in some way. While a situation like this could point to some general issue, it could also be related to some specific issue with one or two sites, which mess things up enough in certain circumstances to cause very poor performance across the board.
A:
Run Profiler and filter for queries that take more than a certain number of reads. For the application I worked on, any non-reporting query that took more than 5000 reads deserved a second look. Your app may have a different threshold, but the idea is the same.
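If running a trace is too intrusive on a heavily used cluster, a read-focused variation of the DMV query in the top answer can surface the same information without Profiler (SQL Server 2005 and later):
SELECT TOP 20
    qs.total_logical_reads/qs.execution_count as [Avg Logical Reads],
    qs.execution_count,
    SUBSTRING(qt.text,qs.statement_start_offset/2,
    (case when qs.statement_end_offset = -1
    then len(convert(nvarchar(max), qt.text)) * 2
    else qs.statement_end_offset end -qs.statement_start_offset)/2)
    as query_text
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt
ORDER BY
    [Avg Logical Reads] DESC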
A:
This utility by Erland Sommarskog is awesomely useful.
It's a stored procedure you add to your database. Run it whenever you want to see what queries are active and get a good picture of locks, blocks, etc. I use it regularly when things seem gummed up.
Q:
Mixed C++/CLI TypeLoadException Internal limitation: too many fields
On a quest to migrate some new UI into Managed/C# land, I have recently turned on Common Language Runtime Support (/clr) on a large legacy project, which uses MFC in a Shared DLL and relies on about a dozen other projects within our overall solution. This project is the core of our application, and would drive any managed UI code that is produced (hence the need to turn on clr support for interop).
After fixing a ton of little niggly errors and warnings, I finally managed to get the application to compile...
However, running the application causes an EETypeLoadException and leaves me unable to debug...
Doing some digging, I found the cause to be "System.TypeLoadException: Internal limitation: too many fields." which occurs right at the end of compilation. I then found this link which suggests to break the assembly down into two or more dlls. However, this is not possible in my case, as a limitation I have is that the legacy code basically remains untouched.
Can anyone suggest any other possible solutions? I'm really at a dead end here.
A:
Make sure the Enable String Pooling option under C/C++ Code Generation is turned on.
That usually fixes this issue, which is one of those "huh?" MS limitations like the 64k limit on Excel spreadsheets. Only this one affects the number of symbols that may appear in an assembly.
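For reference, that IDE option should correspond to the /GF compiler switch on the command line (worth confirming against your toolchain version; the file name here is hypothetical):
cl /clr /GF /c LegacyCore.cpp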
A:
I have done this with very large mixed-mode (C#/C++) applications three times (3x) and once putting the above fix into place have never seen the error again.
And no, if anything this should result in slightly faster run-time execution (nothing you could ever measure, however.)
But I agree it's somewhat of a stopgap. The internal limit on symbols didn't use to be an issue, or if it was, that limit was much higher. Then MS changed some of the loader code. I got onto MSDN and ranted about it and was told in no uncertain terms, "only an idiot would put that many symbols in a single assembly".
(Which is one of the reasons I no longer participate on MSDN.)
Well, color me stupid, but I don't think I should have to change the physical structure of my application, breaking things out into satellite DLLs, merely to get around the fact that the loader has decided 10,001 symbols is 1 too many.
And as you pointed out, we often don't have control over how assemblies/satellite DLLs are structure, and the sort of dependencies they contain.
But I don't think you'll see this error again, in any case.
A:
Do you need to turn /clr on for the entire project? Could you instead turn it on only for a small select number of files and be very careful how you include managed code? I work with a large C++/MFC application and we have found it very difficult to use managed C++. I love C# and .NET but managed C++ has been nothing but a headache. Most of our problems happened with .NET 1.0/1.1 ... maybe things are better now.
Q:
Parameter Binding: What happens under the hood?
.NET, Java and other high level database API's in various language often provide techniques known as prepared statements and parameter binding as opposed to sending plain text commands to the Database server. What I would like to know is what happens when you execute a statement like this:
SqlCommand cmd = new SqlCommand("GetMemberByID");
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter param = new SqlParameter("@ID", memberID);
param.DbType = DbType.Int32;
cmd.Parameters.Add(param);
I know this is a best practice. SQL injection attacks are minimized this way. But what exactly happens under the hood when you execute these statements? Is the end result still a SQL safe string? If not, what is the end result? And is this enough to prevent SQL injection attacks?
A:
The MySQL manual page on prepared statements provides lots of information (which should apply to any other RDBMS).
Basically, your statement is parsed and processed ahead of time, and the parameters are sent separately instead of being handled along with the SQL code. This eliminates SQL-injection attacks because the SQL is parsed before the parameters are even set.
A:
If you're using MS SQL, load up the profiler and you'll see what SQL statements are generated when you use parameterised queries. Here's an example (I'm using Enterprise Libary 3.1, but the results are the same using SqlParameters directly) against SQL Server 2005:
string sql = "SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did";
Database db = DatabaseFactory.CreateDatabase();
using(DbCommand cmd = db.GetSqlStringCommand(sql))
{
db.AddInParameter(cmd, "DomName", DbType.String, "xxxxx.net");
db.AddInParameter(cmd, "Did", DbType.Int32, 500204);
DataSet ds = db.ExecuteDataSet(cmd);
}
This generates:
exec sp_executesql N'SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did',
N'@DomName nvarchar(9),
@Did int',
@DomName=N'xxxxx.net',
@Did=500204
You can also see here, if quotation characters were passed as parameters, they are escaped accordingly:
db.AddInParameter(cmd, "DomName", DbType.String, "'xxxxx.net");
exec sp_executesql N'SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did',
N'@DomName nvarchar(10),
@Did int',
@DomName=N'''xxxxx.net',
@Did=500204
A:
In layman's terms: if a prepared statement is sent, the DB will reuse a plan if one is available; it does not have to recreate a plan every time this query is sent over when only the values of the params have changed. This is very similar to how procs work; the additional benefit with procs is that you can grant permission through procs only and not to the underlying tables at all.
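You can watch that plan reuse happen in SQL Server's plan cache; a rough sketch for 2005 and later, where usecounts climbs as the same parameterised statement is re-executed with different values (the table name reuses the earlier example):
select usecounts, cacheobjtype, objtype, [text]
from sys.dm_exec_cached_plans
cross apply sys.dm_exec_sql_text(plan_handle)
where [text] like '%tblDomains%'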
 | Parameter Binding: What happens under the hood? | .NET, Java and other high-level database APIs in various languages often provide techniques known as prepared statements and parameter binding as opposed to sending plain text commands to the database server. What I would like to know is what happens when you execute a statement like this:
SqlCommand cmd = new SqlCommand("GetMemberByID");
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter param = new SqlParameter("@ID", memberID);
param.DbType = DbType.Int32;
cmd.Parameters.Add(param);
I know this is a best practice. SQL injection attacks are minimized this way. But what exactly happens under the hood when you execute these statements? Is the end result still a SQL safe string? If not, what is the end result? And is this enough to prevent SQL injection attacks?
| [
"The MySQL manual page on prepared statements provides lots of information (which should apply to any other RDBMS).\nBasically, your statement is parsed and processed ahead of time, and the parameters are sent separately instead of being handled along with the SQL code. This eliminates SQL-injection attacks because the SQL is parsed before the parameters are even set.\n",
"If you're using MS SQL, load up the profiler and you'll see what SQL statements are generated when you use parameterised queries. Here's an example (I'm using Enterprise Libary 3.1, but the results are the same using SqlParameters directly) against SQL Server 2005:\nstring sql = \"SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did\";\nDatabase db = DatabaseFactory.CreateDatabase();\nusing(DbCommand cmd = db.GetSqlStringCommand(sql))\n{\n db.AddInParameter(cmd, \"DomName\", DbType.String, \"xxxxx.net\");\n db.AddInParameter(cmd, \"Did\", DbType.Int32, 500204);\n\n DataSet ds = db.ExecuteDataSet(cmd);\n}\n\nThis generates:\nexec sp[underscore]executesql N'SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did',\n N'@DomName nvarchar(9),\n @Did int',\n @DomName=N'xxxxx.net',\n @Did=500204\n\nYou can also see here, if quotation characters were passed as parameters, they are escaped accordingly:\ndb.AddInParameter(cmd, \"DomName\", DbType.String, \"'xxxxx.net\");\n\nexec sp[underscore]executesql N'SELECT * FROM tblDomains WHERE DomainName = @DomName AND DomainID = @Did',\n N'@DomName nvarchar(10),\n @Did int',\n @DomName=N'''xxxxx.net',\n @Did=500204\n\n",
"in layman terms: if a prepared statement is sent then the DB will use a plan if it is available, it doesn't not have to recreate a plan every time this query is sent over but only the values of the params have changed. this is very similar to how procs work, the additional benefit with procs is that you can give permission through procs only and not to the underlying tables at all\n"
] | [
6,
0,
0
] | [] | [] | [
".net",
"api",
"c#",
"database",
"sql"
] | stackoverflow_0000014934_.net_api_c#_database_sql.txt |
Q:
What version of .Net framework ships with SQL Server 2008?
Does SQL Server 2008 ship with the .NET 3.5 CLR, so that stored procedures written in CLR can use 3.5 features?
A:
Actually it ships with .NET 3.5 SP1. So yes, the stored procs can use 3.5 features and libraries.
A:
I swear this isn't being pedantic, but it is an important distinction -- I don't know what specifically you need when you say ".NET 3.5 CLR" -- probably the .NET 3.5 Framework? Possibly C# 3.0 language features? But the CLR that .NET 3.5 runs on is still CLR 2.0. (The link is to the same explanation re: .NET 3.0; I couldn't immediately find this info on 3.5. Actually, the best explanation of CLR vs. Framework vs. language version numbers I've yet found is on page 12 of Teach Yourself WPF in 24 Hours*)
So, my point is that you can even use the features of .NET 3.5 and C# 3.0 on SQL 2005 CLR stored procedures -- we do, at my company -- and there's not even really any trickery to it. All you have to do is have the free 3.5 framework on your server. Obviously the SQL 2005 answer isn't that relevant for your specific question, but hopefully this will be helpful to the person who eventually comes across this page via Google.
*disclosure: I'm friends with the authors
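To illustrate the point, here is a minimal CLR stored procedure sketch (the names are made up) that uses C# 3.0 syntax while still targeting CLR 2.0; for the LINQ call to work, the 3.5 assemblies (System.Core) would need to be made available to the database:
using System.Linq;
using Microsoft.SqlServer.Server;

public class StoredProcedures
{
    [SqlProcedure]
    public static void ListShortNames()
    {
        // var and lambdas are C# 3.0 language features compiled down to CLR 2.0 IL
        var shortNames = new[] { "Ann", "Bob", "Christopher" }.Where(n => n.Length <= 3);

        foreach (var name in shortNames)
            SqlContext.Pipe.Send(name);
    }
}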
| What version of .Net framework ships with SQL Server 2008? | Does SQL Server 2008 ship with the .NET 3.5 CLR, so that stored procedures written in CLR can use 3.5 features?
| [
"Actually it ships with .NET 3.5 SP1. So yes, the stored procs can use 3.5 features and libraries.\n",
"I swear this isn't being pedantic, but is an important distinction -- I don't know what specifically you need when you say \".NET 3.5 CLR\" -- probably the .NET 3.5 Framework? Possibly C# 3.0 language features? But the CLR that .NET 3.5 runs on is still CLR 2.0. (the link is to the same explanation re: .NET 3.0; I couldn't immediately find this info on 3.5. Actually, the best explanation of CLR vs. Framework vs. language version numbers I've yet found is on page 12 of Teach Yourself WPF in 24 Hours*)\nSo, my point is that you can even use the features of .NET 3.5 and C# 3.0 on SQL 2005 CLR stored procedures -- we do, at my company -- and there's not even really any trickery to it. All you have to do is have the free 3.5 framework on your server. Obviously the SQL 2005 answer isn't that relevant for your specific question, but hopefully this will be helpful to the person who eventually comes across this page via Google.\n*disclosure: I'm friends with the authors\n"
] | [
10,
3
] | [] | [] | [
"sql_server",
"sql_server_2008"
] | stackoverflow_0000011430_sql_server_sql_server_2008.txt |
Q:
.NET 3.5 Service Pack 1 causes 404 pages on ASP.NET Web App
I have a problem with IIS 6.0 ceasing to work for an ASP.NET application after installing Service Pack 1 for .NET 3.5.
I have 2 identical virtual dedicated servers. Installing SP1 on the first had no adverse effect. Installing it on the second caused ASP.NET pages to start returning 404 page not found.
Static .html pages working okay on both servers.
Has anybody else experienced this?
A:
This is a broad problem, so let's start by asking some troubleshooting questions:
Based on your description, the ASP.NET runtime is not catching your request and processing the aspx files. You may need to register the ASP.NET pipeline with IIS again using ASPNET_REGIIS -i.
Have you made sure that the app_offline.htm file has been removed from the directory of the application? I have had this happen before after an update.
Have you set up Fiddler, for instance, to follow the request to see what exactly is being requested?
Make sure ASP.NET is enabled in the IIS Administration Console under "Web Service Extensions." Make sure everything is set to allowed for your different versions of the framework.
Well, let's start with those and hopefully we can guide you to the problem.
A:
I've seen various people with this problem recently. This link might help.
And this one.
And a few others.
A:
Is CustomErrors in your web.config set to On or RemoteOnly? If so, what do you get when you change it to Off?
A:
I have not had this exact error with .NET 3.5 SP1, but have seen similar occur in the past. Typically it can be resolved by opening a command prompt, going to the appropriate .NET folder and running ASPNET_REGIIS -i. In the case of .NET 3.5 there wasn't an update to the main bits of the framework, so you'd actually go to the .NET 2.0 folder, which on my machine can be found at:
\Windows\Microsoft.Net\framework\v2.0.50727
Running the ASPNET_REGIIS -i will re-register all the ASP.NET libraries with IIS, and should be the equivalent of a re-install of the framework on a given machine (as far as IIS is concerned)
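For example, on a default installation the sequence would look something like this (run from a command prompt on the server):
cd %WINDIR%\Microsoft.NET\Framework\v2.0.50727
aspnet_regiis -i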
A:
Just to clarify. The last (4th) point given by Dale was the problem. During the installation of SP1 the Status for ASP.NET and WebDAV became set to Prohibited under Web Service Extensions.
Why the installation of SP1 changed this setting on one server and not the other is a mystery that I wouldn't mind (but not expect) an answer to...
The second link provided by CodingTheWheel also had the answer so I'm also going to mark this as an answer.
| .NET 3.5 Service Pack 1 causes 404 pages on ASP.NET Web App | I have a problem with IIS 6.0 ceasing to work for an ASP.NET application after installing Service Pack 1 for .NET 3.5.
I have 2 identical virtual dedicated servers. Installing SP1 on the first had no adverse effect. Installing it on the second caused ASP.NET pages to start returning 404 page not found.
Static .html pages working okay on both servers.
Has anybody else experienced this?
| [
"This is broad problem, so let's start by asking some troubleshooting questions:\n\n\nBased on your description, the ASP.NET runtime is not catching your request and processing the aspx files. You may need to register the asp.net pipeline with IIS again using ASPNET_REGIIS -i.\nHave you made sure that the app_offline.htm file has been removed\n from the directory of the application?\n I have had this happen before after an\n update.\nHave you setup fiddler for instance to follow the request to see what is\n exactly being requested?\nMake sure ASP.NET is enabled in the IIS Administration Console under \"Web\n Service Extensions.\" Make sure everything is set to allowed for your different versions of the framework.\n\n\nWell, let's start with those and hopefully we can guide you to the problem.\n",
"I've seen various people with this problem recently. This link might help.\nAnd this one.\nAnd a few others.\n",
"Is CustomErrors in your web.config set to On or RemoteOnly? If so, what do you get when you change it to Off?\n",
"I have not had this exact error with .NET 3.5 SP1, but have seen similar occur in the past. Typically it can be resolved by opening a command prompt, going to the appropriate .NET folder and running ASPNET_REGIIS -i. In the case of .NET 3.5 there wasn't an update to the main bits of the framework, so you'd actually go to the .NET 2.0 folder, which on my machine can be found at:\n\\Windows\\Microsoft.Net\\framework\\v2.0.50727\nRunning the ASPNET_REGIIS -i will re-register all the ASP.NET libraries with IIS, and should be the equivalent of a re-install of the framework on a given machine (as far as IIS is concerned)\n",
"Just to clarify. The last (4th) point given by Dale was the problem. During the installation of SP1 the Status for ASP.NET and WebDAV became set to Prohibited under Web Service Extensions.\nWhy the installation of SP1 changed this setting on one server and not the other is a mystery that I wouldn't mind (but not expect) an answer to...\nThe second link provided by CodingTheWheel also had the answer so I'm also going to mark this as an answer.\n"
] | [
4,
2,
0,
0,
0
] | [
"No-one did before, so I'll point to the trivial solution:\nHave you already de-installed the Service Pack and re-installed it again (or the whole framework)?\nEdit: @Kev:\nEasy explanation: He said the update works on one machine, but not on the other. I had similar problems in the past and re-installing helped to solve some of them. And it is trivial to do.\nThat's my approach:\n1. trivial\n2. easy\n3. headache \nYou are right, on productive systems you must be careful, but that's his decision. And because it is a virtual server, maybe it is easy for him to copy it and try as a test environment first.\n"
] | [
-1
] | [
".net_3.5",
"iis",
"servicepacks"
] | stackoverflow_0000014963_.net_3.5_iis_servicepacks.txt |
Q:
.net Job Interview
I have a job interview tomorrow for a .NET shop. For the past few years I have been developing in languages other than .NET and figure it is probably a good idea to brush up on what is cool and new in the world of .NET. I've been reading about LINQ and WPF but these are more technologies than trends. What else should I look at?
Been reading things like:
http://msdn.microsoft.com/en-us/library/bb332048.aspx
http://msdn.microsoft.com/en-us/library/ms754130.aspx
Edit
As it turns out this interview was high level and we didn't really get into much which was more .NET specific than generics.
A:
This is completely language agnostic so you may want to skip over it, but I've based a lot of my practice and preparation for job interviews around Steve Yegge's getting a job at google post.
I use a lot of the topics there not only as an interview preparedness guide, but also as a list of things that I SHOULD know about. Admittedly I am still working my way through some of the books and exercises, but every little bit helps.
EDIT: I'm not sure if it is necessarily a good thing to focus specifically on the latest trends in web development for job interviews. When I am interviewing someone, I am more impressed if they can write a recursive function to solve some problem or write a really cool algorithm than if they know all the details about some latest thing that is going to fix everything but is really just a buzzword
A:
Take this with a grain of salt, but in my experience, LINQ and WPF are still in the realm of "yeah we'd like to get into that someday".
Most shops are still on VS2005 and .NET 2.0, so I'd want to make sure I was up to speed on core facilities:
generics
ADO.NET
WinForms / WebForms depending
And so forth.
A:
It's probably a bit late to be looking tonight at code trends for an interview tomorrow.
Microsoft is currently busy doing what it has always done: me-too functionality, only better. New dynamically typed languages with a new language runtime and MVC are looking really promising.
With WPF and Expression they're creating different interfaces for UI developers and business logic developers to use. I'm not sure about that - I'd rather see Expression Blend as part of VS.
They're pushing open source more than they ever have - http://www.codeplex.com is getting busier. VS Express editions are an excellent route in to the technologies.
With their Team System they're pushing Agile methods more and more - they've even resolved them with more structured processes like CMMI.
-1? serves me right for starting with a sarcastic comment ;-(
How about: how to hack an interview?
A:
As a student of many languages/frameworks, I can't stress enough that you shouldn't be concentrating on the whizz-bang latest and greatest stuff. It's a solid understanding of the tried and true programming principles (see design patterns, DRY principle, OOP conventions, etc.) and general familiarity with the framework that employers (and fellow developers) are looking for.
A:
If you're doing web development, ASP.NET MVC and Silverlight (née WPF/e) come to mind as relatively recent trends.
| .net Job Interview | I have a job interview tomorrow for a .NET shop. For the past few years I have been developing in languages other than .NET and figure it is probably a good idea to brush up on what is cool and new in the world of .NET. I've been reading about LINQ and WPF but these are more technologies than trends. What else should I look at?
Been reading things like:
http://msdn.microsoft.com/en-us/library/bb332048.aspx
http://msdn.microsoft.com/en-us/library/ms754130.aspx
Edit
As it turns out this interview was high level and we didn't really get into much which was more .NET specific than generics.
| [
"This is completely language agnostic so you may want to skip over it, but I've based a lot of my practice and preparation for job interviews around Steve Yegge's getting a job at google post.\nI use a lot of the topics there not only as an interview preparedness guide, but also as a list of things that I SHOULD know about. Admittedly I am still working my way through some of the books and exercises, but every little bit helps. \nEDIT: I'm not sure if it necessarily a good thing to focus specifically on the latest trends in web development for job interviews. When I am interviewing someone I am more impressed if they can write a recursive function to solve some problem or write a really cool algorithm, then if they know all the details about some latest thing that is going to fix everything but it's really just a buzzword\n",
"Take this with a grain of salt, but in my experience, LINQ and WPF are still in the realm of \"yeah we'd like to get into that someday\".\nMost shops are still on VS2005 and .NET 2.0, so I'd want to make sure I was up to speed on core facilities:\n\ngenerics\nADO.NET\nWinForms / WebForms depending\n\nAnd so forth. \n",
"It's probably a bit late to be looking tonight at code trends for an interview tomorrow.\nMicrosoft is currently busy doing what it has always done: me-too functionality, only better. New dynamically typed languages with a new language runtime and MVC are looking really promising.\nWith WPF and Expression they're creating different interfaces for UI developers and business logic developers to use. I'm not sure about that - I'd rather see Expression Blend as part of VS.\nThey're pushing open source more than they ever have - http://www.codeplex.com is getting busier. VS Express editions are an excellent route in to the technologies.\nWith their Team System they're pushing Agile methods more and more - they've even resolved them with more structured processes like CMMI.\n\n-1? serves me right for starting with a sarcastic comment ;-(\nHow about: how to hack an interview?\n",
"As a student of many languages/frameworks, I can't stress enough that you shouldn't be concentrating on the whizz-bang latest and greatest stuff. It's a solid understanding of the tried and true programming principles (see design patterns, DRY principle, OOP conventions, etc.) and general familiarity with the framework that employers (and fellow developers) are looking for.\n",
"If you're doing web development, ASP.NET MVC and Silverlight (née WPF/e) come to mind as relatively recent trends.\n"
] | [
4,
1,
1,
1,
0
] | [] | [] | [
".net"
] | stackoverflow_0000015124_.net.txt |
Q:
How can I override an EJB 3 session bean method with a generic argument - if possible at all?
Suppose you have the following EJB 3 interfaces/classes:
public interface Repository<E>
{
public void delete(E entity);
}
public abstract class AbstractRepository<E> implements Repository<E>
{
public void delete(E entity){
//...
}
}
public interface FooRepository<Foo>
{
//other methods
}
@Local(FooRepository.class)
@Stateless
public class FooRepositoryImpl extends
AbstractRepository<Foo> implements FooRepository
{
@Override
public void delete(Foo entity){
//do something before deleting the entity
super.delete(entity);
}
//other methods
}
And then another bean that accesses the FooRepository bean :
//...
@EJB
private FooRepository fooRepository;
public void someMethod(Foo foo)
{
fooRepository.delete(foo);
}
//...
However, the overriding method is never executed when the delete method of the FooRepository bean is called. Instead, only the implementation of the delete method that is defined in AbstractRepository is executed.
What am I doing wrong or is it simply a limitation of Java/EJB 3 that generics and inheritance don't play well together yet ?
A:
I tried it with a pojo and it seems to work. I had to modify your code a bit.
I think your interfaces were a bit off, but I'm not sure.
I assumed "Foo" was a concrete type, but if not I can do some more testing for you.
I just wrote a main method to test this.
I hope this helps!
public static void main(String[] args){
FooRepository fooRepository = new FooRepositoryImpl();
fooRepository.delete(new Foo("Bar"));
}
public class Foo
{
private String value;
public Foo(String inValue){
super();
value = inValue;
}
public String toString(){
return value;
}
}
public interface Repository<E>
{
public void delete(E entity);
}
public interface FooRepository extends Repository<Foo>
{
//other methods
}
public class AbstractRespository<E> implements Repository<E>
{
public void delete(E entity){
System.out.println("Delete-" + entity.toString());
}
}
public class FooRepositoryImpl extends AbstractRespository<Foo> implements FooRepository
{
@Override
public void delete(Foo entity){
//do something before deleting the entity
System.out.println("something before");
super.delete(entity);
}
}
A:
Can you write a unit test against your FooRepository class just using it as a POJO? If that works as expected then I'm not familiar with any reason why it would function differently inside a container.
I suspect there is something else going on and it will probably be easier to debug if you test it as a POJO.
| How can I override an EJB 3 session bean method with a generic argument - if possible at all? | Suppose you have the following EJB 3 interfaces/classes:
public interface Repository<E>
{
public void delete(E entity);
}
public abstract class AbstractRepository<E> implements Repository<E>
{
public void delete(E entity){
//...
}
}
public interface FooRepository<Foo>
{
//other methods
}
@Local(FooRepository.class)
@Stateless
public class FooRepositoryImpl extends
AbstractRepository<Foo> implements FooRepository
{
@Override
public void delete(Foo entity){
//do something before deleting the entity
super.delete(entity);
}
//other methods
}
And then another bean that accesses the FooRepository bean :
//...
@EJB
private FooRepository fooRepository;
public void someMethod(Foo foo)
{
fooRepository.delete(foo);
}
//...
However, the overriding method is never executed when the delete method of the FooRepository bean is called. Instead, only the implementation of the delete method that is defined in AbstractRepository is executed.
What am I doing wrong or is it simply a limitation of Java/EJB 3 that generics and inheritance don't play well together yet ?
| [
"I tried it with a pojo and it seems to work. I had to modify your code a bit.\nI think your interfaces were a bit off, but I'm not sure.\nI assumed \"Foo\" was a concrete type, but if not I can do some more testing for you.\nI just wrote a main method to test this.\nI hope this helps!\npublic static void main(String[] args){\n FooRepository fooRepository = new FooRepositoryImpl();\n fooRepository.delete(new Foo(\"Bar\"));\n}\n\npublic class Foo\n{\n private String value;\n\n public Foo(String inValue){\n super();\n value = inValue;\n }\n public String toString(){\n return value;\n }\n}\n\npublic interface Repository<E>\n{\n public void delete(E entity);\n}\n\npublic interface FooRepository extends Repository<Foo>\n{\n //other methods\n}\n\npublic class AbstractRespository<E> implements Repository<E>\n{\n public void delete(E entity){\n System.out.println(\"Delete-\" + entity.toString());\n }\n}\n\npublic class FooRepositoryImpl extends AbstractRespository<Foo> implements FooRepository\n{\n @Override\n public void delete(Foo entity){\n //do something before deleting the entity\n System.out.println(\"something before\");\n super.delete(entity);\n }\n}\n\n",
"Can you write a unit test against your FooRepository class just using it as a POJO. If that works as expected then I'm not familiar with any reason why it would function differently inside a container.\nI suspect there is something else going on and it will probably be easier to debug if you test it as a POJO.\n"
] | [
2,
1
] | [] | [] | [
"ejb_3.0",
"generics",
"inheritance",
"jakarta_ee",
"java"
] | stackoverflow_0000014801_ejb_3.0_generics_inheritance_jakarta_ee_java.txt |
Q:
Is it possible to disable command input in the toolbar search box?
In the Visual Studio toolbar, you can enter commands into the search box by prefixing them with a > symbol. Is there any way to disable this? I've never used the feature, and it's slightly annoying when trying to actually search for something that you know is prefixed by greater-than in the code. It's particularly annoying when you accidentally search for "> exit" and the IDE quits (I knew there was a line in the code that was something like if(counter > exitCount) so entered that search without thinking).
At the very least, can you escape the > symbol so that you can search for it? Prefixing with ^ doesn't seem to work.
A:
This is a really cool feature. I've poked through the feature documentation, and the accompanying command list, and not a heck of a lot is showing up in terms of turning it off.
If you want to search for >exit, you could always type >Edit.Find >exit in the search box; that seems to do the trick. A bit verbose, though, but it really is an edge case.
A:
you can enter commands into the search box by prefixing them with a > symbol.
Wow, I didn't know that. Where do I find the list of possible commands?
I never actually use the search box, I've remapped ctrl+F to incremental search, which is usually ctrl+I
I find this much cooler than the normal search - give it a go, you might end up not caring about the search box anymore.
A:
Wow, I didn't know that. Where do I
find the list of possible commands?
The commands are the same as those you can enter in the command window, so you can pretty much drive the entire IDE and debugger using it. There are a load of predefined aliases for common commands. Open up the command window and enter alias for a list, to get you started.
| Is it possible to disable command input in the toolbar search box? | In the Visual Studio toolbar, you can enter commands into the search box by prefixing them with a > symbol. Is there any way to disable this? I've never used the feature, and it's slightly annoying when trying to actually search for something that you know is prefixed by greater-than in the code. It's particularly annoying when you accidentally search for "> exit" and the IDE quits (I knew there was a line in the code that was something like if(counter > exitCount) so entered that search without thinking).
At the very least, can you escape the > symbol so that you can search for it? Prefixing with ^ doesn't seem to work.
| [
"This is a really cool feature. I've poked through the feature documentation, and the accompanying command list, and not a heck of a lot is showing up in terms of turning it off.\nIf you want to search for >exit, you could always type >Edit.Find >exit in the search box; that seems to do the trick. A bit verbose, though, but it really is an edge case.\n",
"\nyou can enter commands into the search box by prefixing them with a > symbol. \n\nWow, I didn't know that. Where do I find the list of possible commands?\nI never actually use the search box, I've remapped ctrl+F to incremental search, which is usually ctrl+I\nI find this much cooler than the normal search - give it a go, you might end up not caring about the search box anymore.\n",
"\nWow, I didn't know that. Where do I\n find the list of possible commands?\n\nThe commands are the same as those you can enter in the command window, so you can pretty much drive the entire IDE and debugger using it. There are a load of predefined aliases for common commands. Open up the command window and enter alias for a list, to get you started.\n"
] | [
2,
1,
0
] | [] | [] | [
"visual_studio"
] | stackoverflow_0000014760_visual_studio.txt |
Q:
What do the getUTC* methods on the date object do?
What does it mean when you get or create a date in UTC format in JavaScript?
A:
A date represents a specific point in time. This point in time will be called differently in different places. As I write this, it's 00:27 on Tuesday in Germany, 23:27 on Monday in the UK and 18:27 on Monday in New York.
To take an example method: getDay returns the day of the week in the local timezone. Right now, for a user in Germany, it would return 2. For a user in the UK or US, it would return 1. In an hour's time, it will return 2 for the user in the UK (because it will then be 00:27 on Tuesday there).
The ..UTC.. methods deal with the representation of the time in UTC (also known as GMT). In winter, this is the same timezone as the UK, in summer it's an hour behind the time in the UK.
It's summer as I write this. getUTCDay will return 1 (Monday), getUTCHours will return 22, getUTCMinutes will return 27. So it's 22:27 on Monday in the UTC timezone. Whereas the plain get... functions will return different values depending on where the user is, the getUTC.. functions will return those same values no matter where the user is.
A:
getUTC is for converting times to Coordinated Universal Time (UTC, the acronym is ordered differently than what it stands for) which is the standard time based on the time in Greenwich, London.
The universal time is calculated using a time offset (in minutes when in JavaScript.) This offset is based on the time zone configured on the client browser's operating system.
If you plan on storing dates for users in multiple time zones, this is what you should use.
A:
Further to Dan's remark about the acronym being different from what it stands for, there is a good reason for it: see UTC Abbreviation on Wikipedia
| What do the getUTC* methods on the date object do? | What does it mean when you get or create a date in UTC format in JavaScript?
| [
"A date represents a specific point in time. This point in time will be called differently in different places. As I write this, it's 00:27 on Tuesday in Germany, 23:27 on Monday in the UK and 18:27 on Monday in New York.\nTo take an example method: getDay returns the day of the week in the local timezone. Right now, for a user in Germany, it would return 2. For a user in the UK or US, it would return 1. In an hour's time, it will return 2 for the user in the UK (because it will then be 00:27 on Tuesday there).\nThe ..UTC.. methods deal with the representation of the time in UTC (also known as GMT). In winter, this is the same timezone as the UK, in summer it's an hour behind the time in the UK.\nIt's summer as I write this. getUTCDay will return 1 (Monday), getUTCHours will return 22, getUTCMinutes will return 27. So it's 22:27 on Monday in the UTC timezone. Whereas the plain get... functions will return different values depending on where the user is, the getUTC.. functions will return those same values no matter where the user is.\n",
"getUTC is for converting times to Coordinated Universal Time (UTC, the acronym is ordered differently than what it stands for) which is the standard time based on the time in Greenwich, London.\nThe universal time is calculated using a time offset (in minutes when in JavaScript.) This offset is based on the time zone configured on the client browser's operating system.\nIf you plan on storing dates for users in multiple time zones, this is what you should use.\n",
"Further to Dan's remark about the acronym being different to what it stands for is a good reason: UTC Abbreviation on Wikipedia\n"
] | [
14,
0,
0
] | [] | [] | [
"date",
"javascript",
"utc"
] | stackoverflow_0000015302_date_javascript_utc.txt |
Q:
Printers not available unless shared
We are using classic ASP to call a C# DLL and in the C# DLL we are using System.Drawing.Printing.PrinterSettings.InstalledPrinters to get a list of available printers. If the printers are not shared they will not show up when a user tries to print. The Local System account can see and print to them from a VB6 DLL and Administrators can print just fine from the C# DLL as you might expect. Is there some sort of permissions we need to grant the user so these printers will be available?
A:
As I recall, running a website uses the Network User account, which may not have permission to view local printers.
There was a page on MSDN that said how you can impersonate another user that might have access to the printers, but I've not been able to find it.
Edit: I posted too soon. Here's the page.
HTH
A:
I'm fairly certain that impersonating a user or using their credentials does not constitute the ability to see the printers for that user. I believe explorer.exe reconnects all the network resources (shares/printers) upon logon.
 | Printers not available unless shared | We are using classic ASP to call a C# DLL and in the C# DLL we are using System.Drawing.Printing.PrinterSettings.InstalledPrinters to get a list of available printers. If the printers are not shared they will not show up when a user tries to print. The Local System account can see and print to them from a VB6 DLL and Administrators can print just fine from the C# DLL as you might expect. Is there some sort of permissions we need to grant the user so these printers will be available?
| [
"As I recall, running a website uses the Network User account, which may not have permission to view local printers.\nThere was a page on MSDN that said how you can impersonate another user that might have access to the printers, but I've not been able to find it.\nEdit: I posted too soon. Here's the page.\nHTH\n",
"I'm fairly certain that impersonating a user or using their credentials does not constitute the ability to see the printers for that user. I believe explorer.exe reconnects all the network resources (shares/printers) upon logon.\n"
] | [
4,
3
] | [] | [] | [
"asp.net",
"c#",
"printing",
"windows_server_2003"
] | stackoverflow_0000014756_asp.net_c#_printing_windows_server_2003.txt |
Q:
Sending emails without looking like spam
I want to send an update by email every week, but I'm afraid that if there are too many emails sent, they will be marked as spam. Do any of you have experience sending many emails (thousands) weekly? What techniques do you use?
A:
A good answer for this question would
be a real gold mine for a motivated
spammer :)
Not really -- as you'll see in that other thread, answers center on showing that you are the authoritative sender of the email, and various aspects that are useless to spammers and useful to non-spammers who send a lot of email.
 | Sending emails without looking like spam | I want to send an update by email every week, but I'm afraid that if there are too many emails sent, they will be marked as spam. Do any of you have experience sending many emails (thousands) weekly? What techniques do you use?
| [
"\nA good answer for this question would\n be a real gold mine for a motivated\n spammer :)\n\nNot really -- as you'll see in that other thread, answers center on showing that you are the authorative sender of the email, and various aspects that are useless to spammers and useful to non-spammers who send a lot of email.\n"
] | [
2
] | [] | [] | [
"asp.net",
"email"
] | stackoverflow_0000009751_asp.net_email.txt |
Q:
Use SVN instead of CVS on SourceForge
I've just set up a new project on SourceForge and the admins set it up with CVS as the SCM; however, I want to use SVN. There is NO code in this project yet - empty directory.
How do I change this project from using CVS to SVN?
A:
It's under the options.
 | Use SVN instead of CVS on SourceForge | I've just set up a new project on SourceForge and the admins set it up with CVS as the SCM; however, I want to use SVN. There is NO code in this project yet - empty directory.
How do I change this project from using CVS to SVN?
| [
"It's under the options.\n"
] | [
2
] | [] | [] | [
"cvs",
"sourceforge",
"svn"
] | stackoverflow_0000015504_cvs_sourceforge_svn.txt |
Q:
What is the best way to iterate through a strongly-typed generic List?
What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?
A:
For C#:
foreach(ObjectType objectItem in objectTypeList)
{
// ...do some stuff
}
Answer for VB.NET from Purple Ant:
For Each objectItem as ObjectType in objectTypeList
'Do some stuff '
Next
A:
With any generic implementation of IEnumerable the best way is:
//C#
foreach( var item in listVariable) {
//do stuff
}
There is an important exception, however. IEnumerable involves the overhead of Current() and MoveNext(), which is what the foreach loop is actually compiled into.
When you have a simple array of structs:
//C#
int[] valueTypeArray;
for(int i=0; i < valueTypeArray.Length; ++i) {
int item = valueTypeArray[i];
//do stuff
}
Is quicker.
Update
Following a discussion with @Steven Sudit (see comments) I think my original advice may be out of date or mistaken, so I ran some tests:
// create a list to test with
var theList = Enumerable.Range(0, 100000000).ToList();
// time foreach
var sw = Stopwatch.StartNew();
foreach (var item in theList)
{
int inLoop = item;
}
Console.WriteLine("list foreach: " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
// time for
int cnt = theList.Count;
for (int i = 0; i < cnt; i++)
{
int inLoop = theList[i];
}
Console.WriteLine("list for : " + sw.Elapsed.ToString());
// now run the same tests, but with an array
var theArray = theList.ToArray();
sw.Reset();
sw.Start();
foreach (var item in theArray)
{
int inLoop = item;
}
Console.WriteLine("array foreach: " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
// time for
cnt = theArray.Length;
for (int i = 0; i < cnt; i++)
{
int inLoop = theArray[i];
}
Console.WriteLine("array for : " + sw.Elapsed.ToString());
Console.ReadKey();
So, I ran this in release with all optimisations:
list foreach: 00:00:00.5137506
list for : 00:00:00.2417709
array foreach: 00:00:00.1085653
array for : 00:00:00.0954890
And then debug without optimisations:
list foreach: 00:00:01.1289015
list for : 00:00:00.9945345
array foreach: 00:00:00.6405422
array for : 00:00:00.4913245
So it appears fairly consistent, for is quicker than foreach and arrays are quicker than generic lists.
However, this is across 100,000,000 iterations and the difference is about .4 of a second between the fastest and slowest methods. Unless you're doing massive performance critical loops it just isn't worth worrying about.
A:
C#
myList<string>().ForEach(
delegate(string name)
{
Console.WriteLine(name);
});
Anonymous delegates are not currently implemented in VB.Net, but both C# and VB.Net should be able to do lambdas:
C#
myList<string>().ForEach(name => Console.WriteLine(name));
VB.Net
myList(Of String)().ForEach(Function(name) Console.WriteLine(name))
As Grauenwolf pointed out the above VB won't compile since the lambda doesn't return a value. A normal ForEach loop as others have suggested is probably the easiest for now, but as usual it takes a block of code to do what C# can do in one line.
Here's a trite example of why this might be useful: this gives you the ability to pass in the loop logic from another scope than where the IEnumerable exists, so you don't even have to expose it if you don't want to.
Say you have a list of relative url paths that you want to make absolute:
public IEnumerable<String> Paths(Func<String, String> formatter) {
List<String> paths = new List<String>()
{
"/about", "/contact", "/services"
};
return paths.Select(formatter);
}
So then you could call the function this way:
var hostname = "myhost.com";
Func<String, String> formatter = f => String.Format("http://{0}{1}", hostname, f);
IEnumerable<String> absolutePaths = Paths(formatter);
Giving you "http://myhost.com/about", "http://myhost.com/contact" etc. Obviously there are better ways to accomplish this in this specific example, I'm just trying to demonstrate the basic principle.
A:
For VB.NET:
For Each tmpObject as ObjectType in ObjectTypeList
'Do some stuff '
Next
A:
Without knowing the internal implementation of a list, I think generally the best way to iterate over it would be a foreach loop. Because foreach uses an IEnumerator to walk over the list, it's up to the list itself to determine how to move from object to object.
If the internal implementation was, say, a linked list, then a simple for loop would be quite a bit slower than a foreach.
Does that make sense?
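To make that concrete, a foreach over a generic collection expands to roughly the following hand-written loop, so the traversal strategy is whatever the collection's enumerator implements (GetItems here is assumed to exist):
using System.Collections.Generic;

List<string> items = GetItems(); // hypothetical source of data
IEnumerator<string> e = items.GetEnumerator();
try
{
    while (e.MoveNext())
    {
        string item = e.Current;
        // loop body
    }
}
finally
{
    e.Dispose();
}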
A:
It depends on your application:
for loop, if efficiency is a priority
foreach loop or ForEach method, whichever communicates your intent more clearly
A:
I may be missing something, but iterating through a generic list should be fairly simple if you use my examples below. The List<> class implements the IList and IEnumerable interfaces so that you can easily iterate through them basically any way you want.
The most efficient way would be to use a for loop:
for(int i = 0; i < genericList.Count; ++i)
{
// Loop body
}
You may also choose to use a foreach loop:
foreach(<insertTypeHere> o in genericList)
{
// Loop body
}
| What is the best way to iterate through a strongly-typed generic List? | What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?
| [
"For C#:\nforeach(ObjectType objectItem in objectTypeList)\n{\n // ...do some stuff\n}\n\nAnswer for VB.NET from Purple Ant:\nFor Each objectItem as ObjectType in objectTypeList\n 'Do some stuff '\nNext\n\n",
"With any generic implementation of IEnumerable the best way is:\n//C#\nforeach( var item in listVariable) {\n //do stuff\n}\n\nThere is an important exception however. IEnumerable involves an overhead of Current() and MoveNext() that is what the foreach loop is actually compiled into.\nWhen you have a simple array of structs:\n//C#\nint[] valueTypeArray;\nfor(int i=0; i < valueTypeArray.Length; ++i) {\n int item = valueTypeArray[i];\n //do stuff\n}\n\nIs quicker.\n\nUpdate\nFollowing a discussion with @Steven Sudit (see comments) I think my original advice may be out of date or mistaken, so I ran some tests:\n// create a list to test with\nvar theList = Enumerable.Range(0, 100000000).ToList();\n\n// time foreach\nvar sw = Stopwatch.StartNew();\nforeach (var item in theList)\n{\n int inLoop = item;\n}\nConsole.WriteLine(\"list foreach: \" + sw.Elapsed.ToString());\n\nsw.Reset();\nsw.Start();\n\n// time for\nint cnt = theList.Count;\nfor (int i = 0; i < cnt; i++)\n{\n int inLoop = theList[i];\n}\nConsole.WriteLine(\"list for : \" + sw.Elapsed.ToString());\n\n// now run the same tests, but with an array\nvar theArray = theList.ToArray();\n\nsw.Reset();\nsw.Start();\n\nforeach (var item in theArray)\n{\n int inLoop = item;\n}\nConsole.WriteLine(\"array foreach: \" + sw.Elapsed.ToString());\n\nsw.Reset();\nsw.Start();\n\n// time for\ncnt = theArray.Length;\nfor (int i = 0; i < cnt; i++)\n{\n int inLoop = theArray[i];\n}\nConsole.WriteLine(\"array for : \" + sw.Elapsed.ToString());\n\nConsole.ReadKey();\n\nSo, I ran this in release with all optimisations:\nlist foreach: 00:00:00.5137506\nlist for : 00:00:00.2417709\narray foreach: 00:00:00.1085653\narray for : 00:00:00.0954890\n\nAnd then debug without optimisations:\nlist foreach: 00:00:01.1289015\nlist for : 00:00:00.9945345\narray foreach: 00:00:00.6405422\narray for : 00:00:00.4913245\n\nSo it appears fairly consistent, for is quicker than foreach and arrays are quicker than generic lists.\nHowever, this is across 100,000,000 iterations and the difference is about .4 of a second between the fastest and slowest methods. Unless you're doing massive performance critical loops it just isn't worth worrying about.\n",
"C#\nmyList<string>().ForEach(\n delegate(string name)\n {\n Console.WriteLine(name);\n });\n\nAnonymous delegates are not currently implemented in VB.Net, but both C# and VB.Net should be able to do lambdas:\nC#\nmyList<string>().ForEach(name => Console.WriteLine(name));\n\nVB.Net\nmyList(Of String)().ForEach(Function(name) Console.WriteLine(name))\n\n\nAs Grauenwolf pointed out the above VB won't compile since the lambda doesn't return a value. A normal ForEach loop as others have suggested is probably the easiest for now, but as usual it takes a block of code to do what C# can do in one line.\n\nHere's a trite example of why this might be useful: this gives you the ability to pass in the loop logic from another scope than where the IEnumerable exists, so you don't even have to expose it if you don't want to.\nSay you have a list of relative url paths that you want to make absolute:\npublic IEnumerable<String> Paths(Func<String> formatter) {\n List<String> paths = new List<String>()\n {\n \"/about\", \"/contact\", \"/services\"\n };\n\n return paths.ForEach(formatter);\n}\n\nSo then you could call the function this way:\nvar hostname = \"myhost.com\";\nvar formatter = f => String.Format(\"http://{0}{1}\", hostname, f);\nIEnumerable<String> absolutePaths = Paths(formatter);\n\nGiving you \"http://myhost.com/about\", \"http://myhost.com/contact\" etc. Obviously there are better ways to accomplish this in this specfic example, I'm just trying to demonstrate the basic principle.\n",
"For VB.NET:\nFor Each tmpObject as ObjectType in ObjectTypeList\n 'Do some stuff '\nNext\n",
"Without knowing the internal implementation of a list, I think generally the best way to iterate over it would be a foreach loop. Because foreach uses an IEnumerator to walk over the list, it's up to the list itself to determine how to move from object to object.\nIf the internal implementation was, say, a linked list, then a simple for loop would be quite a bit slower than a foreach.\nDoes that make sense?\n",
"It depends on your application:\n\nfor loop, if efficiency is a priority\nforeach loop or ForEach method, whichever communicates your intent more clearly\n\n",
"I may be missing something, but iterating through a generic list should be fairly simple if you use my examples below. The List<> class implements the IList and IEnumerable interfaces so that you can easily iterate through them basically any way you want.\nThe most efficient way would be to use a for loop: \nfor(int i = 0; i < genericList.Count; ++i) \n{\n // Loop body\n}\n\nYou may also choose to use a foreach loop:\nforeach(<insertTypeHere> o in genericList)\n{\n // Loop body\n}\n\n"
] | [
32,
21,
5,
4,
2,
2,
1
] | [] | [] | [
".net",
"c#",
"collections",
"generics",
"vb.net"
] | stackoverflow_0000015204_.net_c#_collections_generics_vb.net.txt |
Q:
How do I generate ASCII codes 2 and 3 in a Bash command line?
If I press Ctrl+B that ought to give me ASCII code 2, but Ctrl+C is going to be interpreted as a Break.
So I figure I've got to redirect a file in. How do I get these characters into a file?
A:
Ctrl-V escapes the next keystroke. That's how you can get a Ctrl-C out: Ctrl-V Ctrl-C
A:
echo $'\002\003' > ./myfile
A:
perl -e 'print "\xFF"'
where FF is the hex code of the ASCII character you want to print. So for ASCII code 2, it would be \x02.
| How do I generate ASCII codes 2 and 3 in a Bash command line? | If I press Ctrl+B that ought to give me ASCII code 2, but Ctrl+C is going to be interpreted as a Break.
So I figure I've got to redirect a file in. How do I get these characters into a file?
| [
"Ctrl-V escapes the next keystoke. That's how you can get a Ctrl-C out: Ctrl-V Ctrl-C\n",
"echo $'\\002\\003' > ./myfile\n\n",
"perl -e 'print \"\\xFF\"'\n\nwhere FF is the hex code of the ACSII code you want to print. So for ACSII code 2, it would be \\x02.\n\n"
] | [
10,
6,
2
] | [] | [] | [
"ascii",
"bash",
"linux",
"unix"
] | stackoverflow_0000015537_ascii_bash_linux_unix.txt |
Q:
Resharper and TortoiseSVN
Is there any good way to deal with the class renaming refactor from ReSharper when the file is under source control and TortoiseSVN is the client? I am trying VisualSVN right now but I haven't had the need to rename anything recently. I don't want to change our repository just to try this out.
Also not sure if this feature alone is worth the cost of VisualSVN.
Update: I have uninstalled the trial of VisualSVN and tried AnkhSVN. It seems to provide the same functionality so far.
I know this may sound trivial, but the indicators seem to be lacking some functionality; it seems like they don't trickle up. (If a file in the project is different, I would think the project indicator would indicate this as well.) I tend to keep my projects rolled up as much as possible, so it is hard to tell what files have changed unless the project is expanded.
A:
TortoiseSVN 1.5 has a neat hidden feature on the check in window:
Select a missing file and a new file and right-click. One of the options will be "fix move".
I tend to refactor away, and then use this to fix any files where the name has changed.
A:
You should really check the Free as in Beer option of AnkhSVN. They made some major improvements in v2.x and I don't feel penalized anymore when doing ReSharper refactoring-ninja moves inside Visual Studio.
A:
I find VisualSVN to be well worth the money. There are ways to do it with Tortoise, but the integration of VisualSVN is very nice. I had tried other VS-integration tools before, like Ankh, and was not impressed. V-SVN has really upped the level of interaction with the repository from the IDE.
The quick trick in TortoiseSVN to fix the move sounds pretty nice as well, I need to try that out.
Another bonus: I've yet to "forget" to add a file to the repository since I got Visual SVN.
 | Resharper and TortoiseSVN | Is there any good way to deal with the class renaming refactor from ReSharper when the file is under source control and TortoiseSVN is the client? I am trying VisualSVN right now but I haven't had the need to rename anything recently. I don't want to change our repository just to try this out.
Also not sure if this feature alone is worth the cost of VisualSVN.
Update: I have uninstalled the trial of VisualSVN and tried AnkhSVN. It seems to provide the same functionality so far.
I know this may sound trivial, but the indicators seem to be lacking some functionality; it seems like they don't trickle up. (If a file in the project is different, I would think the project indicator would indicate this as well.) I tend to keep my projects rolled up as much as possible, so it is hard to tell what files have changed unless the project is expanded.
| [
"TortoiseSVN 1.5 has a neat hidden feature on the check in window:\nSelect a missing file and a new file and right-click. One of the options will be \"fix move\".\nI tend to refactor away, and then use this to fix any files where the name has changed.\n",
"You should really check the Free as in Beer option of AnkhSVN. They made some major improvements in v2.x and I don't feel penalized anymore when doing ReSharper refactoring-ninja moves inside Visual Studio.\n",
"I find VisualSVN to be well worth the money. There are ways to do it with Tortoise, but the integration of VisualSVN is very nice. I had tried over VS-integration tools before like Ankh and was not impressed. V-SVN has really upped the level of interaction with the repository from the IDE.\nThe quick trick in TortoiseSVN to fix the move sounds pretty nice as well, I need to try that out.\nAnother bonus: I've yet to \"forgot\" to add a file to the repository since I got Visual SVN.\n"
] | [
12,
12,
5
] | [
"Time to branch your repository. That's the nice part about version control, you can create new branches without totaling the old ones.\n"
] | [
-1
] | [
"ankhsvn",
"resharper",
"svn",
"tortoisesvn",
"visualsvn"
] | stackoverflow_0000013745_ankhsvn_resharper_svn_tortoisesvn_visualsvn.txt |
Q:
Is there an easy way to convert C# classes to PHP?
I am used to writing C# Windows applications. However, I have some free hosted PHP webspace that I would like to make use of. I have a basic understanding of PHP but have never used its object-oriented capabilities.
Is there an easy way to convert C# classes to PHP classes or is it just not possible to write a fully object-oriented application in PHP?
Update: There is no reliance on the .NET framework beyond the basics. The main aim would be to restructure the class properties, variable enums, etc. The PHP will be hosted on a Linux server.
A:
PHP doesn't support enums, which might be one area of mismatch.
Also, watch out for collection types: PHP, despite its OO features, tends to have no alternative to over-using the array datatype. Check out the sections of the PHP manual on iterators if you would like to see beyond this.
Public, protected, private, and static properties of classes all work roughly as expected.
A:
A huge problem would be to replicate the .NET Framework in PHP if the C# class uses it.
A:
It is entirely possible to write a PHP application almost entirely in an object-oriented methodology. You will have to write some procedural code to create and launch your first object but beyond that there are plenty of MVC frameworks for PHP that are all object-oriented. One that I would look at as an example is Code Igniter because it is a little lighter weight in my opinion.
A:
I don't know about a tool to automate the process, but you could use the Reflection API to browse your C# class and generate a corresponding PHP class.
Of course, the difficulty here is to correctly map C# types to PHP but with enough unit testing, you should be able to do what you want.
I advise you to go this way because I already did a C# to VB and C++ conversion. That was a pain but the result was worth it.
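A minimal sketch of that idea (a rough illustration, not a complete mapper; the method name is made up):
using System;
using System.Reflection;
using System.Text;

// Emit a bare-bones PHP class skeleton from a C# type's public properties.
static string ToPhpClassSkeleton(Type type)
{
    var sb = new StringBuilder();
    sb.AppendLine("class " + type.Name + " {");
    foreach (PropertyInfo prop in type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
    {
        sb.AppendLine("    public $" + prop.Name + ";");
    }
    sb.AppendLine("}");
    return sb.ToString();
}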
A:
If the problem is that you want to transition to PHP and you are happy to continue running on a windows server with .NET support you might consider wrapping your code using swig.
This can be used to generate stubs to execute from PHP, and you can then go about rewriting the .NET code into PHP in an incremental fashion.
This works for any of the supported languages; i.e., you could incrementally rewrite an application in C++ to Java if you really wanted to.
| Is there an easy way to convert C# classes to PHP? | I am used to writing C# Windows applications. However, I have some free hosted PHP webspace that I would like to make use of. I have a basic understanding of PHP but have never used its object-oriented capabilities.
Is there an easy way to convert C# classes to PHP classes or is it just not possible to write a fully object-oriented application in PHP?
Update: There is no reliance on the .NET framework beyond the basics. The main aim would be to restructure the class properties, variable enums, etc. The PHP will be hosted on a Linux server.
| [
"PHP doesn't support enums, which might be one area of mismatch.\nAlso, watch out for collection types, PHP despite it's OO features, tends to have no alternative to over-using the array datatype. Check out the sections on the PHP manual on iterators if you would like to see beyond this.\nPublic, protected, private, and static properties of classes all work roughly as expected.\n",
"A huge problem would be to replicate the .Net Framework in PHP if the C# class usses it.\n",
"It is entirely possible to write a PHP application almost entirely in an object-oriented methodology. You will have to write some procedural code to create and launch your first object but beyond that there are plenty of MVC frameworks for PHP that are all object-oriented. One that I would look at as an example is Code Igniter because it is a little lighter weight in my opinion.\n",
"I don't know about a tool to automate the process but you could use the Reflexion API to browse your C# class and generate a corresponding PHP class.\nOf course, the difficulty here is to correctly map C# types to PHP but with enough unit testing, you should be able to do what you want.\nI advice you to go this way because I already did a C# to VB and C++ conversion. That was a pain but the result was worth it.\n",
"If the problem is that you want to transition to PHP and you are happy to continue running on a windows server with .NET support you might consider wrapping your code using swig.\nThis can be used to generated stubs to execute from php and you can then go about rewriting the .NET code into PHP in an incremental fashion.\nThis works for any of the supported languages. ie. you could incrementally rewrite an application in c++ to java if you really wanted to.\n"
] | [
5,
3,
2,
0,
0
] | [] | [] | [
"c#",
"php"
] | stackoverflow_0000013647_c#_php.txt |
Q:
Design problems with .Net UserControl
I have created a UserControl that has a ListView in it. The ListView is publicly accessible through a property. When I put the UserControl in a form and try to design the ListView through the property, the ListView stays that way until I compile again and it reverts back to the default state.
How do I get my design changes to stick for the ListView?
A:
You need to decorate the ListView property with the DesignerSerializationVisibility attribute, like so:
[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
public ListView MyListView { get { return this.listView1; } }
This tells the designer's code generator to output code for it.
A:
Fredrik is right: basically, you need to enable the designer to persist the property to the page so it can be instantiated at run time. There is only one way to do this, and that is to write its values to the ASPX page, which is then picked up by the runtime.
Otherwise, the control will simply revert to its default state each and every time.
Always keep in the back of your mind that the Page (and its contents) and the code are completely separate in ASP.NET; they are hooked up at run time. This means that you don't get the nice code-behind designer support like you do in a WinForms app (where the form is an instance of an object).
A:
Just so I'm clear, you've done something like this, right?
public ListView MyListView { get { return this.listView1; } }
So then you are accessing (at design time) the MyListView property on your UserControl?
I think if you want proper design-time support you're better off changing the "Modifier" property on the ListView itself (back on the original UserControl) to Public - that way you can modify the ListView directly on instances of the UserControl. I've had success doing that anyway.
| Design problems with .Net UserControl | I have created a UserControl that has a ListView in it. The ListView is publicly accessible through a property. When I put the UserControl in a form and try to design the ListView through the property, the changes stick only until I compile again, at which point the ListView reverts to its default state.
How do I get my design changes to stick for the ListView?
| [
"You need to decorate the ListView property with the DesignerSerializationVisibility attribute, like so:\n[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]\npublic ListView MyListView { get { return this.listView1; } }\n\nThis tells the designer's code generator to output code for it.\n",
"Fredrik is right, basically, when you need to enable the designer to persist the property to page so it can be instantiated at run time. There is only one way to do this, and that is to write its values to the ASPX page, which is then picked up by the runtime.\nOtherwise, the control will simply revert to its default state each and every time.\nAlways keep in the back of your mind that the Page (and its contents) and the code are completely seperate in ASP.NET, they are hooked up at run time. This means that you dont get the nice code-behind designer support like you do in a WinForms app (where the form is an instance of an object).\n",
"Just so I'm clear, you've done something like this, right?\npublic ListView MyListView { get { return this.listView1; } }\n\nSo then you are accessing (at design time) the MyListView property on your UserControl?\nI think if you want proper design-time support you're better off changing the \"Modifier\" property on the ListView itself (back on the original UserControl) to Public - that way you can modify the ListView directly on instances of the UserControl. I've had success doing that anyway.\n"
] | [
6,
1,
0
] | [] | [] | [
".net_2.0",
"c#",
"user_controls"
] | stackoverflow_0000015716_.net_2.0_c#_user_controls.txt |
Q:
Minimize javascript HTTP calls from AjaxControlToolkit controls?
I love the ease that the ASP.NET AJAX Control Toolkit provides to ASP.NET developers. However, I've not found it to be a scalable solution at all. I have a page with 2 date input fields. By putting a TextBox and a CalendarExtender control attached to it for each date field, I get several (11, in fact) external JavaScript calls to /ScriptResource.axd?d=xxxx
Is there any way to control this? Why does it suck so much? What's a better AJAX toolkit for .NET that can provide an equal (or better) set of functionality that is more scalable, straightforward and completely customizable? NOT looking to reinvent the wheel here.
A:
I always preferred to write my Ajax calls in JavaScript using jQuery or Prototype. The ASP.NET Ajax Toolkit does make things easier, but it never seems to do so elegantly.
I personally would make a new calendar control. This way you can control the AJAX (using jQuery/Prototype) calls that are being made.
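For example, a rough sketch of that approach using jQuery UI's datepicker (the field IDs here are hypothetical):
<script type="text/javascript">
    // Attach a purely client-side datepicker instead of a CalendarExtender.
    $(function () {
        $('#startDate, #endDate').datepicker();
    });
</script>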
A:
ASP.NET AJAX allows you to register web services with the ScriptManager which will create JavaScript proxies for you to call. See http://msdn.microsoft.com/en-us/library/bb515101.aspx.
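A rough sketch of what that registration looks like (the service path, namespace and method name here are placeholders):
<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Services>
        <asp:ServiceReference Path="~/DateService.asmx" />
    </Services>
</asp:ScriptManager>
<script type="text/javascript">
    // The generated proxy exposes the service's script-enabled methods client-side.
    MyApp.DateService.GetDates(onGetDatesSuccess);
</script>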
| Minimize javascript HTTP calls from AjaxControlToolkit controls? | I love the ease that the ASP.NET AJAX Control Toolkit provides to ASP.NET developers. However, I've not found it to be a scalable solution at all. I have a page with 2 date input fields. By putting a TextBox and a CalendarExtender control attached to it for each date field, I get several (11, in fact) external JavaScript calls to /ScriptResource.axd?d=xxxx
Is there any way to control this? Why does it suck so much? What's a better AJAX toolkit for .NET that can provide an equal (or better) set of functionality that is more scalable, straightforward and completely customizable? NOT looking to reinvent the wheel here.
| [
"I always preferred to write my Ajax calls in javascript using JQuery or Prototype. ASP.NET Ajax Toolkit does make things easier, but it never seems to do so elegantly.\nI personally would make a new Calendar Controller. This way you can control the AJAX (using JQuery/Prototype) calls that are being made.\n",
"ASP.NET AJAX allows you to register web services with the ScriptManager which will create JavaScript proxies for you to call. See http://msdn.microsoft.com/en-us/library/bb515101.aspx.\n"
] | [
2,
0
] | [] | [] | [
"ajax",
"asp.net",
"frontend",
"javascript"
] | stackoverflow_0000015788_ajax_asp.net_frontend_javascript.txt |
Q:
SSRS - Post Publishing Tasks
As part of my own publishing "best practices", I tend to archive report groups and republish the "updated" reports. However, with this strategy, I lose the users associated with each report or have to re-hide reports.
Is there an automated process I can use to hide reports or add users after deploying from Visual Studio?
A:
Paul Stovell posted some examples of Reporting Services automation that might get you going.
EDIT: The link to the Subversion repository has been updated and is now working
| SSRS - Post Publishing Tasks | As part of my own publishing "best practices", I tend to archive report groups and republish the "updated" reports. However, with this strategy, I lose the users associated with each report or have to re-hide reports.
Is there an automated process I can use to hide reports or add users after deploying from Visual Studio?
| [
"Paul Stovell posted some examples of Reporting Services automation that might get you going.\nEDIT: The link to the Subversion repository has been updated and is now working\n"
] | [
0
] | [] | [] | [
"automation",
"reporting_services"
] | stackoverflow_0000015815_automation_reporting_services.txt |
Q:
How do you use #define?
I'm wondering about instances when it makes sense to use #define and #if statements. I've known about them for a while, but never incorporated them into my way of coding. How exactly does this affect the compilation?
Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, the only way to exclude it from compile is to remove this #define statement?
A:
In C# #define macros, like some of Bernard's examples, are not allowed. The only common use of #define/#ifs in C# is for adding optional debug only code. For example:
static void Main(string[] args)
{
#if DEBUG
//this only compiles if in DEBUG
Console.WriteLine("DEBUG")
#endif
#if !DEBUG
//this only compiles if not in DEBUG
Console.WriteLine("RELEASE")
#endif
//This always compiles
Console.ReadLine()
}
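A related option in C# is the Conditional attribute, which removes calls to a method entirely when the symbol is not defined - often cleaner than sprinkling #if blocks around. A minimal sketch:
using System;
using System.Diagnostics;

static class Log
{
    // Call sites compile away completely unless DEBUG is defined.
    [Conditional("DEBUG")]
    public static void Debug(string message)
    {
        Console.WriteLine(message);
    }
}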
A:
#define is used to define compile-time constants that you can use with #if to include or exclude bits of code.
#define USEFOREACH
#if USEFOREACH
foreach(var item in items)
{
#else
for(int i=0; i < items.Length; ++i)
{ var item = items[i]; //take item
#endif
doSomethingWithItem(item);
}
A:
Is #define the only thing that
determines if the code is included
when compiled? If I have #define
DEBUGme as a custom symbol, the only
way to exclude it from compile is to
remove this #define statement?
You can undefine symbols as well
#if defined(DEBUG)
#undef DEBUG
#endif
A:
Well, defines are used often for compile time constants and macros. This can make your code a bit faster as there are really no function calls, the output values of the macros are determined at compile time. The #if's are very useful. The most simple example that I can think of is checking for a debug build to add in some extra logging or messaging, maybe even some debugging functions. You can also check different environment variables this way.
Others with more C/C++ experience can add more I am sure.
A:
I often find myself defining some things that are done repetitively in certain functions. That makes the code much shorter and thus allows a better overview.
But as always, try to find a good measure to not create a new language out of it. Might be a little hard to read for the occasional maintenance later on.
A:
It's for conditional compilation, so you can include or remove bits of code based upon project attributes which tend to be:
Intended platform (Windows/Linux/XB360/PS3/Iphone.... etc)
Release or Debug (Generally logging, asserts etc are only included in a debug build)
They can also be used to disable large parts of a system quickly,
for example, during development of a game, I might define
#define PLAYSOUNDS
and then wrap the final call to play a sound in:
#ifdef PLAYSOUNDS
// Do lots of funk to play a sound
return true;
#else
return true;
So it's very easy for me to turn on and off the playing of sounds for a build. (Typically I don't play sounds when debugging because it gets in the way of my personal music :) )
The benefit is that you're not introducing a branch through adding an if statement....
A:
@Ed: When using C++, there is rarely any benefit to using #define over inline functions when creating macros. The idea of "greater speed" is a misconception. With inline functions you get the same speed, but you also get type safety and none of the side effects of preprocessor "pasting", because parameters are evaluated before the function is called (for an example, try writing the ubiquitous MAX macro, and call it like this: MAX(x++, y)... you'll see what I'm getting at).
I have never had to use #define in my C#, and I very rarely use it for anything other than platform and compiler version checking for conditional compilation in C++.
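To make the MAX example concrete, here is a small C++ sketch of the side effect being described:
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int x = 1, y = 0;
int m = MAX(x++, y); // expands to ((x++) > (y) ? (x++) : (y)) - x++ runs twice

// An inline function evaluates each argument exactly once, with type checking:
inline int max_int(int a, int b) { return a > b ? a : b; }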
A:
Perhaps the most common use of #define in C# is to differentiate between debug/release and different platforms (for example Windows and Xbox 360 in the XNA framework).
| How do you use #define? | I'm wondering about instances when it makes sense to use #define and #if statements. I've known about them for a while, but never incorporated them into my way of coding. How exactly does this affect the compilation?
Is #define the only thing that determines if the code is included when compiled? If I have #define DEBUGme as a custom symbol, the only way to exclude it from compile is to remove this #define statement?
| [
"In C# #define macros, like some of Bernard's examples, are not allowed. The only common use of #define/#ifs in C# is for adding optional debug only code. For example:\n static void Main(string[] args)\n {\n#if DEBUG\n //this only compiles if in DEBUG\n Console.WriteLine(\"DEBUG\")\n#endif \n#if !DEBUG\n //this only compiles if not in DEBUG\n Console.WriteLine(\"RELEASE\")\n#endif\n //This always compiles\n Console.ReadLine()\n }\n\n",
"#define is used to define compile-time constants that you can use with #if to include or exclude bits of code.\n#define USEFOREACH\n\n#if USEFOREACH\n foreach(var item in items)\n { \n#else\n for(int i=0; i < items.Length; ++i)\n { var item = items[i]; //take item\n#endif\n\n doSomethingWithItem(item);\n }\n\n",
"\nIs #define the only thing that\n determines if the code is included\n when compiled? If I have #define\n DEBUGme as a custom symbol, the only\n way to exclude it from compile is to\n remove this #define statement?\n\nYou can undefine symbols as well\n#if defined(DEBUG)\n#undef DEBUG\n#endif\n\n",
"Well, defines are used often for compile time constants and macros. This can make your code a bit faster as there are really no function calls, the output values of the macros are determined at compile time. The #if's are very useful. The most simple example that I can think of is checking for a debug build to add in some extra logging or messaging, maybe even some debugging functions. You can also check different environment variables this way.\nOthers with more C/C++ experience can add more I am sure.\n",
"I often find myself defining some things that are done repetitively in certain functions. That makes the code much shorter and thus allows a better overview.\nBut as always, try to find a good measure to not create a new language out of it. Might be a little hard to read for the occasional maintenance later on.\n",
"It's for conditional compilation, so you can include or remove bits of code based upon project attributes which tend to be:\n\nIntended platform (Windows/Linux/XB360/PS3/Iphone.... etc)\nRelease or Debug (Generally logging, asserts etc are only included in a debug build)\n\nThey can also be used to disable large parts of a system quickly,\nfor example, during development of a game, I might define \n#define PLAYSOUNDS\n\nand then wrap the final call to play a sound in:\n#ifdef PLAYSOUNDS\n// Do lots of funk to play a sound\nreturn true;\n#else\nreturn true;\n\nSo it's very easy for me to turn on and off the playing of sounds for a build. (Typically I don't play sounds when debugging because it gets in the way of my personal music :) )\nThe benefit is that you're not introducing a branch through adding an if statement....\n",
"@Ed: When using C++, there is rarely any benefit for using #define over inline functions when creating macros. The idea of \"greater speed\" is a misconception. With inline functions you get the same speed, but you also get type safey, and no side-effects of preprocessor \"pasting\" due to the fact that parameters are evaluated before the function is called (for an example, try writing the ubiquitous MAX macro, and call it like this: MAX(x++, y).. you'll see what I'm getting at).\nI have never had to use #define in my C#, and I very rarely use it for anything other that platform and compiler version checking for conditional compilation in C++.\n",
"Perhaps the most common usees of #define in C# is to differentiate between debug/release and different platforms (for example Windows and X-Box 360 in the XNA framework).\n"
] | [
14,
8,
3,
1,
1,
1,
0,
0
] | [] | [] | [
"c#"
] | stackoverflow_0000015744_c#.txt |
Q:
Entity diagrams in ASP.NET MVC
What's the best way/tool to display nice diagrams with entity relationships in ASP MVC views?
i.e. servers and applications, or servers with other servers.
Are there any third party tools out there that can do this?
I've been searching around things like Telerik, but it's really hard to google for this!
A:
Here are some 3rd-party diagramming tools:
http://www.nevron.com/Products.DiagramFor.NET.Overview.aspx
http://www.nwoods.com/GO/dotnet.htm
http://www.syncfusion.com/products/diagram/web/default.aspx
| Entity diagrams in ASP.NET MVC | What's the best way/tool to display nice diagrams with entity relationships in ASP MVC views?
i.e. servers and applications, or servers with other servers.
Are there any third party tools out there that can do this?
I've been searching around things like Telerik, but it's really hard to google for this!
| [
"Here are some 3rd-party diagramming tools:\n\nhttp://www.nevron.com/Products.DiagramFor.NET.Overview.aspx\nhttp://www.nwoods.com/GO/dotnet.htm\nhttp://www.syncfusion.com/products/diagram/web/default.aspx\n\n"
] | [
3
] | [] | [] | [
".net_3.5",
"asp.net",
"asp.net_mvc",
"diagram",
"graphics"
] | stackoverflow_0000015900_.net_3.5_asp.net_asp.net_mvc_diagram_graphics.txt |
Q:
What is the best way to replicate a version control repository?
Here is the scenario that I have. I have a cvs repository in one location (A) and I want to replicate it and keep it in sync with a repository in another location(B). This would be a single directional sync from A to B. What is the best way to do this? If it is not really feasible in CVS then which source code control system would you recommend to accomplish this? Thanks
A:
When using CVS, I don't know any tools to do that other than files syncing. You can achieve that using tools like rsync (Unix) or xcopy/robocopy (Windows).
If you plan on migrating to Subversion, it provides a tool called svnsync that allows you to sync one repository from another.
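For the rsync route, a one-way mirror might look like this (paths and host are placeholders; note the caution about lock files in the next answer):
# Mirror repository A to repository B; --delete keeps B an exact copy of A.
rsync -av --delete /var/lib/cvsroot/ backup@site-b:/var/lib/cvsroot/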
A:
I would recommend you migrate from CVS to a proper distributed version control system such as git, which will provide this sort of functionality very naturally.
Subversion also provides svnsync which does the same sort of thing.
A:
If you do take the rsync/filecopy approach with CVS, it is important to only sync the files at a time when there is not an active commit. Otherwise, the repository's lock file will get copied over and you will be unable to checkout/update on the target side until the next sync.
This reason alone may make CVS a bad choice. The migration path from CVS to Subversion is pretty smooth and there are tools to import a full CVS repo, with history, into Subversion.
Consider Git or Mercurial if you want to get into true distributed versioning, but it sounds like that would be overkill for your "read only" needs.
A:
The best (and perhaps costliest) way is ClearCase MultiSite.
But if you are looking for open source, Git is quickly replacing svn everywhere.
| What is the best way to replicate a version control repository? | Here is the scenario that I have. I have a cvs repository in one location (A) and I want to replicate it and keep it in sync with a repository in another location(B). This would be a single directional sync from A to B. What is the best way to do this? If it is not really feasible in CVS then which source code control system would you recommend to accomplish this? Thanks
| [
"When using CVS, I don't know any tools to do that other than files syncing. You can achieve that using tools like rsync (Unix) or xcopy/robocopy (Windows).\nIf you plan on migrating to Subversion, it provides a tool called svnsync that allows to sync a repository from another one.\n",
"I would recommend you migrate from CVS to a proper distributed version control system such as git, which will provide this sort of functionality very naturally.\nSubversion also provides svnsync which does the same sort of thing.\n",
"If you do take the rsync/filecopy approach with CVS, it is important to only sync the files at a time when there is not an active commit. Otherwise, the repository's lock file will get copied over and you will be unable to checkout/update on the target side until the next sync.\nThis reason alone may make CVS a bad choice. The migration path from CVS to Subversion is pretty smooth and there are tools to import a full CVS repo, with history, into Subversion.\nConsider Git or Mercurial if you want to get into true distributed versioning, but it sounds like that would be overkill for your \"read only\" needs.\n",
"The Best (and perhaps costliest) way is Clearcase Multisite\nBut if you are looking for opensource, Git is becoming quickly replacing svn everywhere.. \n"
] | [
4,
2,
2,
0
] | [] | [] | [
"cvs",
"version_control"
] | stackoverflow_0000016097_cvs_version_control.txt |
Q:
C# Auto Clearing Winform Textbox
I have a user that wants to be able to select a textbox and have the current text selected so that he doesn't have to highlight it all in order to change the contents.
The contents need to be handled when Enter is pushed. That part I think I have figured out, but any suggestions would be welcome.
The part I need help with is that once Enter has been pushed, any entry into the textbox should clear the contents again.
Edit: The textbox controls a piece of RF hardware. What the user wants to be able to do is enter a setting and press Enter. The setting is sent to the hardware. Without doing anything else, the user wants to be able to type in a new setting and press Enter again.
A:
Hook into the KeyPress event on the TextBox, and when it encounters the Enter key, run your hardware setting code, and then highlight the full text of the textbox again (see below) - Windows will take care of clearing the text with the next keystroke for you.
TextBox1.Select(0, TextBox1.Text.Length);
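Putting it together, a minimal sketch (SendSettingToHardware is a placeholder for whatever sends the value to the RF device):
private void TextBox1_KeyPress(object sender, KeyPressEventArgs e)
{
    if (e.KeyChar == (char)Keys.Enter)
    {
        SendSettingToHardware(TextBox1.Text);     // hypothetical helper
        TextBox1.Select(0, TextBox1.Text.Length); // next keystroke replaces the text
        e.Handled = true;                         // swallow the Enter keystroke
    }
}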
A:
OK, are you sure that is wise? I am picturing two scenarios here:
There is a default button on the form, which is "clicked" when enter is pushed".
There is no default button, and you want the user to have to press enter, regardless.
Both of these raise the same questions:
Is there any validation that is taking place on the text?
Why not create a user control to encapsulate this logic?
If you know the enter button is being pushed and consumed fine, how are you having problems with TextBoxName.Text = string.Empty ?
Also, as a polite note, can you please try and break up your question a bit? One big block is a bit of a pain to read..
| C# Auto Clearing Winform Textbox | I have a user that wants to be able to select a textbox and have the current text selected so that he doesn't have to highlight it all in order to change the contents.
The contents need to be handled when Enter is pushed. That part I think I have figured out, but any suggestions would be welcome.
The part I need help with is that once Enter has been pushed, any entry into the textbox should clear the contents again.
Edit: The textbox controls a piece of RF hardware. What the user wants to be able to do is enter a setting and press Enter. The setting is sent to the hardware. Without doing anything else, the user wants to be able to type in a new setting and press Enter again.
| [
"Hook into the KeyPress event on the TextBox, and when it encounters the Enter key, run your hardware setting code, and then highlight the full text of the textbox again (see below) - Windows will take care of clearing the text with the next keystroke for you.\nTextBox1.Select(0, TextBox1.Text.Length);\n\n",
"OK, are you sure that is wise? I am picturing two scenarios here:\n\nThere is a default button on the form, which is \"clicked\" when enter is pushed\".\nThere is no default button, and you want the user to have to press enter, regardless.\n\nBoth of these raise the same questions:\n\nIs there any validation that is taking place on the text?\nWhy not create a user control to encapsulate this logic?\nIf you know the enter button is being pushed and consumed fine, how are you having problems with TextBoxName.Text = string.Empty ?\n\nAlso, as a polite note, can you please try and break up your question a bit? One big block is a bit of a pain to read..\n"
] | [
4,
1
] | [] | [] | [
"c#",
"textbox",
"winforms"
] | stackoverflow_0000016110_c#_textbox_winforms.txt |
Q:
C# Include Derived Control in Toolbox
This is in reference to my other question Auto Clearing Textbox.
If I choose to derive a new TextBox control from TextBox instead of implementing a user control that just contains my TextBox, how would I include it in the toolbox?
A:
Right-click the toolbox, click "Choose
Items" from the context menu, browse
to your DLL, and select it.
To extend on Greg's answer...
Just to clarify, you cannot add a user control to the tool box if the code for it is in the same project that you want to use it in. For some reason MS has never added this ability, which would make sense since we don't want to always have to create a User Control Library DLL everytime we want to use a user control. So, to get it in your tool box, you have to first create a separate "User Control Library" project (which can be in the same solution!) and then do what Greg said.
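For reference, the derived control itself can be tiny - once it lives in that separate control library project it shows up in the toolbox after the steps above:
using System.Windows.Forms;

// Minimal derived control; add the auto-clearing behavior from the other question here.
public class AutoClearingTextBox : TextBox
{
}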
A:
Right-click the toolbox, click "Choose Items" from the context menu, browse to your DLL, and select it.
| C# Include Derived Control in Toolbox | This is in reference to my other question Auto Clearing Textbox.
If I choose to derive a new TextBox control from TextBox instead of implementing a user control that just contains my TextBox, how would I include it in the toolbox?
| [
"\nRight-click the toolbox, click \"Choose\n Items\" from the context menu, browse\n to your DLL, and select it.\n\nTo extend on Greg's answer...\nJust to clarify, you cannot add a user control to the tool box if the code for it is in the same project that you want to use it in. For some reason MS has never added this ability, which would make sense since we don't want to always have to create a User Control Library DLL everytime we want to use a user control. So, to get it in your tool box, you have to first create a separate \"User Control Library\" project (which can be in the same solution!) and then do what Greg said.\n",
"Right-click the toolbox, click \"Choose Items\" from the context menu, browse to your DLL, and select it.\n"
] | [
8,
0
] | [] | [] | [
"c#",
"textbox",
"visual_studio",
"winforms"
] | stackoverflow_0000016114_c#_textbox_visual_studio_winforms.txt |
Q:
LocationProvider
We need to replace the menu system in our main ASP.NET application. So naturally we're looking at the ASP.NET SiteMapProvider and Menu controls. However we also need enough security to prevent users from directly entering URLs that they shouldn't have access to. We can do this by putting <location> entries in web.config and securing them individually but that's going to be a PITA to manage across multiple web servers.
Is there a Provider that can be used to, well, provide the equivalent of the <location> entries? I haven't been able to find one, and it's slightly frustrating given the existence of the ConfigurationLocation class.
Alternatively is there a configuration option we're missing in SiteMapProvider that will restrict users from getting to URLs they shouldn't?
A:
Why don't you create rights & profiles to manage which pages a user can see?
I usually create a user class which implements the IPrincipal security interface. On every request to your application, you check the rights of a particular user and output the SiteMap's nodes allowed for this user.
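If you go that route, ASP.NET's built-in security trimming pairs well with it. A web.config sketch (the provider name here is arbitrary):
<siteMap defaultProvider="SecureSiteMapProvider">
  <providers>
    <add name="SecureSiteMapProvider"
         type="System.Web.XmlSiteMapProvider"
         siteMapFile="Web.sitemap"
         securityTrimmingEnabled="true" />
  </providers>
</siteMap>
With securityTrimmingEnabled, nodes are only rendered for users who are authorized for the target URL (or who match a node's roles attribute), which also covers the menu side of the question.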
| LocationProvider | We need to replace the menu system in our main ASP.NET application. So naturally we're looking at the ASP.NET SiteMapProvider and Menu controls. However we also need enough security to prevent users from directly entering URLs that they shouldn't have access to. We can do this by putting <location> entries in web.config and securing them individually but that's going to be a PITA to manage across multiple web servers.
Is there a Provider that can be used to, well, provide the equivalent of the <location> entries? I haven't been able to find one, and it's slightly frustrating given the existence of the ConfigurationLocation class.
Alternatively is there a configuration option we're missing in SiteMapProvider that will restrict users from getting to URLs they shouldn't?
| [
"Why don't you create rights & profiles to manage wich pages a user can see?\nI usually create a user class which implements the IPrincipal security interface. On every request to your application, you check the rights of a particular user and output the SiteMap's nodes allowed for this user.\n"
] | [
1
] | [] | [] | [
"asp.net",
"security",
"sitemap"
] | stackoverflow_0000016003_asp.net_security_sitemap.txt |
Q:
How do I use T-SQL Group By
I know I need to have (although I don't know why) a GROUP BY clause on the end of a SQL query that uses any aggregate functions like count, sum, avg, etc:
SELECT count(userID), userName
FROM users
GROUP BY userName
When else would GROUP BY be useful, and what are the performance ramifications?
A:
To retrieve the number of widgets from each widget category that has more than 5 widgets, you could do this:
SELECT WidgetCategory, count(*)
FROM Widgets
GROUP BY WidgetCategory
HAVING count(*) > 5
The "having" clause is something people often forget about, instead opting to retrieve all their data to the client and iterating through it there.
A:
GROUP BY is similar to DISTINCT in that it groups multiple records into one.
This example, borrowed from http://www.devguru.com/technologies/t-sql/7080.asp, lists distinct products in the Products table.
SELECT Product FROM Products GROUP BY Product
Product
-------------
Desktop
Laptop
Mouse
Network Card
Hard Drive
Software
Book
Accessory
The advantage of GROUP BY over DISTINCT, is that it can give you granular control when used with a HAVING clause.
SELECT Product, count(Product) as ProdCnt
FROM Products
GROUP BY Product
HAVING count(Product) > 2
Product ProdCnt
--------------------
Desktop 10
Laptop 5
Mouse 3
Network Card 9
Software 6
A:
Group By forces the entire set to be populated before records are returned (since it is an implicit sort).
For that reason (and many others), never use a Group By in a subquery.
A:
Counting the number of times tags are used might be a google example:
SELECT TagName, Count(*) AS TimesUsed
FROM Tags
GROUP BY TagName
ORDER BY TimesUsed
If you simply want the distinct tag values, I would prefer to use the DISTINCT statement.
SELECT DISTINCT TagName
FROM Tags
ORDER BY TagName ASC
A:
GROUP BY also helps when you want to generate a report that will average or sum a bunch of data. You can GROUP BY the department ID and SUM all the sales revenue or AVG the count of sales for each month.
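For instance (table and column names here are hypothetical):
SELECT DepartmentId,
       SUM(Revenue) AS TotalRevenue,
       AVG(UnitsSold) AS AvgUnitsSold
FROM MonthlySales
GROUP BY DepartmentId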
| How do I use T-SQL Group By | I know I need to have (although I don't know why) a GROUP BY clause on the end of a SQL query that uses any aggregate functions like count, sum, avg, etc:
SELECT count(userID), userName
FROM users
GROUP BY userName
When else would GROUP BY be useful, and what are the performance ramifications?
| [
"To retrieve the number of widgets from each widget category that has more than 5 widgets, you could do this:\nSELECT WidgetCategory, count(*)\nFROM Widgets\nGROUP BY WidgetCategory\nHAVING count(*) > 5\n\nThe \"having\" clause is something people often forget about, instead opting to retrieve all their data to the client and iterating through it there.\n",
"GROUP BY is similar to DISTINCT in that it groups multiple records into one.\nThis example, borrowed from http://www.devguru.com/technologies/t-sql/7080.asp, lists distinct products in the Products table. \nSELECT Product FROM Products GROUP BY Product\n\nProduct\n-------------\nDesktop\nLaptop\nMouse\nNetwork Card\nHard Drive\nSoftware\nBook\nAccessory\n\nThe advantage of GROUP BY over DISTINCT, is that it can give you granular control when used with a HAVING clause.\nSELECT Product, count(Product) as ProdCnt\nFROM Products\nGROUP BY Product\nHAVING count(Product) > 2\n\nProduct ProdCnt\n--------------------\nDesktop 10\nLaptop 5\nMouse 3\nNetwork Card 9\nSoftware 6\n\n",
"Group By forces the entire set to be populated before records are returned (since it is an implicit sort).\nFor that reason (and many others), never use a Group By in a subquery.\n",
"Counting the number of times tags are used might be a google example:\nSELECT TagName, Count(*)\nAS TimesUsed\nFROM Tags\nGROUP BY TagName ORDER TimesUsed\n\nIf you simply want a distinct value of tags, I would prefer to use the DISTINCT statement.\nSELECT DISTINCT TagName\nFROM Tags\nORDER BY TagName ASC\n\n",
"GROUP BY also helps when you want to generate a report that will average or sum a bunch of data. You can GROUP By the Department ID and the SUM all the sales revenue or AVG the count of sales for each month.\n"
] | [
36,
15,
4,
2,
0
] | [] | [] | [
"group_by",
"sql",
"sql_server"
] | stackoverflow_0000002702_group_by_sql_sql_server.txt |
Q:
Displaying Version Information in a Web Service
Can anyone suggest a way of getting version information into a Web Service? (VB.NET)
I would like to dynamically use the assembly version in the title or description, but the attributes require constants.
Is manually writing the version info as a string the only way of displaying the information on the .asmx page?
A:
You need to pick a type in your assembly and then do the following:
typeof(Some.Object.In.My.Assembly).Assembly.GetName().Version;
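Since the question is VB.NET, the equivalent there would be roughly this (SomeTypeInMyAssembly is a placeholder for any type defined in your assembly):
Dim version As Version = GetType(SomeTypeInMyAssembly).Assembly.GetName().Version
Dim description As String = String.Format("My Web Service v{0}", version)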
A:
via reflection you can get the Assembly object which contains the assembly version.
A:
Yeah, attributes cannot have anything but constants in them, so you cannot use reflection to get the version number. The WebServiceAttribute class is sealed too, so you cannot inherit it and do what you want from there.
A solution might be to use some kind of placeholder text as the Name, and set up an MsBuild task to replace it with the version number when building the project.
| Displaying Version Information in a Web Service | Can anyone suggest a way of getting version information into a Web Service? (VB.NET)
I would like to dynamically use the assembly version in the title or description, but the attributes require constants.
Is manually writing the version info as a string the only way of displaying the information on the .asmx page?
| [
"You need to pick a type in your assembly and then do the following:\ntypeof(Some.Object.In.My.Assembly).Assembly.GetName().Version;\n\n",
"via reflection you can get the Assembly object which contains the assembly version.\n",
"Yeah, attributes cannot have anything but constants in them, so you cannot use reflection to get the version number. The WebServiceAttribute class is sealed too, so you cannot inherit it and do what you want from there.\nA solution might be to use some kind of placeholder text as the Name, and set up an MsBuild task to replace it with the version number when building the project.\n"
] | [
0,
0,
0
] | [] | [] | [
"vb.net",
"versions",
"web_services"
] | stackoverflow_0000016209_vb.net_versions_web_services.txt |
Q:
Load an XmlNodeList into an XmlDocument without looping?
I originally asked this question on RefactorMyCode, but got no responses there...
Basically I'm just trying to load an XmlNodeList into an XmlDocument and I was wondering if there's a more efficient method than looping.
Private Function GetPreviousMonthsXml(ByVal months As Integer, ByVal startDate As Date, ByVal xDoc As XmlDocument, ByVal path As String, ByVal nodeName As String) As XmlDocument
'' build xpath string with list of months to return
Dim xp As New StringBuilder("//")
xp.Append(nodeName)
xp.Append("[")
For i As Integer = 0 To (months - 1)
'' get year and month portion of date for datestring
xp.Append("starts-with(@Id, '")
xp.Append(startDate.AddMonths(-i).ToString("yyyy-MM"))
If i < (months - 1) Then
xp.Append("') or ")
Else
xp.Append("')]")
End If
Next
'' *** This is the block that needs to be refactored ***
'' import nodelist into an xmldocument
Dim xnl As XmlNodeList = xDoc.SelectNodes(xp.ToString())
Dim returnXDoc As New XmlDocument(xDoc.NameTable)
returnXDoc = xDoc.Clone()
Dim nodeParents As XmlNodeList = returnXDoc.SelectNodes(path)
For Each nodeParent As XmlNode In nodeParents
For Each nodeToDelete As XmlNode In nodeParent.SelectNodes(nodeName)
nodeParent.RemoveChild(nodeToDelete)
Next
Next
For Each node As XmlNode In xnl
Dim newNode As XmlNode = returnXDoc.ImportNode(node, True)
returnXDoc.DocumentElement.SelectSingleNode("//" & node.ParentNode.Name & "[@Id='" & newNode.Attributes("Id").Value.Split("-")(0) & "']").AppendChild(newNode)
Next
'' *** end ***
Return returnXDoc
End Function
A:
Dim returnXDoc As New XmlDocument(xDoc.NameTable)
returnXDoc = xDoc.Clone()
The first line here is redundant - you are creating an instance of an XmlDocument, then reassigning the variable:
Dim returnXDoc As XmlDocument = xDoc.Clone()
This does the same.
Seeing as you appear to be inserting each XmlNode from your node list into a different place in the new XmlDocument then I can't see how you could possibly do this any other way.
There may be faster XPath expressions you could write, for example pre-pending an XPath expression with "//" is almost always the slowest way to do something, especially if your XML is well structured. You haven't shown your XML so I couldn't really comment on this further however.
| Load an XmlNodeList into an XmlDocument without looping? | I originally asked this question on RefactorMyCode, but got no responses there...
Basically I'm just trying to load an XmlNodeList into an XmlDocument and I was wondering if there's a more efficient method than looping.
Private Function GetPreviousMonthsXml(ByVal months As Integer, ByVal startDate As Date, ByVal xDoc As XmlDocument, ByVal path As String, ByVal nodeName As String) As XmlDocument
'' build xpath string with list of months to return
Dim xp As New StringBuilder("//")
xp.Append(nodeName)
xp.Append("[")
For i As Integer = 0 To (months - 1)
'' get year and month portion of date for datestring
xp.Append("starts-with(@Id, '")
xp.Append(startDate.AddMonths(-i).ToString("yyyy-MM"))
If i < (months - 1) Then
xp.Append("') or ")
Else
xp.Append("')]")
End If
Next
'' *** This is the block that needs to be refactored ***
'' import nodelist into an xmldocument
Dim xnl As XmlNodeList = xDoc.SelectNodes(xp.ToString())
Dim returnXDoc As New XmlDocument(xDoc.NameTable)
returnXDoc = xDoc.Clone()
Dim nodeParents As XmlNodeList = returnXDoc.SelectNodes(path)
For Each nodeParent As XmlNode In nodeParents
For Each nodeToDelete As XmlNode In nodeParent.SelectNodes(nodeName)
nodeParent.RemoveChild(nodeToDelete)
Next
Next
For Each node As XmlNode In xnl
Dim newNode As XmlNode = returnXDoc.ImportNode(node, True)
returnXDoc.DocumentElement.SelectSingleNode("//" & node.ParentNode.Name & "[@Id='" & newNode.Attributes("Id").Value.Split("-")(0) & "']").AppendChild(newNode)
Next
'' *** end ***
Return returnXDoc
End Function
| [
"Dim returnXDoc As New XmlDocument(xDoc.NameTable)\nreturnXDoc = xDoc.Clone()\n\nThe first line here is redundant - you are creating an instance of an XmlDocument, then reassigning the variable:\nDim returnXDoc As XmlDocument = xDoc.Clone()\n\nThis does the same.\nSeeing as you appear to be inserting each XmlNode from your node list into a different place in the new XmlDocument then I can't see how you could possibly do this any other way.\nThere may be faster XPath expressions you could write, for example pre-pending an XPath expression with \"//\" is almost always the slowest way to do something, especially if your XML is well structured. You haven't shown your XML so I couldn't really comment on this further however.\n"
] | [
2
] | [] | [] | [
"vb.net",
"xml",
"xmldocument",
"xmlnode",
"xmlnodelist"
] | stackoverflow_0000012709_vb.net_xml_xmldocument_xmlnode_xmlnodelist.txt |
Q:
Why are my auto-run applications acting weird on Vista?
The product we are working on allows the user to easily set it up to run automatically whenever the computer is started. This is helpful because the product is part of the basic work environment of most of our users.
This feature was implemented not so long ago, and for a while all was well, but when we started testing it on Vista the product started behaving really weirdly on startup. Specifically, our product makes use of another product (let's call it X) that it launches whenever it needs its services. The actual problem is that whenever X is launched immediately after log-on, it crashes or reports critical errors related to disk access (this happens even when X is launched directly - not through our product).
This happens whenever we run our product by registering it in the "Run" key in the registry or place a shortcut to it in the "Startup" folder inside the "Start Menu", even when we put a delay of ~20 seconds before actually starting to run. When we changed the delay to 70 seconds, all is well.
We tried to reproduce the problem by launching our product manually immediately after logon (by double-clicking on a shortcut placed on the desktop) but to no avail.
Now how is it possible that applications that run normally a minute after logon report such hard errors when starting immediately after logon?
A:
This is the effect of a new feature in Vista called "Boxing":
Windows has several mechanisms that allow the user/admin to set up applications to automatically run when windows starts. This feature is mostly used for one of these purposes:
1. Programs that are part of the basic work environment of the user, such that the first action the user would usually take when starting the computer is to start them.
2. All sorts of background "agents" - skype, messenger, winamp etc.
When too many (or too heavy) programs are registered to run on startup the end result is that the user can't actually do anything for the first few seconds/minutes after login, which can be really annoying. In comes Vista's "Boxing" feature:
Briefly, Vista forces all programs invoked through the Run key to operate at low priority for the first 60 seconds after login. This affects both I/O priority (which is set to Very Low) and CPU priority. Very Low priority I/O requests do not pass through the file cache, but go directly to disk. Thus, they are much slower than regular I/O.
The length of the boxing period is set by the registry value:
"HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\DelayedApps\Delay_Sec".
For a more detailed explanation see here and here
A:
The program probably needs some more info put into its properties. It needs to "Run As", instead of just running.
Maybe this application should be developed as a service instead of a program to be launched, or you could have a service that launches the program when it determines the best window of opportunity.
| Why are my auto-run applications acting weird on Vista? | The product we are working on allows the user to easily set it up to run automatically whenever the computer is started. This is helpful because the product is part of the basic work environment of most of our users.
This feature was implemented not so long ago, and for a while all was well, but when we started testing it on Vista the product started behaving really weirdly on startup. Specifically, our product makes use of another product (let's call it X) that it launches whenever it needs its services. The actual problem is that whenever X is launched immediately after log-on, it crashes or reports critical errors related to disk access (this happens even when X is launched directly - not through our product).
This happens whenever we run our product by registering it in the "Run" key in the registry or place a shortcut to it in the "Startup" folder inside the "Start Menu", even when we put a delay of ~20 seconds before actually starting to run. When we changed the delay to 70 seconds, all is well.
We tried to reproduce the problem by launching our product manually immediately after logon (by double-clicking on a shortcut placed on the desktop) but to no avail.
Now how is it possible that applications that run normally a minute after logon report such hard errors when starting immediately after logon?
| [
"This is the effect of a new feature in Vista called \"Boxing\":\nWindows has several mechanisms that allow the user/admin to set up applications to automatically run when windows starts. This feature is mostly used for one of these purposes:\n1. Programs that are part of the basic work environment of the user, such that the first action the user would usually take when starting the computer is to start them.\n2. All sorts of background \"agents\" - skype, messenger, winamp etc.\nWhen too many (or too heavy) programs are registered to run on startup the end result is that the user can't actually do anything for the first few seconds/minutes after login, which can be really annoying. In comes Vista's \"Boxing\" feature:\nBriefly, Vista forces all programs invoked through the Run key to operate at low priority for the first 60 seconds after login. This affects both I/O priority (which is set to Very Low) and CPU priority. Very Low priority I/O requests do not pass through the file cache, but go directly to disk. Thus, they are much slower than regular I/O. \nThe length of the boxing period is set by the registry value: \n\"HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Advanced\\DelayedApps\\Delay_Sec\". \nFor a more detailed explanation see here and here\n",
"The program probably needs some more info put into its properties. It needs to \"Run As\", instead of just running.\nMaybe this application should be developed as a service, instead of a program to be launched, or you could have service that launches the program when its determined the best window of opportunity. \n"
] | [
6,
0
] | [] | [] | [
"virtual_pc",
"windows_vista"
] | stackoverflow_0000015805_virtual_pc_windows_vista.txt |
Q:
How to get only directory name from SaveFileDialog.FileName
What would be the easiest way to separate the directory name from the file name when dealing with SaveFileDialog.FileName in C#?
A:
Use:
System.IO.Path.GetDirectoryName(saveDialog.FileName)
(and the corresponding System.IO.Path.GetFileName). The Path class is really rather useful.
A:
You could construct a FileInfo object. It has a Name, FullName, and DirectoryName property.
var file = new FileInfo(saveFileDialog.FileName);
Console.WriteLine("File is: " + file.Name);
Console.WriteLine("Directory is: " + file.DirectoryName);
A:
The Path class in System.IO parses it pretty nicely.
A:
Since the backslash is not allowed within a file name itself, one simple way is to divide the SaveFileDialog.FileName using String.LastIndexOf; for example:
string filename = dialog.FileName;
string path = filename.Substring(0, filename.LastIndexOf("\\"));
string file = filename.Substring(filename.LastIndexOf("\\") + 1);
| How to get only directory name from SaveFileDialog.FileName | What would be the easiest way to separate the directory name from the file name when dealing with SaveFileDialog.FileName in C#?
| [
"Use:\nSystem.IO.Path.GetDirectoryName(saveDialog.FileName)\n\n(and the corresponding System.IO.Path.GetFileName). The Path class is really rather useful.\n",
"You could construct a FileInfo object. It has a Name, FullName, and DirectoryName property.\nvar file = new FileInfo(saveFileDialog.FileName);\nConsole.WriteLine(\"File is: \" + file.Name);\nConsole.WriteLine(\"Directory is: \" + file.DirectoryName);\n\n",
"The Path object in System.IO parses it pretty nicely.\n",
"Since the forward slash is not allowed in the filename, one simple way is to divide the SaveFileDialog.Filename using String.LastIndexOf; for example:\nstring filename = dialog.Filename;\nstring path = filename.Substring(0, filename.LastIndexOf(\"\\\"));\nstring file = filename.Substring(filename.LastIndexOf(\"\\\") + 1);\n\n"
] | [
15,
2,
1,
0
] | [] | [] | [
"c#",
"parsing",
"string"
] | stackoverflow_0000016306_c#_parsing_string.txt |
Q:
Runtime Configuration in .Net (specifically the EntLib)
I'm looking for a way to configure a DB connection at runtime; specifically using the Enterprise Library. I see that there's a *.Data.Configuration (or something close to this ... don't recall off the top of my head) assembly but am finding not much on the interwebs. Complicating matters is the fact that the API help is broken on Vista.
Now, I found this work-around:
Configuration cfg = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ConnectionStringSettings connection = new ConnectionStringSettings();
connection.Name = "Runtime Connection";
connection.ProviderName = "System.Data.OleDb";
connection.ConnectionString = "myconstring";
cfg.ConnectionStrings.ConnectionStrings.Add(connection);
cfg.Save(ConfigurationSaveMode.Modified);
ConfigurationManager.RefreshSection("connectionStrings");
var runtimeCon = DatabaseFactory.CreateDatabase("Runtime Connection");
And although it gives me what I want, it permanently edits the App.config. Sure I can go back and delete the changes, but I'd rather not go through this hassle.
A:
If you're using a winforms app you could try using UserProperties to store this info. Another possible solution could be custom configuration sections.
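Another option, if the goal is only to avoid persisting anything: skip DatabaseFactory and construct the Database directly. A sketch - note that the exact constructor varies between Enterprise Library versions:
using Microsoft.Practices.EnterpriseLibrary.Data;
using Microsoft.Practices.EnterpriseLibrary.Data.Sql;

// Bypasses configuration entirely; nothing is written to App.config.
Database db = new SqlDatabase("myconstring");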
A:
If you don't want it saved, you do not need to execute the cfg.Save command.
The Configuration object will hold your changes in memory until it is no longer needed.
A:
Nope, you must save in order for the EntLib (and, I suspect, any other tool) to see the changes.
| Runtime Configuration in .Net (specifically the EntLib) | I'm looking for a way to configure a DB connection at runtime; specifically using the Enterprise Library. I see that there's a *.Data.Configuration (or something close to this ... don't recall off the top of my head) assembly but am finding not much on the interwebs. Complicating matters is the fact that the API help is broken on Vista.
Now, I found this work-around:
Configuration cfg = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ConnectionStringSettings connection = new ConnectionStringSettings();
connection.Name = "Runtime Connection";
connection.ProviderName = "System.Data.OleDb";
connection.ConnectionString = "myconstring";
cfg.ConnectionStrings.ConnectionStrings.Add(connection);
cfg.Save(ConfigurationSaveMode.Modified);
ConfigurationManager.RefreshSection("connectionStrings");
var runtimeCon = DatabaseFactory.CreateDatabase("Runtime Connection");
And although it gives me what I want, it permanently edits the App.config. Sure I can go back and delete the changes, but I'd rather not go through this hassle.
| [
"If you're using a winforms app you could try using UserProperties to store this info. Another possible solution could be custom configuration sections.\n",
"If you don't want it saved, you do not need to execute the cfg.Save command. \nThe Configuration object will store your changes until it isn't needed anymore.\n",
"Nope, you must save in order for the EntLib (and, I suspect, any other tool) to see the changes.\n"
] | [
1,
0,
0
] | [] | [] | [
".net",
"connection_string",
"enterprise_library"
] | stackoverflow_0000015700_.net_connection_string_enterprise_library.txt |
Q:
How to insert/replace XML tag in XmlDocument?
I have an XmlDocument in Java, created with the WebLogic XmlDocument parser.
I want to replace the content of a tag in this XmlDocument with my own data, or insert the tag if it isn't there.
<customdata>
<tag1 />
<tag2>mfkdslmlfkm</tag2>
<location />
<tag3 />
</customdata>
For example I want to insert a URL in the location tag:
<location>http://something</location>
but otherwise leave the XML as is.
Currently I use a XMLCursor:
XmlObject xmlobj = XmlObject.Factory.parse(a.getCustomData(), options);
XmlCursor xmlcur = xmlobj.newCursor();
while (xmlcur.hasNextToken()) {
boolean found = false;
if (xmlcur.isStart() && "schema-location".equals(xmlcur.getName().toString())) {
xmlcur.setTextValue("http://replaced");
System.out.println("replaced");
found = true;
} else if (xmlcur.isStart() && "customdata".equals(xmlcur.getName().toString())) {
xmlcur.push();
} else if (xmlcur.isEnddoc()) {
if (!found) {
xmlcur.pop();
xmlcur.toEndToken();
xmlcur.insertElementWithText("schema-location", "http://inserted");
System.out.println("inserted");
}
}
xmlcur.toNextToken();
}
I tried to find a "quick" xquery way to do this since the XmlDocument has an execQuery method, but didn't find it very easy.
Does anyone have a better way than this? It seems a bit elaborate.
A:
How about an XPath based approach? I like this approach as the logic is super-easy to understand. The code is pretty much self-documenting.
If your xml document is available to you as an org.w3c.dom.Document object (as most parsers return), then you could do something like the following:
// get the list of customdata nodes
NodeList customDataNodeSet = findNodes(document, "//customdata" );
for (int i=0 ; i < customDataNodeSet.getLength() ; i++) {
Node customDataNode = customDataNodeSet.item( i );
// get the location nodes (if any) within this one customdata node
NodeList locationNodeSet = findNodes(customDataNode, "location" );
if (locationNodeSet.getLength() > 0) {
// replace
locationNodeSet.item( 0 ).setTextContent( "http://stackoverflow.com/" );
}
else {
// insert
Element newLocationNode = document.createElement( "location" );
newLocationNode.setTextContent("http://stackoverflow.com/" );
customDataNode.appendChild( newLocationNode );
}
}
And here's the helper method findNodes that does the XPath search.
private NodeList findNodes( Object obj, String xPathString )
throws XPathExpressionException {
XPath xPath = XPathFactory.newInstance().newXPath();
XPathExpression expression = xPath.compile( xPathString );
return (NodeList) expression.evaluate( obj, XPathConstants.NODESET );
}
A:
How about an object oriented approach? You could deserialise the XML to an object, set the location value on the object, then serialise back to XML.
XStream makes this really easy.
For example, you would define the main object, which in your case is CustomData (I'm using public fields to keep the example simple):
public class CustomData {
public String tag1;
public String tag2;
public String location;
public String tag3;
}
Then you initialize XStream:
XStream xstream = new XStream();
// if you need to output the main tag in lowercase, use the following line
xstream.alias("customdata", CustomData.class);
Now you can construct an object from XML, set the location field on the object and regenerate the XML:
CustomData d = (CustomData)xstream.fromXML(xml);
d.location = "http://stackoverflow.com";
xml = xstream.toXML(d);
How does that sound?
A:
If you don't know the schema the XStream solution probably isn't the way to go. At least XStream is on your radar now, might come in handy in the future!
A:
You should be able to do this with XQuery.
try
fn:replace(string,pattern,replace)

I am new to XQuery myself and I have found it to be a painful query language to work with, but it does work quite well once you get over the initial learning curve.
I do still wish there were an easier way that was as efficient.
| How to insert/replace XML tag in XmlDocument? | I have an XmlDocument in Java, created with the WebLogic XmlDocument parser.
I want to replace the content of a tag in this XmlDocument with my own data, or insert the tag if it isn't there.
<customdata>
<tag1 />
<tag2>mfkdslmlfkm</tag2>
<location />
<tag3 />
</customdata>
For example I want to insert a URL in the location tag:
<location>http://something</location>
but otherwise leave the XML as is.
Currently I use a XMLCursor:
XmlObject xmlobj = XmlObject.Factory.parse(a.getCustomData(), options);
XmlCursor xmlcur = xmlobj.newCursor();
while (xmlcur.hasNextToken()) {
boolean found = false;
if (xmlcur.isStart() && "schema-location".equals(xmlcur.getName().toString())) {
xmlcur.setTextValue("http://replaced");
System.out.println("replaced");
found = true;
} else if (xmlcur.isStart() && "customdata".equals(xmlcur.getName().toString())) {
xmlcur.push();
} else if (xmlcur.isEnddoc()) {
if (!found) {
xmlcur.pop();
xmlcur.toEndToken();
xmlcur.insertElementWithText("schema-location", "http://inserted");
System.out.println("inserted");
}
}
xmlcur.toNextToken();
}
I tried to find a "quick" xquery way to do this since the XmlDocument has an execQuery method, but didn't find it very easy.
Does anyone have a better way than this? It seems a bit elaborate.
| [
"How about an XPath based approach? I like this approach as the logic is super-easy to understand. The code is pretty much self-documenting.\nIf your xml document is available to you as an org.w3c.dom.Document object (as most parsers return), then you could do something like the following:\n// get the list of customdata nodes\nNodeList customDataNodeSet = findNodes(document, \"//customdata\" );\n\nfor (int i=0 ; i < customDataNodeSet.getLength() ; i++) {\n Node customDataNode = customDataNodeSet.item( i );\n\n // get the location nodes (if any) within this one customdata node\n NodeList locationNodeSet = findNodes(customDataNode, \"location\" );\n\n if (locationNodeSet.getLength() > 0) {\n // replace\n locationNodeSet.item( 0 ).setTextContent( \"http://stackoverflow.com/\" );\n }\n else {\n // insert\n Element newLocationNode = document.createElement( \"location\" );\n newLocationNode.setTextContent(\"http://stackoverflow.com/\" );\n customDataNode.appendChild( newLocationNode );\n }\n}\n\nAnd here's the helper method findNodes that does the XPath search.\nprivate NodeList findNodes( Object obj, String xPathString )\n throws XPathExpressionException {\n\n XPath xPath = XPathFactory.newInstance().newXPath();\n XPathExpression expression = xPath.compile( xPathString );\n return (NodeList) expression.evaluate( obj, XPathConstants.NODESET );\n}\n\n",
"How about an object oriented approach? You could deserialise the XML to an object, set the location value on the object, then serialise back to XML.\nXStream makes this really easy.\nFor example, you would define the main object, which in your case is CustomData (I'm using public fields to keep the example simple):\npublic class CustomData {\n public String tag1;\n public String tag2;\n public String location;\n public String tag3;\n}\n\nThen you initialize XStream:\nXStream xstream = new XStream();\n// if you need to output the main tag in lowercase, use the following line\nxstream.alias(\"customdata\", CustomData.class); \n\nNow you can construct an object from XML, set the location field on the object and regenerate the XML:\nCustomData d = (CustomData)xstream.fromXML(xml);\nd.location = \"http://stackoverflow.com\";\nxml = xstream.toXML(d);\n\nHow does that sound?\n",
"If you don't know the schema the XStream solution probably isn't the way to go. At least XStream is on your radar now, might come in handy in the future!\n",
"You should be able to do this with query\ntry \n fn:replace(string,pattern,replace)\n\nI am new to xquery myself and I have found it to be a painful query language to work with, but it does work quiet well once you get over the initial learning curve. \nI do still wish there was an easier way which was as efficient? \n"
] | [
5,
2,
1,
0
] | [] | [] | [
"java",
"xml",
"xquery"
] | stackoverflow_0000015899_java_xml_xquery.txt |
Q:
How do you use a variable in xsl when trying to select a node?
I would have thought this would be an easy one to Google, but I've been unsuccessful.
I want to assign a variable the value out of an attribute (easy so far) then use that variable to select another node based on the value of that attribute.
Example:
<xsl:variable name="myId" select="@id" />
<xsl value-of select="//Root/Some/Other/Path/Where[@id='{@myId}']/@Name" />
That does not work. If I replace the {@myId} with the value that is in the variable then it does find the right node, but doing it this way produces nothing. I'm sure I'm missing something, or perhaps there is a different way to do it.
The context is that there is related data under different top-level nodes that share the same id value so I need to get the related nodes in my template.
A:
Ok, I finally figured it out. Silly problem really, I simply needed to leave out the quotes and the braces. One of those times when I thought that I'd already tried that. :D Oh, and I mistyped @myId in the first example, the code was actually $myId.
<xsl:variable name="myId" select="@id" />
<xsl:value-of select="//Root/Some/Other/Path/Where[@id=$myId]/@Name" />
A:
You seem to have got confused with use of a variable (which is just $variable) and Attribute Value Templates, which allow you to put any XPath expression in some attributes, e.g.
<newElement Id="{@Id}"/>
They can obviously be combined, so you can include a variable in an Attribute Value Template, such as:
<newElement Id="{$myId}"/>
| How do you use a variable in xsl when trying to select a node? | I would have thought this would be an easy one to Google, but I've been unsuccessful.
I want to assign a variable the value out of an attribute (easy so far) then use that variable to select another node based on the value of that attribute.
Example:
<xsl:variable name="myId" select="@id" />
<xsl value-of select="//Root/Some/Other/Path/Where[@id='{@myId}']/@Name" />
That does not work. If I replace the {@myId} with the value that is in the variable then it does find the right node, but doing it this way produces nothing. I'm sure I'm missing something, or perhaps there is a different way to do it.
The context is that there is related data under different top-level nodes that share the same id value so I need to get the related nodes in my template.
| [
"Ok, I finally figured it out. Silly problem really, I simply needed to leave out the quotes and the braces. One of those times when I thought that I'd already tried that. :D Oh, and I mistyped @myId in the first example, the code was actually $myId.\n<xsl:variable name=\"myId\" select=\"@id\" />\n<xsl value-of select=\"//Root/Some/Other/Path/Where[@id=$myId]/@Name\" />\n\n",
"You seem to have got confused with use of a variable (which is just $variable) and Attribute Value Templates, which allow you to put any XPath expression in some attributes, e.g. \n<newElement Id=\"{@Id}\"/>\n\nThey can obviously be combined, so you can include a variable in an Attribute Value Template, such as:\n<newElement Id=\"{$myId}\"/>\n\n"
] | [
17,
5
] | [] | [] | [
"xslt"
] | stackoverflow_0000005374_xslt.txt |
Q:
Making one interface overwrite a method it inherits from another interface in PHP
Is there a way in PHP to overwrite a method declared by one interface in an interface extending that interface?
The Example:
I'm probably doing something wrong, but here is what I have:
interface iVendor{
public function __construct($vendors_no = null);
public function getName();
public function getVendors_no();
public function getZip();
public function getCountryCode();
public function setName($name);
public function setVendors_no($vendors_no);
public function setZip($zip);
public function setCountryCode($countryCode);
}
interface iShipper extends iVendor{
public function __construct($vendors_no = null, $shipment = null);
public function getTransitTime($shipment = null);
public function getTransitCost($shipment = null);
public function getCurrentShipment();
public function setCurrentShipment($shipment);
public function getStatus($shipment = null);
}
Normally in PHP, when you extend something, you can overwrite any method contained therein (right?). However, when one interface extends another, it won't let you. Unless I'm thinking about this wrong... When I implement the iShipper interface, I don't have to make the Shipper object extend the Vendor object (that implements the iVendor interface). I just say:
class FedEx implements iShipper{}
and make FedEx implement all of the methods from iVendor and iShipper. However, I need the __construct functions in iVendor and iShipper to be unique. I know I could take out the $shipment = null, but then it wouldn't be as convenient to create Shippers (by just passing in the vendors_no and the shipment while instantiating).
Anyone know how to make this work? My fallback is to have to set the shipment by calling $shipper->setShipment($shipment); on the Shipper after I instantiate it, but I'm hoping for a way to get around having to do that...
A little more explanation for the curious:
The FedEx Object has methods that go to the FedEx site (using cURL) and gets an estimate for the Shipment in question. I have a UPS Object, a BAXGlobal Object, a Conway Object, etc. Each one has COMPLETELY different methods for actually getting the shipping estimate, but all the system needs to know is that they are a "shipper" and that the methods listed in the interface are callable on them (so it can treat them all exactly the same, and loop through them in a "shippers" array calling getTransitX() to find the best shipper for a shipment).
Each "Shipper" is also a "Vendor" though, and is treated as such in other parts of the system (getting and putting in the DB, etc. Our data design is a pile of crap, so FedEx is stored right alongside companies like Dunder Mifflin in the "Vendors" table, which means it gets to have all the properties of every other Vendor, but needs the extra properties and methods supplied by iShipper).
A:
@cmcculloh Yeah, in Java you don't define constructors in Interfaces. This allows you to both extend interfaces and also have a class that implements multiple interfaces (both allowed, and very useful in many cases) without worrying about having to satisfy a particular constructor.
EDIT:
Here's my new model:
A. Each interface no longer has a constructor method.
B. All Shippers (UPS, FedEx, etc) now implement iShipper (which extends iVendor) and extend the abstract class Shipper (which has all common non-abstract methods for shippers defined in it, getName(), getZip() etc).
C. Each Shipper has its own unique __construct method which overwrites the abstract __construct($vendors_no = null, $shipment = null) method contained in Shipper (I don't remember why I'm allowing those to be optional now though. I'd have to go back through my documentation...).
So:
interface iVendor{
public function getName();
public function getVendors_no();
public function getZip();
public function getCountryCode();
public function setName($name);
public function setVendors_no($vendors_no);
public function setZip($zip);
public function setCountryCode($countryCode);
}
interface iShipper extends iVendor{
public function getTransitTime($shipment = null);
public function getTransitCost($shipment = null);
public function getCurrentShipment();
public function setCurrentShipment($shipment);
public function getStatus($shipment = null);
}
abstract class Shipper implements iShipper{
abstract public function __construct($vendors_no = null, $shipment = null);
//a bunch of non-abstract common methods...
}
class FedEx extends Shipper implements iShipper{
public function __construct($vendors_no = null, $shipment = null){
//a bunch of setup code...
}
//all my FedEx specific methods...
}
Thanks for the help!
ps. since I have now added this to "your" answer, if there is something about it you don't like/think should be different, feel free to change it...
A:
You could drop the constructor from the interfaces and just put one in each individual class. Then each class has its own __construct, which is probably the same depending on whether it is a shipper or a vendor. If you want to have those constructors defined only once, I don't think you want to go down that route.
What I think you want to do is make an abstract class that implements vendor, and one that implements shipper. There you could define the constructors differently.
abstract class Vendor implements iVendor {
public function __construct() {
whatever();
}
}
abstract class Shipper implements iShipper {
public function __construct() {
something();
}
}
| Making one interface overwrite a method it inherits from another interface in PHP | Is there a way in PHP to overwrite a method declared by one interface in an interface extending that interface?
The Example:
I'm probably doing something wrong, but here is what I have:
interface iVendor{
public function __construct($vendors_no = null);
public function getName();
public function getVendors_no();
public function getZip();
public function getCountryCode();
public function setName($name);
public function setVendors_no($vendors_no);
public function setZip($zip);
public function setCountryCode($countryCode);
}
interface iShipper extends iVendor{
public function __construct($vendors_no = null, $shipment = null);
public function getTransitTime($shipment = null);
public function getTransitCost($shipment = null);
public function getCurrentShipment();
public function setCurrentShipment($shipment);
public function getStatus($shipment = null);
}
Normally in PHP, when you extend something, you can overwrite any method contained therein (right?). However, when one interface extends another, it won't let you. Unless I'm thinking about this wrong... When I implement the iShipper interface, I don't have to make the Shipper object extend the Vendor object (that implements the iVendor interface). I just say:
class FedEx implements iShipper{}
and make FedEx implement all of the methods from iVendor and iShipper. However, I need the __construct functions in iVendor and iShipper to be unique. I know I could take out the $shipment = null, but then it wouldn't be as convenient to create Shippers (by just passing in the vendors_no and the shipment while instantiating).
Anyone know how to make this work? My fallback is to have to set the shipment by calling $shipper->setShipment($shipment); on the Shipper after I instantiate it, but I'm hoping for a way to get around having to do that...
A little more explanation for the curious:
The FedEx Object has methods that go to the FedEx site (using cURL) and gets an estimate for the Shipment in question. I have a UPS Object, a BAXGlobal Object, a Conway Object, etc. Each one has COMPLETELY different methods for actually getting the shipping estimate, but all the system needs to know is that they are a "shipper" and that the methods listed in the interface are callable on them (so it can treat them all exactly the same, and loop through them in a "shippers" array calling getTransitX() to find the best shipper for a shipment).
Each "Shipper" is also a "Vendor" though, and is treated as such in other parts of the system (getting and putting in the DB, etc. Our data design is a pile of crap, so FedEx is stored right alongside companies like Dunder Mifflin in the "Vendors" table, which means it gets to have all the properties of every other Vendor, but needs the extra properties and methods supplied by iShipper).
| [
"@cmcculloh Yeah, in Java you don't define constructors in Interfaces. This allows you to both extend interfaces and also have a class that implements multiple interfaces (both allowed, and very useful in many cases) without worrying about having to satisfy a particular constructor.\nEDIT:\nHere's my new model:\nA. Each interface no longer has a constructor method.\nB. All Shippers (UPS, FedEx, etc) now implement iShipper (which extends iVendor) and extend the abstract class Shipper (which has all common non-abstract methods for shippers defined in it, getName(), getZip() etc).\nC. Each Shipper has it's own unique _construct method which overwrites the abstract __construct($vendors_no = null, $shipment = null) method contained in Shipper (I don't remember why I'm allowing those to be optional now though. I'd have to go back through my documentation...).\nSo:\ninterface iVendor{\n public function getName();\n public function getVendors_no();\n public function getZip();\n public function getCountryCode();\n public function setName($name);\n public function setVendors_no($vendors_no);\n public function setZip($zip);\n public function setCountryCode($countryCode);\n}\n\ninterface iShipper extends iVendor{\n public function getTransitTime($shipment = null);\n public function getTransitCost($shipment = null);\n public function getCurrentShipment();\n public function setCurrentShipment($shipment);\n public function getStatus($shipment = null);\n}\n\nabstract class Shipper implements iShipper{ \n abstract public function __construct($vendors_no = null, $shipment = null); \n //a bunch of non-abstract common methods... \n}\n\nclass FedEx extends Shipper implements iShipper{ \n public function __construct($vendors_no = null, $shipment = null){\n //a bunch of setup code...\n }\n //all my FedEx specific methods...\n}\n\nThanks for the help!\nps. since I have now added this to \"your\" answer, if there is something about it you don't like/think should be different, feel free to change it...\n",
"You could drop off the constructor and just put them in each individual class. Then what you have is each class has its own __construct, which is probably the same depending on if it is a shipper or vendor. If you want to only have those constructs defined once I don't think you want to go down that route. \nWhat I think you want to do is make an abstract class that implements vendor, and one that implements shipper. There you could define the constructors differently. \nabstract class Vendor implements iVendor {\n public function __construct() {\n whatever();\n }\n}\n\nabstract class Shipper implements iShipper {\n public function __construct() {\n something();\n }\n}\n\n"
] | [
6,
0
] | [] | [] | [
"extends",
"interface",
"oop",
"php"
] | stackoverflow_0000016155_extends_interface_oop_php.txt |
Q:
Easy installation method for windows/ Batch Reference needed?
I have a bunch of files that I need to be able to transport and install quickly. My current method for doing so is moving a flash drive with a readme file of where stuff goes whenever I need to move stuff, which is rather inelegant and cumbersome.
My idea for a solution would be to write up a quick script to move files around that I could just click on. I've done some bash scripting before but batch scripting is a little odd to me. Does anyone have a good online reference guide I could use?
An alternative solution I could accept would be a program that makes an installer for you, though I'm a bit against that as I would lose a lot of control. However, I'd be alright with it if it was extremely simple.
A:
Sounds like the robocopy tool is exactly what you need.
Very powerful replication command-line tool.
MS TechNet reference,
Wikipedia article about robocopy,
Full command switch guide,
Batch scripting guide.
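For example, a single robocopy call like this mirrors a whole folder tree from the flash drive onto the target machine (the paths are placeholders for wherever your files actually live):
robocopy E:\payload C:\target /E /Z /R:3 /W:5
Here /E copies subdirectories including empty ones, /Z uses restartable mode, and /R:3 /W:5 retries a failed copy three times with a five-second wait between attempts.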
A:
I like to use VBScript for this kind of thing. The VBS engine is on every recent Windows machine and the language is a little more like real programming than a batch script.
Also, if your installer grows to require WMI functions too, this becomes a piece of cake.
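A rough sketch of that approach (the paths and the overwrite flag are purely illustrative):
Set fso = CreateObject("Scripting.FileSystemObject")
If Not fso.FolderExists("C:\Target") Then fso.CreateFolder "C:\Target"
fso.CopyFile "E:\Payload\*.*", "C:\Target\", True ' True = overwrite existing files
WScript.Echo "Files installed."
Save it as install.vbs and a double-click does the rest.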
| Easy installation method for windows/ Batch Reference needed? | I have a bunch of files that I need to be able to transport and install quickly. My current method for doing so is moving a flash drive with a readme file of where stuff goes whenever I need to move stuff, which is rather inelegant and cumbersome.
My idea for a solution would be to write up a quick script to move files around that I could just click on. I've done some bash scripting before but batch scripting is a little odd to me. Does anyone have a good online reference guide I could use?
An alternative solution I could accept would be a program that makes an installer for you, though I'm a bit against that as I would lose a lot of control. However, I'd be alright with it if it was extremely simple.
| [
"Sounds like robocopy tool is exactly what you need.\nVery powerful replication command-line tool.\n\nMS TechNet reference,\nWikipedia article about robocopy,\nFull command switch guide,\nBatch scripting guide.\n\n",
"I like to use VBscript for this kind of thing. The VBS engine is on every recent windows machine and the language is a little more like real programming than a batch script.\nAlso, if your installer grows to require WMI functions too, this becomes a piece of cake.\n"
] | [
2,
0
] | [] | [] | [
"batch_file",
"installation",
"windows"
] | stackoverflow_0000016402_batch_file_installation_windows.txt |
Q:
Select Query on 2 tables, on different database servers
I am trying to generate a report by querying 2 databases (Sybase) in classic ASP.
I have created 2 connection strings:
connA for databaseA
connB for databaseB
Both databases are present on the same server (don't know if this matters)
Queries:
q1 = SELECT column1 INTO #temp FROM databaseA..table1 WHERE xyz="A"
q2 = SELECT columnA,columnB,...,columnZ FROM table2 a, #temp b WHERE b.column1=a.columnB
followed by:
response.Write(rstsql)
set rstSQL = CreateObject("ADODB.Recordset")
rstSQL.Open q1, connA
rstSQL.Open q2, connB
When I try to open up this page in a browser, I get error message:
Microsoft OLE DB Provider for ODBC Drivers error '80040e37'
[DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]#temp not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
Could anyone please help me understand what the problem is and help me fix it?
Thanks.
A:
your temp table is out of scope, it is only 'alive' during the first connection and will not be available in the 2nd connection
Just move all of it in one block of code and execute it inside one connection
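A minimal sketch of that, assuming the login behind connA can see both databases (qualify table2 as databaseB..table2 in q2 if it lives in the other database):
set conn = CreateObject("ADODB.Connection")
conn.Open connA
conn.Execute q1 ' creates and fills #temp
set rstSQL = conn.Execute(q2) ' #temp is still alive on this connection
' ...read rstSQL here...
conn.Close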
A:
With both queries, it looks like you are trying to insert into #temp. #temp is located on one of the databases (for argument's sake, databaseA). So when you try to insert into #temp from databaseB, it reports that it does not exist.
Try changing it from Into #temp From to Into databaseA.dbo.#temp From in both statements.
Also, make sure that the connection strings have permissions on the other DB, otherwise this will not work.
Update: relating to the temp table going out of scope - if you have one connection string that has permissions on both databases, then you could use this for both queries (while keeping the connection alive). While querying the table in the other DB, be sure to use [DBName].[Owner].[TableName] format when referring to the table.
A:
#temp is out of scope in q2.
All your work can be done in one query:
SELECT a.columnA, a.columnB,..., a.columnZ
FROM table2 a
INNER JOIN (SELECT databaseA..table1.column1
FROM databaseA..table1
WHERE databaseA..table1.xyz = 'A') b
ON a.columnB = b.column1
| Select Query on 2 tables, on different database servers | I am trying to generate a report by querying 2 databases (Sybase) in classic ASP.
I have created 2 connection strings:
connA for databaseA
connB for databaseB
Both databases are present on the same server (don't know if this matters)
Queries:
q1 = SELECT column1 INTO #temp FROM databaseA..table1 WHERE xyz="A"
q2 = SELECT columnA,columnB,...,columnZ FROM table2 a, #temp b WHERE b.column1=a.columnB
followed by:
response.Write(rstsql)
set rstSQL = CreateObject("ADODB.Recordset")
rstSQL.Open q1, connA
rstSQL.Open q2, connB
When I try to open up this page in a browser, I get error message:
Microsoft OLE DB Provider for ODBC Drivers error '80040e37'
[DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]#temp not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
Could anyone please help me understand what the problem is and help me fix it?
Thanks.
| [
"your temp table is out of scope, it is only 'alive' during the first connection and will not be available in the 2nd connection\nJust move all of it in one block of code and execute it inside one conection\n",
"With both queries, it looks like you are trying to insert into #temp. #temp is located on one of the databases (for arguments sake, databaseA). So when you try to insert into #temp from databaseB, it reports that it does not exist.\nTry changing it from Into #temp From to Into databaseA.dbo.#temp From in both statements. \nAlso, make sure that the connection strings have permissions on the other DB, otherwise this will not work.\nUpdate: relating to the temp table going out of scope - if you have one connection string that has permissions on both databases, then you could use this for both queries (while keeping the connection alive). While querying the table in the other DB, be sure to use [DBName].[Owner].[TableName] format when referring to the table.\n",
"temp is out of scope in q2.\nAll your work can be done in one query:\n\nSELECT a.columnA, a.columnB,..., a.columnZ\nFROM table2 a\nINNER JOIN (SELECT databaseA..table1.column1 \n FROM databaseA..table1\n WHERE databaseA..table1.xyz = 'A') b\n ON a.columnB = b.column1\n\n"
] | [
4,
4,
2
] | [] | [] | [
"asp_classic",
"sql",
"sybase",
"vbscript"
] | stackoverflow_0000016447_asp_classic_sql_sybase_vbscript.txt |
Q:
Removing N items from a list conditionally
I was writing some ASP.NET control when I came to the scenario where I needed to remove items from a list, only when they matched a certain condition.
The RemoveAll method of the generic List class does a good job, but removes all items that match the condition, specified by the predicate.
What if I want to only remove a certain number of items specifying the condition? What do you think is the best way to do this?
A:
If you want to specify both a limit for number of items to remove and a condition to select the items to remove, you can use this approach:
int limit = 30; // Suppose you want to remove 30 items at most
list.RemoveAll(item => ShouldIRemoveThis(item) && limit-- > 0);
A:
@buyutec
Instead of
list.RemoveAll(item => ShouldIRemoveThis(item));
you can use:
list.RemoveAll(ShouldIRemoveThis);
The lambda has the same signature as the method, so they are equivalent and you can just pass the method directly.
A:
Unless the method provides a "limit" parameter (which it doesn't) your best option is to go with a simple loop that removes the items that match, breaking when your incremented "match counter" hits your limit.
That's pretty much how the internal function works anyway, but in a more optimized way.
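A sketch of that loop (list, limit and ShouldIRemoveThis are placeholders); it walks backwards so RemoveAt doesn't shift the part of the list still to be visited:
int removed = 0;
for (int i = list.Count - 1; i >= 0 && removed < limit; i--)
{
    if (ShouldIRemoveThis(list[i]))
    {
        list.RemoveAt(i);
        removed++;
    }
}
Walk forwards instead (only advancing i when nothing was removed) if it's specifically the first N matches you want gone.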
A:
In framework 3.5, RemoveAll method takes a predicate as a parameter. So you may use
list.RemoveAll(item => ShouldIRemoveThis(item));
where ShouldIRemoveThis is a method that returns a boolean indicating whether the item must be removed from the list.
A:
Can you use LINQ? If so, you can just use the .Take() method and specify how many records you want (maybe as total - N).
A:
Anonymous delegates are useful here. A simple example to remove the first limit even numbers from a list.
List<int> myList = new List<int>();
for (int i = 0; i < 20; i++) myList.Add(i);
int total = 0;
int limit = 5;
myList.RemoveAll(delegate(int i) { if (i % 2 == 0 && total < limit) { total++; return true; } return false; });
myList.ForEach(i => Console.Write(i + " "));
Gives 1 3 5 7 9 10 11 12 13 14 15 16 17 18 19, as we want. Easy enough to wrap that up in a function, suitable for use as a lambda expression, taking the real test as a parameter.
| Removing N items from a list conditionally | I was writing some ASP.NET control when I came to the scenario where I needed to remove items from a list, only when they matched a certain condition.
The RemoveAll method of the generic List class does a good job, but removes all items that match the condition, specified by the predicate.
What if I want to only remove a certain number of items specifying the condition? What do you think is the best way to do this?
| [
"If you want to specify both a limit for number of items to remove and a condition to select the items to remove, you can use this approach:\nint limit = 30; // Suppose you want to remove 30 items at most\nlist.RemoveAll(item => ShouldIRemoveThis(item) && limit-- > 0);\n\n",
"@buyutec\nInstead of\nlist.RemoveAll(item => ShouldIRemoveThis(item));\n\nyou can use:\nlist.RemoveAll(ShouldIRemoveThis);\n\nThe lambda has the same signature as the method, so they are equivalent so you can just pass the method directly.\n",
"Unless the method provides a \"limit\" parameter (which it doesn't) your best option is to go with a simple loop that removes the items that match, breaking when your incremented \"match counter\" hits your limit.\nThat's pretty much how the internal function works anyway, but in a more optimized way.\n",
"In framework 3.5, RemoveAll method takes a predicate as a parameter. So you may use\nlist.RemoveAll(item => ShouldIRemoveThis(item));\n\nwhere ShouldIRemoveThis is a method that returns a boolean indicating whether the item must be removed from the list.\n",
"Can you use LINQ? If so, you can just use the .Take() method and specify how many records you want (maybe as total - N).\n",
"Anonymous delegates are useful here. A simple example to remove the first limit even numbers from a list.\nList<int> myList = new List<int>;\nfor (int i = 0; i < 20; i++) myList.add(i);\n\nint total = 0;\nint limit = 5;\nmyList.RemoveAll(delegate(int i) { if (i % 2 == 0 && total < limit) { total++; return true; } return false; });\n\nmyList.ForEach(i => Console.Write(i + \" \"));\n\nGives 1 3 5 7 9 10 11 12 13 14 15 16 17 18 19, as we want. Easy enough to wrap that up in a function, suitable for use as a lambda expression, taking the real test as a parameter.\n"
] | [
8,
5,
1,
1,
0,
0
] | [] | [] | [
".net"
] | stackoverflow_0000016460_.net.txt |
Q:
Can using lambdas as event handlers cause a memory leak?
Say we have the following method:
private MyObject foo = new MyObject();
// and later in the class
public void PotentialMemoryLeaker(){
int firedCount = 0;
foo.AnEvent += (o,e) => { firedCount++;Console.Write(firedCount);};
foo.MethodThatFiresAnEvent();
}
If the class with this method is instantiated and the PotentialMemoryLeaker method is called multiple times, do we leak memory?
Is there any way to unhook that lambda event handler after we're done calling MethodThatFiresAnEvent?
A:
Yes, save it to a variable and unhook it.
DelegateType evt = (o, e) => { firedCount++; Console.Write(firedCount); };
foo.AnEvent += evt;
foo.MethodThatFiresAnEvent();
foo.AnEvent -= evt;
And yes, if you don't, you'll leak memory, as you'll hook up a new delegate object each time. You'll also notice this because each time you call this method, it'll dump to the console an increasing number of lines (not just an increasing number, but for one call to MethodThatFiresAnEvent it'll dump any number of items, once for each hooked up anonymous method).
A:
You won't just leak memory, you will also get your lambda called multiple times. Each call of 'PotentialMemoryLeaker' will add another copy of the lambda to the event list, and every copy will be called when 'AnEvent' is fired.
A:
Well you can extend what has been done here to make delegates safer to use (no memory leaks)
A:
Your example just compiles to a compiler-named private inner class (with field firedCount and a compiler-named method). Each call to PotentialMemoryLeaker creates a new instance of the closure class, to which foo keeps a reference by way of a delegate to the single method.
If you don't reference the whole object that owns PotentialMemoryLeaker, then that will all be garbage collected. Otherwise, you can either set foo to null or empty foo's event handler list by writing this:
foreach (EventHandler handler in AnEvent.GetInvocationList()) AnEvent -= handler; // assuming AnEvent is declared as an EventHandler
Of course, you'd need access to the MyObject class's private members.
A:
Yes in the same way that normal event handlers can cause leaks. Because the lambda is actually changed to:
someobject.SomeEvent += () => ...;
someobject.SomeEvent += delegate () {
...
};
// unhook
Action del = () => ...;
someobject.SomeEvent += del;
someobject.SomeEvent -= del;
So basically it is just short hand for what we have been using in 2.0 all these years.
| Can using lambdas as event handlers cause a memory leak? | Say we have the following method:
private MyObject foo = new MyObject();
// and later in the class
public void PotentialMemoryLeaker(){
int firedCount = 0;
foo.AnEvent += (o,e) => { firedCount++;Console.Write(firedCount);};
foo.MethodThatFiresAnEvent();
}
If the class with this method is instantiated and the PotentialMemoryLeaker method is called multiple times, do we leak memory?
Is there any way to unhook that lambda event handler after we're done calling MethodThatFiresAnEvent?
| [
"Yes, save it to a variable and unhook it.\nDelegateType evt = (o, e) => { firedCount++; Console.Write(firedCount); };\nfoo.AnEvent += evt;\nfoo.MethodThatFiresAnEvent();\nfoo.AnEvent -= evt;\n\nAnd yes, if you don't, you'll leak memory, as you'll hook up a new delegate object each time. You'll also notice this because each time you call this method, it'll dump to the console an increasing number of lines (not just an increasing number, but for one call to MethodThatFiresAnEvent it'll dump any number of items, once for each hooked up anonymous method).\n",
"You wont just leak memory, you will also get your lambda called multiple times. Each call of 'PotentialMemoryLeaker' will add another copy of the lambda to the event list, and every copy will be called when 'AnEvent' is fired.\n",
"Well you can extend what has been done here to make delegates safer to use (no memory leaks)\n",
"Your example just compiles to a compiler-named private inner class (with field firedCount and a compiler-named method). Each call to PotentialMemoryLeaker creates a new instance of the closure class to which where foo keeps a reference by way of a delegate to the single method.\nIf you don't reference the whole object that owns PotentialMemoryLeaker, then that will all be garbage collected. Otherwise, you can either set foo to null or empty foo's event handler list by writing this:\nforeach (var handler in AnEvent.GetInvocationList()) AnEvent -= handler;\n\nOf course, you'd need access to the MyObject class's private members. \n",
"Yes in the same way that normal event handlers can cause leaks. Because the lambda is actually changed to:\nsomeobject.SomeEvent += () => ...;\nsomeobject.SomeEvent += delegate () {\n ...\n};\n\n// unhook\nAction del = () => ...;\nsomeobject.SomeEvent += del;\nsomeobject.SomeEvent -= del;\n\nSo basically it is just short hand for what we have been using in 2.0 all these years.\n"
] | [
16,
4,
3,
2,
0
] | [] | [] | [
"event_handling",
"lambda",
"memory_leaks"
] | stackoverflow_0000016473_event_handling_lambda_memory_leaks.txt |
Q:
Image size for BannerBitmap property in Windows Installer
I'm working on a quick setup program in Visual Studio and wanted to change the banner bitmap. Anyone know off-hand what the ideal (or the required) dimensions are for the new banner image? Thanks.
A:
Found it on MSDN docs for BannerBitmap Property:
For best results, you should use a bitmap with dimensions of 500 pixels wide by 70 pixels high.
| Image size for BannerBitmap property in Windows Installer | I'm working on a quick setup program in Visual Studio and wanted to change the banner bitmap. Anyone know off-hand what the ideal (or the required) dimensions are for the new banner image? Thanks.
| [
"Found it on MSDN docs for BannerBitmap Property:\n\nFor best results, you should use a bitmap with dimensions of 500 pixels wide by 70 pixels high.\n\n"
] | [
76
] | [] | [] | [
"windows_installer"
] | stackoverflow_0000016517_windows_installer.txt |
Q:
Convince Firefox to send an If-Modified-Since header over HTTPS
How can I convince Firefox (3.0.1, if it matters) to send an If-Modified-Since header in an HTTPS request? It sends the header if the request uses plain HTTP and my server dutifully honors it. But when I request the same resource from the same server using HTTPS instead (i.e., simply changing the http:// in the URL to https://) then Firefox does not send an If-Modified-Since header at all. Is this behavior mandated by the SSL spec or something?
Here are some example HTTP and HTTPS request/response pairs, pulled using the Live HTTP Headers Firefox extension, with some differences in bold:
HTTP request/response:
http://myserver.com:30000/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30000
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
If-Modified-Since: Tue, 19 Aug 2008 15:57:30 GMT
If-None-Match: "a0501d1-300a-454d22526ae80"-gzip
Cache-Control: max-age=0
HTTP/1.x 304 Not Modified
Date: Tue, 19 Aug 2008 15:59:23 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Connection: Keep-Alive
Keep-Alive: timeout=5, max=99
Etag: "a0501d1-300a-454d22526ae80"-gzip
HTTPS request/response:
https://myserver.com:30001/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30001
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
HTTP/1.x 200 OK
Date: Tue, 19 Aug 2008 16:00:14 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Last-Modified: Tue, 19 Aug 2008 15:57:30 GMT
Etag: "a0501d1-300a-454d22526ae80"-gzip
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Length: 3766
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/javascript
UPDATE: Setting browser.cache.disk_cache_ssl to true did the trick (which is odd because, as Nickolay points out, there's still the memory cache). Adding a "Cache-control: public" header to the response also worked. Thanks!
A:
HTTPS requests are not cached so sending an If-Modified-Since doesn't make any sense. The not caching is a security precaution.
The not caching on disk is a security precaution, but it seems it indeed affects the If-Modified-Since behavior (glancing over the code).
Try setting the Firefox preference (in about:config) browser.cache.disk_cache_ssl to true. If that helps, try sending Cache-Control: public header in your response.
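If the files really are served by the Apache shown in those response headers, one way to add that header is mod_headers — a sketch, assuming the module is loaded (adjust the file pattern to taste):
<FilesMatch "\.(js|css)$">
    Header set Cache-Control "public"
</FilesMatch>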
UPDATE: Firefox behavior was changed for Gecko 2.0 (Firefox 4) -- HTTPS content is now cached.
A:
HTTPS requests are not cached so sending an If-Modified-Since doesn't make any sense. The not caching is a security precaution.
| Convince Firefox to send an If-Modified-Since header over HTTPS | How can I convince Firefox (3.0.1, if it matters) to send an If-Modified-Since header in an HTTPS request? It sends the header if the request uses plain HTTP and my server dutifully honors it. But when I request the same resource from the same server using HTTPS instead (i.e., simply changing the http:// in the URL to https://) then Firefox does not send an If-Modified-Since header at all. Is this behavior mandated by the SSL spec or something?
Here are some example HTTP and HTTPS request/response pairs, pulled using the Live HTTP Headers Firefox extension, with some differences in bold:
HTTP request/response:
http://myserver.com:30000/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30000
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
If-Modified-Since: Tue, 19 Aug 2008 15:57:30 GMT
If-None-Match: "a0501d1-300a-454d22526ae80"-gzip
Cache-Control: max-age=0
HTTP/1.x 304 Not Modified
Date: Tue, 19 Aug 2008 15:59:23 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Connection: Keep-Alive
Keep-Alive: timeout=5, max=99
Etag: "a0501d1-300a-454d22526ae80"-gzip
HTTPS request/response:
https://myserver.com:30001/scripts/site.js
GET /scripts/site.js HTTP/1.1
Host: myserver.com:30001
User-Agent: Mozilla/5.0 (...) Gecko/2008070206 Firefox/3.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
HTTP/1.x 200 OK
Date: Tue, 19 Aug 2008 16:00:14 GMT
Server: Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.8
Last-Modified: Tue, 19 Aug 2008 15:57:30 GMT
Etag: "a0501d1-300a-454d22526ae80"-gzip
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Length: 3766
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/javascript
UPDATE: Setting browser.cache.disk_cache_ssl to true did the trick (which is odd because, as Nickolay points out, there's still the memory cache). Adding a "Cache-control: public" header to the response also worked. Thanks!
| [
"\nHTTPS requests are not cached so sending an If-Modified-Since doesn't make any sense. The not caching is a security precaution.\n\nThe not caching on disk is a security pre-caution, but it seems it indeed affects the If-Modified-Since behavior (glancing over the code).\nTry setting the Firefox preference (in about:config) browser.cache.disk_cache_ssl to true. If that helps, try sending Cache-Control: public header in your response.\n\nUPDATE: Firefox behavior was changed for Gecko 2.0 (Firefox 4) -- HTTPS content is now cached.\n",
"HTTPS requests are not cached so sending an If-Modified-Since doesn't make any sense. The not caching is a security precaution.\n"
] | [
14,
2
] | [] | [] | [
"firefox",
"https"
] | stackoverflow_0000016483_firefox_https.txt |
Q:
Communication between Javascript and the server
I've been developing a "Form Builder" in Javascript, and coming up to the part where I'll be sending the spec for the form back to the server to be stored. The builder maintains an internal data structure that represents the fields, label, options (for select/checkbox/radio), mandatory status, and the general sorting order of the fields.
When I want to send this structure back to the server, which format should I communicate it with?
Also, when restoring a server-saved form back into my Javascript builder, should I load in the data in the same format it sent it with, or should I rebuild the fields using the builder's createField() functions?
A:
When making and processing requests with JavaScript, I live and breathe JSON. It's easy to build on the client side and there are tons of parsers for the server side, so both ends get to use their native tongue as much as possible.
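A quick sketch of what that could look like for a form spec (the names here are invented for illustration, and in 2008-era browsers JSON.stringify/JSON.parse come from Douglas Crockford's json2.js rather than being built in):
var spec = {
    fields: [
        { type: "select", label: "Color", options: ["Red", "Blue"], mandatory: true, order: 1 }
    ]
};
var payload = JSON.stringify(spec); // POST this string to the server
// and later, to restore a saved form:
var restored = JSON.parse(serverResponseText);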
A:
This seems like a perfect scenario for using JSON as a serialization format for the server. If you study a few examples it is not too difficult to understand.
A:
Best practice dictates that if you are not planning to use the stored data for anything other than recreating the form, the best method is to send it back in some sort of native format (as mentioned above). With this, you can just load the data back in, and it requires the least processing of any method.
A:
I'd implement some sort of custom text serialization and transmit plain text. As you say, you can rebuild the information by doing the reverse process.
A:
There's a lot of people who will push JSON. It's a lot lighter weight than XML. Personally, I find XML to be a little more standard though. You'll have trouble finding a server side technology that doesn't support XML. And JavaScript supports it just fine also.
You could also go a completely different route. Since you'll only be sending information back when the form design is complete, you could do it with a form submit of a bunch of hidden fields. Create your hidden fields using JavaScript and set the values as needed.
This would probably be the best solution if you didn't want to deal with JSON/XML at all.
| Communication between Javascript and the server | I've been developing a "Form Builder" in Javascript, and coming up to the part where I'll be sending the spec for the form back to the server to be stored. The builder maintains an internal data structure that represents the fields, label, options (for select/checkbox/radio), mandatory status, and the general sorting order of the fields.
When I want to send this structure back to the server, which format should I communicate it with?
Also, when restoring a server-saved form back into my Javascript builder, should I load in the data in the same format it sent it with, or should I rebuild the fields using the builder's createField() functions?
| [
"When making and processing requests with JavaScript, I live and breath JSON. It's easy to build on the client side and there are tons of parsers for the server side, so both ends get to use their native tongue as much as possible. \n",
"This seems like a perfect scenario for using JSON as a serialization format for the server. If you study a few examples it is not too difficult to understand.\n",
"Best practice on this dictates that if you are not planning to use the stored data for anything other than recreating the form then the best method is to send it back in some sort of native format (As mentioned above) With this then you can just load the data back in and requires the least processing of any method.\n",
"I'd implement some sort of custom text serialization and transmit plain text. As you say, you can rebuild the information doing the reversed process.\n",
"There's a lot of people who will push JSON. It's a lot lighter weight than XML. Personally, I find XML to be a little more standard though. You'll have trouble finding a server side technology that doesn't support XML. And JavaScript supports it just fine also.\nYou could also go a completely different route. Since you'll only be sending information back when the form design is complete, you could do it with a form submit, for a bunch of hidden fields. Create your hidden fields using JavaScript and set the values as needed.\nThis would probably be the best solution if didn't want to deal with JSON/XML at all.\n"
] | [
5,
3,
1,
0,
0
] | [] | [] | [
"javascript",
"server"
] | stackoverflow_0000016529_javascript_server.txt |
Q:
ArgumentNullException for Integer
In .NET, is it more appropriate to throw an argument null exception for an Integer if the value is Integer.MinValue or Integer = 0 (assuming that 0 is not a valid value)?
A:
Throwing an ArgumentNullException isn't appropriate unless the argument is actually null. Throw an ArgumentOutOfRangeException instead (preferably with a message informing the user what values of int are actually acceptable).
ArgumentOutOfRangeException is thrown when a method is invoked and at least one of the arguments passed to the method is not a null reference (Nothing in Visual Basic) and does not contain a valid value.
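A minimal sketch of such a guard clause (the method and parameter names are invented):
public void SetQuantity(int quantity)
{
    if (quantity <= 0) // rejects 0, negatives and int.MinValue alike
        throw new ArgumentOutOfRangeException("quantity", quantity,
            "quantity must be a positive integer.");
    // ...
}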
A:
Well, I think if you are using an int, then it would be better to say InvalidArgumentException.
Alternatively, you could make your INTs nullable by declaring them as int? (especially if you expect null values for your int.)
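A tiny sketch of the nullable variant, where null genuinely means "not supplied":
int? quantity = null;
if (!quantity.HasValue)
    throw new ArgumentNullException("quantity");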
A:
If the argument is not null, don't throw an ArgumentNullException. It would probably be more reasonable to throw an ArgumentException, explained here.
edit: ArgumentOutOfRangeException is probably even better, as suggested above by Avenger546.
| ArgumentNullException for Integer | In .NET, is it more appropriate to throw an argument null exception for an Integer if the value is Integer.MinValue or Integer = 0 (assuming that 0 is not a valid value)?
| [
"Throwing an ArgumentNullException isn't appropriate unless the argument is actually null. Throw an ArgumentOutOfRangeException instead (preferably with a message informing the user what values of int are actually acceptable).\n\nArgumentOutOfRangeException is thrown when a method is invoked and at least one of the arguments passed to the method is not a null reference (Nothing in Visual Basic) and does not contain a valid value.\n\n",
"Well, I think if you are using an int, then it would be better to say InvalidArgumentException.\nAlternatively, you could make your INTs nullable by declaring them as int? (especially if you expect null values for your int.)\n",
"If the argument is not null, don't throw an ArgumentNullException. It would probably be more reasonable to throw an ArgumentException, explained here.\nedit: ArgumentOutOfRangeException is probably even better, as suggested above by Avenger546.\n"
] | [
8,
2,
0
] | [] | [] | [
".net"
] | stackoverflow_0000016616_.net.txt |
Q:
ICE Faces fileInput file path and file name properties
I'd like to utilize an ICE Faces fileInput control to fill in the file path & file name in an input field on a web page, based on the file that the user selects. How can I capture these properties without actually performing any file transfer operations?
A:
I believe that the real answer is that you can't. The file path won't be sent by the browser for security reasons. The file name will be sent, however I don't believe it gets sent without an actual upload.
The closest you could come, afaik, would be to forcibly kill the connection just when the upload starts. That would net you the filename with little actual transferred data, but it doesn't sound like it would be useful to you.
Alternatively, a signed Java applet might get you closer to a solution that you'd want.
| ICE Faces fileInput file path and file name properties | I'd like to utilize an ICE Faces fileInput control to fill in the file path & file name in an input field on a web page, based on the file that the user selects. How can I capture these properties without actually performing any file transfer operations?
| [
"I believe that the real answer is that you can't. The file path won't be sent by the browser for security reasons. The file name will be sent, however I don't believe it gets sent without an actual upload.\nThe closest you could come, afaik, would be to forcibly kill the connection just when the upload starts. That would net you the filename with little actual transferred data, but it doesn't sound like it would be useful to you.\nAlternatively, a signed Java applet might get you closer to a solution that you'd want.\n"
] | [
0
] | [] | [] | [
"ajax",
"icefaces",
"java",
"jsf"
] | stackoverflow_0000009361_ajax_icefaces_java_jsf.txt |
Q:
GUI system development resources?
Could someone recommend any good resources for creating Graphics User Interfaces, preferably in C/C++?
Currently my biggest influence is 3DBuzz.com's C++/OpenGL VTMs (Video Training Modules). While they are very good, they cover a large area of graphics programming, so only skim the surface of GUI programming.
This question does relate to "How do I make a GUI?", where there is also a rough outline of my current structure.
Any response would be appreciated.
Edit:
I've just read some of the answers and spotted that I've missed some points. I had to type the question twice as I caught the wrong button and lost it.
I missed two important points, first: This will be used cross platform including homebrew on a Sony PSP. Second: I want to create a GUI system not use an existing one.
Edit 2: I think some of you are missing the point: I don't want to use an existing GUI system, I want to build one.
Qt in its current form is not portable to the PSP, never mind the overkill of such a task.
That said I've decided to create an IM-GUI, and have started to prototype the code.
A:
I wouldn't use OpenGL for the GUI unless you are planning for hardware accelerated and/or 3D effects that you don't think you would get with a more traditional GUI toolkit (e.g. Qt, wxWidgets, GTK, etc). If you just need a quick simple GUI for hosting your OpenGL graphics then FLTK is a nice choice. Otherwise, for rendering the GUI directly in OpenGL there are libraries like Crazy Eddie's GUI that do just that and provide lots of skinnable widgets that you won't have to reinvent. The window and OpenGL context could then be provided by a portable library like SDL.
EDIT: Now that I've gone back and taken a look at your other post I think I have a better understanding of what you are asking. For a GUI on an embedded system like the Nintendo DS, I would consider using an "immediate mode" GUI. Jari Komppa has a good tutorial about them, but you could use a more object-oriented approach with C++ than the C code he presents.
A:
Have a look at Qt. It is an open source library for making GUIs. Unlike Swing in Java, it assumes a lot of stuff, so it is really easy to make functional GUIs. For example, a textarea assumes that you want a context menu when you right click it with copy, paste, select all, etc. The documentation is also very good.
A:
http://www.fox-toolkit.org has an API reference, if you're looking how to work with a specific framework. Or were you more interested in general theory or something more along the lines of how to do the low-level stuff yourself?
A:
For more information about "immediate mode" GUI, I can recommend the Molly Rocket forums. There's a good video presentation of the thinking behind IM-GUI, along with lots of discussion.
I recently hacked together a very quick IM-GUI system based on presentation on Jari's page, and in my case, where I really just wanted to be able to get a couple of buttons and boxes on the screen, and more or less just hard code the response to the inputs, it really felt like the right thing to do, instead of going for a more full blown GUI-architecture. (This was in a DirectX-application, so the number of choices I had was pretty limited).
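For reference, the hot/active-item core of that pattern boils down to something like this (draw_rect stands in for whatever your renderer provides, and the UIState fields are filled from your input handling each frame):
struct UIState {
    int mousex, mousey;
    bool mousedown;
    int hotitem;    // widget the mouse is over this frame
    int activeitem; // widget the mouse button went down on
};

// Call every frame; returns true on the frame the button is clicked.
bool do_button(UIState& ui, int id, int x, int y, int w, int h)
{
    bool inside = ui.mousex >= x && ui.mousex < x + w &&
                  ui.mousey >= y && ui.mousey < y + h;
    if (inside) {
        ui.hotitem = id;
        if (ui.activeitem == 0 && ui.mousedown)
            ui.activeitem = id;
    }
    draw_rect(x, y, w, h, ui.hotitem == id, ui.activeitem == id);
    // a click is a mouse release while this widget is both hot and active
    return !ui.mousedown && ui.hotitem == id && ui.activeitem == id;
}
Reset hotitem to 0 at the start of each frame and activeitem to 0 when the mouse button is released, exactly as the tutorial does.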
A:
One of the fastest ways is to use Python with a GUI binding like PyQt, PyFLTK, tkinter, wxPython or even via pygame, which uses SDL.
It's easy, fast and platform independent.
Also the management of the packages is unbeatable.
See:
http://wiki.python.org/moin/PyQt
http://www.fltk.org/
(tkinter is default and already packaged with python)
http://wxpython.org/
http://www.pygame.org/news.html
A:
For a platform like the PSP, I'd worry slightly about the performance of an IM GUI solution. With a traditional retained mode type of solution, when you create a control, you can also create the vertex buffer/display list or what-have-you required to render it. With an immediate mode solution, it seems to me that you'd need to recreate this dynamically each frame.
You might not care about this, if you're only doing a few buttons, or it's not going to be used in-game (assuming you're making a game) but, especially if you have a fair bit of text, the cost of rendering might start to hurt if you can't find a way to cache the display lists somehow.
A:
I'll second Qt. It's cross platform, and I found it much easier to work with than the built in Visual Studio GUI stuff. It's dual-licensed, so if you don't want your code to be GPL you could purchase a license instead.
A:
I've had a look at the video from Molly Rocket and looked through Jari Komppa's cached tutorials.
An IM-GUI seems the best way to go; I think it will be a lot more streamlined, and a lot quicker to build, than the system I originally had in mind.
Now a new issue: I can only accept one Answer. :(
Thanks again to Monjardin and dooz, cheers.
thing2k
A:
I'd have a look at GLAM and GLGooey
| GUI system development resources? | Could someone recommend any good resources for creating Graphics User Interfaces, preferably in C/C++?
Currently my biggest influence is 3DBuzz.com's C++/OpenGL VTMs (Video Training Modules). While they are very good, they cover a large area of graphics programming, so only skim the surface of GUI programming.
This question does relate to "How do I make a GUI?", where there is also a rough outline of my current structure.
Any response would be appreciated.
Edit:
I've just read some of the answers and spotted that I've missed some points. I had to type the question twice as I caught the wrong button and lost it.
I missed two important points, first: This will be used cross platform including homebrew on a Sony PSP. Second: I want to create a GUI system not use an existing one.
Edit 2: I think some of you are missing the point: I don't want to use an existing GUI system, I want to build one.
Qt in its current form is not portable to the PSP, never mind the overkill of such a task.
That said I've decided to create an IM-GUI, and have started to prototype the code.
| [
"I wouldn't use OpenGL for the GUI unless you are planning for hardware accelerated and/or 3D effects that you don't think you would get with a more traditional GUI toolkit (e.g Qt, wxWidgets, GTK, etc). If you just need a quick simple GUI for hosting your OpenGL graphics then FLTK is a nice choice. Otherwise, for rendering the GUI directly in OpenGL their are libraries like Crazy Eddie's GUI that do just that and provide lots of skinnable widgets that you won't have to reinvent. The window and OpenGL context could then be provide with a portable library like SDL.\nEDIT: Now that I've gone back and taken at look your other post I think I have a better understanding of what you are asking. For a GUI on an embedded system like the Nintendo DS, I would consider using an \"immediate mode\" GUI. Jari Komppa has a good tutorial about them, but you could use a more object-oriented approach with C++ than the C code he presents.\n",
"Have a look at Qt. It is an open source library for making GUI's. Unlike Swing in Java, it assumes a lot of stuff, so it is really easy to make functional GUI's. For example, a textarea assumes that you want a context menu when you right click it with copy, paste, select all, etc. The documentation is also very good.\n",
"http://www.fox-toolkit.org has an API reference, if you're looking how to work with a specific framework. Or were you more interested in general theory or something more along the lines of how to do the low-level stuff yourself?\n",
"For more information about \"immediate mode\" GUI, I can recommend the Molly Rocket forums. There's a good video presentation of the thinking behind IM-GUI, along with lots of discussion.\nI recently hacked together a very quick IM-GUI system based on presentation on Jari's page, and in my case, where I really just wanted to be able to get a couple of buttons and boxes on the screen, and more or less just hard code the response to the inputs, it really felt like the right thing to do, instead of going for a more full blown GUI-architecture. (This was in a DirectX-application, so the number of choices I had was pretty limited).\n",
"One of the fastest ways is to use python with a gui binding like pyQt, PyFLTK, tkinter, wxPython or even via pygame which uses SDL.\nIts easy fast and platform independent.\nAlso the management of the packages is unbeatable.\nSee:\n\nhttp://wiki.python.org/moin/PyQt\nhttp://www.fltk.org/\n(tkinter is default and already packaged with python)\nhttp://wxpython.org/\nhttp://www.pygame.org/news.html\n\n",
"For a platform like the PSP, I'd worry slightly about the performance of an IM GUI solution. With a traditional retained mode type of solution, when you create a control, you can also create the vertex buffer/display list or what-have-you required to render it. With an immediate mode solution, it seems to me that you'd need to recreate this dynamically each frame.\nYou might not care about this, if you're only doing a few buttons, or it's not going to be used in-game (assuming you're making a game) but, especially if you have a fair bit of text, the cost of rendering might start to hurt if you can't find a way to cache the display lists somehow.\n",
"I'll second Qt. It's cross platform, and I found it much easier to work with than the built in Visual Studio GUI stuff. It's dual-licensed, so if you don't want your code to be GPL you could purchase a license instead.\n",
"I've had a look at the Video from Molley Rocket and Looked through Jari Komppa's cached tutorials.\nAn IM-GUI seems the best way to go, I think it will be a lot more streamlined, and lot quicker to build than the system I originally had in mind.\nNow a new issue, I can only except one Answer. :(\nThanks again to Monjardin and dooz, cheers.\nthing2k\n",
"I'd have a look at GLAM and GLGooey\n"
] | [
2,
1,
1,
1,
1,
1,
0,
0,
0
] | [] | [] | [
"c++",
"playstation_portable",
"user_interface"
] | stackoverflow_0000013607_c++_playstation_portable_user_interface.txt |
Q:
Office VSTO Word 2003 project keeps trying to autoconvert to 2007
I am working on an Office Word add-in for Word 2003. When I reopen the project, the VS2008 auto-convert dialog box opens and tries to convert it to the Word 2007 format.
How can I reopen this file and keep it in the Word 2003 format?
A:
Got an answer over at MSDN Forums
This is the default behavior when you have Office 2007 installed on your
development computer. You can modify
this behavior under Tools->Options.
For more information, see the
following threads:
http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3762143&SiteID=1
http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3742203&SiteID=1&mode=1
I hope this helps,
McLean Schofield
| Office VSTO Word 2003 project keeps trying to autoconvert to 2007 | I am working on an Office Word add-in for Word 2003. When I reopen the project, the VS2008 auto-convert dialog box opens and tries to convert it to the Word 2007 format.
How can I reopen this file and keep it in the Word 2003 format?
| [
"Got a answer over at MSDN Forums\n\nThis is the default behavior when you have Office 2007 installed on your\n development computer. You can modify\n this behavior under Tools->Options.\n For more informaiton, see the\n following threads:\nhttp://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3762143&SiteID=1\nhttp://forums.microsoft.com/Forums/ShowPost.aspx?PostID=3742203&SiteID=1&mode=1\nI hope this helps,\nMcLean Schofield\n\n"
] | [
3
] | [] | [] | [
"c#",
"ms_word",
"visual_studio",
"vsto"
] | stackoverflow_0000015102_c#_ms_word_visual_studio_vsto.txt |
Q:
How to intercept and cancel auto play from an application?
I am developing an application to install a large number of data files from multiple DVDs. The application will prompt the user to insert the next disk, however Windows will automatically try to open that disk either in an explorer window or ask the user what to do with the new disk.
How can I intercept and cancel auto play messages from my application?
A:
There are two approaches that I know of. The first and simplest is to register the special Windows message "QueryCancelAutoPlay" and simply return 1 when the message is handled. This only works for the current window, and not a background application.
The second approach requires inserting an object that implements the COM interface IQueryCancelAutoPlay COM interface into the Running Object Table.
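For illustration, here is a minimal sketch of the first approach in a WinForms app; it assumes your prompt window is the foreground window when the disk goes in (the message is only delivered to the foreground window), and the class name is made up:
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class InstallerForm : Form
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    private static extern uint RegisterWindowMessage(string message);

    // Windows broadcasts this registered message before it starts AutoPlay.
    private static readonly uint queryCancelAutoPlay =
        RegisterWindowMessage("QueryCancelAutoPlay");

    protected override void WndProc(ref Message m)
    {
        if ((uint)m.Msg == queryCancelAutoPlay)
        {
            m.Result = (IntPtr)1; // a non-zero result cancels AutoPlay
            return;
        }
        base.WndProc(ref m);
    }
}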
A:
Alternatively, you could just programmatically save the current state of autoplay and turn it off when your program starts, then restore the original state when your program closes. This would be a lot simpler. Check out the NoDriveTypeAutoRun key.
| How to intercept and cancel auto play from an application? | I am developing an application to install a large number of data files from multiple DVDs. The application will prompt the user to insert the next disk, however Windows will automatically try to open that disk either in an explorer window or ask the user what to do with the new disk.
How can I intercept and cancel auto play messages from my application?
| [
"There are two approaches that I know of. The first and simplest is to register the special Windows message \"QueryCancelAutoPlay\" and simply return 1 when the message is handled. This only works for the current window, and not a background application.\nThe second approach requires inserting an object that implements the COM interface IQueryCancelAutoPlay COM interface into the Running Object Table.\n",
"Alternatively, you could just programmatically save the current state of autoplay and turn it off when your program starts, then restore the original state when your program closes. This would be a lot simpler. Check out the NoDriveTypeAutoRun key.\n"
] | [
3,
0
] | [] | [] | [
"disk",
"windows"
] | stackoverflow_0000011734_disk_windows.txt |
Q:
Validation Patterns for Custom XML Documents
I have a web application that generates a medium sized XML dataset to be consumed by a third party.
I thought it would be a good idea to provide some form of schema document for the XML that I generate so I pasted the XML into Visual Studio and got it to generate an XSD.
The annoying thing is that my XML doesn't validate to the XSD that was generated!
Is it better to roll your own XSD?
What about different schema docs like DTDs, Relax NG, or Schematron?
The key is that I would like to be able to validate my document using C#.
What are your XML validation strategies?
A:
Whether you choose XSD and/or Schematron depends on what you are trying to validate. XSD is probably the most common validation strategy, but there are limits on what it can validate. If all you want to do is ensure that the right type of data is in each field, XSD should work for you. If you need to assert, for example, that the value of the <small> element is less than the value of the <big> element, or even more complex business rules involving multiple fields, you probably want Schematron or a hybrid approach.
A:
You will be able to validate your XML with either an XML Schema or a DTD using C#. DTDs are older standards as compared to XML Schemas.
So, I recommend an XML Schema approach.
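As a rough sketch of the validation side in C# (the file names here are placeholders), you can hang an XSD off XmlReaderSettings and let the reader drive validation:
using System;
using System.Xml;
using System.Xml.Schema;

class SchemaValidation
{
    static void Main()
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        // Passing null uses the targetNamespace declared inside the schema itself.
        settings.Schemas.Add(null, "dataset.xsd");
        settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            Console.WriteLine(e.Severity + ": " + e.Message);
        };

        using (XmlReader reader = XmlReader.Create("dataset.xml", settings))
        {
            while (reader.Read()) { } // reading the document performs the validation
        }
    }
}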
| Validation Patterns for Custom XML Documents | I have a web application that generates a medium sized XML dataset to be consumed by a third party.
I thought it would be a good idea to provide some form of schema document for the XML that I generate so I pasted the XML into Visual Studio and got it to generate an XSD.
The annoying thing is that my XML doesn't validate to the XSD that was generated!
Is it better to roll your own XSD?
What about different schema docs like DTDs, Relax NG, or Schematron?
The key is that I would like to be able to validate my document using C#.
What are your XML validation strategies?
| [
"Whether you choose XSD and/or Schematron depends on what you are trying to validate. XSD is probably the most common validation strategy, but there are limits on what it can validate. If all you want to do is ensure that the right type of data is in each field, XSD should work for you. If you need to assert, for example, that the value of the <small> element is less than the value of the <big> element, or even more complex business rules involving multiple fields, you probably want Schematron or a hybrid approach.\n",
"You will be able to validate your XML with either an XML Schema or a DTD using C#. DTDs are older standards as compared to XML Schemas.\nSo, I recommend an XML Schema approach.\n"
] | [
5,
0
] | [] | [] | [
"c#",
"schema",
"visual_studio",
"xml"
] | stackoverflow_0000016611_c#_schema_visual_studio_xml.txt |
Q:
Problem databinding an ASP.Net AJAX toolkit MaskedEditExtender
I have a database that contains a date and we are using the MaskedEditExtender (MEE) and MaskedEditValidator to make sure the dates are appropriate. However, we want the Admins to be able to go in and change the data (specifically the date) if necessary.
How can I have the MEE field pre-populate with the database value when the data is shown on the page? I've tried to use 'bind' in the 'InitialValue' property but it doesn't populate the textbox.
Thanks.
A:
We found out this morning why our code was mishandling the extender. Since the DB was handling the date as a date/time, it was returning the date in the format 99/99/9999 99:99:99, but we had the extender mask looking for the format 99/99/9999 99:99.
Mask="99/99/9999 99:99:99"
the above code fixed the problem.
thanks to everyone for their help.
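For anyone hitting the same thing, a sketch of the relevant markup (the control IDs, the bound column name, and the ajaxToolkit tag prefix are assumptions):
<asp:TextBox ID="DateTextBox" runat="server" Text='<%# Bind("EventDate") %>' />
<ajaxToolkit:MaskedEditExtender ID="DateExtender" runat="server"
    TargetControlID="DateTextBox"
    MaskType="DateTime"
    Mask="99/99/9999 99:99:99" />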
A:
Are you referring to the asp.Net Ajax toolkit extensions at:
http://www.asp.net/AJAX/AjaxControlToolkit/Samples/MaskedEdit/MaskedEdit.aspx
If so have you checked that your data is coming back in the correct format? It will have to match your date format in order to be displayed.
| Problem databinding an ASP.Net AJAX toolkit MaskedEditExtender | I have a database that contains a date and we are using the MaskedEditExtender (MEE) and MaskedEditValidator to make sure the dates are appropriate. However, we want the Admins to be able to go in and change the data (specifically the date) if necessary.
How can I have the MEE field pre-populate with the database value when the data is shown on the page? I've tried to use 'bind' in the 'InitialValue' property but it doesn't populate the textbox.
Thanks.
| [
"We found out this morning why our code was mishandling the extender. Since the db was handling the date as a date/time it was returning the date in this format 99/99/9999 99:99:99 but we had the extender mask looking for this format 99/99/9999 99:99 \nMask=\"99/99/9999 99:99:99\"\nthe above code fixed the problem.\nthanks to everyone for their help.\n",
"Are you referring to the asp.Net Ajax toolkit extensions at:\nhttp://www.asp.net/AJAX/AjaxControlToolkit/Samples/MaskedEdit/MaskedEdit.aspx\nIf so have you checked that your data is coming back in the correct format? It will have to match your date format in order to be displayed.\n"
] | [
1,
0
] | [] | [] | [
"asp.net",
"asp.net_ajax",
"validation"
] | stackoverflow_0000012225_asp.net_asp.net_ajax_validation.txt |
Q:
Best way to implement a dirty flag in EF
You can easily use the PropertyChanged events to set the flag. But how do you easily reset it after a save to the ObjectContext?
A:
what about the ObjectContext.SavingChanges event? See also http://www.thedatafarm.com/blog/2008/07/13/OverridingObjectContextSaveChanges.aspx.
A:
The above method calls for using the SavingChanges event which is called before the changes are persisted. If there is an error during the save, you have already cleared your dirty flag. I would think there would be a SavedChanges event exposed as well.
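One way around that ordering problem is to skip the events and clear the flag only after SaveChanges returns successfully; a sketch against a hypothetical generated context called MyEntities with a custom IsDirty flag:
public partial class MyEntities
{
    public bool IsDirty { get; set; }

    public int SaveChangesAndResetDirty()
    {
        // SaveChanges throws on failure, so the flag is only cleared on success.
        int affected = SaveChanges();
        IsDirty = false;
        return affected;
    }
}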
| Best way to implement a dirty flag in EF | You can easily use the PropertyChanges events to set the flag. But how do you easily reset it after a save to the ObjectContext?
| [
"what about the ObjectContext.SavingChanges event? See also http://www.thedatafarm.com/blog/2008/07/13/OverridingObjectContextSaveChanges.aspx.\n",
"The above method calls for using the SavingChanges event which is called before the changes are persisted. If there is an error during the save, you have already cleared your dirty flag. I would think there would be a SavedChanges event exposed as well.\n"
] | [
1,
1
] | [] | [] | [
"entity",
"frameworks"
] | stackoverflow_0000016406_entity_frameworks.txt |
Q:
Internationalization in SSRS
What's the best way to handle translations for stock text in a SSRS. For instance - if I have a report that shows a grid of contents what's the best way to have the correct translation for the header of that grid show up, assuming the culture of the report is set correctly.
Put another way - is it possible to do resources in a SSRS report, or am I stuck with storing all that text in the database and querying for it?
A:
As far as I know, there is no way to localize a report (meaning automating the translation of string literals)...
Like you said, you basically have to use the User!Language global variable to catch the user's settings and then use that to retrieve the appropriate strings from the DB...
However, you can adapt the display of currency/numeric/date fields according to the user locale. Also possible is changing the interface of the Report Viewer to match your user's language.
Here are a few links giving tips on how to adapt the locale:
http://www.ssw.com.au/Ssw/Standards/Rules/RulesToBetterSQLReportingServices.aspx#LanguageSetting
Language pack for Report Viewer:
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=e3d3071b-d919-4ff9-9696-c11d312a36a0
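The database lookup typically boils down to something like the query below, fed by a report parameter defaulted to =User!Language (the Translations table and its columns are hypothetical):
SELECT ResourceKey, ResourceText
FROM Translations
WHERE LanguageCode = @Language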
| Internationalization in SSRS | What's the best way to handle translations for stock text in a SSRS. For instance - if I have a report that shows a grid of contents what's the best way to have the correct translation for the header of that grid show up, assuming the culture of the report is set correctly.
Put another way - is it possible to do resources in a SSRS report, or am I stuck with storing all that text in the database and querying for it?
| [
"AS far as I know, there is no way to localize a report (meaning automating the translation of string litterals)...\nLike you said,you basically have to use the User!Language global variable to catch the user's settings and then use that to retrieve the appropriate strings from the DB...\nHowever, you can adapt the display of currency/numeric/date fields according to the user locale. Also possible is changing the interface of the Report Viewer to match your user's langage.\nHere are a few links giving tips on how to adapt the locale:\nhttp://www.ssw.com.au/Ssw/Standards/Rules/RulesToBetterSQLReportingServices.aspx#LanguageSetting\nLangage pack for Report Viewer:\nhttp://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=e3d3071b-d919-4ff9-9696-c11d312a36a0\n"
] | [
5
] | [] | [] | [
"internationalization",
"reporting_services"
] | stackoverflow_0000016660_internationalization_reporting_services.txt |
Q:
How do you handle white space in your HTML
One of my biggest typographical frustrations about HTML is the way that it mangles conjoined whitespace. For example if I have:
<span>Following punctuation rules. With two spaces after the period. </span>
One of the two spaces following the period will be considered to be insignificant whitespace and be removed. I can of course, force the whitespace to be significant with:
<span>Following punctuation rules.&nbsp; With two spaces after the period. </span>
but it just irks me to have to do that and I usually don't bother. Does anyone out there automatically insert significant whitespace into external content submissions that are intended for a web page?
A:
If you really want your white space to be preserved, try the css property: white-space: pre;
Or, you could just use a <pre> tag in your markup.
By the way, it's a good thing that HTML browsers ignore white space in general, it allows us to have clearly formatted source code, without affecting the output.
A:
For your specific example, there is no need to worry about it. Web browsers perform typographical rendering and place the correct amount of space between periods and whatever character follows (and it's different depending on the next character, according to kerning rules.)
If you want line breaks, <br/> isn't really a big deal, is it?
Not sure what's worthy of a downmod here... You should not be forcing two spaces after a period, unless you're using a monospace font. For proportional fonts, the renderer kerns the right amount of space after a period. See here and here for detailed discussions.
A:
It may not be very elegant, but I apply CSS to a <pre> tag.
There's always the "white-space" CSS attribute, but it can be a bit hit and miss.
A:
You can use a styled pre block to preserve whitespace. Most WYSIWYG editors also insert &nbsp; for you...
Overall, it's good that the browser ignores whitespace. Just view the source on this website for yourself and imagine how crazy the site would look if every space was displayed.
A:
Take a look at the pre tag. It might do what you want.
A:
You'd better use white-space: pre-wrap than white-space: pre or &nbsp;.
With your example, the latter solutions can start a new line on "rules. " just because your non-breaking space hit the end of the line.
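To illustrate, a one-rule sketch (the class name is arbitrary):
.preserve-spacing {
    white-space: pre-wrap; /* keeps runs of spaces but still wraps long lines */
}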
A:
The PRE tag can be a valid solution, depending on your needs. However, if you are trying to use the 2-space rule in sentences throughout your site, you'll soon find that the line feeds/carriage returns (or lack thereof) that the PRE tag also preserves will muck up any styling you try to do.
In general, I tend to ignore the "2 spaces after a sentence" rule, or if you're a stickler for it, I'd stick with the &nbsp;, but you'll occasionally run into the issue Nicolas stated.
A:
There is a page regarding this topic on webtypography.net. That site has many other interesting things about creating text for the web from the point of view of typography, things that web page designers often don't even think about. It's worth reading.
| How do you handle white space in your HTML | One of my biggest typographical frustrations about HTML is the way that it mangles conjoined whitespace. For example if I have:
<span>Following punctuation rules. With two spaces after the period. </span>
One of the two spaces following the period will be considered to be insignificant whitespace and be removed. I can of course, force the whitespace to be significant with:
<span>Following punctuation rules.&nbsp; With two spaces after the period. </span>
but it just irks me to have to do that and I usually don't bother. Does anyone out there automatically insert significant whitespace into external content submissions that are intended for a web page?
| [
"If you really want your white space to be preserved, try the css property: white-space: pre;\nOr, you could just use a <pre> tag in your markup.\nBy the way, it's a good thing that HTML browsers ignore white space in general, it allows us to have clearly formatted source code, without affecting the output.\n",
"For your specific example, there is no need to worry about it. Web browsers perform typographical rendering and place the correct amount of space between periods and whatever character follows (and it's different depending on the next character, according to kerning rules.)\nIf you want line breaks, <br/> isn't really a big deal, is it?\n\nNot sure what's worthy of a downmod here... You should not be forcing two spaces after a period, unless you're using a monospace font. For proportional fonts, the rederer kerns the right amount of space after a period. See here and here for detailed discussions.\n",
"It may not be very elegant, but I apply CSS to a <pre> tag.\nThere's always the \"white-space\" CSS attribute, but it can be a bit hit and miss.\n",
"You can use a styled pre block to preserve whitespace. Most WYSIWYG editors also insert for you...\nOverall, it's good that the browser ignores whitespace. Just view the source on this website for yourself and imagine how crazy the site would look if every space was displayed.\n",
"Take a look at the pre tag. It might do what you want.\n",
"You'd better use white-space: pre-wrap than white-space: pre or \nWith your example, the latter solutions can start a new line on \"rules. \" just because your non-breakable space hit the end of the line.\n",
"The PRE tag can be a valid solution, depending on your needs. However, if you are trying to use the 2 space rule in sentences throughout your site, you'll soon find that the other characters the PRE tag preserves are the line feed/carriage returns (or lack of) will muck up any styling you try to do. \nIn general, I tend to ignore the \"2 spaces after a sentence\" rule, or if you're a stickler for it, I'd stick with the , but you'll occasionally run into the issue Nicolas stated.\n",
"There is a page regarding this topic on webtypography.net. That site has many other interesting things about creating text for the web from the point of view of typography, things that web page designers often don't even think about. It's worth reading.\n"
] | [
7,
6,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"html",
"whitespace"
] | stackoverflow_0000016403_html_whitespace.txt |
Q:
How do you use back-references to PCREs in PHP?
I read this PHP RegEx page, but either I'm missing something, misreading something, or it doesn't work the way they say. I'm guessing it's one of the first two.
$str = preg_replace("([|]\d*)", "\1;", $str);
A:
Your regular expression should follow Perl syntax, meaning it has to start and end with the same character (with some exceptions). Also, the back reference should start with a double backslash, to get around PHP's double escaping. This should work (with a quick test):
$str = "asdfasdf |123123 asdf iakds |302 asdf |11";
$str = preg_replace("/([|]\d*)/", "\\1;", $str);
echo $str; // prints "asdfasdf |123123; asdf iakds |302; asdf |11;"
| How do you use back-references to PCREs in PHP? | I read this PHP RegEx page, but either I'm missing something, misreading something, or it doesn't work the way they say. I'm guessing it's one of the first two.
$str = preg_replace("([|]\d*)", "\1;", $str);
| [
"Your regular expression should follow Perl syntax, meaning it has to start and end with the same character (with some exceptions). Also, the back reference should start with a double slash, to get around PHPs double escaping. This should work (with a quick test):\n$str = \"asdfasdf |123123 asdf iakds |302 asdf |11\";\n$str = preg_replace(\"/([|]\\d*)/\", \"\\\\1;\", $str);\necho $str; // prints \"asdfasdf |123123; asdf iakds |302; asdf |11;\"\n\n"
] | [
4
] | [] | [] | [
"php",
"regex"
] | stackoverflow_0000016759_php_regex.txt |
Q:
Choosing a multiplier for a (string) hash function
Do you have any advice/rules on selecting a multiplier to use in a (multiplicative) hash function. The function is computing the hash value of a string.
A:
You want to use something that is relatively prime to the size of your set. That way, when you loop around, you won't end up on the same numbers you just tried.
A:
I had an interesting discussion with a coworker about hash function recently. Our conclusions were as follows:
If you really need to write a good hash function that minimizes collisions more than the default implementations available in the standard languages you need an advanced degree in mathematics.
If you're writing applications where a custom hash function will noticeably improve the performance of your application, you're Google and you've got plenty of Math PhDs to do the work.
Sorry to not directly answer your question, but the bottom line is that there's really no need to write your own hash function for String. What language are you working with? I'd imagine there's an easy way to compute a "good enough" hash code.
A:
Historically 33 seems like a popular choice, and it tends to work pretty well. No one knows why though. For more details, look here
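For illustration, the classic djb2-style hash is where that multiplier usually shows up; a quick sketch in C#:
static uint Djb2Hash(string s)
{
    uint hash = 5381; // the traditional djb2 seed
    foreach (char c in s)
        hash = hash * 33 + c; // multiply by 33, mix in the next character
    return hash;
}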
| Choosing a multiplier for a (string) hash function | Do you have any advice/rules on selecting a multiplier to use in a (multiplicative) hash function. The function is computing the hash value of a string.
| [
"You want to use something that is relatively prime to the size of your set. That way, when you loop around, you won't end up on the same numbers you just tried.\n",
"I had an interesting discussion with a coworker about hash function recently. Our conclusions were as follows:\nIf you really need to write a good hash function that minimizes collisions more than the default implementations available in the standard languages you need an advanced degree in mathematics.\nIf you're writing applications where a custom hash function will noticeably improve the performance of your application, you're Google and you've got plenty of Math PhDs to do the work.\nSorry to not directly answer your question, but the bottom line is that there's really no need to write your own hash function for String. What language are you working with? I'd imagine there's an easy way to compute a \"good enough\" hash code.\n",
"Historically 33 seems like a popular choice, and it tends to work pretty well. No one knows why though. For more details, look here\n"
] | [
3,
2,
1
] | [] | [] | [
"algorithm",
"performance"
] | stackoverflow_0000016873_algorithm_performance.txt |
Q:
.Net drawing clipping bug
GDI+ DrawLines function has a clipping bug that can be reproduced by running the following C# code. When running the code, two line paths appear that should be identical, because both of them are inside the clipping region. But when the clipping region is set, one of the line segments is not drawn.
protected override void OnPaint(PaintEventArgs e)
{
PointF[] points = new PointF[] { new PointF(73.36f, 196),
new PointF(75.44f, 32),
new PointF(77.52f, 32),
new PointF(79.6f, 196),
new PointF(85.84f, 196) };
Rectangle b = new Rectangle(70, 32, 20, 164);
e.Graphics.SetClip(b);
e.Graphics.DrawLines(Pens.Red, points); // clipped incorrectly
e.Graphics.TranslateTransform(80, 0);
e.Graphics.ResetClip();
e.Graphics.DrawLines(Pens.Red, points);
}
Setting the anti-alias mode on the graphics object resolves this. But that is not a real solution.
Does anybody know of a workaround?
A:
It appears that this is a known bug...
The following code appears to function as you requested:
protected override void OnPaint(PaintEventArgs e)
{
PointF[] points = new PointF[] { new PointF(73.36f, 196),
new PointF(75.44f, 32),
new PointF(77.52f, 32),
new PointF(79.6f, 196),
new PointF(85.84f, 196) };
e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
Rectangle b = new Rectangle(70, 32, 20, 165);
e.Graphics.SetClip(b);
e.Graphics.DrawLines(Pens.Red, points); // clipped incorrectly
e.Graphics.TranslateTransform(80, 0);
e.Graphics.ResetClip();
e.Graphics.DrawLines(Pens.Red, points);
}
Note: I have AntiAlias'ed the line and extended your clipping region by 1
it appears that the following work arounds might help (although not tested):
The pen is more than one pixel thick
The line is perfectly horizontal or vertical
The clipping is against the window boundaries rather than a clip rectangle
The following is a list of articles that might / or then again might not help:
http://www.tech-archive.net/pdf/Archive/Development/microsoft.public.win32.programmer.gdi/2004-08/0350.pdf
http://www.tech-archive.net/Archive/Development/microsoft.public.win32.programmer.gdi/2004-08/0368.html
OR...
the following is also possible:
protected override void OnPaint ( PaintEventArgs e )
{
PointF[] points = new PointF[] { new PointF(73.36f, 196),
new PointF(75.44f, 32),
new PointF(77.52f, 32),
new PointF(79.6f, 196),
new PointF(85.84f, 196) };
Rectangle b = new Rectangle( 70, 32, 20, 164 );
Region reg = new Region( b );
e.Graphics.SetClip( reg, System.Drawing.Drawing2D.CombineMode.Union);
e.Graphics.DrawLines( Pens.Red, points ); // clipped incorrectly
e.Graphics.TranslateTransform( 80, 0 );
e.Graphics.ResetClip();
e.Graphics.DrawLines( Pens.Red, points );
}
This effectively clips using a region combined/unioned (I think) with the ClientRectangle of the canvas/Control. As the region is defined from the rectangle, the results should be what is expected. This code can be proven to work by adding
e.Graphics.FillRectangle( new SolidBrush( Color.Black ), b );
after the setClip() call. This clearly shows the black rectangle only appearing in the clipped region.
This could be a valid workaround if Anti-Aliasing the line is not an option.
Hope this helps
A:
What appears to be the matter with the code?
OK, the question should be... what should the code do that it doesn't already.
When I run the code, I see 2 red 'spikes'; am I not meant to?
You appear to draw the first spike within the clipped rectangle region, verified by adding the following after the declaration of the Rectangle:
e.Graphics.FillRectangle( new SolidBrush( Color.Black ), b );
Then you perform a translation and reset the clip, so at this point I assume the ClientRectangle is being used as the appropriate clip region, and then attempt to redraw the translated spike. Where's the bug?!?
A:
The bug is that both line segments should be drawn identical but they are not, because the spike that is drawn within the clipping region is completely within the clipping region and should not be clipped in any way, but it is. This is a very annoying bug that results in any software that uses DrawLines heavily with clipping looking unprofessional, because of gaps that can appear in the polygons.
| .Net drawing clipping bug | GDI+ DrawLines function has a clipping bug that can be reproduced by running the following C# code. When running the code, two line paths appear that should be identical, because both of them are inside the clipping region. But when the clipping region is set, one of the line segments is not drawn.
protected override void OnPaint(PaintEventArgs e)
{
PointF[] points = new PointF[] { new PointF(73.36f, 196),
new PointF(75.44f, 32),
new PointF(77.52f, 32),
new PointF(79.6f, 196),
new PointF(85.84f, 196) };
Rectangle b = new Rectangle(70, 32, 20, 164);
e.Graphics.SetClip(b);
e.Graphics.DrawLines(Pens.Red, points); // clipped incorrectly
e.Graphics.TranslateTransform(80, 0);
e.Graphics.ResetClip();
e.Graphics.DrawLines(Pens.Red, points);
}
Setting the anti-alias mode on the graphics object resolves this. But that is not a real solution.
Does anybody know of a workaround?
| [
"It appears that this is a known bug...\nThe following code appears to function as you requested:\nprotected override void OnPaint(PaintEventArgs e)\n {\n PointF[] points = new PointF[] { new PointF(73.36f, 196), \n new PointF(75.44f, 32), \n new PointF(77.52f, 32), \n new PointF(79.6f, 196), \n new PointF(85.84f, 196) };\n\n e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;\n Rectangle b = new Rectangle(70, 32, 20, 165);\n e.Graphics.SetClip(b);\n e.Graphics.DrawLines(Pens.Red, points); // clipped incorrectly\n e.Graphics.TranslateTransform(80, 0);\n e.Graphics.ResetClip(); \n e.Graphics.DrawLines(Pens.Red, points);\n }\n\nNote: I have AntiAlias'ed the line and extended your clipping region by 1\nit appears that the following work arounds might help (although not tested):\n\nThe pen is more than one pixel thick\nThe line is perfectly horizontal or vertical\nThe clipping is against the window boundaries rather than a clip rectangle\n\nThe following is a list of articles that might / or then again might not help:\nhttp://www.tech-archive.net/pdf/Archive/Development/microsoft.public.win32.programmer.gdi/2004-08/0350.pdf\nhttp://www.tech-archive.net/Archive/Development/microsoft.public.win32.programmer.gdi/2004-08/0368.html\nOR...\nthe following is also possible:\nprotected override void OnPaint ( PaintEventArgs e )\n {\n PointF[] points = new PointF[] { new PointF(73.36f, 196), \n new PointF(75.44f, 32), \n new PointF(77.52f, 32), \n new PointF(79.6f, 196), \n new PointF(85.84f, 196) };\n\n Rectangle b = new Rectangle( 70, 32, 20, 164 );\n Region reg = new Region( b );\n e.Graphics.SetClip( reg, System.Drawing.Drawing2D.CombineMode.Union);\n e.Graphics.DrawLines( Pens.Red, points ); // clipped incorrectly\n e.Graphics.TranslateTransform( 80, 0 );\n e.Graphics.ResetClip();\n e.Graphics.DrawLines( Pens.Red, points );\n }\n\nThis effecivly clips using a region combined/unioned (I think) with the ClientRectangle of the canvas/Control. As the region is difned from the rectangle, the results should be what is expected. This code can be proven to work by adding\ne.Graphics.FillRectangle( new SolidBrush( Color.Black ), b );\n\nafter the setClip() call. This clearly shows the black rectangle only appearing in the clipped region.\nThis could be a valid workaround if Anti-Aliasing the line is not an option.\nHope this helps\n",
"What appears to be the matter with the code?\nOK, the question should be... what should the code do that it doesn't already.\nWhen I run the code, I see 2 red 'spikes' am I not ment to?\nYou appear to draw the first spike within the clipped rectangle region verified by adding the the following after the declaration of teh Rectangle :\ne.Graphics.FillRectangle( new SolidBrush( Color.Black ), b );\nThen you perform a translation, reset the clip so at this point I assume the clientRectangle is being used as the appropriate clip region and then attempt to redarw the translated spike. Where's the bug?!?\n",
"The bug is that both line segments should be drawn identical but they are not because the spike that is drawn within the clipping region is completely within the clipping region and should not be clipped in any way but it is. This is a very annoying but that results in any software that uses drawlines heavily + clipping to look unprofessional because of gaps that can appear in the polygons.\n"
] | [
3,
0,
0
] | [] | [] | [
"gdi+",
"graphics",
"system.drawing",
"winforms"
] | stackoverflow_0000015478_gdi+_graphics_system.drawing_winforms.txt |
Q:
Overriding the equals method vs creating a new method
I have always thought that the .equals() method in java should be overridden to be made specific to the class you have created. In other words to look for equivalence of two different instances rather than two references to the same instance. However I have encountered other programmers who seem to think that the default object behavior should be left alone and a new method created for testing equivalence of two objects of the same class.
What are the arguments for and against overriding the equals method?
A:
Overriding the equals method is necessary if you want to test equivalence in standard library classes (for example, ensuring a java.util.Set contains unique elements or using objects as keys in java.util.Map objects).
Note, if you override equals, ensure you honour the API contract as described in the documentation. For example, ensure you also override Object.hashCode:
If two objects are equal according to
the equals(Object) method, then
calling the hashCode method on each of
the two objects must produce the same
integer result.
EDIT: I didn't post this as a complete answer on the subject, so I'll echo Fredrik Kalseth's statement that overriding equals works best for immutable objects. To quote the API for Map:
Note: great care must be exercised if
mutable objects are used as map keys.
The behavior of a map is not specified
if the value of an object is changed
in a manner that affects equals
comparisons while the object is a key
in the map.
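A minimal sketch of an immutable value class that honours that contract (the class and its fields are made up):
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point other = (Point) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y; // must agree with equals
    }
}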
A:
I would highly recommend picking up a copy of Effective Java and reading through item 7 obeying the equals contract. You need to be careful if you are overriding equals for mutable objects, as many of the collections such as Maps and Sets use equals to determine equivalence, and mutating an object contained in a collection could lead to unexpected results. Brian Goetz also has a pretty good overview of implementing equals and hashCode.
A:
You should "never" override equals & getHashCode for mutable objects - this goes for .net and Java both. If you do, and use such an object as the key in f.ex a dictionary and then change that object, you'll be in trouble because the dictionary relies on the hashcode to find the object.
Here's a good article on the topic: http://weblogs.asp.net/bleroy/archive/2004/12/15/316601.aspx
A:
@David Schlosnagle mentions Josh Bloch's Effective Java -- this is a must-read for any Java developer.
There is a related issue: for immutable value objects, you should also consider overriding compareTo. The standard wording for when the two differ is in the Comparable API:
It is generally the case, but not strictly required that (compare(x, y)==0) == (x.equals(y)). Generally speaking, any comparator that violates this condition should clearly indicate this fact. The recommended language is "Note: this comparator imposes orderings that are inconsistent with equals."
A:
The Equals method is intended to compare references. So it should not be overriden to change its behaviour.
You should create a new method to test for equivalence in different instances if you need to (or use the CompareTo method in some .NET classes)
A:
You should only need to override the equals() method if you want specific behaviour when adding objects to sorted data structures (SortedSet etc.)
When you do that you should also override hashCode().
See here for a complete explanation.
A:
To be honest, in Java there is not really an argument against overriding equals. If you need to compare instances for equality, then that is what you do.
As mentioned above, you need to be aware of the contract with hashCode, and similarly, watch out for the gotchas around the Comparable interface - in almost all situations you want the natural ordering as defined by Comparable to be consistent with equals (see the BigDecimal api doc for the canonical counter example)
Creating a new method for deciding equality, quite apart from not working with the existing library classes, flies in the face of Java convention somewhat.
| Overriding the equals method vs creating a new method | I have always thought that the .equals() method in java should be overridden to be made specific to the class you have created. In other words to look for equivalence of two different instances rather than two references to the same instance. However I have encountered other programmers who seem to think that the default object behavior should be left alone and a new method created for testing equivalence of two objects of the same class.
What are the arguments for and against overriding the equals method?
| [
"Overriding the equals method is necessary if you want to test equivalence in standard library classes (for example, ensuring a java.util.Set contains unique elements or using objects as keys in java.util.Map objects).\nNote, if you override equals, ensure you honour the API contract as described in the documentation. For example, ensure you also override Object.hashCode:\n\nIf two objects are equal according to\n the equals(Object) method, then\n calling the hashCode method on each of\n the two objects must produce the same\n integer result.\n\nEDIT: I didn't post this as a complete answer on the subject, so I'll echo Fredrik Kalseth's statement that overriding equals works best for immutable objects. To quote the API for Map:\n\nNote: great care must be exercised if\n mutable objects are used as map keys.\n The behavior of a map is not specified\n if the value of an object is changed\n in a manner that affects equals\n comparisons while the object is a key\n in the map.\n\n",
"I would highly recommend picking up a copy of Effective Java and reading through item 7 obeying the equals contract. You need to be careful if you are overriding equals for mutable objects, as many of the collections such as Maps and Sets use equals to determine equivalence, and mutating an object contained in a collection could lead to unexpected results. Brian Goetz also has a pretty good overview of implementing equals and hashCode.\n",
"You should \"never\" override equals & getHashCode for mutable objects - this goes for .net and Java both. If you do, and use such an object as the key in f.ex a dictionary and then change that object, you'll be in trouble because the dictionary relies on the hashcode to find the object.\nHere's a good article on the topic: http://weblogs.asp.net/bleroy/archive/2004/12/15/316601.aspx\n",
"@David Schlosnagle mentions mentions Josh Bloch's Effective Java -- this is a must-read for any Java developer.\nThere is a related issue: for immutable value objects, you should also consider overriding compare_to. The standard wording for if they differ is in the Comparable API:\n\nIt is generally the case, but not strictly required that (compare(x, y)==0) == (x.equals(y)). Generally speaking, any comparator that violates this condition should clearly indicate this fact. The recommended language is \"Note: this comparator imposes orderings that are inconsistent with equals.\"\n\n",
"The Equals method is intended to compare references. So it should not be overriden to change its behaviour.\nYou should create a new method to test for equivalence in different instances if you need to (or use the CompareTo method in some .NET classes)\n",
"You should only need to override the equals() method if you want specific behaviour when adding objects to sorted data structures (SortedSet etc.)\nWhen you do that you should also override hashCode().\nSee here for a complete explanation.\n",
"To be honest, in Java there is not really an argument against overriding equals. If you need to compare instances for equality, then that is what you do. \nAs mentioned above, you need to be aware of the contract with hashCode, and similarly, watch out for the gotchas around the Comparable interface - in almost all situations you want the natural ordering as defined by Comparable to be consistent with equals (see the BigDecimal api doc for the canonical counter example)\nCreating a new method for deciding equality, quite apart from not working with the existing library classes, flies in the face of Java convention somewhat.\n"
] | [
19,
8,
4,
2,
0,
0,
0
] | [] | [] | [
"java",
"oop"
] | stackoverflow_0000016557_java_oop.txt |
Q:
Only accepting certain ajax requests from authenticated users
What's the best practice for making sure that certain ajax calls to certain pages are only accepted from authenticated users?
For example:
Let's say that I have a main page called blog.php (I know, creativity abounds). Let's also say that there is a page called delete.php which looks for the parameter post_id and then deletes some entry from a database.
In this very contrived example, there's some mechanism on blog.php which sends a request via ajax to delete.php to delete an entry.
Now this mechanism is only going to be available to authenticated users on blog.php. But what's to stop someone from just calling delete.php with a bunch of random numbers and deleting everything in site?
I did a quick test where I set a session variable in blog.php and then did an ajax call to delete.php to return if the session variable was set or not (it wasn't).
What's the accepted way to handle this sort of thing?
OK. I must have been crazy the first time I tried this.
I just did another test like the one I described above and it worked perfectly.
A:
You were correct in trying to use session variables. Once your user authenticates, you should store that information in their session so that each subsequent page view will see that. Make sure you are calling session_start() on both pages (blog.php and delete.php) before accessing $_SESSION. Also make sure you have cookies enabled -- and if not, you should pass an additional parameter in the query string, usually PHPSESSID=<session_id()>.
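A bare-bones sketch of what the top of delete.php could look like (the session key name is an assumption):
<?php
session_start();

// Reject the AJAX request outright unless the user authenticated earlier.
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

// ...safe to read post_id and delete the entry from here on...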
A:
It is not recommended that you rely on sessions for authentication without taking additional actions.
Read more on.
| Only accepting certain ajax requests from authenticated users | What's the best practice for making sure that certain ajax calls to certain pages are only accepted from authenticated users?
For example:
Let's say that I have a main page called blog.php (I know, creativity abounds). Let's also say that there is a page called delete.php which looks for the parameter post_id and then deletes some entry from a database.
In this very contrived example, there's some mechanism on blog.php which sends a request via ajax to delete.php to delete an entry.
Now this mechanism is only going to be available to authenticated users on blog.php. But what's to stop someone from just calling delete.php with a bunch of random numbers and deleting everything in site?
I did a quick test where I set a session variable in blog.php and then did an ajax call to delete.php to return if the session variable was set or not (it wasn't).
What's the accepted way to handle this sort of thing?
OK. I must have been crazy the first time I tried this.
I just did another test like the one I described above and it worked perfectly.
| [
"You were correct in trying to use session variables. Once your user authenticates, you should store that information in their session so that each subsequent page view will see that. Make sure you are calling session_start() on both pages (blog.php and delete.php) before accessing $_SESSION. Also make sure you have cookies enabled -- and if not, you should pass an additional parameter in the query string, usually PHPSESSID=<session_id()>.\n",
"It is not recommended that you rely on sessions for authentication without taking additional actions. \nRead more on.\n"
] | [
7,
2
] | [] | [] | [
"ajax",
"authentication",
"php"
] | stackoverflow_0000015366_ajax_authentication_php.txt |
Q:
Difference between a byte array and MemoryStream
I am reading a binary file into a parsing program. I will need to iterate through the file and look for certain markers so I can split the file up and pass those parts into their respective object’s constructors.
Is there an advantage to holding the file as a stream, either MemoryStream or FileStream, or should it be converted into a byte[] array?
Keith
A:
A byte[] or MemoryStream will both require bringing the entire file into memory. A MemoryStream is really a wrapper around an underlying byte array. The best approach is to have two FileStream (one for input and one for output). Read from the input stream looking for the pattern used to indicate the file should be separated while writing to the current output file.
You may want to consider wrapping the input and output files in a BinaryReader and BinaryWriter respectively if they add value to your scenario.
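A rough sketch of that two-stream approach, assuming a single-byte marker (the marker value and output file names are placeholders; buffered reads would be faster in practice):
using System.IO;

static void SplitOnMarker(string inputPath, byte marker)
{
    int part = 0;
    using (FileStream input = File.OpenRead(inputPath))
    {
        FileStream output = File.Create("part" + part + ".bin");
        try
        {
            int b;
            while ((b = input.ReadByte()) != -1) // -1 signals end of file
            {
                if (b == marker)
                {
                    // Marker hit: close the current part and start the next one.
                    output.Dispose();
                    output = File.Create("part" + (++part) + ".bin");
                }
                else
                {
                    output.WriteByte((byte)b);
                }
            }
        }
        finally
        {
            output.Dispose();
        }
    }
}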
A:
A MemoryStream is basically a byte array with a stream interface, e.g. sequential reading/writing and the concept of a current position.
| Difference between a byte array and MemoryStream | I am reading a binary file into a parsing program. I will need to iterate through the file and look for certain markers so I can split the file up and pass those parts into their respective object’s constructors.
Is there an advantage to holding the file as a stream, either MemoryStream or FileStream, or should it be converted into a byte[] array?
Keith
| [
"A byte[] or MemoryStream will both require bringing the entire file into memory. A MemoryStream is really a wrapper around an underlying byte array. The best approach is to have two FileStream (one for input and one for output). Read from the input stream looking for the pattern used to indicate the file should be separated while writing to the current output file.\nYou may want to consider wrapping the input and output files in a BinaryReader and BinaryWriter respectively if they add value to your scenario.\n",
"A MemoryStream is basically a byte array with a stream interface, e.g. sequential reading/writing and the concept of a current position.\n"
] | [
27,
11
] | [] | [] | [
"comparison",
"performance"
] | stackoverflow_0000016939_comparison_performance.txt |
Q:
Google Maps API - Problems with class GLatLngBounds
I am having some trouble with the Google Maps API. I have an array which holds an object I created to store points.
My array and class:
var tPoints = [];
function tPoint(name) {
var id = name;
var points = [];
var pointsCount = 0;
...
this.getHeadPoint = function() { return points[pointsCount-1]; }
}
tPoint holds an array of GLatLng points. I want to write a function to return a GLatLngBounds object which is extended from the current map bounds to show all the HeadPoints.
Heres what I have so far..
function getBounds() {
var mBound = map.getBounds();
for (var i = 0; i < tPoints.length; i++) {
alert(mBound.getSouthWest().lat() + "," + mBound.getSouthWest().lng());
alert(mBound.getNorthEast().lat() + "," + mBound.getNorthEast().lng());
currPoint = trackMarkers[i].getHeadPoint();
if (!mBound.containsLatLng(currPoint)) {
mBound.extend(currPoint);
}
}
return mBound;
}
Which returns these values for the alert. (Generally over the US)
"19.64258,NaN" "52.69636,NaN" "i=0"
"19.64258,NaN" "52.69636,-117.20701" "i=1"
I don't know why I am getting NaN back.
When I use the bounds to get a zoom level I think the NaN value is causing the map.getBoundsZoomLevel(bounds) to return 0 which is incorrect. Am I using GLatLngBounds incorrectly?
A:
The google maps sample is using this code...
var bounds = map.getBounds();
var southWest = bounds.getSouthWest();
var northEast = bounds.getNorthEast();
var lngSpan = northEast.lng() - southWest.lng();
var latSpan = northEast.lat() - southWest.lat();
...which is putting the SouthWest/NorthEast bounds into a variable before attempting to get the individual lng/lat coordinates. Maybe there is something with the "nested" evaluations causing problems. Have you tried the granular approach to see if you get the data you need?
A:
I found that example through my Google searches too and did play with it. That wasn't the problem.
I found my bug. No one would have been able to solve the problem. It turns out that right before I test my bounds I had centered my map with bad data. I did something like the lngSpan = northEast.lng() - southWest.lng(); however JavaScript interpreted my var as a string. So (maxLng-minLng)/2 + minLng returns something like "20.456-116.1178" as the lng. I centered my map on var centerPoint = new GLatLng(setLat, setLng); and after that the maps API gets a little strange ;)
Thanks for the help though.
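For anyone else who hits this, the fix amounts to coercing the stored values to numbers before doing the arithmetic; a sketch (the lat pair mirrors the lng variables mentioned above):
// parseFloat stops "+" from being treated as string concatenation
var setLat = (parseFloat(maxLat) - parseFloat(minLat)) / 2 + parseFloat(minLat);
var setLng = (parseFloat(maxLng) - parseFloat(minLng)) / 2 + parseFloat(minLng);
var centerPoint = new GLatLng(setLat, setLng);
map.setCenter(centerPoint, map.getBoundsZoomLevel(getBounds()));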
| Google Maps API - Problems with class GLatLngBounds | I am having some trouble with the Google Maps API. I have an array which holds an object I created to store points.
My array and class:
var tPoints = [];
function tPoint(name) {
var id = name;
var points = [];
var pointsCount = 0;
...
this.getHeadPoint = function() { return points[pointsCount-1]; }
}
tPoint holds an array of GLatLng points. I want to write a function to return a GLatLngBounds object which is extended from the current map bounds to show all the HeadPoints.
Heres what I have so far..
function getBounds() {
var mBound = map.getBounds();
for (var i = 0; i < tPoints.length; i++) {
alert(mBound.getSouthWest().lat() + "," + mBound.getSouthWest().lng());
alert(mBound.getNorthEast().lat() + "," + mBound.getNorthEast().lng());
currPoint = trackMarkers[i].getHeadPoint();
if (!mBound.containsLatLng(currPoint)) {
mBound.extend(currPoint);
}
}
return mBound;
}
Which returns these values for the alert. (Generally over the US)
"19.64258,NaN" "52.69636,NaN" "i=0"
"19.64258,NaN" "52.69636,-117.20701" "i=1"
I don't know why I am getting NaN back.
When I use the bounds to get a zoom level I think the NaN value is causing the map.getBoundsZoomLevel(bounds) to return 0 which is incorrect. Am I using GLatLngBounds incorrectly?
| [
"The google maps sample is using this code...\nvar bounds = map.getBounds();\nvar southWest = bounds.getSouthWest();\nvar northEast = bounds.getNorthEast();\nvar lngSpan = northEast.lng() - southWest.lng();\nvar latSpan = northEast.lat() - southWest.lat();\n\n...which is putting the SouthWest/NorthEast bounds into a variable before attempting to get the individual lng/lat coordinates. Maybe there is something with the \"nested\" evaluations causing problems. Have tried the granular approach to see if you get the data you need?\n",
"I found that example through my Google searches too and did play with it. That wasn't the problem.\nI found my bug. No one would have been able to solve the problem. It turns out that right before I test my bounds I had centered my map with bad data. I did something like the lngSpan = northEast.lng() - southWest.lng(); however JavaScript interpreted my var as a string. So (maxLng-minLng)/2 + minLng returns something like \"20.456-116.1178\" as the lng. I centered my map on var centerPoint = new GLatLng(setLat, setLng); and after that the maps API gets a little strange ;)\nThanks for the help though.\n"
] | [
1,
1
] | [] | [] | [
"google_maps",
"javascript"
] | stackoverflow_0000016638_google_maps_javascript.txt |
Q:
Can placement new for arrays be used in a portable way?
Is it possible to actually make use of placement new in portable code when using it for arrays?
It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case.
The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption:
#include <new>
#include <stdio.h>
class A
{
public:
A() : data(0) {}
virtual ~A() {}
int data;
};
int main()
{
const int NUMELEMENTS=20;
char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
A *pA = new(pBuffer) A[NUMELEMENTS];
// With VC++, pA will be four bytes higher than pBuffer
printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);
// Debug runtime will assert here due to heap corruption
delete[] pBuffer;
return 0;
}
Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap.
So the question is can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
A:
Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example:
int main(int argc, char* argv[])
{
const int NUMELEMENTS=20;
char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
A *pA = (A*)pBuffer;
for(int i = 0; i < NUMELEMENTS; ++i)
{
    new (pA + i) A();
}
printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);
// dont forget to destroy!
for(int i = 0; i < NUMELEMENTS; ++i)
{
pA[i].~A();
}
delete[] pBuffer;
return 0;
}
Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks ;)
Note: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still indicates the point :) Hope it helps in some way!
Edit:
The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.
A:
@Derek
5.3.4, section 12 talks about the array allocation overhead and, unless I'm misreading it, it seems to suggest to me that it is valid for the compiler to add it on placement new as well:
This overhead may be applied in all array new-expressions, including those referencing the library function operator new[](std::size_t, void*) and other placement allocation functions. The amount of overhead may vary from one invocation of new to another.
That said, I think VC was the only compiler that gave me trouble with this, out of it, GCC, Codewarrior and ProDG. I'd have to check again to be sure, though.
A:
@James
I'm not even really clear why it needs the additional data, as you wouldn't call delete[] on the array anyway, so I don't entirely see why it needs to know how many items are in it.
After giving this some thought, I agree with you. There is no reason why placement new should need to store the number of elements, because there is no placement delete. Since there's no placement delete, there's no reason for placement new to store the number of elements.
I also tested this with gcc on my Mac, using a class with a destructor. On my system, placement new was not changing the pointer. This makes me wonder if this is a VC++ issue, and whether this might violate the standard (the standard doesn't specifically address this, so far as I can find).
A:
Thanks for the replies. Using placement new for each item in the array was the solution I ended up using when I ran into this (sorry, should have mentioned that in the question). I just felt that there must have been something I was missing about doing it with placement new[]. As it is, it seems like placement new[] is essentially unusable thanks to the standard allowing the compiler to add an additional unspecified overhead to the array. I don't see how you could ever use it safely and portably.
I'm not even really clear why it needs the additional data, as you wouldn't call delete[] on the array anyway, so I don't entirely see why it needs to know how many items are in it.
A:
Placement new itself is portable, but the assumptions you make about what it does with a specified block of memory are not portable. Like what was said before, if you were a compiler and were given a chunk of memory, how would you know how to allocate an array and properly destruct each element if all you had was a pointer? (See the interface of operator delete[].)
Edit:
And there actually is a placement delete, only it is only called when a constructor throws an exception while allocating an array with placement new[].
Whether new[] actually needs to keep track of the number of elements somehow is something that is left up to the standard, which leaves it up to the compiler. Unfortunately, in this case.
A:
Similar to how you would use a single element to calculate the size for one placement-new, use an array of those elements to calculate the size required for an array.
If you require the size for other calculations where the number of elements may not be known you can use sizeof(A[1]) and multiply by your required element count.
e.g
char *pBuffer = new char[ sizeof(A[NUMELEMENTS]) ];
A *pA = (A*)pBuffer;
for(int i = 0; i < NUMELEMENTS; ++i)
{
    new (pA + i) A();
}
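If you go this route, the matching cleanup is needed before freeing the buffer. A minimal sketch, assuming the loop above has run:
for(int i = NUMELEMENTS - 1; i >= 0; --i)
{
    pA[i].~A(); // placement new has no matching delete[], so call each destructor explicitly
}
delete[] pBuffer; // now release the raw storage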
A:
I think gcc does the same thing as MSVC, but of course this doesn't make it "portable".
I think you can work around the problem when NUMELEMENTS is indeed a compile time constant, like so:
typedef A Arr[NUMELEMENTS];
A* p = new (buffer) Arr;
This should use the scalar placement new.
| Can placement new for arrays be used in a portable way? | Is it possible to actually make use of placement new in portable code when using it for arrays?
It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case.
The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption:
#include <new>
#include <stdio.h>
class A
{
public:
A() : data(0) {}
virtual ~A() {}
int data;
};
int main()
{
const int NUMELEMENTS=20;
char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
A *pA = new(pBuffer) A[NUMELEMENTS];
// With VC++, pA will be four bytes higher than pBuffer
printf("Buffer address: %x, Array address: %x\n", pBuffer, pA);
// Debug runtime will assert here due to heap corruption
delete[] pBuffer;
return 0;
}
Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only sizeof(A)*NUMELEMENTS big, the last element in the array is written into unallocated heap.
So the question is can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
| [
"Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example:\nint main(int argc, char* argv[])\n{\n const int NUMELEMENTS=20;\n\n char *pBuffer = new char[NUMELEMENTS*sizeof(A)];\n A *pA = (A*)pBuffer;\n\n for(int i = 0; i < NUMELEMENTS; ++i)\n {\n pA[i] = new (pA + i) A();\n }\n\n printf(\"Buffer address: %x, Array address: %x\\n\", pBuffer, pA);\n\n // dont forget to destroy!\n for(int i = 0; i < NUMELEMENTS; ++i)\n {\n pA[i].~A();\n } \n\n delete[] pBuffer;\n\n return 0;\n}\n\nRegardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks ;)\nNote: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still indicates the point :) Hope it helps in some way!\n\nEdit:\nThe reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.\n",
"@Derek\n5.3.4, section 12 talks about the array allocation overhead and, unless I'm misreading it, it seems to suggest to me that it is valid for the compiler to add it on placement new as well:\n\nThis overhead may be applied in all array new-expressions, including those referencing the library function operator new[](std::size_t, void*) and other placement allocation functions. The amount of overhead may vary from one invocation of new to another.\n\nThat said, I think VC was the only compiler that gave me trouble with this, out of it, GCC, Codewarrior and ProDG. I'd have to check again to be sure, though.\n",
"@James\n\nI'm not even really clear why it needs the additional data, as you wouldn't call delete[] on the array anyway, so I don't entirely see why it needs to know how many items are in it.\n\nAfter giving this some thought, I agree with you. There is no reason why placement new should need to store the number of elements, because there is no placement delete. Since there's no placement delete, there's no reason for placement new to store the number of elements.\nI also tested this with gcc on my Mac, using a class with a destructor. On my system, placement new was not changing the pointer. This makes me wonder if this is a VC++ issue, and whether this might violate the standard (the standard doesn't specifically address this, so far as I can find).\n",
"Thanks for the replies. Using placement new for each item in the array was the solution I ended up using when I ran into this (sorry, should have mentioned that in the question). I just felt that there must have been something I was missing about doing it with placement new[]. As it is, it seems like placement new[] is essentially unusable thanks to the standard allowing the compiler to add an additional unspecified overhead to the array. I don't see how you could ever use it safely and portably.\nI'm not even really clear why it needs the additional data, as you wouldn't call delete[] on the array anyway, so I don't entirely see why it needs to know how many items are in it.\n",
"Placement new itself is portable, but the assumptions you make about what it does with a specified block of memory are not portable. Like what was said before, if you were a compiler and were given a chunk of memory, how would you know how to allocate an array and properly destruct each element if all you had was a pointer? (See the interface of operator delete[].)\nEdit:\nAnd there actually is a placement delete, only it is only called when a constructor throws an exception while allocating an array with placement new[].\nWhether new[] actually needs to keep track of the number of elements somehow is something that is left up to the standard, which leaves it up to the compiler. Unfortunately, in this case.\n",
"Similar to how you would use a single element to calculate the size for one placement-new, use an array of those elements to calculate the size required for an array.\nIf you require the size for other calculations where the number of elements may not be known you can use sizeof(A[1]) and multiply by your required element count.\ne.g\nchar *pBuffer = new char[ sizeof(A[NUMELEMENTS]) ];\nA *pA = (A*)pBuffer;\n\nfor(int i = 0; i < NUMELEMENTS; ++i)\n{\n pA[i] = new (pA + i) A();\n}\n\n",
"I think gcc does the same thing as MSVC, but of course this doesn't make it \"portable\".\nI think you can work around the problem when NUMELEMENTS is indeed a compile time constant, like so:\n\ntypedef A Arr[NUMELEMENTS];\nA* p = new (buffer) Arr;\n\nThis should use the scalar placement new.\n"
] | [
34,
5,
4,
3,
3,
2,
1
] | [] | [] | [
"arrays",
"c++",
"compiler_construction",
"overhead",
"portability"
] | stackoverflow_0000015254_arrays_c++_compiler_construction_overhead_portability.txt |
Q:
How do I reset an increment identity's starting value in SQL Server
I would like to have a nice template for doing this in development. How do I reset an increment identity's starting value in SQL Server?
A:
DBCC CHECKIDENT('TableName', RESEED, 0)
A:
Just a word of warning with:
DBCC CHECKIDENT (MyTable, RESEED, 0)
If you did not truncate the table, and the identity column is the PK, you will get an error when reaching pre-existing identites.
For example, you have identities (3,4,5) in the table already. You then reseed the identity to 0, so the first two inserts that follow receive identities 1 and 2; the insert after that will try to use identity 3, which will fail.
A:
To set the identity to 100:
DBCC CHECKIDENT (MyTable, RESEED, 100)
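Putting it together, a minimal T-SQL sketch for resetting a development table (the table name here is hypothetical):
-- Fastest option: remove all rows; TRUNCATE also resets the identity to its seed
TRUNCATE TABLE MyTable;

-- If TRUNCATE is not allowed (e.g. the table is referenced by foreign keys),
-- delete the rows and reseed explicitly; the next insert then receives identity 1
DELETE FROM MyTable;
DBCC CHECKIDENT ('MyTable', RESEED, 0);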
| How do I reset an increment identity's starting value in SQL Server | I would like to have a nice template for doing this in development. How do I reset an increment identity's starting value in SQL Server?
| [
"DBCC CHECKIDENT('TableName', RESEED, 0)\n\n",
"Just a word of warning with:\nDBCC CHECKIDENT (MyTable, RESEED, 0)\nIf you did not truncate the table, and the identity column is the PK, you will get an error when reaching pre-existing identites.\nFor example, you have identities (3,4,5) in the table already. You then reset the identity column to 1. After the identity 2 is inserted, the next insert will try to use the identity 3, which will fail.\n",
"To set the identity to 100:\nDBCC CHECKIDENT (MyTable, RESEED, 100)\n\n"
] | [
127,
34,
17
] | [] | [] | [
"identity",
"sql_server"
] | stackoverflow_0000016971_identity_sql_server.txt |
Q:
What is the best way to rename (move) file system branches in .NET?
I would like to rename files and folders recursively by applying a string replacement operation.
E.g. The word "shark" in files and folders should be replaced by the word "orca".
C:\Program Files\Shark Tools\Wire Shark\Sharky 10\Shark.exe
should be moved to:
C:\Program Files\Orca Tools\Wire Orca\Orcay 10\Orca.exe
The same operation should be of course applied to each child object in each folder level as well.
I was experimenting with some of the members of the System.IO.FileInfo and System.IO.DirectoryInfo classes but didn't find an easy way to do it.
fi.MoveTo(fi.FullName.Replace("shark", "orca"));
Doesn't do the trick.
I was hoping there is some kind of "genius" way to perform this kind of operation.
A:
So you would use recursion. Here is a powershell example that should be easy to convert to C#:
function Move-Stuff($folder)
{
foreach($sub in [System.IO.Directory]::GetDirectories($folder))
{
Move-Stuff $sub
}
$new = $folder.Replace("Shark", "Orca")
if(!(Test-Path($new)))
{
new-item -path $new -type directory
}
foreach($file in [System.IO.Directory]::GetFiles($folder))
{
$new = $file.Replace("Shark", "Orca")
move-item $file $new
}
}
Move-Stuff "C:\Temp\Test"
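For reference, a minimal C# translation of the same recursion might look like this (an untested sketch; like the PowerShell version, it leaves the emptied source directories behind):
using System.IO;

class Renamer
{
    // Recursively walk the tree: rename subfolders first, then create the
    // renamed folder and move each file into its renamed path.
    static void MoveStuff(string folder)
    {
        foreach (string sub in Directory.GetDirectories(folder))
        {
            MoveStuff(sub);
        }

        string newFolder = folder.Replace("Shark", "Orca");
        if (!Directory.Exists(newFolder))
        {
            Directory.CreateDirectory(newFolder); // also creates missing parents
        }

        foreach (string file in Directory.GetFiles(folder))
        {
            File.Move(file, file.Replace("Shark", "Orca"));
        }
    }

    static void Main()
    {
        MoveStuff(@"C:\Temp\Test");
    }
}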
A:
string oldPath = "\\shark.exe";
string newPath = oldPath.Replace("shark", "orca");
System.IO.File.Move(oldPath, newPath);
Fill in with your own full paths
| What is the best way to rename (move) file system branches in .NET? | I would like to rename files and folders recursively by applying a string replacement operation.
E.g. The word "shark" in files and folders should be replaced by the word "orca".
C:\Program Files\Shark Tools\Wire Shark\Sharky 10\Shark.exe
should be moved to:
C:\Program Files\Orca Tools\Wire Orca\Orcay 10\Orca.exe
The same operation should be of course applied to each child object in each folder level as well.
I was experimenting with some of the members of the System.IO.FileInfo and System.IO.DirectoryInfo classes but didn't find an easy way to do it.
fi.MoveTo(fi.FullName.Replace("shark", "orca"));
Doesn't do the trick.
I was hoping there is some kind of "genius" way to perform this kind of operation.
| [
"So you would use recursion. Here is a powershell example that should be easy to convert to C#:\nfunction Move-Stuff($folder)\n{\n foreach($sub in [System.IO.Directory]::GetDirectories($folder))\n {\n Move-Stuff $sub\n }\n $new = $folder.Replace(\"Shark\", \"Orca\")\n if(!(Test-Path($new)))\n {\n new-item -path $new -type directory\n }\n foreach($file in [System.IO.Directory]::GetFiles($folder))\n {\n $new = $file.Replace(\"Shark\", \"Orca\")\n move-item $file $new\n }\n}\n\nMove-Stuff \"C:\\Temp\\Test\"\n\n",
"string oldPath = \"\\\\shark.exe\"\nstring newPath = oldPath.Replace(\"shark\", \"orca\");\n\nSystem.IO.File.Move(oldPath, newPath);\n\nFill in with your own full paths\n"
] | [
1,
0
] | [] | [] | [
"directory",
"file",
"system.io.fileinfo"
] | stackoverflow_0000016945_directory_file_system.io.fileinfo.txt |
Q:
SSRS - Uninstall Trial Version of VS Business Intelligence
I want to know how to fully uninstall MSSQL 2005.
I've been using the Trial version of SQL Server Reporting Services for a while now. My company finally purchased the software from an online distributor, and for support of Oracle, we needed to upgrade to MSSQL 2005 SP2. Anyway, the "full" version of the software would not install, as it was already installed (It seems the installer doesn't recognize what was installed was the trial version). So I tried uninstalling MSSQL 2005, and everything related (including visual studio), I can not seem to get it reinstalled. The error is a vague error message, and when i click the link to get more information, the usual "no information about this error was found" error.
Microsoft SQL Server 2005 Setup
There was an unexpected failure during
the setup wizard. You may review the
setup logs and/or click the help
button for more information.
For help, click:
http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&ProdVer=9.00.1399.06&EvtSrc=setup.rll&EvtID=50000&EvtType=packageengine%5cinstallpackageaction.cpp%40InstallToolsAction.11%40sqls%3a%3aInstallPackageAction%3a%3aperform%400x643
BUTTONS:
OK
A:
@Mark Struzinski
I actually discovered that it was a problem with the installer when installing the "Full Version". Since the product was downloaded instead of delivered on CD/DVD, the installer was looking for information in a path that was not correct. There was an MS Knowledge Base article on the topic. Thanks for your reply, though
A:
I had the exact same problem, and this article helped me clean up all the related files from my system and do a fresh install of both Visual Studio and the SQL client components. Give it a try and let me know if it helps you out:
http://support.citrix.com/article/CTX115270
| SSRS - Uninstall Trial Version of VS Business Intelligence | I want to know how to fully uninstall MSSQL 2005.
I've been using the Trial version of SQL Server Reporting Services for a while now. My company finally purchased the software from an online distributor, and for support of Oracle, we needed to upgrade to MSSQL 2005 SP2. Anyway, the "full" version of the software would not install, as it was already installed (It seems the installer doesn't recognize what was installed was the trial version). So I tried uninstalling MSSQL 2005, and everything related (including visual studio), I can not seem to get it reinstalled. The error is a vague error message, and when i click the link to get more information, the usual "no information about this error was found" error.
Microsoft SQL Server 2005 Setup
There was an unexpected failure during
the setup wizard. You may review the
setup logs and/or click the help
button for more information.
For help, click:
http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&ProdVer=9.00.1399.06&EvtSrc=setup.rll&EvtID=50000&EvtType=packageengine%5cinstallpackageaction.cpp%40InstallToolsAction.11%40sqls%3a%3aInstallPackageAction%3a%3aperform%400x643
BUTTONS:
OK
| [
"@Mark Struzinski\nI actually discovered that it was a problem with the installer, when installing the \"Full Version\". I discovered, since the product was downloaded, instead of delivered on CD/DVD, that the installer was looking for information in a path that was not correct. There was a MS Knowledge Base article on the topic. Thanks for your reply, tho\n",
"I had the exact same problem, and this article helped me clean up all the related files from my system and do a fresh install of both Visual Studio and the SQL client components. Give it a try and let me know if it helps you out:\nhttp://support.citrix.com/article/CTX115270\n"
] | [
1,
0
] | [] | [] | [
"reporting_services",
"sql_server",
"visual_studio"
] | stackoverflow_0000015632_reporting_services_sql_server_visual_studio.txt |
Q:
Accessing html parameter in PHP
I'm trying to do a simple test php script for sessions. Basically it increments a counter (stored in $_SESSION) every time you refresh that page. That works, but I'm trying to have a link to destroy the session which reloads the page with the ?destroy=1 parameter. I've tried a couple of if statements to see if that parameter is set and if so to destroy the session but it doesn't seem to work.
I've even put an if statement in the main body to pop-up a message if the parameter is set - but it doesn't seem to be picked up.
I know I'm doing something silly (I'm a PHP newbie) but I can't seem to find what it is...
See code here:
<?php
if ($_POST['destroy']) {
session_destroy();
} else {
session_start();
}
?>
<html>
<head>
<title>Session test</title>
</head>
<body>
<?php
if (isset($_POST['destroy'])) {
echo "Destroy set";
}
$_SESSION['counter']++;
echo "You have visited this page " . $_SESSION['counter'] . " times" . "<BR>";
echo "I am tracking you using the session id " . session_id() . "<BR>";
echo "Click <a href=\"" . $_SERVER['PHP_SELF'] . "?destroy=1\">here</a> to destroy the session.";
?>
A:
I think you put
$_POST['destroy']
Instead of
$_GET['destroy']
You need to use a form if you'd like to use a $_POST variable. $_GET variables are stored in the URL.
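For completeness, here is a corrected sketch of the original script using $_GET (the redirect after destroying is just one way to avoid re-running the destroy on a refresh):
<?php
session_start(); // start (or resume) the session before touching $_SESSION

if (isset($_GET['destroy'])) {
    $_SESSION = array();                          // clear the in-memory data
    session_destroy();                            // remove the stored session
    header('Location: ' . $_SERVER['PHP_SELF']);  // reload without ?destroy=1
    exit;
}

if (!isset($_SESSION['counter'])) {
    $_SESSION['counter'] = 0;
}
$_SESSION['counter']++;

echo "You have visited this page " . $_SESSION['counter'] . " times<br>";
echo "Click <a href=\"" . $_SERVER['PHP_SELF'] . "?destroy=1\">here</a> to destroy the session.";
?>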
A:
By the way you can use
$_REQUEST['destroy']
which works regardless of whether the data is passed in a POST or a GET request.
A:
In the PHP Manual it has code snippet for destroying a session.
session_start();
$_SESSION = array();
if (isset($_COOKIE[session_name()])) {
setcookie(session_name(), '', time()-42000, '/');
}
session_destroy();
A:
Yeah, you're going to want to do
if( $_GET['destroy'] == 1 )
or
if( isset($_GET['destroy']) )
A:
I know I'm doing something silly (I'm a php newbie) but I can't seem to find what it is...
that is how you are going to learn a lot ;) enjoy it ...
| Accessing html parameter in PHP | I'm trying to do a simple test php script for sessions. Basically it increments a counter (stored in $_SESSION) every time you refresh that page. That works, but I'm trying to have a link to destroy the session which reloads the page with the ?destroy=1 parameter. I've tried a couple of if statements to see if that parameter is set and if so to destroy the session but it doesn't seem to work.
I've even put an if statement in the main body to pop-up a message if the parameter is set - but it doesn't seem to be picked up.
I know I'm doing something silly (I'm a PHP newbie) but I can't seem to find what it is...
See code here:
<?php
if ($_POST['destroy']) {
session_destroy();
} else {
session_start();
}
?>
<html>
<head>
<title>Session test</title>
</head>
<body>
<?php
if (isset($_POST['destroy'])) {
echo "Destroy set";
}
$_SESSION['counter']++;
echo "You have visited this page " . $_SESSION['counter'] . " times" . "<BR>";
echo "I am tracking you using the session id " . session_id() . "<BR>";
echo "Click <a href=\"" . $_SERVER['PHP_SELF'] . "?destroy=1\">here</a> to destroy the session.";
?>
| [
"I think you put\n$_POST['destroy']\n\nInstead of\n$_GET['destroy']\n\nYou need to use a form if you'd like to use a $_POST variable. $_GET variables are stored in the URL.\n",
"By the way you can use \n\n$_REQUEST['destroy'] \n\nwhich would work regardless if the data is passed in a POST or a GET request.\n",
"In the PHP Manual it has code snippet for destroying a session.\nsession_start();\n$_SESSION = array();\nif (isset($_COOKIE[session_name()])) {\n setcookie(session_name(), '', time()-42000, '/');\n}\nsession_destroy();\n\n",
"Yeah, you're going to want to do \nif( $_GET['destroy'] == 1 )\n\nor\nif( isset($_GET['destroy']) )\n\n",
"\nI know I'm doing something silly (I'm a php newbie) but I can't seem to find what it is...\n\nthat is how you are going to learn a lot ;) enjoy it ...\n"
] | [
7,
4,
2,
1,
0
] | [] | [] | [
"php",
"session"
] | stackoverflow_0000016815_php_session.txt |
Q:
Simplest way to make a Google Map mashup?
Given a list of locations such as
<td>El Cerrito, CA</td>
<td>Corvallis, OR</td>
<td>Morganton, NC</td>
<td>New York, NY</td>
<td>San Diego, CA</td>
What's the easiest way to generate a Google Map with pushpins for each location?
A:
I'm assuming you have the basics for Maps in your code already with your API Key.
<head>
  <script type="text/javascript"
    src="http://maps.google.com/maps?file=api&v=2&key=xxxxx">
  </script>
  <script type="text/javascript">
    function createMap() {
      var map = new GMap2(document.getElementById("map"));
      map.setCenter(new GLatLng(37.44, -122.14), 14);
    }
  </script>
</head>
<body onload="createMap()" onunload="GUnload()">
  <div id="map" style="width: 500px; height: 400px"></div>
Everything in Google Maps is based off of latitude (lat) and longitude (lng).
So to create a simple marker you will just create a GMarker with the lat and lng.
var where = new GLatLng(37.925243,-122.307358); //Lat and Lng for El Cerrito, CA
var marker = new GMarker(where); // Create marker (Pinhead thingy)
map.setCenter(where); // Center map on marker
map.addOverlay(marker); // Add marker to map
However if you don't want to look up the Lat and Lng for each city you can use Google's Geo Coder. Heres an example:
var address = "El Cerrito, CA";
var geocoder = new GClientGeocoder();
geocoder.getLatLng(address, function(point) {
if (point) {
map.clearOverlays(); // Clear all markers
map.addOverlay(new GMarker(point)); // Add marker to map
map.setCenter(point, 10); // Center and zoom map on marker
}
});
So I would just create an array of GLatLng's of every city from the GeoCoder and then draw them on the map.
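For example, a sketch that geocodes the whole list and drops a marker for each hit (Maps API v2; note that the v2 geocoder throttles rapid-fire requests, so a production version may need to queue them):
var cities = ["El Cerrito, CA", "Corvallis, OR", "Morganton, NC",
              "New York, NY", "San Diego, CA"];
var geocoder = new GClientGeocoder();

for (var i = 0; i < cities.length; i++) {
  geocoder.getLatLng(cities[i], function(point) {
    if (point) {
      map.addOverlay(new GMarker(point)); // pin each city; map was created earlier
    }
  });
}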
A:
Check out the Google Maps API Examples
They make it pretty simple and their API documentation is great.
Most of the examples are for doing all the code in JavaScript on the client side, but there are APIs for other languages available as well.
A:
I guess more information would be needed to really give you an answer, but over at Django Pluggables there is a django-googlemap plugin that might be of help.
Edit: Adam has a much better answer. When in doubt, look at the API examples.
A:
Try this: http://www.google.com/uds/solutions/wizards/mapsearch.html
It's a Google Maps wizard which will generate the code for you. It's not the best fit for your application, but a good place to "get your feet wet" ;)
Edit: (found the link), here's a good Google Maps API stepwise tutorial.
Good luck!
/mp
A:
Here are some links, but as with most things I have not got round to trying them yet.
http://gathadams.com/2007/08/21/add-google-maps-to-your-net-site-in-10-minutes/
http://www.mapbuilder.net/
Cheers
John
| Simplest way to make a Google Map mashup? | Given a list of locations such as
<td>El Cerrito, CA</td>
<td>Corvallis, OR</td>
<td>Morganton, NC</td>
<td>New York, NY</td>
<td>San Diego, CA</td>
What's the easiest way to generate a Google Map with pushpins for each location?
| [
"I'm assuming you have the basics for Maps in your code already with your API Key.\n<head>\n <script \n type=\"text/javascript\"\n href=\"http://maps.google.com/maps?\n file=api&v=2&key=xxxxx\">\n function createMap() {\n var map = new GMap2(document.getElementById(\"map\"));\n map.setCenter(new GLatLng(37.44, -122.14), 14);\n }\n </script>\n</head>\n<body onload=\"createMap()\" onunload=\"GUnload()\">\n\nEverything in Google Maps is based off of latitude (lat) and longitude (lng).\nSo to create a simple marker you will just create a GMarker with the lat and lng.\nvar where = new GLatLng(37.925243,-122.307358); //Lat and Lng for El Cerrito, CA\nvar marker = new GMarker(where); // Create marker (Pinhead thingy)\nmap.setCenter(where); // Center map on marker\nmap.addOverlay(marker); // Add marker to map\n\nHowever if you don't want to look up the Lat and Lng for each city you can use Google's Geo Coder. Heres an example:\nvar address = \"El Cerrito, CA\";\nvar geocoder = new GClientGeocoder;\ngeocoder.getLatLng(address, function(point) {\n if (point) {\n map.clearOverlays(); // Clear all markers\n map.addOverlay(new GMarker(point)); // Add marker to map\n map.setCenter(point, 10); // Center and zoom map on marker\n }\n});\n\nSo I would just create an array of GLatLng's of every city from the GeoCoder and then draw them on the map.\n",
"Check out the Google Maps API Examples\nThey make it pretty simple and their API documentation is great.\nMost of the examples are for doing all the code in JavaScript on the client side, but there are APIs for other languages available as well.\n",
"I guess more information would be needed to really give you an answer, but over at Django Pluggables there is a django-googlemap plugin that might be of help.\nEdit: Adam has a much better answer. When it doubt look at the API examples. \n",
"Try this: http://www.google.com/uds/solutions/wizards/mapsearch.html\nIt's a google maps wizard which will generate the code for you. Not the best for your application; but a good place to \"get your feet wet\" ;)\nEdit: (found the link), here's a good Google Maps API stepwise tutorial.\nGood luck!\n/mp\n",
"Here are some links but as with most things i have not got round to trying them yet.\nhttp://gathadams.com/2007/08/21/add-google-maps-to-your-net-site-in-10-minutes/\nhttp://www.mapbuilder.net/\nCheers\nJohn\n"
] | [
11,
8,
1,
1,
1
] | [] | [] | [
"google_maps",
"html"
] | stackoverflow_0000015247_google_maps_html.txt |
Q:
Hyperlinks displaced on IE7
Browse to a webpage with hyperlinks using IE (I am using IE7). Once on the page, enlarge the fonts using Ctrl + mouse wheel. Now when you try to hover over the hyperlinks, they are laterally displaced to the right. To click on a link, I have to move the mouse to the right till the cursor turns into a hand.
Does anyone have a comment on this?
I was browsing the following page.
It is the 2nd hyperlink in the body of the article. (the link text is "here")
A:
IE7 doesn't handle zoom correctly. You can see this error on this page (I mean the page you're reading right now) if you zoom in far enough: view the logout | about link at the top, hover over it, hover off to the right, then back over.
A:
All of the links on that page are displaced to the right in my copy of IE7 (7.0.6001.18000) even before I enlarge or shrink the fonts, whereas other pages act normally. (My test page was http://www.frito-lay.com/fl/flstore/cgi-bin/good_questions.htm.)
It appears to be something specific to the page.
| Hyperlinks displaced on IE7 | Browse to a webpage with hyperlinks using IE (I am using IE7) Once on the page, enlarge the fonts using ctl + mouse wheel. Now when you try to hover over the hyperlinks, they are laterally displaced to the right. To click on the link, i have to move the mouse to the right till the cursor turns into a hand.
Anyone has a comment on this??
I was browsing the following page.
It is the 2nd hyperlink in the body of the article. (the link text is "here")
| [
"IE7 doesn't handle Zoom correctly, You can see this error on this page (I mean the page you're reading right now) if you zoom large enough, view the logout | about link at the top, hover over it, hover off to the right, back over.\n",
"All of the links on that page are displaced to the right on my copy of IE7 (7.0.6001.18000) even before I enlarge or shrink the fonts. Whereas other pages act normally. (My test page was http://www.frito-lay.com/fl/flstore/cgi-bin/good_questions.htm).\nIt appears to be something specific to the page.\n"
] | [
1,
0
] | [] | [] | [
"browser"
] | stackoverflow_0000017269_browser.txt |
Q:
iFrame Best Practices
I have a large, hi-def JavaScript-intensive image banner for a site I'm designing. What is everyone's opinion of using iframes so that you incur the load time only once? Is there a CSS alternative to the iframe?
Feel free to preview the site.
It is very much a work in progress.
A:
I should also have mentioned that I would like the banner rotation to keep moving. When the visitor clicks on a link, the banner rotation starts over. It would be nice if the "animation" kept rotating, regardless of the page the user visits.
Well, in that case I would strongly recommend not doing that. The only real way of achieving that is to have the actual website content in the iframe, which means that you suddenly have lots of negative sides to the site: not being able to bookmark urls easily due to the address bar not changing; accessibility concerns; etc
I think you'll find that most people won't care that it reloads again. Once a visitor lands on your website, they'll marvel at the wonderful banner immediately, and then will continue to ignore it while they browse your site - until an image they haven't seen appears and distracts them away from your content.
Keep the rotation random enough, and with enough images, and people will stop to look at it from whatever page they're on.
A:
I find the main challenge with iFrame headers is resizing. Since the font in your header is of static size, I don't see a problem with using an iFrame. Although I'm not sure if it's really intensive enough to be worth it.
A:
Well, the browser appears to cache all seven banner images upon the first load, and runs them out from the cache (for each subsequent page) thereafter. I don't think you have a problem :D
Try it out with Firebug's Net monitoring tool in Firefox.
A:
While using IFrames as a sort of master page/template for your pages might be a good thing, IFrames have a known negative impact to searchability/SEO.
It might also be unnecessary in the first place because once your images are loaded the first time (and with the large high-def images you have on your site, that would be slow no matter what you do) the images are cached by browsers and will not be reloaded until the user clears their cache or does a Ctrl+F5.
A:
This may work without CSS also, but if you use CSS to load the background and your server is configured correctly, the image should already only be downloaded once.
Usually the browser will request a resource by asking for it only if it has not been modified since the last time it was downloaded. In this case, the only things sent back and forth are the HTTP headers, no content.
If you want to ensure the image is only downloaded once, add an .htaccess or apache2.conf rule to make the image expire a few days into the future so that users will only request it again if their cache is cleared or the content expiration date passes. An .htaccess file is probably excessive in your case, though results may vary.
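For instance, a minimal mod_expires sketch (this assumes the module is enabled; adjust the types and duration to taste):
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 7 days"
  ExpiresByType image/png  "access plus 7 days"
</IfModule>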
A:
You could have it load the main page once, then asynchronously load the other elements when needed (ajax). If you did that, an iFrame would not be necessary. Here is an example of loading only the new material.
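A bare-bones sketch of that approach (the URL and element id here are hypothetical):
var xhr = new XMLHttpRequest(); // older IE needs ActiveXObject instead
xhr.onreadystatechange = function() {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Swap only the content area; the banner keeps animating untouched
    document.getElementById("content").innerHTML = xhr.responseText;
  }
};
xhr.open("GET", "/fragments/about.html", true);
xhr.send(null);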
| iFrame Best Practices | I have a large, hi-def JavaScript-intensive image banner for a site I'm designing. What is everyone's opinion of using iframes so that you incur the load time only once? Is there a CSS alternative to the iframe?
Feel free to preview the site.
It is very much a work in progress.
| [
"\nI should also have mentioned that I would like the banner rotation to keep moving. When the visitor clicks on a link, the banner rotation starts over. It would be nice if the \"animation\" kept rotating, regardless of the page the user visits.Blockquote\n\nWell, in that case I would strongly recommend not doing that. The only real way of achieving that is to have the actual website content in the iframe, which means that you suddenly have lots of negative sides to the site: not being able to bookmark urls easily due to the address bar not changing; accessibility concerns; etc\nI think you'll find that most people won't care that it reloads again. Once a visitor lands on your website, they'll marvel at the wonderful banner immediately, and then will continue to ignore it while they browse your site - until an image they haven't seen appears and distracts them away from your content.\nKeep the rotation random enough, and with enough images, and people will stop to look at it from whatever page they're on.\n",
"I find the main challenge with iFrame headers is resizing. Since the font in your header is of static size, I don't see a problem with using an iFrame. Although I'm not sure if it's really intensive enough to be worth it.\n",
"Well, the browser appears to cache all seven banner images upon the first load, and runs them out from the cache (for each subsequent page) thereafter. I don't think you have a problem :D\nTry it out with Firebug's Net monitoring tool in Firefox.\n",
"While using IFrames as a sort of master page/template for your pages might be a good thing, IFrames have a known negative impact to searchability/SEO. \nIt might also be unnecessary in the first place because once your images are loaded the first time (and with the large high-def images you have on your site, that would be slow no matter what you do) the images are cached by browsers and will not be reloaded until the user clears their cache or does a Ctrl+F5.\n",
"This may work without CSS also, but if you use CSS to load the background and your server is configured correctly, the image should already only be downloaded once.\nUsually the browser will request a resource by asking for it only if it has not been modified since the last time it was downloaded. In this case, the only things sent back and forth are the HTTP headers, no content.\nIf you want to ensure the image is only downloaded once, add an .htacces or an apache2.conf rule to make the image expire a few days into the future so that users will only request it again if their cache is cleared or the content expiration date passes. An .htaccess file is probably too excessive to use in your case, though results may vary.\n",
"You could have it load the main page once, then asynchronously load the other elements when needed (ajax). If you did that, an iFrame would not be necessary. Here is an example of loading only the new material.\n"
] | [
1,
0,
0,
0,
0,
0
] | [] | [] | [
"css",
"html",
"iframe"
] | stackoverflow_0000017289_css_html_iframe.txt |
Q:
Unload a COM control when working in VB6 IDE
Part of my everyday work is maintaining and extending legacy VB6 applications. A common engine is written in C/C++ and VB6 uses these functions in order to improve performance.
When it comes to asynchronous programming, a C interface is not enough and we rely on COM controls to fire events to VB6.
My problem is that when I register the control in VB6, VB loads this control in memory and does not unload it until I quit the VB6 IDE. As the control is loaded the whole time, I am unable to recompile it in VC6, because the DLL file is locked.
A solution I found is not to enable the control in VB but use the CreateObject() with the full name of my control. The problem then is that I must declare my control as an Object because VB6 knows nothing of the interface I am using and I do not have access to IntelliSense, which is a pain.
Any idea how I can tell VB6 to unload controls after quitting the application or directly in the IDE ?
A:
I'm pretty sure there's no good way to force VB6 to unload the control.
Here's what I do... instead of running Visual C and Visual Basic side-by-side, run VB6 under VC :
Load up VC
Open the project containing your COM objects
Edit, change, etc.
In VC, set the Output Executable to be VB6.EXE with appropriate command-line arguments to load the VB6 workspace
Now just hit F5 to launch the VB6 IDE and load your VB6 project
When you want to change the COM code again, exit VB6.EXE, make your changes, and hit F5 again. As long as you save your workspace VB6 will remember what windows you had open and all your project settings.
Advantages of this method:
You can set breakpoints in the COM object and debug it using a full source debugger
You can happily debug in C and VB at the same time
Whenever VB6 is running it always has the latest version of the COM DLLs
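For concreteness, the Output Executable settings described above might look something like this (the paths and project name here are hypothetical):
Executable for debug session:  C:\Program Files\Microsoft Visual Studio\VB98\VB6.EXE
Program arguments:             "C:\Projects\MyApp\MyApp.vbp"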
| Unload a COM control when working in VB6 IDE | Part of my everyday work is maintaining and extending legacy VB6 applications. A common engine is written in C/C++ and VB6 uses these functions in order to improve performance.
When it comes to asynchronous programming, a C interface is not enough and we rely on COM controls to fire events to VB6.
My problem is that when I register the control in VB6, VB loads this control in memory and does not unload it until I quit the VB6 IDE. As the control is loaded the whole time, I am unable to recompile it in VC6, because the DLL file is locked.
A solution I found is not to enable the control in VB but use the CreateObject() with the full name of my control. The problem then is that I must declare my control as an Object because VB6 knows nothing of the interface I am using and I do not have access to IntelliSense, which is a pain.
Any idea how I can tell VB6 to unload controls after quitting the application or directly in the IDE ?
| [
"I'm pretty sure there's no good way to force VB6 to unload the control.\nHere's what I do... instead of running Visual C and Visual Basic side-by-side, run VB6 under VC :\n\nLoad up VC\nOpen the project containing your COM objects\nEdit, change, etc.\nIn VC, set the Output Executable to be VB6.EXE with appropriate command-line arguments to load the VB6 workspace\nNow just hit F5 to launch the VB6 IDE and load your VB6 project\nWhen you want to change the COM code again, exit VB6.EXE, make your changes, and hit F5 again. As long as you save your workspace VB6 will remember what windows you had open and all your project settings.\n\nAdvantages of this method:\n\nYou can set breakpoints in the COM object and debug it using a full source debugger\nYou can happily debug in C and VB at the same time\nWhenever VB6 is running it always has the latest version of the COM DLLs\n\n"
] | [
27
] | [] | [] | [
"com",
"vb6"
] | stackoverflow_0000000419_com_vb6.txt |
Q:
Refactoring away labeled loops
After I was convinced that labeled breaks/continues are a total "nono" over here, I need help to remove the label out of my code.
I have a square matrix and a vector that has the same length. The vector has already some values in it an depending on the values in the matrix the vector is changed in the loop.
I hope, the code-fragment is basically understandable…
vectorLoop:
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop;
matrixLoop:
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop;
}
setValueInVector( v, idx );
}
Please convince me, that there is a more readable/better version without the labels.
A:
Looking at the solutions presented so far:
They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself
Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything.
Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.
Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed.
All of which makes me believe that, at least in the case of this question as phrased, the label is the correct solution and doesn't need to be refactored away. Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.
A:
Easily, my good man.
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx ) ) continue;
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) break;
}
if( !conditionAtMatrixRowCol( m, rowIdx, idx ) )
setValueInVector( v, idx );
}
EDIT: Quite correct you are Anders. I've edited my solution to take that into account as well.
A:
@Patrick you are assuming calling setValueInVector( v, idx ); at the end of the second loop is OK. If the code is to be identical, logically, it must be rewritten to something like this:
for( int idx = 0; idx < vectorLength; idx++) {
    if( conditionAtVectorPosition( v, idx ) ) continue;

    boolean skip = false;
    for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
        if( anotherConditionAtVector( v, rowIdx ) ) continue;
        if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) { skip = true; break; }
    }
    if( !skip )
        setValueInVector( v, idx );
}
A:
From reading your code.
I noticed your eliminating the invalid vector positions at conditionAtVectorPosition then you remove the invalid rows at anotherConditionAtVector.
It seems that checking rows at anotherConditionAtVector is redundant since whatever the value of idx is, anotherConditionAtVector only depends on the row index (assuming anotherConditionAtVector has no side effects).
So you can do this:
Get the valid positions first using conditionAtVectorPosition (these are the valid columns).
Then get the valid rows using anotherConditionAtVector.
Finally, use conditionAtMatrixRowCol using the valid columns and rows.
I hope this helps.
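A rough sketch of those three steps (assuming the condition functions have no side effects):
boolean[] validRow = new boolean[n];
for (int rowIdx = 0; rowIdx < n; rowIdx++)
    validRow[rowIdx] = !anotherConditionAtVector(v, rowIdx); // precompute once

for (int idx = 0; idx < vectorLength; idx++) {
    if (conditionAtVectorPosition(v, idx)) continue; // skip invalid columns
    boolean ok = true;
    for (int rowIdx = 0; rowIdx < n && ok; rowIdx++)
        if (validRow[rowIdx] && conditionAtMatrixRowCol(m, rowIdx, idx))
            ok = false;
    if (ok) setValueInVector(v, idx);
}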
A:
@Sadie:
They all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself
Externalizing the second loop outside the algorithm is not necessarily less readable. If the method name is well chosen, it can improve readability.
Some of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything.
I have a different point of view: some of them are broken because it is hard to figure out the behavior of the original algorithm.
Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.
The performance penalty is minor. However I agree that running a test twice is not a nice solution.
Refactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed.
I don't see the point. Yep, it doesn't change the behavior, like... refactoring?
Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.
I totally agree. But as you have pointed out, some of us have difficulties while refactoring this example. Even if the initial example is readable, it is hard to maintain.
A:
@Nicolas
Some of them are broken, or were before they were edited. Most damning is the fact that
people are having to think quite hard about how to write the code without labels and not
break anything.
I have a different point of view: some of them are broken because it is hard to figure out
the behavior of the original algorithm.
I realise that it's subjective, but I don't have any trouble reading the original algorithm. It's shorter and clearer than the proposed replacements.
What all the refactorings in this thread do is emulate the behaviour of a label using other language features - as if you were porting the code to a language that didn't have labels.
A:
Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.
The performance penalty is minor. However I agree that running a test twice is not a nice solution.
I believe the question was how to remove the labels, not how to optimize the algorithm. It appeared to me that the original poster was unaware of how to use 'continue' and 'break' keywords without labels, but of course, my assumptions may be wrong.
When it comes to performance, the post does not give any information about the implementation of the other functions, so for all I know they might as well be downloading the results via FTP as consisting of simple calculations inlined by the compiler.
That being said, doing the same test twice is not optimal—in theory.
EDIT: On second thought, the example is actually not a horrible use of labels. I agree that "goto is a no-no", but not because of code like this. The use of labels here does not actually affect the readability of the code in a significant way. Of course, they are not required and can easily be omitted, but not using them simply because "using labels is bad" is not a good argument in this case. After all, removing the labels does not make the code much easier to read, as others have already commented.
A:
This question was not about optimizing the algorithm - but thanks anyway ;-)
At the time I wrote it, I considered the labeled continue as a readable solution.
I asked SO a question about the convention (having the label in all caps or not) for labels in Java.
Basically every answer told me "do not use them - there is always a better way! refactor!". So I posted this question to ask for a more readable (and therefore better?) solution.
Until now, I am not completely convinced by the alternatives presented so far.
Please don't get me wrong. Labels are evil most of the time.
But in my case, the conditional tests are pretty simple and the algorithm is taken from a mathematical paper and therefore very likely to not change in the near future. So I prefer having all the relevant parts visible at once instead of having to scroll to another method named something like checkMatrixAtRow(x).
Especially with more complex mathematical algorithms, I find it pretty hard to find "good" function names - but I guess that is yet another question
A:
I think that labelled loops are so uncommon that you can pick whatever method of labelling works for you - what you have there makes your intentions with the continues perfectly clear.
After leading the charge to suggest refactoring the loops in the original question and now seeing the code in question, I think you've got a very readable loop there.
What I had imagined was a very different chunk of code - putting the actual example up, I can see it is much cleaner than I had thought.
My apologies for the misunderstanding.
A:
Does this work for you? I extracted the inner loop into a method CheckedEntireMatrix (you can name it better than me) - also my Java is a bit rusty, but I think it gets the message across
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx )
|| !CheckedEntireMatrix(v)) continue;
setValueInVector( v, idx );
}
private bool CheckedEntireMatrix(Vector v)
{
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) return false;
}
return true;
}
A:
Gishu has the right idea :
for( int idx = 0; idx < vectorLength; idx++) {
if (!conditionAtVectorPosition( v, idx )
&& checkedRow(v, idx))
setValueInVector( v, idx );
}
private boolean checkedRow(Vector v, int idx) {
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) return false;
}
return true;
}
A:
I'm not too sure I understand the first continue.
I would copy Gishu and write something like this (sorry if there are some mistakes):
for( int idx = 0; idx < vectorLength; idx++) {
if( !conditionAtVectorPosition( v, idx ) && CheckedEntireMatrix(v))
setValueInVector( v, idx );
}
inline bool CheckedEntireMatrix(Vector v) {
for(rowIdx = 0; rowIdx < n; rowIdx++)
if ( !anotherConditionAtVector(v,rowIdx) && conditionAtMatrixRowCol(m,rowIdx,idx) )
return false;
return true;
}
| Refactoring away labeled loops | After I was convinced that labeled breaks/continues are a total "nono" over here, I need help to remove the label out of my code.
I have a square matrix and a vector that has the same length. The vector has already some values in it an depending on the values in the matrix the vector is changed in the loop.
I hope, the code-fragment is basically understandable…
vectorLoop:
for( int idx = 0; idx < vectorLength; idx++) {
if( conditionAtVectorPosition( v, idx ) ) continue vectorLoop;
matrixLoop:
for( rowIdx = 0; rowIdx < n; rowIdx++ ) {
if( anotherConditionAtVector( v, rowIdx ) ) continue matrixLoop;
if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) continue vectorLoop;
}
setValueInVector( v, idx );
}
Please convince me, that there is a more readable/better version without the labels.
| [
"Looking at the solutions presented so far:\n\nThey all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself\nSome of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything.\nSome come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.\nRefactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed.\n\nAll of which makes me believe that, at least in the case of this question as phrased, the label is the correct solution and doesn't need to be refactored away. Certainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.\n",
"Easily, my good man.\nfor( int idx = 0; idx < vectorLength; idx++) {\n if( conditionAtVectorPosition( v, idx ) ) continue;\n\n for( rowIdx = 0; rowIdx < n; rowIdx++ ) {\n if( anotherConditionAtVector( v, rowIdx ) ) continue;\n if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) break;\n }\n if( !conditionAtMatrixRowCol( m, rowIdx, idx ) )\n setValueInVector( v, idx );\n}\n\nEDIT: Quite correct you are Anders. I've edited my solution to take that into account as well.\n",
"@Patrick you are assuming calling setValueInVector( v, idx ); at the end of the second loop is OK. If the code is to be identical, logically, it must be rewritten to somethng like this:\nfor( int idx = 0; idx \n",
"From reading your code. \n\nI noticed your eliminating the invalid vector positions at conditionAtVectorPosition then you remove the invalid rows at anotherConditionAtVector. \nIt seems that checking rows at anotherConditionAtVector is redundant since whatever the value of idx is, anotherConditionAtVector only depends on the row index (assuming anotherConditionAtVector has no side effects). \n\nSo you can do this:\n\nGet the valid positions first using conditionAtVectorPosition (these are the valid columns).\nThen get the valid rows using anotherConditionAtVector.\nFinally, use conditionAtMatrixRowCol using the valid columns and rows.\n\nI hope this helps.\n",
"@Sadie:\n\nThey all look less readable than the original, in that they involve spending more code on the mechanism of the code rather than on the algorithm itself\n\nExternalizing the second loop outside the algorithm is not necessarily less readable. If the method name is well chosen, it can improve readability.\n\nSome of them are broken, or were before they were edited. Most damning is the fact that people are having to think quite hard about how to write the code without labels and not break anything.\n\nI have a different point of view: some of them are broken because it is hard to figure out the behavior of the original algorithm.\n\nSome come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.\n\nThe performance penalty is minor. However I agree that running a test twice is not a nice solution.\n\nRefactoring the relevant part of the code into a method is effectively a no-op: it rearranges how the code is laid out in the file, but has no effect on how it's executed.\n\nI don't see the point. Yep, it doesn't change the behavior, like... refactoring?\n\nCertainly there are cases where labels are used incorrectly and should be refactored away. I just don't think it should be treated as some unbreakable rule.\n\nI totally agree. But as you have pointed out, some of us have difficulties while refactoring this example. Even if the initial example is readable, it is hard to maintain.\n",
"@Nicolas\n\n\nSome of them are broken, or were before they were edited. Most damning is the fact that \n people are having to think quite hard about how to write the code without labels and not \n break anything.\n\nI have a different point of view: some of them are broken because it is hard to figure out \n the behavior of the original algorithm.\n\nI realise that it's subjective, but I don't have any trouble reading the original algorithm. It's shorter and clearer than the proposed replacements.\nWhat all the refactorings in this thread do is emulate the behaviour of a label using other language features - as if you were porting the code to a language that didn't have labels.\n",
"Some come with a performance penalty of running the same test twice, which may not always be trivial. The alternative to that is storing and passing round booleans, which gets ugly.\n\nThe performance penalty is minor. However I agree that running a test twice is not a nice solution.\nI believe the question was how to remove the labels, not how to optimize the algorithm. It appeared to me that the original poster was unaware of how to use 'continue' and 'break' keywords without labels, but of course, my assumptions may be wrong. \nWhen it comes to performance, the post does not give any information about the implementation of the other functions, so for all I know they might as well be downloading the results via FTP as consisting of simple calculations inlined by the compiler.\nThat being said, doing the same test twice is not optimal—in theory.\nEDIT: On a second thought, the example is actually not a horrible use of labels. I agree that \"goto is a no-no\", but not because of code like this. The use of labels here does not actually affect the readability of the code in a significant way. Of course, they are not required and can easily be omitted, but not using them simply because \"using labels is bad\" is not a good argument in this case. After all, removing the labels does not make the code much easier to read, as others have already commented.\n",
"This question was not about optimizing the algorithm - but thanks anyway ;-) \nAt the time I wrote it, I considered the labeled continue as a readable solution.\nI asked SO a question about the convention (having the label in all caps or not) for labels in Java.\nBasically every answer told me \"do not use them - there is always a better way! refactor!\". So I posted this question to ask for a more readable (and therefore better?) solution.\nUntil now, I am not completely convinced by the alternatives presented so far.\nPlease don't get me wrong. Labels are evil most of the time. \nBut in my case, the conditional tests are pretty simple and the algorithm is taken from a mathematical paper and therefore very likely to not change in the near future. So I prefer having all the relevant parts visible at once instead of having to scroll to another method named something like checkMatrixAtRow(x).\nEspecially at more complex mathematical algorithms, I find it pretty hard to find \"good\" function-names - but I guess that is yet another question\n",
"I think that labelled loops are so uncommon that you can pick whatever method of labelling works for you - what you have there makes your intentions with the continues perfectly clear.\n\nAfter leading the charge to suggest refactoring the loops in the original question and now seeing the code in question, I think you've got a very readable loop there.\nWhat I had imagined was a very different chunk of code - putting the actual example up, I can see it is much cleaner than I had thought.\nMy apologies for the misunderstanding.\n",
"Does this work for you? I extracted the inner loop into a method CheckedEntireMatrix (you can name it better than me) - Also my java is a bit rusty.. but I think it gets the message across\nfor( int idx = 0; idx < vectorLength; idx++) {\n if( conditionAtVectorPosition( v, idx ) \n || !CheckedEntireMatrix(v)) continue;\n\n setValueInVector( v, idx );\n}\n\nprivate bool CheckedEntireMatrix(Vector v)\n{\n for( rowIdx = 0; rowIdx < n; rowIdx++ ) {\n if( anotherConditionAtVector( v, rowIdx ) ) continue;\n if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) return false;\n } \n return true;\n}\n\n",
"Gishu has the right idea :\nfor( int idx = 0; idx < vectorLength; idx++) {\n if (!conditionAtVectorPosition( v, idx ) \n && checkedRow(v, idx))\n setValueInVector( v, idx );\n}\n\nprivate boolean checkedRow(Vector v, int idx) {\n for( rowIdx = 0; rowIdx < n; rowIdx++ ) {\n if( anotherConditionAtVector( v, rowIdx ) ) continue;\n if( conditionAtMatrixRowCol( m, rowIdx, idx ) ) return false;\n } \n return true;\n}\n\n",
"I'm not too sure to understand the first continue.\nI would copy Gishu and write something like ( sorry if there are some mistakes ) :\nfor( int idx = 0; idx < vectorLength; idx++) {\n if( !conditionAtVectorPosition( v, idx ) && CheckedEntireMatrix(v))\n setValueInVector( v, idx );\n}\n\ninline bool CheckedEntireMatrix(Vector v) {\n for(rowIdx = 0; rowIdx < n; rowIdx++)\n if ( !anotherConditionAtVector(v,rowIdx) && conditionAtMatrixRowCol(m,rowIdx,idx) ) \n return false;\n return true;\n}\n\n"
] | [
34,
1,
1,
1,
1,
1,
1,
1,
1,
0,
0,
0
] | [] | [] | [
"java",
"label",
"refactoring"
] | stackoverflow_0000015851_java_label_refactoring.txt |
Q:
How to get list of installed BitmapEncoders/Decoders (the WPF world)?
In WindowsForms world you can get a list of available image encoders/decoders with
System.Drawing.ImageCodecInfo.GetImageDecoders() / GetImageEncoders()
My question is, is there a way to do something analogous for the WPF world that would allow me to get a list of available
System.Windows.Media.Imaging.BitmapDecoder / BitmapEncoder
A:
You've got to love .NET reflection. I worked on the WPF team and can't quite think of anything better off the top of my head. The following code produces this list on my machine:
Bitmap Encoders:
System.Windows.Media.Imaging.BmpBitmapEncoder
System.Windows.Media.Imaging.GifBitmapEncoder
System.Windows.Media.Imaging.JpegBitmapEncoder
System.Windows.Media.Imaging.PngBitmapEncoder
System.Windows.Media.Imaging.TiffBitmapEncoder
System.Windows.Media.Imaging.WmpBitmapEncoder
Bitmap Decoders:
System.Windows.Media.Imaging.BmpBitmapDecoder
System.Windows.Media.Imaging.GifBitmapDecoder
System.Windows.Media.Imaging.IconBitmapDecoder
System.Windows.Media.Imaging.LateBoundBitmapDecoder
System.Windows.Media.Imaging.JpegBitmapDecoder
System.Windows.Media.Imaging.PngBitmapDecoder
System.Windows.Media.Imaging.TiffBitmapDecoder
System.Windows.Media.Imaging.WmpBitmapDecoder
There is a comment in the code marking where to add additional assemblies (if you support plugins, for example). Also, you will want to filter the decoder list to remove:
System.Windows.Media.Imaging.LateBoundBitmapDecoder
More sophisticated filtering using constructor pattern matching is possible, but I don't feel like writing it. :-)
All you need to do now is instantiate the encoders and decoders to use them. Also, you can get better names by retrieving the CodecInfo property of the encoders and decoders. This class will give you human-readable names among other factoids.
using System;
using System.Linq;
using System.Collections.Generic;
using System.Reflection;
using System.Windows.Media.Imaging;
namespace Codecs {
class Program {
static void Main(string[] args) {
Console.WriteLine("Bitmap Encoders:");
AllEncoderTypes.ToList().ForEach(t => Console.WriteLine(t.FullName));
Console.WriteLine("\nBitmap Decoders:");
AllDecoderTypes.ToList().ForEach(t => Console.WriteLine(t.FullName));
Console.ReadKey();
}
static IEnumerable<Type> AllEncoderTypes {
get {
return AllSubclassesOf(typeof(BitmapEncoder));
}
}
static IEnumerable<Type> AllDecoderTypes {
get {
return AllSubclassesOf(typeof(BitmapDecoder));
}
}
static IEnumerable<Type> AllSubclassesOf(Type type) {
var r = new Reflector();
// Add additional assemblies here
return r.AllSubclassesOf(type);
}
}
class Reflector {
List<Assembly> assemblies = new List<Assembly> {
typeof(BitmapDecoder).Assembly
};
public IEnumerable<Type> AllSubclassesOf(Type super) {
foreach (var a in assemblies) {
foreach (var t in a.GetExportedTypes()) {
if (t.IsSubclassOf(super)) {
yield return t;
}
}
}
}
}
}
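As a rough sketch of the CodecInfo suggestion above (hedged: it instantiates only the built-in encoders, all of which have public parameterless constructors; decoders are created via BitmapDecoder.Create, so they can't be enumerated the same way). It reuses the usings from the listing above:
// Sketch: print friendly names and metadata for the built-in encoders.
Type[] encoderTypes = {
    typeof(BmpBitmapEncoder), typeof(GifBitmapEncoder),
    typeof(JpegBitmapEncoder), typeof(PngBitmapEncoder),
    typeof(TiffBitmapEncoder), typeof(WmpBitmapEncoder)
};
foreach (Type t in encoderTypes) {
    var enc = (BitmapEncoder)Activator.CreateInstance(t);
    // CodecInfo exposes FriendlyName, MimeTypes and FileExtensions.
    Console.WriteLine("{0}: {1} ({2})",
        enc.CodecInfo.FriendlyName,
        enc.CodecInfo.MimeTypes,
        enc.CodecInfo.FileExtensions);
}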
A:
Hopefully someone will correct me if I'm wrong, but I don't think there's anything like that in WPF. But hopefully this is one of the many cases where advances in the technology have rendered obsolete the way we're used to doing things. Like "how do I wind my digital watch?"
To my understanding, the reason why ImageCodecInfo.GetImageDecoders() is necessary in System.Drawing has to do with the kludgy nature of System.Drawing itself: System.Drawing is a managed wrapper around GDI+, which is an unmanaged wrapper around a portion of the Win32 API. So there might be a reason why a new codec would be installed in Windows without .NET inherently knowing about it. And what's returned from GetImageDecoders() is just a bunch of strings that are typically passed back into System.Drawing/GDI+, and used to find and configure the appropriate DLL for reading/saving your image.
On the other hand, in WPF, the standard encoders and decoders are built into the framework, and, if I'm not mistaken, don't depend on anything that isn't guaranteed to be installed as part of the framework. The following classes inherit from BitmapEncoder and are available out-of-the-box with WPF: BmpBitmapEncoder, GifBitmapEncoder, JpegBitmapEncoder, PngBitmapEncoder, TiffBitmapEncoder, WmpBitmapEncoder. There are BitmapDecoders for all the same formats, plus IconBitmapDecoder and LateBoundBitmapDecoder.
You may be dealing with a case I'm not imagining, but it seems to me that if you're having to use a class that inherits from BitmapEncoder but wasn't included with WPF, it's probably your own custom class that you would install with your application.
Hope this helps. If I'm missing a necessary part of the picture, please let me know.
| How to get list of installed BitmapEncoders/Decoders (the WPF world)? | In WindowsForms world you can get a list of available image encoders/decoders with
System.Drawing.ImageCodecInfo.GetImageDecoders() / GetImageEncoders()
My question is, is there a way to do something analogous for the WPF world that would allow me to get a list of available
System.Windows.Media.Imaging.BitmapDecoder / BitmapEncoder
| [
"You've got to love .NET reflection. I worked on the WPF team and can't quite think of anything better off the top of my head. The following code produces this list on my machine:\nBitmap Encoders:\nSystem.Windows.Media.Imaging.BmpBitmapEncoder\nSystem.Windows.Media.Imaging.GifBitmapEncoder\nSystem.Windows.Media.Imaging.JpegBitmapEncoder\nSystem.Windows.Media.Imaging.PngBitmapEncoder\nSystem.Windows.Media.Imaging.TiffBitmapEncoder\nSystem.Windows.Media.Imaging.WmpBitmapEncoder\n\nBitmap Decoders:\nSystem.Windows.Media.Imaging.BmpBitmapDecoder\nSystem.Windows.Media.Imaging.GifBitmapDecoder\nSystem.Windows.Media.Imaging.IconBitmapDecoder\nSystem.Windows.Media.Imaging.LateBoundBitmapDecoder\nSystem.Windows.Media.Imaging.JpegBitmapDecoder\nSystem.Windows.Media.Imaging.PngBitmapDecoder\nSystem.Windows.Media.Imaging.TiffBitmapDecoder\nSystem.Windows.Media.Imaging.WmpBitmapDecoder\n\nThere is a comment in the code where to add additional assemblies (if you support plugins for example). Also, you will want to filter the decoder list to remove:\nSystem.Windows.Media.Imaging.LateBoundBitmapDecoder\n\nMore sophisticated filtering using constructor pattern matching is possible, but I don't feel like writing it. :-)\nAll you need to do now is instantiate the encoders and decoders to use them. Also, you can get better names by retrieving the CodecInfo property of the encoder decoders. This class will give you human readable names among other factoids.\nusing System;\nusing System.Linq;\nusing System.Collections.Generic;\nusing System.Reflection;\nusing System.Windows.Media.Imaging;\n\nnamespace Codecs {\n class Program {\n static void Main(string[] args) {\n Console.WriteLine(\"Bitmap Encoders:\");\n AllEncoderTypes.ToList().ForEach(t => Console.WriteLine(t.FullName));\n Console.WriteLine(\"\\nBitmap Decoders:\");\n AllDecoderTypes.ToList().ForEach(t => Console.WriteLine(t.FullName));\n Console.ReadKey();\n }\n\n static IEnumerable<Type> AllEncoderTypes {\n get {\n return AllSubclassesOf(typeof(BitmapEncoder));\n }\n }\n\n static IEnumerable<Type> AllDecoderTypes {\n get {\n return AllSubclassesOf(typeof(BitmapDecoder));\n }\n }\n\n static IEnumerable<Type> AllSubclassesOf(Type type) {\n var r = new Reflector();\n // Add additional assemblies here\n return r.AllSubclassesOf(type);\n }\n }\n\n class Reflector {\n List<Assembly> assemblies = new List<Assembly> { \n typeof(BitmapDecoder).Assembly\n };\n public IEnumerable<Type> AllSubclassesOf(Type super) {\n foreach (var a in assemblies) {\n foreach (var t in a.GetExportedTypes()) {\n if (t.IsSubclassOf(super)) {\n yield return t;\n }\n }\n }\n }\n }\n}\n\n",
"Hopefully someone will correct me if I'm wrong, but I don't think there's anything like that in WPF. But hopefully this is one of the many cases where advances in the technology have rendered obsolete the way we're used to doing things. Like \"how do I wind my digital watch?\"\nTo my understanding, the reason why ImageCodecInfo.GetImageDecoders() is necessary in System.Drawing has to do with the kludgy nature of System.Drawing itself: System.Drawing is a managed wrapper around GDI+, which is an unmanaged wrapper around a portion of the Win32 API. So there might be a reason why a new codec would be installed in Windows without .NET inherently knowing about it. And what's returned from GetImageDecoders() is just a bunch of strings that are typically passed back into System.Drawing/GDI+, and used to find and configure the appropriate DLL for reading/saving your image.\nOn the other hand, in WPF, the standard encoders and decoders are built into the framework, and, if I'm not mistaken, don't depend on anything that that isn't guaranteed to be installed as part of the framework. The following classes inherit from BitmapEncoder and are available out-of-the-box with WPF: BmpBitmapEncoder, GifBitmapEncoder, JpegBitmapEncoder, PngBitmapEncoder, TiffBitmapEncoder, WmpBitmapEncoder. There are BitmapDecoders for all the same formats, plus IconBitmapDecoder and LateBoundBitmapDecoder.\nYou may be dealing with a case I'm not imagining, but it seems to me that if you're having to use a class that inherits from BitmapEncoder but wasn't included with WPF, it's probably your own custom class that you would install with your application.\nHope this helps. If I'm missing a necessary part of the picture, please let me know.\n"
] | [
4,
1
] | [] | [] | [
"bitmap",
"c#",
"wpf"
] | stackoverflow_0000015023_bitmap_c#_wpf.txt |
Q:
IE6 - can't load a normal JPG
Try loading this normal .jpg file in Internet Explorer 6.0. I get an error saying the picture won't load. Try it in any other browser and it works fine. What's wrong? The .jpg file is just a normal picture sitting on the web server. I can even create a simple web page:
<a href="http://www.zodiacwheels.com/images/wheels/blackout_thumb.jpg">blah</a>
and use right click + save target as with IE6 to save it to my desktop, and it's a valid JPG file. However, it won't load in the browser!
Why?!
I even tried checking the header response and MIME type and it looks fine:
andy@debian:~$ telnet www.zodiacwheels.com 80
Trying 72.167.174.247...
Connected to zodiacwheels.com.
Escape character is '^]'.
HEAD /images/wheels/blackout_thumb.jpg HTTP/1.1
Host: www.zodiacwheels.com
HTTP/1.1 200 OK
Date: Wed, 20 Aug 2008 06:19:04 GMT
Server: Apache
Last-Modified: Wed, 20 Aug 2008 00:29:36 GMT
ETag: "1387402-914ac-48ab6570"
Accept-Ranges: bytes
Content-Length: 595116
Content-Type: image/jpeg
The site needs to be able to work with IE6, how come it won't load a simple .jpg file?
A:
The JPG you uploaded is in CMYK, IE and Firefox versions before 3 can't read these. Open it using Photoshop (or anything similar, I'm sure GIMP would work too) and resave it in RGB.
edit: Further Googling makes me suspect that CMYK isn't really a part of the jpeg standard, but can be shoehorned in there. That's why some software does not consider the file valid. It does however open just fine in Photoshop CS3, and shows a cmyk colorspace.
A:
It won't load in IE7 on my Vista x64 box. Also Paint.net won't save the file, saying "There was an unspecified error while saving the file."
EDIT:
In paint.net I did a Select All, New File, Paste, Save, and now it works fine. I'm guessing that file has some weird corruption.
A:
You can use jpeginfo to find out if a jpeg file is OK or not.
$jpeginfo -c blackout_thumb.jpg
blackout_thumb.jpg 240 x 240 32bit Exif N 595116 Unsupported color conversion request [ERROR]
In your case the file is corrupted which explain why some browsers cannot display it.
A:
Maybe it is related to this: http://photo.net/bboard/q-and-a-fetch-msg?msg_id=003j8d
A:
The file is probably not a fully valid JPG, and IE6/7/8 reject it (I tested on IE8 and it won't load). Other browsers are a bit more defensive and can load it, but perhaps the IE team chose not to load it as it could be invalid in a way that causes a security hole.
As Ryan Fox says, open it in an editor and re-save it ... where did the image come from? If it came from an editor, don't use that editor again.
Edit: I opened it in Paint Shop Pro and it had an unknown color palette so it had to be converted ... perhaps that is the problem. You could report it as a bug to the IE team and see what they say.
A:
It is possible for other applications to register themselves as a handler for files with a particular extension. Quicktime has (or at least had) a tendency to do this with .png files, so a .png file would display fine inline in an HTML page, but with a URL referring directly to the .png file, IE would immediately delegate all responsibility for handling the file to Quicktime.
Might this be what is happening to your .jpg files? Is it only this .jpg file that you're having a problem with?
| IE6 - can't load a normal JPG | Try loading this normal .jpg file in Internet Explorer 6.0. I get an error saying the picture won't load. Try it in any other browser and it works fine. What's wrong? The .jpg file is just a normal picture sitting on the web server. I can even create a simple web page:
<a href="http://www.zodiacwheels.com/images/wheels/blackout_thumb.jpg">blah</a>
and use right click + save target as with IE6 to save it to my desktop, and it's a valid JPG file. However, it won't load in the browser!
Why?!
I even tried checking the header response and MIME type and it looks fine:
andy@debian:~$ telnet www.zodiacwheels.com 80
Trying 72.167.174.247...
Connected to zodiacwheels.com.
Escape character is '^]'.
HEAD /images/wheels/blackout_thumb.jpg HTTP/1.1
Host: www.zodiacwheels.com
HTTP/1.1 200 OK
Date: Wed, 20 Aug 2008 06:19:04 GMT
Server: Apache
Last-Modified: Wed, 20 Aug 2008 00:29:36 GMT
ETag: "1387402-914ac-48ab6570"
Accept-Ranges: bytes
Content-Length: 595116
Content-Type: image/jpeg
The site needs to be able to work with IE6, how come it won't load a simple .jpg file?
| [
"The JPG you uploaded is in CMYK, IE and Firefox versions before 3 can't read these. Open it using Photoshop (or anything similar, I'm sure GIMP would work too) and resave it in RGB.\nedit: Further Googling makes me suspect that CMYK isn't really a part of the jpeg standard, but can be shoehorned in there. That's why some software does not consider the file valid. It does however open just fine in Photoshop CS3, and shows a cmyk colorspace.\n",
"It won't load in IE7 on my Vista x64 box. Also Paint.net won't save the file, saying \"There was an unspecified error while saving the file.\"\nEDIT:\nIn paint.net I did a Select All, New File, Paste, Save, and now it works fine. I'm guessing that file has some weird corruption.\n",
"You can use jpeginfo to find out if a jpeg file is OK or not.\n\n$jpeginfo -c blackout_thumb.jpg\nblackout_thumb.jpg 240 x 240 32bit\nExif N 595116 Unsupported color\nconversion request [ERROR]\n\nIn your case the file is corrupted which explain why some browsers cannot display it.\n",
"Maybe it is related to this: http://photo.net/bboard/q-and-a-fetch-msg?msg_id=003j8d\n",
"The file is probably not a fully valid JPG and IE6/7/8 (I tested on IE8 and it wont load). Other browsers are a bit more defensive and can load it, but perhaps IE team choose not to load it as it could be invalid in a way that causes a security hole.\nAs Ryan Fox says, open it in an editor and re-save it ... where did the image come from, if it came from an editor dont use that editor again.\nEdit: I opened it an Paint Shop Pro and it had an unknown color palette so had to convert it ... perhaps that is the problem. You could report it as a bug to the IE team and see what they say.\n",
"It is possible for other applications to register themselves as a handler for files with a particular extension. Quicktime has (or at least had) a tendency to do this with .png files, so a .png file would display fine inline in an HTML page, but with an URL referring directly to the .png file, IE would immediately delegate all responsibility for handling the file to Quicktime.\nMight this be what is happening to your .jpg files? Is it only this .jpg file that you're having a problem with?\n"
] | [
29,
3,
2,
1,
1,
0
] | [] | [] | [
"cmyk",
"image",
"internet_explorer",
"jpeg",
"rgb"
] | stackoverflow_0000017469_cmyk_image_internet_explorer_jpeg_rgb.txt |
Q:
How do you place a file in recycle bin instead of delete?
Programmatic solution of course...
A:
http://www.daveamenta.com/2008-05/c-delete-a-file-to-the-recycle-bin/
From above:
using Microsoft.VisualBasic;
string path = @"c:\myfile.txt";
FileIO.FileSystem.DeleteFile(path,
FileIO.UIOption.OnlyErrorDialogs,
RecycleOption.SendToRecycleBin);
A:
You need to delve into unmanaged code. Here's a static class that I've been using:
public static class Recycle
{
private const int FO_DELETE = 3;
private const int FOF_ALLOWUNDO = 0x40;
private const int FOF_NOCONFIRMATION = 0x0010;
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto, Pack = 1)]
public struct SHFILEOPSTRUCT
{
public IntPtr hwnd;
[MarshalAs(UnmanagedType.U4)]
public int wFunc;
public string pFrom;
public string pTo;
public short fFlags;
[MarshalAs(UnmanagedType.Bool)]
public bool fAnyOperationsAborted;
public IntPtr hNameMappings;
public string lpszProgressTitle;
}
[DllImport("shell32.dll", CharSet = CharSet.Auto)]
static extern int SHFileOperation(ref SHFILEOPSTRUCT FileOp);
public static void DeleteFileOperation(string filePath)
{
SHFILEOPSTRUCT fileop = new SHFILEOPSTRUCT();
fileop.wFunc = FO_DELETE;
fileop.pFrom = filePath + '\0' + '\0';
fileop.fFlags = FOF_ALLOWUNDO | FOF_NOCONFIRMATION;
SHFileOperation(ref fileop);
}
}
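A call site for the helper above would then be a one-liner (sketch):
// Moves the file to the Recycle Bin rather than deleting it permanently.
Recycle.DeleteFileOperation(@"c:\myfile.txt");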
Addendum:
Tsk tsk @ Jeff for "using Microsoft.VisualBasic" in C# code.
Tsk tsk @ MS for putting all the goodies in VisualBasic namespace.
A:
The best way I have found is to use the VB function FileSystem.DeleteFile.
Microsoft.VisualBasic.FileIO.FileSystem.DeleteFile(file.FullName,
Microsoft.VisualBasic.FileIO.UIOption.OnlyErrorDialogs,
Microsoft.VisualBasic.FileIO.RecycleOption.SendToRecycleBin);
It requires adding Microsoft.VisualBasic as a reference, but this is part of the .NET framework and so isn't an extra dependency.
Alternate solutions require a P/Invoke to SHFileOperation, as well as defining all the various structures/constants. Including Microsoft.VisualBasic is much neater by comparison.
| How do you place a file in recycle bin instead of delete? | Programmatic solution of course...
| [
"http://www.daveamenta.com/2008-05/c-delete-a-file-to-the-recycle-bin/\nFrom above:\nusing Microsoft.VisualBasic;\n\nstring path = @\"c:\\myfile.txt\";\nFileIO.FileSystem.DeleteDirectory(path, \n FileIO.UIOption.OnlyErrorDialogs, \n RecycleOption.SendToRecycleBin);\n\n",
"You need to delve into unmanaged code. Here's a static class that I've been using:\npublic static class Recycle\n{\n private const int FO_DELETE = 3;\n private const int FOF_ALLOWUNDO = 0x40;\n private const int FOF_NOCONFIRMATION = 0x0010;\n\n [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto, Pack = 1)]\n public struct SHFILEOPSTRUCT\n {\n public IntPtr hwnd;\n [MarshalAs(UnmanagedType.U4)]\n public int wFunc;\n public string pFrom;\n public string pTo;\n public short fFlags;\n [MarshalAs(UnmanagedType.Bool)]\n public bool fAnyOperationsAborted;\n public IntPtr hNameMappings;\n public string lpszProgressTitle;\n }\n\n [DllImport(\"shell32.dll\", CharSet = CharSet.Auto)]\n static extern int SHFileOperation(ref SHFILEOPSTRUCT FileOp);\n\n public static void DeleteFileOperation(string filePath)\n {\n SHFILEOPSTRUCT fileop = new SHFILEOPSTRUCT();\n fileop.wFunc = FO_DELETE;\n fileop.pFrom = filePath + '\\0' + '\\0';\n fileop.fFlags = FOF_ALLOWUNDO | FOF_NOCONFIRMATION;\n\n SHFileOperation(ref fileop);\n }\n}\n\nAddendum:\n\nTsk tsk @ Jeff for \"using Microsoft.VisualBasic\" in C# code.\nTsk tsk @ MS for putting all the goodies in VisualBasic namespace.\n\n",
"The best way I have found is to use the VB function FileSystem.DeleteFile.\nMicrosoft.VisualBasic.FileIO.FileSystem.DeleteFile(file.FullName,\n Microsoft.VisualBasic.FileIO.UIOption.OnlyErrorDialogs,\n Microsoft.VisualBasic.FileIO.RecycleOption.SendToRecycleBin);\n\nIt requires adding Microsoft.VisualBasic as a reference, but this is part of the .NET framework and so isn't an extra dependency.\nAlternate solutions require a P/Invoke to SHFileOperation, as well as defining all the various structures/constants. Including Microsoft.VisualBasic is much neater by comparison.\n"
] | [
38,
18,
13
] | [] | [] | [
".net",
"c#",
"c++",
"io",
"windows"
] | stackoverflow_0000017612_.net_c#_c++_io_windows.txt |
Q:
Differences between unix and windows files
Am I correct in assuming that the only difference between "windows files" and "unix files" is the linebreak?
We have a system that has been moved from a windows machine to a unix machine and are having troubles with the format.
I need to automate the translation between unix/windows before the files get delivered to the system in our "transportsystem". I'll probably need something to determine the current format and something to transform it into the other format.
If it's just the newline that's the big difference then I'm considering just reading the files with java.io. As far as I know, its readers are able to handle both with readLine. And then just write each line back with
while (line = readline)
print(line + NewlineInOtherFormat)
....
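A fleshed-out version of that sketch in Java (hedged: it assumes plain text files and the platform-default encoding; BufferedReader.readLine() accepts LF, CRLF and lone CR, so either input format is handled):
import java.io.*;

public class LineEndingConverter {
    // Copies 'in' to 'out', rewriting every line ending to 'newline'.
    static void convert(File in, File out, String newline) throws IOException {
        BufferedReader r = new BufferedReader(new FileReader(in));
        Writer w = new BufferedWriter(new FileWriter(out));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                w.write(line);
                w.write(newline);
            }
        } finally {
            r.close();
            w.close();
        }
    }

    public static void main(String[] args) throws IOException {
        convert(new File(args[0]), new File(args[1]), "\n"); // "\r\n" for Windows
    }
}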
Summary:
samjudson:
This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR.
to which Cebjyre elaborates:
OS X uses LF, the same as UNIX - MacOS 9 and below did use CR though
Mo
There could also be a difference in character encoding for national characters. There is no "unix-encoding" but many linux-variants use UTF-8 as the default encoding. Mac OS (which is also a unix) uses its own encoding (macroman). I am not sure, what windows default encoding is.
McDowell
In addition to the new-line differences, the byte-order mark can cause problems if files are treated as Unicode on Windows.
Cheekysoft
However, another set of problems that you may come across can be related to single/multi-byte character encodings. If you see strange unexpected chars (not at end-of-line) then this could be the reason. Especially if you see square boxes, question marks, upside-down question marks, extra characters or unexpected accented characters.
Sadie
On unix, files that start with a . are hidden. On windows, it's a filesystem flag that you probably don't have easy access to. This may result in files that are supposed to be hidden now becoming visible on the client machines.
File permissions vary between the two. You will probably find, when you copy files onto a unix system, that the files now belong to the user that did the copying and have limited rights. You'll need to use chown/chmod to make sure the correct users have access to them.
There exists tools to help with the problem:
pauldoo
If you are just interested in the content of text files, then yes the line endings are different. Take a look at something like dos2unix, it may be of help here.
Cheekysoft
As pauldoo suggests, tools like dos2unix can be very useful. Note that these may be on your linux/unix system as fromdos or tofrodos, or perhaps even as the general purpose toolbox recode.
Help for java coding
Cheekysoft
When writing to files or reading from files (that you are in control of), it is often worth specifying the encoding to use, as most Java methods allow this. However, also ensuring that the system locale matches can save a lot of pain
A:
This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR.
Binary files there should be no difference (i.e. a JPEG on a windows machine will be byte for byte the same as the same JPEG on a unix box.)
A:
There could also be a difference in character encoding for national characters. There is no "unix-encoding" but many linux-variants use UTF-8 as the default encoding. Mac OS (which is also a unix) uses its own encoding (macroman). I am not sure, what windows default encoding is.
But this could be another source of trouble (apart from the different linebreaks).
What are your problems? The linebreak-related problems can be easily corrected with the programs dos2unix or unix2dos on the unix-machine
A:
If you are just interested in the content of text files, then yes the line endings are different. Take a look at something like dos2unix, it may be of help here.
(Of course there are many other things that make unix and windows files different, but I don't think you're interested in those other differences right now.)
A:
In addition to the answers given, you may find issues with the different file systems:
On unix, files that start with a . are hidden. On windows, it's a filesystem flag that you probably don't have easy access to. This may result in files that are supposed to be hidden now becoming visible on the client machines.
File permissions vary between the two. You will probably find, when you copy files onto a unix system, that the files now belong to the user that did the copying and have limited rights. You'll need to use chown/chmod to make sure the correct users have access to them.
A:
In addition to the new-line differences, the byte-order mark can cause problems if files are treated as Unicode on Windows.
A:
As pauldoo suggests, tools like dos2unix can be very useful. Note that these may be on your linux/unix system as fromdos or tofrodos, or perhaps even as the general purpose toolbox recode.
However, another set of problems that you may come across can be related to single/multi-byte character encodings. If you see strange unexpected chars (not at end-of-line) then this could be the reason. Especially if you see square boxes, question marks, upside-down question marks, extra characters or unexpected accented characters.
Running the command locale on your *nix box will tell you what the system locale is. If this is different to the encoding used in the text files that have been transferred over from the windows machine, then this can sometimes cause issues, depending on the usage of those files. You can use the very powerful recode command to try and convert between the different charsets as well as any line ending issues. recode -l will show you all of the formats and encodings that the tool can convert between. It is likely to be a VERY long list.
When writing to files or reading from files (that you are in control of), it is often worth specifying the encoding to use, as most Java methods allow this. However, also ensuring that the system locale matches can save a lot of pain.
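For the encoding side, a minimal sketch of being explicit in Java rather than relying on the platform default (fragment; assumes java.io.* is imported and IOException is handled by the caller):
// Read and write with a named charset instead of the system locale's default.
BufferedReader r = new BufferedReader(
    new InputStreamReader(new FileInputStream("in.txt"), "UTF-8"));
Writer w = new OutputStreamWriter(new FileOutputStream("out.txt"), "UTF-8");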
| Differences between unix and windows files | Am I correct in assuming that the only difference between "windows files" and "unix files" is the linebreak?
We have a system that has been moved from a windows machine to a unix machine and are having troubles with the format.
I need to automate the translation between unix/windows before the files get delivered to the system in our "transportsystem". I'll probably need something to determine the current format and something to transform it into the other format.
If it's just the newline thats the big difference then I'm considering just reading the files with the java.io. As far as I know, they are able to handle both with readLine. And then just write each line back with
while (line = readline)
print(line + NewlineInOtherFormat)
....
Summary:
samjudson:
This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR.
to which Cebjyre elaborates:
OS X uses LF, the same as UNIX - MacOS 9 and below did use CR though
Mo
There could also be a difference in character encoding for national characters. There is no "unix-encoding" but many linux-variants use UTF-8 as the default encoding. Mac OS (which is also a unix) uses its own encoding (macroman). I am not sure, what windows default encoding is.
McDowell
In addition to the new-line differences, the byte-order mark can cause problems if files are treated as Unicode on Windows.
Cheekysoft
However, another set of problems that you may come across can be related to single/multi-byte character encodings. If you see strange unexpected chars (not at end-of-line) then this could be the reason. Especially if you see square boxes, question marks, upside-down question marks, extra characters or unexpected accented characters.
Sadie
On unix, files that start with a . are hidden. On windows, it's a filesystem flag that you probably don't have easy access to. This may result in files that are supposed to be hidden now becoming visible on the client machines.
File permissions vary between the two. You will probably find, when you copy files onto a unix system, that the files now belong to the user that did the copying and have limited rights. You'll need to use chown/chmod to make sure the correct users have access to them.
There exists tools to help with the problem:
pauldoo
If you are just interested in the content of text files, then yes the line endings are different. Take a look at something like dos2unix, it may be of help here.
Cheekysoft
As pauldoo suggests, tools like dos2unix can be very useful. Note that these may be on your linux/unix system as fromdos or tofrodos, or perhaps even as the general purpose toolbox recode.
Help for java coding
Cheekysoft
When writing to files or reading from files (that you are in control of), it is often worth specifying the encoding to use, as most Java methods allow this. However, also ensuring that the system locale matches can save a lot of pain
| [
"This is only a difference in text files, where UNIX uses a single Line Feed (LF) to signify a new line, Windows uses a Carriage Return/Line Feed (CRLF) and Mac uses just a CR.\nBinary files there should be no difference (i.e. a JPEG on a windows machine will be byte for byte the same as the same JPEG on a unix box.)\n",
"There could also be a difference in character encoding for national characters. There is no \"unix-encoding\" but many linux-variants use UTF-8 as the default encoding. Mac OS (which is also a unix) uses its own encoding (macroman). I am not sure, what windows default encoding is.\nBut this could be another source of trouble (apart from the different linebreaks).\nWhat are your problems? The linebreak-related problems can be easily corrected with the programs dos2unix or unix2dos on the unix-machine\n",
"If you are just interested in the content of text files, then yes the line endings are different. Take a look at something like dos2unix, it may be of help here.\n(Of course there are many other things that make unix and windows files different, but I don't think you're interested in those other differences right now.)\n",
"In addition to the answers given, you may find issues with the different file systems:\n\nOn unix, files that start with a . are hidden. On windows, it's a filesystem flag that you probably don't have easy access to. This may result in files that are supposed to be hidden now becoming visible on the client machines.\nFile permissions vary between the two. You will probably find, when you copy files onto a unix system, that the files now belong to the user that did the copying and have limited rights. You'll need to use chown/chmod to make sure the correct users have access to them.\n\n",
"In addition to the new-line differences, the byte-order mark can cause problems if files are treated as Unicode on Windows.\n",
"As pauldoo suggests, tools like dos2unix can be very useful. Note that these may be on your linux/unix system as fromdos or tofrodos, or perhaps even as the general purpose toolbox recode.\nHowever, another set of problems that you may come across can be related to single/multi-byte character encodings. If you see strange unexpected chars (not at end-of-line) then this could be the reason. Especially if you see square boxes, question marks, upside-down question marks, extra characters or unexpected accented characters.\nRunning the command locale on your *nix box will tell you what the system locale is. If this is different to the encoding used in the text files that have been transferred over from the windows machine, then this can sometimes cause issues, depending on the usage of those files. You can use the very powerful recode command to try and convert between the different charsets as well as any line ending issues. recode -l will show you all of the formats and encodings that the tool can convert between. It is likely to be a VERY long list.\nWhen writing to files or reading from files (that you are in control of), it is often worth specifying the encoding to use, as most Java methods allow this. However, also ensuring that the system locale matches can save a lot of pain.\n"
] | [
13,
4,
2,
2,
1,
1
] | [] | [] | [
"file",
"java",
"unix",
"windows"
] | stackoverflow_0000017645_file_java_unix_windows.txt |
Q:
Retrieving the PC Name of a Client? (Windows Auth)
I have an ASP.net Application that runs on the internal network (well, actually it's running on Sharepoint 2007).
I just wonder:
Can I somehow retrieve the name of the PC the Client is using? I would have access to Active Directory if that helps. The thing is, people use multiple PCs. So, I cannot use any manual/static mapping.
If possible, I do not want to use any client-side (read: JavaScript) code, but if it cannot be done server-side, JavaScript would be OK as well. ActiveX is absolutely out of question.
A:
System.Web.HttpRequest.UserHostname as suggested in this answer just returns the IP :-(
But I just found this:
System.Net.Dns.GetHostEntry(Page.Request.UserHostAddress).HostName
That only works if there is actually a DNS Server to resolve the name, which is the case for my network.
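A hedged sketch of guarding that call, since GetHostEntry throws when no reverse-DNS entry exists:
string clientName;
try {
    clientName = System.Net.Dns.GetHostEntry(Page.Request.UserHostAddress).HostName;
} catch (System.Net.Sockets.SocketException) {
    clientName = Page.Request.UserHostAddress; // fall back to the raw IP
}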
A:
Does System.Web.HttpRequest.UserHostname provide what you're looking for?
| Retrieving the PC Name of a Client? (Windows Auth) | I have an ASP.net Application that runs on the internal network (well, actually it's running on Sharepoint 2007).
I just wonder:
Can I somehow retrieve the name of the PC the Client is using? I would have access to Active Directory if that helps. The thing is, people use multiple PCs. So, I cannot use any manual/static mapping.
If possible, I do not want to use any client-side (read: JavaScript) code, but if it cannot be done server-side, JavaScript would be OK as well. ActiveX is absolutely out of question.
| [
"System.Web.HttpRequest.UserHostname as suggested in this answer just returns the IP :-(\nBut I just found this:\nSystem.Net.Dns.GetHostEntry(Page.Request.UserHostAddress).HostName\n\nThat only works if there is actually a DNS Server to resolve the name, which is the case for my network.\n",
"Does System.Web.HttpRequest.UserHostname provide what you're looking for?\n"
] | [
4,
3
] | [] | [] | [
"asp.net",
"authentication",
"sharepoint"
] | stackoverflow_0000017664_asp.net_authentication_sharepoint.txt |
Q:
Language Conversion Testing
We created a tool which converts a language called P2, a language similar to assembly which I think exists only in Japan, to C#. There are at least a hundred modules written in P2 and we want to verify whether the conversion to C# is right. How can we test this?
A:
You don't test the converter, you test the final code.
If the code doesn't compile, clearly your converter is failing. If the code compiles and your functionality tests fail, then you can tweak the code so that it passes the test. If you are fairly successful you should see that you only need to fix the modules that actually fail.
Good luck!
A:
Short of a formal mathematical proof (which I imagine would be difficult), the proof of the pudding is in the unit tests. You have to find a way to wrap the converted C# snippets, compile and run them under a similar environment, then compare the output against the original. Unless you're rigorous in your testing, there's no way you can be confident of the result.
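One hedged sketch of such a harness as an NUnit-style test in C# (all names here are hypothetical: TestVectors, RunOriginalP2 and RunConverted stand in for whatever mechanism you have to execute each side):
[Test]
public void ConvertedModuleMatchesOriginal()
{
    foreach (var input in TestVectors.All)        // recorded P2 inputs
    {
        var expected = RunOriginalP2(input);      // captured from the P2 system
        var actual = RunConverted(input);         // the generated C# module
        Assert.AreEqual(expected, actual);
    }
}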
| Language Conversion Testing | We created a tool which converts a language called P2, a language similar to assembly which I think exists only in Japan, to C#. There are at least a hundred modules written in P2 and we want to verify if the conversion to C# is right? How can we test this?
| [
"You don't test the converter, you test the final code. \nIf the code doesn't compile, clearly your converter is failing. If the code compiles and your functionality tests fail, then you can tweak the code so that it passes the test. If you are fairly successful you should see that you only need to fix the modules that actually fail.\nGoodluck!\n",
"Short of a formal mathematical proof (which I imagine would be difficult), the proof of the pudding is in the unit tests. You have to find a way to wrap the converted C# snippets, compile the and run them under a similar environment, then compare the output against the original. Unless you're rigorous in your testing, there's no way you can be confident of the result.\n"
] | [
2,
1
] | [] | [] | [
"c#",
"testing"
] | stackoverflow_0000017430_c#_testing.txt |
Q:
When is a file just a file?
So, you're writing a web application and you have several areas of the site where the user can upload files. My basic working method for this is to store the actual file on the server, and have a database table that connects the stored filename to the record it relates to.
My question is this: Should there be a different table for each "type" of file? Also, should the files be stored in context-related locations on the server, or all together?
Some examples: user profile photos, job application CVs, related documents on CMS pages, etc.
A:
From your example, there is an argument for two tables, as you have files that can be associated with two different things.
CVs, photos are associated with a user.
attachments are associated with a CMS page.
If you put these in one table, (and you want to allow users to have more than one photo or cv) then you need two link-tables to associate files->users and files->cms_pages. Arguably this implies a HABTM relationship, which is not correct and allows for inconsistent data.
The two table approach is slightly cleaner and only allows files to be associated with the correct type of entity with a simple belongsTo relationship.
But I don't think there is any "right" answer to this question, unless you need to store different types of metadata for different filetypes.
Also be sure to store, or be able to calculate, the mimetype for each file so it can be served correctly back to the browser, with the correct HTTP headers.
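A rough sketch of that two-table shape in SQL (hypothetical names and types):
-- One table per owning entity keeps each belongsTo simple.
CREATE TABLE user_files (
    id       INT PRIMARY KEY,
    user_id  INT NOT NULL REFERENCES users(id),
    kind     VARCHAR(10)  NOT NULL,  -- 'photo' or 'cv'
    filename VARCHAR(255) NOT NULL,  -- name as stored on disk
    mimetype VARCHAR(100) NOT NULL   -- for the Content-Type header
);

CREATE TABLE page_files (
    id       INT PRIMARY KEY,
    page_id  INT NOT NULL REFERENCES cms_pages(id),
    filename VARCHAR(255) NOT NULL,
    mimetype VARCHAR(100) NOT NULL
);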
A:
From what you've said I would just store files with random (UUID or what-not) filenames in one place. I would then have an 'attachments' table or something that contains references to all your external files. This table would also contain the meta-data for that file, so what type of file it is (picture, CV etc) and so on.
There may be hard limits to the number of files in one directory though, depending on what FS you are using.
A:
There might be various reasons for storing different files in different locations.
Firstly, a restriction on the number of files in one directory might be a consideration.
Secondly security might be an issue - if some are to be publicly viewable (such as profile photos for example) but others are not (such as CVs) then placing them in different directories would be easier to manage.
Thirdly, simple admin tasks may be easier if files are split, browsing in a file explorer for example, or managing backups, or modifying the application to split file storage across multiple locations.
There is also the issue of filename conflicts, but if you rename everything to match the database id field (for example) then this wouldn't be an issue.
But at the end of the day it probably depends on volumes and your own preference.
A:
A different table for each file type only becomes relevant if you store other metadata (and therefore, additional columns) for each type of file. If your tables for each file type only contain the same columns (e.g., filename, filetype, dateuploaded, etc) then it would make sense to have them all on one table.
| When is a file just a file? | So, you're writing a web application and you have several areas of the site where the user can upload files. My basic working method for this is to store the actual file on the server, and have a database table that connects the stored filename to the record it relates to.
My question is this: Should there be a different table for each "type" of file? Also, should the files be stored in context-related locations on the server, or all together?
Some examples: user profile photos, job application CVs, related documents on CMS pages, etc.
| [
"From your example, there is an argument for two tables, as you have files that can be associated with two different things.\n\nCVs, photos are associated with a user.\nattachments are associated with a CMS page.\n\nIf you put these in one table, (and you want to allow users to have more than one photo or cv) then you need two link-tables to associate files->users and files->cms_pages. Arguably this implies a HABTM relationship, which is not correct and allows for inconsistent data. \nThe two table approach is slightly cleaner and only allows files to be associated with the correct type of entity with a simple belongsTo relationship.\nBut I don't think there is any \"right\" answer to this question, unless you need to store different types of metadata for different filetypes. \nAlso be sure to store, or be able to calculate, the mimetype for each file so it can be served correctly back to the browser, with the correct HTTP headers.\n",
"From what you've said I would just store files with random (UUID or what-not) filenames in one place. I would then have a 'attachments' table or something that contains references to all your external files. This table would also contain the meta-data for that file, so what type of file it is (picture, CV etc) and so on.\nThere may be hard limits to the number of files in one directory though, depending on what FS you are using.\n",
"There might be various reasons for storing different files in different locations.\nFirstly, a restriction on the number of files in one directory might be a consideration.\nSecondly security might be an issue - if some are to be publicly viewable (such as profile photos for example) but others are not (such as CVs) then placing them in different directories would be easier to manage.\nThirdly, simple admin tasks may be easier if files are split, browsing in a file explorer for example, or managing backups, or modifying the application to split file storage across multiple locations.\nThere is also the issue of filename conflicts, but if you rename everything to match the database id field (for example) then this wouldn't be an issue.\nBut at the end of the day it probably depends on volumes and your own preference.\n",
"A different table for each file type only becomes relevant if you store other metadata (and therefore, additional columns) for each type of file. If your tables for each file type only contain the same columns (e.g., filename, filetype, dateuploaded, etc) then it would make sense to have them all on one table.\n"
] | [
3,
2,
2,
2
] | [] | [] | [
"database_design"
] | stackoverflow_0000017715_database_design.txt |
Q:
Unix subsystem for windows
One of the bullet point features for Windows Vista Enterprise and Ultimate is the Unix subsystem for Windows, which allows you to write posix... stuff? Anyway I'm outa my league talking about it... Anyone use this feature? Or explain it...
I know next to nothing about Unix programming.
A:
It's probably best not to try to use the Posix subsystem for Windows. It was never really complete and is just a useless marketing tick box.
If you're truly interested in programming stuff for Unix, download one of the many Linux distributions (ie. Ubuntu) and VirtualBox. Install and start playing.
A:
You might like Cygwin for having a Linux environment on your windows machine. Otherwise, definitely go for an isolated environment (virtual machines) like the others have suggested.
A:
I don't want to discourage you from trying Linux. But in this context it should be pointed out that Linux is not completely POSIX compliant!
Wikipedia has a list of fully POSIX compliant operating systems
From that list, Solaris is probably the best one to get started with.
But anyway - for most of your POSIX needs Linux should be the best choice (especially for beginners!)
A:
The Posix subsystem in Windows is not only incomplete, but also slower in many cases than the "native" windows functions for the same thing. This is true for I/O for example.
A:
In addition to Cygwin mentioned by another poster you should also consider MinGW.
| Unix subsystem for windows | One of the bullet point features for Windows Vista Enterprize and Ultimate is the Unix subsystem for windows, which allows you to write posix... stuff? Anyway I'm outa my league talking about it... Anyone use this feature? Or explain it...
I know next to nothing about Unix programming.
| [
"It's probably best not to try to use the Posix subsystem for Windows. It was never really complete and is just a useless marketing tick box.\nIf you're truly interested in programming stuff for Unix, download one of the many Linux distributions (ie. Ubuntu) and VirtualBox. Install and start playing.\n",
"You might like Cygwin for having a Linux environment on your windows machine. Otherwise, definitely go for an isolated environment (virtual machines) like the others have suggested.\n",
"I don't want to discourage you from trying linux. But in this context it should be pointed out, that Linux is not completely posix compliant!\nWikipedia has a list of fully posix compliant operating systems\nFrom that list, Solaris is probably the best to get started.\nBut anyway - for most of your posix-needs Linux should be the best choice (especially for beginners!)\n",
"The Posix subsystem in Windows is not only incomplete, but also slower in many cases than the \"native\" windows functions for the same thing. This is true for I/O for example.\n",
"In addition to Cygwin mentioned by another poster you should also consider MinGW.\n"
] | [
4,
4,
1,
0,
0
] | [] | [] | [
"posix",
"windows"
] | stackoverflow_0000017704_posix_windows.txt |
Q:
Delphi resources for existing .NET developer
Can anyone recommend some decent resources for a .NET developer who wishes to get a high level overview of the Delphi language?
We are about to acquire a small business whose main product is developed in Delphi and I want to build up enough knowledge to be able to talk the talk with them.
Books, websites etc all appreciated.
Thanks.
A:
DelphiBasics gives a good overview of basic syntax, library functions etc.
Essential Delphi is a free e-book by Marco Cantu that should give a good overview, also of the VCL
Feel free to ask around here as well, or in the Delphi newsgroups, if you encounter specific issues :)
[edit] @Martin:
There's a free "Turbo" edition available at the Codegear/Embarcadero website. I guess it has some limitations, so you could also try downloading the trial version.
A:
There's also a Delphi wiki
This even has a "Beginning Delphi" page with lots of external links on it. (some of them already mentioned)
A:
http://www.delphifeeds.com/ is a good place to start, it has most news about what is going on in the delphi community.
A:
There are a number of videos by Alister Christie at codegearguru - check them out :)
edit... @Martin, check out the Turbo products at CodeGear
A:
@Martin there is a free version.
Turbo Delphi
If you are comfortable with c# you will see many similarities with Delphi.
I also found the community surrounding the newsgroups to be active and helpful. They have a similar concept to MVPs, called Team B (but as Borland doesn't own them the name may have changed now).
| Delphi resources for existing .NET developer | Can anyone recommend some decent resources for a .NET developer who wishes to get a high level overview of the Delphi language?
We are about acquire a small business whose main product is developed in Delphi and I am wanting to build up enough knowledge to be able to talk the talk with them.
Books, websites etc all appreciated.
Thanks.
| [
"\nDelphiBasics gives a good overview of basic syntax, library functions etc.\nEssential Delphi is a free e-book by Marco Cantu that should give a good overview, also of the VCL\n\nFeel free to ask around here as well, or in the Delphi newsgroups, if you encounter specific issues :)\n\n[edit] @Martin:\n\nThere's a free \"Turbo\" edition available at the Codegear/Embarcadero website. I guess it has some limitations, so you could also try downloading the trial version.\n\n",
"There's also a Delphi wiki\nThis even has a \"Beginning Delphi\" page with lots of external links on it. (some of them already mentioned)\n",
"http://www.delphifeeds.com/ is a good place to start, it has most news about what is going on in the delphi community.\n",
"There are a number of videos by Alister Christie at codegearguru - check them out :)\nedit... @Martin, check out the Turbo products at CodeGear\n",
"@Martin there is a free version.\nTurbo Delphi\nIf you are comfortable with c# you will see many similarities with Delphi.\nI also found the community surrounding the newsgroups to be active and helpful. They have a smilar concept to MVPs they were called Team B (but as Borland doesn't own them the name may have changed now).\n"
] | [
4,
3,
2,
1,
0
] | [] | [] | [
"delphi"
] | stackoverflow_0000013086_delphi.txt |
Q:
Suitable alternative to CryptEncrypt
We have a situation in our product where for a long time some data has been stored in the application's database as a SQL string (choice of MS SQL Server or Sybase SQL Anywhere) which was encrypted via the Windows API function CryptEncrypt. (direct and de-cryptable)
The problem is that CryptEncrypt can produce NULLs in the output, meaning that when it's stored in the database, the string manipulations will at some point truncate the CipherText.
Ideally we'd like to use an algo that will produce CipherText that doesn't contain NULLs as that will cause the least amount of change to the existing databases (changing a column from string to binary and code to deal with binary instead of strings) and just decrypt existing data and re-encrypt with the new algorithm at database upgrade time.
The algorithm doesn't need to be the most secure, as the database is already in a reasonably secure environment (not an open network / the inter-webs) but does need to be better than ROT13 (which I can almost decrypt in my head now!)
edit: btw, any particular reason for changing ciphertext to cyphertext? ciphertext seems more widely used...
A:
Any semi-decent algorithm will end up with a strong chance of generating a NULL value somewhere in the resulting ciphertext.
Why not do something like base-64 encode your resulting binary blob before persisting to the DB? (sample implementation in C++).
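A hedged sketch using the Win32 API itself (CryptBinaryToStringA from Crypt32; CRYPT_STRING_NOCRLF needs a reasonably recent SDK, so on older ones omit it and strip the line breaks yourself):
#include <windows.h>
#include <wincrypt.h>
#include <string>
#include <cstring>
#pragma comment(lib, "crypt32.lib")

// Base64 never emits NULs, so the result is safe in a string column.
// CryptStringToBinaryA performs the reverse conversion when decrypting.
std::string ToBase64(const BYTE* data, DWORD len)
{
    DWORD chars = 0;
    CryptBinaryToStringA(data, len, CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
                         NULL, &chars);            // ask for the required size
    std::string out(chars, '\0');
    CryptBinaryToStringA(data, len, CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
                         &out[0], &chars);         // do the conversion
    out.resize(strlen(out.c_str()));               // trim at the terminating NUL
    return out;
}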
A:
Storing a hash is a good idea. However, please definitely read Jeff's You're Probably Storing Passwords Incorrectly.
A:
That's an interesting route OJ.
We're looking at the feasibility of a non-reversible method (still making sure we don't explicitly retrieve the data to decrypt) e.g. just store a hash to compare on a submission
A:
It seems that the developer handling this is going to wrap the existing encryption with yEnc to preserve the table integrity as the data needs to be retrievable, and this saves all that messy mucking about with infinite-improbab.... uhhh changing column types on entrenched installations.
Cheers Guys
| Suitable alternative to CryptEncrypt | We have a situation in our product where for a long time some data has been stored in the application's database as SQL string (choice of MS SQL server or sybase SQL anywhere) which was encrypted via the Windows API function CryptEncrypt. (direct and de-cryptable)
The problem is that CryptEncrypt can produce NULL's in the output, meaning that when it's stored in the database, the string manipulations will at some point truncate the CipherText.
Ideally we'd like to use an algo that will produce CipherText that doesn't contain NULLs as that will cause the least amount of change to the existing databases (changing a column from string to binary and code to deal with binary instead of strings) and just decrypt existing data and re-encrypt with the new algorithm at database upgrade time.
The algorithm doesn't need to be the most secure, as the database is already in a reasonably secure environment (not an open network / the inter-webs) but does need to be better than ROT13 (which I can almost decrypt in my head now!)
edit: btw, any particular reason for changing ciphertext to cyphertext? ciphertext seems more widely used...
| [
"Any semi-decent algorithm will end up with a strong chance of generating a NULL value somewhere in the resulting ciphertext.\nWhy not do something like base-64 encode your resulting binary blob before persisting to the DB? (sample implementation in C++).\n",
"Storing a hash is a good idea. However, please definitely read Jeff's You're Probably Storing Passwords Incorrectly.\n",
"That's an interesting route OJ.\nWe're looking at the feasability of a non-reversable method (still making sure we don't explicitly retrieve the data to decrypt) e.g. just store a Hash to compare on a submission\n",
"It seems that the developer handling this is going to wrap the existing encryption with yEnc to preserve the table integrity as the data needs to be retrievable, and this save all that messy mucking about with infinite-improbab.... uhhh changing column types on entrenched installations.\nCheers Guys\n"
] | [
3,
1,
0,
0
] | [] | [] | [
"c++",
"encryption",
"winapi"
] | stackoverflow_0000017670_c++_encryption_winapi.txt |
Q:
Java Coding standard / best practices - naming convention for break/continue labels
Sometimes a labeled break or continue can make code a lot more readable.
OUTERLOOP: for ( ;/*stuff*/; ) {
//...lots of code
if ( isEnough() ) break OUTERLOOP;
//...more code
}
I was wondering what the common convention for the labels was. All caps? first cap?
A:
I don't understand where this "don't use labels" rule comes from. When doing non-trivial looping logic, the test to break or continue isn't always neatly at the end of the surrounding block.
outer_loop:
for (...) {
// some code
for (...) {
// some code
if (...)
continue outer_loop;
// more code
}
// more code
}
Yes, cases like this do happen all the time. What are people suggesting I use instead? A boolean condition like this?
for (...) {
// some code
boolean continueOuterLoop = false;
for (...) {
// some code
if (...) {
continueOuterLoop = true;
break;
}
// more code
}
if (continueOuterLoop)
continue;
// more code
}
Yuck! Refactoring it as a method doesn't alleviate that either:
boolean innerLoop (...) {
for (...) {
// some code
if (...) {
return true;
}
// more code
}
return false;
}
for (...) {
// some code
if (innerLoop(...))
continue;
// more code
}
Sure it's a little prettier, but it's still passing around a superfluous boolean. And if the inner loop modified local variables, refactoring it into a method isn't always the correct solution.
So why are you all against labels? Give me some solid reasons, and practical alternatives for the above case.
A:
If you have to use them use capitals, this draws attention to them and singles them out from being mistakenly interpreted as "Class" names. Drawing attention to them has the additional benefit of catching someone's eye that will come along and refactor your code and remove them. ;)
A:
The convention is to avoid labels altogether.
There are very, very few valid reasons to use a label for breaking out of a loop. Breaking out is ok, but you can remove the need to break at all by modifying your design a little. In the example you have given, you would extract the 'Lots of code' sections and put them in individual methods with meaningful names.
for ( ;/*stuff*/; )
{
lotsOfCode();
if ( !isEnough() )
{
moreCode();
}
}
Edit: having seen the actual code in question (over here), I think the use of labels is probably the best way to make the code readable. In most cases using labels is the wrong approach, in this instance, I think it is fine.
A:
Sun's Java code style seems to prefer naming labels in the same way as variables, meaning camel case with the first letter in lower case.
A:
The convention I've most seen is simply camel case, like a method name...
myLabel:
but I've also seen labels prefixed with an underscore
_myLabel:
or with lab...
labSomething:
You can probably sense though from the other answers that you'll be hard-pushed to find a coding standard that says anything other than 'Don't use labels'. The answer then I guess is that you should use whatever style makes sense to you, as long as it's consistent.
A:
wrt sadie's code example:
You gave
outerloop:
for (...) {
// some code
for (...) {
// some code
if (...)
continue outerloop;
// more code
}
// more code
}
As an example. You make a good point. My best guess would be:
public void lookMumNoLabels() {
for (...) {
// some code
doMoreInnerCodeLogic(...);
}
}
private void doMoreInnerCodeLogic(...) {
for (...) {
// some code
if (...) return;
}
}
But there would be examples where that kind of refactoring doesn't sit correctly with whatever logic you're doing.
A:
As labels are so rarely useful, it appears that there is no clear convention. The Java language specification has one example with labels, and there they are in non_cap.
But since they are so rare, in my opinion it is best to think twice whether they are really the right tool.
And if they are the right tool, make them all caps so that other developers (or yourself later on) recognize them as something unusual right away (as Craig already pointed out).
A:
The convention/best practice would still be not to use them at all and to refactor the code so that it is more readable using extract method.
A:
They are kind of the goto of Java - not sure if C# has them. I have never used them in practice, I can't think of a case where avoiding them wouldn't result in much more readable code.
But if you have to- I think all caps is ok. Most people won't use labelled breaks, so when they see the code, the caps will jump out at them and will force them to realise what is going on.
A:
I know, I should not use labels.
But just assume I have some code that could gain a lot in readability from labeled breaks: how do I format them?
Mo, your premise is wrong.
The question shouldn't be 'how do I format them?'
Your question should be 'I have code that has a large amount of logic inside loops - how do I make it more readable?'
The answer to that question is to move the code into individual, well named functions. Then you don't need to label the breaks at all.
| Java Coding standard / best practices - naming convention for break/continue labels | Sometimes a labeled break or continue can make code a lot more readable.
OUTERLOOP: for ( ;/*stuff*/; ) {
//...lots of code
if ( isEnough() ) break OUTERLOOP;
//...more code
}
I was wondering what the common convention for the labels was. All caps? first cap?
| [
"I don't understand where this \"don't use labels\" rule comes from. When doing non-trivial looping logic, the test to break or continue isn't always neatly at the end of the surrounding block.\nouter_loop:\nfor (...) {\n // some code\n for (...) {\n // some code\n if (...)\n continue outer_loop;\n // more code\n }\n // more code\n}\n\nYes, cases like this do happen all the time. What are people suggesting I use instead? A boolean condition like this?\nfor (...) {\n // some code\n boolean continueOuterLoop = false;\n for (...) {\n // some code\n if (...) {\n continueOuterLoop = true;\n break;\n }\n // more code\n }\n if (continueOuterLoop)\n continue;\n // more code\n}\n\nYuck! Refactoring it as a method doesn't alleviate that either:\nboolean innerLoop (...) {\n for (...) {\n // some code\n if (...) {\n return true;\n }\n // more code\n }\n return false;\n}\n\nfor (...) {\n // some code\n if (innerLoop(...))\n continue;\n // more code\n}\n\nSure it's a little prettier, but it's still passing around a superfluous boolean. And if the inner loop modified local variables, refactoring it into a method isn't always the correct solution.\nSo why are you all against labels? Give me some solid reasons, and practical alternatives for the above case.\n",
"If you have to use them use capitals, this draws attention to them and singles them out from being mistakenly interpreted as \"Class\" names. Drawing attention to them has the additional benefit of catching someone's eye that will come along and refactor your code and remove them. ;)\n",
"The convention is to avoid labels altogether.\nThere are very, very few valid reasons to use a label for breaking out of a loop. Breaking out is ok, but you can remove the need to break at all by modifying your design a little. In the example you have given, you would extract the 'Lots of code' sections and put them in individual methods with meaningful names. \nfor ( ;/*stuff*/; ) \n{\n lotsOfCode();\n\n if ( !isEnough() )\n {\n moreCode();\n }\n}\n\n\nEdit: having seen the actual code in question (over here), I think the use of labels is probably the best way to make the code readable. In most cases using labels is the wrong approach, in this instance, I think it is fine.\n",
"Sun's Java code style seem to prefer naming labels in the same way as variables, meaning camel case with the first letter in lower case.\n",
"The convention I've most seen is simply camel case, like a method name...\nmyLabel:\n\nbut I've also seen labels prefixed with an underscore\n_myLabel:\n\nor with lab...\nlabSomething:\n\nYou can probably sense though from the other answers that you'll be hard-pushed to find a coding standard that says anything other than 'Don't use labels'. The answer then I guess is that you should use whatever style makes sense to you, as long as it's consistent.\n",
"wrt sadie's code example:\nYou gave \nouterloop:\nfor (...) {\n // some code\n for (...) {\n // some code\n if (...)\n continue outerloop;\n // more code\n }\n // more code\n}\n\nAs an example. You make a good point. My best guess would be:\npublic void lookMumNoLabels() {\n for (...) {\n // some code\n doMoreInnerCodeLogic(...);\n }\n}\n\nprivate void doMoreInnerCodeLogic(...) {\n for (...) {\n // some code\n if (...) return;\n }\n}\n\nBut there would be examples where that kind of refactoring doesn't sit correctly with whatever logic you're doing.\n",
"As labels are so rarely useful, it appears, that there is no clear convention. The Java language specification has one example with labels and they are in non_cap.\nBut since they are so rare, in my opinion it is best, to think twice whether they are really the right tool.\nAnd if they are the right tool, make them all caps so that other developers (or yourself later on) realize them as something unusual right away. (as Craig already pointed out)\n",
"The convetion/best practise would still be not to use them at all and to refactor the code so that is more readable using extract as method.\n",
"They are kind of the goto of Java - not sure if C# has them. I have never used them in practice, I can't think of a case where avoiding them wouldn't result in much more readable code. \nBut if you have to- I think all caps is ok. Most people won't use labelled breaks, so when they see the code, the caps will jump out at them and will force them to realise what is going on. \n",
"\nI know, I should not use labels.\nBut just assume, I have some code, that could gain a lot in readability from labeled breaks, how do I format them.\n\nMo, your premise is wrong.\nThe question shouldn't be 'how do I format them?'\nYour question should be 'I have code that has a large amount of logic inside loops - how do I make it more readable?'\nThe answer to that question is to move the code into individual, well named functions. Then you don't need to label the breaks at all.\n"
] | [
41,
20,
15,
10,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"convention",
"java",
"label"
] | stackoverflow_0000015481_convention_java_label.txt |
Q:
Repository organisation
When I first started using revision control systems like CVS and SVN, I didn't really understand the concepts of the "trunk", branching, merging and tagging. I'm now starting to understand these concepts, and really get the importance and power behind them.
So, I'm starting to do it properly. Or so I think... This is what I understand so far: The latest release/stable version of your code should sit in /trunk/ while beta versions or bleeding edge versions sit inside the /branches/ directory as different directories for each beta release, and then merged into the trunk when you release.
Is this too simplistic a view on things? What repository layouts do you guys recommend? If it makes a difference, I'm using Subversion.
A:
See these two questions on SO for more information:
What does branch, tag and trunk really mean?
Subversion question
A:
What I do and normally see as a standard is:
The trunk should contain your main line of development, your unstable version.
You should create release branches for your releases.
Something like:
/trunk (here your are developing version 2.0)
/branches/RB-1.0 (this is the release branch for 1.0)
/branches/RB-1.5
When you find a bug in 1.5, you fix it in the RB branch and then merge to the trunk.
I also recommend this book.
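In Subversion terms, a sketch of that workflow (the repository URL and revision number are made up):
# branch the 1.5 release off the trunk
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/RB-1.5 \
         -m "Creating release branch RB-1.5"
# fix the bug on the branch, then merge that change (say r1234) back to the trunk
svn merge -c 1234 http://svn.example.com/repo/branches/RB-1.5 trunk-working-copy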
A:
Eric has an excellent series of articles on Source Control use and organisational best practices.
Chapter 7 deals with branches (and yes, it recommends the /trunk/ and /branches/ directories you suggest).
A:
I have used Perforce for a long time, and so my comments may be a little Perforce-centric, but the basic principles apply to any SCM software that has half decent branching.
I'm a very strong believer in using branched development practices. I have a "main" (aka "mainline") that represents the codebase from now to eternity. The aim is that this is, most of the time, stable and, if push came to shove, you could cut a release anytime that would reflect the current functionality of the system. Those pesky sales guys keep asking....
Developments happen in branches that are branched from MAIN (normally - occasionally you may want to branch from an existing dev branch). Integrate from MAIN to your dev branches as often as you can, to stop things diverging too much - or you can simply budget for a bigger integration period later. Only integrate your arse kicking new feature to MAIN when you are sure that it will go out in a forthcoming release.
Finally, you have a RELEASE line, which has the option of different branches for different releases. There are some choices depending on the labelling capabilities of your SCM software, and how different major/minor revisions are likely to be. So you may opt, for example, for a release branch for every point release, or only for major rev number. Your mileage may vary.
Generally, branch from MAIN to release as late as possible. Bugfixes and last minute changes can either go straight into RELEASE for later integration to MAIN, or into MAIN for immediate integration back up. There's no hard and fast rule - do what works best. If, however, you have changes that may be submitted to MAIN (e.g. from a dev branch, or "little tweaks" by someone on MAIN), then do the former. It depends on how your team works, what your release cycles are etc.
E.g. I would have something like this:
//MYPROJECT/MAIN/... - the top level folder for a complete build of all the product in main.
//MYPROJECT/DEV/ArseKickingFeature/... - a branch from MAIN where developers work.
//MYPROJECT/RELEASE/1.0/...
//MYPROJECT/RELEASE/2.0/...
A non-trivial project will probably have a number of DEV branches active at once. When a development has been integrated into MAIN so that it is now part of the core project, kill off the old DEV branch as soon as you can. Many engineers will treat a DEV branch as their own personal space, and reuse it for different features over time. Discourage this.
If, after release, you have to fix a bug, then do that in the corresponding release branch. If the bug has been previously fixed in MAIN, then integrate across, unless the code has changed so much in MAIN the fix is different.
What really differentiates the codelines is the policies you use to manage them. For example, what tests get run, who reviews pre/post a change, what action happens if a build breaks. Typically policies - and therefore overhead - are strongest in release branches, and weakest in DEV. There's an article here that goes through some scenarios, and links to other useful things.
Finally, I recommend going with a simple structure to start with, and only introduce extra dev & release ones as needed.
Hope that helps, and is not stating-the-bleedin'-obvious too much.
| Repository organisation | When I first started using revision control systems like CVS and SVN, I didn't really understand the concepts of the "trunk", branching, merging and tagging. I'm now starting to understand these concepts, and really get the importance and power behind them.
So, I'm starting to do it properly. Or so I think... This is what I understand so far: The latest release/stable version of your code should sit in /trunk/ while beta versions or bleeding edge versions sit inside the /branches/ directory as different directories for each beta release, and then merged into the trunk when you release.
Is this too simplistic a view on things? What repository layouts do you guys recommend? If it makes a difference, I'm using Subversion.
| [
"See these two questions on SO for more information:\n\nWhat does branch, tag and trunk really mean?\nSubversion question\n\n",
"What I do and normally see as a standard is:\nThe trunk should contain your main line of development, your unstable version.\nYou should create release branches for your releases.\nSomething like:\n/trunk (here your are developing version 2.0)\n/branches/RB-1.0 (this is the release branch for 1.0)\n/branches/RB-1.5 \nWhen you find a bug in 1.5, you fix it in the RB branch and then merge to the trunk.\nI also recommend this book.\n",
"Eric has an excellent series of articles on Source Control use and organisational best practices.\nChapter 7 deals with branches (and yes, it recommends the /trunk/ and /branches/ directories you suggest).\n",
"I have used Perforce for a long time, and so my comments may be a little Perforce-centric, but the basic principles apply to any SCM software that has half decent branching.\nI'm a very strong believer in using branched development practices. I have a \"main\" (aka \"mainline\") that represents the codebase from now to eternity. The aim is that this is, most of the time, stable and, if push came to shove, you could cut a release anytime that would reflect the current functionality of the system. Those pesky sales guys keep asking....\nDevelopments happen in branches that are branched from MAIN (normally - occasionally you may want to branch from an existing dev branch). Integrate from MAIN to your dev branches as often as you can, to stop things diverging too much - or you can simply budget for a bigger integration period later. Only integrate your arse kicking new feature to MAIN when you are sure that it will go out in a forthcoming release.\nFinally, you have a RELEASE line, which the option of different branches for different releases. There's some choices depending on the labelling capabilities of your SCM software,and how different major/minor revisions are likely to be. So you may opt, for example, for a release branch for every point release, or only for major rev number. Your mileage may vary.\nGenerally, branch from MAIN to release as late as possible. Bugfixes and last minute changes can either go straight into RELEASE for later integration to MAIN, or into MAIN for immediate integration back up. There's no hard and fast rule - do what works best. If, however, you have changes that may be submitted to MAIN (e.g. from a dev branch, or \"little tweaks\" by someone on MAIN), then do the former. It depends on how your team works, what your release cycles are etc.\nE.g. I would have something like this:\n//MYPROJECT/MAIN/... - the top level folder for a complete build of all the product in main.\n//MYPROJECT/DEV/ArseKickingFeature/... - a branch from MAIN where developers work.\n//MYPROJECT/RELEASE/1.0/...\n//MYPROJECT/RELEASE/2.0/...\n\nA non-trivial project will probably have a number of DEV branches active at once. When a development has been integrated into MAIN so that it is now part of the core project, kill off the old DEV branch as soon as you can. Many engineers will treat a DEV branch as their own personal space, and reuse it for different features over time. Discourage this.\nIf, after release, you have to fix a bug, then do that in the corresponding release branch. If the bug has been previously fixed in MAIN, then integrate across, unless the code has changed so much in MAIN the fix is different.\nWhat really differentiates the codelines is the policies you use to manage them. For example, what tests get run, who reviews pre/post a change, what action happens if a build breaks. Typically policies - and therefore overhead - are strongest in release branches, and weakest in DEV. There's an article here that goes through some scenarios, and links to other useful things.\nFinally, I recommend going with a simple structure to start with, and only introduce extra dev & release ones as needed. \nHope that helps, and is not stating-the-bleedin'-obvious too much.\n"
] | [
5,
5,
1,
1
] | [] | [] | [
"versioning"
] | stackoverflow_0000017735_versioning.txt |
Q:
Default Internet connection on Dual LAN Workstation
I know this is not programming directly, but it's regarding a development workstation I'm setting up.
I've got a Windows Server 2003 machine that needs to be on two LAN segments at the same time. One of them is a 10.17.x.x LAN and the other is 10.16.x.x
The problem is that I don't want to be using up the bandwidth on the 10.16.x.x network for internet traffic, etc (this network is basically only for internal stuff, though it does have internet access) so I would like the system to use the 10.17.x.x connection for anything that is external to the LAN (and for anything on 10.17.x.x of course, and to only use the 10.16.x.x connection for things that are on that specific LAN.
I've tried looking into the windows "route" command but it's fairly confusing and won't seem to let me delete routes that I believe are interfering with what I want it to do. Is there a better way of doing this? Any good software for segmenting your LAN access?
A:
I'm no network expert but I have fiddled with the route command a number of times...
route add 0.0.0.0 MASK 0.0.0.0 <address of gateway on 10.17.x.x net>
Will route all default traffic through the 10.17.x.x gateway, if you find that it still routes through the other interface, you should make sure that the new rule has a lower metric than the existing routes. Do this by adding METRIC 1 for example to the end of the line above.
You could also adjust the metric in the Advanced TCP/IP Settings window of the 10.17.x.x interface, unticking the Automatic Metric checkbox and setting the value to something low, like 1 or 2.
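For example, combining both suggestions (the gateway address 10.17.0.1 is hypothetical):
rem default route via the 10.17.x.x gateway, with a low metric so it wins
route add 0.0.0.0 MASK 0.0.0.0 10.17.0.1 METRIC 1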
A:
If you don't move your network cables around and can assign yourself a static IP address on the 10.16.x.x network, you can refrain from assigning a gateway address on that network. If there is no gateway, internet packets will not be routed on that interface.
If you use DHCP, create a static record so it recognizes your MAC address and does not provide a gateway IP address.
As for using advanced windows routing, the route you are looking for is the 0.0.0.0 route (default route). The important number is the metric value, which is the cost for the route, where the lower metric tends to be used first. You can set the metric at the interface level directly in the GUI.
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/i/tr/cms/contentPics/tcpip-F.gif
I believe if you set the interface metric to a high value on the 10.16.x.x interface, it will not be used as a gateway.
Personally I use the method where I refrain from defining a gateway IP.
| Default Internet connection on Dual LAN Workstation | I know this is not programming directly, but it's regarding a development workstation I'm setting up.
I've got a Windows Server 2003 machine that needs to be on two LAN segments at the same time. One of them is a 10.17.x.x LAN and the other is 10.16.x.x
The problem is that I don't want to be using up the bandwidth on the 10.16.x.x network for internet traffic, etc (this network is basically only for internal stuff, though it does have internet access) so I would like the system to use the 10.17.x.x connection for anything that is external to the LAN (and for anything on 10.17.x.x of course, and to only use the 10.16.x.x connection for things that are on that specific LAN.
I've tried looking into the windows "route" command but it's fairly confusing and won't seem to let me delete routes that I believe are interfering with what I want it to do. Is there a better way of doing this? Any good software for segmenting your LAN access?
| [
"I'm no network expert but I have fiddled with the route command a number of times...\nroute add 0.0.0.0 MASK 0.0.0.0 <address of gateway on 10.17.x.x net>\n\nWill route all default traffic through the 10.17.x.x gateway, if you find that it still routes through the other interface, you should make sure that the new rule has a lower metric than the existing routes. Do this by adding METRIC 1 for example to the end of the line above.\nYou could also adjust the metric in the Advanced TCP/IP Settings window of the 10.17.x.x interface, unticking the Automatic Metric checkbox and setting the value to something low, like 1 or 2.\n",
"If you don't move your network cables around and can assign yourself a static IP address on the 10.16.x.x network, you can refrain from assigning a gateway address on that network. If there is no gateway, internet packets will not be routed on that interface.\nIf you use DHCP, static record to recognize your MAC address and not provide a gateway IP address.\nAs for using advanced windows routing, the route you are looking for is the 0.0.0.0 route (default route). The important number is the metric value, which is the cost for the route, where the lower metric tends to be used first. You can set the metric at the interface level directly in the GUI.\nhttps://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/i/tr/cms/contentPics/tcpip-F.gif\nI believe if you set the interface metric to a high value on the 10.16.x.x interface, it will not be used as a gateway.\nPersonally I use the method where I refrain from defining a gateway IP.\n"
] | [
2,
0
] | [] | [] | [
"networking",
"windows_server_2003"
] | stackoverflow_0000017785_networking_windows_server_2003.txt |
Q:
Getting the Remote Name Address (not IP)
I wanted to show the user's Name Address (see www.ipchicken.com), but the only thing I can find is the IP Address. I tried a reverse lookup, but that didn't work either:
IPAddress ip = IPAddress.Parse(this.lblIp.Text);
string hostName = Dns.GetHostByAddress(ip).HostName;
this.lblHost.Text = hostName;
But HostName is the same as the IP address.
Who knows what I need to do?
Thanks.
Gab.
A:
Edit of my previous answer.
Try (in vb.net):
Dim sTmp As String
Dim ip As IPHostEntry
sTmp = MaskedTextBox1.Text
Dim ipAddr As IPAddress = IPAddress.Parse(sTmp)
ip = Dns.GetHostEntry(ipAddr)
MaskedTextBox2.Text = ip.HostName
Dns.resolve appears to be obsolete in later versions of .Net. As stated here before I believe the issue is caused by your IP address not having a fixed name or by it having multiple names. The example above works with Google addresses, but not with an address we use that has a couple of names associated with it.
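For anyone working from the question's C#, a rough equivalent using the same non-obsolete API would be (untested sketch):
IPAddress ipAddr = IPAddress.Parse(this.lblIp.Text);
IPHostEntry entry = Dns.GetHostEntry(ipAddr); // replaces the obsolete GetHostByAddress
this.lblHost.Text = entry.HostName;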
A:
You need the Dns.Resolve() method from System.Net
See this article
A:
Stupid me... The code I posted was 100% valid and working... But 10 lines lower I replaced the this.lblHost.Text with another value, which happened to be the ip address.
Sorry.
A:
Also remember that reverse lookup won't always give the same address as the one used in forward DNS lookup.
For example for google.com I get ip 64.233.167.99
but reverse dns lookup for that IP returns py-in-f99.google.com
A:
Not all IP addresses need to have hostnames. I think that's what is happening in your case. Try it out with more well-known IP/hostname pairs eg:
Name: google.com Address: 72.14.207.99
Name: google.com Address:
64.233.187.99
Name: google.com Address:
64.233.167.99
...I might just be wrong
A:
A lot of users have the same shared IP address, so you will not be able to find their hostnames. And a lot of users won't necessarily have DNS records in public DNS for the IPs they are coming from as well.
| Getting the Remote Name Address (not IP) | I wanted to show the user's Name Address (see www.ipchicken.com), but the only thing I can find is the IP Address. I tried a reverse lookup, but that didn't work either:
IPAddress ip = IPAddress.Parse(this.lblIp.Text);
string hostName = Dns.GetHostByAddress(ip).HostName;
this.lblHost.Text = hostName;
But HostName is the same as the IP address.
Who knows what I need to do?
Thanks.
Gab.
| [
"Edit of my previous answer. \nTry (in vb.net): \n Dim sTmp As String\n Dim ip As IPHostEntry\n\n sTmp = MaskedTextBox1.Text\n Dim ipAddr As IPAddress = IPAddress.Parse(sTmp)\n ip = Dns.GetHostEntry(ipAddr)\n MaskedTextBox2.Text = ip.HostName\n\nDns.resolve appears to be obsolete in later versions of .Net. As stated here before I believe the issue is caused by your IP address not having a fixed name or by it having multiple names. The example above works with Google addresses, but not with an address we use that has a couple of names associated with it. \n",
"You need the Dns.Resolve() method from System.Net\nSee this article\n",
"Stupid me... The code is posted was 100% valid and working... But 10 lines lower I replaced the this.lblHost.Text with another value, which happened to be the ip address.\nSorry.\n",
"Also remember that reverse lookup won't allways give the same address as the one used in forward DNS lookup.\n\nFor example for google.com I get ip 64.233.167.99\nbut reverse dns lookup for that IP returns py-in-f99.google.com \n",
"Not all IP addresses need to have hostnames. I think that's what is happening in your case. Try it ouy with more well-known IP/hostname pairs eg:\n\nName: google.com Address: 72.14.207.99\nName: google.com Address:\n 64.233.187.99\nName: google.com Address:\n 64.233.167.99\n\n...I might just be wrong\n",
"A lot of users have the same shared IP address, so you will not be able to find their hostnames. And a lot of users won't necessarily have DNS records in public DNS for the IPs they are coming from as well.\n"
] | [
3,
2,
2,
1,
0,
0
] | [] | [] | [
".net",
"asp.net"
] | stackoverflow_0000017795_.net_asp.net.txt |
Q:
Recommendation for javascript form validation library
Any recommendations for a javascript form validation library. I could try and roll my own (but I'm not very good at javascript). Needs to support checking for required fields, and preferably regexp validation of fields.
A:
I am about to start implementing javascript validation in my forms using jQuery Validation.
I think that StackOverflow uses this jQuery plugin as well. It seems to be a very mature validation library, however it does build on top of jQuery, so it might not fit for you.
Like Tom said, don't forget that server side validation.
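As a rough sketch of what using it looks like (the form id and field names are made up; assumes jQuery and the plugin are already loaded):
$(document).ready(function() {
    $("#myForm").validate({
        rules: {
            name: "required", // simple required field
            email: { required: true, email: true }
        }
    });
    // regular-expression checks can be added with $.validator.addMethod(...)
});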
A:
Personally I just rolled my own because it was much simpler to integrate with my error handling system and how I wanted it displayed on the site. 99% of the time you only care about a couple of things, required fields and comparing fields.
A:
I've used this library for a couple of personal projects. It's pretty good, though I have had to make my own modifications to it a couple of times - nothing major, though, and it's easy enough to do so.
I'm sure you already do this, but also validate all of your information on the server-side, as well. Client-side-only validation is rarely, if ever, a good idea.
| Recommendation for javascript form validation library | Any recommendations for a javascript form validation library. I could try and roll my own (but I'm not very good at javascript). Needs to support checking for required fields, and preferably regexp validation of fields.
| [
"I am about to start implementing javascript validation in my forms using jQuery Validation.\nI think that StackOverflow users this jQuery plugin as well. It seems to be a very mature validation library, however it does build on top of jQuery, so it might not fit for you.\nLike Tom said, don't forget that server side validation.\n",
"Personally I just rolled my own because it was much simpler to integrate with my error handling system and how I wanted it displayed on the site. 99% of the time you only care about a couple of things, required fields and comparing fields. \n",
"I've used this library for a couple of personal projects. It's pretty good, though I have had to make my own modifications to it a couple of times - nothing major, though, and it's easy enough to do so.\nI'm sure you already do this, but also validate all of your information on the server-side, as well. Client-side-only validation is rarely, if ever, a good idea. \n"
] | [
7,
3,
1
] | [] | [] | [
"forms",
"javascript",
"validation"
] | stackoverflow_0000017817_forms_javascript_validation.txt |
Q:
Anyone know a quick way to get to custom attributes on an enum value?
This is probably best shown with an example. I have an enum with attributes:
public enum MyEnum {
[CustomInfo("This is a custom attrib")]
None = 0,
[CustomInfo("This is another attrib")]
ValueA,
[CustomInfo("This has an extra flag", AllowSomething = true)]
ValueB,
}
I want to get to those attributes from an instance:
public CustomInfoAttribute GetInfo( MyEnum enumInput ) {
Type typeOfEnum = enumInput.GetType(); //this will be typeof( MyEnum )
//here is the problem, GetField takes a string
// the .ToString() on enums is very slow
FieldInfo fi = typeOfEnum.GetField( enumInput.ToString() );
//get the attribute from the field
return fi.GetCustomAttributes( typeof( CustomInfoAttribute ), false ).
FirstOrDefault() //Linq method to get first or null
as CustomInfoAttribute; //use as operator to convert
}
As this is using reflection I expect some slowness, but it seems messy to convert the enum value to a string (which reflects the name) when I already have an instance of it.
Does anyone have a better way?
A:
This is probably the easiest way.
A quicker way would be to Statically Emit the IL code using Dynamic Method and ILGenerator. Although I've only used this to GetPropertyInfo, but can't see why you couldn't emit CustomAttributeInfo as well.
For example code to emit a getter from a property
public delegate object FastPropertyGetHandler(object target);
private static void EmitBoxIfNeeded(ILGenerator ilGenerator, System.Type type)
{
if (type.IsValueType)
{
ilGenerator.Emit(OpCodes.Box, type);
}
}
public static FastPropertyGetHandler GetPropertyGetter(PropertyInfo propInfo)
{
// generates a dynamic method to generate a FastPropertyGetHandler delegate
DynamicMethod dynamicMethod =
new DynamicMethod(
string.Empty,
typeof (object),
new Type[] { typeof (object) },
propInfo.DeclaringType.Module);
ILGenerator ilGenerator = dynamicMethod.GetILGenerator();
// loads the object into the stack
ilGenerator.Emit(OpCodes.Ldarg_0);
// calls the getter
ilGenerator.EmitCall(OpCodes.Callvirt, propInfo.GetGetMethod(), null);
// creates code for handling the return value
EmitBoxIfNeeded(ilGenerator, propInfo.PropertyType);
// returns the value to the caller
ilGenerator.Emit(OpCodes.Ret);
// converts the DynamicMethod to a FastPropertyGetHandler delegate
// to get the property
FastPropertyGetHandler getter =
(FastPropertyGetHandler)
dynamicMethod.CreateDelegate(typeof(FastPropertyGetHandler));
return getter;
}
A:
I generally find reflection to be quite speedy as long as you don't dynamically invoke methods.
Since you are just reading the Attributes of an enum, your approach should work just fine without any real performance hit.
And remember that you generally should try to keep things simple to understand. Over engineering this just to gain a few ms might not be worth it.
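If profiling ever does show the lookup hurting, a cheap middle ground between raw reflection and IL emission is to cache the result per enum value, so the reflection cost is paid only once. A minimal sketch reusing the question's GetInfo method (needs System.Collections.Generic; not thread-safe as written):
private static readonly Dictionary<MyEnum, CustomInfoAttribute> infoCache =
    new Dictionary<MyEnum, CustomInfoAttribute>();

public static CustomInfoAttribute GetInfoCached(MyEnum enumInput)
{
    CustomInfoAttribute attrib;
    if (!infoCache.TryGetValue(enumInput, out attrib))
    {
        attrib = GetInfo(enumInput); // reflection runs only on the first call per value
        infoCache[enumInput] = attrib;
    }
    return attrib;
}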
| Anyone know a quick way to get to custom attributes on an enum value? | This is probably best shown with an example. I have an enum with attributes:
public enum MyEnum {
[CustomInfo("This is a custom attrib")]
None = 0,
[CustomInfo("This is another attrib")]
ValueA,
[CustomInfo("This has an extra flag", AllowSomething = true)]
ValueB,
}
I want to get to those attributes from an instance:
public CustomInfoAttribute GetInfo( MyEnum enumInput ) {
Type typeOfEnum = enumInput.GetType(); //this will be typeof( MyEnum )
//here is the problem, GetField takes a string
// the .ToString() on enums is very slow
FieldInfo fi = typeOfEnum.GetField( enumInput.ToString() );
//get the attribute from the field
return fi.GetCustomAttributes( typeof( CustomInfoAttribute ), false ).
FirstOrDefault() //Linq method to get first or null
as CustomInfoAttribute; //use as operator to convert
}
As this is using reflection I expect some slowness, but it seems messy to convert the enum value to a string (which reflects the name) when I already have an instance of it.
Does anyone have a better way?
| [
"This is probably the easiest way.\nA quicker way would be to Statically Emit the IL code using Dynamic Method and ILGenerator. Although I've only used this to GetPropertyInfo, but can't see why you couldn't emit CustomAttributeInfo as well. \nFor example code to emit a getter from a property\npublic delegate object FastPropertyGetHandler(object target); \n\nprivate static void EmitBoxIfNeeded(ILGenerator ilGenerator, System.Type type)\n{\n if (type.IsValueType)\n {\n ilGenerator.Emit(OpCodes.Box, type);\n }\n}\n\npublic static FastPropertyGetHandler GetPropertyGetter(PropertyInfo propInfo)\n{\n // generates a dynamic method to generate a FastPropertyGetHandler delegate\n DynamicMethod dynamicMethod =\n new DynamicMethod(\n string.Empty, \n typeof (object), \n new Type[] { typeof (object) },\n propInfo.DeclaringType.Module);\n\n ILGenerator ilGenerator = dynamicMethod.GetILGenerator();\n // loads the object into the stack\n ilGenerator.Emit(OpCodes.Ldarg_0);\n // calls the getter\n ilGenerator.EmitCall(OpCodes.Callvirt, propInfo.GetGetMethod(), null);\n // creates code for handling the return value\n EmitBoxIfNeeded(ilGenerator, propInfo.PropertyType);\n // returns the value to the caller\n ilGenerator.Emit(OpCodes.Ret);\n // converts the DynamicMethod to a FastPropertyGetHandler delegate\n // to get the property\n FastPropertyGetHandler getter =\n (FastPropertyGetHandler) \n dynamicMethod.CreateDelegate(typeof(FastPropertyGetHandler));\n\n\n return getter;\n}\n\n",
"I generally find reflection to be quite speedy as long as you don't dynamically invoke methods.\nSince you are just reading the Attributes of an enum, your approach should work just fine without any real performance hit.\nAnd remember that you generally should try to keep things simple to understand. Over engineering this just to gain a few ms might not be worth it.\n"
] | [
11,
7
] | [] | [] | [
".net",
"attributes",
"c#",
"enums",
"reflection"
] | stackoverflow_0000017772_.net_attributes_c#_enums_reflection.txt |
Q:
Convert an asp.net application to IIS7 integrated mode
What steps I need to perform in order to convert asp.net 2 application from IIS7 classic to integrated mode?
A:
Here is a process:
Rick Strahl's blog
| Convert an asp.net application to IIS7 integrated mode | What steps I need to perform in order to convert asp.net 2 application from IIS7 classic to integrated mode?
| [
"Here is a process:\nRick Strahl's blog\n"
] | [
5
] | [
"Nothing really. ASP.NET 2.0 applications will run just as they have in IIS 6.0. If you want to take advantage of any of the new features then you just need to update your code. But unless you are changing the structure of the header of the response or intercepting requests for other applications you probably will not need to do anything.\n"
] | [
-3
] | [
".net_2.0",
"asp.net",
"iis_7"
] | stackoverflow_0000010782_.net_2.0_asp.net_iis_7.txt |
Q:
Large, Complex Objects as a Web Service Result
Hello again ladies and gents!
OK, following on from my other question on ASP.NET Web Service Results, Proxy Classes and Type Conversion. I've come to a part in my project where I need to get my thinking cap on.
Basically, we have a large, complex custom object that needs to be returned from a Web Service and consumed in the client application.
Now, based on the previous discussion, we know this is going to then take the form of the proxy class(es) as the return type. To overcome this, we need to basically copy the properties from one to the other.
In this case, that is something that I would really, really, really! like to avoid!
So, it got me thinking, how else could we do this?
My current thoughts are to enable the object for complete serialization to XML and then return the XML as a string from the Web Service. We then de-serialize at the client. This will mean a fair bit of attribute decorating, but at least the code at both endpoints will be light, namely by just using the .NET XML Serializer.
What are your thoughts on this?
A:
The .Net XML (de)serialisation is pretty nicely implemented. At first thought, I don't think this is a bad idea at all.
If the two applications import the same C# class(es) definition(s), then this is a relatively nice way of getting copy-constructor behaviour for free. If the class structure changes, then everything will work when both sides get the new class definition, without needing to make any additional changes on the web-service consumption/construction side.
There's a slight overhead in marshalling and demarshalling the XML, but that is probably dwarfed by the overhead of the remote web service call. .Net XML serialisation is well understood by most programmers and should produce an easy to maintain solution.
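To make the approach concrete, the round trip would look roughly like this (MyComplexType stands in for the shared class; needs System.Xml.Serialization and System.IO):
XmlSerializer serializer = new XmlSerializer(typeof(MyComplexType));

// service side: serialise the object to a string for the return value
StringWriter writer = new StringWriter();
serializer.Serialize(writer, myObject);
string xml = writer.ToString();

// client side: rebuild the object from the returned string
MyComplexType copy = (MyComplexType)serializer.Deserialize(new StringReader(xml));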
A:
I'm loving JSON for this kind of thing. I just finished a POC drop-things type portal for my company using jQuery to contact web services with script service enabled. The messages are lightweight and parsing etc is pretty much handled. The jQuery ajax stuff I read was here (loving it!) : jquery ajax article
A:
I had some great answers on a very similar topic yesterday that might be useful for you:
Communication between javascript and the server
A:
Rob, in looking at your other question as well as this one, it's sounds like the exact situation we have in our environment. What we've done, however, is move away from ASP.Net web services to WCF web services and in the process solved (for the most part) this problem.
If there is any chance your web service could be implemented as a WCF web service, this might work for you as well. I should mention, that at the same time, we've maintained backwards compatibility with some client applications that need the "ASP.Net web service style" of implementation by using the WCF basichttp binding for the service transport. The end result is that our "newer" client applications are able to use our real business objects (through referencing an assembly containing only these shared objects) as the return types from the web service calls because they make actual WCF calls.
We do this by not utilizing the auto-generated proxy classes and constructing our own client channel to communicate with the WCF service.
If you can possibly use WCF, let me know I can post some additional information.
| Large, Complex Objects as a Web Service Result | Hello again ladies and gents!
OK, following on from my other question on ASP.NET Web Service Results, Proxy Classes and Type Conversion. I've come to a part in my project where I need to get my thinking cap on.
Basically, we have a large, complex custom object that needs to be returned from a Web Service and consumed in the client application.
Now, based on the previous discussion, we know this is going to then take the form of the proxy class(es) as the return type. To overcome this, we need to basically copy the properties from one to the other.
In this case, that is something that I would really, really, really! like to avoid!
So, it got me thinking, how else could we do this?
My current thoughts are to enable the object for complete serialization to XML and then return the XML as a string from the Web Service. We then de-serialize at the client. This will mean a fair bit of attribute decorating, but at least the code at both endpoints will be light, namely by just using the .NET XML Serializer.
What are your thoughts on this?
| [
"The .Net XML (de)serialisation is pretty nicely implemented. At first thought, I don't think this is a bad idea at all.\nIf the two applications import the same C# class(es) definition(s), then this is a relatively nice way of getting copy-constructor behaviour for free. If the class structure changes, then everything will work when both sides get the new class definition, without needing to make any additional changes on the web-service consumption/construction side.\nThere's a slight overhead in marshalling and demarshalling the XML, but that is probably dwarved by the overhead of the remote web service call. .Net XML serialisation is well understood by most programmers and should produce an easy to maintain solution.\n",
"I'm loving JSON for this kind of thing. I just finished a POC drop-things type portal for my company using jQuery to contact web services with script service enabled. The messages are lightweight and parsing etc is pretty much handled. The jQuery ajax stuff I read was here (loving it!) : jquery ajax article\n",
"I had some great answers on a very similar topic yesterday that might be useful for you:\nCommunication between javascript and the server\n",
"Rob, in looking at your other question as well as this one, it's sounds like the exact situation we have in our environment. What we've done, however, is move away from ASP.Net web services to WCF web services and in the process solved (for the most part) this problem. \nIf there is any chance your web service could be implemented as a WCF web service, this might work for you as well. I should mention, that at the same time, we've maintained backwards compatibility with some client applications that need the \"ASP.Net web service style\" of implementation by using the WCF basichttp binding for the service transport. The end result is that our \"newer\" client applications are able to use our real business objects (through referencing an assembly containing only these shared objects) as the return types from the web service calls because they make actual WCF calls. \nWe do this by not utilizing the auto-generated proxy classes and constructing our own client channel to communicate with the WCF service. \nIf you can possibly use WCF, let me know I can post some additional information.\n"
] | [
5,
4,
1,
1
] | [] | [] | [
"asp.net",
"c#",
"serialization",
"web_services",
"xml"
] | stackoverflow_0000017725_asp.net_c#_serialization_web_services_xml.txt |
Q:
How to wait for thread complete before continuing?
I have some code for starting a thread on the .NET CF 2.0:
ThreadStart tStart = new ThreadStart(MyMethod);
Thread t = new Thread(tStart);
t.Start();
If I call this inside a loop the items complete out of order. How do I introduce a wait after t.Start(), so that the work on the thread completes before the code continues? Will BeginInvoke/EndInvoke be a better option for this than manually creating threads?
A:
How much order do you need to impose on the threads? If you just need all of the work started in the loop to finish before the code continues, but you don't care about the order the work within the loop finishes, then calling Join is the answer. To add more detail to Kevin Kenny's answer, you should call Join outside the loop. This means you will need a collection to hold references to the threads you started:
// Start all of the threads.
List<Thread> startedThreads = new List<Thread>();
foreach (...) {
Thread thread = new Thread(new ThreadStart(MyMethod));
thread.Start();
startedThreads.Add(thread);
}
// Wait for all of the threads to finish.
foreach (Thread thread in startedThreads) {
thread.Join();
}
In contrast, if you called Join inside the loop, the result would basically be the same as not using threads at all. Each iteration of the loop body would create and start a thread but then immediately Join it and wait for it to finish.
If the individual threads produce some result (write a message in a log, for example) then the messages may still appear out of order because there's no coordination between the threads. It is possible to get the threads to output their results in order by coordinating them with a Monitor.
A:
Another way of waiting for a thread to finish is using an AutoResetEvent.
private readonly AutoResetEvent mWaitForThread = new AutoResetEvent(false);
private void Blah()
{
ThreadStart tStart = new ThreadStart(MyMethod);
Thread t = new Thread(tStart);
t.Start();
//... (any other things)
mWaitForThread.WaitOne();
}
private void MyMethod()
{
//... (execute any other action)
mWaitForThread.Set();
}
A:
If I'm reading this properly, you're starting work items on a bunch of threads but you're concerned that the threads are completing out of order, and would prefer to wait until each thread completes in the order they were started?
If this is the case then I'm not sure why you're creating a thread per work item. Maybe you could expand?
If you did need to wait for a thread to complete then you could just do:
t.Join();
| How to wait for thread complete before continuing? | I have some code for starting a thread on the .NET CF 2.0:
ThreadStart tStart = new ThreadStart(MyMethod);
Thread t = new Thread(tStart);
t.Start();
If I call this inside a loop the items complete out of order. How do I introduce a wait after t.Start(), so that the work on the thread completes before the code continues? Will BeginInvoke/EndInvoke be a better option for this than manually creating threads?
| [
"How much order do you need to impose on the threads? If you just need all of the work started in the loop to finish before the code continues, but you don't care about the order the work within the loop finishes, then calling Join is the answer. To add more detail to Kevin Kenny's answer, you should call Join outside the loop. This means you will need a collection to hold references to the threads you started:\n// Start all of the threads.\nList<Thread> startedThreads = new List<Thread>();\nforeach (...) {\n Thread thread = new Thread(new ThreadStart(MyMethod));\n thread.Start();\n startedThreads.Add(thread);\n}\n\n// Wait for all of the threads to finish.\nforeach (Thread thread in startedThreads) {\n thread.Join();\n}\n\nIn contrast, if you called Join inside the loop, the result would basically be the same as not using threads at all. Each iteration of the loop body would create and start a thread but then immediately Join it and wait for it to finish.\nIf the individual threads produce some result (write a message in a log, for example) then the messages may still appear out of order because there's no coordination between the threads. It is possible to get the threads to output their results in order by coordinating them with a Monitor.\n",
"Another way of waiting for a thread to finish is using an AutoResetEvent.\nprivate readonly AutoResetEvent mWaitForThread = new AutoResetEvent(false);\n\nprivate void Blah()\n{\n ThreadStart tStart = new ThreadStart(MyMethod);\n Thread t = new Thread(tStart);\n t.Start();\n\n //... (any other things)\n mWaitForThread.WaitOne();\n}\n\nprivate void MyMethod()\n{\n //... (execute any other action)\n mWaitForThread.Set();\n}\n\n",
"If I'm reading this properly, you're starting work items on bunch of threads but you're concerned that the threads are completing out of order, and, would prefer to wait until each thread completes in the order they were started ? \nIf this is the case then I'm not sure why you're creating a thread per work item. Maybe you could expand?\nIf you did need to wait for a thread to complete the you could just do:\nt.Join();\n\n"
] | [
15,
5,
4
] | [] | [] | [
"c#",
"compact_framework",
"multithreading"
] | stackoverflow_0000006890_c#_compact_framework_multithreading.txt |
Q:
Why does Guid.ToString() reverse the byte order?
We're storing some Guids in a MS SQL database. There's some legacy code that does Guid.ToString() and then passes them in to a varchar(64) and there's some newer code that passes them in using a uniqueidentifier parameter. When you look at the results using MS SQL Management studio they look different. The byte order of the first three blocks is reversed but the last one remains the same. Why?
A:
Uniqueidentifier fields in Sql server can be indexed, and so are 'backwards'.
Guids can be generated from both machine specific info and 'event-time' information.
The default Guid in .Net is random, but you can get sequential Guids from it with an extern call:
[DllImport( "rpcrt4.dll", SetLastError = true )]
static extern int UuidCreateSequential( out Guid guid );
This will get you Guids based on your MAC address (MSDN docs) that are sequential.
If you .ToString() these sequential guids then you will see the first part of the string varies, while the rest stays constant.
This makes equality checks between Guids quicker (as the differences will be at the start) and improves the variation for truncated ones.
For searching columns SqlServer builds indexes in a similar way to a telephone directory or dictionary. It is much quicker to search for words starting with "Over*" than it would be to find ones ending in "*flow".
This means that for Sql server any sequential Guids need to be stored with the repeating value first, so it stores them back to front.
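Calling the extern above is then just (sketch; a return value of 0, RPC_S_OK, indicates success):
Guid sequentialGuid;
int result = UuidCreateSequential(out sequentialGuid);
// check result == 0 before using sequentialGuid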
| Why does Guid.ToString() reverse the byte order? | We're storing some Guid's in a MS SQL database. There's some legacy code that does Guid.ToString() and then passes them in to a varchar(64) and there's some newer code that passes them in using a unique identifier parameter. When you look at the results using MS SQL Management studio they look different. The byte order of the first three blocks is reversed but the last one remains the same. Why?
| [
"Uniqueidentifier fields in Sql server can be indexed, and so are 'backwards'.\nGuids can be generated from both machine specific info and 'event-time' information.\nThe default Guid in .Net is random, but you can get sequential Guids from it with an extern call:\n[DllImport( \"rpcrt4.dll\", SetLastError = true )]\nstatic extern int UuidCreateSequential( out Guid guid );\n\nThis will get you Guids based on your MAC address (MSDN docs) that are sequential. \nIf you .ToString() these sequential guids then you will see the first part of the string varies, while the rest stays constant.\nThis makes equality checks between Guids quicker (as the differences will be at the start) and improves the variation for truncated ones. \nFor searching columns SqlServer builds indexes in a similar way to a telephone directory or dictionary. It is much quicker to search for words starting with \"Over*\" than it would be to find ones ending in \"*flow\".\nThis means that for Sql server any sequential Guids need to be stored with the repeating value first, so it stores them back to front.\n"
] | [
9
] | [] | [] | [
".net",
"sql_server"
] | stackoverflow_0000017679_.net_sql_server.txt |
Q:
Block user access to internals of a site using HTTP_REFERER
I have control over the HttpServer but not over the ApplicationServer or the Java Applications sitting there but I need to block direct access to certain pages on those applications. Precisely, I don't want users automating access to forms issuing direct GET/POST HTTP requests to the appropriate servlet.
So, I decided to block users based on the value of HTTP_REFERER. After all, if the user is navigating inside the site, it will have an appropriate HTTP_REFERER. Well, that was what I thought.
I implemented a rewrite rule in the .htaccess file that says:
RewriteEngine on
# Options +FollowSymlinks
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteRule (servlet1|servlet2)/.+\?.+ - [F]
I expected to forbid access to users that didn't navigate the site but issued direct GET requests to the "servlet1" or "servlet2" servlets using querystrings. But my expectations ended abruptly because the regular expression (servlet1|servlet2)/.+\?.+ didn't work at all.
I was really disappointed when I changed that expression to (servlet1|servlet2)/.+ and it worked so well that my users were blocked no matter if they navigated the site or not.
So, my question is: how can I accomplish this thing of not allowing "robots" direct access to certain pages if I have no access/privileges/time to modify the application?
A:
I'm not sure if I can solve this in one go, but we can go back and forth as necessary.
First, I want to repeat what I think you are saying and make sure I'm clear. You want to disallow requests to servlet1 and servlet2 if the request doesn't have the proper referer and it does have a query string? I'm not sure I understand (servlet1|servlet2)/.+\?.+ because it looks like you are requiring a file under servlet1 and 2. I think maybe you are combining PATH_INFO (before the "?") with a GET query string (after the "?"). It appears that the PATH_INFO part will work but the GET query test will not. I made a quick test on my server using script1.cgi and script2.cgi and the following rules worked to accomplish what you are asking for. They are obviously edited a little to match my environment:
RewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]
RewriteCond %{QUERY_STRING} ^.+$
RewriteRule ^(script1|script2)\.cgi - [F]
The above caught all wrong-referer requests to script1.cgi and script2.cgi that tried to submit data using a query string. However, you can also submit data using a path_info and by posting data. I used this form to protect against any of the three methods being used with incorrect referer:
RewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule ^(script1|script2)\.cgi - [F]
Based on the example you were trying to get working, I think this is what you want:
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule (servlet1|servlet2)\b - [F]
Hopefully this at least gets you closer to your goal. Please let us know how it works, I'm interested in your problem.
(BTW, I agree that referer blocking is poor security, but I also understand that reality forces imperfect and partial solutions sometimes, which you seem to already acknowledge.)
A:
I don't have a solution, but I'm betting that relying on the referrer will never work because user-agents are free to not send it at all or spoof it to something that will let them in.
A:
You can't tell apart users and malicious scripts by their http request. But you can analyze which users are requesting too many pages in too short a time, and block their ip-addresses.
A:
Javascript is another helpful tool to prevent (or at least delay) screen scraping. Most automated scraping tools don't have a Javascript interpreter, so you can do things like setting hidden fields, etc.
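A trivial sketch of the hidden-field trick (the field name is made up); a client without a JavaScript interpreter will post the field empty:
<input type="hidden" name="jsCheck" id="jsCheck" value="" />
<script type="text/javascript">
    document.getElementById('jsCheck').value = 'human';
</script>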
Edit: Something along the lines of this Phil Haack article.
A:
Using a referrer is very unreliable as a method of verification. As other people have mentioned, it is easily spoofed. Your best solution is to modify the application (if you can)
You could use a CAPTCHA, or set some sort of cookie or session cookie that keeps track of what page the user last visited (a session would be harder to spoof) and keep track of page view history, and only allow users who have browsed the pages required to get to the page you want to block.
This obviously requires you to have access to the application in question, however it is the most foolproof way (not completely, but "good enough" in my opinion.)
A:
I'm guessing you're trying to prevent screen scraping?
In my honest opinion it's a tough one to solve, and trying to fix it by checking the value of HTTP_REFERER is just a sticking plaster. Anyone going to the bother of automating submissions is going to be savvy enough to send the correct referer from their 'automaton'.
You could try rate limiting but without actually modifying the app to force some kind of is-this-a-human validation (a CAPTCHA) at some point then you're going to find this hard to prevent.
A:
If you're trying to prevent search engine bots from accessing certain pages, make sure you're using a properly formatted robots.txt file.
Using HTTP_REFERER is unreliable because it is easily faked.
Another option is to check the user agent string for known bots (this may require code modification).
A:
To make things a little clearer:
Yes, I know that using HTTP_REFERER is completely unreliable and somewhat childish, but I'm pretty sure that the people who learned (from me, maybe?) to make automations with Excel VBA will not know how to subvert an HTTP_REFERER in the time it takes to get the final solution in place.
I don't have access/privilege to modify the application code. Politics. Do you believe that? So, I must wait until the rights holder makes the changes I requested.
From previous experience, I know that the requested changes will take two months to get into Production. No, tossing Agile methodology books at their heads didn't improve anything.
This is an intranet app, so I don't have a lot of youngsters trying to undermine my prestige. But I'm young enough to try to undermine the prestige of "a very fancy global consultancy service that comes from India" where, curiously, there is not a single Indian working.
So far, the best answer comes from "Michel de Mare": block users based on their IPs. Well, that I did yesterday. Today I wanted to make something more generic, because I have a lot of kangaroo users (jumping from one IP address to another) because they use VPN or DHCP.
A:
You might be able to use an anti-CSRF token to achieve what you're after.
This article explains it in more detail: Cross-Site Request Forgeries
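As a rough Python sketch of the anti-CSRF token idea (assuming some server-side session store exists; the names are illustrative):
import hmac
import secrets

def issue_csrf_token(session):
    # generate a fresh random token, remember it server-side,
    # and embed the returned value in a hidden form field
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def is_valid_submission(session, submitted_token):
    expected = session.get("csrf_token", "")
    # constant-time comparison; reject forms that don't echo the token back
    return hmac.compare_digest(expected, submitted_token or "")
A scripted client that never rendered your form never received the token, so its direct GET/POST submissions fail the check regardless of what referer it sends.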
| Block user access to internals of a site using HTTP_REFERER | I have control over the HttpServer but not over the ApplicationServer or the Java Applications sitting there but I need to block direct access to certain pages on those applications. Precisely, I don't want users automating access to forms issuing direct GET/POST HTTP requests to the appropriate servlet.
So, I decided to block users based on the value of HTTP_REFERER. After all, if the user is navigating inside the site, it will have an appropriate HTTP_REFERER. Well, that was what I thought.
I implemented a rewrite rule in the .htaccess file that says:
RewriteEngine on
# Options +FollowSymlinks
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteRule (servlet1|servlet2)/.+\?.+ - [F]
I expected to forbid access to users that didn't navigate the site but issued direct GET requests to the "servlet1" or "servlet2" servlets using query strings. But my expectations ended abruptly because the regular expression (servlet1|servlet2)/.+\?.+ didn't work at all.
I was really disappointed when I changed that expression to (servlet1|servlet2)/.+ and it worked so well that my users were blocked whether they navigated the site or not.
So, my question is: how can I keep "robots" from having direct access to certain pages if I have no access/privileges/time to modify the application?
| [
"I'm not sure if I can solve this in one go, but we can go back and forth as necessary.\nFirst, I want to repeat what I think you are saying and make sure I'm clear. You want to disallow requests to servlet1 and servlet2 is the request doesn't have the proper referer and it does have a query string? I'm not sure I understand (servlet1|servlet2)/.+\\?.+ because it looks like you are requiring a file under servlet1 and 2. I think maybe you are combining PATH_INFO (before the \"?\") with a GET query string (after the \"?\"). It appears that the PATH_INFO part will work but the GET query test will not. I made a quick test on my server using script1.cgi and script2.cgi and the following rules worked to accomplish what you are asking for. They are obviously edited a little to match my environment:\nRewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]\nRewriteCond %{QUERY_STRING} ^.+$\nRewriteRule ^(script1|script2)\\.cgi - [F]\n\nThe above caught all wrong-referer requests to script1.cgi and script2.cgi that tried to submit data using a query string. However, you can also submit data using a path_info and by posting data. I used this form to protect against any of the three methods being used with incorrect referer:\nRewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]\nRewriteCond %{QUERY_STRING} ^.+$ [OR]\nRewriteCond %{REQUEST_METHOD} ^POST$ [OR]\nRewriteCond %{PATH_INFO} ^.+$\nRewriteRule ^(script1|script2)\\.cgi - [F]\n\nBased on the example you were trying to get working, I think this is what you want:\nRewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]\nRewriteCond %{QUERY_STRING} ^.+$ [OR]\nRewriteCond %{REQUEST_METHOD} ^POST$ [OR]\nRewriteCond %{PATH_INFO} ^.+$\nRewriteRule (servlet1|servlet2)\\b - [F]\n\nHopefully this at least gets you closer to your goal. Please let us know how it works, I'm interested in your problem.\n(BTW, I agree that referer blocking is poor security, but I also understand that relaity forces imperfect and partial solutions sometimes, which you seem to already acknowledge.)\n",
"I don't have a solution, but I'm betting that relying on the referrer will never work because user-agents are free to not send it at all or spoof it to something that will let them in.\n",
"You can't tell apart users and malicious scripts by their http request. But you can analyze which users are requesting too many pages in too short a time, and block their ip-addresses.\n",
"Javascript is another helpful tool to prevent (or at least delay) screen scraping. Most automated scraping tools don't have a Javascript interpreter, so you can do things like setting hidden fields, etc.\nEdit: Something along the lines of this Phil Haack article.\n",
"Using a referrer is very unreliable as a method of verification. As other people have mentioned, it is easily spoofed. Your best solution is to modify the application (if you can)\nYou could use a CAPTCHA, or set some sort of cookie or session cookie that keeps track of what page the user last visited (a session would be harder to spoof) and keep track of page view history, and only allow users who have browsed the pages required to get to the page you want to block.\nThis obviously requires you to have access to the application in question, however it is the most foolproof way (not completely, but \"good enough\" in my opinion.)\n",
"I'm guessing you're trying to prevent screen scraping?\nIn my honest opinion it's a tough one to solve and trying to fix by checking the value of HTTP_REFERER is just a sticking plaster. Anyone going to the bother of automating submissions is going to be savvy enough to send the correct referer from their 'automaton'.\nYou could try rate limiting but without actually modifying the app to force some kind of is-this-a-human validation (a CAPTCHA) at some point then you're going to find this hard to prevent.\n",
"If you're trying to prevent search engine bots from accessing certain pages, make sure you're using a properly formatted robots.txt file.\nUsing HTTP_REFERER is unreliable because it is easily faked.\nAnother option is to check the user agent string for known bots (this may require code modification).\n",
"To make the things a little more clear: \n\nYes, I know that using HTTP_REFERER is completely unreliable and somewhat childish but I'm pretty sure that the people that learned (from me maybe?) to make automations with Excel VBA will not know how to subvert a HTTP_REFERER within the time span to have the final solution. \nI don't have access/privilege to modify the application code. Politics. Do you believe that? So, I must to wait until the rights holder make the changes I requested. \nFrom previous experiences, I know that the requested changes will take two month to get in Production. No, tossing them Agile Methodologies Books in their heads didn't improve anything. \nThis is an intranet app. So I don't have a lot of youngsters trying to undermine my prestige. But I'm young enough as to try to undermine the prestige of \"a very fancy global consultancy services that comes from India\" but where, curiously, there are not a single indian working there. \n\nSo far, the best answer comes from \"Michel de Mare\": block users based on their IPs. Well, that I did yesterday. Today I wanted to make something more generic because I have a lot of kangaroo users (jumping from an Ip address to another) because they use VPN or DHCP.\n",
"You might be able to use an anti-CSRF token to achieve what you're after. \nThis article explains it in more detail: Cross-Site Request Forgeries\n"
] | [
2,
1,
1,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"apache",
"http_referer",
"mod_rewrite",
"security"
] | stackoverflow_0000003486_apache_http_referer_mod_rewrite_security.txt |
Q:
What's the best way to distribute python command-line tools?
My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar.
Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications?
A:
Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want.
To reproduce here:
from setuptools import setup
setup(
# other arguments here...
entry_points = {
'console_scripts': [
'foo = package.module:func',
'bar = othermodule:somefunc',
],
}
)
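For completeness, a sketch of what the target of such an entry point might look like; the names mirror the placeholders above, so for tvnamer you would point an entry like 'tvnamer = tvnamer:main' at your own main routine:
# package/module.py -- hypothetical target of the 'foo' console script above
import sys

def func():
    # setuptools generates a 'foo' wrapper on the PATH that calls this function
    print("running foo with args:", sys.argv[1:])
After running python setup.py install with setuptools available, a foo executable (a .exe wrapper on Windows, a plain script elsewhere) appears on the PATH, so the tool is invoked as foo rather than foo.py.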
| What's the best way to distribute python command-line tools? | My current setup.py script works okay, but it installs tvnamer.py (the tool) as tvnamer.py into site-packages or somewhere similar.
Can I make setup.py install tvnamer.py as tvnamer, and/or is there a better way of installing command-line applications?
| [
"Try the entry_points.console_scripts parameter in the setup() call. As described in the setuptools docs, this should do what I think you want.\nTo reproduce here:\nfrom setuptools import setup\n\nsetup(\n # other arguments here...\n entry_points = {\n 'console_scripts': [\n 'foo = package.module:func',\n 'bar = othermodule:somefunc',\n ],\n }\n)\n\n"
] | [
38
] | [] | [] | [
"command_line",
"packaging",
"python"
] | stackoverflow_0000017893_command_line_packaging_python.txt |
Q:
Productivity gains of using CASE tools for development
I was using a CASE tool called MAGIC for a system I'm developing. I've never used this kind of tool before, and at first sight I liked it; a month later I had a lot of the application generated, I felt very productive and ... I would say ... satisfied.
In some way I felt uncomfortable, because there is no code and none of the things I was used to, but on the other hand I could speed up my development. The fact is that eventually I returned to C# because I find it more flexible to develop with: I can do unit testing, use CVS, I have access to more resources, and basically I have "all the control". I felt that this tool didn't give me confidence, and I thought that later in the project I would not be able to manage it, due to its rigid, pre-established rules of development. Also, a lot of things, like sending emails and using my own controls, had their complications; it seemed that at some point it was not going to be as easy as I initially thought, or as the product initially claims. This reminds me of a very nice article called "No Silver Bullet".
This CASE tool had its advantages, but on the other hand there are few resources you can consult, and the license and certification are actually very expensive. Another disappointing thing for me is that, because of its simplistic approach to development, I felt scared: first because of my inexperience with this kind of tool, and second because I thought that if I continued using it, it might turn into a complex monster that I could not manage later in the project.
I think it's good to use these kinds of solutions to speed things up, but I wonder: why aren't these programs as popular as VS.NET, J2EE, Ruby, Python, etc., if they claim to enhance productivity more than the tools I've mentioned?
A:
We use a CASE tool at my current company for code generation and we are trying to move away from it.
The benefits that it brings - a graphical representation of the code, making components 'easier' to pick up for new developers - are outweighed by the disadvantages, in my opinion.
Those main disadvantages are:
We cannot do automatic merges, making it close to impossible for parallel development on one component.
Developers get dependent on the tool and 'forget' how to hand-code.
A:
Just a couple questions for you:
How much productivity do you gain compared to the control that you use?
How testable and reliable is the code you create?
How well can you implement a new pattern into your design?
I can't imagine that there is a CASE tool out there with which I could write a test first and then generate the code I need. I'd rather stick to ReSharper, which can easily do my mundane tasks while I retain full control of my code.
A:
The project I'm on originally went w/ the Oracle Development Suite to put together a web application.
Over time (5+ years), customer requirements became more complex than originally anticipated, and the screens were not easily maintainable. So the team informally decided to start doing custom (hand-coded) screens in web PL/SQL, instead of generating them using the Oracle Development Suite CASE tools (Oracle Designer).
The Oracle Report Builder component of the Development Suite is still being used by the team, as it seems to "get the job done" in a timely fashion. In general, the developers using the Report Builder tool are not very comfortable coding.
In this case, it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background.
A:
Unfortunately, the Magic tool doesn't generate code, and it also can't implement a design pattern. I don't have control over the code because, as I stated before, there is no code to modify. The bottom line is that it can speed up productivity in some ways, but it makes it impossible to use CVS or patterns, and I can't control all the details.
I agree with gary when he says "it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background", but I also can't agree more with Klelky:
Those main disadvantages are:
1. We cannot do automatic merges, making it close to impossible for parallel development on one component.
2. Developers get dependent on the tool and 'forget' how to hand-code.
Thanks
| Productivity gains of using CASE tools for development | I was using a CASE tool called MAGIC for a system I'm developing. I've never used this kind of tool before, and at first sight I liked it; a month later I had a lot of the application generated, I felt very productive and ... I would say ... satisfied.
In some way I felt uncomfortable, because there is no code and none of the things I was used to, but on the other hand I could speed up my development. The fact is that eventually I returned to C# because I find it more flexible to develop with: I can do unit testing, use CVS, I have access to more resources, and basically I have "all the control". I felt that this tool didn't give me confidence, and I thought that later in the project I would not be able to manage it, due to its rigid, pre-established rules of development. Also, a lot of things, like sending emails and using my own controls, had their complications; it seemed that at some point it was not going to be as easy as I initially thought, or as the product initially claims. This reminds me of a very nice article called "No Silver Bullet".
This CASE tool had its advantages, but on the other hand there are few resources you can consult, and the license and certification are actually very expensive. Another disappointing thing for me is that, because of its simplistic approach to development, I felt scared: first because of my inexperience with this kind of tool, and second because I thought that if I continued using it, it might turn into a complex monster that I could not manage later in the project.
I think it's good to use these kinds of solutions to speed things up, but I wonder: why aren't these programs as popular as VS.NET, J2EE, Ruby, Python, etc., if they claim to enhance productivity more than the tools I've mentioned?
| [
"We use a CASE tool at my current company for code generation and we are trying to move away from it.\nThe benefits that it brings - a graphical representation of the code making components 'easier' to pick up for new developers - are outweighed by the disadvantges in my opinion.\nThose main disadvantages are:\n\nWe cannot do automatic merges, making it close to impossible for parallel development on one component.\nDevelopers get dependant on the tool and 'forget' how to handcode.\n\n",
"Just a couple questions for you:\nHow much productivity do you gain compared to the control that you use?\nHow testable and reliant is the code you create?\nHow well can you implement a new pattern into your design?\nI can't imagine that there is a CASE out there that I could write a test first and then use a CASE to generate the code I need. I'd rather stick to resharper which can easily do my mundane tasks and retain full control of my code.\n",
"The project I'm on originally went w/ the Oracle Development Suite to put together a web application. \nOver time (5+ years), customer requirements became more complex than originally anticipated, and the screens were not easily maintainable. So, the team informally decided to start doing custom (hand coded) screens in web PL/SQL, instead of generating them using the Oracle Development Suite CASE tools (Oracle Designer). \nThe Oracle Report Builder component of the Development Suite is still being used by the team, as it seems to \"get the job done\" in a timely fashion. In general, the developers using the Report Builder tool are not very comfortable coding.\nIn this case, it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background. \n",
"Unfortunaly the Magic tool doesn't generates code and also it can't implement a design pattern. I don't have control over the code cause as i stated before it doesn't have code to modify. Te bottom line is that it can speed up productivity in some way but it has the impossibility to user CVS, patterns also and I can't control all the details.\nI agree with gary when he says \"it seems that the productivity aspect of such CASE tools is heavily dependent on customer requirements and developer skill sets/training/background\" but also I can't agree more with Klelky; \nThose main disadvantages are:\n1. We cannot do automatic merges, making it close to impossible for parallel development on one component.\n2.Developers get dependant on the tool and 'forget' how to handcode.\nThanks\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"case_tools"
] | stackoverflow_0000013550_case_tools.txt |
Q:
best practice for releasing Microsoft dll's in setup
I'm working on a setup which wants to include the Microsoft.Web.Services3 (WSE 3.0) DLL. However, I typically do not like including Microsoft DLL's in our installs except by way of Microsoft's redistributables. There is both a developer and a redist install package available from Microsoft.
So, as a best practice, should I include the single DLL in my install or refer them to one of the WSE 3.0 installs (assuming they do not already have it installed)?
A:
Usually, redistributing any of Microsoft DLLs outside of the redistributable package is forbidden by their EULA, so you might first want to check the appropriate EULA for that DLL.
Generally, I would prefer the redist package since that makes sure that it's correctly "registered" into the system, i.e. if you install a newer version of the redist it gets updated (like DirectX) or not overwritten if it's an older version (also like DirectX).
A:
Check in the installer whether WSE 3.0 is installed; if it isn't, alert the person and cancel the install, and if it is, continue normally. I wouldn't include the DLL in your setup package, because it could get outdated pretty fast, and I don't know if the EULA will allow it.
A:
I believe the MS EULA prevents you from redistributing MS code, unless it's in a redistributable package.
A proper redistributable should handle any other prerequisites, so it's probably the better choice anyway.
A:
If you don't include it, you should at the very least link to it directly on your site or have your installer open the web browser to it (or even download it automatically). Or better yet, include the redistributable in your software package.
However, if the DLL is not very large and you suspect that few users will have it, in the interest of a better user experience I would prepackage it in the default installer. You can always offer an installer that does not include it for those who want a smaller download... a great many other vendors do this all the time.
A:
Thanks for the suggestions/comments! After wrestling with Windows Installer setup, I figured out the best way to include the WSE 3.0 redist and pop up a dialog if it is not installed.
I'm aware that it is not best practice (and is against Microsoft's EULA, as mentioned) to simply include the DLL, which is why I thought it strange that the setup was trying to include the WSE DLL outside of the redist, especially when the redist is registered with the installer (it shows up as a prerequisite under properties).
Thanks again.
| best practice for releasing Microsoft dll's in setup | I'm working on a setup which wants to include the Microsoft.Web.Services3 (WSE 3.0) DLL. However, I typically do not like including Microsoft DLL's in our installs except by way of Microsoft's redistributables. There is both a developer and a redist install package available from Microsoft.
So, as a best practice, should I include the single DLL in my install or refer them to one of the WSE 3.0 installs (assuming they do not already have it installed)?
| [
"Usually, redistributing any of Microsoft DLLs outside of the redistributable package is forbidden by their EULA, so you might first want to check the appropriate EULA for that DLL.\nGenerally, I would prefer the redist package since that makes sure that it's correctly \"registered\" into the system, i.e. if you install a newer version of the redist it gets updated (like DirectX) or not overwritten if it's an older version (also like DirectX).\n",
"Check in the installer if WSE 3.0 is installed and if it isn't alert the person and cancel the install, if it is continue normally. I wouldn't include the DLL in your setup package, because it could get out dated pretty fast, and I don't know if the EULA will allow it.\n",
"I believe the MS EULA prevents you from redistributing MS code, unless its in a redistributable package.\nA proper redistributable should handle any other prerequisites, so its probably the better choice anyways.\n",
"If you don't include it you should at the very least link to it directly on your site or have your installer open the web browser to it (or even download it automatically). Or better yet, include the redistributable in your software package.\nHowever, if the DLL is not very large and you suspect that few users will have it, in the interest of a better user I would prepackage it in the default installer. However, you can always have an installer that does not include it for those who want a smaller installer... a great deal of other vendors do this all the time.\n",
"Thanks for the suggestions/comments! After wrestling with windows installer setup I figured out the best way to include the WSE30 redist and pop up a dialog if it is not installed.\nI'm aware of it not being best practice (and against Microsoft's EULA as mentioned) to simply include the DLL, which is why I thought it strange that it was trying to include the WSE DLL outside of the redist, especially when the redist is registered with the installer (it shows up as a pre-req under properties).\nThanks again.\n"
] | [
5,
2,
2,
0,
0
] | [] | [] | [
"installation",
"redistributable",
"windows_installer"
] | stackoverflow_0000017825_installation_redistributable_windows_installer.txt |
Q:
What areas of specialization within programming would you recommend to a beginner
I am a student studying software development, and I feel programming, in general, is too broad a subject to try to know everything. To be proficient, you have to decide which areas to focus your learning and understanding on. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the Win32 API experience in the world may not directly apply to Linux development. This leads me to believe that, as a beginning programmer, I should start deciding where I want to specialize after I have a general understanding of the basic principles of software development.
This is a multi-part question really:
What are the common specializations within computer programming and software development?
Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills?
Which skill sets complement each other?
Are there any areas of specialization that hinder your ability to develop other areas of specialization?
A:
Ben, almost all seasoned programmers are still students of programming. You never stop learning when you are a developer. But if you are really starting off in your career, then specialization should be the least of your worries. Don't expect any particular API, framework, or skill to give you a long-term existence in the field; that is not going to happen. Technology changes a lot, and you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/API/framework doesn't die off; you can apply the skills to the next great platform/API/framework.
That being said, you should just stop worrying about the future and concentrate on the basics. Data structures, algorithm analysis and design, compiler design, and operating system design are the bare minimum you need. Further, you should be willing to go back and read the books in those fields at any time in your career. That's all that is required. Good luck.
Sorry if I sounded like a big-ass advisor, but that's what I think. :-)
A:
Not to directly reject your premise but I actually think being a generalist is a good position in programming. You will certainly develop expertise in specific areas but it is likely to be a product of either personal interest or work necessity. Over time the stuff you are able to transfer across languages and problem domains is at the heart of what makes good programmers.
A:
I think the more important question is: What areas of specialization are you most interested in?
Once you know, begin learning in that area!
A:
I would think the greatest skill of all would be to adapt with the times, because if your employer can see this potential in you then they would be wise to hold on tightly.
That said, I would advise you dive into the area YOU would enjoy. Learning is driven by enthusiasm.
Since my current employer is an internet provider, I've found networking knowledge particularly helpful. But someday I'd like to play with 3D graphics (not necessarily games).
A:
Go as deep as you can starting off in one environment - Win32, .NET, Java, Objective-C... whatever.
It is important to build a deep understanding of how X works... so that you can translate the same concepts into other languages or platforms/environments, if you so desire.
"Are there any areas of specialization that hinder your ability of developing other areas of specialization." Sort of, but nothing permanent i think.
Since I am relatively green myself (less than 4 years), I come from a really OOP mindset. I've rarely jumped out of .NET, so I had a hard time on one job when coming into contact with embedded code, with embedded programmers fearing object creation and the performance cost of inheritance. I had to learn the environment they were coming from: seriously low memory and slow clock speeds. Those are times to grow; I had an easier time of it because I understood my own area pretty well.
I will say that if you pick something to specialize in for marketability and money, you will probably burn out fast. If you do start to specialize, pick something you enjoy. I love GUI programming and hate server-side stuff; my buddy is the opposite, but we both love our jobs. If he had to do my job, and I his, we would both go insane out of boredom.
A:
As a student I'd recommend forgetting about what you're programming and focusing on the software process itself. Understand how to analyse a problem and ask the right questions; learn every design pattern you can and actually apply them all to gain a real understanding and appreciation of object-oriented design; write tests and then code only as much as you need to in order to make the tests pass. I think the best way to really learn is to just code as much as you can - the language and the domain aren't important; browse SourceForge and Freshmeat for any interesting-sounding projects and get involved. What's important is understanding the fundamentals of software engineering.
And yes, this includes C. Or Assembler. This is the easiest way to get a good understanding of how your computer works and what your high-level code is actually doing.
Finally, never stop learning - service-oriented architecture, inversion of control, domain-specific languages, and business process management are all showing huge benefits, so they're important to be aware of - but by the time you finish studying and join the workforce, who knows what the next big thing will be?
| What areas of specialization within programming would you recommend to a beginner | I am a student studying software development, and I feel programming, in general, is too broad a subject to try to know everything. To be proficient, you have to decide which areas to focus your learning and understanding on. Certain skill sets synergize with each other, like data-driven web development and SQL experience. However, all the Win32 API experience in the world may not directly apply to Linux development. This leads me to believe that, as a beginning programmer, I should start deciding where I want to specialize after I have a general understanding of the basic principles of software development.
This is a multi-part question really:
What are the common specializations within computer programming and software development?
Which of these specializations have more long-term value, both as a foundation for other specializations and/or as marketable skills?
Which skill sets complement each other?
Are there any areas of specialization that hinder your ability to develop other areas of specialization?
| [
"Ben, Almost all seasoned programmers are still students in programming. You never stops learning anything when you are a developer. But if you are really starting off on your career then you should be least worried about the specialization thing. All APIs, frameworks and skills that you expect that gives you a long term existence in the field is not going to happen. Technology seems changing a lot and you should be versatile and flexible enough to learn anything. The knowledge you acquire on one platform/api/framework doesn't die off. You can apply the skills to the next greatest platform/api/framework. \nThat being said you should just stop worrying about the future and concentrate on the basics. DataStructures, Algorithm Analysis and Design, Compiler Design, Operating system design are the bare minimum stuff you need. And further you should be willing to go back and read tho books in those field any time in your career. Thats all is required. Good luck. \nSorry if I sounded like a big ass advisor; but thats what I think. :-)\n",
"Not to directly reject your premise but I actually think being a generalist is a good position in programming. You will certainly develop expertise in specific areas but it is likely to be a product of either personal interest or work necessity. Over time the stuff you are able to transfer across languages and problem domains is at the heart of what makes good programmers.\n",
"I think the more important question is: What areas of specialization are you most interested in?\nOnce you know, begin learning in that area!\n",
"I would think the greatest skill of all would be to adapt with the times, because if your employer can see this potential in you then they would be wise to hold on tightly. \nThat said, I would advise you dive into the area YOU would enjoy. Learning is driven by enthusiasm. \nSince my current employ is with an internet provider, I've found networking knowledge particularly helpful. But someday I'd like to play with 3D graphics (not necessarily games).\n",
"Go as deep as you can starting off in one environment, win32, .net, Java, Objective C... whatever. \nIt is important to build the deep understanding of how X works... so that you can translate the same concepts into other languages or platforms/environments, if you so desire. \n\"Are there any areas of specialization that hinder your ability of developing other areas of specialization.\" Sort of, but nothing permanent i think.\nSince I am relatively green myself (less than 4 years) I come from a really OOP mindset. I've rarely jumped out of .NET, so I had a hard time on one job when coming into contact with embedded code. With embedded programmers fearing object creation and the performance loss of inheritance. I had to learn the environment, seriously low memory and slow clock times, they were coming from. Those are times to grow, I had a better time at it because i understood my area pretty well.\nI will say if you pick something to specialize in for marketability and money, you will probably burn out fast. If you do start to specialize pick something you enjoy. I love GUI programing and hate server side stuff, my buddy is the opposite, but we both love our jobs. If he had to do my job, and I his, we would both go insane out of boredom.\n",
"As a student I'd recommend forgetting about what you're programming and focusing on the software process itself. Understand how to analyse a problem and ask the right questions; learn every design pattern you can and actually apply them all to gain a real understanding and appreciation of object-oriented design; write tests and then code only as much as you need to in order to make the tests pass. I think the best way to really learn is to just code as much as you can - the language and the domain aren't important, browse sourceforge and freshmeat for any interesting-sounding projects and get involved. What's important is understanding the fundamentals of software engineering.\nAnd yes, this includes C. Or Assembler. This is the easiest way to get a good understanding of how your computer works and what your high-level code is actually doing.\nFinally, never stop learning - Service-oriented architecture, inversion of control, domain-specific languages, business process management are all showing huge benefits so they're important to be aware of - But by the time you finish studying and join the workforce who knows what the next big thing will be?\n"
] | [
21,
5,
3,
3,
1,
1
] | [] | [] | [
"language_agnostic"
] | stackoverflow_0000017320_language_agnostic.txt |