content (string, 86-88.9k chars) | title (string, 0-150 chars) | question (string, 1-35.8k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 30-130 chars)
---|---|---|---|---|---|---|---|---
Q:
Offsite backups
I was recently tasked with coming up with an offsite backup strategy. We have about 2TB of data that would need to be backed up so our needs are a little out of the norm.
I looked into Iron Mountain and they wanted $12,000 a month!
Does anyone have any suggestions on how best to handle backing up this much data on a budget (like a tenth of Iron Mountain)? How do other companies afford to do this?
Thanks!
UPDATE :: UPDATE
Ironically enough, I just had the sort of devastating failure we're all talking about. I had my BES server fail and then 2 days later 2 drives in my Exchange server's RAID5 died (2!!!??!). I'm currently in the process of rebuilding my network and backup integrity is definitely an issue.
At least now my bosses are paying attention :)
A:
You can buy external eSATA RAID boxes in the 8TB capacity range for $2600. I'm not saying that particular product is the right choice, but that's the kind of box that will do 6TB in RAID5 and still be portable enough to buy a couple of them and rotate them through the bank, like Stu says.
Obviously if you have to keep 7 individual days' worth, a 14 day, 30 and 90 day snapshot, etc. then things are going to be much more expensive, but it's certainly doable if what you're after is just disaster recovery.
The biggest thing to make sure is part of your plan is actually testing the restoration from the backup. That seems to get overlooked WAY too often and turns out to be the weakest link in nearly all of the strategies.
You should plan for scheduled restorations as often as is reasonable where you actually dump the real data and restore from the backup. Without that, you don't know that it will work when you NEED it to.
I've lost track of the number of times I've been in a company where there's a big rack full of backup tapes/drives, all dutifully made according to the schedule only to find out that NONE of them have valid data when the server gets wiped out.
The more ways you can verify the integrity of the backups the better, but nothing substitutes for doing an actual dump/load from one of your backups to really test the setup.
A:
Amazon S3 might fit your budget better. I don't know if there is software available to automate the backup process but it's rather easy to write your own code to handle this. Here's their pricing calculator.
According to my estimates you're going to be well under the $1000/mo range.
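As a rough illustration of the "write your own code" part (my own sketch, not from the original answer; the bucket name, region and file path are made-up placeholders), a nightly push to S3 with the AWS SDK for .NET could look something like this:
// Minimal sketch: upload a backup archive to S3 using the AWS SDK for .NET.
// Credentials are assumed to come from the SDK's normal credential chain
// (profile, environment variables, etc.); TransferUtility handles multipart
// uploads automatically, which matters for very large archive files.
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

class BackupUploader
{
    static async Task Main()
    {
        using (var client = new AmazonS3Client(RegionEndpoint.USEast1))
        {
            var transfer = new TransferUtility(client);

            // Hypothetical archive path and bucket name.
            await transfer.UploadAsync(@"D:\backups\nightly.zip", "example-offsite-backups");
        }

        Console.WriteLine("Upload finished.");
    }
}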
A:
You really have to assess the true value of your data. If you lost it tomorrow, what impact would it have on your business? We use offsite backups; it isn't cheap, but if we were to lose our data the business would cease to trade within 2-3 days.
We considered on-site backups as a possible cost saver but in my experience with data centres/computer rooms over the last ten years (as both an employee and a customer) I've seen fires, fire suppression system malfunctions (wet), hardware theft and one day a car crashed through an external wall right into the suite. Add to that our last DC was located at Heathrow, right next to the runways....you never know what strange things can happen (remember the BA 777 that got caught short of the runway on landing?).
My advice, assess the value of the data then decide if $12k is too rich to keep it safe.
A:
2TB is chump change nowadays.
Look into hard-drive based hot-swappable backup machines, and rent a box at your local bank:
http://www.high-rely.com/ (there are many more products such as this, but my Google-time is limited).
A:
Jungle Disk is one such piece of software that can automate the backup process to Amazon S3. I use it for backup at home, but I guess it could work just as well from a server. Also, there are probably other backup tools that make use of S3 for offsite storage.
A:
We've been using DataDomain appliances for that purpose for about 2 years. They're not inexpensive, but compared to $12,000/month they'd pay for themselves pretty quickly.
Basically, we send our backups over NFS and CIFS to one DataDomain appliance, it deduplicates the data and then replicates the differences to the other appliance we have at a remote site.
A:
As for pure online solutions, make sure you do some back-of-the-envelope calculations first. For example, if you have 2TB of churn a month, you are going to saturate a 1Mb Internet connection just for your backup traffic!
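To put rough numbers on that (my own back-of-the-envelope, not from the original answer): 2 TB is about 1.6 x 10^13 bits, and a month is roughly 2.6 million seconds, so pushing 2 TB of changed data per month needs a sustained rate of roughly 6 Mb/s, several times what a 1 Mb/s link can carry even at full utilization.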
A:
As previously mentioned, Amazon S3 is definitely an option, but it may be cheaper in the long run to own the hardware you are backing up to.
For example:
Buy a basic server and an eSATA RAID5 setup with 2-3 times the capacity you currently need, then install it at a co-location center. Preferably one with high, but cheap, bandwidth.
This way the server and storage is off-site, but after the initial cost of the hardware, you are only paying for bandwidth.
Granted, the downside to this is that, unlike something like S3, if the hardware goes down you have to go fix it yourself, or pay the CoLo people to. But this may be a tradeoff you are willing to make.
Also, with this solution, you are still going to need a beefy upload pipe to handle the traffic... so there's always the "sneakernet" solution.
A:
I've used bqbackup.com for 1-2 years no problem. You can do a sync using rsync nightly. Wanted to add that their prices are dirt cheap, and I now have close to 1TB with them.
| Offsite backups | I was recently tasked with coming up with an offsite backup strategy. We have about 2TB of data that would need to be backed up so our needs are a little out of the norm.
I looked into Iron Mountain and they wanted $12,000 a month!
Does anyone have any suggestions on how best to handle backing up this much data on a budget (like a tenth of Iron Mountain)? How do other companies afford to do this?
Thanks!
UPDATE :: UPDATE
Ironically enough, I just had the sort of devastating failure we're all talking about. I had my BES server fail and then 2 days later 2 drives in my Exchange server's RAID5 died (2!!!??!). I'm currently in the process of rebuilding my network and backup integrity is definitely an issue.
At least now my bosses are paying attention :)
| [
"You can buy external eSATA RAID boxes in the 8TB capacity range for $2600. I'm not saying that particular product is the right choice, but that's the kind of box that will do 6TB in RAID5 and still be portable enough to buy a couple of them and rotate them through the bank, like Stu says. \nObviously if you have to have to keep 7 individual days worth, a 14 day, 30 and 90 day snapshot, etc. then things are going to be much more expensive, but it's certainly doable if what you're after is just disaster recovery.\nThe biggest thing to make sure is part of your plan is actually testing the restoration from the backup. That seems to get overlooked WAY too often and turns out to be the weakest link in nearly all of the strategies.\nYou should plan for scheduled restorations as often as is reasonable where you actually dump the real data and restore from the backup. Without that, you don't know that it will work when you NEED it too. \nI've lost track of the number of times I've been in a company where there's a big rack full of backup tapes/drives, all dutifully made according to the schedule only to find out that NONE of them have valid data when the server gets wiped out.\nThe more ways you can verify the integrity of the backups the better, but nothing substitutes for doing an actual dump/load from one of your backups to really test the setup.\n",
"Amazon S3 might fit your budget better. I don't know if there is software available to automate the backup process but it's rather easy to write your own code to handle this. Here's their pricing calculator.\nAccording to my estimates you're going to be well under the $1000/mo range.\n",
"You really have to assess the true value of your data. If you lost it tomorrow what impact would it have on your business? We use offsite backups, it isn't cheap, but if we were to lose our data the business would cease to trade withing 2-3 days. \nWe considered on-site backups as a possible cost saver but in my experience with data centres/computer rooms over the last ten years (as both an employee and a customer) I've seen fires, fire suppression system malfunctions (wet), hardware theft and one day a car crashed through an external wall right into the suite. Add to that our last DC was located at Heathrow, right next to the runways....you never know what strange things can happen (remember the BA 777 that got caught short of the runway on landing?).\nMy advice, assess the value of the data then decide if $12k is too rich to keep it safe.\n",
"2TB is chump change nowadays.\nLook into hard-drive based hot-swappable backup machines, and rent a box at your local bank:\nhttp://www.high-rely.com/ (there are many more products such as this, but my Google-time is limited).\n",
"Jungle Disk is one such piece of software that can automate the backup process to Amazon S3. I use it for backup at home, but I guess it could work just as well from a server. Also, there are probably other backup tools that make use of S3 for offsite storage.\n",
"We've been using DataDomain appliances for that purpose for about 2 years. They're not inexpensive, but compared to $12,000/month they'd pay for themselves pretty quickly. \nBasically, we send our backups over NFS and CIFS to one DataDomain appliance, it deduplicates the data and then replicates the differences to the other appliance we have at a remote site.\n",
"As for pure online solutions, make sure you do some back-of-the-envelope calculations first. For example, if you have 2TB of churn a month, you are going to saturate a 1Mb Internet connection just for your backup traffic!\n",
"As previously mentioned, Amazon S3 is definitely an option, but it may be cheaper in the long run to own the hardware you are backing up to.\nFor example:\nBuy a basic server and and eSATA RAID5 setup with 2-3 times the capacity you currently need, then install it at a co-location center. Preferably one with high, but cheap, bandwidth.\nThis way the server and storage is off-site, but after the initial cost of the hardware, you are only paying for bandwidth.\nGranted, the downside to this is that, unlike something like S3, if the hardware goes down you have to go fix it yourself, or pay the CoLo people to. But this may be a tradeoff you are willing to make.\nAlso, with this solution, you are still going to need a beefy upload pipe to handle the traffic... so there's always the \"sneakernet\" solution.\n",
"I've used bqbackup.com for 1-2 years no problem. You can do a sync using rsync nightly. Wanted to add that their prices are dirt cheap, and I now have close to 1TB with them.\n"
] | [
8,
5,
3,
2,
2,
2,
1,
1,
0
] | [] | [] | [
"backup",
"offsite"
] | stackoverflow_0000009231_backup_offsite.txt |
Q:
Audio/Video Remote Control Profile (AVRCP) on Windows Mobile
Is there a general way to handle Bluetooth Audio/Video Remote Control Profile (AVRCP) events on a WM device? I'm especially interested in a Compact Framework way, but I would be happy with just a simple P/Invoke API.
Update.
I've read MSDN articles on this topic, but I still have no idea how to put this knowledge to use. There are no samples. Can anyone help me?
A:
Here is the MSDN page about AVRCP
Microsoft's solution uses the Audio/Video Control Transport Protocol (AVCTP). The Microsoft component is an extension layer to the L2CAP layer in the Microsoft Bluetooth Protocol Stack.
The following list shows the supported commands:
Play
Stop
Pause
Forward
Backward
Other Bluetooth profiles can be found on MSDN as well.
Hope this helps
| Audio/Video Remote Control Profile (AVRCP) on Windows Mobile | Is there a general way to handle Bluetooth Audio/Video Remote Control Profile (AVRCP) events on a WM device? I'm especially interested in a Compact Framework way, but I would be happy with just a simple P/Invoke API.
Update.
I've read MSDN articles on this topic, but I still have no idea on how to facilitate this knowledge. There are no samples. Can anyone help me?
| [
"Here is the MSDN page about AVRCP\nMicrosofts solution uses the Audio/Video Control Transport Control Protocol (AVCTP). The Microsoft component is an extension layer to the L2CAP layer in the Microsoft Bluetooth Protocol Stack.\nThe following list shows the supported commands:\n\nPlay\nStop\nPause\nForward\nBackward\n\nOther Bluetooth profiles can be found on MSDN as well.\nHope this helps\n"
] | [
1
] | [] | [] | [
"windows_mobile"
] | stackoverflow_0000021347_windows_mobile.txt |
Q:
Batch file to "Script" a Database
Is it possible to somehow use a .bat file to script the schema and/or content of a SQL Server database?
I can do this via the wizard, but would like to streamline the creation of this file for source control purposes.
I would like to avoid the use of 3rd party tools, just limiting myself to the tools that come with SQL Server.
A:
There is a free tool called SubCommander that is a part of the open source SubSonic software. I have successfully used this tool myself to create both schema and data "dumps" each night.
You can script out your schema and
data (and then version it in your
favorite source control system) using
SubCommander. Simply use the command
"version" and tell SubCommander where
to put the data:
sonic.exe version /out Scripts
This will output a script file (.sql)
to the local scripts directory of your
project
You can also try using the Microsoft SQL Server Database Publishing Wizard, although I am not sure that you can use it in a bat file.
| Batch file to "Script" a Database | Is it possible to somehow use a .bat file to script the schema and/or content of a SQL Server database?
I can do this via the wizard, but would like to streamline the creation of this file for source control purposes.
I would like to avoid the use of 3rd party tools, just limiting myself to the tools that come with SQL Server.
| [
"There is a free tool called SubCommander that is a part of the open source SubSonic software. I have successfully used this tool myself to create both schema and data \"dumps\" each night.\n\nYou can script out your schema and\n data (and then version it in your\n favorite source control system) using\n SubCommander. Simply use the command\n \"version\" and tell SubCommander where\n to put the data:\nsonic.exe version /out Scripts\nThis will output a script file (.sql)\n to the local scripts directory of your\n project\n\nYou can also try using the Microsoft SQL Server Database Publishing wizard, although i am not sure that you can use it in a bat file.\n"
] | [
6
] | [] | [] | [
"batch_file",
"batch_processing",
"command_line",
"scripting",
"sql_server"
] | stackoverflow_0000021353_batch_file_batch_processing_command_line_scripting_sql_server.txt |
Q:
How to create Projects/Tasks for Project Server 2003 via C#?
I need to be able to create basic MS Project items (tasks, projects, resources, etc.) programmatically from my app to my Project Server 2003 install, and haven't found any good examples. Can anyone point me to some good references or have some sample code of connecting to the server and creating these items?
A:
Developing against Project Server 2003 isn't the friendliest experience around, but I have worked a little bit with the PDS (Project Data Services) which is SOAP based
http://msdn.microsoft.com/en-us/library/aa204408(office.11).aspx
It contains .NET samples there
A:
As far as I know, the only programmatic access to PS 2003 is through PWS.
I don't know if it would work, but you could try writing a managed extension for Microsoft Project 2003 (the client application). There is a managed API for MS Project 2003, and you might be able to leverage that to communicate with the server, get a project and update it all in code.
Good luck!
| How to create Projects/Tasks for Project Server 2003 via C#? | I need to be able to create basic MS Project items (tasks, projects, resources, etc.) programmatically from my app to my Project Server 2003 install, and haven't found any good examples. Can anyone point me to some good references or have some sample code of connecting to the server and creating these items?
| [
"Developing against Project Server 2003 isn't the friendliest experience around, but I have worked a little bit with the PDS (Project Data Services) which is SOAP based\nhttp://msdn.microsoft.com/en-us/library/aa204408(office.11).aspx\nIt contains .NET samples there\n",
"As far as I know, the only programatic access to PS 2003 is through PWS. \nI don't know if it would work, but you could try writing a managed extension for Microsoft Project 2003 (The client application) .There is a managed API for MS Project 2003, and you might be able to leverage that to communicate with the server, get a project and update it all in code.\nGood luck!\n"
] | [
1,
0
] | [] | [] | [
"c#",
"project_server"
] | stackoverflow_0000018705_c#_project_server.txt |
Q:
Anyone have a link to a technical discussion of anything akin to the Facebook news feed system?
I'm looking for a presentation, PDF, blog post, or whitepaper discussing the technical details of how to filter down and display massive amounts of information for individual users in an intelligent (possibly machine learning) kind of way. I've had coworkers hear presentations on the Facebook news feed but I can't find anything published anywhere that goes into the dirty details. Searches seem to just turn up the controversy of the system. Maybe I'm not searching for the right keywords...
@AlexCuse I'm trying to build something similar to Facebook's system. I have large amounts of data and I need to filter it down to something manageable to present to the user. I cannot use another website due to the scale of what I've got to work at. Also I just want a technical discussion of how to implement it, not examples of people who have an implementation.
A:
Are you looking for something along the lines of distributed pub/sub with content based filtering? If so, you may want to look into Siena and some of the associated papers such as Design and Evaluation of a Wide-Area Event Notification Service
| Anyone have a link to a technical discussion of anything akin to the Facebook news feed system? | I'm looking for a presentation, PDF, blog post, or whitepaper discussing the technical details of how to filter down and display massive amounts of information for individual users in an intelligent (possibly machine learning) kind of way. I've had coworkers hear presentations on the Facebook news feed but I can't find anything published anywhere that goes into the dirty details. Searches seem to just turn up the controversy of the system. Maybe I'm not searching for the right keywords...
@AlexCuse I'm trying to build something similar to Facebook's system. I have large amounts of data and I need to filter it down to something manageable to present to the user. I cannot use another website due to the scale of what I've got to work at. Also I just want a technical discussion of how to implement it, not examples of people who have an implementation.
| [
"Are you looking for something along the lines of distributed pub/sub with content based filtering? If so, you may want to look into Siena and some of the associated papers such as Design and Evaluation of a Wide-Area Event Notification Service\n"
] | [
4
] | [] | [] | [
"facebook"
] | stackoverflow_0000021243_facebook.txt |
Q:
Get application name from Windows XP cut and paste?
This is a bit of a long shot, but if anyone can figure it out, you guys can...
In Windows XP, is there any meta-data that comes with a cut and paste action, from which I can ascertain the application that provided the clipboard contents?
Bonus question... if there is such information, is there any way I can access it through a Java app?
A:
The API function GetClipboardOwner returns the clipboard owner
(http://msdn.microsoft.com/en-us/library/ms649041(VS.85).aspx), -sorry cannot mark as link because the "()" in the URL-
Don't know Java, I suppose you can call native API functions from inside the Java VM.
A "quick and dirty" approach could be using an AutoHotKey script to intercept the CTRL+C / CTRL+INS hotkeys, grab the id of the active process (in a file or any place that can be accessed by the Java app) and proceed with the clipboard copy. The AutoHotKey part shouldn't be difficult, don't know about the Java part.
A:
That depends on the clipboard format. If it is plain-text, then no. Unless you want to install global hooks on the clipboard.
Which you cannot do from Java.
| Get application name from Windows XP cut and paste? | This is a bit of a long shot, but if anyone can figure it out, you guys can...
In Windows XP, is there any meta-data that comes with a cut and paste action, from which I can ascertain the application that provided the clipboard contents?
Bonus question... if there is such information, is there any way I can access it through a Java app?
| [
"The API function GetClipboardOwner returns the clipboard owner \n(http://msdn.microsoft.com/en-us/library/ms649041(VS.85).aspx), -sorry cannot mark as link because the \"()\" in the URL-\nDon't know Java, I suppose you can call native API functions from inside the Java VM.\nA \"quick and dirty\" approach could be using an AutoHotKey script to intercept the CTRL+C / CTRL+INS hotkeys, grab the id of the active process (in a file or any place that can be accessed by the Java app) and proceed with the clipboard copy. The AutoHotKey part shouldn't be difficult, don't know about the Java part.\n",
"That depends on the clipboard format. If it is plain-text, then no. Unless you want to install global hooks on the clipboard.\nWhich you cannot do from Java.\n"
] | [
2,
1
] | [] | [] | [
"clipboard",
"java",
"windows_xp"
] | stackoverflow_0000021211_clipboard_java_windows_xp.txt |
Q:
Is there any way to configure windows to not change the focus?
I'm tired of being in the middle of typing something, having a pop-up with a question appear, and hitting enter before reading it... (it also happens with some windows that are not pop-ups)
Do you know if there's some setting I could touch for this not to happen?
A:
Not that I know of. This has been a plague of Windows versions for quite some time.
A:
Actually Windows XP tries to avoid that. Of course some programs found a way to circumvent that. Microsoft Powertoy TweakUI has a way to turn the option on again in case it was turned off. You could also edit the registry yourself using the following information.
A:
There is supposed to be a registry change that helps with this type of situation (mentioned in this Coding Horror post about the subject of "focus stealing"). I tried it; it doesn't work with all popups but helps with some of them, causing the offending application to flash in the taskbar instead of gaining focus.
| Is there any way to configure windows to not change the focus? | I'm tired of being in the middle of typing something, having a pop-up with a question appear, and hitting enter before reading it... (it also happens with some windows that are not pop-ups)
Do you know if there's some setting I could touch for this not to happen?
| [
"Not that I know of. This has been a plague of Windows versions for quite some time.\n",
"Actually Windows XP tries to avoid that. Of course some programs found a way to circumvented that. Microsoft Powertoy TweakUI has a way to turn the option on again in case it was turned off. You could also edit the registry yourself using the following information. \n",
"It suppose to be a registry change that helps with this type of situations (mentioned in this Coding Horror post about the subject of \"focus stealing\"). I try it, it doesn't work with all popups but helps with some of them, causing the offending application to flash in the taskbar instead of gain focus.\n"
] | [
0,
0,
0
] | [] | [] | [
"configuration",
"windows"
] | stackoverflow_0000021060_configuration_windows.txt |
Q:
What predefined #if symbols does C# have?
#if SYMBOL
//code
#endif
what values does C# predefine for use?
A:
Depends on what /define compiler options you use. Visual Studio puts the DEBUG symbol in there for you via the project settings, but you could create any ones that you want.
A:
To add to what Nick said, the MSDN documentation does not list any pre-defined names. It would seem that all need to come from #define and /define.
#if on MSDN
A:
Well, that depends on the compiler you are using and the command line options. Mono defines different names than Microsoft's compiler by default, and depending on what system you are on you get different defines, etc.
If you provide a more specific system for which you are compiling, we might be able to come up with the list for that particular system (for example: x64 Vista system, using Visual Studio 2008).
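To illustrate the point made in these answers (my own sketch, not from the original posts): DEBUG and TRACE only appear because Visual Studio passes /define for them by default, and any other symbol has to come from #define in the file or /define on the command line.
// Conditional compilation in C#: symbols come from #define in the file or
// from the compiler's /define switch (csc /define:DEMO_MODE ...), not from a
// built-in list. DEBUG/TRACE are just defaults of the VS project settings.
#define DEMO_MODE   // file-scoped symbol; must appear before any other code

using System;

class ConditionalDemo
{
    static void Main()
    {
#if DEBUG
        Console.WriteLine("DEBUG was defined by the build configuration (/define:DEBUG).");
#endif

#if DEMO_MODE
        Console.WriteLine("DEMO_MODE was #define'd at the top of this file.");
#else
        Console.WriteLine("DEMO_MODE is not defined.");
#endif
    }
}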
What predefined #if symbols does C# have? | #if SYMBOL
//code
#endif
what values does C# predefine for use?
| [
"Depends on what /define compiler options you use. Visual Studio puts the DEBUG symbol in there for you via the project settings, but you could create any ones that you want.\n",
"To add to what Nick said, the MSDN documentation does not list any pre-defined names. It would seem that all need to come from #define and /define.\n#if on MSDN\n",
"Well, that depends on the compiler you are using, and the command line options. Mono defines different names than Microsoft's compiler by default, and depending on what system you are you get different defines, etc.\nIf you provide a more specific system for which you are compiling, we might be able to come up with the list for that particular system (for example: x64 Vista system, using Visual Studio 2008).\n"
] | [
4,
4,
0
] | [] | [] | [
"c#"
] | stackoverflow_0000021461_c#.txt |
Q:
How to maintain a recursive invariant in a MySQL database?
I have a tree encoded in a MySQL database as edges:
CREATE TABLE items (
num INT,
tot INT,
PRIMARY KEY (num)
);
CREATE TABLE tree (
orig INT,
term INT,
FOREIGN KEY (orig) REFERENCES items (num),
FOREIGN KEY (term) REFERENCES items (num)
)
For each leaf in the tree, items.tot is set by someone. For interior nodes, items.tot needs to be the sum of its children. Running the following query repeatedly would generate the desired result.
UPDATE items SET tot = (
SELECT SUM(b.tot) FROM
tree JOIN items AS b
ON tree.term = b.num
WHERE tree.orig=items.num)
WHERE EXISTS
(SELECT * FROM tree WHERE orig=items.num)
(note this actually doesn't work but that's beside the point)
Assume that the database exists and the invariants are already satisfied.
The question is:
What is the most practical way to update the DB while maintaining this requirement? Updates may move nodes around or alter the value of tot on leaf nodes. It can be assumed that leaf nodes will stay as leaf nodes, interior nodes will stay as interior nodes and the whole thing will remain as a proper tree.
Some thoughts I have had:
Full Invalidation, after any update, recompute everything (Um... No)
Set a trigger on the items table to update the parent of any row that is updated
This would be recursive (updates trigger updates, trigger updates, ...)
Doesn't work, MySQL can't update the table that kicked off the trigger
Set a trigger to schedule an update of the parent of any row that is updated
This would be iterative (get an item from the schedule, processing it schedules more items)
What kicks this off? Trust client code to get it right?
An advantage is that if the updates are ordered correctly fewer sums need to be computed. But that ordering is a complication in and of itself.
An ideal solution would generalize to other "aggregating invariants"
FWIW I know this is "a bit overboard", but I'm doing this for fun (Fun: verb, Finding the impossible by doing it. :-)
A:
I am not sure I understand correctly your question, but this could work My take on trees in SQL.
Linked post described method of storing tree in database -- PostgreSQL in that case -- but the method is clear enough, so it can be adopted easily for any database.
With this method you can easy update all the nodes depend on modified node K with about N simple SELECTs queries where N is distance of K from root node.
I hope your tree is not really deep :).
Good Luck!
A:
The problem you are having is clear: recursion in SQL. You need to get the parent of the parent... of the leaf and update its total (either subtracting the old and adding the new, or recomputing). You need some form of identifier to see the structure of the tree, and grab all of a node's children and a list of the parents/path to a leaf to update.
This method adds constant space (2 columns to your table --but you only need one table, else you can do a join later). I played around with a structure a while ago that used a hierarchical format using 'left' and 'right' columns (obviously not those names), calculated by a pre-order traversal and a post-order traversal, respectively --don't worry, these don't need to be recalculated every time.
I'll let you take a look at a page using this method in mysql instead of continuing this discussion in case you don't like this method as an answer. But if you like it, post/edit and I'll take some time and clarify.
| How to maintain a recursive invariant in a MySQL database? | I have a tree encoded in a MySQL database as edges:
CREATE TABLE items (
num INT,
tot INT,
PRIMARY KEY (num)
);
CREATE TABLE tree (
orig INT,
term INT
FOREIGN KEY (orig,term) REFERENCES items (num,num)
)
For each leaf in the tree, items.tot is set by someone. For interior nodes, items.tot needs to be the sum of it's children. Running the following query repeatedly would generate the desired result.
UPDATE items SET tot = (
SELECT SUM(b.tot) FROM
tree JOIN items AS b
ON tree.term = b.num
WHERE tree.orig=items.num)
WHERE EXISTS
(SELECT * FROM tree WHERE orig=items.num)
(note this actually doesn't work but that's beside the point)
Assume that the database exists and the invariant are already satisfied.
The question is:
What is the most practical way to update the DB while maintaining this requirement? Updates may move nodes around or alter the value of tot on leaf nodes. It can be assumed that leaf nodes will stay as leaf nodes, interior nodes will stay as interior nodes and the whole thing will remain as a proper tree.
Some thoughts I have had:
Full Invalidation, after any update, recompute everything (Um... No)
Set a trigger on the items table to update the parent of any row that is updated
This would be recursive (updates trigger updates, trigger updates, ...)
Doesn't work, MySQL can't update the table that kicked off the trigger
Set a trigger to schedule an update of the parent of any row that is updated
This would be iterative (get an item from the schedule, processing it schedules more items)
What kicks this off? Trust client code to get it right?
An advantage is that if the updates are ordered correctly fewer sums need to be computer. But that ordering is a complication in and of it's own.
An ideal solution would generalize to other "aggregating invariants"
FWIW I know this is "a bit overboard", but I'm doing this for fun (Fun: verb, Finding the impossible by doing it. :-)
| [
"I am not sure I understand correctly your question, but this could work My take on trees in SQL.\nLinked post described method of storing tree in database -- PostgreSQL in that case -- but the method is clear enough, so it can be adopted easily for any database.\nWith this method you can easy update all the nodes depend on modified node K with about N simple SELECTs queries where N is distance of K from root node.\nI hope your tree is not really deep :).\nGood Luck!\n",
"The problem you are having is clear, recursion in SQL. You need to get the parent of the parent... of the leaf and updates it's total (either subtracting the old and adding the new, or recomputing). You need some form of identifier to see the structure of the tree, and grab all of a nodes children and a list of the parents/path to a leaf to update. \nThis method adds constant space (2 columns to your table --but you only need one table, else you can do a join later). I played around with a structure awhile ago that used a hierarchical format using 'left' and 'right' columns (obviously not those names), calculated by a pre-order traversal and a post-order traversal, respectively --don't worry these don't need to be recalculated every time. \nI'll let you take a look at a page using this method in mysql instead of continuing this discussion in case you don't like this method as an answer. But if you like it, post/edit and I'll take some time and clarify.\n"
] | [
1,
1
] | [] | [] | [
"algorithm",
"data_structures",
"invariants",
"mysql"
] | stackoverflow_0000020426_algorithm_data_structures_invariants_mysql.txt |
Q:
Mocking and IQueryable
I've run into a problem while trying to test the following IRepository based on NHibernate:
public class NHibernateRepository<T>: Disposable, IRepository<T>
where T : IdentifiableObject
{
...
public IQueryable<T> Query()
{
return NHibernateSession.Linq<T>();
}
}
How on earth do I mock the returned IQueryable<T> so that it returns a given collection in exchange for a certain expression? I feel I have some misunderstanding of IQueryable<T>...
A:
In Moq it would be:
mockRepository.Expect( r => r.Query() ).Returns( myEnumerable.AsQueryable() );
In RhinoMocks it would be:
Expect.Call( repository.Query() ).Return( myEnumerable.AsQueryable() );
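For a slightly fuller picture (my own sketch, not from the answer above; IRepository<T> and IdentifiableObject are the question's own types, and newer Moq versions use Setup where the answer uses Expect), the key idea is just that Query() is stubbed to return an in-memory list converted with AsQueryable(), so LINQ-to-Objects evaluates whatever expression the code under test builds:
// Sketch of mocking IRepository<T>.Query() with Moq. The LINQ provider here
// is plain LINQ-to-Objects, so expressions run against the in-memory list.
// Customer and IdentifiableObject stand in for the question's domain model.
using System.Collections.Generic;
using System.Linq;
using Moq;

public class Customer : IdentifiableObject
{
    public string Name { get; set; }
}

public class CustomerRepositoryTests
{
    public void Finds_customers_by_name()
    {
        var data = new List<Customer>
        {
            new Customer { Name = "Ann" },
            new Customer { Name = "Bob" }
        };

        var repository = new Mock<IRepository<Customer>>();
        repository.Setup(r => r.Query()).Returns(data.AsQueryable());

        // Code under test only sees an IQueryable<Customer>.
        var result = repository.Object.Query().Where(c => c.Name == "Ann").ToList();

        // result now contains only Ann; assert with your test framework of choice.
    }
}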
| Mocking and IQueryable | I've ran into a problem while trying to test following IRepository based on NHibernate:
public class NHibernateRepository<T>: Disposable, IRepository<T>
where T : IdentifiableObject
{
...
public IQueryable<T> Query()
{
return NHibernateSession.Linq<T>();
}
}
How on the Hell to mock returning IQueryable<T> out in the way that it returns given collection in exchange certain expression. I feel I have some misunderstanding of IQueryable<T>...
| [
"In Moq it would be:\nmockRepository.Expect( r => r.Query() ).Returns( myEnumerable.AsQueriable() );\n\nIn RhinoMocks it would be:\nExpect.Call( repository.Query() ).Return( myEnumerable.AsQueriable() );\n\n"
] | [
7
] | [] | [] | [
"linq"
] | stackoverflow_0000021355_linq.txt |
Q:
Fast(er) way to get file inode using PHP
To grab the inode of a file in PHP, you can use this:
$fs = stat($file);
echo $fs['ino'];
The problem with this is that EVERYWHERE says it's slow and you should avoid it. So the question becomes: what's the fast(er) way to do it?
A:
You could use fileinode() but you should run benchmarks if you think it is slow.
A:
I think you should benchmark and take a look at what you are doing to determine if stat() is the slowest part of your code. Stat-ing 1 file on each request on a server that gets about 100 hits/day is not a problem. Stat-ing every file could be a problem when you have to eke out a few more requests a second.
You can avoid stating the same file repeatedly by caching the results via memcached, apc or some other in-memory caching system.
Premature optimization is the root of all evil. - Donald Knuth
| Fast(er) way to get file inode using PHP | To grab the inode of a file in PHP, you can use this:
$fs = stat($file);
echo $fs['ino'];
The problem with this is EVERYWHERE says it's slow and you should avoid it. So the question becomes what's the fast(er) way to do it?
| [
"You could use fileinode() but you should run benchmarks if you think it is slow.\n",
"I think you should benchmark and take a look at what you are doing to determine if stat() is the slowest part of your code. Stating 1 file on each request on a server that gets about 100 hits/day is not a problem. Stating every file could be a problem when you have to eek out a few more requests a second.\nYou can avoid stating the same file repeatedly by caching the results via memcached, apc or some other in-memory caching system.\nPremature optimization is the root of all evil. - Donald Knuth\n"
] | [
1,
0
] | [] | [] | [
"inode",
"php"
] | stackoverflow_0000019708_inode_php.txt |
Q:
I/O permission settings using .net installer
I am creating a program that will be installed using the .net installer project. The program writes settings files to its directory in the Program Files dir. I believe there are some active directory settings that will prevent the application from writing to that directory if a limited user is running the program. Is there a way to change the settings for the application folder through the install so this will not be a problem?
A:
Writing to the Program Files folder is a really bad idea, you should assume that this location is "read only" once installed.
Saving user settings in Program Files causes problems if two or more people use the computer at once (eg. Terminal Services): whose settings should be saved? Do you want other users to know 'your' settings? What happens if your program writes settings to the file as user A, but user B can't edit the file? User B may have access to the directory, but not be able to read/delete the preference file as it is owned by user A.
Legacy win9x programs often write to the program files folder, Windows Vista actually does some neat trickery to let these programs work. When your program writes a file, vista actually puts it someplace else that is only accessible to that user. The same is done for registry writes to HKLM (or so I discovered after hours of debugging...) and Server 2008 does the same thing.
If you're needing to save user settings the best alternative would be to save the settings to the Application Data folder (Environment Variable %APPDATA%)
If the settings are system wide, then the administrative user should set these after install or on first run and they should not be able to be overwritten by limited users.
So to answer your question - YES there is a way to do what you've asked. But it's a bad idea, it's insecure and will probably cause problems in the long run.
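To make the Application Data suggestion concrete, here is a small sketch (mine, not the answerer's; the vendor, product and file names are placeholders) of resolving a per-user settings path from .NET instead of writing under Program Files:
// Resolve a per-user, writable settings location instead of Program Files.
// %APPDATA% maps to Environment.SpecialFolder.ApplicationData and is private
// to the logged-on user, so limited users can write here without extra ACLs.
using System;
using System.IO;

static class SettingsLocation
{
    static void Main()
    {
        string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

        // "ExampleVendor\ExampleApp" is a placeholder; use your own company/product names.
        string settingsDir = Path.Combine(appData, @"ExampleVendor\ExampleApp");
        Directory.CreateDirectory(settingsDir);   // no-op if it already exists

        string settingsFile = Path.Combine(settingsDir, "settings.xml");
        File.WriteAllText(settingsFile, "<settings />");

        Console.WriteLine("Settings stored at: " + settingsFile);
    }
}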
| I/O permission settings using .net installer | I am creating a program that will be installed using the .net installer project. The program writes settings files to its directory in the Program Files dir. I believe there are some active directory settings that will prevent the application from writing to that directory if a limited user is running the program. Is there a way to change the settings for the application folder through the install so this will not be a problem?
| [
"Writing to the Program Files folder is a really bad idea, you should assume that this location is \"read only\" once installed. \nSaving user settings in Program Files causes problems if more than two people use the computer at once (eg. Terminal Services) who's settings should be saved, do you want other users to know 'your' settings? What happens if your program writes settings to the file as user A, but user B can't edit the file? User B may have access to the directory, but not read/delete the preference file as this is owned by user A. \nLegacy win9x programs often write to the program files folder, Windows Vista actually does some neat trickery to let these programs work. When your program writes a file, vista actually puts it someplace else that is only accessible to that user. The same is done for registry writes to HKLM (or so I discovered after hours of debugging...) and Server 2008 does the same thing.\nIf you're needing to save user settings the best alternative would be to save the settings to the Application Data folder (Environment Variable %APPDATA%)\nIf the settings are system wide, then the administrative user should set these after install or on first run and they should not be able to be overwritten by limited users.\nSo to answer your question - YES there is a way to do what you've asked. But it's a bad idea, it's insecure and will probably cause problems in the long run.\n"
] | [
2
] | [
"You can write a custom installer class which can change the security permissions of the folder. This would assume the installation is done by a user who has permission to change file/directory security.\nThe best option is to not write to directories under Program Files at all.\n"
] | [
-1
] | [
".net",
"active_directory",
"installation",
"io"
] | stackoverflow_0000018675_.net_active_directory_installation_io.txt |
Q:
Grouping runs of data
SQL Experts,
Is there an efficient way to group runs of data together using SQL?
Or is it going to be more efficient to process the data in code.
For example if I have the following data:
ID|Name
01|Harry Johns
02|Adam Taylor
03|John Smith
04|John Smith
05|Bill Manning
06|John Smith
I need to display this:
Harry Johns
Adam Taylor
John Smith (2)
Bill Manning
John Smith
@Matt: Sorry, I had trouble formatting the data using an embedded html table; it worked in the preview but not in the final display.
A:
Try this:
select n.name,
(select count(*)
from myTable n1
where n1.name = n.name and n1.id >= n.id and (n1.id <=
(
select isnull(min(nn.id), (select max(id) + 1 from myTable))
from myTable nn
where nn.id > n.id and nn.name <> n.name
)
))
from myTable n
where not exists (
select 1
from myTable n3
where n3.name = n.name and n3.id < n.id and n3.id > (
select isnull(max(n4.id), (select min(id) - 1 from myTable))
from myTable n4
where n4.id < n.id and n4.name <> n.name
)
)
I think that'll do what you want. Bit of a kludge though.
Phew! After a few edits I think I have all the edge cases sorted out.
A:
I hate cursors with a passion... but here's a dodgy cursor version...
Declare @NewName Varchar(50)
Declare @OldName Varchar(50)
Declare @CountNum int
Set @CountNum = 0
DECLARE nameCursor CURSOR FOR
SELECT Name
FROM NameTest
OPEN nameCursor
FETCH NEXT FROM nameCursor INTO @NewName
WHILE @@FETCH_STATUS = 0
BEGIN
if @OldName <> @NewName
BEGIN
Print @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')'
Set @CountNum = 0
END
SELECT @OldName = @NewName
FETCH NEXT FROM nameCursor INTO @NewName
Set @CountNum = @CountNum + 1
END
Print @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')'
CLOSE nameCursor
DEALLOCATE nameCursor
A:
My solution just for kicks (this was a fun exercise), no cursors, no iterations, but i do have a helper field
-- Setup test table
DECLARE @names TABLE (
id INT IDENTITY(1,1),
name NVARCHAR(25) NOT NULL,
grp UNIQUEIDENTIFIER NULL
)
INSERT @names (name)
SELECT 'Harry Johns' UNION ALL
SELECT 'Adam Taylor' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'Bill Manning'
-- Set the first id's group to a newid()
UPDATE n
SET grp = newid()
FROM @names n
WHERE n.id = (SELECT MIN(id) FROM @names)
-- Set the group to a newid() if the name does not equal the previous
UPDATE n
SET grp = newid()
FROM @names n
INNER JOIN @names b
ON (n.ID - 1) = b.ID
AND ISNULL(b.Name, '') <> n.Name
-- Set groups that are null to the previous group
-- Keep on doing this until all groups have been set
WHILE (EXISTS(SELECT 1 FROM @names WHERE grp IS NULL))
BEGIN
UPDATE n
SET grp = b.grp
FROM @names n
INNER JOIN @names b
ON (n.ID - 1) = b.ID
AND n.grp IS NULL
END
-- Final output
SELECT MIN(id) AS id_start,
MAX(id) AS id_end,
name,
count(1) AS consecutive
FROM @names
GROUP BY grp,
name
ORDER BY id_start
/*
Results:
id_start id_end name consecutive
1 1 Harry Johns 1
2 2 Adam Taylor 1
3 4 John Smith 2
5 7 Bill Manning 3
8 8 John Smith 1
9 9 Bill Manning 1
*/
A:
Well, this:
select Name, count(Id)
from MyTable
group by Name
will give you this:
Harry Johns, 1
Adam Taylor, 1
John Smith, 2
Bill Manning, 1
and this (MS SQL syntax):
select Name +
case when ( count(Id) > 1 )
then ' ('+cast(count(Id) as varchar)+')'
else ''
end
from MyTable
group by Name
will give you this:
Harry Johns
Adam Taylor
John Smith (2)
Bill Manning
Did you actually want that other John Smith on the end of your results?
EDIT: Oh I see, you want consecutive runs grouped. In that case, I'd say you need a cursor or to do it in your program code.
A:
How about this:
declare @tmp table (Id int, Nm varchar(50));
insert @tmp select 1, 'Harry Johns';
insert @tmp select 2, 'Adam Taylor';
insert @tmp select 3, 'John Smith';
insert @tmp select 4, 'John Smith';
insert @tmp select 5, 'Bill Manning';
insert @tmp select 6, 'John Smith';
select * from @tmp order by Id;
select Nm, count(1) from
(
select Id, Nm,
case when exists (
select 1 from @tmp t2
where t2.Nm=t1.Nm
and (t2.Id = t1.Id + 1 or t2.Id = t1.Id - 1))
then 1 else 0 end as Run
from @tmp t1
) truns group by Nm, Run
[Edit] That can be shortened a bit
select Nm, count(1) from (select Id, Nm, case when exists (
select 1 from @tmp t2 where t2.Nm=t1.Nm
and abs(t2.Id-t1.Id)=1) then 1 else 0 end as Run
from @tmp t1) t group by Nm, Run
A:
For this particular case, all you need to do is group by the name and ask for the count, like this:
select Name, count(*)
from MyTable
group by Name
That'll get you the count for each name as a second column.
You can get it all as one column by concatenating like this:
select Name + ' (' + cast(count(*) as varchar) + ')'
from MyTable
group by Name
| Grouping runs of data | SQL Experts,
Is there an efficient way to group runs of data together using SQL?
Or is it going to be more efficient to process the data in code.
For example if I have the following data:
ID|Name
01|Harry Johns
02|Adam Taylor
03|John Smith
04|John Smith
05|Bill Manning
06|John Smith
I need to display this:
Harry Johns
Adam Taylor
John Smith (2)
Bill Manning
John Smith
@Matt: Sorry I had trouble formatting the data using an embedded html table it worked in the preview but not in the final display.
| [
"Try this:\nselect n.name, \n (select count(*) \n from myTable n1\n where n1.name = n.name and n1.id >= n.id and (n1.id <=\n (\n select isnull(min(nn.id), (select max(id) + 1 from myTable))\n from myTable nn\n where nn.id > n.id and nn.name <> n.name\n )\n ))\nfrom myTable n\nwhere not exists (\n select 1\n from myTable n3\n where n3.name = n.name and n3.id < n.id and n3.id > (\n select isnull(max(n4.id), (select min(id) - 1 from myTable))\n from myTable n4\n where n4.id < n.id and n4.name <> n.name\n )\n)\n\nI think that'll do what you want. Bit of a kludge though.\nPhew! After a few edits I think I have all the edge cases sorted out.\n",
"I hate cursors with a passion... but here's a dodgy cursor version...\nDeclare @NewName Varchar(50)\nDeclare @OldName Varchar(50)\nDeclare @CountNum int\nSet @CountNum = 0\n\nDECLARE nameCursor CURSOR FOR \nSELECT Name\nFROM NameTest\nOPEN nameCursor\n\nFETCH NEXT FROM nameCursor INTO @NewName\n\n WHILE @@FETCH_STATUS = 0 \n\n BEGIN\n\n if @OldName <> @NewName\n BEGIN\n Print @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')'\n Set @CountNum = 0\n END\n SELECT @OldName = @NewName\n FETCH NEXT FROM nameCursor INTO @NewName\n Set @CountNum = @CountNum + 1\n\n END\nPrint @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')'\n\nCLOSE nameCursor\nDEALLOCATE nameCursor\n\n",
"My solution just for kicks (this was a fun exercise), no cursors, no iterations, but i do have a helper field\n-- Setup test table\nDECLARE @names TABLE (\n id INT IDENTITY(1,1),\n name NVARCHAR(25) NOT NULL,\n grp UNIQUEIDENTIFIER NULL\n )\n\nINSERT @names (name)\nSELECT 'Harry Johns' UNION ALL \nSELECT 'Adam Taylor' UNION ALL\nSELECT 'John Smith' UNION ALL\nSELECT 'John Smith' UNION ALL\nSELECT 'Bill Manning' UNION ALL\nSELECT 'Bill Manning' UNION ALL\nSELECT 'Bill Manning' UNION ALL\nSELECT 'John Smith' UNION ALL\nSELECT 'Bill Manning' \n\n-- Set the first id's group to a newid()\nUPDATE n\nSET grp = newid()\nFROM @names n\nWHERE n.id = (SELECT MIN(id) FROM @names)\n\n-- Set the group to a newid() if the name does not equal the previous\nUPDATE n\nSET grp = newid()\nFROM @names n\nINNER JOIN @names b\n ON (n.ID - 1) = b.ID\n AND ISNULL(b.Name, '') <> n.Name\n\n-- Set groups that are null to the previous group\n-- Keep on doing this until all groups have been set\nWHILE (EXISTS(SELECT 1 FROM @names WHERE grp IS NULL))\nBEGIN\n UPDATE n\n SET grp = b.grp\n FROM @names n\n INNER JOIN @names b\n ON (n.ID - 1) = b.ID\n AND n.grp IS NULL\nEND\n\n-- Final output\nSELECT MIN(id) AS id_start,\n MAX(id) AS id_end,\n name,\n count(1) AS consecutive\nFROM @names\nGROUP BY grp, \n name\nORDER BY id_start\n\n/*\nResults:\n\nid_start id_end name consecutive\n1 1 Harry Johns 1\n2 2 Adam Taylor 1\n3 4 John Smith 2\n5 7 Bill Manning 3\n8 8 John Smith 1\n9 9 Bill Manning 1\n*/\n\n",
"Well, this:\nselect Name, count(Id)\nfrom MyTable\ngroup by Name\n\nwill give you this:\nHarry Johns, 1\nAdam Taylor, 1\nJohn Smith, 2\nBill Manning, 1\n\nand this (MS SQL syntax):\nselect Name +\n case when ( count(Id) > 1 ) \n then ' ('+cast(count(Id) as varchar)+')' \n else ''\n end\nfrom MyTable\ngroup by Name\n\nwill give you this:\nHarry Johns\nAdam Taylor\nJohn Smith (2)\nBill Manning\n\nDid you actually want that other John Smith on the end of your results?\nEDIT: Oh I see, you want consecutive runs grouped. In that case, I'd say you need a cursor or to do it in your program code.\n",
"How about this:\ndeclare @tmp table (Id int, Nm varchar(50));\n\ninsert @tmp select 1, 'Harry Johns';\ninsert @tmp select 2, 'Adam Taylor';\ninsert @tmp select 3, 'John Smith';\ninsert @tmp select 4, 'John Smith';\ninsert @tmp select 5, 'Bill Manning';\ninsert @tmp select 6, 'John Smith';\n\nselect * from @tmp order by Id;\n\nselect Nm, count(1) from \n(\nselect Id, Nm, \n case when exists (\n select 1 from @tmp t2 \n where t2.Nm=t1.Nm \n and (t2.Id = t1.Id + 1 or t2.Id = t1.Id - 1)) \n then 1 else 0 end as Run\nfrom @tmp t1\n) truns group by Nm, Run\n\n[Edit] That can be shortened a bit\nselect Nm, count(1) from (select Id, Nm, case when exists (\n select 1 from @tmp t2 where t2.Nm=t1.Nm \n and abs(t2.Id-t1.Id)=1) then 1 else 0 end as Run\nfrom @tmp t1) t group by Nm, Run\n\n",
"For this particular case, all you need to do is group by the name and ask for the count, like this:\nselect Name, count(*)\nfrom MyTable\ngroup by Name\n\nThat'll get you the count for each name as a second column.\nYou can get it all as one column by concatenating like this:\nselect Name + ' (' + cast(count(*) as varchar) + ')'\nfrom MyTable\ngroup by Name\n\n"
] | [
2,
2,
2,
1,
1,
0
] | [] | [] | [
"sql"
] | stackoverflow_0000021489_sql.txt |
Q:
Pushing out MSI files
I have a product which has been traditionally shipped as an MSI file. It is deployed through some sort of SMS push to thousands of desktops by our various clients. The software we use to create these installers is getting long in the tooth and we are looking to replace it. We have already standardized on InstallAnywhere for most of our products as we support many operating systems. Unfortunately InstallAnywhere cannot produce MSI files.
I am wondering if it is required that SMS use MSI files or if it can handle other installer types (.exe). If not, are there any open source programmes for creating MSI files?
A:
If you want to create MSI files, try WiX: Windows Installer XML (WiX) toolset.
It's an add-on to Visual Studio 2005 and 2008, is open-source, and Microsoft-developed. You can use XML to specify and create MSI files. There is a wealth of resources available on it, and WiX 3.0, although in beta, is very complete.
Also, note that you don't have to start from scratch, you can decompile an existing MSI using the WiX Dark utility, modify the XML in any way you like, and then recompile it into an MSI.
A:
If your clients are using SMS then you're in the clear... SMS supports EXE. You enter a command line when creating 'Programs' and clients are probably already calling msiexec to launch the MSI. Also I'm pretty sure SMS predates the MSI file format :)
However if they're using Active Directory / Group Policy Objects.. then you're SOL as that does depend on MSI format for deployment.
If you do want to stick with InstallAnywhere, there are a number of "MSI repackaging" tools available. Assuming you're looking at a basic application (device drivers might be an issue) then repackaging should be a fairly painless process.
A:
Actually, with group policies, there's the ZAP file alternative, but I would recommend regardless that you learn MSI. It's not that hard, and very flexible.
| Pushing out MSI files | I have a product which has been traditionally shipped as an MSI file. It is deployed through some sort of SMS push to thousands of desktops by our various clients. The software we use to create these installers is getting long in the tooth and we are looking to replace it. We have already standardized on InstallAnywhere for most of our products as we support many operating systems. Unfortunately InstallAnywhere cannot produce MSI files.
I am wondering if it is required that SMS use MSI files or if it can handle other installer types (.exe). If not, are there any open source programmes for creating MSI files?
| [
"If you want to create MSI files, try WiX: Windows Installer XML (WiX) toolset.\nIt's an addon to Visual Studio 2005 and 2008, is open-source, and Microsoft developed. You can use XML to specify and create MSI files. There is a wealth of resources available on it, and WiX 3.0 is, although in beta, is very complete.\nAlso, note that you don't have to start from scratch, you can decompile an existing MSI using the WiX Dark utility, modify the XML in any way you like, and then recompile it into an MSI.\n",
"If your clients are using SMS then you're in the clear... SMS supports EXE. You enter a command line when creating 'Programs' and clients are probably already calling msiexec to launch the MSI. Also I'm pretty sure SMS predates the MSI file format :)\nHowever if they're using Active Directory / Group Policy Objects.. then you're SOL as that does depend on MSI format for deployment.\nIf you do want to stick with InstallAnywhere, there are a number of \"MSI repackaging\" tools available. Assuming you're looking at a basic application (device drivers might be an issue) then repackaging should be a fairly painless process.\n",
"Actually, with group policies, there's the ZAP file alternative, but I would recommend regardless that you learn MSI. It's not that hard, and very flexible.\n"
] | [
4,
4,
1
] | [] | [] | [
"deployment",
"installation",
"windows_installer"
] | stackoverflow_0000021635_deployment_installation_windows_installer.txt |
Q:
RSS/Atom for professional use
I wondered if anyone can give an example of a professional use of RSS/Atom feeds in a company product. Does anyone use feeds for other things than updating news?
For example, did you create a product that gives results as RSS/Atom feeds? Like price listings or current inventory, or maybe dates of training lessons?
Or am I thinking in a wrong way of use cases for RSS/Atom feeds anyway?
edit @abyx has a really good example of a somewhat unexpected use of RSS as a way to get debug information from program transactions. I like the idea of this process. This is the type of use I was thinking of - besides publishing search results or last changes (like mediawiki)
A:
Some of my team's new systems generate RSS feeds that the developers syndicate.
These feeds push out events that interest the developers at certain times and the information is controlled using different loggers. Thus when debugging you can get the debugging feed, when you want to see completed transactions you go to the transactions feeds etc.
This allows all the developers to get the information they want in a comfortable way and without any need to mess a lot with configuration. If you don't want to get it there's no need to remove yourself from a mailing list or edit a configuration file - simply remove the feed and be done with it.
Very cool, and the idea was stolen from Pragmatic Project Automation.
A:
Most of the digital libraries use RSS/Atom to display their search results and data updates, according to the OAI-PMH protocol.
A:
With our internal TRAC server, I'm subscribed to the timeline view for each project that I work on. It's great for keeping track of checkins and bug tickets. This is pretty exclusive to a developer position though.
I also am subscribed to the recent changes for our installation of MediaWiki that we use for our intranet. That way it's easy to see if documents that I need have been changed, or if there's new policies etc.
Our website has a news page that I wrote an RSS feed for as well. While you mentioned that you weren't really interested in recent news, it is nice to keep up with our press releases.
A:
I have seen RSS used to syndicate gas prices from a service for a specific zip code.
A:
immobilienscout24
they use RSS feeds for updates on your search.
A:
there are many examples. Here are a couple.
SharePoint provides RSS feeds from its lists.
Many faceted navigation products allow you to get an RSS feed based on a selected filter. For example, you can navigate to view 24" LCD Monitors on newegg.com and then get an RSS feed of that view.
A:
Mantis bug tracker includes RSS feeds although I wish they were more configurable. Also we use MediaWiki for documentation which has all sorts of RSS Feeds including a per page watch, and recent changes.
A:
I just added RSS feeds to the ticketing system I use at work (TicketDesk) and that feature should be in the next release of the product.
It's nice because it basically provides me a custom search view of outstanding trouble tickets or work requests that comes to me, rather than my having to go to the application. It also allows users to get feeds of issues they may be interested in, without requiring them to get emails on each update.
I'm looking at implementing an RSS feed for calls for service that our agency takes, to provide the administrators a quick and easy way to see what has been going on.
A:
Atom feed documents and Atom entry documents are used as the representation format for RESTful web services that follow the Atom Publishing Protocol (AtomPub).
I personally have used syndication feeds to expose a sub-set of the Windows Event Log information so that I could subscribe and be notified of critical events on a server.
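For anyone wondering what that looks like in code, here is a minimal sketch of the idea using .NET 3.5's System.ServiceModel.Syndication; the log name, the error-only filter and the output file are illustrative assumptions, not the actual implementation.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.ServiceModel.Syndication;
using System.Xml;

class EventLogFeed
{
    static void Main()
    {
        // Assumption: expose the "Application" log and only Error entries.
        EventLog log = new EventLog("Application");
        List<SyndicationItem> items = new List<SyndicationItem>();

        foreach (EventLogEntry entry in log.Entries)
        {
            if (entry.EntryType != EventLogEntryType.Error)
                continue;

            items.Add(new SyndicationItem(
                entry.Source + ": " + entry.EntryType,   // item title
                entry.Message,                           // item content
                null,                                    // no alternate link for a log entry
                entry.Index.ToString(),                  // item id
                new DateTimeOffset(entry.TimeGenerated)));
        }

        SyndicationFeed feed = new SyndicationFeed(
            "Server event log", "Critical events on this server", null, items);

        // A real service would write this to the HTTP response instead of a file.
        using (XmlWriter writer = XmlWriter.Create("eventlog.rss"))
        {
            new Rss20FeedFormatter(feed).WriteTo(writer);
        }
    }
}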
| RSS/Atom for professional use | I wondered if anyone can give an example of a professional use of RSS/Atom feeds in a company product. Does anyone use feeds for other things than updating news?
For example, did you create a product that gives results as RSS/Atom feeds? Like price listings or current inventory, or maybe dates of training lessons?
Or am I thinking in a wrong way of use cases for RSS/Atom feeds anyway?
edit @abyx has a really good example of a somewhat unexpected use of RSS as a way to get debug information from program transactions. I like the idea of this process. This is the type of use I was thinking of - besides publishing search results or last changes (like mediawiki)
| [
"Some of my team's new systems generate RSS feeds that the developers syndicate.\nThese feeds push out events that interest the developers at certain times and the information is controlled using different loggers. Thus when debugging you can get the debugging feed, when you want to see completed transactions you go to the transactions feeds etc.\nThis allows all the developers to get the information they want in a comfortable way and without any need to mess a lot with configuration. If you don't want to get it there's no need to remove yourself from a mailing list or edit a configuration file - simply remove the feed and be done with it.\nVery cool, and the idea was stolen from Pragmatic Project Automation.\n",
"Most of the digital libraries uses RSS/ATOM to display their search/results, data update, according to the OAI-PMH protocol\n",
"With our internal TRAC server, I'm subscribed to the timeline view for each project that I work on. It's great for keeping track of checkins and bug tickets. This is pretty exclusive to a developer position though.\nI also am subscribed to the recent changes for our installation of MediaWiki that we use for our intranet. That way it's easy to see if documents that I need have been changed, or if there's new policies etc.\nOur website has a news page that I wrote an RSS feed for as well. While you mentioned that you weren't really interested in recent news, it is nice to keep up with our press releases.\n",
"I have seen RSS used to syndicate gas prices from a service for a specific zip code.\n",
"immobilienscout24\nthey use RSS feeds for updates on your search.\n",
"there are many examples. Here are a couple.\nSharePoint provides RSS feeds from its lists.\nMany faceted navigation products allow you to get an RSS feed based on a selected filter. For example, you can navigate to view 24\" LCD Monitors on newegg.com and then get an RSS feed of that view.\n",
"Mantis bug tracker includes RSS feeds although I wish they were more configurable. Also we use MediaWiki for documentation which has all sorts of RSS Feeds including a per page watch, and recent changes.\n",
"I just added RSS feeds to the ticketing system I use at work (TicketDesk) and that feature should be in the next release of the product. \nIt's nice because it basically provides me a custom search view of outstanding trouble tickets or work requests that comes to me rather then me having to go to the application. It also allows users to get feeds of issues they may be interested in, but not require them to get emails on each update.\nI'm looking at implementing an RSS feed for calls for service that our agency takes, to provide the administrators a quick and easy way to see what has been going on.\n",
"Atom feed documents and Atom entry documents are used as the representation format for RESTful web services that follow the Atom Publication Protocol (AtomPub).\nI personally have used syndication feeds to expose a sub-set of the Windows Event Log information so that I could subscribe and be notified of critical events on a server.\n"
] | [
4,
1,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"atom_feed",
"feed",
"rss",
"use_case"
] | stackoverflow_0000016164_atom_feed_feed_rss_use_case.txt |
Q:
Whats the best way to securely publish a site post build?
So, in your experience, what's the best way? Is there a secure way that's also scriptable/triggerable in a build automation tool?
Edit: I should mention this is windows/.net and I'll be deploying to iis6
A:
For some projects I use Capistrano to push out to live. It is built on top of ruby and makes deploy script writing super easy and uses ssh.
On other projects I have a tiny deploy app that uses bash to do an svn export to a temporary directory and then rsync it over to the live server. You can make rsync use ssh.
I greatly prefer the Capistrano method, even if your project isn't in ruby/rails.
A:
This seems like the sort of thing that could be done easily with SFTP. Take a look at PuTTY (psftp and pscp) or WinSCP for Windows, or rsync and OpenSSH for Unixes.
A:
Make a copy of your live site directory, use rsync to update that copy with your latest version, then rename the live and updated directories so that the updated version is now live.
In bash:
#!/bin/bash
set -e
cp -R /var/livesite /var/newversion
rsync user@devserver:/var/readytogolive /var/newversion
mv /var/livesite /var/oldlivesite
mv /var/newversion /var/livesite
Voilà!
Edit: @Ted Percival - That's a good idea. I didn't even know about "set -e". Updated script. Edit: updated again at Ted's suggestion (although I think it would still work if somehow the cp command failed, and if cp fails you probably have more serious problems.)
A:
@Neall, I'd add a set -e on the second line, because you don't want the live site being replaced if the rsync fails for any reason. set -e causes the script to exit if any of its commands fail.
Edit: The set -e should be the first thing in the script, right after #!/bin/bash.
A:
I'll second the recommendation for Capistrano, though if you're looking for a GUI-based solution you could try the Webistrano front end. Clean, ssh-based, sane deployment and rollback semantics and easy scripting and extensibility via ruby.
A:
You could always write a small client/server app that encrypts at the source, pushes the files, and then decrypts at the destination. That's a little bit of work, but probably a trivial amount. And it's scriptable as long as your automation tool supports executing something in the file system (which I think all do).
The only downside is that you may not be able to get meaningful error messages on failure in your integration environment without a bit more work on your part (though depending on your setup, this could be as simple as sending error messages to stdout).
A:
Hm, around here we use a staging "server" for testing purposes on the live environment (actually, it's an Apache virtual host on the production server) and Araxis Merge (a really smart line-by-line file comparison tool) to sync development and staging.
Once it's tested, just replace the files on the production webroot :)
/mp
A:
On a freelance job I did, we set up three separate environments.
A Dev server that ran continuous builds using CruiseControl. Any check-in would trigger a build. QA testing was done here.
A Test Server, that user acceptance testing was done on.
Production.
The workflow was as followed:
Developer checks in changes to SourceControl.
CruiseControl builds and deploys the build to Dev.
Dev is QA'ed
After passing QA, a robocopy script is run that deploys the Dev build to Test.
Test is UAT'ed
After Test passes, a robocopy script is run that deploys Test to PRD.
| Whats the best way to securely publish a site post build? | So, in your experience, whats the best way? Is there a secure way that's also scriptable/triggerable in a build automation tool?
Edit: I should mention this is windows/.net and I'll be deploying to iis6
| [
"For some projects I use Capistrano to push out to live. It is built on top of ruby and makes deploy script writing super easy and uses ssh. \nOn other projects I have a tiny deploy app that uses bash to do an svn export to a temporary directory and then rsync it over to the live server. You can make rsync use ssh.\nI greatly prefer the Capistrano method, even if your project isn't in ruby/rails.\n",
"This seems like the sort of thing that could be done easily with SFTP. Take a look at PuTTY (psftp and pscp) or WinSCP for Windows, or rsync and OpenSSH for Unixes.\n",
"Make a copy of your live site directory, use rsync to update that copy with your latest version, then rename the live and updated directories so that the updated version is now live.\nIn bash:\n#!/bin/bash\n\nset -e\ncp -R /var/livesite /var/newversion\nrsync user@devserver:/var/readytogolive /var/newversion\nmv /var/livesite /var/oldlivesite\nmv /var/newversion /var/livesite\n\nViola!\nEdit: @Ted Percival - That's a good idea. I didn't even know about \"set -e\". Updated script. Edit: updated again at Ted's suggestion (although I think it would still work if somehow the cp command failed, and if cp fails you probably have more serious problems.)\n",
"@Neall, I'd add a set -e on the second line, because you don't want the live site being replaced if the rsync fails for any reason. set -e causes the script to exit if any of its commands fail.\nEdit: The set -e should be the first thing in the script, right after #!/bin/bash.\n",
"I'll second the recommendation for Capistrano, though if you're looking for a GUI-based solution you could try the Webistrano front end. Clean, ssh-based, sane deployment and rollback semantics and easy scripting and extensibility via ruby.\n",
"You could always write a small client/server app that encrypts at the source, pushes the files, and then decrypts at the destination. That's a little bit of work, but probably a trivial amount. And it's scriptable as long as your automation tool supports executing something in the file system (which I think all do).\nThe only downside is that you may not be able to get meaningful error messages on failure in your integration environment without a bit more work on your part (though depending on your setup, this could be as simple as sending error messages to stdout).\n",
"hm, around here we use a staging \"server\" for testing purposes on the live environment (actually, its an apache virtual host on the production server) and araxis merge (a really smart line-by-line file comparison tool) to sync development and staging.\nonce its tested, just; replace the files on the production webroot :)\n/mp\n",
"On a freelance job I did, we set up three seperate enviroments.\n\nA Dev server, that ran continues builds using CruiseControl. Any check-in would trigger a build. QA Testing was done here.\nA Test Server, that user acceptance testing was done on.\nProduction.\n\nThe workflow was as followed:\n\nDeveloper checks in changes to SourceControl.\nCruiseControl builds and deploys the build to Dev.\nDev is QA'ed\nAfter passing QA, a robocopy script is ran that deploys the Dev build to Test.\nTest is UAT'ed\nAfter Test passes, a robocopy script is ran that deploys Test to PRD.\n\n"
] | [
6,
4,
1,
1,
1,
0,
0,
0
] | [] | [] | [
".net",
"deployment",
"iis_6",
"windows"
] | stackoverflow_0000018224_.net_deployment_iis_6_windows.txt |
Q:
IntelliSense for XElement objects with XML schema
In an article called "Increase LINQ Query Performance" in July's MSDN Magazine, the author states that using an Imports statement in VB that provides a path to the schema in the current project will turn IntelliSense on for XElement. In the code provided, he uses statements like xelement.@name to retrieve attribute values and so on.
I did not try this out myself in VB but I would like to use that in C#. This really looks like LINQ to XSD.
Is there any equivalent in C#? It seems that it is not possible to use a namespace inside C# code, there is no using equivalent to this Import statement.
A:
This post claims to have a link to a video that shows how to use VB9's XML Literals in C#. However, it only really discusses them and from what I can gather, you cannot use them in C#. http://blogs.msdn.com/bethmassi/archive/2008/07/03/teched-panel-vb-xml-literals-for-c-developers.aspx
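So if you stick with C#, you fall back to the explicit XElement/XAttribute API. A minimal sketch of what the VB xelement.@name style looks like in plain C# (the file, element and attribute names below are made up for illustration):
using System;
using System.Linq;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // Hypothetical document: <customers><customer name="Jane" id="1" /></customers>
        XDocument doc = XDocument.Load("customers.xml");

        var names =
            from c in doc.Descendants("customer")
            select (string)c.Attribute("name");   // VB's c.@name becomes an explicit Attribute() call

        foreach (string name in names)
            Console.WriteLine(name);
    }
}
You lose the schema-driven IntelliSense, but the queries themselves translate fairly directly.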
| IntelliSense for XElement objects with XML schema | Reading an article called "Increase LINQ Query Performance" in July's MSDN magazine, the author states that using an Imports in VB providing a path to schema in the current project will turn IntelliSense on for XElement. In the code provided, he uses statements like xelement.@name to retreive attributes values and so on.
I did not try this out myself in VB but I would like to use that in C#. This really looks like LINQ to XSD.
Is there any equivalent in C#? It seems that it is not possible to use a namespace inside C# code, there is no using equivalent to this Import statement.
| [
"This post claims to have a link to a video that shows how to use VB9's XML Literals in C#. However, it only really discusses them and from what I can gather, you cannot use them in C#. http://blogs.msdn.com/bethmassi/archive/2008/07/03/teched-panel-vb-xml-literals-for-c-developers.aspx\n"
] | [
4
] | [] | [] | [
"c#",
"linq",
"linq_to_xml",
"vb.net_to_c#",
"xsd"
] | stackoverflow_0000021912_c#_linq_linq_to_xml_vb.net_to_c#_xsd.txt |
Q:
How to enable multisampling for a wxWidgets OpenGL program?
Multisampling is a way of applying full screen anti-aliasing (FSAA) in 3D applications. I need to use multisampling in my OpenGL program, which is currently embedded in a wxWidgets GUI. Is there a way to do this? Please respond only if you know the detailed steps to achieve this.
I'm aware of enabling multisampling using WGL (Win32 extensions to OpenGL). However, since my OpenGL program isn't written in MFC (and I want the code to be multi-platform portable), that's not an option for me.
A:
I finally got Multisampling working with my wxWidgets OpenGL program. It's a bit messy right now, but here's how:
wxWidgets doesn't have Multisampling support in their stable releases right now (latest version at this time is 2.8.8). But, it's available as a patch and also through their daily snapshot. (The latter is heartening, since it means that the patch has been accepted and should appear in later stable releases if there are no issues.)
So, there are 2 options:
Download and build from their daily snapshot.
Get the patch for your working wxWidgets installation.
I found the 2nd option to be less cumbersome, since I want to disturb my working installation as little as possible. If you don't know how to patch on Windows, see this.
At the very least, for Windows, the patch will modify the following files:
$(WX_WIDGETS_ROOT)/include/wx/glcanvas.h
$(WX_WIDGETS_ROOT)/include/wx/msw/glcanvas.h
$(WX_WIDGETS_ROOT)/src/msw/glcanvas.cpp
After patching, recompile the wxWidgets libraries.
To enable multisampling in your wxWidgets OpenGL program, minor changes to the code are required.
An attribute list needs to be passed to the wxGLCanvas constructor:
int attribList[] = {WX_GL_RGBA,
WX_GL_DOUBLEBUFFER,
WX_GL_SAMPLE_BUFFERS, GL_TRUE, // Multi-sampling
WX_GL_DEPTH_SIZE, 16,
0, 0};
If you were already using an attribute list, then add the line with GL_SAMPLE_BUFFERS, GL_TRUE to it. Else, add this attribute list definition to your code.
Then modify your wxGLCanvas constructor to take this attribute list as a parameter:
myGLFrame::myGLFrame // Derived from wxGLCanvas
(
wxWindow *parent,
wxWindowID id,
const wxPoint& pos,
const wxSize& size,
long style,
const wxString& name
)
: wxGLCanvas(parent, (wxGLCanvas*) NULL, id, pos, size, style, name, attribList)
{
// ...
}
After the wxGLCanvas element is created, multisampling is turned on by default. To disable or enable it at will, use the related OpenGL calls:
glEnable(GL_MULTISAMPLE);
glDisable(GL_MULTISAMPLE);
Multisampling should now work with the wxWidgets OpenGL program. Hopefully, it should be supported in the stable release of wxWidgets soon, making this information irrelevant :-)
| How to enable multisampling for a wxWidgets OpenGL program? | Multisampling is a way of applying full screen anti-aliasing (FSAA) in 3D applications. I need to use multisampling in my OpenGL program, which is currently embedded in a wxWidgets GUI. Is there a way to do this? Please respond only if you know the detailed steps to achieve this.
I'm aware of enabling multisampling using WGL (Win32 extensions to OpenGL). However, since my OpenGL program isn't written in MFC (and I want the code to be multi-platform portable), that's not an option for me.
| [
"I finally got Multisampling working with my wxWidgets OpenGL program. It's a bit messy right now, but here's how:\nwxWidgets doesn't have Multisampling support in their stable releases right now (latest version at this time is 2.8.8). But, it's available as a patch and also through their daily snapshot. (The latter is heartening, since it means that the patch has been accepted and should appear in later stable releases if there are no issues.)\nSo, there are 2 options:\n\nDownload and build from their daily snapshot.\nGet the patch for your working wxWidgets installation.\n\nI found the 2nd option to be less cumbersome, since I don't want to disturb my working installation as much as possible. If you don't know how to patch on Windows, see this.\nAt the very least, for Windows, the patch will modify the following files:\n$(WX_WIDGETS_ROOT)/include/wx/glcanvas.h\n$(WX_WIDGETS_ROOT)/include/wx/msw/glcanvas.h\n$(WX_WIDGETS_ROOT)/src/msw/glcanvas.cpp\n\nAfter patching, recompile the wxWidgets libraries.\nTo enable multisampling in your wxWidgets OpenGL program, minor changes to the code are required.\nAn attribute list needs to be passed to the wxGLCanvas constructor:\nint attribList[] = {WX_GL_RGBA,\n WX_GL_DOUBLEBUFFER,\n WX_GL_SAMPLE_BUFFERS, GL_TRUE, // Multi-sampling\n WX_GL_DEPTH_SIZE, 16,\n 0, 0};\n\nIf you were already using an attribute list, then add the line with GL_SAMPLE_BUFFERS, GL_TRUE to it. Else, add this attribute list definition to your code.\nThen modify your wxGLCanvas constructor to take this attribute list as a parameter:\nmyGLFrame::myGLFrame // Derived from wxGLCanvas\n(\n wxWindow *parent,\n wxWindowID id,\n const wxPoint& pos,\n const wxSize& size,\n long style,\n const wxString& name\n)\n: wxGLCanvas(parent, (wxGLCanvas*) NULL, id, pos, size, style, name, attribList)\n{\n // ...\n}\n\nAfter the wxGLCanvas element is created, multisampling is turned on by default. To disable or enable it at will, use the related OpenGL calls:\nglEnable(GL_MULTISAMPLE);\nglDisable(GL_MULTISAMPLE);\n\nMultisampling should now work with the wxWidgets OpenGL program. Hopefully, it should be supported in the stable release of wxWidgets soon, making this information irrelevant :-)\n"
] | [
4
] | [] | [] | [
"multisampling",
"opengl",
"wxwidgets"
] | stackoverflow_0000021560_multisampling_opengl_wxwidgets.txt |
Q:
Process raw HTTP request content
I am doing an e-commerce solution in ASP.NET which uses PayPal's Website Payments Standard service. Together with that I use a service they offer (Payment Data Transfer) that sends you back order information after a user has completed a payment. The final thing I need to do is to parse the POST request from them and persist the info in it. The HTTP request's content is in this form :
SUCCESS
first_name=Jane+Doe
last_name=Smith
payment_status=Completed
payer_email=janedoesmith%40hotmail.com
payment_gross=3.99
mc_currency=USD
custom=For+the+purchase+of+the+rare+book+Green+Eggs+%26+Ham
Basically I want to parse this information and do something meaningful with it, like send it through e-mail or save it in the DB. My question is what the right approach is to parsing raw HTTP data in ASP.NET, not how the parsing itself is done.
A:
Something like this placed in your onload event.
if (Request.RequestType == "POST")
{
using (StreamReader sr = new StreamReader(Request.InputStream))
{
if (sr.ReadLine() == "SUCCESS")
{
/* Do your parsing here */
}
}
}
Mind you, they might want some special sort of response too (i.e., not your full webpage), so you might do something like this after you're done parsing.
Response.Clear();
Response.ContentType = "text/plain";
Response.Write("Thanks!");
Response.End();
Update: this should be done in a Generic Handler (.ashx) file in order to avoid a great deal of overhead from the page model. Check out this article for more information about .ashx files
A:
Use an IHttpHandler and avoid the Page model overhead (which you don't need), but use Request.Form to get the values so you don't have to parse name value pairs yourself. Just pretend you're in PHP or Classic ASP (or ASP.NET MVC, for that matter). ;)
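As a rough sketch, assuming the data really does arrive as a standard form-encoded POST (the handler name and what you do with the values are placeholders):
using System.Web;

public class PdtHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Form-encoded POST values are parsed for you.
        string payerEmail = context.Request.Form["payer_email"];
        string status = context.Request.Form["payment_status"];

        // ... persist to the database or send the e-mail here ...

        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
You would still expose it as an .ashx (or map it in web.config) so it gets invoked instead of a full page.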
A:
I'd strongly recommend saving each request to some file.
This way, you can always go back to the actual contents of it later. You can thank me later, when you find that hostile-endian, koi-8 encoded, [...], whatever it was that stumped your parser...
A:
Well if the incoming data is in a standard form encoded POST format, then using the Request.Form array will give you all the data in a nice to handle manner.
If not then I can't see any way other than using Request.InputStream.
A:
If I'm reading your question right, I think you're looking for the InputStream property on the Request object. Keep in mind that this is a firehose stream, so you can't reset it.
| Process raw HTTP request content | I am doing an e-commerce solution in ASP.NET which uses PayPal's Website Payments Standard service. Together with that I use a service they offer (Payment Data Transfer) that sends you back order information after a user has completed a payment. The final thing I need to do is to parse the POST request from them and persist the info in it. The HTTP request's content is in this form :
SUCCESS
first_name=Jane+Doe
last_name=Smith
payment_status=Completed
payer_email=janedoesmith%40hotmail.com
payment_gross=3.99
mc_currency=USD
custom=For+the+purchase+of+the+rare+book+Green+Eggs+%26+Ham
Basically I want to parse this information and do something meaningful, like send it through e-mail or save it in DB. My question is what is the right approach to do parsing raw HTTP data in ASP.NET, not how the parsing itself is done.
| [
"Something like this placed in your onload event.\nif (Request.RequestType == \"POST\")\n{\n using (StreamReader sr = new StreamReader(Request.InputStream))\n {\n if (sr.ReadLine() == \"SUCCESS\")\n {\n /* Do your parsing here */\n }\n }\n}\n\nMind you that they might want some special sort of response to (ie; not your full webpage), so you might do something like this after you're done parsing.\nResponse.Clear();\nResponse.ContentType = \"text/plain\";\nResponse.Write(\"Thanks!\");\nResponse.End();\n\nUpdate: this should be done in a Generic Handler (.ashx) file in order to avoid a great deal of overhead from the page model. Check out this article for more information about .ashx files\n",
"Use an IHttpHandler and avoid the Page model overhead (which you don't need), but use Request.Form to get the values so you don't have to parse name value pairs yourself. Just pretend you're in PHP or Classic ASP (or ASP.NET MVC, for that matter). ;)\n",
"I'd strongly recommend saving each request to some file.\nThis way, you can always go back to the actual contents of it later. You can thank me later, when you find that hostile-endian, koi-8 encoded, [...], whatever it was that stumped your parser...\n",
"Well if the incoming data is in a standard form encoded POST format, then using the Request.Form array will give you all the data in a nice to handle manner.\nIf not then I can't see any way other than using Request.InputStream.\n",
"If I'm reading your question right, I think you're looking for the InputStream property on the Request object. Keep in mind that this is a firehose stream, so you can't reset it.\n"
] | [
14,
3,
2,
1,
0
] | [] | [] | [
"asp.net",
"e_commerce",
"http"
] | stackoverflow_0000020245_asp.net_e_commerce_http.txt |
Q:
Date/time conversion using time.mktime seems wrong
>>> import time
>>> time.strptime("01-31-2009", "%m-%d-%Y")
(2009, 1, 31, 0, 0, 0, 5, 31, -1)
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233378000.0
>>> 60*60*24 # seconds in a day
86400
>>> 1233378000.0 / 86400
14275.208333333334
time.mktime should return the number of seconds since the epoch. Since I'm giving it a time at midnight and the epoch is at midnight, shouldn't the result be evenly divisible by the number of seconds in a day?
A:
Short answer: Because of timezones.
The Epoch is in UTC.
For example, I'm on IST (Irish Standard Time) or UTC+1. time.mktime() is relative to my timezone, so on my system this refers to
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233360000.0
Because you got the result 1233378000, that would suggest that you're 5 hours behind me
>>> (1233378000 - 1233360000) / (60*60)
5
Have a look at the time.gmtime() function which works off UTC.
A:
mktime(...)
mktime(tuple) -> floating point number
Convert a time tuple in local time to seconds since the Epoch.
local time... fancy that.
The time tuple:
The other representation is a tuple of 9 integers giving local time.
The tuple items are:
year (four digits, e.g. 1998)
month (1-12)
day (1-31)
hours (0-23)
minutes (0-59)
seconds (0-59)
weekday (0-6, Monday is 0)
Julian day (day in the year, 1-366)
DST (Daylight Savings Time) flag (-1, 0 or 1)
If the DST flag is 0, the time is given in the regular time zone;
if it is 1, the time is given in the DST time zone;
if it is -1, mktime() should guess based on the date and time.
Incidentally, we seem to be 6 hours apart:
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233356400.0
>>> (1233378000.0 - 1233356400)/(60*60)
6.0
A:
Phil's answer really solved it, but I'll elaborate a little more. Since the epoch is in UTC, if I want to compare other times to the epoch, I need to interpret them as UTC as well.
>>> calendar.timegm((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233360000
>>> 1233360000 / (60*60*24)
14275
By converting the time tuple to a timestamp, treating it as UTC time, I get a number which is evenly divisible by the number of seconds in a day.
I can use this to convert a date to a days-from-the-epoch representation which is what I'm ultimately after.
A:
Interesting. I don't know, but I did try this:
>>> now = time.mktime((2008, 8, 22, 11 ,17, -1, -1, -1, -1))
>>> tomorrow = time.mktime((2008, 8, 23, 11 ,17, -1, -1, -1, -1))
>>> tomorrow - now
86400.0
which is what you expected. My guess? Maybe some time correction was done since the epoch. This could be only a few seconds, something like a leap year. I think I heard something like this before, but can't remember exactly how and when it is done...
| Date/time conversion using time.mktime seems wrong | >>> import time
>>> time.strptime("01-31-2009", "%m-%d-%Y")
(2009, 1, 31, 0, 0, 0, 5, 31, -1)
>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
1233378000.0
>>> 60*60*24 # seconds in a day
86400
>>> 1233378000.0 / 86400
14275.208333333334
time.mktime should return the number of seconds since the epoch. Since I'm giving it a time at midnight and the epoch is at midnight, shouldn't the result be evenly divisible by the number of seconds in a day?
| [
"Short answer: Because of timezones.\nThe Epoch is in UTC.\nFor example, I'm on IST (Irish Standard Time) or UTC+1. time.mktime() is relative to my timezone, so on my system this refers to\n>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233360000.0\n\nBecause you got the result 1233378000, that would suggest that you're 5 hours behind me\n>>> (1233378000 - 1233360000) / (60*60) \n5\n\nHave a look at the time.gmtime() function which works off UTC.\n",
"mktime(...)\n mktime(tuple) -> floating point number\n\n Convert a time tuple in local time to seconds since the Epoch.\n\nlocal time... fancy that.\nThe time tuple:\nThe other representation is a tuple of 9 integers giving local time.\nThe tuple items are:\n year (four digits, e.g. 1998)\n month (1-12)\n day (1-31)\n hours (0-23)\n minutes (0-59)\n seconds (0-59)\n weekday (0-6, Monday is 0)\n Julian day (day in the year, 1-366)\n DST (Daylight Savings Time) flag (-1, 0 or 1)\nIf the DST flag is 0, the time is given in the regular time zone;\nif it is 1, the time is given in the DST time zone;\nif it is -1, mktime() should guess based on the date and time.\n\nIncidentally, we seem to be 6 hours apart:\n>>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233356400.0\n>>> (1233378000.0 - 1233356400)/(60*60)\n6.0\n\n",
"Phil's answer really solved it, but I'll elaborate a little more. Since the epoch is in UTC, if I want to compare other times to the epoch, I need to interpret them as UTC as well.\n>>> calendar.timegm((2009, 1, 31, 0, 0, 0, 5, 31, -1))\n1233360000\n>>> 1233360000 / (60*60*24)\n14275\n\nBy converting the time tuple to a timestamp treating is as UTC time, I get a number which is evenly divisible by the number of seconds in a day.\nI can use this to convert a date to a days-from-the-epoch representation which is what I'm ultimately after.\n",
"Interesting. I don't know, but I did try this:\n>>> now = time.mktime((2008, 8, 22, 11 ,17, -1, -1, -1, -1))\n>>> tomorrow = time.mktime((2008, 8, 23, 11 ,17, -1, -1, -1, -1))\n>>> tomorrow - now\n86400.0\n\nwhich is what you expected. My guess? Maybe some time correction was done since the epoch. This could be only a few seconds, something like a leap year. I think I heard something like this before, but can't remember exactly how and when it is done...\n"
] | [
7,
3,
2,
0
] | [] | [] | [
"datetime",
"python"
] | stackoverflow_0000021961_datetime_python.txt |
Q:
Loading assemblies and its dependencies
My application dynamically loads assemblies at runtime from specific subfolders. These assemblies are compiled with dependencies on other assemblies. The runtime tries to load these from the application directory, but I want to put them into the modules directory.
Is there a way to tell the runtime that the DLLs are in a separate subfolder?
A:
One nice approach I've used lately is to add an event handler for the AppDomain's AssemblyResolve event.
AppDomain currentDomain = AppDomain.CurrentDomain;
currentDomain.AssemblyResolve += new ResolveEventHandler(MyResolveEventHandler);
Then in the event handler method you can load the assembly that was attempted to be resolved using one of the Assembly.Load, Assembly.LoadFrom overrides and return it from the method.
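As a rough illustration, the handler could look like this (the "modules" subfolder is an assumption; point it at wherever your DLLs actually live):
// Requires: using System; using System.IO; using System.Reflection;
static Assembly MyResolveEventHandler(object sender, ResolveEventArgs args)
{
    // args.Name is the full assembly name; we only need the simple name for the file.
    string fileName = new AssemblyName(args.Name).Name + ".dll";
    string candidate = Path.Combine(
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "modules"), fileName);

    // Returning null lets the normal (failing) resolution continue.
    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
}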
EDIT:
Based on your additional information I think using the technique above, specifically resolving the references to an assembly yourself is the only real approach that is going to work without restructuring your app. What it gives you is that the location of each and every assembly that the CLR fails to resolve can be determined and loaded by your code at runtime... I've used this in similar situations for both pluggable architectures and for an assembly reference integrity scanning tool.
A:
You can use the <probing> element in a manifest file to tell the Runtime to look in different directories for its assembly files.
http://msdn.microsoft.com/en-us/library/823z9h8w.aspx
e.g.:
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<probing privatePath="bin;bin2\subbin;bin3"/>
</assemblyBinding>
</runtime>
</configuration>
A:
You can use the <codeBase> element found in the application configuration file. More information on "Locating the Assembly through Codebases or Probing".
Well, the loaded assembly doesn't have
an application configuration file.
Well if you know the specific folders at runtime you can use Assembly.LoadFrom.
| Loading assemblies and its dependencies | My application dynamically loads assemblies at runtime from specific subfolders. These assemblies are compiled with dependencies to other assemblies. The runtime trys to load these from the application directory. But I want to put them into the modules directory.
Is there a way to tell the runtime that the dlls are in a seperate subfolder?
| [
"One nice approach I've used lately is to add an event handler for the AppDomain's AssemblyResolve event.\nAppDomain currentDomain = AppDomain.CurrentDomain;\ncurrentDomain.AssemblyResolve += new ResolveEventHandler(MyResolveEventHandler);\n\nThen in the event handler method you can load the assembly that was attempted to be resolved using one of the Assembly.Load, Assembly.LoadFrom overrides and return it from the method.\nEDIT:\nBased on your additional information I think using the technique above, specifically resolving the references to an assembly yourself is the only real approach that is going to work without restructuring your app. What it gives you is that the location of each and every assembly that the CLR fails to resolve can be determined and loaded by your code at runtime... I've used this in similar situations for both pluggable architectures and for an assembly reference integrity scanning tool.\n",
"You can use the <probing> element in a manifest file to tell the Runtime to look in different directories for its assembly files.\nhttp://msdn.microsoft.com/en-us/library/823z9h8w.aspx\ne.g.:\n<configuration>\n <runtime>\n <assemblyBinding xmlns=\"urn:schemas-microsoft-com:asm.v1\">\n <probing privatePath=\"bin;bin2\\subbin;bin3\"/>\n </assemblyBinding>\n </runtime>\n</configuration>\n\n",
"You can use the <codeBase> element found in the application configuration file. More information on \"Locating the Assembly through Codebases or Probing\".\n\nWell, the loaded assembly doesn't have\n an application configuration file.\n\nWell if you know the specific folders at runtime you can use Assembly.LoadFrom. \n"
] | [
18,
4,
1
] | [] | [] | [
".net",
"c#"
] | stackoverflow_0000022012_.net_c#.txt |
Q:
Table cells larger than they are meant to be
I've created a map system for a game that runs on the principle of drawing the picture of the map from tiles. There are many reasons for this which I won't go into here but if you really want to know then I'm sure you can find out how to contact me ;)
I have made the latest version live so you can see exactly where the problem lies and the source. The issue is the line between the top 2 tiles and the bottom 2 tiles, I can't figure out why it's gone like this and any help would be appreciated.
In the source is a marker called "stackoverflow", if you search for "stackoverflow" when viewing source then it should take you to the table in question.
I have also uploaded an image of the issue.
A:
I think you need to use display: block on your images. When images are inline there's a little extra space for the line spacing.
A:
You could also adjust the line height of the td element:
td {
line-height: 0
}
A:
I know this might sound bad, but you need to ensure there is no whitespace between the end of your <img> tag and the start of the closing </td> tag.
i.e. The following will present the problem:
<td>
<img src="image.jpg"/>
</td>
And this will not:
<td><img src="image.jpg"/></td>
Hope that helps.
Edit: OK, that wasn't the solution at all. doh!
A:
I haven't looked up the whole thing, but the problem lies somewhere in the style sheets.
If you copy out only the table part of it, it displays the map correctly.
If you remove the final </span> tag from this part, it also works (however, the page layout gets mixed up):
<div class="inner"><span class="corners-top"><span></span></span>
<div class="content" style="font-size: 1.1em;">
<!-- Stackoverflow findy thingy -->
<table border="0" cellspacing="0" cellpadding="0">
So either start over with the CSS, or remove the rules one by one to see which one is causing the problem.
| Table cells larger than they are meant to be | I've created a map system for a game that runs on the principle of drawing the picture of the map from tiles. There are many reasons for this which I won't go into here but if you really want to know then I'm sure you can find out how to contact me ;)
I have made the latest version live so you can see exactly where the problem lies and the source. The issue is the line between the top 2 tiles and the bottom 2 tiles, I can't figure out why it's gone like this and any help would be appreciated.
In the source is a marker called "stackoverflow", if you search for "stackoverflow" when viewing source then it should take you to the table in question.
I have also uploaded an image of the issue.
| [
"I think you need to use display: block on your images. When images are inline there's a little extra space for the line spacing.\n",
"You could also adjust the line height of the td element:\ntd {\n line-height: 0\n}\n\n",
"I know this might sound bad, but you need to ensure there is no whitespace between then end of you <img> tag and the start of the end </td> tag.\ni.e. The following will present the problem:\n<td>\n <img src=\"image.jpg\"/>\n</td>\n\nAnd this will not:\n<td><img src=\"image.jpg\"/></td>\n\nHope that helps.\nEdit: OK, that wasn't the solution at all. doh!\n",
"I haven't looked up the whole thing, but the problem lies somewhere in the style sheets.\nIf you copy out only the table part of it, it is displaying the map correctly.\nIf you remove the final </span> tag from this part, it is also working (however the page gets mixed):\n<div class=\"inner\"><span class=\"corners-top\"><span></span></span>\n<div class=\"content\" style=\"font-size: 1.1em;\">\n\n<!-- Stackoverflow findy thingy -->\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n\nSo either try from the beginning with the css or try to remove one-by-one them, to see, which is causing the problem.\n"
] | [
35,
5,
3,
1
] | [] | [] | [
"css",
"html"
] | stackoverflow_0000022000_css_html.txt |
Q:
How to use BITS to download from a UNC path?
What is the best way to distribute files to users in remote offices, using BITS with a UNC path or BITS with HTTP? I have a VB.NET project which currently downloads from a HTTP path, but there is added complexity involved (e.g. having a web server).
Or is there a better way to do this? Low bandwidth usage is more important than speed of syncing.
A:
Maybe consider not using BITS at all and use the old favourite Robocopy. Robocopy is a standalone command-line executable which is part of the Windows Server 2003 ResKit tools and now standard on Vista/2008. Robocopy has the /IPG:ms (Inter-Packet Gap) switch to "dribble" the download, which is designed specifically to not saturate slow links.
| How to use BITS to download from a UNC path? | What is the best way to distribute files to users in remote offices, using BITS with a UNC path or BITS with HTTP? I have a VB.NET project which currently downloads from a HTTP path, but there is added complexity involved (e.g. having a web server).
Or is there a better way to do this? Low bandwith usage is more important than speed of synching.
| [
"Maybe consider not using BITS at all and use the old favourite Robocopy. Robocopy is a standalone command-line executable which is part of the Windows Server 2003 ResKit tools and now standard on Vista/2008. Robocopy has the /IPG:ms (Inter-Packet Gap) switch to \"dribble\" the download, which is designed specifically to not saturate slow links. \n"
] | [
1
] | [] | [] | [
"bit",
"vb.net"
] | stackoverflow_0000021753_bit_vb.net.txt |
Q:
MSVC6: Breakpoint stops program
Using Microsoft Visual Studio 98, Microsoft Visual C++ 6.0 SP6
When running under the debugger, there's only one problem. If I pause the program and resume, everything's fine.
The problem? When I hit a breakpoint, my program stops. But not in a good way; execution halts, I'm thrown out of debug mode into edit mode. All's fine until a breakpoint is hit. And I know it's hitting the breakpoint - I see a flash of the little yellow arrow pointing at the right line of code, local variables in the inspect window and the call stack in that window. And then I'm staring at the editor.
This happens in all projects.
I've uninstalled and re-installed MSVC6. It didn't help.
I'm about to start over on a new PC; before I go that far, anyone know what I've done to this one?
Note: MSVC6 is not my choice, but there are reasons. It's the tool I work with. And, we get to target NT4, so given 2008 can't target NT4 and 2005 has issues with MFC and NT4, MSVC6 it is.
A:
Stop beating on VC6. It's old. The STL was updated in 1996 from HP code written in 1994. C++ was ratified in 1998.
What is the code doing when you hit the break? Can you reduce the situation to a simple test case? When I try that I usually find the cause. If you can reduce it and the problem still happens, then I'll take a look at it for you. I too am unfortunate enough to use VC6 for my day-to-day work.
Visual C++ Express 2008 can't be used in certain situations.
A:
The first thing I would check is whether this project does the same thing on other machines. If it doesn't, it could be that your box is heading south; if it does, it's the VC6 project itself.
Typically I get goofiness with the debugger when my program is doing something with the hardware, especially the video.
I would recommend turning off parts of your program until you figure out what part is causing this. If your program is small and not doing much it might be that the project is corrupted and needs to get rebuilt. Make a new project from scratch and put your files and settings back in by hand.
A:
Is it specific to the app you're working on or do all breakpoints in any app break the debugger?
Is anything different if you attach the debugger manually after launching the app normally?
A:
Is the device running out of memory and therefore gives up the ghost when it requires the additional memory to stop at the breakpoint?
A:
Is the device running out of memory and therefore gives up the ghost when it requires the additional memory to stop at the breakpoint?
No, there's over a gig of RAM to go, and even more of virtual memory.
| MSVC6: Breakpoint stops program | Using Microsoft Visual Studio 98, Microsoft Visual C++ 6.0 SP6
When running under the debugger, there's only one problem. If I pause the program and resume, everything's fine.
The problem? When I hit a breakpoint, my program stops. But not in a good way; execution halts, I'm thrown out of debug mode into edit mode. All's fine until a breakpoint is hit. And I know it's hitting the breakpoint - I see a flash of the little yellow arrow pointing at the right line of code, local variables in the inspect window and the call stack in that window. And then I'm staring at the editor.
This happens in all projects.
I've uninstalled and re-installed MSVC6. It didn't help.
I'm about to start over on a new PC; before I go that far, anyone know what I've done to this one?
Note: MSVC6 is not my choice, but there are reasons. It's the tool I work with. And, we get to target NT4, so given 2008 can't target NT4 and 2005 has issues with MFC and NT4, MSVC6 it is.
| [
"Stop beating on VC6. It's old. The STL was updated in 1996 from HP code written in 1994. C++ was ratified in 1998.\nWhat is the code doing when you are breaking? Can you reduce the situation into a simple test. When I try that I usually find the cause. If you can do that so it still happens then I'll take a look at it for you. I too am unfortunate enough to use VC6 for my day to day work.\nVisual C++ Express 2008 can't be used in certain situations.\n",
"The first thing I would check is if this project does the same thing on other machines. If not, it could be your box is heading south. If not it's the VC6 project itself.\nTypically I get goofiness with the debugger when my program is doing something with the hardware, especially the video. \nI would recommend turning off parts of your program until you figure out what part is causing this. If your program is small and not doing much it might be that the project is corrupted and needs to get rebuilt. Make a new project from scratch and put your files and settings back in by hand.\n",
"Is it specific to the app you're working on or do all breakpoints in any app break the debugger?\nIs anything different if you attach the debugger manually after launching the app normally?\n",
"Is the device running out of memory and therefore gives up the ghost when it requires the additional memory to stop at the breakpoint?\n",
"\nIs the device running out of memory and therefore gives up the ghost when it requires the additional memory to stop at the breakpoint?\n\nNo, there's over a gig of RAM to go, and even more of virtual memory.\n"
] | [
1,
1,
0,
0,
0
] | [
"I haven't used MSVC6 in years, but I remember the debugger basically being a flaky piece of crap. Things like this would regularly happen for no apparent reason because it just didn't like your code that day.\nIn addition to the debugger being a flaky piece of crap, the rest of it was too.\nIt's STL implementation isn't threadsafe, it doesn't support very much of the C++ standard, and the IDE is sorely lacking in niceties. \nI'm pretty sure you also just simply can't use any versions of the Platform SDK from the last 5 years because it's too old to grok them. Not a good thing.\nYou'd be mad to keep using it when there is Visual C++ Express 2008 which you can download for free.\n"
] | [
-1
] | [
"c++",
"debugging",
"visual_c++",
"visual_c++_6",
"windows"
] | stackoverflow_0000008443_c++_debugging_visual_c++_visual_c++_6_windows.txt |
Q:
Difference between `/dev/ttyS0` and `/dev/ttys0`?
In Linux, what is the difference between /dev/ttyS0 and /dev/ttys0?
I know that the first is a serial port, but what about the second, with the small s?
A:
see this
For a pseudo terminal pair such as
ptyp3 and ttyp3, the pty... is the
master or controlling terminal and the
tty... is the slave. There are only 16
ttyp's: ttyp0-ttypf (f is a
hexadecimal digit). To get more
pairs, the 3 letters q, r, s may be
used instead of p. For example the
pair ttys8, ptys8 is a pseudo terminal
pair. The master and slave are really
the same "port" but the slave is used
by the application program and the
master is used by a network program
(or the like) which supplies (and
gets) data to/from the slave port.
A:
And this: http://lists.opensuse.org/archive/opensuse/2003-12/msg02404.html
A:
In the Linux devices.txt file in the kernel docs it says:
3 char Pseudo-TTY slaves
0 = /dev/ttyp0 First PTY slave
1 = /dev/ttyp1 Second PTY slave
...
255 = /dev/ttyef 256th PTY slave
These are the old-style (BSD) PTY devices; Unix98
devices are on major 136 and above.
and goes on to say
4 char TTY devices
0 = /dev/tty0 Current virtual console
1 = /dev/tty1 First virtual console
...
63 = /dev/tty63 63rd virtual console
64 = /dev/ttyS0 First UART serial port
...
255 = /dev/ttyS191 192nd UART serial port
UART serial ports refer to 8250/16450/16550 series devices.
Older versions of the Linux kernel used this major
number for BSD PTY devices. As of Linux 2.1.115, this
is no longer supported. Use major numbers 2 and 3.
I don't know how much this helps you, but should get you started in the right direction.
| Difference between `/dev/ttyS0` and `/dev/ttys0`? | In Linux, what is the difference between /dev/ttyS0 and /dev/ttys0?
I know that the first is a serial port, but what about the second, with the small s?
| [
"see this\n\nFor a pseudo terminal pair such as\n ptyp3 and ttyp3, the pty... is the\n master or controlling terminal and the\n tty... is the slave. There are only 16\n ttyp's: ttyp0-ttypf (f is a\n hexadecimal digit). To get more\n pairs, the 3 letters q, r, s may be\n used instead of p. For example the\n pair ttys8, ptys8 is a pseudo terminal\n pair. The master and slave are really\n the same \"port\" but the slave is used\n by the application program and the\n master is used by a network program\n (or the like) which supplies (and\n gets) data to/from the slave port.\n\n",
"And this: http://lists.opensuse.org/archive/opensuse/2003-12/msg02404.html\n",
"In the Linux devices.txt file in the kernel docs it says:\n3 char Pseudo-TTY slaves\n 0 = /dev/ttyp0 First PTY slave\n 1 = /dev/ttyp1 Second PTY slave\n ...\n 255 = /dev/ttyef 256th PTY slave\n\n These are the old-style (BSD) PTY devices; Unix98\n devices are on major 136 and above.\n\nand goes on to say\n4 char TTY devices\n 0 = /dev/tty0 Current virtual console\n\n 1 = /dev/tty1 First virtual console\n ...\n 63 = /dev/tty63 63rd virtual console\n 64 = /dev/ttyS0 First UART serial port\n ...\n 255 = /dev/ttyS191 192nd UART serial port\n\n UART serial ports refer to 8250/16450/16550 series devices.\n\n Older versions of the Linux kernel used this major\n number for BSD PTY devices. As of Linux 2.1.115, this\n is no longer supported. Use major numbers 2 and 3.\n\nI don't know how much this helps you, but should get you started in the right direction.\n"
] | [
1,
0,
0
] | [] | [] | [
"linux",
"serial_port"
] | stackoverflow_0000022106_linux_serial_port.txt |
Q:
Switching to ORMs
I'm toying with the idea of phasing an ORM into an application I support. The app is not very well structured and has no unit tests, so any change will be risky. I'm obviously concerned about whether I've got a good enough reason to change. The idea is that there will be less boilerplate code for data access and therefore greater productivity.
Does this ring true with your experiences?
Is it possible or even a good idea to phase it in?
What are the downsides of an ORM?
A:
I would strongly recommend getting a copy of Michael Feathers' book Working Effectively with Legacy Code (by "legacy code" Feathers means any system that isn't adequately covered by unit tests). It is full of good ideas which should help you with your refactoring and phasing in of best practices.
Sure, you could phase in the introduction of an ORM, initially using it for accessing some subset of your domain model. And yes, I have found that use of an ORM speeds up development time - this is one of the key benefits and I certainly don't miss the days when I used to laboriously hand-craft data access layers.
Downsides of ORM - from experience, there is inevitably a bit of a learning curve in getting to grips with the concepts, configuration and idiosyncracies of the chosen ORM solution.
Edit: corrected author's name
A:
The "Robert C Martin" book, which was actually written by Michael Feathers ("Uncle Bob" is, it seems, a brand name these days!) is a must.
It's near-impossible - not to mention insanely time-consuming - to put unit tests into an application not developed with them. The code just won't be amenable.
But that's not a problem. Refactoring is about changing design without changing function (I hope I haven't corrupted the meaning too badly there) so you can work in a much broader fashion.
Start out with big chunks. Set up a repeatable execution, and capture what happens as the expected result for subsequent executions. Now you have your app, or part of it at least, under test. Not a very good or comprehensive test, sure, but it's a start and things can only get better from there.
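In C#/NUnit terms such a "capture the current behaviour" test can be as blunt as the sketch below; the class, method and captured constant are placeholders, and the expected value is simply whatever the legacy code produces today:
using NUnit.Framework;

[TestFixture]
public class ReportGeneratorCharacterizationTests
{
    [Test]
    public void MonthlyReport_ForKnownInput_MatchesCapturedOutput()
    {
        // Hypothetical legacy entry point, fed a fixed, repeatable input.
        ReportGenerator generator = new ReportGenerator();
        string output = generator.BuildMonthlyReport(2008, 8);

        // CapturedOutput.MonthlyReport200808 was recorded from a run of the
        // current code, so this test pins today's behaviour before refactoring.
        Assert.AreEqual(CapturedOutput.MonthlyReport200808, output);
    }
}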
Now you can start to refactor. You want to start extracting your data access code so that it can be replaced with ORM functionality without disturbing too much. Test often: with legacy apps you'll be surprised what breaks; cohesion and coupling are seldom what they might be.
I'd also consider looking at Martin Fowler's Refactoring, which is, obviously enough, the definitive work on the process.
A:
I work on a large ASP.net application where we recently started to use NHibernate. We moved a large number of domain objects that we had been persisting manually to Sql Server over to NHibernate instead. It simplified things quite a bit and made it much easier to change things over time. We're glad we made the changes and are using NHibernate where appropriate for a lot of our new work.
A:
The rule for refactoring is: have unit tests.
So maybe you should first put some unit tests in place, at least for the core/major things.
The ORM should be designed for decreasing boilerplate code. The time/trouble vs. ROI to be enterprisy is up to you to estimate :)
A:
I heard that TypeMock is often being used to refactor legacy code.
A:
I seriously think introducing an ORM into a legacy application is asking for trouble (and might be the same amount of trouble as a complete rewrite).
Other than that, ORM is a great way to go, and should definitely by considered.
A:
Unless your code is already architected to allow for "hot swapping" of your model layer backend, changing it in any way will always be extremely risky.
Trying to build a safety net of unit tests on poorly architected code isn't going to guarantee success, only make you feel safer about changing it.
So, unless you have a strong business case for taking on the risks involved it's probably best to leave well enough alone.
| Switching to ORMs | I'm toying with the idea of phasing in an ORM into an application I support. The app is not very structured with no unit tests. So any change will be risky. I'm obviously concerned that I've got a good enough reason to change. The idea is that there will be less boiler plate code for data access and there for greater productivity.
Do this ring true with your experiences?
Is it possible or even a good idea to phase it in?
What are the downsides of an ORM?
| [
"I would strongly recommend getting a copy of Michael Feather's book Working Effectively With Legacy Code (by \"Legacy Code\" Feathers means any system that isn't adequately covered by unit tests). It is full of good ideas which should help you with your refactoring and phasing in of best practices.\nSure, you could phase in the introduction of an ORM, initially using it for accessing some subset of your domain model. And yes, I have found that use of an ORM speeds up development time - this is one of the key benefits and I certainly don't miss the days when I used to laboriously hand-craft data access layers.\nDownsides of ORM - from experience, there is inevitably a bit of a learning curve in getting to grips with the concepts, configuration and idiosyncracies of the chosen ORM solution.\nEdit: corrected author's name\n",
"The \"Robert C Martin\" book, which was actually written by Michael Feathers (\"Uncle Bob\" is, it seems, a brand name these days!) is a must.\nIt's near-impossible - not to mention insanely time-consuming - to put unit tests into an application not developed with them. The code just won't be amenable.\nBut that's not a problem. Refactoring is about changing design without changing function (I hope I haven't corrupted the meaning too badly there) so you can work in a much broader fashion. \nStart out with big chunks. Set up a repeatable execution, and capture what happens as the expected result for subsequent executions. Now you have your app, or part of it at least, under test. Not a very good or comprehensive test, sure, but it's a start and things can only get better from there.\nNow you can start to refactor. You want to start extracting your data access code so that it can be replaced with ORM functionality without disturbing too much. Test often: with legacy apps you'll be surprised what breaks; cohesion and coupling are seldom what they might be.\nI'd also consider looking at Martin Fowler's Refactoring, which is, obviously enough, the definitive work on the process.\n",
"I work on a large ASP.net application where we recently started to use NHibernate. We moved a large number of domain objects that we had been persisting manually to Sql Server over to NHibernate instead. It simplified things quite a bit and made it much easier to change things over time. We're glad we made the changes and are using NHibernate where appropriate for a lot of our new work. \n",
"The rule for refactoring is. Do unit tests.\nSo maybe first you should place some unittests at least for the core/major things.\nThe ORM should be designed for decreasing boilerplate code. The time/trouble vs. ROI to be enterprisy is up to you to estimate :)\n",
"I heard that TypeMock is often being used to refactor legacy code.\n",
"I seriously think introducing ORM into a legacy application is calling for trouble (and might be the same amount of trouble as a complete rewrite).\nOther than that, ORM is a great way to go, and should definitely by considered.\n",
"Unless your code is already architectured to allow for \"hot swapping\" of your model layer backend, changing it in any way will always be extremely risky. \nTrying to build a safety net of unit tests on poorly architected code isn't going to guarantee success, only make you feel safer about changing it.\nSo, unless you have a strong business case for taking on the risks involved it's probably best to leave well enough alone.\n"
] | [
3,
2,
1,
0,
0,
0,
0
] | [] | [] | [
"language_agnostic",
"orm"
] | stackoverflow_0000022011_language_agnostic_orm.txt |
Q:
Authenticate on an ASP.Net Forms Authorization website from a console app
I'm trying to build a C# console application to automate grabbing certain files from our website, mostly to save myself clicks and - frankly - just to have done it. But I've hit a snag for which I've been unable to find a working solution.
The website to which I'm trying to connect uses ASP.Net forms authorization, and I cannot figure out how to authenticate myself with it. This application is a complete hack so I can hard code my username and password or any other needed auth info, and the solution itself doesn't need to be something that is viable enough to release to general users. In other words, if the only possible solution is a hack, I'm fine with that.
Basically, I'm trying to use HttpWebRequest to pull the site that has the list of files, iterating through that list and then downloading what I need. So the actual work on the site is fairly trivial once I can get the website to consider me authorized.
A:
This page should get you started. You need to first make a request to the page, and then saving the cookie to a container that you include in all later request. That should keep you logged in, and able to retrieve the files.
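A rough C# sketch of that approach (hedged: the URLs and form field names below are placeholders and must match the site's actual login form; a real ASP.NET login page usually also requires the __VIEWSTATE and __EVENTVALIDATION values captured from a prior GET). It assumes using System.Net, System.IO and System.Text:
CookieContainer cookies = new CookieContainer();

// 1. POST the credentials to the login page; the auth cookie ends up in 'cookies'
HttpWebRequest login = (HttpWebRequest)WebRequest.Create("https://example.com/Login.aspx");
login.Method = "POST";
login.ContentType = "application/x-www-form-urlencoded";
login.CookieContainer = cookies;
byte[] body = Encoding.UTF8.GetBytes("UserName=me&Password=secret");
login.ContentLength = body.Length;
using (Stream s = login.GetRequestStream()) { s.Write(body, 0, body.Length); }
login.GetResponse().Close();

// 2. Request the protected page, reusing the same CookieContainer
HttpWebRequest page = (HttpWebRequest)WebRequest.Create("https://example.com/Files.aspx");
page.CookieContainer = cookies;
using (StreamReader r = new StreamReader(page.GetResponse().GetResponseStream()))
{
    string html = r.ReadToEnd();   // parse the file list out of this
}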
A:
I have dealt with something similar, and the hardest part is figuring out exactly what you needed to "fake" to get authorized. In my case it was authorizing into some Lotus Notes webservice, but the details are unimportant, the method is the same.
Essentially, we need to record a regular user session. I would recommend Fiddler http://www.fiddler2.com but if you're on linux or something, then you'll need to use wireshark to figure some of the things out. Not sure if there is a firefox plugin that could be used.
Anyway, start up IE, then start up Fiddler. Complete the login process.
Stop what you're doing. Switch to the fiddler pane, and examine the recorded sessions in detail. It should give you exactly what you need to fake using WebRequests.
| Authenticate on an ASP.Net Forms Authorization website from a console app | I'm trying to build a C# console application to automate grabbing certain files from our website, mostly to save myself clicks and - frankly - just to have done it. But I've hit a snag for which I've been unable to find a working solution.
The website to which I'm trying to connect uses ASP.Net forms authorization, and I cannot figure out how to authenticate myself with it. This application is a complete hack so I can hard code my username and password or any other needed auth info, and the solution itself doesn't need to be something that is viable enough to release to general users. In other words, if the only possible solution is a hack, I'm fine with that.
Basically, I'm trying to use HttpWebRequest to pull the site that has the list of files, iterating through that list and then downloading what I need. So the actual work on the site is fairly trivial once I can get the website to consider me authorized.
| [
"This page should get you started. You need to first make a request to the page, and then saving the cookie to a container that you include in all later request. That should keep you logged in, and able to retrieve the files.\n",
"I have dealt with something similar, and the hardest part is figuring out exactly what you needed to \"fake\" to get authorized. In my case it was authorizing into some Lotus Notes webservice, but the details are unimportant, the method is the same.\nEssentially, we need to record a regular user session. I would recommend Fiddler http://www.fiddler2.com but if you're on linux or something, then you'll need to use wireshark to figure some of the things out. Not sure if there is a firefox plugin that could be used.\nAnyway, start up IE, then start up Fiddler. Complete the login process.\nStop what you're doing. Switch to the fiddler pane, and examine the recorded sessions in detail. It should give you exactly what you need to fake using WebRequests. \n"
] | [
3,
3
] | [] | [] | [
"asp.net",
"authentication",
"c#"
] | stackoverflow_0000022269_asp.net_authentication_c#.txt |
Q:
Get a list of available domains (NT4 and Active Directory)
Does anyone know (in c#) a way of getting the available NT4 domains (a bit like the WinXP login box dropdown)?
I know that this is fairly easy for Active Directory using the DirectoryServices namespace, but I can't find anything for the old NT4 domains. I'd rather not use API calls if at all possible (that might be asking a bit much however).
Also, for bonus points (!), we are finally switching to Active Directory later on this autumn, so how would I construct a way of my domain list automatically switching over from NT4 to AD, when we migrate (so I don't need to recompile and re-release)
A:
Unfortunately I think your only option is to use the ADSI API. You can switch between NT4 and Active Directory by changing providers in your code. NT4 uses the WinNT provider and Active Directory uses the LDAP provider.
If you query the RootDSE node of whichever provider you are using, that should return naming contexts to which you can bind, including domains. RootDSE is an LDAP schema specific identifier. For WinNT you can query the root object as "WinNT:" to get available domains.
ADSI is available through VB script BTW.
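As a hedged illustration of the WinNT-provider approach described above, here is a minimal C# console sketch using System.DirectoryServices (error handling omitted; the "Domain" schema class filter may also pick up workgroups depending on the environment):
using System;
using System.DirectoryServices;

class DomainLister
{
    static void Main()
    {
        using (DirectoryEntry root = new DirectoryEntry("WinNT:"))
        {
            foreach (DirectoryEntry child in root.Children)
            {
                // Domains (NT4 or AD-backed) are exposed as "Domain" objects
                if (child.SchemaClassName == "Domain")
                    Console.WriteLine(child.Name);
                child.Dispose();
            }
        }
    }
}
When the migration to Active Directory happens, the equivalent list can be produced against the LDAP provider, so keeping the provider string in configuration avoids a recompile.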
| Get a list of available domains (NT4 and Active Directory) | Does anyone know (in c#) a way of getting the available NT4 domains (a bit like the WinXP login box dropdown)?
I know that this is fairly easy for Active Directory using the DirectoryServices namespace, but I can't find anything for the old NT4 domains. I'd rather not use API calls if at all possible (that might be asking a bit much however).
Also, for bonus points (!), we are finally switching to Active Directory later on this autumn, so how would I construct a way of my domain list automatically switching over from NT4 to AD, when we migrate (so I don't need to recompile and re-release)
| [
"Unfortunately I think your only option is to use the ADSI API. You can switch between NT4 and Active Directory by changing providers in your code. NT4 uses the WinNT provider and Active Directory uses the LDAP provider.\nIf you query the RootDSE node of whichever provider you are using, that should return naming contexts to which you can bind, including domains. RootDSE is an LDAP schema specific identifier. For WinNT you can query the root object as \"WinNT:\" to get available domains.\nADSI is available through VB script BTW.\n"
] | [
1
] | [] | [] | [
"active_directory",
"c#",
"nt4"
] | stackoverflow_0000022265_active_directory_c#_nt4.txt |
Q:
What is the best way to deploy a VB.NET application?
Generally I use ClickOnce when I build a VB.NET program, but it has a few downsides. I've never really used anything else, so I'm not sure
what my options are.
Downsides to ClickOnce:
Consists of multiple files - Seems easier to distribute one file than managing a bunch of files and the downloader to download those files.
You have to build it again for CD installations (for when the end user doesn't have internet)
Program does not end up in Program Files - It ends up hidden away in some application cache folder, making it much harder to shortcut to.
Pros to ClickOnce:
It works. Magically. And it's built into VisualStudio 2008 express.
Makes it easy to upgrade the application.
Does Windows Installer do these things as well? I know it doesn't have any of the ClickOnce cons, but it would be nice to know if it also has the ClickOnce pros.
Update:
I ended up using Wix 2 (Wix 3 was available but at the time I did the project, no one had a competent tutorial). It was nice because it supported the three things I (eventually) needed. An optional start-up-with-windows shortcut, a start-up-when-the-installer-is-done option, and three paragraphs of text that my boss thinks will keep users from clicking the wrong option.
A:
Have you seen WiX yet?
http://wix.sourceforge.net/
It builds windows installers using an XML file and has additional libraries to use if you want to fancify your installers and the like. I'll admit the learning curve for me was medium-high in getting things started, but afterwards I was able to build a second installer without any hassles.
It will handle updates and other items if you so desire, and you can apply folder permissions and the like to the installers. It also gives you greater control on where exactly you want to install files and is compatible with all the standardized Windows folder conventions, so you can specify "PROGRAM_DATA" or something to that effect and the installer knows to put it in C:\Documents and Settings\All Users\Application Data or C:\ProgramData depending on if you're running XP or Vista.
The rumor is that Office 2007 and Visual Studio 2008 used WiX to create their installer, but I haven't been able to verify that anywhere. I do believe is is developed by some Microsoft folks on the inside.
A:
I agree with Joseph, my experience with ClickOnce is its great for the vast majority of projects especially in a corporate environment where it makes build, publish and deployment easy. Implementing the "forced upgrade" to ensure users have the latest version when running is so much easier in ClickOnce, and a main reason for my usage of it.
Issues with ClickOnce: In a corporate environment it has issues with proxy servers and the workarounds are less than ideal. I've had to deploy a few apps in those cases from UNC paths...but you can't do that all the time. Its "sandbox" is great, until you want to find the executable or create a desktop shortcut.
Have not deployed out of 2008 yet so not sure if those issues still exist.
A:
Creating an installer project, with a dependency on your EXE (which in turn depends on whatever it needs) is a fairly straightforward process - but you'll need at least VS Standard Edition for that.
Inside the installer project, you can create custom tasks and dialog steps that allow you to do anything you code up.
What's missing is the auto-upgrade and version-checking magic you get with ClickOnce. You can still build it in, it's just not automatic.
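As a hedged sketch of such a hand-rolled check (the URL and file names are made up for illustration; the example is C#, but the VB.NET translation is direct, and it assumes using System, System.Diagnostics, System.IO, System.Net and System.Reflection):
string latestText = new WebClient().DownloadString("http://example.com/myapp/latest-version.txt");
Version latest = new Version(latestText.Trim());
Version current = Assembly.GetExecutingAssembly().GetName().Version;

if (latest > current)
{
    // Download the new installer, launch it, and exit so it can replace our files
    string installer = Path.Combine(Path.GetTempPath(), "MyAppSetup.msi");
    new WebClient().DownloadFile("http://example.com/myapp/MyAppSetup.msi", installer);
    Process.Start(installer);
    Environment.Exit(0);
}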
A:
I don't believe there is any easy way to make a Windows Installer project have the ease or upgradability of ClickOnce. I use ClickOnce for all the internal .NET apps I develop (with the exception of Console Apps). I find that in an enterprise environment, the ease of deployment outweighs the lack of flexibility.
A:
ClickOnce can be problematic if you have 3rd party components that need to be installed along with your product. You can skirt this to some extent by creating installers for the components however with ClickOnce deployment you have to create the logic to update said component installers.
I've in a previous life used Wise For Windows Installer to create installation packages. While creating upgrades with it were not automatic like ClickOnce is, they were more precise and less headache filled when it came to other components that needed to be registered/added.
| What is the best way to deploy a VB.NET application? | Generally I use ClickOnce when I build a VB.NET program, but it has a few downsides. I've never really used anything else, so I'm not sure
what my options are.
Downsides to ClickOnce:
Consists of multiple files - Seems easier to distribute one file than managing a bunch of files and the downloader to download those files.
You have to build it again for CD installations (for when the end user doesn't have internet)
Program does not end up in Program Files - It ends up hidden away in some application cache folder, making it much harder to shortcut to.
Pros to ClickOnce:
It works. Magically. And it's built into VisualStudio 2008 express.
Makes it easy to upgrade the application.
Does Windows Installer do these things as well? I know it doesn't have any of the ClickOnce cons, but it would be nice to know if it also has the ClickOnce pros.
Update:
I ended up using Wix 2 (Wix 3 was available but at the time I did the project, no one had a competent tutorial). It was nice because it supported the three things I (eventually) needed. An optional start-up-with-windows shortcut, a start-up-when-the-installer-is-done option, and three paragraphs of text that my boss thinks will keep users from clicking the wrong option.
| [
"Have you seen WiX yet?\nhttp://wix.sourceforge.net/\nIt builds windows installers using an XML file and has additional libraries to use if you want to fancify your installers and the like. I'll admit the learning curve for me was medium-high in getting things started, but afterwards I was able to build a second installer without any hassles. \nIt will handle updates and other items if you so desire, and you can apply folder permissions and the like to the installers. It also gives you greater control on where exactly you want to install files and is compatible with all the standardized Windows folder conventions, so you can specify \"PROGRAM_DATA\" or something to that effect and the installer knows to put it in C:\\Documents and Settings\\All Users\\Application Data or C:\\ProgramData depending on if you're running XP or Vista.\nThe rumor is that Office 2007 and Visual Studio 2008 used WiX to create their installer, but I haven't been able to verify that anywhere. I do believe is is developed by some Microsoft folks on the inside.\n",
"I agree with Joseph, my experience with ClickOnce is its great for the vast majority of projects especially in a corporate environment where it makes build, publish and deployment easy. Implementing the \"forced upgrade\" to ensure users have the latest version when running is so much easier in ClickOnce, and a main reason for my usage of it.\nIssues with ClickOnce: In a corporate environment it has issues with proxy servers and the workarounds are less than ideal. I've had to deploy a few apps in those cases from UNC paths...but you can't do that all the time. Its \"sandbox\" is great, until you want to find the executable or create a desktop shortcut. \nHave not deployed out of 2008 yet so not sure if those issues still exist.\n",
"Creating an installer project, with a dependency on your EXE (which in turn depends on whatever it needs) is a fairly straightforward process - but you'll need at least VS Standard Edition for that.\nInside the installer project, you can create custom tasks and dialog steps that allow you to do anything you code up.\nWhat's missing is the auto-upgrade and version-checking magic you get with ClickOnce. You can still build it in, it's just not automatic.\n",
"I don't believe there is any easy way to make a Windows Installer project have the ease or upgradability of ClickOnce. I use ClickOnce for all the internal .NET apps I develop (with the exception of Console Apps). I find that in an enterprise environment, the ease of deployment outweighs the lack of flexibility.\n",
"ClickOnce can be problematic if you have 3rd party components that need to be installed along with your product. You can skirt this to some extent by creating installers for the components however with ClickOnce deployment you have to create the logic to update said component installers.\nI've in a previous life used Wise For Windows Installer to create installation packages. While creating upgrades with it were not automatic like ClickOnce is, they were more precise and less headache filled when it came to other components that needed to be registered/added.\n"
] | [
6,
2,
1,
1,
1
] | [] | [] | [
"clickonce",
"installation",
"vb.net",
"visual_studio"
] | stackoverflow_0000003544_clickonce_installation_vb.net_visual_studio.txt |
Q:
Implementing a log watcher
I'm wondering how you can implement a program similar to tail -f in C/C++, a program that watches for and processes new lines added to a log file?
A:
You can use fseek() to clear the eof condition on the stream. Essentially, read to the end of the file, sleep for a while, fseek() (without changing your position) to clear eof, the read to end of file again. wash, rinse, repeat. man fseek(3) for details.
Here's what it looks like in perl. perl's seek() is essentially a wrapper for fseek(3), so the logic is the same:
wembley 0 /home/jj33/swap >#> cat p
my $f = shift;
open(I, "<$f") || die "Couldn't open $f: $!\n";
while (1) {
seek(I, 0, 1);
while (defined(my $l = <I>)) {
print "Got: $l";
}
print "Hit EOF, sleeping\n";
sleep(10);
}
wembley 0 /home/jj33/swap >#> cat tfile
This is
some
text
in
a file
wembley 0 /home/jj33/swap >#> perl p tfile
Got: This is
Got: some
Got: text
Got: in
Got: a file
Hit EOF, sleeping
Then, in another session:
wembley 0 /home/jj33/swap > echo "another line of text" >> tfile
And back to the original program output:
Hit EOF, sleeping
Got: another line of text
Hit EOF, sleeping
A:
See here
You could either call out to tail and retrieve the stream back into your app, or as it's open source, maybe try to pull it into your own code.
Also, it is possible in C++ iostream to open a file for viewing only and just read to the end, while buffering the last 10-20 lines, then output that.
A:
I think what you're looking for is the select() call in c/c++. I found a copy of the man page here: http://www.opengroup.org/onlinepubs/007908775/xsh/select.html. Select takes file descriptors as arguments and tells you when one of them has changed and is ready for reading.
A:
The tail program is open source, so you could reference that. I wondered the same thing and looked at the code a while back, thinking it would be pretty simple, but I was surprised at how complex it was. There are lots of gotchas that have to be taken into account.
| Implementing a log watcher | I'm wondering how you can implement a program similar to tail -f in C/C++, a program that watches for and processes new lines added to a log file?
| [
"You can use fseek() to clear the eof condition on the stream. Essentially, read to the end of the file, sleep for a while, fseek() (without changing your position) to clear eof, the read to end of file again. wash, rinse, repeat. man fseek(3) for details.\nHere's what it looks like in perl. perl's seek() is essentially a wrapper for fseek(3), so the logic is the same:\nwembley 0 /home/jj33/swap >#> cat p\nmy $f = shift;\nopen(I, \"<$f\") || die \"Couldn't open $f: $!\\n\";\n\nwhile (1) {\n seek(I, 0, 1);\n while (defined(my $l = <I>)) {\n print \"Got: $l\";\n }\n print \"Hit EOF, sleeping\\n\";\n sleep(10);\n}\nwembley 0 /home/jj33/swap >#> cat tfile\nThis is\nsome\ntext\nin\na file\nwembley 0 /home/jj33/swap >#> perl p tfile\nGot: This is\nGot: some\nGot: text\nGot: in\nGot: a file\nHit EOF, sleeping\n\nThen, in another session:\nwembley 0 /home/jj33/swap > echo \"another line of text\" >> tfile\n\nAnd back to the original program output:\nHit EOF, sleeping\nGot: another line of text\nHit EOF, sleeping\n\n",
"See here\nYou could either call out to tail and retrieve the stream back into your app, or as it's open source, maybe try to pull it into your own code.\nAlso, it is possible in C++ iostream to open a file for viewing only and just read to the end, while buffering the last 10-20 lines, then output that.\n",
"I think what you're looking for is the select() call in c/c++. I found a copy of the man page here: http://www.opengroup.org/onlinepubs/007908775/xsh/select.html. Select takes file descriptors as arguments and tells you when one of them has changed and is ready for reading.\n",
"The tail program is open source, so you could reference that. I wondered the same thing and looked at the code a while back, thinking it would be pretty simple, but I was surprised at how complex it was. There are lots of gotchas that have to be taken into account.\n"
] | [
5,
0,
0,
0
] | [] | [] | [
"c",
"c++",
"file",
"io"
] | stackoverflow_0000022379_c_c++_file_io.txt |
Q:
Adding Inline Search function to web page
Is it possible to embed an inline search box into a web page which provides similar functionality to the IE7Pro Inline Search or similar plugins for Firefox/Safari?
A:
jQuery inline search plugin provides this functionality
A:
If I understand your question, you are asking if it is possible to allow a user to type in a query that will search the text of page they are on?
You can certainly do that. I would suggest looking into one of the javascript libraries, jQuery is my library of choice, for your functionality. It has a rich selector syntax that allows you to search various parts of the page easily and without worrying about cross-browser coding yourself.
| Adding Inline Search function to web page | Is it possible to embed an inline search box into a web page which provides similar functionality to the IE7Pro Inline Search or similar plugins for Firefox/Safari?
| [
"jQuery inline search plugin provides this functionality\n",
"If I understand your question, you are asking if it is possible to allow a user to type in a query that will search the text of page they are on?\nYou can certainly do that. I would suggest looking into one of the javascript libraries, jQuery is my library of choice, for your functionality. It has a rich selector syntax that allows you to search various parts of the page easily and without worrying about cross-browser coding yourself.\n"
] | [
1,
1
] | [] | [] | [
"html",
"javascript"
] | stackoverflow_0000022429_html_javascript.txt |
Q:
What is the difference between HttpHandler and a Web User Control and when to use each one?
I've been using user controls extensively but have never used an HttpHandler, and was wondering if I am doing something suboptimal or wrong
A:
Unfortunately your question is a little like "Should I use a sandwich or a cement mixer". HttpHandlers and User controls are completely different things.
HttpHandlers are used to process HTTP requests. For example, if you wanted to dynamically create an RSS feed, you could write an HTTP handler that handles all requests for ".rss" files, creates the output and sends it back to the user.
User controls are used within ASPX pages to encapsulate units of functionality that you want to re-use accross many pages.
Chances are, if you're using user controls successfully, you don't want to use HttpHandlers!
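To make the distinction concrete, a bare-bones handler for that RSS example might look like the following hedged C# sketch (the class name and feed content are illustrative); it would then be mapped to *.rss requests in the httpHandlers section of web.config (and to the .rss extension in IIS):
using System.Web;

public class RssHandler : IHttpHandler
{
    // The handler keeps no per-request state, so one instance can be reused
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/rss+xml";
        context.Response.Write("<rss version=\"2.0\"><channel><title>Example feed</title></channel></rss>");
    }
}
A user control, by contrast, is an .ascx file with markup and code-behind that only ever runs as part of a page.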
A:
Basically a user control is a piece of server logic and UI. An HTTP Handler is only a piece of logic that is executed when a resource on your server is requested. For example you may decide to handle requests for images sent to your server through your own handler and serve images from a database instead of the file system. However, in this case there's no interface that the user sees and when he visits a URL on your server he would get the response you constructed in your own handler. Handlers are usually done for specific extensions and HTTP request types (POST, GET). Here's some more info on MSDN: http://msdn.microsoft.com/en-us/library/ms227675(VS.80).aspx
A:
Even an Asp.Net page is an HttpHandler.
public class Page : TemplateControl, IHttpHandler
A user control actually resides within the asp.net aspx page.
A:
Expect a better answer (probably before I finish typing this) but as a quick summary.
A user control is something that can be added to a page.
A HttpHandler can be used instead of a page.
A:
Just to clarify the question. I was reading the Hanselman post
http://www.hanselman.com/blog/CompositingTwoImagesIntoOneFromTheASPNETServerSide.aspx
and thinking that I would never solved the problem with a HttpHandler, maybe with a simple page returning a binary content.
This led me to think that I should add HttpHandler to my developer tool belt.
| What is the difference between HttpHandler and a Web User Control and when to use each one? | I've been using user controls extensively but have never used an HttpHandler, and was wondering if I am doing something suboptimal or wrong
| [
"Unfortunately your question is a little like \"Should I use a sandwich or a cement mixer\". HttpHandlers and User controls are completely different things.\nHttpHandlers are used to process HTTP requests. For example, if you wanted to dynamically create an RSS feed, you could write an HTTP handler that handles all requests for \".rss\" files, creates the output and sends it back to the user.\nUser controls are used within ASPX pages to encapsulate units of functionality that you want to re-use accross many pages.\nChances are, if you're using user controls successfully, you don't want to use HttpHandlers!\n",
"Basically a user control is a piece of server logic and UI. An HTTP Handler is only a piece of logic that is executed when a resource on your server is requested. For example you may decide to handle requests for images sent to your server through your own handler and serve images from a database instead of the file system. However, in this case there's no interface that the user sees and when he visits a URL on your server he would get the response you constructed in your own handler. Handlers are usually done for specific extensions and HTTP request types (POST, GET). Here's some more info on MSDN: http://msdn.microsoft.com/en-us/library/ms227675(VS.80).aspx\n",
"Even an Asp.Net page is an HttpHandler.\npublic class Page : TemplateControl, IHttpHandler\n\nA user control actually resides within the asp.net aspx page.\n",
"Expect a better answer (probably before I finish typing this) but as a quick summary.\nA user control is something that can be added to a page.\nA HttpHandler can be used instead of a page.\n",
"Just to clarify the question. I was reading the Hanselman post\nhttp://www.hanselman.com/blog/CompositingTwoImagesIntoOneFromTheASPNETServerSide.aspx\nand thinking that I would never solved the problem with a HttpHandler, maybe with a simple page returning a binary content.\nThis led me to think that I should add HttpHandler to my developer tool belt.\n"
] | [
6,
1,
0,
0,
0
] | [] | [] | [
"asp.net"
] | stackoverflow_0000022156_asp.net.txt |
Q:
How do I display records containing specific information in SQL
How do I select all records that contain "LCS" within the title column in SQL?
A:
SELECT * FROM TABLE WHERE TABLE.TITLE LIKE '%LCS%';
% is the wild card matcher.
A:
Look into the LIKE clause
A:
Are you looking for all the tables with a column name which contains the LCS in them? If yes the do this
select table_name
from information_schema.columns
where column_name like '%lcs%'
| How do I display records containing specific information in SQL | How do I select all records that contain "LCS" within the title column in SQL?
| [
"SELECT * FROM TABLE WHERE TABLE.TITLE LIKE '%LCS%';\n\n% is the wild card matcher.\n",
"Look into the LIKE clause\n",
"Are you looking for all the tables with a column name which contains the LCS in them? If yes the do this\nselect table_name \nfrom information_schema.columns \nwhere column_name like '%lcs%'\n\n"
] | [
2,
0,
0
] | [] | [] | [
"sql",
"sql_server"
] | stackoverflow_0000022474_sql_sql_server.txt |
Q:
SQL Query Help - Scoring Multiple Choice Tests
Say I have a Student table, it's got an int ID. I have a fixed set of 10 multiple choice questions with 5 possible answers. I have a normalized answer table that has the question id, the Student.answer (1-5) and the Student.ID
I'm trying to write a single query that will return all scores over a certain percentage. To this end I wrote a simple UDF that accepts the Student.answers and the correct answer, so it has 20 parameters.
I'm starting to wonder if it's better to denormalize the answer table, bring it into my application and let my application do the scoring.
Anyone ever tackle something like this and have insight?
A:
If I understand your schema and question correctly, how about something like this:
select student_name, score
from students
join (select student_answers.student_id, count(*) as score
from student_answers, answer_key
where student_answers.question_id = answer_key.question_id
and student_answers.answer = answer_key.answer
group by student_id)
as student_scores on students.student_id = student_scores.student_id
where score >= 7
order by score, student_name
That should select the students with a score of 7 or more, for example. Just adjust the where clause for your purposes.
A:
I would probably leave it up to your application to perform the scoring. Check out Maybe Normalizing Isn't Normal by Jeff Atwood.
A:
The architecture you are talking about could become very cumbersome in the long run, and if you need to change the questions it means more changes to the UDF you are using.
I would think you could probably do your analysis in code without necessarily de-normalizing your database. De-normalization could also lend to inflexibility, or at least added expense to update, down the road.
A:
No way, you definitely want to keep it normalized. It's not even that hard of a query.
Basically, you want to left join the students correct answers with the total answers for that question, and do a count. This will give you the percent correct. Do that for each student, and put the minimum percent correct in a where clause.
A:
Denormalization is generally considered a last resort. The problem seems very similar to survey applications, which are very common. Without seeing your data model, it's difficult to propose a solution, but I will say that it is definitely possible. I'm wondering why you need 20 parameters to that function?
A relational set-based solution will be simpler and faster in most cases.
A:
This query should be quite easy... assuming you have the correct answer stored in the question table. You do have the correct answer stored in the question table, right?
| SQL Query Help - Scoring Multiple Choice Tests | Say I have a Student table, it's got an int ID. I have a fixed set of 10 multiple choice questions with 5 possible answers. I have a normalized answer table that has the question id, the Student.answer (1-5) and the Student.ID
I'm trying to write a single query that will return all scores over a certain percentage. To this end I wrote a simple UDF that accepts the Student.answers and the correct answer, so it has 20 parameters.
I'm starting to wonder if it's better to denormalize the answer table, bring it into my application and let my application do the scoring.
Anyone ever tackle something like this and have insight?
| [
"If I understand your schema and question correctly, how about something like this:\nselect student_name, score\nfrom students\n join (select student_answers.student_id, count(*) as score\n from student_answers, answer_key\n group by student_id\n where student_answers.question_id = answer_key.question_id\n and student_answers.answer = answer_key.answer)\n as student_scores on students.student_id = student_scores.student_id\nwhere score >= 7\norder by score, student_name\n\nThat should select the students with a score of 7 or more, for example. Just adjust the where clause for your purposes.\n",
"I would probably leave it up to your application to perform the scoring. Check out Maybe Normalizing Isn't Normal by Jeff Atwood.\n",
"The architecture you are talking about could become very cumbersome in the long run, and if you need to change the questions it means more changes to the UDF you are using.\nI would think you could probably do your analysis in code without necessarily de-normalizing your database. De-normalization could also lend to inflexibility, or at least added expense to update, down the road.\n",
"No way, you definitely want to keep it normalized. It's not even that hard of a query.\nBasically, you want to left join the students correct answers with the total answers for that question, and do a count. This will give you the percent correct. Do that for each student, and put the minimum percent correct in a where clause.\n",
"Denormalization is generally considered a last resort. The problem seems very similar to survey applications, which are very common. Without seeing your data model, it's difficult to propose a solution, but I will say that it is definitely possible. I'm wondering why you need 20 parameters to that function?\nA relational set-based solution will be simpler and faster in most cases.\n",
"This query should be quite easy... assuming you have the correct answer stored in the question table. You do have the correct answer stored in the question table, right?\n"
] | [
2,
1,
0,
0,
0,
0
] | [] | [] | [
"dynamic_sql"
] | stackoverflow_0000022417_dynamic_sql.txt |
Q:
XmlHttpRequest return values
I'm looking for (arguably) the correct way to return data from a XmlHttpRequest. Options I see are:
Plain HTML. Let the request format the data and return it in a usable format.
Advantage: easy to consume by the calling page.Disadvantage: Very rigid, stuck with a fixed layout.
XML. Let the request return XML, format it using XSLT on the calling page.Advantage: the requested service is easily consumed by other sources.Disadvantage: Is browser support for XSLT good enough?
JSON. Let the request return JSON, consume it using javascript, render HTML accordingly.Advantage: easier to 'OO-ify' the javascript making the request. Disadvantage: Probably not as easy to use as the previous two options.
I've also thought about going for option one while abstracting the view logic in the called service in such a way that switching in and out different layouts would be trivial. Personally I think this option is the best out of three, for compatibility reasons.
While typing this, I got another insight. Would it be a good idea to allow all three response formats, based on a parameter added to the request?
A:
If you're looking for a quick solution that should work with most available frameworks, I'd go for JSON. It's easy to start with and works.
If you're trying to build a larger application that you're going to extend (in terms of size or maybe your own API for 3rd party extensions) I'd go for XML. You could write a proxy to provide the information in JSON or HTML too, but having XML as the main source is definitly worth the time and effort while building the app.
Like @John Topley said: it depends.
A:
I'd agree with John Topley - it depends on the application. There's a good article on quirksmode that discusses the advantages and disadvantages of each format that you might want to read: http://www.quirksmode.org/blog/archives/2005/12/the_ajax_respon.html
A:
I think this sort of depends on the level of "ajaxyness" your app is going to have. If your front end is a "rich client", al'a gmail, I'd go with the JSON solution, as you'd have to solve the problem of having client side view generation anyway. If you're using ajax sparingly, to provide simple messages to the user, update a few fields now and then, etc, then I'd go with option 1, since most of your view logic is already on the server.
A:
Different response format shouldn't be difficult to produce.
JSON works best for me, i like to keep OO in js, and don't know how to parse XML well :)
A:
I think trying to use XmlHttpRequest will be a huge headache, unless its the type of headache you don't mind - to do it properly you're almost reinventing the wheel. Then again, people like to reinvent wheels in their spare time, just to say, "Hey, I did it". Not me...
I would get a framework like prototype or Extjs, that has alot of data loading functions built in for XML and JSON, plus you'll get more predictable results, as the frameworks have event handlers to make sure your XmlHttpRequest succeeded or failed. Plus you get support for all the various browsers.
| XmlHttpRequest return values | I'm looking for (arguably) the correct way to return data from a XmlHttpRequest. Options I see are:
Plain HTML. Let the request format the data and return it in a usable format.
Advantage: easy to consume by the calling page.Disadvantage: Very rigid, stuck with a fixed layout.
XML. Let the request return XML, format it using XSLT on the calling page.Advantage: the requested service is easily consumed by other sources.Disadvantage: Is browser support for XSLT good enough?
JSON. Let the request return JSON, consume it using javascript, render HTML accordingly.Advantage: easier to 'OO-ify' the javascript making the request. Disadvantage: Probably not as easy to use as the previous two options.
I've also thought about going for option one while abstracting the view logic in the called service in such a way that switching in and out different layouts would be trivial. Personally I think this option is the best out of three, for compatibility reasons.
While typing this, I got another insight. Would it be a good idea to allow all three response formats, based on a parameter added to the request?
| [
"If you're looking for a quick solution that should work with most available frameworks, I'd go for JSON. It's easy to start with and works.\nIf you're trying to build a larger application that you're going to extend (in terms of size or maybe your own API for 3rd party extensions) I'd go for XML. You could write a proxy to provide the information in JSON or HTML too, but having XML as the main source is definitly worth the time and effort while building the app.\nLike @John Topley said: it depends.\n",
"I'd agree with John Topley - it depends on the application. There's a good article on quirksmode that discusses the advantages and disadvantages of each format that you might want to read: http://www.quirksmode.org/blog/archives/2005/12/the_ajax_respon.html\n",
"I think this sort of depends on the level of \"ajaxyness\" your app is going to have. If your front end is a \"rich client\", al'a gmail, I'd go with the JSON solution, as you'd have to solve the problem of having client side view generation anyway. If you're using ajax sparingly, to provide simple messages to the user, update a few fields now and then, etc, then I'd go with option 1, since most of your view logic is already on the server.\n",
"Different response format shouldn't be difficult to produce.\nJSON works best for me, i like to keep OO in js, and don't know how to parse XML well :)\n",
"I think trying to use XmlHttpRequest will be a huge headache, unless its the type of headache you don't mind - to do it properly you're almost reinventing the wheel. Then again, people like to reinvent wheels in their spare time, just to say, \"Hey, I did it\". Not me...\nI would get a framework like prototype or Extjs, that has alot of data loading functions built in for XML and JSON, plus you'll get more predictable results, as the frameworks have event handlers to make sure your XmlHttpRequest succeeded or failed. Plus you get support for all the various browsers.\n"
] | [
2,
2,
0,
0,
0
] | [] | [] | [
"ajax",
"javascript"
] | stackoverflow_0000021992_ajax_javascript.txt |
Q:
How to set up a DB2 linked server on a 64-bit SQL Server 2005?
I need to create a linked server to a DB2 database on a mainframe. Has anyone done this successfully on a 64-bit version of SQL Server 2005? If so, which provider and settings were used?
It's important that the linked server work whether we are using a Windows authenticated account to login to SQL Server or a SQL Server login. It's also important that both the 4-part name and OPENQUERY query methods are functional. We have one set up on a SQL Server 2000 machine that works well, but it uses a provider that's not available for 64-bit SS 2005.
A:
We had this same issue with a production system late last year (sept 2007) and the official word from our Microsoft contact was that they had a 64 bit oledb driver to connect to ASI/DB2 but it was in BETA at the time.
Not sure when it will be out of beta but that was the news as of last year.
We decided to move the production server onto a 32 bit machine since we were not comfortable using beta drivers on production systems.
I know this doesn't answer your question but it hopefully gives you some insight
A:
What provider are you using for Sql 2000? I'm pretty sure MS has an x64 OLEDB driver for DB2 (part of Host Integration Server, but available as a separate download). IBM has x64 for .NET and ODBC, and possible OLEDB as well (though it's a PITA to find).
Once you get the linked server setup, I'm pretty sure all of your other requirements would be automatic....
A:
From the Sql 2005 February 2007 Feature Pack:
The Microsoft OLE DB Provider for DB2 is a COM component for integrating vital data stored in IBM DB2 databases with new solutions based on Microsoft SQL Server 2005 Enterprise Edition and Developer Edition. SQL Server developers and administrators can use the provider with Integration Services, Analysis Services, Replication, Reporting Services, and Distributed Query Processor. Run the self-extracting download package to create an installation folder. The single setup program will install the provider and tools on x86, x64, and IA64 computers.
| How to set up a DB2 linked server on a 64-bit SQL Server 2005? | I need to create a linked server to a DB2 database on a mainframe. Has anyone done this successfully on a 64-bit version of SQL Server 2005? If so, which provider and settings were used?
It's important that the linked server work whether we are using a Windows authenticated account to login to SQL Server or a SQL Server login. It's also important that both the 4-part name and OPENQUERY query methods are functional. We have one set up on a SQL Server 2000 machine that works well, but it uses a provider that's not available for 64-bit SS 2005.
| [
"We had this same issue with a production system late last year (sept 2007) and the official word from our Microsoft contact was that they had a 64 bit oledb driver to connect to ASI/DB2 but it was in BETA at the time.\nNot sure when it will be out of beta but that was the news as of last year.\nWe decided to move the production server onto a 32 bit machine since we were not comfortable using beta drivers on production systems.\nI know this doesn't answer your question but it hopefully gives you some insight\n",
"What provider are you using for Sql 2000? I'm pretty sure MS has an x64 OLEDB driver for DB2 (part of Host Integration Server, but available as a separate download). IBM has x64 for .NET and ODBC, and possible OLEDB as well (though it's a PITA to find).\nOnce you get the linked server setup, I'm pretty sure all of your other requirements would be automatic....\n",
"From the Sql 2005 February 2007 Feature Pack:\n\nThe Microsoft OLE DB Provider for DB2 is a COM component for integrating vital data stored in IBM DB2 databases with new solutions based on Microsoft SQL Server 2005 Enterprise Edition and Developer Edition. SQL Server developers and administrators can use the provider with Integration Services, Analysis Services, Replication, Reporting Services, and Distributed Query Processor. Run the self-extracting download package to create an installation folder. The single setup program will install the provider and tools on x86, x64, and IA64 computers.\n\n"
] | [
1,
0,
0
] | [] | [] | [
"db2",
"sql_server"
] | stackoverflow_0000010898_db2_sql_server.txt |
Q:
Why is ASP.NET gzip compression corrupting CSS?
I have an ASP.NET webforms application (3.5 SP1) that I'm working on, and attempting to enable gzip for HTML and CSS that comes down the pipe. I'm using this implementation (and tried a few others that hook into Application_BeginRequest), and it seems to be corrupting the external CSS file that the pages use, but intermittently...suddenly all styles will disappear on a page refresh, stay that way for awhile, and then suddenly start working again.
Both IE7 and FF3 exhibit this behavior. When viewing the CSS using the web developer toolbar, it returns gibberish. The cache-control header is coming through as "private," but I don't know enough to figure out if that's a contributing factor or not.
Also, this is running on the ASP.NET Development Server. Maybe it'd be fine with IIS, but I'm developing on XP and it'd be IIS5.
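For context, the kind of Application_BeginRequest hook being referred to typically looks roughly like this hedged sketch in Global.asax (not the exact linked implementation; assumes using System, System.IO.Compression and System.Web):
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpApplication app = (HttpApplication)sender;
    string acceptEncoding = app.Request.Headers["Accept-Encoding"];

    if (!string.IsNullOrEmpty(acceptEncoding) && acceptEncoding.Contains("gzip"))
    {
        // Wrap the output stream so everything written to the response is gzipped
        app.Response.Filter = new GZipStream(app.Response.Filter, CompressionMode.Compress);
        app.Response.AppendHeader("Content-Encoding", "gzip");
    }
}
One common way this goes wrong is when the compressed bytes and the Content-Encoding header get out of sync (for example via caching), so the browser receives gzipped CSS it never decompresses - which matches the intermittent gibberish described above.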
A:
Is it only CSS files that get corrupted? Do JS files (or any other static text files) come through ok?
Also can you duplicate the behavior if you browse directly to the CSS file?
I've only enabled compression on Windows 2003 server's IIS using this approach:
IIS → Web Sites → Properties → Service tab, check both boxes
IIS → Web Service Extensions → Right click, Add New
Name: Http Compression
Required Files: %systemroot%\system32\inetsrv\gzip.dll
IIS → Right click top node, Internet Information Services, check Enable Direct Metabase Edit
Backup and Edit %systemroot%\system32\inetsrv\MetaBase.xml
Find Location ="/LM/W3SVC/Filters/Compression/gzip"
Add png, css, js and any other static file extensions to HcFileExtensions
Add aspx and any other executable extensions to HcScriptFileExtensions
Save
Restart IIS (run iisreset)
If you have a Windows 2003/2008 server to play with you could try that approach.
A:
If you will be deploying on IIS 6 or IIS 7, just use the built-in IIS compression. We're using it on production sites for compressing HTML, CSS, and JavaScript with no errors. It also caches the compressed version on the server, so the compression hit is only taken once.
| Why is ASP.NET gzip compression corrupting CSS? | I have an ASP.NET webforms application (3.5 SP1) that I'm working on, and attempting to enable gzip for HTML and CSS that comes down the pipe. I'm using this implementation (and tried a few others that hook into Application_BeginRequest), and it seems to be corrupting the external CSS file that the pages use, but intermittently...suddenly all styles will disappear on a page refresh, stay that way for awhile, and then suddenly start working again.
Both IE7 and FF3 exhibit this behavior. When viewing the CSS using the web developer toolbar, it returns gibberish. The cache-control header is coming through as "private," but I don't know enough to figure out if that's a contributing factor or not.
Also, this is running on the ASP.NET Development Server. Maybe it'd be fine with IIS, but I'm developing on XP and it'd be IIS5.
| [
"Is it only CSS files that get corrupted? Do JS files (or any other static text files) come through ok?\nAlso can you duplicate the behavior if you browse directly to the CSS file?\nI've only enabled compression on Windows 2003 server's IIS using this approach:\n\nIIS → Web Sites → Properties → Service tab, check both boxes\nIIS → Web Service Extensions → Right click, Add New\n Name\n Http Compression \n Required Files\n %systemroot%\\system32\\inetsrv\\gzip.dll \nIIS → Right click top node, Internet Information Services, check Enable Direct Metabase Edit\nBackup and Edit %systemroot%\\system32\\inetsrv\\MetaBase.xml\n\nFind Location =\"/LM/W3SVC/Filters/Compression/gzip\"\n\nAdd png, css, js and any other static file extensions to HcFileExtensions\nAdd aspx and any other executable extensions to HcScriptFileExtensions\nSave \n\n\nRestart IIS (run iisreset) \n\nIf you have a Windows 2003/2008 server to play with you could try that approach.\n",
"If you will be deploying on IIS 6 or IIS 7, just use the built-in IIS compression. We're using it on production sites for compressing HTML, CSS, and JavaScript with no errors. It also caches the compressed version on the server, so the compression hit is only taken once.\n"
] | [
5,
0
] | [] | [] | [
"asp.net",
"compression",
"gzip"
] | stackoverflow_0000022509_asp.net_compression_gzip.txt |
Q:
Passing a commented, multi-line (freespace) regex to preg_match
I have a regex that is going to end up being a bit long and it'd make it much easier to read to have it across multiple lines.
I tried this but it just barfs.
preg_match(
'^J[0-9]{7}:\s+
(.*?) #Extract the Transaction Start Date msg
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) #Extract the Project Name
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) #Extract the Job Name
\s+J[0-9]{7}:\s+',
$this->getResultVar('FullMessage'),
$atmp
);
Is there a way to pass a regex in the above form to preg_match?
A:
You can use the extended syntax:
preg_match("/
test
/x", $foo, $bar);
A:
Yes, you can add the /x Pattern Modifier.
This modifier turns on additional
functionality of PCRE that is
incompatible with Perl. Any backslash
in a pattern that is followed by a
letter that has no special meaning
causes an error, thus reserving these
combinations for future expansion. By
default, as in Perl, a backslash
followed by a letter with no special
meaning is treated as a literal. There
are at present no other features
controlled by this modifier.
For your example try this:
preg_match('/
^J[0-9]{7}:\s+
(.*?) #Extract the Transaction Start Date msg
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) #Extract the Project Name
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) #Extract the Job Name
\s+J[0-9]{7}:\s+
/x', $this->getResultVar('FullMessage'), $atmp);
A:
OK, here's a solution:
preg_match(
'/(?x)^J[0-9]{7}:\s+
(.*?) #Extract the Transaction Start Date msg
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) #Extract the Project Name
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) #Extract the Job Name
\s+J[0-9]{7}:\s+/'
, $this->getResultVar('FullMessage'), $atmp);
The key is (?x) at the beginning which makes whitespace insignificant and allows comments.
It's also important that there's no whitespace between the starting and ending quotes and the start & end of the regex.
My first attempt like this gave errors:
preg_match('
/(?x)^J[0-9]{7}:\s+
(.*?) #Extract the Transaction Start Date msg
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) #Extract the Project Name
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) #Extract the Job Name
\s+J[0-9]{7}:\s+/
', $this->getResultVar('FullMessage'), $atmp);
What Konrad said also works and feels a little easier than sticking (?x) at the beginning.
A:
You should add delimiters: the first character of the regex will be used to indicate the end of the pattern.
You should add the 'x' flag. This has the same result as putting (?x) at the beginning, but it is more readable imho.
A:
In PHP the comment syntax looks like this:(?# Your comment here)
preg_match('
^J[0-9]{7}:\s+
(.*?) (?#Extract the Transaction Start Date msg)
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) (?#Extract the Project Name)
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) (?#Extract the Job Name)
\s+J[0-9]{7}:\s+
', $this->getResultVar('FullMessage'), $atmp);
For more information see the PHP Regular Expression Syntax Reference
You can also use the PCRE_EXTENDED (or 'x') Pattern Modifier as Mark shows in his example.
| Passing a commented, multi-line (freespace) regex to preg_match | I have a regex that is going to end up being a bit long and it'd make it much easier to read to have it across multiple lines.
I tried this but it just barfs.
preg_match(
'^J[0-9]{7}:\s+
(.*?) #Extract the Transaction Start Date msg
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) #Extract the Project Name
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) #Extract the Job Name
\s+J[0-9]{7}:\s+',
$this->getResultVar('FullMessage'),
$atmp
);
Is there a way to pass a regex in the above form to preg_match?
| [
"You can use the extended syntax:\npreg_match(\"/\n test\n/x\", $foo, $bar);\n\n",
"Yes, you can add the /x Pattern Modifier.\n\nThis modifier turns on additional\n functionality of PCRE that is\n incompatible with Perl. Any backslash\n in a pattern that is followed by a\n letter that has no special meaning\n causes an error, thus reserving these\n combinations for future expansion. By\n default, as in Perl, a backslash\n followed by a letter with no special\n meaning is treated as a literal. There\n are at present no other features\n controlled by this modifier.\n\nFor your example try this:\npreg_match('/\n ^J[0-9]{7}:\\s+\n (.*?) #Extract the Transaction Start Date msg\n \\s+J[0-9]{7}:\\s+Project\\sname:\\s+\n (.*?) #Extract the Project Name\n \\s+J[0-9]{7}:\\s+Job\\sname:\\s+\n (.*?) #Extract the Job Name\n \\s+J[0-9]{7}:\\s+\n /x', $this->getResultVar('FullMessage'), $atmp);\n\n",
"OK, here's a solution:\npreg_match(\n '/(?x)^J[0-9]{7}:\\s+\n (.*?) #Extract the Transaction Start Date msg\n \\s+J[0-9]{7}:\\s+Project\\sname:\\s+\n (.*?) #Extract the Project Name\n \\s+J[0-9]{7}:\\s+Job\\sname:\\s+\n (.*?) #Extract the Job Name\n \\s+J[0-9]{7}:\\s+/'\n , $this->getResultVar('FullMessage'), $atmp);\n\nThe key is (?x) at the beginning which makes whitespace insignificant and allows comments.\nIt's also important that there's no whitespace between the starting and ending quotes and the start & end of the regex.\nMy first attempt like this gave errors:\npreg_match('\n /(?x)^J[0-9]{7}:\\s+\n (.*?) #Extract the Transaction Start Date msg\n \\s+J[0-9]{7}:\\s+Project\\sname:\\s+\n (.*?) #Extract the Project Name\n \\s+J[0-9]{7}:\\s+Job\\sname:\\s+\n (.*?) #Extract the Job Name\n \\s+J[0-9]{7}:\\s+/\n ', $this->getResultVar('FullMessage'), $atmp);\n\nWhat Konrad said also works and feels a little easier than sticking (?x) at the beginning.\n",
"\nYou should add delimiters: the first character of the regex will be used to indicate the end of the pattern.\nYou should add the 'x' flag. This has the same result as putting (?x) at the beginning, but it is more readable imho.\n\n",
"In PHP the comment syntax looks like this:(?# Your comment here)\npreg_match('\n ^J[0-9]{7}:\\s+\n (.*?) (?#Extract the Transaction Start Date msg)\n \\s+J[0-9]{7}:\\s+Project\\sname:\\s+\n (.*?) (?#Extract the Project Name)\n \\s+J[0-9]{7}:\\s+Job\\sname:\\s+\n (.*?) (?#Extract the Job Name)\n \\s+J[0-9]{7}:\\s+\n ', $this->getResultVar('FullMessage'), $atmp);\n\nFor more information see the PHP Regular Expression Syntax Reference\nYou can also use the PCRE_EXTENDED (or 'x') Pattern Modifier as Mark shows in his example.\n"
] | [
5,
3,
1,
1,
0
] | [] | [] | [
"php",
"regex"
] | stackoverflow_0000022552_php_regex.txt |
Q:
PHP includes vs OOP
I would like to have a reference for the pros and cons of using include files vs objects(classes) when developing PHP applications.
I know I would benefit from having one place to go for this answer...I have a few opinions of my own but I look forward to hearing others.
A Simple Example:
Certain pages on my site are only accessible to logged in users. I have two options for implementation (there are others but let's limit it to these two)
Create an authenticate.php file and include it on every page. It holds the logic for authentication.
Create a user object, which has an authenticate function, reference the object for authentication on every page.
Edit
I'd like to see some way to weigh the benefits of one over the other.
My current (and weak) reasons follow:
Includes - Sometimes a function is just easier/shorter/faster to call
Objects - Grouping of functionality and properties helps with longer-term maintenance.
Includes - Less code to write (no constructor, no class syntax) call me lazy but this is true.
Objects - Force formality and a single approach to functions and creation.
Includes - Easier for a novice to deal with
Objects - Harder for novices, but frowned upon by professionals.
I look at these factors at the start of a project to decide if I want to do includes or objects.
Those are a few pros and cons off the top of my head.
A:
These are not really opposite choices. You will have to include the checking code anyway. I read your question as procedural programming vs. OO programming.
Writing a few lines of code, or a function, and including it in your page header was how things were done in PHP3 or PHP4. It's simple, it works (that's how we did it in osCommerce, for example, an eCommerce PHP application).
But it's not easy to maintain and modify, as many developers can confirm.
In PHP5 you'd write a user object which will carry its own data and methods for authentication. Your code will be clearer and easier to maintain as everything having to do with users and authentication will be concentrated in a single place.
A:
While the question touches on a couple of very debatable issues (OOP, User authentication) I'll skip by those and second Konrad's comment about __autoload. Anyone who knows C/C++ knows how much of a pain including files can be. With autoload, a PHP5 addition, if you choose to use OOP (which I do almost exclusively) you only need use some standard file naming convention and (I would recommend) restricting a single class per file and PHP will do the rest for you. Cleans up the code and you no longer have to worry about remembering to remove includes that are no longer necessary (one of the many problems with includes).
A:
I don't have much PHP experience, although I'm using it at my current job. In general, I find that larger systems benefit from the readability and understandability that OO provides. But things like consistency (don't mix OO and non-OO) and your personal preferences (although only really on personal projects) are also important.
A:
I've learned never to use include in PHP except inside the core libraries that I use and one central include of these libraries (+ config) in the application. Everything else is handled by a global __autoload handler that can be configured to recognize the different classes needed. This can be done easily using appropriate naming conventions for the classes.
This is not only flexible but also quite efficient and keeps the architecture clean.
A:
Can you be a bit more specific? For the example you give you need to use include in both ways.
In case 1 you only include a file, in case 2 you need to include the class file (for instance user.class.php) to allow instantiation of the User class.
It depends how the rest of the application is built, is it OO? Use OO.
A:
Whether you do it in classes or in a more procedural style, you simply need to check to ensure that:
There is a session;
That the session is valid; and,
That the user in possession of the session has proper privileges.
You can encapsulate all three steps into one function (or a static method in a Session class might work). Try this:
class Session
{
const GUEST = 0;
const SUBSCRIBER = 1;
const ADMINISTRATOR = 2;
public static function Type()
{
session_start();
// Depending on how you use sessions on
// your site, you might just check for the
// existence of PHPSESSID. If you track
// every visitor with sessions, however, you
// might want to assign some separate unique
// number (that you can track in a DB) to
// authenticated sessions
        if (empty($_SESSION['uniqid']))
{
return Session::GUEST;
}
else
{
// For the best security, don't store the
// user's access permissions in the $_SESSION,
// but rather check against the DB. This will
// ensure that recently deleted or downgraded
// administrators will not be able to make use
// of a previous session.
            return THE_ACCESS_LEVEL_ACCORDING_TO_THE_DB;
}
}
}
// In your files that need to check for authentication (you
// could also do this in a controller if you're going MVC)
if(!(Session::Type() == Session::ADMINISTRATOR))
{
// Redirect them to wherever you want them to go instead,
// like a log in page or something like that.
}
| PHP includes vs OOP | I would like to have a reference for the pros and cons of using include files vs objects(classes) when developing PHP applications.
I know I would benefit from having one place to go for this answer...I have a few opinions of my own but I look forward to hearing others.
A Simple Example:
Certain pages on my site are only accessible to logged in users. I have two options for implementation (there are others but let's limit it to these two)
Create an authenticate.php file and include it on every page. It holds the logic for authentication.
Create a user object, which has an authenticate function, reference the object for authentication on every page.
Edit
I'd like to see some way to weigh the benefits of one over the other.
My current (and weak reasons) follow:
Includes - Sometimes a function is just easier/shorter/faster to call
Objects - Grouping of functionality and properties makes for easier longer-term maintenance.
Includes - Less code to write (no constructor, no class syntax); call me lazy, but this is true.
Objects - Force formality and a single approach to functions and creation.
Includes - Easier for a novice to deal with
Objects - Harder for novices, but frowned upon by professionals.
I look at these factors at the start of a project to decide if I want to do includes or objects.
Those are a few pros and cons off the top of my head.
| [
"These are not really opposite choices. You will have to include the checking code anyway. I read your question as procedural programming vs. OO programming.\nWriting a few lines of code, or a function, and including it in your page header was how things were done in PHP3 or PHP4. It's simple, it works (that's how we did it in osCommerce, for example, an eCommerce PHP application).\nBut it's not easy to maintain and modify, as many developers can confirm.\nIn PHP5 you'd write a user object which will carry its own data and methods for authentication. Your code will be clearer and easier to maintain as everything having to do with users and authentication will be concentrated in a single place.\n",
"While the question touches on a couple of very debatable issues (OOP, User authentication) I'll skip by those and second Konrad's comment about __autoload. Anyone who knows C/C++ knows how much of a pain including files can be. With autoload, a PHP5 addition, if you choose to use OOP (which I do almost exclusively) you only need use some standard file naming convention and (I would recommend) restricting a single class per file and PHP will do the rest for you. Cleans up the code and you no longer have to worry about remembering to remove includes that are no longer necessary (one of the many problems with includes).\n",
"I don't have much PHP experience, although I'm using it at my current job. In general, I find that larger systems benefit from the readability and understandability that OO provides. But things like consistency (don't mix OO and non-OO) and your personal preferences (although only really on personal projects) are also important.\n",
"I've learned never to use include in PHP except inside the core libraries that I use and one central include of these libraries (+ config) in the application. Everything else is handled by a global __autoload handler that can be configured to recognize the different classes needed. This can be done easily using appropriate naming conventions for the classes.\nThis is not only flexible but also quite efficient and keeps the architecture clean.\n",
"Can you be a bit more specific? For the example you give you need to use include in both ways.\nIn case 1 you only include a file, in case 2 you need to include the class file (for instance user.class.php) to allow instantiation of the User class.\nIt depends how the rest of the application is built, is it OO? Use OO.\n",
"Whether you do it in classes or in a more procedural style, you simply need to check to ensure that:\n\nThere is a session;\nThat the session is valid; and,\nThat the user in possession of the session has proper privileges.\n\nYou can encapsulate all three steps into one function (or a static method in a Session class might work). Try this:\nclass Session\n{\n const GUEST = 0;\n const SUBSCRIBER = 1;\n const ADMINISTRATOR = 2;\n\n public static function Type()\n {\n session_start();\n\n // Depending on how you use sessions on\n // your site, you might just check for the\n // existence of PHPSESSID. If you track\n // every visitor with sessions, however, you\n // might want to assign some separate unique\n // number (that you can track in a DB) to\n // authenticated sessions\n if(!$_SESSION['uniqid'])\n {\n return Session::GUEST;\n }\n else\n {\n // For the best security, don't store the\n // user's access permissions in the $_SESSION,\n // but rather check against the DB. This will\n // ensure that recently deleted or downgraded\n // administrators will not be able to make use\n // of a previous session.\n\n return THE_ACCESS_LEVEL_ACCORDING_TO_THE_DB\n }\n } \n}\n\n\n// In your files that need to check for authentication (you\n// could also do this in a controller if you're going MVC\n\nif(!(Session::Type() == Session::ADMINISTRATOR))\n{\n // Redirect them to wherever you want them to go instead,\n // like a log in page or something like that.\n}\n\n"
] | [
13,
5,
1,
1,
0,
0
] | [] | [] | [
"coding_style",
"php"
] | stackoverflow_0000022528_coding_style_php.txt |
Q:
How do I cluster an upload folder with ASP.Net?
We have a situation where users are allowed to upload content, and then separately make some changes, then submit a form based on those changes.
This works fine in a single-server, non-failover environment, however we would like some sort of solution for sharing the files between servers that supports failover.
Has anyone run into this in the past? And what kind of solutions were you able to develop? Obviously persisting to the database is one option, but we'd prefer to avoid that.
A:
In our scenario, we have a separate file server that both of our front-end app servers write to; that way either server has access to the same set of files.
A:
At a former job we had a cluster of web servers with an F5 load balancer in front of them. We had a very similar problem in that our applications allowed users to upload content which might include photos and such. These were legacy applications and we did not want to edit them to use a database, and a SAN solution was too expensive for our situation.
We ended up using a file replication service on the two clustered servers. This ran as a service on both machines using an account that had network access to paths on the opposite server. When a file was uploaded, this backend service sync'd the data in the file system folders making it available to be served from either web server.
Two of the products we reviewed were ViceVersa and PeerSync. I think we ended up using PeerSync.
A:
The best solution for this is usually to provide the shared area on some form of SAN, which will be accessible from all servers and contain failover.
This also has the benefit that you don't have to provide sticky load balancing, the upload can be handled by one server, and the edit by another.
A:
A shared SAN with failover is a great solution with a great (high) cost. Are there any similar solutions with failover at a reasonable cost? Perhaps something like DRBD for windows?
The problem with a simple shared filesystem is the lack of redundancy (what if the fileserver goes down)?
| How do I cluster an upload folder with ASP.Net? | We have a situation where users are allowed to upload content, and then separately make some changes, then submit a form based on those changes.
This works fine in a single-server, non-failover environment, however we would like some sort of solution for sharing the files between servers that supports failover.
Has anyone run into this in the past? And what kind of solutions were you able to develop? Obviously persisting to the database is one option, but we'd prefer to avoid that.
| [
"In our scenario, we have a separate file server that both of our front end app servers write to, that way you either server has access to the same sets of files.\n",
"At a former job we had a cluster of web servers with an F5 load balancer in front of them. We had a very similar problem in that our applications allowed users to upload content which might include photo's and such. These were legacy applications and we did not want to edit them to use a database and a SAN solution was too expensive for our situation.\nWe ended up using a file replication service on the two clustered servers. This ran as a service on both machines using an account that had network access to paths on the opposite server. When a file was uploaded, this backend service sync'd the data in the file system folders making it available to be served from either web server.\nTwo of the products we reviewed were ViceVersa and PeerSync. I think we ended up using PeerSync.\n\n",
"The best solution for this is usually to provide the shared area on some form of SAN, which will be accessible from all servers and contain failover.\nThis also has the benefit that you don't have to provide sticky load balancing, the upload can be handled by one server, and the edit by another.\n",
"A shared SAN with failover is a great solution with a great (high) cost. Are there any similar solutions with failover at a reasonable cost? Perhaps something like DRBD for windows?\nThe problem with a simple shared filesystem is the lack of redundancy (what if the fileserver goes down)?\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"asp.net",
"cluster_computing",
"failover",
"iis_6",
"windows_server_2003"
] | stackoverflow_0000022590_asp.net_cluster_computing_failover_iis_6_windows_server_2003.txt |
Q:
What are the main differences between programming for Windows XP and for Vista?
From a desktop application developer point of view, is there any difference between developing for Windows XP and developing for Windows Vista?
A:
User Interface
Looking at the Windows Vista User Experience Guidelines you can see that they have changed many UI elements, which you should be aware of. Some major things to take note of:
Larger icons
New font (which affects some custom UI consistency)
New dialog box features (task dialogs)
Altered common dialogs (like File Open, Save As, etc.)
Dialog text style and tone, and look and feel
New Aero Wizards
Redesigned toolbars
Better notification UI
New recommended method of including a search control
Glass
64-bit
Vista has a 64-bit edition, and although XP did too, your users are more likely to use Vista 64 than XP 64. Now you have to deal with:
Registry virtualization
Registry redirection (Wow6432Node)
Registry reflection
Digital signatures for kernel modules
MSI installers have new properties to deal with
UAC
User Account Control vastly affects the default permissions that your application has when interacting with the OS.
How UAC works and affects your application (also see the requirements doc)
Installers have to deal with UAC
New APIs
There are new APIs which are targeted at either new methods of application construction or allowing new functionality:
Cryptography API: Next Generation (CNG)
Extensible Application Markup Language (XAML)
Windows Communication Foundation (WCF)
Windows Workflow Foundation (WF)
And many more smaller ones
Installers
Because installations can only use common runtimes they install after a transaction has completed, custom actions will fail if your custom action dll requires the Visual C++ runtimes above the VS 2005 CRT (non-SP1).
A:
Do not ever assume your user has access to certain key areas of the disc (i.e. program files, windows directory etc). Instead the default user account will only be able to write to a small section of their application data. Also, they won't be able to write to global areas of the registry - only the current user section.
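If it helps, here is a rough C# sketch of the kind of per-user locations you should be targeting instead (the folder and registry key names are just placeholders made up for the example):
using System;
using System.IO;
using Microsoft.Win32;

class PerUserStorageExample
{
    static void Main()
    {
        // Per-user application data folder; no elevation needed to write here.
        string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
        string myFolder = Path.Combine(appData, @"MyCompany\MyApp"); // placeholder names
        Directory.CreateDirectory(myFolder);
        File.WriteAllText(Path.Combine(myFolder, "settings.txt"), "example setting");

        // Per-user registry hive (HKCU) instead of HKLM.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyCompany\MyApp"))
        {
            key.SetValue("LastRun", DateTime.Now.ToString());
        }
    }
}

Anything outside of those (Program Files, HKLM, the Windows directory) should be treated as read-only unless the user has explicitly elevated.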
You can of course elevate their privileges, but that in itself is a task.
Generally programming for Vista is the same as XP, it's just the new account restrictions you have to be wary of.
Have a look at this page with regards to making your application "UAC aware"
http://www.codeproject.com/KB/vista-security/MakingAppsUACAware.aspx
A:
There can be, but that's a conscious choice you make as the developer. You can use new Vista stuff, like UAC and CommandLinks and Aero and so forth. But you don't have to (even UAC can be programmed around -- just don't do anything that needs admin privileges). If you choose to ignore all of the Vista stuff, then there's absolutely no difference between the two.
If you do want to include that stuff in your app, it makes a difference. But I'd say not a huge one. And if you abstract away the differences (for example, write your own function that shows a TaskDialog for Vista, but which dumbs down the input you give it into a MesssageBox on XP), then you'll only be writing against your own code, and the differences will seem like almost nothing.
Also, a lot of Vista's new stuff (for example, UAC or Aero) is stuff that you worry about once, when you create the first piece of functionality that uses it, get it working, and then never think about again while you're developing the app.
A:
By far the most painful part of moving an application from XP to Vista (from my point of view) is dealing with the numerous services and IPv6 stuff that uses ports which were previously free, and dealing with the Wireless Provisioning -> Native WiFi transition.
The UAC stuff is basically a moot point; there is very little the application developer needs to do.
| What are the main differences between programming for Windows XP and for Vista? | From a desktop application developer point of view, is there any difference between developing for Windows XP and developing for Windows Vista?
| [
"User Interface\nLooking at the Windows Vista User Experience Guidelines you can see that they have changed many UI elements, which you should be aware of. Some major things to take note of:\n\nLarger icons\nNew font (Which affects some custom UI constistency)\nNew dialog box features (task dialogs)\nAltered common dialogs (like File Open, Save As, etc.)\nDialog text style and tone, and look and feel\nNew Aero Wizards\nRedesigned toolbars\nBetter notification UI\nNew recommended method of including a search control\nGlass\n\n64-bit\nVista has a 64-bit edition, and although XP did too, your users are more likely to use Vista 64 than XP 64. Now you have to deal with:\n\nRegistry virtualization\nRegistry redirection (Wow6432Node)\nRegistry reflection\nDigital signatures for kernel modules\nMSI installers have new properties to deal with\n\nUAC\nUser Account Control vastly affects the default permissions that your application has when interacting with the OS.\n\nHow UAC works and affects your application (also see the requirements doc)\nInstallers have to deal with UAC\n\nNew APIs\nThere are new APIs which are targeted at either new methods of application construction or allowing new functionality:\n\nCryptography API: Next Generation (CNG)\nExtensible Application Markup Language (XAML)\nWindows Communication Foundation (WCF)\nWindows Workflow Foundation (WF)\nAnd many more smaller ones\n\nInstallers\nBecause installations can only use common runtimes they install after a transaction has completed, custom actions will fail if your custom action dll requires the Visual C++ runtimes above the VS 2005 CRT (non-SP1).\n",
"Do not ever assume your user has access to certain key areas of the disc (i.e. program files, windows directory etc). Instead the default user account will only be able to write to a small section of their application data. Also, they won't be able to write to global areas of the registry - only the current user section.\nYou can of course elevate their privileges, but that in itself is a task.\nGenerally programming for Vista is the same as XP, it's just the new account restrictions you have to be wary of.\nHave a look at this page with regards to making your application \"UAC aware\"\nhttp://www.codeproject.com/KB/vista-security/MakingAppsUACAware.aspx\n",
"There can be, but that's a conscious choice you make as the developer. You can use new Vista stuff, like UAC and CommandLinks and Aero and so forth. But you don't have to (even UAC can be programmed around -- just don't do anything that needs admin privileges). If you choose to ignore all of the Vista stuff, then there's absolutely no difference between the two.\nIf you do want to include that stuff in your app, it makes a difference. But I'd say not a huge one. And if you abstract away the differences (for example, write your own function that shows a TaskDialog for Vista, but which dumbs down the input you give it into a MesssageBox on XP), then you'll only be writing against your own code, and the differences will seem like almost nothing. \nAlso, a lot of Vista's new stuff (for example, UAC or Aero) is stuff that you worry about once, when you create the first piece of functionality that uses it, get it working, and then never think about again while you're developing the app.\n",
"By far the most painful part of moving an application from XP to Vista (from my point of view) is dealing with the numerous services and IPv6 stuff that uses ports which were previously free, and dealing with the Wireless Provisioning -> Native WiFi transition.\nThe UAC stuff is basically a moot point; there is very little the application developer needs to do.\n"
] | [
20,
5,
1,
0
] | [] | [] | [
"windows_vista",
"windows_xp"
] | stackoverflow_0000022674_windows_vista_windows_xp.txt |
Q:
Sharepoint COMException 0x81020037
I am working on a SharePoint application that supports importing multiple documents in a single operation. I also have an ItemAdded event handler that performs some basic maintenance of the item metadata. This event fires for both imported documents and manually created ones. The final piece of the puzzle is a batch operation feature that I implemented to kick off a workflow and update another metadata field.
I am able to cause a COMException 0x81020037 by extracting the file data of an SPListItem. This file is just an InfoPath form/XML document. I am able to modify the XML and successfully push it back into the SPListItem. When I fire off the custom feature immediately afterwards and modify metadata, it occasionally causes the COM error.
The error message basically indicates that the file was modified by another thread. It would seem that the ItemAdded event is still writing the file back to the database while the custom feature is changing metadata. I have tried putting in delays and error catching loops to try to detect that the SPListItem is safe to modify with little success.
Is there a way to tell if another thread has a lock on a document?
A:
Sometimes I see the ItemAdded or ItemUpdated firing twice for a single operation.
You can try to put a breakpoint in the ItemAdded() method to confirm that.
The solution in my case was to single thread the ItemAdded() method:
private static object myLock = new object();
public override void ItemAdded(SPItemEventProperties properties) {
    if (System.Threading.Monitor.TryEnter(myLock, TimeSpan.FromSeconds(30)))
    {
        try
        {
            //do your stuff here.
        }
        finally
        {
            // Always release the lock, even if the work above throws.
            System.Threading.Monitor.Exit(myLock);
        }
    }
}
A:
I'll have to look into that and get back to you. The problem on my end seems to be that there is code running in a different class, in a different feature, being controlled by a different thread, all of which are trying to access the same record.
I am trying to avoid using a fixed delay. With any threading issue, there is the pathological possibility that one thread can delay or block beyond what we expect. With deployments on different server hardware with different loads, this is a very real possibility. On the other end of the spectrum, even if I were to go with a delay, I don't want it to be very high, especially not 30 seconds. My client will be importing tens of thousands of documents, and a delay of any significant length will cause the import to take literally all day.
| Sharepoint COMException 0x81020037 | I am working on a SharePoint application that supports importing multiple documents in a single operation. I also have an ItemAdded event handler that performs some basic maintenance of the item metadata. This event fires for both imported documents and manually created ones. The final piece of the puzzle is a batch operation feature that I implemented to kick off a workflow and update another metadata field.
I am able to cause a COMException 0x81020037 by extracting the file data of an SPListItem. This file is just an InfoPath form/XML document. I am able to modify the XML and successfully push it back into the SPListItem. When I fire off the custom feature immediately afterwards and modify metadata, it occasionally causes the COM error.
The error message basically indicates that the file was modified by another thread. It would seem that the ItemAdded event is still writing the file back to the database while the custom feature is changing metadata. I have tried putting in delays and error catching loops to try to detect that the SPListItem is safe to modify with little success.
Is there a way to tell if another thread has a lock on a document?
| [
"Sometimes I see the ItemAdded or ItemUpdated firing twice for a single operation. \nYou can try to put a breakpoint in the ItemAdded() method to confirm that.\nThe solution in my case was to single thread the ItemAdded() method:\nprivate static object myLock = new object();\npublic override void ItemAdded(SPItemEventProperties properties) {\n if (System.Threading.Monitor.TryEnter(myLock, TimeSpan.FromSeconds(30))\n {\n //do your stuff here.\n System.Threading.Monitor.Exit(myLock);\n }\n}\n\n",
"I'll have to look into that and get back to you. The problem on my end seems to be that there is code running in a different class, in a different feature, being controlled by a different thread, all of which are trying to access the same record.\nI am trying to avoid using a fixed delay. With any threading issue, there is the pathological possibility that one thread can delay or block beyond what we expect. With deployments on different server hardware with different loads, this is a very real possibility. On the other end of the spectrum, even if I were to go with a delay, I don't want it to be very high, especially not 30 seconds. My client will be importing tens of thousands of documents, and a delay of any significant length will cause the import to take literally all day.\n"
] | [
1,
0
] | [] | [] | [
"com",
"multithreading",
"sharepoint"
] | stackoverflow_0000022354_com_multithreading_sharepoint.txt |
Q:
What strategies have you employed to improve web application performance?
Any personal experience in overcoming web application performance hurdles?
Any recommended strategies for improving the performance of a data-driven web application?
My development team works on a web application (JSP reports, HTML, JavaScript) that uses an Oracle database (PL/SQL). The key functionality the application delivers is in reporting, where a user can get PDFs of reports at a high level and drill down to lower levels of supporting details.
As the number of supporting detail records has grown into the millions, the performance of the system has significantly degraded. Based on our current analysis of the metrics, the bottleneck seems to be in the logic hitting the DB and the DB performance. Changing the DB model and re-doing some of the server side logic is currently being explored.
Partitioning, indexing, explain plans, and running statistics are things that have been done on the DB side to try to help improve performance. While they've helped, they haven't solved the issue satisfactorily. The toughest part in analyzing performance data is that the database and web servers are remotely administered by a different part of the IT organization, so the developers don't have regular, full access to see what's going on (especially in the production environment, which is not mirrored exactly in any other development/testing environment).
A:
While my answer may not contain any concrete steps to help, this is always where I start.
First thing I would do is try to throw away all of your assumptions about what the trouble is and take steps to install metrics everywhere you can. Let the metrics guide you rather than your intuition. I've chased many, many, many white rabbits going on a hunch... they've let me down more times than they've been right.
A:
Have you considered building your data ahead of time? In other words are there groups of data that are requested again and again? If so have them ready before the user asks. I'm not exactly talking about caching, but I think that is part of the equation.
It might be worth it to take a step back from the code and examine the usage patterns of the system. For example, if you are showing people monthly inventory or sales information do they look at it only at the end of the month? If so just build the data on the last day and store it. If they look at it daily, maybe try building each previous day's results and storing them so you avoid the calculation. I guess ultimately I am pushing you into a Dynamic Programming solution; if you already know an answer, don't solve it again.
A:
Have you checked this out?
Best practices for making web pages fast from Yahoo!'s Exceptional Performance team
If you really are having trouble at the backend, this won't help. But we used their advice to great effect to make our site faster, and there is still more to do.
Also use the YSlow add-on for Firebug. You may be surprised when you see where the actual time is being taken up.
A:
As Webjedi says, metrics are your friend.
Also look at your stack and see where there are opportunities for caching - then employ mercilessly wherever possible!
A:
As I said in another question:
Use a profiler. Yes they cost money, and using them can occasionally be a bit awkward, but they do provide you with a great deal more real evidence rather than guesswork.
Human beings are universally bad at guessing where performance bottlenecks are. It just seems to be something our brains aren't built to do very well. It may seem obvious, and you may have great ideas about what the problem is, but the real world often turns out to be doing something different. And optimising the wrong part of code means, at best, lots of work for minimal benefit. More often it makes things slower, and sometimes it breaks things entirely. So before you make any changes for the sake of optimisation, you should always have real evidence from a profiler or other accurate tool.
A:
Not all profilers cost (extra) money. For .Net, I'm successfully using an old build of NProf (currently abandoned but it still works for me) for profiling my ASP.Net applications. For SQL Server, the query profiler is part of the package. There's also the CLR Profiler from MS, but I've never been able to get it to work successfully.
That being said, profilers are definitely the way to go. That way you can see where your program is spending most of its time, and not focus on things that you think are slow. Plus it means you don't have to write anything in your code to actually record the metrics.
As I hinted at the beginning, there are different types of profilers. The three I find most useful are: application profilers, which let you see which functions you actually spend most of your time in; SQL profilers, which let you see how long your queries take to run; and memory profilers, which show you what types of objects are using up your memory. All three of these are really useful, and although you won't use them every day, the times you do use them will save you a lot of headache.
| What strategies have you employed to improve web application performance? |
Any personal experience in overcoming web application performance hurdles?
Any recommended strategies for improving the performance of a data-driven web application?
My development team works on a web application (JSP reports, HTML, JavaScript) that uses an Oracle database (PL/SQL). The key functionality the application delivers is in reporting, where a user can get PDFs of reports at a high level and drill down to lower levels of supporting details.
As the number of supporting detail records has grown into the millions, the performance of the system has significantly degraded. Based on our current analysis of the metrics, the bottleneck seems to be in the logic hitting the DB and the DB performance. Changing the DB model and re-doing some of the server side logic is currently being explored.
Partitioning, indexing, explain plans, and running statistics are things that have been done on the DB side to try to help improve performance. While they've helped, they haven't solved the issue satisfactorily. The toughest part in analyzing performance data is that the database and web servers are remotely administered by a different part of the IT organization, so the developers don't have regular, full access to see what's going on (especially in the production environment, which is not mirrored exactly in any other development/testing environment).
| [
"While my answer may not contain any concrete steps to help this is always where I start.\nFirst thing I would do is try to throw away all of your assumptions about what the trouble is and take steps to install metrics everywhere you can. Let the metrics guide you rather than your intuition. I've chased many, many, many white rabbits going on a hunch...the let me down more times than they've been right.\n",
"Have you considered building your data ahead of time? In other words are there groups of data that are requested again and again? If so have them ready before the user asks. I'm not exactly talking about caching, but I think that is part of the equation. \nIt might be worth it to take a step back from the code and examine the usage patterns of the system. For example, if you are showing people monthly inventory or sales information do they look at it at only at the end of the month? If so just build the data on the last day and store it. If they look at it daily, maybe try building each previous days results and storing the results and avoid the calculation. I guess ultimately I am pushing you in to a Dynamic Programming solution; if you know an answer don't solve it again. \n",
"Have you checked this out?\nBest practices for making web pages fast from Yahoo!'s Exceptional Performance team\nIf you really are having trouble at the backend, this won't help. But we used their advice to great effect to make our site faster, and there is still more to do.\nAlso use the YSlow add-on for Firebug. You may be surprised when you see where the actual time is being taken up.\n",
"As Webjedi says, metrics are your friend.\nAlso look at your stack and see where there are opportunities for caching - then employ mercilessly wherever possible!\n",
"As I said in another question:\n\nUse a profiler. Yes they cost money, and using them can occasionally be a bit awkward, but they do provide you with a great deal more real evidence rather than guesswork.\nHuman beings are universally bad at guessing where performance bottlenecks are. It just seems to be something our brains aren't build to do very well. It may seem obvious, you may have great ideas about what the problem is, but the real world often turns out to be doing something different. And optimising the wrong part of code means, at best, lots of work for minimal benefit. More often it makes things slower, and sometimes it breaks things entirely. So before you make any changes for the sake of optimisation, you should always have real evidence from a profiler or other accurate tool.\n\n",
"Not all profilers cost (extra) money. For .Net, I'm successfully using an old build of NProf (currently abandoned but it still works for me) for profiling my ASP.Net applications. For SQL Server, the query profiler is part of the package. There's also the CLF Profiler from MS but I've never been able to get it to work successfully.\nThat being said, profilers are definitely the way to go. That way you can see where your program is spending most of its time, and not focus on things that you think are slow. Plus it means you don't have to write anything in your code to actually record the metrics. \nAs I hinted to at the beginning, there are different types of profilers. The three I find most useful are application profilers, which let you see which functions you actually spend most of your time in. The second is SQL profilers that let you see how long your queries take to run. The third is memory profilers, which help to show you what type of objects your memory is being used up by. All three of these are really useful, and although you won't use them every day, the times you do use them will save you a lot of headache.\n"
] | [
6,
2,
2,
1,
1,
1
] | [] | [] | [
"oracle",
"performance",
"web_applications"
] | stackoverflow_0000022704_oracle_performance_web_applications.txt |
Q:
ASP.NET - Building your own routing system
In a recent project, I built my own MVC framework in PHP. One of the things I implemented was a routing system. I used Apache's mod_rewrite to send all requests to index.php, and then parsed the URI to extract information and route the request.
I'm dabbling in ASP.NET now, and I'm wondering if/how I might perform something similar. Is there a way to route all requests (similar to the way WordPress does it) to one page where central route processing is performed? I'm aware of the MVC framework for ASP.NET, but I'd like to take a stab at this myself as I'm tinkering around and learning.
EDIT:
BTW, my hosting provider runs IIS 6
A:
This is going to be a long answer, because I want to make sure you are fully aware of all the ways you can accomplish what you want to do.
The routing engine that powers the ASP.NET MVC Framework will work with the traditional ASP.NET Framework. You can take advantage of using the RouteTable and assigning routes, just like you would in an ASP.NET MVC application. You just don't get the MVC portion in traditional ASP.NET sites. That was a huge enhancement for the ASP.NET Framework and it was great to see them reuse that code and make it work in both frameworks. If you want to learn more about this, check out ScottGu's post and scroll down to URL Routing Improvements. Also here is a reference on how to use the System.Web.Routing in WebForms by Phil Haack.
Now, if you still want to write your own, you will need to learn the ASP.NET HTTP pipeline and how to implement the IHttpModule and the IHttpHandler interfaces to create your own HttpModule or HttpHandler class to handle your routing. These interfaces are the key to writing your own routing engine. To help put those interfaces in a working example, I couldn't recommend this MSDN article enough. It shows you how to do it with either interface and explains the differences when creating your own routing/url rewriting engine.
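If you go the roll-your-own route, a bare-bones IHttpModule that pushes friendly URLs onto a single processing page might look roughly like this (the ~/pages/ convention and the Default.aspx target are invented for the example, and the module still has to be registered under httpModules in web.config):
using System;
using System.Web;

public class SimpleRoutingModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string path = app.Request.AppRelativeCurrentExecutionFilePath;

        // Example convention: ~/pages/whatever -> ~/Default.aspx?route=whatever
        if (path.StartsWith("~/pages/", StringComparison.OrdinalIgnoreCase))
        {
            string route = path.Substring("~/pages/".Length);
            app.Context.RewritePath("~/Default.aspx", string.Empty, "route=" + HttpUtility.UrlEncode(route));
        }
    }

    public void Dispose()
    {
    }
}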
Now, if you find that this might be too much for you, there are third-party libraries you can use from people who have already written a routing/url rewriting engine in .NET. Here is a question that I saw not too long ago asking "What Url rewriter do you use for ASP.Net?" right here on SO.
| ASP.NET - Building your own routing system | In a recent project, I built my own MVC framework in PHP. One of the things I implemented was a routing system. I used Apache's mod_rewrite to send all requests to index.php, and then parsed the URI to extract information and route the request.
I'm dabbling in ASP.NET now, and I'm wondering if/how I might perform something similar. Is there a way to route all requests (similar to the way WordPress does it) to one page where central route processing is performed? I'm aware of the MVC framework for ASP.NET, but I'd like to take a stab at this myself as I'm tinkering around and learning.
EDIT:
BTW, my hosting provider runs IIS 6
| [
"This is going to be a long answer, because I want to make sure you are fully aware of all the ways you can accomplish what you want to do.\nThe routing engine that powers the ASP.NET MVC Framework will work with the traditional ASP.NET Framework. You can take advantage of using the RouteTable and assigning routes, just like you would in an ASP.NET MVC application. You just don't get the MVC portion in traditional ASP.NET sites. That was a huge enhancement for the ASP.NET Framework and it was great to see them reuse that code and make it work in both frameworks. If you want to learn more about this, check out ScottGu's post and scroll down to URL Routing Improvements. Also here is a reference on how to use the System.Web.Routing in WebForms by Phil Haack.\nNow, if you still want to write you own. You will need to learn the ASP.NET HTTP pipeline and how to implement the IHttpModule and the IHttpHandler interfaces to create your own HttpModule or HttpHandler class to handle your routing. These interfaces are the key in writing your own routing engine. To help put those interfaces in a working example, I couldn't recommend this MSDN article enough. It shows you how to with either interface and explains the differences when creating your own routing/url rewriting engine.\nNow, if you find out that this might be to much for you. There are third party libraries you can use of people who already wrote a routing/url rewriting engine in .NET. Here is a question that I saw not to long ago asking \"What Url rewriter do you use for ASP.Net?\" right here on SO.\n"
] | [
6
] | [] | [] | [
"asp.net",
"routing",
"url_rewriting"
] | stackoverflow_0000022869_asp.net_routing_url_rewriting.txt |
Q:
ASP.Net: How to do pagination with a Repeater?
I'm using the Repeater control on my site to display data from the database. I need to do pagination ("now displaying page 1 of 10", 10 items per page, etc) but I'm not sure I'm going about it the best way possible.
I know the Repeater control doesn't have any built-in pagination, so I'll have to make my own. Is there a way to tell the DataSource control to return rows 10-20 of a much larger result set? If not, how do I write that into a query (SQL Server 2005)? I'm currently using the TOP keyword to only return the first 10 rows, but I'm not sure how to display rows 10-20.
A:
You have to use the PagedDataSource, it allows you to turn a standard data source into one that can be paged. Here's an example article
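The gist of it, as a rough sketch (the control names, page size and the fake GetItems() data source below are just stand-ins for whatever you actually have):
using System.Collections.Generic;
using System.Web.UI;
using System.Web.UI.WebControls;

public class ReportPage : Page
{
    // These would normally come from the .aspx markup.
    protected Repeater rptItems;
    protected Label lblStatus;

    // Stand-in for whatever actually loads your data.
    private List<string> GetItems()
    {
        List<string> items = new List<string>();
        for (int i = 1; i <= 95; i++) items.Add("Item " + i);
        return items;
    }

    protected void BindPage(int pageIndex)
    {
        PagedDataSource pds = new PagedDataSource();
        pds.DataSource = GetItems();
        pds.AllowPaging = true;
        pds.PageSize = 10;
        pds.CurrentPageIndex = pageIndex;   // zero-based

        rptItems.DataSource = pds;
        rptItems.DataBind();

        lblStatus.Text = string.Format("Now displaying page {0} of {1}",
                                       pageIndex + 1, pds.PageCount);
    }
}

You would keep the current page index in ViewState or a query string parameter and rebind when the user clicks next/previous.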
A:
This isn't a way to page the data, but have you looked into the ListView control? It gives the flexibility of repeater / data list but with built in paging like the grid view.
And for paging in sql, you would want to do something like this
A:
This was answered here.
| ASP.Net: How to do pagination with a Repeater? | I'm using the Repeater control on my site to display data from the database. I need to do pagination ("now displaying page 1 of 10", 10 items per page, etc) but I'm not sure I'm going about it the best way possible.
I know the Repeater control doesn't have any built-in pagination, so I'll have to make my own. Is there a way to tell the DataSource control to return rows 10-20 of a much larger result set? If not, how do I write that into a query (SQL Server 2005)? I'm currently using the TOP keyword to only return the first 10 rows, but I'm not sure how to display rows 10-20.
| [
"You have to use the PagedDataSource, it allows you to turn a standard data source into one that can be paged. Here's an example article\n",
"This isn't a way to page the data, but have you looked into the ListView control? It gives the flexibility of repeater / data list but with built in paging like the grid view.\nAnd for paging in sql, you would want to do something like this\n",
"This was answered here.\n"
] | [
3,
2,
0
] | [] | [] | [
"asp.net",
"sql_server"
] | stackoverflow_0000022981_asp.net_sql_server.txt |
Q:
What to use Windows CardSpace for?
I'm doing some funky authentication work (and yes, I know, open-id is awesome, but then again my open-id doesn't work right at this moment!).
Stumbling across Windows CardSpace I was wondering if anyone has used this in a real product-system. If you have used it, what were the pros and cons for you? And how can I use it in my open-id?
A:
Umm no you don't; you can accept information cards on a web site using a cheap and cheerful certificate (but not self signed) or no certificate at all.
And yes, I've used it as part of a production system which grew out of a proof of concept I did at Microsoft.
Cons: If you don't have an EV SSL certificate you get warnings. The code for parsing a card is incomplete at best (you have to hack it around for no-SSL), and you have to explain to users what one is.
Pros: Well that's more interesting; I was using managed cards and issuing them and then having 3rd parties use those to check claims; but for self-issued cards, well, it's stronger than username/password and doesn't have the same vulnerabilities OpenID has.
| What to use Windows CardSpace for? | I'm doing some funky authentication work (and yes, I know, open-id is awesome, but then again my open-id doesn't work right at this moment!).
Stumbling across Windows CardSpace I was wondering if anyone has used this in a real product-system. If you have used it, what were the pros and cons for you? And how can I use it in my open-id?
| [
"Umm no you don't; you can accept information cards on a web site using a cheap and cheerful certificate (but not self signed) or no certificate at all.\nAnd yes, I've used it as part of a production system which grew out of a proof of concept I did at Microsoft.\nCons: If you don't have an EV SSL certificate you get warnings. The code for parsing a card is incomplete at best (you have to hack it around for no-SSL), you have to explain to users what one is.\nPros: Well that's more interesting; I was using managed cards and issuing them and then having 3rd parties use those to check claims; but for self issued cards; well, it's stronger than username password and doesn't have the same vulnerabilities OpenID has.\n"
] | [
2
] | [] | [] | [
"authentication",
"security",
"windows"
] | stackoverflow_0000019956_authentication_security_windows.txt |
Q:
Does anybody know of existing code to read a mork file (Thunderbird Address Book)?
I have the need to read the Thunderbird address book on the fly. It is stored in a file format called Mork. Not a pleasant file format to read. I found a 1999 article explaining the file format. I would love to know if someone already has gone through this process and could make the code available. I found mork.pl by Jamie Zawinski (he worked on Netscape Navigator), but I was hoping for a .NET solution.
I'm hoping StackOverflow will come to the rescue, because this just seems like a waste of my time to write something to read this file format when it should be so simple.
I love the comments that Jamie put in his perl script. Here is my favorite part:
# Let me make it clear that McCusker is a complete barking lunatic.
# This is just about the stupidest file format I've ever seen.
A:
The Beagle search engine had code to parse Mork files. It's not the most memory efficient solution, but it worked and could be a useful starting point. Here's a link to the file:
http://svn.gnome.org/viewvc/beagle/tags/BEAGLE_0_2_18/Util/Mork.cs?view=markup
(These days Beagle doesn't use this parser anymore; we took the easier (and supported) path of writing a Thunderbird extension which just sent the data to Beagle itself. Has the disadvantage of not working while Thunderbird is closed, but has the advantage of not instilling the desire to bash your head in with the nearest blunt instrument.)
| Does anybody know of existing code to read a mork file (Thunderbird Address Book)? | I have the need to read the Thunderbird address book on the fly. It is stored in a file format called Mork. Not a pleasant file format to read. I found a 1999 article explaining the file format. I would love to know if someone already has gone through this process and could make the code available. I found mork.pl by Jamie Zawinski (he worked on Netscape Navigator), but I was hoping for a .NET solution.
I'm hoping StackOverflow will come to the rescue, because this just seems like a waste of my time to write something to read this file format when it should be so simple.
I love the comments that Jamie put in his perl script. Here is my favorite part:
# Let me make it clear that McCusker is a complete barking lunatic.
# This is just about the stupidest file format I've ever seen.
| [
"The Beagle search engine had code to parse Mork files. It's not the most memory efficient solution, but it worked and could be a useful starting point. Here's a link to the file:\nhttp://svn.gnome.org/viewvc/beagle/tags/BEAGLE_0_2_18/Util/Mork.cs?view=markup\n(These days Beagle doesn't use this parser anymore; we took the easier (and supported) path of writing a Thunderbird extension which just sent the data to Beagle itself. Has the disadvantage of not working while Thunderbird is closed, but has the advantage of not instilling the desire to bash your head in with the nearest blunt instrument.)\n"
] | [
4
] | [] | [] | [
".net",
"file_format",
"mork",
"thunderbird"
] | stackoverflow_0000022943_.net_file_format_mork_thunderbird.txt |
Q:
XML Collection Best Practices
I'm creating an application that will store a hierarchical collection of items in an XML file and I'm wondering about the industry standard for storing collections in XML. Which of the following two formats is preferred? (If there is another option I'm not seeing, please advise.)
Option A
<School>
<Student Name="Jack" />
<Student Name="Jill" />
<Class Name="English 101" />
<Class Name="Math 101" />
</School>
Option B
<School>
<Students>
<Student Name="Jack" />
<Student Name="Jill" />
</Students>
<Classes>
<Class Name="English 101" />
<Class Name="Math 101" />
</Classes>
</School>
A:
I'm no XML expert, but I find Option B to be more human readable, and I think it's just as machine readable as Option A. I believe that XML is designed to be both human and machine readable, so I would go for Option B myself.
I just realized something else after Ryan Farley's post. If the Students or Classes section becomes too big and must be moved to another XML file, it seems like it would be easier to copy the node and create a new XML file out of that node with Option B.
A:
Definitely - Option B.
I wouldn't mix students and classes in the XML just the same way that I wouldn't mix students and classes in the same table in a database.
A:
Option B, absolutely. When there's a logical grouping of similar items, it should have a parent item. That way, my parser won't have to step through all 500 student records checking to see if there are class records mixed in.
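To illustrate, a small C# sketch of what consuming Option B looks like (school.xml is just a made-up file containing that layout):
using System;
using System.Xml;

class SchoolReader
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load("school.xml"); // made-up file using the Option B layout

        // With Option B each group can be addressed directly...
        XmlNodeList students = doc.SelectNodes("/School/Students/Student");
        XmlNodeList classes = doc.SelectNodes("/School/Classes/Class");

        Console.WriteLine("{0} students, {1} classes", students.Count, classes.Count);

        // ...rather than scanning a mixed list of children under <School>.
        foreach (XmlNode student in students)
        {
            Console.WriteLine(student.Attributes["Name"].Value);
        }
    }
}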
A:
Another compelling reason to use option B is error checking. If the original file is modified outside an XML application, or if no XSD schema is applied, there could be the case where you have an uneven number of students and classes.
At least if you have the students and classes grouped together, you will easily be able to tell if each record is complete, independently of any other record.
| XML Collection Best Practices | I'm creating an application that will store a hierarchical collection of items in an XML file and I'm wondering about the industry standard for storing collections in XML. Which of the following two formats is preferred? (If there is another option I'm not seeing, please advise.)
Option A
<School>
<Student Name="Jack" />
<Student Name="Jill" />
<Class Name="English 101" />
<Class Name="Math 101" />
</School>
Option B
<School>
<Students>
<Student Name="Jack" />
<Student Name="Jill" />
</Students>
<Classes>
<Class Name="English 101" />
<Class Name="Math 101" />
</Classes>
</School>
| [
"I'm no XML expert, but I find Option B to be more human readable, and I think it's just as machine readable as Option A. I believe that XML is designed to be both human and machine readable, so I would go for Option B myself.\n\nI just realized something else after Ryan Farley's post. If the Students or Classes section becomes too big and must be moved to another XML file, it seems like it would be easier to copy the node and create a new XML file out of that node with Option B.\n",
"Definitely - Option B. \nI wouldn't mix students and classes in the XML just the same way that I wouldn't mix students and classes in the same table in a database. \n",
"Option B, absolutely. When there's a logical grouping of similar items, it should have a parent item. That way, my parser won't have to step through all 500 student records checking to see if there are class records mixed in.\n",
"Another compelling reason to use option B is error checking. If the original file is modified outside an XML application, or if no XSD schema is applied, there could be the case where you have an uneven number of students and classes.\nAt least if you have the students and classes grouped together, you will easily be able to tell if each record is complete, independently of any other record.\n"
] | [
4,
4,
2,
2
] | [] | [] | [
"xml"
] | stackoverflow_0000023064_xml.txt |
Q:
Acts-as-readable Rails plugin Issue
I'm using Intridea's Acts as Readable Rails plugin for a messaging system I'm currently building.
I've defined my message class accordingly:
class Post < ActiveRecord::Base
  acts_as_readable
end
And everything seems to be working according to plan, but when trying to make the app show unread messages in my message view, I run into problems.
Their example:
bob = User.find_by_name("bob")
bob.readings # => []
Post.find_unread_by(bob) # => [<Post 1>,<Post 2>,<Post 3>...]
Post.find_read_by(bob) # => []
Post.find(1).read_by?(bob) # => false
Post.find(1).read_by!(bob) # => <Reading 1>
Post.find(1).read_by?(bob) # => true
Post.find(1).users_who_read # => [<User bob>]
Post.find_unread_by(bob) # => [<Post 2>,<Post 3>...]
Post.find_read_by(bob) # => [<Post 1>]
bob.readings # => [<Reading 1>]
So it seems that if I wanted to list the number of unread messages sitting in a mailbox (for example Inbox (39) ), I should be able to do something like:
<%= Post.find_unread_by(current_user).count %>
But to no avail. I always seem to get stuck on the simple view issues after everything's set.
Any ideas?
A:
The following will work
<%= Post.find_unread_by(current_user).size %>
or
<%= Post.find_unread_by(current_user).length %>
However if you check your development.log you should see that it gets the unread count by
Retrieving all the posts
Retrieving all the posts read by the user
Removing all of 2. from 1. in ruby
This will be very bad performance wise with lots of posts.
A better way would be to retrieve the posts read by the current user and then use ActiveRecord::Calculations to get a count without retrieving all the posts in the database
Post.count(:conditions => [ "id NOT IN (?)", Post.find_read_by(current_user)])
This should go into your Post model to follow best practices of not having finders in the view or controller
Post.rb
def self.unread_post_count_for_user(user)
count(:conditions => [ "id NOT IN (?)", Post.find_read_by(user)])
end
Then your view will just be
<%= Post.unread_post_count_for_user(current_user) %>
| Acts-as-readable Rails plugin Issue | I'm using Intridea's Acts as Readable Rails plugin for a messaging system I'm currently building.
I've defined my message class accordingly:
class Post < ActiveRecord::Base
  acts_as_readable
end
And everything seems to be working according to plan, but when trying to make the app show unread messages in my message view, I run into problems.
Their example:
bob = User.find_by_name("bob")
bob.readings # => []
Post.find_unread_by(bob) # => [<Post 1>,<Post 2>,<Post 3>...]
Post.find_read_by(bob) # => []
Post.find(1).read_by?(bob) # => false
Post.find(1).read_by!(bob) # => <Reading 1>
Post.find(1).read_by?(bob) # => true
Post.find(1).users_who_read # => [<User bob>]
Post.find_unread_by(bob) # => [<Post 2>,<Post 3>...]
Post.find_read_by(bob) # => [<Post 1>]
bob.readings # => [<Reading 1>]
So it seems that if I wanted to list the number of unread messages sitting in a mailbox (for example Inbox (39) ), I should be able to do something like:
<%= Post.find_unread_by(current_user).count %>
But to no avail. I always seem to get stuck on the simple view issues after everything's set.
Any ideas?
| [
"The following will work\n<%= Post.find_unread_by(current_user).size %>\n\nor\n<%= Post.find_unread_by(current_user).length %>\n\nHowever if you check your development.log you should see that it gets the unread count by\n\nRetrieving all the posts\nRetrieving all the posts read by the user\nRemoving all of 2. from 1. in ruby\n\nThis will be very bad performance wise with lots of posts.\nA better way would be to retrieve the posts read by the current user and then use ActiveRecord::Calculations to get a count without retrieving all the posts in the database\nPost.count(:conditions => [ \"id NOT IN (?)\", Post.find_read_by(current_user)])\n\nThis should go into your Post model to follow best practices of not having finders in the view or controller\nPost.rb\ndef self.unread_post_count_for_user(user)\n count(:conditions => [ \"id NOT IN (?)\", Post.find_read_by(user)])\nend\n\nThen your view will just be\n<%= Post.unread_post_count_for_user(current-user) %>\n\n"
] | [
11
] | [] | [] | [
"plugins",
"ruby",
"ruby_on_rails"
] | stackoverflow_0000022980_plugins_ruby_ruby_on_rails.txt |
Q:
How to process Excel files stored in an image data type column using SSIS package?
I have a .NET webforms front end that allows admin users to upload two .xls files for offline processing. As these files will be used for validation (and aggregation) I store these in an image field in a table.
My ultimate goal is to create an SSIS package that will process these files offline. Does anyone know how to use SSIS to read a blob from a table into its native (in this case .xls) format for use in a Data Flow task?
A:
In my (admittedly limited) experience with SSIS, it is quite good at rapidly getting something up and running, but frustratingly limited in getting something that "feels" like the most elegant, efficient solution to a programmer.
Since the Excel Source Editor seems to take only files as input, you need to give it a file or reimplement its functionality in code that can take a blob. I understand that this is unsatisfying, but in the end, this is a time saving tool.
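One way to take the "give it a file" route is to stage the blob to a temporary .xls file from a Script Task (or a small pre-processing step) before the Data Flow runs, then point the Excel Source at that path. A rough sketch, assuming an image/varbinary column; the table, column and variable names are only illustrative, and connectionString/uploadId are assumed to come from package variables:
using System.Data.SqlClient;
using System.IO;

string tempPath = Path.Combine(Path.GetTempPath(), "upload.xls");
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT FileData FROM UploadedFiles WHERE UploadId = @id", conn))
{
    cmd.Parameters.AddWithValue("@id", uploadId);
    conn.Open();
    byte[] blob = (byte[])cmd.ExecuteScalar();
    File.WriteAllBytes(tempPath, blob);   // the Excel Source can now read tempPath
}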
| How to process Excel files stored in an image data type column using SSIS package? | I have a .NET webforms front end that allows admin users to upload two .xls files for offline processing. As these files will be used for validation (and aggregation) I store these in an image field in a table.
My ultimate goal is to create an SSIS package that will process these files offline. Does anyone know how to use SSIS to read a blob from a table into its native (in this case .xls) format for use in a Data Flow task?
| [
"In my (admittedly limited) experience with SSIS, it is quite good at rapidly getting something up and running, but frusteratingly limited in getting something that \"feels\" like the most elegant, efficient solution to a programmer. \nSince the Excel Source Editor seems to take only files as input, you need to give it a file or reimplement its functionality in code that can take a blob. I understand that this is unsatisfying, but in the end, this is a time saving tool.\n"
] | [
1
] | [] | [] | [
"ssis"
] | stackoverflow_0000022968_ssis.txt |
Q:
Any good tools to automate SQL Server management tasks?
I know I could write scripts and create jobs to run them, but at least some of what I'm wanting it to do is beyond my programming abilities for that to be an option.
What I'm imagining is something that can run on a regular schedule that will examine all the databases on a server and automatically shrink data and log files (after a backup, of course) when they've reached a file size that contains too much free space. It would be nice if it could defrag index files when they've become too fragmented as well.
I guess what I'm probably looking for is a DBA in a box!
Or it could just be that I need better performance monitoring tools instead. I know how to take care of both of those issues, but it's more that I forget to check for those issues until I start seeing performance issues with my apps.
A:
That stuff is all built in, it is called a maintenance plan
A:
If you are using SQL Server 2005. Fire up the Management Studio and look at the Maintenance Plan section.
See http://msdn.microsoft.com/en-us/library/ms187658.aspx for an overview and http://msdn.microsoft.com/en-us/library/ms189036.aspx for details on the Maintenance plan wizard.
Finally, http://msdn.microsoft.com/en-us/library/ms140255.aspx is a list of all the maintenance tasks available.
I am pretty sure this is all available even in the Express Edition. I can't speak to if anything has changed in 2008, I haven't used it yet.
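For reference, the individual tasks the question mentions boil down to T-SQL you can also schedule yourself if you ever outgrow the wizard; a sketch with placeholder database, file and table names:
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak';
DBCC SHRINKFILE (MyDb_Log, 100);          -- shrink the log file to roughly 100 MB
ALTER INDEX ALL ON dbo.MyTable REBUILD;   -- rebuild (defragment) indexes, SQL 2005+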
A:
yeah everything you described (except maybe perf monitoring) can be done with database maintenance plans, back ups, shrinking log files etc.
A:
I guess the tool I was looking for was under my nose the whole time! I've used Maintenance Plans for backups but I think I set those up at least 4 years ago or more, long before I knew anything about shrinking files and defragging indexes. Thanks!
| Any good tools to automate SQL Server management tasks? | I know I could write scripts and create jobs to run them, but at least some of what I'm wanting it to do is beyond my programming abilities for that to be an option.
What I'm imagining is something that can run on a regular schedule that will examine all the databases on a server and automatically shrink data and log files (after a backup, of course) when they've reached a file size that contains too much free space. It would be nice if it could defrag index files when they've become too fragmented as well.
I guess what I'm probably looking for is a DBA in a box!
Or it could just be that I need better performance monitoring tools instead. I know how to take care of both of those issues, but it's more that I forget to check for those issues until I start seeing performance issues with my apps.
| [
"That stuff is all built in, it is called a maintenance plan\n",
"If you are using SQL Server 2005. Fire up the Management Studio and look at the Maintenance Plan section.\nSee http://msdn.microsoft.com/en-us/library/ms187658.aspx for an overview and http://msdn.microsoft.com/en-us/library/ms189036.aspx for details on the Maintenance plan wizard.\nFinally, http://msdn.microsoft.com/en-us/library/ms140255.aspx is a list of all the maintenance tasks available.\nI am pretty sure this is all available even in the Express Edition. I can't speak to if anything has changed in 2008, I haven't used it yet.\n",
"yeah everything you described (except maybe perf monitoring) can be done with database maintenance plans, back ups, shrinking log files etc.\n",
"I guess the tool I was looking for was under my nose the whole time! I've used Maintenance Plans for backups but I think I set those up at least 4 years ago or more, long before I knew anything about shrinking files and defragging indexes. Thanks!\n"
] | [
2,
2,
0,
0
] | [] | [] | [
"sql_server"
] | stackoverflow_0000023001_sql_server.txt |
Q:
Suggestions on Ajax development environment for PHP
I am a C/C++ programmer professionally, but I've created a couple of personal web sites using PHP and MySQL.
They're pretty basic, and I'd like to jazz them up using Ajax, but I've never done any Ajax. I've done all the development so far manually, i.e. no IDE or anything like that.
Does anyone have suggestions on Ajax development environments that can help me?
Shareware or freeware would be preferable as I'd find it hard to justify spending more than a minimal amount of money on this...
A:
If you want an IDE, try Aptana Studio. It supports HTML, CSS, JavaScript, PHP, XML, Ruby, Ruby on Rails, and more....
A:
As T.O. says, try Aptana. There's a very good free version, and they really push the AJAX. They even have Jaxer, an "AJAX Server" that they're working on. If nothing else, the plugins are great, and, other than a few quirks, I really like working in it.
A:
Aptana is supposedly a decent IDE for Javascript development. I myself just use Eclipse and a decent javascript framework like jQuery that has an easy syntax.
A:
Rolling your own AJAX has become somewhat outdated in the presence of Javascript libraries like Prototype and JQuery. I would recommend looking into one of those libraries (Jeff used JQuery for SO and he's been really impressed with it from what I understand).
As far as a development environment goes, I don't know that there's much. A typical text editor with syntax highlighting would do the trick for writing (like Notepad++). For debugging, take a look at the Firebug extension for Firefox (though if you use JQuery, a debugging tool may not be as useful).
A:
First off, make sure you understand the basics of the HTTP protocol. Then learn how the JavaScript XMLHttpRequest object works. Once you've covered those, pick an Ajax library - prototype is good.
Then look at a few examples, and follow the API.
Job done.
I seriously have no idea how they manage to write entire books on this subject.
Edit: Why vote me down? Learning the basics first, leads to a much better understanding of the way it works. And yes, I believe Jeff should learn C too ;-P
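To make the "learn the basics first" advice concrete, here is roughly the smallest useful XMLHttpRequest call that every Ajax library wraps (the URL and element id are placeholders, and older IE versions need an ActiveXObject fallback):
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById("result").innerHTML = xhr.responseText;
    }
};
xhr.open("GET", "data.php", true);   // asynchronous GET
xhr.send(null);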
A:
Sajax is another good toolkit with PHP support.
Mostly though I prefer to use a Javascript framework like Jquery or Prototype
| Suggestions on Ajax development environment for PHP | I am a C/C++ programmer professionally, but I've created a couple of personal web sites using PHP and MySQL.
They're pretty basic, and I'd like to jazz them up using Ajax, but I've never done any Ajax. I've done all the development so far manually, i.e. no IDE or anything like that.
Does anyone have suggestions on Ajax development environments that can help me?
Shareware or freeware would be preferable as I'd find it hard to justify spending more than a minimal amount of money on this...
| [
"If you want an IDE, try Aptana Studio. It supports HTML, CSS, JavaScript, PHP, XML, Ruby, Ruby on Rails, and more....\n",
"As T.O. says, try Aptana. There's a very good free version, and they really push the AJAX. They even have Jaxer, an \"AJAX Server\" that they're working on. If nothing else, the plugins are great, and, other than a few quirks, I really like working in it.\n",
"Aptana is supposedly a decent IDE for Javascript development. I myself just use Eclipse and a decent javascript framework like jQuery that has an easy syntax.\n",
"Rolling your own AJAX has become somewhat outdated in the presence of Javascript libraries like Prototype and JQuery. I would recommend looking into one of those libraries (Jeff used JQuery for SO and he's been really impressed with it from what I understand).\nAs far as a development environment goes, I don't know that there's much. A typical text editor with syntax highlighting would do the trick for writing (like Notepad++). For debugging, take a look at the Firebug extension for Firefox (though if you use JQuery, a debugging tool may not be as useful).\n",
"First off, make sure you understand the basics of the HTTP protocol. Then learn how the javascript httpXmlRequest function works. Once you've covered those, pick an Ajax library - prototype is good.\nThen look at a few examples, and follow the API.\nJob done.\nI seriously have no idea how they manage to write entire books on this subject.\nEdit: Why vote me down? Learning the basics first, leads to a much better understanding of the way it works. And yes, I believe Jeff should learn C too ;-P\n",
"Sajax is another good toolkit with PHP support. \nMostly though I prefer to use a Javascript framework like Jquery or Prototype\n"
] | [
2,
2,
2,
1,
0,
0
] | [] | [] | [
"ajax",
"ide",
"javascript",
"php"
] | stackoverflow_0000023176_ajax_ide_javascript_php.txt |
Q:
"All Users" Folder
Is there a .NET variable that returns the "All Users" directory?
A:
You'll want to use the System.Environment class to read environment variables.
Most of the predefined ones are shown here.
For the "All Users" you would use:
System.Environment.GetEnvironmentVariable("ALLUSERSPROFILE")
I know I got a lot of upmods and a correct answer for my other stuff, but this actually works, whereas the other environment variables
I linked to previously don't seem to work with that function call.
A:
Or,
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)
You can then pass this result to System.IO.Directory.GetParent() to get the root "All Users" folder.
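Putting the two suggestions together, a minimal sketch looks like this:
string commonAppData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
string allUsersRoot = System.IO.Directory.GetParent(commonAppData).FullName;   // e.g. "C:\Documents and Settings\All Users" on XP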
A:
Is this any use?
Oops:
http://msdn.microsoft.com/en-us/library/bb774096(VS.85).aspx
| "All Users" Folder | Is there a .NET variable that returns the "All Users" directory?
| [
"You'll want to use the system.environment variables.\nMost of the predefined ones are shown here. \nFor the \"All Users\" you would use:\nSystem.Environment.GetEnvironmentVariable(\"ALLUSERSPROFILE\")\n\nI know I got a lot of upmods and a correct answer for my other stuff, but this actually works. where as the other environment variables \nI linked to previously don't seem to work with that function call.\n",
"Or, \nEnvironment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)\n\nYou can then pass this result to System.IO.Directory.GetParent() to get the root \"All Users\" folder.\n",
"Is this any use?\nOops:\nhttp://msdn.microsoft.com/en-us/library/bb774096(VS.85).aspx\n"
] | [
7,
1,
1
] | [] | [] | [
".net",
"io"
] | stackoverflow_0000023178_.net_io.txt |
Q:
Need to test an ajax timeout condition
As the title mentions, I have a timeout callback handler on an ajax call, and I want to be able to test that condition but nothing is coming to mind immediately on ways I can force my application to hit that state, any suggestions?
A:
You could always run a server-side script that keeps running for a period of time.
For example:
<?php
sleep(10); //sleep for 10 seconds.
print "This script has finished.";
?>
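On the client side you also need the request's timeout to be shorter than that server delay so the timeout handler actually fires; a sketch assuming jQuery is in use (the URL is a placeholder):
$.ajax({
    url: "slow.php",       // the script above that sleeps for 10 seconds
    timeout: 3000,         // give up after 3 seconds
    success: function (data) { /* normal path */ },
    error: function (xhr, textStatus) {
        if (textStatus === "timeout") {
            // this is the timeout condition you want to exercise
        }
    }
});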
A:
First off, I think you need to be clearer in your question - what technology are you using and where is this process that is timing out - server-side or client-side?
If you want to have the server-side code take a long time and you are using .NET, place this line in the method you call server-side:
System.Threading.Thread.Sleep(timeoutMilliseconds);
As long as you use a number sufficient so that your client-side code assumes the server has timed out, you should be good.
A:
YUI Connection Manager allows you to introduce slowdown in your Javascript to test AJAX against latency.
| Need to test an ajax timeout condition | As the title mentions, I have a timeout callback handler on an ajax call, and I want to be able to test that condition but nothing is coming to mind immediately on ways I can force my application to hit that state, any suggestions?
| [
"You could always run a server-side script that keeps running for a period of time. \nFor example:\n<?php\n sleep(10); //sleep for 10 seconds.\n print \"This script has finished.\";\n>\n\n",
"First off, I think you need to be clearer in your question - what technology are you using and where is this process that is timing out - server-side or client-side?\nIf you want to have the server-side code take a long time and you are using .NET, place this line in the method you call server-side:\nSystem.Threading.Thread.Sleep(timeoutMilliseconds);\n\nAs long as you use a number sufficient so that your client-side code assumes the server has timed out, you should be good.\n",
"YUI Connection Manager allows you to introduce slowdown in your Javascript to test AJAX against latency.\n"
] | [
3,
2,
1
] | [] | [] | [
"ajax",
"asp.net",
"testing"
] | stackoverflow_0000023124_ajax_asp.net_testing.txt |
Q:
Summary of differences in regular expression syntax for various tools and languages?
I can never remember the differences in regular expression syntax used by tools like grep and AWK, or languages like Python and PHP. Generally, Perl has the most expansive syntax, but I'm often hamstrung by the limitations of even egrep ("extended" grep).
Is there a site that lists the differences in a concise and easy-to-read fashion?
A:
For my own future reference, I'll offer the Regexp Syntax Summary page which contrasts the syntax for grep, egrep, Emacs, Perl, Python, and Tcl. As expected, Perl supports the greatest variety of operators, but Python looks equally capable, if not more so.
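A quick illustration of the kind of differences that summary covers: with GNU grep's basic syntax, ? and + (and grouping parentheses) only act as operators when escaped, while egrep and the Perl/Python-style engines take them unescaped and add shortcuts like \d:
grep  'colou\?r\(s\)\?'  file.txt                 # basic regular expressions (BRE)
egrep 'colou?r(s)?'      file.txt                 # extended regular expressions (ERE)
perl  -ne 'print if /colou?rs?\b/'  file.txt      # Perl-compatible syntax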
A:
Mastering Regular Expressions, devotes the last four chapters to Java, PHP, Perl, and .NET. One chapter for each. From what I know, the pocket edition contains just those final four chapters.
A:
I find this site helpful: http://www.regular-expressions.info/
Other than that, I use the corresponding documentation extensively and I believe, all said and done, there's no way around that.
| Summary of differences in regular expression syntax for various tools and languages? | I can never remember the differences in regular expression syntax used by tools like grep and AWK, or languages like Python and PHP. Generally, Perl has the most expansive syntax, but I'm often hamstrung by the limitations of even egrep ("extended" grep).
Is there a site that lists the differences in a concise and easy-to-read fashion?
| [
"For my own future reference, I'll offer the Regexp Syntax Summary page which contrasts the syntax for grep, egrep, Emacs, Perl, Python, and Tcl. As expected, Perl supports the greatest variety of operators, but Python looks equally capable, if not more so.\n",
"Mastering Regular Expressions, devotes the last four chapters to Java, PHP, Perl, and .NET. One chapter for each. From what I know, the pocket edition contains just those final four chapters. \n",
"I find this site helpful: http://www.regular-expressions.info/\nOther than that, I use the corresponding documentation extensively and I believe, all said and done, there's no way around that.\n"
] | [
11,
7,
5
] | [] | [] | [
"grep",
"regex"
] | stackoverflow_0000023216_grep_regex.txt |
Q:
Caching Schemes for Managed Languages
This is mostly geared toward desktop application developers. How do I design a caching block which plays nicely with the GC? How do I tell the GC that I have just done a cache sweep and it is time to do a GC? How do I get an accurate measure of when it is time to do a cache sweep?
Are there any prebuilt caching schemes which I could borrow some ideas from?
A:
All you'll ever need to know (and then some):
http://msdn.microsoft.com/en-us/library/ee817645.aspx
Oh, and GC.Collect() forces a collect.
A:
While I obviously cannot speak to the specifics of your application, in most instances you should not tie your caching implementation to some perceived expectation for how the GC will work. As Stu mentions, calling GC.Collect() will force a collection (with overloads for a specific generation) but more often than not doing so will result in worse performance than just letting the GC manage itself.
If you do find (after doing some real performance testing) that you need to interact with the GC make sure you take into account the different types of GC's that the framework currently has (see here for more information).
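As a concrete example of a cache that cooperates with the GC instead of trying to steer it, one common pattern is to hold cached values through WeakReference so the collector can reclaim them under memory pressure. A minimal sketch (no eviction policy, locking or stale-entry cleanup):
using System;
using System.Collections.Generic;

// Values are held weakly, so the GC may reclaim them at any time.
class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference> _entries = new Dictionary<TKey, WeakReference>();

    public void Add(TKey key, TValue value)
    {
        _entries[key] = new WeakReference(value);
    }

    public bool TryGet(TKey key, out TValue value)
    {
        value = null;
        WeakReference reference;
        if (_entries.TryGetValue(key, out reference))
        {
            value = reference.Target as TValue;   // null if already collected
        }
        return value != null;
    }
}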
| Caching Schemes for Managed Languages | This is mostly geared toward desktop application developers. How do I design a caching block which plays nicely with the GC? How do I tell the GC that I have just done a cache sweep and it is time to do a GC? How do I get an accurate measure of when it is time to do a cache sweep?
Are there any prebuilt caching schemes which I could borrow some ideas from?
| [
"All you'll ever need to know (and then some):\nhttp://msdn.microsoft.com/en-us/library/ee817645.aspx\nOh, and GC.Collect() forces a collect.\n",
"While I obviously cannot speak to the specifics of your application, in most instances you should not tie your caching implementation to some perceived expectation for how the GC will work. As Stu mentions, calling GC.Collect() will force a collection (with overloads for a specific generation) but more often than not doing so will result in worse performance than just letting the GC manage itself.\nIf you do find (after doing some real performance testing) that you need to interact with the GC make sure you take into account the different types of GC's that the framework currently has (see here for more information).\n"
] | [
1,
1
] | [] | [] | [
"caching",
"garbage_collection"
] | stackoverflow_0000023175_caching_garbage_collection.txt |
Q:
Notification of drop in drag-drop in Windows
My C# program has a list of files that can be dragged from it and dropped into another program. My requirements are that the file be copied to a different directory first.
So, can I be notified of the drop operation so that I can only copy the file if operation succeeds? I'd rather wait till I know it needs to be copied before actually performing the copy.
Also, is it possible to know what program the drop operation is occurring in? Ideally I'd like to alter the filepath based on who or what its being dropped.
The solution to this can be in any .NET language or C/C++ with COM.
A:
There are a few ambiguities in your question. What operation needs to be successful?
For everything you want to know about drag and drop, browse through these search results (multiple pages worth):
Raymond Chen on drag and drop
A:
So, you intend to modify the data being dropped based on the drop target? I don't think this is possible; after all, you populate the data when the drag is initiated.
| Notification of drop in drag-drop in Windows | My C# program has a list of files that can be dragged from it and dropped into another program. My requirements are that the file be copied to a different directory first.
So, can I be notified of the drop operation so that I can only copy the file if operation succeeds? I'd rather wait till I know it needs to be copied before actually performing the copy.
Also, is it possible to know what program the drop operation is occurring in? Ideally I'd like to alter the filepath based on who or what its being dropped.
The solution to this can be in any .NET language or C/C++ with COM.
| [
"There are a few ambiguities in your question. What operation needs to be successful?\nFor everything you want to know about drag and drop, browse through these search results (multiple pages worth):\nRaymond Chen on drag and drop\n",
"So, you intend to modify the data being dropped based on the drop target? I don't think this is possible; after all, you populate the data when the drag is initiated.\n"
] | [
1,
0
] | [] | [] | [
"c#",
"c++",
"com",
"winapi",
"windows"
] | stackoverflow_0000023370_c#_c++_com_winapi_windows.txt |
Q:
In Visual Studio you must be a member of Debug Users or Administrators to start debugging. What if you are but it doesn't work?
On my Windows XP machine, Visual Studio 2003, 2005 and 2008 all complain that I cannot start debugging my web application because I must either be a member of the Debug Users group or of the Administrators group. So, I am an Administrator and I added Debug Users just in case, and it still complains.
Short of reformatting my machine and starting over, has anyone encountered this and fixed it [with some undocumented command]?
A:
Which users and/or groups are in your "Debug programs" right (under User Rights Assignment)? Maybe that setting got overridden by group policy (Daniel's answer), or just got out of whack for some reason. It should, obviously, include the "Debug Users" group.
A:
We encountered an issue like this and found that it was a group policy issue. There's a group policy setting for debugging that needs to be enabled. It overrides the fact that you are in the right group.
A:
You could try running "VsJITDebugger.exe -p <PID>" on the command line. I've had a similar situation and been able to debug the application using the above.
"VsJITDebugger.exe /?" will show you all the options.
The PID can be found either in the task manager (view->Select Columns...) or Visual Studio's Attach to Process.
A:
Awesome, I'd never really known about the "Administrative Tools -> Local Security Settings -> Local Policies -> User Rights Assignment" under XP. My "Debug programs" policy is set to "Administrators" only, yet trying to debug now just worked and this is several days after installing the .NET framework 3.5, so maybe that installation fixed things in the background.
| In Visual Studio you must be a member of Debug Users or Administrators to start debugging. What if you are but it doesn't work? | On my Windows XP machine Visual Studio 2003 2005 and 2008 all complain that I cannot start debugging my web application because I must either be a member of the Debug Users group or of the Administrators group. So, I am an Administrator and I added Debug Users just in case, and it still complains.
Short of reformatting my machine and starting over, has anyone encountered this and fixed it [with some undocumented command]?
| [
"Which users and/or groups are in your \"Debug programs\" right (under User Rights Assignment)? Maybe that setting got overridden by group policy (Daniel's answer), or just got out of whack for some reason. It should, obviously, include the \"Debug Users\" group.\n",
"We encountered an issue like this and found that it was a group policy issue. There's a group policy setting for debugging that needs to be enabled. It overrides the fact that you are in the right group.\n",
"You could try running \"VsJITDebugger.exe -p <PID>\" on the command line. I've had a simalar situation and been able to debug the application using the above.\n\"VsJITDebugger.exe /?\" will show you all the options.\nThe PID can be found either in the task manager (view->Select Columns...) or Visual Studio's Attach to Process.\n",
"Awesome, I'd never really known about the \"Administrative Tools -> Local Security Settings -> Local Policies -> User Rights Assignment\" under XP. My \"Debug programs\" policy is set to \"Administrators\" only, yet trying to debug now just worked and this is several days after installing the .NET framework 3.5, so maybe that installation fixed things in the background.\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"debugging",
"permissions",
"visual_studio"
] | stackoverflow_0000011724_debugging_permissions_visual_studio.txt |
Q:
Is it better to structure an SQL table to have a match, or return no result
I've got an interesting design question. I'm designing the security side of our project, to allow us to have different versions of the program for different costs and also to allow Manager-type users to grant or deny access to parts of the program to other users. It's going to be web-based and hosted on our servers.
I'm using a simple Allow or Deny option for each 'Resource' or screen.
We're going to have a large number of resources, and the user will be able to set up many different groups to put users in to control access. Each user can only belong to a single group.
I've got two approaches to this in mind, and was curious which would be better for the SQL server in terms of performance.
Option A
The presence of an entry in the access table means access is allowed. This will not need a column in the database to store information. If no results are returned, then access is denied.
I think this will mean a smaller table, but would queries search the whole table to determine there is no match?
Option B
A bit column is included in the database that controls the Allow/Deny. This will mean there is always a result to be found, and makes for a larger table.
Thoughts?
A:
If it's only going to be Allow/Deny, then a simple linking table between Users and Resources would work fine. If there is an entry keyed to the User-Resource in the linking table, allow access.
UserResources
-------------
UserId FK->Users
ResourceId FK->Resources
and the sql would be something like
if exists (select 1 from UserResources
where UserId = @uid and ResourceId=@rid)
set @allow=1;
With a clustered index on (UserId and ResourceId), the query would be blindingly fast even with millions of records.
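A sketch of that linking table's DDL, with the clustered key arranged for the lookup above (names are illustrative):
CREATE TABLE UserResources (
    UserId     INT NOT NULL REFERENCES Users (UserId),
    ResourceId INT NOT NULL REFERENCES Resources (ResourceId),
    CONSTRAINT PK_UserResources PRIMARY KEY CLUSTERED (UserId, ResourceId)
);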
A:
I would vote for Option B. If you go with Option A and the assumption that if a user exists, they can get in, then you'll eventually run into the problem that you'll want to deny access to a user, without removing the user record.
There will be lots of cases where you'll want to lock a user out, but won't want to completely destroy their account. One such instance (not necessarily linked to your use case), is when you fail to pay, and they cut off your account until you start paying again. They don't want to delete the record, because they still want to enable it when you pay up again, instead of recreating the account from scratch, and losing all user history.
A:
B. It allows for much better checks whether the data is complete (for example, when you add an allowable/deniable feature).
Also, table size should only be a consideration for tables that you know will contain many records (as in, 100,000+). Even the time you took to type the table-size consideration into this question already cost more than the extra hard drive space it would take.
A:
Approach A, but I would also include an explicit deny in addition to your implicit deny. I would work through some use cases to be sure your end logic works; here are some examples.
User1 is in group1 and group2.
User2 is in group1
User3 is in group2
Folder1 allows group1 and deny group2.
User1 is denied.
User2 is allowed.
User3 is denied.
I believe that with your approach, User1 would be allowed.
| Is it better to structure an SQL table to have a match, or return no result | I've got an interesting design question. I'm designing the security side of our project, to allow us to have different versions of the program for different costs and also to allow Manager-type users to grant or deny access to parts of the program to other users. Its going to web-based and hosted on our servers.
I'm using a simple Allow or Deny option for each 'Resource' or screen.
We're going to have a large number of resources, and the user will be able to set up many different groups to put users in to control access. Each user can only belong to a single group.
I've got two approaches to this in mind, and was curious which would be better for the SQL server in terms of performance.
Option A
The presence of an entry in the access table means access is allowed. This will not need a column in the database to store information. If no results are returned, then access is denied.
I think this will mean a smaller table, but would queries search the whole table to determine there is no match?
Option B
A bit column is included in the database that controls the Allow/Deny. This will mean there is always a result to be found, and makes for a larger table.
Thoughts?
| [
"If it's only going to be Allow/Deny, then a simple linking table between Users and Resources would work fine. If there is an entry keyed to the User-Resource in the linking table, allow access.\nUserResources\n-------------\nUserId FK->Users\nResourceId FK->Resources\n\nand the sql would be something like \nif exists (select 1 from UserResources \nwhere UserId = @uid and ResourceId=@rid)\nset @allow=1;\n\nWith a clustered index on (UserId and ResourceId), the query would be blindingly fast even with millions of records.\n",
"I would vote for Option B. If you go with Option A and the assumption that if a user exists, they can get in, then you'll eventually run into the problem that you'll want to deny access to a user, without removing the user record. \nThere will be lots of cases where you'll want to lock a user out, but won't want to completely destroy their account. One such instance (not necessarily linked to your use case), is when you fail to pay, and they cut off your account until you start paying again. They don't want to delete the record, because they still want to enable it when you pay up again, instead of recreating the account from scratch, and losing all user history.\n",
"B. It allows for much better checks whether the data is complete (for example, when you add an allowable/deniable feature).\nAlso, table size should only be a consideration for tables that you know will contain many records (as in, 100,000+). You even taking the time to type the table size consideration into this question already cost more than the extra hard drive space it would take.\n",
"Approach A, but I would also include a explicit deny in addition to you implicit deney. I would make some use cases to be sure your end logic works but here are some examples.\nUser1 is in group1 and group2. \nUser2 is in group1 \nUser3 is in group2 \n\n\nFolder1 allows group1 and deny group2. \nUser1 is denied. \nUser2 is allowed. \nUser3 is denied. \n\nI believe your approach users1 would be allowed.\n"
] | [
4,
1,
0,
0
] | [] | [] | [
"optimization",
"sql",
"sql_server"
] | stackoverflow_0000023399_optimization_sql_sql_server.txt |
Q:
How do I search content, within audio files/streams?
I have always wondered how many different search techniques existed, for searching text, for searching images and even for videos.
However, I have never come across a solution that searched for content within audio files.
For example: Let us assume that I have about 200 podcasts downloaded to my PC in the form of mp3, wav and ogg files. They are all named generically, say podcast1.mp3, podcast2.mp3, etc., so it is not possible to know what the content is without actually hearing them. Let's say that I am interested in finding out which of the podcasts talk about 'game programming'. I want the results to be shown as:
Podcast1.mp3 - 3 result(s) at time index(es) - 0:16:21, 0:43:45, 1:12:31
Podcast21.ogg - 1 result(s) at time index(es) - 0:12:01
So my questions:
How could one approach this problem?
Are there are suitable algorithms developed to do something like this?
One idea the cropped up in my mind was that, one could use a 'speech-to-text' software to get transcripts along with time indexes for each of the audio files, then parse the transcript to get the output.
I was considering this as one of my hobby projects.
Thanks!
A:
If you want to search for text (i.e. what is being said) inside an audio stream you would have to process it with some kind of speech recognition algorithm and store the text as meta data associated with the files. For video you could also do text recognition for text inside the video. Evernote already does this for text inside image files, but has no support for audio as far as I know.
Something similar is possible when using audio to search for audio. I don't know the details of these algorithms, but I'm guessing they involve some kind of frequency analysis. Shazam is using this kind of technology to identify songs based on audio clips.
Here are some Wikipedia articles that may be useful:
Speech recognition
Fast Fourier transform
Frequency analysis (frequency spectrum)
Optical character recognition (OCR)
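As a rough sketch of the speech-to-text idea from the question: once you have (time offset, text) segments per file, the search itself is trivial. transcribe() below is a hypothetical placeholder for whatever recognition engine you plug in:
def transcribe(path):
    """Hypothetical speech-to-text step: return (seconds_offset, text) segments."""
    raise NotImplementedError("plug in a real speech recognition engine here")

def find_phrase(paths, phrase):
    for path in paths:
        hits = [t for (t, text) in transcribe(path) if phrase.lower() in text.lower()]
        if hits:
            times = ", ".join("%d:%02d:%02d" % (t // 3600, (t % 3600) // 60, t % 60)
                              for t in hits)
            print("%s - %d result(s) at time index(es) - %s" % (path, len(hits), times))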
| How do I search content, within audio files/streams? | I have always wondered how many different search techniques existed, for searching text, for searching images and even for videos.
However, I have never come across a solution that searched for content within audio files.
For example: Let us assume that I have about 200 podcasts downloaded to my PC in the form of mp3, wav and ogg files. They are all named generically say podcast1.mp3, podcast2.mp3, etc. So, it is not possible to know what the content is, without actually hearing them. Lets say that, I am interested in finding out, which the podcasts talk about 'game programming'. I want the results to be shown as:
Podcast1.mp3 - 3 result(s) at time index(es) - 0:16:21, 0:43:45, 1:12:31
Podcast21.ogg - 1 result(s) at time index(es) - 0:12:01
So my questions:
How could one approach this problem?
Are there are suitable algorithms developed to do something like this?
One idea the cropped up in my mind was that, one could use a 'speech-to-text' software to get transcripts along with time indexes for each of the audio files, then parse the transcript to get the output.
I was considering this as one of my hobby projects.
Thanks!
| [
"If you want to search for text (i.e. what is being said) inside an audio stream you would have to process it with some kind of speech recognition algorithm and store the text as meta data associated with the files. For video you could also do text recognition for text inside the video. Evernote already does this for text inside image files, but has no support for audio as far as I know.\nSomething similar is possible when using audio to search for audio. I don't know the details of these algorithms, but I'm guessing they involve some kind of frequency analysis. Shazam is using this kind of technology to identify songs based on audio clips.\nHere are some Wikipedia articles that may be useful:\n\nSpeech recognition\nFast Fourier transform\nFrequency analysis (frequency spectrum)\nOptical character recognition (OCR)\n\n"
] | [
9
] | [] | [] | [
"audio",
"search",
"speech_recognition"
] | stackoverflow_0000023592_audio_search_speech_recognition.txt |
Q:
CruiseControl.NET and NAnt
I have a CC.NET project configured to call a common NAnt build file, which does some stuff, and then calls a child NAnt build file. The child build file name is specified by CC.NET to the command build file using a property.
The hurdle that I am trying to get over is that the common build file log gets overwritten by the child build file log, so I don't get the common build log in the CC.NET build log.
Anyone have any ideas on how to fix this?
I thought about changing the child build's log, but reading up on the NAnt <nant> task doesn't allow me to change the child's output log.
A:
Use the nant task, so you get one single build file.
A:
Is there any way that you could include the child nant file as opposed to executing it as a full-fledged child nant project? This would prevent the overwrite, but not sure if it's possible in your situation.
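A sketch of the include approach, assuming the child file defines a target named child-build (both names are placeholders); because everything runs inside the one NAnt invocation, there is only one log for CC.NET to capture:
<project name="common" default="build">
  <include buildfile="${child.build.file}" />
  <target name="build">
    <!-- other common steps here -->
    <call target="child-build" />
  </target>
</project>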
| CruiseControl.NET and NAnt | I have a CC.NET project configured to call a common NAnt build file, which does some stuff, and then calls a child NAnt build file. The child build file name is specified by CC.NET to the command build file using a property.
The hurdle that I am trying to get over is that the common build file log gets overwritten by the child build file log, so I don't get the common build log in the CC.NET build log.
Anyone have any ideas on how to fix this?
I thought about changing the child build's log, but reading up on the NAnt <nant> task doesn't allow me to change the child's output log.
| [
"Use the nant task, so you get one single build file.\n",
"Is there any way that you could include the child nant file as opposed to executing it as a full-fledged child nant project? This would prevent the overwrite, but not sure if it's possible in your situation.\n"
] | [
1,
0
] | [] | [] | [
"cruisecontrol.net",
"nant"
] | stackoverflow_0000023503_cruisecontrol.net_nant.txt |
Q:
What WCF best practices do you follow in object model design?
I've noticed that a handful of WCF applications choose to "break" their objects apart; that is, a project might have a DataObjects assembly that contains DataContracts/Members in addition to a meaningful class library that performs business logic.
Is this an unnecessary level of abstraction? Is there any inherent evil associated with going through and tagging existing class libraries with DataContract information?
Also, as an aside, how do you handle error conditions? Are thrown exceptions from the service (InvalidOperation, ArgumentException and so on) generally accepted, or is there usually a level around that?
A:
The key reason to separating internal business objects from the data contracts/message contracts is that you don't want internal changes to your app to necessarily change the service contract. If you're creating versioned web services (with more than 1 version of the implemented interfaces) then you often have a single version of your apps business objects with more than 1 version of the data contract/message contract objects.
In addition, in complex Enterprise Integration situations you often have a canonical data format (Data and Message contracts) which is shared by a number of applications, which forces each application to map the canonical data format to its internal object model.
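A minimal sketch of that separation (type names and the contract namespace are illustrative): the data contract is a plain DTO carrying serialization attributes, the internal business object stays attribute-free, and a small mapper translates between them so the wire format can be versioned independently:
using System.Runtime.Serialization;

[DataContract(Namespace = "http://schemas.example.com/orders/v1")]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

public class Order                       // internal business object
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public decimal CalculateTax() { return Total * 0.2m; }
}

public static class OrderMapper          // translation layer
{
    public static OrderDto ToDto(Order order)
    {
        return new OrderDto { Id = order.Id, Total = order.Total };
    }
}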
If you want a tool to help with the nitty gritty of separating data contract/message contract etc. then check out Microsoft's Web Services Software Factory http://msdn.microsoft.com/en-us/library/cc487895.aspx which has some good recipes for solving the WCF plumbing.
In regards to excpetions, WCF automatically wraps all exceptions in FaultExceptions, which are serialized as wire-format faults.
It's also possible to throw generic Fault Exceptions which allows you to specify additional details to be included with the serialized fault. Since the faults thrown by a web service operation are part of its contract it's a good idea to declare the faults on the operation declaration:
[FaultContract(typeof(AuthenticationFault))]
[FaultContract(typeof(AuthorizationFault))]
StoreLocationResponse StoreLocation(StoreLocationRequest request);
Both the AuthenticationFault and AuthorizationFault types represent the additional details to be serialized and sent over the wire and can be thrown as follows:
throw new FaultException<AuthenticationFault>(new AuthenticationFault());
If you want more details then shout; I've been living and breathing this stuff for so long I almost making a living doing it ;)
| What WCF best practices do you follow in object model design? | I've noticed that a handful of WCF applications choose to "break" their objects apart; that is, a project might have a DataObjects assembly that contains DataContracts/Members in addition to a meaningful class library that performs business logic.
Is this an unnecessary level of abstraction? Is there any inherent evil associated with going through and tagging existing class libraries with DataContract information?
Also, as an aside, how do you handle error conditions? Are thrown exceptions from the service (InvalidOperation, ArgumentException and so on) generally accepted, or is there usually a level around that?
| [
"The key reason to separating internal business objects from the data contracts/message contracts is that you don't want internal changes to your app to necessarily change the service contract. If you're creating versioned web services (with more than 1 version of the implemented interfaces) then you often have a single version of your apps business objects with more than 1 version of the data contract/message contract objects.\nIn addition, in complex Enterprise Integration situations you often have a canonical data format (Data and Message contracts) which is shared by a number of applications, which forces each application to map the canonical data format to its internal object model.\nIf you want a tool to help with the nitty gritty of separating data contract/message contract etc. then check out Microsoft's Web Services Software Factory http://msdn.microsoft.com/en-us/library/cc487895.aspx which has some good recipes for solving the WCF plumbing.\nIn regards to excpetions, WCF automatically wraps all exceptions in FaultExceptions, which are serialized as wire-format faults. \nIt's also possible to throw generic Fault Exceptions which allows you to specify additional details to be included with the serialized fault. Since the faults thrown by a web service operation are part of its contract it's a good idea to declare the faults on the operation declaration:\n[FaultContract(typeof(AuthenticationFault))]\n[FaultContract(typeof(AuthorizationFault))]\nStoreLocationResponse StoreLocation(StoreLocationRequest request);\n\nBoth the AuthenticationFault and AuthorizationFault types represent the additional details to be serialized and sent over the wire and can be thrown as follows:\nthrow new FaultException<AuthenticationFault>(new AuthenticationFault());\n\nIf you want more details then shout; I've been living and breathing this stuff for so long I almost making a living doing it ;)\n"
] | [
17
] | [] | [] | [
"wcf"
] | stackoverflow_0000023564_wcf.txt |
Q:
Running Apache alongside another web server?
Has anyone had any success running two different web servers -- such as Apache and CherryPy -- alongside each other on the same machine? I am experimenting with other web servers right now, and I'd like to see if I can do my experiments while keeping my other sites up and running. You could say that this isn't so much a specific-software question as it is a general networking question.
I know it's possible to run two web servers on different ports; but is there any way to configure them so that they can run on the same port (ie, they both run on port 80)?
The web servers would not be serving files from the same domains. For example, Apache might serve up documents from foo.domain.com, and the other web server would serve from bar.domain.com.
I do know that this is not an ideal configuration. I'd just like to see if it can be done before I go sprinting down the rabbit hole. :)
A:
You can't have two processes bound to the same port on the same IP address. You can add another IP address to the box and have each server listen on one.
Another option is to proxy pass one server to the other. With Apache, you could do something like:
NameVirtualHost *
<virtualhost *>
ServerName other.site.com
# assumes CherryPy listens on port 8080
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
</Virtualhost>
That's a pretty quick example, but you can always check the ProxyPass documentation. Remember though, the application being proxyed to will get 127.0.0.1 in it's logs instead of the requester's IP address. Some web servers (apache does with mod_rpaf) can substitute the X-Forwarded-For header in place of the wrong IP address. Possibly CherryPy has this?
A:
Your best bet would be putting Apache httpd in front of port 80 and relaying requests meant for other servers through Apache by using modules. The most popular scenario is Tomcat behind Apache, where you'll be able to run both PHP and JSP applications.
I'm not familiar with CherryPy, so I can only suggest you look for an Apache module for CherryPy.
Edit: This looks promising: http://tools.cherrypy.org/wiki/BehindApache
A:
Alternatively, to Ishmaeel's correct answer, if you have a server with 2 network cards, you could have each server answer requests on different IP addresses.
| Running Apache alongside another web server? | Has anyone had any success running two different web servers -- such as Apache and CherryPy -- alongside each other on the same machine? I am experimenting with other web servers right now, and I'd like to see if I can do my experiments while keeping my other sites up and running. You could say that this isn't so much a specific-software question as it is a general networking question.
I know it's possible to run two web servers on different ports; but is there any way to configure them so that they can run on the same port (ie, they both run on port 80)?
The web servers would not be serving files from the same domains. For example, Apache might serve up documents from foo.domain.com, and the other web server would serve from bar.domain.com.
I do know that this is not an ideal configuration. I'd just like to see if it can be done before I go sprinting down the rabbit hole. :)
| [
"You can't have two processes bound to the same port on the same IP address. You can add another IP address to the box and have each server listen on one.\nAnother option is to proxy pass one server to the other. With Apache, you could do something like:\nNameVirtualHost *\n<virtualhost *>\n ServerName other.site.com\n\n # assumes CherryPy listens on port 8080\n ProxyPass / http://127.0.0.1:8080/\n ProxyPassReverse / http://127.0.0.1:8080/\n</Virtualhost>\n\nThat's a pretty quick example, but you can always check the ProxyPass documentation. Remember though, the application being proxyed to will get 127.0.0.1 in it's logs instead of the requester's IP address. Some web servers (apache does with mod_rpaf) can substitute the X-Forwarded-For header in place of the wrong IP address. Possibly CherryPy has this?\n",
"Your best bet would be putting Apache httpd in front of port 80 and relay requests meant for other servers through Apache by using modules. Most popular scenario would be Tomcat behind Apache where you'll be able to run both php and jsp applications.\nI'm not familiar with CherryPy, so I can only suggest you look for an Apache module for CherryPy.\nEdit: This looks promising: http://tools.cherrypy.org/wiki/BehindApache\n",
"Alternatively, to Ishmaeel's correct answer, if you have a server with 2 network cards, you could have each server answer requests on different IP addresses.\n"
] | [
7,
0,
0
] | [] | [] | [
"apache",
"linux"
] | stackoverflow_0000023715_apache_linux.txt |
Q:
Reporting Systems for ASP.NET
What are the best open source (open source and commercial) reporting tools for ASP.NET similar to Crystal Reports for ASP.NET?
A:
Microsoft Reporting Services, free and included with SQL Server 2005 and 2008.
Of course, this is great if you need a separation of report design and application, which for Enterprise applications is a huge plus.
However, if what you want is to be able to create "in application" dashboards, where "you" design the reports and have limited parameters you expose to the user, then I suggest looking into "control" based charting vendors like TeeChart .
Pros/cons of each strategy:
Crystal/Microsoft Reporting services will give you out of the box handling of things like report scheduling, export to excel and pdf, and separation between application and report design.
The independent charting tools give you better control: they render well at any size you need, are easier to manipulate programmatically, and can handle eye candy such as Flash-based charts (there are no Flash charts in MS SSRS)
A:
+1 SSRS and ActiveReports. ryw, use ActiveReports and close the gates of Crystal Hell behind you forever.
A:
ActiveReports and DevExpress' reporting tools are both pretty good. The ReportViewer control works too (the price is right), but I find it more difficult to use. And SSRS reports can be embedded into your ASP.Net apps as well.
A:
As much as I despise Crystal Reports (we describe digging deep into it as the seven layers of Crystal hell) -- it seems to be the best/most-flexible tool for the job. I hope someone comes along and knocks them off the block though.
Microsoft Reporting Services is an alternative, but didn't have the features we needed.
A:
I would suggest taking a look at MS SSRS (Microsoft SQL Server Reporting Services).
A:
I agree that SSRS is generally the right choice. But for flashy and embedded in an HTML page, I like Dundas. Their stuff looks good out of the box, has an easy-to-understand API, and is painless to get up and running.
| Reporting Systems for ASP.NET | What are the best open source (open source and commercial) reporting tools for ASP.NET similar to Crystal Reports for ASP.NET?
| [
"Microsoft Reporting Services, free and included with SQL Server 2005 and 2008.\nOf course, this is great if you need a separation of report design and application, which for Enterprise applications is a huge plus.\nHowever, if what you want is to be able to create \"in application\" dashboards, where \"you\" design the reports and have limited parameters you expose to the user, then I suggest looking into \"control\" based charting vendors like TeeChart . \nPros/cons of each strategy:\nCrystal/Microsoft Reporting services will give you out of the box handling of things like report scheduling, export to excel and pdf, and separation between application and report design. \nThe independent charting tools you can get give you better control, they render better on any size you need, easier to grammatically manipulate and can handle eye candy such as flash based (no flash charts in MS SSRS)\n",
"+1 SSRS and ActiveReports. ryw, use ActiveReports and close the gates of Crystal Hell behind you forever.\n",
"ActiveReports and DevExpress' reporting tools are both pretty good. The ReportViewer control works too (the price is right), but I find it more difficult to use. And SSRS reports can be embedded into your ASP.Net apps as well.\n",
"As much as I despise Crystal Reports (we describe digging deep into it the seven layers of Crystal hell) -- it seems to be the best/most-flexible tool for the job. I hope someone comes along and knocks them off the block though. \nMicrosoft Reporting Services is an alternative, but didn't have the features we needed.\n",
"I would suggest taking a look at MS SSRS (Microsoft SQL Server Reporting Services). \n",
"I agree that SSRS is generally the right choice. But for flashy and embedded in an HTML page, I like Dundas. Their stuff looks good out of the box, has an easy-to-understand API, and is painless to get up and running.\n"
] | [
8,
2,
1,
0,
0,
0
] | [] | [] | [
"asp.net",
"report"
] | stackoverflow_0000023614_asp.net_report.txt |
Q:
MySQL Administrator Backups: "Compatibility Mode", What Exactly is this doing?
In Mysql Administrator, when doing backups, what exactly is "Compatibility Mode"?
I'm trying to bridge backups generated by webmin with the upload tool available inside mysql administrator. My data already has a couple of inconsistencies (ticks, commas, etc, I think) I just wont try to kink out (they might just reappear in the future anyways). These kinks generate errors when I try to restore out of my backups.
Now, if I generate backups from webmin, and then use MySQL administrator to restore them, they fail. But if I generate the backups using MySQL Administrator AND tick "Compatibility Mode", then head over to MySQL administrator (another instance) and restore... it works!
According to MySQL, "Compatibility Mode" is;
Compatibility mode creates backup files that are compatible with older versions of MySQL Administrator.
Webmin, on the other hand, gives me the following options for compatibility:
ANSI
MySQL 3.2.3
MySQL 4.0
PostgreSQL
Oracle
Microsoft SQL
DB2
MaxDB
Which would you say is a best fit? My data set is very large, so it would take quite some time to experiment one by one (specially whence thinking might beat brute-forcing it).
Edit: seems like it's doing ANSI, but i'm not 100% on it.
A:
Compatibility mode - the mode that helps you create exports compatible with different versions of MySQL or other databases.
You see, different versions of MySQL used slightly different commands. So what compatibility mode allows you to do is take a database and export the SQL to be compatible with another version of MySQL. Thus, if you want to upgrade your MySQL 3 server to 4, compatibility mode allows you to export your database or individual tables to an SQL file that can be imported into a MySQL 4 server (it should work in 5 also).
I use webmin, also, and run MySQL 5. I use compatibility mode for MySQL 4.... I steer clear of any of the other ones, because I'm not running those other databases.
As for the MySQL commands that differed between MySQL 3.x and 4.x, I believe there were changes in how CURRENT_TIMESTAMP is translated from MySQL 3 to 4, and MySQL 3 also doesn't support charsets, according to this forum post: http://www.phpbuilder.com/board/showthread.php?t=10330692
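If you want to experiment without waiting on full backups, mysqldump exposes the same compatibility choices webmin lists via its --compatible switch (webmin typically just wraps mysqldump), so you can try a single small table first:
mysqldump --compatible=mysql40 mydb mytable > mytable_mysql40.sql
mysqldump --compatible=ansi    mydb mytable > mytable_ansi.sql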
| MySQL Administrator Backups: "Compatibility Mode", What Exactly is this doing? | In Mysql Administrator, when doing backups, what exactly is "Compatibility Mode"?
I'm trying to bridge backups generated by webmin with the upload tool available inside mysql administrator. My data already has a couple of inconsistencies (ticks, commas, etc, I think) I just wont try to kink out (they might just reappear in the future anyways). These kinks generate errors when I try to restore out of my backups.
Now, if I generate backups from webmin, and then use MySQL administrator to restore them, they fail. But if I generate the backups using MySQL Administrator AND tick "Compatibility Mode", then head over to MySQL administrator (another instance) and restore... it works!
According to MySQL, "Compatibility Mode" is;
Compatibility mode creates backup files that are compatible with older versions of MySQL Administrator.
Webmin, on the other hand, gives me the following options for compatibility:
ANSI
MySQL 3.2.3
MySQL 4.0
PostgreSQL
Oracle
Microsoft SQL
DB2
MaxDB
Which would you say is a best fit? My data set is very large, so it would take quite some time to experiment one by one (specially whence thinking might beat brute-forcing it).
Edit: seems like it's doing ANSI, but i'm not 100% on it.
| [
"Compatibility mode - the mode that helps you create exports compabible with different versions of MYSQL or other databases.\nYou see, some versions of MySQL had different commands that were used in various versions. So what compatibility mode allows you to do is take a database and export the SQL to be compatible with another version of MySQL. Thus, you may want to upgrade your MySQL 3 server to 4 - this compatibility mode allows for the export your database or individual tables to create a SQL file that can import into a MySQL 4 version server (should work in 5 also).\nI use webmin, also, and run MySQL 5. I use compatibility mode for MySQL 4.... I steer clear of any of the other ones, because I'm not running those other databases.\nAs far as the MySQL commands that were different between MySQL 3.x and 4.x, I believe there were changes in regards to how CURRENT_TIMESTAMP is translated from MySQL 3 to 4, and also MySQL 3 doesn't support charsets, according to this forum post here: http://www.phpbuilder.com/board/showthread.php?t=10330692\n"
] | [
1
] | [] | [] | [
"backup",
"mysql"
] | stackoverflow_0000008365_backup_mysql.txt |
Q:
Syncing library/project subversion respositories
I'm developing a library alongside several projects that use it, and I've found myself frequently modifying the library at the same time as a project (e.g., adding a function to the library and immediately using it in the project).
As a result, the project would no longer compile with previous versions of the library.
So if I need to rollback a change or test a previous version of the project, I'd like to know what version of the library was used at check-in.
I suppose I could do this manually (by just writing the version number in the log file), but it would be great if this could happen automatically.
A:
An option that might work for you is to use an svn:external reference to the library. When tagging the project, you can do one of two things:
Update the svn:external to refer to a specific revision of the library; OR
Update the svn:external to refer to a new tag that you make on the library.
Since the svn:external metadata will be part of the main project's commit history, you can always get the tag on the main project and it will refer to the correct version of the library. We do it and it works very well. It also comes in handy when you want to freeze the version of the library code that you depend on in preparation for a release.
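As a sketch of the first option (the URL, revision number and directory name below are placeholders, not anything from the original setup), pinning the external to a revision is just a property edit plus a commit:
svn propset svn:externals 'lib -r 1234 http://svn.example.com/library/trunk' .
svn commit -m "Pin lib external to r1234"
Rolling the main project back to an earlier revision or tag then brings back whatever externals definition was committed along with it.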
A:
I think if I were going to do this, I would use tags. It would be pretty easy to write a script that would tag both repositories with the same ID each time you upgraded the library and used it in the project. Then, if you need to roll back to a previous version, you just see what its most recent tag was, and roll the library back to that version.
UPDATE: Sorry, I've been in Mercurial land for a while, and forgot that subversion doesn't directly support tagging. Assuming you use the usual subversion directory structure
/
/trunk
/tags
/branches
you just need to run
svn copy trunk/ tags/TagName
on both repos, with the same tag name. Subversion is pretty good about smart copies, so you don't need to worry about disk space.
A:
You might find piston provides a solution
It's primarily used for importing ruby on rails plugins, but I don't see why it shouldn't work for any subversion repositories.
Basically what it does is this:
svn export latest revision of the remote path
commit these files into your local svn as if they were local files
attach metadata in the form of svn properties about the remote path and revision
This means you can keep a reference to a particular version of a remote repo without having to have it constantly updated like with an svn external.
if you want to update your local copy of the library to the latest remote version, you just do piston update
You should also be able to look at the history of updates, by simply looking at the metadata - svn properties are versioned just like files and everything else
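For reference, the basic piston workflow looks roughly like this (the URL and directory are placeholders, and the exact arguments are worth checking against piston's own help):
piston import http://svn.example.com/library/trunk vendor/library
piston update vendor/library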
A:
One option is to use a single subversion repository and check-in changes that effect both library and project at the same time. That way you know that whatever revision of the project you are on requires the same revision of the library.
| Syncing library/project subversion respositories | I'm developing a library alongside several projects that use it, and I've found myself frequently modifying the library at the same time as a project (e.g., adding a function to the library and immediately using it in the project).
As a result, the project would no longer compile with previous versions of the library.
So if I need to rollback a change or test a previous version of the project, I'd like to know what version of the library was used at check-in.
I suppose I could do this manually (by just writing the version number in the log file), but it would be great if this could happen automatically.
| [
"An option that might work for you is to use an svn:external reference to the library. When tagging the project, you can do one of two things:\n\nUpdate the svn:external to refer to a specific revision of the library; OR\nUpdate the svn:external to refer to a new tag that you make on the library. \n\nSince the svn:external metadata will be part of the main project's commit history, you can always get the tag on the main project and it will refer to the correct version of the library. We do it and it works very well. It also comes in handy when you want to freeze the version of the library code that you depend on in preparation for a release.\n",
"I think if I were going to do this, I would use tags. It would be pretty easy to write a script that would tag both repositories with the same ID each time you upgraded the library and used it in the project. Then, if you need to roll back to a previous version, you just see what its most recent tag was, and roll the library back to that version.\nUPDATE: Sorry, I've been in Mercurial land for a while, and forgot that subversion doesn't directly support tagging. Assuming you use the usual subversion directory structure\n/\n /trunk\n /tags\n /branches\n\nyou just need to run\nsvn copy trunk/ tags/TagName\n\non both repos, with the same tag name. Subversion is pretty good about smart copies, so you don't need to worry about disk space.\n",
"You might find piston provides a solution\nIt's primarily used for importing ruby on rails plugins, but I don't see why it shouldn't work for any subversion repositories.\nBasically what it does is this:\n\nsvn export latest revision of the remote path\ncommit these files into your local svn as if they were local files\nattach metadata in the form of svn properties about the remote path and revision\n\nThis means you can keep a reference to a particular version of a remote repo without having to have it constantly updated like with an svn external.\nif you want to update your local copy of the library to the latest remote version, you just do piston update\nYou should also be able to look at the history of updates, by simply looking at the metadata - svn properties are versioned just like files and everything else\n",
"One option is to use a single subversion repository and check-in changes that effect both library and project at the same time. That way you know that whatever revision of the project you are on requires the same revision of the library.\n"
] | [
3,
2,
2,
0
] | [] | [] | [
"svn",
"synchronization"
] | stackoverflow_0000023603_svn_synchronization.txt |
Q:
Cleanest Way to Find a Match In a List
What is the best way to find something in a list? I know LINQ has some nice tricks, but let's also get suggestions for C# 2.0. Let's get the best refactorings for this common code pattern.
Currently I use code like this:
// mObjList is a List<MyObject>
MyObject match = null;
foreach (MyObject mo in mObjList)
{
if (Criteria(mo))
{
match = mo;
break;
}
}
or
// mObjList is a List<MyObject>
bool foundIt = false;
foreach (MyObject mo in mObjList)
{
if (Criteria(mo))
{
foundIt = true;
break;
}
}
A:
@ Konrad: So how do you use it? Let's say I want to match mo.ID to magicNumber.
In C# 2.0 you'd write:
result = mObjList.Find(delegate(MyObject x) { return x.ID == magicNumber; });
3.0 knows lambdas:
result = mObjList.Find(x => x.ID == magicNumber);
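For the LINQ side of the question, the same lookup is usually written with the extension methods in System.Linq; a small sketch reusing the MyObject/ID/magicNumber names from above:
using System.Linq;

// first item that satisfies the predicate, or null if there is none
MyObject match = mObjList.FirstOrDefault(x => x.ID == magicNumber);

// just testing for existence
bool found = mObjList.Any(x => x.ID == magicNumber);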
A:
Using a Lambda expression:
List<MyObject> list = new List<MyObject>();
// populate the list with objects..
return list.Find(o => o.Id == myCriteria);
A:
Put the code in a method and you save a temporary and a break (and you recycle code, as a bonus):
T Find<T>(IEnumerable<T> items, Predicate<T> p) {
foreach (T item in items)
if (p(item))
return item;
    return default(T); // null for reference types
}
… but of course this method already exists anyway for Lists, even in .NET 2.0.
A:
Evidently the performance hit of anonymous delegates is pretty significant.
Test code:
static void Main(string[] args)
{
for (int kk = 0; kk < 10; kk++)
{
List<int> tmp = new List<int>();
for (int i = 0; i < 100; i++)
tmp.Add(i);
int sum = 0;
long start = DateTime.Now.Ticks;
for (int i = 0; i < 1000000; i++)
sum += tmp.Find(delegate(int x) { return x == 3; });
Console.WriteLine("Anonymous delegates: " + (DateTime.Now.Ticks - start));
start = DateTime.Now.Ticks;
sum = 0;
for (int i = 0; i < 1000000; i++)
{
int match = 0;
for (int j = 0; j < tmp.Count; j++)
{
if (tmp[j] == 3)
{
match = tmp[j];
break;
}
}
sum += match;
}
Console.WriteLine("Classic C++ Style: " + (DateTime.Now.Ticks - start));
Console.WriteLine();
}
}
Results:
Anonymous delegates: 710000
Classic C++ Style: 340000
Anonymous delegates: 630000
Classic C++ Style: 320000
Anonymous delegates: 630000
Classic C++ Style: 330000
Anonymous delegates: 630000
Classic C++ Style: 320000
Anonymous delegates: 610000
Classic C++ Style: 340000
Anonymous delegates: 630000
Classic C++ Style: 330000
Anonymous delegates: 650000
Classic C++ Style: 330000
Anonymous delegates: 620000
Classic C++ Style: 330000
Anonymous delegates: 620000
Classic C++ Style: 340000
Anonymous delegates: 620000
Classic C++ Style: 400000
In every case, using anonymous delegates is about 100% slower than the other way.
| Cleanest Way to Find a Match In a List | What is the best way to find something in a list? I know LINQ has some nice tricks, but let's also get suggestions for C# 2.0. Lets get the best refactorings for this common code pattern.
Currently I use code like this:
// mObjList is a List<MyObject>
MyObject match = null;
foreach (MyObject mo in mObjList)
{
if (Criteria(mo))
{
match = mo;
break;
}
}
or
// mObjList is a List<MyObject>
bool foundIt = false;
foreach (MyObject mo in mObjList)
{
if (Criteria(mo))
{
foundIt = true;
break;
}
}
| [
"\n@ Konrad: So how do you use it? Let's say I want to match mo.ID to magicNumber.\n\nIn C# 2.0 you'd write:\nresult = mObjList.Find(delegate(int x) { return x.ID == magicNumber; });\n\n3.0 knows lambdas:\nresult = mObjList.Find(x => x.ID == magicNumber);\n\n",
"Using a Lambda expression:\nList<MyObject> list = new List<MyObject>();\n\n// populate the list with objects..\n\nreturn list.Find(o => o.Id == myCriteria);\n\n",
"Put the code in a method and you save a temporary and a break (and you recycle code, as a bonus):\nT Find<T>(IEnumerable<T> items, Predicate<T> p) {\n foreach (T item in items)\n if (p(item))\n return item;\n\n return null;\n}\n\n… but of course this method already exists anyway for Lists, even in .NET 2.0.\n",
"Evidently the performance hit of anonymous delegates is pretty significant.\nTest code:\n static void Main(string[] args)\n {\n for (int kk = 0; kk < 10; kk++)\n {\n List<int> tmp = new List<int>();\n for (int i = 0; i < 100; i++)\n tmp.Add(i);\n int sum = 0;\n long start = DateTime.Now.Ticks;\n for (int i = 0; i < 1000000; i++)\n sum += tmp.Find(delegate(int x) { return x == 3; });\n Console.WriteLine(\"Anonymous delegates: \" + (DateTime.Now.Ticks - start));\n\n\n start = DateTime.Now.Ticks;\n sum = 0;\n for (int i = 0; i < 1000000; i++)\n {\n int match = 0;\n for (int j = 0; j < tmp.Count; j++)\n {\n if (tmp[j] == 3)\n {\n match = tmp[j];\n break;\n }\n }\n sum += match;\n }\n Console.WriteLine(\"Classic C++ Style: \" + (DateTime.Now.Ticks - start));\n Console.WriteLine();\n }\n }\n\nResults:\nAnonymous delegates: 710000\nClassic C++ Style: 340000\n\nAnonymous delegates: 630000\nClassic C++ Style: 320000\n\nAnonymous delegates: 630000\nClassic C++ Style: 330000\n\nAnonymous delegates: 630000\nClassic C++ Style: 320000\n\nAnonymous delegates: 610000\nClassic C++ Style: 340000\n\nAnonymous delegates: 630000\nClassic C++ Style: 330000\n\nAnonymous delegates: 650000\nClassic C++ Style: 330000\n\nAnonymous delegates: 620000\nClassic C++ Style: 330000\n\nAnonymous delegates: 620000\nClassic C++ Style: 340000\n\nAnonymous delegates: 620000\nClassic C++ Style: 400000\n\nIn every case, using anonymous delegates is about 100% slower than the other way.\n"
] | [
17,
4,
1,
1
] | [] | [] | [
"c#",
"refactoring"
] | stackoverflow_0000023787_c#_refactoring.txt |
Q:
The difference between loops
It's about PHP but I've no doubt many of the same comments will apply to other languages.
Simply put, what are the differences in the different types of loop for PHP? Is one faster/better than the others or should I simply put in the most readable loop?
for ($i = 0; $i < 10; $i++)
{
# code...
}
foreach ($array as $index => $value)
{
# code...
}
do
{
# code...
}
while ($flag == false);
A:
for and while loops are entry-condition loops. They evaluate the condition first, so the statement block associated with the loop won't run even once if the condition is not met.
The statements inside this for loop block will run 10 times; the value of $i will go from 0 to 9:
for ($i = 0; $i < 10; $i++)
{
# code...
}
Same thing done with while loop:
$i = 0;
while ($i < 10)
{
# code...
    $i++;
}
A do-while loop is an exit-condition loop. It's guaranteed to execute once; after that it evaluates the condition before repeating the block:
do
{
# code...
}
while ($flag == false);
foreach is used to access array elements from start to end. At the beginning of the foreach loop, the internal pointer of the array is set to the first element of the array; on each subsequent step it is advanced to the next element, and so on until the array ends. Inside the loop block, the value of the current array item is available as $value and the key of the current item is available as $index.
foreach ($array as $index => $value)
{
# code...
}
You could do the same thing with a while loop, like this:
while (current($array))
{
$index = key($array); // to get key of the current element
$value = $array[$index]; // to get value of current element
# code ...
next($array); // advance the internal array pointer of $array
}
And lastly: The PHP Manual is your friend :)
A:
This is CS101, but since no one else has mentioned it, while loops evaluate their condition before the code block, and do-while evaluates after the code block, so do-while loops are always guaranteed to run their code block at least once, regardless of the condition.
A:
PHP Benchmarks
A:
@brendan:
The article you cited is seriously outdated and the information is just plain wrong. Especially the last point (use for instead of foreach) is misleading and the justification offered in the article no longer applies to modern versions of .NET.
While it's true that the IEnumerator uses virtual calls, these can actually be inlined by a modern compiler. Furthermore, .NET now knows generics and strongly typed enumerators.
There are a lot of performance tests out there that prove conclusively that for is generally no faster than foreach. Here's an example.
A:
I use the first loop when iterating over a conventional (indexed?) array and the foreach loop when dealing with an associative array. It just seems natural and helps the code flow and be more readable, in my opinion. As for do...while loops, I use those when I have to do more than just flip through an array.
I'm not sure of any performance benefits, though.
A:
Performance is not significantly better in either case. While is useful for more complex tasks than iterating, but for and while are functionally equivalent.
Foreach is nice, but has one important caveat: you can't modify the enumerable you're iterating. So no removing, adding or replacing entries to/in it. Modifying entries (like changing their properties) is OK, of course.
A:
With a foreach loop, a copy of the original array is made in memory to use inside. You shouldn't use them on large structures; a simple for loop is a better choice. You can use a while loop more efficiently on a large non-numerically indexed structure like this:
while(list($key, $value) = each($array)) {
But that approach is particularly ugly for a simple small structure.
while loops are better suited for looping through streams, or as in the following example that you see very frequently in PHP:
while ($row = mysql_fetch_array($result)) {
Almost all of the time the different loops are interchangeable, and it will come down to either a) efficiency, or b) clarity.
If you know the efficiency trade-offs of the different types of loops, then yes, to answer your original question: use the one that looks the most clean.
A:
In terms of performance, a foreach is more expensive than a for:
http://forums.asp.net/p/1041090/1457897.aspx
A:
Each looping construct serves a different purpose.
for - This is used to loop for a specific number of iterations.
foreach - This is used to loop through all of the values in a collection.
while - This is used to loop until you meet a condition.
Of the three, "while" will most likely provide the best performance in most situations. Of course, if you do something like the following, you are basically rewriting the "for" loop (which in c# is slightly more performant).
$count = 0;
do
{
...
$count++;
}
while ($count < 10);
They all have different basic purposes, but they can also be used in somewhat the same way. It completely depends on the specific problem that you are trying to solve.
A:
With a foreach loop, a copy of the original array is made in memory to use inside.
Foreach is nice, but has one important caveat: you can't modify the enumerable you're iterating.
Both of those won't be a problem if you pass by reference instead of value:
foreach ($array as &$value) {
I think this has been allowed since PHP 5.
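A quick sketch of that by-reference form (the sample values are made up; the trailing unset() avoids the classic dangling-reference gotcha):
$array = array(1, 2, 3);

foreach ($array as &$value) {
    $value *= 2;        // modifies the elements of the original array
}
unset($value);          // break the reference left behind by the loop

print_r($array);        // Array ( [0] => 2 [1] => 4 [2] => 6 )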
A:
When accessing the elements of an array, for clarity I would use a foreach whenever possible, and only use a for if you need the actual index values (for example, the same index in multiple arrays). This also minimizes the chance of typos, since for loops make them all too easy. In general, PHP might not be the place to be worrying too much about performance. And last but not least, for and foreach have (or should have; I'm not a PHP-er) the same big-O time (O(n)), so at most you are looking at slightly more memory usage or a slight constant or linear hit in time.
| The difference between loops | It's about PHP but I've no doubt many of the same comments will apply to other languages.
Simply put, what are the differences in the different types of loop for PHP? Is one faster/better than the others or should I simply put in the most readable loop?
for ($i = 0; $i < 10; $i++)
{
# code...
}
foreach ($array as $index => $value)
{
# code...
}
do
{
# code...
}
while ($flag == false);
| [
"For loop and While loops are entry condition loops. They evaluate condition first, so the statement block associated with the loop won't run even once if the condition fails to meet \nThe statements inside this for loop block will run 10 times, the value of $i will be 0 to 9;\nfor ($i = 0; $i < 10; $i++)\n{\n # code...\n}\n\nSame thing done with while loop:\n$i = 0;\nwhile ($i < 10)\n{\n # code...\n $i++\n}\n\nDo-while loop is exit-condition loop. It's guaranteed to execute once, then it will evaluate condition before repeating the block\ndo\n{\n # code...\n}\nwhile ($flag == false);\n\nforeach is used to access array elements from start to end. At the beginning of foreach loop, the internal pointer of the array is set to the first element of the array, in next step it is set to the 2nd element of the array and so on till the array ends. In the loop block The value of current array item is available as $value and the key of current item is available as $index.\nforeach ($array as $index => $value)\n{\n # code...\n}\n\nYou could do the same thing with while loop, like this \nwhile (current($array))\n{\n $index = key($array); // to get key of the current element\n $value = $array[$index]; // to get value of current element\n\n # code ... \n\n next($array); // advance the internal array pointer of $array\n}\n\nAnd lastly: The PHP Manual is your friend :)\n",
"This is CS101, but since no one else has mentioned it, while loops evaluate their condition before the code block, and do-while evaluates after the code block, so do-while loops are always guaranteed to run their code block at least once, regardless of the condition.\n",
"PHP Benchmarks\n",
"@brendan:\nThe article you cited is seriously outdated and the information is just plain wrong. Especially the last point (use for instead of foreach) is misleading and the justification offered in the article no longer applies to modern versions of .NET.\nWhile it's true that the IEnumerator uses virtual calls, these can actually be inlined by a modern compiler. Furthermore, .NET now knows generics and strongly typed enumerators.\nThere are a lot of performance tests out there that prove conclusively that for is generally no faster than foreach. Here's an example.\n",
"I use the first loop when iterating over a conventional (indexed?) array and the foreach loop when dealing with an associative array. It just seems natural and helps the code flow and be more readable, in my opinion. As for do...while loops, I use those when I have to do more than just flip through an array.\nI'm not sure of any performance benefits, though.\n",
"Performance is not significantly better in either case. While is useful for more complex tasks than iterating, but for and while are functionally equivalent.\nForeach is nice, but has one important caveat: you can't modify the enumerable you're iterating. So no removing, adding or replacing entries to/in it. Modifying entries (like changing their properties) is OK, of course.\n",
"With a foreach loop, a copy of the original array is made in memory to use inside. You shouldn't use them on large structures; a simple for loop is a better choice. You can use a while loop more efficiently on a large non-numerically indexed structure like this:\nwhile(list($key, $value) = each($array)) {\n\nBut that approach is particularly ugly for a simple small structure.\nwhile loops are better suited for looping through streams, or as in the following example that you see very frequently in PHP:\nwhile ($row = mysql_fetch_array($result)) {\n\nAlmost all of the time the different loops are interchangeable, and it will come down to either a) efficiency, or b) clarity.\nIf you know the efficiency trade-offs of the different types of loops, then yes, to answer your original question: use the one that looks the most clean.\n",
"In regards to performance, a foreach is more consuming than a for\nhttp://forums.asp.net/p/1041090/1457897.aspx\n",
"Each looping construct serves a different purpose.\nfor - This is used to loop for a specific number of iterations. \nforeach - This is used to loop through all of the values in a collection.\nwhile - This is used to loop until you meet a condition.\nOf the three, \"while\" will most likely provide the best performance in most situations. Of course, if you do something like the following, you are basically rewriting the \"for\" loop (which in c# is slightly more performant).\n$count = 0;\ndo\n{\n ...\n $count++;\n}\nwhile ($count < 10); \n\nThey all have different basic purposes, but they can also be used in somewhat the same way. It completely depends on the specific problem that you are trying to solve. \n",
"\nWith a foreach loop, a copy of the original array is made in memory to use inside.\nForeach is nice, but has one important caveat: you can't modify the enumerable you're iterating.\n\nBoth of those won't be a problem if you pass by reference instead of value:\n foreach ($array as &$value) {\n\nI think this has been allowed since PHP 5.\n",
"When accessing the elements of an array, for clarity I would use a foreach whenever possible, and only use a for if you need the actual index values (for example, the same index in multiple arrays). This also minimizes the chance for typo mistakes since for loops make this all too easy. In general, PHP might not be the place be worrying too much about performance. And last but not least, for and foreach have (or should have; I'm not a PHP-er) the same Big-O time (O(n)) so you are looking possibly at a small amount more of memory usage or a slight constant or linear hit in time.\n"
] | [
10,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"loops",
"php"
] | stackoverflow_0000022801_loops_php.txt |
Q:
What's a good way to check if two datetimes are on the same calendar day in TSQL?
Here is the issue I am having: I have a large query that needs to compare datetimes in the where clause to see if two dates are on the same day. My current solution, which sucks, is to send the datetimes into a UDF to convert them to midnight of the same day, and then check those dates for equality. When it comes to the query plan, this is a disaster, as are almost all UDFs in joins or where clauses. This is one of the only places in my application that I haven't been able to root out the functions and give the query optimizer something it can actually use to locate the best index.
In this case, merging the function code back into the query seems impractical.
I think I am missing something simple here.
Here's the function for reference.
if not exists (select * from dbo.sysobjects
where id = object_id(N'dbo.f_MakeDate') and
type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
exec('create function dbo.f_MakeDate() returns int as
begin declare @retval int return @retval end')
go
alter function dbo.f_MakeDate
(
@Day datetime,
@Hour int,
@Minute int
)
returns datetime
as
/*
Creates a datetime using the year-month-day portion of @Day, and the
@Hour and @Minute provided
*/
begin
declare @retval datetime
set @retval = cast(
cast(datepart(m, @Day) as varchar(2)) +
'/' +
cast(datepart(d, @Day) as varchar(2)) +
'/' +
cast(datepart(yyyy, @Day) as varchar(4)) +
' ' +
cast(@Hour as varchar(2)) +
':' +
cast(@Minute as varchar(2)) as datetime)
return @retval
end
go
To complicate matters, I am joining on time zone tables to check the date against the local time, which could be different for every row:
where
dbo.f_MakeDate(dateadd(hh, tz.Offset +
case when ds.LocalTimeZone is not null
then 1 else 0 end, t.TheDateINeedToCheck), 0, 0) = @activityDateMidnight
[Edit]
I'm incorporating @Todd's suggestion:
where datediff(day, dateadd(hh, tz.Offset +
case when ds.LocalTimeZone is not null
then 1 else 0 end, t.TheDateINeedToCheck), @ActivityDate) = 0
My misconception about how datediff works (the same day of year in consecutive years yields 366, not 0 as I expected) caused me to waste a lot of effort.
But the query plan didn't change. I think I need to go back to the drawing board with the whole thing.
A:
This is much more concise:
where
datediff(day, date1, date2) = 0
A:
You pretty much have to keep the left side of your where clause clean. So, normally, you'd do something like:
WHERE MyDateTime >= @activityDateMidnight
AND MyDateTime < (@activityDateMidnight + 1)
(Some folks prefer DATEADD(d, 1, @activityDateMidnight) instead - but it's the same thing).
The TimeZone table complicates matter a bit though. It's a little unclear from your snippet, but it looks like t.TheDateInTable is in GMT with a Time Zone identifier, and that you're then adding the offset to compare against @activityDateMidnight - which is in local time. I'm not sure what ds.LocalTimeZone is, though.
If that's the case, then you need to get @activityDateMidnight into GMT instead.
A:
where
year(date1) = year(date2)
and month(date1) = month(date2)
and day(date1) = day(date2)
A:
Make sure to read Only In A Database Can You Get 1000% + Improvement By Changing A Few Lines Of Code so that you are sure that the optimizer can utilize the index effectively when messing with dates
A:
this will remove the time component from a date for you:
select dateadd(d, datediff(d, 0, current_timestamp), 0)
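Applied to the comparison in the question (ignoring the time zone join for brevity, and reusing the column and variable names from above), that trick would look roughly like this; it's still worth checking the query plan, since wrapping the column in functions can prevent an index seek:
where dateadd(d, datediff(d, 0, t.TheDateINeedToCheck), 0)
    = dateadd(d, datediff(d, 0, @ActivityDate), 0)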
A:
Eric Z Beard:
I do store all dates in GMT. Here's the use case: something happened at 11:00 PM EST on the 1st, which is the 2nd GMT. I want to see activity for the 1st, and I am in EST so I will want to see the 11PM activity. If I just compared raw GMT datetimes, I would miss things. Each row in the report can represent an activity from a different time zone.
Right, but when you say you're interested in activity for Jan 1st 2008 EST:
SELECT @activityDateMidnight = '1/1/2008', @activityDateTZ = 'EST'
you just need to convert that to GMT (I'm ignoring the complication of querying for the day before EST goes to EDT, or vice versa):
Table: TimeZone
Fields: TimeZone, Offset
Values: EST, -4
--Multiply by -1, since we're converting EST to GMT.
--Offsets are to go from GMT to EST.
SELECT @activityGmtBegin = DATEADD(hh, Offset * -1, @activityDateMidnight)
FROM TimeZone
WHERE TimeZone = @activityDateTZ
which should give you '1/1/2008 4:00 AM'. Then, you can just search in GMT:
SELECT * FROM EventTable
WHERE
EventTime >= @activityGmtBegin --1/1/2008 4:00 AM
AND EventTime < (@activityGmtBegin + 1) --1/2/2008 4:00 AM
The event in question is stored with a GMT EventTime of 1/2/2008 3:00 AM. You don't even need the TimeZone in the EventTable (for this purpose, at least).
Since EventTime is not in a function, this is a straight index scan - which should be pretty efficient. Make EventTime your clustered index, and it'll fly. ;)
Personally, I'd have the app convert the search time into GMT before running the query.
A:
You're spoilt for choice in terms of options here. If you are using Sybase or SQL Server 2008 you can create variables of type date and assign them your datetime values. The database engine gets rid of the time for you. Here's a quick and dirty test to illustrate (Code is in Sybase dialect):
declare @date1 date
declare @date2 date
set @date1='2008-1-1 10:00'
set @date2='2008-1-1 22:00'
if @date1=@date2
print 'Equal'
else
print 'Not equal'
For SQL 2005 and earlier what you can do is convert the date to a varchar in a format that does not have the time component. For instance the following returns 2008.08.22
select convert(varchar,'2008-08-22 18:11:14.133',102)
The 102 part specifies the formatting (Books online can list for you all the available formats)
So, what you can do is write a function that takes a datetime and extracts the date element and discards the time. Like so:
create function MakeDate (@InputDate datetime) returns datetime as
begin
return cast(convert(varchar,@InputDate,102) as datetime);
end
You can then use the function for comparisons
Select * from Orders where dbo.MakeDate(OrderDate) = dbo.MakeDate(DeliveryDate)
A:
Eric Z Beard:
the activity date is meant to indicate the local time zone, but not a specific one
Okay - back to the drawing board. Try this:
where t.TheDateINeedToCheck BETWEEN
  dateadd(hh, (tz.Offset + ISNULL(ds.LocalTimeZone, 0)) * -1, @ActivityDate)
  AND
  dateadd(hh, (tz.Offset + ISNULL(ds.LocalTimeZone, 0)) * -1, (@ActivityDate + 1))
which will translate the @ActivityDate to local time, and compare against that. That's your best chance for using an index, though I'm not sure it'll work - you should try it and check the query plan.
The next option would be an indexed view, with an indexed, computed TimeINeedToCheck in local time. Then you just go back to:
where v.TheLocalDateINeedToCheck BETWEEN @ActivityDate AND (@ActivityDate + 1)
which would definitely use the index - though you have a slight overhead on INSERT and UPDATE then.
A:
I would use the dayofyear function of datepart:
Select *
from mytable
where datepart(dy,date1) = datepart(dy,date2)
and
year(date1) = year(date2) --assuming you want the same year too
See the datepart reference here.
A:
Regarding timezones, yet one more reason to store all dates in a single timezone (preferably UTC). Anyway, I think the answers using datediff, datepart and the different built-in date functions are your best bet.
| What's a good way to check if two datetimes are on the same calendar day in TSQL? | Here is the issue I am having: I have a large query that needs to compare datetimes in the where clause to see if two dates are on the same day. My current solution, which sucks, is to send the datetimes into a UDF to convert them to midnight of the same day, and then check those dates for equality. When it comes to the query plan, this is a disaster, as are almost all UDFs in joins or where clauses. This is one of the only places in my application that I haven't been able to root out the functions and give the query optimizer something it can actually use to locate the best index.
In this case, merging the function code back into the query seems impractical.
I think I am missing something simple here.
Here's the function for reference.
if not exists (select * from dbo.sysobjects
where id = object_id(N'dbo.f_MakeDate') and
type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
exec('create function dbo.f_MakeDate() returns int as
begin declare @retval int return @retval end')
go
alter function dbo.f_MakeDate
(
@Day datetime,
@Hour int,
@Minute int
)
returns datetime
as
/*
Creates a datetime using the year-month-day portion of @Day, and the
@Hour and @Minute provided
*/
begin
declare @retval datetime
set @retval = cast(
cast(datepart(m, @Day) as varchar(2)) +
'/' +
cast(datepart(d, @Day) as varchar(2)) +
'/' +
cast(datepart(yyyy, @Day) as varchar(4)) +
' ' +
cast(@Hour as varchar(2)) +
':' +
cast(@Minute as varchar(2)) as datetime)
return @retval
end
go
To complicate matters, I am joining on time zone tables to check the date against the local time, which could be different for every row:
where
dbo.f_MakeDate(dateadd(hh, tz.Offset +
case when ds.LocalTimeZone is not null
then 1 else 0 end, t.TheDateINeedToCheck), 0, 0) = @activityDateMidnight
[Edit]
I'm incorporating @Todd's suggestion:
where datediff(day, dateadd(hh, tz.Offset +
case when ds.LocalTimeZone is not null
then 1 else 0 end, t.TheDateINeedToCheck), @ActivityDate) = 0
My misconception about how datediff works (the same day of year in consecutive years yields 366, not 0 as I expected) caused me to waste a lot of effort.
But the query plan didn't change. I think I need to go back to the drawing board with the whole thing.
| [
"This is much more concise:\nwhere \n datediff(day, date1, date2) = 0\n\n",
"You pretty much have to keep the left side of your where clause clean. So, normally, you'd do something like:\nWHERE MyDateTime >= @activityDateMidnight \n AND MyDateTime < (@activityDateMidnight + 1)\n\n(Some folks prefer DATEADD(d, 1, @activityDateMidnight) instead - but it's the same thing).\nThe TimeZone table complicates matter a bit though. It's a little unclear from your snippet, but it looks like t.TheDateInTable is in GMT with a Time Zone identifier, and that you're then adding the offset to compare against @activityDateMidnight - which is in local time. I'm not sure what ds.LocalTimeZone is, though.\nIf that's the case, then you need to get @activityDateMidnight into GMT instead.\n",
"where\nyear(date1) = year(date2)\nand month(date1) = month(date2)\nand day(date1) = day(date2)\n\n",
"Make sure to read Only In A Database Can You Get 1000% + Improvement By Changing A Few Lines Of Code so that you are sure that the optimizer can utilize the index effectively when messing with dates\n",
"this will remove time component from a date for you: \nselect dateadd(d, datediff(d, 0, current_timestamp), 0)\n\n",
"Eric Z Beard:\n\nI do store all dates in GMT. Here's the use case: something happened at 11:00 PM EST on the 1st, which is the 2nd GMT. I want to see activity for the 1st, and I am in EST so I will want to see the 11PM activity. If I just compared raw GMT datetimes, I would miss things. Each row in the report can represent an activity from a different time zone.\n\nRight, but when you say you're interested in activity for Jan 1st 2008 EST:\nSELECT @activityDateMidnight = '1/1/2008', @activityDateTZ = 'EST'\n\nyou just need to convert that to GMT (I'm ignoring the complication of querying for the day before EST goes to EDT, or vice versa):\nTable: TimeZone\nFields: TimeZone, Offset\nValues: EST, -4\n\n--Multiply by -1, since we're converting EST to GMT.\n--Offsets are to go from GMT to EST.\nSELECT @activityGmtBegin = DATEADD(hh, Offset * -1, @activityDateMidnight)\nFROM TimeZone\nWHERE TimeZone = @activityDateTZ\n\nwhich should give you '1/1/2008 4:00 AM'. Then, you can just search in GMT:\nSELECT * FROM EventTable\nWHERE \n EventTime >= @activityGmtBegin --1/1/2008 4:00 AM\n AND EventTime < (@activityGmtBegin + 1) --1/2/2008 4:00 AM\n\nThe event in question is stored with a GMT EventTime of 1/2/2008 3:00 AM. You don't even need the TimeZone in the EventTable (for this purpose, at least). \nSince EventTime is not in a function, this is a straight index scan - which should be pretty efficient. Make EventTime your clustered index, and it'll fly. ;)\nPersonally, I'd have the app convert the search time into GMT before running the query.\n",
"You're spoilt for choice in terms of options here. If you are using Sybase or SQL Server 2008 you can create variables of type date and assign them your datetime values. The database engine gets rid of the time for you. Here's a quick and dirty test to illustrate (Code is in Sybase dialect):\ndeclare @date1 date\ndeclare @date2 date\nset @date1='2008-1-1 10:00'\nset @date2='2008-1-1 22:00'\nif @date1=@date2\n print 'Equal'\nelse\n print 'Not equal'\n\nFor SQL 2005 and earlier what you can do is convert the date to a varchar in a format that does not have the time component. For instance the following returns 2008.08.22\nselect convert(varchar,'2008-08-22 18:11:14.133',102)\n\nThe 102 part specifies the formatting (Books online can list for you all the available formats)\nSo, what you can do is write a function that takes a datetime and extracts the date element and discards the time. Like so:\ncreate function MakeDate (@InputDate datetime) returns datetime as\nbegin\n return cast(convert(varchar,@InputDate,102) as datetime);\nend\n\nYou can then use the function for companions\nSelect * from Orders where dbo.MakeDate(OrderDate) = dbo.MakeDate(DeliveryDate)\n\n",
"Eric Z Beard:\n\nthe activity date is meant to indicate the local time zone, but not a specific one\n\nOkay - back to the drawing board. Try this:\nwhere t.TheDateINeedToCheck BETWEEN (\n dateadd(hh, (tz.Offset + ISNULL(ds.LocalTimeZone, 0)) * -1, @ActivityDate)\n AND\n dateadd(hh, (tz.Offset + ISNULL(ds.LocalTimeZone, 0)) * -1, (@ActivityDate + 1))\n)\n\nwhich will translate the @ActivityDate to local time, and compare against that. That's your best chance for using an index, though I'm not sure it'll work - you should try it and check the query plan.\nThe next option would be an indexed view, with an indexed, computed TimeINeedToCheck in local time. Then you just go back to:\nwhere v.TheLocalDateINeedToCheck BETWEEN @ActivityDate AND (@ActivityDate + 1)\n\nwhich would definitely use the index - though you have a slight overhead on INSERT and UPDATE then.\n",
"I would use the dayofyear function of datepart:\n\nSelect *\nfrom mytable\nwhere datepart(dy,date1) = datepart(dy,date2)\nand\nyear(date1) = year(date2) --assuming you want the same year too\n\nSee the datepart reference here.\n",
"Regarding timezones, yet one more reason to store all dates in a single timezone (preferably UTC). Anyway, I think the answers using datediff, datepart and the different built-in date functions are your best bet.\n"
] | [
76,
21,
6,
4,
2,
2,
1,
1,
0,
0
] | [] | [] | [
"datetime",
"sql",
"sql_server",
"tsql",
"user_defined_functions"
] | stackoverflow_0000022570_datetime_sql_sql_server_tsql_user_defined_functions.txt |
Q:
Silverlight programmatic access to Sony RZ30N Video Feed
I would like to bypass the web-server functionality of a Sony SNC-RZ30N network attached web cam and display the video feed in a Silverlight application.
I can't seem to find any examples of interfacing with the camera programmatically.
Any leads would be much appreciated. Thx.
Update 09/09/2008: Found a good site with Javascript examples to control the camera, but still no means to embed the video in an iFrame or the like:
http://www2.zdo.com/archives/3-JavaScript-API-to-Control-SONY-SNC-RZ30N-Network-Camera.html
Doug
A:
I don't know the details of the Sony network camera and the server-side software. But what do you mean by web-server functionality - is that the UI that gets served up to the users in the form of an HTML page? Or is it something more, like a server capturing the video stream and transcoding it?
I think the direction you need to take is to first find the URL end-point of your video stream. Since it's a network camera, I assume the camera has a built-in IP stack/HTTP server serving up the video stream. Once you have that feed you probably have to transcode it into a video format consumable by Silverlight. There are multiple tools you can use, but for Silverlight the preferred tool is Microsoft Expression Encoder. It supports live transcoding of webcam video streams, and I think it supports both DirectShow devices and video streams.
| Silverlight programmatic access to Sony RZ30N Video Feed | I would like to bypass the web-server functionality of a Sony SNC-RZ30N network attached web cam and display the video feed in a Silverlight application.
I can't seem to find any examples of interfacing with the camera programatically.
Any leads would be much appreciated. Thx.
Update 09/09/2008: Found a good site with Javascript examples to control the camera, but still no means to embed the video in an iFrame or the like:
http://www2.zdo.com/archives/3-JavaScript-API-to-Control-SONY-SNC-RZ30N-Network-Camera.html
Doug
| [
"I don't know the details of the Sony network camera and the server side software. But what do you mean by web-server functionality - is that the UI that get served up to the users in form of a HTML page? Or is it something more, like a server capturing the video stream and transcoding it?\nI think the direction you need to take is to first find the URL end-point of your video stream. Since it's a network camera I assume the camera has a built in IP-stack/HTTP server serving up the video stream. Once you have that feed you probably have to transcode it into a video format consumable by Silverlight. There are multiple tools you can use, but for Silverlight the preferred tool Microsoft Expression Encoder. It supports live transcoding of webcam video streams. I think it supports both Direct Show devices as well as video streams.\n"
] | [
1
] | [] | [] | [
"silverlight",
"streaming",
"video",
"webcam"
] | stackoverflow_0000023539_silverlight_streaming_video_webcam.txt |
Q:
What are OpenGL extensions, and what are the benefits/tradeoffs of using them?
In relation to this question on Using OpenGL extensions, what's the purpose of these extension functions? Why would I want to use them? Further, are there any tradeoffs or gotchas associated with using them?
A:
The OpenGL standard allows individual vendors to provide additional functionality through extensions as new technology is created. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions.
Each vendor has an alphabetic abbreviation that is used in naming their new functions and constants. For example, NVIDIA's abbreviation (NV) is used in defining their proprietary function glCombinerParameterfvNV() and their constant GL_NORMAL_MAP_NV.
It may happen that more than one vendor agrees to implement the same extended functionality. In that case, the abbreviation EXT is used. It may further happen that the Architecture Review Board "blesses" the extension. It then becomes known as a standard extension, and the abbreviation ARB is used. The first ARB extension was GL_ARB_multitexture, introduced in version 1.2.1. Following the official extension promotion path, multitexturing is no longer an optionally implemented ARB extension, but has been a part of the OpenGL core API since version 1.3.
Before using an extension a program must first determine its availability, and then obtain pointers to any new functions the extension defines. The mechanism for doing this is platform-specific and libraries such as GLEW and GLEE exist to simplify the process.
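As a rough illustration of what that looks like with GLEW (this assumes an OpenGL context has already been created by your windowing code; the wrapper function name is just for the example):
#include <stdio.h>
#include <GL/glew.h>

/* Call once, after the OpenGL context exists. */
void init_extensions(void)
{
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "GLEW init failed: %s\n", (const char *) glewGetErrorString(err));
        return;
    }

    /* Test for a specific extension before using its entry points. */
    if (GLEW_ARB_vertex_buffer_object) {
        GLuint vbo;
        glGenBuffersARB(1, &vbo);   /* function pointer supplied by GLEW */
    }
}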
A:
Extensions are, in general, a way for graphics card vendors to add new functionality to OpenGL without having to wait until the next revision of the OpenGL spec. There are different types of extensions:
Vendor extension - only one vendor provides a certain type of functionality.
Example: NV_vertex_program
Multivendor extension - multiple vendors have gotten together and agreed on the functionality.
Example: EXT_vertex_program
ARB extension - the OpenGL Architecture Review Board has blessed the extension. You have a reasonable expectation that this type of extension will be around for a while.
Example: ARB_vertex_program
Extensions don't have to go through all of these steps. Sometimes an extension is only ever implemented by one vendor, before hardware designs go a different way and the extension is abandoned. Other times, an extension might make it as far as ARB status before everyone decides there's a better way. (The ARB_vertex_program approach, for instance, was set aside in favor of the high-level shading language approach of ARB_vertex_shader when it came time to roll shaders into the core OpenGL spec.) Even ARB extensions don't last forever; I wouldn't write something today requiring ARB_matrix_palette, for instance.
All of that having been said, it's a very good idea to keep up to date on extensions, in particular the latest ARB and EXT extensions. In the past it has been true that some of the 'fast paths' through the hardware were only accessible via extensions. Likewise, if you want to know what all functionality a piece of hardware can do, there's no better place to look than in a vendor-specific extension.
If you're just getting started in OpenGL, I'd recommend investigating:
ARB_vertex_buffer_object (vertices)
ARB_vertex_shader / ARB_fragment_shader / ARB_shader_objects / GLSL spec (shaders)
More advanced:
ARB/EXT_framebuffer_object (off-screen rendering)
This is all functionality that's been rolled into core, but it can be good to see it in isolation so you can get a better feel for where its boundaries lie. (The core OpenGL spec seamlessly mixes the old with the new, so this can be pretty important if you want to stay on the fast path, and avoid the legacy and sometimes implemented in software paths.)
Whatever you do, make sure you have appropriate checks for the extensions you decide to use, and fallbacks where necessary. Even though your card may have a given extension, there's no guarantee that the extension will be present on another vendor's card, or even on another operating system with the same card.
A:
OpenGL Extensions are new features added to the OpenGL specification, they are added by the OpenGL standards body and by the various graphics card vendors. These are exposed to the programmer as new function calls or variables. Every new version of the OpenGL specification ships with newer functionality and (typically) includes all the previous functionality and extensions.
The real problem with OpenGL extensions exists only on Windows. Microsoft hasn't supported any extensions that have been released after OpenGL v1.1. The graphics card vendors overcome this by shipping their own version of this functionality through header files and libraries. However, using this can be a bit painful as the question you linked to shows. But this problem has mostly gone away with the popularity of GLEW, which takes care of wrapping all this into a easy-to-use package.
If you do use a very recent OpenGL extension, be aware that it may not be supported on older graphics hardware. Other than this, there's no other disadvantage to using these extensions. Most of the extensions which become standard are pretty darn useful and there's very little logic to not use them.
| What are OpenGL extensions, and what are the benefits/tradeoffs of using them? | In relation to this question on Using OpenGL extensions, what's the purpose of these extension functions? Why would I want to use them? Further, are there any tradeoffs or gotchas associated with using them?
| [
"The OpenGL standard allows individual vendors to provide additional functionality through extensions as new technology is created. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions. \nEach vendor has an alphabetic abbreviation that is used in naming their new functions and constants. For example, NVIDIA's abbreviation (NV) is used in defining their proprietary function glCombinerParameterfvNV() and their constant GL_NORMAL_MAP_NV.\nIt may happen that more than one vendor agrees to implement the same extended functionality. In that case, the abbreviation EXT is used. It may further happen that the Architecture Review Board \"blesses\" the extension. It then becomes known as a standard extension, and the abbreviation ARB is used. The first ARB extension was GL_ARB_multitexture, introduced in version 1.2.1. Following the official extension promotion path, multitexturing is no longer an optionally implemented ARB extension, but has been a part of the OpenGL core API since version 1.3.\nBefore using an extension a program must first determine its availability, and then obtain pointers to any new functions the extension defines. The mechanism for doing this is platform-specific and libraries such as GLEW and GLEE exist to simplify the process.\n",
"Extensions are, in general, a way for graphics card vendors to add new functionality to OpenGL without having to wait until the next revision of the OpenGL spec. There are different types of extensions:\n\nVendor extension - only one vendor provides a certain type of functionality.\n\n\nExample: NV_vertex_program\n\nMultivendor extension - multiple vendors have gotten together and agreed on the functionality.\n\n\nExample: EXT_vertex_program\n\nARB extension - the OpenGL Architecture Review Board has blessed the extension. You have a reasonable expectation that this type of extension will be around for a while.\n\n\nExample: ARB_vertex_program\n\n\nExtensions don't have to go through all of these steps. Sometimes an extension is only ever implemented by one vendor, before hardware designs go a different way and the extension is abandoned. Other times, an extension might make it as far as ARB status before everyone decides there's a better way. (The ARB_vertex_program approach, for instance, was set aside in favor of the high-level shading language approach of ARB_vertex_shader when it came time to roll shaders into the core OpenGL spec.) Even ARB extensions don't last forever; I wouldn't write something today requiring ARB_matrix_palette, for instance.\nAll of that having been said, it's a very good idea to keep up to date on extensions, in particular the latest ARB and EXT extensions. In the past it has been true that some of the 'fast paths' through the hardware were only accessible via extensions. Likewise, if you want to know what all functionality a piece of hardware can do, there's no better place to look than in a vendor-specific extension.\nIf you're just getting started in OpenGL, I'd recommend investigating:\n\nARB_vertex_buffer_object (vertices)\nARB_vertex_shader / ARB_fragment_shader / ARB_shader_objects / GLSL spec (shaders)\n\nMore advanced:\n\nARB/EXT_framebuffer_object (off-screen rendering)\n\nThis is all functionality that's been rolled into core, but it can be good to see it in isolation so you can get a better feel for where its boundaries lie. (The core OpenGL spec seamlessly mixes the old with the new, so this can be pretty important if you want to stay on the fast path, and avoid the legacy and sometimes implemented in software paths.)\nWhatever you do, make sure you have appropriate checks for the extensions you decide to use, and fallbacks where necessary. Even though your card may have a given extension, there's no guarantee that the extension will be present on another vendor's card, or even on another operating system with the same card.\n",
"OpenGL Extensions are new features added to the OpenGL specification, they are added by the OpenGL standards body and by the various graphics card vendors. These are exposed to the programmer as new function calls or variables. Every new version of the OpenGL specification ships with newer functionality and (typically) includes all the previous functionality and extensions.\nThe real problem with OpenGL extensions exists only on Windows. Microsoft hasn't supported any extensions that have been released after OpenGL v1.1. The graphics card vendors overcome this by shipping their own version of this functionality through header files and libraries. However, using this can be a bit painful as the question you linked to shows. But this problem has mostly gone away with the popularity of GLEW, which takes care of wrapping all this into a easy-to-use package.\nIf you do use a very recent OpenGL extension, be aware that it may not be supported on older graphics hardware. Other than this, there's no other disadvantage to using these extensions. Most of the extensions which become standard are pretty darn useful and there's very little logic to not use them.\n"
] | [
23,
8,
7
] | [] | [] | [
"opengl"
] | stackoverflow_0000017352_opengl.txt |
Q:
.NET Security Policy change by standard users?
The .NET Security Policy can be changed from a script by using CasPol.exe. Say I will be distributing an application to several users on a local network. Most of those users will be unprivileged, standard accounts, so they will not have the necessary permissions for the relevant command.
I think I shall be looking into domain logon scripts. Are there any alternative scenarios? Any solutions for networks without a domain?
Edit: I'm bound to use Framework version 2.0
A:
The latest version of .NET, 3.5 SP1, now allows you to run managed executables over a network share without using CasPol.
See this post
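Since the question is pinned to Framework 2.0, the policy change itself still has to be scripted with CasPol from an elevated context (a machine startup script rather than a per-user logon script, since standard users can't modify machine policy). The command would be along these lines; the share path and group name are placeholders, and the exact option spellings and parent code-group label should be verified against caspol -help on your machines:
%windir%\Microsoft.NET\Framework\v2.0.50727\caspol.exe -machine -addgroup 1 -url "file://\\server\appshare\*" FullTrust -name "AppShare_FullTrust"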
| .NET Security Policy change by standard users? | The .NET Security Policy can be changed from a script by using CasPol.exe. Say I will be distributing an application to several users on a local network. Most of those users will be unprivileged, standard accounts, so they will not have necessary permissions for the relevant command.
I think I shall be looking into domain logon scripts. Is there any alternative scenarios? Any solutions for networks without a domain?
Edit: I'm bound to use Framework version 2.0
| [
"The latest version of .Net 3.5 SP1 now allows you to run managed executables over a network share without using CasPol.\nSee this post\n"
] | [
1
] | [] | [] | [
".net",
"security",
"windows"
] | stackoverflow_0000023713_.net_security_windows.txt |
Q:
Sorting a composite collection
WPF doesn't support standard sorting or filtering behavior for views of CompositeCollections, so what would be a best practice for solving this problem?
There are two or more object collections of different types. You want to combine them into a single sortable and filterable collection (without having to manually implement sort or filter).
One of the approaches I've considered is to create a new object collection with only a few core properties, including the ones that I would want the collection sorted on, and an object instance of each type.
class MyCompositeObject
{
enum ObjectType;
DateTime CreatedDate;
string SomeAttribute;
myObjectType1 Obj1;
myObjectType2 Obj2;
}
class MyCompositeObjects : List<MyCompositeObject> { }
And then loop through my two object collections to build the new composite collection. Obviously this is a bit of a brute force method, but it would work. I'd get all the default view sorting and filtering behavior on my new composite object collection, and I'd be able to put a data template on it to display my list items properly depending on which type is actually stored in that composite item.
What suggestions are there for doing this in a more elegant way?
A:
"Brute force" method you mention is actually ideal solution. Mind you, all objects are in RAM, there is no I/O bottleneck, so you can pretty much sort and filter millions of objects in less than a second on any modern computer.
The most elegant way to work with collections is System.Linq namespace in .NET 3.5
Thanks - I also considered LINQ to
objects, but my concern there is loss
of flexibility for typed data
templates, which I need to display the
objects in my list.
If you can't predict at this moment how people will sort and filter your object collection, then you should look at the System.Linq.Expressions namespace to build your lambda expressions on demand at runtime (first you let the user build the expression, then you compile and run it, and at the end you use reflection to enumerate the results). It's trickier to wrap your head around, but it's an invaluable feature; probably (to me, definitely) an even more ground-breaking feature than LINQ itself.
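As a rough sketch of that runtime-expression idea (not something from the original answer), building a key selector for the CreatedDate member of MyCompositeObject by name and sorting with it might look like this, assuming CreatedDate is exposed as a public property or field; the member name string is the part you would let the user pick:
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Build "o => o.CreatedDate" at runtime from a member name, then compile it.
ParameterExpression param = Expression.Parameter(typeof(MyCompositeObject), "o");
MemberExpression member = Expression.PropertyOrField(param, "CreatedDate");
Func<MyCompositeObject, DateTime> key =
    Expression.Lambda<Func<MyCompositeObject, DateTime>>(member, param).Compile();

List<MyCompositeObject> myList = new List<MyCompositeObject>(); // populated as in the question
myList.Sort((a, b) => key(a).CompareTo(key(b)));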
A:
I'm not yet very familiar with WPF but I see this as a question about sorting and filtering List<T> collections.
(without having to manually implement sort or filter)
Would you reconsider implementing your own sort or filter functions? In my experience it is easy to use. The examples below use an anonymous delegate but you could easily define your own method or a class to implement a complex sort or filter. Such a class could even have properties to configure and change the sort and filter dynamically.
Use List<T>.Sort(Comparison<T> comparison) with your custom compare function:
// Sort according to the value of SomeAttribute
List<MyCompositeObject> myList = ...;
myList.Sort(delegate(MyCompositeObject a, MyCompositeObject b)
{
// return -1 if a < b
// return 0 if a == b
// return 1 if a > b
return a.SomeAttribute.CompareTo(b.SomeAttribute);
});
A similar approach for getting a sub-collection of items from the list.
Use List<T>.FindAll(Predicate<T> match) with your custom filter function:
// Select all objects where myObjectType1 and myObjectType2 are not null
List<MyCompositeObject> subset = myList.FindAll(delegate(MyCompositeObject a)
{
    // return true to include 'a' in the sub-collection
    return (a.myObjectType1 != null) && (a.myObjectType2 != null);
});
A:
Update: I found a much more elegant solution:
class MyCompositeObject
{
DateTime CreatedDate;
string SomeAttribute;
Object Obj1;
}
class MyCompositeObjects : List<MyCompositeObject> { }
I found that due to reflection, the specific type stored in Obj1 is resolved at runtime and the type-specific DataTemplate is applied as expected!
| Sorting a composite collection | So WPF doesn't support standard sorting or filtering behavior for views of CompositeCollections, so what would be a best practice for solving this problem.
There are two or more object collections of different types. You want to combine them into a single sortable and filterable collection (withing having to manually implement sort or filter).
One of the approaches I've considered is to create a new object collection with only a few core properties, including the ones that I would want the collection sorted on, and an object instance of each type.
class MyCompositeObject
{
enum ObjectType;
DateTime CreatedDate;
string SomeAttribute;
myObjectType1 Obj1;
myObjectType2 Obj2;
{
class MyCompositeObjects : List<MyCompositeObject> { }
And then loop through my two object collections to build the new composite collection. Obviously this is a bit of a brute force method, but it would work. I'd get all the default view sorting and filtering behavior on my new composite object collection, and I'd be able to put a data template on it to display my list items properly depending on which type is actually stored in that composite item.
What suggestions are there for doing this in a more elegant way?
| [
"\"Brute force\" method you mention is actually ideal solution. Mind you, all objects are in RAM, there is no I/O bottleneck, so you can pretty much sort and filter millions of objects in less than a second on any modern computer.\nThe most elegant way to work with collections is System.Linq namespace in .NET 3.5\n\nThanks - I also considered LINQ to\n objects, but my concern there is loss\n of flexibility for typed data\n templates, which I need to display the\n objects in my list.\n\nIf you can't predict at this moment how people will sort and filter your object collection, then you should look at System.Linq.Expressions namespace to build your lambda expressions on demand during runtime (first you let user to build expression, then compile, run and at the end you use reflection namespace to enumerate through results). It's more tricky to wrap your head around it but invaluable feature, probably (to me definitively) even more ground-breaking feature than LINQ itself.\n",
"I'm not yet very familiar with WPF but I see this as a question about sorting and filtering List<T> collections.\n\n(withing having to manually implement sort or filter)\n\nWould you reconsider implementing your own sort or filter functions? In my experience it is easy to use. The examples below use an anonymous delegate but you could easily define your own method or a class to implement a complex sort or filter. Such a class could even have properties to configure and change the sort and filter dynamically.\nUse List<T>.Sort(Comparison<T> comparison) with your custom compare function:\n// Sort according to the value of SomeAttribute\nList<MyCompositeObject> myList = ...;\nmyList.Sort(delegate(MyCompositeObject a, MyCompositeObject b) \n{\n // return -1 if a < b\n // return 0 if a == b\n // return 1 if a > b\n return a.SomeAttribute.CompareTo(b.SomeAttribute);\n};\n\nA similar approach for getting a sub-collection of items from the list.\nUse List<T>.FindAll(Predicate<T> match) with your custom filter function:\n// Select all objects where myObjectType1 and myObjectType2 are not null\nmyList.FindAll(delegate(MyCompositeObject a)\n{\n // return true to include 'a' in the sub-collection\n return (a.myObjectType1 != null) && (a.myObjectType2 != null);\n}\n\n",
"Update: I found a much more elegant solution:\nclass MyCompositeObject\n{\n DateTime CreatedDate;\n string SomeAttribute;\n Object Obj1;\n{\nclass MyCompositeObjects : List<MyCompositeObject> { }\n\nI found that due to reflection, the specific type stored in Obj1 is resolved at runtime and the type specific DataTemplate is applied as expected!\n"
] | [
1,
1,
1
] | [] | [] | [
".net",
"c#",
"collections",
"data_binding",
"wpf"
] | stackoverflow_0000011288_.net_c#_collections_data_binding_wpf.txt |
Q:
How do I add existing comments to RDoc in Ruby?
I want to format my existing comments as 'RDoc comments' so they can be viewed using ri.
What are some recommended resources for starting out using RDoc?
A:
A few things that have bitten me:
:main: -- RDoc uses only the last one evaluated; best to make sure there's only one in your project and you don't also use the --main command-line argument.
same as previous, but for :title:
:section: doesn't work very well
A:
RDoc uses SimpleMarkup so it's fairly simple to create lists, etc. using *, - or a number. It also treats lines that are indented at the same column number as part of the same paragraph until there is an empty line which signifies a new paragraph. Do you have a few examples of comments you want RDoc'ed so we could show you how to do them and then you could extrapolate that for the rest of your comments?
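In the meantime, here's a small made-up example (the method and option names are just placeholders) showing how a heading, a paragraph and a bulleted list look in an RDoc comment:
# = Widget helpers
#
# Formats a widget for display. Lines indented to the same column,
# like these two, are treated as a single paragraph until a blank line.
#
# Supported options:
# * :color - any CSS color name
# * :size  - one of :small, :medium or :large
def format_widget(widget, options = {})
  # ...
end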
| How do I add existing comments to RDoc in Ruby? | I want to format my existing comments as 'RDoc comments' so they can be viewed using ri.
What are some recommended resources for starting out using RDoc?
| [
"A few things that have bitten me:\n\n:main: -- RDoc uses only the last one evaluated; best to make sure there's only one in your project and you don't also use the --main command-line argument.\nsame as previous, but for :title:\n:section: doesn't work very well\n\n",
"RDoc uses SimpleMarkup so it's fairly simple to create lists, etc. using *, - or a number. It also treats lines that are indented at the same column number as part of the same paragraph until there is an empty line which signifies a new paragraph. Do you have a few examples of comments you want RDoc'ed so we could show you how to do them and then you could extrapolate that for the rest of your comments?\n"
] | [
16,
10
] | [] | [] | [
"rdoc",
"ruby"
] | stackoverflow_0000000072_rdoc_ruby.txt |
Q:
Why is an s-box input longer than its output?
I don't understand where the extra bits are coming from in this article about s-boxes. Why doesn't the s-box take in the same number of bits for input as output?
A:
It is the way s-boxes work. They can be m * n ==> m-bit input, n-bit output.
For example, in the AES S-box the number of bits in input is equal to the number of bits in output.
In DES, m=6 and n=4.
The input is expanded from 32 to 48 bits in the first stages of DES. So it is reduced to 32 bits again by applying one round of S-box substitution. Thus no information is lost here.
The Wikipedia article on its own can be a bit confusing. It will make people think that information is lost. You should read the article in conjunction with the implementation details of some encryption algorithm using s-boxes.
A:
What extra bits? They are going from 6 to 4.
EDIT: Whoops! I'm an idiot. This is kinda like a 2nd grade multiplication table. They strip the outer bits off of the 6-bit block to be encrypted, and leave the middle 4. Just like a table for an arithmetic operation, they go down one side, and find the outer bit sequence, then across the top and find the middle ones. To answer your question, it could input and output the same number of bits, but this s-box is just set up to do it the way it does. It's arbitrary.
| Why is an s-box input longer than its output? | I don't understand where the extra bits are coming from in this article about s-boxes. Why doesn't the s-box take in the same number of bits for input as output?
| [
"It is the way s-boxes work. They can be m * n ==> m bit input , n bit output.\nFor example, in the AES S-box the number of bits in input is equal to the number of bits in output.\nIn DES, m=6 and n=4.\nThe input is expanded from 32 to 48 bits in the first stages of DES. So it is be reduced to 32 bits again by applying one round of S-box substitution. Thus no information is lost here.\nThe Wikipedia article on itself can be a bit confusing. It will make people think that information is lost. You should read the article in conjuncture with implementation details of some encryption algorithm using s-boxes.\n",
"What extra bits? They are going from 6 to 4.\nEDIT: Whoops! I'm an idiot. This is kinda like a 2nd grade multiplication table. They strip the outer bits off of the 6-bit block to be encypted, and leave the middle 4. Just like a table for an arithmatic operation, they go down one side, and find the outer bit sequence, then across the top and find the middle ones. To answer your question, it could input and output the same number of bits, but this s-box is just set up to do it the way it does. Its arbitrary.\n"
] | [
3,
1
] | [] | [] | [
"cryptography"
] | stackoverflow_0000023988_cryptography.txt |
Q:
Best method to get objects from a BlockingQueue in a concurrent program?
What is the best method to get objects out of a BlockingQueue, in a concurrent program, without hitting a race condition? I'm currently doing the following and I'm not convinced it is the best method:
BlockingQueue<Violation> vQueue;
/*
in the constructor I pass in a BlockingQueue object
full of violations that need to be processed - cut out for brevity
*/
Violation v;
while ( ( v = vQueue.poll(500, TimeUnit.MILLISECONDS) ) != null ) {
// do stuff with the violation
}
I have yet to hit a race condition... but, I'm none too sure if this is truly safe.
A:
class Producer implements Runnable {
private final BlockingQueue queue;
Producer(BlockingQueue q) { queue = q; }
public void run() {
try {
while (true) { queue.put(produce()); }
} catch (InterruptedException ex) { ... handle ...}
}
Object produce() { ... }
}
class Consumer implements Runnable {
private final BlockingQueue queue;
Consumer(BlockingQueue q) { queue = q; }
public void run() {
try {
while (true) { consume(queue.take()); }
} catch (InterruptedException ex) { ... handle ...}
}
void consume(Object x) { ... }
}
class Setup {
void main() {
BlockingQueue q = new SomeQueueImplementation();
Producer p = new Producer(q);
Consumer c1 = new Consumer(q);
Consumer c2 = new Consumer(q);
new Thread(p).start();
new Thread(c1).start();
new Thread(c2).start();
}
}
This example was taken from the JDK 1.6 docs of BlockingQueue. So you can see that you are doing it the right way. Here's the quote which tells you that it has to work:
Memory consistency effects: As with
other concurrent collections, actions
in a thread prior to placing an object
into a BlockingQueue happen-before
actions subsequent to the access or
removal of that element from the
BlockingQueue in another thread.
| Best method to get objects from a BlockingQueue in a concurrent program? | What is the best method to get objects out of a BlockingQueue, in a concurrent program, without hitting a race condition? I'm currently doing the following and I'm not convinced it is the best method:
BlockingQueue<Violation> vQueue;
/*
in the constructor I pass in a BlockingQueue object
full of violations that need to be processed - cut out for brevity
*/
Violation v;
while ( ( v = vQueue.poll(500, TimeUnit.MILLISECONDS) ) != null ) {
// do stuff with the violation
}
I have yet to hit a race condition... but, I'm none too sure if this is truly safe.
| [
"class Producer implements Runnable {\n private final BlockingQueue queue;\n Producer(BlockingQueue q) { queue = q; }\n public void run() {\n try {\n while (true) { queue.put(produce()); }\n } catch (InterruptedException ex) { ... handle ...}\n }\n Object produce() { ... }\n }\n\n class Consumer implements Runnable {\n private final BlockingQueue queue;\n Consumer(BlockingQueue q) { queue = q; }\n public void run() {\n try {\n while (true) { consume(queue.take()); }\n } catch (InterruptedException ex) { ... handle ...}\n }\n void consume(Object x) { ... }\n }\n\n class Setup {\n void main() {\n BlockingQueue q = new SomeQueueImplementation();\n Producer p = new Producer(q);\n Consumer c1 = new Consumer(q);\n Consumer c2 = new Consumer(q);\n new Thread(p).start();\n new Thread(c1).start();\n new Thread(c2).start();\n }\n }\n\nThis example was taken from the JDK 1.6 docs of BlockingQueue. So You can see that you are doing it the right way. Here's the quote which tells you that it have to work:\n\nMemory consistency effects: As with\n other concurrent collections, actions\n in a thread prior to placing an object\n into a BlockingQueue happen-before\n actions subsequent to the access or\n removal of that element from the\n BlockingQueue in another thread.\n\n"
] | [
6
] | [] | [] | [
"concurrency",
"java"
] | stackoverflow_0000023950_concurrency_java.txt |
Q:
Create an EXE from a SWF using Flex 3 without requiring AIR?
I have a simple little test app written in Flex 3 (MXML and some AS3). I can compile it to a SWF just fine, but I'd like to make it into an EXE so I can give it to a couple of my coworkers who might find it useful.
With Flash 8, I could just target an EXE instead of a SWF and it would wrap the SWF in a projector, and everything worked fine. Is there an equivalent to that using the Flex 3 SDK that doesn't end up requiring AIR?
Note: I don't have Flex Builder, I'm just using the free Flex 3 SDK.
A:
In your Flex SDK folders you should see a 'runtimes\player\win\FlashPlayer.exe' which is a stand alone Flash player. Open your SWF with that and you'll see a 'Create Projector...' menu item in the File menu which will create the stand-alone EXE.
A:
imaginaryboy gets it right, I believe. Btw, since you don't have Flex Builder, you might look into the free and open source FlashDevelop if you're on Windows. It's my favorite environment for developing anything Actionscript (the Flex support is pretty great, too).
A:
There's also Zinc that also provides API:s for accessing the filesystem and other thinks that AIR does, but less restrictive.
| Create an EXE from a SWF using Flex 3 without requiring AIR? | I have a simple little test app written in Flex 3 (MXML and some AS3). I can compile it to a SWF just fine, but I'd like to make it into an EXE so I can give it to a couple of my coworkers who might find it useful.
With Flash 8, I could just target an EXE instead of a SWF and it would wrap the SWF in a projector, and everything worked fine. Is there an equivalent to that using the Flex 3 SDK that doesn't end up requiring AIR?
Note: I don't have Flex Builder, I'm just using the free Flex 3 SDK.
| [
"In your Flex SDK folders you should see a 'runtimes\\player\\win\\FlashPlayer.exe' which is a stand alone Flash player. Open your SWF with that and you'll see a 'Create Projector...' menu item in the File menu which will create the stand-alone EXE.\n",
"imaginaryboy gets it right, I believe. Btw, since you don't have Flex Builder, you might look into the free and open source FlashDevelop if you're on Windows. It's my favorite environment for developing anything Actionscript (the Flex support is pretty great, too).\n",
"There's also Zinc that also provides API:s for accessing the filesystem and other thinks that AIR does, but less restrictive.\n"
] | [
22,
2,
1
] | [] | [] | [
"actionscript_3",
"apache_flex",
"flash"
] | stackoverflow_0000023373_actionscript_3_apache_flex_flash.txt |
Q:
Enumerate Windows user group members on remote system using c#
Within c#, I need to be able to
Connect to a remote system, specifying username/password as appropriate
List the members of a localgroup on that system
Fetch the results back to the executing computer
So for example I would connect to \\SOMESYSTEM with appropriate creds, and fetch back a list of local administrators including SOMESYSTEM\Administrator, SOMESYSTEM\Bob, DOMAIN\AlanH, "DOMAIN\Domain Administrators".
I've tried this with system.directoryservices.accountmanagement but am running into problems with authentication. Sometimes I get:
Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again. (Exception from HRESULT: 0x800704C3)
The above is trying because there will be situations where I simply cannot unmap existing drives or UNC connections.
Other times my program gets UNKNOWN ERROR and the security log on the remote system reports an error 675, code 0x19 which is KDC_ERR_PREAUTH_REQUIRED.
I need a simpler and less error prone way to do this!
A:
davidg was on the right track, and I am crediting him with the answer.
But the WMI query necessary was a little less than straightforward, since I needed not just a list of users for the whole machine, but the subset of users and groups, whether local or domain, that were members of the local Administrators group. For the record, that WMI query was:
SELECT PartComponent FROM Win32_GroupUser WHERE GroupComponent = "Win32_Group.Domain='thehostname',Name='thegroupname'"
Here's the full code snippet:
public string GroupMembers(string targethost, string groupname, string targetusername, string targetpassword)
{
StringBuilder result = new StringBuilder();
try
{
ConnectionOptions Conn = new ConnectionOptions();
if (targethost != Environment.MachineName) //WMI errors if creds given for localhost
{
Conn.Username = targetusername; //can be null
Conn.Password = targetpassword; //can be null
}
Conn.Timeout = TimeSpan.FromSeconds(2);
ManagementScope scope = new ManagementScope("\\\\" + targethost + "\\root\\cimv2", Conn);
scope.Connect();
StringBuilder qs = new StringBuilder();
qs.Append("SELECT PartComponent FROM Win32_GroupUser WHERE GroupComponent = \"Win32_Group.Domain='");
qs.Append(targethost);
qs.Append("',Name='");
qs.Append(groupname);
qs.AppendLine("'\"");
ObjectQuery query = new ObjectQuery(qs.ToString());
ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);
ManagementObjectCollection queryCollection = searcher.Get();
foreach (ManagementObject m in queryCollection)
{
ManagementPath path = new ManagementPath(m["PartComponent"].ToString());
{
String[] names = path.RelativePath.Split(',');
result.Append(names[0].Substring(names[0].IndexOf("=") + 1).Replace("\"", " ").Trim() + "\\");
result.AppendLine(names[1].Substring(names[1].IndexOf("=") + 1).Replace("\"", " ").Trim());
}
}
return result.ToString();
}
catch (Exception e)
{
Console.WriteLine("Error. Message: " + e.Message);
return "fail";
}
}
So, if I invoke GroupMembers("Server1", "Administrators", "myusername", "mypassword"); I get a single string returned with:
SERVER1\Administrator
MYDOMAIN\Domain Admins
The actual WMI return is more like this:
\\SERVER1\root\cimv2:Win32_UserAccount.Domain="SERVER1",Name="Administrator"
... so as you can see, I had to do a little string manipulation to pretty it up.
A:
This should be easy to do using WMI. Here you have a pointer to some docs:
WMI Documentation for Win32_UserAccount
Even if you have no previous experience with WMI, it should be quite easy to turn that VB Script code at the bottom of the page into some .NET code.
Hope this helped!
A:
I would recommend using the Win32 API function NetLocalGroupGetMembers. It is much more straightforward than trying to figure out the crazy LDAP syntax, which is necessary for some of the other solutions recommended here. As long as you impersonate the user you want to run the check as by calling "LogonUser", you should not run into any security issues.
You can find sample code for doing the impersonation here.
If you need help figuring out how to call "NetLocalGroupGetMembers" from C#, I recommend that you check out Jared Parsons' PInvoke assistant, which you can download from CodePlex.
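As a starting point, here is a minimal sketch of the declarations and the call at information level 3 (based on the Win32 documentation; treat it as untested sample code that runs as the current or impersonated user rather than taking explicit credentials):
using System;
using System.Runtime.InteropServices;

class LocalGroupMembers
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct LOCALGROUP_MEMBERS_INFO_3
    {
        public string lgrmi3_domainandname;
    }

    [DllImport("Netapi32.dll", CharSet = CharSet.Unicode)]
    static extern int NetLocalGroupGetMembers(string serverName, string groupName, int level,
        out IntPtr buffer, int prefMaxLen, out int entriesRead, out int totalEntries, IntPtr resumeHandle);

    [DllImport("Netapi32.dll")]
    static extern int NetApiBufferFree(IntPtr buffer);

    // Returns "DOMAIN\name" strings for every member of the given local group.
    public static string[] GetMembers(string server, string group)
    {
        IntPtr buffer;
        int read, total;
        // -1 = MAX_PREFERRED_LENGTH, level 3 = domain\name strings only
        int status = NetLocalGroupGetMembers(server, group, 3, out buffer, -1, out read, out total, IntPtr.Zero);
        if (status != 0)
            throw new Exception("NetLocalGroupGetMembers failed with status " + status);
        try
        {
            string[] result = new string[read];
            int itemSize = Marshal.SizeOf(typeof(LOCALGROUP_MEMBERS_INFO_3));
            for (int i = 0; i < read; i++)
            {
                LOCALGROUP_MEMBERS_INFO_3 entry = (LOCALGROUP_MEMBERS_INFO_3)Marshal.PtrToStructure(
                    new IntPtr(buffer.ToInt64() + i * itemSize), typeof(LOCALGROUP_MEMBERS_INFO_3));
                result[i] = entry.lgrmi3_domainandname;
            }
            return result;
        }
        finally
        {
            NetApiBufferFree(buffer);
        }
    }
}

// e.g. string[] members = LocalGroupMembers.GetMembers("SOMESYSTEM", "Administrators");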
If you are running the code in an ASP.NET app running in IIS, and want to impersonate the user accessing the website in order to make the call, then you may need to grant "Trusted for Delegation" permission to the production web server.
If you are running on the desktop, then using the active user's security credentials should not be a problem.
It is possible that your network admin could have revoked access to the "Securable Object" for the particular machine you are trying to access. Unfortunately that access is necessary for all of the network management API functions to work. If that is the case, then you will need to grant access to the "Securable Object" for whatever users you want to execute as. With the default Windows security settings all authenticated users should have access, however.
I hope this helps.
-Scott
A:
You should be able to do this with System.DirectoryServices.DirectoryEntry. If you are having trouble running it remotely, maybe you could install something on the remote machines to give you your data via some sort of RPC, like remoting or a web service. But I think what you're trying should be possible remotely without getting too fancy.
A:
If Windows won't let you connect through it's login mechanism, I think your only option is to run something on the remote machine with an open port (either directly or through remoting or a web service, as mentioned).
| Enumerate Windows user group members on remote system using c# | Within c#, I need to be able to
Connect to a remote system, specifying username/password as appropriate
List the members of a localgroup on that system
Fetch the results back to the executing computer
So for example I would connect to \SOMESYSTEM with appropriate creds, and fetch back a list of local administrators including SOMESYSTEM\Administrator, SOMESYSTEM\Bob, DOMAIN\AlanH, "DOMAIN\Domain Administrators".
I've tried this with system.directoryservices.accountmanagement but am running into problems with authentication. Sometimes I get:
Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again. (Exception from HRESULT: 0x800704C3)
The above is trying because there will be situations where I simply cannot unmap existing drives or UNC connections.
Other times my program gets UNKNOWN ERROR and the security log on the remote system reports an error 675, code 0x19 which is KDC_ERR_PREAUTH_REQUIRED.
I need a simpler and less error prone way to do this!
| [
"davidg was on the right track, and I am crediting him with the answer.\nBut the WMI query necessary was a little less than straightfoward, since I needed not just a list of users for the whole machine, but the subset of users and groups, whether local or domain, that were members of the local Administrators group. For the record, that WMI query was:\nSELECT PartComponent FROM Win32_GroupUser WHERE GroupComponent = \"Win32_Group.Domain='thehostname',Name='thegroupname'\"\n\nHere's the full code snippet:\npublic string GroupMembers(string targethost, string groupname, string targetusername, string targetpassword)\n {\n StringBuilder result = new StringBuilder(); \n try\n {\n ConnectionOptions Conn = new ConnectionOptions();\n if (targethost != Environment.MachineName) //WMI errors if creds given for localhost\n {\n Conn.Username = targetusername; //can be null\n Conn.Password = targetpassword; //can be null\n }\n Conn.Timeout = TimeSpan.FromSeconds(2);\n ManagementScope scope = new ManagementScope(\"\\\\\\\\\" + targethost + \"\\\\root\\\\cimv2\", Conn);\n scope.Connect();\n StringBuilder qs = new StringBuilder();\n qs.Append(\"SELECT PartComponent FROM Win32_GroupUser WHERE GroupComponent = \\\"Win32_Group.Domain='\");\n qs.Append(targethost);\n qs.Append(\"',Name='\");\n qs.Append(groupname);\n qs.AppendLine(\"'\\\"\");\n ObjectQuery query = new ObjectQuery(qs.ToString());\n ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);\n ManagementObjectCollection queryCollection = searcher.Get();\n foreach (ManagementObject m in queryCollection)\n {\n ManagementPath path = new ManagementPath(m[\"PartComponent\"].ToString()); \n { \n String[] names = path.RelativePath.Split(',');\n result.Append(names[0].Substring(names[0].IndexOf(\"=\") + 1).Replace(\"\\\"\", \" \").Trim() + \"\\\\\"); \n result.AppendLine(names[1].Substring(names[1].IndexOf(\"=\") + 1).Replace(\"\\\"\", \" \").Trim()); \n }\n }\n return result.ToString();\n }\n catch (Exception e)\n {\n Console.WriteLine(\"Error. Message: \" + e.Message);\n return \"fail\";\n }\n }\n\nSo, if I invoke Groupmembers(\"Server1\", \"Administrators\", \"myusername\", \"mypassword\"); I get a single string returned with: \nSERVER1\\Administrator\nMYDOMAIN\\Domain Admins \nThe actual WMI return is more like this:\n\\\\SERVER1\\root\\cimv2:Win32_UserAccount.Domain=\"SERVER1\",Name=\"Administrator\"\n... so as you can see, I had to do a little string manipulation to pretty it up.\n",
"This should be easy to do using WMI. Here you have a pointer to some docs:\nWMI Documentation for Win32_UserAccount\nEven if you have no previous experience with WMI, it should be quite easy to turn that VB Script code at the bottom of the page into some .NET code.\nHope this helped!\n",
"I would recommend using the Win32 API function NetLocalGroupGetMembers. It is much more straight forward than trying to figure out the crazy LDAP syntax, which is necessary for some of the other solutions recommended here. As long as you impersonate the user you want to run the check as by calling \"LoginUser\", you should not run into any security issues.\nYou can find sample code for doing the impersonation here.\nIf you need help figuring out how to call \"NetLocalGroupGetMembers\" from C#, I reccomend that you checkout Jared Parson's PInvoke assistant, which you can download from codeplex.\nIf you are running the code in an ASP.NET app running in IIS, and want to impersonate the user accessing the website in order to make the call, then you may need to grant \"Trusted for Delegation\" permission to the production web server. \nIf you are running on the desktop, then using the active user's security credentials should not be a problem.\nIt is possible that you network admin could have revoked access to the \"Securable Object\" for the particular machine you are trying to access. Unfortunately that access is necessary for all of the network management api functions to work. If that is the case, then you will need to grant access to the \"Securable Object\" for whatever users you want to execute as. With the default windows security settings all authenticated users should have access, however.\nI hope this helps.\n-Scott\n",
"You should be able to do this with System.DirectoryServices.DirectoryEntry. If you are having trouble running it remotely, maybe you could install something on the remote machines to give you your data via some sort of RPC, like remoting or a web service. But I think what you're trying should be possible remotely without getting too fancy.\n",
"If Windows won't let you connect through it's login mechanism, I think your only option is to run something on the remote machine with an open port (either directly or through remoting or a web service, as mentioned).\n"
] | [
2,
1,
1,
0,
0
] | [] | [] | [
"c#",
"user_management",
"usergroups",
"windows"
] | stackoverflow_0000021514_c#_user_management_usergroups_windows.txt |
Q:
Experiences Using ASP.NET MVC Framework
I am wondering what experiences people are having using the ASP.NET MVC Framework? In particular I am looking for feedback on the type of experience folks are having using the framework.
What are people using for their view engine? What about the db layer, NHibernate, LINQ to SQL or something else?
I know stackoverflow uses MVC, so please say this site.
Thank you.
Why the choice of NHibernate over anything else? I am not against NHibernate, just wondering the rational.
A:
I've been building a few sites with the framework since the first preview came out, and it has certainly come a long way already. It feels like a very light-weight and tidy framework.
There are a couple of areas where I think it really excels over "vanilla" asp.net:
Enables a much cleaner separation of concerns/loose coupling
makes test-driven development actually possible.
And it's much more friendly towards javascript (ajax) heavy sites.
That said, there are some areas where it has some way to go yet:
Validation
Data binding
Tag soup, as mentioned earlier (although this can be avoided to some extent; user controls, helper methods&codebehind is still allowed!)
The framework is still in beta though, so I expect these things to improve over time. Scott Hanselman has hinted that the Dynamic Data framework will be available for ASP.NET MVC at some point too, for example.
A:
I've been getting into some pretty heavy use of NHibernate with ASP.NET MVC lately, and am really loving it.
A:
I have used ASP.NET MVC for a few projects recently and it's like a breath of fresh air compared to WebForms. It works with the web rather than against it, and feels like a much more natural way to develop.
I use SubSonic rather than NHibernate, and find it fits very nice within the MVC architecture.
The building blocks I commonly use for a website are:-
Asp.net mvc
Subsonic
SQL Server
Lucene
JQuery
A:
I used the MVC framework to build a small site, and I found myself frequently frustrated by the tag soup views, and lack of the server controls I had come to love.
I went back to using webforms.
WebForms, once mastered, are great...They just take a very long time to learn all the tricks.
A:
I've just been recently turned on to MVC and Linq to Sql for Asp.Net. I'm still learning both, and I'm really enjoying them both. There are quite a few screen casts on http://www.asp.net/learn/.
A:
Why the choice of NHibernate over
anything else?
It's a very powerful tool, and is (relatively) easy to learn. It takes away all the monotony and repetitiveness of manually implementing object-relational mapping.
| Experiences Using ASP.NET MVC Framework | I am wondering what experiences people are having using the ASP.NET MVC Framework? In particular I am looking for feedback on the type of experience folks are having using the framework.
What are people using for their view engine? What about the db layer, NHibernate, LINQ to SQL or something else?
I know stackoverflow uses MVC, so please say this site.
Thank you.
Why the choice of NHibernate over anything else? I am not against NHibernate, just wondering the rational.
| [
"I've been building a few sites with the framework since the first preview came out, and it has certainly come a long way already. It feels like a very light-weight and tidy framework.\nThere are a couple of areas where I think it really excels over \"vanilla\" asp.net:\n\nEnables a much cleaner separation of concerns/loose coupling\nmakes test-driven development actually possible.\nAnd it's much more friendly towards javascript (ajax) heavy sites.\n\nThat said, there are some areas where it has some way to go yet:\n\nValidation\nData binding\nTag soup, as mentioned earlier (although this can be avoided to some extent; user controls, helper methods&codebehind is still allowed!)\n\nThe framework is still in beta though, so I expect these things to improve over time. Scott Hanselman has hinted that the Dynamic Data framework will be available for ASP.NET MVC at some point too, for example.\n",
"I've been getting into some pretty heavy use of NHibernate with ASP.NET MVC lately, and am really loving it.\n",
"I have used ASP.NET MVC for a few projects recently and its like a breath of fresh air compared to WebForms. It works with the web rather than against it, and feels like a much more natural way to develop.\nI use SubSonic rather than NHibernate, and find it fits very nice within the MVC architecture.\nThe building blocks I commonly use for a website are:-\nAsp.net mvc\nSubsonic\nSQL Server\nLucene\nJQuery\n",
"I used the MVC framework to build a small site, and I found myself frequently frustrated by the tag soup views, and lack of the server controls I had come to love.\nI went back to using webforms.\nWebForms, once mastered, are great...They just take a very long time to learn all the tricks.\n",
"I've just been recently turned on to MVC and Linq to Sql for Asp.Net. I'm still learning both, and I'm really enjoying them both. There are quite a few screen casts on http://www.asp.net/learn/.\n",
"\nWhy the choice of NHibernate over\n anything else?\n\nIt's a very powerful tool, and is (relatively) easy to learn. It takes away all the monotony and repetitiveness of manually implementing object-relational mapping.\n"
] | [
4,
2,
2,
1,
0,
0
] | [] | [] | [
".net",
"asp.net",
"asp.net_mvc"
] | stackoverflow_0000023994_.net_asp.net_asp.net_mvc.txt |
Q:
How to handle including needed classes in PHP
I'm wondering what the best practice is for handling the problem with having to "include" so many files in my PHP scripts in order to ensure that all the classes I need to use are accessible to my script.
Currently, I'm just using include_once to include the classes I access directly. Each of those would include_once the classes that they access.
I've looked into using the __autoload function, but that doesn't seem to work well if you plan to have your class files organized in a directory tree. If you did this, it seems like you'd end up walking the directory tree until you found the class you were looking for. Also, I'm not sure how this affects classes with the same name in different namespaces.
Is there an easier way to handle this?
Or is PHP just not suited to "enterprisey" type applications with lots of different objects all located in separate files that can be in many different directories.
A:
In my applications I usually have a setup.php file that includes all core classes (i.e. the framework and accompanying libraries). My custom classes are loaded using an autoloader aided by a directory layout map.
Each time a new class is added I run a command line builder script that scans the whole directory tree in search of model classes, then builds an associative array with class names as keys and paths as values. Then, the __autoload function looks up the class name in that array and gets the include path. Here's the code:
autobuild.php
define('MAP', 'var/cache/autoload.map');
error_reporting(E_ALL);
require 'setup.php';
print(buildAutoloaderMap() . " classes mapped\n");
function buildAutoloaderMap() {
$dirs = array('lib', 'view', 'model');
$cache = array();
$n = 0;
foreach ($dirs as $dir) {
foreach (new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir)) as $entry) {
$fn = $entry->getFilename();
if (!preg_match('/\.class\.php$/', $fn))
continue;
$c = str_replace('.class.php', '', $fn);
if (!class_exists($c)) {
$cache[$c] = ($pn = $entry->getPathname());
++$n;
}
}
}
ksort($cache);
file_put_contents(MAP, serialize($cache));
return $n;
}
autoload.php
define('MAP', 'var/cache/autoload.map');
function __autoload($className) {
static $map;
$map or ($map = unserialize(file_get_contents(MAP)));
$fn = array_key_exists($className, $map) ? $map[$className] : null;
if ($fn and file_exists($fn)) {
include $fn;
unset($map[$className]);
}
}
Note that the file naming convention must be [class_name].class.php. Alter the directories that classes will be looked up in within autobuild.php. You can also run the autobuilder from the autoload function when a class is not found, but that may get your program into an infinite loop.
Serialized arrays are darn fast.
@JasonMichael: PHP 4 is dead. Get over it.
A:
You can define multiple autoloading functions with spl_autoload_register:
spl_autoload_register('load_controllers');
spl_autoload_register('load_models');
function load_models($class){
if( !file_exists("models/$class.php") )
return false;
include "models/$class.php";
return true;
}
function load_controllers($class){
if( !file_exists("controllers/$class.php") )
return false;
include "controllers/$class.php";
return true;
}
A:
You can also programmatically determine the location of the class file by using structured naming conventions that map to physical directories. This is how Zend do it in Zend Framework. So when you call Zend_Loader::loadClass("Zend_Db_Table"); it explodes the classname into an array of directories by splitting on the underscores, and then the Zend_Loader class goes to load the required file.
Like all the Zend modules, I would expect you can use just the loader on its own with your own classes but I have only used it as part of a site using Zend's MVC.
But there have been concerns about performance under load when you use any sort of dynamic class loading, for example see this blog post comparing Zend_Loader with hard loading of class files.
As well as the performance penalty of having to search the PHP include path, it defeats opcode caching. From a comment on that post:
When using ANY Dynamic class loader APC can’t cache those files fully as its not sure which files will load on any single request. By hard loading the files APC can cache them in full.
A:
__autoload works well if you have a consistent naming convention for your classes that tells the function where they're found inside the directory tree. MVC lends itself particularly well to this kind of thing because you can easily split the classes into models, views and controllers.
Alternatively, keep an associative array of names to file locations for your class and let __autoload query this array.
A:
__autoload will work, but only in PHP 5.
A:
Of the suggestions so far, I'm partial to Kevin's, but it doesn't need to be absolute. I see a couple different options to use with __autoload.
Put all class files into a single directory. Name the file after the class, ie, classes/User.php or classes/User.class.php.
Kevin's idea of putting models into one directory, controllers into another, etc. Works well if all of your classes fit nicely into the MVC framework, but sometimes, things get messy.
Include the directory in the classname. For example, a class called Model_User would actually be located at classes/Model/User.php. Your __autoload function would know to translate an underscore into a directory separator to find the file (a short sketch of this follows the list).
Just parse the whole directory structure once. Either in the __autoload function, or even just in the same PHP file where it's defined, loop over the contents of the classes directory and cache what files are where. So, if you try to load the User class, it doesn't matter if it's in classes/User.php or classes/Models/User.php or classes/Utility/User.php. Once it finds User.php somewhere in the classes directory, it will know what file to include when the User class needs to be autoloaded.
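A minimal sketch of option 3 above, assuming a classes/ root directory (the paths and convention here are just an example):
function __autoload($className) {
    // Model_User => classes/Model/User.php
    $path = 'classes/' . str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';
    if (file_exists($path)) {
        require $path;
    }
}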
A:
@Kevin:
I was just trying to point out that spl_autoload_register is a better alternative to __autoload since you can define multiple loaders, and they won't conflict with each other. Handy if you have to include libraries that define an __autoload function as well.
Are you sure? The documentation says differently:
If your code has an existing __autoload function then this function must be explicitly registered on the __autoload stack. This is because spl_autoload_register() will effectively replace the engine cache for the __autoload function by either spl_autoload() or spl_autoload_call().
=> you have to explicitly register any library's __autoload as well. But apart from that you're of course right, this function is the better alternative.
| How to handle including needed classes in PHP | I'm wondering what the best practice is for handling the problem with having to "include" so many files in my PHP scripts in order to ensure that all the classes I need to use are accessible to my script.
Currently, I'm just using include_once to include the classes I access directly. Each of those would include_once the classes that they access.
I've looked into using the __autoload function, but hat doesn't seem to work well if you plan to have your class files organized in a directory tree. If you did this, it seems like you'd end up walking the directory tree until you found the class you were looking for. Also, I'm not sure how this effects classes with the same name in different namespaces.
Is there an easier way to handle this?
Or is PHP just not suited to "enterprisey" type applications with lots of different objects all located in separate files that can be in many different directories.
| [
"I my applications I usually have setup.php file that includes all core classes (i.e. framework and accompanying libraries). My custom classes are loaded using autoloader aided by directory layout map.\nEach time new class is added I run command line builder script that scans whole directory tree in search for model classes then builds associative array with class names as keys and paths as values. Then, __autoload function looks up class name in that array and gets include path. Here's the code:\nautobuild.php\ndefine('MAP', 'var/cache/autoload.map');\nerror_reporting(E_ALL);\nrequire 'setup.php';\nprint(buildAutoloaderMap() . \" classes mapped\\n\");\n\nfunction buildAutoloaderMap() {\n $dirs = array('lib', 'view', 'model');\n $cache = array();\n $n = 0;\n foreach ($dirs as $dir) {\n foreach (new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir)) as $entry) {\n $fn = $entry->getFilename();\n if (!preg_match('/\\.class\\.php$/', $fn))\n continue;\n $c = str_replace('.class.php', '', $fn);\n if (!class_exists($c)) {\n $cache[$c] = ($pn = $entry->getPathname());\n ++$n;\n }\n }\n }\n ksort($cache);\n file_put_contents(MAP, serialize($cache));\n return $n;\n}\n\nautoload.php\ndefine('MAP', 'var/cache/autoload.map');\n\nfunction __autoload($className) {\n static $map;\n $map or ($map = unserialize(file_get_contents(MAP)));\n $fn = array_key_exists($className, $map) ? $map[$className] : null;\n if ($fn and file_exists($fn)) {\n include $fn;\n unset($map[$className]);\n }\n}\n\nNote that file naming convention must be [class_name].class.php. Alter the directories classes will be looked in autobuild.php. You can also run autobuilder from autoload function when class not found, but that may get your program into infinite loop.\nSerialized arrays are darn fast.\n@JasonMichael: PHP 4 is dead. Get over it.\n",
"You can define multiple autoloading functions with spl_autoload_register:\nspl_autoload_register('load_controllers');\nspl_autoload_register('load_models');\n\nfunction load_models($class){\n if( !file_exists(\"models/$class.php\") )\n return false;\n\n include \"models/$class.php\";\n return true;\n}\nfunction load_controllers($class){\n if( !file_exists(\"controllers/$class.php\") )\n return false;\n\n include \"controllers/$class.php\";\n return true;\n}\n\n",
"You can also programmatically determine the location of the class file by using structured naming conventions that map to physical directories. This is how Zend do it in Zend Framework. So when you call Zend_Loader::loadClass(\"Zend_Db_Table\"); it explodes the classname into an array of directories by splitting on the underscores, and then the Zend_Loader class goes to load the required file.\nLike all the Zend modules, I would expect you can use just the loader on its own with your own classes but I have only used it as part of a site using Zend's MVC.\nBut there have been concerns about performance under load when you use any sort of dynamic class loading, for example see this blog post comparing Zend_Loader with hard loading of class files. \nAs well as the performance penalty of having to search the PHP include path, it defeats opcode caching. From a comment on that post:\n\n\nWhen using ANY Dynamic class loader APC can’t cache those files fully as its not sure which files will load on any single request. By hard loading the files APC can cache them in full.\n\n\n",
"__autoload works well if you have a consistent naming convention for your classes that tell the function where they're found inside the directory tree. MVC lends itself particularly well for this kind of thing because you can easily split the classes into models, views and controllers.\nAlternatively, keep an associative array of names to file locations for your class and let __autoload query this array.\n",
"__autoload will work, but only in PHP 5.\n",
"Of the suggestions so far, I'm partial to Kevin's, but it doesn't need to be absolute. I see a couple different options to use with __autoload.\n\nPut all class files into a single directory. Name the file after the class, ie, classes/User.php or classes/User.class.php.\nKevin's idea of putting models into one directory, controllers into another, etc. Works well if all of your classes fit nicely into the MVC framework, but sometimes, things get messy.\nInclude the directory in the classname. For example, a class called Model_User would actually be located at classes/Model/User.php. Your __autoload function would know to translate an underscore into a directory separator to find the file.\nJust parse the whole directory structure once. Either in the __autoload function, or even just in the same PHP file where it's defined, loop over the contents of the classes directory and cache what files are where. So, if you try to load the User class, it doesn't matter if it's in classes/User.php or classes/Models/User.php or classes/Utility/User.php. Once it finds User.php somewhere in the classes directory, it will know what file to include when the User class needs to be autoloaded.\n\n",
"@Kevin:\n\nI was just trying to point out that spl_autoload_register is a better alternative to __autoload since you can define multiple loaders, and they won't conflict with each other. Handy if you have to include libraries that define an __autoload function as well.\n\nAre you sure? The documentation says differently:\n\nIf your code has an existing __autoload function then this function must be explicitly registered on the __autoload stack. This is because spl_autoload_register() will effectively replace the engine cache for the __autoload function by either spl_autoload() or spl_autoload_call().\n\n=> you have to explicitly register any library's __autoload as well. But apart from that you're of course right, this function is the better alternative.\n"
] | [
6,
2,
1,
0,
0,
0,
0
] | [] | [] | [
"autoload",
"class",
"include",
"php"
] | stackoverflow_0000023802_autoload_class_include_php.txt |
Q:
What Comes After The %?
I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf? For example:
double radius = 1.0;
double area = 0.0;
area = calculateArea( radius );
printf( "%10.1f %10.2\n", radius, area );
I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f? Could someone please explain this?
A:
http://en.wikipedia.org/wiki/Printf#printf_format_placeholders is Wikipedia's reference for format placeholders in printf. http://www.cplusplus.com/reference/clibrary/cstdio/printf.html is also helpful
Basically in a simple form it's %[width].[precision][type]. Width allows you to make sure that the variable which is being printed is at least a certain length (useful for tables etc). Precision allows you to specify the precision a number is printed to (eg. decimal places etc) and the type informs C/C++ what the variable you've given it is (character, integer, double etc).
Hope this helps
UPDATE:
To clarify using your examples:
printf( "%10.1f %10.2\n", radius, area );
%10.1f (referring to the first argument: radius) means make it 10 characters long (ie. pad with spaces), and print it as a float with one decimal place.
%10.2 (referring to the second argument: area) means make it 10 characters long (as above) and print with two decimal places.
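For instance (note that the second specifier in the question appears to have lost its trailing f; %10.2 on its own is not a complete conversion, so %10.2f is assumed here), a small program showing the padding:
#include <stdio.h>

int main(void)
{
    double radius = 1.0;
    double area = 3.14;

    /* width 10, with 1 and 2 decimal places respectively */
    printf("%10.1f %10.2f\n", radius, area);
    /* prints: "       1.0       3.14" (each field padded to 10 characters) */
    return 0;
}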
A:
man 3 printf
on a Linux system will give you all the information you need. You can also find these manual pages online, for example at http://linux.die.net/man/3/printf
A:
10.1f means floating point, 10 characters wide, with 1 place after the decimal point.
If the number takes fewer than 10 characters, it's padded with spaces.
10.2f is the same, but with 2 places after the decimal point.
You have these basic types:
%d - integer
%x - hex integer
%s - string
%c - char (only one)
%f - floating point (float)
%d - signed int (decimal)
%i - signed int (integer) (same as decimal).
%u - unsigned int
%ld - long (signed) int
%lu - long unsigned int
%lld - long long (signed) int
%llu - long long unsigned int
Edit: there are several others listed in @Eli's response (man 3 printf).
A:
10.1f means floating point with 1 place after the decimal point and the 10 places before the decimal point. If the number has less than 10 digits, it's padded with spaces. 10.2f is the same, but with 2 places after the decimal point.
On every system I've seen, from Unix to Rails Migrations, this is not the case. @robintw expresses it best:
Basically in a simple form it's %[width].[precision][type].
That is, not "10 places before the decimal point," but "10 places, both before and after, and including the decimal point."
A:
In short, those values after the % tell printf how to interpret (or output) all of the variables coming later. In your example, radius is interpreted as a float (thus the 'f'), and the 10.1 gives the minimum field width and how many decimal places to use when printing it out.
See this link for more details about all of the modifiers you can use with printf.
A:
10.1f means you want to display a float with 1 decimal and the displayed number should be 10 characters long.
A:
Man pages contain the information you want. To read what you have above:
printf( "%10.2f", 1.5 )
This will print:
      1.50
Whereas:
printf("%.2f", 1.5 )
Prints:
1.50
Note the justification of both.
Similarly:
printf("%10.1f", 1.5 )
Would print:
       1.5
Any number after the . is the precision you want printed. Any number before the . is the minimum field width; shorter output is padded with spaces to that width.
| What Comes After The %? | I've searched for this a little but I have not gotten a particularly straight answer. In C (and I guess C++), how do you determine what comes after the % when using printf?. For example:
double radius = 1.0;
double area = 0.0;
area = calculateArea( radius );
printf( "%10.1f %10.2\n", radius, area );
I took this example straight from a book that I have on the C language. This does not make sense to me at all. Where do you come up with 10.1f and 10.2f? Could someone please explain this?
| [
"http://en.wikipedia.org/wiki/Printf#printf_format_placeholders is Wikipedia's reference for format placeholders in printf. http://www.cplusplus.com/reference/clibrary/cstdio/printf.html is also helpful\nBasically in a simple form it's %[width].[precision][type]. Width allows you to make sure that the variable which is being printed is at least a certain length (useful for tables etc). Precision allows you to specify the precision a number is printed to (eg. decimal places etc) and the informs C/C++ what the variable you've given it is (character, integer, double etc).\nHope this helps\nUPDATE:\nTo clarify using your examples:\nprintf( \"%10.1f %10.2\\n\", radius, area );\n\n%10.1f (referring to the first argument: radius) means make it 10 characters long (ie. pad with spaces), and print it as a float with one decimal place.\n%10.2 (referring to the second argument: area) means make it 10 character long (as above) and print with two decimal places.\n",
"man 3 printf\n\non a Linux system will give you all the information you need. You can also find these manual pages online, for example at http://linux.die.net/man/3/printf\n",
"10.1f means floating point with 10 characters wide with 1 place after the decimal point.\nIf the number has less than 10 digits, it's padded with spaces.\n10.2f is the same, but with 2 places after the decimal point.\nYou have these basic types:\n%d - integer\n%x - hex integer\n%s - string\n%c - char (only one)\n%f - floating point (float)\n%d - signed int (decimal)\n%i - signed int (integer) (same as decimal).\n%u - unsigned int\n%ld - long (signed) int\n%lu - long unsigned int\n%lld - long long (signed) int\n%llu - long long unsigned int\n\nEdit: there are several others listed in @Eli's response (man 3 printf).\n",
"\n10.1f means floating point with 1 place after the decimal point and the 10 places before the decimal point. If the number has less than 10 digits, it's padded with spaces. 10.2f is the same, but with 2 places after the decimal point.\n\nOn every system I've seen, from Unix to Rails Migrations, this is not the case. @robintw expresses it best:\n\nBasically in a simple form it's %[width].[precision][type].\n\nThat is, not \"10 places before the decimal point,\" but \"10 places, both before and after, and including the decimal point.\"\n",
"In short, those values after the % tell printf how to interpret (or output) all of the variables coming later. In your example, radius is interpreted as a float (this the 'f'), and the 10.1 gives information about how many decimal places to use when printing it out.\nSee this link for more details about all of the modifiers you can use with printf.\n",
"10.1f means you want to display a float with 1 decimal and the displayed number should be 10 characters long.\n",
"Man pages contain the information you want. To read what you have above:\nprintf( \"%10.2f\", 1.5 )\n\nThis will print:\n 1.50\n\nWhereas:\nprintf(\"%.2f\", 1.5 )\n\nPrints:\n1.50\n\nNote the justification of both.\nSimilarly:\nprintf(\"%10.1f\", 1.5 )\n\nWould print:\n 1.5\n\nAny number after the . is the precision you want printed. Any number before the . is the distance from the left margin.\n"
] | [
15,
7,
2,
2,
1,
0,
0
] | [
"One issue that hasn't been raised by others is whether double is the same as a float. On some systems a different format specifier was needed for a double compared to a float. Not least because the parameters passed could be of different sizes.\n\n %f - float\n %lf - double\n %g - double\n\n"
] | [
-1
] | [
"c"
] | stackoverflow_0000017980_c.txt |
Q:
Using an ocx in a console application
I want to quickly test an OCX. How do I drop that OCX in a console application? I have found some tutorials on CodeProject, but they are incomplete.
A:
Isn't an OCX an ActiveX user control (something that you put onto a form for the user to interact with)?
The easiest way I know of to test COM/ActiveX stuff is to use excel. (Yes I know it sounds dumb, bear with me)
Run Excel, create a new file if it hasn't done this for you
Press Alt+F11 to launch the Visual Basic Editor (if you have Excel 2007 it's on the 'Developer' ribbon tab)
Now that you're in happy visual basic land...
From the Tools menu, select References
Select your OCX/COM object from the list, or click Browse... to find the file if it's not registered with COM - You may be able to skip this step if your OCX is already registered.
From the Insert menu, select UserForm
In the floating Toolbox window, right click and select Additional Controls
Find your OCX in the list and tick it
You can then drag your OCX from the toolbox onto the userform
From the Run menu, run it.
Test your OCX and play around with it.
SAVE THE EXCEL FILE so you don't have to repeat these steps every time.
A:
Sure..it's pretty easy. Here's a fun app I threw together. I'm assuming you have Visual C++.
Save to test.cpp and compile: cl.exe /EHsc test.cpp
To test with your OCX you'll need to either #import the typelib and use its CLSID (or just hard-code the CLSID) in the CoCreateInstance call. Using #import will also help define any custom interfaces you might need.
#include "windows.h"
#include "shobjidl.h"
#include "atlbase.h"
#include "tchar.h"
//
// compile with: cl /EHsc test.cpp
//
// A fun little program to demonstrate creating an OCX.
// (CLSID_TaskbarList in this case)
//
BOOL CALLBACK RemoveFromTaskbarProc( HWND hwnd, LPARAM lParam )
{
ITaskbarList* ptbl = (ITaskbarList*)lParam;
ptbl->DeleteTab(hwnd);
return TRUE;
}
void HideTaskWindows(ITaskbarList* ptbl)
{
EnumWindows( RemoveFromTaskbarProc, (LPARAM) ptbl);
}
// ============
BOOL CALLBACK AddToTaskbarProc( HWND hwnd, LPARAM lParam )
{
ITaskbarList* ptbl = (ITaskbarList*)lParam;
ptbl->AddTab(hwnd);
return TRUE;// continue enumerating
}
void ShowTaskWindows(ITaskbarList* ptbl)
{
if (!EnumWindows( AddToTaskbarProc, (LPARAM) ptbl))
throw "Unable to enum windows in ShowTaskWindows";
}
// ============
int main(int, char**)
{
CoInitialize(0);
try {
CComPtr<IUnknown> pUnk;
if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER|CLSCTX_LOCAL_SERVER, IID_IUnknown, (void**) &pUnk)))
            throw "Unable to create CLSID_TaskbarList";
// Do something with the object...
CComQIPtr<ITaskbarList> ptbl = pUnk;
if (ptbl)
ptbl->HrInit();
HideTaskWindows(ptbl);
MessageBox( GetDesktopWindow(), _T("Check out the task bar!"), _T("StackOverflow FTW"), MB_OK);
ShowTaskWindows(ptbl);
}
catch( TCHAR * msg ) {
MessageBox( GetDesktopWindow(), msg, _T("Error"), MB_OK);
}
CoUninitialize();
return 0;
}
A:
@orion that's so cool. Never thought of it that way.
Well @jschroedl that was fun indeed.
Testing an ActiveX in a console app is fun. But I think it's not worth going down that path. You can call the methods or set and get the properties either through the way @jschroedl had explained or you can call the IDispatch object through the Invoke function.
The first step is to call GetIDsOfNames and then call the function through Invoke; the parameters to the function should be an array of VARIANTs in the Invoke formal parameter list.
All is fine and dandy. But once you get to events its downhill from there. Windows application requires a message pump to fire events. On a console you don't have one. I went down the path to implement a EventNotifier for the events just like you implement a CallBack interface in classic C++ way. But the events doesn't get to your implemented interface.
I am pretty sure this cannot be done on a console application. But I am really hoping someone out there will have a different take on events in a console application
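For what it's worth, if the quick test harness can be .NET rather than raw C++, the same late-bound route is only a few lines, because Type.InvokeMember goes through IDispatch's GetIDsOfNames/Invoke for you. This is only a sketch - "MyCompany.MyOcx" and "DoSomething" are placeholders for whatever your control registers - and the event limitation is exactly the same: without a message pump, event callbacks still won't arrive.
using System;
using System.Reflection;

class LateBoundOcxTest
{
    [STAThread]
    static void Main()
    {
        // Resolve the control's coclass from its ProgID (placeholder name).
        Type comType = Type.GetTypeFromProgID("MyCompany.MyOcx");
        if (comType == null)
        {
            Console.WriteLine("ProgID not registered.");
            return;
        }

        // Effectively CoCreateInstance under the covers.
        object ocx = Activator.CreateInstance(comType);

        // InvokeMember routes through IDispatch::GetIDsOfNames + Invoke,
        // so no interop assembly or #import is needed.
        object result = comType.InvokeMember(
            "DoSomething",
            BindingFlags.InvokeMethod,
            null,
            ocx,
            new object[] { 42 });

        Console.WriteLine(result);
    }
}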
| Using an ocx in a console application | I want to quickly test an ocx. How do I drop that ocx in a console application. I have found some tutorials in CodeProject and but are incomplete.
| [
"Isn't an OCX an ActiveX User Control? (something that you put onto a form for the user to interact with)?\nThe easiest way I know of to test COM/ActiveX stuff is to use excel. (Yes I know it sounds dumb, bear with me)\n\nRun Excel, create a new file if it hasn't done this for you\nPress Alt+F11 to launch the Visual Basic Editor (if you have excel 2007 it's on the 'Developer' ribbon tab thing\n\nNow that you're in happy visual basic land...\n\nFrom the Tools menu, select References\nSelect your OCX/COM object from the list, or click Browse... to find the file if it's not registered with COM - You may be able to skip this step if your OCX is already registered.\nFrom the Insert menu, select UserForm\nIn the floating Toolbox window, right click and select Additional Controls\nFind your OCX in the list and tick it\nYou can then drag your OCX from the toolbox onto the userform\nFrom the Run menu, run it.\nTest your OCX and play around with it.\nSAVE THE EXCEL FILE so you don't have to repeat these steps every time.\n\n",
"Sure..it's pretty easy. Here's a fun app I threw together. I'm assuming you have Visual C++.\nSave to test.cpp and compile: cl.exe /EHsc test.cpp\nTo test with your OCX you'll need to either #import the typelib and use it's CLSID (or just hard-code the CLSID) in the CoCreateInstance call. Using #import will also help define any custom interfaces you might need.\n\n#include \"windows.h\"\n#include \"shobjidl.h\"\n#include \"atlbase.h\"\n\n//\n// compile with: cl /EHsc test.cpp\n//\n\n// A fun little program to demonstrate creating an OCX.\n// (CLSID_TaskbarList in this case)\n//\n\nBOOL CALLBACK RemoveFromTaskbarProc( HWND hwnd, LPARAM lParam )\n{\n ITaskbarList* ptbl = (ITaskbarList*)lParam;\n ptbl->DeleteTab(hwnd); \n return TRUE;\n}\n\nvoid HideTaskWindows(ITaskbarList* ptbl)\n{\n EnumWindows( RemoveFromTaskbarProc, (LPARAM) ptbl);\n}\n\n// ============\n\nBOOL CALLBACK AddToTaskbarProc( HWND hwnd, LPARAM lParam )\n{\n ITaskbarList* ptbl = (ITaskbarList*)lParam;\n ptbl->AddTab(hwnd); \n\n return TRUE;// continue enumerating\n}\n\nvoid ShowTaskWindows(ITaskbarList* ptbl)\n{\n if (!EnumWindows( AddToTaskbarProc, (LPARAM) ptbl))\n throw \"Unable to enum windows in ShowTaskWindows\";\n}\n\n// ============\n\nint main(int, char**)\n{\n CoInitialize(0);\n\n try {\n CComPtr<IUnknown> pUnk;\n\n if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER|CLSCTX_LOCAL_SERVER, IID_IUnknown, (void**) &pUnk)))\n throw \"Unabled to create CLSID_TaskbarList\";\n\n\n // Do something with the object...\n\n CComQIPtr<ITaskbarList> ptbl = pUnk;\n if (ptbl)\n ptbl->HrInit();\n\n HideTaskWindows(ptbl);\n MessageBox( GetDesktopWindow(), _T(\"Check out the task bar!\"), _T(\"StackOverflow FTW\"), MB_OK);\n ShowTaskWindows(ptbl);\n }\n catch( TCHAR * msg ) {\n MessageBox( GetDesktopWindow(), msg, _T(\"Error\"), MB_OK);\n } \n\n CoUninitialize();\n\n return 0;\n}\n\n",
"@orion thats so cool. Never thought of it that way.\nWell @jschroedl thats was fun indeed. \nTesting an activex in console app is fun. But I think its worth not trying down that path. You can call the methods or set and get the properties either through the way @jschroedl had explained or you can call the IDIspatch object through the Invoke function. \nThe first step is to GetIDsByName and call the function through Invoke and parameters to the function should be an array of VARIANTS in the Invoke formal parameter list.\nAll is fine and dandy. But once you get to events its downhill from there. Windows application requires a message pump to fire events. On a console you don't have one. I went down the path to implement a EventNotifier for the events just like you implement a CallBack interface in classic C++ way. But the events doesn't get to your implemented interface. \nI am pretty sure this cannot be done on a console application. But I am really hoping someone out there will have a different take on events in a console application\n"
] | [
3,
2,
1
] | [] | [] | [
"activex",
"c++",
"console",
"visual_c++"
] | stackoverflow_0000017928_activex_c++_console_visual_c++.txt |
Q:
How can I create virtual machines as part of a build process using MSBuild and MS Virtual Server and/or Hyper-V Server Virtualization?
What I would like to do is create a clean virtual machine image as the output of a build of an application.
So a new virtual machine would be created (from a template is fine, with the OS installed, and some base software installed) --- a new web site would be created in IIS, and the web app build output copied to a location on the virtual machine hard disk, and IIS configured correctly, the VM would start up and run.
I know there are MSBuild tasks to script all the administrative actions in IIS, but how do you script all the actions with Virtual machines? Specifically, creating a new virtual machine from a template, naming it uniquely, starting it, configuring it, etc...
Specifically I was wondering if anyone has successfully implemented any VM scripting as part of a build process.
Update: I assume with Hyper-V, there is a different set of libraries/APIs to script virtual machines, anyone played around with this? And anyone with real practical experience of doing something like this?
A:
You can actually script a fair number of tasks in MS Virtual Server:
http://www.microsoft.com/technet/scriptcenter/scripts/vs/default.mspx?mfr=true
http://msdn.microsoft.com/en-us/library/aa368876(VS.85).aspx
Also Virtual PC guy has got a ton of stuff on his blog about scripting Virtual Server/PC and now Hyper-V here:
http://blogs.msdn.com/virtual_pc_guy/default.aspx
VMware has similar capabilities:
http://www.vmware.com/support/developer/scripting-API/
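If the goal is to drive those scripts from the build itself, one low-tech option is a custom MSBuild task that simply shells out to whatever provisioning script you write against the APIs above. The sketch below is only illustrative - the task name, script path and arguments are placeholders - but the MSBuild plumbing (Task, [Required], Log) is the standard way to package it.
using System.Diagnostics;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class ProvisionVirtualMachine : Task
{
    // Path to a script that actually creates/starts the VM (placeholder).
    [Required]
    public string ScriptPath { get; set; }

    // Unique name for the new VM, e.g. derived from the build number.
    [Required]
    public string VmName { get; set; }

    public override bool Execute()
    {
        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = "cscript.exe"; // or powershell.exe for the Hyper-V library
        psi.Arguments = string.Format("//nologo \"{0}\" \"{1}\"", ScriptPath, VmName);
        psi.UseShellExecute = false;
        psi.RedirectStandardOutput = true;

        using (Process process = Process.Start(psi))
        {
            Log.LogMessage(MessageImportance.Normal, process.StandardOutput.ReadToEnd());
            process.WaitForExit();
            return process.ExitCode == 0; // fail the build if provisioning failed
        }
    }
}
You would then register the task with a <UsingTask> element and call it from an AfterBuild target, so a failed provisioning step fails the build.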
A:
Checkout Powershell Management library for Hyper-V on CodePlex. Some features:
Finding a VM
Connecting to a VM
Discovering and manipulating Machine states
Backing up, exporting and snapshotting VMs
Adding and removing VMs, configuring motherboard settings.
Manipulating Disk controllers, drives and disk images
Manipulating Network Interface Cards
Working with VHD files
| How can I create virtual machines as part of a build process using MSBuild and MS Virtual Server and/or Hyper-V Server Virtualization? | What I would like to do is create a clean virtual machine image as the output of a build of an application.
So a new virtual machine would be created (from a template is fine, with the OS installed, and some base software installed) --- a new web site would be created in IIS, and the web app build output copied to a location on the virtual machine hard disk, and IIS configured correctly, the VM would start up and run.
I know there are MSBuild tasks to script all the administrative actions in IIS, but how do you script all the actions with Virtual machines? Specifically, creating a new virtual machine from a template, naming it uniquely, starting it, configuring it, etc...
Specifically I was wondering if anyone has successfully implemented any VM scripting as part of a build process.
Update: I assume with Hyper-V, there is a different set of libraries/APIs to script virtual machines, anyone played around with this? And anyone with real practical experience of doing something like this?
| [
"You can actually script a fair number of tasks in MS Virtual Server:\nhttp://www.microsoft.com/technet/scriptcenter/scripts/vs/default.mspx?mfr=true\nhttp://msdn.microsoft.com/en-us/library/aa368876(VS.85).aspx\nAlso Virtual PC guy has got a ton of stuff on his blog about scripting Virtual Server/PC and now Hyper-V here:\nhttp://blogs.msdn.com/virtual_pc_guy/default.aspx\nVMware has similar capabilities:\nhttp://www.vmware.com/support/developer/scripting-API/\n",
"Checkout Powershell Management library for Hyper-V on CodePlex. Some features:\n\nFinding a VM\n Connecting to a VM\n Discovering and manipulating Machine states\n Backing up, exporting and snapshotting VMs\n Adding and removing VMs, configuring motherboard settings.\n Manipulating Disk controllers, drives and disk images\n Manipluating Network Interface Cards\n Working with VHD files \n\n"
] | [
3,
3
] | [] | [] | [
"hyper_v",
"msbuild",
"virtualization"
] | stackoverflow_0000011720_hyper_v_msbuild_virtualization.txt |
Q:
Embedding IPTC image data with PHP GD
I'm trying to embed IPTC data into a JPEG image using iptcembed() but am having a bit of trouble.
I have verified it is in the end product:
// Embed the IPTC data
$content = iptcembed($data, $path);
// Verify IPTC data is in the end image
$iptc = iptcparse($content);
var_dump($iptc);
Which returns the tags entered.
However, when I save and reload the image the tags are non-existent:
// Save the edited image
$im = imagecreatefromstring($content);
imagejpeg($im, 'phplogo-edited.jpg');
imagedestroy($im);
// Get data from the saved image
$image = getimagesize('./phplogo-edited.jpg');
// If APP13/IPTC data exists output it
if(isset($image['APP13']))
{
$iptc = iptcparse($image['APP13']);
print_r($iptc);
}
else
{
// Otherwise tell us what the image *does* contain
// SO: This is what's happening
print_r($image);
}
So why aren't the tags in the saved image?
The PHP source is available here, and the respective outputs are:
Image output
Data output
A:
getimagesize has an optional second parameter Imageinfo which contains the info you need.
From the manual:
This optional parameter allows you to extract some extended information from the image file. Currently, this will return the different JPG APP markers as an associative array. Some programs use these APP markers to embed text information in images. A very common one is to embed » IPTC information in the APP13 marker. You can use the iptcparse() function to parse the binary APP13 marker into something readable.
so you could use it like this:
<?php
$size = getimagesize('./phplogo-edited.jpg', $info);
if(isset($info['APP13']))
{
$iptc = iptcparse($info['APP13']);
var_dump($iptc);
}
?>
Hope this helps...
| Embedding IPTC image data with PHP GD | I'm trying to embed a IPTC data onto a JPEG image using iptcembed() but am having a bit of trouble.
I have verified it is in the end product:
// Embed the IPTC data
$content = iptcembed($data, $path);
// Verify IPTC data is in the end image
$iptc = iptcparse($content);
var_dump($iptc);
Which returns the tags entered.
However when I save and reload the image the tags are non existant:
// Save the edited image
$im = imagecreatefromstring($content);
imagejpeg($im, 'phplogo-edited.jpg');
imagedestroy($im);
// Get data from the saved image
$image = getimagesize('./phplogo-edited.jpg');
// If APP13/IPTC data exists output it
if(isset($image['APP13']))
{
$iptc = iptcparse($image['APP13']);
print_r($iptc);
}
else
{
// Otherwise tell us what the image *does* contain
// SO: This is what's happening
print_r($image);
}
So why aren't the tags in the saved image?
The PHP source is avaliable here, and the respective outputs are:
Image output
Data output
| [
"getimagesize has an optional second parameter Imageinfo which contains the info you need. \nFrom the manual:\n\nThis optional parameter allows you to extract some extended information from the image file. Currently, this will return the different JPG APP markers as an associative array. Some programs use these APP markers to embed text information in images. A very common one is to embed » IPTC information in the APP13 marker. You can use the iptcparse() function to parse the binary APP13 marker into something readable.\n\nso you could use it like this:\n<?php\n$size = getimagesize('./phplogo-edited.jpg', $info);\nif(isset($info['APP13']))\n{\n $iptc = iptcparse($info['APP13']);\n var_dump($iptc);\n}\n?>\n\nHope this helps...\n"
] | [
3
] | [] | [] | [
"gd",
"iptc",
"php"
] | stackoverflow_0000024456_gd_iptc_php.txt |
Q:
How much extra overhead is generated when sending a file over a web service as a byte array?
This question and answer shows how to send a file as a byte array through an XML web service. How much overhead is generated by using this method for file transfer? I assume the data looks something like this:
<?xml version="1.0" encoding="UTF-8" ?>
<bytes>
<byte>16</byte>
<byte>28</byte>
<byte>127</byte>
...
</bytes>
If this format is correct, the bytes must first be converted to UTF-8 characters. Each of these characters allocates 8 bytes. Are the bytes stored in base 10, hex, or binary characters? How much larger does the file appear as it is being sent due to the XML data and character encoding? Is compression built into web services?
A:
Typically a byte array is sent as a base64 encoded string, not as individual bytes in tags.
http://en.wikipedia.org/wiki/Base64
The base64 encoded version is about 137% of the size of the original content.
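A quick way to see the overhead for yourself (the file path is just a placeholder):
using System;
using System.IO;

class Base64Overhead
{
    static void Main()
    {
        byte[] raw = File.ReadAllBytes(@"C:\temp\sample.bin");
        string encoded = Convert.ToBase64String(raw);

        // Base64 maps every 3 bytes to 4 ASCII characters (plus padding),
        // so expect roughly 4/3 of the original size before the XML envelope
        // around it is counted.
        Console.WriteLine("raw bytes:    {0}", raw.Length);
        Console.WriteLine("base64 chars: {0}", encoded.Length);
        Console.WriteLine("ratio:        {0:P0}", (double)encoded.Length / raw.Length);
    }
}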
A:
I use this method for some internal corporate webservices, and I haven't noticed any major slow-downs (but that doesn't mean it's not there).
You could probably use any of the numerous network traffic analysis tools to measure the size of the data, and make a judgment call based off that.
A:
I'm not sure about all the details (compressing, encoding, etc) but I usually just use WireShark to analyze the network traffic (while trying various methods) which then allows you to see exactly how it's sent.
For example, if it's compressed the data block of the packet shouldn't be readable as plain text...however if it's uncompressed, you will just see plain old xml text...like you would see with HTTP traffic, or even FTP in certain cases.
A:
To echo what Kevin said, in .net web services if you have a byte array it is sent as a base64 encoded string by default. You can also specify the encoding of the byte array beforehand.
Obviously, once it gets to the server (or client) you need to manually decode the string back into a byte array as this isn't done automagically for you unfortunately.
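A minimal sketch of that decode step, assuming the payload shows up as a base64 string parameter (the method and parameter names are made up):
using System;
using System.IO;

public class UploadHandler
{
    // Turn the received base64 string back into bytes and persist it.
    public void SaveUpload(string base64Payload, string targetPath)
    {
        byte[] fileBytes = Convert.FromBase64String(base64Payload);
        File.WriteAllBytes(targetPath, fileBytes);
    }
}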
A:
The main performance hit isn't going to be from the transfer of the encoded file, it's going to be in the processing that the server has to do to encode the file pre-transfer (unless the files don't change often and the encoded version can be cached somehow).
| How much extra overhead is generated when sending a file over a web service as a byte array? | This question and answer shows how to send a file as a byte array through an XML web service. How much overhead is generated by using this method for file transfer? I assume the data looks something like this:
<?xml version="1.0" encoding="UTF-8" ?>
<bytes>
<byte>16</byte>
<byte>28</byte>
<byte>127</byte>
...
</bytes>
If this format is correct, the bytes must first be converted to UTF-8 characters. Each of these characters allocates 8 bytes. Are the bytes stored in base 10, hex, or binary characters? How much larger does the file appear as it is being sent due to the XML data and character encoding? Is compression built into web services?
| [
"Typically a byte array is sent as a base64 encoded string, not as individual bytes in tags. \nhttp://en.wikipedia.org/wiki/Base64\nThe base64 encoded version is about 137% of the size of the original content.\n",
"I use this method for some internal corporate webservices, and I haven't noticed any major slow-downs (but that doesn't mean it's not there). \nYou could probably use any of the numerous network traffic analysis tools to measure the size of the data, and make a judgment call based off that.\n",
"I'm not sure about all the details (compressing, encoding, etc) but I usually just use WireShark to analyze the network traffic (while trying various methods) which then allows you to see exactly how it's sent.\nFor example, if it's compressed the data block of the packet shouldn't be readable as plain text...however if it's uncompressed, you will just see plain old xml text...like you would see with HTTP traffic, or even FTP in certain cases.\n",
"To echo what Kevin said, in .net web services if you have a byte array it is sent as a base64 encoded string by default. You can also specify the encoding of the byte array beforehand. \nObviously, once it gets to the server (or client) you need to manually decode the string back into a byte array as this isn't done automagically for you unfortunately.\n",
"The main performance hit isn't going to be from the transfer of the encoded file, it's going to be in the processing that the server has to do to encode the file pre-transfer (unless the files don't change often and the encoded version can be cached somehow).\n"
] | [
11,
0,
0,
0,
0
] | [] | [] | [
"web_services",
"xml"
] | stackoverflow_0000011820_web_services_xml.txt |
Q:
What is the purpose of the AppManifest.xaml file in Silverlight applications?
In opening up the .xap file that is generated as output from a Silverlight application I've been tinkering with lately, I noticed a file called AppManifest.xaml.
I've also noticed an option in the property pages for the Silverlight project that appears to allow you to optionally not output AppManifest.xaml for the project. When unchecking that option, however, I get errors when running the application: Invalid or malformed application: Check manifest.
What is the purpose of the AppManifest.xaml file?
A:
Maybe this blog post will help: http://blogs.msdn.com/katriend/archive/2008/03/16/silverlight-2-structure-of-the-new-xap-file-silverlight-packaged-application.aspx. It discusses the .xap file and its parts including the AppManifest.
To save people a link click, in short, it defines the application for deployment, its entry point, and references all the assemblies needed to run.
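If you're curious what your own manifest declared, the same information surfaces at runtime through System.Windows.Deployment (a sketch, assuming Silverlight 2 or later):
using System.Text;
using System.Windows;

public static class ManifestInfo
{
    // Deployment.Current is populated from AppManifest.xaml when the XAP loads.
    public static string Describe()
    {
        Deployment deployment = Deployment.Current;
        StringBuilder sb = new StringBuilder();

        sb.AppendLine("Entry assembly: " + deployment.EntryPointAssembly);
        sb.AppendLine("Entry type:     " + deployment.EntryPointType);
        sb.AppendLine("Runtime:        " + deployment.RuntimeVersion);

        foreach (AssemblyPart part in deployment.Parts)
        {
            sb.AppendLine("Assembly part:  " + part.Source);
        }

        return sb.ToString();
    }
}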
| What is the purpose of the AppManifest.xaml file in Silverlight applications? | In opening up the .xap file that is generated as output from a Silverlight application I've been tinkering with lately, I noticed a file called AppManifest.xaml.
I've also noticed an option in the property pages for the Silverlight project that appears to allow you to optionally not output AppManifest.xaml for the project. When unchecking that option, however, I get errors when running the application: Invalid or malformed application: Check manifest.
What is the purpose of the AppManifest.xaml file?
| [
"Maybe this blog post will help: http://blogs.msdn.com/katriend/archive/2008/03/16/silverlight-2-structure-of-the-new-xap-file-silverlight-packaged-application.aspx. It discusses the .xap file and its parts including the AppManifest.\nTo save people a link click, in short, it defines the application for deployment, its entry point, and references all the assemblies needed to run.\n"
] | [
6
] | [] | [] | [
"silverlight"
] | stackoverflow_0000024506_silverlight.txt |
Q:
Stylus/tablet input device
I need to make a WebCast presentation soon and need to do some "whiteboarding" during that WebCast. Does anyone have any stylus/tablet input device recommendations? Anyone ever used such an input device with WebEx's whiteboard feature?
rp
A:
Wacom http://www.wacom.com/index2.cfm
makes by far the best tablets I have ever used. They come in a variety of prices with associated features. If you want to be able to draw 'on-screen' they have the Cintiq, which is the most expensive, starting at $999 but definitely worth it. For a cheaper, more 'traditional' tablet there are the Bamboo and Intuos lines, which start at $79; however, with the Bamboo and the Intuos there is quite a learning curve if you're not already used to using tablets.
A:
A lot of people recommend Wacom. I've tried one, and it is really nice to use. To some extent, it really depends if you want only a tablet (no video feedback on the device), or a 'screen' (having video feedback, which I find nice but is also a bit pricey...).
| Stylus/tablet input device | I need to make a WebCast presentation soon and need to do some "whiteboarding" during that WebCast. Does anyone have any stylus/tablet input device recommendations? Anyone ever used such an input device with WebEx's whiteboard feature?
rp
| [
"Wacom http://www.wacom.com/index2.cfm\nmakes by far the best tablets I have ever used. They come in a variety of prices with associated features. If you want to be able to draw 'on-screen' they have the Cintiq, which is the most expensive, starting at $999 but definitely worth it. For a cheaper more 'traditional' tablet there is Bambo and Intuos which start at $79, however with the Bambo and the Intuos there is quite a learning curve if your not already used to using tablets.\n",
"A lot of people recommend Wacom. I've tried one, and it is really nice to use. To some extent, it really depends if you want only a tablet (no video feedback on the device), or a 'screen' (having video feedback, which I find nice but is also a bit pricey...).\n"
] | [
2,
0
] | [] | [] | [
"webex"
] | stackoverflow_0000024599_webex.txt |
Q:
What Javascript rich text editor will not break the browser's spellcheck?
I'm using TinyMCE in an ASP.Net project, and I need a spell check. The only TinyMCE plugins I've found use PHP on the server side, and I guess I could just break down and install PHP on my server and do that, but quite frankly, what a pain. I don't want to do that.
As it turns out, Firefox's built-in spell check will work fine for me, but it doesn't seem to work on TinyMCE editor boxes. I've enabled the gecko_spellcheck option, which is supposed to fix it, but it doesn't.
Does anybody know of a nice rich-text editor that doesn't break the browser's spell check?
A:
TinyMCE only goes out of its way to disable spell-checking when you don't specify the gecko_spellcheck option (I verified this with their example code). Might want to double-check your tinyMCE.init() call - it should look something like this:
tinyMCE.init({
mode : "textareas",
theme : "simple",
gecko_spellcheck : true
});
A:
Most rich text editors let you specify whether or not to disable the browser's spellchecker (as answered by others), with the exception of those running in Safari.
There is currently no way to programmatically disable the Safari spellchecker (as there is in FF and IE7+), so most rich text editors choose to let Safari do its own thing by leaving the browser in control of the context menu.
A:
I know at least yahoo!'s Rich Text Editor will let you use the included spell checker in FireFox.
I also tested FCKeditor, but that requires the users to install additional plugins on their computer.
| What Javascript rich text editor will not break the browser's spellcheck? | I'm using TinyMCE in an ASP.Net project, and I need a spell check. The only TinyMCE plugins I've found use PHP on the server side, and I guess I could just break down and install PHP on my server and do that, but quite frankly, what a pain. I don't want to do that.
As it turns out, Firefox's built-in spell check will work fine for me, but it doesn't seem to work on TinyMCE editor boxes. I've enabled the gecko_spellcheck option, which is supposed to fix it, but it doesn't.
Does anybody know of a nice rich-text editor that doesn't break the browser's spell check?
| [
"TinyMCE only goes out of its way to disable spell-checking when you don't specify the gecko_spellcheck option (i verified this with their example code). Might want to double-check your tinyMCE.init() call - it should look something like this:\ntinyMCE.init({\n mode : \"textareas\",\n theme : \"simple\",\n gecko_spellcheck : true\n});\n\n",
"Most rich text editors let you specify whether or not to disable the browser's spellchecker (as answered by others), with the exception of those running in Safari.\nThere is currently no way to programmatically disable the Safari spellchecker (as there is in FF and IE7+), so most rich text editors choose to let Safari do its own thing by leaving the browser in control of the context menu.\n",
"I know at least yahoo!'s Rich Text Editor will let you use the included spell checker in FireFox.\nI also tested FCKeditor, but that requires the users to install additional plugins on their computer.\n"
] | [
4,
1,
0
] | [] | [] | [
"javascript",
"spell_checking"
] | stackoverflow_0000023620_javascript_spell_checking.txt |
Q:
Vi editing for Visual Studio
I'm used to the Vi(m) editor and am using MS Visual Studio 2005 at work. I couldn't find a free Vi add-in (there's only one for the 2003 version). I googled a bit, saw that there was a 'Google summer of code' project this year to write such an add-in, and am eagerly awaiting the result. I've also heard of ViEmu (not free, and I can't test it at work).
Has anyone in my situation found a solution (and/or tested ViEmu)?
Edit: I can't test ViEmu at work because they are paranoid about what we install on our boxes: it has to go through required channels, and for 30 days I don't reckon it's worth it (and I have no Windows box at home).
Edit: Since both answers were equivalent, I ended up accepting the first one that came in.
A:
ViEmu works great with Visual Studio. I used Vi(m) strictly on Linux, but I was turned on to bringing the Vi(m) editing process into the Windows world by JP Boodhoo. JP praises it as well.
A:
ViEmu works great. I've been using it for about a year now and couldn't imagine coding in Visual Studio without it.
Why can't you test it at work? It has a 30 day free trial.
| Vi editing for Visual Studio | I'm used to the Vi(m) editor and am using MS Visual Studio 2005 at work. I couldn't find a free Vi add-in (there's only one for the 2003 version). I googled a bit, saw that there was a 'Google summer of code' project this year to write such an add-in, and am eagerly awaiting the result. I've also heard of ViEmu (not free, and I can't test it at work).
Has anyone in my situation has found a solution (and/or tested ViEmu)?
Edit: I can't test ViEmu at work because they are paranoid about what we install on our boxes: it has to go through required channels, and for 30 days I don't reckon it's worth it (and I have no Windows box at home).
Edit: Since both answers were equivalent, I ended up accepting the first one that came in.
| [
"ViEmu works great with Visual Studio. I used Vi(m) strictly in Linux, but I was turned on to bringing the Vi(m) editing process into the Windows world by JP Boodhoo. JP praises about it also.\n",
"ViEmu works great. I've been using it for about a year now and couldn't imagine coding in Visual Studio without it.\nWhy can't you test it at work? It has a 30 day free trial.\n"
] | [
6,
3
] | [] | [] | [
"editor",
"ide",
"vim",
"visual_studio"
] | stackoverflow_0000024610_editor_ide_vim_visual_studio.txt |
Q:
Securing a linux webserver for public access
I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even just so I can create some tools I can use while I am working in the office. All the experience I've had with configuring Java EE sites has all been for intranet applications where we were told not to focus on securing the pages for external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
A:
This article has some of the best ways to lock things down:
http://www.petefreitag.com/item/505.cfm
Some highlights:
Make sure no one can browse the directories
Make sure only root has write privileges to everything, and only root has read privileges to certain config files
Run mod_security
The article also takes some pointers from this book:
Apache Security (O'Reilly Press)
As far as distros, I've run Debian and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just ssh'd into it whenever I needed anything. That is a simple way to keep overhead down. Or Ubuntu has some nice GUI things that make it easy to control Apache/MySQL/PHP.
A:
It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:
1) Security through obscurity
Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.
On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run a ssh server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server then great, shift that to an obscure port too.
2) Package management
Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.
A:
One thing you should be sure to consider is what ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (http) or ftp, make sure you at least know what you are serving to the world and who can do what with that. I don't know a lot about ftp, but there are millions of great Apache tutorials just a Google search away.
A:
Bit-Tech.Net ran a couple of articles on how to setup a home server using linux. Here are the links:
Article 1
Article 2
Hope those are of some help.
A:
@svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.
A:
It's safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and your wits up about your configuration (i.e., avoid using root for everything, make sure you keep your software up to date).
On that note, although this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick with anything Ubuntu (get Ubuntu Server here); in my experience it's the quickest way to get answers when asking questions on forums (not sure what to say about uptake though).
My home server security BTW kinda benefits (I think, or I like to think) from not having a static IP (runs on DynDNS).
Good luck!
/mp
A:
Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.
That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)
A:
You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production. And only pay for the space,time and bandwidth you use.
A:
If you do run a Linux server from home, install ossec on it for a nice lightweight IDS that works really well.
[EDIT]
As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports.
A:
If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.
A:
There are plenty of ways to do this that will work just fine. I would usually just use a .htaccess file. Quick to set up and secure enough. Probably not the best option, but it works for me. I wouldn't put my credit card numbers behind it, but other than that I don't really care.
A:
Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources.
Your whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.
If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.
| Securing a linux webserver for public access | I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even just so I can create some tools I can use while I am working in the office. All the experience I've had with configuring Java EE sites has all been for intranet applications where we were told not to focus on securing the pages for external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
| [
"This article has some of the best ways to lock things down:\nhttp://www.petefreitag.com/item/505.cfm\nSome highlights:\n\nMake sure no one can browse the directories\nMake sure only root has write privileges to everything, and only root has read privileges to certain config files\nRun mod_security\n\nThe article also takes some pointers from this book:\nApache Securiy (O'Reilly Press)\nAs far as distros, I've run Debain and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just ssh'd into it whenever i needed anything. That is a simple way to keep overhead down. Or Ubuntu has some nice GUI things that make it easy to control Apache/MySQL/PHP.\n",
"It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:\n1) Security through obscurity\nNeedless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.\nOn a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run a ssh server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server then great, shift that to an obscure port too.\n2) Package management\nDon't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.\n",
"One thing you should be sure to consider is what ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (http) or ftp make sure you learn to know at least what you are serving to the world and who can do what with that. I don't know a lot about ftp, but there are millions of great Apache tutorials just a Google search away.\n",
"Bit-Tech.Net ran a couple of articles on how to setup a home server using linux. Here are the links:\nArticle 1\nArticle 2\nHope those are of some help.\n",
"@svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.\n",
"Its safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and your wits up about your configuration (i.e., avoid using root for everything, make sure you keep your software up to date).\nOn that note, albeit this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick to anything Ubuntu (get Ubuntu Server here); in my experience, the quickest to get answers from whence asking questions on forums (not sure what to say about uptake though).\nMy home server security BTW kinda benefits (I think, or I like to think) from not having a static IP (runs on DynDNS).\nGood luck!\n/mp\n",
"Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.\nThat being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)\n",
"You could consider an EC2 instance from Amazon. That way you can easily test out \"stuff\" without messing with production. And only pay for the space,time and bandwidth you use.\n",
"If you do run a Linux server from home, install ossec on it for a nice lightweight IDS that works really well.\n[EDIT]\nAs a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports.\n",
"If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.\n",
"There are plenty of ways to do this that will work just fine. I would usually jsut use a .htaccess file. Quick to set up and secure enough . Probably not the best option but it works for me. I wouldn't put my credit card numbers behind it but other than that I dont really care.\n",
"Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources. \nYour whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.\nIf you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.\n"
] | [
5,
5,
2,
2,
2,
1,
1,
1,
1,
1,
0,
0
] | [] | [] | [
"linux",
"security",
"webserver"
] | stackoverflow_0000005078_linux_security_webserver.txt |
Q:
IE 7+ Favorites
Is it possible to develop a plug-in for Internet Explorer that can replace the existing favorites functionality?
A:
Absolutely, however, it does depend somewhat on what you expect "replacing" to mean. You can develop an extension to provide a new set of menus or dropdown toolbar of some kind like the Google Bookmarks toolbar for example, or like the Delicious toolbar & sidebar. These IMO are much better designs for managing bookmarks than the built-in IE menu anyway. However, you could build a top level menu structure that worked the exact same way as the IE favorites menu if you wanted as well. There are many libraries out there that you could use to even handle the IE integration as well.
I don't know what language you develop in, but some example libraries to make the IE addons a breeze are (for .NET, there are plenty others out there for other languages as well):
http://www.add-in-express.com/programming-internet-explorer/
http://www.ssware.com/ezshell/ezshell.htm
also some articles to create your own from scratch:
http://www.codeproject.com/kb/applications/codeprojectsearchbar.aspx
http://www.codeproject.com/KB/atl/rbdeskband.aspx
That should get you going.
| IE 7+ Favorites | Is it possible to develop a plug-in for Internet Explorer that can replace the existing favorites functionality?
| [
"Absolutely, however, it does depend somewhat on what you expect \"replacing\" to mean. You can develop an extension to provide a new set of menus or dropdown toolbar of some kind like the Google Bookmarks toolbar for example, or like the Delicious toolbar & sidebar. These IMO are much better designs for managing bookmarks than the built-in IE menu anyway. However, you could build a top level menu structure that worked the exact same way as the IE favorites menu if you wanted as well. There are many libraries out there that you could use to even handle the IE integration as well. \nI don't know what language you develop in, but some example libraries to make the IE addons a breeze are (for .NET, there are plenty others out there for other languages as well): \nhttp://www.add-in-express.com/programming-internet-explorer/\nhttp://www.ssware.com/ezshell/ezshell.htm \nalso some articles to create your own from scratch:\nhttp://www.codeproject.com/kb/applications/codeprojectsearchbar.aspx\nhttp://www.codeproject.com/KB/atl/rbdeskband.aspx \nThat should get you going.\n"
] | [
5
] | [] | [] | [
"bookmarks",
"favorites",
"internet_explorer"
] | stackoverflow_0000024439_bookmarks_favorites_internet_explorer.txt |
Q:
What's the general consensus on supporting Windows 2000?
What's the general consensus on supporting Windows 2000 for software distribution? Are people supporting Windows XP SP2+ for new software development or is this too restrictive still?
A:
"OK" is a subjective judgement. You'll need to take a look at your client base and see what they're using.
Having said that, I dropped support for Win2K over a year ago with no negative impact.
A:
I'd say MS have made the decision for you if they themselves won't support it in .NET 3.5.
A:
The latest version of WinRAR still supports Windows 95. Think about it, why is that? It's because WinRAR solves an extremely common problem: unpacking a file. People still use older systems not because they like them, but because they are forced to by the hardware. If you're making a video game, sure, drop support for anything below XP SP2, but if you're making a program that solves a specific task, like converting an RTF to PDF, I don't see a reason not to support other systems.
A:
It is not merely "OK"; it is a good idea. Anything to encourage the laggards to keep current is a good thing.
A:
A lot of computers at my company use Win2k, so we couldn't really drop support. It all depends on the client base.
A:
With XP being 5/6 years old now, I think most home users will be using it, but many business users may still be on Windows 2000. All in all, it depends on your target audience.
Personally I would regard Windows 2000 support as a bonus rather than a requirement.
A:
This is very subjective, it really depends who you're selling to.
If it's average Joe then Windows 2K owners are going to be at best a percent or two of your target market. If it's the military (who I believe still run 2K on their toughbooks) then you're in trouble.
A:
It's fine by me :)
The company I work for (mining and construction) has <15k employees, and we don't support Win2k and have not for a while.
A:
I would say yes, as most have switched to XP or Vista, from what I can tell.
| What's the general consensus on supporting Windows 2000? | What's the general consensus on supporting Windows 2000 for software distribution? Are people supporting Windows XP SP2+ for new software development or is this too restrictive still?
| [
"\"OK\" is a subjective judgement. You'll need to take a look at your client base and see what they're using.\nHaving said that, I dropped support for Win2K over a year ago with no negative impact.\n",
"I'd say MS have made the decision for you if they themselves wont support it in .NET 3.5.\n",
"The latest version of WinRAR still supports Windows 95. Think about it, why is that? It's because WinRAR solves a extremely common problem - of unpacking a file. People still use older systems not because they like them, but because they are forced to by the hardware. If you're making a video game, sure, drop support for anything below XP SP2, but if you're making a program that solves a specific task, like converting an RTF to PDF, I don't see a reason not to support other systems.\n",
"It is not merely \"OK\"; it is a good idea. Anything to encourage the laggards to keep current is a good thing.\n",
"A lot of computers at my company use Win2k, so we couldn't really drop support. It all depends on the client base.\n",
"With XP being 5/6 years old now, I think most home users will be using it, but many business users may still be using it. all in all, it depends on your target audience.\nPersonally I would regard Windows 2000 support as a bonus rather than a requirement.\n",
"This is very subjective, it really depends who you're selling to.\nIf it's average Joe then Windows 2K owners are going to be at best a percent or two of your target market. If it's the military (who I believe still run 2K on their toughbooks) then you're in trouble.\n",
"Its fine by me :)\nThe company i work for (mining and construction) with <15k employees and we don't support Wink2k and have not for a while.\n",
"I would say yes, as most have switched to XP or vista, from what I can tell.\n"
] | [
8,
3,
1,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"compatibility",
"deployment",
"windows"
] | stackoverflow_0000011801_compatibility_deployment_windows.txt |
Q:
SelectNodes not working on stackoverflow feed
I'm trying to add support for stackoverflow feeds in my rss reader but SelectNodes and SelectSingleNode have no effect. This is probably something to do with ATOM and xml namespaces that I just don't understand yet.
I have gotten it to work by removing all attributes from the feed tag, but that's a hack and I would like to do it properly. So, how do you use SelectNodes with atom feeds?
Here's a snippet of the feed.
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule" xmlns:thr="http://purl.org/syndication/thread/1.0">
<title type="html">StackOverflow.com - Questions tagged: c</title>
<link rel="self" href="http://stackoverflow.com/feeds/tag/c" type="application/atom+xml" />
<subtitle>Check out the latest from StackOverflow.com</subtitle>
<updated>2008-08-24T12:25:30Z</updated>
<id>http://stackoverflow.com/feeds/tag/c</id>
<creativeCommons:license>http://www.creativecommons.org/licenses/by-nc/2.5/rdf</creativeCommons:license>
<entry>
<id>http://stackoverflow.com/questions/22901/what-is-the-best-way-to-communicate-with-a-sql-server</id>
<title type="html">What is the best way to communicate with a SQL server?</title>
<category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="c" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="c++" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="sql" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="mysql" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="database" />
<author><name>Ed</name></author>
<link rel="alternate" href="http://stackoverflow.com/questions/22901/what-is-the-best-way-to-communicate-with-a-sql-server" />
<published>2008-08-22T05:09:04Z</published>
<updated>2008-08-23T04:52:39Z</updated>
<summary type="html"><p>I am going to be using c/c++, and would like to know the best way to talk to a MySQL server. Should I use the library that comes with the server installation? Are they any good libraries I should consider other than the official one?</p></summary>
<link rel="replies" type="application/atom+xml" href="http://stackoverflow.com/feeds/question/22901/answers" thr:count="2"/>
<thr:total>2</thr:total>
</entry>
</feed>
The Solution
XmlDocument doc = new XmlDocument();
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("atom", "http://www.w3.org/2005/Atom");
doc.Load(feed);
// successful
XmlNodeList itemList = doc.DocumentElement.SelectNodes("atom:entry", nsmgr);
A:
Don't confuse the namespace names in the XML file with the namespace names for your namespace manager. They're both shortcuts, and they don't necessarily have to match.
So you can register "http://www.w3.org/2005/Atom" as "atom", and then do a SelectNodes for "atom:entry".
A:
You might need to add a XmlNamespaceManager.
XmlDocument document = new XmlDocument();
XmlNamespaceManager nsmgr = new XmlNamespaceManager(document.NameTable);
nsmgr.AddNamespace("creativeCommons", "http://backend.userland.com/creativeCommonsRssModule");
// AddNamespace for other namespaces too.
document.Load(feed);
It is needed if you want to call SelectNodes on a document that uses them. What error are you seeing?
A:
You've guessed correctly: you're asking for nodes not in a namespace, but these nodes are in a namespace.
Description of the problem and solution: http://weblogs.asp.net/wallen/archive/2003/04/02/4725.aspx
A:
I just want to use..
XmlNodeList itemList = xmlDoc.DocumentElement.SelectNodes("entry");
but what namespace do the entry tags fall under? I would assume xmlns="http://www.w3.org/2005/Atom", but it has no prefix, so how would I add that namespace?
XmlDocument document = new XmlDocument();
XmlNamespaceManager nsmgr = new XmlNamespaceManager(document.NameTable);
nsmgr.AddNamespace("", "http://www.w3.org/2005/Atom");
document.Load(feed);
Something like that?
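Close, but the empty prefix won't get you there: XPath 1.0 treats an unprefixed name like entry as "entry in no namespace", so it can never match elements that live in the Atom default namespace. You have to register some prefix (any name you like, it doesn't need to appear in the feed) and use it in the query. A small sketch, using the feed from the question:
using System;
using System.Xml;

class AtomEntries
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.Load("http://stackoverflow.com/feeds/tag/c");

        // The "atom" prefix exists only in our queries; the feed itself
        // declares the namespace as the default (no prefix).
        XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
        nsmgr.AddNamespace("atom", "http://www.w3.org/2005/Atom");

        XmlNodeList entries = doc.DocumentElement.SelectNodes("atom:entry", nsmgr);
        foreach (XmlNode entry in entries)
        {
            XmlNode title = entry.SelectSingleNode("atom:title", nsmgr);
            Console.WriteLine(title != null ? title.InnerText : "(no title)");
        }
    }
}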
| SelectNodes not working on stackoverflow feed | I'm trying to add support for stackoverflow feeds in my rss reader but SelectNodes and SelectSingleNode have no effect. This is probably something to do with ATOM and xml namespaces that I just don't understand yet.
I have gotten it to work by removing all attributes from the feed tag, but that's a hack and I would like to do it properly. So, how do you use SelectNodes with atom feeds?
Here's a snippet of the feed.
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule" xmlns:thr="http://purl.org/syndication/thread/1.0">
<title type="html">StackOverflow.com - Questions tagged: c</title>
<link rel="self" href="http://stackoverflow.com/feeds/tag/c" type="application/atom+xml" />
<subtitle>Check out the latest from StackOverflow.com</subtitle>
<updated>2008-08-24T12:25:30Z</updated>
<id>http://stackoverflow.com/feeds/tag/c</id>
<creativeCommons:license>http://www.creativecommons.org/licenses/by-nc/2.5/rdf</creativeCommons:license>
<entry>
<id>http://stackoverflow.com/questions/22901/what-is-the-best-way-to-communicate-with-a-sql-server</id>
<title type="html">What is the best way to communicate with a SQL server?</title>
<category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="c" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="c++" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="sql" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="mysql" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="database" />
<author><name>Ed</name></author>
<link rel="alternate" href="http://stackoverflow.com/questions/22901/what-is-the-best-way-to-communicate-with-a-sql-server" />
<published>2008-08-22T05:09:04Z</published>
<updated>2008-08-23T04:52:39Z</updated>
<summary type="html"><p>I am going to be using c/c++, and would like to know the best way to talk to a MySQL server. Should I use the library that comes with the server installation? Are they any good libraries I should consider other than the official one?</p></summary>
<link rel="replies" type="application/atom+xml" href="http://stackoverflow.com/feeds/question/22901/answers" thr:count="2"/>
<thr:total>2</thr:total>
</entry>
</feed>
The Solution
XmlDocument doc = new XmlDocument();
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("atom", "http://www.w3.org/2005/Atom");
doc.Load(feed);
// successful
XmlNodeList itemList = doc.DocumentElement.SelectNodes("atom:entry", nsmgr);
| [
"Don't confuse the namespace names in the XML file with the namespace names for your namespace manager. They're both shortcuts, and they don't necessarily have to match.\nSo you can register \"http://www.w3.org/2005/Atom\" as \"atom\", and then do a SelectNodes for \"atom:entry\".\n",
"You might need to add a XmlNamespaceManager.\nXmlDocument document = new XmlDocument();\nXmlNamespaceManager nsmgr = new XmlNamespaceManager(document.NameTable);\nnsmgr.AddNamespace(\"creativeCommons\", \"http://backend.userland.com/creativeCommonsRssModule\");\n// AddNamespace for other namespaces too.\ndocument.Load(feed);\n\nIt is needed if you want to call SelectNodes on a document that uses them. What error are you seeing?\n",
"You've guessed correctly: you're asking for nodes not in a namespace, but these nodes are in a namespace.\nDescription of the problem and solution: http://weblogs.asp.net/wallen/archive/2003/04/02/4725.aspx\n",
"I just want to use..\nXmlNodeList itemList = xmlDoc.DocumentElement.SelectNodes(\"entry\");\n\nbut, what namespace do the entry tags fall under? I would assume xmlns=\"http://www.w3.org/2005/Atom\", but it has no title so how would I add that namespace?\nXmlDocument document = new XmlDocument();\nXmlNamespaceManager nsmgr = new XmlNamespaceManager(document.NameTable);\nnsmgr.AddNamespace(\"\", \"http://www.w3.org/2005/Atom\");\ndocument.Load(feed);\n\nSomething like that?\n"
] | [
9,
6,
2,
0
] | [] | [] | [
".net",
"atom_feed",
"c#",
"rss"
] | stackoverflow_0000024734_.net_atom_feed_c#_rss.txt |
Q:
Displaying version of underlying software in footer of web app?
I am thinking about showing the version of, say, the database schema and the DLLs for the business logic in the footer of my web application.
Is this advised?
Are there any pitfalls, or pointers of how to do this best?
Usability concerns?
I already have a version scheme, for both schema and dlls, used in my CI solution.
A:
Don't do this. It gives away free information to a potential attacker and makes their job easier. If there are exploits known for your given version of the software, there's no need to tell them that. There are actually search engines built on top of Google that use this information to power massive exploits (e.g. cDc's Goolag scanner).
Although this may sound like security by obscurity (because it is), it is still advisable to make an attacker's job as hard as possible. Not divulging implementation details is an important step. Of course, this can only ever be part of the effort to make a website more secure.
A:
I quite like what is done e.g. here. If you look towards the bottom of the page, there's a piece of text "powered by eve community". If you click that text you get a small chunk of technical information.
To me, this is a nice tradeoff between having the (useful) information readily available (for bug reports, etc.) and not putting (unpleasant) technical jargon in front of every user of the site.
A:
IMO, the only reasons to show version numbers are:
To show progress is being made
To help bug reports be localized to the version they were discovered in
So if these things are important for your bug reports, then expose them. If not, then don't.
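If you do end up exposing a build number, one middle ground between these answers is to render it only to authenticated administrators rather than in the public footer. A minimal sketch (the class name and the usage below are made up for illustration):

using System.Reflection;

public static class BuildInfo
{
    // Version of the assembly this class lives in (the business-logic DLL, say).
    public static string AssemblyVersion
    {
        get { return typeof(BuildInfo).Assembly.GetName().Version.ToString(); }
    }
}

// Hypothetical usage in a page or master-page code-behind:
//   if (User.Identity.IsAuthenticated && User.IsInRole("Admin"))
//       footerVersionLabel.Text = "Build " + BuildInfo.AssemblyVersion;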
| Displaying version of underlying software in footer of web app? | I am thinking about providing a version of say, the database schema and the dlls for business logic in the footer of my web application.
Is this advised?
Are there any pitfalls, or pointers of how to do this best?
Usability concerns?
I already have a version scheme, for both schema and dlls, used in my CI solution.
| [
"Don't do this. It gives away free information to a potential attacker and makes their job easier. If there are exploits known for your given version of the software, there's no need to tell them that. There are actually search engines built on top of Google who use this information incontinence to power massive exploits (e.g. cDc's Goolag scanner).\nAlthough this may sound like security by obscurity (because it is) it is still advisable to make an attacker's job as hard as possible. Not divulging implementation details is an important step. Of course, this can only ever be part of the effort to make a website securer.\n",
"I quite like what is done e.g. here. If you look towards the bottom of the page, there's a piece of text \"powered by eve community\". If you click that text you get a small chunk of technical information.\nTo me, this is a nice tradeoff between having the (useful) information readily available (for bug reports, etc.) and having to have (unpleasant) technical jargon visible to users of the site.\n",
"IMO, the only reasons to show version numbers are:\n\nTo show progress is being made\nTo help bug reports be localized to the version they were discovered in\n\nSo if these things are important for your bug reports, then expose them. If not, then don't.\n"
] | [
10,
6,
2
] | [] | [] | [
"assemblies",
"versioning",
"web_applications"
] | stackoverflow_0000024731_assemblies_versioning_web_applications.txt |
Q:
SQL many-to-many matching
I'm implementing a tagging system for a website. There are multiple tags per object and multiple objects per tag. This is accomplished by maintaining a table with two values per record, one for the ids of the object and the tag.
I'm looking to write a query to find the objects that match a given set of tags. Suppose I had the following data (in [object] -> [tags]* format)
apple -> fruit red food
banana -> fruit yellow food
cheese -> yellow food
firetruck -> vehicle red
If I want to match (red), I should get apple and firetruck. If I want to match (fruit, food) I should get (apple, banana).
How do I write a SQL query to do what I want?
@Jeremy Ruten,
Thanks for your answer. The notation was only used to give some sample data - my database does have a table with 1 object id and 1 tag per record.
Second, my problem is that I need to get all objects that match all tags. Substituting your OR for an AND like so:
SELECT object WHERE tag = 'fruit' AND tag = 'food';
Yields no results when run.
A:
Given:
object table (primary key id)
objecttags table (foreign keys objectId, tagid)
tags table (primary key id)
SELECT distinct o.*
from object o join objecttags ot on o.Id = ot.objectid
join tags t on ot.tagid = t.id
where t.Name = 'fruit' or t.name = 'food';
This seems backwards, since you want and, but the issue is, 2 tags aren't on the same row, and therefore, an and yields nothing, since 1 single row cannot be both a fruit and a food.
This query will yield duplicates usually, because you will get 1 row of each object, per tag.
If you wish to really do an and in this case, you will need a group by, and a having count = <number of ors> in your query for example.
SELECT distinct o.name, count(*) as count
from object o join objecttags ot on o.Id = ot.objectid
join tags t on ot.tagid = t.id
where t.Name = 'fruit' or t.name = 'food'
group by o.name
having count = 2;
A:
Oh gosh I may have mis-interpreted your original comment.
The easiest way to do this in SQL would be to have three tables:
1) Tags ( tag_id, name )
2) Objects (whatever that is)
3) Object_Tag( tag_id, object_id )
Then you can ask virtually any question you want of the data quickly, easily, and efficiently (provided you index appropriately). If you want to get fancy, you can allow multi-word tags, too (there's an elegant way, and a less elegant way, I can think of).
I assume that's what you've got, so this SQL below will work:
The literal way:
SELECT obj
FROM object
WHERE EXISTS( SELECT *
FROM tags
WHERE tag = 'fruit'
AND oid = object_id )
AND EXISTS( SELECT *
FROM tags
WHERE tag = 'Apple'
AND oid = object_id )
There are also other ways you can do it, such as:
SELECT oid
FROM tags
WHERE tag = 'Apple'
INTERSECT
SELECT oid
FROM tags
WHERE tag = 'Fruit'
A:
@Kyle: Your query should be more like:
SELECT object WHERE tag IN ('fruit', 'food');
Your query was looking for rows where the tag was both fruit AND food, which is impossible seeing as the field can only have one value, not both at the same time.
A:
Combine Steve M.'s suggestion with Jeremy's and you'll get a single record with what you are looking for:
select object
from tblTags
where tag = @firstMatch
and (
@secondMatch is null
or
(object in (select object from tblTags where tag = @secondMatch))
)
Now, that doesn't scale very well but it will get what you are looking for. I think there is a better way to go about doing this so you can easily have N number of matching items without a great deal of impact to the code but it currently escapes me.
A:
I recommend the following schema.
Objects: objectID, objectName
Tags: tagID, tagName
ObjectTag: objectID,tagID
With the following query.
select distinct
objectName
from
ObjectTag ot
join object o
on o.objectID = ot.objectID
join Tags t
on t.tagID = ot.tagID
where
tagName in ('red','fruit')
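Building on the Objects/Tags/ObjectTag schema above, here is a rough C# sketch (not from any answer; assumes SQL Server and System.Data.SqlClient) of how the "match all tags" query can be parameterized for any number of tags, using the GROUP BY / HAVING COUNT idea from the first answer:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

static class TagSearch
{
    // Returns the names of objects that carry *all* of the given tags.
    public static List<string> FindObjectsWithAllTags(string connectionString, params string[] tags)
    {
        List<string> results = new List<string>();
        if (tags.Length == 0) return results;

        // One parameter per tag: @t0, @t1, ...
        string[] paramNames = new string[tags.Length];
        for (int i = 0; i < tags.Length; i++) paramNames[i] = "@t" + i;

        string sql =
            "SELECT o.objectName " +
            "FROM Objects o " +
            "JOIN ObjectTag ot ON ot.objectID = o.objectID " +
            "JOIN Tags t ON t.tagID = ot.tagID " +
            "WHERE t.tagName IN (" + string.Join(", ", paramNames) + ") " +
            "GROUP BY o.objectName " +
            "HAVING COUNT(DISTINCT t.tagID) = @tagCount";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            for (int i = 0; i < tags.Length; i++)
                cmd.Parameters.AddWithValue(paramNames[i], tags[i]);
            cmd.Parameters.AddWithValue("@tagCount", tags.Length);

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
                while (reader.Read())
                    results.Add(reader.GetString(0));
        }
        return results;
    }
}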
| SQL many-to-many matching | I'm implementing a tagging system for a website. There are multiple tags per object and multiple objects per tag. This is accomplished by maintaining a table with two values per record, one for the ids of the object and the tag.
I'm looking to write a query to find the objects that match a given set of tags. Suppose I had the following data (in [object] -> [tags]* format)
apple -> fruit red food
banana -> fruit yellow food
cheese -> yellow food
firetruck -> vehicle red
If I want to match (red), I should get apple and firetruck. If I want to match (fruit, food) I should get (apple, banana).
How do I write a SQL query do do what I want?
@Jeremy Ruten,
Thanks for your answer. The notation used was used to give some sample data - my database does have a table with 1 object id and 1 tag per record.
Second, my problem is that I need to get all objects that match all tags. Substituting your OR for an AND like so:
SELECT object WHERE tag = 'fruit' AND tag = 'food';
Yields no results when run.
| [
"Given:\n\nobject table (primary key id)\nobjecttags table (foreign keys objectId, tagid)\ntags table (primary key id)\nSELECT distinct o.*\n from object o join objecttags ot on o.Id = ot.objectid\n join tags t on ot.tagid = t.id\n where t.Name = 'fruit' or t.name = 'food';\n\n\nThis seems backwards, since you want and, but the issue is, 2 tags aren't on the same row, and therefore, an and yields nothing, since 1 single row cannot be both a fruit and a food.\nThis query will yield duplicates usually, because you will get 1 row of each object, per tag.\nIf you wish to really do an and in this case, you will need a group by, and a having count = <number of ors> in your query for example.\n SELECT distinct o.name, count(*) as count\n from object o join objecttags ot on o.Id = ot.objectid\n join tags t on ot.tagid = t.id\n where t.Name = 'fruit' or t.name = 'food'\ngroup by o.name\n having count = 2;\n\n",
"Oh gosh I may have mis-interpreted your original comment. \nThe easiest way to do this in SQL would be to have three tables:\n1) Tags ( tag_id, name )\n2) Objects (whatever that is)\n3) Object_Tag( tag_id, object_id )\n\nThen you can ask virtually any question you want of the data quickly, easily, and efficiently (provided you index appropriately). If you want to get fancy, you can allow multi-word tags, too (there's an elegant way, and a less elegant way, I can think of). \nI assume that's what you've got, so this SQL below will work:\nThe literal way: \n SELECT obj \n FROM object\n WHERE EXISTS( SELECT * \n FROM tags \n WHERE tag = 'fruit' \n AND oid = object_id ) \n AND EXISTS( SELECT * \n FROM tags \n WHERE tag = 'Apple'\n AND oid = object_id )\n\nThere are also other ways you can do it, such as:\nSELECT oid\n FROM tags\n WHERE tag = 'Apple'\nINTERSECT\nSELECT oid\n FROM tags\n WHERE tag = 'Fruit'\n\n",
"@Kyle: Your query should be more like:\n\nSELECT object WHERE tag IN ('fruit', 'food');\n\nYour query was looking for rows where the tag was both fruit AND food, which is impossible seeing as the field can only have one value, not both at the same time.\n",
"Combine Steve M.'s suggestion with Jeremy's you'll get a single record with what you are looking for:\nselect object\nfrom tblTags\nwhere tag = @firstMatch\nand (\n @secondMatch is null \n or \n (object in (select object from tblTags where tag = @secondMatch)\n )\n\nNow, that doesn't scale very well but it will get what you are looking for. I think there is a better way to go about doing this so you can easily have N number of matching items without a great deal of impact to the code but it currently escapes me.\n",
"I recommend the following schema.\nObjects: objectID, objectName\nTags: tagID, tagName\nObjectTag: objectID,tagID\n\nWith the following query.\nselect distinct\n objectName\nfrom\n ObjectTab ot\n join object o\n on o.objectID = ot.objectID\n join tabs t\n on t.tagID = ot.tagID\nwhere\n tagName in ('red','fruit')\n\n"
] | [
4,
4,
0,
0,
0
] | [
"I'd suggest making your table have 1 tag per record, like this:\n apple -> fruit\n apple -> red\n apple -> food\n banana -> fruit\n banana -> yellow\n banana -> food\n\nThen you could just\n SELECT object WHERE tag = 'fruit' OR tag = 'food';\n\nIf you really want to do it your way though, you could do it like this:\n SELECT object WHERE tag LIKE 'red' OR tag LIKE '% red' OR tag LIKE 'red %' OR tag LIKE '% red %';\n\n"
] | [
-2
] | [
"many_to_many",
"sql",
"tagging"
] | stackoverflow_0000024715_many_to_many_sql_tagging.txt |
Q:
ASP.NET Tutorials
can you recommend some good ASP.NET tutorials or a good book?
Should I jump right to ASP.NET MVC/html/javascript or learn web forms first?
Thanks
A:
A great book if you're just beginning is Matthew MacDonald's Beginning ASP.NET 3.5 in C# 2008: From Novice to Professional. Once you're done with that a great reference (also by MacDonald) is Pro ASP.NET 3.5 in C# 2008. One of my favorite sources of information online is 4GuysFromRolla.
A:
MVC or WebForms...it's your choice but if I can offer one piece of advice regarding webforms...I know it'll be tempting to start dropping controls and playing with code, but it will help you A LOT if you don't skip over learning about the request and page lifecycles...a couple weeks later you'll thank yourself for spending the extra time there.
A:
MVC www.asp.net/mvc great videos
Asp.net www.asp.net
A:
If you're going to use ASP.NET MVC, then go straight to it. But it's a fairly new technology, not even in beta yet, so have that in mind. However, the application model is totally different compared to ASP.NET, so it is not in fact a replacement. For tutorials, you can surely check out http://www.asp.net and http://www.asp.net/mvc - there's tons of information there.
A:
A site for web tutorials including ASP.net can be found here.
| ASP.NET Tutorials | can you recommend some good ASP.NET tutorials or a good book?
Should I jump right to ASP.NET MVC/html/javascript or learn web forms first?
Thanks
| [
"A great book if you're just beginning is Matthew MacDonald's Beginning ASP.NET 3.5 in C# 2008: From Novice to Professional. Once you're done with that a great reference (also by MacDonald) is Pro ASP.NET 3.5 in C# 2008. One of my favorite sources of information online is 4GuysFromRolla.\n",
"MVC or WebForms...it's your choice but if I can offer one piece of advice regarding webforms...I know it'll be tempting to start dropping controls and playing with code, but it will help you A LOT if you don't skip over learning about the request and page lifecycles...a couple weeks later you'll thank yourself for spending the extra time there.\n",
"MVC www.asp.net/mvc great videos\nAsp.net www.asp.net \n",
"If you're going to use ASP.NET MVC, then go straight to it. But it's a fairly new technology, not even in beta yet, so have that in mind. However, the application model is totally different compared to ASP.NET, so it is not in fact a replacement. For tutorials, you can surely check out http://www.asp.net and http://www.asp.net/mvc - there's tons of information there.\n",
"A site for web tutorials including ASP.net can be found here.\n"
] | [
2,
1,
0,
0,
0
] | [] | [] | [
"asp.net",
"asp.net_mvc"
] | stackoverflow_0000022598_asp.net_asp.net_mvc.txt |
Q:
Where do I find information about Blog APIs and how to use them?
I'm thinking of creating a small offline blog editor for personal use and I don't know how the APIs work. Where can I find this information?
I'm particularly looking for the most common providers: Blogger, Wordpress, MovableType, Live Spaces (not sure if this has an API) etc.
A:
See the following links:
Blogger
Wordpress
Live Spaces
A:
The Blogger API link you provided says:
This documentation is provided for
historical interest only. The Blogger
1.0 API is no longer supported and must not be used for new client
development. Please use our GData API
instead.
So the correct one probably is: http://code.google.com/apis/blogger/
Also, if more APIs are answered in this question, would you be kind enough to edit your answer to include them. Since I'm gonna vote it as the correct one.
Thank you.
A:
MovableType API : http://www.sixapart.com/developers/xmlrpc/movable_type_api/
MetaWeblog API : http://www.xmlrpc.com/metaWeblogApi
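To get a feel for how these XML-RPC based APIs work, here is a rough sketch of a metaWeblog.newPost call built by hand with HttpWebRequest (the endpoint URL, blog id and credentials are placeholders, and error handling is omitted); in practice a library such as XML-RPC.NET would normally do the plumbing for you:

using System;
using System.IO;
using System.Net;
using System.Text;

class MetaWeblogSketch
{
    static void Main()
    {
        // Placeholder endpoint -- every blog engine exposes its own XML-RPC URL.
        string endpoint = "http://example.com/xmlrpc.php";
        string xml =
            "<?xml version=\"1.0\"?>" +
            "<methodCall>" +
            "  <methodName>metaWeblog.newPost</methodName>" +
            "  <params>" +
            "    <param><value><string>1</string></value></param>" +            // blog id
            "    <param><value><string>username</string></value></param>" +
            "    <param><value><string>password</string></value></param>" +
            "    <param><value><struct>" +
            "      <member><name>title</name><value><string>Hello</string></value></member>" +
            "      <member><name>description</name><value><string>Posted offline.</string></value></member>" +
            "    </struct></value></param>" +
            "    <param><value><boolean>1</boolean></value></param>" +          // publish
            "  </params>" +
            "</methodCall>";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(endpoint);
        request.Method = "POST";
        request.ContentType = "text/xml";
        byte[] body = Encoding.UTF8.GetBytes(xml);
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);

        // The response is an XML-RPC <methodResponse> containing the id of the new post.
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd());
    }
}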
| Where do I find information about Blog APIs and how to use them? | I'm thinking of creating a small offline blog editor for personal use and I don't know how do the APIs work. Where can I find this information?
I'm particularly looking for the most common providers: Blogger, Wordpress, MovableType, Live Spaces (not sure if this has an API) etc.
| [
"See the following links:\nBlogger\nWordpress\nLive Spaces\n",
"The Blogger API link you provided says:\n\nThis documentation is provided for\n historical interest only. The Blogger\n 1.0 API is no longer supported and must not be used for new client\n development. Please use our GData API \n instead.\n\nSo the correct one probably is: http://code.google.com/apis/blogger/\nAlso, if more APIs are answered in this question, would you be kind enough to edit your answer to include them. Since I'm gonna vote it as the correct one.\nThank you.\n",
"MovableType API : http://www.sixapart.com/developers/xmlrpc/movable_type_api/\nMetaWeblog API : http://www.xmlrpc.com/metaWeblogApi\n"
] | [
3,
1,
1
] | [] | [] | [
"api",
"blogs"
] | stackoverflow_0000024708_api_blogs.txt |
Q:
Hooking my program with windows explorer's rename event
Is there any way, in any language, to hook my program when a user renames a file?
For example:
A user renames a file and presses enter (or clicks away) to confirm the rename action. BEFORE the file is actually renamed, my program "listens" to this event and pops up a message saying "Are you sure you want to rename C:\test\file.txt to C:\test\test.txt?".
I'm thinking/hoping this is possible with C++, C# or .NET.. But I don't have any clue where to look for.
A:
You can probably solve this by using the FileSystemWatcher class in .NET framework.
From the class remarks:
You can watch for renaming, deletion,
or creation of files or directories.
For example, to watch for renaming of
text files, set the Filter property to
"*.txt" and call the WaitForChanged
method with a Renamed specified for
its parameter.
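A minimal sketch of that approach (note that, as the next answer points out, the Renamed event fires after the rename has already happened, so you can only react to it -- log it, rename the file back, prompt the user -- not veto it beforehand):

using System;
using System.IO;

class RenameWatcher
{
    static void Main()
    {
        using (FileSystemWatcher watcher = new FileSystemWatcher(@"C:\Test"))
        {
            watcher.Filter = "*.*";
            watcher.IncludeSubdirectories = false;

            // Renamed is a notification, not a hook -- the file is already renamed by the time it fires.
            watcher.Renamed += delegate(object sender, RenamedEventArgs e)
            {
                Console.WriteLine("Renamed {0} -> {1}", e.OldFullPath, e.FullPath);
            };

            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Watching. Press Enter to quit.");
            Console.ReadLine();
        }
    }
}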
A:
My guess is that this is not possible. I did find this, which is for monitoring operations (including rename) on a folder, but there does not appear to be a similar method for individual files.
@Richard, FileSystemWatcher is good if you only need to monitor changes, but he needs to interrupt them, which it cannot do.
A:
IFileOperationProgressSink.PreRenameItem is the closest supported thing I know of. Unfortunately, it's not a hook into Explorer - so you can only use it for your own IFileOperation actions. Depending on your needs, you can write a shell extension to do your own ConfirmRename (or something), and branch from there.
Otherwise, you're looking at hooking SHFileOperation, I think. This would have to be done in unmanaged code, as you'll be loaded into Explorer.exe. For Vista, this has been changed to IFileOperation - which probably means you'll have to hook the creation of it and pass out your mock.
Personally, I think since you're talking a rename, wilhelmtell's idea of confirming after the change, and undoing it if necessary is the best idea.
| Hooking my program with windows explorer's rename event | Is there any way, in any language, to hook my program when a user renames a file?
For example:
A user renames a file and presses enter (or clicks away) to confirm the rename action. BEFORE the file is actually renamed, my program "listens" to this event and pops up a message saying "Are you sure you want to rename C:\test\file.txt to C:\test\test.txt?".
I'm thinking/hoping this is possible with C++, C# or .NET.. But I don't have any clue where to look for.
| [
"You can probably solve this by using the FileSystemWatcher class in .NET framework.\nFrom the class remarks:\n\nYou can watch for renaming, deletion,\n or creation of files or directories.\n For example, to watch for renaming of\n text files, set the Filter property to\n \"*.txt\" and call the WaitForChanged\n method with a Renamed specified for\n its parameter.\n\n",
"My guess is that this is not possible, I did find this which is for monitoring operations (including rename) on a folder, but there does not appear to be a similar method for files.\n@Richard, FileSystemWatcher is good if you only need to monitor changes, but he needs to interrupt them which it cannot do.\n",
"IFileOperationProgressSink.PreRenameItem is the closest supported thing I know of. Unfortunately, it's not a hook into Explorer - so you can only use it for your own IFileOperation actions. Depending on your needs, you can write a shell extension to do your own ConfirmRename (or something), and branch from there.\nOtherwise, you're looking at hooking SHFileOperation, I think. This would have to be done in unmanaged code, as you'll be loaded into Explorer.exe. For Vista, this has been changed to IFileOperation - which probably means you'll have to hook the creation of it and pass out your mock.\nPersonally, I think since you're talking a rename, wilhelmtell's idea of confirming after the change, and undoing it if necessary is the best idea.\n"
] | [
5,
0,
0
] | [] | [] | [
".net",
"c#",
"file",
"io"
] | stackoverflow_0000024644_.net_c#_file_io.txt |
Q:
About File permissions in C#
While creating a file synchronization program in C# I tried to write a copy method in the LocalFileItem class that uses the System.IO.File.Copy(destination.Path, Path, true) method, where Path is a string.
After executing this code with destination.Path = "C:\\Test2" and this.Path = "C:\\Test\\F1.txt" I get an exception saying that I do not have the required file permissions for this operation on C:\Test, even though C:\Test is owned by me (the current user).
Does anybody know what is going on, or how to get around this?
Here is the original code complete.
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
namespace Diones.Util.IO
{
/// <summary>
/// An object representation of a file or directory.
/// </summary>
public abstract class FileItem : IComparable
{
protected String path;
public String Path
{
set { this.path = value; }
get { return this.path; }
}
protected bool isDirectory;
public bool IsDirectory
{
set { this.isDirectory = value; }
get { return this.isDirectory; }
}
/// <summary>
/// Delete this fileItem.
/// </summary>
public abstract void delete();
/// <summary>
/// Delete this directory and all of its elements.
/// </summary>
protected abstract void deleteRecursive();
/// <summary>
/// Copy this fileItem to the destination directory.
/// </summary>
public abstract void copy(FileItem fileD);
/// <summary>
/// Copy this directory and all of its elements
/// to the destination directory.
/// </summary>
protected abstract void copyRecursive(FileItem fileD);
/// <summary>
/// Creates a FileItem from a string path.
/// </summary>
/// <param name="path"></param>
public FileItem(String path)
{
Path = path;
if (path.EndsWith("\\") || path.EndsWith("/")) IsDirectory = true;
else IsDirectory = false;
}
/// <summary>
/// Creates a FileItem from a FileSource directory.
/// </summary>
/// <param name="directory"></param>
public FileItem(FileSource directory)
{
Path = directory.Path;
}
public override String ToString()
{
return Path;
}
public abstract int CompareTo(object b);
}
/// <summary>
/// A file or directory on the hard disk
/// </summary>
public class LocalFileItem : FileItem
{
public override void delete()
{
if (!IsDirectory) File.Delete(this.Path);
else deleteRecursive();
}
protected override void deleteRecursive()
{
Directory.Delete(Path, true);
}
public override void copy(FileItem destination)
{
if (!IsDirectory) File.Copy(destination.Path, Path, true);
else copyRecursive(destination);
}
protected override void copyRecursive(FileItem destination)
{
Microsoft.VisualBasic.FileIO.FileSystem.CopyDirectory(
Path, destination.Path, true);
}
/// <summary>
/// Creates a LocalFileItem from a string path
/// </summary>
/// <param name="path"></param>
public LocalFileItem(String path)
: base(path)
{
}
/// <summary>
/// Creates a LocalFileItem from a FileSource path
/// </summary>
/// <param name="path"></param>
public LocalFileItem(FileSource path)
: base(path)
{
}
public override int CompareTo(object obj)
{
if (obj is FileItem)
{
FileItem fi = (FileItem)obj;
if (File.GetCreationTime(this.Path).CompareTo
(File.GetCreationTime(fi.Path)) > 0) return 1;
else if (File.GetCreationTime(this.Path).CompareTo
(File.GetCreationTime(fi.Path)) < 0) return -1;
else
{
if (File.GetLastWriteTime(this.Path).CompareTo
(File.GetLastWriteTime(fi.Path)) < 0) return -1;
else if (File.GetLastWriteTime(this.Path).CompareTo
(File.GetLastWriteTime(fi.Path)) > 0) return 1;
else return 0;
}
}
else
throw new ArgumentException("obj isn't a FileItem");
}
}
}
A:
It seems you have misplaced the parameters in File.Copy(); it should be File.Copy(string source, string destination).
Also, is "C:\Test2" a directory? You can't copy a file to a directory.
Use something like that instead:
File.Copy(
sourceFile,
Path.Combine(destinationDir,Path.GetFileName(sourceFile))
);
A:
I'm kinda guessing here, but could it be because:
You are trying to perform file operations in C: root? (there may be protection on this by Vista if you are using it - not sure?)
You are trying to copy to a non-existant directory?
The file already exists and may be locked? (i.e you have not closed another application instance)?
Sorry I can't be of more help; I have rarely experienced problems with File.Copy.
A:
I was able to solve the problem; Michal pointed me in the right direction.
The problem was that I tried to use File.Copy to copy a file from one location to another, while the Copy method only copies the contents of one file to another file (creating the destination file if it does not already exist). The solution was to append the file name to the destination directory.
Thanks for all the help!
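For anyone hitting the same exception later, a sketch of what the corrected copy method in LocalFileItem might look like (source first, destination second, with the file name appended whenever the destination is a directory):

public override void copy(FileItem destination)
{
    if (!IsDirectory)
    {
        // File.Copy expects (sourceFileName, destFileName), in that order.
        // If the destination is a directory, append this file's name to it first.
        string destPath = destination.IsDirectory || Directory.Exists(destination.Path)
            ? System.IO.Path.Combine(destination.Path, System.IO.Path.GetFileName(this.Path))
            : destination.Path;

        File.Copy(this.Path, destPath, true);
    }
    else
    {
        copyRecursive(destination);
    }
}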
| About File permissions in C# | While creating a file synchronization program in C# I tried to make a method copy in LocalFileItem class that uses System.IO.File.Copy(destination.Path, Path, true) method where Path is a string.
After executing this code with destination. Path = "C:\\Test2" and this.Path = "C:\\Test\\F1.txt" I get an exception saying that I do not have the required file permissions to do this operation on C:\Test, but C:\Test is owned by myself (the current user).
Does anybody knows what is going on, or how to get around this?
Here is the original code complete.
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
namespace Diones.Util.IO
{
/// <summary>
/// An object representation of a file or directory.
/// </summary>
public abstract class FileItem : IComparable
{
protected String path;
public String Path
{
set { this.path = value; }
get { return this.path; }
}
protected bool isDirectory;
public bool IsDirectory
{
set { this.isDirectory = value; }
get { return this.isDirectory; }
}
/// <summary>
/// Delete this fileItem.
/// </summary>
public abstract void delete();
/// <summary>
/// Delete this directory and all of its elements.
/// </summary>
protected abstract void deleteRecursive();
/// <summary>
/// Copy this fileItem to the destination directory.
/// </summary>
public abstract void copy(FileItem fileD);
/// <summary>
/// Copy this directory and all of its elements
/// to the destination directory.
/// </summary>
protected abstract void copyRecursive(FileItem fileD);
/// <summary>
/// Creates a FileItem from a string path.
/// </summary>
/// <param name="path"></param>
public FileItem(String path)
{
Path = path;
if (path.EndsWith("\\") || path.EndsWith("/")) IsDirectory = true;
else IsDirectory = false;
}
/// <summary>
/// Creates a FileItem from a FileSource directory.
/// </summary>
/// <param name="directory"></param>
public FileItem(FileSource directory)
{
Path = directory.Path;
}
public override String ToString()
{
return Path;
}
public abstract int CompareTo(object b);
}
/// <summary>
/// A file or directory on the hard disk
/// </summary>
public class LocalFileItem : FileItem
{
public override void delete()
{
if (!IsDirectory) File.Delete(this.Path);
else deleteRecursive();
}
protected override void deleteRecursive()
{
Directory.Delete(Path, true);
}
public override void copy(FileItem destination)
{
if (!IsDirectory) File.Copy(destination.Path, Path, true);
else copyRecursive(destination);
}
protected override void copyRecursive(FileItem destination)
{
Microsoft.VisualBasic.FileIO.FileSystem.CopyDirectory(
Path, destination.Path, true);
}
/// <summary>
/// Create's a LocalFileItem from a string path
/// </summary>
/// <param name="path"></param>
public LocalFileItem(String path)
: base(path)
{
}
/// <summary>
/// Creates a LocalFileItem from a FileSource path
/// </summary>
/// <param name="path"></param>
public LocalFileItem(FileSource path)
: base(path)
{
}
public override int CompareTo(object obj)
{
if (obj is FileItem)
{
FileItem fi = (FileItem)obj;
if (File.GetCreationTime(this.Path).CompareTo
(File.GetCreationTime(fi.Path)) > 0) return 1;
else if (File.GetCreationTime(this.Path).CompareTo
(File.GetCreationTime(fi.Path)) < 0) return -1;
else
{
if (File.GetLastWriteTime(this.Path).CompareTo
(File.GetLastWriteTime(fi.Path)) < 0) return -1;
else if (File.GetLastWriteTime(this.Path).CompareTo
(File.GetLastWriteTime(fi.Path)) > 0) return 1;
else return 0;
}
}
else
throw new ArgumentException("obj isn't a FileItem");
}
}
}
| [
"It seems you have misplaced the parameters in File.Copy(), it should be File.Copy(string source, string destination).\nAlso is \"C:\\Test2\" a directory? You can't copy file to a directory. \nUse something like that instead:\n\nFile.Copy( \n sourceFile,\n Path.Combine(destinationDir,Path.GetFileName(sourceFile))\n );\n",
"I'm kinda guessing here, but could it be because:\n\nYou are trying to perform file operations in C: root? (there may be protection on this by Vista if you are using it - not sure?)\nYou are trying to copy to a non-existant directory?\nThe file already exists and may be locked? (i.e you have not closed another application instance)?\n\nSorry I cant be of more help, I have rarely experienced problems with File.Copy.\n",
"I was able to solve the problem, Michal pointed me to the right direction.\nThe problem was that I tried to use File.Copy to copy a file from one location to another, while the Copy method does only copy all the contents from one file to another(creating the destination file if it does not already exists). The solution was to append the file name to the destination directory.\nThanks for all the help!\n"
] | [
4,
0,
0
] | [] | [] | [
"c#",
"copy",
"file"
] | stackoverflow_0000024262_c#_copy_file.txt |
Q:
Automating WSDL.exe in a Custom Build
I have a web application written in C# that consumes several internal web services. We have a development tier, a testing tier, and a production tier. Also, we use the WSDL.exe command to generate a Proxies.cs file for a given tier's web services.
When we are ready to deploy our code up the stack from development to test or test to production, we need to run the WSDL.exe command to point to the appropriate version of the web services.
Is there a generally accepted way to automate this?
A:
There are a number of ways to do it. A NAnt build script will do it, but I think the most commonly accepted method now is to use MSBuild. See MSDN for details.
A:
Our company uses a combination of NAnt + Cruise Control + custom utility apps to build our products. More specifically, the <exec> task in NAnt will allow you to fire off command-line applications such as WSDL.exe.
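A small custom utility along those lines can be little more than a wrapper around Process.Start. A sketch (the tier URLs, output namespace and wsdl.exe path are placeholders; double-check the switches against wsdl.exe /? on your machine):

using System;
using System.Diagnostics;

class RegenerateProxies
{
    static void Main(string[] args)
    {
        // Pick the WSDL endpoint for the requested tier (placeholder URLs).
        string tier = args.Length > 0 ? args[0] : "dev";
        string url = tier == "prod" ? "http://prod-server/Service.asmx?WSDL"
                   : tier == "test" ? "http://test-server/Service.asmx?WSDL"
                   :                  "http://dev-server/Service.asmx?WSDL";

        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = @"C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\wsdl.exe"; // adjust to your SDK
        psi.Arguments = "/out:Proxies.cs /namespace:MyApp.Proxies " + url;
        psi.UseShellExecute = false;

        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();
            Environment.Exit(p.ExitCode);
        }
    }
}

The same command line can just as easily be wired into an MSBuild or NAnt <exec> task instead of a standalone utility, if that fits the build better.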
| Automating WSDL.exe in a Custom Build | I have a web application written in C# that consumes several internal web services. We have a development tier, a testing tier, and a production tier. Also, we use the WSDL.exe command to generate a Proxies.cs file for a given tier's web services.
When we are ready to deploy our code up the stack from development to test or test to production, we need to run the WSDL.exe command to point to the appropriate version of the web services.
Is there a generally accepted way to automate this?
| [
"There are a number of way to do it. A NAnt build script will do it, but I think the most commonly accepted method now is to use MSBuild. See MSDN for details.\n",
"Our company uses a combination of NANT + Cruise Control + Custom Utility apps to build our products. More specifically, the task in NANT will allow you to fire off those command-line applications such as WSDL.exe\n"
] | [
2,
1
] | [] | [] | [
"asp.net",
"build_process",
"wsdl"
] | stackoverflow_0000017948_asp.net_build_process_wsdl.txt |