Dataset schema:
content: stringlengths 86 to 88.9k
title: stringlengths 0 to 150
question: stringlengths 1 to 35.8k
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: stringlengths 30 to 130
Q: C# Corrupt Memory Error I can't post the code (proprietary issues) but does anyone know what types of things would cause the following error in C#? It is being thrown by a VOIP client that I wrote (using the CounterPath API) when the call is ended by the other client. The error is: System.AccessViolationException was unhandled Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Source="System.Windows.Forms" StackTrace: at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at CollabAnalysisSF.Edge.GUI.Forms.Program.Main() in d:\data\beyerss\Desktop\client\GUI\ARGui\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: UPDATE: Turns out one of the libraries we were using was sending off an event that we didn't know about, and the problem was in there somewhere. Fixed now. A: List of some possibilities: An object is being used after it has been disposed. This can happen a lot if you are disposing managed objects in a finalizer (you should not do that). An unmanaged implementation of one of the objects you are using is bugged and has corrupted the process memory heap. Happens a lot with DirectX, GDI and others. Marshaling on the managed-unmanaged boundary is flawed. Make sure you pin a managed pointer before you use it in an unmanaged part of the code. You are using an unsafe block and doing funny stuff with it. In your case it could be a problem with Windows Forms. But the problem is not that it is happening, but rather that it is not being reported correctly; you possibly still have done something wrong. Are you able to determine which control is causing the error using the HWND? Is it always the same? Is this control doing something funny just before the application crashes? Is the unmanaged part of the control a custom window or a standard control? A: This kind of problem can occur if you are calling unmanaged code, e.g. a DLL. It can occur when marshalling goes horribly wrong. Can you tell us if you are calling unmanaged code? If so, are you using default marshalling or more specific stuff? From the looks of the stack trace, are you using unsafe code, e.g. pointers and the like? This could be your problem. A: Here is a more detailed stack trace. It looks to me like it has something to do with System.Windows.Forms.dll: the TargetSite is listed as {IntPtr DispatchMessageW(MSG ByRef)} and under module it has System.windows.forms.dll
C# Corrupt Memory Error
I can't post the code (proprietary issues) but does anyone know what types of things would cause the following error in C#? It is being thrown by a VOIP client that I wrote (using the CounterPath API) when the call is ended by the other client. The error is: System.AccessViolationException was unhandled Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Source="System.Windows.Forms" StackTrace: at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at CollabAnalysisSF.Edge.GUI.Forms.Program.Main() in d:\data\beyerss\Desktop\client\GUI\ARGui\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: UPDATE: Turns out one of the libraries we were using was sending off an event that we didn't know about, and the problem was in there somewhere. Fixed now.
[ "List of some possibilities:\n\nAn object is being used after it has been disposed. This can happen a lot if you are disposing managed object in a finalizer (you should not do that).\nAn unmannaged implementation of one of the object you are using is bugged and it corrupted the process memory heap. Happens a lot with DirectX, GDI and others.\nMashaling on managed-unmanaged boundary is flawed. Make sure you pin a managed pointer before you use it on an unmanaged part of code.\nYou are using unsafe block and doing funny stuff with it.\n\n\nIn you case it could be a problem with Windows Forms. But the problem is not that it is happening, but rather that it is not being reported correctly; you possibly still have done something wrong.\nAre you able to determine what control is causing the error using the HWND? Is it always the same? Is this control doing something funny just before the application crashes? Is the unmannaged part of the control a custom window or a standard control?\n", "This kind of prolem can occur if you are calling unmanaged code e.g. a dll. It can occur when Marshalling goes horribly wrong.\nCan you tell us if you are calling unmanaged code? If so are you using default Marshalling or more specific stuff? From the looks of the stack trace are you using unsafe code e.g. Pointers and the like? This could be your problem.\n", "Here is a more detailed stacktrace. It looks to me like it has something to do with the System.Windows.Form.dll\nthe TargetSite is listed as {IntPtr DispatchMessageW(MSG ByRef)}\nand under module it has System.windows.forms.dll\n" ]
[ 3, 1, 0 ]
[]
[]
[ "c#", "voip" ]
stackoverflow_0000017947_c#_voip.txt
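The pinning advice in the answers above is easy to get wrong, so a short illustration may help. This is a minimal C# sketch, not taken from the question's code: the native function and DLL name are hypothetical placeholders, and only the GCHandle pattern itself is the point. It keeps a managed buffer fixed in memory for the duration of an unmanaged call, which addresses the marshaling failure mode described in the first answer.

    using System;
    using System.Runtime.InteropServices;

    class PinningExample
    {
        // Hypothetical native entry point; the real signature depends on your library.
        [DllImport("somenative.dll")]
        static extern void native_process(IntPtr buffer, int length);

        static void ProcessPinned(byte[] buffer)
        {
            // Pin the array so the GC cannot move it while native code holds the pointer.
            GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            try
            {
                native_process(handle.AddrOfPinnedObject(), buffer.Length);
            }
            finally
            {
                handle.Free(); // always release the pin, even on failure
            }
        }
    }

If the unmanaged side keeps the pointer after the call returns, pinning for the call's duration is not enough; the buffer must stay pinned, or be copied to unmanaged memory, for as long as the callee can touch it.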
Q: What is the best method for checking if a file exists from a SQL Server 2005 stored procedure? We used the "undocumented" xp_fileexist stored procedure for years in SQL Server 2000 and had no trouble with it. In 2005, it seems that they modified the behavior slightly to always return a 0 if the executing user account is not a sysadmin. It also seems to return a zero if the SQL Server service is running under the LocalSystem account and you are trying to check a file on the network. I'd like to get away from xp_fileexist. Does anyone have a better way to check for the existence of a file at a network location from inside of a stored procedure? A: You will have to mark the CLR as EXTERNAL_ACCESS in order to get access to the System.IO namespace, however as things go that is not a bad way to go about it. SAFE is the default permission set, but it’s highly restrictive. With the SAFE setting, you can access only data from a local database to perform computational logic on that data. EXTERNAL_ACCESS is the next step in the permissions hierarchy. This setting lets you access external resources such as the file system, Windows Event Viewer, and Web services. This type of resource access isn’t possible in SQL Server 2000 and earlier. This permission set also restricts operations such as pointer access that affect the robustness of your assembly. The UNSAFE permission set assumes full trust of the assembly and thus imposes no "Code Access Security" limitations. This setting is comparable to the way extended stored procedures function—you assume all the code is safe. However, this setting does restrict the creation of unsafe assemblies to users who have sysadmin permissions. Microsoft recommends that you avoid creating unsafe assemblies as much as possible. A: Maybe a CLR stored procedure is what you are looking for. These are generally used when you need to interact with the system in some way. A: I still believe that a CLR procedure might be the best bet. So, I'm accepting that answer. However, either I'm not that bright or it's extremely difficult to implement. Our SQL Server service is running under a local account because, according to Microsoft, that's the only way to get an iSeries linked server working from a 64-bit SQL Server 2005 instance. When we change the SQL Server service to run with a domain account, the xp_fileexist command works fine for files located on the network.
I created this CLR stored procedure and built it with the permission level set to External and signed it: using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; using System.Security.Principal; public partial class StoredProcedures { [Microsoft.SqlServer.Server.SqlProcedure] public static void FileExists(SqlString fileName, out SqlInt32 returnValue) { WindowsImpersonationContext originalContext = null; try { WindowsIdentity callerIdentity = SqlContext.WindowsIdentity; originalContext = callerIdentity.Impersonate(); if (System.IO.File.Exists(Convert.ToString(fileName))) { returnValue = 1; } else { returnValue = 0; } } catch (Exception) { returnValue = -1; } finally { if (originalContext != null) { originalContext.Undo(); } } } } Then I ran these TSQL commands: USE master GO CREATE ASYMMETRIC KEY FileUtilitiesKey FROM EXECUTABLE FILE = 'J:\FileUtilities.dll' CREATE LOGIN CLRLogin FROM ASYMMETRIC KEY FileUtilitiesKey GRANT EXTERNAL ACCESS ASSEMBLY TO CLRLogin ALTER DATABASE database SET TRUSTWORTHY ON; Then I deployed CLR stored proc to my target database from Visual Studio and used this TSQL to execute from SSMS logged in with windows authentication: DECLARE @i INT --EXEC FileExists '\\\\server\\share\\folder\\file.dat', @i OUT EXEC FileExists 'j:\\file.dat', @i OUT SELECT @i Whether I try a local file or a network file, I always get a 0. I may try again later, but for now, I'm going to try to go down a different road. If anyone has some light to shed, it would be much appreciated.
What is the best method for checking if a file exists from a SQL Server 2005 stored procedure?
We used the "undocumented" xp_fileexist stored procedure for years in SQL Server 2000 and had no trouble with it. In 2005, it seems that they modified the behavior slightly to always return a 0 if the executing user account is not a sysadmin. It also seems to return a zero if the SQL Server service is running under the LocalSystem account and you are trying to check a file on the network. I'd like to get away from xp_fileexist. Does anyone have a better way to check for the existence of a file at a network location from inside of a stored procedure?
[ "You will have to mark the CLR as EXTERNAL_ACCESS in order to get access to the System.IO namespace, however as things go that is not a bad way to go about it. \n\nSAFE is the default permission set, but it’s highly restrictive. With the SAFE setting, you can access only data from a local database to perform computational logic on that data.\n EXTERNAL_ACCESS is the next step in the permissions hierarchy. This setting lets you access external resources such as the file system, Windows Event Viewer, and Web services. This type of resource access isn’t possible in SQL Server 2000 and earlier. This permission set also restricts operations such as pointer access that affect the robustness of your assembly.\n The UNSAFE permission set assumes full trust of the assembly and thus imposes no \"Code Access Security\" limitations. This setting is comparable to the way extended stored procedures function—you assume all the code is safe. However, this setting does restrict the creation of unsafe assemblies to users who have sysadmin permissions. Microsoft recommends that you avoid creating unsafe assemblies as much as possible.\n\n", "Maybe a CLR stored procedure is what you are looking for. These are generally used when you need to interact with the system in some way.\n", "I still believe that a CLR procedure might be the best bet. So, I'm accepting that answer. However, either I'm not that bright or it's extremely difficult to implement. Our SQL Server service is running under a local account because, according to Mircosoft, that's the only way to get an iSeries linked server working from a 64-bit SQL Server 2005 instance. When we change the SQL Server service to run with a domain account, the xp_fileexist command works fine for files located on the network.\nI created this CLR stored procedure and built it with the permission level set to External and signed it:\nusing System;\nusing System.Data;\nusing System.Data.SqlClient;\nusing System.Data.SqlTypes;\nusing Microsoft.SqlServer.Server;\nusing System.Security.Principal;\n\npublic partial class StoredProcedures\n{\n [Microsoft.SqlServer.Server.SqlProcedure]\n public static void FileExists(SqlString fileName, out SqlInt32 returnValue)\n {\n WindowsImpersonationContext originalContext = null;\n\n try\n {\n WindowsIdentity callerIdentity = SqlContext.WindowsIdentity;\n originalContext = callerIdentity.Impersonate();\n\n if (System.IO.File.Exists(Convert.ToString(fileName)))\n {\n returnValue = 1;\n }\n else\n {\n returnValue = 0;\n }\n }\n catch (Exception)\n {\n returnValue = -1;\n }\n finally\n {\n if (originalContext != null)\n {\n originalContext.Undo();\n }\n }\n }\n}\n\nThen I ran these TSQL commands:\nUSE master\nGO\nCREATE ASYMMETRIC KEY FileUtilitiesKey FROM EXECUTABLE FILE = 'J:\\FileUtilities.dll' \nCREATE LOGIN CLRLogin FROM ASYMMETRIC KEY FileUtilitiesKey \nGRANT EXTERNAL ACCESS ASSEMBLY TO CLRLogin \nALTER DATABASE database SET TRUSTWORTHY ON;\n\nThen I deployed CLR stored proc to my target database from Visual Studio and used this TSQL to execute from SSMS logged in with windows authentication:\nDECLARE @i INT\n--EXEC FileExists '\\\\\\\\server\\\\share\\\\folder\\\\file.dat', @i OUT\nEXEC FileExists 'j:\\\\file.dat', @i OUT\nSELECT @i\n\nWhether I try a local file or a network file, I always get a 0. I may try again later, but for now, I'm going to try to go down a different road. If anyone has some light to shed, it would be much appreciated.\n" ]
[ 5, 4, 3 ]
[]
[]
[ "sql_server", "sql_server_2005" ]
stackoverflow_0000016634_sql_server_sql_server_2005.txt
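Since the accepted discussion centers on a CLR routine, it may be worth noting that the existence check itself can be a small user-defined function rather than a stored procedure with an output parameter. The following is a hedged sketch, not the poster's code; it omits the impersonation handling that the question's version needed, so it checks the file as the SQL Server service account, and the assembly still has to be cataloged with EXTERNAL_ACCESS.

    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public partial class UserDefinedFunctions
    {
        // Returns true/false as a bit; runs under the service account unless you impersonate.
        [SqlFunction]
        public static SqlBoolean FileExists(SqlString path)
        {
            if (path.IsNull)
                return SqlBoolean.False;

            return System.IO.File.Exists(path.Value);
        }
    }

A function shape lets callers write SELECT dbo.FileExists('\\server\share\file.dat'), which is often easier to compose into queries than an output-parameter procedure.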
Q: How do I track down performance problems with page rendering? I've been tasked with improving the performance of an ASP.NET 2.0 application. The page I'm currently focused on has many problems, but one that I'm having trouble digging into is the render time of the page. Using Trace.axd the duration between Begin Render and End Render is 1.4 seconds. From MSDN I see that all ASP.NET Web server controls have a Render method that writes out the control's markup that is sent to the browser. If I had the source code for all the controls on the page, I would just instrument them to trace out their render time. Unfortunately, this particular page has lots of controls, most of them third-party. Is there a tool or technique to get better visibility into what is going on during the render? I would like to know if there is a particularly poorly performing control, or if there are simply too many controls on the page. A: <%@Page Trace="true" %> See http://www.asp101.com/articles/robert/tracing/default.asp. A: Download ANTS Profiler; this will give you a perfect overview of the lines causing the slowdown. Also, when it's about rendering, make sure you don't use too many string concatenations (like string += "value") but use StringBuilders to improve performance. A: It may not help if the problem is inside one of your controls - as you expect - but if the page is poorly designed and that's causing render to be slow, YSlow should help clean that up.
How do I track down performance problems with page rendering?
I've been tasked with improving the performance of an ASP.NET 2.0 application. The page I'm currently focused on has many problems, but one that I'm having trouble digging into is the render time of the page. Using Trace.axd the duration between Begin Render and End Render is 1.4 seconds. From MSDN I see that all ASP.NET Web server controls have a Render method that writes out the control's markup that is sent to the browser. If I had the source code for all the controls on the page, I would just instrument them to trace out their render time. Unfortunately, this particular page has lots of controls, most of them third-party. Is there a tool or technique to get better visibility into what is going on during the render? I would like to know if there is a particularly poorly performing control, or if there are simply too many controls on the page.
[ "<%@Page Trace=\"true\" %>\n\nSee http://www.asp101.com/articles/robert/tracing/default.asp.\n", "Download ANTS PROFILER, this will give you a perfect overview of the lines causing the slowdown.\nAlso when it's about rendering make sure you don't use to much string concats (like string += \"value\") but use StringBuilders to improve performance.\n", "It may not help if the problem is inside one of your controls - as you expect - but if the page is poorly designed and that's causing render to be slow, YSlow should help clean that up.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "asp.net", "performance" ]
stackoverflow_0000015513_asp.net_performance.txt
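One low-tech way to get the visibility the question asks for, assuming you can edit the page's code-behind, is to time the render pass yourself and push the numbers into the same Trace.axd output already being used. The page name below is a placeholder; Stopwatch and the page's Trace.Write are standard .NET 2.0 APIs.

    using System.Diagnostics;
    using System.Web.UI;

    public partial class SlowPage : Page
    {
        protected override void Render(HtmlTextWriter writer)
        {
            // Times the whole render; to narrow it down, wrap individual
            // controls' RenderControl(writer) calls the same way.
            Stopwatch sw = Stopwatch.StartNew();
            base.Render(writer);
            sw.Stop();
            Trace.Write("Timing", "Render took " + sw.ElapsedMilliseconds + " ms");
        }
    }

For third-party controls you cannot instrument, timing each top-level control's RenderControl call from a containing page or control gives per-control numbers without needing their source.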
Q: How would an sdbm hash function be implemented in C#? How can an sdbm hash function (such as this) be implemented in C#? A: You can take the C code almost without changes: uint sdbm( string str ) { uint hash = 0; foreach( char ch in str ) { hash = ch + (hash << 6) + (hash << 16) - hash; } return hash; } Or did you think of something more sophisticated? A: I don't have a C compiler set up so I can't test to see if it performs the same, but I think the following is correct: private static ulong SBDM(string str) { ulong hash = 0; foreach (char c in str) { hash = c + (hash << 6) + (hash << 16) - hash; } return hash; } If you just need to get a hash of the string and it doesn't matter too much what the implementation is you can always do the String.GetHashCode(); A: The result from the hash differs between the C++ and C# implementation. I figured out that the str parameter needs to be passed as a byte array. private uint sdbm(byte[] str) { uint hash = 0; foreach (char ch in str) hash = ch + (hash << 6) + (hash << 16) - hash; return hash; } Call the method by converting the value to be hashed with the BitConverter.GetBytes method. uint Hash = sdbm(BitConverter.GetBytes(myID));
How would an sdbm hash function be implemented in C#?
How can an sdbm hash function (such as this) be implemented in C#?
[ "You can take the C code almost without changes:\nuint sdbm( string str )\n{\n uint hash = 0;\n foreach( char ch in str )\n {\n hash = ch + (hash << 6) + (hash << 16) - hash;\n }\n return hash;\n}\n\nOr did you think of something more sophisticated?\n", "I don't have a C compiler set up so I can't test to see if it performs the same, but I think the following is correct:\nprivate static ulong SBDM(string str)\n{\n ulong hash = 0;\n\n foreach (char c in str)\n {\n hash = c + (hash << 6) + (hash << 16) - hash;\n }\n\n return hash;\n}\n\nIf you just need to get a hash of the string and it doesn't matter too much what the implementation is you can always do the String.GetHashCode();\n", "The result from the hash differs between the C++ and C# implementation. I figured out that str parameter needs to be passed as a byte array.\nprivate uint sdbm(byte[] str)\n{\n uint hash = 0;\n\n foreach (char ch in str)\n hash = ch + (hash << 6) + (hash << 16) - hash;\n\n return hash;\n}\n\nCall the method by converting the value to be hashed with the BitConverter.GetBytes method.\nuint Hash = sdbm(BitConverter.GetBytes(myID));\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "c#", "hash" ]
stackoverflow_0000015954_c#_hash.txt
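One detail worth flagging in the answers above: the second answer declares hash as ulong, so the arithmetic never wraps at 32 bits the way the original C code's unsigned int does, and its results can diverge from the C implementation on longer inputs. Below is a byte-oriented uint version in the spirit of the third answer, with an explicit encoding so the input bytes are well defined; the class name is invented for the example.

    using System;
    using System.Text;

    static class SdbmHash
    {
        // uint arithmetic wraps at 32 bits, matching the C version's unsigned int.
        public static uint Compute(byte[] data)
        {
            uint hash = 0;
            foreach (byte b in data)
                hash = b + (hash << 6) + (hash << 16) - hash;
            return hash;
        }
    }

    // Example usage:
    // uint h = SdbmHash.Compute(Encoding.UTF8.GetBytes("hello"));

Hashing the UTF-8 bytes rather than chars is an assumption here; to reproduce the exact output of a particular C program you must feed in the same byte sequence it hashed.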
Q: Error viewing csproj property pages in VisualStudio2005 When I go to view the property page for my CSharp test application I get the following error. "An error occurred trying to load the page. COM object that has been separated from its underlying RCW cannot be used." The only thing that seems to fix it is rebooting my PC! A: This is usually caused by a 'rogue' add-in. Try disabling them all, and then re-enabling them while checking for the error - so that you can narrow down the culprit. A: It seems Microsoft Style Cop was causing the issue. It was not registered as an add-in, but was integrated into VS2005 on some deeper level.
Error viewing csproj property pages in VisualStudio2005
When I go to view the property page for my CSharp test application I get the following error. "An error occurred trying to load the page. COM object that has been separated from its underlying RCW cannot be used." The only thing that seems to fix it is rebooting my PC!
[ "This is usually caused by a 'rogue' add-in.\nTry disabling them all, and then re-enabling them checking for the error - so that you can narrow down the culprit.\n", "It seems Microsoft Style Cop was causing the issue.\nIt was not registered as an Add-in, but was integrated into VS2005 on some deeper level.\n" ]
[ 1, 0 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000016808_visual_studio.txt
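A practical shortcut when bisecting add-ins like this: Visual Studio 2005 can be started with most add-ins and packages suppressed, which narrows the search faster than toggling entries one by one. From a Visual Studio command prompt:

    devenv /SafeMode

If the property pages load cleanly in safe mode, the fault lies in whatever the normal startup loads on top of that, which is where deeper integrations such as Style Cop hide.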
Q: In C#, do you need to call the base constructor? In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called? class BaseClass { public BaseClass() { // ... some code } } class MyClass : BaseClass { public MyClass() // Do I need to put ": base()" here or is it implied? { // ... some code } } A: You do not need to explicitly call the base constructor, it will be implicitly called. Extend your example a little and create a Console Application and you can verify this behaviour for yourself: using System; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { MyClass foo = new MyClass(); Console.ReadLine(); } } class BaseClass { public BaseClass() { Console.WriteLine("BaseClass constructor called."); } } class MyClass : BaseClass { public MyClass() { Console.WriteLine("MyClass constructor called."); } } } A: It is implied, provided it is parameterless. This is because you need to implement constructors that take values, see the code below for an example: public class SuperClassEmptyCtor { public SuperClassEmptyCtor() { // Default Ctor } } public class SubClassA : SuperClassEmptyCtor { // No Ctor's this is fine since we have // a default (empty ctor in the base) } public class SuperClassCtor { public SuperClassCtor(string value) { // Default Ctor } } public class SubClassB : SuperClassCtor { // This fails because we need to satisfy // the ctor for the base class. } public class SubClassC : SuperClassCtor { public SubClassC(string value) : base(value) { // make it easy and pipe the params // straight to the base! } } A: It's implied for base parameterless constructors, but it is needed for defaults in the current class: public class BaseClass { protected string X; public BaseClass() { this.X = "Foo"; } } public class MyClass : BaseClass { public MyClass() // no ref to base needed { // initialise stuff this.X = "bar"; } public MyClass(int param1, string param2) :this() // This is needed to hit the parameterless ..ctor { // this.X will be "bar" } public MyClass(string param1, int param2) // :base() // can be implied { // this.X will be "foo" } } A: It is implied. A: A derived class is built upon the base class. If you think about it, the base object has to be instantiated in memory before the derived class can be appended to it. So the base object will be created on the way to creating the derived object. So no, you do not call the constructor. A: AFAIK, you only need to call the base constructor if you need to pass down any values to it.
In C#, do you need to call the base constructor?
In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called? class BaseClass { public BaseClass() { // ... some code } } class MyClass : BaseClass { public MyClass() // Do I need to put ": base()" here or is it implied? { // ... some code } }
[ "You do not need to explicitly call the base constructor, it will be implicitly called.\nExtend your example a little and create a Console Application and you can verify this behaviour for yourself:\nusing System;\n\nnamespace ConsoleApplication1\n{\n class Program\n {\n static void Main(string[] args)\n {\n MyClass foo = new MyClass();\n\n Console.ReadLine();\n }\n }\n\n class BaseClass\n {\n public BaseClass()\n {\n Console.WriteLine(\"BaseClass constructor called.\");\n }\n }\n\n class MyClass : BaseClass\n {\n public MyClass()\n {\n Console.WriteLine(\"MyClass constructor called.\");\n }\n }\n}\n\n", "It is implied, provided it is parameterless. This is because you need to implement constructors that take values, see the code below for an example:\npublic class SuperClassEmptyCtor\n{\n public SuperClassEmptyCtor()\n {\n // Default Ctor\n }\n}\n\npublic class SubClassA : SuperClassEmptyCtor\n{\n // No Ctor's this is fine since we have\n // a default (empty ctor in the base)\n}\n\npublic class SuperClassCtor\n{\n public SuperClassCtor(string value)\n {\n // Default Ctor\n }\n}\n\npublic class SubClassB : SuperClassCtor\n{\n // This fails because we need to satisfy\n // the ctor for the base class.\n}\n\npublic class SubClassC : SuperClassCtor\n{\n public SubClassC(string value) : base(value)\n {\n // make it easy and pipe the params\n // straight to the base!\n }\n}\n\n", "It's implied for base parameterless constructors, but it is needed for defaults in the current class:\npublic class BaseClass {\n protected string X;\n\n public BaseClass() {\n this.X = \"Foo\";\n }\n}\n\npublic class MyClass : BaseClass\n{\n public MyClass() \n // no ref to base needed\n {\n // initialise stuff\n this.X = \"bar\";\n }\n\n public MyClass(int param1, string param2)\n :this() // This is needed to hit the parameterless ..ctor\n {\n // this.X will be \"bar\"\n }\n\n public MyClass(string param1, int param2)\n // :base() // can be implied\n {\n // this.X will be \"foo\"\n }\n}\n\n", "It is implied.\n", "A derived class is built upon the base class. If you think about it, the base object has to be instantiated in memory before the derived class can be appended to it. So the base object will be created on the way to creating the derived object. So no, you do not call the constructor.\n", "AFAIK, you only need to call the base constructor if you need to pass down any values to it.\n" ]
[ 63, 30, 9, 7, 5, 0 ]
[ "You don’t need call the base constructor explicitly it will be implicitly called, but sometimes you need pass parameters to the constructor in that case you can do something like:\nusing System;\nnamespace StackOverflow.Examples\n{\n class Program\n {\n static void Main(string[] args)\n {\n NewClass foo = new NewClass(\"parameter1\",\"parameter2\");\n Console.WriteLine(foo.GetUpperParameter());\n Console.ReadKey();\n }\n }\n\n interface IClass\n {\n string GetUpperParameter();\n }\n\n class BaseClass : IClass\n {\n private string parameter;\n public BaseClass (string someParameter)\n {\n this.parameter = someParameter;\n }\n\n public string GetUpperParameter()\n {\n return this.parameter.ToUpper();\n }\n }\n\n class NewClass : IClass\n {\n private BaseClass internalClass;\n private string newParameter;\n\n public NewClass (string someParameter, string newParameter)\n {\n this.internalClass = new BaseClass(someParameter);\n this.newParameter = newParameter;\n }\n\n public string GetUpperParameter()\n {\n return this.internalClass.GetUpperParameter() + this.newParameter.ToUpper();\n }\n }\n}\n\nNote: If someone knows a better solution please tells me.\n" ]
[ -3 ]
[ "c#", "constructor", "inheritance" ]
stackoverflow_0000018097_c#_constructor_inheritance.txt
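To make the implicit chaining in the record above concrete, here is a compact sketch showing both forms side by side; the class names are invented for the example. The base constructor body always runs before the derived constructor body, which is what the console experiment in the first answer demonstrates.

    using System;

    class Base
    {
        public Base() { Console.WriteLine("Base()"); }
        public Base(int x) { Console.WriteLine("Base(" + x + ")"); }
    }

    class Derived : Base
    {
        public Derived() { }                // implicitly calls Base()
        public Derived(int x) : base(x) { } // explicitly forwards to Base(int)
    }

    class Demo
    {
        static void Main()
        {
            new Derived();  // prints "Base()"
            new Derived(5); // prints "Base(5)"
        }
    }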
Q: Creating a custom JButton in Java Is there a way to create a JButton with your own button graphic and not just with an image inside the button? If not, is there another way to create a custom JButton in java? A: When I was first learning Java we had to make Yahtzee and I thought it would be cool to create custom Swing components and containers instead of just drawing everything on one JPanel. The benefit of extending Swing components, of course, is to have the ability to add support for keyboard shortcuts and other accessibility features that you can't do just by having a paint() method print a pretty picture. It may not be done the best way however, but it may be a good starting point for you. Edit 8/6 - If it wasn't apparent from the images, each Die is a button you can click. This will move it to the DiceContainer below. Looking at the source code you can see that each Die button is drawn dynamically, based on its value. Here are the basic steps: Create a class that extends JComponent Call parent constructor super() in your constructors Make sure you class implements MouseListener Put this in the constructor: enableInputMethods(true); addMouseListener(this); Override these methods: public Dimension getPreferredSize() public Dimension getMinimumSize() public Dimension getMaximumSize() Override this method: public void paintComponent(Graphics g) The amount of space you have to work with when drawing your button is defined by getPreferredSize(), assuming getMinimumSize() and getMaximumSize() return the same value. I haven't experimented too much with this but, depending on the layout you use for your GUI your button could look completely different. And finally, the source code. In case I missed anything. A: Yes, this is possible. One of the main pros for using Swing is the ease with which the abstract controls can be created and manipulates. Here is a quick and dirty way to extend the existing JButton class to draw a circle to the right of the text. package test; import java.awt.Color; import java.awt.Container; import java.awt.Dimension; import java.awt.FlowLayout; import java.awt.Graphics; import javax.swing.JButton; import javax.swing.JFrame; public class MyButton extends JButton { private static final long serialVersionUID = 1L; private Color circleColor = Color.BLACK; public MyButton(String label) { super(label); } @Override protected void paintComponent(Graphics g) { super.paintComponent(g); Dimension originalSize = super.getPreferredSize(); int gap = (int) (originalSize.height * 0.2); int x = originalSize.width + gap; int y = gap; int diameter = originalSize.height - (gap * 2); g.setColor(circleColor); g.fillOval(x, y, diameter, diameter); } @Override public Dimension getPreferredSize() { Dimension size = super.getPreferredSize(); size.width += size.height; return size; } /*Test the button*/ public static void main(String[] args) { MyButton button = new MyButton("Hello, World!"); JFrame frame = new JFrame(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setSize(400, 400); Container contentPane = frame.getContentPane(); contentPane.setLayout(new FlowLayout()); contentPane.add(button); frame.setVisible(true); } } Note that by overriding paintComponent that the contents of the button can be changed, but that the border is painted by the paintBorder method. The getPreferredSize method also needs to be managed in order to dynamically support changes to the content. Care needs to be taken when measuring font metrics and image dimensions. 
For creating a control that you can rely on, the above code is not the correct approach. Dimensions and colours are dynamic in Swing and are dependent on the look and feel being used. Even the default Metal look has changed across JRE versions. It would be better to implement AbstractButton and conform to the guidelines set out by the Swing API. A good starting point is to look at the javax.swing.LookAndFeel and javax.swing.UIManager classes. http://docs.oracle.com/javase/8/docs/api/javax/swing/LookAndFeel.html http://docs.oracle.com/javase/8/docs/api/javax/swing/UIManager.html Understanding the anatomy of LookAndFeel is useful for writing controls: Creating a Custom Look and Feel A: You could always try the Synth look & feel. You provide an xml file that acts as a sort of stylesheet, along with any images you want to use. The code might look like this: try { SynthLookAndFeel synth = new SynthLookAndFeel(); Class aClass = MainFrame.class; InputStream stream = aClass.getResourceAsStream("\\default.xml"); if (stream == null) { System.err.println("Missing configuration file"); System.exit(-1); } synth.load(stream, aClass); UIManager.setLookAndFeel(synth); } catch (ParseException pe) { System.err.println("Bad configuration file"); pe.printStackTrace(); System.exit(-2); } catch (UnsupportedLookAndFeelException ulfe) { System.err.println("Old JRE in use. Get a new one"); System.exit(-3); } From there, go on and add your JButton like you normally would. The only change is that you use the setName(string) method to identify what the button should map to in the xml file. The xml file might look like this: <synth> <style id="button"> <font name="DIALOG" size="12" style="BOLD"/> <state value="MOUSE_OVER"> <imagePainter method="buttonBackground" path="dirt.png" sourceInsets="2 2 2 2"/> <insets top="2" botton="2" right="2" left="2"/> </state> <state value="ENABLED"> <imagePainter method="buttonBackground" path="dirt.png" sourceInsets="2 2 2 2"/> <insets top="2" botton="2" right="2" left="2"/> </state> </style> <bind style="button" type="name" key="dirt"/> </synth> The bind element there specifies what to map to (in this example, it will apply that styling to any buttons whose name property has been set to "dirt"). And a couple of useful links: http://javadesktop.org/articles/synth/ http://docs.oracle.com/javase/tutorial/uiswing/lookandfeel/synth.html A: I'm probably going a million miles in the wrong direct (but i'm only young :P ). but couldn't you add the graphic to a panel and then a mouselistener to the graphic object so that when the user on the graphic your action is preformed. A: I haven't done SWING development since my early CS classes but if it wasn't built in you could just inherit javax.swing.AbstractButton and create your own. Should be pretty simple to wire something together with their existing framework.
Creating a custom JButton in Java
Is there a way to create a JButton with your own button graphic and not just with an image inside the button? If not, is there another way to create a custom JButton in java?
[ "When I was first learning Java we had to make Yahtzee and I thought it would be cool to create custom Swing components and containers instead of just drawing everything on one JPanel. The benefit of extending Swing components, of course, is to have the ability to add support for keyboard shortcuts and other accessibility features that you can't do just by having a paint() method print a pretty picture. It may not be done the best way however, but it may be a good starting point for you.\nEdit 8/6 - If it wasn't apparent from the images, each Die is a button you can click. This will move it to the DiceContainer below. Looking at the source code you can see that each Die button is drawn dynamically, based on its value.\n\n\n\nHere are the basic steps:\n\nCreate a class that extends JComponent\nCall parent constructor super() in your constructors\nMake sure you class implements MouseListener\nPut this in the constructor:\nenableInputMethods(true); \naddMouseListener(this);\n\nOverride these methods:\npublic Dimension getPreferredSize() \npublic Dimension getMinimumSize() \npublic Dimension getMaximumSize()\n\nOverride this method:\npublic void paintComponent(Graphics g)\n\n\nThe amount of space you have to work with when drawing your button is defined by getPreferredSize(), assuming getMinimumSize() and getMaximumSize() return the same value. I haven't experimented too much with this but, depending on the layout you use for your GUI your button could look completely different.\nAnd finally, the source code. In case I missed anything. \n", "Yes, this is possible. One of the main pros for using Swing is the ease with which the abstract controls can be created and manipulates.\nHere is a quick and dirty way to extend the existing JButton class to draw a circle to the right of the text.\npackage test;\n\nimport java.awt.Color;\nimport java.awt.Container;\nimport java.awt.Dimension;\nimport java.awt.FlowLayout;\nimport java.awt.Graphics;\n\nimport javax.swing.JButton;\nimport javax.swing.JFrame;\n\npublic class MyButton extends JButton {\n\n private static final long serialVersionUID = 1L;\n\n private Color circleColor = Color.BLACK;\n\n public MyButton(String label) {\n super(label);\n }\n\n @Override\n protected void paintComponent(Graphics g) {\n super.paintComponent(g);\n\n Dimension originalSize = super.getPreferredSize();\n int gap = (int) (originalSize.height * 0.2);\n int x = originalSize.width + gap;\n int y = gap;\n int diameter = originalSize.height - (gap * 2);\n\n g.setColor(circleColor);\n g.fillOval(x, y, diameter, diameter);\n }\n\n @Override\n public Dimension getPreferredSize() {\n Dimension size = super.getPreferredSize();\n size.width += size.height;\n return size;\n }\n\n /*Test the button*/\n public static void main(String[] args) {\n MyButton button = new MyButton(\"Hello, World!\");\n\n JFrame frame = new JFrame();\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n frame.setSize(400, 400);\n\n Container contentPane = frame.getContentPane();\n contentPane.setLayout(new FlowLayout());\n contentPane.add(button);\n\n frame.setVisible(true);\n }\n\n}\n\nNote that by overriding paintComponent that the contents of the button can be changed, but that the border is painted by the paintBorder method. The getPreferredSize method also needs to be managed in order to dynamically support changes to the content. Care needs to be taken when measuring font metrics and image dimensions.\nFor creating a control that you can rely on, the above code is not the correct approach. 
Dimensions and colours are dynamic in Swing and are dependent on the look and feel being used. Even the default Metal look has changed across JRE versions. It would be better to implement AbstractButton and conform to the guidelines set out by the Swing API. A good starting point is to look at the javax.swing.LookAndFeel and javax.swing.UIManager classes.\nhttp://docs.oracle.com/javase/8/docs/api/javax/swing/LookAndFeel.html\nhttp://docs.oracle.com/javase/8/docs/api/javax/swing/UIManager.html\nUnderstanding the anatomy of LookAndFeel is useful for writing controls:\nCreating a Custom Look and Feel\n", "You could always try the Synth look & feel. You provide an xml file that acts as a sort of stylesheet, along with any images you want to use. The code might look like this:\ntry {\n SynthLookAndFeel synth = new SynthLookAndFeel();\n Class aClass = MainFrame.class;\n InputStream stream = aClass.getResourceAsStream(\"\\\\default.xml\");\n\n if (stream == null) {\n System.err.println(\"Missing configuration file\");\n System.exit(-1); \n }\n\n synth.load(stream, aClass);\n\n UIManager.setLookAndFeel(synth);\n} catch (ParseException pe) {\n System.err.println(\"Bad configuration file\");\n pe.printStackTrace();\n System.exit(-2);\n} catch (UnsupportedLookAndFeelException ulfe) {\n System.err.println(\"Old JRE in use. Get a new one\");\n System.exit(-3);\n}\n\nFrom there, go on and add your JButton like you normally would. The only change is that you use the setName(string) method to identify what the button should map to in the xml file.\nThe xml file might look like this:\n<synth>\n <style id=\"button\">\n <font name=\"DIALOG\" size=\"12\" style=\"BOLD\"/>\n <state value=\"MOUSE_OVER\">\n <imagePainter method=\"buttonBackground\" path=\"dirt.png\" sourceInsets=\"2 2 2 2\"/>\n <insets top=\"2\" botton=\"2\" right=\"2\" left=\"2\"/>\n </state>\n <state value=\"ENABLED\">\n <imagePainter method=\"buttonBackground\" path=\"dirt.png\" sourceInsets=\"2 2 2 2\"/>\n <insets top=\"2\" botton=\"2\" right=\"2\" left=\"2\"/>\n </state>\n </style>\n <bind style=\"button\" type=\"name\" key=\"dirt\"/>\n</synth>\n\nThe bind element there specifies what to map to (in this example, it will apply that styling to any buttons whose name property has been set to \"dirt\").\nAnd a couple of useful links:\nhttp://javadesktop.org/articles/synth/\nhttp://docs.oracle.com/javase/tutorial/uiswing/lookandfeel/synth.html\n", "I'm probably going a million miles in the wrong direct (but i'm only young :P ). but couldn't you add the graphic to a panel and then a mouselistener to the graphic object so that when the user on the graphic your action is preformed.\n", "I haven't done SWING development since my early CS classes but if it wasn't built in you could just inherit javax.swing.AbstractButton and create your own. Should be pretty simple to wire something together with their existing framework.\n" ]
[ 98, 35, 15, 9, 8 ]
[]
[]
[ "java", "jbutton", "swing" ]
stackoverflow_0000002158_java_jbutton_swing.txt
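The owner-drawn pattern in the answers above is not Swing-specific; purely as a cross-toolkit comparison (this is not part of the Java answers), the same idea in C# WinForms subclasses Button and overrides OnPaint rather than paintComponent:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class CircleButton : Button
    {
        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e); // let the base class draw text, border and focus cues

            // Add a filled circle on the right, analogous to the Swing MyButton example.
            int gap = (int)(Height * 0.2);
            int diameter = Height - gap * 2;
            e.Graphics.FillEllipse(Brushes.Black,
                Width - diameter - gap, gap, diameter, diameter);
        }
    }

In both toolkits the rule is the same: call the base paint routine first if you want the stock look underneath, and keep your drawing inside the control's reported size.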
Q: cURL adding whitespace to post content? I am attempting to POST against a vendor's server using PHP 5.2 with cURL. I'm reading in an XML document to post against their server and then reading in a response: $request = trim(file_get_contents('test.xml')); $curlHandle = curl_init($servletURL); curl_setopt($curlHandle, CURLOPT_POST, TRUE); curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, TRUE); curl_setopt($curlHandle, CURLOPT_HEADER, FALSE); # Have also tried leaving this out $response = curl_exec($curlHandle); That code, in and of itself, works OK, but the other server returns a response from its XML parser stating: Content not allowed in prolog I looked that error up and this is normally caused by whitespace before the XML, but I made sure that the XML file itself has no whitespace and the trim() should clear that up anyway. I did a TCPDump on the connection while I ran the code and this is what is sent out: POST {serverURL} HTTP/1.1 Host: {ip of server}:8080 Accept: */* Content-Length: 921 Expect: 100-continue Content-Type: multipart/form-data; boundry:---------------------------01e7cda3896f ---------------------------01e7cda3896f Content-Disposition: form-data; name="XML" [SNIP - the XML was displayed] ---------------------------01e7cda3896f-- Before and after the [SNIP] line there is visible whitespace when I replay the session in Ethereal. Is this what is causing the problem and, if so, how can I remove it, or am I looking too far and this may be an issue with the server I'm posting against? A: Not an answer, but I find the whole fopen/fread/fclose thing very dull to peruse when looking at code. You can replace: $file = 'test.xml'; $fileHandle = fopen($file, 'r'); $request = fread($fileHandle, filesize($file)); fclose($fileHandle); $request = trim($request); With: $request = trim(file_get_contents('test.xml')); But anyway - to your question; if those are the headers that are being sent, then it shouldn't be a problem with the remote server. Try changing the contents of your xml file and using var_dump() to check the exact output (including the string length, so you can look for missing things) Hope that helps A: It turns out it's an encoding issue. The app apparently needs the XML in www-form-urlencoded instead of form-data so I had to change: # This sets the encoding to multipart/form-data curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); to # This sets it to application/x-www-form-urlencoded curl_setopt($curlHandle, CURLOPT_POSTFIELDS, 'XML=' . urlencode($request)); A: I did a wc -m test.xml and came back with 743 characters in the XML file and the var_dump on $request comes back with 742 characters so something is getting stripped with trim() (I assume). I did a: print "=====" . $request . "====="; and the start and end of the XML butts right up against the ===== with no white space.
cURL adding whitespace to post content?
I am attempting to POST against a vendor's server using PHP 5.2 with cURL. I'm reading in an XML document to post against their server and then reading in a response: $request = trim(file_get_contents('test.xml')); $curlHandle = curl_init($servletURL); curl_setopt($curlHandle, CURLOPT_POST, TRUE); curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, TRUE); curl_setopt($curlHandle, CURLOPT_HEADER, FALSE); # Have also tried leaving this out $response = curl_exec($curlHandle); That code, in and of itself, works OK, but the other server returns a response from its XML parser stating: Content not allowed in prolog I looked that error up and this is normally caused by whitespace before the XML, but I made sure that the XML file itself has no whitespace and the trim() should clear that up anyway. I did a TCPDump on the connection while I ran the code and this is what is sent out: POST {serverURL} HTTP/1.1 Host: {ip of server}:8080 Accept: */* Content-Length: 921 Expect: 100-continue Content-Type: multipart/form-data; boundry:---------------------------01e7cda3896f ---------------------------01e7cda3896f Content-Disposition: form-data; name="XML" [SNIP - the XML was displayed] ---------------------------01e7cda3896f-- Before and after the [SNIP] line there is visible whitespace when I replay the session in Ethereal. Is this what is causing the problem and, if so, how can I remove it, or am I looking too far and this may be an issue with the server I'm posting against?
[ "Not an answer, but I find the whole fopen/fread/fclose thing very dull to peruse when looking at code.\nYou can replace:\n$file = 'test.xml';\n$fileHandle = fopen($file, 'r');\n$request = fread($fileHandle, filesize($file));\nfclose($fileHandle);\n$request = trim($request);\n\nWith:\n$request = trim(file_get_contents('test.xml'));\n\nBut anyway - to your question; if those are the headers that are being sent, then it shouldn't be a problem with the remote server. Try changing the contents of your xml file and using var_dump() to check the exact output (including the string length, so you can look for missing things)\nHope that helps\n", "It turns out it's an encoding issue. The app apparently needs the XML in www-form-urlencoded instead of form-data so I had to change:\n# This sets the encoding to multipart/form-data\ncurl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request));\n\nto\n# This sets it to application/x-www-form-urlencoded\ncurl_setopt($curlHandle, CURLOPT_POSTFIELDS, 'XML=' . urlencode($request));\n\n", "I did a wc -m test.xml and came back with 743 characters in the XML file and the var_dump on $request comes back with 742 characters so something is getting stripped with trim() (I assume).\nI did a:\nprint \"=====\" . $request . \"=====\";\n\nand the start and end of the XML butts right up against the ===== with no white space.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "curl", "php", "xml" ]
stackoverflow_0000018166_curl_php_xml.txt
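The root cause in the record above is the Content-Type, not cURL itself, and the same trap exists in any HTTP client. As a cross-language illustration (this is a C# sketch with a placeholder URL, not the poster's PHP), the two encodings are distinct content classes:

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PostExample
    {
        static async Task PostXmlAsync(string xml)
        {
            using (var client = new HttpClient())
            {
                // application/x-www-form-urlencoded: what the vendor's servlet expected.
                var form = new FormUrlEncodedContent(
                    new Dictionary<string, string> { { "XML", xml } });

                // multipart/form-data: what passing an array to CURLOPT_POSTFIELDS sends.
                // var form = new MultipartFormDataContent { { new StringContent(xml), "XML" } };

                await client.PostAsync("http://example.com/servlet", form);
            }
        }
    }

The practical lesson carries straight back to the PHP: a string value for CURLOPT_POSTFIELDS yields urlencoded data, an array yields multipart, and a server that parses one will often reject the other.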
Q: Best Way to Begin Learning Web Application Design I'm a long time hobbyist programmer interested in getting into web application development. I have a fair amount of personal experience with various non-web languages, but have never really branched over to web applications. I don't usually have any issues learning new languages or technologies, so I'm not worried about which is the "best" language or web stack to work with. Instead, I'd like to know of any recommended resources (books, articles, web sites, maybe even college courses) that discuss web application design: managing and optimizing server interaction, security concerns, scalability, and other topics that fall under design rather than implementation. What would you recommend for a Standalone Application Developer wanting to branch out into Web Development? A: There is a wide variety of web application languages you could get into. The ones I have most experience with (and therefore will be talking about here) are PHP, eRuby and Ruby on Rails. All of these have good tutorials available on the internet - I'll link to some of them below. Which to choose depends on exactly what you're looking to do. Using PHP and eRuby you have to do most things yourself - whereas Ruby on Rails will do lots of stuff for you (useful, but can also be dangerous if you don't know what you're doing). Ruby on Rails is good for doing database related things - for example the standard CRUD (Create, Read, Update, Delete) application. The standard kind of app Ruby on Rails (often abbreviated to RoR) tutorials teach you is a blog application (Create entries, Read entries, Update entries, Delete entries) or an Address Book Application. It is possible to do many of these sort of applications almost in one line of code - using RoR's 'scaffold' function. PHP and eRuby make you do more of the work yourself - but this can be better in some situations. PHP is more well known and used than eRuby, but I like the Ruby language so I tend to like using eRuby. These are both good for doing simple applications (like contact forms on websites) or more complex applications (phpBB - a piece of forum software is written in php). As for which one to choose - I'd have a play with them and see what you think. Try running through the first few bits of a tutorial with each and see how whether you like it or not. Here come the links to various tutorials: PHP PHP 101 PHP Intro from W3Schools eRuby Beginning eRuby - not great, but shows you how you can embed it in HTML Try Ruby in your Browser - helps you learn Ruby which you need to know for eRuby Ruby on Rails Rolling with Ruby on Rails - the latest 'revisited' version for the latest version of RoR Rolling with Ruby on Rails part 2 There are a few tutorials to get you started. Some of these take you through installing the necessary software (webserver and anything else needed - eg. php or ruby) and some don't. A good way to get Apache (webserver), MySQL (db) and PHP installed on windows is to use XAMPP. If you're on linux then apache, mysql and php will be in your package repositories and there may be distro specific guides to setting them up. A: A lot of languages have web counterparts. JSP for Java, Rails for Ruby, Django for Python, etc. That might be a lead. If you want to go for the platform with arguably the biggest user base (and with that, the biggest pile of tutorials and examples), go for PHP. I strongly advise on looking into various frameworks though. 
For every web-oriented language there's bound to be a lot of resources that take away the trouble of writing all the low-level plumbing code, so you can focus on the stuff that matters. Personally I almost exclusively use .NET, but I've heard about a bunch of nice PHP frameworks, like the Zend platform and CakePHP (for MVC development). If you intend to also use javascript in your applications to give that nice web 2.0 feel to your applications, please, use a library that hides the messy browser details. You'll go nuts if you try to do all the cross-browser scripts yourself. Some good ones are Prototype and jQuery. A: Eloquent JavaScript and AppJet offer great tutorials that allow you to follow along while you learn. Once you cover all the basics, Ajaxian should answer many of the questions you have about application design, etc. Not only do they post many excellent articles on these topics, but you should explore many of the sites they link to, as these sites usually also provide a wealth of info. When it comes to server interactions, know your options. Ajax isn't all there is. Research technologies like Comet and JSON-RPC, as well as looking at various server-side frameworks that provide easy access to JavaScript such as DWR, Jayrock, or any tool that exposes your functions to JavaScript using whatever language you choose to use on the server.
Best Way to Begin Learning Web Application Design
I'm a long time hobbyist programmer interested in getting into web application development. I have a fair amount of personal experience with various non-web languages, but have never really branched over to web applications. I don't usually have any issues learning new languages or technologies, so I'm not worried about which is the "best" language or web stack to work with. Instead, I'd like to know of any recommended resources (books, articles, web sites, maybe even college courses) that discuss web application design: managing and optimizing server interaction, security concerns, scalability, and other topics that fall under design rather than implementation. What would you recommend for a Standalone Application Developer wanting to branch out into Web Development?
[ "There is a wide variety of web application languages you could get into. The ones I have most experience with (and therefore will be talking about here) are PHP, eRuby and Ruby on Rails. All of these have good tutorials available on the internet - I'll link to some of them below.\nWhich to choose depends on exactly what you're looking to do. Using PHP and eRuby you have to do most things yourself - whereas Ruby on Rails will do lots of stuff for you (useful, but can also be dangerous if you don't know what you're doing). Ruby on Rails is good for doing database related things - for example the standard CRUD (Create, Read, Update, Delete) application. The standard kind of app Ruby on Rails (often abbreviated to RoR) tutorials teach you is a blog application (Create entries, Read entries, Update entries, Delete entries) or an Address Book Application. It is possible to do many of these sort of applications almost in one line of code - using RoR's 'scaffold' function.\nPHP and eRuby make you do more of the work yourself - but this can be better in some situations. PHP is more well known and used than eRuby, but I like the Ruby language so I tend to like using eRuby. These are both good for doing simple applications (like contact forms on websites) or more complex applications (phpBB - a piece of forum software is written in php).\nAs for which one to choose - I'd have a play with them and see what you think. Try running through the first few bits of a tutorial with each and see how whether you like it or not.\nHere come the links to various tutorials:\nPHP\n\nPHP 101\nPHP Intro from W3Schools\n\neRuby\n\nBeginning eRuby - not great, but shows you how you can embed it in HTML\nTry Ruby in your Browser - helps you learn Ruby which you need to know for eRuby\n\nRuby on Rails\n\nRolling with Ruby on Rails - the latest 'revisited' version for the latest version of RoR\nRolling with Ruby on Rails part 2\n\nThere are a few tutorials to get you started. Some of these take you through installing the necessary software (webserver and anything else needed - eg. php or ruby) and some don't. A good way to get Apache (webserver), MySQL (db) and PHP installed on windows is to use XAMPP. If you're on linux then apache, mysql and php will be in your package repositories and there may be distro specific guides to setting them up.\n", "A lot of languages have web counterparts. JSP for Java, Rails for Ruby, Django for Python, etc. That might be a lead.\nIf you want to go for the platform with arguably the biggest user base (and with that, the biggest pile of tutorials and examples), go for PHP.\nI strongly advise on looking into various frameworks though. For every web-oriented language there's bound to be a lot of resources that take away the trouble of writing all the low-level plumbing code, so you can focus on the stuff that matters. Personally I almost exclusively use .NET, but I've heard about a bunch of nice PHP frameworks, like the Zend platform and CakePHP (for MVC development).\nIf you intend to also use javascript in your applications to give that nice web 2.0 feel to your applications, please, use a library that hides the messy browser details. You'll go nuts if you try to do all the cross-browser scripts yourself. Some good ones are Prototype and jQuery.\n", "Eloquent JavaScript and AppJet offer great tutorials that allow you to follow along while you learn.\nOnce you cover all the basics, Ajaxian should answer many of the questions you have about application design, etc. 
Not only do they post many excellent articles on these topics, but you should explore many of the sites they link to, as these sites usually also provide a wealth of info.\nWhen it comes to server interactions, know your options. Ajax isn't all there is. Research technologies like Comet and JSON-RPC, and look at various server-side frameworks that provide easy access to JavaScript, such as DWR, Jayrock, or any tool that exposes your functions to JavaScript using whatever language you choose to use on the server.\n" ]
[ 10, 2, 1 ]
[]
[]
[ "language_agnostic", "resources", "web_applications" ]
stackoverflow_0000018284_language_agnostic_resources_web_applications.txt
Q: SQL Server Full Text Searching I'm currently working on an application where we have a SQL-Server database and I need to get a full text search working that allows us to search people's names. Currently the user can enter a name into a field that searches 3 different varchar cols. First, Last, Middle names So say I have 3 rows with the following info. 1 - Phillip - J - Fry 2 - Amy - NULL - Wong 3 - Leo - NULL - Wong If the user enters a name such as 'Fry' it will return row 1. However if they enter Phillip Fry, or Fr, or Phil they get nothing... and I don't understand why it's doing this. If they search for Wong they get rows 2 and 3 if they search for Amy Wong they again get nothing. Currently the query is using CONTAINSTABLE but I have replaced that with FREETEXTTABLE, CONTAINS, and FREETEXT without any noticeable differences in the results. The table methods are preferred because they return the same results but with ranking. Here is the query. .... @Name nvarchar(100), .... --""s added to prevent crash if searching on more than one word. DECLARE @SearchString varchar(100) SET @SearchString = '"'+@Name+'"' SELECT Per.Lastname, Per.Firstname, Per.MiddleName FROM Person as Per INNER JOIN CONTAINSTABLE(Person, (LastName, Firstname, MiddleName), @SearchString) AS KEYTBL ON Per.Person_ID = KEYTBL.[KEY] WHERE KEYTBL.RANK > 2 ORDER BY KEYTBL.RANK DESC; .... Any ideas why this full text search is not working correctly? A: If you're just searching people's names, it might be in your best interest to not even use the full text index. Full text index makes sense when you have large text fields, but if you're mostly dealing with one word per field, I'm not sure how much extra you would get out of full text indexes. Waiting for the full text index to reindex itself before you can search for new records can be one of the many problems. You could just make a query such as the following. Split your searchstring on spaces, and create a list of the search terms. Select FirstName,MiddleName,LastName From person WHERE Firstname like @searchterm1 + '%' or MiddleName like @searchterm1 + '%' or LastName like @searchterm1 + '%' or Firstname like @searchterm2 + '%' etc.... A: FreeTextTable should work. INNER JOIN FREETEXTTABLE(Person, (LastName, Firstname, MiddleName), @SearchString) @SearchString should contain the values like 'Phillip Fry' (one long string containing all of the lookup strings separated by spaces). If you would like to search for Fr or Phil, you should use an asterisk: Phil* and Fr* 'Phil' is looking for exactly the word 'Phil'. 'Phil*' is looking for every word which is starting with 'Phil' A: Thanks for the responses guys I finally was able to get it to work. With part of both Biri, and Kibbee's answers. I needed to add * to the string and break it up on spaces in order to get it to work. So in the end I got .... @Name nvarchar(100), .... --""s added to prevent crash if searching on more than one word. DECLARE @SearchString varchar(100) --Added this line SET @SearchString = REPLACE(@Name, ' ', '*" OR "*') SET @SearchString = '"*'+@SearchString+'*"' SELECT Per.Lastname, Per.Firstname, Per.MiddleName FROM Person as Per INNER JOIN CONTAINSTABLE(Person, (LastName, Firstname, MiddleName), @SearchString) AS KEYTBL ON Per.Person_ID = KEYTBL.[KEY] WHERE KEYTBL.RANK > 2 ORDER BY KEYTBL.RANK DESC; .... There are more fields being searched upon; I just simplified it for the question, sorry about that, I didn't think it would affect the answer. It actually searches a column that has a csv of nicknames and a notes column as well. Thanks for the help. A: Another approach could be to abstract the searching away from the individual fields. In other words, create a view on your data which turns all the split fields like firstname lastname into concatenated fields i.e. full_name Then search on the view. This would likely make the search query simpler. A: You might want to check out Lucene.net as an alternative to Full Text.
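As a rough illustration of how the accepted fix might be driven from C#, here is a minimal sketch; the query, table, and column names come from the thread, while the helper name, connection handling, and everything else are assumptions rather than part of the original answers.

    // Mirrors the accepted answer: "Phillip Fry" -> "\"*Phillip*\" OR \"*Fry*\""
    static string BuildContainsSearch(string name)
    {
        string cleaned = name.Trim().Replace("\"", ""); // strip embedded quotes defensively
        return "\"*" + cleaned.Replace(" ", "*\" OR \"*") + "*\"";
    }

    // Hypothetical usage (connectionString and userInput are assumed to exist):
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT Per.Lastname, Per.Firstname, Per.MiddleName " +
        "FROM Person AS Per " +
        "INNER JOIN CONTAINSTABLE(Person, (LastName, Firstname, MiddleName), @SearchString) AS KEYTBL " +
        "ON Per.Person_ID = KEYTBL.[KEY] " +
        "WHERE KEYTBL.RANK > 2 ORDER BY KEYTBL.RANK DESC", conn))
    {
        cmd.Parameters.AddWithValue("@SearchString", BuildContainsSearch(userInput));
        conn.Open();
        // execute the reader and read results here
    }

Note the simple Replace on spaces assumes single spaces between terms; runs of whitespace would need normalizing first.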
SQL Server Full Text Searching
I'm currently working on an application where we have a SQL-Server database and I need to get a full text search working that allows us to search people's names. Currently the user can enter a name into a field that searches 3 different varchar cols. First, Last, Middle names So say I have 3 rows with the following info. 1 - Phillip - J - Fry 2 - Amy - NULL - Wong 3 - Leo - NULL - Wong If the user enters a name such as 'Fry' it will return row 1. However if they enter Phillip Fry, or Fr, or Phil they get nothing... and I don't understand why it's doing this. If they search for Wong they get rows 2 and 3 if they search for Amy Wong they again get nothing. Currently the query is using CONTAINSTABLE but I have replaced that with FREETEXTTABLE, CONTAINS, and FREETEXT without any noticeable differences in the results. The table methods are preferred because they return the same results but with ranking. Here is the query. .... @Name nvarchar(100), .... --""s added to prevent crash if searching on more than one word. DECLARE @SearchString varchar(100) SET @SearchString = '"'+@Name+'"' SELECT Per.Lastname, Per.Firstname, Per.MiddleName FROM Person as Per INNER JOIN CONTAINSTABLE(Person, (LastName, Firstname, MiddleName), @SearchString) AS KEYTBL ON Per.Person_ID = KEYTBL.[KEY] WHERE KEYTBL.RANK > 2 ORDER BY KEYTBL.RANK DESC; .... Any ideas why this full text search is not working correctly?
[ "If you're just searching people's names, it might be in your best interest to not even use the full text index. Full text index makes sense when you have large text fields, but if you're mostly dealing with one word per field, I'm not sure how much extra you would get out of full text indexes. Waiting for the full text index to reindex itself before you can search for new records can be one of the many problems.\nYou could just make a query such as the following. Split your searchstring on spaces, and create a list of the search terms.\n\nSelect FirstName,MiddleName,LastName \nFrom person \nWHERE \nFirstname like @searchterm1 + '%'\nor MiddleName like @searchterm1 + '%'\nor LastName like @searchterm1 + '%'\nor Firstname like @searchterm2 + '%'\netc....\n\n", "FreeTextTable should work.\nINNER JOIN FREETEXTTABLE(Person, (LastName, Firstname, MiddleName), @SearchString) \n\n@SearchString should contain the values like 'Phillip Fry' (one long string containing all of the lookup strings separated by spaces).\nIf you would like to search for Fr or Phil, you should use asterisk: Phil* and Fr*\n'Phil' is looking for exactly the word 'Phil'. 'Phil*' is looking for every word which is starting with 'Phil'\n", "Thanks for the responses guys I finally was able to get it to work. With part of both Biri, and Kibbee's answers. I needed to add * to the string and break it up on spaces in order to work. So in the end I got\n....\n@Name nvarchar(100),\n....\n--\"\"s added to prevent crash if searching on more then one word.\nDECLARE @SearchString varchar(100)\n\n--Added this line\nSET @SearchString = REPLACE(@Name, ' ', '*\" OR \"*')\nSET @SearchString = '\"*'+@SearchString+'*\"'\n\nSELECT Per.Lastname, Per.Firstname, Per.MiddleName\nFROM Person as Per\nINNER JOIN CONTAINSTABLE(Person, (LastName, Firstname, MiddleName), @SearchString) \nAS KEYTBL\nON Per.Person_ID = KEYTBL.[KEY]\nWHERE KEY_TBL.RANK > 2\nORDER BY KEYTBL.RANK DESC; \n....\n\nThere are more fields being searched upon I just simplified it for the question, sorry about that, I didn't think it would effect the answer. It actually searches a column that has a csv of nicknames and a notes column as well.\nThanks for the help.\n", "Another approach could be to abstract the searching away from the individual fields.\nIn other words create a view on your data which turns all the split fields like firstname lastname into concatenated fields i.e. full_name\nThen search on the view. This would likely make the search query simpler. \n", "You might want to check out Lucene.net as an alternative to Full Text.\n" ]
[ 5, 4, 4, 2, 2 ]
[]
[]
[ "full_text_search", "search", "sql_server" ]
stackoverflow_0000017056_full_text_search_search_sql_server.txt
Q: How can Perl's system() print the command that it's running? In Perl, you can execute system commands using system() or `` (backticks). You can even capture the output of the command into a variable. However, this hides the program execution in the background so that the person executing your script can't see it. Normally this is useful but sometimes I want to see what is going on behind the scenes. How do you make it so the commands executed are printed to the terminal, and those programs' output printed to the terminal? This would be the .bat equivalent of "@echo on". A: I don't know of any default way to do this, but you can define a subroutine to do it for you: sub execute { my $cmd = shift; print "$cmd\n"; system($cmd); } my $cmd = $ARGV[0]; execute($cmd); And then see it in action: pbook:~/foo rudd$ perl foo.pl ls ls file1 file2 foo.pl A: As I understand, system() will print the result of the command, but not assign it. Eg. [daniel@tux /]$ perl -e '$ls = system("ls"); print "Result: $ls\n"' bin dev home lost+found misc net proc sbin srv System tools var boot etc lib media mnt opt root selinux sys tmp usr Result: 0 Backticks will capture the output of the command and not print it: [daniel@tux /]$ perl -e '$ls = `ls`; print "Result: $ls\n"' Result: bin boot dev etc home lib etc... Update: If you want to print the name of the command being system() 'd as well, I think Rudd's approach is good. Repeated here for consolidation: sub execute { my $cmd = shift; print "$cmd\n"; system($cmd); } my $cmd = $ARGV[0]; execute($cmd); A: Use open instead. Then you can capture the output of the command. open(LS,"|ls"); print LS; A: Here's an updated execute that will print the results and return them: sub execute { my $cmd = shift; print "$cmd\n"; my $ret = `$cmd`; print $ret; return $ret; } A: Hmm, interesting how different people are answering this in different ways. It looks to me like mk and Daniel Fone interpreted it as wanting to see/manipulate the stdout of the command (neither of their solutions capture stderr fwiw). I think Rudd got closer. One twist you could make on Rudd's response is to overwrite the built-in system() command with your own version so that you wouldn't have to rewrite existing code to use his execute() command. Using his execute() sub from Rudd's post, you could have something like this at the top of your code: if ($DEBUG) { *{"CORE::GLOBAL::system"} = \&{"main::execute"}; } I think that will work but I have to admit this is voodoo and it's been a while since I wrote this code. Here's the code I wrote years ago to intercept system calls on a local (calling namespace) or global level at module load time: # importing into either the calling or global namespace _must_ be # done from import(). Doing it elsewhere will not have desired results. delete($opts{handle_system}); if ($do_system) { if ($do_system eq 'local') { *{"$callpkg\::system"} = \&{"$_package\::system"}; } else { *{"CORE::GLOBAL::system"} = \&{"$_package\::system"}; } } A: Another technique to combine with the others mentioned in the answers is to use the tee command. For example: open(F, "ls | tee /dev/tty |"); while (<F>) { print length($_), "\n"; } close(F); This will both print out the files in the current directory (as a consequence of tee /dev/tty) and also print out the length of each filename read.
How can Perl's system() print the command that it's running?
In Perl, you can execute system commands using system() or `` (backticks). You can even capture the output of the command into a variable. However, this hides the program execution in the background so that the person executing your script can't see it. Normally this is useful but sometimes I want to see what is going on behind the scenes. How do you make it so the commands executed are printed to the terminal, and those programs' output printed to the terminal? This would be the .bat equivalent of "@echo on".
[ "I don't know of any default way to do this, but you can define a subroutine to do it for you:\nsub execute {\n my $cmd = shift;\n print \"$cmd\\n\";\n system($cmd);\n}\n\nmy $cmd = $ARGV[0];\nexecute($cmd);\n\nAnd then see it in action:\npbook:~/foo rudd$ perl foo.pl ls\nls\nfile1 file2 foo.pl\n\n", "As I understand, system() will print the result of the command, but not assign it. Eg.\n[daniel@tux /]$ perl -e '$ls = system(\"ls\"); print \"Result: $ls\\n\"'\nbin dev home lost+found misc net proc sbin srv System tools var\nboot etc lib media mnt opt root selinux sys tmp usr\nResult: 0\n\nBackticks will capture the output of the command and not print it:\n[daniel@tux /]$ perl -e '$ls = `ls`; print \"Result: $ls\\n\"'\nResult: bin\nboot\ndev\netc\nhome\nlib\n\netc...\nUpdate: If you want to print the name of the command being system() 'd as well, I think Rudd's approach is good. Repeated here for consolidation:\nsub execute {\n my $cmd = shift;\n print \"$cmd\\n\";\n system($cmd);\n}\n\nmy $cmd = $ARGV[0];\nexecute($cmd);\n\n", "Use open instead. Then you can capture the output of the command.\nopen(LS,\"|ls\");\nprint LS;\n\n", "Here's an updated execute that will print the results and return them:\nsub execute {\n my $cmd = shift;\n print \"$cmd\\n\";\n my $ret = `$cmd`;\n print $ret;\n return $ret;\n}\n\n", "Hmm, interesting how different people are answering this different ways. It looks to me like mk and Daniel Fone interpreted it as wanting to see/manipulate the stdout of the command (neither of their solutions capture stderr fwiw). I think Rudd got closer. One twist you could make on Rudd's response is to overwite the built in system() command with your own version so that you wouldn't have to rewrite existing code to use his execute() command.\nusing his execute() sub from Rudd's post, you could have something like this at the top of your code:\nif ($DEBUG) {\n *{\"CORE::GLOBAL::system\"} = \\&{\"main::execute\"};\n}\n\nI think that will work but I have to admit this is voodoo and it's been a while since I wrote this code. Here's the code I wrote years ago to intercept system calls on a local (calling namespace) or global level at module load time:\n # importing into either the calling or global namespace _must_ be\n # done from import(). Doing it elsewhere will not have desired results.\n delete($opts{handle_system});\n if ($do_system) {\n if ($do_system eq 'local') {\n *{\"$callpkg\\::system\"} = \\&{\"$_package\\::system\"};\n } else {\n *{\"CORE::GLOBAL::system\"} = \\&{\"$_package\\::system\"};\n }\n }\n\n", "Another technique to combine with the others mentioned in the answers is to use the tee command. For example:\nopen(F, \"ls | tee /dev/tty |\");\nwhile (<F>) {\n print length($_), \"\\n\";\n}\nclose(F);\n\nThis will both print out the files in the current directory (as a consequence of tee /dev/tty) and also print out the length of each filename read.\n" ]
[ 19, 10, 5, 5, 2, 2 ]
[]
[]
[ "perl", "system" ]
stackoverflow_0000017225_perl_system.txt
Q: How do I change the title bar icon in Adobe AIR? I cannot figure out how to change the title bar icon (the icon in the furthest top left corner of the application) in Adobe AIR. It is currently displaying the default 'Adobe AIR' red icon. I have been able to change it in the system tray, however. A: Does the following help? http://groups.google.com/group/chennai-flex-user-group/browse_thread/thread/cffb9ab56450c28e A: The first link shows how to change the Taskbar Icon, the second shows the application icon, which I believe is used on the desktop. I am going to recompile and install the application and see if it works. Edit: Yea, the one that changes the Desktop Icon also changes the Title Bar icon. It's in the app.xml file.
How do I change the title bar icon in Adobe AIR?
I cannot figure out how to change the title bar icon (the icon in the furthest top left corner of the application) in Adobe AIR. It is currently displaying the default 'Adobe AIR' red icon. I have been able to change it in the system tray, however.
[ "Does the following help?\nhttp://groups.google.com/group/chennai-flex-user-group/browse_thread/thread/cffb9ab56450c28e\n", "The first link shows how to change the Taskbar Icon, the second shows the application icon I believe used on the desktop. I am going to recompile and install the application and see if it works.\nEdit: Yea, the one that changes the Desktop Icon also changes the Title Bar icon. It's in the app.xml file.\n" ]
[ 2, 1 ]
[]
[]
[ "air", "apache_flex" ]
stackoverflow_0000018298_air_apache_flex.txt
Q: I would like some tips for debugging WCF Web Service exceptions I've created a WCF service and when I browse to the endpoint I get the following fault: <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"> <s:Body> <s:Fault> <faultcode xmlns:a="http://schemas.microsoft.com/ws/2005/05/addressing/none"> a:ActionNotSupported </faultcode> <faultstring xml:lang="en-GB"> The message with Action '' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None). </faultstring> </s:Fault> </s:Body> </s:Envelope> I've fixed the problem but didn't enjoy the experience! Does anyone have any tips or tools for debugging problems like this? A: I've found SvcTraceViewer.exe to be the most valuable tool when it comes to diagnosing WCF errors.
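A hedged sketch of one way to make faults like this easier to diagnose while developing: self-host the service and switch on IncludeExceptionDetailInFaults, then pair it with the svclog tracing that SvcTraceViewer reads. ServiceHost and ServiceDebugBehavior are real WCF APIs; MyService is a placeholder for your own service type, and this is an illustration rather than the fix for the specific ContractFilter mismatch above.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class HostProgram
    {
        static void Main()
        {
            // MyService is a hypothetical service implementation type.
            using (ServiceHost host = new ServiceHost(typeof(MyService)))
            {
                ServiceDebugBehavior debug =
                    host.Description.Behaviors.Find<ServiceDebugBehavior>();
                if (debug == null)
                {
                    debug = new ServiceDebugBehavior();
                    host.Description.Behaviors.Add(debug);
                }
                // Development only: return full exception detail inside faults.
                debug.IncludeExceptionDetailInFaults = true;

                host.Open();
                Console.WriteLine("Service listening; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }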
I would like some tips for debugging WCF Web Service exceptions
I've created a WCF service and when I browse to the endpoint I get the following fault: <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"> <s:Body> <s:Fault> <faultcode xmlns:a="http://schemas.microsoft.com/ws/2005/05/addressing/none"> a:ActionNotSupported </faultcode> <faultstring xml:lang="en-GB"> The message with Action '' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None). </faultstring> </s:Fault> </s:Body> </s:Envelope> I've fixed the problem but didn't enjoy the experience! Does anyone have any tips or tools for debugging problems like this?
[ "I've found SvcTraceViewer.exe to be the most valuable tool when it comes to diagnosing WCF errors.\n" ]
[ 9 ]
[]
[]
[ ".net", "wcf", "web_services" ]
stackoverflow_0000018348_.net_wcf_web_services.txt
Q: Query a union table with fields as columns I'm not quite sure if this is possible, or falls into the category of pivot tables, but I figured I'd go to the pros to see. I have three basic tables: Card, Property, and CardProperty. Since cards do not have the same properties, and often multiple values for the same property, I decided to use the union table approach to store data instead of having a really big column structure in my card table. The property table is a basic keyword/value type table. So you have the keyword ATK and the value assigned to it. There is another property called SpecialType which a card can have multiple values for, such as "Synchro" and "DARK" What I'd like to do is create a view or stored procedure that gives me the Card Id, Card Name, and all the property keywords assigned to the card as columns and their values in the ResultSet for a card specified. So ideally I'd have a result set like: ID NAME SPECIALTYPE 1 Red Dragon Archfiend Synchro 1 Red Dragon Archfiend DARK 1 Red Dragon Archfiend Effect and I could tally my results that way. I guess even slicker would be to simply concatenate the properties together based on their keyword, so I could generate a ResultSet like: 1 Red Dragon Archfiend Synchro/DARK/Effect ...but I don't know if that's feasible. Help me stackoverflow Kenobi! You're my only hope. A: Is this for SQL server? If yes then Concatenate Values From Multiple Rows Into One Column (2000) Concatenate Values From Multiple Rows Into One Column Ordered (2005+) A: Related, but values are kept in separate columns and you have to know your "special types" ahead of time: SQL query to compare product sales by month Otherwise I would do this with a cursor in a stored procedure or perform the transformation in the business or presentation layer. Stab at sql if you know all cases: Select ID,NAME ,Synchro+DARK+Effect -- add some substring logic to trim any trailing /'s from (select ID ,NAME --may need to replace max() with min(). ,MAX(CASE SPECIALTYPE WHEN 'Synchro' THEN SPECIALTYPE + '/' ELSE '' END) Synchro ,MAX(CASE SPECIALTYPE WHEN 'DARK' THEN SPECIALTYPE + '/' ELSE '' END) DARK ,MAX(CASE SPECIALTYPE WHEN 'Effect' THEN SPECIALTYPE ELSE '' END) Effect from table group by ID ,NAME) sub1 A: Don't collapse by concatenation for storage of related records in your database. It's not exactly best practice. What you're describing is a pivot table. Pivot tables are hard. I'd suggest avoiding them if at all possible. Why not just read in your related rows and process them in memory? It doesn't sound like you're going to spend too many milliseconds doing this... A: One option is to have Properties have a PropertyType, so: table cards integer ID | string name | ... (other properties common to all Cards) table property_types integer ID | string name | string format | ... (possibly validations) table properties integer ID | integer property_type_id | string name | string value foreign key property_type_id references property_types.ID table cards_properties integer ID | integer card_id | integer property_id foreign key card_id references cards.ID foreign key property_id references properties.ID That way, when you want to set a new property value, you can validate it by its type. One type could be "SpecialType" with an enumeration of values. A: I do have a type/format for my properties table; that way I know how to cast/evaluate when I'm dealing with an integer value. I wasn't sure if it was pertinent to this issue or not.
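A minimal sketch of the "process them in memory" suggestion above, assuming the card/property rows have already been joined in SQL; the column names (CardId, PropertyValue) and the method shape are illustrative assumptions, not from the thread.

    using System;
    using System.Collections.Generic;
    using System.Data;

    static class CardPropertyFlattener
    {
        // Takes rows of (CardId, PropertyValue) and builds one
        // "Synchro/DARK/Effect"-style string per card.
        public static Dictionary<int, string> Flatten(DataTable rows)
        {
            Dictionary<int, List<string>> grouped = new Dictionary<int, List<string>>();
            foreach (DataRow row in rows.Rows)
            {
                int cardId = (int)row["CardId"];
                if (!grouped.ContainsKey(cardId))
                    grouped[cardId] = new List<string>();
                grouped[cardId].Add((string)row["PropertyValue"]);
            }

            Dictionary<int, string> result = new Dictionary<int, string>();
            foreach (KeyValuePair<int, List<string>> pair in grouped)
                result[pair.Key] = string.Join("/", pair.Value.ToArray());
            return result;
        }
    }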
Query a union table with fields as columns
I'm not quite sure if this is possible, or falls into the category of pivot tables, but I figured I'd go to the pros to see. I have three basic tables: Card, Property, and CardProperty. Since cards do not have the same properties, and often multiple values for the same property, I decided to use the union table approach to store data instead of having a really big column structure in my card table. The property table is a basic keyword/value type table. So you have the keyword ATK and the value assigned to it. There is another property called SpecialType which a card can have multiple values for, such as "Synchro" and "DARK" What I'd like to do is create a view or stored procedure that gives me the Card Id, Card Name, and all the property keywords assigned to the card as columns and their values in the ResultSet for a card specified. So ideally I'd have a result set like: ID NAME SPECIALTYPE 1 Red Dragon Archfiend Synchro 1 Red Dragon Archfiend DARK 1 Red Dragon Archfiend Effect and I could tally my results that way. I guess even slicker would be to simply concatenate the properties together based on their keyword, so I could generate a ResultSet like: 1 Red Dragon Archfiend Synchro/DARK/Effect ...but I don't know if that's feasible. Help me stackoverflow Kenobi! You're my only hope.
[ "Is this for SQL server?\nIf yes then\nConcatenate Values From Multiple Rows Into One Column (2000)\nConcatenate Values From Multiple Rows Into One Column Ordered (2005+)\n", "Related but values are values are kept in separate columns and you have know your \"special types\" a head of time: SQL query to compare product sales by month\nOtherwise I would do this with cursor in a stored procedure or preform the transformation in the business or presentation layer.\nStab at sql if you know all cases:\nSelect\n ID,NAME\n ,Synchro+DARK+Effect -- add a some substring logic to trim any trailing /'s\nfrom\n (select\n ID\n ,NAME\n --may need to replace max() with min().\n ,MAX(CASE SPECIALTYPE WHEN \"Synchro\" THEN SPECIALTYPE +\"/\" ELSE \"\" END) Synchro\n ,MAX(CASE SPECIALTYPE WHEN \"DARK\" THEN SPECIALTYPE +\"/\" ELSE \"\" END) DARK\n ,MAX(CASE SPECIALTYPE WHEN \"Effect\" THEN SPECIALTYPE ELSE \"\" END) Effect\n from\n table\n group by\n ID\n ,NAME) sub1\n\n", "Don't collapse by concatenation for storage of related records in your database. Its not exactly best practices. \nWhat you're describing is a pivot table. Pivot tables are hard. I'd suggest avoiding them if at all possible. \nWhy not just read in your related rows and process them in memory? It doesn't sound like you're going to spend too many milliseconds doing this...\n", "One option is to have Properties have a PropertyType, so:\ntable cards\ninteger ID | string name | ... (other properties common to all Cards)\n\ntable property_types\ninteger ID | string name | string format | ... (possibly validations)\n\ntable properties\ninteger ID | integer property_type_id | string name | string value\nforeign key property_type_id references property_types.ID\n\ntable cards_properties\ninteger ID | integer card_id | integer property_id\nforeign key card_id references cards.ID\nforeign key property_id references propertiess.ID\n\nThat way, when you want to set a new property value, you can validate it by its type. One type could be \"SpecialType\" with an enumeration of values.\n", "I do have a type/format for my properties table, that way I know how to cast/evaluate when I'm dealing with an integer value. I wasn't sure if it was pertinent to this issue or not.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "database_design", "sql", "stored_procedures" ]
stackoverflow_0000018216_database_design_sql_stored_procedures.txt
Q: How do I update my UI from within HttpWebRequest.BeginGetRequestStream in Silverlight I am uploading multiple files using the BeginGetRequestStream of HttpWebRequest but I want to update the progress control I have written whilst I post up the data stream. How should this be done? I have tried calling Dispatcher.BeginInvoke (as below) from within the loop that pushes the data into the stream but it locks the browser until it's finished, so it seems to be in some sort of worker/UI thread deadlock. This is a code snippet of pretty much what I am doing: class RequestState { public HttpWebRequest request; // holds the request public FileDialogFileInfo file; // store our file stream data public RequestState( HttpWebRequest request, FileDialogFileInfo file ) { this.request = request; this.file = file; } } private void UploadFile( FileDialogFileInfo file ) { UriBuilder ub = new UriBuilder( app.receiverURL ); ub.Query = string.Format( "filename={0}", file.Name ); // Open the selected file to read. HttpWebRequest request = (HttpWebRequest)WebRequest.Create( ub.Uri ); request.Method = "POST"; RequestState state = new RequestState( request, file ); request.BeginGetRequestStream( new AsyncCallback( OnUploadReadCallback ), state ); } private void OnUploadReadCallback( IAsyncResult asynchronousResult ) { RequestState state = (RequestState)asynchronousResult.AsyncState; HttpWebRequest request = (HttpWebRequest)state.request; Stream postStream = request.EndGetRequestStream( asynchronousResult ); PushData( state.file, postStream ); postStream.Close(); state.request.BeginGetResponse( new AsyncCallback( OnUploadResponseCallback ), state.request ); } private void PushData( FileDialogFileInfo file, Stream output ) { byte[] buffer = new byte[ 4096 ]; int bytesRead = 0; Stream input = file.OpenRead(); while( ( bytesRead = input.Read( buffer, 0, buffer.Length ) ) != 0 ) { output.Write( buffer, 0, bytesRead ); bytesReadTotal += bytesRead; App app = App.Current as App; int totalPercentage = Convert.ToInt32( ( bytesReadTotal / app.totalBytesToUpload ) * 100 ); // enabling the following locks up my UI and browser Dispatcher.BeginInvoke( () => { this.ProgressBarWithPercentage.Percentage = totalPercentage; } ); } } A: I was going to say that I didn't think that Silverlight 2's HttpWebRequest supported streaming, because the request data gets buffered into memory entirely. It had been a while since the last time I looked at it, though, so I went back to see if Beta 2 supported it. Well, it turns out it does. I am glad I went back and read before stating that. You can enable it by setting AllowReadStreamBuffering to false. Did you set this property on your HttpWebRequest? That could be causing your block. MSDN Reference File upload component for Silverlight and ASP.NET Edit: found another reference for you. You may want to follow this approach by breaking the file into chunks. This was written last March, therefore I am not sure if it will work in Beta 2 or not. A: Thanks for that, I will take a look at those links. I was considering chunking my data anyway; it seems to be the only way I can get any reasonable progress reports out of it.
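A possible refinement of the PushData loop from the question: throttle the cross-thread updates so the dispatcher queue is not flooded with redundant work. Names such as bytesReadTotal and ProgressBarWithPercentage come from the question; the lastReportedPercentage field is an assumption. Note this only eases UI pressure; the buffering behaviour discussed in the answer (AllowReadStreamBuffering) is the more likely root cause of the lock-up.

    private int lastReportedPercentage = -1;

    private void PushData( FileDialogFileInfo file, Stream output )
    {
        byte[] buffer = new byte[ 4096 ];
        int bytesRead;
        using( Stream input = file.OpenRead() )
        {
            while( ( bytesRead = input.Read( buffer, 0, buffer.Length ) ) != 0 )
            {
                output.Write( buffer, 0, bytesRead );
                bytesReadTotal += bytesRead;

                App app = App.Current as App;
                int totalPercentage = Convert.ToInt32( ( bytesReadTotal / app.totalBytesToUpload ) * 100 );

                // Only marshal to the UI thread when the number actually changes.
                if( totalPercentage != lastReportedPercentage )
                {
                    lastReportedPercentage = totalPercentage;
                    int snapshot = totalPercentage; // capture for the closure
                    Dispatcher.BeginInvoke( () =>
                    {
                        this.ProgressBarWithPercentage.Percentage = snapshot;
                    } );
                }
            }
        }
    }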
How do I update my UI from within HttpWebRequest.BeginGetRequestStream in Silverlight
I am uploading multiple files using the BeginGetRequestStream of HttpWebRequest but I want to update the progress control I have written whilst I post up the data stream. How should this be done? I have tried calling Dispatcher.BeginInvoke (as below) from within the loop that pushes the data into the stream but it locks the browser until it's finished, so it seems to be in some sort of worker/UI thread deadlock. This is a code snippet of pretty much what I am doing: class RequestState { public HttpWebRequest request; // holds the request public FileDialogFileInfo file; // store our file stream data public RequestState( HttpWebRequest request, FileDialogFileInfo file ) { this.request = request; this.file = file; } } private void UploadFile( FileDialogFileInfo file ) { UriBuilder ub = new UriBuilder( app.receiverURL ); ub.Query = string.Format( "filename={0}", file.Name ); // Open the selected file to read. HttpWebRequest request = (HttpWebRequest)WebRequest.Create( ub.Uri ); request.Method = "POST"; RequestState state = new RequestState( request, file ); request.BeginGetRequestStream( new AsyncCallback( OnUploadReadCallback ), state ); } private void OnUploadReadCallback( IAsyncResult asynchronousResult ) { RequestState state = (RequestState)asynchronousResult.AsyncState; HttpWebRequest request = (HttpWebRequest)state.request; Stream postStream = request.EndGetRequestStream( asynchronousResult ); PushData( state.file, postStream ); postStream.Close(); state.request.BeginGetResponse( new AsyncCallback( OnUploadResponseCallback ), state.request ); } private void PushData( FileDialogFileInfo file, Stream output ) { byte[] buffer = new byte[ 4096 ]; int bytesRead = 0; Stream input = file.OpenRead(); while( ( bytesRead = input.Read( buffer, 0, buffer.Length ) ) != 0 ) { output.Write( buffer, 0, bytesRead ); bytesReadTotal += bytesRead; App app = App.Current as App; int totalPercentage = Convert.ToInt32( ( bytesReadTotal / app.totalBytesToUpload ) * 100 ); // enabling the following locks up my UI and browser Dispatcher.BeginInvoke( () => { this.ProgressBarWithPercentage.Percentage = totalPercentage; } ); } }
[ "I was going to say that, I didn't think that Silverlight 2's HttpWebRequest supported streaming, because the request data gets buffered into memory entirely. It had been a while since the last time I looked at it though, therefore I went back to see if Beta 2 supported it. Well turns out it does. I am glad I went back and read before stating that. You can enable it by setting AllowReadStreamBuffering to false. Did you set this property on your HttpWebRequest? That could be causing your block.\n\nMSDN Reference\nFile upload component for Silverlight and ASP.NET\n\nEdit, found another reference for you. You may want to follow this approach by breaking the file into chunks. This was written last March, therefore I am not sure if it will work in Beta 2 or not.\n", "Thanks for that, I will take a look at those links, I was considering chunking my data anyway, seems to be the only way I can get any reasonable progress reports out of it.\n" ]
[ 1, 0 ]
[]
[]
[ "c#", "silverlight" ]
stackoverflow_0000013217_c#_silverlight.txt
Q: Best way to bind Windows Forms properties to ApplicationSettings in C#? In a desktop application needing some serious refactoring, I have several chunks of code that look like this: private void LoadSettings() { WindowState = Properties.Settings.Default.WindowState; Location = Properties.Settings.Default.WindowLocation; ... } private void SaveSettings() { Properties.Settings.Default.WindowState = WindowState; Properties.Settings.Default.WindowLocation = Location; ... } What's the best way to replace this? Project-imposed constraints: Visual Studio 2005 C# / .NET 2.0 Windows Forms Update For posterity, I've also found two useful tutorials: "Windows Forms User Settings in C#" and "Exploring Secrets of Persistent Application Settings". I've asked a follow-up question about using this technique to bind a form's Size here. I separated them out to help people who search for similar issues. A: If you open your windows form in the designer, look in the properties box. The first item should be "(ApplicationSetting)". Under that is "(PropertyBinding)". That's where you'll find the option to do exactly what you want.
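For the cases where the designer route is awkward, a hedged sketch of doing the same binding in code; the setting name WindowLocation comes from the question, everything else is assumed. Location raises LocationChanged, so two-way binding works for it; WindowState is usually better handled through the designer's (PropertyBinding) as the answer suggests.

    public partial class MainForm : Form
    {
        public MainForm()
        {
            InitializeComponent();

            // Bind once instead of hand-written Load/Save pairs.
            DataBindings.Add( "Location", Properties.Settings.Default, "WindowLocation",
                true, DataSourceUpdateMode.OnPropertyChanged );

            // Persist on close; anonymous delegate keeps this .NET 2.0-friendly.
            FormClosing += delegate { Properties.Settings.Default.Save(); };
        }
    }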
Best way to bind Windows Forms properties to ApplicationSettings in C#?
In a desktop application needing some serious refactoring, I have several chunks of code that look like this: private void LoadSettings() { WindowState = Properties.Settings.Default.WindowState; Location = Properties.Settings.Default.WindowLocation; ... } private void SaveSettings() { Properties.Settings.Default.WindowState = WindowState; Properties.Settings.Default.WindowLocation = Location; ... } What's the best way to replace this? Project-imposed constraints: Visual Studio 2005 C# / .NET 2.0 Windows Forms Update For posterity, I've also found two useful tutorials: "Windows Forms User Settings in C#" and "Exploring Secrets of Persistent Application Settings". I've asked a follow-up question about using this technique to bind a form's Size here. I separated them out to help people who search for similar issues.
[ "If you open your windows form in the designer, look in the properties box. The first item should be \"(ApplicationSetting)\". Under that is \"(PropertyBinding)\". That's where you'll find the option to do exactly what you want. \n" ]
[ 12 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000018421_.net_c#.txt
Q: Best practice to authorize all users for just one page What is the best way to authorize all users to one single page in an ASP.NET website? Except for the login page and one other page, I deny all users from viewing pages in the website. How do you make this page accessible to all users? A: I've been using forms authentication and creating the necessary GenericIdentity and CustomPrincipal objects that allow me to leverage the User.IsInRole type functions you typically only get with Windows authentication. That way in my web.config file, I can do stuff like... <location path="Login.aspx"> <system.web> <authorization> <allow users ="*" /> </authorization> </system.web> </location> <location path="ManagementFolder"> <system.web> <authorization> <allow roles ="Administrator, Manager" /> </authorization> </system.web> </location>
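For completeness, a rough sketch of the principal wiring the answer alludes to, typically placed in Global.asax; GenericPrincipal and the pipeline event are real .NET APIs, but the role lookup here is a placeholder you would replace with your own store.

    using System;
    using System.Security.Principal;
    using System.Web;

    public partial class Global : HttpApplication
    {
        protected void Application_AuthenticateRequest( object sender, EventArgs e )
        {
            HttpContext ctx = HttpContext.Current;
            if ( ctx.User != null && ctx.User.Identity.IsAuthenticated )
            {
                // Placeholder: load the real roles for this user from your database.
                string[] roles = new string[] { "Manager" };
                ctx.User = new GenericPrincipal( ctx.User.Identity, roles );
            }
        }
    }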
Best practice to authorize all users for just one page
What is the best way to authorize all users to one single page in an ASP.NET website? Except for the login page and one other page, I deny all users from viewing pages in the website. How do you make this page accessible to all users?
[ "I've been using forms authentication and creating the necessary GenericIdentity and CustomPrincipal objects that allows me to leverage the User.IsInRole type functions you typically only get with Windows authentication.\nThat way in my web.config file, I can do stuff like...\n<location path=\"Login.aspx\">\n <system.web>\n <authorization>\n <allow users =\"*\" />\n </authorization>\n </system.web>\n</location>\n\n<location path=\"ManagementFolder\">\n <system.web>\n <authorization>\n <allow roles =\"Administrator, Manager\" />\n </authorization>\n </system.web>\n</location>\n\n" ]
[ 5 ]
[ "I created a base \"page\" class that handles that sort of thing. All my pages can then be decorated with the RequiresLogin attribute if a login is required to view them. If the attribute is not present, the page is accessible to all.\nExample:\n<RequiresLogin()> _ \n<RequiresPermission(\"process\")> _\nPartial Class DesignReviewEditProgressPage\n Inherits MyPage 'which inherits System.Web.UI.Page and deal with logins itself\n\n ...\nEnd Class\n\nThe MyPage class checks what attributes are being tagged to itself and if RequiresLogin is present, it forwards you to a login page.\nI believe this could be adapted to fit your own problem.\n" ]
[ -1 ]
[ "asp.net", "authorization" ]
stackoverflow_0000018460_asp.net_authorization.txt
Q: Get a number from a sql string range I have a column of data that contains a percentage range as a string that I'd like to convert to a number so I can do easy comparisons. Possible values in the string: '<5%' '5-10%' '10-15%' ... '95-100%' I'd like to convert this in my select where clause to just the first number, 5, 10, 15, etc. so that I can compare that value to a passed in "at least this" value. I've tried a bunch of variations on substring, charindex, convert, and replace, but I still can't seem to get something that works in all combinations. Any ideas? A: Try this, SELECT substring(replace(interest , '<',''), patindex('%[0-9]%',replace(interest , '<','')), patindex('%[^0-9]%',replace(interest, '<',''))-1) FROM table1 Tested at my end and it works, it's only my first try so you might be able to optimise it. A: @Martin: Your solution works. Here is another I came up with based on inspiration from @mercutio select cast(replace(replace(replace(interest,'<',''),'%',''),'-','.0') as numeric) test from table1 where interest is not null A: You can convert char data to other types of char (convert char(10) to varchar(10)), but you won't be able to convert character data to integer data from within SQL. A: I don't know if this works in SQL Server, but within MySQL, you can use several tricks to convert character data into numbers. Examples from your sample data: "<5%" => 0 "5-10%" => 5 "95-100%" => 95 now obviously this fails your first test, but some clever string replacements on the start of the string would be enough to get it working. One example of converting character data into numbers: SELECT "5-10%" + 0 AS foo ... Might not work in SQL Server, but future searches may help the odd MySQL user :-D A: You'd probably be much better off changing <5% and 5-10% to store 2 values in 2 fields. Instead of storing <5%, you would store 0, and 5, and instead of 5-10%, you'd end up with 5 and 10. You'd end up with 2 columns, one called lowerbound, and one called upperbound, and then just check value >= lowerbound AND value < upperbound. A: You can do this in sql server with a cursor. If you can create a CLR function to pull out number groupings that will help. It's possible in T-SQL, it just will be ugly. Create the cursor to loop over the list. Find the first number. If there is only 1 number group in there then return it. Otherwise find the second item grouping. If there is only the 1st item grouping returned and it's the first item in the list, set it to upper bound. If there is only the 1st item grouping returned and it's the last item in the list, set it to lower bound. Otherwise set the 1st item grouping to lower, and the 2nd item grouping to upper bound Just set the resulting values back to a table A: The issue you are having is a symptom of not keeping the data atomic. In this case it looks purely unintentional (Legacy) but here is a link about it. To design yourself out of this, create a range_lookup table: Create table rangeLookup( rangeID int -- or rangeCD or not at all ,rangeLabel varchar(50) ,LowValue int --real or whatever ,HighValue int ) To hack yourself out, here are some pseudo steps; this will be a deeply nested mess. normalize your input by replacing all your crazy characters. replace(replace(rangeLabel,"%",""),"<","") --This will entail many nested replace statements. Add a CASE and CHARINDEX to look for a space; if there is none you have your number, else use your substring to take everything before the first " ". -- these steps are wrapped around the previous step. 
A: It's complicated, but for the test cases you provided, this works. Just replace @Test with the column you are looking in from your table. DECLARE @TEST varchar(10) set @Test = '<5%' --set @Test = '5-10%' --set @Test = '10-15%' --set @Test = '95-100%' Select CASE WHEN Substring(@TEST,1,1) = '<' THEN 0 ELSE CONVERT(integer,SUBSTRING(@TEST,1,CHARINDEX('-',@TEST)-1)) END AS LowerBound , CASE WHEN Substring(@TEST,1,1) = '<' THEN CONVERT(integer,Substring(@TEST,2,CHARINDEX('%',@TEST)-2)) ELSE CONVERT(integer,Substring(@TEST,CHARINDEX('-',@TEST)+1,CHARINDEX('%',@TEST)-CHARINDEX('-',@TEST)-1)) END AS UpperBound
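If the comparison can happen in application code instead of the WHERE clause, a small C# helper along these lines (entirely illustrative, with an assumed -1 sentinel for unparseable input) captures the same parsing rules:

    static int LowerBound( string range )
    {
        // "<5%" -> 0, "5-10%" -> 5, "95-100%" -> 95
        string s = range.Trim().TrimEnd( '%' );
        if ( s.StartsWith( "<" ) )
            return 0;

        int dash = s.IndexOf( '-' );
        string first = dash < 0 ? s : s.Substring( 0, dash );

        int value;
        return int.TryParse( first, out value ) ? value : -1; // -1 flags bad input
    }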
Get a number from a sql string range
I have a column of data that contains a percentage range as a string that I'd like to convert to a number so I can do easy comparisons. Possible values in the string: '<5%' '5-10%' '10-15%' ... '95-100%' I'd like to convert this in my select where clause to just the first number, 5, 10, 15, etc. so that I can compare that value to a passed in "at least this" value. I've tried a bunch of variations on substring, charindex, convert, and replace, but I still can't seem to get something that works in all combinations. Any ideas?
[ "Try this,\nSELECT substring(replace(interest , '<',''), patindex('%[0-9]%',replace(interest , '<','')), patindex('%[^0-9]%',replace(interest, '<',''))-1) FROM table1 \n\nTested at my end and it works, it's only my first try so you might be able to optimise it.\n", "@Martin: Your solution works.\nHere is another I came up with based on inspiration from @mercutio\nselect cast(replace(replace(replace(interest,'<',''),'%',''),'-','.0') as numeric) test\nfrom table1 where interest is not null\n\n", "You can convert char data to other types of char (convert char(10) to varchar(10)), but you won't be able to convert character data to integer data from within SQL.\n", "I don't know if this works in SQL Server, but within MySQL, you can use several tricks to convert character data into numbers. Examples from your sample data:\n\"<5%\" => 0\n\"5-10%\" => 5\n\"95-100%\" => 95\n\nnow obviously this fails your first test, but some clever string replacements on the start of the string would be enough to get it working.\nOne example of converting character data into numbers:\nSELECT \"5-10%\" + 0 AS foo ...\n\nMight not work in SQL Server, but future searches may help the odd MySQL user :-D\n", "You'd probably be much better off changing <5% and 5-10% to store 2 values in 2 fields. Instead of storing <5%, you would store 0, and 5, and instead of 5-10%, yould end up with 5 and 10. You'd end up with 2 columns, one called lowerbound, and one called upperbound, and then just check value >= lowerbound AND value < upperbound.\n", "You can do this in sql server with a cursor. If you can create a CLR function to pull out number groupings that will help. Its possible in T-SQL, just will be ugly.\nCreate the cursor to loop over the list.\nFind the first number, If there is only 1 number group in their then return it. Otherwise find the second item grouping.\nif there is only 1st item grouping returned and its the first item in the list set it to upper bound.\nif there is only 1st item grouping returned and its the last item in the list set it to lower bound.\nOtherwise set the 1st item grouping to lower, and the 2nd item grouping to upper bound\nJust set the resulting values back to a table \n", "The issue you are having is a symptom of not keeping the data atomic. In this case it looks purely unintentional (Legacy) but here is a link about it. \nTo design yourself out of this create a range_lookup table:\nCreate table rangeLookup(\n rangeID int -- or rangeCD or not at all\n ,rangeLabel varchar(50)\n ,LowValue int--real or whatever\n ,HighValue int \n)\n\nTo hack yourself out here some pseudo steps this will be a deeply nested mess.\nnormalize your input by replacing all your crazy charecters.\n replace(replace(rangeLabel,\"%\",\"\"),\"<\",\"\")\n --This will entail many nested replace statments.\n\nAdd a CASE and CHARINDEX to look for a space if there is none you have your number\n else use your substring to take everything before the first \" \".\n -- theses steps are wrapped around the previous step.\n\n", "It's complicated, but for the test cases you provided, this works. 
Just replace @Test with the column you are looking in from your table.\nDECLARE @TEST varchar(10)\n\nset @Test = '<5%'\n--set @Test = '5-10%'\n--set @Test = '10-15%'\n--set @Test = '95-100%'\n\nSelect CASE WHEN \nSubstring(@TEST,1,1) = '<' \nTHEN \n0\nELSE \nCONVERT(integer,SUBSTRING(@TEST,1,CHARINDEX('-',@TEST)-1))\nEND\nAS LowerBound\n,\nCASE WHEN \nSubstring(@TEST,1,1) = '<'\nTHEN\nCONVERT(integer,Substring(@TEST,2,CHARINDEX('%',@TEST)-2))\nELSE\nCONVERT(integer,Substring(@TEST,CHARINDEX('-',@TEST)+1,CHARINDEX('%',@TEST)-CHARINDEX('-',@TEST)-1))\nEND\nAS UpperBound\n\n" ]
[ 5, 2, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "sql_server" ]
stackoverflow_0000018413_sql_server.txt
Q: .Net Parse versus Convert In .Net you can read a string value into another data type using either <datatype>.parse or Convert.To<DataType>. I'm not familiar with the fundamentals of parse versus convert so I am always at a loss when asked which one is better/faster/more appropriate. So - which way is best in what type of circumstances? A: The Convert.ToXXX() methods are for objects that might be of the correct or similar type, while .Parse() and .TryParse() are specifically for strings: //o is actually a boxed int object o = 12345; //unboxes it int castVal = (int) 12345; //o is a boxed enum object o = MyEnum.ValueA; //this will get the underlying int of ValueA int convVal = Convert.ToInt32( o ); //now we have a string string s = "12345"; //this will throw an exception if s can't be parsed int parseVal = int.Parse( s ); //alternatively: int tryVal; if( int.TryParse( s, out tryVal ) ) { //do something with tryVal } If you compile with optimisation flags TryParse is very quick - it's the best way to get a number from a string. However if you have an object that might be an int or might be a string Convert.ToInt32 is quicker. A: Here's an answer for you: http://www.dotnetspider.com/forum/ViewForum.aspx?ForumId=77428 Though I think in modern versions of .NET, the best thing to do is use TryParse in any case, if there's any doubt that the conversion will work. A: I'm a big fan of TryParse, since it saves you a lot of the headache of error catching when there's a chance the value you're going to parse is not of the appropriate type. My order is usually: Parse (if I can be sure the value will be the right type, and I do try to ensure this) TryParse (if I can't be sure, which happens whenever user input is involved, or input from a system you cannot control) Convert (which I think I have not used since I started using Parse and TryParse, but I could be wrong) A: There is also the DirectCast method which you should use only if you are sure what the type of the object is. It is faster, but doesn't do any proper checks. I use DirectCast when I'm extracting values from a loosely typed DataTable when I know the type for each column. A: If you need speed, I'm pretty sure a direct cast is the fastest way. That being said, I normally use .Parse or .TryParse because it seems to make things easier to read, and behave in a more predictable manner. Convert actually calls Parse under the hood, I believe. So there is little difference there, and it really just seems to be a matter of personal taste.
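One concrete difference the answers hint at but do not show is how the two treat null strings. A small sketch, using nothing beyond the base class library:

    string s = null;

    int viaConvert = Convert.ToInt32( s );   // returns 0 for a null string
    // int viaParse = int.Parse( s );        // throws ArgumentNullException

    int viaTry;
    bool ok = int.TryParse( s, out viaTry ); // ok == false, viaTry == 0, no exception

So Convert.ToInt32 quietly maps null to zero, Parse throws, and TryParse reports failure without throwing; which behaviour you want is often what should drive the choice.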
.Net Parse versus Convert
In .Net you can read a string value into another data type using either <datatype>.parse or Convert.To<DataType>. I'm not familiar with the fundamentals of parse versus convert so I am always at a loss when asked which one is better/faster/more appropriate. So - which way is best in what type of circumstances?
[ "The Convert.ToXXX() methods are for objects that might be of the correct or similar type, while .Parse() and .TryParse() are specifically for strings:\n//o is actually a boxed int\nobject o = 12345;\n\n//unboxes it\nint castVal = (int) 12345;\n\n//o is a boxed enum\nobject o = MyEnum.ValueA;\n\n//this will get the underlying int of ValueA\nint convVal = Convert.ToInt32( o );\n\n//now we have a string\nstring s = \"12345\";\n\n//this will throw an exception if s can't be parsed\nint parseVal = int.Parse( s );\n\n//alternatively:\nint tryVal;\nif( int.TryParse( s, out tryVal ) ) {\n //do something with tryVal \n}\n\nIf you compile with optimisation flags TryParse is very quick - it's the best way to get a number from a string. However if you have an object that might be an int or might be a string Convert.ToInt32 is quicker.\n", "Here's an answer for you:\nhttp://www.dotnetspider.com/forum/ViewForum.aspx?ForumId=77428\nThough I think in modern versions of .NET, the best thing to do is use TryParse in any case, if there's any doubt that the conversion will work.\n", "I'm a big fan of TryParse, since it saves you a lot of headache of error catching when there's a chance the value you're going to parse is not of the appropriate type. \nMy order is usually:\n\nParse (if I can be sure the value will be the right type, and I do try to ensure this)\nTryParse (if I can't be sure, which happens whenever user input is involved, or input from a system you cannot control)\nConvert (which I think I have not used since I started using Parse and TryParse, but I could be wrong)\n\n", "There is also the DirectCast method which you should use only if you are sure what the type of the object is. It is faster, but doesn't do any proper checks. I use DirectCast when I'm extracting values from a loosely typed DataTable when I know the type for each column.\n", "If you need speed, I'm pretty sure a direct cast is the fastest way. That being said, I normally use .Parse or .TryParse because is seems to make things easier to read, and behave in a more predictable manner. \nConvert actually calls Parse under the hood, I believe. So there is little difference there, and its really just seems to be a matter of personal taste.\n" ]
[ 15, 5, 3, 1, 1 ]
[]
[]
[ ".net", "parsing" ]
stackoverflow_0000018465_.net_parsing.txt
Q: Why go 64 bit OS? On these questions: Which Vista edition is best for a developer machine? Vista or XP for Dev Machine People are recommending 64 bit, can you explain why? Is it just so you can have more than 3GB of addressable RAM that 32 bit gives you? And how does Visual Studio benefit from all this extra RAM? I went from 64 bit XP back to 32 bit due to 90% of the software I was using only being 32 bit anyway and I had issues with drivers and some software with 64 bit. A: Vista, as far as I know, has much better 64 bit support than XP. It is better advertised than 64 bit XP, and more popular. Driver and software support should be much better for 64-bit Vista. The 64-bit switch is in progress right now in the computing industry. You might as well switch. Microsoft made the serious leap to 64-bit already, and many have already followed suit. Those who haven't switched will soon, most likely. As for the technical benefits, there aren't many aside from the higher memory limits. Vista will certainly allow you to take advantage of the 4GB+ of RAM if you have it on 64-bit though. A: A number of reasons. Yes, you're right it is so you can have more than 3 gig of ram More and more systems are going to be 64 bit soon so it makes sense to develop on what you're going to be running on Some bugs can only be observed when running in 64 bit mode A: "There are some gotchas in terms of p/invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think it's something people are going to be doing more as time goes by." This is caused, in .net, by having the AnyCPU flag set. AnyCPU on an x64 machine will run the process as an x64 process, which proceeds to explode when attempting to call/load a 32 bit dll. Since those libraries are 32 bit you need to set the build to x86, to ensure the app will run as an x86 process, if on an x64 machine it will run in WoW. Signed Drivers. No more "Unknown Device Driver" blue screens, drivers that cause issues are found out, and rightly blamed for their crashes. Signed drivers also mean the drivers are current. Manufacturers that used to get away with updating a driver once every 2-3 years had to get signed/certified. Which means the driver is relatively current and had to pass a basic "is this total crap" test at Microsoft. This "lack of driver support" I've always seen as a boon. Forcing manufacturer certification. More address space. Others have mentioned that this allows more RAM, which is true. But it has more impact on memory management performance. It also means having 4 gigs RAM and a graphics card with 512MB on it will be fully used by the system. On a 32 bit OS the system has to decide, out of the limited addresses, what hardware gets what range, physical RAM loses. Then there is always the possibility of using more than 4 gigs RAM, good for when you have lots of VMs x64 Vista loads core OS processes/services, during boot, into random addresses. Giving some exploits a 1/256 chance of picking the right memory location, instead of 100% on a 32 machine. No kernel patching. None. Nada. Zilch. It does mean some Sysinternal tools do not work, however it means xyz spyware/virus can't maliciously apply the same techniques as sysinternals to hide forever, intercept calls, etc. (this is what keeps out some anti-virus software... 
as well as viruses) A: Another technical benefit, aside from the increased address space, is that 64bit apps always use DEP, so you are forced to fix those bugs and potential security holes. A: 64-bit won't be mainstream before most programs are available in 64 bit versions. And who makes programs? Developers, developers, developers! See my point? If developers don't make the shift, how are 64-bit programs going to be mainstream? Other than that, there are, of course, more reasons: Signed drivers More memory, as you mentioned You get the possibility to test your programs on 64-bit (obviously) It's the future. =) A: I switched from 32 bit Vista to 64 bit and haven't looked back. I have only had a problem with one device (a multi-track firewire mixing board) - but everything else that has worked for 32-bit works for 64. Throw in the ability to add piles of cheap RAM, and I don't see any reason why anyone would stick with 32 if the processor supports it. If you're really unsure, use Vista's much improved multi-boot functionality and install 32 bit XP and 64 bit Vista on the same machine on different partitions. I did, but to tell you the truth, I haven't gone back into XP for at least 9 months now. A: Another advantage of 64 bit: All the registers associated with the microprocessors are 64-bit. This enables high-precision computations and 64-bit arithmetic to be performed in fewer clock-cycles as compared to 32-bit microprocessors. In certain cases like 64-bit multiplication, it is twice as fast. A: XP 64bit wasn't ready for prime time; there were no drivers for it. In Windows Vista 64-bit this isn't the case. So if you are looking to install Windows Vista, go 64-bit; if you are keeping XP, stay at 32-bit. A: Bigger is always best? The RAM thing is the major advantage, and the increased address space. I guess as long as drivers aren't an issue, then why NOT 64bit? A: People are recommending 64 bit, can you explain why? Is it just so you can have more than 3GB of addressable RAM that 32 bit gives you? This addressable RAM limit is not a problem for a regular user, but it is pretty critical on DB configuration, scientific computing, etc... And how does Visual Studio benefit from all this extra RAM? Does it??? If you want to compile faster you can gain up to 20% compilation time compiling directly from a ramdisk partition. I went from 64 bit XP back to 32 bit due to 90% of the software I was using only being 32 bit anyway and I had issues with drivers and some software with 64 bit. Switching 64 bits for a regular dev station is probably useless. A: Vista x64 has been a very pleasant experience for me. There are a couple of edge cases, but most software and drivers work fine with it at this point. The biggest practical reason I see to use it is that you can load up on RAM (say 6GB or more) and then dedicate lots of it to virtual machines and other apps that require lots of memory (like Photoshop). If you are only using Visual Studio and maybe a couple other apps day to day, then it might not be as beneficial, but I find myself often running 10 to 20 apps at a time (seriously) and the extra RAM is critical. A: DotNet rocks had a recent show all about the benefits and pitfalls of going 64-bit from a .Net developer perspective. http://www.dotnetrocks.com/default.aspx?showNum=341 There are the obvious benefits of having access to more RAM in windows, as well as the obvious possible downside presented by unavailable drivers (which not only have to be 64-bit, but signed and certified as well). 
Other points made are in that if you ever need to test anything you are developing under 64-bit, the only way you can do that is on a 64-bit OS. You can always create VM image to test under 32-bit from a 64-bit OS. There are some gotchas in terms of p/invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think its something people are going to be doing more as time goes by.
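A quick way to observe the AnyCPU behaviour described above is to check the pointer size at runtime. This is only a minimal sketch, not code from any of the answers; it assumes .NET 2.0 or later, where IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one:

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // An AnyCPU build prints 8 on an x64 OS; an x86 build running
            // under WoW64 on the same OS prints 4.
            Console.WriteLine(IntPtr.Size == 8
                ? "Running as a 64-bit process"
                : "Running as a 32-bit process (possibly under WoW64)");
        }
    }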
Why go 64 bit OS?
On these questions: Which Vista edition is best for a developer machine? Vista or XP for Dev Machine People are recommending 64 bit, can you explain why? Is it just so you can have more than 3GB of addressable RAM that 32 bit gives you? And how does Visual Studio benefit from all this extra RAM? I went from 64 bit XP back to 32 bit due to 90% of the software I was using only being 32 bit anyway and I had issues with drivers and some software with 64 bit.
[ "Vista, as far as I know, has much better 64 bit support than XP. It is more well advertised than 64 bit XP, and more popular. Driver and software support should be much better for 64-bit Vista.\nThe 64-bit switch is in progress right now in the computing industry. You might as well switch. Microsoft made the serious leap to 64-bit already, and many have already followed suit. Those who haven't switched, will soon, most likely.\nAs for the technical benefits, there aren't many aside from the higher memory limits. Vista will certainly allow you to take advantage of the 4GB+ of RAM if you have it on 64-bit though.\n", "A number of reasons.\n\nYes, you're right it is so you can have more than 3 gig of ram\nMore and more systems are going to be 64 bit soon so it makes sense to develop on what you're going to be running on\nSome bugs can only be observed when running in 64 bit mode\n\n", "\"There are some gotchas in terms of p/invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think its something people are going to be doing more as time goes by.\"\nThis is caused, in .net, by having the AnyCPU flag set. AnyCPU on an x64 machine will run the process as a x64 process, which proceeds to explode when attempting to call/load a 32 bit dll. Since those libraries are 32 bit you need to set the build to x86, to ensure the app will run as an x86 process, if on an x64 machine it will run in WoW.\nSigned Drivers. No more \"Unknown Device Driver\" blue screens, drivers that cause issues are found out, and rightly blamed for their crashes. \nSigned drivers also means the drivers are current. Manufacturers that used to get away with updating a driver once every 2-3 years had to get signed/certified. Which means the driver is relatively current and had to pass basic \"is this total crap\" test at Microsoft.\nThis \"lack of driver support\" I've always seen as a boon. Forcing manufacturer certification.\nMore address space. Others have mentioned that this allows more RAM, which is true. But it has more impact on memory management performance. It also means having 4 gigs RAM and a graphics card with 512MB on it will be fully used by the system. On a 32 bit OS the system has to decide, out of the limited addresses, what hardware gets what range, physical RAM loses. \nThen there is always the possibility of using more than 4 gigs RAM, good for when you have lots of VMs\nx64 Vista loads core OS processes/services, during boot, into random addresses. Giving some exploits a 1/256 chance of picking the right memory location, instead of 100% on a 32 machine.\nNo kernel patching. None. Nada. Zilch. It does mean some Sysinternal tools do not work, however it means xyz spyware/virus cant maliciously apply the same techniques as sysinternals to hide forever, intercept calls, etc. (this is what keeps out some anti-virus software... as well as viruses) \n", "Another technical benefit, aside from the increased address space, is that 64bit apps always use DEP, so you are forced to fix those bugs and potential security holes.\n", "64-bit won't be mainstream before most programs are availiable in 64 bit versions. And who make programs? Developers, developers, developers!\nSee my point? If developers don't make the shift, how is 64-bit programs going to be mainstream?\nOther than that, there is of cource more reasons:\n\nSigned drivers\nMore memory, as you mentioned\nYou get the possibility to test your programs on 64-bit (obviously)\nIt's the future. 
=)\n\n", "I switched from 32 bit Vista to 64 bit and haven't looked back. I have only had a problem with one device (a multi-track firewire mixing board) - but everything else that has worked for 32-bit works for 64. Throw in the ability to add piles of cheap RAM, and I don't see any reason why anyone would stick with 32 if the processor supports it.\nIf you're really unsure, use Vista's much improved multi-boot functionality and install 32 bit XP and 64 bit Vista on the same machine on different partitions. I did, but to tell you the truth, I haven't gone back into XP for at least 9 months now.\n", "Another advantage of 64 bit:\nAll the registers associated with the microprocessors are 64-bit. This enables High- precision computations and 64-bit arithmetic to be performed in fewer clock-cycles as compared to 32-bit microprocessors. In certain cases like 64-bit multiplication, it is twice as fast.\n", "XP 64bit wasn't ready for prime time, there were no drivers for it. In Windows Vista 64-bit this isn't the case. So if you are looking to install Windows Vista go 64-bit if you are keeping XP stay at 32-bit.\n", "Bigger is always best? The RAM thing is the major advantage, and the increased address space. I guess as long as drivers aren't an issue, then why NOT 64bit?\n", "People are recommending 64 bit, can you explain why? Is it just so you can have more then 3Gb of addressable RAM that 32 bit gives you?\nThis addressable RAM limit is not a problem for a regular user, but it is pretty critical on DB configuration, scientific computing, etc...\nAnd how does Visual Studio benefit from all this extra RAM?\nDoes it??? If you want to compile faster you can gain up to 20% compilation time compiling directly from a ramdisk partition. I went from 64 bit XP back to 32 bit due to 90% of the software I was using only being 32 bit anyway and I had issues with drivers and some software with 64 bit.\nSwitching 64 bits for a regular dev station is probably useless.\n", "Vista x64 has been a very pleasant experience for me. There are a couple of edge cases, but most software and drivers work fine with it at this point. The biggest practical reason I see to use it is that you can load up on RAM (say 6GB or more) and then dedicate lots of it to virtual machines and other apps that require lots of memory (like Photoshop). If you are only using Visual Studio and maybe a couple other apps day to day, then it might not be as beneficial, but I find myself 0ften running 10 to 20 apps at a time (seriously) and the extra RAM is critical.\n", "DotNet rocks had a recent show all about the benefits and pitfalls of going 64-bit from a .Net developer perspective.\nhttp://www.dotnetrocks.com/default.aspx?showNum=341\nThere are the obvious benefits of having access to more RAM in windows, as well as the obvious possible downside presented by unavailable drivers (which not only have to be 64-bit, but signed and certified as well).\nOther points made are in that if you ever need to test anything you are developing under 64-bit, the only way you can do that is on a 64-bit OS. You can always create VM image to test under 32-bit from a 64-bit OS.\nThere are some gotchas in terms of p/invoke calls not always working across 32/64, as well as Managed DirectX not working well under 64-bit, but on the whole I think its something people are going to be doing more as time goes by.\n" ]
[ 8, 5, 4, 3, 2, 2, 2, 1, 1, 1, 1, 1 ]
[]
[]
[ "64_bit", "operating_system", "windows_vista", "windows_xp" ]
stackoverflow_0000018035_64_bit_operating_system_windows_vista_windows_xp.txt
Q: What is the best way to write a form in ASP.NET MVC? What is the best way to write a form to submit some data in ASP.NET MVC? Is it as Scott Guthrie demonstrates here? Are there better approaches? Perhaps with less use of strings? A: I don't really like strings in my code, as it isn't possible to refactor. A nice way is to use Linq Expressions. If you get passed a model as ViewData you can use the following statement: <%= Html.ShowDropDownList(viewData => viewData.Name) %> ... public static string ShowDropDownList<T>(this HtmlHelper html, Expression<Action<T>> action) { var body = action.Body as MethodCallExpression; if (body == null) throw new InvalidOperationException("Expression must be a method call."); if (body.Object != action.Parameters[0]) throw new InvalidOperationException("Method call must target lambda argument."); string propertyName = body.Method.Name; string typeName = typeof(T).Name; // now you can call the original method html.Select(propertyName, ... ); } I know the original solution performs faster but I think this one is much cleaner. Hope this helps!
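One caveat with the helper above: a plain property access such as viewData => viewData.Name compiles to a MemberExpression rather than a MethodCallExpression, so a variant along the following lines (a sketch with assumed names, not the answer's original code) may be closer to what that usage actually needs:

    using System;
    using System.Linq.Expressions;

    public static class ExpressionHelper
    {
        // Extracts "Name" from a lambda like viewData => viewData.Name.
        public static string PropertyName<T, TValue>(Expression<Func<T, TValue>> expr)
        {
            var member = expr.Body as MemberExpression;
            if (member == null)
                throw new InvalidOperationException("Expression must be a property access.");
            return member.Member.Name;
        }
    }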
What is the best way to write a form in ASP.NET MVC?
What is the best way to write a form to submit some data in ASP.NET MVC? Is it as Scott Guthrie demonstrates here? Are there better approaches? Perhaps with less use of strings?
[ "I don't really like strings in my code, as it isn't possible to refactor. A nice way is to use Linq Expressions. If you get passed a model as ViewData you can use the following statement:\n<%= ShowDropDownBox(viewData => viewData.Name); %>\n...\n\npublic static string ShowDropDownList<T>(this HtmlHelper html, Expression<Action<T>> property)\n{\n var body = action.Body as MethodCallExpression;\n if (body == null)\n throw new InvalidOperationException(\"Expression must be a method call.\");\n if (body.Object != action.Parameters[0])\n throw new InvalidOperationException(\"Method call must target lambda argument.\");\n string propertyName = body.Method.Name;\n string typeName = typeof(T).Name;\n\n // now you can call the original method\n html.Select(propertyName, ... );\n}\n\nI know the original solution is performing faster but I think this one is much cleaner.\nHope this helps!\n" ]
[ 2 ]
[]
[]
[ "asp.net_mvc", "forms" ]
stackoverflow_0000018614_asp.net_mvc_forms.txt
Q: Webservice alive forever I often use webservice this way public void CallWebservice() { mywebservice web = new mywebservice(); web.call(); } but sometimes I do this private mywebservice web; public Constructor() { web = new mywebservice(); } public void CallWebservice() { web.call(); } I like the second approach very much, but sometimes it times out and I have to start the application again; the first one, I think, brings overhead and is not very efficient - in fact, sometimes the first call returns a WebException - ConnectFailure (I don't know why). I found an article (Web Service Woes (A light at the end of the tunnel?)) that works around the timeout by turning the KeepAlive property to false in the overridden function GetWebRequest; here is the code: Protected Overrides Function GetWebRequest(ByVal uri As System.Uri) As System.Net.WebRequest Dim webRequest As Net.HttpWebRequest = CType(MyBase.GetWebRequest(uri), Net.HttpWebRequest) webRequest.KeepAlive = False Return webRequest End Function The question is, is it possible to extend the webservice timeout forever, and finally, how do you implement your webservices to handle this issue? A: The classes generated by Visual Studio for webservices are just proxies with little state so creating them is pretty cheap. I wouldn't worry about memory consumption for them. If what you are looking for is a way to call the webmethod in one line you can simply do this: new mywebservice().call() Cheers
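For reference, a rough C# equivalent of the VB override quoted above would look like the sketch below. It assumes a proxy class generated from SoapHttpClientProtocol; that proxy also exposes a Timeout property (in milliseconds, where -1 means infinite), which is the usual knob for stretching the timeout itself:

    // inside a partial class of the generated proxy (derived from SoapHttpClientProtocol):
    protected override System.Net.WebRequest GetWebRequest(Uri uri)
    {
        // Disable HTTP keep-alive so a stale pooled connection is never reused.
        var webRequest = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
        webRequest.KeepAlive = false;
        return webRequest;
    }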
Webservice alive forever
I often use webservice this way public void CallWebservice() { mywebservice web = new mywebservice(); web.call(); } but sometimes I do this private mywebservice web; public Constructor() { web = new mywebservice(); } public void CallWebservice() { web.call(); } I like the second approach very much, but sometimes it times out and I have to start the application again; the first one, I think, brings overhead and is not very efficient - in fact, sometimes the first call returns a WebException - ConnectFailure (I don't know why). I found an article (Web Service Woes (A light at the end of the tunnel?)) that works around the timeout by turning the KeepAlive property to false in the overridden function GetWebRequest; here is the code: Protected Overrides Function GetWebRequest(ByVal uri As System.Uri) As System.Net.WebRequest Dim webRequest As Net.HttpWebRequest = CType(MyBase.GetWebRequest(uri), Net.HttpWebRequest) webRequest.KeepAlive = False Return webRequest End Function The question is, is it possible to extend the webservice timeout forever, and finally, how do you implement your webservices to handle this issue?
[ "The classes generated by Visual Studio for webservices are just proxies with little state so creating them is pretty cheap. I wouldn't worry about memory consumption for them.\nIf what you are looking for is a way to call the webmethod in one line you can simply do this:\nnew mywebservice().call()\n\nCheers\n" ]
[ 1 ]
[]
[]
[ "web_services" ]
stackoverflow_0000018702_web_services.txt
Q: Sending a mouse click to a button in the taskbar using C# In an application that I am currently working on, a requirement is to bring a window of an external application to the foreground. Win32 API calls such as BringWindowToTop and SetForegroundWindow do not work all the time. This is due to some restrictions within Windows XP. What I would like to do instead is simulate a mouse click on the window's button on the taskbar, which I am hoping will bring the window to the front. Does anyone know how this is possible? A: Check out the section "How to steal focus on 2K/XP" at http://www.codeproject.com/KB/dialog/dlgboxtricks.aspx, as this is exactly what you need. I wouldn't go the taskbar route as the taskbar could be hidden or simply not there. A: It's possible. But it's extremely sketchy. Your application may also break with the next version of Windows, since it's undocumented. What you need to do is find the window handle of the taskbar, then find the window handle of the child window representing the button, then send it a WM_MOUSEDOWN (I think) message. Here's a bit on finding the window handle of the taskbar: http://www.codeproject.com/ FWIW, the restrictions on BringWindowToTop/SetForeground are there because it's irritating when a window steals focus. That may not matter if you're working on a corporate environment. Just keep it in mind. :) A: I used this in a program where I needed to simulate clicks and mouse movements; Global Mouse and Keyboard Library A: To be honest I've never had an issue bringing a window to the foreground on XP/Vista/2003/2000. You need to make sure you do the following: Check if IsIconic (minimized) If #1 results in true then call ShowWindow passing SW_RESTORE Then call SetForegroundWindow I've never had problems that I can think of doing it with those steps.
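The restore-then-focus sequence from the last answer can be sketched in C# as below; the P/Invoke signatures are standard user32 declarations, and obtaining the target HWND (for example via FindWindow) is assumed:

    using System;
    using System.Runtime.InteropServices;

    static class WindowActivator
    {
        [DllImport("user32.dll")] static extern bool IsIconic(IntPtr hWnd);
        [DllImport("user32.dll")] static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);
        [DllImport("user32.dll")] static extern bool SetForegroundWindow(IntPtr hWnd);

        const int SW_RESTORE = 9;

        public static void BringToFront(IntPtr hWnd)
        {
            if (IsIconic(hWnd))               // 1. minimized?
                ShowWindow(hWnd, SW_RESTORE); // 2. restore it first
            SetForegroundWindow(hWnd);        // 3. then take focus
        }
    }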
Sending a mouse click to a button in the taskbar using C#
In an application that I am currently working on, a requirement is to bring a window of an external application to the foreground. Win32 API calls such as BringWindowToTop and SetForegroundWindow do not work all the time. This is due to some restrictions within Windows XP. What I would like to do instead is simulate a mouse click on the window's button on the taskbar, which I am hoping will bring the window to the front. Does anyone know how this is possible?
[ "Check out the section \"How to steal focus on 2K/XP\" at http://www.codeproject.com/KB/dialog/dlgboxtricks.aspx, as this is exactly what you need. I wouldn't go the taskbar route as the taskbar could be hidden or simply not there.\n", "It's possible. But it's extremely sketchy. Your application may also break with the next version of Windows, since it's undocumented. What you need to do is find the window handle of the taskbar, then find the window handle of the child window representing the button, then send it a WM_MOUSEDOWN (I think) message.\nHere's a bit on finding the window handle of the taskbar:\nhttp://www.codeproject.com/\nFWIW, the restrictions on BringWindowToTop/SetForeground are there because it's irritating when a window steals focus. That may not matter if you're working on a corporate environment. Just keep it in mind. :)\n", "I used this in a program where I needed to simulate clicks and mouse movements; \nGlobal Mouse and Keyboard Library\n", "To be honest I've never had an issue bringing a window to the foreground on XP/Vista/2003/2000. \nYou need to make sure you do the following:\n\nCheck if IsIconic (minimized) \nIf #1 results in true then call\nShowWindow passing SW_RESTORE \nThen call SetForegroundWindow\n\nI've never had problems that I can think of doing it with those steps. \n" ]
[ 4, 2, 1, 1 ]
[]
[]
[ ".net", "c#", "winapi", "windows" ]
stackoverflow_0000018505_.net_c#_winapi_windows.txt
Q: How to create a new instance of Sql Server 2005 I forgot my password for Sql Server 2005. Windows Authentication is not enabled so I cannot login. How can I remove the current instance and create a new db instance? Or does a better solution exist? A: Assuming you are a member of the Windows Administrator group, you can put the server in Single User mode; you could try this - http://blogs.msdn.com/raulga/archive/2007/07/12/disaster-recovery-what-to-do-when-the-sa-account-password-is-lost-in-sql-server-2005.aspx A: My read of the question was that the server is set up to use SQL authentication only, and perhaps you don't know the sa password or any other SQL login credentials? If so, you might be able to change the authentication mode. For SQL Server 2005 default instances, it's stored in the registry at: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer in a DWORD called LoginMode. A value of 2 indicates Mixed Mode (both Windows and SQL authentication are supported); I think 0 is Windows only and 1 is SQL only. You can try changing it to 2, restart the MSSQL service, then try to get into the SQL management studio after logging into the machine as an administrator. If that fails, you can create another instance by re-running the setup program. A: Have you tried connecting when logged on as domain/server-local Administrator?
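The registry change from the second answer could be scripted with a C# sketch like the following; it assumes administrative rights, a default instance (the MSSQL.1 key), and that you back up the key first:

    using Microsoft.Win32;

    class EnableMixedMode
    {
        static void Main()
        {
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                @"SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer", true))
            {
                key.SetValue("LoginMode", 2, RegistryValueKind.DWord); // 2 = Mixed Mode
            }
            // Restart the SQL Server service afterwards for the change to take effect.
        }
    }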
How to create a new instance of Sql Server 2005
I forgot my password for Sql Server 2005. Windows Authentication is not enabled so I cannot login. How can I remove the current instance and create a new db instance? Or does a better solution exist?
[ "Assuming you are a member of the Windows Admininstrator group, you can put the server in Single User mode, you could try this -\nhttp://blogs.msdn.com/raulga/archive/2007/07/12/disaster-recovery-what-to-do-when-the-sa-account-password-is-lost-in-sql-server-2005.aspx\n", "My read of the question was that the server is set up to use SQL authentication only, and perhaps you don't know the sa password or any other SQL login credentials? If so, you might be able to change the authentication mode. For SQL Server 2005 default instances, it's stored in the registry at:\nHKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Microsoft SQL Server\\MSSQL.1\\MSSQLServer\nin a DWORD called LoginMode. A value of 2 indicates Mixed Mode (both Windows and SQL authentication are supported); I think 0 is Windows only and 1 is SQL only. You can try changing it to 2, restart the MSSQL service, then try to get into the SQL management studio after logging into the machine as an administrator.\nIf that fails, you can create another instance by re-running the setup program.\n", "Have you tried connecting when logged on as domain/server-local Administrator?\n" ]
[ 2, 2, 0 ]
[]
[]
[ "sql_server", "sql_server_2005" ]
stackoverflow_0000018772_sql_server_sql_server_2005.txt
Q: Batch code indenters and beautifiers Does anyone here know of good batch file code indenters or beautifiers? Specifically for PHP, JS and SGML-languages. Preferably with options as to style. A: The following page has code on it to tidy Javascript (written in javascript as well): http://www.howtocreate.co.uk/tutorials/jsexamples/JSTidy.html There are various ways to tidy SGML based files (i.e. XML) - HTMLTidy will often do the trick, and there are various 'pretty print' implementations in various languages out there. And finally a link to a web site with PHP code for pretty printing PHP: http://tobyinkster.co.uk/blog/2007/07/17/php-pretty-printer/ A: For HTML/XML HTML Tidy is the best option: http://tidy.sourceforge.net/
Batch code indenters and beautifiers
Does anyone here know of good batch file code indenters or beautifiers? Specifically for PHP, JS and SGML-languages. Preferably with options as to style.
[ "The following page has code on it to tidy Javascript (written in javascript as well):\nhttp://www.howtocreate.co.uk/tutorials/jsexamples/JSTidy.html\nThere are various ways to tidy SGML based files (i.e. XML) - HTMLTidy will often do the trick, and there are various 'pretty print' implementations in various languages out there.\nAnd finally a link to a web site with PHP code for pretty printing PHP: http://tobyinkster.co.uk/blog/2007/07/17/php-pretty-printer/\n", "For HTML/XML HTML Tidy is the best option:\nhttp://tidy.sourceforge.net/\n" ]
[ 1, 1 ]
[]
[]
[ "coding_style", "html", "javascript", "php" ]
stackoverflow_0000018858_coding_style_html_javascript_php.txt
Q: What's the difference between a Table Scan and a Clustered Index Scan? Since both a Table Scan and a Clustered Index Scan essentially scan all records in the table, why is a Clustered Index Scan supposedly better? As an example - what's the performance difference between the following when there are many records?: declare @temp table( SomeColumn varchar(50) ) insert into @temp select 'SomeVal' select * from @temp ----------------------------- declare @temp table( RowID int not null identity(1,1) primary key, SomeColumn varchar(50) ) insert into @temp select 'SomeVal' select * from @temp A: In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a lookup into the Index Allocation Map. A clustered table, however, has its data pages linked in a doubly linked list - making sequential scans a bit faster. Of course, in exchange, you have the overhead of dealing with keeping the data pages in order on INSERT, UPDATE, and DELETE. A heap table, however, requires a second write to the IAM. If your query has a RANGE operator (e.g.: SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s). A heap would have to scan all rows, since it cannot rely on ordering. And, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan. So: For your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows. For a query with a WHERE clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table. For a query that is not satisfied by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal. For INSERT, UPDATE, and DELETE a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent. Microsoft has a whitepaper which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable): INSERT performance: clustered index wins by about 3% due to the second write needed for a heap. UPDATE performance: clustered index wins by about 8% due to the second lookup needed for a heap. DELETE performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap. single SELECT performance: clustered index wins by about 16% due to the second lookup needed for a heap. range SELECT performance: clustered index wins by about 29% due to the random ordering for a heap. concurrent INSERT: heap table wins by 30% under load due to page splits for the clustered index. 
A: http://msdn.microsoft.com/en-us/library/aa216840(SQL.80).aspx The Clustered Index Scan logical and physical operator scans the clustered index specified in the Argument column. When an optional WHERE:() predicate is present, only those rows that satisfy the predicate are returned. If the Argument column contains the ORDERED clause, the query processor has requested that the rows' output be returned in the order in which the clustered index has sorted them. If the ORDERED clause is not present, the storage engine will scan the index in the optimal way (not guaranteeing the output to be sorted). http://msdn.microsoft.com/en-us/library/aa178416(SQL.80).aspx The Table Scan logical and physical operator retrieves all rows from the table specified in the Argument column. If a WHERE:() predicate appears in the Argument column, only those rows that satisfy the predicate are returned.
What's the difference between a Table Scan and a Clustered Index Scan?
Since both a Table Scan and a Clustered Index Scan essentially scan all records in the table, why is a Clustered Index Scan supposedly better? As an example - what's the performance difference between the following when there are many records?: declare @temp table( SomeColumn varchar(50) ) insert into @temp select 'SomeVal' select * from @temp ----------------------------- declare @temp table( RowID int not null identity(1,1) primary key, SomeColumn varchar(50) ) insert into @temp select 'SomeVal' select * from @temp
[ "In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a lookup into the Index Allocation Map.\nA clustered table, however, has it's data pages linked in a doubly linked list - making sequential scans a bit faster. Of course, in exchange, you have the overhead of dealing with keeping the data pages in order on INSERT, UPDATE, and DELETE. A heap table, however, requires a second write to the IAM.\nIf your query has a RANGE operator (e.g.: SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s). A heap would have to scan all rows, since it cannot rely on ordering.\nAnd, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan.\nSo:\n\nFor your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows.\nFor a query with a WHERE clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table.\nFor a query that is not satisified by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal.\nFor INSERT, UPDATE, and DELETE a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent.\n\nMicrosoft has a whitepaper which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable):\n\nINSERT performance: clustered index wins by about 3% due to the second write needed for a heap.\nUPDATE performance: clustered index wins by about 8% due to the second lookup needed for a heap.\nDELETE performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap.\nsingle SELECT performance: clustered index wins by about 16% due to the second lookup needed for a heap.\nrange SELECT performance: clustered index wins by about 29% due to the random ordering for a heap.\nconcurrent INSERT: heap table wins by 30% under load due to page splits for the clustered index.\n\n", "http://msdn.microsoft.com/en-us/library/aa216840(SQL.80).aspx\nThe Clustered Index Scan logical and physical operator scans the clustered index specified in the Argument column. When an optional WHERE:() predicate is present, only those rows that satisfy the predicate are returned. If the Argument column contains the ORDERED clause, the query processor has requested that the rows' output be returned in the order in which the clustered index has sorted them. 
If the ORDERED clause is not present, the storage engine will scan the index in the optimal way (not guaranteeing the output to be sorted).\nhttp://msdn.microsoft.com/en-us/library/aa178416(SQL.80).aspx\nThe Table Scan logical and physical operator retrieves all rows from the table specified in the Argument column. If a WHERE:() predicate appears in the Argument column, only those rows that satisfy the predicate are returned.\n" ]
[ 86, 5 ]
[ "A table scan has to examine every single row of the table. The clustered index scan only needs to scan the index. It doesn't scan every record in the table. That's the point, really, of indices.\n" ]
[ -3 ]
[ "indexing", "sql", "sql_server" ]
stackoverflow_0000018764_indexing_sql_sql_server.txt
Q: Modifying Cruise Control.NET We are investigating using CruiseControl.NET as both a Continuous Integration build provider, as well as automating the first part of our deployment process. Has anyone modified CruiseControl.NET's dashboard to add custom login and user roles (i.e., separate out access to forcing a build to only certain individuals on a per-project basis)? The dashboard is a .NET App, but I believe it uses the nVelocity view engine instead of web forms, which I don't have experience with. Can you mix nVelocity and Webforms, or do I need to spend a day learning something new =) A: @Keith: We are leveraging CC.NET to both run a CI build, as well as being able to use the Force Build feature to do a Build + Deploy. That is why we want hands off the dashboard. I found this morning that I was able to place CCNET in a virtual directory within another web app. This allowed me to setup Forms Authentication, and let the root app manage that. Problem solved. A: Why do you need to? Do you really need to limit users in this way with an integration server? I think that's why CC.Net doesn't have that sort of support built in. You can always see who forced a build, and control it that way. I find that continuous integration works best with regular builds and regular unit test runs (our rather large C# app + test run takes 25 mins and checks hourly), so for me forcing a build is rarely an issue. If you want some users to have some kind of report-only access you could limit them so that they can't access the CC.Net web application at all. All the results (MSBuild, NCover, NUnit, FxCop, etc) are in XML, so you can build relatively simple report pages out of XSLT.
Modifying Cruise Control.NET
We are investigating using CruiseControl.NET as both a Continuous Integration build provider, as well as automating the first part of our deployment process. Has anyone modified CruiseControl.NET's dashboard to add custom login and user roles (i.e., separate out access to forcing a build to only certain individuals on a per-project basis)? The dashboard is a .NET App, but I believe it uses the nVelocity view engine instead of web forms, which I don't have experience with. Can you mix nVelocity and Webforms, or do I need to spend a day learning something new =)
[ "@Keith:\nWe are leveraging CC.NET to both run a CI build, as well as being able to use the Force Build feature to do a Build + Deploy. That is why we want hands off the dashboard.\nI found this morning that I was able to place CCNET in a virtual directory within another web app, This allowed me to setup Forms Authentication, and let the root app manage that. Problem solved.\n", "Why do you need to? Do you really need to limit users in the way with an integration server. I think that's why CC.Net doesn't have that sort of support built in.\nYou can always see who forced a build, and control it that way.\nI find that continuous integration works best with regular builds and regular unit test runs (our rather large C# app + test run takes 25 mins and checks hourly), so for me forcing a build is rarely an issue.\nIf you want some users to have some kind of report-only access you could limit them so that they can't access the CC.Net web application at all. \nAll the results (MSBuild, NCover, NUnit, FxCop, etc) are in XML, so you can build relativity simple report pages out of XSLT.\n" ]
[ 3, 2 ]
[]
[]
[ "cruisecontrol.net", "nvelocity" ]
stackoverflow_0000018093_cruisecontrol.net_nvelocity.txt
Q: .NET VirtualPathProviders and Pre-Compilation We've been working on an application that quite heavily relies on VirtualPathProviders in ASP.NET. We've just come to put the thing on a live server to demonstrate it and it appears that the VirtualPathProviders simply don't work when the site is pre-compiled!! I've been looking at the workaround which has been posted here: http://sunali.com/2008/01/09/virtualpathprovider-in-precompiled-web-sites/, but so far I haven't been able to get that to work, either! (Well - it works fine in visual studio's web development server - just not on our IIS box - again!). Does anybody here have any more information on the problem? Is it fixed in .NET v3.5 (we're currently building for v2.0)? A: Unfortunately that is not officially supported. See the following MSDN article. If a Web site is precompiled for deployment, content provided by a VirtualPathProvider instance is not compiled, and no VirtualPathProvider instances are used by the precompiled site. The site you referred to is an unofficial workaround. I don't think it's been fixed in .NET 3.5 SP1
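For context, a minimal VirtualPathProvider wiring looks roughly like the sketch below (the names are assumptions; MyVirtualFile stands in for some VirtualFile subclass you would supply). Note that, per the answer, none of this is consulted once the site is precompiled:

    using System.Web.Hosting;

    public class MyVirtualPathProvider : VirtualPathProvider
    {
        // Assumption for the sketch: anything under /VirtualContent/ is virtual.
        private static bool IsVirtual(string virtualPath)
        {
            return virtualPath.Contains("/VirtualContent/");
        }

        public override bool FileExists(string virtualPath)
        {
            // Serve virtual files, otherwise defer to the previously registered provider.
            return IsVirtual(virtualPath) || Previous.FileExists(virtualPath);
        }

        public override VirtualFile GetFile(string virtualPath)
        {
            return IsVirtual(virtualPath)
                ? new MyVirtualFile(virtualPath)   // hypothetical VirtualFile subclass
                : Previous.GetFile(virtualPath);
        }
    }

    // Registered once at startup, e.g. in Global.asax:
    // HostingEnvironment.RegisterVirtualPathProvider(new MyVirtualPathProvider());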
.NET VirtualPathProviders and Pre-Compilation
We've been working on an application that quite heavily relies on VirtualPathProviders in ASP.NET. We've just come to put the thing on a live server to demonstrate it and it appears that the VirtualPathProviders simply don't work when the site is pre-compiled!! I've been looking at the workaround which has been posted here: http://sunali.com/2008/01/09/virtualpathprovider-in-precompiled-web-sites/, but so far I haven't been able to get that to work, either! (Well - it works fine in visual studio's web development server - just not on our IIS box - again!). Does anybody here have any more information on the problem? Is it fixed in .NET v3.5 (we're currently building for v2.0)?
[ "Unfortunately that is not officially supported. See the following MSDN article.\n\nIf a Web site is precompiled for deployment, content provided by a VirtualPathProvider instance is not compiled, and no VirtualPathProvider instances are used by the precompiled site. \n\nThe site you referred to is an unofficial workaround. I don't think it's been fixed in .NET 3.5 SP1\n" ]
[ 4 ]
[]
[]
[ "asp.net", "virtualpathprovider" ]
stackoverflow_0000012397_asp.net_virtualpathprovider.txt
Q: Warning C4341 - 'XX': signed value is out of range for enum constant When compiling my C++ .Net application I get 104 warnings of the type: Warning C4341 - 'XX': signed value is out of range for enum constant Where XX can be WCHAR LONG BIT BINARY GUID ... I can't seem to remove these warnings whatever I do. When I double click on them it takes me to a part of my code that uses OdbcParameters - and when I try a test project with all the rest of my stuff but no OdbcParameters, it doesn't give the warnings. Any idea how I can get rid of these warnings? They're making real warnings from code I've actually written hard to see - and it just gives me a horrible feeling knowing my app has 104 warnings! A: This is a compiler bug. Here's another post confirming it's a known issue. I've got the same issue in one of my projects and there's no way to prevent it from being triggered unless you have some way of avoiding the use of OdbcParameter. The most conservative way to suppress only the buggy warnings is to use #pragma warning( push ) #pragma warning( disable: 4341 ) // code affected by bug #pragma warning( pop ) A: In Visual Studio you can always disable specific warnings by going to: Project settings -> C/C++ -> Advanced -> Disable Specific warnings: 4341 A: Either wait for a compiler fix or don't #include code that triggers it. [A verbose way of saying you probably can't.]
Warning C4341 - 'XX': signed value is out of range for enum constant
When compiling my C++ .Net application I get 104 warnings of the type: Warning C4341 - 'XX': signed value is out of range for enum constant Where XX can be WCHAR LONG BIT BINARY GUID ... I can't seem to remove these warnings whatever I do. When I double click on them it takes me to a part of my code that uses OdbcParameters - and when I try a test project with all the rest of my stuff but no OdbcParameters, it doesn't give the warnings. Any idea how I can get rid of these warnings? They're making real warnings from code I've actually written hard to see - and it just gives me a horrible feeling knowing my app has 104 warnings!
[ "This is a compiler bug. Here's another post confirming it's a known issue. I've got the same issue in one of my projects and there's no way to prevent it from being triggered unless you have some way of avoiding the use of OdbcParameter. The most conservative way to suppress only the buggy warnings is to use\n#pragma warning( push )\n#pragma warning( disable: 4341 )\n\n// code affected by bug\n\n#pragma warning( pop )\n\n", "In Visual Studio you can always disable specific warnings by going to:\n\nProject settings -> C/C++ -> Advanced -> Disable Specific warnings: 4341\n\n", "Either wait for a compiler fix or dont #include code that triggers it.\n[A verbose way of saying you probably can't.]\n" ]
[ 4, 3, 0 ]
[]
[]
[ ".net", "c++", "visual_c++" ]
stackoverflow_0000017786_.net_c++_visual_c++.txt
Q: .NET 3.5 SP1 and aspnet_client Crystal Reports I recently (a few days ago) installed .NET 3.5 SP1 and subsequently an aspnet_client folder with a bunch of Crystal Reports support code has been injected into my .net web apps. Anybody else experienced this? Am I correct in saying that this is a side effect of SP1? What is this? A: No it is a side effect of Crystal Reports. If you don't need it, remove it from your computer it is nothing but a headache. It is safe to delete the aspnet_client folder. A: What do you need to remove? It keeps on adding that folder back to the project that I'm working on...
.NET 3.5 SP1 and aspnet_client Crystal Reports
I recently (a few days ago) installed .NET 3.5 SP1 and subsequently an aspnet_client folder with a bunch of Crystal Reports support code has been injected into my .net web apps. Anybody else experienced this? Am I correct in saying that this is a side effect of SP1? What is this?
[ "No it is a side effect of Crystal Reports. If you don't need it, remove it from your computer it is nothing but a headache. It is safe to delete the aspnet_client folder.\n", "What do you need to remove? It keeps on adding that folder back to the project that I'm working on...\n" ]
[ 1, 0 ]
[]
[]
[ ".net", ".net_3.5", "asp.net", "crystal_reports" ]
stackoverflow_0000013545_.net_.net_3.5_asp.net_crystal_reports.txt
Q: Should menu items always be enabled? And how do you tell the user? One of the things that has been talked about a few times on the podcast is whether menu items should always be enabled to prevent "WHY ISN'T THIS AVAILABLE!" frustration for the end user. This strikes me as a good idea, but then there's the issue of communicating the lack of availability (and the reason why) to the user. Is there anything better than just popping up a message box with a blurb of text? As I'm about to start on a fairly sizeable cross-platform Windows / Mac app I thought I'd throw this out to hear the wisdom of the SO crowd. A: One thing I've seen a printer manufacturer do with their printer properties dialog is to have a little help balloon icon beside disabled items that displays a tooltip when hovered over. Another thing you can do with disabled items is to add in parentheses why it's disabled or what the user would have to do to enable it. E.g., "Save (already saved)" or "Copy (select something to copy)". I don't like keeping it enabled because then it will instill hesitation in users to select any menu item in fear that they'll just get an error message making them feel stupid for not realizing that they couldn't possibly perform that operation at the time. Menu items that spring dialogs have an ellipsis (...) after them to let users know it's not just click and carry on. Required form fields have an asterisk or bold label to spare the user from being scolded with a validation error message. A: You have to consider the alternatives. Hide the menu item. This is bad. Now you have menu items disappearing and reappearing all the time? Disable the menu item. Now the user can find what they're looking for, it just isn't obvious how to enable it. This is better, but still leaves the user slightly puzzled. Keep the menu item enabled, but make it display a dialog that explains what needs to be done when the program is in a state where the menu item can't be properly used. I agree with Joel on this one, #3 seems like the best choice. A: Joel has a post on that http://www.joelonsoftware.com/items/2008/07/01.html which might be a good place to start thinking about this. A: @Bill the Lizard: I'd combine #2 and #3 - disable the item, but have a tooltip that indicates why it is disabled.
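In WinForms terms, the "disable it but say why" suggestion might look like this sketch (the control name is an assumption):

    private void UpdateCopyMenuItem(bool hasSelection)
    {
        // The item stays visible; its caption carries the reason it is unavailable.
        copyMenuItem.Enabled = hasSelection;
        copyMenuItem.Text = hasSelection
            ? "Copy"
            : "Copy (select something to copy)";
    }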
Should menu items always be enabled? And how do you tell the user?
One of the things that has been talked about a few times on the podcast is whether menu items should always be enabled to prevent "WHY ISN'T THIS AVAILABLE!" frustration for the end user. This strikes me as a good idea, but then there's the issue of communicating the lack of availability (and the reason why) to the user. Is there anything better than just popping up a message box with a blurb of text? As I'm about to start on a fairly sizeable cross-platform Windows / Mac app I thought I'd throw this out to hear the wisdom of the SO crowd.
[ "One thing I've seen a printer manufacturer do with their printer properties dialog is to have a little help baloon icon beside disabled items that display a tooltip when hovered over.\nAnother thing you can do with disabled items is to add in parenthesis why it's disabled or what the user would have to do to enable it. E.g., \"Save (already saved)\" or \"Copy (select something to copy)\".\nI don't like keeping it enabled because then it will instill hesitation in users to select any menu item in fear that they'll just get an error message making them feel stupid for not realizing that they couldn't possibly perform that operation at the time. \nMenu items that spring dialogs have elipsis (...) after them to let users know it's not just click and carry on. Required form fields have an asterisk or bold label to spare the user from being scolded with a validation error message.\n", "You have to consider the alternatives.\n\nHide the menu item. This is bad. Now you have menu items disappearing and reappearing all the time?\nDisable the menu item. Now the user can find what they're looking for, it just isn't obvious how to enable it. This is better, but still leaves the user slightly puzzled.\nKeep the menu item enabled, but make it display a dialog that explains what needs to be done when the program is in a state where the menu item can't be properly used.\n\nI agree with Joel on this one, #3 seems like the best choice.\n", "Joel has a post on that http://www.joelonsoftware.com/items/2008/07/01.html which might be a good place to start thinking about this.\n", "@Bill the Lizard: I'd combine #2 and #3 - disable the item, but have a tooltip that indicates why it is disabled.\n" ]
[ 4, 2, 0, 0 ]
[]
[]
[ "menu_items", "usability", "user_interface" ]
stackoverflow_0000019113_menu_items_usability_user_interface.txt
Q: Programming Glossary As I browse through the site, I find a lot of terms that many developers just starting out (and even some advanced developers) may be unfamiliar with. It would be great if people could post here with a term and definition that might be unknown to beginners or those from different programming backgrounds. Some not-so-common terms I've seen are 'auto boxing', 'tuples', 'orthogonal code', 'domain driven design', 'test driven development', etc. Code snippets would also be helpful where applicable.. A: http://en.wikipedia.org/wiki/Boxing_(Computer_science)#Boxing http://en.wikipedia.org/wiki/Tuples http://en.wikipedia.org/wiki/Orthogonal#Computer_science http://en.wikipedia.org/wiki/Domain_driven_design http://en.wikipedia.org/wiki/Test_driven_development Someone may have beat us to it ;) A: http://en.wikipedia.org/wiki/Boxing_%28Computer_science%29#Boxing thats the correct link for boxing as related to computer science :D A: Better yet, a site domain dictionary, containing a definition (over time) for every programming term on Stackoverflow, with the definition itself modded according to the Wiki-like aspects Atwood and others have been discussing. There are coding dictionaries out there but they're all either a) crap or b) not extensible or editable in a collaborative way. Right now if I come across an unfamiliar programming term or acronym my first stop is Google, followed by Wiki, followed by one of the many dedicated dictionaries. No reason why Stackoverflow shouldn't be on that list. A: The c2 Wiki kicks butt. Great combination of concise definitions and examples, plus discussions that break it down when there are different interpretations. A: It may actually be helpful to go around adding the tag 'glossary' to specific questions (I recently saw one about Expressions vs. Statements, for instance).
Programming Glossary
As I browse through the site, I find a lot of terms that many developers just starting out (and even some advanced developers) may be unfamiliar with. It would be great if people could post here with a term and definition that might be unknown to beginners or those from different programming backgrounds. Some not-so-common terms I've seen are 'auto boxing', 'tuples', 'orthogonal code', 'domain driven design', 'test driven development', etc. Code snippets would also be helpful where applicable..
[ "\nhttp://en.wikipedia.org/wiki/Boxing_(Computer_science)#Boxing\nhttp://en.wikipedia.org/wiki/Tuples\nhttp://en.wikipedia.org/wiki/Orthogonal#Computer_science\nhttp://en.wikipedia.org/wiki/Domain_driven_design\nhttp://en.wikipedia.org/wiki/Test_driven_development\n\nSomeone may have beat us to it ;)\n", "http://en.wikipedia.org/wiki/Boxing_%28Computer_science%29#Boxing \nthats the correct link for boxing as related to computer science :D\n", "Better yet, a site domain dictionary, containing a definition (over time) for every programming term on Stackoverflow, with the definition itself modded according to the Wiki-like aspects Atwood and others have been discussing.\nThere are coding dictionaries out there but they're all either a) crap or b) not extensible or editable in a collaborative way.\nRight now if I come across an unfamiliar programming term or acronym my first stop is Google, followed by Wiki, followed by one of the many dedicated dictionaries. No reason why Stackoverflow shouldn't be on that list.\n", "The c2 Wiki kicks butt. Great combination of concise definitions and examples, plus discussions that break it down when there are different interpretations.\n", "It may actually be helpful to go around adding the tag 'glossary' to specific questions (I recently saw one about Expressions vs. Statements, for instance).\n" ]
[ 1, 1, 1, 1, 1 ]
[]
[]
[ "glossary", "language_agnostic" ]
stackoverflow_0000015729_glossary_language_agnostic.txt
Q: PHP Script to populate MySQL tables Is anyone aware of a script/class (preferably in PHP) that would parse a given MySQL table's structure and then fill it with x number of rows of random test data based on the field types? I have never seen or heard of something like this and thought I would check before writing one myself. A: What you are after would be a data generator. There is one available here which I had bookmarked but I haven't got around to trying it yet.
PHP Script to populate MySQL tables
Is anyone aware of a script/class (preferably in PHP) that would parse a given MySQL table's structure and then fill it with x number of rows of random test data based on the field types? I have never seen or heard of something like this and thought I would check before writing one myself.
[ "What you are after would be a data generator.\nThere is one available here which i had bookmarked but i haven't got around to trying it yet.\n" ]
[ 21 ]
[]
[]
[ "dataset", "mysql", "php", "test_data", "testing" ]
stackoverflow_0000019162_dataset_mysql_php_test_data_testing.txt
Q: How do I do an Upsert Into Table? I have a view that has a list of jobs in it, with data like who they're assigned to and the stage they are in. I need to write a stored procedure that returns how many jobs each person has at each stage. So far I have this (simplified): DECLARE @ResultTable table ( StaffName nvarchar(100), Stage1Count int, Stage2Count int ) INSERT INTO @ResultTable (StaffName, Stage1Count) SELECT StaffName, COUNT(*) FROM ViewJob WHERE InStage1 = 1 GROUP BY StaffName INSERT INTO @ResultTable (StaffName, Stage2Count) SELECT StaffName, COUNT(*) FROM ViewJob WHERE InStage2 = 1 GROUP BY StaffName The problem with that is that the rows don't combine. So if a staff member has jobs in stage1 and stage2 there's two rows in @ResultTable. What I would really like to do is to update the row if one exists for the staff member and insert a new row if one doesn't exist. Does anyone know how to do this, or can suggest a different approach? I would really like to avoid using cursors to iterate on the list of users (but that's my fall back option). I'm using SQL Server 2005. Edit: @Lee: Unfortunately the InStage1 = 1 was a simplification. It's really more like WHERE DateStarted IS NOT NULL and DateFinished IS NULL. Edit: @BCS: I like the idea of doing an insert of all the staff first so I just have to do an update every time. But I'm struggling to get those UPDATE statements correct. A: Actually, I think you're making it much harder than it is. Won't this code work for what you're trying to do? SELECT StaffName, SUM(InStage1) AS 'JobsAtStage1', SUM(InStage2) AS 'JobsAtStage2' FROM ViewJob GROUP BY StaffName A: You could just check for existence and use the appropriate command. I believe this really does use a cursor behind the scenes, but it's the best you'll likely get: IF (EXISTS (SELECT * FROM MyTable WHERE StaffName = @StaffName)) begin UPDATE MyTable SET ... WHERE StaffName = @StaffName end else begin INSERT MyTable ... end SQL2008 has a new MERGE capability which is cool, but it's not in 2005. A: To get a real "upsert" type of query you need to use an if exists... type of thing, and this unfortunately means using a cursor. However, you could run two queries, one to do your updates where there is an existing row, then afterwards insert the new one. I'd think this set-based approach would be preferable unless you're dealing exclusively with small numbers of rows. A: IIRC there is some sort of "On Duplicate" (name might be wrong) syntax that lets you update if a row exists (MySQL) Alternately some form of: INSERT INTO @ResultTable (StaffName, Stage1Count, Stage2Count) SELECT StaffName,0,0 FROM ViewJob GROUP BY StaffName UPDATE @ResultTable SET Stage1Count= ( SELECT COUNT(*) AS count FROM ViewJob WHERE InStage1 = 1 AND @ResultTable.StaffName = StaffName) UPDATE @ResultTable SET Stage2Count= ( SELECT COUNT(*) AS count FROM ViewJob WHERE InStage2 = 1 AND @ResultTable.StaffName = StaffName) A: The following query on your result table should combine the rows again. This is assuming that InStage1 and InStage2 are never both '1'. select distinct(rt1.StaffName), rt2.Stage1Count, rt3.Stage2Count from @ResultTable rt1 left join @ResultTable rt2 on rt1.StaffName=rt2.StaffName and rt2.Stage1Count is not null left join @ResultTable rt3 on rt1.StaffName=rt2.StaffName and rt3.Stage2Count is not null A: I managed to get it working with a variation of BCS's answer. It wouldn't let me use a table variable though, so I had to make a temp table. 
CREATE TABLE #ResultTable ( StaffName nvarchar(100), Stage1Count int, Stage2Count int ) INSERT INTO #ResultTable (StaffName) SELECT StaffName FROM ViewJob GROUP BY StaffName UPDATE #ResultTable SET Stage1Count= ( SELECT COUNT(*) FROM ViewJob V WHERE InStage1 = 1 AND V.StaffName = #ResultTable.StaffName COLLATE Latin1_General_CI_AS GROUP BY V.StaffName), Stage2Count= ( SELECT COUNT(*) FROM ViewJob V WHERE InStage2 = 1 AND V.StaffName = #ResultTable.StaffName COLLATE Latin1_General_CI_AS GROUP BY V.StaffName) SELECT StaffName, Stage1Count, Stage2Count FROM #ResultTable DROP TABLE #ResultTable
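If the upsert has to be issued from application code rather than a stored procedure, the exists-check pattern above translates to an ADO.NET sketch like this (table and column names follow the question; the connection string and local variables are assumed):

    // requires: using System.Data.SqlClient;
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(@"
        IF EXISTS (SELECT * FROM MyTable WHERE StaffName = @StaffName)
            UPDATE MyTable SET Stage1Count = @Stage1Count WHERE StaffName = @StaffName
        ELSE
            INSERT INTO MyTable (StaffName, Stage1Count) VALUES (@StaffName, @Stage1Count)", conn))
    {
        cmd.Parameters.AddWithValue("@StaffName", staffName);
        cmd.Parameters.AddWithValue("@Stage1Count", stage1Count);
        conn.Open();
        cmd.ExecuteNonQuery();
    }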
How do I do an Upsert Into Table?
I have a view that has a list of jobs in it, with data like who they're assigned to and the stage they are in. I need to write a stored procedure that returns how many jobs each person has at each stage. So far I have this (simplified): DECLARE @ResultTable table ( StaffName nvarchar(100), Stage1Count int, Stage2Count int ) INSERT INTO @ResultTable (StaffName, Stage1Count) SELECT StaffName, COUNT(*) FROM ViewJob WHERE InStage1 = 1 GROUP BY StaffName INSERT INTO @ResultTable (StaffName, Stage2Count) SELECT StaffName, COUNT(*) FROM ViewJob WHERE InStage2 = 1 GROUP BY StaffName The problem with that is that the rows don't combine. So if a staff member has jobs in stage1 and stage2 there's two rows in @ResultTable. What I would really like to do is to update the row if one exists for the staff member and insert a new row if one doesn't exist. Does anyone know how to do this, or can suggest a different approach? I would really like to avoid using cursors to iterate on the list of users (but that's my fall back option). I'm using SQL Server 2005. Edit: @Lee: Unfortunately the InStage1 = 1 was a simplification. It's really more like WHERE DateStarted IS NOT NULL and DateFinished IS NULL. Edit: @BCS: I like the idea of doing an insert of all the staff first so I just have to do an update every time. But I'm struggling to get those UPDATE statements correct.
[ "Actually, I think you're making it much harder than it is. Won't this code work for what you're trying to do?\nSELECT StaffName, SUM(InStage1) AS 'JobsAtStage1', SUM(InStage2) AS 'JobsAtStage2'\n FROM ViewJob\nGROUP BY StaffName\n\n", "You could just check for existence and use the appropriate command. I believe this really does use a cursor behind the scenes, but it's the best you'll likely get: \nIF (EXISTS (SELECT * FROM MyTable WHERE StaffName = @StaffName))\nbegin\n UPDATE MyTable SET ... WHERE StaffName = @StaffName\nend\nelse\nbegin\n INSERT MyTable ...\nend \n\nSQL2008 has a new MERGE capability which is cool, but it's not in 2005.\n", "To get a real \"upsert\" type of query you need to use an if exists... type of thing, and this unfortunately means using a cursor.\nHowever, you could run two queries, one to do your updates where there is an existing row, then afterwards insert the new one. I'd think this set-based approach would be preferable unless you're dealing exclusively with small numbers of rows.\n", "IIRC there is some sort of \"On Duplicate\" (name might be wrong) syntax that lets you update if a row exists (MySQL)\nAlternately some form of:\nINSERT INTO @ResultTable (StaffName, Stage1Count, Stage2Count)\n SELECT StaffName,0,0 FROM ViewJob\n GROUP BY StaffName\n\nUPDATE @ResultTable Stage1Count= (\n SELECT COUNT(*) AS count FROM ViewJob\n WHERE InStage1 = 1\n @ResultTable.StaffName = StaffName)\n\nUPDATE @ResultTable Stage2Count= (\n SELECT COUNT(*) AS count FROM ViewJob\n WHERE InStage2 = 1\n @ResultTable.StaffName = StaffName)\n\n", "The following query on your result table should combine the rows again. This is assuming that InStage1 and InStage2 are never both '1'.\nselect distinct(rt1.StaffName), rt2.Stage1Count, rt3.Stage2Count\nfrom @ResultTable rt1\nleft join @ResultTable rt2 on rt1.StaffName=rt2.StaffName and rt2.Stage1Count is not null\nleft join @ResultTable rt3 on rt1.StaffName=rt2.StaffName and rt3.Stage2Count is not null\n\n", "I managed to get it working with a variation of BCS's answer. It wouldn't let me use a table variable though, so I had to make a temp table.\nCREATE TABLE #ResultTable\n(\n StaffName nvarchar(100),\n Stage1Count int,\n Stage2Count int\n)\n\nINSERT INTO #ResultTable (StaffName)\n SELECT StaffName FROM ViewJob\n GROUP BY StaffName\n\nUPDATE #ResultTable SET \n Stage1Count= (\n SELECT COUNT(*) FROM ViewJob V\n WHERE InStage1 = 1 AND \n V.StaffName = @ResultTable.StaffName COLLATE Latin1_General_CI_AS\n GROUP BY V.StaffName),\n Stage2Count= (\n SELECT COUNT(*) FROM ViewJob V\n WHERE InStage2 = 1 AND \n V.StaffName = @ResultTable.StaffName COLLATE Latin1_General_CI_AS\n GROUP BY V.StaffName)\n\nSELECT StaffName, Stage1Count, Stage2Count FROM #ResultTable\n\nDROP TABLE #ResultTable\n\n" ]
[ 3, 2, 1, 1, 0, 0 ]
[]
[]
[ "sql", "sql_server", "tsql" ]
stackoverflow_0000019089_sql_sql_server_tsql.txt
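A single-query alternative for the record above, as a sketch only (the per-stage date columns are assumed, since the question says the real filter is DateStarted IS NOT NULL AND DateFinished IS NULL):

    SELECT StaffName,
        SUM(CASE WHEN Stage1DateStarted IS NOT NULL AND Stage1DateFinished IS NULL
                 THEN 1 ELSE 0 END) AS Stage1Count,
        SUM(CASE WHEN Stage2DateStarted IS NOT NULL AND Stage2DateFinished IS NULL
                 THEN 1 ELSE 0 END) AS Stage2Count
    FROM ViewJob
    GROUP BY StaffName

This runs on SQL Server 2005 and sidesteps both the upsert and the cursor entirely.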
Q: How to make a tree in C++? How do I make a tree data structure in C++ that uses iterators instead of pointers? I couldn't find anything in the STL that can do this. What I would like to do is to be able to create and manipulate trees like this: #include <iostream> #include <tree> using namespace std; int main() { tree<int> myTree; tree<int>::iterator i = myTree.root(); *i = 42; tree<int>::iterator j = i.add_child(); *j = 777; j = j.parent(); if (i == myTree.root() && i == j) cout << "i and j are both pointing to the root\n"; return 0; } Thank you, tree.hh seems to be just what I was looking for. If this is for gaining the benefit of a data-structure holding arbitrary index types, optimized for searching and good at insertion then consider using a map. A map is an associative container that has performance guarantees identical to those of a tree: logarithmic searching, logarithmic insertion, logarithmic deletion, linear space. Internally they are often implemented as red-black trees, although that is not a guarantee. Still, as an STL user all you should care about is the performance guarantees of the STL algorithms and data-structures. Whether they're implemented as trees or little green men shouldn't matter to you. I'm not sure if a map is what I need, but thanks for the info. I will remember to use maps whenever possible instead of implementing trees. A: Here is tree.hh which is a bit close to what you want to do, though a bit different. Here is a piece of code extracted from its website. int main(int, char **) { tree<string> tr; tree<string>::iterator top, one, two, loc, banana; top=tr.begin(); one=tr.insert(top, "one"); two=tr.append_child(one, "two"); tr.append_child(two, "apple"); banana=tr.append_child(two, "banana"); tr.append_child(banana,"cherry"); tr.append_child(two, "peach"); tr.append_child(one,"three"); loc=find(tr.begin(), tr.end(), "two"); if(loc!=tr.end()) { tree<string>::sibling_iterator sib=tr.begin(loc); while(sib!=tr.end(loc)) { cout << (*sib) << endl; ++sib; } cout << endl; tree<string>::iterator sib2=tr.begin(loc); tree<string>::iterator end2=tr.end(loc); while(sib2!=end2) { for(int i=0; i<tr.depth(sib2)-2; ++i) cout << " "; cout << (*sib2) << endl; ++sib2; } } } Now what's different? Your implementation is simpler when it comes to appending a node to the tree. Though your version is indisputably simpler, the dev of this lib probably wanted to have some info accessible without browsing the tree, such as the size of the tree for instance. I also assume he didn't want to store the root on all nodes for performance reasons. So if you want to implement it your way, I suggest you keep most of the logic and add the link to the parent tree in the iterator and rewrite append a bit. A: Why would you want to do that? If this is for learning purposes then you can write your own tree data structure. If this is for gaining the benefit of a data-structure holding arbitrary index types, optimized for searching and good at insertion then consider using a map. A map is an associative container that has performance guarantees identical to those of a tree: logarithmic searching, logarithmic insertion, logarithmic deletion, linear space. Internally they are often implemented as red-black trees, although that is not a guarantee. Still, as an STL user all you should care about is the performance guarantees of the STL algorithms and data-structures. Whether they're implemented as trees or little green men shouldn't matter to you. As a side note, there's no such thing as a root() function. All STL containers have the begin() function implementing the conceptual beginning of a container. The kind of iterator returned by that function depends on the characteristics of the container.
How to make a tree in C++?
How do I make a tree data structure in C++ that uses iterators instead of pointers? I couldn't find anything in the STL that can do this. What I would like to do is to be able to create and manipulate trees like this: #include <iostream> #include <tree> using namespace std; int main() { tree<int> myTree; tree<int>::iterator i = myTree.root(); *i = 42; tree<int>::iterator j = i.add_child(); *j = 777; j = j.parent(); if (i == myTree.root() && i == j) cout << "i and j are both pointing to the root\n"; return 0; } Thank you, tree.hh seems to be just what I was looking for. If this is for gaining the benefit of a data-structure holding arbitrary index types, optimized for searching and good at insertion then consider using a map. A map is an associative container that has performance guarantees identical to those of a tree: logarithmic searching, logarithmic insertion, logarithmic deletion, linear space. Internally they are often implemented as red-black trees, although that is not a guarantee. Still, as an STL user all you should care about is the performance guarantees of the STL algorithms and data-structures. Whether they're implemented as trees or little green men shouldn't matter to you. I'm not sure if a map is what I need, but thanks for the info. I will remember to use maps whenever possible instead of implementing trees.
[ "Here is tree.hh which is a bit close to what you want to do, though a bit\ndifferent.\nHere is a piece of code extracted from its website.\nint main(int, char **)\n {\n tree<string> tr;\n tree<string>::iterator top, one, two, loc, banana;\n\n top=tr.begin();\n one=tr.insert(top, \"one\");\n two=tr.append_child(one, \"two\");\n tr.append_child(two, \"apple\");\n banana=tr.append_child(two, \"banana\");\n tr.append_child(banana,\"cherry\");\n tr.append_child(two, \"peach\");\n tr.append_child(one,\"three\");\n\n loc=find(tr.begin(), tr.end(), \"two\");\n if(loc!=tr.end()) {\n tree<string>::sibling_iterator sib=tr.begin(loc);\n while(sib!=tr.end(loc)) {\n cout << (*sib) << endl;\n ++sib;\n }\n cout << endl;\n tree<string>::iterator sib2=tr.begin(loc);\n tree<string>::iterator end2=tr.end(loc);\n while(sib2!=end2) {\n for(int i=0; i<tr.depth(sib2)-2; ++i) \n cout << \" \";\n cout << (*sib2) << endl;\n ++sib2;\n }\n }\n }\n\nNow what's different? Your implementation is simpler when it comes to \nappend a node to the tree. \nThough your version is indiscutably simpler, the dev of this lib probably wanted to have some info accessible without browsing the tree, such as the size of the tree for instance.\nI also assume he didn't want to store the root on all nodes for performance reason.\nSo if you want to implement it your way, I suggest you keep most of the logic and add the link to the parent tree in the iterator and rewrite append a bit.\n", "Why would you want to do that? If this is for learning purposes then you can write your own tree data structure. If this is for gaining the benefit of a data-structure holding arbitrary index types, optimized for searching and good at insertion then consider using a map.\nA map is an associative container that has performance guarantees identical to those of a tree: logarithmic searching, logarithmic insertion, logarithmic deletion, linear space. Internally they are often implemented as red-black trees, although that is not a guarantee. Still, as an STL user all you should care about is the performance guarantees of the STL algorithms and data-structures. Whether they're implemented as trees or little green men shouldn't matter to you.\nAs a side note, there's no such thing as a root() function. All STL containers have the begin() function implementing the conceptual beginning of a container. The kind of iterator returned by that function depends on the characteristics of the container.\n" ]
[ 5, 3 ]
[]
[]
[ "c++", "iterator", "tree" ]
stackoverflow_0000019193_c++_iterator_tree.txt
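For readers who want the question's exact interface, a minimal sketch of how such an iterator-based tree could look (hypothetical code, not part of the STL or of tree.hh; memory management omitted):

    #include <vector>

    template <typename T>
    class tree {
        struct node {
            T value;
            node* parent;
            std::vector<node*> children;
            explicit node(node* p = 0) : value(), parent(p) {}
        };
        node root_node;
    public:
        class iterator {
            friend class tree;
            node* n;
            explicit iterator(node* p) : n(p) {}
        public:
            T& operator*() const { return n->value; }
            iterator parent() const { return iterator(n->parent); }
            iterator add_child() {
                n->children.push_back(new node(n));
                return iterator(n->children.back());
            }
            bool operator==(const iterator& o) const { return n == o.n; }
            bool operator!=(const iterator& o) const { return n != o.n; }
        };
        iterator root() { return iterator(&root_node); }
    };

This is enough to make the question's main() compile; as the first answer notes, a production version would also track the tree's size and free the nodes it allocates.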
Q: Calling base Methods When Overriding Page Level Events In my code behind I wire up my events like so: protected override void OnInit(EventArgs e) { base.OnInit(e); btnUpdateUser.Click += btnUpateUserClick; } I've done it this way because that's what I've seen in examples. Does the base.OnInit() method need to be called? Will it implicitly be called? Is it better to call it at the beginning of the method or at the end? What would be an example where confusion over the base method can get you in trouble? A: I should clarify: The guidelines recommend that firing an event should involve calling a virtual "OnEventName" method, but they also say that if a derived class overrides that method and forgets to call the base method, the event should still fire. See the "Important Note" about halfway down this page: Derived classes that override the protected virtual method are not required to call the base class implementation. The base class must continue to work correctly even if its implementation is not called. A: In this case, if you don't call the base OnInit, then the Init event will not fire. In general, it is best practice to ALWAYS call the base method, unless you specifically know that you do not want the base behaviour to occur. Whether it's called at the start or the end depends on how you want things to work. In a case like this, where you are using an override instead of hooking up an event handler, calling it at the start of the method makes more sense. That way, your code will run after any handlers, which makes it emulate a "normal" event handler more closely. A: Although the official framework design guidelines recommend otherwise, most class designers will actually make the OnXxx() method responsible for firing the actual event, like this: protected virtual void OnClick(EventArgs e) { if (Click != null) Click(this, e); } ... so if you inherit from the class and don't call base.OnClick(e), the Click event will never fire. So yes, even though this shouldn't be the case according to the official design guidelines, I think it's worth calling base.OnInit(e) just to be sure. A: official framework design guidelines recommend otherwise They do? I'm curious, I've always thought the opposite, and reading Framework Design Guidelines and running FxCop has only cemented my view. I was under the impression that events should always be fired from virtual OnXxx() methods, that take an EventArgs parameter A: You probably are better off doing it that way, then this debate goes away. The article is interesting though, especially considering that the .NET Framework doesn't honour this guideline. A: @Ch00k and @Scott I dunno - I like the OnEventName pattern myself. And yeah, I'm one of the people who are guilty of firing the event from that method. I think overriding the On* method and calling the base one is the way to go. Handling your own events seems wrong somehow.
Calling base Methods When Overriding Page Level Events
In my code behind I wire up my events like so: protected override void OnInit(EventArgs e) { base.OnInit(e); btnUpdateUser.Click += btnUpateUserClick; } I've done it this way because that's what I've seen in examples. Does the base.OnInit() method need to be called? Will it implicitly be called? Is it better to call it at the beginning of the method or at the end? What would be an example where confusion over the base method can get you in trouble?
[ "I should clarify:\nThe guidelines recommend that firing an event should involve calling a virtual \"OnEventName\" method, but they also say that if a derived class overrides that method and forgets to call the base method, the event should still fire.\nSee the \"Important Note\" about halfway down this page:\n\nDerived classes that override the protected virtual method are not required to call the base class implementation. The base class must continue to work correctly even if its implementation is not called.\n\n", "In this case, if you don't call the base OnInit, then the Init even will not fire.\nIn general, it is best practice to ALWAYS call the base method, unless you specifically know that you do not want the base behaviour to occur.\nWhether its called at the start or the end depends on how you want things to work. In a case like this, where you are using an override instead of hooking up an event handler, calling it at the start of the method makes more sense. That way, your code will run after any handlers, which makes it more emulate a \"normal\" event handler.\n", "Although the official framework design guidelines recommend otherwise, most class designers will actually make the OnXxx() method responsible for firing the actual event, like this:\nprotected virtual void OnClick(EventArgs e)\n{\n if (Click != null) Click(this, e);\n}\n\n... so if you inherit from the class and don't call base.OnClick(e), the Click event will never fire.\nSo yes, even though this shouldn't be the case according to the official design guidelines, I think it's worth calling base.OnInit(e) just to be sure.\n", "\nofficial framework design guidelines recommend otherwise\n\nThey do? I'm curious, i've always thought the opposite, and reading Framework Design Guidelines and running FxCop has only cemented my view. I was under the impression that events should always be fired from virtual OnXxx() methods, that take an EventArgs parameter\n", "You probably are better off doing it that way, then this debate goes away. The article is interesting though, especially considering that the .NET Framework doesn't honour this guideline.\n", "@Ch00k and @Scott I dunno - I like the OnEventName pattern myself. And yeah, I'm one of the people who are guilty of firing the event from that method.\nI think overriding the On* method and calling the base one is the way to go. Handling your own events seems wrong somehow.\n" ]
[ 3, 0, 0, 0, 0, 0 ]
[]
[]
[ "asp.net", "events", "webforms" ]
stackoverflow_0000019294_asp.net_events_webforms.txt
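A compact sketch of the pattern the answers describe (hypothetical names, not tied to Page or Control specifically):

    public class BaseControl
    {
        public event EventHandler Click;

        // Most class designers raise the event from the virtual OnXxx method,
        // so an override that skips base.OnClick() silently suppresses it.
        protected virtual void OnClick(EventArgs e)
        {
            if (Click != null) Click(this, e);
        }
    }

    public class MyControl : BaseControl
    {
        protected override void OnClick(EventArgs e)
        {
            base.OnClick(e); // preserve base behaviour; subscribers still fire
            // custom logic here runs after any attached handlers
        }
    }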
Q: What did I do wrong here? [Javascript Regex] So I am writing a registration form and I need the display name to be only numbers, letters and underscores. Have a look at my code and tell me what I'm doing wrong. <form method="post" action="/" onsubmit="return check_form()"> <input type="text" id="display-name" name="display-name" maxlength="255" /> <input type="submit" /> </form> <script type="text/javascript"> <!-- var name_regex = /^([a-zA-Z0-9_])+/ function check_form() { if (!name_regex.test(document.forms[0].elements[0].value)) { document.forms[0].elements[0].focus() alert("Your display name may only contain letters, numbers and underscores") return false } } --> </script> It's obviously been trimmed down to not include anything not related to the problem but even this snippet doesn't work. A: Your regex /^([a-zA-Z0-9_])+/ Looks for Start of string(check), followed by 1 or more letters, numbers, or underscore (check) And then whatever comes after it doesn't matter. This regex will match anything at all so long as it begins with a letter, number, or underscore If you put a $ at the end, then it will work - $ matches 'end of string', so the only way it can match is if there are only numbers, letters, and underscores between the start and end of the string. /^([a-zA-Z0-9_])+$/ Secondly, I'd suggest using document.getElementById('display-name').value instead of document.forms as it won't break if you rearrange the HTML, and is more 'the commonly accepted standard of what to do' A: My regexp would go along the lines of: /^[a-zA-Z0-9_]+$/ edit: I think it's the lack of a line end $ that makes it fail. A: By 'not working' I take it you mean it is letting invalid entries through (rather than not letting valid entries through). As @Annan has said, this would probably be due to the lack of the $ character at the end of the expression, as currently it only requires a single valid character at the start of the value, and the rest can be anything. A: What does "doesn't work" mean? Does it reject valid display names? Does it accept invalid display names? Which ones? Per @Annan, leaving off the $ would make the regexp accept invalid display names like abc123!@#. If the code is rejecting valid display names, it may be because the parentheses are being matched literally instead of denoting a group (I'm not sure of the quoting convention in JS). A: I tested your script and meddled with the javascript. This seem to work: <form method="post" action="/" onsubmit="return check_form()"> <input type="text" id="display-name" name="display-name" maxlength="255" /> <input type="submit" /> </form> <script type="text/javascript"> <!-- var name_regex = /^([a-zA-Z0-9_])+$/; function check_form() { if (!name_regex.test(document.forms[0].elements[0].value)) { document.forms[0].elements[0].focus(); alert("Your display name may only contain letters, numbers and underscores"); return false; } } --> </script> A: Sorry guys I should have been more specific. Whenever I added spaces the values were still being accepted. The dollar sign $ did the trick! A: A simpler way to write it still would be var name_regex = /^([a-z0-9_])+$/i; A: Even simpler: var name_regex = /^\w+$/;
What did I do wrong here? [Javascript Regex]
So I am writing a registration form and I need the display name to be only numbers, letters and underscores. Have a look at my code and tell me what I'm doing wrong. <form method="post" action="/" onsubmit="return check_form()"> <input type="text" id="display-name" name="display-name" maxlength="255" /> <input type="submit" /> </form> <script type="text/javascript"> <!-- var name_regex = /^([a-zA-Z0-9_])+/ function check_form() { if (!name_regex.test(document.forms[0].elements[0].value)) { document.forms[0].elements[0].focus() alert("Your display name may only contain letters, numbers and underscores") return false } } --> </script> It's obviously been trimmed down to not include anything not related to the problem but even this snippet doesn't work.
[ "Your regex\n/^([a-zA-Z0-9_])+/\n\nLooks for \n\nStart of string(check), followed by\n1 or more letters, numbers, or underscore (check)\n\nAnd then whatever comes after it doesn't matter. This regex will match anything at all so long as it begins with a letter, number, or underscore\nIf you put a $ at the end, then it will work - $ matches 'end of string', so the only way it can match is if there are only numbers, letters, and underscores between the start and end of the string.\n/^([a-zA-Z0-9_])+$/\n\nSecondly, I'd suggest using document.getElementById('display-name').value instead of document.forms as it won't break if you rearrange the HTML, and is more 'the commonly accepted standard of what to do'\n", "My regexp would go along the lines of: /^[a-zA-Z0-9_]+$/\nedit: I think it's the lack of a line end $ that makes it fail.\n", "By 'not working' I take it you mean it is letting invalid entries through (rather than not letting valid entries through).\nAs @Annan has said, this would probably be due to the lack of the $ character at the end of the expression, as currently it only requires a single valid character at the start of the value, and the rest can be anything.\n", "What does \"doesn't work\" mean? Does it reject valid display names? Does it accept invalid display names? Which ones?\nPer @Annan, leaving off the $ would make the regexp accept invalid display names like abc123!@#.\nIf the code is rejecting valid display names, it may be because the parentheses are being matched literally instead of denoting a group (I'm not sure of the quoting convention in JS).\n", "I tested your script and meddled with the javascript. This seem to work: \n<form method=\"post\" action=\"/\" onsubmit=\"return check_form()\">\n <input type=\"text\" id=\"display-name\" name=\"display-name\" maxlength=\"255\" />\n <input type=\"submit\" />\n</form>\n<script type=\"text/javascript\">\n <!--\n var name_regex = /^([a-zA-Z0-9_])+$/;\n\n function check_form()\n {\n if (!name_regex.test(document.forms[0].elements[0].value))\n {\n document.forms[0].elements[0].focus();\n alert(\"Your display name may only contain letters, numbers and underscores\");\n return false;\n }\n }\n -->\n</script>\n\n", "Sorry guys I should have been more specific. Whenever I added spaces the values were still being accepted. The dollar sign $ did the trick!\n", "A simpler way to write it still would be\nvar name_regex = /^([a-z0-9_])+$/i;\n\n", "Even simpler:\nvar name_regex = /^\\w+$/;\n\n" ]
[ 14, 6, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "javascript", "regex" ]
stackoverflow_0000018861_javascript_regex.txt
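Putting the accepted fixes together, a sketch of the corrected script:

    var name_regex = /^\w+$/; // \w is [a-zA-Z0-9_]; ^ and $ anchor the whole value

    function check_form()
    {
        var field = document.getElementById('display-name');
        if (!name_regex.test(field.value))
        {
            field.focus();
            alert("Your display name may only contain letters, numbers and underscores");
            return false;
        }
        return true;
    }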
Q: Calling ASP.NET web service from ASP using SOAPClient I have an ASP.NET webservice along the lines of: [WebService(Namespace = "http://internalservice.net/messageprocessing")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [ToolboxItem(false)] public class ProvisioningService : WebService { [WebMethod] public XmlDocument ProcessMessage(XmlDocument message) { // ... do stuff } } I am calling the web service from ASP using something like: provWSDL = "http://servername:12011/MessageProcessor.asmx?wsdl" Set service = CreateObject("MSSOAP.SoapClient30") service.ClientProperty("ServerHTTPRequest") = True Call service.MSSoapInit(provWSDL) xmlMessage = "<request><task>....various xml</task></request>" result = service.ProcessMessage(xmlMessage) The problem I am encountering is that when the XML reaches the ProcessMessage method, the web service plumbing has added a default namespace along the way. i.e. if I set a breakpoint inside ProcessMessage(XmlDocument message) I see: <request xmlns="http://internalservice.net/messageprocessing"> <task>....various xml</task> </request> When I capture packets on the wire I can see that the XML sent by the SOAP toolkit is slightly different from that sent by the .NET WS client. The SOAP toolkit sends: <SOAP-ENV:Envelope xmlns:SOAPSDK1="http://www.w3.org/2001/XMLSchema" xmlns:SOAPSDK2="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAPSDK3="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <ProcessMessage xmlns="http://internalservice.net/messageprocessing"> <message xmlns:SOAPSDK4="http://internalservice.net/messageprocessing"> <request> <task>...stuff to do</task> </request> </message> </ProcessMessage> </SOAP-ENV:Body> </SOAP-ENV:Envelope> Whilst the .NET client sends: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <ProcessMessage xmlns="http://internalservice.net/messageprocessing"> <message> <request xmlns=""> <task>...stuff to do</task> </request> </message> </ProcessMessage> </soap:Body> </soap:Envelope> It's been so long since I used the ASP/SOAP toolkit to call into .NET webservices, I can't remember all the clever tricks/SOAP-fu I used to pull to get around stuff like this. Any ideas? One solution is to knock up a COM callable .NET proxy that takes the XML as a string param and calls the WS on my behalf, but it's an extra layer of complexity/work I hoped not to do. A: Kev, I found the solution, but it's not trivial. You need to create a custom implementation of IHeaderHandler that creates the proper headers. There is a good step by step here: http://msdn.microsoft.com/en-us/library/ms980699.aspx EDIT: I saw your update. Nice workaround, you might want to bookmark this link regardless :D A: I take it you have access to the Services code, not just the consuming client right? Just pull the namespace out of the XmlDocument as the first part of the method. Something like: XmlDocument changeDocumentNamespace(XmlDocument doc, string newNamespace) { if (doc.DocumentElement.NamespaceURI.Length > 0) { doc.DocumentElement.SetAttribute("xmlns", newNamespace); XmlDocument newDoc = new XmlDocument(); newDoc.LoadXml(doc.OuterXml); return newDoc; } else { return doc; } } Then: [WebService(Namespace = "http://internalservice.net/messageprocessing")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [ToolboxItem(false)] public class ProvisioningService : WebService { [WebMethod] public XmlDocument ProcessMessage(XmlDocument message) { message = changeDocumentNamespace(message,String.Empty); // Do Stuff... } } A: I solved this: The SOAP client request node was picking up the default namespace from: <ProcessMessage xmlns="http://internalservice.net/messageprocessing"> Adding an empty default namespace to the XML sent by the ASP client overrides this behaviour: xmlMessage = "<request xmlns=''><task>....various xml</task></request>"
Calling ASP.NET web service from ASP using SOAPClient
I have an ASP.NET webservice along the lines of: [WebService(Namespace = "http://internalservice.net/messageprocessing")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [ToolboxItem(false)] public class ProvisioningService : WebService { [WebMethod] public XmlDocument ProcessMessage(XmlDocument message) { // ... do stuff } } I am calling the web service from ASP using something like: provWSDL = "http://servername:12011/MessageProcessor.asmx?wsdl" Set service = CreateObject("MSSOAP.SoapClient30") service.ClientProperty("ServerHTTPRequest") = True Call service.MSSoapInit(provWSDL) xmlMessage = "<request><task>....various xml</task></request>" result = service.ProcessMessage(xmlMessage) The problem I am encountering is that when the XML reaches the ProcessMessage method, the web service plumbing has added a default namespace along the way. i.e. if I set a breakpoint inside ProcessMessage(XmlDocument message) I see: <request xmlns="http://internalservice.net/messageprocessing"> <task>....various xml</task> </request> When I capture packets on the wire I can see that the XML sent by the SOAP toolkit is slightly different from that sent by the .NET WS client. The SOAP toolkit sends: <SOAP-ENV:Envelope xmlns:SOAPSDK1="http://www.w3.org/2001/XMLSchema" xmlns:SOAPSDK2="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAPSDK3="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <ProcessMessage xmlns="http://internalservice.net/messageprocessing"> <message xmlns:SOAPSDK4="http://internalservice.net/messageprocessing"> <request> <task>...stuff to do</task> </request> </message> </ProcessMessage> </SOAP-ENV:Body> </SOAP-ENV:Envelope> Whilst the .NET client sends: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <ProcessMessage xmlns="http://internalservice.net/messageprocessing"> <message> <request xmlns=""> <task>...stuff to do</task> </request> </message> </ProcessMessage> </soap:Body> </soap:Envelope> It's been so long since I used the ASP/SOAP toolkit to call into .NET webservices, I can't remember all the clever tricks/SOAP-fu I used to pull to get around stuff like this. Any ideas? One solution is to knock up a COM callable .NET proxy that takes the XML as a string param and calls the WS on my behalf, but it's an extra layer of complexity/work I hoped not to do.
[ "Kev,\nI found the solution, but its not trivial.\nYou need to create a custom implementation of IHeaderHandler that creates the proper headers.\nThere is a good step by step here:\nhttp://msdn.microsoft.com/en-us/library/ms980699.aspx\nEDIT: I saw your update. Nice workaround, you might want to bookmark this link regardless :D\n", "I take it you have access to the Services code, not just the consuming client right?\nJust pull the namespace out of the XmlDocument as the first part of the method.\nSomething like:\nXmlDocument changeDocumentNamespace(XmlDocument doc, string newNamespace) \n{ \n if (doc.DocumentElement.NamespaceURI.Length > 0) \n {\n doc.DocumentElement.SetAttribute(\"xmlns\", newNameSpace);\n XmlDocument newDoc = new XmlDocument();\n newDoc.LoadXml(doc.OuterXml);\n return newDoc;\n }\n else \n {\n return doc;\n }\n}\n\nThen:\n[WebService(Namespace = \"http://internalservice.net/messageprocessing\")]\n[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]\n[ToolboxItem(false)]\npublic class ProvisioningService : WebService\n{\n [WebMethod]\n public XmlDocument ProcessMessage(XmlDocument message)\n {\n message = changeDocumentNamespace(message,String.Empty);\n // Do Stuff...\n }\n}\n\n", "I solved this:\nThe SOAP client request node was picking up the default namespace from:\n<ProcessMessage xmlns=\"http://internalservice.net/messageprocessing\">\n\nAdding an empty default namespace to the XML sent by the ASP client overrides this behaviour:\nxmlMessage = \"<request xmlns=''><task>....various xml</task></request>\"\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ ".net", "asp.net", "asp_classic", "soap", "web_services" ]
stackoverflow_0000019318_.net_asp.net_asp_classic_soap_web_services.txt
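The accepted workaround in context, as a sketch of the full ASP client call:

    provWSDL = "http://servername:12011/MessageProcessor.asmx?wsdl"
    Set service = CreateObject("MSSOAP.SoapClient30")
    service.ClientProperty("ServerHTTPRequest") = True
    Call service.MSSoapInit(provWSDL)

    ' The empty xmlns stops <request> from inheriting the method's default
    ' namespace (http://internalservice.net/messageprocessing) on the wire.
    xmlMessage = "<request xmlns=''><task>....various xml</task></request>"
    result = service.ProcessMessage(xmlMessage)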
Q: How to manage Configuration Settings for each Developer In a .NET project, say you have a configuration setting - like a connection string - stored in an app.config file, which is different for each developer on your team (they may be using a local SQL Server, or a specific server instance, or using a remote server, etc). How can you structure your solution so that each developer can have their own development "preferences" (i.e. not checked into source control), but provide a default connection string that is checked into source control (thereby supplying the correct defaults for a build process or new developers). Edit: Can the "file" method suggested by @Jonathon be somehow used with the connectionStrings section? A: AppSettings can be overridden with a local file: <appSettings file="localoverride.config"/> This allows for each developer to keep their own local settings. As far as the connection string, in a perfect world all developers should connect to a test DB, not each run SQL Server. However, I've found it best to keep a file named Web.Config.Prd in source control, and use that for build deployments. If someone modifies web.config, they must also add the change to the .PRD file...There is no good automation there :( A: Edit: Can the "file" method suggested by @Jonathon be somehow used with the connectionStrings section? Or you can have multiple connection strings in the checked in config file, and use an AppSettings key to determine which ConnectionString is to be used. I have the following in my codebase for this purpose: public class ConnectionString { public static string Default { get { if (string.IsNullOrEmpty(ConfigurationManager.AppSettings["DefaultConnectionStringName"])) throw new ApplicationException("DefaultConnectionStringName must be set in the appSettings"); return GetByName(ConfigurationManager.AppSettings["DefaultConnectionStringName"]); } } public static string GetByName(string dsn) { return ConfigurationManager.ConnectionStrings[dsn].ConnectionString; } } A: I always make templates for my config files. As an example I use NAnt for the building of my projects. I have a file checked in called local.properties.xml.template. My NAnt build will warn the developer if local.properties.xml does not exist. Inside that file will be workstation specific settings. The template will be checked into source control, but the actual config won't be. A: I use quite archaic design that just works. /_Test__app.config /_Prod__app.config /app.config Then in my nant script, I have a task that takes the current build environment prefix plus App.config and copies it to app.config. It's nasty, but you can't get in between providers and ConfigurationManager to spoof it, by saying providers look at "dev" or "prod" connection string and just have 3 named connection strings. nant task: <target name="copyconfigs" depends="clean"> <foreach item="File" property="filename" unless="${string::get-length(ConfigPrefix) == 0}"> <in> <items> <include name="**/${ConfigPrefix}App.config" /> <include name="**/${ConfigPrefix}connectionstrings.config" /> <include name="**/${ConfigPrefix}web.config" /> </items> </in> <do> <copy overwrite="true" file="${filename}" tofile="${string::replace(filename, ConfigPrefix,'')}" /> </do> </foreach></target> A: Can the "file" method suggested by @Jonathon be somehow used with the connectionStrings section? No, but there is nothing stopping you from storing the ConnectionString as an AppSettings key.
How to manage Configuration Settings for each Developer
In a .NET project, say you have a configuration setting - like a connection string - stored in an app.config file, which is different for each developer on your team (they may be using a local SQL Server, or a specific server instance, or using a remote server, etc). How can you structure your solution so that each developer can have their own development "preferences" (i.e. not checked into source control), but provide a default connection string that is checked into source control (thereby supplying the correct defaults for a build process or new developers). Edit: Can the "file" method suggested by @Jonathon be somehow used with the connectionStrings section?
[ "AppSettings can be overridden with a local file:\n<appSettings file=\"localoveride.config\"/>\n\nThis allows for each developer to keep their own local settings.\nAs far as the connection string, in a perfect world all developers should connect to a test DB, not run SQL Server each.\nHowever, I've found it best to keep a file named Web.Config.Prd in source control, and use that for build deployments. If someone modifies web.config, they must also add the change to the .PRD file...There is no good automation there :(\n", "\nEdit: Can the \"file\" method suggested\n by @Jonathon be somehow used with the\n connectionStrings section?\n\nOr you can have multiple connection strings in the checked in config file, and use an AppSettings key to determine which ConnectionString is to be used. I have the following in my codebase for this purpose:\npublic class ConnectionString\n{\n public static string Default\n {\n get \n { \n if (string.IsNullOrEmpty(ConfigurationManager.AppSettings[\"DefaultConnectionStringName\"]))\n throw new ApplicationException(\"DefaultConnectionStringName must be set in the appSettings\");\n\n return GetByName(ConfigurationManager.AppSettings[\"DefaultConnectionStringName\"]);\n }\n }\n\n public static string GetByName(string dsn)\n {\n return ConfigurationManager.ConnectionStrings[dsn].ConnectionString;\n }\n}\n\n", "I always make templates for my config files. \nAs an example I use NAnt for the building of my projects. I have a file checked in called local.properties.xml.template. My NAnt build will warn the developer if local.properties.xml does not exist. Inside that file will be workstation specific settings. The template will be checked into source control, but the actual config won't be.\n", "I use quite archaic design that just works.\n\n/_Test__app.config\n/_Prod__app.config\n/app.config\n\nThen in my nant script, I have a task that copies, the current build environment plus _ app.config and copy it to app.config.\nIts nasty, but you can't get in between providers and ConfigurationManager to spoof it, by saying providers look at \"dev\" or \"prod\" connection string and just have 3 named connection strings.\nnant task:\n<target name=\"copyconfigs\" depends=\"clean\">\n <foreach item=\"File\" property=\"filename\" unless=\"${string::get-length(ConfigPrefix) == 0}\">\n <in>\n <items>\n <include name=\"**/${ConfigPrefix}App.config\" />\n <include name=\"**/${ConfigPrefix}connectionstrings.config\" />\n <include name=\"**/${ConfigPrefix}web.config\" />\n </items>\n </in>\n <do>\n <copy overwrite=\"true\" file=\"${filename}\" tofile=\"${string::replace(filename, ConfigPrefix,'')}\" />\n </do>\n </foreach></target>\n\n", "\nCan the \"file\" method suggested by @Jonathon be somehow used with the connectionStrings section?\n\nNo, but there is nothing stopping you from storing the ConnectionString as an AppSettings key.\n" ]
[ 4, 3, 0, 0, 0 ]
[]
[]
[ ".net", "configuration_files" ]
stackoverflow_0000019355_.net_configuration_files.txt
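How the two pieces of advice above combine, as a sketch (file names and keys are illustrative):

    <!-- app.config: checked in, holds the team defaults -->
    <configuration>
      <appSettings file="localoverride.config">
        <add key="DefaultConnectionStringName" value="TeamDB" />
      </appSettings>
      <connectionStrings>
        <add name="TeamDB" connectionString="Server=teamdb;Database=App;Integrated Security=SSPI" />
        <add name="LocalDB" connectionString="Server=.;Database=App;Integrated Security=SSPI" />
      </connectionStrings>
    </configuration>

    <!-- localoverride.config: not checked in; each developer redirects at will -->
    <appSettings>
      <add key="DefaultConnectionStringName" value="LocalDB" />
    </appSettings>

Keys in the external file override the checked-in defaults, so the build and new developers get TeamDB while anyone with a local override file gets their own choice.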
Q: Bash Pipe Handling Does anyone know how bash handles sending data through pipes? cat file.txt | tail -20 Does this command print all the contents of file.txt into a buffer, which is then read by tail? Or does this command, say, print the contents of file.txt line by line, and then pause at each line for tail to process, and then ask for more data? The reason I ask is that I'm writing a program on an embedded device that basically performs a sequence of operations on some chunk of data, where the output of one operation is sent off as the input of the next operation. I would like to know how linux (bash) handles this so please give me a general answer, not specifically what happens when I run "cat file.txt | tail -20". EDIT: Shog9 pointed out a relevant Wikipedia Article, this didn't lead me directly to the article but it helped me find this: http://en.wikipedia.org/wiki/Pipeline_%28Unix%29#Implementation which did have the information I was looking for. I'm sorry for not making myself clear. Of course you're using a pipe and of course you're using stdin and stdout of the respective parts of the command. I had assumed that was too obvious to state. What I'm asking is how this is handled/implemented. Since both programs cannot run at once, how is data sent from stdin to stdout? What happens if the first program generates data significantly faster than the second program? Does the system just run the first command until either it's terminated or its stdout buffer is full, and then move on to the next program, and so on in a loop until no more data is left to be processed or is there a more complicated mechanism? A: I decided to write a slightly more detailed explanation. The "magic" here lies in the operating system. Both programs do start up at roughly the same time, and run at the same time (the operating system assigns them slices of time on the processor to run) as every other simultaneously running process on your computer (including the terminal application and the kernel). So, before any data gets passed, the processes are doing whatever initialization necessary. In your example, tail is parsing the '-20' argument and cat is parsing the 'file.txt' argument and opening the file. At some point tail will get to the point where it needs input and it will tell the operating system that it is waiting for input. At some other point (either before or after, it doesn't matter) cat will start passing data to the operating system using stdout. This goes into a buffer in the operating system. The next time tail gets a time slice on the processor after some data has been put into the buffer by cat, it will retrieve some amount of that data (or all of it) which leaves the buffer on the operating system. When the buffer is empty, at some point tail will have to wait for cat to output more data. If cat is outputting data much faster than tail is handling it, the buffer will expand. cat will eventually be done outputting data, but tail will still be processing, so cat will close and tail will process all remaining data in the buffer. The operating system will signal tail when there is no more incoming data with an EOF. Tail will process the remaining data. In this case, tail is probably just receiving all the data into a circular buffer of 20 lines, and when it is signalled by the operating system that there is no more incoming data, it then dumps the last twenty lines to its own stdout, which just gets displayed in the terminal. Since tail is a much simpler program than cat, it will likely spend most of the time waiting for cat to put data into the buffer. On a system with multiple processors, the two programs will not just be sharing alternating time slices on the same processor core, but likely running at the same time on separate cores. To get into a little more detail, if you open some kind of process monitor (operating system specific) like 'top' in Linux you will see a whole list of running processes, most of which are effectively using 0% of the processor. Most applications, unless they are crunching data, spend most of their time doing nothing. This is good, because it allows other processes to have unfettered access to the processor according to their needs. This is accomplished in basically three ways. A process could get to a sleep(n) style instruction where it basically tells the kernel to wait n milliseconds before giving it another time slice to work with. Most commonly a program needs to wait for something from another program, like 'tail' waiting for more data to enter the buffer. In this case the operating system will wake up the process when more data is available. Lastly, the kernel can preempt a process in the middle of execution, giving some processor time slices to other processes. 'cat' and 'tail' are simple programs. In this example, tail spends most of its time waiting for more data on the buffer, and cat spends most of its time waiting for the operating system to retrieve data from the harddrive. The bottleneck is the speed (or slowness) of the physical medium that the file is stored on. That perceptible delay you might detect when you run this command for the first time is the time it takes for the read heads on the disk drive to seek to the position on the harddrive where 'file.txt' is. If you run the command a second time, the operating system will likely have the contents of file.txt cached in memory, and you will not likely see any perceptible delay (unless file.txt is very large, or the file is no longer cached.) Most operations you do on your computer are IO bound, which is to say that you are usually waiting for data to come from your harddrive, or from a network device, etc. A: Shog9 already referenced the Wikipedia article, but the implementation section has the details you want. The basic implementation is a bounded buffer. A: cat will just print the data to standard out, which happens to be redirected to the standard in of tail. This can be seen in the man page of bash. In other words, there is no pausing going on, tail is just reading from standard in and cat is just writing to standard out.
Bash Pipe Handling
Does anyone know how bash handles sending data through pipes? cat file.txt | tail -20 Does this command print all the contents of file.txt into a buffer, which is then read by tail? Or does this command, say, print the contents of file.txt line by line, and then pause at each line for tail to process, and then ask for more data? The reason I ask is that I'm writing a program on an embedded device that basically performs a sequence of operations on some chunk of data, where the output of one operation is sent off as the input of the next operation. I would like to know how linux (bash) handles this so please give me a general answer, not specifically what happens when I run "cat file.txt | tail -20". EDIT: Shog9 pointed out a relevant Wikipedia Article, this didn't lead me directly to the article but it helped me find this: http://en.wikipedia.org/wiki/Pipeline_%28Unix%29#Implementation which did have the information I was looking for. I'm sorry for not making myself clear. Of course you're using a pipe and of course you're using stdin and stdout of the respective parts of the command. I had assumed that was too obvious to state. What I'm asking is how this is handled/implemented. Since both programs cannot run at once, how is data sent from stdin to stdout? What happens if the first program generates data significantly faster than the second program? Does the system just run the first command until either it's terminated or its stdout buffer is full, and then move on to the next program, and so on in a loop until no more data is left to be processed or is there a more complicated mechanism?
[ "I decided to write a slightly more detailed explanation.\nThe \"magic\" here lies in the operating system. Both programs do start up at roughly the same time, and run at the same time (the operating system assigns them slices of time on the processor to run) as every other simultaneously running process on your computer (including the terminal application and the kernel). So, before any data gets passed, the processes are doing whatever initialization necessary. In your example, tail is parsing the '-20' argument and cat is parsing the 'file.txt' argument and opening the file. At some point tail will get to the point where it needs input and it will tell the operating system that it is waiting for input. At some other point (either before or after, it doesn't matter) cat will start passing data to the operating system using stdout. This goes into a buffer in the operating system. The next time tail gets a time slice on the processor after some data has been put into the buffer by cat, it will retrieve some amount of that data (or all of it) which leaves the buffer on the operating system. When the buffer is empty, at some point tail will have to wait for cat to output more data. If cat is outputting data much faster than tail is handling it, the buffer will expand. cat will eventually be done outputting data, but tail will still be processing, so cat will close and tail will process all remaining data in the buffer. The operating system will signal tail when their is no more incoming data with an EOF. Tail will process the remaining data. In this case, tail is probably just receiving all the data into a circular buffer of 20 lines, and when it is signalled by the operating system that there is no more incoming data, it then dumps the last twenty lines to its own stdout, which just gets displayed in the terminal. Since tail is a much simpler program than cat, it will likely spend most of the time waiting for cat to put data into the buffer.\nOn a system with multiple processors, the two programs will not just be sharing alternating time slices on the same processor core, but likely running at the same time on separate cores.\nTo get into a little more detail, if you open some kind of process monitor (operating system specific) like 'top' in Linux you will see a whole list of running processes, most of which are effectively using 0% of the processor. Most applications, unless they are crunching data, spend most of their time doing nothing. This is good, because it allows other processes to have unfettered access to the processor according to their needs. This is accomplished in basically three ways. A process could get to a sleep(n) style instruction where it basically tells the kernel to wait n milliseconds before giving it another time slice to work with. Most commonly a program needs to wait for something from another program, like 'tail' waiting for more data to enter the buffer. In this case the operating system will wake up the process when more data is available. Lastly, the kernel can preempt a process in the middle of execution, giving some processor time slices to other processes. 'cat' and 'tail' are simple programs. In this example, tail spends most of it's time waiting for more data on the buffer, and cat spends most of it's time waiting for the operating system to retrieve data from the harddrive. The bottleneck is the speed (or slowness) of the physical medium that the file is stored on. 
That perceptible delay you might detect when you run this command for the first time is the time it takes for the read heads on the disk drive to seek to the position on the harddrive where 'file.txt' is. If you run the command a second time, the operating system will likely have the contents of file.txt cached in memory, and you will not likely see any perceptible delay (unless file.txt is very large, or the file is no longer cached.)\nMost operations you do on your computer are IO bound, which is to say that you are usually waiting for data to come from your harddrive, or from a network device, etc.\n", "Shog9 already referenced the Wikipedia article, but the implementation section has the details you want. The basic implementation is a bounded buffer.\n", "cat will just print the data to standard out, which happens to be redirected to the standard in of tail. This can be seen in the man page of bash.\nIn other words, there is no pausing going on, tail is just reading from standard in and cat is just writing to standard out.\n" ]
[ 55, 1, 0 ]
[]
[]
[ "bash", "device", "linux", "pipe" ]
stackoverflow_0000019122_bash_device_linux_pipe.txt
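Two small bash experiments that make the blocking behaviour visible (assuming a Linux-style bounded pipe buffer; 64 KB on recent kernels, so the numbers are indicative only):

    # dd races ahead until the kernel pipe buffer fills, then blocks
    # until the sleeping reader finally starts draining it.
    dd if=/dev/zero bs=1M count=100 2>/dev/null | { sleep 5; wc -c; }

    # Both sides of a pipeline start together: this takes ~5s, not ~10s.
    time ( sleep 5 | sleep 5 )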
Q: IIS 6/COM+ hangs I have a web application that sometimes just hangs over heavy load. To make it come back I have to kill the "dllhost.exe" process. Does someone know what to do? This is a Classic ASP (VBScript) app with lots of COM+ objects. The server has the following configuration: Intel Core 2 Duo 2.2 GHz / 4 GB RAM Windows Server 2003 Web Edition SP2 IIS 6.0 There are some errors in the event log related to the COM objects. But why would errors in the COM objects crash the whole server? The COM objects are PowerBuilder objects deployed as COM objects. Is IIS 7.0 (much) more stable than IIS 6.0? A: You have a memory leak :) This blog entry is my bible for IIS troubleshooting: http://blogs.msdn.com/david.wang/archive/2005/12/31/HOWTO_Basics_of_IIS6_Troubleshooting.aspx If you can't audit your code and find where the reference leaks are, an alternative is to recycle the application by restarting IIS every 24 hours or so. You can just setup a commandline script as a server job to do this. A: Sounds like dodgy COM objects causing the problem .. do you load them into the "Application", if you do then are they threadsafe; or are they used and discarded on each request? Yes, recycling every few hours would help 'hide' the problem, but they ought to be debugged and fixed properly ... have you tried divide/conquer to discover which COM object is the problem ... I can imagine this is tricky on a production environment so you need to set up some heavy automated tests to reproduce the problem locally then you can do something about it. A: There are probably some errors in your event log under the Application and System categories. Try to find the origin of these errors or post them here and we'll see what we can do :) Edit : @Daniel Silveira A memory leak is probable. What COM+ object do you use? I had some issues with Excel with an application I support.
IIS 6/COM+ hangs
I have a web application that sometimes just hangs over heavy load. To make it come back I have to kill the "dllhost.exe" process. Does someone know what to do? This is a Classic ASP (VBScript) app with lots of COM+ objects. The server has the following configuration: Intel Core 2 Duo 2.2 GHz / 4 GB RAM Windows Server 2003 Web Edition SP2 IIS 6.0 There are some errors in the event log related to the COM objects. But why would errors in the COM objects crash the whole server? The COM objects are PowerBuilder objects deployed as COM objects. Is IIS 7.0 (much) more stable than IIS 6.0?
[ "You have a memory leak :)\nThis blog entry is my bible for IIS troubleshooting:\nhttp://blogs.msdn.com/david.wang/archive/2005/12/31/HOWTO_Basics_of_IIS6_Troubleshooting.aspx\nIf you can't audit your code and find where the reference leaks are, an alternative is to recycle the application by restarting IIS every 24 hours or so. You can just setup a commandline script as a server job to do this.\n", "Sounds like dodgy COM objects causing the problem .. do you load them into the \"Application\", if you do then are they threadsafe; or are they used and discarded on each request?\nYes, recycling every few hours would help 'hide' the problem, but they ought to be debugged and fixed properly ... have you tried divide/conquer to discover which COM object is the problem ... I can imagine this is tricky on a production environment so you need to set up some heavy automated tests to reproduce the problem locally then you can do something about it.\n", "There is probably some errors in your eventlog under the Application and System categories. Try to find the origin of these errors or post them here we'll see what we can do :)\nEdit : \n@Daniel Silveira\nA memory leak is probable. What COM+ object do you use? I had some issues with Excel with an application I support.\n" ]
[ 2, 2, 1 ]
[]
[]
[ "asp_classic", "crash", "dll", "iis" ]
stackoverflow_0000019245_asp_classic_crash_dll_iis.txt
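The recycling fallback from the first answer as a one-liner (a sketch; the task name and time are arbitrary, the flag syntax varies slightly across Windows versions, and iisreset drops in-flight requests):

    schtasks /create /tn "NightlyIISRecycle" /tr "iisreset /restart" /sc daily /st 03:00:00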
Q: Anyway to stop Windows bringing app to front when displaying a context menu on tray icon? We are experiencing this annoying problem where we have a context menu on our tray icon, if we display this context menu we have to SetForegroundWindow and bring it to the front. This is really annoying and not at all what we want. Is there a workaround, I notice that Outlook MS Messenger and other MS apps do not suffer this, perhaps they are not using a standard menu and have had to write their own ... why don't they release this code if they have? This article describes the 'as design' behaviour: Menus for Notification Icons Do Not Work Correctly EDIT We are using C++/Win32 not forms, so we use TrackPopupMenu. A: Are you using ContextMenu or ContextMenuStrip? You're saying that opening the ContextMenu on a trayicon focuses all app forms? I have not experienced that, though I use the newer ContextMenuStrip class, not ContextMenu for my trayicons. EDIT: Would be nice to know if you are using Windows.Forms or WIN32, or MFC or what.
Anyway to stop Windows bringing app to front when displaying a context menu on tray icon?
We are experiencing this annoying problem where we have a context menu on our tray icon, if we display this context menu we have to SetForegroundWindow and bring it to the front. This is really annoying and not at all what we want. Is there a workaround, I notice that Outlook MS Messenger and other MS apps do not suffer this, perhaps they are not using a standard menu and have had to write their own ... why don't they release this code if they have? This article describes the 'as design' behaviour: Menus for Notification Icons Do Not Work Correctly EDIT We are using C++/Win32 not forms, so we use TrackPopupMenu.
[ "Are you using ContextMenu or ContextMenuStrip?\nYour saying that opening the ContextMenu on a trayicon focuses all app forms?\nI have not experienced that, though I use the newer ContextMenuStrip class, not ContextMenu for my trayicons.\nEDIT: Would be nice to know if you are using Windows.Forms or WIN32, or MFC or what.\n" ]
[ 2 ]
[]
[]
[ "menu", "trayicon", "windows" ]
stackoverflow_0000019401_menu_trayicon_windows.txt
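One further workaround worth knowing (not from the answer above, and only a sketch): let a never-shown helper window own the menu, so the SetForegroundWindow call that TrackPopupMenu needs never activates your visible UI. WM_TRAYICON and hHiddenWnd are hypothetical names here.

    // hHiddenWnd: an invisible window created solely to own the tray menu
    case WM_TRAYICON:
        if (lParam == WM_RBUTTONUP)
        {
            POINT pt;
            GetCursorPos(&pt);
            SetForegroundWindow(hHiddenWnd);  // required, or the menu won't dismiss on click-away
            TrackPopupMenu(hMenu, TPM_BOTTOMALIGN, pt.x, pt.y, 0, hHiddenWnd, NULL);
            PostMessage(hHiddenWnd, WM_NULL, 0, 0);  // the classic KB Q135788 follow-up
        }
        break;

The menu's WM_COMMAND messages then arrive at the helper window's wndproc instead of the main window's, so route them accordingly.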
Q: What is a good free library for editing MP3s/FLACs? What is a good free library for editing MP3s/FLACs. By editing I mean: Cutting audio file into multiple parts Joining multiple audio files together Increase playback speed of file without affecting the pitch (eg. podcasts up to 1.3x) Re-encoding audio file from Flac -> MP3 or vice versa I don't mean software, I mean a library that I can use within another application. Programming language agnostic. A: Just about every language has bindings to C, so you'll probably want to get the applicable C libraries for encoding/decoding mp3's and FLAC files. This list might include libFLAC http://flac.sourceforge.net/api/index.html FLAC encoding/decoding LAME http://lame.sourceforge.net/index.php MP3 encoding MAD http://www.underbit.com/products/mad/ MP3 decoding The rest of your signal processing needs could be gathered around a single popular API such as LADSPA http://www.ladspa.org/. Here's a stretching / pitch shifting library: http://www.breakfastquay.com/rubberband/ Most audio processing programs have a certain internal format they use. That keeps things simple. Everything coming in gets converted to the same format. Once you've standardized the internal format, cutting and splicing audio data is about as difficult as cutting and splicing strings. You don't really need a library for that. A: I use Audacity for all my editing needs Audacity is a free, easy-to-use audio editor and recorder for Windows, Mac OS X, GNU/Linux and other operating systems. You can use Audacity to: * Record live audio. * Convert tapes and records into digital recordings or CDs. * Edit Ogg Vorbis, MP3, WAV or AIFF sound files. * Cut, copy, splice or mix sounds together. * Change the speed or pitch of a recording. A: Audacity uses the Lame library, however not only is this not language agnostic it also has some questions over licensing. Nevertheless it might be a start
What is a good free library for editing MP3s/FLACs?
What is a good free library for editing MP3s/FLACs. By editing I mean: Cutting audio file into multiple parts Joining multiple audio files together Increase playback speed of file without affecting the pitch (eg. podcasts up to 1.3x) Re-encoding audio file from Flac -> MP3 or vice versa I don't mean software, I mean a library that I can use within another application. Programming language agnostic.
[ "Just about every language has bindings to C, so you'll probably want to get the applicable C libraries for encoding/decoding mp3's and FLAC files. This list might include\nlibFLAC http://flac.sourceforge.net/api/index.html FLAC encoding/decoding\nLAME http://lame.sourceforge.net/index.php MP3 encoding\nMAD http://www.underbit.com/products/mad/ MP3 decoding \nThe rest of your signal processing needs could be gathered around a single popular API such as LADSPA http://www.ladspa.org/.\nHere's a stretching / pitch shifting library: http://www.breakfastquay.com/rubberband/\nMost audio processing programs have a certain internal format they use. That keeps things simple. Everything coming in gets converted to the same format. Once you've standardized the internal format, cutting and splicing audio data is about as difficult as cutting and splicing strings. You don't really need a library for that.\n", "I use Audacity for all my editing needs\n\nAudacity is a free, easy-to-use audio\n editor and recorder for Windows, Mac\n OS X, GNU/Linux and other operating\n systems. You can use Audacity to:\n* Record live audio.\n* Convert tapes and records into digital recordings or CDs.\n* Edit Ogg Vorbis, MP3, WAV or AIFF sound files.\n* Cut, copy, splice or mix sounds together.\n* Change the speed or pitch of a recording.\n\n\n", "Audacity uses the Lame library, however not only is this not language agnostic it also has some questions over licensing. Nevertheless it might be a start\n" ]
[ 5, 1, 1 ]
[]
[]
[ "audio" ]
stackoverflow_0000019433_audio.txt
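A minimal C# sketch of one pragmatic route the first answer implies: rather than binding the LAME/FLAC C libraries directly, shell out to their command-line front ends for a FLAC -> MP3 re-encode. The file names and the 192 kbps bitrate are illustrative assumptions, and both executables are assumed to be on the PATH; this is a sketch, not a library integration.

using System.Diagnostics;

class Transcode
{
    static void Run(string exe, string args)
    {
        // Launch the external tool and block until it finishes.
        using (var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false }))
        {
            p.WaitForExit();
        }
    }

    static void Main()
    {
        Run("flac", "-d input.flac");               // decode: produces input.wav next to the source
        Run("lame", "-b 192 input.wav output.mp3"); // encode the WAV as a 192 kbps MP3
    }
}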
Q: how to allow files starting with period and no extension in windows 2003 server? How can I create this file in a directory in windows 2003 SP2: .hgignore I get error: You must type a file name. A: That's a "feature" of Windows Explorer. Try to create your files from a command line (or from a batch/program you wrote) and it should work fine. Try this from a dos prompt: echo Hello there! > .hgignore A: By the way Raymond Chen had a blog post about this topic a while back: Why doesn't Explorer let you create a file whose name begins with a dot? (archive.org link with comments: https://web.archive.org/web/20100305064616/http://blogs.msdn.com/oldnewthing/archive/2008/04/14/8389268.aspx) In which he mentions You can do it from the command line or use your favorite file management tool.
how to allow files starting with period and no extension in windows 2003 server?
How can I create this file in a directory in windows 2003 SP2: .hgignore I get error: You must type a file name.
[ "That's a \"feature\" of Windows Explorer. Try to create your files from a command line (or from a batch/program you wrote) and it should work fine. Try this from a dos prompt:\necho Hello there! > .hgignore\n\n", "By the way Raymond Chen had a blog post about this topic a while back:\nWhy doesn't Explorer let you create a file whose name begins with a dot? (archive.org link with comments: https://web.archive.org/web/20100305064616/http://blogs.msdn.com/oldnewthing/archive/2008/04/14/8389268.aspx)\nIn which he mentions\n\nYou can do it from the command line or\nuse your favorite file management\ntool.\n\n" ]
[ 27, 5 ]
[]
[]
[ "hgignore", "mercurial", "windows_server_2003" ]
stackoverflow_0000019442_hgignore_mercurial_windows_server_2003.txt
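For completeness, the restriction above is purely an Explorer UI quirk; any program that calls the file APIs can create a leading-dot name directly, which is all the echo trick is doing. A tiny C# sketch, where the path and the ignore-file contents are placeholders:

using System.IO;

class CreateDotFile
{
    static void Main()
    {
        // Explorer refuses to create this name, but the file system API accepts it fine.
        File.WriteAllText(@"C:\repo\.hgignore", "syntax: glob\nbin\n");
    }
}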
Q: Weird yellow bar pops-up: 'Microsoft Data Access - Remote Data Services When I access my site from any computer, I see this warning popping up: "This web site wants to run the following add-on: 'Microsoft Data Access - Remote Data Services Dat...' from 'Microsoft Corporation'. If you trust the web site and the add-on and want to allow it to run, click here..." I am guessing this is some kind of virus or something. I would like to know how to remove this from my site. A: I'd be very concerned if this is on your own server. I found the following blog post that warns on the issue: http://msmvps.com/blogs/hostsnews/archive/2007/09/13/can-you-spot-the-fake.aspx but doesn't provide any way of removing it. I'd recommend making sure both the server and the client are up to date on Windows Updates, and then installing a good virus scanner.
Weird yellow bar pops-up: 'Microsoft Data Access - Remote Data Services
When I access my site from any computer, I see this warning popping up: "This web site wants to run the following add-on: 'Microsoft Data Access - Remote Data Services Dat...' from 'Microsoft Corporation'. If you trust the web site and the add-on and want to allow it to run, click here..." I am guessing this is some kind of virus or something. I would like to know how to remove this from my site.
[ "Id be very concerned if this is on your own server.\nI found the following blog post that warns on the issue: http://msmvps.com/blogs/hostsnews/archive/2007/09/13/can-you-spot-the-fake.aspx but doesn't provide any way of removing it.\nI'd recommend making sure both the server and the client are up to date on Windows Updates, and then installing a good virus scanner.\n" ]
[ 3 ]
[]
[]
[ "internet_explorer_7", "security" ]
stackoverflow_0000019471_internet_explorer_7_security.txt
Q: Delete Datalist row with multiple primary keys I have a datalist with an OnDeleteCommand="Delete_Command". I want to delete a record with multiple primary keys but I do not know how to access it from the Delete_Command event. If I use DataKeyField I'm limited to only one key. Any workarounds for this? A: You can access all of the keys: gridView.DataKeys[rowNum][dataKeyName] where rowNum is e.RowIndex from the gridView_RowDeleting event handler, and dataKeyName is the key you want to get: <asp:GridView ID="gridView" runat="server" DataKeyNames="userid, id1, id2, id3" OnRowDeleting="gridView_RowDeleting"> protected void gridView_RowDeleting(object sender, GridViewDeleteEventArgs e) { gridView.DataKeys[e.RowIndex]["userid"]... gridView.DataKeys[e.RowIndex]["id1"]... gridView.DataKeys[e.RowIndex]["id2"]... gridView.DataKeys[e.RowIndex]["id3"]... } A: Oh, sorry, I missed it. AFAIK there is no such a possibility by default. Maybe you can create a composite key from your primary keys, like Key1UnderscoreKey2UnderscoreKey3 and split it in the event handler. So this is a DIY multi-key handler for DataList :-) Edit: The underscore got lost during format, it replaces with italic text. So instead of "underscore" word use real underscores
Delete Datalist row with multiple primary keys
I have a datalist with an OnDeleteCommand="Delete_Command". I want to delete a record with multiple primary keys but I do not know how to access it from the Delete_Command event. If I use DataKeyField I'm limited to only one key. Any workarounds for this?
[ "You can access all of the keys:\ngridView.DataKeys[rowNum][dataKeyName]\n\nwhere rowNum is e.RowIndex from the gridView_RowDeleting event handler, and dataKeyName is the key you want to get:\n<asp:GridView ID=\"gridView\" runat=\"server\" DataKeyNames=\"userid, id1, id2, id3\" OnRowDeleting=\"gridView_RowDeleting\">\n\nprotected void gridView_RowDeleting(object sender, GridViewDeleteEventArgs e)\n{\n gridView.DataKeys[e.RowIndex][\"userid\"]...\n gridView.DataKeys[e.RowIndex][\"id1\"]...\n gridView.DataKeys[e.RowIndex][\"id2\"]...\n gridView.DataKeys[e.RowIndex][\"id3\"]...\n}\n\n", "Oh, sorry, I missed it.\nAFAIK there is no such a possibility by default. Maybe you can create a composite key from your primary keys, like \n\nKey1UnderscoreKey2UnderscoreKey3\n\nand split it in the event handler. So this is a DIY multi-key handler for DataList :-)\nEdit: The underscore got lost during format, it replaces with italic text. So instead of \"underscore\" word use real underscores\n" ]
[ 1, 0 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000019436_asp.net.txt
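A short sketch of the composite-key workaround from the second answer above, since DataList's DataKeyField only holds one value: pack the key parts into a single string at bind time and split them apart in the delete handler. The field names (OrderId/LineId), the control name, and the underscore separator are illustrative assumptions.

protected void myDataList_DeleteCommand(object source, DataListCommandEventArgs e)
{
    // DataKeyField was bound to a column holding something like "42_7"
    // (e.g. OrderId + "_" + LineId computed in the data source).
    string composite = (string)myDataList.DataKeys[e.Item.ItemIndex];
    string[] parts = composite.Split('_');
    int orderId = int.Parse(parts[0]);
    int lineId = int.Parse(parts[1]);
    // ... issue the DELETE using both key values ...
}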
Q: AnkhSVN Cannot Connect Due to Proxy Alright, this might be a bit of a long shot, but I am having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy, and it doesn't seem to be detecting the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work? A: Current version of AnkhSVN does not provide a GUI for proxy settings, but you can hand-edit the servers file (which is a simple .ini) and it should work. Servers file resides in: C:\Documents and Settings\YOU\Application Data\Subversion (or wherever your APP_DATA is) A: You can also use TortoiseSVN for editing the proxy settings. TortoiseSVN saves the settings in the registry in the common location that all Subversion clients (by default) use. UPDATE: A proxy settings dialog is now implemented in the AnkhSVN daily builds. It will be available in the next release.
AnkhSVN Cannot Connect Due to Proxy
Alright, this might be a bit of a long shot, but I am having problems getting AnkhSVN to connect from Visual Studio 2005 to an external SVN server. There is a network proxy in the way, but I can't seem to find a way in AnkhSVN to configure the proxy, and it doesn't seem to be detecting the Internet Explorer proxy configuration. Is there any way to resolve this issue, or will it likely just not work?
[ "Current version of AnkhSVN does not provide a GUI for proxy settings, but you can hand-edit the servers file (which is a simple .ini) and it should work. \nServers file resides in: C:\\Documents and Settings\\YOU\\Application Data\\Subversion (or wherever your APP_DATA is)\n", "You can also use TortoiseSVN for editting the proxy settings. \nTortoiseSVN saves the settings in the registry in the common location that all Subversion clients (by default) use.\nUPDATE: A proxy settings dialog is now implemented in the AnkhSVN daily builds.\nIt will be available in the next release.\n" ]
[ 5, 5 ]
[]
[]
[ "ankhsvn", "proxy", "svn", "visual_studio", "visual_studio_2005" ]
stackoverflow_0000017984_ankhsvn_proxy_svn_visual_studio_visual_studio_2005.txt
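To make the hand-edit suggested in the first answer concrete, the relevant entries in Subversion's servers file live under its [global] section; the host, port and credentials below are placeholders:

[global]
http-proxy-host = proxy.example.com
http-proxy-port = 8080
http-proxy-username = someuser
http-proxy-password = somepass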
Q: Best way to structure a repository in Subversion for Visual Studio projects? I have a few C# .dll projects which are common to many applications. Currently, I have one big repository. I have each DLL stored as a separate project within the repository and every application project stored as a project within the same repository. I recently switched to Subversion for source control and I fear that I did not do a good job of structuring the repository. I would like to hear what others are doing. A: Subversion repositories are typically sub-divided into: branch/ tags/ trunk/ You would either place all of your DLL and application projects into the trunk and then use branch and tags for all of them as necessary too: branch/ tags/ trunk/ project1/ project2/ Alternatively, you could create folders for each project in the root and then place the common branch, tags and trunk folders within them. project1/ branch/ tags/ trunk/ project2/ branch/ tags/ trunk/ Note that this practice is simply convention and nothing in SVN requires (or really promotes) doing it exactly this way. However, everyone is used to it. So, you would be doing people a favor to go along. To elaborate further, the trunk is where your main development will take place. When you want to mark a particular revision (e.g. a release version), then simply svn copy the project into the tags directory. Also, just copy code into the branch directory when you want to do something dramatic or prolonged and don't want to hinder progress in the trunk. Later you can svn merge your branch back into the trunk when it is ready for action! If you want to correct mishaps in your current Subversion repository, then just use svn move to relocate them. Unlike the delete and add process of CVS, move will retain version history for the new location. A: using the branch/trunk/tag repository structure is pretty standard, but if I'm understanding you properly, your issue is that you have a set of common dll projects that get used across multiple projects. This can definitely become tricky to manage. So the typical scenario here is that you have some class library called Common.Helpers that has code that is common to all your applications. Let's say I'm starting a new application called StackOverflow.Web that needs to reference Common.Helpers. Usually what you would do is create a new solution file and add a new project called Stackoverflow.Web and add the existing Common.Helpers project and then reference it from the new Stackoverflow.Web project. What I usually try and do is create a repository for the Common.Helpers project and then in subversion reference it as an external. That way you can keep the code under source control in a single location, but still use it separately in multiple projects. A: if your sub projects can be released at different versions (like controls, web parts, etc...) then it may make sense to build your structure like this: Solution Project 1 Branch Tags Trunk Project 2 Branch Tags Trunk This way you can manage each project release independently. Otherwise the most common structure is: Branch Tags Trunk Docs (Optional) A: I store everything in the repository to make it easy for developers (or rebuilt devboxes) to check-out from SVN and then run a build (with all necessary assemblies in relative paths). If you have multiple projects that should be separate, this would also encourage the team of your shared components to deliver high quality assemblies. This could follow a normal release to production mentality where the shared assemblies would be updated in your downstream projects. This is a very natural Software Value Chain, at the cost of a little bit of disk space. JP Boodhoo has a great series on the topic of automated builds, VS folder structure, and getting developers up and running quickly. A: Thanks to everyone who answered. lomaxx, I spent the morning looking into using the external feature and it looks like this is the way to go. I was not aware of it, probably because it is not exactly prominent in Tortoise. A: If you want to use the merge-tracking of Subversion 1.5 over more than one project at the same time you should use a single tree without externals. A tracked merge is (just like a commit) always over a directory and its children. The same rule applies on atomic commits. (This works reliably only within a single working copy. It might work in some specific other cases, but that behavior is not guaranteed.)
Best way to structure a repository in Subversion for Visual Studio projects?
I have a few C# .dll projects which are common to many applications. Currently, I have one big repository. I have each DLL stored as a separate project within the repository and every application project stored as a project within the same repository. I recently switched to Subversion for source control and I fear that I did not do a good job of structuring the repository. I would like to hear what others are doing.
[ "Subversion repositories are typical sub-divided into:\nbranch/\ntags/\ntrunk/\n\nYou would either place all of your DLL and application projects into the trunk and then use branch and tags for all of them as necessary too:\nbranch/\ntags/\ntrunk/\n project1/\n project2/\n\nAlternatively, you could create folders for each project in the root and then place the common branch, tags and trunk folders within them.\nproject1/\n branch/\n tags/\n trunk/\n\nproject2/\n branch/\n tags/\n trunk/\n\nNote that this practice is simply convention and nothing in SVN requires (or really promotes) doing it exactly this way. However, everyone is used to it. So, you would be doing people a favor to go along.\nTo elaborate further, the trunk is where your main development will take place. When you want to mark a particular revision (e.g. a release version), then simply svn copy the project into the tags directory. Also, just copy code into the branch directory when you want to do something dramatic or prolonged and don't want to hinder progress in the trunk. Later you can svn merge your branch back into the trunk when it is ready for action!\nIf you want to correct mishaps in your current Subverion repository, then just use svn move to relocate them. Unlike the delete and add process of CVS, move will retain version history for the new location.\n", "using the branch/trunk/tag repository structure is pretty standard, but if I'm understanding you properly, your issue is that you have a set of common dll projects that get used across multiple projects. This can definately become tricky to manage.\nSo the typical scenario here is that you have some class library called Common.Helpers that has code that is common to all your applications.\nLet's say I'm starting a new application called StackOverflow.Web that needs to reference Common.Helpers.\nUsually what you would do is create a new solution file and add a new project called Stackoverflow.Web and add the existing Common.Helpers project and then reference it from the new Stackoverflow.Web project.\nWhat I usually try and do is create a repository for the Common.Helpers project and then in subversion reference it as an external. That way you can keep the code under source control in a single location, but still use it seperately in multiple projects.\n", "if your sub projects can be released at different versions (like controls, web parts, ect...) then it may make sense to build your structure like this:\nSolution\nProject 1 \n\n\nBranch \nTags \nTrunk \n\n\nProject 2\n\n\nBranch \nTags \nTrunk \n\n\nThis way you can manage each project release independently.\nOtherwise the most common structure is:\n\n\nBranch \nTags \nTrunk \nDocs (Optional)\n\n\n", "I store everything in the repository to make it easy for developers (or rebuilt devboxes) to check-out from SVN and then run a build (with all necessary assemblies in relative paths). If you have multiple projects that should be separate, this would also encourage the team of your shared components to deliver high quality assemblies. This could follow a normal release to production mentality where the shared assemblied would be updated in your downstream projects. This is a very natural Software Value Chain, at the cost of a little bit of disk space.\nJP Boodhoo has a great series on the topic of automated builds, VS folder structure, and getting developers up and running quickly.\n", "Thanks to everyone who answered. 
lomaxx, I spent the morning looking into using the external feature and it looks like this is the way to go. I was not aware of it, probably because it is not exactly prominent in Tortoise.\n", "If you want to use the merge-tracking of Subversion 1.5 over more than one project at the same time you should use a single tree without externals.\nA tracked merge is (just like a commit) always over a directory and its children.\nThe same rule applies on atomic commits. (Works only stable within a single workingcopy. It might work in some specific other cases but that behavior is not guaranteed)\n" ]
[ 9, 4, 0, 0, 0, 0 ]
[]
[]
[ "svn" ]
stackoverflow_0000015621_svn.txt
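Since the asker settled on SVN externals, here is roughly what wiring one up looks like on the command line; the repository URL is a placeholder. The svn:externals property maps a local folder name to another repository path, so the shared project gets pulled in on the next update:

svn propset svn:externals "Common.Helpers http://svnserver/repos/Common.Helpers/trunk" .
svn update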
Q: Which RDBMS should I use? I have developed a high speed transactional server for transferring data over the internet so I do not need to rely upon a database implementation like MySQL to provide this. That opens up the question of which SQL version to use? I really like SQLite, but I am not convinced it is industrial strength yet. What I do like is how lightweight it is on resources. I loathed MySQL 8 years ago, but now it obviously IS industrial strength and my partners use it, so it is the obvious choice on the server side. If I use it I will just be connecting through "localhost" to the installed server (windows service). My concern is about the memory usage. I DO NOT load the result set into memory, but I notice about 6Mb for the first connection. I am hoping subsequent connections are not an additional 6MB! If I use the libmysqld.dll embedded library then does each new connection load a new instance of the embedded client/server code into memory? We assume so since each process will have its own in process memory... Regardless, the manual states that When using the libmysqld embedded server, the memory benefits are essentially lost when retrieving results row by row because "memory usage incrementally increases with each row retrieved until mysql_free_result() is called." http://dev.mysql.com/doc/refman/5.1/en/mysql-use-result.html This means I must use the installed service. But is this as fast as the embedded server? Are there any other low cost flavors that have high reliability? A: SQLite is used in more applications than any other DB. (Citation required). There are some issues with MySQL, like that it doesn't respect foreign integrity constraints. I'm currently a fan of PostgreSQL, which is also freely available (and, I think if you read the licensing of MySQL, actually turns out to have a more amenable license for commercial use). It seems to be higher performance than SQLite, which probably has more to do with it being run on an SMP machine, and making use of different threads. It also seems to be quite solid. A: Sorry to be pedantic, but the title should really be "Which RDBMS?" - the way it's phrased makes about as much sense as "Which Java?" or "Which Internet?"...
Which RDBMS should I use?
I have developed a high speed transactional server for transferring data over the internet so I do not need to rely upon a database implementation like MySQL to provide this. That opens up the question of which SQL version to use? I really like SQLite, but I am not convinced it is industrial strength yet. What I do like is how lightweight it is on resources. I loathed MySQL 8 years ago, but now it obviously IS industrial strength and my partners use it, so it is the obvious choice on the server side. If I use it I will just be connecting through "localhost" to the installed server (windows service). My concern is about the memory usage. I DO NOT load the result set into memory, but I notice about 6Mb for the first connection. I am hoping subsequent connections are not an additional 6MB! If I use the libmysqld.dll embedded library then does each new connection load a new instance of the embedded client/server code into memory? We assume so since each process will have its own in process memory... Regardless, the manual states that When using the libmysqld embedded server, the memory benefits are essentially lost when retrieving results row by row because "memory usage incrementally increases with each row retrieved until mysql_free_result() is called." http://dev.mysql.com/doc/refman/5.1/en/mysql-use-result.html This means I must use the installed service. But is this as fast as the embedded server? Are there any other low cost flavors that have high reliability?
[ "SQLite is used in more applications than any other DB. (Citation required).\nThere are some issues with MySQL, like that it doesn't respect foreign integrity constraints.\nI'm currently a fan of PostgreSQL, which is also freely available (and, I think if you read the licensing of MySQL, actually turns out to have a more amenable license for commercial use). It seems to be higher performance than SQLite, which probably has more to do with it being run on an SMP machine, and making use to different threads. It also seems to be quite solid.\n", "Sorry to be pedantic, but the title should really be \"Which RDBMS?\" - the way it's phrased makes about as much sense as \"Which Java?\" or \"Which Internet?\"...\n" ]
[ 3, 1 ]
[]
[]
[ "sql" ]
stackoverflow_0000019458_sql.txt
Q: mod_rewrite rule to redirect all requests except for one specific path I'm trying to redirect all requests to my domain to another domain using mod_rewrite in an Apache 2.2 VirtualHost declaration. There is one exception to this -- I'd like all requests to the /audio path not to be redirected. I've written a RewriteCond and RewriteRule to do this but it's not quite right and I can't figure out why. The regular expression contains a negative lookahead for the string "/audio", but for some reason this isn't matching. Here's the definition: RewriteEngine on RewriteCond %{HTTP_HOST} ^(.*\.)?mydomain\.net(?!/audio) [NC] RewriteRule ^(.*)$ http://www.newdomain.example [L,R=301] If I change the RewriteCond to: RewriteCond %{HTTP_HOST} ^(.*\.)?mydomain\.example/(?!audio) [NC] (i.e. put the forward slash outside of the negative lookahead part) then it works, but the downside of this is that requests to mydomain.example without a trailing slash will not be redirected. Can anyone point out what I'm doing wrong? Here are the rules: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/var/www/mydomain.example/htdocs" ServerName www.mydomain.example ServerAlias mydomain.example RewriteEngine on RewriteCond {REQUEST_URI} !^/audio RewriteRule ^(.*)$ http://www.newdomain.example [L,R=301] RewriteLog logs/mod_rewrite_log RewriteLogLevel 3 ErrorLog logs/error_log CustomLog logs/access_log common </VirtualHost> Thanks @mercutio -- that makes perfect sense but it still doesn't seem to work. Here's what the mod_rewrite log says when I make a request to http://mydomain.example/audio/something.mp3: (2) init rewrite engine with requested uri /audio/something.mp3 (3) applying pattern '^(.*)$' to uri '/audio' (2) rewrite '/audio' -> 'http://www.newdomain.example/' (2) explicitly forcing redirect with http://www.newdomain.example (1) escaping http://www.newdomain.example for redirect (1) redirect to http://www.newdomain.example [REDIRECT/301] Since the REQUEST_URI does start with /audio I would expect the RewriteRule to be ignored. A: The HTTP_HOST only contains the host name, not the path of the URL requested. RewriteCond %{REQUEST_URI} !^/audio Should be all you need. Further, you can get debug info from the rewrite engine with the following, which is really useful to see how your conditions and rules are being matched: RewriteLog /path/to/log/file RewriteLogLevel 3
mod_rewrite rule to redirect all requests except for one specific path
I'm trying to redirect all requests to my domain to another domain using mod_rewrite in an Apache 2.2 VirtualHost declaration. There is one exception to this -- I'd like all requests to the /audio path not to be redirected. I've written a RewriteCond and RewriteRule to do this but it's not quite right and I can't figure out why. The regular expression contains a negative lookahead for the string "/audio", but for some reason this isn't matching. Here's the definition: RewriteEngine on RewriteCond %{HTTP_HOST} ^(.*\.)?mydomain\.net(?!/audio) [NC] RewriteRule ^(.*)$ http://www.newdomain.example [L,R=301] If I change the RewriteCond to: RewriteCond %{HTTP_HOST} ^(.*\.)?mydomain\.example/(?!audio) [NC] (i.e. put the forward slash outside of the negative lookahead part) then it works, but the downside of this is that requests to mydomain.example without a trailing slash will not be redirected. Can anyone point out what I'm doing wrong? Here are the rules: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/var/www/mydomain.example/htdocs" ServerName www.mydomain.example ServerAlias mydomain.example RewriteEngine on RewriteCond {REQUEST_URI} !^/audio RewriteRule ^(.*)$ http://www.newdomain.example [L,R=301] RewriteLog logs/mod_rewrite_log RewriteLogLevel 3 ErrorLog logs/error_log CustomLog logs/access_log common </VirtualHost> Thanks @mercutio -- that makes perfect sense but it still doesn't seem to work. Here's what the mod_rewrite log says when I make a request to http://mydomain.example/audio/something.mp3: (2) init rewrite engine with requested uri /audio/something.mp3 (3) applying pattern '^(.*)$' to uri '/audio' (2) rewrite '/audio' -> 'http://www.newdomain.example/' (2) explicitly forcing redirect with http://www.newdomain.example (1) escaping http://www.newdomain.example for redirect (1) redirect to http://www.newdomain.example [REDIRECT/301] Since the REQUEST_URI does start with /audio I would expect the RewriteRule to be ignored.
[ "The HTTP_HOST only contains the host name, not the path of the URL requested.\nRewriteCond %{REQUEST_URI} !^/audio\n\nShould be all you need.\nFurther, you can get debug info from the rewrite engine with the following, which is really useful to see how your conditions and rules are being matched:\nRewriteLog /path/to/log/file\nRewriteLogLevel 3\n\n" ]
[ 48 ]
[]
[]
[ "apache", "mod_rewrite" ]
stackoverflow_0000019493_apache_mod_rewrite.txt
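A side note on the question above: the VirtualHost snippet writes the condition as RewriteCond {REQUEST_URI} without the leading %, so mod_rewrite compares the pattern against the literal text "{REQUEST_URI}" rather than the request path. That literal never starts with /audio, so the negated condition is always true and the redirect fires for every request, which is exactly what the log excerpt shows. The working pair, with the placeholder domains from the question:

RewriteEngine on
RewriteCond %{REQUEST_URI} !^/audio
RewriteRule ^(.*)$ http://www.newdomain.example [L,R=301]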
Q: Accessing a CONST attribute of a series of Classes This is how I wanted to do it, which would work in PHP 5.3.0+ <?php class MyClass { const CONSTANT = 'Const var'; } $classname = 'MyClass'; echo $classname::CONSTANT; // As of PHP 5.3.0 ?> But I'm restricted to using PHP 5.2.6. Can anyone think of a simple way to simulate this behavior without instantiating the class? A: You can accomplish this without using eval in pre-5.3 code. Just use the constant function: <?php class MyClass { const CONSTANT = 'Const var'; } $classname = 'MyClass'; echo constant("$classname::CONSTANT"); ?>
Accessing a CONST attribute of a series of Classes
This is how I wanted to do it, which would work in PHP 5.3.0+ <?php class MyClass { const CONSTANT = 'Const var'; } $classname = 'MyClass'; echo $classname::CONSTANT; // As of PHP 5.3.0 ?> But I'm restricted to using PHP 5.2.6. Can anyone think of a simple way to simulate this behavior without instantiating the class?
[ "You can accomplish this without using eval in pre-5.3 code. Just use the constant function:\n<?php\n\nclass MyClass\n{\n const CONSTANT = 'Const var';\n}\n\n$classname = 'MyClass';\necho constant(\"$classname::CONSTANT\");\n\n?>\n\n" ]
[ 10 ]
[ "If you absolutly need to access a constant like that, you can do this:\n<?php\nclass MyClass\n{\n const CONSTANT = 'Const var';\n}\n\n$classname = 'MyClass';\necho eval( 'return '.$classname.'::CONSTANT;' );\n?>\n\nBut, if i were you, I'd try not to use eval.\n" ]
[ -1 ]
[ "oop", "php" ]
stackoverflow_0000005459_oop_php.txt
Q: How to find an implementation of a C# interface in the current assembly with a specific name? I have an Interface called IStep that can do some computation (See "Execution in the Kingdom of Nouns"). At runtime, I want to select the appropriate implementation by class name. // use like this: IStep step = GetStep(sName); A: Your question is very confusing... If you want to find types that implement IStep, then do this: foreach (Type t in Assembly.GetCallingAssembly().GetTypes()) { if (!typeof(IStep).IsAssignableFrom(t)) continue; Console.WriteLine(t.FullName + " implements " + typeof(IStep).FullName); } If you know already the name of the required type, just do this IStep step = (IStep)Activator.CreateInstance(Type.GetType("MyNamespace.MyType")); A: If the implementation has a parameterless constructor, you can do this using the System.Activator class. You will need to specify the assembly name in addition to the class name: IStep step = System.Activator.CreateInstance(sAssemblyName, sClassName).Unwrap() as IStep; http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx A: Based on what others have pointed out, this is what I ended up writing: /// /// Some magic happens here: Find the correct action to take, by reflecting on types /// subclassed from IStep with that name. /// private IStep GetStep(string sName) { Assembly assembly = Assembly.GetAssembly(typeof (IStep)); try { return (IStep) (from t in assembly.GetTypes() where t.Name == sName && t.GetInterface("IStep") != null select t ).First().GetConstructor(new Type[] {} ).Invoke(new object[] {}); } catch (InvalidOperationException e) { throw new ArgumentException("Action not supported: " + sName, e); } } A: Well Assembly.CreateInstance would seem to be the way to go - the only problem with this is that it needs the fully qualified name of the type, i.e. including the namespace.
How to find an implementation of a C# interface in the current assembly with a specific name?
I have an Interface called IStep that can do some computation (See "Execution in the Kingdom of Nouns"). At runtime, I want to select the appropriate implementation by class name. // use like this: IStep step = GetStep(sName);
[ "Your question is very confusing...\nIf you want to find types that implement IStep, then do this:\nforeach (Type t in Assembly.GetCallingAssembly().GetTypes())\n{\n if (!typeof(IStep).IsAssignableFrom(t)) continue;\n Console.WriteLine(t.FullName + \" implements \" + typeof(IStep).FullName);\n}\n\nIf you know already the name of the required type, just do this\nIStep step = (IStep)Activator.CreateInstance(Type.GetType(\"MyNamespace.MyType\"));\n\n", "If the implementation has a parameterless constructor, you can do this using the System.Activator class. You will need to specify the assembly name in addition to the class name:\nIStep step = System.Activator.CreateInstance(sAssemblyName, sClassName).Unwrap() as IStep;\n\nhttp://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx\n", "Based on what others have pointed out, this is what I ended up writing:\n\n/// \n/// Some magic happens here: Find the correct action to take, by reflecting on types \n/// subclassed from IStep with that name.\n/// \nprivate IStep GetStep(string sName)\n{\n Assembly assembly = Assembly.GetAssembly(typeof (IStep));\n\n try\n {\n return (IStep) (from t in assembly.GetTypes()\n where t.Name == sName && t.GetInterface(\"IStep\") != null\n select t\n ).First().GetConstructor(new Type[] {}\n ).Invoke(new object[] {});\n }\n catch (InvalidOperationException e)\n {\n throw new ArgumentException(\"Action not supported: \" + sName, e);\n }\n}\n\n", "Well Assembly.CreateInstance would seem to be the way to go - the only problem with this is that it needs the fully qualified name of the type, i.e. including the namespace.\n" ]
[ 8, 2, 1, 0 ]
[]
[]
[ "c#", "linq", "linq_to_objects", "reflection" ]
stackoverflow_0000019656_c#_linq_linq_to_objects_reflection.txt
Q: Different solutions/project files for Local vs Build environments As part of improvements to our build process, we are currently debating whether we should have separate project/solution files on our CI production environment from our local development environments. The reason this has come about is because of reference problems we experienced in our previous project. On a frequent basis people would mistakenly add a reference to an assembly in the wrong location, which would mean it would work okay on their local environment, but might break on someone else's or on the build machine. Also, the reference paths are in the csproj.user files which means these must be committed to source control, so everyone has to share these same settings. So we are thinking about having separate projects and solutions on our CI server, so that when we do a build it uses these projects rather than local development ones. It has obvious drawbacks such as an overhead to maintaining these separate files and the associated process that would need to be defined and followed, but it has benefits in that we would be in more control over EXACTLY what happens in the production environment. What I haven't been able to find is anything on this subject - can't believe we are the only people to think about this - so all thoughts are welcome. A: In our largest project (a system comprising many applications) we have the following structure /3rdPartyAssemblies /App1 /App2 /App3 /..... All external assemblies are added to 3rdPartyAssemblies/Vendor/Version/... We have a CoreBuild.sln file which acts as an MSBuild script for all of the assemblies that are shared to ensure building in dependency order (ie, make sure App1.Interfaces is built before App2 as App2 has a reference to App1.Interfaces). All inter-application references target the /bin folder (we don't use bin/debug and bin/release, just bin, this way the references remain the same and we just change the release configuration depending on the build target). Cruise Control builds the core solution for any dependencies before building any other app, and because the 3rdPartyAssemblies folder is present on the server we ensure developer machines and build server have the same development layout. A: I know it's anachronistic. But the single best way I've found to handle the references issue is to have a folder mapped to a drive letter such as R: and then all projects build into or copy output into that folder also. Then all references are R:\SomeFile.dll etc. This gets you around the problem that sometimes references are added by absolute path and sometimes they are added relatively. (there's something to do with "HintPath" which I can't really remember) The nice thing then, is that you can still use the same solution files on your build server. Which to be honest is an absolute must as you lose the certainty that what is being built on the dev machine is the same as on the build server otherwise. A: Usually, you would be creating Build projects/scripts in some form or another for your Production, and so putting together another Solution file doesn't come in the picture. It would be easier to train everyone to use project references, and create a directory under the project file structure for external assembly references. This way everyone follows the same environment. A: I would strongly recommend against this. Reference paths aren't only stored in the .user file. A hint path is stored in the project file itself. You should never have to check a .user file into source control. Let there be one set of (okay, possibly versioned) solution/project files which all developers use, and the Release configurations of which are what you're ultimately building in production. Having separate project files is going to cause confusion down the road, when some project setting is tweaked, not carried across, and slipped into production. You might also check this out: http://www.objectsharp.com/cs/blogs/barry/archive/2004/10/29/988.aspx http://bytes.com/forum/thread268546.html A: We have changed our project structure (making use of SVN Externals) where each project is now completely self-contained. That is, any references never go outwith the project directory (for example, if Project A references ASM X, then ASM X exists within a subfolder of ProjectA) I suspect that this should go some way towards helping solve some of our problems, but I can still see some advantages of having more control over the build projects. A: @David - believe it or not this is what we actually have just now, and yet it's still causing us problems! We're making some changes though, which are forced upon us due to moving to TeamCity and multiple build agents - so we can't have references to directories outwith the current project, as I've mentioned in my previous answer. Look at the Externals section of this link to see what I mean - http://www.dummzeuch.de/delphi/subversion/english.html
Different solutions/project files for Local vs Build environments
As part of improvements to our build process, we are currently debating whether we should have separate project/solution files on our CI production environment from our local development environments. The reason this has come about is because of reference problems we experienced in our previous project. On a frequent basis people would mistakenly add a reference to an assembly in the wrong location, which would mean it would work okay on their local environment, but might break on someone else's or on the build machine. Also, the reference paths are in the csproj.user files which means these must be committed to source control, so everyone has to share these same settings. So we are thinking about having separate projects and solutions on our CI server, so that when we do a build it uses these projects rather than local development ones. It has obvious drawbacks such as an overhead to maintaining these separate files and the associated process that would need to be defined and followed, but it has benefits in that we would be in more control over EXACTLY what happens in the production environment. What I haven't been able to find is anything on this subject - can't believe we are the only people to think about this - so all thoughts are welcome.
[ "In our largest project (a system comprising of many applications) we have the following structure\n\n/3rdPartyAssemblies /App1 /App2 /App3 /.....\n\nAll external assemblies are added to 3rdPartyAssemblies/Vendor/Version/...\nWe have a CoreBuild.sln file which acts as an MSBuild script for all of the assemblies that are shared to ensure building in dependancy order (ie, make sure App1.Interfaces is built before App2 as App2 has a reference to App1.Interfaces).\nAll inter-application references target the /bin folder (we don't use bin/debug and bin/release, just bin, this way the references remain the same and we just change the release configuration depending on the build target).\nCruise Control builds the core solution for any dependencies before building any other app, and because the 3rdPartAssemblies folder is present on the server we ensure developer machines and build server have the same development layout.\n", "I know it's anachronistic. But the single best way I've found to handle the references issue is to have a folder mapped to a drive letter such as R: and then all projects build into or copy output into that folder also. Then all references are R:\\SomeFile.dll etc. This gets you around the problem that sometimes references are added by absolute path and sometimes they are added relatively. (there's something to do with \"HintPath\" which I can't really remember)\nThe nice thing then, is that you can still use the same solution files on your build server. Which to be honest is an absolute must as you lose the certainty that what is being built on the dev machine is the same as on the build server otherwise.\n", "Usually, you would be creating Build projects/scripts in some form or another for your Production, and so putting together another Solution file doesn't come in the picture.\nIt would be easier to train everyone to use project references, and create a directory under the project file structure for external assembly references. This way everyone follows the same environment.\n", "I would strongly recommend against this.\n\nReference paths aren't only stored in the .user file. A hint path is stored in the project file itself. You should never have to check a .user file into source control.\nLet there be one set of (okay, possibly versioned) solution/project files which all developers use, and the Release configurations of which are what you're ultimately building in production. Having separate project files is going to cause confusion down the road, when some project setting is tweaked, not carried across, and slipped into production.\n\nYou might also check this out:\nhttp://www.objectsharp.com/cs/blogs/barry/archive/2004/10/29/988.aspx\nhttp://bytes.com/forum/thread268546.html\n", "We have changed our project structure (making use of SVN Externals) where each project is now completely self-contained. That is, any references never go outwith the project directory (for example, if Project A references ASM X, then ASM X exists within a subfolder of ProjectA)\nI suspect that this should go some way towards helping solve some of our problems, but I can still see some advantages of having more control over the build projects. 
\n", "@David - believe it or not this is what we actually have just now, and yet it's still causing us problems!\nWe're making some changes though, which are forced upon us due to moving to TeamCity and multiple build agents - so we can't have references to directories outwith the current project, as I've mentioned in my previous answer.\nLook at the Externals section of this link to see what I mean - http://www.dummzeuch.de/delphi/subversion/english.html\n" ]
[ 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "build_process" ]
stackoverflow_0000014674_build_process.txt
Q: What is your preferred method of sending complex data over a web service? It's 2008, and I'm still torn on this one. So I'm developing a web method that needs a complex type passed into it and returned from it. The two options I'm toying with are: Pass and return actual business objects with both data and behavior. When wsdl.exe is run, it will automatically create proxy classes that contain just the data portion, and these will be automatically converted to and from my real business objects on the server side. On the client side, they will only get to use the dumb proxy type, and they will have to map them into some real business objects as they see fit. A big drawback here is that if I "own" both the server and client side, and I want to use the same set of real business objects, I can run into certain headaches with name conflicts, etc. (Since the real objects and the proxies are named the same.) Forget trying to pass "real" business objects. Instead, just create simple DataTransfer objects which I will map back and forth to my real business objects manually. They still get copied to new proxy objects by wsdl.exe anyway, but at least I'm not tricking myself into thinking that web services can natively handle objects with business logic in them. By the way - Does anyone know how to tell wsdl.exe to not make a copy of the object? Shouldn't we be able to just tell it, "Hey, use this existing type right over here. Don't copy it!" Anyway, I've kinda settled on #2 for now, but I'm curious what you all think. I have a feeling there are way better ways to do this in general, and I may not even be totally accurate on all my points, so please let me know what your experiences have been. Update: I just found out that VS 2008 has an option to reuse existing types when adding a "Service Reference", rather than creating brand new identical type in the proxy file. Sweet. A: I'd do a hybrid. I would use an object like this public class TransferObject { public string Type { get; set; } public byte[] Data { get; set; } } then i have a nice little utility that serializes an object then compresses it. public static class CompressedSerializer { /// <summary> /// Decompresses the specified compressed data. /// </summary> /// <typeparam name="T"></typeparam> /// <param name="compressedData">The compressed data.</param> /// <returns></returns> public static T Decompress<T>(byte[] compressedData) where T : class { T result = null; using (MemoryStream memory = new MemoryStream()) { memory.Write(compressedData, 0, compressedData.Length); memory.Position = 0L; using (GZipStream zip= new GZipStream(memory, CompressionMode.Decompress, true)) { zip.Flush(); var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter(); result = formatter.Deserialize(zip) as T; } } return result; } /// <summary> /// Compresses the specified data. /// </summary> /// <typeparam name="T"></typeparam> /// <param name="data">The data.</param> /// <returns></returns> public static byte[] Compress<T>(T data) { byte[] result = null; using (MemoryStream memory = new MemoryStream()) { using (GZipStream zip= new GZipStream(memory, CompressionMode.Compress, true)) { var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter(); formatter.Serialize(zip, data); } result = memory.ToArray(); } return result; } } Then you'd just pass the transfer object that would have the type name. So you could do something like this [WebMethod] public void ReceiveData(TransferObject data) { Type originType = Type.GetType(data.Type); object item = CompressedSerializer.Decompress<object>(data.Data); } right now the compressed serializer uses generics to make it strongly typed, but you could make a method easily to take in a Type object to deserialize using originType above, all depends on your implementation. hope this gives you some ideas. Oh, and to answer your other question, wsdl.exe doesn't support reusing types, WCF does though. A: Darren wrote: I'd do a hybrid. I would use an object like this... Interesting idea... passing a serialized version of the object instead of the (wsdl-ed) object itself. In a way, I like its elegance, but in another way, it seems to defeat the purpose of exposing your web service to potential third parties or partners or whatever. How would they know what to pass? Would they have to rely purely on documentation? It also loses some of the "heterogeneous client" aspect, since the serialization is very .Net specific. I don't mean to be critical, I'm just wondering if what you're proposing is also meant for these types of use cases. I don't see anything wrong with using it in a closed environment though. I should look into WCF... I've been avoiding it, but maybe it's time. A: oh, for sure, i only do this when i'm the consumer of the webservice or if you have some sort of controller that they request an object from and then you handle the serialization and sending rather than them directly consuming the web service. But really, if they are directly consuming the webservice, then they wouldn't need or necessarily have the assembly that would have the type in it in the first place, and should be using the objects that wsdl generates. And yes, what i put forth is very .NET specific because i don't like to use anything else. The only other time i consume webservices outside of .net was in javascript, but now i only use json responses instead of xml webservice responses :) A: there is also an argument for separating the tiers - have a set of serializable objects that get passed to and from the web service and a translator to map and convert between that set and the business objects (which might have properties not suitable for passing over the wire) Its the approach favoured by the web service software factory service factory and means that you can change your business objects without breaking the web service interface/contract
What is your preferred method of sending complex data over a web service?
It's 2008, and I'm still torn on this one. So I'm developing a web method that needs a complex type passed into it and returned from it. The two options I'm toying with are: Pass and return actual business objects with both data and behavior. When wsdl.exe is run, it will automatically create proxy classes that contain just the data portion, and these will be automatically converted to and from my real business objects on the server side. On the client side, they will only get to use the dumb proxy type, and they will have to map them into some real business objects as they see fit. A big drawback here is that if I "own" both the server and client side, and I want to use the same set of real business objects, I can run into certain headaches with name conflicts, etc. (Since the real objects and the proxies are named the same.) Forget trying to pass "real" business objects. Instead, just create simple DataTransfer objects which I will map back and forth to my real business objects manually. They still get copied to new proxy objects by wsdl.exe anyway, but at least I'm not tricking myself into thinking that web services can natively handle objects with business logic in them. By the way - Does anyone know how to tell wsdl.exe to not make a copy of the object? Shouldn't we be able to just tell it, "Hey, use this existing type right over here. Don't copy it!" Anyway, I've kinda settled on #2 for now, but I'm curious what you all think. I have a feeling there are way better ways to do this in general, and I may not even be totally accurate on all my points, so please let me know what your experiences have been. Update: I just found out that VS 2008 has an option to reuse existing types when adding a "Service Reference", rather than creating brand new identical type in the proxy file. Sweet.
[ "I'd do a hybrid. I would use an object like this\npublic class TransferObject\n{\n public string Type { get; set; }\n public byte[] Data { get; set; }\n}\n\nthen i have a nice little utility that serializes an object then compresses it.\npublic static class CompressedSerializer\n{\n /// <summary>\n /// Decompresses the specified compressed data.\n /// </summary>\n /// <typeparam name=\"T\"></typeparam>\n /// <param name=\"compressedData\">The compressed data.</param>\n /// <returns></returns>\n public static T Decompress<T>(byte[] compressedData) where T : class\n {\n T result = null;\n using (MemoryStream memory = new MemoryStream())\n {\n memory.Write(compressedData, 0, compressedData.Length);\n memory.Position = 0L;\n\n using (GZipStream zip= new GZipStream(memory, CompressionMode.Decompress, true))\n {\n zip.Flush();\n var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();\n result = formatter.Deserialize(zip) as T;\n }\n }\n\n return result;\n }\n\n /// <summary>\n /// Compresses the specified data.\n /// </summary>\n /// <typeparam name=\"T\"></typeparam>\n /// <param name=\"data\">The data.</param>\n /// <returns></returns>\n public static byte[] Compress<T>(T data)\n {\n byte[] result = null;\n using (MemoryStream memory = new MemoryStream())\n {\n using (GZipStream zip= new GZipStream(memory, CompressionMode.Compress, true))\n {\n var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();\n formatter.Serialize(zip, data);\n }\n\n result = memory.ToArray();\n }\n\n return result;\n }\n}\n\nThen you'd just pass the transfer object that would have the type name. So you could do something like this\n[WebMethod]\npublic void ReceiveData(TransferObject data)\n{\n Type originType = Type.GetType(data.Type);\n object item = CompressedSerializer.Decompress<object>(data.Data);\n}\n\nright now the compressed serializer uses generics to make it strongly typed, but you could make a method easily to take in a Type object to deserialize using originType above, all depends on your implementation.\nhope this gives you some ideas. Oh, and to answer your other question, wsdl.exe doesn't support reusing types, WCF does though.\n", "\nDarren wrote: I'd do a hybrid. I would use an object like this...\n\nInteresting idea... passing a serialized version of the object instead of the (wsdl-ed) object itself. In a way, I like its elegance, but in another way, it seems to defeat the purpose of exposing your web service to potential third parties or partners or whatever. How would they know what to pass? Would they have to rely purely on documentation? It also loses some of the \"heterogeneous client\" aspect, since the serialization is very .Net specific. I don't mean to be critical, I'm just wondering if what you're proposing is also meant for these types of use cases. I don't see anything wrong with using it in a closed environment though.\nI should look into WCF... I've been avoiding it, but maybe it's time.\n", "oh, for sure, i only do this when i'm the consumer of the webservice or if you have some sort of controller that they request an object from and then you handle the serialization and sending rather than them directly consuming the web service. 
But really, if they are directly consuming the webservice, then they wouldn't need or necessarily have the assembly that would have the type in it in the first place, and should be using the objects that wsdl generates.\nAnd yes, what i put forth is very .NET specific because i don't like to use anything else. The only other time i consume webservices outside of .net was in javascript, but now i only use json responses instead of xml webservice responses :)\n", "there is also an argument for separating the tiers - have a set of serializable objects that get passed to and from the web service and a translator to map and convert between that set and the business objects (which might have properties not suitable for passing over the wire)\nIts the approach favoured by the web service software factory service factory and means that you can change your business objects without breaking the web service interface/contract\n" ]
[ 4, 1, 1, 1 ]
[]
[]
[ ".net", "soap", "web_services", "wsdl" ]
stackoverflow_0000012982_.net_soap_web_services_wsdl.txt
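A hypothetical client-side counterpart to the TransferObject/CompressedSerializer pair from the first answer above; Order and MyServiceProxy are made-up names, and the payload type would need to be marked [Serializable] for BinaryFormatter to handle it.

var order = new Order { Id = 42 };
var transfer = new TransferObject
{
    // AssemblyQualifiedName lets the server side re-resolve the type via Type.GetType
    Type = typeof(Order).AssemblyQualifiedName,
    Data = CompressedSerializer.Compress(order)
};
new MyServiceProxy().ReceiveData(transfer);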
Q: Interlocked.Exchange, but not for booleans? Is there an equivalent for Interlocked.Exchange for boolean? Such as an atomic exchange of values that returns the previous value and doesn't require locks? A: No; use integers instead of booleans. In principle such a thing could be written (cmpxchg, the underlying processor instruction, can operate on 8, 16, 32, and 64-bit operands on x86, 8, 16, 32, 64, and 128-bit operands on x64), but in practice most APIs stick to pointer and double pointer (32 and 64-bit on x86, 64 and 128-bit on x64) operands, because they're all you really need.
Interlocked.Exchange, but not for booleans?
Is there an equivalent for Interlocked.Exchange for boolean? Such as an atomic exchange of values that returns the previous value and doesn't require locks?
[ "No; use integers instead of booleans.\nIn principle such a thing could be written (cmpxchg, the underlying processor instruction, can operate on 8, 16, 32, and 64-bit operands on x86, 8, 16, 32, 64, and 128-bit operands on x64), but in practice most APIs stick to pointer and double pointer (32 and 64-bit on x86, 64 and 128-bit on x64) operands, because they're all you really need.\n" ]
[ 8 ]
[]
[]
[ ".net", "multithreading" ]
stackoverflow_0000019713_.net_multithreading.txt
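One way to act on the "use integers instead of booleans" advice above is to wrap an int flag behind a bool-shaped API. A minimal sketch, not a hardened synchronization primitive:

using System.Threading;

class AtomicBoolean
{
    private int _value; // 0 == false, 1 == true

    // Atomically swaps in the new value and reports what was there before.
    public bool Exchange(bool newValue)
    {
        return Interlocked.Exchange(ref _value, newValue ? 1 : 0) == 1;
    }

    public bool Value
    {
        get { return Thread.VolatileRead(ref _value) == 1; }
    }
}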
Q: Is Visual C++ memory managed by the Dot Net framework Recently, I've been dealing with an error with accessing MAPI via the .NET framework (as described in this article). I am now left with a series of memory access violation errors. To get past the issues, I have been trying to use this 3rd party component, which has a Visual C++ core. Unfortunately - we are still having the same errors. I've personally never used Visual C++, but my question is: if the C++ library is compiled using Visual Studio 2005, using Visual C++ - does the memory of the project become managed by the .NET framework, as well, which would therefore make it subject to the same issues as the .NET libraries we're using? Or am I barking up the wrong tree? A: The two previous answers have mentioned "Managed C++", this is an old bolt-on that they did to allow you to use managed C++ in a .NET environment. It wasn't a first class citizen - unlike C++/CLI. But to answer your original question, no, Visual C++ is not managed by the .NET runtime. Managed C++ & C++/CLI are. A: Unless you are using Managed C++ (which it doesn't sound like you are) then no, the memory is not managed by the CLR. The recommended method of talking to Exchange in .Net is via WebDAV. A: I'm not entirely sure what you're asking, but i'll give it a shot. Visual C++ is a pure C/C++ compiler so has none of .NET's memory management, nor any of its runtime -- You have to manually call new and delete. .NET also provides C++/CLI, which is a slightly modified version of C++ that targets the .NET runtime, and is GC aware -- eg. its memory is managed by the .NET runtime. Without more details about your bug I can't really make any suggestions, beyond suggesting that you make sure you use the appropriate GC guards, and provide finalizers in any place they are needed.
Is Visual C++ memory managed by the Dot Net framework
Recently, I've been dealing with an error with accessing MAPI via the .NET framework (as described in this article). I am now left with a series of memory access violation errors. To get past the issues, I have been trying to use this 3rd party component, which has a Visual C++ core. Unfortunately - we are still having the same errors. I've personally never used Visual C++, but my question is: if the C++ library is compiled using Visual Studio 2005, using Visual C++ - does the memory of the project become managed by the .NET framework, as well, which would therefore make it subject to the same issues as the .NET libraries we're using? Or am I barking up the wrong tree?
[ "The two previous answers have mentioned \"Managed C++\", this is an old bolt-on that they did to allow you to use managed C++ in a .NET environment. It wasn't a first class citizen - unlike C++/CLI (link text. But to answer your original question, no, Visual C++ is not managed by the .NET runtime. Managed C++ & C++/CLI are.\n", "Unless you are using Managed C++ (which it doesn't sound like you are) then no, the memory is not managed by the CLR.\nThe recommended method of talking to Exchange in .Net is via WebDAV.\n", "I'm not entirely sure what you're asking, but i'll give it a shot.\nVisual C++ is a pure C/C++ compiler so has none of .NET's memory management, nor any of its runtime -- You have to manually call new and delete.\n.NET also provides C++/CLI, which is a slightly modified version of C++ that targets the .NET runtime, and is GC aware -- eg. its memory is managed by the .NET runtime.\nWithout more details about your bug I can't really make any suggestions, beyond suggesting that you make sure you use the appropriate GC guards, and the provide finalizers in any place they are needed.\n" ]
[ 1, 0, 0 ]
[]
[]
[ ".net", "memory", "visual_c++" ]
stackoverflow_0000019653_.net_memory_visual_c++.txt
Q: How do I unregister COM dlls initially added with RegSvr32 when the /u arg doesn't work? Right, initially ran: c:\regsvr32 Amazing.dll then, (accidentally - I might add) I must have run it again, and (indeed) again when new versions of 'Amazing.dll' were released. Yes - I know now I should've run: c:\regsvr32 /u Amazing.dll beforehand - but hey! I forgot. To cut to the chase, when adding the COM reference in VS, I can see 3 instances of 'Amazing' all pointing to the same location (c:\Amazing.dll); running regsvr32 /u removes one of the references, the second time - does nothing... How do I get rid of these references? Am I looking at a regedit scenario? - If so - what exactly happens if I delete one of the keys??? Cheers A: Your object's GUIDs should not be changing. In other words, once you register the COM object, re-registering shouldn't be adding anything additional to the registry. Unless you added additional COM interfaces or objects to the project. In any case, if this is a one-time deal (and it sounds like it is), open regedit and delete the unneeded keys manually. A: There's a tool by MS that is still floating around and has been since Win95 days which scans the registry and does stuff like finds COM keys that aren't pointing at a valid file anymore etc called RegClean (I found it here: http://downloads.zdnet.com/abstract.aspx?assetid=881470&node=2094) which I've seen some places still using, particularly when messing with legacy COM stuff in VB, which generates new COM GUIDs after every build. So if you got that, then unreg'd and deleted or moved the file, run the app and it will clean out the "orphaned" entries. If you do decide to remove the keys using RegEdit, you might need to remove the class ids as well as the guid entries. A: I've got myself into a horrible mess with COM before. I had to pick my way through the registry deleting each reference, unfortunately.
How do I unregister COM dlls initially added with RegSvr32 when the /u arg doesn't work?
Right, initially ran: c:\regsvr32 Amazing.dll then, (accidentally - I might add) I must have run it again, and (indeed) again when new versions of 'Amazing.dll' were released. Yes - I know now I should've run: c:\regsvr32 /u Amazing.dll beforehand - but hey! I forgot. To cut to the chase, when adding the COM reference in VS, I can see 3 instances of 'Amazing' all pointing to the same location (c:\Amazing.dll); running regsvr32 /u removes one of the references, the second time - does nothing... How do I get rid of these references? Am I looking at a regedit scenario? - If so - what exactly happens if I delete one of the keys??? Cheers
[ "Your object's GUID's should not be changing. In other words, once you register the COM object, re-registering shouldn't be adding anything additional to the registry.\nUnless you added additional COM interfaces or objects to the project.\nIn any case, if this is a one time deal (and it sounds like it is), open regedit and delete the unneeded keys manually.\n", "There's a tool by MS that is still floating around and has been since Win95 days which scans the registry and does stuff like finds COM keys that aren't pointing at a valid file anymore etc called RegClean (I found it here: http://downloads.zdnet.com/abstract.aspx?assetid=881470&node=2094) which I've seen some places still using particularly when messing with legacy COM stuff in VB which are generating new COM GUIDs after every build.\nSo if you got that, then unreg'd and deleted or moved the file, run the app and it will clean out the \"orphaned\" entries.\nIf you do decide to remove the keys using RegEdit, you might need to remove the class ids as well as the guid entries. \n", "I've got myself into a horrible mess with COM before. I had to pick my way though the registry deleting each reference, unfortunately.\n" ]
[ 14, 4, 0 ]
[]
[]
[ "com", "dllregistration", "regsvr32", "visual_studio" ]
stackoverflow_0000019725_com_dllregistration_regsvr32_visual_studio.txt
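As a rough illustration of the manual cleanup the answers above describe, here is a read-only C# sketch (a helper I'm assuming for illustration, not part of any tool mentioned in the thread) that lists CLSID entries whose InprocServer32 default value points at a given DLL, so you can see which keys regsvr32 /u left behind. Deleting the keys is left to you, at your own risk, as the answers note:

using System;
using Microsoft.Win32;

class FindComRegistrations
{
    static void Main()
    {
        string target = @"c:\Amazing.dll"; // path from the question
        using (RegistryKey clsids = Registry.ClassesRoot.OpenSubKey("CLSID"))
        {
            if (clsids == null) return;
            foreach (string clsid in clsids.GetSubKeyNames())
            {
                using (RegistryKey server = clsids.OpenSubKey(clsid + @"\InprocServer32"))
                {
                    if (server == null) continue;
                    // The default (unnamed) value of InprocServer32 holds the DLL path.
                    string path = server.GetValue(null) as string;
                    if (string.Equals(path, target, StringComparison.OrdinalIgnoreCase))
                        Console.WriteLine(clsid);
                }
            }
        }
    }
}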
Q: Local Currency String conversion I am maintaining an app for a client that is used in two locations. One in England and one in Poland. The database is stored in England and uses the format £1000.00 for currency, but the information is being gathered locally in Poland where 1000,00 is the format. My question is, in VB6 is there a function that takes a currency string in a local format and converts to another, or will I just have to parse the string and replace , or . ? BTW I have looked at CCur, but not sure if that will do what I want. A: The data is not actually stored as the string "£1000.00"; it's stored in some numeric format. Sidebar: Usually databases are set up to store money amounts using either the decimal data type (also called money in some DBs), or as a floating point number (also called double). The difference is that when it's stored as decimal certain numbers like 0.01 are represented exactly, whereas in double those numbers can only be stored approximately, causing rounding errors. The database appears to be storing the number as "£1000.00" because something is formatting it for display. In VB6, there's a function FormatCurrency which would take a number like 1000 and return a string like "£1000.00". You'll notice that the FormatCurrency function does not take an argument specifying what type of currency to use. That's because it, along with all the other locale-specific functions in VB, figures out the currency from the current locale of the system (from the Windows Control Panel). That means that on my system, Debug.Print FormatCurrency(1000) will print $1,000.00, but if I run that same program on a Windows computer set to the UK locale, it will probably print £1,000.00, which, of course, is something completely different. Similarly, you've got some code, somewhere, I can't tell where, in Poland, it seems, that is responsible for parsing the user's string and converting it to a number. And if that code is in Visual Basic, again, it's relying on the control panel to decide whether "." or "," is the thousands separator and whether "," or "." is the decimal point. The function CDbl converts its argument to a number. So for example on my system in the US Debug.Print CDbl("1.200") produces the number one point two; on a system with the Control Panel set to European formatting, it would produce the number one thousand, two hundred. It's possible that the problem is that you have someone sitting at a computer with the regional control panel set to use "." as the decimal separator, but they're typing "," as the decimal separator. A: What database are you using? And what data type is the amount stored in? As long as you are always converting from one format to another, you do not need to do any parsing, just replace "." with "," or the other way around. You may need to remove the "£"-sign as well if that is stored in your string. A: There's probably a correct answer dealing with culture objects and such, but the easiest way would be to take the input from the Polish input, and replace the , with a ., and then store it in your database as type "money" or "decimal". If you know they (possibly configurable per user) are always entering numbers in either Polish or English, you could have a function that you run all the input numbers through to convert the string to a proper "decimal" typed variable. Also, for display purposes you could run it through another similar function to ensure that the user always sees the number format they are comfortable with. 
The key here is to switch it to a decimal as soon as you get it from the user, and only switch it back to a string at the last step before sending it out to the user. A: @KiwiBastard yes, I would think so. Are you storing your amount in an "(n)varchar" field or are you using a currency/decimal type field? If the latter is the case, the currency symbols and separators are added by your client, and there would be no need to replace anything in the database.
Local Currency String conversion
I am maintaining an app for a client that is used in two locations. One in England and one in Poland. The database is stored in England and uses the format £1000.00 for currency, but the information is being gathered locally in Poland where 1000,00 is the format. My question is, in VB6 is there a function that takes a currency string in a local format and converts to another, or will I just have to parse the string and replace , or . ? BTW I have looked at CCur, but not sure if that will do what I want.
[ "The data is not actually stored as the string \"£1000.00\"; it's stored in some numeric format.\n\nSidebar: Usually databases are set up to store money amounts using either the decimal data type (also called money in some DBs), or as a floating point number (also called double).\nThe difference is that when it's stored as decimal certain numbers like 0.01 are represented exactly whereas in double those numbers can only be stored approximately, causing rounding errors.\n\nThe database appears to be storing the number as \"£1000.00\" because something is formatting it for display. In VB6, there's a function FormatCurrency which would take a number like 1000 and return a string like \"£1000.00\".\nYou'll notice that the FormatCurrency function does not take an argument specifying what type of currency to use. That's because it, along with all the other locale-specific functions in VB, figures out the currency from the current locale of the system (from the Windows Control Panel).\nThat means that on my system,\nDebug.Print FormatCurrency(1000)\n\nwill print $1,000.00, but if I run that same program on a Windows computer set to the UK locale, it will probably print £1,000.00, which, of course, is something completely different.\nSimilarly, you've got some code, somewhere, I can't tell where, in Poland, it seems, that is responsible for parsing the user's string and converting it to a number. And if that code is in Visual Basic, again, it's relying on the control panel to decide whether \".\" or \",\" is the thousands separator and whether \",\" or \".\" is the decimal point.\nThe function CDbl converts its argument to a number. So for example on my system in the US\nDebug.Print CDbl(\"1.200\")\n\nproduces the number one point two, on a system with the Control Panel set to European formatting, it would produce the number one thousand, two hundred.\nIt's possible that the problem is that you have someone sitting a computer with the regional control panel set to use \".\" as the decimal separator, but they're typing \",\" as the decimal separator.\n", "What database are you using? And what data type is the amount stored in?\nAs long as you are always converting from one format to another, you do not need to do any parsing, just replace \".\" with \",\" or the other way around. You may need to remove the \"£\"-sign as well if that is stored in your string.\n", "There's probably a correct answer dealing with culture objects and such, but the easiest way would be to taken the input from the polish input, and replace the , with a ., and then store it in your database as type \"money\" or \"decimal\". If you know they (possibly configurable per user) are always entering numbers in either Polish or English, you could have a function that you run all the input numbers through to convert the string to a proper \"decimal\" typed variable. Also, for display purposes you could run it through another similar function to ensure that the user always sees the number format they are comfortable with. The key here is to switch it to a decimal as soon as you get it from the user, and only switch it back to a string at the last step before sending it out to the user.\n", "@KiwiBastard yes i would think so. Are you storing your amount in an \"(n)varchar\" field or are you using a currency/decimal type field? If the latter is the case, the currency-symbols and separators are added by your client, and there would be no need to replace anything in the database.\n" ]
[ 8, 0, 0, 0 ]
[]
[]
[ "internationalization", "localization", "vb6" ]
stackoverflow_0000019786_internationalization_localization_vb6.txt
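The question is about VB6, but the locale-sensitive round trip the first answer describes is easy to illustrate in .NET: parse with the culture the data was entered in, format with the culture you display in. A small hedged C# sketch (the two culture names are my assumption from the question's two sites):

using System;
using System.Globalization;

class CurrencyRoundTrip
{
    static void Main()
    {
        CultureInfo polish = CultureInfo.GetCultureInfo("pl-PL");
        CultureInfo british = CultureInfo.GetCultureInfo("en-GB");

        // "1000,00" as typed in Poland parses to the number 1000.00...
        decimal amount = decimal.Parse("1000,00", NumberStyles.Number, polish);

        // ...and formats as "£1,000.00" for the English site.
        Console.WriteLine(amount.ToString("C", british));
    }
}

In VB6 the same idea applies, except CDbl and FormatCurrency pick the culture up from the Control Panel rather than taking it as a parameter, which is exactly the pitfall the first answer warns about.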
Q: What is the best strategy for retainment of large data sets? I'm leading a project where we'll be recording metrics data. I'd like to retain the data for years. However, I'd also like to keep the primary table from becoming bloated with data that, while necessary for long term trending, isn't required for short term reporting. What is the best strategy for handling this situation? Simply archive the old data to another table? Or "roll it up" via some consolidation of the data itself (and then store it off to a different table)? Or something else entirely? Additional info: we are using SQL Server 2005. A: We use both methods at my work, but slightly differently: we keep all sales data in the primary table for 30 days, then at night (part of the nightly jobs) the day's sales are rolled up into summaries (n qty of x product sold today etc.) in a separate table for reporting reasons, and sales over 30 days are archived into a different database, then once a year (we go on tax years) a new archive database is started. Not exactly perfect, but... this way we get the summaries data fast, keep all current sales data at hand and have unlimited space for the detailed archive data. We did try keeping it all in one database (in different tables) but the file size of the database (Interbase) would grow so large that it would drag the system down. The only real problem we have is accessing detailed data that spans several databases, as connecting and disconnecting is slow, and analysis has to be done in code rather than SQL. A: If you are using SQL server 2005, this may be a good candidate for using partitioned tables. A: @Jason - I don't see how keeping data in plain old text files will allow you to do long term trending analysis easily on the data. @Jason - I guess my point is that if any sort of ad-hoc analysis (i.e. trending) needs to be done on the data by business people, rolling up or archiving the data to text files really doesn't solve any problems. Of course writing code to consume a text file is easy in many languages, but that problem has been solved. Also, I would argue that today's RDBMS's are all extremely durable when setup and maintained properly. If they weren't why would you run a business on top of one (let alone archive data to it)? I just don't see the point of archiving to a plain text file because of the claim that durability of text files is superior to that of databases. A: Depending on constraints like budget, etc, this sounds like a perfect candidate for a data warehouse application. This would typically introduce a new server for use as a data warehouse. SQL Server 2005 supports a lot of this activity out of the box, further you might be able to utilize additional SQL Server services (e.g. Analysis Services, Reporting Services) to provide additional value to your users. (see http://www.microsoft.com/technet/prodtechnol/sql/2005/dwsqlsy.mspx) A: Either of those options is excellent, but it really depends on the problem domain. For things like cash balances or statistical data, I think that rolling up records and consolidating them is the best way; you can then move the rolled up records into a parallel archive table, keying them in such a way that you can "unroll" if necessary. This keeps your primary data table clean and quick, but allows you to retain the extra data for auditing or whatever. The key question is, how do you implement the "roll-up" process. Either automatically, via a trigger or server side process, or by user intervention at the application level?
What is the best strategy for retainment of large data sets?
I'm leading a project where we'll be recording metrics data. I'd like to retain the data for years. However, I'd also like to keep the primary table from becoming bloated with data that, while necessary for long term trending, isn't required for short term reporting. What is the best strategy for handling this situation? Simply archive the old data to another table? Or "roll it up" via some consolidation of the data itself (and then store it off to a different table)? Or something else entirely? Additional info: we are using SQL Server 2005.
[ "We use both methods at my work, but slightly different, we keep all sales data in the primary table for 30 days, then at night (part of the nightly jobs) the days sales are rolled up into summaries (n qty of x product sold today ect) in a separate table for reporting reasons, and sales over 30 days are archived into a different database, then once a year (we go on tax years) a new archive database is started. not exactly perfect but..\nthis way we get the summaries data fast, keep all current sales data at hand and have an unlimited space for the detailed archive data. we did try keeping it all in one database (in different tables) but the file size of the database (interbase) would grow so large that it would drag the system down.\nthe only real problem we have is accessing detailed data that spans several database, as connecting and disconnecting is slow, and analysis has to be done in code rather than sql\n", "If you are using SQL server 2005, this may be a good candidate for using partitioned tables.\n", "@Jason - I don't see how keeping data in plain old text files will allow you to do long term trending analysis easily on the data.\n@Jason - I guess my point is that if any sort of ad-hoc analysis (i.e. trending) needs to be done on the data by business people, rolling up or archiving the data to text files really doesn't solve any problems. Of course writing code to consume a text file is easy in many languages, but that problem has been solved. Also, I would argue that today's RDBMS's are all extremely durable when setup and maintained properly. If they weren't why would you run a business on top of one (let alone archive data to it)? I just don't see the point of archiving to a plain text file because of the claim that durability of text files is superior to that of databases.\n", "Depending on constraints like budget, etc, this sound like a perfect candidate for a data warehouse application. This would typically introduce a new server for use as a data warehouse. SQL Server 2005 supports a lot of this activity out of the box, further you might be able to utilize additional SQL Server services (e.g. Analysis Services, Reporting Services) to provide additional value to your users. (see http://www.microsoft.com/technet/prodtechnol/sql/2005/dwsqlsy.mspx)\n", "Either of those options are excellent, but it really depends on the problem domain. For things like cash balances or statistical data, I think that rolling up records and consolidating them is the best way, you can then move the rolled up records into a parallel archive table, keying them in such a way that you can \"unroll\" if necessary. This keeps your primary data table clean and quick, but allows you to retain the extra data for auditing or whatever. The key question is, how do you implement the \"roll-up\" process. Either automatically, via a trigger or server side process, or by user intervention at the application level?\n" ]
[ 4, 4, 2, 2, 1 ]
[]
[]
[ "database_design", "dataset" ]
stackoverflow_0000019728_database_design_dataset.txt
Q: How do I redirect a user to a custom 404 page in ASP.NET MVC instead of throwing an exception? I want to be able to capture the exception that is thrown when a user requests a non-existent controller and re-direct it to a 404 page. How can I do this? For example, the user requests http://www.nosite.com/paeges/1 (should be /pages/). How do I make it so they get re-directed to the 404 rather than the exception screen? A: Just use a route: // We couldn't find a route to handle the request. Show the 404 page. routes.MapRoute("Error", "{*url}", new { controller = "Error", action = "404" } ); Since this will be a global handler, put it all the way at the bottom under the Default route. A: Take a look at this page for routing your 404-errors to a specified page. A: Found this on the same site - Strategies for Resource based 404s
How do I redirect a user to a custom 404 page in ASP.NET MVC instead of throwing an exception?
I want to be able to capture the exception that is thrown when a user requests a non-existent controller and re-direct it to a 404 page. How can I do this? For example, the user requests http://www.nosite.com/paeges/1 (should be /pages/). How do I make it so they get re-directed to the 404 rather than the exception screen?
[ "Just use a route:\n// We couldn't find a route to handle the request. Show the 404 page.\nroutes.MapRoute(\"Error\", \"{*url}\",\n new { controller = \"Error\", action = \"404\" }\n);\n\nSince this will be a global handler, put it all the way at the bottom under the Default route.\n", "Take a look at this page for routing your 404-errors to a specified page.\n", "Found this on the same site - Strategies for Resource based 404s\n" ]
[ 16, 6, 1 ]
[]
[]
[ "asp.net_mvc", "exception", "routes" ]
stackoverflow_0000019941_asp.net_mvc_exception_routes.txt
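For completeness, a sketch of the controller that the catch-all route in the first answer could dispatch to, written against ASP.NET MVC 1.0-style APIs. Since a C# method can't be named 404, an [ActionName] attribute maps the route's action name onto it; the view name here is an assumption:

using System.Web.Mvc;

public class ErrorController : Controller
{
    [ActionName("404")]
    public ActionResult NotFound404()
    {
        Response.StatusCode = 404;  // keep the real HTTP status, not just a pretty page
        return View("NotFound");    // assumed view at Views/Error/NotFound.aspx
    }
}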
Q: Introducing Python The company I used to work with has two developers working fulltime, and a handful of freelancers. They're in the process of hiring a new lead developer to try to bring order and management to the development. But, currently, one of the developers has seen the light of Django (the company has only developed in PHP to date) while the other developer is concerned that introducing a new language (Python) is a bad idea right now. How should they approach introducing this new technology? Obviously with only one of the developers actually knowing Python, there will be no redundancy when that dev is away or leaves the company. Should they bother to introduce Python, or should they look for PHP-only solutions until such a time when the team actually have more than one Pythonion? Without a team leader, the decisions are having to fall to them. A: I recently introduced Python to my company, which does consulting work for the Post Office. I did this by waiting until there was a project for which I would be the only programmer, then getting permission to do this new project in Python. I then did another small project in Python with similarly impressive results. In addition, I used Python for all of my small throwaway assignments ("can you parse the stats in these files into a CSV file organized by date and site?", etc) and had a quick turnaround time on all of them. I also evangelized Python a bit; I went out of my way to NOT be obnoxious about it, but I'd occasionally describe why I liked it so much, talked about the personal projects I use it for in my free time and why it's awesome for me, etc. Eventually we started another project and I convinced everyone to use Python for it. I took care to point everyone to a lot of documentation, including the specific webpages relating to what they were working on, and every time they had a question, I'd explain how to do things properly by explaining the Pythonic approach to things, etc. This has worked really well. However, this might be somewhat different than what you're describing. In my case I started with moderately small projects and Python is only being used for new projects. Also, none of my co-workers were really Perl or PHP gurus; they all knew those languages and had been using them for awhile, but it didn't take much effort for them to become more productive in Python than they'd been before. So if you're talking about new projects with people who currently use PHP but aren't super-experts and don't love that language, then I think switching to Python is a no-brainer. However, if you're talking about working with a large existing PHP code base with a lot of very experienced PHP programmers who are happy with their current setup, then switching languages is probably not a good idea. You're probably somewhere in between, so you'll have to weigh the tradeoffs; hopefully my answer will help you do that. A: If the mandate of the new lead is to put the house in order, the current situation should likely be simplified as much as possible prior. If I had to bring things to order, I wouldn't want to have to manage an ongoing language conversion project on top of everything else, or at least I'd like some choice when initiating the project. When making your recommendation, did you think about the additional managerial complexity that coming into the middle of a conversion would entail? A: @darkdog: Using a new language in production code is about more than easy syntax and high-level capability. 
You want to be familiar with core APIs and feel like you can fix something through logic instead of having to comb through the documentation. I'm not saying transitioning to Python would be a bad idea for this company, but I'm with John--keep things simple during the transition. The new lead will appreciate having a say in such decisions. If you'd really, really, really like to introduce Python, consider writing some extensions or utilities in straight-up Python or in the framework. You won't be upsetting your core initiatives, so it will be a low/no-risk opportunity to prove the merits of a switch. A: I think the language itself is not an issue here, as python is really nice high level language with good and easy to find, thorough documentation. From what I've seen, the Django framework is also a great tooklit for web development, giving much the same developer performance boost Rails is touted to give. The real issue is at the maintenance and management level. How will this move fragment the maintenance between PHP and Python code. Is there a need to migrate existing code from one platform to another? What problems will adopting Python and Django solve that you have in your current development workflow and frameworks, etc. A: It's really all about schedules. To me the break should be with a specific project. If you decide your direction is Django then start new projects with that. Before you start a new project with a new language/framework, either make sure that you have scheduled time to get up to speed in this new direction, or get up to speed before using on new projects. I would avoid going with a tool of the month. Make sure you want it to be your direction and commit some time/resources to learning enough to make a good decision. A: Well, python is a high level language.. its not hard to learn and if the guys already have programming knowledge it should be much easier to learn.. i like django.. i think it should be a nice try to use django .. A: I don't think it's a matter of a programming language as such. What is the proficiency level of PHP in the team you're talking about? Are they doing spaghetti code or using some structured framework like Zend? If this is the first case then I absolutely understand the guy's interest in Python and Django. It this is the latter, it's just a hype. A: I love Python and Django, and use both to develop the our core webapps. That said, it's hard to make a business case for switching at this point. Specifically: Any new platform is risky compared to staying with the tried and true You'll have the developer fragmentation you mentioned It's far easier to find PHP programmers than python programmers Moreover, as other posters have mention, if the issue is more with spaghetti code than PHP itself, there are plenty of nice PHP frameworks that could be used to refactor the code. That said, if this developer is excited about python, stopping them outright is probably demoralizing. My suggestion would be to encourage them to develop in python, but not the mission critical parts of the app. Instead they could write some utility scripts, some small internal application that needs doing, etc. In conclusion: I don't recommend switching from PHP, but I do recommend accommodating the developer's interest in some way at work.
Introducing Python
The company I used to work with has two developers working fulltime, and a handful of freelancers. They're in the process of hiring a new lead developer to try to bring order and management to the development. But, currently, one of the developers has seen the light of Django (the company has only developed in PHP to date) while the other developer is concerned that introducing a new language (Python) is a bad idea right now. How should they approach introducing this new technology? Obviously with only one of the developers actually knowing Python, there will be no redundancy when that dev is away or leaves the company. Should they bother to introduce Python, or should they look for PHP-only solutions until such a time when the team actually have more than one Pythonion? Without a team leader, the decisions are having to fall to them.
[ "I recently introduced Python to my company, which does consulting work for the Post Office. I did this by waiting until there was a project for which I would be the only programmer, then getting permission to do this new project in Python. I then did another small project in Python with similarly impressive results. In addition, I used Python for all of my small throwaway assignments (\"can you parse the stats in these files into a CSV file organized by date and site?\", etc) and had a quick turnaround time on all of them.\nI also evangelized Python a bit; I went out of my way to NOT be obnoxious about it, but I'd occasionally describe why I liked it so much, talked about the personal projects I use it for in my free time and why it's awesome for me, etc.\nEventually we started another project and I convinced everyone to use Python for it. I took care to point everyone to a lot of documentation, including the specific webpages relating to what they were working on, and every time they had a question, I'd explain how to do things properly by explaining the Pythonic approach to things, etc.\nThis has worked really well. However, this might be somewhat different than what you're describing. In my case I started with moderately small projects and Python is only being used for new projects. Also, none of my co-workers were really Perl or PHP gurus; they all knew those languages and had been using them for awhile, but it didn't take much effort for them to become more productive in Python than they'd been before.\nSo if you're talking about new projects with people who currently use PHP but aren't super-experts and don't love that language, then I think switching to Python is a no-brainer. However, if you're talking about working with a large existing PHP code base with a lot of very experienced PHP programmers who are happy with their current setup, then switching languages is probably not a good idea. You're probably somewhere in between, so you'll have to weigh the tradeoffs; hopefully my answer will help you do that.\n", "If the mandate of the new lead is to put the house in order, the current situation should likely be simplified as much as possible prior. If I had to bring things to order, I wouldn't want to have to manage an ongoing language conversion project on top of everything else, or at least I'd like some choice when initiating the project. When making your recommendation, did you think about the additional managerial complexity that coming into the middle of a conversion would entail?\n", "@darkdog:\nUsing a new language in production code is about more than easy syntax and high-level capability. You want to be familiar with core APIs and feel like you can fix something through logic instead of having to comb through the documentation.\nI'm not saying transitioning to Python would be a bad idea for this company, but I'm with John--keep things simple during the transition. The new lead will appreciate having a say in such decisions.\nIf you'd really, really, really like to introduce Python, consider writing some extensions or utilities in straight-up Python or in the framework. 
You won't be upsetting your core initiatives, so it will be a low/no-risk opportunity to prove the merits of a switch.\n", "I think the language itself is not an issue here, as python is really nice high level language with good and easy to find, thorough documentation.\nFrom what I've seen, the Django framework is also a great tooklit for web development, giving much the same developer performance boost Rails is touted to give.\nThe real issue is at the maintenance and management level.\nHow will this move fragment the maintenance between PHP and Python code. Is there a need to migrate existing code from one platform to another? What problems will adopting Python and Django solve that you have in your current development workflow and frameworks, etc.\n", "It's really all about schedules. To me the break should be with a specific project. If you decide your direction is Django then start new projects with that. Before you start a new project with a new language/framework, either make sure that you have scheduled time to get up to speed in this new direction, or get up to speed before using on new projects.\nI would avoid going with a tool of the month. Make sure you want it to be your direction and commit some time/resources to learning enough to make a good decision.\n", "Well, python is a high level language.. its not hard to learn and if the guys already have programming knowledge it should be much easier to learn.. i like django.. i think it should be a nice try to use django .. \n", "I don't think it's a matter of a programming language as such. \nWhat is the proficiency level of PHP in the team you're talking about? Are they doing spaghetti code or using some structured framework like Zend? If this is the first case then I absolutely understand the guy's interest in Python and Django. It this is the latter, it's just a hype.\n", "I love Python and Django, and use both to develop the our core webapps.\nThat said, it's hard to make a business case for switching at this point. Specifically:\n\nAny new platform is risky compared to staying with the tried and true\nYou'll have the developer fragmentation you mentioned\nIt's far easier to find PHP programmers than python programmers\n\nMoreover, as other posters have mention, if the issue is more with spaghetti code than PHP itself, there are plenty of nice PHP frameworks that could be used to refactor the code.\nThat said, if this developer is excited about python, stopping them outright is probably demoralizing. My suggestion would be to encourage them to develop in python, but not the mission critical parts of the app. Instead they could write some utility scripts, some small internal application that needs doing, etc.\nIn conclusion: I don't recommend switching from PHP, but I do recommend accommodating the developer's interest in some way at work.\n" ]
[ 15, 4, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "php", "python" ]
stackoverflow_0000019654_php_python.txt
Q: How do I stop MS Graph component popping up during Interop? When using Office Interop in C#, if you insert a chart object into a MS Word document, the Graph application loads up very briefly and then goes away. Is there a way to prevent this from happening? I have tried setting the Visible property of the application instance to false to no effect. EDIT: The Visible property does take effect when used against Word when interopping, and it does not pop up. I would expect there is a similar way to do this for MS Graph. A: This is common behaviour for a lot of components hosted in an executable binary. The host application will start up and then do the job. I don't know if there is a surefire way to prevent that since you have no control over the component nor over the process until the application is started and is responding. A hack I tried in the past (for something totally unrelated) was starting a process and constantly detecting if its main window was created. As soon as it was created, I was hiding it. You could do this with the main module of the faulty application and hope it will be fast enough to hide the window before the user notices. Then you instantiate your component; the component will usually recycle an existing process, hopefully the one with the hidden main window. I can't guarantee you this will work in your situation, but it's worth a try if the issue is that important, or if you don't find a better way of course.
How do I stop MS Graph component popping up during Interop?
When using Office Interop in C#, if you insert a chart object into a MS Word document, the Graph application loads up very briefly and then goes away. Is there a way to prevent this from happening? I have tried setting the Visible property of the application instance to false to no effect. EDIT: The Visible property does take effect when used against Word when interopping, and it does not pop up. I would expect there is a similar way to do this for MS Graph.
[ "This is common behaviour for a lot of component hosted in an executable binary. The host application will startup and then do the job. I don't know if there is a surefire way to prevent that since you have no control over the component nor over the process until the application is started and is responding.\nA hack I tried in the past (for something totally unrelated) was starting a process and constantly detecting if its main windows was created. As soon as it was created, I was hiding it. You could do this with the main module of the faulty application and hope it will be fast enough to hide the window before the user notices. Then you instanciate your component; the component will usually recycle an existing process, hopefuly the one with the hidden main window.\nI can't garentee you this will work in your situation, but it's worth a try it the issue is that important, or if you don't find a better way of course.\n" ]
[ 1 ]
[]
[]
[ "c#", "interop", "ms_office" ]
stackoverflow_0000019953_c#_interop_ms_office.txt
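A rough sketch of the hide-the-window hack the answer describes, using the real user32 FindWindow/ShowWindow APIs. The window caption "Microsoft Graph" is a guess on my part — you would need a tool like Spy++ to find the actual caption or class name:

using System;
using System.Runtime.InteropServices;
using System.Threading;

class WindowHider
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll")]
    static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);

    const int SW_HIDE = 0;

    public static void HideGraphWindow()
    {
        // Poll briefly for the window and hide it as soon as it appears.
        for (int i = 0; i < 50; i++)
        {
            IntPtr hWnd = FindWindow(null, "Microsoft Graph"); // caption is an assumption
            if (hWnd != IntPtr.Zero)
            {
                ShowWindow(hWnd, SW_HIDE);
                break;
            }
            Thread.Sleep(100);
        }
    }
}

As the answer itself says, this is a race against the window becoming visible, so it may still flicker; it is a workaround, not a fix.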
Q: Algorithm to perform RFC calculation in Java The RFC for a Java class is the set of all methods that can be invoked in response to a message to an object of the class or by some method in the class. RFC = M + R where M = Number of methods in the class. R = Total number of other methods directly invoked from the M. Thinking C is the .class and J is the .java file of which we need to calculate RFC. class J{ a(){} b(){} c(){ e1.e(); e1.f(); e1.g(); } h(){ i.k(); i.j(); } m(){} n(){ i.o(); i.p(); i.p(); i.p(); } } here M=6 and R=9 (Don't worry about call inside a loop. It's considered a single call) Calculating M is easy. Load C using classloader and use reflection to get the count of methods. Calculating R is not direct. We need to count the number of method calls from the class. First level only. For calculating R I must use regex. Usually the format would be (calls without using . are not counted) [variable_name].[method_name]([zero or more parameters]); or [variable_name].[method_name]([zero or more parameters]) without a semicolon, when the call's return value directly becomes a parameter to another method. or [variable_name].[method_name]([zero or more parameters]).method2(); this becomes two method calls What other patterns of the method call can you think of? Is there any other way other than using RegEx that can be used to calculate R? UPDATE: @McDowell Looks like using BCEL I can simplify the whole process. Let me try it. A: You could use the Byte Code Engineering Library with binaries. You can use a DescendingVisitor to visit a class' members and references. I've used it to find class dependencies. Alternatively, you could reuse some model of the source files. I'm pretty sure the Java editor in the Eclipse JDT is backed by some form of model. A: You should find your answer in the Java language specification. You have forgotten static method calls, method calls inside parameters... A: Calling a method using reflection (the name of the method is in a string). A: Does M include calls to its own methods? Or calls to inner classes? For instance: class J { a() { } b() { this.a(); } c() { jj.aa(); } d() { i.k(); } e() { this.f().a(); } f() { return this; } g() { i.m().n(); } class JJ { aa() { a(); } } } What would the M value of this be? There are only three function calls to a method not defined in this class (the calls in the d() and g() functions). Do you want to include calls to inner classes, or calls to the main class made in the inner class? Do you want to include calls to other methods on the same class? If you're looking at any method calls, regardless of the source, then a regex could probably work, but would be tricky to get right (does your regex properly ignore strings that contain method-call like contents? Does it handle constructor calls properly?). If you care about the source of the method call then regexes probably won't get you what you want. You'd need to use reflection (though unfortunately I don't know enough about reflection to be helpful there).
Algorithm to perform RFC calculation in Java
The RFC for a Java class is the set of all methods that can be invoked in response to a message to an object of the class or by some method in the class. RFC = M + R where M = Number of methods in the class. R = Total number of other methods directly invoked from the M. Thinking C is the .class and J is the .java file of which we need to calculate RFC. class J{ a(){} b(){} c(){ e1.e(); e1.f(); e1.g(); } h(){ i.k(); i.j(); } m(){} n(){ i.o(); i.p(); i.p(); i.p(); } } here M=6 and R=9 (Don't worry about call inside a loop. It's considered a single call) Calculating M is easy. Load C using classloader and use reflection to get the count of methods. Calculating R is not direct. We need to count the number of method calls from the class. First level only. For calculating R I must use regex. Usually the format would be (calls without using . are not counted) [variable_name].[method_name]([zero or more parameters]); or [variable_name].[method_name]([zero or more parameters]) without a semicolon, when the call's return value directly becomes a parameter to another method. or [variable_name].[method_name]([zero or more parameters]).method2(); this becomes two method calls What other patterns of the method call can you think of? Is there any other way other than using RegEx that can be used to calculate R? UPDATE: @McDowell Looks like using BCEL I can simplify the whole process. Let me try it.
[ "You could use the Byte Code Engineering Library with binaries. You can use a DescendingVisitor to visit a class' members and references. I've used it to find class dependencies.\nAlternatively, you could reuse some model of the source files. I'm pretty sure the Java editor in the Eclipse JDT is backed by some form of model.\n", "You should find your answer in the Java language specification.\nYou have forgot static method call, method call inside parameters...\n", "Calling a method using reflection (the name of the method is in a string).\n", "Does M include calls to its own methods? Or calls to inner classes? For instance:\nclass J {\n a() { }\n b() { this.a(); }\n c() { jj.aa(); }\n d() { i.k(); }\n e() { this.f().a(); }\n f() { return this; }\n g() { i.m().n(); }\n\n class JJ {\n aa() { a(); }\n }\n}\n\nWhat would the M value of this be? There's only three function calls to a method not defined in this class (the calls in the d() and g() functions). Do you want to include calls to inner classes, or calls to the main class made in the inner class? Do you want to include calls to other methods on the same class?\nIf you're looking at any method calls, regardless of the source, then a regex could probably work, but would be tricky to get right (does your regex properly ignore strings that contain method-call like contents? Does it handle constructor calls properly?). If you care about the source of the method call then regexes probably won't get you what you want. You'd need to use reflection (though unfortunately I don't know enough about reflection to be helpful there).\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "algorithm", "java", "reflection", "regex" ]
stackoverflow_0000019952_algorithm_java_reflection_regex.txt
Q: Is there a way to check to see if the user is currently idle? There is some documentation on the internet that shows that Windows changes the behavior of the NotifyIcon.BalloonTipShown command if the user is currently idle and this is detected by checking for keyboard and mouse events. I am currently working on an application that spends most of its time in the system tray, but pops up multiple balloon tips from time to time and I would like to prevent the user from missing any of them if they are currently away from the system. Since any currently displayed balloon tips are destroyed if a new one is displayed, I want to hold off on displaying them if the user is away. As such, is there any way to check to see if the user is currently idle if the application is minimized to the system tray? A: How about the Win32 GetLastInputInfo function? using System.Runtime.InteropServices; [DllImport("User32.dll")] static extern bool GetLastInputInfo(ref LASTINPUTINFO plii); struct LASTINPUTINFO { public uint cbSize; public uint dwTime; } A: Managed code Check position of the mouse every second. If there are new messages for the user, hold on to them until you detect any move with the mouse. Unmanaged code See Detecting Idle Time with Mouse and Keyboard Hooks A: Thanks for the responses, I ended up going with the GetLastInputInfo function as it is pretty straightforward to implement in the application I'm working on.
Is there a way to check to see if the user is currently idle?
There is some documentation on the internet that shows that Windows changes the behavior of the NotifyIcon.BalloonTipShown command if the user is currently idle and this is detected by checking for keyboard and mouse events. I am currently working on an application that spends most of its time in the system tray, but pops up multiple balloon tips from time to time and I would like to prevent the user from missing any of them if they are currently away from the system. Since any currently displayed balloon tips are destroyed if a new one is displayed, I want to hold off on displaying them if the user is away. As such, is there any way to check to see if the user is currently idle if the application is minimized to the system tray?
[ "How about the Win32 LASTINPUTINFO function?\nusing System.Runtime.InteropServices;\n\n[DllImport(\"User32.dll\")] \nstatic extern bool GetLastInputInfo(ref LASTINPUTINFO plii);\n\nstruct LASTINPUTINFO \n{\n public uint cbSize;\n public uint dwTime;\n}\n\n", "Managed code\nCheck position of the mouse every second. If there are new messages for user, hold on to them until you detect any move with the mouse.\nUnmanaged code\nSee Detecting Idle Time with Mouse and Keyboard Hooks\n", "Thanks for the responses, I ended up going with the GetLastInputInfo function as it is pretty straight forward to implement in the application I'm working on.\n" ]
[ 4, 1, 0 ]
[]
[]
[ ".net", "tray", "user_interface", "windows" ]
stackoverflow_0000019185_.net_tray_user_interface_windows.txt
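A sketch of how the GetLastInputInfo call from the accepted answer is typically used: compare the last-input tick count with the current one to get system-wide idle time, and hold balloon tips until it drops below some threshold. Note that cbSize must be set before the call:

using System;
using System.Runtime.InteropServices;

class IdleTime
{
    [StructLayout(LayoutKind.Sequential)]
    struct LASTINPUTINFO
    {
        public uint cbSize;
        public uint dwTime; // tick count of the last keyboard/mouse input
    }

    [DllImport("User32.dll")]
    static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

    // Milliseconds since the last user input, system-wide.
    public static uint GetIdleMilliseconds()
    {
        LASTINPUTINFO info = new LASTINPUTINFO();
        info.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO)); // required by the API
        GetLastInputInfo(ref info);
        return (uint)Environment.TickCount - info.dwTime; // uint math handles wraparound
    }
}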
Q: Mixing 32 bit and 16 bit code with nasm This is a low-level systems question. I need to mix 32 bit and 16 bit code because I'm trying to return to real-mode from protected mode. As a bit of background information, my code is doing this just after GRUB boots so I don't have any pesky operating system to tell me what I can and can't do. Anyway, I use [BITS 32] and [BITS 16] with my assembly to tell nasm which types of operations it should use, but when I test my code using bochs, it looks like for some operations bochs isn't executing the code that I wrote. It looks like the assembler is sticking in extra 0x66 and 0x67 prefixes, which confuses bochs. So, how do I get nasm to successfully assemble code where I mix 32 bit and 16 bit code in the same file? Is there some kind of trick? A: The problem turned out to be that I wasn't setting up my descriptor tables correctly. I had one bit flipped wrong so instead of going to 16-bit mode I was going to 32-bit mode (with segments that happened to have a limit of one meg). Thanks for the suggestions! Terry A: The 0x66 and 0x67 bytes are prefixes that indicate that the following opcode should be interpreted with a non-default bitness. More specifically, (and according to this link), "When NASM is in BITS 16 mode, instructions which use 32-bit data are prefixed with an 0x66 byte, and those referring to 32-bit addresses have an 0x67 prefix. In BITS 32 mode, the reverse is true: 32-bit instructions require no prefixes, whereas instructions using 16-bit data need an 0x66 and those working on 16-bit addresses need an 0x67." This suggests that it's bochs that's at fault. A: You weren't kidding about this being low-level! Have you checked the generated opcodes / operands to make sure that nasm is honoring your BITS directives correctly? Also check to make sure the jump targets are correct - maybe nasm is using the wrong offsets. If it's not a bug in nasm, maybe there is a bug in bochs. I can't imagine that people switch back to 16-bit mode from 32-bit mode very often anymore. A: If you're in real mode your default size is implicitly 16 bits, so you should use BITS 16 mode. This way if you need a 32-bit operand size you add the 0x66 prefix, and for a 32-bit address size you add the 0x67 prefix. Look at the Intel IA-32 Software Developer's Guide, Volume 3, Chapter 16 (MIXING 16-BIT AND 32-BIT CODE; the chapter number might change according to the edition of the book): Real-address mode, virtual-8086 mode, and SMM are native 16-bit modes. The BITS 32 directive will only confuse the assembler if you use it outside of Protected Mode or Long Mode.
Mixing 32 bit and 16 bit code with nasm
This is a low-level systems question. I need to mix 32 bit and 16 bit code because I'm trying to return to real-mode from protected mode. As a bit of background information, my code is doing this just after GRUB boots so I don't have any pesky operating system to tell me what I can and can't do. Anyway, I use [BITS 32] and [BITS 16] with my assembly to tell nasm which types of operations it should use, but when I test my code using bochs, it looks like for some operations bochs isn't executing the code that I wrote. It looks like the assembler is sticking in extra 0x66 and 0x67 prefixes, which confuses bochs. So, how do I get nasm to successfully assemble code where I mix 32 bit and 16 bit code in the same file? Is there some kind of trick?
[ "The problem turned out to be that I wasn't setting up my descriptor tables correctly. I had one bit flipped wrong so instead of going to 16-bit mode I was going to 32-bit mode (with segments that happened to have a limit of one meg). \nThanks for the suggestions!\nTerry\n", "The 0x66 and 0x67 are opcodes that are used to indicate that the following opcode should be interpreted as a non-default bitness. More specifically, (and according to this link),\n\"When NASM is in BITS 16 mode, instructions which use 32-bit data are prefixed with an 0x66 byte, and those referring to 32-bit addresses have an 0x67 prefix. In BITS 32 mode, the reverse is true: 32-bit instructions require no prefixes, whereas instructions using 16-bit data need an 0x66 and those working on 16-bit addresses need an 0x67.\"\nThis suggests that it's bochs that at fault.\n", "You weren't kidding about this being low-level!\nHave you checked the generated opcodes / operands to make sure that nasm is honoring your BITS directives correctly? Also check to make sure the jump targets are correct - maybe nasm is using the wrong offsets.\nIf it's not a bug in nasm, maybe there is a bug in bochs. I can't imagine that people switch back to 16-bit mode from 32-bit mode very often anymore.\n", "If you're in real mode your default size is implicitly 16 bits, so you should use BITS 16 mode. This way if you need a 32-bit operand size you add the 0x66 prefix, and for a 32-bit address size you add the 0x67 prefix.\nLook at the Intel IA-32 Software Developer's Guide, Volume 3, Chapter 16 (MIXING 16-BIT AND 32-BIT CODE; the chapter number might change according to the edition of the book):\n\nReal-address mode, virtual-8086 mode, and SMM are native 16-bit modes.\n\nThe BITS 32 directive will only confuse the assembler if you use it outside of Protected Mode or Long Mode.\n" ]
[ 6, 4, 0, 0 ]
[]
[]
[ "assembly", "nasm", "operating_system", "osdev" ]
stackoverflow_0000018324_assembly_nasm_operating_system_osdev.txt
Q: WinForms ComboBox data binding gotcha Assume you are doing something like the following List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" }; ComboBox box = new ComboBox(); box.DataSource = myitems; ComboBox box2 = new ComboBox(); box2.DataSource = myitems So now we have 2 combo boxes bound to that array, and everything works fine. But when you change the value of one combo box, it changes BOTH combo boxes to the one you just selected. Now, I know that Arrays are always passed by reference (learned that when I learned C :D), but why on earth would the combo boxes change together? I don't believe the combo box control is modifying the collection at all. As a workaround, wouldn't this achieve the functionality that is expected/desired? ComboBox box = new ComboBox(); box.DataSource = myitems.ToArray(); A: This has to do with how data bindings are set up in the dotnet framework, especially the BindingContext. On a high level it means that if you haven't specified otherwise each form and all the controls of the form share the same BindingContext. When you are setting the DataSource property the ComboBox will use the BindingContext to get a CurrencyManager that wraps the list. The CurrencyManager keeps track of such things as the currently selected position in the list. When you set the DataSource of the second ComboBox it will use the same BindingContext (the form's) which will yield a reference to the same CurrencyManager as above, used to set up the data bindings. To get a more detailed explanation see BindingContext. A: A better workaround (depending on the size of the datasource) is to declare two BindingSource objects (new as of 2.0), bind the collection to those, and then bind those to the comboboxes. I enclose a complete example. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; namespace WindowsFormsApplication2 { public partial class Form1 : Form { private BindingSource source1 = new BindingSource(); private BindingSource source2 = new BindingSource(); public Form1() { InitializeComponent(); Load += new EventHandler(Form1Load); } void Form1Load(object sender, EventArgs e) { List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" }; ComboBox box = new ComboBox(); box.Bounds = new Rectangle(10, 10, 100, 50); source1.DataSource = myitems; box.DataSource = source1; ComboBox box2 = new ComboBox(); box2.Bounds = new Rectangle(10, 80, 100, 50); source2.DataSource = myitems; box2.DataSource = source2; Controls.Add(box); Controls.Add(box2); } } } If you want to confuse yourself even more then try always declaring bindings in the constructor. That can result in some really curious bugs, hence I always bind in the Load event.
WinForms ComboBox data binding gotcha
Assume you are doing something like the following List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" }; ComboBox box = new ComboBox(); box.DataSource = myitems; ComboBox box2 = new ComboBox(); box2.DataSource = myitems; So now we have 2 combo boxes bound to that array, and everything works fine. But when you change the value of one combo box, it changes BOTH combo boxes to the one you just selected. Now, I know that Arrays are always passed by reference (learned that when I learned C :D), but why on earth would the combo boxes change together? I don't believe the combo box control is modifying the collection at all. As a workaround, wouldn't this achieve the functionality that is expected/desired? ComboBox box = new ComboBox(); box.DataSource = myitems.ToArray();
[ "This has to do with how data bindings are set up in the dotnet framework, especially the BindingContext. On a high level it means that if you haven't specified otherwise each form and all the controls of the form share the same BindingContext. When you are setting the DataSource property the ComboBox will use the BindingContext to get a ConcurrenyMangager that wraps the list. The ConcurrenyManager keeps track of such things as the current selected position in the list. \nWhen you set the DataSource of the second ComboBox it will use the same BindingContext (the forms) which will yield a reference to the same ConcurrencyManager as above used to set up the data bindings.\nTo get a more detailed explanation see BindingContext.\n", "A better workaround (depending on the size of the datasource) is to declare two BindingSource objects (new as of 2.00) bind the collection to those and then bind those to the comboboxes.\nI enclose a complete example.\nusing System;\nusing System.Collections.Generic;\nusing System.ComponentModel;\nusing System.Data;\nusing System.Drawing;\nusing System.Text;\nusing System.Windows.Forms;\n\nnamespace WindowsFormsApplication2\n{\n public partial class Form1 : Form\n {\n private BindingSource source1 = new BindingSource();\n private BindingSource source2 = new BindingSource();\n\n public Form1()\n {\n InitializeComponent();\n Load += new EventHandler(Form1Load);\n }\n\n void Form1Load(object sender, EventArgs e)\n {\n List<string> myitems = new List<string>\n {\n \"Item 1\",\n \"Item 2\",\n \"Item 3\"\n };\n\n ComboBox box = new ComboBox();\n box.Bounds = new Rectangle(10, 10, 100, 50);\n source1.DataSource = myitems;\n box.DataSource = source1;\n\n ComboBox box2 = new ComboBox();\n box2.Bounds = new Rectangle(10, 80, 100, 50);\n source2.DataSource = myitems;\n box2.DataSource = source2;\n\n Controls.Add(box);\n Controls.Add(box2);\n }\n }\n}\n\nIf you want to confuse yourself even more then try always declaring bindings in the constructor. That can result in some really curious bugs, hence I always bind in the Load event.\n" ]
[ 39, 22 ]
[]
[]
[ "c#", "data_binding", "winforms" ]
stackoverflow_0000000482_c#_data_binding_winforms.txt
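The BindingContext explanation in the first answer suggests a second fix that the thread never spells out: break the sharing itself. The sketch below (mine, not from the answers; it reuses the question's myitems list) gives the second ComboBox its own BindingContext, so each control gets its own CurrencyManager and the selections move independently.

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Windows.Forms;

    public class IndependentCombos : Form
    {
        public IndependentCombos()
        {
            List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" };

            ComboBox box = new ComboBox();
            box.Bounds = new Rectangle(10, 10, 100, 50);
            box.DataSource = myitems;

            ComboBox box2 = new ComboBox();
            box2.Bounds = new Rectangle(10, 80, 100, 50);
            // A fresh BindingContext means a separate CurrencyManager for this
            // control, so its current position no longer tracks the first box.
            box2.BindingContext = new BindingContext();
            box2.DataSource = myitems;

            Controls.Add(box);
            Controls.Add(box2);
        }

        [STAThread]
        static void Main() { Application.Run(new IndependentCombos()); }
    }

Compared with the BindingSource workaround above, this is less code but also less flexible; BindingSource additionally gives you filtering and position APIs you may want later.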
Q: "Data Execution Prevention" kills (VS2008) local ASP.Net Development Server (aka Cassini) on Vista 64 Occasionally, I find that while debugging an ASP.Net application (written in visual studio 2008, running on Vista 64-bit) the local ASP.Net development server (i.e. 'Cassini') stops responding. A message often comes up telling me that "Data Execution Prevention (DEP)" has killed WebDev.WebServer.exe The event logs simply tell me that "WebDev.WebServer.exe has stopped working" I've heard that this 'problem' presents itself more often on Vista 64-bit because DEP is on by default. Hence, turning DEP off may 'solve' the problem. But i'm wondering: Is there a known bug/situation with Cassini that causes DEP to kill the process? Alternatively, what is the practical danger of disabling Data Execution Prevention? A: The only way to know for sure would be to dig through the Cassini source and see if there are any areas where it generates code on the heap and then executes it without clearing the NX flag. However, instead of doing that, why not use IIS? EDIT: The danger of disabling DEP is that you open up security holes. DEP works by not allowing arbitrary generated code on the heap to be executed. This helps prevent malware programs from inserting code into the data segments of legit programs. A: You are on vista, iis got better (7), cassini stayed crappy. So just start this app on iis with a host header and a hosts file entry. A: You can grant certain programs exclusion from DEP if you need. As Jonathan mentions this does open up any vulnerabilities that application may have. A: Using IIS in Visual Studio isn't the pain in the ass that it used to be in 1.1/VS02/03 days. There are lots of good reasons to prefer IIS over the Cassini server (articles by Dominick Baier): Cassini considered harmful Another Reason why I would not recommend Cassini Dominick is 'the man' when it comes to IIS and security stuff. When using IIS for a web app, I always create the app in IIS first, point it at my preferred folder, then get VS to create the project. This means you don't end up cluttering c:\inetpub\wwwroot with your web apps. Of course, now we have IISExpress which if you're targeting IIS7.x it's the obvious choice for developing ASP.NET applications in Visual Studio. A: Thanks for the answers. I guess I developed such an aversion to IIS in the .net 1.x era that I've refused to consider re-using it -- until now. aside: when choosing between two equally acceptable answers from ChanChan and Jonathan, I arbitrarily marked Jonathan's as 'accepted' because a) he got in first and b) his rep is currently lower.
"Data Execution Prevention" kills (VS2008) local ASP.Net Development Server (aka Cassini) on Vista 64
Occasionally, I find that while debugging an ASP.Net application (written in visual studio 2008, running on Vista 64-bit) the local ASP.Net development server (i.e. 'Cassini') stops responding. A message often comes up telling me that "Data Execution Prevention (DEP)" has killed WebDev.WebServer.exe The event logs simply tell me that "WebDev.WebServer.exe has stopped working" I've heard that this 'problem' presents itself more often on Vista 64-bit because DEP is on by default. Hence, turning DEP off may 'solve' the problem. But I'm wondering: Is there a known bug/situation with Cassini that causes DEP to kill the process? Alternatively, what is the practical danger of disabling Data Execution Prevention?
[ "The only way to know for sure would be to dig through the Cassini source and see if there are any areas where it generates code on the heap and then executes it without clearing the NX flag.\nHowever, instead of doing that, why not use IIS?\nEDIT:\nThe danger of disabling DEP is that you open up security holes. DEP works by not allowing arbitrary generated code on the heap to be executed. This helps prevent malware programs from inserting code into the data segments of legit programs.\n", "You are on vista, iis got better (7), cassini stayed crappy.\nSo just start this app on iis with a host header and a hosts file entry.\n", "You can grant certain programs exclusion from DEP if you need.\nAs Jonathan\nmentions this does open up any vulnerabilities that application may have.\n", "Using IIS in Visual Studio isn't the pain in the ass that it used to be in 1.1/VS02/03 days. There are lots of good reasons to prefer IIS over the Cassini server (articles by Dominick Baier):\n\nCassini considered harmful\nAnother Reason why I would not recommend Cassini\n\nDominick is 'the man' when it comes to IIS and security stuff.\nWhen using IIS for a web app, I always create the app in IIS first, point it at my preferred folder, then get VS to create the project. This means you don't end up cluttering c:\\inetpub\\wwwroot with your web apps.\nOf course, now we have IISExpress which if you're targeting IIS7.x it's the obvious choice for developing ASP.NET applications in Visual Studio.\n", "Thanks for the answers. I guess I developed such an aversion to IIS in the .net 1.x era that I've refused to consider re-using it -- until now.\naside: when choosing between two equally acceptable answers from ChanChan and Jonathan, I arbitrarily marked Jonathan's as 'accepted' because a) he got in first and b) his rep is currently lower.\n" ]
[ 3, 2, 1, 1, 0 ]
[]
[]
[ "asp.net", "cassini", "dep" ]
stackoverflow_0000019349_asp.net_cassini_dep.txt
Q: Store data from a C# application I've recently taken up learning some C# and wrote a Yahtzee clone. My next step (now that the game logic is in place and functioning correctly) is to integrate some method of keeping stats across all the games played. My question is this: how should I go about storing this information? My first thought would be to use a database and I have a feeling that's the answer I'll get... if that's the case, can you point me to a good resource for creating and accessing a database from a C# application? Storing in an XML file actually makes more sense to me, but I thought if I suggested that I'd get torn apart ;). I'm used to building web applications and for those, text files are generally frowned upon. So, going with an XML file, what classes should I be looking at that would allow for easy manipulation? A: Here is one idea: use Xml Serialization. Design your GameStats data structure and optionally use Xml attributes to influence the schema as you like. I like to use this method for small data sets because it's quick and easy and all I need to do is design and manipulate the data structure. using (FileStream fs = new FileStream(....)) { // Read in stats XmlSerializer xs = new XmlSerializer(typeof(GameStats)); GameStats stats = (GameStats)xs.Deserialize(fs); // Manipulate stats here ... // Write out game stats (rewind the stream and reuse the serializer) fs.Seek(0, SeekOrigin.Begin); xs.Serialize(fs, stats); fs.Close(); } A: A database would probably be overkill for something like this - start with storing your information in an XML doc (or series of XML docs, if there's a lot of data). You get all that nifty XCopy deployment stuff, you can still use LINQ, and it would be a smooth transition to a database if you decided later you really needed performant relational query logic. A: A database may be overkill - have you thought about just storing the scores in a file? If you decide to go with a database, you might consider SQLite, which you can distribute just like a file. There's an open source .NET provider - System.Data.SQLite - that includes everything you need to get started. Accessing and reading from a database in .NET is quite easy - take a look at this question for sample code. A: SQL Express from MS is a great free, lightweight version of their SQL Server database. You could try that if you go the DB route. Alternatively, you could simply create datasets within the application and serialize them to xml, or you could use something like the newly minted Entity Framework that shipped with .NET 3.5 SP1 A: I don't know if a database is necessarily what you want. That may be overkill for storing stats for a simple game like that. Databases are good; but you should not automatically use one in every situation (I'm assuming that this is a client application, not an online game). Personally, for a game that exists only on the user's computer, I would just store the stats in a file (XML or binary - choice depends on whether you want it to be human-readable or not). A: I'd recommend saving your data in simple POCOs and either serializing them to xml or a binary file, like Brian did above. If you're hot for a database, I'd suggest Sql Server Compact Edition, or VistaDB. Both are hosted inproc within your application. A: You can either use the System::Xml namespace or the System::Data namespace. The first gives you raw XML, the latter gives you a handy wrapper to the XML. A: I would recommend just using a database. 
I would recommend using LINQ or an ORM tool to interact with the database. For learning LINQ, I would take a look at Scott Guthrie's posts. I think there are 9 of them altogether. I linked part 1 below. If you want to go with an ORM tool, say nhibernate, then I would recommend checking out the Summer of nHibernate screencasts. They are a really good learning resource for nhibernate. I disagree with using XML. When reporting stats on a lot of data, you can't beat using a relational database. Yeah, XML is lightweight, but there are a lot of choices for lightweight relational databases also, besides going with a full-blown, service-based implementation. (i.e. SQL Server Compact, SQLite, etc...) Scott Guthrie on LINQ Summer of nHibernate A: For this situation, the [Serializable] attribute on a nicely modelled Stats class and XmlSerializer are the way to go, IMO.
Store data from a C# application
I've recently taken up learning some C# and wrote a Yahtzee clone. My next step (now that the game logic is in place and functioning correctly) is to integrate some method of keeping stats across all the games played. My question is this: how should I go about storing this information? My first thought would be to use a database and I have a feeling that's the answer I'll get... if that's the case, can you point me to a good resource for creating and accessing a database from a C# application? Storing in an XML file actually makes more sense to me, but I thought if I suggested that I'd get torn apart ;). I'm used to building web applications and for those, text files are generally frowned upon. So, going with an XML file, what classes should I be looking at that would allow for easy manipulation?
[ "Here is one idea: use Xml Serialization. Design your GameStats data structure and optionally use Xml attributes to influence the schema as you like. I like to use this method for small data sets because its quick and easy and all I need to do is design and manipulate the data structure.\n\nusing (FileStream fs = new FileStream(....))\n{\n // Read in stats\n XmlSerializer xs = new XmlSerializer(typeof(GameStats));\n GameStats stats = (GameStats)xs.Deserialize(fs);\n\n // Manipulate stats here ...\n\n // Write out game stats\n XmlSerializer xs = new XmlSerializer(typeof(GameStats));\n xs.Serialize(fs, stats);\n\n fs.Close();\n}\n\n", "A database would probably be overkill for something like this - start with storing your information in an XML doc (or series of XML docs, if there's a lot of data). You get all that nifty XCopy deployment stuff, you can still use LINQ, and it would be a smooth transition to a database if you decided later you really needed performant relational query logic.\n", "A database may be overkill - have you thought about just storing the scores in a file?\nIf you decide to go with a database, you might consider SQLite, which you can distribute just like a file. There's an open source .NET provider - System.Data.SQLite - that includes everything you need to get started.\nAccessing and reading from a database in .NET is quite easy - take a look at this question for sample code.\n", "SQL Express from MS is a great free, lightweight version of their SQL Server database. You could try that if you go the DB route.\nAlternatively, you could simply create datasets within the application and serialize them to xml, or you could use something like the newly minted Entity Framework that shipped with .NET 3.5 SP1\n", "I don't know if a database is necessarily what you want. That may be overkill for storing stats for a simple game like that. Databases are good; but you should not automatically use one in every situation (I'm assuming that this is a client application, not an online game).\nPersonally, for a game that exists only on the user's computer, I would just store the stats in a file (XML or binary - choice depends on whether you want it to be human-readable or not).\n", "I'd recommend saving your data in simple POCOs and either serializing them to xml or a binary file, like Brian did above.\nIf you're hot for a database, I'd suggest Sql Server Compact Edition, or VistaDB. Both are hosted inproc within your application.\n", "You can either use the System::Xml namespace or the System::Data namespace. The first gives you raw XML, the latter gives you a handy wrapper to the XML.\n", "I would recommend just using a database. I would recommend using LINQ or an ORM tool to interact with the database. For learning LINQ, I would take a look at Scott Guthrie's posts. I think there are 9 of them all together. I linked part 1 below. If you want to go with an ORM tool, say nhibernate, then I would recommend checking out the Summer of nHibernate screencasts. They are a really good learning resource for nhibernate.\nI disagree with using XML. With reporting stats on a lot of data, you can't beat using a relational database. Yeah, XML is lightweight, but there are a lot of choices for light weight relational databases also, besides going with a full blown service based implementation. (i.e. 
SQL Server Compact, SQLite, etc...)\n\nScott Guthrie on LINQ\nSummer of nHibernate\n\n", "For this situation, the [Serializable] attribute on a nicely modelled Stats class and XmlSerializer are the way to go, IMO.\n" ]
[ 17, 13, 7, 4, 3, 2, 1, 1, 1 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000020061_.net_c#.txt
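Pulling the XML-serialization advice in this thread together: a minimal self-contained sketch of a stats store. The GameStats fields are invented for illustration (the answers leave the shape up to you), and the file path is whatever the game chooses.

    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;

    public class GameStats
    {
        // Hypothetical fields; track whatever your Yahtzee clone needs.
        public int GamesPlayed;
        public int HighScore;
        public List<int> RecentScores = new List<int>();
    }

    public static class StatsStore
    {
        static readonly XmlSerializer serializer = new XmlSerializer(typeof(GameStats));

        public static GameStats Load(string path)
        {
            if (!File.Exists(path))
                return new GameStats(); // first run: start with empty stats

            using (FileStream fs = File.OpenRead(path))
                return (GameStats)serializer.Deserialize(fs);
        }

        public static void Save(string path, GameStats stats)
        {
            // File.Create truncates, so stale XML never lingers at the end.
            using (FileStream fs = File.Create(path))
                serializer.Serialize(fs, stats);
        }
    }

Usage: GameStats stats = StatsStore.Load("stats.xml"); stats.GamesPlayed++; StatsStore.Save("stats.xml", stats);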
Q: Check for hung Office process when using Office Automation Is there a way to check to see if a Microsoft Office process (i.e. Word, Excel) has hung when using Office Automation? Additionally, if the process is hung, is there a way to terminate it? A: Let me start off by saying that I don't recommend doing this in a service on a server, but I'll do my best to answer the questions. Running as a service makes it difficult to clean up. For example, will what you have running as a service survive killing a hung Word or Excel? You may be in a position to have to kill the service. Will your service stop if Word or Excel is in this state? One problem with trying to test if it is hung, is that your test could cause a new instance of Word to start up and work, while the one that the service is running would still be hung. The best way to determine if it's hung is to ask it to do what it is supposed to be doing and check for the results. I would need to know more about what it is actually doing. Here are some commands to use in a batch file for cleaning up (both should be in the path): sc stop servicename - stops service named servicename sc start servicename - starts service named servicename sc query servicename - Queries the status of servicename taskkill /F /IM excel.exe - terminates all instances of excel.exe A: I remember doing this a few years ago - so I'm talking Office XP or 2003 days, not 2007. Obviously a better solution for automation these days is to use the new XML format that describes docx etc using the System.IO.Packaging namespace. Back then, I used to notice that whenever MSWord had kicked the bucket and had had enough, a process called "Dr. Watson" was running on the machine. This was my first clue that Word had tripped and fallen over. Sometimes I might see more than one WINWORD.EXE, but my code just used to scan for the good Doctor. Once I saw that (in code), I killed all WINWORD.EXE processes and the good Doctor himself, and restarted the process of torturing Word :-) Hope that gives you some clues as to what to look for. All the best, Rob G P.S. I might even be able to dig out the code in my archives if you don't come right! A: I can answer the latter half; if you have a reference to the application object in your code, you can simply call "Quit" on it: private Microsoft.Office.Interop.Excel.Application _excel; // ... do some stuff ... _excel.Quit(); For checking for a hung process, I'd guess you'd want to try to get some data from the application and see if you get results in a reasonable time frame (check in a timer or other thread or something). There's probably a better way though.
Check for hung Office process when using Office Automation
Is there a way to check to see if a Microsoft Office process (i.e. Word, Excel) has hung when using Office Automation? Additionally, if the process is hung, is there a way to terminate it?
[ "Let me start off saying that I don't recommend doing this in a service on a server, but I'll do my best to answer the questions.\nRunning as a service makes it difficult to clean up. For example with what you have running as a service survive killing a hung word or excel. You may be in a position to have to kill the service. Will your service stop if word or excel is in this state. \nOne problem with trying to test if it is hung, is that your test could cause a new instance of word to startup and work, while the one that the service is running would still be hung.\nThe best way to determine if it's hung is to ask it to do what it is supposed to be doing and check for the results. I would need to know more about what it is actually doing. \nHere are some commands to use in a batch file for cleaning up (both should be in the path):\n\nsc stop servicename - stops service named servicename\nsc start servicename - starts service named servicename\nsc query servicename - Queries the status of servicename\ntaskkill /F /IM excel.exe - terminates all instances of excel.exe\n\n", "I remember doing this a few years ago - so I'm talking Office XP or 2003 days, not 2007.\nObviously a better solution for automation these days is to use the new XML format that describes docx etc using the System.IO.Packaging namespace.\nBack then, I used to notice that whenever MSWord had kicked the bucket and had had enough, a process called \"Dr. Watson\" was running on the machine. This was my first clue that Word had tripped and fallen over. Sometimes I might see more than one WINWORD.EXE, but my code just used to scan for the good Doctor. Once I saw that (in code), I killed all WINWORD.EXE processes the good Doctor himself, and restarted the process of torturing Word :-)\nHope that gives you some clues as to what to look for.\nAll the best,\nRob G\nP.S. I might even be able to dig out the code in my archives if you don't come right!\n", "I can answer the latter half; if you have a reference to the application object in your code, you can simply call \"Quit\" on it:\nprivate Microsoft.Office.Interop.Excel.Application _excel;\n// ... do some stuff ...\n_excel.Quit();\n\nFor checking for a hung process, I'd guess you'd want to try to get some data from the application and see if you get results in a reasonable time frame (check in a timer or other thread or something). There's probably a better way though.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "language_agnostic", "ms_office", "office_automation" ]
stackoverflow_0000009905_language_agnostic_ms_office_office_automation.txt
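None of the answers show the detection half in code. One managed-code approximation (a heuristic sketch, not the poster's solution) is Process.Responding, which only reports whether the process's main window is still pumping messages, so a busy-but-healthy Office instance can also read false; treat the grace period as a tuning knob.

    using System;
    using System.Diagnostics;
    using System.Threading;

    static class OfficeWatchdog
    {
        // Kills instances of e.g. "WINWORD" or "EXCEL" that stay unresponsive
        // for the whole grace period. Only meaningful for processes that own
        // a top-level window.
        public static void KillIfHung(string processName, TimeSpan grace)
        {
            foreach (Process p in Process.GetProcessesByName(processName))
            {
                if (p.Responding)
                    continue;

                Thread.Sleep(grace); // give it a chance to recover
                p.Refresh();         // discard the cached process state
                if (!p.Responding)
                    p.Kill();
            }
        }
    }

Usage: OfficeWatchdog.KillIfHung("WINWORD", TimeSpan.FromSeconds(30)); this pairs well with the first answer's advice to also verify that the automation job actually produced its output.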
Q: C# application detected as a virus Regarding the same program as my question a few minutes ago... I added a setup project and built an MSI for the program (just to see if I could figure it out) and it works great except for one thing. When I tried to install it on my parent's laptop, their antivirus (the free Avast Home Edition) set off an alarm and accused my setup.exe of being a Trojan. Does anyone have any idea why this would be happening and how I can fix it? A: Indeed, boot from a clean CD (use a known good machine to build BartPE or something similar) and scan your machine thoroughly. Another good thing to check, though, would be exactly which virus Avast! thinks your program is. Once you know that, you should be able to look it up in one of the virus databases and ensure that your software can't contain it. The odds are that Avast! is just getting a false positive for some reason, and I don't know that there's much you can do about that other than contacting Avast! and hoping for a reply. A: I would do what jsight suggested and make sure that your machine did not have a virus. I would also submit the .msi file to Avast's online scanner and see what they identified as being in your package. If that reports your file as containing a trojan, contact Avast and ask them to verify that your .msi package does contain a trojan. If it doesn't contain a trojan, find out from Avast what triggered their scanner. There may be something in your code that matches a pattern that Avast looks for. They may be able to adjust their pattern to ignore your file or you could tweak your code so that it doesn't trigger their scanner. A: I don’t know “Avast”, but in Kaspersky if the configuration is set to high almost every installer fires an alarm (iTunes, Windows Update, everything) especially if the installer modifies some registry key or opens a port. If Avast checks for behavior and your program opens a port, that’s probably the cause. A: Rebuild the setup file, check the exact file size. Check the exact file size of the "suspected" setup file. If the source code hasn't changed and the two file sizes are different, there's a pretty good chance it got contaminated in transit. I'd do that as a bit of a sanity check first.
C# application detected as a virus
Regarding the same program as my question a few minutes ago... I added a setup project and built an MSI for the program (just to see if I could figure it out) and it works great except for one thing. When I tried to install it on my parent's laptop, their antivirus (the free Avast Home Edition) set off an alarm and accused my setup.exe of being a Trojan. Does anyone have any idea why this would be happening and how I can fix it?
[ "Indeed, boot from a clean CD (use a known good machine to build BartPE or something similar) and scan your machine thoroughly. Another good thing to check, though, would be exactly which virus Avast! thinks your program is. Once you know that, you should be able to look it up in one of the virus databases and insure that your software can't contain it.\nThe odds are that Avast! is just getting a false positive for some reason, and I don't know that there's much you can do about that other than contacting Avast! and hoping for a reply.\n", "I would do what jsight suggested and make sure that your machine did not have a virus. I would also submit the .msi file to Avast's online scanner and see what they identified as being in your package. If that reports your file as containing a trojan, contact Avast and ask them to verify that your .msi package does contain a trojan. \nIf it doesn't contain a trojan, find out from Avast what triggered their scanner. There may be something in your code that matches a pattern that Avast looks for, They may be able to adjust their pattern to ignore your file or you could tweak your code so that it doesn't trigger their scanner.\n", "I don’t know “Avast”, but in Kaspersky if the configuration is set to high almost every installer fires an alarm (iTunes, Windows Update, everything) especially if the installer modify some registry key or open a port.\nIf avast checks for behavior and your program open a port probably that’s be the cause.\n", "Rebuild the setup file, check the exact file size.\nCheck the exact file size of the \"suspected\" setup file.\nIf the source code hasn't changed and the two file sizes are different, there's a pretty good chance it got contaminated in transit.\nI'd do that as a bit of a sanity check first.\n" ]
[ 3, 1, 0, 0 ]
[ "The very first thing to do would be to scan your build PC for viruses.\n" ]
[ -1 ]
[ ".net", "antivirus", "c#" ]
stackoverflow_0000020168_.net_antivirus_c#.txt
Q: Associating source and search keywords with account creation As a part of the signup process for my online application, I'm thinking of tracking the source and/or search keywords used to get to my site. This would allow me to see what advertising is working and from where with a somewhat finer grain than Google Analytics would. I assume I could set some kind of cookie with this information when people get to my site, but I'm not sure how I would go about getting it. Is it even possible? I'm using Rails, but a language-independent solution (or even just pointers to where to find this information) would be appreciated! A: Your best bet IMO would be to use javascript to look for a cookie named "origReferrer" or something like that and if that cookie doesn't exist you should create one (with an expiry of ~24 hours) and fill it with the current referrer. That way you'll have preserved the original referrer all the way from your user's first visit and when your users have completed whatever steps you want them to have completed (i.e., account creation) you can read back that cookie on the server and do whatever parsing/analyzing you want. Andy Brice explains the technique in his blog post Cookie tracking for profit and pleasure.
Associating source and search keywords with account creation
As a part of the signup process for my online application, I'm thinking of tracking the source and/or search keywords used to get to my site. This would allow me to see what advertising is working and from where with a somewhat finer grain than Google Analytics would. I assume I could set some kind of cookie with this information when people get to my site, but I'm not sure how I would go about getting it. Is it even possible? I'm using Rails, but a language-independent solution (or even just pointers to where to find this information) would be appreciated!
[ "Your best bet IMO would be to use javascript to look for a cookie named \"origReferrer\" or something like that and if that cookie doesn't exist you should create one (with an expiry of ~24hours) and fill it with the current referrer.\nThat way you'll have preserved the original referrer all the way from your users first visit and when your users have completed whatever steps you want them to have completed (ie, account creation) you can read back that cookie on the server and do whatever parsing/analyzing you want.\nAndy Brice explains the technique in his blog post Cookie tracking for profit and pleasure.\n" ]
[ 3 ]
[]
[]
[ "cookies", "ruby_on_rails", "seo" ]
stackoverflow_0000020286_cookies_ruby_on_rails_seo.txt
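The accepted answer describes the cookie dance in JavaScript for a Rails app; since the poster asked for language-independent pointers, here is the same first-touch idea sketched server-side in C#/ASP.NET. The cookie name comes from the answer; everything else is illustrative, not a prescribed API.

    using System;
    using System.Web;

    public static class OriginTracking
    {
        const string CookieName = "origReferrer";

        // Call early in the pipeline (e.g. Application_BeginRequest): tag the
        // visitor once with the referrer of their very first hit.
        public static void RememberFirstReferrer(HttpContext ctx)
        {
            if (ctx.Request.Cookies[CookieName] != null)
                return; // already tagged on an earlier visit

            Uri referrer = ctx.Request.UrlReferrer; // null on direct visits
            HttpCookie cookie = new HttpCookie(CookieName,
                referrer == null ? "(direct)" : referrer.ToString());
            cookie.Expires = DateTime.Now.AddHours(24);
            ctx.Response.Cookies.Add(cookie);
        }

        // Read it back when the signup form is finally submitted.
        public static string FirstReferrer(HttpContext ctx)
        {
            HttpCookie c = ctx.Request.Cookies[CookieName];
            return c == null ? null : c.Value;
        }
    }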
Q: How does the ASP.NET "Yellow Screen of Death" display code? I thought .Net code gets compiled into MSIL, so I always wondered how do Yellow Screens produce the faulty code. If it's executing the compiled code, how is the compiler able to produce code from the source files in the error message? Feel free to edit this question/title, I know it doesn't really make sense. A: A .Net assembly is compiled with metadata about the bytecode included that allows easy decompilation of the code - that's how tools like .Net Reflector work. The PDB files are debug symbols only - the difference in the Yellow Screen Of Death is that you'll get line numbers in the stack trace. In other words, you'd get the code, even if the PDB files were missing. A: like this. i've made a few changes, but it's pretty close to exactly what ms is doing. // reverse the stack private static Stack<Exception> GenerateExceptionStack(Exception exception) { var exceptionStack = new Stack<Exception>(); // create exception stack for (Exception e = exception; e != null; e = e.InnerException) { exceptionStack.Push(e); } return exceptionStack; } // render stack private static string GenerateFormattedStackTrace(Stack<Exception> exceptionStack) { StringBuilder trace = new StringBuilder(); try { // loop through exception stack while (exceptionStack.Count != 0) { trace.Append("\r\n"); // render exception type and message Exception ex = exceptionStack.Pop(); trace.Append("[" + ex.GetType().Name); if (!string.IsNullOrEmpty(ex.Message)) { trace.Append(":" + ex.Message); } trace.Append("]\r\n"); // Load stack trace StackTrace stackTrace = new StackTrace(ex, true); for (int frame = 0; frame < stackTrace.FrameCount; frame++) { StackFrame stackFrame = stackTrace.GetFrame(frame); MethodBase method = stackFrame.GetMethod(); Type declaringType = method.DeclaringType; string declaringNamespace = ""; // get declaring type information if (declaringType != null) { declaringNamespace = declaringType.Namespace ?? ""; } // add namespace if (!string.IsNullOrEmpty(declaringNamespace)) { declaringNamespace += "."; } // add method if (declaringType == null) { trace.Append(" " + method.Name + "("); } else { trace.Append(" " + declaringNamespace + declaringType.Name + "." + method.Name + "("); } // get parameter information ParameterInfo[] parameters = method.GetParameters(); for (int paramIndex = 0; paramIndex < parameters.Length; paramIndex++) { trace.Append(((paramIndex != 0) ? "," : "") + parameters[paramIndex].ParameterType.Name + " " + parameters[paramIndex].Name); } trace.Append(")"); // get information string fileName = stackFrame.GetFileName() ?? ""; if (!string.IsNullOrEmpty(fileName)) { trace.Append(string.Concat(new object[] { " in ", fileName, ":", stackFrame.GetFileLineNumber() })); } else { trace.Append(" + " + stackFrame.GetNativeOffset()); } trace.Append("\r\n"); } } } catch { } if (trace.Length == 0) { trace.Append("[stack trace unavailable]"); } // return html safe stack trace return HttpUtility.HtmlEncode(trace.ToString()).Replace(Environment.NewLine, "<br>"); } A: I believe the pdb files that are output when you do a debug build contain a reference to the location of the source code files. A: I think this is down to the debug information that can be included with the compiled assemblies..(although I could definately be wrong) A: I believe the information that maps the source to the MSIL is stored in the PDB file. If this is not present then that mapping won't happen. 
It is this lookup that makes an exception such an expensive operation ("exceptions are for exceptional situations").
How does the ASP.NET "Yellow Screen of Death" display code?
I thought .Net code gets compiled into MSIL, so I always wondered how Yellow Screens produce the faulty code. If it's executing the compiled code, how is the compiler able to produce code from the source files in the error message? Feel free to edit this question/title, I know it doesn't really make sense.
[ "A .Net assembly is compiled with metadata about the bytecode included that allows easy decompilation of the code - that's how tools like .Net Reflector work. The PDB files are debug symbols only - the difference in the Yellow Screen Of Death is that you'll get line numbers in the stack trace.\nIn other words, you'd get the code, even if the PDB files were missing.\n", "like this. i've made a few changes, but it's pretty close to exactly what ms is doing.\n// reverse the stack\nprivate static Stack<Exception> GenerateExceptionStack(Exception exception)\n{\n var exceptionStack = new Stack<Exception>();\n\n // create exception stack\n for (Exception e = exception; e != null; e = e.InnerException)\n {\n exceptionStack.Push(e);\n }\n\n return exceptionStack;\n}\n\n// render stack\nprivate static string GenerateFormattedStackTrace(Stack<Exception> exceptionStack)\n{\n StringBuilder trace = new StringBuilder();\n\n try\n {\n // loop through exception stack\n while (exceptionStack.Count != 0)\n {\n trace.Append(\"\\r\\n\");\n\n // render exception type and message\n Exception ex = exceptionStack.Pop();\n trace.Append(\"[\" + ex.GetType().Name);\n if (!string.IsNullOrEmpty(ex.Message))\n {\n trace.Append(\":\" + ex.Message);\n }\n trace.Append(\"]\\r\\n\");\n\n // Load stack trace\n StackTrace stackTrace = new StackTrace(ex, true);\n for (int frame = 0; frame < stackTrace.FrameCount; frame++)\n {\n StackFrame stackFrame = stackTrace.GetFrame(frame);\n MethodBase method = stackFrame.GetMethod();\n Type declaringType = method.DeclaringType;\n string declaringNamespace = \"\";\n\n // get declaring type information\n if (declaringType != null)\n {\n declaringNamespace = declaringType.Namespace ?? \"\";\n }\n\n // add namespace\n if (!string.IsNullOrEmpty(declaringNamespace))\n {\n declaringNamespace += \".\";\n }\n\n // add method\n if (declaringType == null)\n {\n trace.Append(\" \" + method.Name + \"(\");\n }\n else\n {\n trace.Append(\" \" + declaringNamespace + declaringType.Name + \".\" + method.Name + \"(\");\n }\n\n // get parameter information\n ParameterInfo[] parameters = method.GetParameters();\n for (int paramIndex = 0; paramIndex < parameters.Length; paramIndex++)\n {\n trace.Append(((paramIndex != 0) ? \",\" : \"\") + parameters[paramIndex].ParameterType.Name + \" \" + parameters[paramIndex].Name);\n }\n trace.Append(\")\");\n\n\n // get information\n string fileName = stackFrame.GetFileName() ?? \"\";\n\n if (!string.IsNullOrEmpty(fileName))\n {\n trace.Append(string.Concat(new object[] { \" in \", fileName, \":\", stackFrame.GetFileLineNumber() }));\n }\n else\n {\n trace.Append(\" + \" + stackFrame.GetNativeOffset());\n }\n\n trace.Append(\"\\r\\n\");\n }\n }\n }\n catch\n {\n }\n\n if (trace.Length == 0)\n {\n trace.Append(\"[stack trace unavailable]\");\n }\n\n // return html safe stack trace\n return HttpUtility.HtmlEncode(trace.ToString()).Replace(Environment.NewLine, \"<br>\");\n}\n\n", "I believe the pdb files that are output when you do a debug build contain a reference to the location of the source code files.\n", "I think this is down to the debug information that can be included with the compiled assemblies..(although I could definately be wrong)\n", "I believe the information that maps the source to the MSIL is stored in the PDB file. If this is not present then that mapping won't happen.\nIt is this lookup that makes an exception such a expensive operation (\"exceptions are for exceptional situations\").\n" ]
[ 9, 5, 3, 0, 0 ]
[]
[]
[ ".net", "asp.net", "yellow_screen_of_death" ]
stackoverflow_0000020198_.net_asp.net_yellow_screen_of_death.txt
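To see the PDB dependence described above for yourself, a small standalone check (not from the thread): run it once with the .pdb next to the .exe and once without, and the file/line information disappears in the second run while the method names remain.

    using System;
    using System.Diagnostics;

    class PdbDemo
    {
        static void Boom() { throw new InvalidOperationException("boom"); }

        static void Main()
        {
            try
            {
                Boom();
            }
            catch (Exception ex)
            {
                // 'true' requests source file info; it is only available when
                // the matching .pdb can be found alongside the assembly.
                StackTrace trace = new StackTrace(ex, true);
                foreach (StackFrame frame in trace.GetFrames())
                {
                    Console.WriteLine("{0} in {1}:{2}",
                        frame.GetMethod().Name,
                        frame.GetFileName() ?? "<no pdb>",
                        frame.GetFileLineNumber()); // 0 without a pdb
                }
            }
        }
    }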
Q: Is it possible to return objects from a WebService? Instead of returning a common string, is there a way to return classic objects? If not: what are the best practices? Do you transpose your object to xml and rebuild the object on the other side? What are the other possibilities? A: As mentioned, you can do this in .net via serialization. By default all native types are serializable so this happens automagically for you. However if you have complex types, you need to mark the object with the [Serializable] attribute. The same goes for complex types used as properties: the property's type must itself be marked serializable. So for example you need to have: [Serializable] public class MyClass { public string MyString {get; set;} public MyOtherClass MyOtherClassProperty {get; set;} } [Serializable] public class MyOtherClass { /* ... */ } A: If the object can be serialised to XML and can be described in WSDL then yes it is possible to return objects from a webservice. A: Yes: in .NET they call this serialization, where objects are serialized into XML and then reconstructed by the consuming service back into its original object type or a surrogate with the same data structure. A: Where possible, I transpose the objects into XML - this means that the Web Service is more portable - I can then access the service in whatever language, I just need to create the parser/object transposer in that language. Because we have WSDL files describing the service, this is almost automated in some systems. (For example, we have a server written in pure python which is replacing a server written in C, a client written in C++/gSOAP, and a client written in Cocoa/Objective-C. We use soapUI as a testing framework, which is written in Java). A: It is possible to return objects from a web service using XML. But Web Services are supposed to be platform and operating system agnostic. Serializing an object simply allows you to store and retrieve an object from a byte stream, such as a file. For instance, you can serialize a Java object, convert that binary stream (perhaps via a Base 64 encoding into a CDATA field) and transfer that to service's client. But the client would only be able to restore that object if it were Java-based. Moreover, a deep copy is required to serialize an object and have it restored exactly. Deep copies can be expensive. Your best route is to create an XML schema that represents the document and create an instance of that schema with the object specifics. A: .NET automatically does this with objects that are serializable. I'm pretty sure Java works the same way. Here is an article that talks about object serialization in .NET: http://www.codeguru.com/Csharp/Csharp/cs_syntax/serialization/article.php/c7201 A: @Brian: I don't know how things work in Java, but in .net objects get serialized down to XML, not base64 strings. The webservice publishes a wsdl file that contains the method and object definitions required for your webservice. I would hope that nobody creates webservices that simply create a base64 string A: Daniel Auger: As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place. lomax: I have to disagree with this as it's a somewhat narrow comment. 
Using a webservice that can serialize domain objects to XML means that it makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed and it also works in reverse by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML. @ Lomax: You've described two scenarios. Scenario 1: The client is rehydrating the xml message back into the exact same domain object. I consider this to be "returning an object". In my experience this is a bad choice and I'll explain this below. Scenario 2: The client rehydrates the xml message into something other than the exact same domain object: I am 100% behind this, however I don't consider this to be returning a domain object. It's really sending a message or DTO. Now let me explain why true/pure/not DTO object serialization across a web service is usually a bad idea. An assertion: in order to do this in the first place, you either have to be the owner of both the client and the service, or provide the client with a library to use so that they can rehydrate the object back into it's true type. The problem: This domain object as a type now exists in and belongs to two semi-related domains. Over time, behaviors may need to be added in one domain that make no sense in the other domain and this leads to pollution and potentially painful problems. I usually default to scenario 2. I only use scenario 1 when there is an overwhelming reason to do so. I apologize for being so terse with my initial reply. I hope this clears things up to a degree as far as what my opinion is. Lomax, it would seem we half agree ;). A: JSON is a pretty standard way to pass objects around the web (as a subset of javascript). Many languages feature a library which will convert JSON code into a native object - see for example simplejson in Python. For more libraries for JSON use, see the JSON webpage A: As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place. A: As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place. I have to disagree with this as it's a somewhat narrow comment. Using a webservice that can serialize domain objects to XML means that it makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed and it also works in reverse by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML.
Is it possible to return objects from a WebService?
Instead of returning a common string, is there a way to return classic objects? If not: what are the best practices? Do you transpose your object to xml and rebuild the object on the other side? What are the other possibilities?
[ "As mentioned, you can do this in .net via serialization. By default all native types are serializable so this happens automagically for you.\nHowever if you have complex types, you need to mark the object with the [Serializable] attribute. The same goes with complex types as properties.\nSo for example you need to have:\n[Serializable]\npublic class MyClass\n{\n public string MyString {get; set;}\n\n [Serializable]\n public MyOtherClass MyOtherClassProperty {get; set;}\n}\n\n", "If the object can be serialised to XML and can be described in WSDL then yes it is possible to return objects from a webservice.\n", "Yes: in .NET they call this serialization, where objects are serialized into XML and then reconstructed by the consuming service back into its original object type or a surrogate with the same data structure.\n", "Where possible, I transpose the objects into XML - this means that the Web Service is more portable - I can then access the service in whatever language, I just need to create the parser/object transposer in that language.\nBecause we have WSDL files describing the service, this is almost automated in some systems.\n(For example, we have a server written in pure python which is replacing a server written in C, a client written in C++/gSOAP, and a client written in Cocoa/Objective-C. We use soapUI as a testing framework, which is written in Java).\n", "It is possible to return objects from a web service using XML. But Web Services are supposed to be platform and operating system agnostic. Serializing an object simply allows you to store and retrieve an object from a byte stream, such as a file. For instance, you can serialize a Java object, convert that binary stream (perhaps via a Base 64 encoding into a CDATA field) and transfer that to service's client.\nBut the client would only be able to restore that object if it were Java-based. Moreover, a deep copy is required to serialize an object and have it restored exactly. Deep copies can be expensive.\nYour best route is to create an XML schema that represents the document and create an instance of that schema with the object specifics.\n", ".NET automatically does this with objects that are serializable. I'm pretty sure Java works the same way.\nHere is an article that talks about object serialization in .NET:\nhttp://www.codeguru.com/Csharp/Csharp/cs_syntax/serialization/article.php/c7201\n", "@Brian: I don't know how things work in Java, but in .net objects get serialized down to XML, not base64 strings. The webservice publishes a wsdl file that contains the method and object definitions required for your webservice.\nI would hope that nobody creates webservices that simply create a base64 string\n", "\n\nDaniel Auger:\n As others have said, it is possible.\n However, if both the service and\n client use an object that has the\n exact same domain behavior on both\n sides, you probably didn't need a\n service in the first place.\n\nlomax:\n I have to disagree with this as it's a\n somewhat narrow comment. Using a\n webservice that can serialize domain\n objects to XML means that it makes it\n easy for clients that work with the\n same domain objects, but it also means\n that those clients are restricted to\n using that particular web service\n you've exposed and it also works in\n reverse by allowing other clients to\n have no knowledge of your domain\n objects but still interact with your\n service via XML.\n\n@ Lomax: You've described two scenarios. 
Scenario 1: The client is rehydrating the xml message back into the exact same domain object. I consider this to be \"returning an object\". In my experience this is a bad choice and I'll explain this below. Scenario 2: The client rehydrates the xml message into something other than the exact same domain object: I am 100% behind this, however I don't consider this to be returning a domain object. It's really sending a message or DTO.\nNow let me explain why true/pure/not DTO object serialization across a web service is usually a bad idea. An assertion: in order to do this in the first place, you either have to be the owner of both the client and the service, or provide the client with a library to use so that they can rehydrate the object back into it's true type. The problem: This domain object as a type now exists in and belongs to two semi-related domains. Over time, behaviors may need to be added in one domain that make no sense in the other domain and this leads to pollution and potentially painful problems. \nI usually default to scenario 2. I only use scenario 1 when there is an overwhelming reason to do so.\nI apologize for being so terse with my initial reply. I hope this clears things up to a degree as far as what my opinion is. Lomax, it would seem we half agree ;).\n", "JSON is a pretty standard way to pass objects around the web (as a subset of javascript). Many languages feature a library which will convert JSON code into a native object - see for example simplejson in Python.\nFor more libraries for JSON use, see the JSON webpage\n", "As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place. \n", "\nAs others have said, it is possible.\n However, if both the service and\n client use an object that has the\n exact same domain behavior on both\n sides, you probably didn't need a\n service in the first place.\n\nI have to disagree with this as it's a somewhat narrow comment. Using a webservice that can serialize domain objects to XML means that it makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed and it also works in reverse by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML.\n" ]
[ 7, 5, 3, 2, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "web_services" ]
stackoverflow_0000011879_web_services.txt
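For the .NET side of this discussion, the mechanics look like this in an old-style ASMX service. StatsDto and its fields are made up for the sketch; the point is that the client only ever sees the WSDL-described XML (the "scenario 2" messaging style argued for above), never the CLR type itself.

    using System.Web.Services;

    // A plain DTO: public members are what gets emitted as XML.
    public class StatsDto
    {
        public string PlayerName;
        public int HighScore;
    }

    [WebService(Namespace = "http://example.org/stats")]
    public class StatsService : WebService
    {
        [WebMethod]
        public StatsDto GetStats(string playerName)
        {
            // The runtime serializes the return value to XML for the response;
            // any client can rehydrate it into its own local type.
            StatsDto dto = new StatsDto();
            dto.PlayerName = playerName;
            dto.HighScore = 420;
            return dto;
        }
    }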
Q: Best way to keep an ordered list of windows (from most-recently created to oldest)? What is the best way to manage a list of windows (keeping them in order) to be able to promote the next window to the top-level when the current top-level window is closed? This is for a web application, so we're using jQuery Javascript. We'd talked through a few simplistic solutions, such as using an array and just treating [0] index as the top-most window. I'm wondering if there's any potentially more efficient or useful alternative to what we had brainstormed. A: I don't really know javascript, but couldn't you create a stack of windows? A: A stack if you want to just close the window on top. A queue if you also need to open windows at the end. A: Stack/queue in JS is a simple array, which can be manipulated with .push(val), .pop(), .unshift(val) and .shift().
Best way to keep an ordered list of windows (from most-recently created to oldest)?
What is the best way to manage a list of windows (keeping them in order) to be able to promote the next window to the top-level when the current top-level window is closed? This is for a web application, so we're using jQuery Javascript. We'd talked through a few simplistic solutions, such as using an array and just treating [0] index as the top-most window. I'm wondering if there's any potentially more efficient or useful alternative to what we had brainstormed.
[ "I don't really know javascript, but couldn't you create a stack of windows?\n", "A stack if you want to just close the window on top.\nA queue if you also need to open windows at the end.\n", "Stack/queue in JS is a simple array, which can be manipulated with .push(val), .pop(), .shift(val) and .unshift().\n" ]
[ 1, 1, 1 ]
[]
[]
[ "javascript" ]
stackoverflow_0000019970_javascript.txt
Q: PHP4 to PHP5 Migration What are some good steps to follow for a smooth migration from PHP4 to PHP5? What are some types of code that are likely to break? A: I also once worked on an app which used PHP4's XML support quite heavily, and would have required quite a bit of work to move to PHP5. One of the other significant changes I was looking at at the time was the change of the default handling of function parameters. In PHP4 if I remember, they were pass-by-copy unless you specified otherwise, but in PHP5 it changed to pass-by-reference by default. In well written code, that probably won't make a big difference to you, but it could certainly cause problems. I think one other thing I found changed is that objects are no longer allowed to overwrite their 'this' field. I would say that was a really bad idea to begin with (and I think it may have not been an intentional feature in PHP4), but I certainly found a few parts of our system that relied on it. Hope some of that helps. A: The best advice I could give anyone working with PHP4 is this: error_reporting( E_ALL ); It pretty much will tell you exactly what you need to do. A: We had an app that relied heavily on the PHP 4 XML DOM functions and it required a lot of retooling to change over to PHP 5. Beyond that most changes were improvements to things like error handling (to take advantage of exceptions) and PHP Classes. A: OOP is one of the largest differences. It won't break as the PHP4 and PHP5 OOP styles are interchangeable but I'd recommend taking advantage of PHP5's new OOP styles. It's not a huge amount of work to convert your existing classes to PHP5 and it does give you some extra magical methods that may help solve some existing hacks (I remember having a near useless __toString equivalent method in most classes).
PHP4 to PHP5 Migration
What are some good steps to follow for a smooth migration from PHP4 to PHP5? What are some types of code that are likely to break?
[ "I also once worked on an app which used PHP4's XML support quite heavily, and would have required quite a bit of work to move to PHP5.\nOne of the other significant changes I was looking at at the time was the change of the default handling of function parameters. In PHP4 if I remember, they were pass-by-copy unless you specified otherwise, but in PHP5 is changed to pass-by-reference by default. In well written code, that probably won't make a big difference to you, but it could certainly cause problems.\nI think one other thing I found changed is that objects are no longer allowed to overwrite their 'this' field. I would say that was a really bad idea to begin with (and I think it may have not been an intentional feature in PHP4), but I certainly found a few parts of our system that relied on it.\nHope some of that helps.\n", "The best advice I could give anyone working with PHP4 is this:\nerror_reporting( E_ALL );\n\nIt pretty much will tell you exactly what you need to do.\n", "We had an app that relied heavily on the PHP 4 XML DOM functions and it required a lot of retooling to change over PHP 5.\nBeyond that most changes were improvements to things like error handling (to take advantage of exceptions) and PHP Classes.\n", "OOP is one of the largest differences. It won't break as the PHP4 and PHP5 OOP styles are interchangeable but I'd reccommend taking advantage of PHP5's new OOP styles. It's not a huge amount of work to convert your existing classes to PHP5 and it does give you some extra magical methods that may help solve some existing hacks (I remember having a near useless __toString equivalent method in most classes).\n" ]
[ 8, 2, 1, 1 ]
[]
[]
[ "migration", "php" ]
stackoverflow_0000006594_migration_php.txt
Q: PHP with SQL Server 2005+ Currently we have a hybrid ASP/PHP setup connecting to a SQL Server 2005 database. But all the query work is done on the client side, I'd like to move some of this to PHP. What driver and/or connection string is needed to connect to Sql Svr and what is the syntax to use in PHP? Update: OK so I was definitely trying to avoid using anything to do with copying DLLs etc. I'll look into the SQL2K5PHP driver (thanks Vincent). @jcarrascal for the sake of clarity, by "client side" I mean our application is an internal web app that runs as an HTA, with all queries done via javascript calls to an ASP which actually submits the DB request. A: You have two options: 1) php_mssql extension : If you'd like something that has the same API mysql and mysqli has, then use the php_mssql extension. But there is a catch, the bundled ntwdblib.dll file with PHP is not working. You have to find this file from a SQL Server 2000 installation or you can find it on the Internet. This API is supposedly not very reliable but I have been using it without problem for about one year. http://ca.php.net/mssql 2) Microsoft SQL Server 2005 PHP Driver : If you'd like something more modern but which does not have the same API and is missing some important functions (mssql_num_rows). The big plus is that it is supported by Microsoft and is likely to work with a future version. http://msdn.microsoft.com/en-us/data/cc299381.aspx A: Just use the mssql_connect() function like this: $conn = mssql_connect('localhost', 'sa' , '123456') or die('Can\'t connect.'); mssql_select_db('database', $conn) or die('Can\'t select the database'); Functions relating to SQL Server are defined in the PHP manual for the MSSQL driver. One question though, "all the query work is done on the client side" WTF? :D A: PHP provides an extension for accessing Microsoft SQL Server databases. To use the SQL Server extension, all that is required is to activate the extension in the PHP configuration file. Details on the MSDN page
PHP with SQL Server 2005+
Currently we have a hybrid ASP/PHP setup connecting to a SQL Server 2005 database. But all the query work is done on the client side; I'd like to move some of this to PHP. What driver and/or connection string is needed to connect to Sql Svr and what is the syntax to use in PHP? Update: OK so I was definitely trying to avoid using anything to do with copying DLLs etc. I'll look into the SQL2K5PHP driver (thanks Vincent). @jcarrascal for the sake of clarity, by "client side" I mean our application is an internal web app that runs as an HTA, with all queries done via javascript calls to an ASP which actually submits the DB request.
[ "You have two options:\n1) php_mssql extension : If you'd like something that has the same API mysql and mysqli has, then use the php_mssql extension. But there is a catch, the bundled ntwdblib.dll file with PHP is not working. You have to find this file from a SQL Server 2000 installation or you can find it on the Internet. This API is supposedly not very reliable but I have been using it without problem for about one year.\nhttp://ca.php.net/mssql\n2) Microsoft SQL Server 2005 PHP Driver : If you'd like something more modern but which does not have the same API and is missing some important functions (mssql_num_rows). The big plus is that it is supported by Microsoft and is likely to work with a future version.\nhttp://msdn.microsoft.com/en-us/data/cc299381.aspx\n", "Just use the mssql_connect() function like this:\n$conn = mssql_connect('localhost', 'sa' , '123456')\n or die('Can\\'t connect.');\nmssql_select_db('database', $conn)\n or die('Can\\'t select the database');\n\nFunctions relating to SQL Server are defined in the PHP manual for the MSSQL driver.\nOne question though, \"all the query work is done on the client side\" WTF? :D\n", "PHP provides an extension for accessing Microsoft SQL Server databases. To use the SQL Server extension, all that is required is to activate the extension in the PHP configuration file.\nDetails on the MSDN page\n" ]
[ 5, 2, 0 ]
[]
[]
[ "php", "sql_server" ]
stackoverflow_0000019837_php_sql_server.txt
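For completeness, a hedged sketch of what the two options above look like in code; the server name, database, and credentials are placeholders, and the sqlsrv_* calls belong to the Microsoft driver from the first answer:

<?php
// Option 1: php_mssql extension
$conn = mssql_connect('myserver', 'sa', 'secret') or die('Connection failed.');
mssql_select_db('mydb', $conn);
$result = mssql_query('SELECT TOP 10 * FROM SomeTable', $conn);
while ($row = mssql_fetch_assoc($result)) { print_r($row); }

// Option 2: Microsoft SQL Server 2005 PHP driver
$conn = sqlsrv_connect('myserver', array('Database' => 'mydb', 'UID' => 'sa', 'PWD' => 'secret'));
$stmt = sqlsrv_query($conn, 'SELECT TOP 10 * FROM SomeTable');
while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) { print_r($row); }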
Q: Script to backup svn repository to network share I have a svn repo on my machine (Windows). Anyone have a script to back it up to a network share? I'm using the repo locally since I'm disconnected a lot. The network share is on a server with a backup strategy. I'm a perfect candidate for git/hg but I don't want to give up my VS integration just yet. A: svnadmin dump C:\SVNRepositorio\Repositorio > \Backups\BkTmpSubversion\subversiontemp.dump ditto Spooky's reply ^^ On linux you might try adding "| gzip" in the middle also take a look at the --incremental & --deltas flags sparkes: For some values of "My machine" that won't be local. Also, if you are using SVN for non-commercial reasons (I have all my homework from college checked into an SVN) you might not have a backup system. A: svnadmin dump C:\SVNRepositorio\Repositorio > \\Backups\BkTmpSubversion\subversiontemp.dump Try this. A: I wrote a batch file to do this for a bunch of repos; you could just hook that batch file up to windows scheduler and run it on a schedule. svnadmin hotcopy m:\Source\Q4Press\Repo m:\SvnOut\Q4Press I use the hotcopy but the svn dump would work just as well.
Script to backup svn repository to network share
I have a svn repo on my machine (Windows). Anyone have a script to back it up to a network share? I'm using the repo locally since I'm disconnected a lot. The network share is on a server with a backup strategy. I'm a perfect candidate for git/hg but I don't want to give up my VS integration just yet.
[ "\nsvnadmin dump C:\\SVNRepositorio\\Repositorio > \\Backups\\BkTmpSubversion\\subversiontemp.dump\n\nditto Spooky's reply ^^\nOn linux you might try adding \"| gzip\" in the middle\nalso take a look at the --incremental & --deltas flags\n\nsparkes: For some values of \"My machine\" that won't be local.\nAlso If you are using SVN for non commercial reasons (I have all my homework from collage checked into a SVN) you might not have a backup system.\n", "svnadmin dump C:\\SVNRepositorio\\Repositorio > \\\\Backups\\BkTmpSubversion\\subversiontemp.dump\nTry this.\n", "I wrote a batch file to do this for a bunch of repos, you could just hook that batch file up to windows scheduler and run it on a schedule.\nsvnadmin hotcopy m:\\Source\\Q4Press\\Repo m:\\SvnOut\\Q4Press\n\nI use the hotcopy but the svn dump would work just as well.\n" ]
[ 4, 3, 3 ]
[]
[]
[ "svn" ]
stackoverflow_0000020391_svn.txt
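A sketch of the scheduled batch-file approach from the last answer, extended to loop over several repositories (the repository names and paths are hypothetical; schedule the file with Windows Task Scheduler):

@echo off
rem Dump each repository to the network share
for %%R in (Repo1 Repo2 Repo3) do (
    svnadmin dump C:\SVNRepos\%%R > \\backupserver\svnbackups\%%R.dump
)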
Q: Is it OK to drop sql statistics? We've been trying to alter a lot of columns from nullable to not nullable, which involves dropping all the associated objects, making the change, and recreating the associated objects. We've been using SQL Compare to generate the scripts, but I noticed that SQL Compare doesn't script statistic objects. Does this mean it's OK to drop them and the database will work as well as it did before without them, or have Red Gate missed a trick? A: If you have update stats and auto create stats on, then it should work as before. You can also run sp_updatestats or UPDATE STATISTICS WITH FULLSCAN after you make the changes. A: It is considered best practice to auto create and auto update statistics. Sql Server will create them if it needs them. You will often see the tuning wizard generate lots of these, and you will also see people advise that you update statistics as a part of your maintenance plan, but this is not necessary and might actually make things worse, just so long as auto create and auto update are enabled. A: Why are you dropping objects? Seems to me the sequence should be a lot simpler, and less destructive: assign all of these objects a default value, then make the change to not nullable. A: Statistics are too data-specific to be tooled. It would be potentially very inefficient to blindly re-create them on a data set.
Is it OK to drop sql statistics?
We've been trying to alter a lot of columns from nullable to not nullable, which involves dropping all the associated objects, making the change, and recreating the associated objects. We've been using SQL Compare to generate the scripts, but I noticed that SQL Compare doesn't script statistic objects. Does this mean it's OK to drop them and the database will work as well as it did before without them, or have Red Gate missed a trick?
[ "If you have update stats and auto create stats on then it should works as before\nYou can also run sp_updatestats or UPDATE STATISTICS WITH FULLSCAN after you make the changes\n", "It is considered best practice to auto create and auto update statistics. Sql Server will create them if it needs them. You will often see the tuning wizard generate lots of these, and you will also see people advise that you update statistics as a part of your maintenance plan, but this is not necessary and might actually make things worse, just so long as auto create and auto update are enabled.\n", "Why are you dropping objects? Seems to me the sequence should be a lot simpler, and less destructive: assign all of these objects a default value, then make the change to not nullable.\n", "Statistics are too data-specific to be tooled. It would be potentially very inefficient to blindly re-create them on a data set.\n" ]
[ 2, 2, 1, 0 ]
[]
[]
[ "scripting", "sql", "sql_server", "statistics" ]
stackoverflow_0000020392_scripting_sql_sql_server_statistics.txt
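A short T-SQL sketch of the settings and commands the answers refer to (the database and table names are placeholders):

-- Make sure the database creates and maintains statistics automatically
ALTER DATABASE MyDb SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON;

-- After the schema change, refresh statistics
EXEC sp_updatestats;                          -- whole database
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;  -- or one table at a time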
Q: How do you unit test web apps hosted remotely? I'm familiar with TDD and use it in both my workplace and my home-brewed web applications. However, every time I have used TDD in a web application, I have had the luxury of having full access to the web server. That means that I can update the server then run my unit tests directly from the server. My question is, if you are using a third party web host, how do you run your unit tests on them? You could argue that if your app is designed well and your build process is sound and automated, that running unit tests on your production server isn't necessary, but personally I like the peace of mind in knowing that everything is still "green" after a major update. For everyone who has responded with "just test before you deploy" and "don't you have a staging server?", I understand where you're coming from. I do have a staging server and a CI process set up. My unit tests do run and I make sure they all pass before an update to production. I realize that in a perfect world I wouldn't be concerned with this. But I've seen it happen before. If a file is left out of the update or a SQL script isn't run, the effects are immediately apparent when running your unit tests but can go unnoticed for quite some time without them. What I'm asking here is if there is any way, if only to satisfy my own compulsive desires, to run a unit test on a server that I cannot install applications on or remote into (e.g. one which I will only have FTP access to in order to update files)? A: I think I probably would have to argue that running unit tests on your production server isn't really part of TDD because by the time you deploy to your production environment, technically speaking, you're past "development". I'm quite a stickler for TDD, and when I'm preaching the benefits to clients I often find myself saying "you can't half adopt TDD, it's all or nothing" What you probably should have is some form of automated testing that you perform "after" deployment but these are not part of TDD. Maybe you should look at your process again. A: You could write functional tests in something like WATIR, WATIN or Selenium that test what is returned in the response page after posting certain form data or requesting specific URLs. A: For clarification: what sort of access do you have to your web server? FTP or WebDAV only? From your question, I'm guessing ssh access isn't available - you're dropping files in a directory to deploy. Is that correct? If so, the answer for unit testing is likely 'do it before you deploy'. You can set up functional testing driven by an automated tool like Selenium to test your app remotely via the web interface, but that's not really unit testing in the sense that you're restricted to testing the system as a whole. Have you considered setting up a staging server, perhaps as a VMWare instance, that mirrors or at least mimics your deployment environment? A: What's preventing you from running unit tests on the server? If you can upload your production code and let it run there, why can't you upload this other code and run it as well? A: I've written test tools for sites using python and httplib/urllib2; generally it would have been overkill, but it was suitable in these cases. Not sure it's going to be of general use though.
How do you unit test web apps hosted remotely?
I'm familiar with TDD and use it in both my workplace and my home-brewed web applications. However, every time I have used TDD in a web application, I have had the luxury of having full access to the web server. That means that I can update the server then run my unit tests directly from the server. My question is, if you are using a third party web host, how do you run your unit tests on them? You could argue that if your app is designed well and your build process is sound and automated, that running unit tests on your production server isn't necessary, but personally I like the peace of mind in knowing that everything is still "green" after a major update. For everyone who has responded with "just test before you deploy" and "don't you have a staging server?", I understand where you're coming from. I do have a staging server and a CI process set up. My unit tests do run and I make sure they all pass before an update to production. I realize that in a perfect world I wouldn't be concerned with this. But I've seen it happen before. If a file is left out of the update or a SQL script isn't run, the effects are immediately apparent when running your unit tests but can go unnoticed for quite some time without them. What I'm asking here is if there is any way, if only to satisfy my own compulsive desires, to run a unit test on a server that I cannot install applications on or remote into (e.g. one which I will only have FTP access to in order to update files)?
[ "I think I probably would have to argue that running unit tests on your production server isn't really part of TDD because by the time you deploy to your production environment technically speaking, you're past \"development\".\nI'm quite a stickler for TDD, and when I'm preaching the benefits to clients I often find myself saying \"you can't half adopt TDD, it's all or nothing\" \nWhat you probably should have is some form of automated testing that you perform \"after\" deployment but these are not part of TDD.\nMaybe you should look at your process again. \n", "You could write functional tests in something like WATIR, WATIN or Selenium that test what is returned in the reponse page after posting certain form data or requesting specific URLs.\n", "For clarification: what sort of access do you have to your web server? FTP or WebDAV only? From your question, I'm guessing ssh access isn't available - you're dropping files in a directory to deploy. Is that correct?\nIf so, the answer for unit testing is likely 'do it before you deploy'. You can set up functional testing driven by an automated tool like Selenium to test your app remotely via the web interface, but that's not really unit testing the sense that you're restricted to testing the system as a whole.\nHave you considered setting up a staging server, perhaps as a VMWare instance, that mirrors or at least mimics your deployment environment?\n", "What's preventing you from running unit tests on the server? If you can upload your production code and let it run there, why can't you upload this other code and run it as well?\n", "I've written test tools for sites using python and httplib/urllib2 generally it would have been overkill but it was suitable in these cases. Not sure it's going to be of general use though.\n" ]
[ 3, 1, 1, 1, 0 ]
[]
[]
[ "tdd", "unit_testing", "web_applications" ]
stackoverflow_0000020511_tdd_unit_testing_web_applications.txt
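As a sketch of the post-deployment smoke test the last answer hints at, using Python's urllib2; the URLs and expected strings are placeholders for pages that would break if a file or SQL script were missed:

import urllib2

checks = {
    'http://example.com/app/': 'Welcome',
    'http://example.com/app/version': '1.4.2',
}
for url, expected in checks.items():
    body = urllib2.urlopen(url).read()
    if expected not in body:
        raise Exception('Smoke test failed for %s' % url)
print 'All smoke tests passed'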
Q: Path Display in Label Are there any automatic methods for trimming a path string in .NET? For example: C:\Documents and Settings\nick\My Documents\Tests\demo data\demo data.emx becomes C:\Documents...\demo data.emx It would be particularly cool if this were built into the Label class, and I seem to recall it is--can't find it though! A: Use TextRenderer.DrawText with the TextFormatFlags.PathEllipsis flag void label_Paint(object sender, PaintEventArgs e) { Label label = (Label)sender; TextRenderer.DrawText(e.Graphics, label.Text, label.Font, label.ClientRectangle, label.ForeColor, TextFormatFlags.PathEllipsis); } Your code is 95% there. The only problem is that the trimmed text is drawn on top of the text which is already on the label. Yes thanks, I was aware of that. My intention was only to demonstrate the use of the DrawText method. I didn't know whether you want to manually create an event for each label or just override the OnPaint() method in an inherited label. Thanks for sharing your final solution though. A: @ lubos hasko Your code is 95% there. The only problem is that the trimmed text is drawn on top of the text which is already on the label. This is easily solved: Label label = (Label)sender; using (SolidBrush b = new SolidBrush(label.BackColor)) e.Graphics.FillRectangle(b, label.ClientRectangle); TextRenderer.DrawText( e.Graphics, label.Text, label.Font, label.ClientRectangle, label.ForeColor, TextFormatFlags.PathEllipsis); A: Not hard to write yourself though: public static string TrimPath(string path) { int someArbitaryNumber = 10; string directory = Path.GetDirectoryName(path); string fileName = Path.GetFileName(path); if (directory.Length > someArbitaryNumber) { return String.Format(@"{0}...\{1}", directory.Substring(0, someArbitaryNumber), fileName); } else { return path; } } I guess you could even add it as an extension method. A: What you are thinking of with the label is that it will put ... if it is longer than the width (not set to auto size), but that would be c:\Documents and Settings\nick\My Doc... If there is support, it would probably be on the Path class in System.IO A: You could use the System.IO.Path.GetFileName method and append that string to a shortened System.IO.Path.GetDirectoryName string.
Path Display in Label
Are there any automatic methods for trimming a path string in .NET? For example: C:\Documents and Settings\nick\My Documents\Tests\demo data\demo data.emx becomes C:\Documents...\demo data.emx It would be particularly cool if this were built into the Label class, and I seem to recall it is--can't find it though!
[ "Use TextRenderer.DrawText with TextFormatFlags.PathEllipsis flag\nvoid label_Paint(object sender, PaintEventArgs e)\n{\n Label label = (Label)sender;\n TextRenderer.DrawText(e.Graphics, label.Text, label.Font, label.ClientRectangle, label.ForeColor, TextFormatFlags.PathEllipsis);\n}\n\n\nYour code is 95% there. The only\n problem is that the trimmed text is\n drawn on top of the text which is\n already on the label.\n\nYes thanks, I was aware of that. My intention was only to demonstrate use of DrawText method. I didn't know whether you want to manually create event for each label or just override OnPaint() method in inherited label. Thanks for sharing your final solution though.\n", "@ lubos hasko Your code is 95% there. The only problem is that the trimmed text is drawn on top of the text which is already on the label. This is easily solved:\n Label label = (Label)sender;\n using (SolidBrush b = new SolidBrush(label.BackColor))\n e.Graphics.FillRectangle(b, label.ClientRectangle);\n TextRenderer.DrawText(\n e.Graphics, \n label.Text, \n label.Font, \n label.ClientRectangle, \n label.ForeColor, \n TextFormatFlags.PathEllipsis);\n\n", "Not hard to write yourself though:\n public static string TrimPath(string path)\n {\n int someArbitaryNumber = 10;\n string directory = Path.GetDirectoryName(path);\n string fileName = Path.GetFileName(path);\n if (directory.Length > someArbitaryNumber)\n {\n return String.Format(@\"{0}...\\{1}\", \n directory.Substring(0, someArbitaryNumber), fileName);\n }\n else\n {\n return path;\n }\n }\n\nI guess you could even add it as an extension method.\n", "What you are thinking on the label is that it will put ... if it is longer than the width (not set to auto size), but that would be\nc:\\Documents and Settings\\nick\\My Doc...\n\nIf there is support, it would probably be on the Path class in System.IO\n", "You could use the System.IO.Path.GetFileName method and append that string to a shortened System.IO.Path.GetDirectoryName string.\n" ]
[ 9, 4, 3, 0, 0 ]
[]
[]
[ ".net", "c#", "path", "winforms" ]
stackoverflow_0000020467_.net_c#_path_winforms.txt
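Tying the accepted answer together, a minimal usage sketch (assumes a form with a Label named pathLabel, with AutoSize turned off):

pathLabel.Text = @"C:\Documents and Settings\nick\My Documents\Tests\demo data\demo data.emx";
pathLabel.Paint += label_Paint;  // the Paint handler from the first answer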
Q: State of Registers After Bootup I'm working on a boot loader on an x86 machine. When the BIOS copies the contents of the MBR to 0x7c00 and jumps to that address, is there a standard meaning to the contents of the registers? Do the registers have standard values? I know that the segment registers are typically set to 0, but will sometimes be 0x7c0. What about the other hardware registers? A: This early execution environment is highly implementation defined, meaning the implementation of your particular BIOS. Never make any assumptions on the contents of registers. They might be initialized to 0, but they might contain a random value just as well. from the OS dev Wiki, which is where I get information when I'm playing with my toy OS's A: Best option would be to assume nothing. If they have meaning, you will find that from the other side when you need the information they provide. A: Undefined, I believe? I think it depends on the mainboard and CPU, and should be treated as random for your own good. A: You can always initialize them yourself to start with a known state. A: Safest bet is to assume undefined. A: The only thing that I know to be well defined is the processor state immediately after reset. For the record you can find that in Intel's Software Developer's Manual Vol 3 chapter 8: "PROCESSOR MANAGEMENT AND INITIALIZATION" in the table titled " IA-32 Processor States Following Power-up, Reset, or INIT" A: Always assume undefined, otherwise you'll hit bad problems if you ever try to port architectures. There is nothing quite like the pain of porting code that assumes everything uninitialized will be set to zero.
State of Registers After Bootup
I'm working on a boot loader on an x86 machine. When the BIOS copies the contents of the MBR to 0x7c00 and jumps to that address, is there a standard meaning to the contents of the registers? Do the registers have standard values? I know that the segment registers are typically set to 0, but will sometimes be 0x7c0. What about the other hardware registers?
[ "\nThis early execution environment is highly implementation defined, meaning the implementation of your particular BIOS. Never make any assumptions on the contents of registers. They might be initialized to 0, but they might contain a random value just as well. \n\nfrom the OS dev Wiki, which is where I get information when I'm playing with my toy OS's\n", "Best option would be to assume nothing. If they have meaning, you will find that from the other side when you need the information they provide.\n", "Undefined, I believe? I think it depends on the mainboard and CPU, and should be treated as random for your own good.\n", "You can always initialize them yourself to start with a known state.\n", "Safest bet is to assume undefined. \n", "The only thing that I know to be well defined is the processor state immediately after reset.\nFor the record you can find that in Intel's Software Developer's Manual Vol 3 chapter 8: \"PROCESSOR MANAGEMENT AND INITIALIZATION\" in the table titled \" IA-32 Processor States Following Power-up, Reset, or INIT\"\n", "Always assume undefined, otherwise you'll hit bad problems if you ever try to port architectures.\nThere is nothing quite like the pain of porting code that assumes everything uninitialized will be set to zero.\n" ]
[ 8, 1, 1, 1, 1, 1, 1 ]
[]
[]
[ "bios", "boot" ]
stackoverflow_0000020336_bios_boot.txt
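A sketch of the "initialize them yourself" advice for a boot sector, in NASM syntax; the layout shown is a common convention, not something the BIOS guarantees:

org 0x7C00
cli                     ; no interrupts while the stack is not yet valid
xor ax, ax
mov ds, ax              ; assume nothing: set every segment you rely on
mov es, ax
mov ss, ax
mov sp, 0x7C00          ; stack grows down from just below the boot sector
sti
jmp 0x0000:start        ; normalize CS:IP, since some BIOSes jump to 07C0:0000
start: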
Q: Broken chart images in Crystal Reports in web application I have a collection of crystal reports that contains charts. They look fine locally and when printed, but when viewing them through a web application using a CrystalReportViewer the charts display as broken images. Viewing the properties of the broken image shows the url as ...CrystalImageHandler.aspx?dynamicimage=cr_tmp_image_8d12a01f-b336-4b8b-b0c7-83d9571d87e4.png. I have tried adding <httpHandlers> <add verb="GET" path="CrystalImageHandler.aspx" type="CrystalDecisions.Web.CrystalImageHandler,CrystalDecisions.Web, Version=Version=10.5.3700.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/> </httpHandlers> to the web.config as suggested via a google search but that has not resolved my issue. A: Maybe a permissions issue on the Crystal libraries?? I've run into that before with Crystal, not specifically the ImageHandler though.
Broken chart images in Crystal Reports in web application
I have a collection of crystal reports that contains charts. They look fine locally and when printed, but when viewing them through a web application using a CrystalReportViewer the charts display as broken images. Viewing the properties of the broken image shows the url as ...CrystalImageHandler.aspx?dynamicimage=cr_tmp_image_8d12a01f-b336-4b8b-b0c7-83d9571d87e4.png. I have tried adding <httpHandlers> <add verb="GET" path="CrystalImageHandler.aspx" type="CrystalDecisions.Web.CrystalImageHandler,CrystalDecisions.Web, Version=Version=10.5.3700.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/> </httpHandlers> to the web.config as suggested via a google search but that has not resolved my issue.
[ "Maybe a permissions issue on the Crystal libraries?? I've run into that before with Crystal, not specfically the ImageHandler though.\n" ]
[ 1 ]
[]
[]
[ "crystal_reports", "web_applications" ]
stackoverflow_0000020201_crystal_reports_web_applications.txt
Q: why are downloads sometimes tagged md5, sha1 and other hash indicators? I've seen this all over the place: Download here! SHA1 = 8e1ed2ce9e7e473d38a9dc7824a384a9ac34d7d0 What does it mean? How does a hash come into play as far as downloads and... What use can I make of it? Is this a legacy item where you used to have to verify some checksum after you downloaded the whole file? A: It's a security measure. It allows you to verify that the file you just downloaded is the one that the author posted to the site. Note that using hashes from the same website you're getting the files from is not especially secure. Often a good place to get them from is a mailing list announcement where a PGP-signed email contains the link to the file and the hash. Since this answer has been ranked so highly compared to the others for some reason, I'm editing it to add the other major reason mentioned first by the other authors below, which is to verify the integrity of the file after transferring it over the network. So: Security - verify that the file that you downloaded was the one the author originally published Integrity - verify that the file wasn't damaged during transmission over the network. A: When downloading larger files, it's often useful to perform a checksum to ensure your download was successful and not mangled in transport. There's tons of freeware apps that can be used to gen the checksum for you to validate your download. This to me is an interesting mainstreaming of procedures that popular mp3 and warez sites used to use back in the day when distributing files. A: SHA1 and MD5 hashes are used to verify the integrity of files you've downloaded. They aren't necessarily a legacy technology, and can be used by tools like those in the openssl suite to verify whether or not a file has been corrupted/changed from its original. A: It's to ensure that you downloaded the file correctly. If you hash the downloaded file and it matches the hash on the page, all is well. A: A cryptographic hash (such as SHA1 or MD5) allows you to verify that the file you have has been downloaded correctly and has not been tampered with. A: To go along with what everyone here is saying, I use HashTab when I need to generate/compare MD5 and SHA1 hashes on Windows. It adds a new tab to the file properties window and will calculate the hashes. A: With a hash (MD5, SHA-1) one input matches only with one output, and then if you download the file and calculate the hash again you should obtain the same output. If the output is different, the file is corrupt. If (hash(file) == "Hash in page") validFile = true; else validFile = false;
why are downloads sometimes tagged md5, sha1 and other hash indicators?
I've seen this all over the place: Download here! SHA1 = 8e1ed2ce9e7e473d38a9dc7824a384a9ac34d7d0 What does it mean? How does a hash come into play as far as downloads and... What use can I make of it? Is this a legacy item where you used to have to verify some checksum after you downloaded the whole file?
[ "It's a security measure. It allows you to verify that the file you just downloaded is the one that the author posted to the site. Note that using hashes from the same website you're getting the files from is not especially secure. Often a good place to get them from is a mailing list announcement where a PGP-signed email contains the link to the file and the hash.\nSince this answer has been ranked so highly compared to the others for some reason, I'm editing it to add the other major reason mentioned first by the other authors below, which is to verify the integrity of the file after transferring it over the network.\nSo:\n\nSecurity - verify that the file that you downloaded was the one the author originally published\nIntegrity - verify that the file wasn't damaged during transmission over the network.\n\n", "When downloading larger files, it's often useful to perform a checksum to ensure your download was successful and not mangled along transport. There's tons of freeware apps that can be used to gen the checksum for you to validate your download. This to me is an interesting mainstreaming of procedures that popular mp3 and warez sites used to use back in the day when distributing files.\n", "SHA1 and MD5 hashes are used to verify the integrity of files you've downloaded. They aren't necessarily a legacy technology, and can be used by tools like those in the openssl to verify whether or not your a file has been corrupted/changed from its original.\n", "It's to ensure that you downloaded the file correctly. If you hash the downloaded the file and it matches the hash on the page, all is well. \n", "A cryptographic hash (such as SH1 or MD5) allows you to verify that file you have has been downloaded correctly and has not been tampered with.\n", "To go along with what everyone here is saying I use HashTab when I need to generate/compare MD5 and SHA1 hashes on Windows. It adds a new tab to the file properties window and will calculate the hashes.\n", "With a has (MD5, SHA-1) one input matches only with one output, and then if you down load the file and calculate the hash again should obtain the same output.\nIf the output is different the file is corrupt.\nIf (hash(file) == “Hash in page”)\n validFile = true;\nelse\n validFile = false;\n\n" ]
[ 21, 6, 3, 2, 2, 2, 0 ]
[]
[]
[ "checksum", "download", "hash" ]
stackoverflow_0000020627_checksum_download_hash.txt
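For example, verifying a download against a published digest with the openssl tool one answer mentions (the file name is a placeholder; compare the printed digest to the one on the download page):

$ openssl sha1 download.zip
SHA1(download.zip)= 8e1ed2ce9e7e473d38a9dc7824a384a9ac34d7d0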
Q: Repository layout for large Maven projects I have a large application (~50 modules) using a structure similar to the following: Application Communication modules Color communication module SSN communication module etc. communication module Router module Service modules Voting service module Web interface submodule for voting Vote collector submodule for voting etc. for voting Quiz service module etc. module I would like to import the application to Maven and Subversion. After some research I found that two practical approaches exist for this. One is using a tree structure just as the previous one. The drawback of this structure is that you need a ton of tweaking/hacks to get multi-module reporting to work well with Maven. Another downside is that in Subversion the standard trunk/tags/branches approach adds even more complexity to the repository. The other approach uses a flat structure, where there is only one parent project and all the modules, submodules and parts-of-the-submodules are direct children of the parent project. This approach works well for reporting and is easier in Subversion; however, I feel I lose a bit of the structure this way. Which way would you choose in the long term and why? A: We have a largish application (160+ OSGi bundles where each bundle is a Maven module) and the lesson we learned, and continue to learn, is that flat is better. The problem with encoding semantics in your hierarchy is that you lose flexibility. A module that is 100% say "communication" today may be partly "service" tomorrow and then you'll need to be moving things around in your repository and that will break all sorts of scripts, documentation, references, etc. So I would recommend a flat structure and to encode the semantics in another place (say for example an IDE workspace or documentation). I've answered a question about version control layout in some detail with examples at another question, it may be relevant to your situation. A: I think you're better off flattening your directory structure. Perhaps you want to come up with a naming convention for the directories such that they sort nicely when viewing all of the projects, but ultimately I don't think all of that extra hierarchy is necessary. Assuming you're using Eclipse as your IDE all of the projects are going to end up in a flat list once you import them anyway so you don't really gain anything from the additional sub directories. That in addition to the fact that the configuration is so much simpler without all the extra hierarchy makes the choice pretty clear in my mind. You might also want to consider combining some of the modules. I know nothing about your app or domain, but it seems like a lot of those leaf level modules might be better suited as just packages or sets of packages inside another top level module. I'm all for keeping jars cohesive, but it can be taken too far sometimes.
Repository layout for large Maven projects
I have a large application (~50 modules) using a structure similar to the following: Application Communication modules Color communication module SSN communication module etc. communication module Router module Service modules Voting service module Web interface submodule for voting Vote collector submodule for voting etc. for voting Quiz service module etc. module I would like to import the application to Maven and Subversion. After some research I found that two practical approaches exist for this. One is using a tree structure just as the previous one. The drawback of this structure is that you need a ton of tweaking/hacks to get multi-module reporting to work well with Maven. Another downside is that in Subversion the standard trunk/tags/branches approach adds even more complexity to the repository. The other approach uses a flat structure, where there is only one parent project and all the modules, submodules and parts-of-the-submodules are direct children of the parent project. This approach works well for reporting and is easier in Subversion; however, I feel I lose a bit of the structure this way. Which way would you choose in the long term and why?
[ "We have a largish application (160+ OSGi bundles where each bundle is a Maven module) and the lesson we learned, and continue to learn, is that flat is better. The problem with encoding semantics in your hierarchy is that you lose flexibility. A module that is 100% say \"communication\" today may be partly \"service\" tomorrow and then you'll need to be moving things around in your repository and that will break all sorts of scripts, documentation, references, etc.\nSo I would recommend a flat structure and to encode the semantics in another place (say for example an IDE workspace or documentation).\nI've answered a question about version control layout in some detail with examples at another question, it may be relevant to your situation.\n", "I think you're better off flattening your directory structure. Perhaps you want to come up with a naming convention for the directories such that they sort nicely when viewing all of the projects, but ultimately I don't think all of that extra hierarchy is necessary.\nAssuming you're using Eclipse as your IDE all of the projects are going to end up in a flat list once you import them anyway so you don't really gain anything from the additional sub directories. That in addition to the fact that the configuration is so much simpler without all the extra hierarchy makes the choice pretty clear in my mind.\nYou might also want to consider combining some of the modules. I know nothing about your app or domain, but it seems like a lot of those leaf level modules might be better suited as just packages or sets of packages inside another top level module. I'm all for keeping jars cohesive, but it can be taken too far sometimes.\n" ]
[ 16, 3 ]
[]
[]
[ "java", "maven_2", "svn" ]
stackoverflow_0000020003_java_maven_2_svn.txt
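A sketch of the flat layout the answers recommend, with the semantics pushed into a naming convention instead of the directory tree (the module names are invented for illustration):

<!-- parent pom.xml: every module is a direct child directory -->
<modules>
  <module>app-comm-color</module>
  <module>app-comm-ssn</module>
  <module>app-service-voting-web</module>
  <module>app-service-voting-collector</module>
</modules>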
Q: YUI Reset CSS Makes this not work This line in YUI's Reset CSS is causing trouble for me: address,caption,cite,code,dfn,em,strong,th,var { font-style: normal; font-weight: normal; } It makes my em not italic and my strong not bold. Which is okay. I know how to override that in my own stylesheet. strong, b { font-weight: bold; } em, i { font-style: italic; } The problem comes in when I have text that's both em and strong. <strong>This is bold, <em>and this is italic, but not bold</em></strong> My rule for strong makes it bold, but YUI's rule for em makes it normal again. How do I fix that? A: If your strong declaration comes after YUI's yours should override it. You can force it like this: strong, b, strong *, b * { font-weight: bold; } em, i, em *, i * { font-style: italic; } If you still support IE7 you'll need to add !important. strong, b, strong *, b * { font-weight: bold !important; } em, i, em *, i * { font-style: italic !important; } This works - see for yourself: /*YUI styles*/ address,caption,cite,code,dfn,em,strong,th,var { font-style: normal; font-weight: normal; } /*End YUI styles =*/ strong, b, strong *, b * { font-weight: bold; } em, i, em *, i * { font-style: italic; } <strong>Bold</strong> - <em>Italic</em> - <strong>Bold and <em>Italic</em></strong> A: I would use this rule to override the YUI reset: strong, b, strong *, b * { font-weight: bold; } em, i, em *, i * { font-style: italic; } A: If in addition to using YUI reset.css, you also use YUI base.css, then you will be all set with a standard set of cross browser base styles. LINK: http://developer.yahoo.com/yui/base/ A: I had a similar problem when I added the YUI Reset to the top of my stock CSS file. I found that the best thing for me was to simply remove all of the font-weight: normal; declarations from the YUI Reset. I haven't noticed that this has affected anything "cross-browser." All my declarations were after the YUI Reset so I'm not sure why they weren't taking affect. A: As long as your styles are loaded after the reset ones they should work. What browser is this? because I work in a similar way myself and I've not hit this problem I wonder if it's something in my testing at fault. A: Reset stylesheets are best used as a base. If you don't want to reset em or strong, remove them from the stylesheet. A: As Chris said, you don't have to use the exact CSS they provide religiously. I would just save a copy to your server, and edit to your needs. A: I would suggest avoiding anything which involves hacking the YUI files. You need to be able to update external libraries in the future and if your site relies on edited versions there is a good chance it will get cocked up. I think this is general good practice for any 3rd party library you use. So I thought this answer was amongst the better ones If in addition to using YUI reset.css, you also use YUI base.css, then you will be all set with a standard set of cross browser base styles. A: I thought I had an ideal solution: strong, b { font-weight: bold; font-style: inherit; } em, i { font-style: italic; font-weight: inherit; } Unfortunately, Internet Explorer doesn't support "inherit." :-( A: I see what you are saying. I guess you can add a CSS rule like this: strong em { font-weight: bold; } or: strong * { font-weight: bold; }
YUI Reset CSS Makes this not work
This line in YUI's Reset CSS is causing trouble for me: address,caption,cite,code,dfn,em,strong,th,var { font-style: normal; font-weight: normal; } It makes my em not italic and my strong not bold. Which is okay. I know how to override that in my own stylesheet. strong, b { font-weight: bold; } em, i { font-style: italic; } The problem comes in when I have text that's both em and strong. <strong>This is bold, <em>and this is italic, but not bold</em></strong> My rule for strong makes it bold, but YUI's rule for em makes it normal again. How do I fix that?
[ "If your strong declaration comes after YUI's yours should override it. You can force it like this:\nstrong, b, strong *, b * { font-weight: bold; }\nem, i, em *, i * { font-style: italic; }\n\nIf you still support IE7 you'll need to add !important.\nstrong, b, strong *, b * { font-weight: bold !important; }\nem, i, em *, i * { font-style: italic !important; }\n\nThis works - see for yourself:\n\n\n/*YUI styles*/\r\naddress,caption,cite,code,dfn,em,strong,th,var {\r\n font-style: normal;\r\n font-weight: normal;\r\n}\r\n/*End YUI styles =*/\r\n\r\nstrong, b, strong *, b * {\r\n font-weight: bold;\r\n}\r\n\r\nem, i, em *, i * {\r\n font-style: italic;\r\n}\n <strong>Bold</strong> - <em>Italic</em> - <strong>Bold and <em>Italic</em></strong>\n\n\n\n", "I would use this rule to override the YUI reset:\nstrong, b, strong *, b *\n{\n font-weight: bold;\n}\n\nem, i, em *, i *\n{\n font-style: italic;\n}\n\n", "If in addition to using YUI reset.css, you also use YUI base.css, then you will be all set with a standard set of cross browser base styles.\nLINK: http://developer.yahoo.com/yui/base/\n", "I had a similar problem when I added the YUI Reset to the top of my stock CSS file. I found that the best thing for me was to simply remove all of the\nfont-weight: normal;\n\ndeclarations from the YUI Reset. I haven't noticed that this has affected anything \"cross-browser.\"\nAll my declarations were after the YUI Reset so I'm not sure why they weren't taking affect.\n", "As long as your styles are loaded after the reset ones they should work. What browser is this? because I work in a similar way myself and I've not hit this problem I wonder if it's something in my testing at fault.\n", "Reset stylesheets are best used as a base. If you don't want to reset em or strong, remove them from the stylesheet.\n", "As Chris said, you don't have to use the exact CSS they provide religiously. I would just save a copy to your server, and edit to your needs. \n", "I would suggest avoiding anything which involves hacking the YUI files. You need to be able to update external libraries in the future and if your site relies on edited versions there is a good chance it will get cocked up. I think this is general good practice for any 3rd party library you use.\nSo I thought this answer was amongst the better ones\n\n\nIf in addition to using YUI reset.css, you also use YUI base.css, then you will be all set with a standard set of cross browser base styles.\n\n\n", "I thought I had an ideal solution:\nstrong, b \n{\n font-weight: bold;\n font-style: inherit;\n}\n\nem, i \n{\n font-style: italic;\n font-weight: inherit;\n}\n\nUnfortunately, Internet Explorer doesn't support \"inherit.\" :-(\n", "I see what you are saying. I guess you can add a CSS rule like this:\nstrong em { font-weight: bold; }\n\nor:\nstrong * { font-weight: bold; }\n\n" ]
[ 19, 7, 6, 3, 2, 2, 2, 1, 0, 0 ]
[]
[]
[ "css", "yui" ]
stackoverflow_0000020107_css_yui.txt
Q: XRef Relationships in dbml So I have a database schema like this: Users UserId RoleUserXRef RoleUserId RoleId UserId Roles RoleId Name With foreign keys defined between User & RoleUserXRef and RoleUserXRef & Role. Basically, I have a one to many relationship between users and roles. How would I model this in dbml, such that the generated User class has a list of Roles that the user has assigned to them? A: Creating a many-to-many relationship via simple DBML manipulation is not supported currently. You can extend the partial class to manually create properties, if you really want that sort of functionality "built in".
XRef Relationships in dbml
So I have a database schema like this: Users UserId RoleUserXRef RoleUserId RoleId UserId Roles RoleId Name With foreign keys defined between User & RoleUserXRef and RoleUserXRef & Role. Basically, I have a one to many relationship between users and roles. How would I model this in dbml, such that the generated User class has a list of Roles that the user has assigned to them?
[ "Creating a many-to-many releationship via simple DBML manipulation is not supported currently. You can extend the partial class to manually create properties, if you really want that sort of functionality \"built in\".\n" ]
[ 1 ]
[]
[]
[ "linq_to_sql", "many_to_many", "oop" ]
stackoverflow_0000020765_linq_to_sql_many_to_many_oop.txt
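A hedged sketch of the partial-class extension the answer suggests; RoleUserXRefs and Role stand in for whatever names the designer generated for your associations:

using System.Collections.Generic;
using System.Linq;

public partial class User
{
    // Surfaces the user's roles by walking the generated cross-reference association
    public IEnumerable<Role> Roles
    {
        get { return RoleUserXRefs.Select(x => x.Role); }
    }
}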
Q: What's the best way to create ClickOnce deployments Our team develops distributed winform apps. We use ClickOnce for deployment and are very pleased with it. However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separately from one another. Also, we want control over what assemblies get deployed. Just because an assembly was compiled doesn't mean we want it deployed. The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK but creating deployments is rather tedious and we don't want every developer having our code signing certificate and password. What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files. I'm satisfied with our current solution but it seems like there would be an industry-wide, accepted approach to this problem. Is there? A: I would look at using msbuild. It has built-in tasks for handling clickonce deployments. I included some references which will help you get started, if you want to go down this path. It is what I use and I have found it to fit my needs. With a good build process using msbuild, you should be able to accomplish squashing the pains you have felt. Here is a detailed post on how ClickOnce manifest generation works with MsBuild. A: I've used nAnt to run the overall build strategy, but pass parameters into MSBuild to compile and create the deployment package. Basically, nAnt calls into MSBuild for each environment you need to deploy to, and generates a separate deployment output for each. You end up with a folder and all ClickOnce files you need for every environment, which you can just copy out to the server. This is how we handled multiple production environments as well -- we had separate instances of our application for the US, Canada, and Europe, so each build would end up creating nine deployments, three each for dev, qa, and prod.
What's the best way to create ClickOnce deployments
Our team develops distributed winform apps. We use ClickOnce for deployment and are very pleased with it. However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separately from one another. Also, we want control over what assemblies get deployed. Just because an assembly was compiled doesn't mean we want it deployed. The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK but creating deployments is rather tedious and we don't want every developer having our code signing certificate and password. What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files. I'm satisfied with our current solution but it seems like there would be an industry-wide, accepted approach to this problem. Is there?
[ "I would look at using msbuild. It has built in tasks for handling clickonce deployments. I included some references which will help you get started, if you want to go down this path. It is what I use and I have found it to fit my needs. With a good build process using msbuild, you should be able to accomplish squashing the pains you have felt.\nHere is detailed post on how ClickOnce manifest generation works with MsBuild.\n", "I've used nAnt to run the overall build strategy, but pass parameters into MSBuild to compile and create the deployment package.\nBasically, nAnt calls into MSBuild for each environment you need to deploy to, and generates a separate deployment output for each. You end up with a folder and all ClickOnce files you need for every environment, which you can just copy out to the server.\nThis is how we handled multiple production environments as well -- we had separate instances of our application for the US, Canada, and Europe, so each build would end up creating nine deployments, three each for dev, qa, and prod.\n" ]
[ 14, 5 ]
[]
[]
[ "clickonce", "deployment", "winforms" ]
stackoverflow_0000020728_clickonce_deployment_winforms.txt
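As an illustration of the msbuild route, a hedged sketch of publishing one environment from the command line; the project name, version, and URLs are placeholders, and the properties used are the standard ClickOnce publish properties:

msbuild MyApp.csproj /target:Publish ^
    /property:Configuration=Release ^
    /property:ApplicationVersion=1.0.0.42 ^
    /property:PublishUrl=\\deployserver\clickonce\test\ ^
    /property:InstallUrl=http://deploy.example.com/test/

Running it once per environment (dev/test/production) with different property values gives the separately installing, separately updating deployments the question asks for.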
Q: How do you convert binary data to Strings and back in Java? I have binary data in a file that I can read into a byte array and process with no problem. Now I need to send parts of the data over a network connection as elements in an XML document. My problem is that when I convert the data from an array of bytes to a String and back to an array of bytes, the data is getting corrupted. I've tested this on one machine to isolate the problem to the String conversion, so I now know that it isn't getting corrupted by the XML parser or the network transport. What I've got right now is byte[] buffer = ...; // read from file // a few lines that prove I can process the data successfully String element = new String(buffer); byte[] newBuffer = element.getBytes(); // a few lines that try to process newBuffer and fail because it is not the same data anymore Does anyone know how to convert binary to String and back without data loss? Answered: Thanks Sam. I feel like an idiot. I had this answered yesterday because my SAX parser was complaining. For some reason when I ran into this seemingly separate issue, it didn't occur to me that it was a new symptom of the same problem. EDIT: Just for the sake of completeness, I used the Base64 class from the Apache Commons Codec package to solve this problem. A: String(byte[]) treats the data as the default character encoding. So, how bytes get converted from 8-bit values to 16-bit Java Unicode chars will vary not only between operating systems, but can even vary between different users using different codepages on the same machine! This constructor is only good for decoding one of your own text files. Do not try to convert arbitrary bytes to chars in Java! Encoding as base64 is a good solution. This is how files are sent over SMTP (e-mail). The (free) Apache Commons Codec project will do the job. byte[] bytes = loadFile(file); //all chars in encoded are guaranteed to be 7-bit ASCII byte[] encoded = Base64.encodeBase64(bytes); String printMe = new String(encoded, "US-ASCII"); System.out.println(printMe); byte[] decoded = Base64.decodeBase64(encoded); Alternatively, you can use the Java 6 DatatypeConverter: import java.io.*; import java.nio.channels.*; import javax.xml.bind.DatatypeConverter; public class EncodeDecode { public static void main(String[] args) throws Exception { File file = new File("/bin/ls"); byte[] bytes = loadFile(file, new ByteArrayOutputStream()).toByteArray(); String encoded = DatatypeConverter.printBase64Binary(bytes); System.out.println(encoded); byte[] decoded = DatatypeConverter.parseBase64Binary(encoded); // check for (int i = 0; i < bytes.length; i++) { assert bytes[i] == decoded[i]; } } private static <T extends OutputStream> T loadFile(File file, T out) throws IOException { FileChannel in = new FileInputStream(file).getChannel(); try { assert in.size() == in.transferTo(0, in.size(), Channels.newChannel(out)); return out; } finally { in.close(); } } } A: If you encode it in base64, this will turn any data into ASCII-safe text, but base64 encoded data is larger than the original data A: See this question, How do you embed binary data in XML? Instead of converting the byte[] into String then pushing into XML somewhere, convert the byte[] to a String via BASE64 encoding (some XML libraries have a type to do this for you). Then BASE64-decode once you get the String back from XML. Use http://commons.apache.org/codec/ Your data may be getting messed up due to all sorts of weird character set restrictions and the presence of non-printing characters. Stick w/ BASE64. A: How are you building your XML document? If you use Java's built-in XML classes then the string encoding should be handled for you. Take a look at the javax.xml and org.xml packages. That's what we use for generating XML docs, and it handles all the string encoding and decoding quite nicely. ---EDIT: Hmm, I think I misunderstood the problem. You're not trying to encode a regular string, but some set of arbitrary binary data? In that case the Base64 encoding suggested in an earlier comment is probably the way to go. I believe that's a fairly standard way of encoding binary data in XML.
How do you convert binary data to Strings and back in Java?
I have binary data in a file that I can read into a byte array and process with no problem. Now I need to send parts of the data over a network connection as elements in an XML document. My problem is that when I convert the data from an array of bytes to a String and back to an array of bytes, the data is getting corrupted. I've tested this on one machine to isolate the problem to the String conversion, so I now know that it isn't getting corrupted by the XML parser or the network transport. What I've got right now is byte[] buffer = ...; // read from file // a few lines that prove I can process the data successfully String element = new String(buffer); byte[] newBuffer = element.getBytes(); // a few lines that try to process newBuffer and fail because it is not the same data anymore Does anyone know how to convert binary to String and back without data loss? Answered: Thanks Sam. I feel like an idiot. I had this answered yesterday because my SAX parser was complaining. For some reason when I ran into this seemingly separate issue, it didn't occur to me that it was a new symptom of the same problem. EDIT: Just for the sake of completeness, I used the Base64 class from the Apache Commons Codec package to solve this problem.
[ "String(byte[]) treats the data as the default character encoding. So, how bytes get converted from 8-bit values to 16-bit Java Unicode chars will vary not only between operating systems, but can even vary between different users using different codepages on the same machine! This constructor is only good for decoding one of your own text files. Do not try to convert arbitrary bytes to chars in Java!\nEncoding as base64 is a good solution. This is how files are sent over SMTP (e-mail). The (free) Apache Commons Codec project will do the job.\nbyte[] bytes = loadFile(file); \n//all chars in encoded are guaranteed to be 7-bit ASCII\nbyte[] encoded = Base64.encodeBase64(bytes);\nString printMe = new String(encoded, \"US-ASCII\");\nSystem.out.println(printMe);\nbyte[] decoded = Base64.decodeBase64(encoded);\n\nAlternatively, you can use the Java 6 DatatypeConverter:\nimport java.io.*;\nimport java.nio.channels.*;\nimport javax.xml.bind.DatatypeConverter;\n\npublic class EncodeDecode { \n public static void main(String[] args) throws Exception {\n File file = new File(\"/bin/ls\");\n byte[] bytes = loadFile(file, new ByteArrayOutputStream()).toByteArray();\n String encoded = DatatypeConverter.printBase64Binary(bytes);\n System.out.println(encoded);\n byte[] decoded = DatatypeConverter.parseBase64Binary(encoded);\n // check\n for (int i = 0; i < bytes.length; i++) {\n assert bytes[i] == decoded[i];\n }\n }\n\n private static <T extends OutputStream> T loadFile(File file, T out)\n throws IOException {\n FileChannel in = new FileInputStream(file).getChannel();\n try {\n assert in.size() == in.transferTo(0, in.size(), Channels.newChannel(out));\n return out;\n } finally {\n in.close();\n }\n }\n}\n\n", "If you encode it in base64, this will turn any data into ascii safe text, but base64 encoded data is larger than the orignal data\n", "See this question, How do you embed binary data in XML?\nInstead of converting the byte[] into String then pushing into XML somewhere, convert the byte[] to a String via BASE64 encoding (some XML libraries have a type to do this for you). The BASE64 decode once you get the String back from XML.\nUse http://commons.apache.org/codec/\nYou data may be getting messed up due to all sorts of weird character set restrictions and the presence of non-priting characters. Stick w/ BASE64.\n", "How are you building your XML document? If you use java's built in XML classes then the string encoding should be handled for you.\nTake a look at the javax.xml and org.xml packages. That's what we use for generating XML docs, and it handles all the string encoding and decoding quite nicely.\n---EDIT:\nHmm, I think I misunderstood the problem. You're not trying to encode a regular string, but some set of arbitrary binary data? In that case the Base64 encoding suggested in an earlier comment is probably the way to go. I believe that's a fairly standard way of encoding binary data in XML.\n" ]
[ 36, 21, 2, 0 ]
[]
[]
[ "java", "serialization" ]
stackoverflow_0000020778_java_serialization.txt
Q: How do I move an item from one menu to another? In the Visual Studio designer, how do you move a menu item from one menu to another? I would assume drag and drop would work, but it seems to only work within a menu for me. I usually resort to editing the .Designer.cs files by hand. A: Right-click, cut, and paste works just fine for me.
How do I move an item from one menu to another?
In the Visual Studio designer, how do you move a menu item from one menu to another? I would assume drag and drop would work, but it seems to only work within a menu for me. I usually resort to editing the .Designer.cs files by hand.
[ "Right-click, cut, and paste works just fine for me.\n" ]
[ 10 ]
[]
[]
[ "c#", "visual_studio_2005", "winforms" ]
stackoverflow_0000020814_c#_visual_studio_2005_winforms.txt
Q: Java JPanel redraw issues I have a Java Swing application with a panel that contains three JComboBoxes that do not draw properly. The combo boxes just show up as the down arrow on the right side, but without the label of the currently selected value. The boxes will redraw correctly if the window is resized either bigger or smaller by even one pixel. All of my googling has pointed to calling revalidate() on the JPanel to fix this, but that hasn't worked for me. Calling updateUI() on the JPanel has changed it from always displaying incorrectly to displaying incorrectly half of the time. Has anyone else seen this and found a different way to force a redraw of the combo boxes? A: Can you give us some more information on how you add the combo boxes to the JPanel? This is a pretty common thing to do in Swing so I doubt that it's a JVM issue but I guess anything is possible. Specifically, I would double check to make sure you're not accessing the GUI from any background threads. In this case, maybe you're reading the choices from a DB or something and updating the JComboBox from a background thread, which is a big no-no in Swing. See SwingUtilities.invokeLater().
Java JPanel redraw issues
I have a Java Swing application with a panel that contains three JComboBoxes that do not draw properly. The combo boxes just show up as the down arrow on the right side, but without the label of the currently selected value. The boxes will redraw correctly if the window is resized either bigger or smaller by even one pixel. All of my googling has pointed to calling revalidate() on the JPanel to fix this, but that hasn't worked for me. Calling updateUI() on the JPanel has changed it from always displaying incorrectly to displaying incorrectly half of the time. Has anyone else seen this and found a different way to force a redraw of the combo boxes?
[ "Can you give us some more information on how you add the combo boxes to the JPanel? This is a pretty common thing to do in Swing so I doubt that it's a JVM issue but I guess anything is possible.\nSpecifically, I would double check to make sure you're not accessing the GUI from any background threads. In this case, maybe you're reading the choices from a DB or something and updating the JComboBox from a background thread, which is a big no-no in Swing. See SwingUtils.invokeLater().\n" ]
[ 6 ]
[]
[]
[ "java", "jpanel", "swing" ]
stackoverflow_0000020880_java_jpanel_swing.txt
Q: SQL2005: Linking a table to multiple tables and retaining Ref Integrity? Here is a simplification of my database: Table: Property Fields: ID, Address Table: Quote Fields: ID, PropertyID, BespokeQuoteFields... Table: Job Fields: ID, PropertyID, BespokeJobFields... Then we have other tables that relate to the Quote and Job tables individually. I now need to add a Message table where users can record telephone messages left by customers regarding Jobs and Quotes. I could create two identical tables (QuoteMessage and JobMessage), but this violates the DRY principle and seems messy. I could create one Message table: Table: Message Fields: ID, RelationID, RelationType, OtherFields... But this stops me from using constraints to enforce my referential integrity. I can also foresee it creating problems with the development side using Linq to SQL later on. Is there an elegant solution to this problem, or am I ultimately going to have to hack something together? Burns A: Create one Message table, containing a unique MessageId and the various properties you need to store for a message. Table: Message Fields: Id, TimeReceived, MessageDetails, WhateverElse... Create two link tables - QuoteMessage and JobMessage. These will just contain two fields each, foreign keys to the Quote/Job and the Message. Table: QuoteMessage Fields: QuoteId, MessageId Table: JobMessage Fields: JobId, MessageId In this way you have defined the data properties of a Message in one place only (making it easy to extend, and to query across all messages), but you also have the referential integrity linking Quotes and Jobs to any number of messages. Indeed, both a Quote and Job could be linked to the same message (I'm not sure if that is appropriate to your business model, but at least the data model gives you the option). A: About the only other way I can think of is to have a base Message table, with both an Id and a TypeId. Your subtables (QuoteMessage and JobMessage) then reference the base table on both MessageId and TypeId - but also have CHECK CONSTRAINTS on them to enforce only the appropriate MessageTypeId. Table: Message Fields: Id, MessageTypeId, Text, ... Primary Key: Id, MessageTypeId Unique: Id Table: MessageType Fields: Id, Name Values: 1, "Quote" : 2, "Job" Table: QuoteMessage Fields: Id, MessageId, MessageTypeId, QuoteId Constraints: MessageTypeId = 1 References: (MessageId, MessageTypeId) = (Message.Id, Message.MessageTypeId) QuoteId = Quote.QuoteId Table: JobMessage Fields: Id, MessageId, MessageTypeId, JobId Constraints: MessageTypeId = 2 References: (MessageId, MessageTypeId) = (Message.Id, Message.MessageTypeId) JobId = Job.QuoteId What does this buy you, as compared to just a JobMessage and QuoteMessage table? It elevates a Message to a first class citizen, so that you can read all Messages from a single table. In exchange, your query path from a Message to its relevant Quote or Job is 1 more join away. It kind of depends on your app flow whether that's a good tradeoff or not. As for 2 identical tables violating DRY - I wouldn't get hung up on that. In DB design, it's less about DRY, and more about normalization. If the 2 things you're modeling have the same attributes (columns), but are actually different things (tables) - then it's reasonable to have multiple tables with similar schemas. Much better than the reverse of munging different things together. A: @burns Ian's answer (+1) is correct [see note]. Using a many to many table QUOTEMESSAGE to join QUOTE to MESSAGE is the most correct model, but will leave orphaned MESSAGE records. This is one of those rare cases where a trigger can be used. However, caution needs to be applied to ensure that a single MESSAGE record cannot be associated with both a QUOTE and a JOB. create trigger quotemessage_trg on quotemessage for delete as begin delete from [message] where [message].[msg_id] in (select [msg_id] from Deleted); end Note to Ian, I think there is a typo in the table definition for JobMessage, where the columns should be JobId, MessageId (?). I would edit your quote but it might take me a few years to gain that level of reputation! A: Why not just have both QuoteId and JobId fields in the message table? Or does a message have to be regarding either a quote or a job and not both?
SQL2005: Linking a table to multiple tables and retaining Ref Integrity?
Here is a simplification of my database: Table: Property Fields: ID, Address Table: Quote Fields: ID, PropertyID, BespokeQuoteFields... Table: Job Fields: ID, PropertyID, BespokeJobFields... Then we have other tables that relate to the Quote and Job tables individually. I now need to add a Message table where users can record telephone messages left by customers regarding Jobs and Quotes. I could create two identical tables (QuoteMessage and JobMessage), but this violates the DRY principle and seems messy. I could create one Message table: Table: Message Fields: ID, RelationID, RelationType, OtherFields... But this stops me from using constraints to enforce my referential integrity. I can also foresee it creating problems with the development side using Linq to SQL later on. Is there an elegant solution to this problem, or am I ultimately going to have to hack something together? Burns
[ "Create one Message table, containing a unique MessageId and the various properties you need to store for a message.\nTable: Message\nFields: Id, TimeReceived, MessageDetails, WhateverElse...\n\nCreate two link tables - QuoteMessage and JobMessage. These will just contain two fields each, foreign keys to the Quote/Job and the Message.\nTable: QuoteMessage\nFields: QuoteId, MessageId\n\nTable: JobMessage\nFields: JobId, MessageId\n\nIn this way you have defined the data properties of a Message in one place only (making it easy to extend, and to query across all messages), but you also have the referential integrity linking Quotes and Jobs to any number of messages. Indeed, both a Quote and Job could be linked to the same message (I'm not sure if that is appropriate to your business model, but at least the data model gives you the option).\n", "About the only other way I can think of is to have a base Message table, with both an Id and a TypeId. Your subtables (QuoteMessage and JobMessage) then reference the base table on both MessageId and TypeId - but also have CHECK CONSTRAINTS on them to enforce only the appropiate MessageTypeId.\nTable: Message\nFields: Id, MessageTypeId, Text, ...\nPrimary Key: Id, MessageTypeId\nUnique: Id\n\nTable: MessageType\nFields: Id, Name\nValues: 1, \"Quote\" : 2, \"Job\"\n\nTable: QuoteMessage\nFields: Id, MessageId, MessageTypeId, QuoteId\nConstraints: MessageTypeId = 1\nReferences: (MessageId, MessageTypeId) = (Message.Id, Message.MessageTypeId)\n QuoteId = Quote.QuoteId\n\nTable: JobMessage\nFields: Id, MessageId, MessageTypeId, JobId\nConstraints: MessageTypeId = 2\nReferences: (MessageId, MessageTypeId) = (Message.Id, Message.MessageTypeId)\n JobId = Job.QuoteId\n\nWhat does this buy you, as compared to just a JobMesssage and QuoteMessage table? It elevates a Message to a first class citizen, so that you can read all Messages from a single table. In exchange, your query path from a Message to it's relevant Quote or Job is 1 more join away. It kind of depends on your app flow whether that's a good tradeoff or not.\nAs for 2 identical tables violating DRY - I wouldn't get hung up on that. In DB design, it's less about DRY, and more about normalization. If the 2 things you're modeling have the same attributes (columns), but are actually different things (tables) - then it's reasonable to have multiple tables with similar schemas. Much better than the reverse of munging different things together.\n", "@burns\nIan's answer (+1) is correct [see note]. Using a many to many table QUOTEMESSAGE to join QUOTE to MESSAGE is the most correct model, but will leave orphaned MESSAGE records.\nThis is one of those rare cases where a trigger can be used. However, caution needs to be applied to ensure that the a single MESSAGE record cannot be associated with both a QUOTE and a JOB.\ncreate trigger quotemessage_trg\non quotemessage\nfor delete\nas\nbegin\n\ndelete \nfrom [message] \nwhere [message].[msg_id] in \n (select [msg_id] from Deleted);\n\nend\n\nNote to Ian, I think there is a typo in the table definition for JobMessage, where the columns should be JobId, MessageId (?). I would edit your quote but it might take me a few years to gain that level of reputation!\n", "Why not just have both QuoteId and JobId fields in the message table? Or does a message have to be regarding either a quote or a job and not both?\n" ]
[ 4, 1, 1, 0 ]
[]
[]
[ "database", "referential_integrity", "sql_server_2005" ]
stackoverflow_0000019516_database_referential_integrity_sql_server_2005.txt
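For concreteness, the accepted link-table design translates to SQL Server 2005 DDL along these lines. This is a sketch only: table and column names follow the answer, and the connection string is a placeholder.

using System.Data.SqlClient;

class CreateMessageTables
{
    static void Main()
    {
        const string ddl = @"
CREATE TABLE Message (
    Id             INT IDENTITY(1,1) PRIMARY KEY,
    TimeReceived   DATETIME NOT NULL,
    MessageDetails NVARCHAR(MAX) NOT NULL
);
CREATE TABLE QuoteMessage (
    QuoteId   INT NOT NULL REFERENCES Quote(ID),
    MessageId INT NOT NULL REFERENCES Message(Id),
    PRIMARY KEY (QuoteId, MessageId) -- composite key also prevents duplicate links
);
CREATE TABLE JobMessage (
    JobId     INT NOT NULL REFERENCES Job(ID),
    MessageId INT NOT NULL REFERENCES Message(Id),
    PRIMARY KEY (JobId, MessageId)
);";

        using (SqlConnection conn = new SqlConnection("Server=.;Database=Example;Integrated Security=true"))
        using (SqlCommand cmd = new SqlCommand(ddl, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery(); // the REFERENCES clauses enforce the referential integrity asked about
        }
    }
}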
Q: VBScript/IIS - How do I automatically set ASP.NET version for a particular website I need to script the creation of app pools and websites on IIS 6.0. I have been able to create these using adsutil.vbs and iisweb.vbs, but don't know how to set the version of ASP.NET for the sites I have just created to 2.0.50727.0. Ideally I would like to adsutil.vbs to update the metabase. How do I do this? A: @Chris beat me to the punch on the ADSI way You can do this using the aspnet_regiis.exe tool. There is one of these tools per version of ASP.NET installed on the machine. You could shell out to - This configures ASP.NET 1.1 %windir%\microsoft.net\framework\v1.1.4322\aspnet_regiis -s W3SVC/[iisnumber]/ROOT This configures ASP.NET 2.0 %windir%\microsoft.net\framework\v2.0.50727\aspnet_regiis -s W3SVC/[iisnumber]/ROOT You probably already know this, but if you have multiple 1.1 and 2.0 sites on your machine, just remember to switch the website you're changing ASP.NET versions on to compatible app pool. ASP.NET 1.1 and 2.0 sites don't mix in the same app pool. A: I found the following script posted on Diablo Pup's blog. It uses ADSI automation. '****************************************************************************************** ' Name: SetASPDotNetVersion ' Description: Set the script mappings for the specified ASP.NET version ' Inputs: objIIS, strNewVersion '****************************************************************************************** Sub SetASPDotNetVersion(objIIS, strNewVersion) Dim i, ScriptMaps, arrVersions(2), thisVersion, thisScriptMap Dim strSearchText, strReplaceText Select Case Trim(LCase(strNewVersion)) Case "1.1" strReplaceText = "v1.1.4322" Case "2.0" strReplaceText = "v2.0.50727" Case Else wscript.echo "WARNING: Non-supported ASP.NET version specified!" Exit Sub End Select ScriptMaps = objIIS.ScriptMaps arrVersions(0) = "v1.1.4322" arrVersions(1) = "v2.0.50727" 'Loop through all three potential old values For Each thisVersion in arrVersions 'Loop through all the mappings For thisScriptMap = LBound(ScriptMaps) to UBound(ScriptMaps) 'Replace the old with the new ScriptMaps(thisScriptMap) = Replace(ScriptMaps(thisScriptMap), thisVersion, strReplaceText) Next Next objIIS.ScriptMaps = ScriptMaps objIIS.SetInfo wscript.echo "<-------Set ASP.NET version to " & strNewVersion & " successfully.------->" End Sub
VBScript/IIS - How do I automatically set ASP.NET version for a particular website
I need to script the creation of app pools and websites on IIS 6.0. I have been able to create these using adsutil.vbs and iisweb.vbs, but don't know how to set the version of ASP.NET for the sites I have just created to 2.0.50727.0. Ideally I would like to use adsutil.vbs to update the metabase. How do I do this?
[ "@Chris beat me to the punch on the ADSI way\nYou can do this using the aspnet_regiis.exe tool. There is one of these tools per version of ASP.NET installed on the machine. You could shell out to -\nThis configures ASP.NET 1.1\n%windir%\\microsoft.net\\framework\\v1.1.4322\\aspnet_regiis -s W3SVC/[iisnumber]/ROOT\n\nThis configures ASP.NET 2.0\n%windir%\\microsoft.net\\framework\\v2.0.50727\\aspnet_regiis -s W3SVC/[iisnumber]/ROOT\n\nYou probably already know this, but if you have multiple 1.1 and 2.0 sites on your machine, just remember to switch the website you're changing ASP.NET versions on to compatible app pool. ASP.NET 1.1 and 2.0 sites don't mix in the same app pool.\n", "I found the following script posted on Diablo Pup's blog. It uses ADSI automation.\n'******************************************************************************************\n' Name: SetASPDotNetVersion\n' Description: Set the script mappings for the specified ASP.NET version\n' Inputs: objIIS, strNewVersion\n'******************************************************************************************\nSub SetASPDotNetVersion(objIIS, strNewVersion)\n Dim i, ScriptMaps, arrVersions(2), thisVersion, thisScriptMap\n Dim strSearchText, strReplaceText\n\n Select Case Trim(LCase(strNewVersion))\n Case \"1.1\"\n strReplaceText = \"v1.1.4322\"\n Case \"2.0\"\n strReplaceText = \"v2.0.50727\"\n Case Else\n wscript.echo \"WARNING: Non-supported ASP.NET version specified!\"\n Exit Sub\n End Select\n\n ScriptMaps = objIIS.ScriptMaps\n arrVersions(0) = \"v1.1.4322\"\n arrVersions(1) = \"v2.0.50727\"\n 'Loop through all three potential old values\n For Each thisVersion in arrVersions\n 'Loop through all the mappings\n For thisScriptMap = LBound(ScriptMaps) to UBound(ScriptMaps)\n 'Replace the old with the new \n ScriptMaps(thisScriptMap) = Replace(ScriptMaps(thisScriptMap), thisVersion, strReplaceText)\n Next\n Next \n\n objIIS.ScriptMaps = ScriptMaps\n objIIS.SetInfo\n wscript.echo \"<-------Set ASP.NET version to \" & strNewVersion & \" successfully.------->\"\nEnd Sub \n\n" ]
[ 6, 2 ]
[]
[]
[ "administration", "asp.net", "iis", "sysadmin", "vbscript" ]
stackoverflow_0000020923_administration_asp.net_iis_sysadmin_vbscript.txt
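The aspnet_regiis approach can also be driven from managed code rather than a batch file. A hedged sketch follows: the W3SVC/1/ROOT metabase path is an assumed example, so substitute the IIS site number your script created.

using System;
using System.Diagnostics;
using System.IO;

class SetAspNetVersion
{
    static void Main()
    {
        string windir = Environment.GetEnvironmentVariable("windir");
        string tool = Path.Combine(windir, @"Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe");

        ProcessStartInfo psi = new ProcessStartInfo(tool, "-s W3SVC/1/ROOT");
        psi.UseShellExecute = false;

        using (Process p = Process.Start(psi))
        {
            p.WaitForExit(); // exit code 0 indicates the script maps were rewritten
        }
    }
}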
Q: How to encrypt connection string in WinForms 1.1 app.config? Just looking for the first step basic solution here that keeps the honest people out. Thanks, Mike A: This might help you along the way: http://msdn.microsoft.com/en-us/library/aa302403.aspx http://msdn.microsoft.com/en-us/library/aa302406.aspx The articles are aimed at ASP.NET but the principles are the same. A: The second piece of the puzzle is detecting an unencrypted connection string, encrypting it, and writing it back out to the config file. Writing to config files located in your exe dir is generally a very bad idea, but can be very useful during development. The pros and cons are very well described here. Be sure to read all the comments.
How to encrypt connection string in WinForms 1.1 app.config?
Just looking for the first step basic solution here that keeps the honest people out. Thanks, Mike
[ "This might help you along the way:\nhttp://msdn.microsoft.com/en-us/library/aa302403.aspx\nhttp://msdn.microsoft.com/en-us/library/aa302406.aspx\nThe articles are aimed at ASP.NET but the principles are the same.\n", "The second piece of the puzzle is detecting an unencrypted connection string, encrypting it, and writing it back out to the config file. Writing to config files located in your exe dir is generally a very bad idea, but can be very useful during development. The pros and cons are very well described here. Be sure to read all the comments.\n" ]
[ 0, 0 ]
[]
[]
[ "database", "winforms" ]
stackoverflow_0000017877_database_winforms.txt
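For the "keep the honest people out" bar the question sets, DPAPI is the usual tool. The sketch below uses ProtectedData, which ships with .NET 2.0 (in System.Security.dll); on 1.1 the same CryptProtectData Win32 API has to be reached via P/Invoke, but the shape is identical.

using System;
using System.Security.Cryptography;
using System.Text;

static class ConnStringProtector
{
    public static string Encrypt(string plainConnectionString)
    {
        byte[] plain = Encoding.UTF8.GetBytes(plainConnectionString);
        byte[] cipher = ProtectedData.Protect(plain, null, DataProtectionScope.CurrentUser);
        return Convert.ToBase64String(cipher); // this string is what goes into app.config
    }

    public static string Decrypt(string stored)
    {
        byte[] cipher = Convert.FromBase64String(stored);
        byte[] plain = ProtectedData.Unprotect(cipher, null, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(plain);
    }
}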
Q: Best way to perform dynamic subquery in MS Reporting Services? I'm new to SQL Server Reporting Services, and was wondering the best way to do the following: Query to get a list of popular IDs Subquery on each item to get properties from another table Ideally, the final report columns would look like this: [ID] [property1] [property2] [SELECT COUNT(*) FROM AnotherTable WHERE ForeignID=ID] There may be ways to construct a giant SQL query to do this all in one go, but I'd prefer to compartmentalize it. Is the recommended approach to write a VB function to perform the subquery for each row? Thanks for any help. A: I would recommend using a SubReport. You would place the SubReport in a table cell. A: Simplest method is this: select *, (select count(*) from tbl2 t2 where t2.tbl1ID = t1.tbl1ID) as cnt from tbl1 t1 here is a workable version (using table variables): declare @tbl1 table ( tbl1ID int, prop1 varchar(1), prop2 varchar(2) ) declare @tbl2 table ( tbl2ID int, tbl1ID int ) select *, (select count(*) from @tbl2 t2 where t2.tbl1ID = t1.tbl1ID) as cnt from @tbl1 t1 Obviously this is just a raw example - standard rules apply like don't select *, etc ... UPDATE from Aug 21 '08 at 21:27: @AlexCuse - Yes, totally agree on the performance. I started to write it with the outer join, but then saw in his sample output the count and thought that was what he wanted, and the count would not return correctly if the tables are outer joined. Not to mention that joins can cause your records to be multiplied (1 entry from tbl1 that matches 2 entries in tbl2 = 2 returns) which can be unintended. So I guess it really boils down to the specifics on what your query needs to return. UPDATE from Aug 21 '08 at 22:07: To answer the other parts of your question - is a VB function the way to go? No. Absolutely not. Not for something this simple. Functions are very bad on performance, each row in the return set executes the function. If you want to "compartmentalize" the different parts of the query you have to approach it more like a stored procedure. Build a temp table, do part of the query and insert the results into the table, then do any further queries you need and update the original temp table (or insert into more temp tables). A: Depending on how you want the output to look, a subreport could do, or you could group on ID, property1, property2 and show the items from your other table as detail items (assuming you want to show more than just count). Something like select t1.ID, t1.property1, t1.property2, t2.somecol, t2.someothercol from table t1 left join anothertable t2 on t1.ID = t2.ID @Carlton Jenke I think you will find an outer join a better performer than the correlated subquery in the example you gave. Remember that the subquery needs to be run for each row.
Best way to perform dynamic subquery in MS Reporting Services?
I'm new to SQL Server Reporting Services, and was wondering the best way to do the following: Query to get a list of popular IDs Subquery on each item to get properties from another table Ideally, the final report columns would look like this: [ID] [property1] [property2] [SELECT COUNT(*) FROM AnotherTable WHERE ForeignID=ID] There may be ways to construct a giant SQL query to do this all in one go, but I'd prefer to compartmentalize it. Is the recommended approach to write a VB function to perform the subquery for each row? Thanks for any help.
[ "I would recommend using a SubReport. You would place the SubReport in a table cell.\n", "Simplest method is this:\nselect *,\n (select count(*) from tbl2 t2 where t2.tbl1ID = t1.tbl1ID) as cnt\nfrom tbl1 t1\n\nhere is a workable version (using table variables):\ndeclare @tbl1 table\n(\n tbl1ID int,\n prop1 varchar(1),\n prop2 varchar(2)\n)\n\ndeclare @tbl2 table\n(\n tbl2ID int,\n tbl1ID int\n)\n\nselect *,\n (select count(*) from @tbl2 t2 where t2.tbl1ID = t1.tbl1ID) as cnt\nfrom @tbl1 t1\n\nObviously this is just a raw example - standard rules apply like don't select *, etc ...\n\nUPDATE from Aug 21 '08 at 21:27:\n@AlexCuse - Yes, totally agree on the performance.\nI started to write it with the outer join, but then saw in his sample output the count and thought that was what he wanted, and the count would not return correctly if the tables are outer joined. Not to mention that joins can cause your records to be multiplied (1 entry from tbl1 that matches 2 entries in tbl2 = 2 returns) which can be unintended.\nSo I guess it really boils down to the specifics on what your query needs to return.\n\nUPDATE from Aug 21 '08 at 22:07:\nTo answer the other parts of your question - is a VB function the way to go? No. Absolutely not. Not for something this simple.\nFunctions are very bad on performance, each row in the return set executes the function.\nIf you want to \"compartmentalize\" the different parts of the query you have to approach it more like a stored procedure. Build a temp table, do part of the query and insert the results into the table, then do any further queries you need and update the original temp table (or insert into more temp tables).\n", "Depending on how you want the output to look, a subreport could do, or you could group on ID, property1, property2 and show the items from your other table as detail items (assuming you want to show more than just count).\nSomething like \nselect t1.ID, t1.property1, t1.property2, t2.somecol, t2.someothercol\nfrom table t1 left join anothertable t2 on t1.ID = t2.ID\n\n@Carlton Jenke I think you will find an outer join a better performer than the correlated subquery in the example you gave. Remember that the subquery needs to be run for each row.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "reporting", "reporting_services", "service", "sql", "sql_server" ]
stackoverflow_0000020876_reporting_reporting_services_service_sql_sql_server.txt
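Either single-query shape can feed the report dataset directly. Here is a sketch of the left-join form from ADO.NET; AnotherTable and ForeignID come from the question, while MainTable and the property column names are stand-ins.

using System.Data;
using System.Data.SqlClient;

class PopularItemsQuery
{
    static DataTable Load(string connectionString)
    {
        const string sql = @"
SELECT t1.ID, t1.property1, t1.property2,
       COUNT(t2.ForeignID) AS RelatedCount -- COUNT(column) ignores NULLs, so unmatched rows count as 0
FROM MainTable t1
LEFT JOIN AnotherTable t2 ON t2.ForeignID = t1.ID
GROUP BY t1.ID, t1.property1, t1.property2";

        DataTable result = new DataTable();
        using (SqlDataAdapter adapter = new SqlDataAdapter(sql, connectionString))
        {
            adapter.Fill(result); // one row per item, with its related-row count
        }
        return result;
    }
}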
Q: Compact Framework/Threading - MessageBox displays over other controls after option is chosen I'm working on an app that grabs and installs a bunch of updates off an external server, and need some help with threading. The user follows this process: Clicks button Method checks for updates, count is returned. If greater than 0, then ask the user if they want to install using MessageBox.Show(). If yes, it runs through a loop and calls BeginInvoke() on the run() method of each update to run it in the background. My update class has some events that are used to update a progress bar etc. The progress bar updates are fine, but the MessageBox is not fully cleared from the screen because the update loop starts right after the user clicks yes (see screenshot below). What should I do to make the MessageBox disappear instantly before the update loop starts? Should I be using Threads instead of BeginInvoke()? Should I be doing the initial update check on a separate thread and calling MessageBox.Show() from that thread? Code // Button clicked event handler code... DialogResult dlgRes = MessageBox.Show( string.Format("There are {0} updates available.\n\nInstall these now?", um2.Updates.Count), "Updates Available", MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2 ); if (dlgRes == DialogResult.Yes) { ProcessAllUpdates(um2); } // Processes a bunch of items in a loop private void ProcessAllUpdates(UpdateManager2 um2) { for (int i = 0; i < um2.Updates.Count; i++) { Update2 update = um2.Updates[i]; ProcessSingleUpdate(update); int percentComplete = Utilities.CalculatePercentCompleted(i, um2.Updates.Count); UpdateOverallProgress(percentComplete); } } // Process a single update with IAsyncResult private void ProcessSingleUpdate(Update2 update) { update.Action.OnStart += Action_OnStart; update.Action.OnProgress += Action_OnProgress; update.Action.OnCompletion += Action_OnCompletion; //synchronous //update.Action.Run(); // async IAsyncResult ar = this.BeginInvoke((MethodInvoker)delegate() { update.Action.Run(); }); } Screenshot A: Your UI isn't updating because all the work is happening in the user interface thread. Your call to: this.BeginInvoke((MethodInvoker)delegate() {update.Action.Run(); }) is saying invoke update.Action.Run() on the thread that created "this" (your form), which is the user interface thread. Application.DoEvents() will indeed give the UI thread the chance to redraw the screen, but I'd be tempted to create a new delegate, and call BeginInvoke on that. This will execute the update.Action.Run() function on a separate thread allocated from the thread pool. You can then keep checking the IAsyncResult until the update is complete, querying the update object for its progress after every check (because you can't have the other thread update the progress bar/UI), then calling Application.DoEvents(). You also are supposed to call EndInvoke() afterwards, otherwise you may end up leaking resources. I would also be tempted to put a cancel button on the progress dialog, and add a timeout, otherwise if the update gets stuck (or takes too long) then your application will have locked up forever. A: Have you tried putting an Application.DoEvents() in here if (dlgRes == DialogResult.Yes) { Application.DoEvents(); ProcessAllUpdates(um2); } A: @ John Sibly You can get away with not calling EndInvoke when dealing with WinForms without any negative consequences. The only documented exception to the rule that I'm aware of is in Windows Forms, where you are officially allowed to call Control.BeginInvoke without bothering to call Control.EndInvoke. However in all other cases when dealing with the Begin/End Async pattern you should assume it will leak, as you stated.
Compact Framework/Threading - MessageBox displays over other controls after option is chosen
I'm working on an app that grabs and installs a bunch of updates off an external server, and need some help with threading. The user follows this process: Clicks button Method checks for updates, count is returned. If greater than 0, then ask the user if they want to install using MessageBox.Show(). If yes, it runs through a loop and calls BeginInvoke() on the run() method of each update to run it in the background. My update class has some events that are used to update a progress bar etc. The progress bar updates are fine, but the MessageBox is not fully cleared from the screen because the update loop starts right after the user clicks yes (see screenshot below). What should I do to make the MessageBox disappear instantly before the update loop starts? Should I be using Threads instead of BeginInvoke()? Should I be doing the initial update check on a separate thread and calling MessageBox.Show() from that thread? Code // Button clicked event handler code... DialogResult dlgRes = MessageBox.Show( string.Format("There are {0} updates available.\n\nInstall these now?", um2.Updates.Count), "Updates Available", MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2 ); if (dlgRes == DialogResult.Yes) { ProcessAllUpdates(um2); } // Processes a bunch of items in a loop private void ProcessAllUpdates(UpdateManager2 um2) { for (int i = 0; i < um2.Updates.Count; i++) { Update2 update = um2.Updates[i]; ProcessSingleUpdate(update); int percentComplete = Utilities.CalculatePercentCompleted(i, um2.Updates.Count); UpdateOverallProgress(percentComplete); } } // Process a single update with IAsyncResult private void ProcessSingleUpdate(Update2 update) { update.Action.OnStart += Action_OnStart; update.Action.OnProgress += Action_OnProgress; update.Action.OnCompletion += Action_OnCompletion; //synchronous //update.Action.Run(); // async IAsyncResult ar = this.BeginInvoke((MethodInvoker)delegate() { update.Action.Run(); }); } Screenshot
[ "Your UI isn't updating because all the work is happening in the user interface thread.\nYour call to: \nthis.BeginInvoke((MethodInvoker)delegate() {update.Action.Run(); }) \n\nis saying invoke update.Action.Run() on the thread that created \"this\" (your form), which is the user interface thread.\nApplication.DoEvents()\n\nwill indeed give the UI thread the chance to redraw the screen, but I'd be tempted to create new delegate, and call BeginInvoke on that. \nThis will execute the update.Action.Run() function on a seperate thread allocated from the thread pool. You can then keep checking the IAsyncResult until the update is complete, querying the update object for its progress after every check (because you can't have the other thread update the progress bar/UI), then calling Application.DoEvents(). \nYou also are supposed to call EndInvoke() afterwards otherwise you may end up leaking resources\nI would also be tempted to put a cancel button on the progress dialog, and add a timeout, otherwise if the update gets stuck (or takes too long) then your application will have locked up forever.\n", "Have you tried putting a \nApplication.DoEvents()\n\nin here\nif (dlgRes == DialogResult.Yes)\n{\n Application.DoEvents(); \n ProcessAllUpdates(um2); \n}\n\n", "@ John Sibly\nYou can get away with not calling EndInvoke when dealing with WinForms without any negative consequences.\n\nThe only documented exception to the rule that I'm aware of is in Windows Forms, where you are officially allowed to call Control.BeginInvoke without bothering to call Control.EndInvoke.\n\nHowever in all other cases when dealing with the Begin/End Async pattern you should assume it will leak, as you stated.\n" ]
[ 6, 1, 1 ]
[]
[]
[ "c#", "compact_framework", "multithreading", "winforms" ]
stackoverflow_0000010071_c#_compact_framework_multithreading_winforms.txt
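To make the first answer concrete: move the whole update loop onto a worker thread and marshal only the progress updates back to the UI thread. A sketch against the question's names (UpdateManager2 and UpdateOverallProgress are the question's; everything else is illustrative, and on the Compact Framework you should confirm the Control.BeginInvoke/MethodInvoker overloads exist, since CF 1.0 only supported Invoke(EventHandler)):

using System.Threading;
using System.Windows.Forms;

// inside the form class from the question
private void RunAllUpdatesAsync(UpdateManager2 um2)
{
    ThreadPool.QueueUserWorkItem(delegate
    {
        for (int i = 0; i < um2.Updates.Count; i++)
        {
            um2.Updates[i].Action.Run(); // the heavy work, now off the UI thread

            int percent = (i + 1) * 100 / um2.Updates.Count; // fresh local per iteration, safe to capture
            this.BeginInvoke((MethodInvoker)delegate
            {
                UpdateOverallProgress(percent); // only the UI touch runs on the UI thread
            });
        }
    });
}

Because the button handler returns immediately, the message pump is free to erase the MessageBox before the first update starts.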
Q: "using" namespace equivalent in ASP.NET markup When I'm working with DataBound controls in ASP.NET 2.0 such as a Repeater, I know the fastest way to retrieve a property of a bound object (instead of using Reflection with the Eval() function) is to cast the DataItem object to the type it is and then use that object natively, like the following: <%#((MyType)Container.DataItem).PropertyOfMyType%> The problem is, if this type is in a namespace (which is the case 99.99% of the time) then this single statement because a lot longer due to the fact that the ASP page has no concept of class scope so all of my types need to be fully qualified. <%#((RootNamespace.SubNamespace1.SubNamspace2.SubNamespace3.MyType)Container.DataItem).PropertyOfMyType%> Is there any kind of using directive or some equivalent I could place somewhere in an ASP.NET page so I don't need to use the full namespace every time? A: I believe you can add something like: <%@ Import Namespace="RootNamespace.SubNamespace1" %> At the top of the page. A: What you're looking for is the @Import page directive.
"using" namespace equivalent in ASP.NET markup
When I'm working with DataBound controls in ASP.NET 2.0 such as a Repeater, I know the fastest way to retrieve a property of a bound object (instead of using Reflection with the Eval() function) is to cast the DataItem object to the type it is and then use that object natively, like the following: <%#((MyType)Container.DataItem).PropertyOfMyType%> The problem is, if this type is in a namespace (which is the case 99.99% of the time) then this single statement becomes a lot longer due to the fact that the ASP page has no concept of class scope so all of my types need to be fully qualified. <%#((RootNamespace.SubNamespace1.SubNamespace2.SubNamespace3.MyType)Container.DataItem).PropertyOfMyType%> Is there any kind of using directive or some equivalent I could place somewhere in an ASP.NET page so I don't need to use the full namespace every time?
[ "I believe you can add something like:\n<%@ Import Namespace=\"RootNamespace.SubNamespace1\" %> \n\nAt the top of the page.\n", "What you're looking for is the @Import page directive.\n" ]
[ 64, 7 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000021052_asp.net.txt
Q: How to tab focus onto a dropdown field in Mac OSX In Windows, in any Windows form or web browser, you can use the tab button to switch focus through all of the form fields. It will stop on textboxes, radiobuttons, checkboxes, dropdown menus, etc. However, in Mac OSX, tab skips dropdown menus. Is there any way to change this behavior, or access the above items mentioned, without using a mouse? A: Go to System Preferences > Keyboard and Mouse, then choose Keyboard Shortcuts. At the bottom, ensure Full Keyboard Access is set to "All controls". It's a long time since I turned it on but I think that's all you need to do A: Apple Menu > System Preferences > Keyboard & Mouse > Keyboard Shortcuts: Change the radio button at the bottom from "Text boxes and lists only" to "All controls." Edit: Dammit. We're a fast group around here aren't we? :-) A: I have found that I also need to set accessibility.tabfocus to 7 in Firefox's about:config.
How to tab focus onto a dropdown field in Mac OSX
In Windows, in any Windows form or web browser, you can use the tab button to switch focus through all of the form fields. It will stop on textboxes, radiobuttons, checkboxes, dropdown menus, etc. However, in Mac OSX, tab skips dropdown menus. Is there any way to change this behavior, or access the above items mentioned, without using a mouse?
[ "Go to System Preferences > Keyboard and Mouse, then choose Keyboard Shortcuts. At the bottom, ensure Full Keyboard Access is set to \"All controls\". It's a long time since I turned it on but I think that's all you need to do\n", "Apple Menu > System Preferences > Keyboard & Mouse > Keyboard Shortcuts:\nChange the radio button at the bottom from \"Text boxes and lists only\" to \"All controls.\"\nEdit: Dammit. We're a fast group around here aren't we? :-)\n", "I have found that I also need to set accessibility.tabfocus to 7 in Firefox's about:config.\n" ]
[ 27, 4, 1 ]
[ "It's in the System Preferences - this blog post shows where the setting is.\n" ]
[ -2 ]
[ "keyboard_shortcuts", "macos", "mouse" ]
stackoverflow_0000002349_keyboard_shortcuts_macos_mouse.txt
Q: Page a Generic Collection Without Linq I've got a System.Collections.Generic.List(Of MyCustomClass) type object. Given integer variables pagesize and pagenumber, how can I collect only any single page of MyCustomClass objects? This is what I've got. How can I improve it? 'my given collection and paging parameters Dim AllOfMyCustomClassObjects As System.Collections.Generic.List(Of MyCustomClass) = GIVEN Dim pagesize As Integer = GIVEN Dim pagenumber As Integer = GIVEN 'collect current page objects Dim PageObjects As New System.Collections.Generic.List(Of MyCustomClass) Dim objcount As Integer = 1 For Each obj As MyCustomClass In AllOfMyCustomClassObjects If objcount > pagesize * (pagenumber - 1) And objcount <= pagesize * pagenumber Then PageObjects.Add(obj) End If objcount = objcount + 1 Next 'find total page count Dim totalpages As Integer = CInt(Math.Floor(objcount / pagesize)) If objcount Mod pagesize > 0 Then totalpages = totalpages + 1 End If
Page a Generic Collection Without Linq
I've got a System.Collections.Generic.List(Of MyCustomClass) type object. Given integer variables pagesize and pagenumber, how can I collect only any single page of MyCustomClass objects? This is what I've got. How can I improve it? 'my given collection and paging parameters Dim AllOfMyCustomClassObjects As System.Collections.Generic.List(Of MyCustomClass) = GIVEN Dim pagesize As Integer = GIVEN Dim pagenumber As Integer = GIVEN 'collect current page objects Dim PageObjects As New System.Collections.Generic.List(Of MyCustomClass) Dim objcount As Integer = 1 For Each obj As MyCustomClass In AllOfMyCustomClassObjects If objcount > pagesize * (pagenumber - 1) And objcount <= pagesize * pagenumber Then PageObjects.Add(obj) End If objcount = objcount + 1 Next 'find total page count Dim totalpages As Integer = CInt(Math.Floor(objcount / pagesize)) If objcount Mod pagesize > 0 Then totalpages = totalpages + 1 End If
[ "Generic.List should provide the Skip() and Take() methods, so you could do this:\nDim PageObjects As New System.Collections.Generic.List(Of MyCustomClass)\nPageObjects = AllOfMyCustomClassObjects.Skip(pagenumber * pagesize).Take(pagesize)\n\n\nIf by \"without Linq\" you meant on the 2.0 Framework, I don't believe List(Of T) supports those methods. In that case, use GetRange like Jonathan suggested.\n", "You use GetRange on your IEnuramble implementing collection:\nList<int> lolInts = new List<int>();\n\nfor (int i = 0; i <= 100; i++)\n{\n lolInts.Add(i);\n}\n\nList<int> page1 = lolInts.GetRange(0, 49);\nList<int> page2 = lilInts.GetRange(50, 100);\n\nI trust you can figure out how to use GetRange to grab an individual page from here.\n" ]
[ 2, 1 ]
[]
[]
[ "collections", "paging", "vb.net" ]
stackoverflow_0000021232_collections_paging_vb.net.txt
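A defensive .NET 2.0 version of the GetRange approach, clamping the final page so it never requests more items than remain (shown in C#; the VB.NET translation is mechanical):

using System;
using System.Collections.Generic;

static class Paging
{
    // pageNumber is 1-based, matching the question's convention
    public static List<T> GetPage<T>(List<T> source, int pageNumber, int pageSize)
    {
        int start = (pageNumber - 1) * pageSize;
        if (start < 0 || start >= source.Count)
            return new List<T>(); // out-of-range page -> empty result

        int count = Math.Min(pageSize, source.Count - start);
        return source.GetRange(start, count);
    }

    public static int TotalPages(int itemCount, int pageSize)
    {
        return (itemCount + pageSize - 1) / pageSize; // integer ceiling
    }
}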
Q: How do I create a mapping table in SQL Server Management Studio? I'm learning about table design in SQL and I'm wondering how to create a mapping table in order to establish a many-to-many relationship between two other tables? I think the mapping table needs two primary keys - but I can't see how to create that as it appears there can only be 1 primary key column? I'm using the Database Diagrams feature to create my tables and relationships. A: The easiest way is to simply select both fields by selecting the first field, and then while holding down the Ctrl key selecting the second field. Then clicking the key icon to set them both as the primary key.
How do I create a mapping table in SQL Server Management Studio?
I'm learning about table design in SQL and I'm wondering how to create a mapping table in order to establish a many-to-many relationship between two other tables? I think the mapping table needs two primary keys - but I can't see how to create that as it appears there can only be 1 primary key column? I'm using the Database Diagrams feature to create my tables and relationships.
[ "The easiest way is to simply select both fields by selecting the first field, and then while holding down the Ctrl key selecting the second field. Then clicking the key icon to set them both as the primary key.\n" ]
[ 6 ]
[]
[]
[ "entity_relationship", "sql_server", "sql_server_2005" ]
stackoverflow_0000021262_entity_relationship_sql_server_sql_server_2005.txt
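For reference, the T-SQL those designer clicks generate is a two-column PRIMARY KEY constraint; BookAuthor and its columns below are invented example names.

class MappingTableDdl
{
    // Each column is also a foreign key to its side of the many-to-many pair;
    // the composite PRIMARY KEY is the two-column key the designer creates.
    public const string Ddl = @"
CREATE TABLE BookAuthor (
    BookId   INT NOT NULL REFERENCES Book(Id),
    AuthorId INT NOT NULL REFERENCES Author(Id),
    CONSTRAINT PK_BookAuthor PRIMARY KEY (BookId, AuthorId)
);";
}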
Q: How to check set of files conform to a naming scheme I have a bunch of files (TV episodes, although that is fairly arbitrary) that I want to check match a specific naming/organisation scheme.. Currently: I have three arrays of regex, one for valid filenames, one for files missing an episode name, and one for valid paths. Then, I loop through each valid-filename regex, if it matches, append it to a "valid" dict, if not, do the same with the missing-ep-name regexes, if it matches this I append it to an "invalid" dict with an error code (2:'missing episode name'), if it matches neither, it gets added to invalid with the 'malformed name' error code. The current code can be found here I want to add a rule that checks for the presence of a folder.jpg file in each directory, but to add this would make the code substantially more messy in its current state.. How could I write this system in a more expandable way? The rules it needs to check would be.. File is in the format Show Name - [01x23] - Episode Name.avi or Show Name - [01xSpecial02] - Special Name.avi or Show Name - [01xExtra01] - Extra Name.avi If filename is in the format Show Name - [01x23].avi display it in a 'missing episode name' section of the output The path should be in the format Show Name/season 2/the_file.avi (where season 2 should be the correct season number in the filename) each Show Name/season 1/ folder should contain "folder.jpg". Any ideas? While I'm trying to check TV episodes, this concept/code should be able to apply to many things.. The only thought I had was a list of dicts in the format: checker = [ { 'name':'valid files', 'type':'file', 'function':check_valid(), # runs check_valid() on all files 'status':0 # if it returns True, this is the status the file gets } A: I want to add a rule that checks for the presence of a folder.jpg file in each directory, but to add this would make the code substantially more messy in its current state.. This doesn't look bad. In fact your current code does it very nicely, and Sven mentioned a good way to do it as well: Get a list of all the files Check for "required" files You would just have to add to your dictionary a list of required files: checker = { ... 'required': ['file', 'list', 'for_required'] } As far as there being a better/extensible way to do this? I am not exactly sure. I could only really think of a way to possibly drop the "multiple" regular expressions and build off of Sven's idea for using a delimiter. So my strategy would be defining a dictionary as follows (and I'm sorry I don't know Python syntax and I'm a tad too lazy to look it up but it should make sense. The /regex/ is shorthand for a regex): check_dict = { 'delim' : /\-/, 'parts' : [ 'Show Name', 'Episode Name', 'Episode Number' ], 'patterns' : [/valid name/, /valid episode name/, /valid number/ ], 'required' : ['list', 'of', 'files'], 'ignored' : ['.*', 'hidden.txt'], 'start_dir': '/path/to/dir/to/test/' } Split the filename based on the delimiter. Check each of the parts. Because it's an ordered list you can determine what parts are missing and if a section doesn't match any pattern it is malformed. Here the parts and patterns have a 1 to 1 ratio. Two arrays instead of a dictionary enforces the order. Ignored and required files can be listed. The . and .. files should probably be ignored automatically. The user should be allowed to input "globs" which can be shell expanded. I'm thinking here of svn:ignore properties, but globbing is natural for listing files. Here start_dir would default to the current directory but if you wanted a single file to run automated testing of a bunch of directories this would be useful. The real loose end here is the path template and along the same lines what path is required for "valid files". I really couldn't come up with a solid idea without writing one large regular expression and taking groups from it... to build a template. It felt a lot like writing a TextMate language grammar. But that starts to stray on the ease of use. The real problem was that the path template was not composed of parts, which makes sense but adds complexity. Is this strategy in tune with what you were thinking of? A: maybe you should take the approach of defaulting to: "the filename is correct" and work from there to disprove that statement: with the fact that you only allow filenames with: 'show name', 'season number x episode number' and 'episode name', you know for certain that these items should be separated by a "-" (dash) so you have to have 2 of those for a filename to be correct. if that checks out, you can use your code to check that the show name matches the show name as seen in the parent's parent folder (case insensitive i assume), the season number matches the parents folder numeric value (with or without an extra 0 prepended). if however you don't see the correct amount of dashes you instantly know that there is something wrong and stop before the rest of the tests etc. and separately you can check if the file folder.jpg exists and take the necessary actions. or do that first and filter that file from the rest of the files in that folder.
How to check set of files conform to a naming scheme
I have a bunch of files (TV episodes, although that is fairly arbitrary) that I want to check match a specific naming/organisation scheme.. Currently: I have three arrays of regex, one for valid filenames, one for files missing an episode name, and one for valid paths. Then, I loop through each valid-filename regex, if it matches, append it to a "valid" dict, if not, do the same with the missing-ep-name regexes, if it matches this I append it to an "invalid" dict with an error code (2:'missing episode name'), if it matches neither, it gets added to invalid with the 'malformed name' error code. The current code can be found here I want to add a rule that checks for the presence of a folder.jpg file in each directory, but to add this would make the code substantially more messy in its current state.. How could I write this system in a more expandable way? The rules it needs to check would be.. File is in the format Show Name - [01x23] - Episode Name.avi or Show Name - [01xSpecial02] - Special Name.avi or Show Name - [01xExtra01] - Extra Name.avi If filename is in the format Show Name - [01x23].avi display it in a 'missing episode name' section of the output The path should be in the format Show Name/season 2/the_file.avi (where season 2 should be the correct season number in the filename) each Show Name/season 1/ folder should contain "folder.jpg". Any ideas? While I'm trying to check TV episodes, this concept/code should be able to apply to many things.. The only thought I had was a list of dicts in the format: checker = [ { 'name':'valid files', 'type':'file', 'function':check_valid(), # runs check_valid() on all files 'status':0 # if it returns True, this is the status the file gets }
[ "\nI want to add a rule that checks for\n the presence of a folder.jpg file in\n each directory, but to add this would\n make the code substantially more messy\n in it's current state..\n\nThis doesn't look bad. In fact your current code does it very nicely, and Sven mentioned a good way to do it as well:\n\nGet a list of all the files\nCheck for \"required\" files\n\nYou would just have have add to your dictionary a list of required files:\nchecker = {\n ...\n 'required': ['file', 'list', 'for_required']\n}\n\nAs far as there being a better/extensible way to do this? I am not exactly sure. I could only really think of a way to possibly drop the \"multiple\" regular expressions and build off of Sven's idea for using a delimiter. So my strategy would be defining a dictionary as follows (and I'm sorry I don't know Python syntax and I'm a tad to lazy to look it up but it should make sense. The /regex/ is shorthand for a regex):\ncheck_dict = {\n 'delim' : /\\-/,\n 'parts' : [ 'Show Name', 'Episode Name', 'Episode Number' ],\n 'patterns' : [/valid name/, /valid episode name/, /valid number/ ],\n 'required' : ['list', 'of', 'files'],\n 'ignored' : ['.*', 'hidden.txt'],\n 'start_dir': '/path/to/dir/to/test/'\n}\n\n\nSplit the filename based on the delimiter.\nCheck each of the parts.\n\nBecause its an ordered list you can determine what parts are missing and if a section doesn't match any pattern it is malformed. Here the parts and patterns have a 1 to 1 ratio. Two arrays instead of a dictionary enforces the order.\nIgnored and required files can be listed. The . and .. files should probably be ignored automatically. The user should be allowed to input \"globs\" which can be shell expanded. I'm thinking here of svn:ignore properties, but globbing is natural for listing files.\nHere start_dir would be default to the current directory but if you wanted a single file to run automated testing of a bunch of directories this would be useful.\nThe real loose end here is the path template and along the same lines what path is required for \"valid files\". I really couldn't come up with a solid idea without writing one large regular expression and taking groups from it... to build a template. It felt a lot like writing a TextMate language grammar. But that starts to stray on the ease of use. The real problem was that the path template was not composed of parts, which makes sense but adds complexity.\nIs this strategy in tune with what you were thinking of?\n", "maybe you should take the approach of defaulting to: \"the filename is correct\" and work from there to disprove that statement:\nwith the fact that you only allow filenames with: 'show name', 'season number x episode number' and 'episode name', you know for certain that these items should be separated by a \"-\" (dash) so you have to have 2 of those for a filename to be correct.\nif that checks out, you can use your code to check that the show name matches the show name as seen in the parent's parent folder (case insensitive i assume), the season number matches the parents folder numeric value (with or without an extra 0 prepended).\nif however you don't see the correct amount of dashes you instantly know that there is something wrong and stop before the rest of the tests etc.\nand separately you can check if the file folder.jpg exists and take the necessary actions. or do that first and filter that file from the rest of the files in that folder.\n" ]
[ 2, 0 ]
[]
[]
[ "naming", "python", "validation" ]
stackoverflow_0000019030_naming_python_validation.txt
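The question's code is Python, but the "table of rules" idea it reaches for is language-neutral. A compact C# sketch of the same shape (the regexes are abbreviated to the first two rules; status codes follow the question's 0/2/'malformed' convention):

using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

class NamingRule
{
    public string Name;
    public Regex Pattern;
    public int Status; // 0 = valid, 2 = missing episode name, ...
}

class SchemeChecker
{
    static readonly List<NamingRule> Rules = new List<NamingRule>
    {
        new NamingRule { Name = "valid", Pattern = new Regex(@"^.+ - \[\d+x\d+\] - .+\.avi$"), Status = 0 },
        new NamingRule { Name = "missing episode name", Pattern = new Regex(@"^.+ - \[\d+x\d+\]\.avi$"), Status = 2 }
    };

    static int Classify(string fileName)
    {
        foreach (NamingRule rule in Rules)
            if (rule.Pattern.IsMatch(fileName))
                return rule.Status;
        return 1; // malformed name
    }

    // The folder.jpg rule becomes just one more per-directory check.
    static bool HasFolderArt(string directory)
    {
        return File.Exists(Path.Combine(directory, "folder.jpg"));
    }
}

Adding a new rule is then a one-line change to the table rather than another branch in the loop.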