text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
> With the current version of Rails, Chris' approach works just fine in > this scenario, and the code required in the view is fairly > straightforward to write and maintain. Peter's right in that it just > seems like HABTM isn't "meant to" support this level of complexity; > I'm concerned that a later Rails update could break it. HABTM is indeed not well suited for complexity. It's pushing back because it wants you to discover the implicit model object that's missing from the equation. In the author/book example that model is Authorship. So you would have: class Author has_many :authorships end class Book has_many :authorships end class Authorship belongs_to :author belongs_to :book end Now the domain is explicit, but we're still in a bit of trouble. Since this pretty domain will force us to write SQL by hand to get performant access. So you'd probably have: class Author has_many :authorships def books find_by_sql( "SELECT books.* " + "FROM books, authors, authorships " + "WHERE authorships.book_id = books.id AND + " "authorships.author_id = #{id}" ) end end And you would create new relationships by doing b = Book.create :title => "Agile Web Development with Rails" david = Author.create :name => "David Heinemeier Hansson" dave = Author.create :name => "Dave Thomas" Authorship.create([ { :book => b, :author => david }, { :book => b, :author => dave } ]) Now this is actually a lot less painful than one could imagine. But its still not painless enough. So we're currently working on allowing: class Author has_many :authorships has_many :books, :through => :authorships end This would expose both the join model (authorship) and allow convenient access to the model on the other side of that join model. One could even imagine: class Author has_many :authorships has_many :books, :through => :authorships has_many :agents, :through => :authorships end class Authorship belongs_to :agent belongs_to :author belongs_to :book end So if I was having pushback from HABTM today, I would try to discover which implicit domain model I hadn't revealed yet. Then I'd spend the few minutes making the manual accessors for that join model. And I would then look forward to the day where I could remove my manual access as :through materializes. -- David Heinemeier Hansson -- Broadcasting Brain -- Online project management -- Personal information manager -- Web-application framework
http://article.gmane.org/gmane.comp.lang.ruby.rails/32742/
crawl-002
refinedweb
369
54.52
facebook ◦ twitter ◦ View blog authority View blog top tags /// Gets the document root for a given workspace on the server. /// Will load it on demand if it hasn't been created yet. /// <param name="workspaceName"><; /// Connects to a SharePoint server for accessing /// workspaces, folders, and items.); return rc; } /// This represents a wrapper class to more easily /// use the PKMCDO KnowledgeFolders object for accessing /// Sharepoint 2001 items. It uses COM interop so it's /// slooow but it works and at least you can use C# iterators. public class SharePointFolder private string folderUrl; private KnowledgeFolder folder = new KnowledgeFolder(); private ArrayList subFolders = new ArrayList(); /// Constructs a SharePointFolder object and opens /// the datasource (via a url). COM interop so its /// ugly and takes a second or so to execute. /// <param name="url"></param> public SharePointFolder(string url) folderUrl = url; folder.DataSource.Open( folderUrl, /// This loads the subfolders for the class /// if there are any available.(); public ArrayList SubFolders get { return subFolders; } public bool HasSubFolders get { return folder.HasChildren; } public string Name get { return folder.DisplayName.ToString(); }: /// This function uploads a local file to a remote SharePoint /// document library using regular HTTP responses. Can be /// included in a console app, windows app or a web app. /// . Pingback from SharePoint, IronPython, and another lesson in the virtue of laziness « Jon Udell Pingback from SharePoint, IronPython, and another lesson in the virtue of laziness at aoortic! dot com
http://weblogs.asp.net/bsimser/archive/2004/06/06/149673.aspx
crawl-002
refinedweb
231
55.54
In this section we will read about how to use the wrapper classes in Java. Wrapper classes wrap the primitive data types to objects of that class. In Java there are eight primitive data types and each data type has the corresponding wrapper class. These wrapper classes belongs to the java.lang package. In Java the wrapper classes are required to allow the null values because objects can be null whereas, the primitives can't. Wrapper classes are used in to include the values in Collection, it is required for type safety. It is required when treating the primitives values as an object with other objects. The table given below lists the primitive data types and its respective wrapper class : Following image demonstrates the hierarchy of the Wrapper class : Almost all of each wrapper class has the parseXXX(), toString(), xxxxValue() etc. Below we are discussing some of the methods of java.lang.Integer class but, these methods are usually applies on almost all wrapper classes. Example Here I am giving a simple example which will demonstrate you about how to use the wrapper classes in Java. In this example we will create a Java class where we will use the various methods of java.lang.Integer class such as compareTo(), equal(), toString() etc. WrapperClassExample.java package net.roseindia.wrapperclassexample; public class WrapperClassExample { public static void main(String args[]) { String num = "20"; int i = Integer.parseInt(num); Integer it1 = new Integer(num); Integer it2 = new Integer(20); String str = it2.toString(); Integer intstr = Integer.valueOf(str); boolean bol = it1.equals(i); int c = it2.compareTo(intstr); if(c == 0) { System.out.println("it2 and intstr are equals"); } System.out.println("String representation of it2 = "+ str); System.out.println("Integer object = "+ it1); System.out.println("Integer representation of num = "+i); System.out.println("it1 and i are equal : "+bol); } } Output When you will compile and execute the above example you will get the output as follows : If you enjoyed this post then why not add us on Google+? Add us to your Circles Liked it! Share this Tutorial Discuss: Java Wrapper Class Example Post your Comment
http://www.roseindia.net/java/beginners/wrapper-class-example-in-java.shtml
CC-MAIN-2014-52
refinedweb
354
51.14
While tracing with a UI is simple, it has limitations today. When it comes to tracing a sticky problem, you need extensibility and flexibility in order to instrument when to start and stop a trace. We facilitate this in Message Analyzer by leveraging PowerShell and a new set of Cmdlets which ship with Message Analyzer which you can download from the Microsoft Download Center. Getting Started with Help PowerShell has built-in help and a facility to keep help files up to date. We leverage this system which means you need to tell PowerShell to update the help the first time you run. You do this by running the command “Update-Help – Force _Module *” from an administrative PowerShell prompt. You can see a list of all the cmdlets by typing “help pef”. PEF stands for Protocol Engineering Framework, of which Message Analyzer is a part of. This result in the following list: 1: PS> help pef 2: 3: Name Category Module Synopsis 4: ---- -------- ------ -------- 5: Add-PefMessageProvider Cmdlet PEF Adds a message provider to a Trace Session. 6: Invoke-PefCustomAction Cmdlet PEF Creates a PEF action that runs a script block. 7: New-PefDateTimeTrigger Cmdlet PEF Creates a trigger that signals at the specifie... 8: New-PefEventLogTrigger Cmdlet PEF Creates a trigger that signals when an event l... 9: New-PefKeyDownTrigger Cmdlet PEF Creates a trigger that signals when a user pre... 10: New-PefMessageTrigger Cmdlet PEF Creates a trigger based on detecting a filter ... 11: New-PefProcessTrigger Cmdlet PEF Creates a trigger that signals when a process ... 12: New-PefTimeSpanTrigger Cmdlet PEF Creates a trigger that signals after the speci... 13: New-PefTraceSession Cmdlet PEF Creates a PEF trace session. 14: New-PefWin32EventTrigger Cmdlet PEF Creates a trigger that signals when a specifie... 15: Save-PefDataCollection Cmdlet PEF Saves the data from a PEF Trace Session. 16: Set-PefTraceFilter Cmdlet PEF Sets a Trace Filter to a PEF Trace Session. 17: Start-PefTraceSession Cmdlet PEF Starts a PEF Trace Session. 18: Stop-PefTraceSession Cmdlet PEF Stops a PEF Trace Session. Keep in mind this searches for anything with “pef” in the name, so you might see other non-related entries depending on what you have installed. Simple Circular Capture There are two ways to collect a trace, linear and circular. Linear means that you keep collecting data until you stop the trace. This is how the UI captures today. However, you can be limited by disk space in this scenario. Circular capture, on the other hand, lets you capture the most recent traffic while dropping older trace data as you exceed the maximum buffer size, which defaults to 100MB. You can use the TotalSize parameter to change this default. Using a circular trace lets you capture, potentially forever, and then stop the trace once the problem occurs. The only risk here is that the data you want to capture might roll off the end of the buffer if it is not set large enough. Circular capture in the UI is currently problematic because of the way we show messages live as they arrive. While we have plans to support this in the future, using PowerShell is the only way to get a circular capture today. The general steps for any capture session consists of the following: - Create a trace session using New-PefTraceSession. The returned session is used to pass to other cmdlets that require a session, like those to start and stop a trace session, like Start-PefTraceSession and Stop-PefTraceSession. - Add providers to the session using Add-PefMessageProvider. Remember we leverage ETW (Event Tracing for Windows), so this is effectively adding any providers on the system, including the ones we ship for capturing NDIS, Firewall and HTTP Proxy traffic (see Network Capture is Dead blog for more info). Also note here that you can alternative specify a trace file path as input instead of a provider. This lets you process a trace, perhaps to apply a filter or to chop it up into smaller bits. [Note: Add-PefMessageProvider has been replaced by Add-PefMessageSource, which allows you to add more than just providers, for instance a static input trace.] - An optional step it to decide when to stop the trace, using Stop-PefTraceSession. For instance, you can stop when a certain filter is met, for a specific amount of time, or when an Event Log message occurs. There are various types of triggers that can be included here as a stop criteria. But triggers can also be used in other situations, for instance when to start a trace. - Start the capture using Start-PefTraceSession. So you might consider it odd to use the Stop cmdlet before you Start. However, the purpose of Stop is to define when to stop a trace. You need to specify that before you start because once the session is started you are blocked from entering any more commands. So here are a list of commands to start a simple circular capture with a total size of 50MB. We add the PEF NDIS packet capture provider. The session is stopped by using a Ctrl+C, which is the default stopping mechanism. 1: PS> $TraceSession01 = New-PefTraceSession -Mode Circular -Force -Path ".\Trace01.matu" -TotalSize 50 -SaveOnStop 2: 3: PS> Add-PefMessageProvider -Session $TraceSession01 -Provider "Microsoft-PEF-WFP-MessageProvider" 4: 5: PS> Start-PefTraceSession -Session $TraceSession01 [Note: the -Session parameter has been changed to -PEFSession] The results of the trace is a .matu file, which stands for Message Analyzer Trace Unparsed. We save the data as unparsed because parsing takes time, and for these types of traces, speed is of the essence. You can save as parsed as well using the –SaveAsParsed flag, however this will slow things down during capture. On the other hand, loading a .matp file, which is the parsed format, is much faster. This is why it’s our default format when you save in the UI. Triggers The example above is pretty simple. But what if you want to control when a trace starts and stops, or decide to only apply a filter after some other event has occurred? The key to capturing something in a live environment is limiting how much data you capture. The ideal would be to let a capture run forever until some event happens. Triggers are the mechanism that allows you to get more concise about when to start and stop, which limits the data you need to capture and have to analyze. Let’s go over some of the triggers below: - New-PefDateTimeTrigger– Start/Stop/Save a trace at a specific time. - New-PefTimeSpanTrigger– Start/Stop/Save a trace after some time has passed, like 10 minutes. - New-PefEventLogTrigger– Start/Stop/Save a trace when a specific Event Log message occurs. - New-PefWin32EventTrigger – Start/Stop/Save a trace when a Win32 event is signaled. This is helpful because it provides an out of process way to control a trace. - New-PefKeyDownTrigger – Start/Stop/Save a trace when Ctrl+C is hit. Today, this cmdlet is fairly limited. In fact, PowerShell uses Ctrl+C to stop a running command by default. Example: Trigger on an Event in the System Log The following examples creates a circular capture that stops when Event 1234 occurs. The event source is a name we made up so that we could test this by issuing a unique command. This happens to be the example from the PowerShell help for New-PefEventLogTrigger. 1: PS> $Trigger01 = New-PefEventLogTrigger -LogName "Application" -EventSourceName "PEFTestSource" -EventID 1234 2: 3: PS> $TraceSession01 = New-PefTraceSession -Mode Circular -Force -Path "C:\Users\Admin\Documents\EventLog" 4: 5: -TotalSize 50 -SaveOnStop 6: 7: PS> Add-PefMessageProvider -Session $TraceSession01 -Provider "Microsoft-Pef-WFP-MessageProvider" 8: 9: PS> Stop-PefTraceSession -Session $TraceSession01 -Trigger $Trigger01 10: 11: PS> Start-PefTraceSession -Session $TraceSession01 To cause this trigger to get hit, you can manually insert an event log message using the following PowerShell commands from an elevated PowerShell prompt. More likely you’d be waiting for an event that you don’t make up, but this will let you test the functionality, since it’s not always easy to make a specific event fire. 1: PS> New-EventLog -Logname Application -Source PEFTestSource 2: 3: PS> Write-EventLog -Logname Application -Source PEFTestSource -eventId 1234 -Entrytype Information Example: Issue a custom event trigger So, you want to cause virtually anything to happen when you see an expression or pattern? No problem – because with embedded PowerShell scripting, almost anything is possible. Keep in mind the practical limitations of processing power and memory. With this cmdlet you basically have the ability to run a PowerShell script as the result of a trigger. For this example, we simply print out every time we see an ICMP message. The “New-PefMessageTrigger” issues a filter that matches any module equal to ICMP. The resulting @sb object is executed. [Update] TIP: make sure to use ping with the /4 option to force ICMP vs ICMPv6. 1: PS> $t = New-PefKeyDownTrigger -CtrlC 2: 3: PS> $sb = { $Host.UI.WriteErrorLine("ICMP found") } 4: 5: PS> $s = New-PefTraceSession -Mode Linear -SaveOnStop -Path "C:\users\paul\documents\Simple" -Force -SaveAsParsed 6: 7: Add-PefMessageProvider -Session $s -Provider Microsoft-Pef-WFP-MessageProvider 8: 9: PS> $t2 = New-PefMessageTrigger -Session $s -Filter "ICMP" -Repeat 10: 11: Invoke-PefCustomAction -Script $sb -Trigger $t2 12: 13: PS> Stop-PefTraceSession -Session $s -Trigger $t 14: 15: PS> Start-PefTraceSession -s $s Example: Triggering by an external event Sometimes you want to trigger an event programmatically and across process. We leverage Windows Win32 Events to do this. This example listens for an event called MyEvent to be signaled. 1: PS> $t = New-PefWin32EventTrigger -EventName MyEvent -CheckTimerPeriodMs 5 2: 3: PS> $s = New-PefTraceSession -Name TestEventLogEventScenario -Mode Linear -SaveOnStop -Path C:\users\paul\documents\Win32Event -Force 4: 5: Add-PefMessageProvider -Session $s -Provider Microsoft-Pef-WFP-MessageProvider 6: 7: PS> Stop-PefTraceSession -Session $s -Trigger $t 8: 9: PS> Start-PefTraceSession -Session $s I used the following script to signal the event, but you can imagine that there are many ways you can set a Win32 event programmatically or even remotely. 1: $source = @" 2: 3: using System.Threading; 4: using System; 5: 6: public class Win32Event 7: { 8: 9: EventWaitHandle ewh; 10: 11: public Win32Event(string s) 12: { 13: Console.WriteLine(s); 14: ewh = new EventWaitHandle(false, EventResetMode.AutoReset, s); 15: } 16: 17: public void Set() 18: { 19: Console.WriteLine("set"); 20: 21: ewh.Set(); 22: } 23: 24: public void Reset() 25: { 26: Console.WriteLine("reset"); 27: ewh.Reset(); 28: } 29: 30: public void SetAndReset() 31: { 32: ewh.Set(); 33: Thread.Sleep(500); 34: ewh.Reset(); 35: } 36: } 37: 38: "@ 39: 40: Add-Type -TypeDefinition $source -Language CSharp 41: 42: [string]$eventName = $args[0] 43: 44: if($args.Count -eq 0) 45: { 46: $eventName = "MyEvent" 47: } 48: 49: $testobj = New-Object -TypeName Win32Event($eventName) 50: $testobj.SetAndReset() Script for Any Situation PowerShell is flexible because of how the commands are designed to integrate with each other. The versatility is endless. And, as always, please visit the relevant documentation online for PowerShell for more detail. As we continue to evolve PowerShell and Message Analyzer we hope to make it easier to capture, and in the future analyze, in an automated fashion. More Information For a brief synopsis of the PowerShell cmdlets provided with Message Analyzer, along with versioning requirements and additional information about how to access the cmdlets and help, see the following topic from the Message Analyzer Operating Guide on TechNet: Automating Tracing Functions Hmm, weird, must be blog gnomes. I think I’ve fixed them now. Thanks. Pure Awesomeness! Not good weird CSS crapola rendering on chrome. "; }" Using PowerShell for Message Analyzer Text Log Parsers Hi Everyone, Brandon Wilson here with you again
https://blogs.technet.microsoft.com/messageanalyzer/2013/10/29/using-powershell-to-automate-tracing/
CC-MAIN-2017-47
refinedweb
1,961
54.52
Example The following example elaborates the concept of real and nominal terms more clearly. Project X is expected to generate a cash inflow of 2 million $ in one year’s time. The rate of inflation during that period is expected to be @ 10% per annum. Express the projected cash inflow in terms of ‘real’ as well as ‘nominal’ rates. Solution The projected cash inflows of 2 million $ is already expressed in ‘money’ (or nominal terms) since it is the amount of money which is actually expected next year. However, in ’real’ terms, this would be .182 million $ approximately calculated as follows: - Nominal return = 2 million $ Real return = Nomianl return/1 + inflation rate By substituting the figures, we get: = 2.0/1.10 lakhs or .182 million $* Similarly, if the real return is to be converted into nominal return, this can be done in the following manner: Nominal return = Real return (1+inflation rate) = $ .182 (1+.10) million = $ .182 (1.10) million = 2 million $. The foregoing example shows two different results in terms of ‘nominal’ and ‘real’ rates while discounting cash inflows with the rates of inflation. It is, however, perfectly correct to use either real cash flows or money cash flows in an investment appraisal clearly.* Whether the decision maker expresses the cash flows in real or money terms will make a difference to the NPV calculated unless there is some adjustment a made in the choice of discount rate. The cost of finance which we observe in the market e.g. bank lending rate includes an allowance for the rate of inflation which lenders expect. This is because lenders of money need to be compensated because of the following factors: (i) They delay consumption. (ii) They take risk that the borrower may not fulfill all of the obligations under the loan contract, and therefore needs to be compensated y a risk premium. (iii) It is also a fact that the money lent will be repaid with less purchasing power to buy goods and services. In other words, there will be erosion in the purchasing power of money between the period it is lent and it is repaid. If the investment decision maker uses real cash flows in the analysis, a discount rate which excludes an allowance for inflation, i.e., a ‘real’ must be used. On the other hand, if the decision maker prefers to deal in ‘money’ (nominal) cash flows a ‘money’ discount rate which includes an allowance for inflation should be used. It is thus clear as to why it is not possible to avoid making judgment about likely future inflation rates. If the ‘real’ route is taken, the decision maker will need to adjust prevailing rates of interest by the expected rate of inflation in order to arrive at the real discount rate. If the ‘money’ route is taken the money cash flows will need to be estimated. This will involve the decision-maker in estimating the actual amount which will be paid or received at various times, which in turn, requires some estimate of the rate of inflation. Note: However in real life everybody uses “real terms” cash forecasts and “nominal dollars” cash forecasts. In other words, two different appraisal exercises are done. The “constant money” exercise is of course, considered to be of greater importance. The crucial point above investment appraisal in an inflationary environment is that the decision maker must be must be scrupulously consistent, whichever route is taken, real or money rates of discount. Money cash flows must be discounted using a money discount rate, real cash flows must be discounted using a real discount rate. The following discussion further enumerates the concept of real and nominal rate of return and its likely implications on an investment decision. It is well known fact that interest rates are usually quoted in nominal rather than real terms. In other words, if someone buys a treasury bill of Government of India for $ 1,000 which promises to pay, say $ 1120 after one year. It does not make any promise about what $ 1120 can buy and of how much value (worth). The investor generally takes this factor into account while deciding what is a fair rate of interest since the purchasing power of $ 1120 to be received in one year’s time will be less in reality than now in view of inflation. It is, therefore, advisable to consider inflation factor also at the time of making investment decisions. If we see in our aforesaid example, the rate of interest on a one year treasury bill is 12 percent (nominal rate of return) and suppose that next year’s inflation is expected to be around 8 percent, the investor will be getting back the principal + interest ($ 100 + $ 120 = $ 1120) after one year which in reality is worth 8 percent less than the current value of $ 1120. How much actual purchasing power is represented by $ 1120? It has to be measured in terms of current purchasing power of dollar. For this, we have to convert $ 1120 to be received in one year into the current $ by dividing by 1.08 (1 plus the expected inflation rate) as follows. Purchasing power of Number of current $ 1120 to be = $ having same = 1120/1.08 = $1037 Received in one year = purchasing power Transtutors is the best place to get answers to all your doubts regarding solved examples based on concept of real and nominal terms with examples. Transtutors has a vast panel of experienced in concept of real and nominal terms financial management tutorswho can explain the different concepts to you effectively. You can submit your school, college or university level homework or assignment to us and we will make sure that you get the answers related to concept of real and nominal terms Attach Files
https://www.transtutors.com/homework-help/financial-management/project-planning-capital-budgeting/real-and-nominal-terms-solved-examples1.aspx
CC-MAIN-2018-05
refinedweb
966
50.57
The documentation you are viewing is for Dapr v1.4 which is an older version of Dapr. For up-to-date documentation, see the latest version. Redis Streams Component format To setup Redis Streams pubsub create a component of type pubsub.redis. See this guide on how to create and apply a pubsub configuration. apiVersion: dapr.io/v1alpha1 kind: Component metadata: name: redis-pubsub namespace: default spec: type: pubsub.redis version: v1 metadata: - name: redisHost value: localhost:6379 - name: redisPassword value: "KeFg23!" - name: consumerID value: "myGroup" - name: enableTLS value: "false" WarningThe above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here. Spec metadata fields. Related links - Basic schema for a Dapr component - Read this guide for instructions on configuring pub/sub components - Pub/Sub building block Feedback Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve.
https://v1-4.docs.dapr.io/reference/components-reference/supported-pubsub/setup-redis-pubsub/
CC-MAIN-2021-49
refinedweb
164
52.66
TL;DR: Let's face it, Artificial Intelligence (AI) is taking over applications. In the not-so-far future, most software will have AI systems embedded in it and applications without it will slowly fade away. The ability for software applications to understand data generated by users and use it for valuable predictions is fast becoming a must-have in every application. The big names in tech are not unaware of this as we see tech giants like Google, Microsoft e.t.c. launching AI and Machine learning APIs and SDKs to enable developers to easily embed these capabilities in their applications. In this article, you would make use of Google's Vision API to build a simple emoji prediction game. What You Will Build The application you will be building is a simple emoji game that displays a random emoji to the user, the user then has to scan a physical object around them that matches the emoji using the webcam. Google's Vision API will predict the scanned image and return results that will be compared to the emoji to see if it matches. To see the finished code, please check out this GitHub repo. The game would operate as follows: - User has to sign in to play - The user would select a camera feed from a dropdown that displays all available camera feeds on the system. For example, the computer's built-in webcam - User clicks Play to start a new game, and an emoji will be displayed to the user to scan a matching object - The user scans object and clicks on Predict for Google's Vision API to return results to check for a match - Users must scan an object before the 60 seconds timer runs out - The user gets 10 points for a correct match and loses 5 points for a wrong one - Users can skip an emoji for another one to be displayed within the 60 seconds window - If the 60 seconds run out before the user gets a correct prediction, it's Game Over! "In the not-so-far future, most software will have AI systems embedded in it and applications without it will slowly fade away." Prerequisites To follow along with this article, a few things are required: - Basic knowledge of Vuejs - Nodejs installed on your system - Vue CLI installed on your system - A Google account Creating the Google API Project Creating the project The first thing you need to have set up is a Google Cloud Platform (GCP) project. To do that, simply go to the GCP Console, this would require you to sign in with your Google account if you aren't already signed in. Once signed in, you would see the GCP dashboard. To create a new project, click on the project dropdown at the left-hand side of the blue top toolbar. This pops up a modal with a list of all your projects. If you have not created a project yet, this modal will be empty. On the modal, click on the New Project button at the top right-hand corner. This automatically redirects you to the project creation page where you can create a new Google Cloud Platform project. After entering your project name, leave the Location as No Organization and click the Create button. The system automatically sets up the project and notifies you when it's done. It also selects the project as the current project in the project dropdown on the top blue toolbar. If your new project is not automatically selected, you can click the project dropdown and select the project in the project list displayed in the modal. "Let's face it, Artificial Intelligence (AI) is taking over applications." Enabling the Cloud Vision API To be able to use GCP's Vision API, you need to enable the API for your project. With your project selected on the project dropdown, click Dashboard on the side menu. Then just below the top blue toolbar, click on the Enable APIs and Services button, this will then redirect you to the API library page which shows a list of all available APIs. In the search box, type in Vision API to filter the list. You should now see the Cloud Vision API filtered out of the list. Click on it to select it. You will now be redirected to the Cloud Vision API page that gives an overview of what the API is all about. Click on the blue Enable button to enable this API for your project. Getting an API key When calling the Google Cloud Vision API from your application, you need to authenticate your requests using an API key. To get an API key, simply navigate to the Credentials page. Do ensure that your project is selected on the project dropdown at the top. Click on the Create credentials dropdown and select API Key from the list of options. The system automatically generates an API key for you and displays it in a dialog. Copy this key and close the dialog box. You will be needing this key later in the project. Please Note: Google requires that you have a billing account set up for your GCP account to continue using the API once your free limit is exceeded. You can set up a billing account here. You can find out more about Pricing and Quotas on the GCP website. Creating the Auth0 Application Next up, you will be creating your Auth0 application to handle authentication in your Vue application, so head to Auth0's website and click the Log in button to sign in to the console. Once logged in, click on Applications on the left-hand side menu. On the Applications page, click on the big orange CREATE APPLICATION button. Once the Create Application dialog that pops up, enter an appropriate name for your application and select Single Page Web Applications from the options below the application name field. Now click the Create button to complete the process. After the successful creation of the application, go to the Settings section of your newly created app. In the Allowed Callback URLs, Allowed Web Origins, Allowed Logout URLs and Allowed Origins (CORS) fields, enter. This address is the default address of the Vue application you will be creating later on. Once you're done entering these values, scroll down and hit the SAVE CHANGES button. Scaffolding the Vue Project Create a new Vue application by running the following command in the folder where you want your application to be located. vue create my-emoji-game When the Please pick a preset? prompt comes up in the interactive CLI, select default. After your selection, the Vue CLI begins scaffolding your new Vue application. Setting up Authentication with Auth0 You are going to set up authentication with Auth0 in your Vue application based on the instructions from Auth0's documentation. The Authentication Service The first thing you need to do is to write an authentication service. For this, you first need to install the @auth0/auth0-spa-js. So, at the root of your Vue application, run the following command: npm install @auth0/auth0-spa-js Once the installation is complete, the next thing you are to do is to create the authentication service. Within the authentication service, you will need your Auth0 domain and application id. To separate these details from your service code, create a new file named auth_config.json at the root of your project and paste the following snippet inside: // auth_config.json { "domain": "YOUR_AUTH0_DOMAIN", "clientId": "YOUR_APP_ID" } Ensure to replace the placeholders YOUR_AUTH0_DOMAIN and YOUR_APP_ID with the appropriate values. You can find your client id on your application page and your Auth0 domain is in the form [YOUR_USERNAME].auth0.com e.g. user1.auth0.com. Make sure you ignore this file in your .gitignore file to ensure that your sensitive credentials are not pushed to a public repository. Now to create the authentication service, inside the src folder, created a folder named auth and inside this folder create a file named index.js and paste the code below: // src/auth/index.js(o) { this.popupOpen = true; try { await this.auth0Client.loginWithPopup(o); }({ domain: options.domain, client_id: options.clientId, audience: options.audience,); } }; This service creates a wrapper object around the Auth0 SDK and implements a Vue plugin that exposes this wrapper object to the rest of the application. This wrapper API consists of user authentication methods like loginWithPopup, logout etc. that you can call within your application. Adding a router To manage redirection during Auth0's authentication process, you need to add a router to the project. For this, you will need to install the vue-router package by running the following command in the root of your project. npm install vue-router After successful installation of the package, create a file named router.js in your src folder and paste in the following code: // src/router.js import Vue from "vue"; import VueRouter from "vue-router"; import App from "./App"; Vue.use(VueRouter); const routes = [{ path: "/", component: App }]; const router = new VueRouter({ routes }); export default router; This simply sets up the router for use in the application and exports an instance of the Vue router which has been supplied an array of routes. Now go into the file main.js in the src folder and replace its contents with the following code: // src/main.js import Vue from "vue"; import App from "./App.vue"; import router from "./router"; // Import the Auth0 configuration import { domain, clientId } from "../auth_config.json"; // Import the plugin here import { Auth0Plugin } from "./auth"; // Install the authentication plugin here Vue.use(Auth0Plugin, { domain, clientId, onRedirectCallback: appState => { router.push( appState && appState.targetUrl ? appState.targetUrl : window.location.pathname ); } }); Vue.config.productionTip = false; new Vue({ router, render: h => h(App) }).$mount("#app"); This file loads in the Auth0 configuration, the router and authentication service plugin. The plugin is set up for use within the application using Vue.use with the required parameters, the router is also loaded into the root instance of the application. Note: If you run this project at this point (or at any other point in the course of this article) and you run into errors relating to core-js, simply run npm install core-js to fix the issue. Building the Emoji Game Now to the main action, this is where you tie everything together. You will be writing code to do the following: - Load and display in a select list the camera input options available on the system - Display the feed of the selected camera input - A Play button to initiate a new game - A Skip Emoji button to move to a different emoji is the player can't predict the current one - Display an object Emoji (no smileys as we don't want to include the prediction of facial expressions) - Create a countdown timer - Make a request with the captured image to Google's Vision API - Display the current score - Display the User's details - Display pop dialogs for notifications Phew! That's a handful, isn't it? But don't worry, you will be done in no time. Installing required packages The first step to building the game is to install the packages that will be required. You need to install the following packages: - axios: To make API requests to the Google Vision API - bootstrap: Basic styling - emoji.json: Emojis Library Go ahead and install these by running the following command: npm install axios bootstrap emoji.json The modal component Before proceeding to build the game page, a modal component is required to display notifications on the game page. Go into the src/components folder and delete the HelloWorld.vue file that is comes by default with the scaffolded project and create a new file named modal.vue pasting the following code in it: // src/components/modal.vue <template> <transition name="modal"> <div class="modal-mask"> <div class="modal-wrapper"> <div class="modal-container"> <div class="modal-header">{{header}}</div> <div class="modal-body">{{content}}</div> <div class="modal-footer"> <button class="modal-default-button" @OK</button> </div> </div> </div> </div> </transition> </template> <script> export default { props: ["header", "content"] }; </script> <style scoped> .modal-mask { position: fixed; z-index: 9998; top: 0; left: 0; width: 100%; height: 100%; background-color: rgba(0, 0, 0, 0.5); display: table; transition: opacity 0.3s ease; } .modal-wrapper { display: table-cell; vertical-align: middle; } .modal-container { width: 600px; margin: 0px auto; padding: 20px 30px; background-color: #fff; border-radius: 2px; box-shadow: 0 2px 8px rgba(0, 0, 0, 0.33); transition: all 0.3s ease; font-family: Helvetica, Arial, sans-serif; } .modal-header h3 { margin-top: 0; color: #42b983; } .modal-body { margin: 20px 0; } .modal-default-button { float: right; } .modal-enter { opacity: 0; } .modal-leave-active { opacity: 0; } .modal-enter .modal-container, .modal-leave-active .modal-container { -webkit-transform: scale(1.1); transform: scale(1.1); } </style> This file exports a simple modal component that takes in the modal title ( header) and modal body ( content) as props. Building the game page Now to the main action. Using the full power of Google's machine learning API for vision, you will now tie everything together for the game!. Go into the src folder and replace everything in the App.vue file with the code below: // src/App.vue <template> <div id="app" v- <nav class="navbar navbar-expand-lg navbar-dark bg-dark"> <a class="navbar-brand" href="#">The Emoji Game</a> <button class="navbar-toggler" type="button" data- <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbarText"> <div class="navbar-nav mr-auto user-details"> <span v-{{ $auth.user.name }} ({{ $auth.user.email }})</span> <span v-else> </span> </div> <span class="navbar-text"> <ul class="navbar-nav float-right"> <li class="nav-item" v- <a class="nav-link" href="#" @Log In</a> </li> <li class="nav-item" v- <a class="nav-link" href="#" @Log Out</a> </li> </ul> </span> </div> </nav> <div class="container mt-5" v- <div class="row"> <div class="col-md-8 offset-md-2"> <div class="jumbotron"> <h1 class="display-4">Play the Emoji Game!</h1> <p class="lead">Instructions</p> <ul> <li>Login to play</li> <li>Select a camera feed of your choice from the drop down</li> <li>Click “Play” to start a new round</li> <li>Capture the correct image that matches the emoji displayed</li> <li>You can skip an emoji during a round if you can’t find the corresponding object around you to capture</li> <li>Click ‘Predict Image’ to compare your captured image against the emoji</li> <li>You gain 10 points for a correct prediction and loose 5 points for a wrong prediction</li> <li>If the clock runs out, its Game Over!</li> </ul> <a class="btn btn-primary btn-lg mr-auto ml-auto" href="#" role="button" @Log In to Play</a> </div> </div> </div> </div> <!-- The emoji game --> <div class="container mt-5" v-else> <div class="row"> <div class="col-md-4"> <select class="form-control" v- <option v-for="source in videoSources" :key="source.id" :{{source.text}}</option> </div> <div class="col-md-8"> <div class="row"> <div class="col-md-4">Countdown : {{timerStart}}</div> <div class="col-md-4">Total Score: {{totalScore}}</div> </div> </div> </div> <div class="row mt-5" v- <button class="btn btn-primary" @Start New Game</button> </div> <div class="row mt-5" v-else> <div class="col-md-6"> <button class="btn btn-success" :Play</button> <button class="btn btn-warning" :disabled="!inPlay" @Skip Emoji</button> </div> </div> <!-- row --> <div class="row mt-5"> <div class="col-md-6"> <div> <video autoplay</video> <canvas style="display:none;" id="canvas1" ref="canvas1"></canvas> </div> <div> <button class="btn btn-primary" @Capture Image</button> </div> </div> <div class="col-md-6"> <div class="row"> <div class="col-md-6"> <img id="usercapture" ref="usercapture" src="" width="200" height="200" /> <div class="mt-2"> <button class="btn btn-success" @click="predictImage()" :{{predictingImage? 'Please Wait...' : 'Predict Image'}}</button> </div> </div> <div class="col-md-6"> <p class="currentEmoji">{{currentEmoji.char}}</p> </div> </div> </div> </div> <!-- Modal --> <modal v-if="modal.show" @close="modal.show = false" :header="modal.header" :</modal> </div> </div> </template> <script> import "bootstrap/dist/css/bootstrap.css"; import axios from "axios"; import emojis from "emoji.json"; import modal from "./components/modal"; export default { name: "app"," } }; }, components: { modal }, mounted() { navigator.mediaDevices .enumerateDevices() .then(this.gotDevices) .catch(this.handleError); }, computed: { gameEmojis() { return emojis.filter(emoji => { return ( emoji.category.includes("Objects") && emoji.char.charCodeAt(0) != 55358 ); }); } }, methods: { // Log the user in this.$auth.loginWithRedirect(); }, // Log the user out this.$auth.logout({ returnTo: window.location; }, handleError(error) { this.showModal( "Error", error.message ); }, playGame() { //Get Random Emoji this.switchEmoji(); this.inPlay = true; //Start timer countdown this.setTimer(); }, skipEmoji() { this.switchEmoji(); this.imageURL = null; }, captureImage() { let canvas = this.$refs.canvas1; let videoElement = this.$refs.video1; canvas.width = videoElement.videoWidth; canvas.height = videoElement.videoHeight; canvas.getContext("2d").drawImage(videoElement, 0, 0); let imgElement = this.$refs.usercapture; // Get image let image = canvas.toDataURL("image/png"); //Trim signature to get pure image data this.imageURL = image.replace(/^data:image\/(png|jpg);base64,/, ""); //Set the image element to the data url imgElement.src = image; }, async predictImage() { if (this.imageURL) { let requestBody = { requests: [ { image: { content: this.imageURL }, features: [ { type: "LABEL_DETECTION", maxResults: 10 } ] } ] }; try { this.predictingImage = true; let predictionResults = await axios.post( this.gCloudVisionUrl, requestBody ); let predictionResponse = predictionResults.data.responses[0]; let annotations = predictionResponse.labelAnnotations; let allLabelDescriptions = annotations.map(annotation => annotation.description.toLowerCase() ); //Check if any of the prediction labels match the current emoji let match = false; allLabelDescriptions.forEach(description => { if (this.currentEmoji.name.includes(description)) { match = true; } }); if (match == true) { this.totalScore += this.pointsIncrement; this.resetTimer(); this.showModal( "Correct Answer", Congratulations, you have gained </span><span class="p">${</span><span class="k">this</span><span class="p">.</span><span class="nx">pointsIncrement</span><span class="p">}</span><span class="s2"> points, your total is now </span><span class="p">${</span><span class="k">this</span><span class="p">.</span><span class="nx">totalScore</span><span class="p">}</span><span class="s2"> ); } else { this.totalScore -= this.pointsDecrement; this.showModal( "Wrong Answer", The answer you gave was incorrect, Your captured image suggested the following (</span><span class="p">${</span><span class="nx">allLabelDescriptions</span><span class="p">}</span><span class="s2">). You have lost </span><span class="p">${</span><span class="k">this</span><span class="p">.</span><span class="nx">pointsDecrement</span><span class="p">}</span><span class="s2"> points, your total is now </span><span class="p">${</span><span class="k">this</span><span class="p">.</span><span class="nx">totalScore</span><span class="p">}</span><span class="s2"> ); } this.predictingImage = false; } catch (error) { this.handleError(error); } } else { this.showModal("Error", You are yet to capture an image); } }, switchEmoji() { let emojiIndex = this.getRandomInt(0, this.gameEmojis.length); this.currentEmoji = this.gameEmojis[emojiIndex]; }, setTimer() { this.resetTimer(); this.timerHandle = setInterval(() => { if (this.timerStart > 0) { this.timerStart -= 1; } else { //Game Over this.endGame(); } }, 1000); }, resetTimer() { //Stop the Clock clearInterval(this.timerHandle); this.timerStart = 60; }, endGame() { clearInterval(this.timerHandle); this.inPlay = false; this.gameOver = true; this.showModal( "Game Over", You could not complete the task before the time ran out. Your total score is </span><span class="p">${</span><span class="k">this</span><span class="p">.</span><span class="nx">totalScore</span><span class="p">}</span><span class="s2"> ); }, startNewGame() { this.imageURL = null; this.totalScore = 0; this.gameOver = false; this.currentEmoji = {}; }, showModal(title, body) { this.modal = { show: true, header: title, content: body }; }, getRandomInt(min, max) { min = Math.ceil(min); max = Math.floor(max); return Math.floor(Math.random() * (max - min + 1)) + min; } } }; </script> <style scoped> #video1 { width: 500px; height: 400px; background-color: grey; } .currentEmoji { font-size: 120px; } .user-details { color: white; } </style> That was a lot of code! Let's break it down starting with the Vue instance. The data object is defined with the necessary variables to be used in the application." } }; }, Note the line: gCloudVisionUrl: "", Replace YOUR_GOOGLE_PROJECT_API_KEY with the API key generated from the Google project created earlier. Next, in the mounted lifecycle hook, the navigator object is used to get all media device inputs in the system. mounted() { navigator.mediaDevices .enumerateDevices() .then(this.gotDevices) .catch(this.handleError); } To use only object emojis (emojis representing objects and not smileys), a computed property gameEmojis is created. gameEmojis() { return emojis.filter(emoji => { return ( emoji.category.includes("Objects") && emoji.char.charCodeAt(0) != 55358 ); }); } Now onto the methods! login and The login and logout methods make use of the authentication service created earlier to log the user in and out of the application respectively. login() { this.$auth.loginWithRedirect(); }, logout() { this.$auth.logout({ returnTo: window.location.origin }); }, The media stream functions The gotDevices, getStream and gotStream functions are responsible for handling the collation of video sources and display of the selected video source feed in an HTML video tag for the user to; } The Game Play Functions These are the functions responsible for handing the game playing activities. playGame: Resets the game for a user to start a new game skipEmoji: Skips the currently displayed emoji and shows a new one to predict captureImage: Draws the current frame in the video element into a canvas element and converts the canvas image to a data URL that is then loaded to the screen using an image tag predictImage: Makes a request to the Google Vision API with the captured image to return prediction results, then tries to find a match between the predicted values and the emoji label If a match exists, the user scores points, if not, the user loses points switchEmoji: Changes the emoji currently displayed setTimer: Starts the timer countdown for a game round resetTimer: Resets the timer back to zero endGame: Ends the game startNewGame: Start/Restarts the game showModal: Displays the notifications modal The game page template In the game page template, a top navigation bar displays the login/logout buttons based on the authentication state. Next, there are two sections, one for the non-authenticated state and the other for the authenticated state. These two sections are displayed conditionally based on the authenticated state. The non-authenticated section consists of the game homepage listing the game instructions and login button. The authenticated section consists of the actual game. A select list of video input sources, a button to start a new game and another button to skip an emoji while in play. Also, there is a video display of the feed of the video source selected, a capture button, the button that triggers the prediction, the emoji, score, and timer displays. Running the Application Now it's time to put all the work that has been done so far to test. Run the app by running the following command: npm run serve If all goes smoothly, you should see the login screen as shown below: Now click the login button. You will see the Auth0 login prompt where you can log in with your email or Gmail/Facebook account. Once the login process is complete, you will be redirected back to the app where the game page is then displayed. Select a camera source and hit the Play button. You should see a page similar to the one below: And there you have it, a simple emoji prediction game using Auth0, Vue, and the Google Cloud Vision API. Conclusion This is surely not a production-ready application (do not deploy to app stores :)) as it could still use a lot more engineering to make it full-fledged. However, it demonstrates how you can build intelligence into your applications using readily available machine learning APIs.
https://auth0.com/blog/creating-an-emoji-game-with-vue-auth0-and-google-vision-api/
CC-MAIN-2020-05
refinedweb
3,943
56.45
A default implementation for authentication data in Wt::Dbo. More... #include <Wt/Auth/Dbo/AuthInfo> A default implementation for authentication data in Wt::Dbo. This class implements the requirements for use as a data type in Wt::Auth::Dbo::UserDatabase. It is a template class, and needs as parameter the Dbo type which models a user (e.g. name, birth date, ...). Thus, this class only carries the authentication information for that user. It contains collections to two other types: To use these classes, you need to map three classes to tables of your choice. Returns the email address. Returns the email token. Returns the email token expiration date. Returns the email token role. Returns the number of failed login attempts. Finds an identity of a particular provider. Note, a user could in theory have multiple identities from a single provider. If there are multiple, only one of them is returned. Returns the time of the last login attempt. Returns the password hash. Returns the password method. Returns the password salt. Sets the user. This sets the user that owns this authentication information. Returns the status. Returns the unverified email address.
https://www.webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1Auth_1_1Dbo_1_1AuthInfo.html
CC-MAIN-2018-09
refinedweb
190
63.46
/* /* * * * README * * * * Online Hierarchical Storage Manager * * version 1.2 * */ */ These are the release notes for OHSM version 1.2. Please read them carefully before installation of OHSM kernel module. WHAT IS OHSM ? ============== OHSM is a tool to manage and move data across various class of storage. It can help users to intelligently place and move data across tiers based on the attributes of the data. OHSM supports background movement of data without any visible change in namespace to user. OHSM requires kernel sources for your running kernel in order to compile. OHSM is built as an external module with recompilation required for ext4. It also needs that ext4 should be built as module for your current kernel or a complete rebuilt of kernel with ext4 compiled as module is required. The current version (1.2) of OHSM is based upon Linux 2.6.32.2 kernel. OHSM Download:: =============== 1. svn: svn co ohsm 2. Through git: git clone ssh://[email protected]/gitroot/ohsm/latest ohsm It creates ohsm directory having source code in it. Boot from the new vanilla 2.6.32.2 kernel. Build the kernel with ext4 as a module. OHSM compilation creates and installs the required ext4 module. Build and install the OHSM module. To compile OHSM, user requires device-mapper-devel and libxml2-devel installed. Basic Installation:: ==================== The simplest way to compile this package is: 1. "./configure" to configure various packages for the system. 2. "make" to compile the packages. 3. Type "make install" to install the programs and any data files and documentation. During "make install", the script asks for automatic installation of ext4 module. Select 'y' if you want to install it automatically. The install script requires root privileges to install the OHSM binaries. By default it assumes to have sudo access to the user. If sudo is not enabled on system or the user doesn't have sudo access, these files need to be copied manually using superuser privileges to the appropriate directories and the newly built ext4 module should be updated in /lib/modules/`uname -r`/kernel/fs/ext4 4. Type "insmod kernel/ohsm.ko ohsm_enable_debug=<debug_level>" to insert OHSM module into the kernel. The debug levels may vary from 0-8. Quick overview of some OHSM commands:: ====================================== 1. ohsm enable <path_of_policy_file> 2. ohsm status <mount_point> 3. ohsm disable <mount_point> 4. ohsm update -m <mount_point> -a <path_to_allocpolicy_file> 5. ohsm update -m <mount_point> -r <path_to_relocpolicy_file> TESTING OHSM:: ============== OHSM provides a `tests` directory, through which OHSM can be tested. You can use the `test_ohsm.sh` script that would enable OHSM on your system. Through this users can check if all the provided features of OHSM work correctly. Type the command "./test_ohsm.sh -e" for enabling. You can try various other features or also use the help utility to check the various facilities available. OTHER INFORMATION:: =================== For more information you can visit ohsm home page at the URL: <>. The OHSM source code and related files can be downloaded from the URL: <>. OHSM wiki page can be found at the URL: <>. Popular Pages in wiki: * FAQs for new users and installation issues * Getting Started For Developers * Getting Started For Users * Command Line Interface
http://sourceforge.net/p/ohsm/svn/HEAD/tree/
CC-MAIN-2014-41
refinedweb
529
61.22
The dot operator (.) is used for member access. The dot operator specifies a member of a type or namespace. For example, the dot operator is used to access specific methods within the .NET Framework class libraries as in line 11 and 12 below. using System; namespace Hello { class Program { static void Main() { string name; Console.WriteLine("Hello, what is your name?"); name = Console.ReadLine(); Console.WriteLine("Your name is " + name); Console.ReadLine(); } } } In our example above, Console is a class under System namespace. We are using 2 member functions or methods namely, WriteLine() and ReadLine(). The member functions or methods of a class are accessed by the dot operator.
http://codecrawl.com/2014/08/08/csharp-dot-operator/
CC-MAIN-2017-04
refinedweb
109
61.93
// CIS 226 Ch 2 Welcome to CIS 226 // Assign2 - Create a program which prompt user to enter first and last name and a welcome message is displayed. // Riter Heng - 2/16/2012 // for Extra Credit import javax.swing.JOptionPane; public class Assign2 { public static void main(String[] args) { //System.out.println("Welcome to CIS 226!"); //System.out.println(".."); //System.out.println(("..\n.."); //System.out.printf("%s\n%s\n", "..", ".."); String firstName = JOptionPane.showInputDialog("Please enter your first name:"); //if (firstName != null) {} //else {System.exit(0);} String lastName = JOptionPane.showInputDialog("Please enter your last name:"); if((firstName != null) || (lastName != null)) JOptionPane.showMessageDialog(null, "Hello, " + firstName +" " + lastName + ", " + "\n Welcome to CIS 226", "Message" , 0); else {JOptionPane.showMessageDialog(null, "Sorry, you have enter your name to enroll into CIS 226.", "Error", JOptionPane.ERROR_MESSAGE);} } // end method main }// end class Assign2 Okay so I was working with a classmate and he sorta left in the middle to go somewhere and I was left trying to figure out his coding. The program is suppose to give an error when you click cancel instead of not entering a first or last name. But it doesn't give that error and just skips on over and ends up at the "hello firstName Last Name, Welcome to CIS 226." Also when I ran it on the "hello..." I get a red cross as the icon but I need a "i", how do I fix that?
http://www.javaprogrammingforums.com/whats-wrong-my-code/14037-where-did-i-mess-up.html
CC-MAIN-2016-30
refinedweb
236
66.94
In this article, we’ll take a look at using the continue statement in C/C++. This is one of the easiest and the most basic keywords in C/C++, which gives programmers control over loops. Let’s take a quick look, using some examples! Table of Contents Using the continue statement in C/C++ The continue statement is exactly as the name suggests. Since we use this in loops, it will skip over the remaining body of the current loop, and continue to the next iteration. We can use this inside any loops like for, while, or do-while loops. Let’s start with while loops now. Using the continue statement with a while loop We’ll take a simple example to illustrate this concept. while (condition) { if (something) continue; else printf("Hi\n"); } In this case, if something was True, we will continue onto the next iteration. So, in this case, Hi will not be printed, as the loop directly moves to the while(condition) check. Take the below program, in C: #include <stdio.h> int main() { int i = 1; while (i <= 5) { if (i == 3) { i++; continue; } printf("i = %d\n", i); i++; } return 0; } Output 1 2 4 5 In this case, we start from i=1 and keep iterating through the while loop. If i = 3, we increment i and continue to the next iteration. So, when i=3, it will not be printed, since we skip the body and proceed to the next iteration! Note that we carefully placed an i++ increment before the continue, as otherwise, we will end up in an infinite loop, as the value of i won’t change when using continue normally! Using continue along with a for loop We can write the same program, using a for loop too! #include <stdio.h> int main() { for (int i=1; i<=5; i++) { if (i == 3) { i++; continue; } printf("i = %d\n", i); } return 0; } We get the same output as before. Using continue with a do-while loop We can also use this on a do-while loop, since this is also an iterative loop. #include <stdio.h> int main() { int i=10; // Set i = 10 do { if (i > 5) { // Initially, we will go to this statement, as it is a do-while loop! i = 0; continue; } printf("i = %d\n", i); } while (i <= 5) return 0; } In this case, since we have a do-while loop, we go to the body of the loop before we do our condition check. So, when i=10, we reassign it to 0 and continue. So, the output will simply be integers from 0 to 5! Output 0 1 2 3 4 5 Conclusion In this article, we learned how we could use the continue statement in C/C++ to have control over loop statements! If you want similar articles on C, do go through our C programming tutorials! References - cppreference.com page on Continue
https://www.journaldev.com/38218/continue-statement-in-c-plus-plus
CC-MAIN-2021-21
refinedweb
493
69.72
Dalin Nkulu15,587 Points foreach loop challenge: i don't understand the conditional statement in the conditional statement what is the 3 "frog" for me the first "Frog" is the class the second ("frog") act as a counter the third it the object of the class why "frogs.TongueLength" do not work but "frog.TongueLength" will work namespace Treehouse.CodeChallenges { class FrogStats { public static double GetAverageTongueLength(Frog[] frogs) { double average = 0.0; foreach(Frog frog in frogs) { average = frogs.TongueLength; } return average/frogs.Lenght; } } } namespace Treehouse.CodeChallenges { public class Frog { public int TongueLength { get; } public Frog(int tongueLength) { TongueLength = tongueLength; } } } 2 Answers Brendan WhitingFront End Web Development Treehouse Moderator 84,129 Points frog is a variable of type Frog, which is a class that has a TongueLength property. frogs is a variable of type Frog[], a list of frogs. Lists don't have TongueLength, individual frogs in the list do. Try iterating through all the lists and calculating a sum of tongue lengths, and then dividing that sum by the number of frogs in the list. Dalin Nkulu15,587 Points hey Brendan are you on social media? if yes what your handle? Brendan WhitingFront End Web Development Treehouse Moderator 84,129 Points Brendan WhitingFront End Web Development Treehouse Moderator 84,129 Points I'm on LinkedIn
https://teamtreehouse.com/community/foreach-loop-challenge-i-dont-understand-the-conditional-statement
CC-MAIN-2019-51
refinedweb
216
65.62
I am reading a book titled "An Introduction to GCC" and would like some clarification. The book indicates that the code below will cause an error but when I compile, it builds and runs perfectly: #include <math.h> #include <stdio.h> int main (void) { double x = sqrt (2.0); printf ("The square root of 2.0 is %f\n", x); return 0; } $ gcc -Wall calc.c -o calc /tmp/ccbR6Ojm.o: In function `main': /tmp/ccbR6Ojm.o(.text+0x19): undefined reference to `sqrt' $ gcc -Wall calc.c /usr/lib/libm.a -o calc sqrt (2.0); Modern GCC is well capable to determine that you are trying to find square root of a constant and thus it is able to calculate that in compile time. Your object code does not contain an actual call to sqrt. If you use a variable which is input via scanf at run then it won't link without libm. int main() { double x; scanf("%lf", &x); printf("%lf\n", sqrt(x)); return 0; } Without libm gcc 4.8.4 on Ubuntu 14.04 this results: /tmp/ccVO2fRY.o: In function `main': sqrt.c:(.text+0x2c): undefined reference to `sqrt' collect2: error: ld returned 1 exit status But if I put a constant instead of x like your example then it links fine without libm. P.S: I don't know the exact version since when GCC is able to do this. Hopefully someone else can point to that.
https://codedump.io/share/1BMQAfS8xkqw/1/gcc-linking-with-static-libraries
CC-MAIN-2018-13
refinedweb
244
77.64
I really enjoyed the series Firefly that aired a few years back. I've enjoyed it a few more times since. I'm really looking forward to the upcoming movie. So much so, in fact, that I even wanted to try to get into one of the pre-screenings. There's a website that announces dates and locations for pre-screenings. Being pretty lazy and also forgetful, I can't be bothered to actually check the website frequently enough to notice when new tickets are available before they are all sold out. Fortunately, being a programmer, I can simply bend a minion to my will. I whipped up this quickie to do the checking for me: import rfc822, md5, sys from twisted.internet import task, reactor from twisted.mail import smtp from twisted.python import log from twisted.web import client url = "" MSG = """\ Subject: The upcoming film, Serenity, is awesome Date: %(date)s From: "A Respectable Internet Citizen" <[email protected]> Message-ID: %(msgid)s Content-Type: text/html <html> <body> Serenity hash changed! Go look! <a href="%(url)s">%(url)s</a> </body> </html> """ def mailinfo(): return { 'date': rfc822.formatdate(), 'msgid': smtp.messageid(), 'url': url} lastDig = None def main(): def got(page): global lastDig dig = md5.md5(page).hexdigest() if dig != lastDig: if lastDig is not None: smtp.sendmail( sys.argv[1], # SMTP host sys.argv[2], # sender email address sys.argv[3:], # recipient email addresses MSG % mailinfo()) lastDig = dig client.getPage(url).addCallback(got) log.startLogging(sys.stdout) task.LoopingCall(main).start(60 * 30) reactor.run() The took about 10 minutes to write and even worked on the 2nd try (the first try I forget recipient addresses had to be a list and gave it a string instead). Then it ran for 21 days before sending me an email a few days ago! Alas, I don't live in Great Britain, or even Europe. It has also, of course, sent out an email once each time one of the listed theaters sells out. And now they're all sold out. :( Given the short period of time remaining before the probable release date, it seems unlikely there will be any more pre-screenings.
http://as.ynchrono.us/2005/08/simple-twisted-application-for-fun_11.html
CC-MAIN-2017-34
refinedweb
364
68.57
This data using the Scikit-Learn library. Sentiment analysis refers to analyzing an opinion or feelings about something using data like text or images, regarding almost anything. Sentiment analysis helps companies in their decision-making process. For instance, if public sentiment towards a product is not so good, a company may try to modify the product or stop the production altogether in order to avoid any losses. There are many sources of public sentiment e.g. public interviews, opinion polls, surveys, etc. However, with more and more people joining social media platforms, websites like Facebook and Twitter can be parsed for public sentiment. In this article, we will see how we can perform sentiment analysis of text data. Problem Definition Given tweets about six US airlines, the task is to predict whether a tweet contains positive, negative, or neutral sentiment about the airline. This is a typical supervised learning task where given a text string, we have to categorize the text string into predefined categories. Solution To solve this problem, we will follow the typical machine learning pipeline. We will first import the required libraries and the dataset. We will then do exploratory data analysis to see if we can find any trends in the dataset. Next, we will perform text preprocessing to convert textual data to numeric data that can be used by a machine learning algorithm. Finally, we will use machine learning algorithms to train and test our sentiment analysis models. Importing the Required Libraries The first step as always is to import the required libraries: import numpy as np import pandas as pd import re import nltk import matplotlib.pyplot as plt %matplotlib inline Note: All the scripts in the article have been run using the Jupyter Notebook. Importing the Dataset The dataset that we are going to use for this article is freely available at this Github link. To import the dataset, we will use the Pandas read_csv function, as shown below: data_source_url = "" airline_tweets = pd.read_csv(data_source_url) Let's first see how the dataset looks like using the head() method: airline_tweets.head() The output looks like this: Data Analysis Let's explore the dataset a bit to see if we can find any trends. But before that, we will change the default plot size to have a better view of the plots. Execute the following script: plot_size = plt.rcParams["figure.figsize"] print(plot_size[0]) print(plot_size[1]) plot_size[0] = 8 plot_size[1] = 6 plt.rcParams["figure.figsize"] = plot_size Let's first see the number of tweets for each airline. We will plot a pie chart for that: airline_tweets.airline.value_counts().plot(kind='pie', autopct='%1.0f%%') In the output, you can see the percentage of public tweets for each airline. United Airline has the highest number of tweets i.e. 26%, followed by US Airways (20%). Let's now see the distribution of sentiments across all the tweets. Execute the following script: airline_tweets.airline_sentiment.value_counts().plot(kind='pie', autopct='%1.0f%%', colors=["red", "yellow", "green"]) The output of the script above look likes this: From the output, you can see that the majority of the tweets are negative (63%), followed by neutral tweets (21%), and then the positive tweets (16%). Next, let's see the distribution of sentiment for each individual airline, airline_sentiment = airline_tweets.groupby(['airline', 'airline_sentiment']).airline_sentiment.count().unstack() airline_sentiment.plot(kind='bar') The output looks like this: It is evident from the output that for almost all the airlines, the majority of the tweets are negative, followed by neutral and positive tweets. Virgin America is probably the only airline where the ratio of the three sentiments is somewhat similar. Finally, let's use the Seaborn library to view the average confidence level for the tweets belonging to three sentiment categories. Execute the following script: import seaborn as sns sns.barplot(x='airline_sentiment', y='airline_sentiment_confidence' , data=airline_tweets) The output of the script above looks like this: From the output, you can see that the confidence level for negative tweets is higher compared to positive and neutral tweets. Enough of the exploratory data analysis, our next step is to perform some preprocessing on the data and then convert the numeric data into text data as shown below. Data Cleaning Tweets contain many slang words and punctuation marks. We need to clean our tweets before they can be used for training the machine learning model. However, before cleaning the tweets, let's divide our dataset into feature and label sets. Our feature set will consist of tweets only. If we look at our dataset, the 11th column contains the tweet text. Note that the index of the column will be 10 since pandas columns follow zero-based indexing scheme where the first column is called 0th column. Our label set will consist of the sentiment of the tweet that we have to predict. The sentiment of the tweet is in the second column (index 1). To create a feature and a label set, we can use the iloc method off the pandas data frame. Execute the following script: features = airline_tweets.iloc[:, 10].values labels = airline_tweets.iloc[:, 1].values Once we divide the data into features and training set, we can preprocess data in order to clean it. To do so, we will use regular expressions. To study more about regular expressions, please take a look at this article on regular expressions. processed_features = [] for sentence in range(0, len(features)): # Remove all the special characters processed_feature = re.sub(r'\W', ' ', str(features[sentence])) # remove all single characters processed_feature= re.sub(r'\s+[a-zA-Z]\s+', ' ', processed_feature) # Remove single characters from the start processed_feature = re.sub(r'\^[a-zA-Z]\s+', ' ', processed_feature) # Substituting multiple spaces with single space processed_feature = re.sub(r'\s+', ' ', processed_feature, flags=re.I) # Removing prefixed 'b' processed_feature = re.sub(r'^b\s+', '', processed_feature) # Converting to Lowercase processed_feature = processed_feature.lower() processed_features.append(processed_feature) In the script above, we start by removing all the special characters from the tweets. The regular expression re.sub(r'\W', ' ', str(features[sentence])) does that. Next, we remove all the single characters left as a result of removing the special character using the re.sub(r'\s+[a-zA-Z]\s+', ' ', processed_feature) regular expression. For instance, if we remove special character ' from Jack's and replace it with space, we are left with Jack s. Here s has no meaning, so we remove it by replacing all single characters with a space. However, if we replace all single characters with space, multiple spaces are created. Therefore, we replace all the multiple spaces with single spaces using re.sub(r'\s+', ' ', processed_feature, flags=re.I) regex. Furthermore, if your text string is in bytes format a character b is appended with the string. The above script removes that using the regex re.sub(r'^b\s+', '', processed_feature). Finally, the text is converted into lowercase using the lower() function. Representing Text in Numeric Form Statistical algorithms use mathematics to train machine learning models. However, mathematics only work with numbers. To make statistical algorithms work with text, we first have to convert text to numbers. To do so, three main approaches exist i.e. Bag of Words, TF-IDF and Word2Vec. In this section, we will discuss the bag of words and TF-IDF scheme. Bag of Words Bag of words scheme is the simplest way of converting text to numbers. For instance, you have three documents: - Doc1 = "I like to play football" - Doc2 = "It is a good game" - Doc3 = "I prefer football over rugby" In the bag of words approach the first step is to create a vocabulary of all the unique words. For the above three documents, our vocabulary will be: Vocab = [I, like, to, play, football, it, is, a, good, game, prefer, over, rugby] The next step is to convert each document into a feature vector using the vocabulary. The length of each feature vector is equal to the length of the vocabulary. The frequency of the word in the document will replace the actual word in the vocabulary. If a word in the vocabulary is not found in the corresponding document, the document feature vector will have zero in that place. For instance, for Doc1, the feature vector will look like this: [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0] TF-IDF In the bag of words approach, each word has the same weight. The idea behind the TF-IDF approach is that the words that occur less in all the documents and more in individual document contribute more towards classification. TF-IDF is a combination of two terms. Term frequency and Inverse Document frequency. They can be calculated as: TF = (Frequency of a word in the document)/(Total words in the document) IDF = Log((Total number of docs)/(Number of docs containing the word)) TF-IDF using the Scikit-Learn Library Luckily for us, Python's Scikit-Learn library contains the TfidfVectorizer class that can be used to convert text features into TF-IDF feature vectors. The following script performs this: from nltk.corpus import stopwords from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer (max_features=2500, min_df=7, max_df=0.8, stop_words=stopwords.words('english')) processed_features = vectorizer.fit_transform(processed_features).toarray() In the code above, we define that the max_features should be 2500, which means that it only uses the 2500 most frequently occurring words to create a bag of words feature vector. Words that occur less frequently are not very useful for classification. Similarly, max_df specifies that only use those words that occur in a maximum of 80% of the documents. Words that occur in all documents are too common and are not very useful for classification. Similarly, min-df is set to 7 which shows that include words that occur in at least 7 documents. Dividing Data into Training and Test Sets In the previous section, we converted the data into the numeric form. As the last step before we train our algorithms, we need to divide our data into training and testing sets. The training set will be used to train the algorithm while the test set will be used to evaluate the performance of the machine learning model. Execute the following code: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(processed_features, labels, test_size=0.2, random_state=0) In the code above we use the train_test_split class from the sklearn.model_selection module to divide our data into training and testing set. The method takes the feature set as the first parameter, the label set as the second parameter, and a value for the test_size parameter. We specified a value of 0.2 for test_size which means that our data set will be split into two sets of 80% and 20% data. We will use the 80% dataset for training and 20% dataset for testing. Training the Model Once data is split into training and test set, machine learning algorithms can be used to learn from the training data. You can use any machine learning algorithm. However, we will use the Random Forest algorithm, owing to its ability to act upon non-normalized data. The sklearn.ensemble module contains the RandomForestClassifier class that can be used to train the machine learning model using the random forest algorithm. To do so, we need to call the fit method on the RandomForestClassifier class and pass it our training features and labels, as parameters. Look at the following script: from sklearn.ensemble import RandomForestClassifier text_classifier = RandomForestClassifier(n_estimators=200, random_state=0) text_classifier.fit(X_train, y_train) Making Predictions and Evaluating the Model Once the model has been trained, the last step is to make predictions on the model. To do so, we need to call the predict method on the object of the RandomForestClassifier class that we used for training. Look at the following script: predictions = text_classifier.predict(X_test) Finally, to evaluate the performance of the machine learning models, we can use classification metrics such as a confusion metrix, F1 measure, accuracy, etc. To find the values for these metrics, we can use classification_report, confusion_matrix, and accuracy_score utilities from the sklearn.metrics library. Look a the following script: from sklearn.metrics import classification_report, confusion_matrix, accuracy_score print(confusion_matrix(y_test,predictions)) print(classification_report(y_test,predictions)) print(accuracy_score(y_test, predictions)) The output of the script above looks like this: [[1724 101 45] [ 329 237 48] [ 142 58 244]] precision recall f1-score support negative 0.79 0.92 0.85 1870 neutral 0.60 0.39 0.47 614 positive 0.72 0.55 0.62 444 micro avg 0.75 0.75 0.75 2928 macro avg 0.70 0.62 0.65 2928 weighted avg 0.74 0.75 0.73 2928 0.7530737704918032 From the output, you can see that our algorithm achieved an accuracy of 75.30. Conclusion The sentiment analysis is one of the most commonly performed NLP tasks as it helps determine overall public opinion about a certain topic. In this article, we saw how different Python libraries contribute to performing sentiment analysis. We performed an analysis of public tweets regarding six US airlines and achieved an accuracy of around 75%. I would recommend you to try and use some other machine learning algorithm such as logistic regression, SVM, or KNN and see if you can get better results. In the next article I'll be showing how to perform topic modeling with Scikit-Learn, which is an unsupervised technique to analyze large volumes of text data by clustering the documents into groups.
https://stackabuse.com/python-for-nlp-sentiment-analysis-with-scikit-learn/
CC-MAIN-2019-43
refinedweb
2,251
56.15
Polymorphism (C# Programming Guide) Polymorphism is often referred to as the third pillar of object-oriented programming, after encapsulation and inheritance. Polymorphism is a Greek word that means "many-shaped" and it has two distinct aspects:.. using System; using System.Collections.Generic;. var shapes = new List<Shape> { new Rectangle(), new Triangle(), new Circle() }; // Polymorphism at work #2: the virtual method Draw is // invoked on each of the derived classes, not the base class. foreach (var shape in shapes) { shape. Polymorphism Overview abstract. The derived member must use the override keyword to explicitly indicate that the method is intended to participate in virtual invocation. The following code provides an example: public class BaseClass { public virtual void DoWork() { } public virtual int WorkProperty { get { return 0; } } } public class DerivedClass : BaseClass { public override void DoWork() { } public override int WorkProperty { get { return 0; } } }: DerivedClass B = new DerivedClass(); B.DoWork(); // Calls the new method. BaseClass A = (BaseClass)B; A.DoWork(); // Also calls the new method. Virtual methods and properties enable derived classes to extend a base class without needing to use the base class implementation of a method. For more information, see Versioning with the Override and New Keywords. An interface provides another way to define a method or set of methods whose implementation is left to derived classes. For more information, see: public class BaseClass { public void DoWork() { WorkField++; } public int WorkField; public int WorkProperty { get { return 0; } } } public class DerivedClass : BaseClass { public new void DoWork() { WorkField++; } public new int WorkField; public new int WorkProperty { get { return 0; } } } Hidden base class members can still be accessed from client code by casting the instance of the derived class to an instance of the base class. For example: DerivedClass B = new DerivedClass(); B.DoWork(); // Calls the new method. BaseClass A = (BaseClass)B; A.DoWork(); // Calls the old method.: public class A { public virtual void DoWork() { } } public class B : A { public override void DoWork() { } } A derived class can stop virtual inheritance by declaring an override as sealed. This requires putting the sealed keyword before the override keyword in the class member declaration. The following code provides an example: public class C : B { public sealed override void DoWork() { } } In the previous example, the method DoWork is no longer virtual to any class derived from C. It is still virtual for instances of C, even if they are cast to type B or type A. Sealed methods can be replaced by derived classes by using the new keyword, as the following example shows: public class D : C { public new void DoWork() { } }. Accessing Base Class Virtual Members from Derived Classes A derived class that has replaced or overridden a method or property can still access the method or property on the base class using the base keyword. The following code provides an example: public class Base { public virtual void DoWork() {/*...*/ } } public class Derived : Base { public override void DoWork() { //Perform Derived's work here //... // Call DoWork on base class base.DoWork(); } } For more information, see base. Note It is recommended that virtual members use base to call the base class implementation of that member in their own implementation. Letting the base class behavior occur enables the derived class to concentrate on implementing behavior specific to the derived class. If the base class implementation is not called, it is up to the derived class to make their behavior compatible with the behavior of the base class. In This Section Versioning with the Override and New Keywords Knowing When to Use Override and New Keywords How to: Override the ToString Method See also Feedback
https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/polymorphism
CC-MAIN-2019-47
refinedweb
590
53.71
Today, we are going to learn about Generators, what they are, how to use them, and their advantages. Before going further, make sure that you know iterators. If you don’t, learn about them here. Python Generator Function A generator function is a function that returns a generator object, which is iterable, i.e., we can get an iterator from it. Unlike the usual way of creating an iterator, i.e., through classes, this way is much simpler. Moreover, regular functions in Python execute in one go. In other words, they cannot be stopped midway and rerun from that point. The lines of code after the return statement never get executed. In a generator function, however, we use yield expression instead of return, which allows us to maintain the local state of the function. When we invoke a generator function, it just returns an iterator. Note that it does not start to execute at this point. Let’s understand how a generator function works through an example. def test_generator(): i = 0 print("First execution point") yield i i += 1 print("Second execution point") yield i i += 1 print("Third execution point") yield i obj = test_generator() print(obj) Output <generator object test_generator at 0x7fad8b858a98> The above test_generator() function contains a variable i that gets incremented after each yield statement. When we call test_generator(), it returns a generator object, which is an iterator. Python next() To start its execution, we can use the next() method that takes the generator object as an argument. In doing so, the function runs until it encounters a yield statement. After that, the function returns the next value in the iterator and hands over the transfer to the caller, but its state is maintained. When we call the next() method again, the function begins its execution after the last encountered yield statement. Let’s see. print(f"The value of i is {next(obj)}") print(f"The value of i is {next(obj)}") print(f"The value of i is {next(obj)}") Output First execution point The value of i is 0 Second execution point The value of i is 1 Third execution point The value of i is 2 As you can see, the variable i does not get destroyed, i.e., the value from the previous call is maintained. Python StopIteration Exception What happens if we try to call the next() method one more time? As the function does not contain any more lines, so we will get a StopIteration exception, i.e., print(f"The value of i is {next(obj)}") Output StopIteration Traceback (most recent call last) <ipython-input-13-f06cc408a533> in <module>() ----> 1 print(f"The value of i is {next(obj)}") StopIteration: for loop Instead of calling next() each time we want to get the next item, we can use the for loop that will implicitly use next() until the generator exhausts. It is simple, short, and eliminates the chances of raising a StopIteration exception (as mentioned above). Let’s see. obj = test_generator() for val in obj: print(f"The value of i is {val}") Output First execution point The value of i is 0 Second execution point The value of i is 1 Third execution point The value of i is 2 To reiterate, we need to call the generator function again. Python Generator Expressions Generator Expressions are an even shorter way of creating iterators, and they are anonymous generator functions. They are similar to list comprehensions and defined in the same way, except it requires parenthesis instead of square brackets. They provide the same functionality as list comprehensions do but are more memory efficient and provide high performance. Let’s take an example to understand them. obj1 = [x * x for x in range(0, 9)] # list comprehension print(obj1) obj2 = (x * x for x in range(0, 9)) # generator expression print(obj2) for item in obj2: print(item) Output [0, 1, 4, 9, 16, 25, 36, 49, 64] <generator object <genexpr> at 0x7fad83a7e620> 0 1 4 9 16 25 36 49 64 Memory Efficient and High Performance List comprehensions store the complete list in memory. However, the generator expression only holds the generator object in memory. Therefore, they are more memory efficient and provide high performance. Consider the same example as above, except we are calculating the squares of numbers from 1 to 1 million here. import time import sys start = time.time() obj1 = [x * x for x in range(0, 1000000)] # list comprehension end = time.time() print(f"Time Taken: {end-start}seconds") print(f"Memory: {sys.getsizeof(obj1)} bytes\n") start = time.time() obj2 = (x * x for x in range(0, 1000000)) # generator expression end = time.time() print(f"Time Taken: {end-start} seconds") print(f"Memory: {sys.getsizeof(obj2)} bytes") Output Time Taken: 0.10663628578186035seconds Memory: 8697464 bytes Time Taken: 8.130073547363281e-05 seconds Memory: 88 bytes See the difference Infinite Sequence As we have a limited amount of memory, we cannot generate an infinite sequence using usual methods. However, we can produce it using generators since they do not need to store complete data in memory. Consider the following example that generates the square number series. def infinite_square_number(): i = 0 while True: yield i * i i += 1 sequence = infinite_square_number() for num in sequence: print(num, end=", ") Output Python send() The send() method sends a value to a generator function. When we call the send() method, the execution of the function resumes, and the value passed in the argument becomes the result of the yield expression from the previous execution point. If you use the send() method to call the generator function for the first time, then pass None as the argument as no yield expression is executed yet. import random def test_generator(n, recv=0): for i in range(n): recv = yield i * recv n = 5 obj = test_generator(n) print(obj.send(None), end=" ") for i in range(1, n): print(obj.send(random.randint(1, 10)), end=" ") Output 0 8 18 15 20 The test_generator() function runs the loop for n number of times. It yields the product of the current iteration number and the random integer sent through the send() method. Note that we pass None in send() in the first call to the test_generator() function. The close() method stops the generator. It closes the generator by raising the GeneratorExit exception. If the generator has already exited, either by exhaustion (normal exit) or due to some exception, it does nothing. Consider the following example, in which the test_generator() function takes squares of numbers from 0 to n. However, we stop the generator when the yielded value is 25. import random def test_generator(n): j = 0 for i in range(n): yield j ** 2 j += 1 n = 1000 obj = test_generator(1000) for val in obj: print(val) if val == 25: obj.close() Output 0 1 4 9 16 25 throw() It is used to throw a specified exception. It takes three arguments. The first argument is the type of exception. The second optional argument represents the value of the error, and a third optional argument is a traceback object. Consider the same example as above, except it throws ValueError when the yielded value is greater than or equal to 1000. import random def test_generator(n): j = 0 for i in range(n): yield j ** 2 j += 1 n = 1000 obj = test_generator(1000) for val in obj: if val >= 1000: obj.throw(ValueError, "Too Large") print(val, end=" ") Output Creating a Generator Pipeline Generators can be chained together to create a pipeline, in which the output of one generator goes as an input to another generator. This will make your code and pipeline more readable. Consider the following example, where one generator reads a file. The file contains a sample list of Australian cricket players and their career periods. The names of the players are all in lowercase. We want to capitalize their first and last names. Let’s do it. lines = (line for line in open("sample.txt")) # read each line from the file words_list = ( words.capitalize() + " " for l in lines for words in l.split(" ") ) # split each line and capitalize line = " " for w in words_list: line += w print(line) Output James Pattinson 2011–2015 Pat Cummins 2011– Mitchell Marsh 2011– Daniel Christian 2012–2014 Matthew Wade 2012– Peter Forrest 2012 Nathan Lyon 2012– George Bailey 2012–2016 Glenn Maxwell 2012– Aaron Finch 2013– The first generator expression yields each line of the file. The second generator expression splits each line into a list of words, and then each word is capitalized.
https://www.codesdope.com/blog/article/python-generators/
CC-MAIN-2021-39
refinedweb
1,428
54.12
Time/Place This meeting is a hybrid teleconference and IRC chat. Anyone is welcome to join...here's the info: - Time: 11:00am Eastern Daylight Time US (UTC #fcrepo chat room via Freenode Web IRC (enter a unique nick) - Or point your IRC client to #fcrepo on irc.freenode.net Attendees - A. Soroka - Unknown User (acoburn) - Michael Durbin - - Open Repositories committer meeting planning From last week: High-priority issuesClick here to expand... - Audit issues: - LDP - Performance - Clean up Tickets resolved this week: Tickets created this week: - ... Minutes Islandora/F4 migration - Some migration-utils issues turned out to be difficult - Utility will maintain a list of mappings during migration - Currently no tooling to address URL redirects at the application level (e.g. redirecting a legacy F3 URL request to the new F4 URL) - Aaron plans on using a key-value store to handle the redirects - Use a service to do a lookup in the database and redirect to the new F4 URL - Pull request to provide properties file to declare namespace prefixes before migration starts - Otherwise default namespaces get created (ns01, ns02, etc.) - Namespace prefixes can’t be changed once created (as per ModeShape). But they can be remapped - Could build a separate utility to make such changes. Would not be accessible via API - Andrew will create a ticket : - FCREPO-1567Getting issue details... STATUS OR2015 conference planning - PCDM - making sure our efforts are aligned - What aspects of PCDM are now implemented in Sufia? Do we all agree and plan to implement elsewhere accordingly? - Perhaps discuss over lunch? - Google Doc signup sheet? Doodle poll? - Committers meeting - People should come with a plan to be actively involved in the discussions - Agenda online - Software dependencies - Packages, OSGi support, dependency chains, etc. - Migrations plans - When are people migrating? - What are the blockers?
https://wiki.duraspace.org/display/FF/2015-06-04+-+Fedora+Tech+Meeting
CC-MAIN-2019-09
refinedweb
297
55.44
Asked by: Embedded XAML Comments - Hi, I was just wondering if there are any plans to allow embedded comments in the WPF designer? This 'issue' drives me crazy on an almost daily basis! When playing with XAML templates and the like, I often comment bits of the template out, temporarily. This works fine, until I want to comment out a whole section that already contains comments... it's not possible! I would have to manually remove all the sub-comments, before I can successfully comment out the whole block. I tend to cut/paste the whole block to notepad and back again when required - a real pain. I realise this is due to the fact that XAML is very much like XML and as such, does not allow two adjacent hyphens within a comment (). Are there any plans to allow this behaviour in the future? I'm surprised I couldn't find any other posts regarding this, I'm sure I'm not the only one pulling my hair out over this! Regards, DaveMonday, December 22, 2008 4:32 PM General discussion All replies I suspect most people don't use comments in XAML files at all, let alone nested ones. There's no way to allow nested XML comments without breaking every single tool in the XAML tool chain, but I suppose Microsoft could introduce a pseudo-element that directs the XAML parser to ignore everything within the paired elements, e.g. <comment>...</comment>. Such comments could be nested.Tuesday, December 23, 2008 11:13 AM - A <comment> tag that would be ignored by XAML seems like it would be handy. Not being able to nest comments makes it a bit of a hassle to experiment with enabling/disabling chunks of XAML that has comments. I would certainly use this feature if it were available. Saturday, January 03, 2009 3:23 AM - I think every tag in XAML is easy to understand,so it is not necessary to embedded comments in the WPF designer.Monday, January 05, 2009 2:39 PM - Hi MMHunter, I agree, all XAML tags are easy to understand and should not require comments - you can already add comments should you wish and this is all good. What I am on about (that Eric stated above) is the ability to temporarily comment sections of actual XAML markup - whilst working out how best to design a window/page. The <comment>...</comment> tags would be great, good idea Chris... so long as they could be added/removed by selecting a section of markup and using ctrl+kc / ctrl+ku. I use this all the time in the code editor - if you select a block of code that already contains some lines that have comments and do ctrl+kc, a second comment mark is added to the start of any line that is already commented. This way, if you subsequently un-comment the same section, the originally commented lines will still be commented... like a multi-stage undo for commenting lines. It is this type of behaviour I would love to see in the XAML editor, which I believe could be possible using the <comment> tag approach (as opposed to adding a 'comment' character to the start of each line - like the code editor does). Using the tag approach may also mean the outlining would keep working... another major annoyance to using the <!-- --> tags in the XAML designer is the lack of outlining... e.g. I have just commented out a lengthy resource at the top of a XAML file, as I think there may be a better way to achieve what I want (but don't want to delete the resource until I'm happy it's not required). Whilst not commented, I get a nice little '-' symbol next to the resource, which allows me to collapse the 100 lines the resource occupies, to just one, allowing me to simply see the rest of the markup below. Once I comment the resource, all outlining is turned off... I can't now collapse the commented resource; I have to scroll right down to see the rest of the markup!!! very annoying. A bonus would be for the <comment> tags and all text in-between the tags, to be coloured green. Anyone have any thoughts? Or does anyone know if Microsoft plans on enhancing the commenting and/or outlining in the XAML designer in the future? Regards, DaveMonday, January 05, 2009 3:13 PM - thank you for your wonderful reply!Tuesday, January 06, 2009 7:00 AM - Hello, you can use the markup compatibility mc:Ignorable attribute in your root element: Blend uses the Ignorable attribute to disable its namespace. You will have to replace mc:Ignorable="d" with mc:Ignorable="d c". mc:Ignorable does not apply to namespace mappings into assemblies so if you want to ignore elements from such a namespace you can wrap them in a dummy element <c:Comment></c:Comment>. So I guess you already have your comment tag. I hope this helps, have a look here for more info:, February 23, 2009 7:01 PM - How would this work for child elements? <c:Grid> <Button Content="Click Me" /> </c:Grid>In the example above is the Button also considered a comment as well because it is a child of an element marked with the namespace "c"?Wednesday, October 07, 2009 1:25 PM Yes, all this markup will be effectively ignored.Wednesday, October 07, 2009 1:27 PM <c:Comment></c:Comment> works great. Thanks. Saturday, January 22, 2011 11:10 PM simple write <!--> to open and close the comment. Best Regards, Hen <Window.Resources> <Style TargetType="{x:Type RadioButton}"> <Style.Triggers> <!--> comments <!--> </Style.Triggers> </Style> </Window.Resources> </Window>Monday, March 28, 2011 11:56 AM - <Window.Resources><Style TargetType="{x:Type RadioButton}"><Style.Triggers><!-->comments<!--></Style.Triggers></Style></Window.Resources></Window>Monday, March 28, 2011 11:57 AM
https://social.msdn.microsoft.com/Forums/en-US/146acdbb-934f-479f-8f13-01693b7a9ed4/embedded-xaml-comments?forum=vswpfdesigner
CC-MAIN-2015-40
refinedweb
986
62.27
What would be the best way to extend the Mesh class? I would like to add some properties to it that will be inherited by all its subclasses. It seems that just adding stuff to my Mesh.as file wouldn't be a good way to do it. The other option would be to extend all the subclasses and copy and paste the new properties, but that doesn't sound good either. Extend & override? Well if you extend a superclass, that doesn't affect any of the original superclass's subclasses. I do this. I have classes that inherit from Mesh, which all my other classes inherit from. E.g. the only class that inherits directly from Mesh is this, package Qyx.View.Mesh { import starling.display.Mesh; import starling.rendering.IndexData; import starling.rendering.VertexData; public class BaseMesh extends Mesh { private var _id :int; function BaseMesh(vData:VertexData, iData:IndexData) { super(vData, iData); pixelSnapping = false; } // the ID [inline] public final function get iID():int {return _id;} [inline] public final function set iID(val:int):void {_id = val;} public function set fAlpha(val:Number):void { throw new Error("fAlpha cannot be set on base class BaseMesh"); } } } If you e.g. have a lot of classes that derive from Mesh, or a lot of places you create Mesh objects directly, you can just search and replace Mesh with the new class name. If you use a custom MeshStyle when you create your Meshes you would need that adding to the constructor the same way as the IndexData and VertexData. Another alternative is to put those methods on a "MeshStyle" subclass instead of "Mesh". Because a MeshStyle has access to vertexData and indexData of the mesh it's attached to, it can very often do just the same as a Mesh subclass. And since you can set a default MeshStyle ("Mesh.defaultStyle"), you can then make sure that every mesh (that doesn't have a custom style assigned) uses that style. The only downside is that you have to cast your "mesh.style" ("quad.style", etc) to your custom style whenever you want to access those additional properties and methods. But a helper method could simplify that. Thanks for the suggestions everyone. In the end, I decided to extend Quad and then edit starling.display.Image to extend that instead of the original Quad. I just have to remember to edit Image every time I update Starling. If you're wondering, I was trying to add a color setter for the Sprite class. At first I thought it would be easy, and all I had to do was add a method that would set each child Mesh's color, but then I realized, what if the child already had a color? That color would have to be multiplied. So I created an InterfaceRGB interface, which my DisplayObjectContainerRGB and QuadRGB classes implement. Sprite, Sprite3D, and TextField now extend DisplayObjectContainerRGB, and Image extends QuadRGB. Basically, my RGB classes have a "true" RGB vector and a "multiplied" RGB vector. When a child gets added, the child's multiplied vector is its true vector multiplied by the parent's multiplied vector, then the child's multiplied vector is used to set it's displayed color.
https://forum.starling-framework.org/d/20598-extending-the-mesh-class
CC-MAIN-2019-22
refinedweb
539
64.61
A type parameter may set bounds on the type used, by setting an upper limit (in inheritance diagram terms) on the class used. The extends keyword is used to mean that the class must either be an instance of the specified boundary class, or extend it, or, if it is an interface, implement it: public class EmployeeLocator<T extends Employee> { . . . } public class CheckPrinter<T extends Payable> { . . . } In the first case, the class may be parameterized with Employee, or any class that extends Employee. In the second, the class may be parameterized with Payable or any type that implements the Payable interface.
https://www.webucator.com/tutorial/learn-java/collections/bounded-types-reading.cfm
CC-MAIN-2017-04
refinedweb
101
59.84
Celery aims to be a flexible and reliable, best-of-breed solution to process vast amounts of messages in a distributed fashion, while providing operations with the tools to maintain such a system. Celery has a large and diverse community of users and contributors, you should come join us on IRC or our mailing-list. To read more about Celery you should visit our website. While this version is backward compatible with previous versions it is important that you read the following section. If you use Celery in combination with Django you must also read the django-celery changelog <djcelery:version-2.5.0> and upgrade to django-celery 2.5. This version is officially supported on CPython 2.5, 2.6, 2.7, 3.2 and 3.3, as well as PyPy and Jython. The default limit is 10 connections, if you have many threads/green-threads using connections at the same time you may want to tweak this limit to avoid contention. See the BROKER_POOL_LIMIT setting for more information. Also note that publishing tasks will be retried by default, to change this default or the default retry policy see CELERY_TASK_PUBLISH_RETRY and CELERY_TASK_PUBLISH_RETRY_POLICY. The exchange used for results in the Rabbit (AMQP) result backend used to have the auto_delete flag set, which could result in a race condition leading to an annoying warning. For RabbitMQ users Old exchanges created with the auto_delete flag enabled has to be removed. The camqadm command can be used to delete the previous exchange: $ camqadm exchange.delete celeryresults As an alternative to deleting the old exchange you can configure a new name for the exchange: CELERY_RESULT_EXCHANGE = 'celeryresults2' But you have to make sure that all clients and workers use this new setting, so they are updated to use the same exchange name. The CELERYD_FORCE_EXECV setting has been added to solve a problem with deadlocks that originate when threads and fork is mixed together: CELERYD_FORCE_EXECV = True This setting is recommended for all users using the prefork pool, but especially users also using time limits or a max tasks per child setting. Enabling this option will result in a slight performance penalty when new child worker processes are started, and it will also increase memory usage (but many platforms are optimized, so the impact may be minimal). Considering that it ensures reliability when replacing lost worker processes, it should be worth it. Celery can now be configured to treat all incoming and outgoing dates as UTC, and the local timezone can be configured. This is not yet enabled by default, since enabling time zone support means workers running versions pre 2.5 will be out of sync with upgraded workers. To enable UTC you have to set CELERY_ENABLE_UTC: CELERY_ENABLE_UTC = True When UTC is enabled, dates and times in task messages will be converted to UTC, and then converted back to the local timezone when received by a worker. You can change the local timezone using the CELERY_TIMEZONE setting. Installing the pytz library is recommended when using a custom timezone, to keep timezone definition up-to-date, but it will fallback to a system definition of the timezone if available. UTC will enabled by default in version 3.0. A new serializer has been added that signs and verifies the signature of messages. The name of the new serializer is auth, and needs additional configuration to work (see Security). Contributed by Mher Movsisyan. Starting celeryd Celeryd -l info --autoreload Contributed by Mher Movsisyan. This new setting enables the configuration to modify task classes and their attributes. The setting can be a dict, or a list of annotation objects that filter for tasks and return a map of attributes to change. As an example, this is an annotation: %r' % (exc, )) CELERY_ANNOTATIONS = {'*': {'on_failure': my_on_failure}} If you need more flexibility then you can also create objects that filter for tasks to annotate: class MyAnnotate(object): def annotate(self, task): if task.name.startswith('tasks.'): return {'rate_limit': '10/s'} CELERY_ANNOTATIONS = (MyAnnotate(), {…}) The new celery.task.current proxy will always give the currently executing task. Example: from celery.task import current, task @task def update_twitter_status(auth, message): twitter = Twitter(auth) try: twitter.update_status(message) except twitter.FailWhale, exc: # retry in 10 seconds. current.retry(countdown=10, exc=exc) Previously you would have to type update_twitter_status.retry(…) here, which can be annoying for long task names. Note This will not work if the task function is called directly, i.e: update_twitter_status(a, b). For that to work apply must be used: update_twitter_status.apply((a, b)). Now depends on Kombu 2.1.0. Efficient Chord support for the memcached backend (Issue #533) This means memcached joins Redis in the ability to do non-polling chords. Contributed by Dan McGee. Adds Chord support for the Rabbit result backend (amqp) The Rabbit result backend can now use the fallback chord solution. Sending QUIT to celeryd will now cause it cold terminate. That is, it will not finish executing the tasks it is currently working on. Contributed by Alec Clowes. New “detailed” mode for the Cassandra backend. Allows to have a “detailed” mode for the Cassandra backend. Basically the idea is to keep all states using Cassandra wide columns. New states are then appended to the row as new columns, the last state being the last column. See the CASSANDRA_DETAILED_MODE setting. Contributed by Steeve Morin. The crontab parser now matches Vixie Cron behavior when parsing ranges with steps (e.g. 1-59/2). Contributed by Daniel Hepper. celerybeat can now be configured on the command-line like celeryd. Additional configuration must be added at the end of the argument list followed by --, for example: $ celerybeat -l info -- celerybeat.max_loop_interval=10.0 Now limits the number of frames in a traceback so that celeryd does not crash on maximum recursion limit exceeded exceptions (Issue #615). The limit is set to the current recursion limit divided by 8 (which is 125 by default). To get or set the current recursion limit use sys.getrecursionlimit() and sys.setrecursionlimit(). More information is now preserved in the pickleable traceback. This has been added so that Sentry can show more details. Contributed by Sean O’Connor. CentOS init script has been updated and should be more flexible. Contributed by Andrew McFague. MongoDB result backend now supports forget(). Contributed by Andrew McFague task.retry() now re-raises the original exception keeping the original stack trace. Suggested by ojii. The –uid argument to daemons now uses initgroups() to set groups to all the groups the user is a member of. Contributed by Łukasz Oleś. celeryctl: Added shell command. The shell will have the current_app (celery) and all tasks automatically added to locals. celeryctl: Added migrate command. The migrate command moves all tasks from one broker to another. Note that this is experimental and you should have a backup of the data before proceeding. Examples:$ celeryctl migrate redis://localhost amqp://localhost $ celeryctl migrate amqp://localhost//v1 amqp://localhost//v2 $ python manage.py celeryctl migrate django:// redis:// Routers can now override the exchange and routing_key used to create missing queues (Issue #577). By default this will always use the name of the queue, but you can now have a router return exchange and routing_key keys to set them. This is useful when using routing classes which decides a destination at runtime. Contributed by Akira Matsuzaki. Redis result backend: Adds support for a max_connections parameter. It is now possible to configure the maximum number of simultaneous connections in the Redis connection pool used for results. The default max connections setting can be configured using the CELERY_REDIS_MAX_CONNECTIONS setting, or it can be changed individually by RedisBackend(max_connections=int). Contributed by Steeve Morin. Redis result backend: Adds the ability to wait for results without polling. Contributed by Steeve Morin. MongoDB result backend: Now supports save and restore taskset. Contributed by Julien Poissonnier. There’s a new Security guide in the documentation. The init scripts has been updated, and many bugs fixed. Contributed by Chris Streeter. User (tilde) is now expanded in command-line arguments. Can now configure CELERYCTL envvar in /etc/default/celeryd. While not necessary for operation, celeryctl is used for the celeryd status command, and the path to celeryctl must be configured for that to work. The daemonization cookbook contains examples. Contributed by Jude Nagurney. The MongoDB result backend can now use Replica Sets. Contributed by Ivan Metzlar. gevent: Now supports autoscaling (Issue #599). Contributed by Mark Lavin. multiprocessing: Mediator thread is now always enabled, even though rate limits are disabled, as the pool semaphore is known to block the main thread, causing broadcast commands and shutdown to depend on the semaphore being released. Exceptions that are re-raised with a new exception object now keeps the original stack trace. Windows: Fixed the no handlers found for multiprocessing warning. Windows: The celeryd program can now be used. Previously Windows users had to launch celeryd using python -m celery.bin.celeryd. Redis result backend: Now uses SETEX command to set result key, and expiry atomically. Suggested by yaniv-aknin. celeryd: Fixed a problem where shutdown hanged when Ctrl+C was used to terminate. celeryd: No longer crashes when channel errors occur. Fix contributed by Roger Hu. Fixed memory leak in the eventlet pool, caused by the use of greenlet.getcurrent. Fix contributed by Ignas Mikalajūnas. Cassandra backend: No longer uses pycassa.connect() which is deprecated since pycassa 1.4. Fix contributed by Jeff Terrace. Fixed unicode decode errors that could occur while sending error emails. Fix contributed by Seong Wun Mun. celery.bin programs now always defines __package__ as recommended by PEP-366. send_task now emits a warning when used in combination with CELERY_ALWAYS_EAGER (Issue #581). Contributed by Mher Movsisyan. apply_async now forwards the original keyword arguments to apply when CELERY_ALWAYS_EAGER is enabled. celeryev now tries to re-establish the connection if the connection to the broker is lost (Issue #574). celeryev: Fixed a crash occurring if a task has no associated worker information. Fix contributed by Matt Williamson. The current date and time is now consistently taken from the current loaders now method. Now shows helpful error message when given a config module ending in .py that can’t be imported. celeryctl: The --expires and -eta arguments to the apply command can now be an ISO-8601 formatted string. celeryctl now exits with exit status EX_UNAVAILABLE (69) if no replies have been received.
http://docs.celeryproject.org/en/latest/whatsnew-2.5.html
CC-MAIN-2015-14
refinedweb
1,725
59.19
: Beginning Java Question on calling a method in an child class Toshiro Hitsuguya Greenhorn Posts: 19 posted 11 years ago import java.awt.BasicStroke; import java.awt.Color; import java.awt.Graphics2D; public abstract class BouncingDevice { int x; int y; Color color; int xDirection = 0; int yDirection = 0; final int SIZE = 20; public BouncingDevice(int startX, int startY, Color startColor) { x = startX; y = startY; color = startColor; } public void animate() { // Move the center of the object each time we draw it x += xDirection; y += yDirection; // If we have hit the edge, reverse direction if (x - SIZE/2 <= 0 || x + SIZE/2 >= SimpleDraw.WIDTH) { xDirection = -xDirection; } if (y - SIZE/2 <= 0 || y + SIZE/2 >= SimpleDraw.HEIGHT) { yDirection = -yDirection; } } public void moveInDirection(int xIncrement, int yIncrement) {//This is the method I am trying to call xDirection = xIncrement; yDirection = yIncrement; } } I am trying to call the moveInDirection method in this other class: import java.awt.BasicStroke; import java.awt.Color; import java.awt.Graphics2D;(); //This one works just fine for some reason } moveInDirection(xIncrement, yIncrement); //Trying to call the method here } Whenever, I try to call the method in my bouncing ball class, the constructor says "invalid method declaration; return type required". How can I fix it? Thank You Ernest Friedman-Hill author and iconoclast Posts: 24203 43 I like... posted 11 years ago That call to moveInDirection() appears just after the closing brace of the draw() method, meaning it's not inside any method. A normal line of code, like a method call, can only appear inside a method in Java . Swap it with the line before it, and all will be well. [Jess in Action] [AskingGoodQuestions] Christophe Verré Sheriff Posts: 14691 16 I like... posted 11 years ago This is caused by a bad indentation. See my comments in the code.(); // } // <-- extra bracket moveInDirection(xIncrement, yIncrement); // <-- where do xIncrement, yIncrement come from ? } } // <-- missing bracket [My Blog] All roads lead to JavaRanch Fred Hamilton Ranch Hand Posts: 686 posted 11 years ago two points... 1.perhaps I'm just stating the obvious, but this is a compile problem right? To state "Whenever I try to call" makes it sound like runtime. 2. Correct me if I'm wrong, but BouncingBall is a child of abstract class BouncingDevice. Isn't the child supposed to inherit the method from the parent, not call a method in the parent? And doesn't that make the "fixes" wrong? We should be fixing a bad declaration, not putting a call in the correct location. Ok I have mistated point 2. The compiler sees this as an attempt to ovverride the method in the parent, and the programmer may not want to ovveride the method. But it still feels wrong to call a method in an abstract class, and doubly wrong to call a method in a parent from a child. It seems we should be referencing the method as being in BouncingBall, not from Bouncing ball. e.g. from some other class we would have... BouncingBall bb = new BouncingBall(). ... bb.moveInDirection( dx, dy) with no reference at all to moveInDirection with the class BouncingBall. It was the best of times. It was the worst of times. It was a tiny ad. Building a Better World in your Backyard by Paul Wheaton and Shawn Klassen-Koop reply Bookmark Topic Watch Topic New Topic Boost this thread! Similar Threads Multithreading: bouncing balls problem Thread.sleep question How to make Lines I draw Draggable and Moveable etc?? Problem in image loading on JPanel creating random mole and removing it. More...
https://coderanch.com/t/446453/java/calling-method-child-class
CC-MAIN-2020-40
refinedweb
589
65.42
Dns Class Provides simple domain name resolution functionality. For a list of all members of this type, see Dns Members. System.Object System.Net.Dns [Visual Basic] NotInheritable Public Class Dns [C#] public sealed class Dns [C++] public __gc __sealed class Dns [JScript] public class Dns Thread Safety Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. Remarks. Example [Visual Basic, C#, C++] The following example queries the DNS database for information on the host: : - DnsPermission to allow the use of Dns. See Also Dns Members | System.Net Namespace
http://msdn.microsoft.com/en-US/library/system.net.dns(v=vs.71).aspx
CC-MAIN-2014-10
refinedweb
104
54.52
Hi, I use first an Arduino UNO Rev3 with DS18B20, and all works fine with the library oneWire standard. I try to change with an Arduino UNO WIFI Rev2, and the DS18B20 is no more detected ! I try with another Arduino (same model), and I have the same problem. I make some mesure with my ascilloscope, and we see the problem (pink line = High level on one output to see when the functions 'begin and search are active, blue line= signal on the DS18B20 command: with the Arduino WIFI Rev2, nothing append on the output, I have tried others output with the same result! I try with the library OneWireNg, and with this library, it's OK I am not good enough to find what must be changed in the standard OneWire.h to correct the problem . After searching, I think the problem is in OneWire_direct_gpio.h (perhaps only a definition to add/change to be compliant with the new architecture of this new card ?) CardInfo : BN : Arduino Uno Wifi Rev2 VID : 0x03eb PID : 0x2145 The very simple test program is : #include <OneWire.h> int DS18S20_Pin = 7; int x = 0; OneWire ds(DS18S20_Pin); byte addr[8]; void setup(void) { Serial.begin(9600); delay (500); digitalWrite (5, LOW); pinMode (5, OUTPUT); pinMode (4, INPUT_PULLUP); // pin 4 is connected directly to pin 5 (only for the oscilloscope - pink line) digitalWrite (5, HIGH); ds.begin (DS18S20_Pin); digitalWrite (5, LOW); digitalWrite (5, HIGH); ds.search(addr); digitalWrite (5, LOW); digitalWrite (5, HIGH); ds.reset_search(); digitalWrite (5, LOW); } Thank in advance for your help Jean-Marc
https://forum.pjrc.com/threads/57246-Arduino-UNO-WIFI-Rev2-with-DS18B20-and-library-OneWire?s=6891d47d445b4033ad90c1854154c38b&p=212661
CC-MAIN-2019-35
refinedweb
261
70.53
A python api wrapper for SQAUC API Project description squacapipy Usage Configuration You will first need a token. Once you have a squac account you will be sent details to retrieve a token The following environmental variables are required - SQUAC_API_TOKEN - SQUAC_BASE, which is - SQUAC_API_BASE, which is $SQUAC_BASE + /v1.0 Environmental variables examples can be found in .env-example Classes Class Response All responses are of class Response and have the following two attributes: - status_code: int HTTP status code - body: array of python dictionaries objects, or error code Network get query params: - network: comma seperated string of networks. - channel: exact match Dict response Keys: - code: str two letter indentifier - name: str, long name - descritpion: str - created_at: datetime - updated_at: datetime - user: user_id of creator from squacapipy.squacapi import Network net = Network() # return all networks net.get() # return UW. Params are not case sensistive net.get(network='uw') # return UW, UO, CC net.get(network='uw,uw,cc') Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/squacapipy/
CC-MAIN-2021-39
refinedweb
184
55.74
03-28-2011 05:25 PM I'm trying to figure out how to fade between images in Flex. I want to use images in a counting sequence. It works now but the animation is not smooth. A nice crossfade would be better. e.g. digit-1.png -> digit-2.png -> digit-3.png... etc... I see lots of examples on the web for altering between two states. But how do you do it as a progression? Can I polease borrow a code snippet? Thanks 03-28-2011 07:14 PM You can use Tweener and change the alpha value of the image. You can use the delay and onComplete methods to go from one image to the next. 03-28-2011 07:14 PM BTW, Tweener is in the QNX library so no need to use the MX.Fade class. 03-28-2011 07:19 PM - edited 03-28-2011 07:30 PM I seemed to have figured it out by defining 10 states - one per digit and then setting the Current State... Not having a clue what I am doing... it works. But is it an efficient way to accomplish it? More: Okay. It's maybe not scaling well. Images may or may not show up now... How do you manage currentState on different objects? <s:states> <s:State <s:State <s:State <s:State ... <s:transitions> <s:Transition <s:CrossFade </s:Transition> ... <s:Group <s:Image <s:Image <s:Image ... currentState = "m1s"+secs1; currentState = "m2s"+secs2; 03-29-2011 09:10 AM Does the ONLY way to do this require alternating between two states? 03-29-2011 09:38 AM - edited 03-29-2011 09:38 AM I've added import caurina.transitions.Tweener; But it says that Tweener can not be found. I've added the source to the library but still nothing. Using Burito... What's the trick? 03-29-2011 09:41 AM Tweener is in the QNX BB library. 03-29-2011 09:44 AM Yes but which one? How on earth are people supposed to learn this stuff with the current state of the documentation??? 03-29-2011 10:01 AM QNX BB is merged with AIR 2.5 to build against one thing in FB. I guess you're looking for individual SWC files? There are 2 that have the qnx prefix. Try one of those. This is all beta, so documentation will always lag. Hopefully documenation and tutorials will improve after 1.0. The other issue that there are about 6 different ways to build an AIR application, so covering all those variations will be time consuming to document as well. If you're using a "different" way then BB, then coding and documentation will probably always be a little more diffilcult. 03-29-2011 10:05 AM But how do I get FlashBuilder to recognize any of the QNX libraries? I'm missing something fundamental here. Can you please explain in mode detail? Thanks
http://supportforums.blackberry.com/t5/Adobe-AIR-Development/Fading-images-in-Flex/td-p/969957
CC-MAIN-2013-48
refinedweb
495
77.33
Ok I won't accept it but I pass on the "opportunity" FROM: DR. MAGNUS OLABODE LAGOS - NIGERIA. CONFIDENTIAL EMAILS: [email protected] OR MAG_OL@ARABIA either through my above fax line or confidential E-mail addresses: [email protected] OR [email protected], as time is of the essence in this business. I wait in anticipation of your fullest co-operation. Yours faithfully, DR. MAGNUS OLABODE. [shadow]Scorp666, the Infamous Orgasmatron[/shadow] HAHAHA!!! This is one of the top 10 scams in the world today, beleive it or not... [shadow]uraloony, Founder of Loony Services[/shadow] Visit us at [gloworange][/gloworange] I got one of these letters in the mail...Yes! Real mail. I thought about sending a letter back saying... Can you put it in my Swiss bank account?? It was for $36 million. I got the same mail Info_Au got in Hotmail about two weeks ago for the same amount of money... I was wondering who else got the message. lol... Dang and I thought I was going to be RICH!!! Another dream shot down Lol... Deb Outside of a dog, a book is man's best friend. Inside of a dog it's too dark to read. aww sh1t and i justs sent them 1. My company's name,address,telephone and fax numbers. 2. My bank account and address where the money will be remitted. >_< lol v_Ln 3MakeStuff *wipes a tear away* Thanks for the trip down memory lane, Scorp. savIRC :: The Multi-Platform IRC Client v. 1.8 [Released 9.04.02] i've been saving these in my "con-artist" folder for a couple years, got i think 35 or so now... i know you're not going to believe this, but there have been (at least) a half-dozen people who have bit on the trash and some even went to foreign countries to "clinch" the deal. none of them ever been heard of since, except the ones who frantically called back for their company or family to send them more $$, and when that well ran dry they were never heard of again. "If it sounds too good to be true, it usually is". Hey, how about the people on tv last week complaining about getting skinned by some religious outfit; get this: They hand over their life savings to some religious "non-profit" organization who is supposed to be helping people in dire conditions, and these "investors" are told they'll get back a handsome return (like 20percent) on their investment.... Now, let's see, just exactly how is that supposed to happen, duhhhhhh, maybe the poor people in dire conditions are going to pay a lot of interest and the religiosos are gonna give it back to the investors......? Not on this planet? I get one of these about every week. Notice how they're all from yahoo.com addresses? SSJVegeta-Sei Pierce me with steel, rend me with claw and fang; as I die, a legend is born for another generation to follow. An\' it harm none, do as ye will. - Wiccan Rede he he sounds good to me were do i sign If you tell me Curiosity Killed the cat, Then I say he died a noble death. Forum Rules
http://www.antionline.com/showthread.php?227069-Confidential-Biz-Proposal!!!
CC-MAIN-2018-22
refinedweb
543
74.79
Positions within the BoundsInt are not inclusive of the positions on the upper limits of the BoundsInt. This iterator will only return positions of size greater than zero for each axis. using UnityEngine; public class ExampleScript : MonoBehaviour { // Create a BoundsInt of a cube with a // bottom-left coordinate of (0, 0, 0), // and a height, width and depth of 4, // and log its contained points to the console. void Start() { // bounds is a cube where every edge has exactly four points. // It has 4 * 4 * 4 = 64 points. // min = (0,0,0), max = (3,3,3). var bounds = new BoundsInt(new Vector3Int(0, 0, 0), new Vector3Int(4, 4, 4)); // Iterate through each point, and log it to the Debug Console. foreach (var point in bounds.allPositionsWithin) { Debug.Log(point.ToString()); } // The 64 unique integer 3-dimensional points that fall within this Bounds will be logged to the Debug Console. } }
https://docs.unity3d.com/2021.1/Documentation/ScriptReference/BoundsInt-allPositionsWithin.html
CC-MAIN-2022-05
refinedweb
150
64.51
Definition and Importance A package is Java's style of bundling classes together. A package is a collection of related classes and interfaces. A package does not mean only predefined classes; a package may contain user defined classes also. A package is equivalent to a header file of C-lang. Packages can be compressed into JAR files for fast traversal in a network or to download from Internet.. Importing All/Single Class Packages have an advantage over header files of C-lang. A package allows importing a single class also instead of importing all. C-lang does not have this ease of getting one function from a header file. import java.net.*; // imports all the classes and interfaces import java.awt.evnet.*; // imports all the classes and interfaces import java.net.Socket; // imports only Socket class import java.awt.event.WindowEvent; // imports only WindowEvent class Note: While importing a single class, asterisk (*) should no be used. Resolving Namespace Problems By placing the same class in two different packages, which Java permits, namespace problems can be solved. Namespace is the area of execution of a program in RAM. The Date class exists in two packages – java.util and java.sql. Importing these two packages in a program gives ambiguity problem to the compiler. In the following program compiler gets ambiguity problem and is solved with fully-qualified name. Fully Qualified Class Name In the above program, if comments are removed, it is a compilation error as JVM is unable to judge which class should be given to the programmer. To solve the ambiguity problem, the class name is given along with package name and is known as fully qualified class name. Here, java.util.Date is known as fully qualified name. Another advantage of fully qualified name is that we can know to what package the class belongs immediately on the spot. Note: When fully qualified name is included, importing the package is not necessary. Package Naming Conventions Like identifiers have conventions, the packages come with their own naming conventions. Like keywords and protocols, packages are also of lowercase letters. In a single project, a number of programmers may be involved assigned with different modules and tasks. To avoid namespace problems in storing their work, naming conventions are followed. While creating packages, they may follow company name, project name or personal name etc. to precede their package. Following are a few All the classes and interfaces that come with the installation of JDK are put together are known as Java API (Application Programming Interface). All the Java API packages are prefixed with java or javax. Following table gives some important packages, a few prominent classes and their functionality. Following table gives frequently used Predefined Packages Java supports Package vs Directory The package of Java, at execution time, is converted into a directory (folder) by the operating system. The java.util is converted as java\util and java.awt.event is treated as java\awt\event by the OS. The asterisk * is a wild character of Windows which means all the files (means all the classes)..
http://way2java.com/packages/predefined-packages-%E2%80%93-java-api/
CC-MAIN-2017-13
refinedweb
514
57.67
Hi Yakov, Thank you very much for your reply, addressing a few questions: > “Is it correct that you run your query in a loop” The query is run asynchronously, as I want to simulate multiple clients hitting the cluster at the same time (see queries/second) so its technically not a loop but just scheduled callables. > “giving enough time for the whole cluster to warmup and only then take the final measurements?” I have a non-measured warmup round firing queries for 5 minutes before starting the measurements. > I also do not understand why CPU load is 400% which may be interpreted as full (correct?). This means that at least 4 threads are busy on each node, but when you broadcast your query it is processed with only 1 thread on each node. Yes, I noticed up to 400% which is full (my box has 4 Cores), I would explain this with the fact that the ignite cluster is hit with other requests while it is processing the first, would that explain it? >Having this in mind you can try launching 1 node per 1 core on each server - this will split your data set and will lower the amount of work for each node. Would this mean that instances compete more for the same cores in a high throughput scenario? Is there a way to have one node restricted to one process – or should I lower the size of the thread pool? Code (SQL example) : //Async Test Runner final Queue<Integer> timings = new LinkedBlockingQueue<Integer>(); for (long i = 0; i < requestsPerSecond * testTime; i++) { PerformanceTest test = PerformanceTestFactory.getIgnitePerformanceTest(ignite,testName,timings); executor.schedule(test, 0, TimeUnit.SECONDS); Thread.sleep(1000/requestsPerSecond); } executor.shutdownNow(); //Runnable class public abstract class PerformanceTest implements Runnable { protected Ignite ignite; private Queue<Integer> timings; private long startTime; protected IgniteCache<String,BinaryObject> cache; public PerformanceTest(Ignite ignite,Queue<Integer> timings) { super(); this.ignite = ignite; this.timings = timings; this.cache = ignite.cache("MyObjCache").withKeepBinary(); this.startTime = System.currentTimeMillis(); } @Override public void run() { runTest(); this.timings.add((int) (System.currentTimeMillis() - this.startTime)); } public abstract void runTest(); } //Runnable subclass i.e. Test public class SQLQueryPerformanceTest extends PerformanceTest{ private static final String queryString = "select SUM(field1 * field2)/SUM(field2) as perf from MyObj "; private final SqlFieldsQuery query; public SQLQueryPerformanceTest(Ignite ignite, Queue<Integer> timings) { super(ignite, timings); this.query = new SqlFieldsQuery(queryString); } @Override public void runTest() { this.cache.query(query).getAll(); } } Ignite configuration: <bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration"> <!-- Set to true to enable distributed class loading for examples, default is false. --> <property name="peerClassLoadingEnabled" value="true"/> <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. --> > <!-- In distributed environment, replace with actual host IP address. --> <value>127.0.0.1:47500..47509</value> </list> </property> </bean> </property> </bean> </property> </bean> From: Yakov Zhdanov [mailto:[email protected]] Sent: 03 August 2016 13:37 To: [email protected] Subject: Re: Ignite performance Manuel, The numbers you are seeing are pretty strange to me. Is it correct that you run your query in a loop giving enough time for the whole cluster to warmup and only then take the final measurements? I also do not understand why CPU load is 400% which may be interpreted as full (correct?). This means that at least 4 threads are busy on each node, but when you broadcast your query it is processed with only 1 thread on each node. Having this in mind you can try launching 1 node per 1 core on each server - this will split your data set and will lower the amount of work for each node. However question with high CPU utilization is still open. Can you please provide stack for those threads if they are Ignite threads. You can follow these instructions -<> Please tell me what machines you are running this test on. I would ask you to do all measurements on hardware machines (not virtual) giving all resources to Ignite. Please also share your code and configuration for cluster nodes. --Yakov 2016-08-03 12:49 GMT+03:00 Piubelli, Manuel <[email protected]<mailto:[email protected]>>: Hello, I am currently benchmarking Apache Ignite for a near real-time application and simple operations seem to be excessively slow for a relatively small sample size. The following is giving the setup details and timings - please see 2 questions at the bottom. Setup: • Cache mode: Partitioned • Number of server nodes: 3 • CPUs: 4 per node (12) • Heap size: 2GB per node (6GB) The first use case is computing the weighted average over two fields of the object at different rates. First method is to run a SQL style query: ... query = new SqlFieldsQuery("select SUM(field1*field2)/SUM(field2) from MyObject"); cache.query(query).getAll(); .... The observed timings are: Cache: 500,000 Queries/second: 10 Median: 428ms, 90th percentile: 13,929ms Cache: 500,000 Queries/second: 50 Median: 191,465ms, 90th percentile: 402,285ms Clearly this is queuing up with an enormous latency (>400 ms), a simple weighted average computation on a single jvm (4 Cores) takes 6 ms. The second approach is to use the IgniteCompute to broadcast Callables across nodes and compute the weighted average on each node, reducing at the caller, latency is only marginally better, throughput improves but still at unusable levels. Cache: 500,000 Queries/second: 10 Median: 408ms, 90th percentile: 507ms Cache: 500,000 Queries/second: 50 Median: 114,155ms, 90th percentile: 237,521ms A few things i noticed during the experiment: • No disk swapping is happening • CPUs run at up to 400% • Query is split up in two different weighted averages (map reduce) • Entries are evenly split across the nodes • No garbage collections are triggered with each heap size around 500MB To my questions: 1. Are these timings expected or is there some obvious setting i am missing? I could not find benchmarks on similar operations. 2. What is the advised method to run fork-join style computations on ignite without moving data? Thank you Manuel
http://mail-archives.apache.org/mod_mbox/ignite-user/201608.mbox/%3C995A262A8C933146B9847B1B7B03D76B80D52830@exlnmb34.eur.nsroot.net%3E
CC-MAIN-2017-43
refinedweb
1,003
52.49
SSRS Custom Code (Expressions, Embedded Code & External Assemblies) Introduction In this article we will look at the options for using custom .Net code in our Microsoft SQL Server Reporting Services (SSRS) reports. This post assumes you have a working knowledge of SSRS. There are three main ways to use code to help generate our reports, in increasing order of complexity (and therefore flexibility) these are; - Expressions - Embedded Code - External Assemblies The Sample Solution To demonstrate the different options we will work with a simple report example based on the Adventure Works database The Adventure Works database can be downloaded at I’ve setup one dataset which returns the total number and revenue of sales for each territory; I’ve placed a single table on the report that displays each row from our dataset; When the report runs it looks something like this; Expressions Expressions in SSRS are a very similar concept to formulas in Excel, we are going to add an extra column to our report to display the Average Sale Revenue for each territory. In the detail cell we can type the following formula to display the result of dividing the revenue by the number of sales; =Fields!SalesAmount.Value/Fields!SalesCount.Value Note that if you right click on the cell you will get an ‘Expression…’ option in the context menu which will give you a nice little GUI for designing your expression. Now if we run the report it should look something like the following; Expressions aren’t limited to cells in tables so if we add the following textbox to the page header of our report it will display the name of the report at the top of the report, this is really handy because if we change the name of a report we don’t need to worry about adjusting the header as well. You will also notice that you can use expressions to adjust properties of report controls, for example the background colour of our table can be set using a property, in fact we can adjust any properties apart from the size and position of controls using expressions. Embedded Code Now obviously expressions have their limitations, they aren’t reusable throughout our report as we need to copy and paste them into each cell or property and they can’t really execute any logic. What we can do is write report wide functions using .Net code and then call these functions from expressions. For example let’s add some colouring to our table based on the average sale, territories with an average sale of under $5,000 will be shaded red, under $10,000 will be shaded orange and $10,000+ will be shaded green. We could achieve this using expressions (utilising nested IIF statements) but if we wanted to change one of the thresholds we would have to change it on every cell that was shaded. A much nicer solution would be to create a function that accepts an average sale amount and returns the appropriate colour. To open the embedded code section select ‘Report->Report Properties…’ from the menu and open the ‘Code’ tab, below is the function I have created to calculate our background colour; If we select the detail row of our table and enter the following expression into the BackgroundColor attribute; =Code.GetColour(Fields!SalesAmount.Value/Fields!SalesCount.Value) Now when we run the report we should get something like; External Assemblies Embedded code also has its limitations, we can’t reuse our functions between reports without copy/pasting and we don’t get intelli-sense and other niceties that we have all grown to depend on. To re-use functions between reports we need to create a class library and then reference the resulting compiled assembly from our report. I’ve added a C# Class Library project to my solution and added a single ‘ReportUtils’ class as follows; The code for ReportUtils.cs is as follows; using System; using System.Collections.Generic; using System.Text; namespace RowansCustomLibrary { public sealed class ReportUtils { public static string ReportFooter() { return "Printed " + DateTime.Now.ToString("dd-MMM-yyyy HH:mm") + " by " + Environment.UserName; } } } Now to make our assembly easily available other .Net code I am going to load it into the Global Assembly Cache (GAC), to do this I need to “Strong Name” the assembly. build your class library (Ctrl+Shift+B), preferably in “Release” mode. To install your new assembly into the GAC just open up the bin\release folder of your project and drag the dll into the C:\WINDOW\assembly directory on your machine (you can also use the command line “gacutil” tool to install assemblies into the GAC). To reference our new assembly from the SSRS project we need to; - Select ‘Report->Report Properties…’ from the menu - 2. Select the ‘References’ tab - Click the ‘..’ button - Select the ‘Browse’ tab - Navigate to the dll in the bin\release folder of the class library project - Click the ‘Add’ Button - Click ‘OK’ and then ‘OK’ Note: Although we have selected the dll from our project file it will actually load the GAC version when the report runs. Now we can add a textbox to the footer of our report that uses our assembly; We will also need to deploy our assembly to the GAC on any machines where our report is executed, in most cases this would be your report server. 3 Responses to “SSRS Custom Code (Expressions, Embedded Code & External Assemblies)” Where's The Comment Form? hello Rowan, please tell me if this is possible using one of the methods (embedded code or custom assembly). I have a parameter called service. I’m thinking of dropping 7 charts (with the Hidden property set to True) on the page that use the same shared dataset. then create an expression for each chart that: if parameter at index.1 is not null, use parameter.index1.value for Chart1, then “set chart1 to visible” , if parameter at index2 is not null, do this [etc]. then I would have create an expresssion for the name of the chart also. Then it would also need simalar expression for the go to report action of the chart. Sound doable? Paul DeL April 12, 2012 Brilliant article. Was highly informative, and in a very presentable, structured way. Thank you. LampLighter July 4, 2012 There’s a full article on embedding code within SSRS 2008 at – hope this helps someone. AndyOwl July 13, 2012
http://romiller.com/2008/07/09/ssrs-custom-code/
CC-MAIN-2014-10
refinedweb
1,076
55.78
Help talk:Toolforge/Database Contents remote databases grep '^192\.168\.99\.' /etc/hosts > labsdb.hosts seems broken. I guess everything is 10.* now. For my usecase I only need enwiki.labsdb, that points to 10.64.4.11 So you can connect via mysql --defaults-file="${HOME}"/replica.my.cnf -h 10.64.4.11 enwiki_p --physikerwelt (talk) 16:09, 17 January 2015 (UTC) text table Is this the right place to mention that many (all?) of the wikis replicated in labsdb instances don't have the text table populated but use a separate storage mechanism for revision text? It would have saved me several hours hacking at database code that was never going to work. GoldenRing (talk) 13:38, 9 April 2015 (UTC) - Someone else asked at the parent talk page so expectations vary but are never fulfilled :-). - I don't know where I would put that information, and I am generally unhappy with the current documentation structure as it feels too wordy to me and the multiple pages/collapsed sections unuseful because they hide information if you search for it with Google Chrome. But I can't unlearn what I've got to know over the past years to make an educated decision about that. - So: Hey, it's a wiki! Be bold, etc. --Tim Landscheidt (talk) 22:29, 9 April 2015 (UTC) Access to multiple databases Hi, I need to query multiple lang wikipedias. How can I do this without creating multiple connections?? --Kanzat (talk) 08:35, 2 May 2015 (UTC) - You can access the databases with the corresponding prefixes, for example: SELECT p1.page_id, p2.page_id FROM enwiki_p.page AS p1 JOIN dewiki_p.page AS p2 USING (page_namespace, page_title) WHERE p1.page_namespace = 0 AND p1.page_title = 'Hamburg'; - --Tim Landscheidt (talk) 04:51, 3 May 2015 (UTC) PHP So, if a tool gets mysql_connect(): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock', what is it doing wrong? Context: phabricator:T107618. --Nemo 20:57, 31 July 2015 (UTC) Static MediaWiki I'm thinking of trying to get MediaWiki to work with the Tool Labs replicas (read-only). I basically want to set up a filter that performs textual modifications to the pages before they are served - specifically, I want to add (approximate) Pinyin readings to Chinese text to help language learners. If anyone tries to edit a page, I will provide an explanation and a link to the source wiki. Has anyone done this sort of thing before/can anyone think of any snags I might run up against? Would this be considered a valid use of Tool Labs servers? I have already noticed, as User:GoldenRing mentions, that the text tables are not present in the replicas. Can anyone tell me where they are? If necessary, I will make a fork of MediaWiki that supports this sort of use so it will be easy to do in future. --Spacemartin (talk) 14:16, 18 November 2015 (UTC) - In the WMF cluster, the article contents are not stored in the database, but on separate servers. These are neither replicated to Labs nor can they be directly accessed otherwise. The most efficient way is probably RESTBase. - For your purposes, I would suggest to develop either a JavaScript gadget that users could individually enable on wiki or a MediaWiki extension. The former option is much easier to do and deploy than the latter. --Tim Landscheidt (talk) 04:48, 19 November 2015 (UTC) - Too late! Pinyin Wiki :-) --Spacemartin (talk) 19:50, 20 November 2015 (UTC) No external access to database by Python pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'enwiki.labsdb:3306' ([Errno 11001] getaddrinfo failed)") sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (2005, "Unknown MySQL server host 'ruwiki.labsdb' (0)") I get this errors when try run a scripts from my computer (tried modules pymysql and sqlalchemy). On the server it works without problem. Script like: import pymysql.cursors connection = pymysql.connect(host='enwiki.labsdb:3306', user=u, password=pw, db='enwiki_p', charset='utf8', cursorclass=pymysql.cursors.DictCursor) Also I tried open SSH-connect from the script, but it not help. (Schema: open ssh, test ok, try access to db - get the errors, close ssh.). --Vladis13 (talk) 04:51, 23 September 2016 (UTC) - The MySQL servers are not open to the outside world; they are only accessible from inside the labs network. If you want to connect from your local computer, see Help:Tool_Labs/Database#Steps_to_setup_SSH_tunneling_for_testing_a_tool_labs_application_which_makes_use_of_tool_labs_databases_on_your_own_computer on tunneling over SSH. valhallasw (Merlijn van Deen) (talk) 06:20, 23 September 2016 (UTC) - I found problem. 1) In Windows no the tool "ssh" about talk the Help. Instead can use the PuTTY (I did add this in Help.). 2) In module pymysql need set port like: pymysql.connect(host='127.0.0.1', port=4711, but not pymysql.connect(host='127.0.0.1:4711'. --Vladis13 (talk) 07:15, 23 September 2016 (UTC) - SQLAlchemy.py not works on server. It requires module MySQLdb.py which not installed there. --Vladis13 (talk) 07:58, 23 September 2016 (UTC) PostGIS Task T154497 Is it possible to have a tool that needs PostGIS (extension of PostgreSQL). I would like to make a tool that does spatial queries for requested images for Commons. --Tobias47n9e (talk) 12:43, 3 January 2017 (UTC) - Tobias47n9e I would suggest opening a Phabricator task with the request and we can discuss the various options. --BryanDavis (talk) 15:59, 3 January 2017 (UTC) - @BryanDavis: What category would you suggest on Phabricator? I see that the postgis package is in the apt-cache. But I am not sure if it is installed. Can a tool have access to a Postgresql database? service postgresql statussays that it is an unrecognized service. --Tobias47n9e (talk) 16:21, 3 January 2017 (UTC) - @Tobias47n9e: Tag the phab task with #tool-labs and also #DBA. I don't know that we do have a postgres db at the moment in Tool Labs. I kind of doubt that we will provision postgres just for this, but maybe we can find another way to help you build your tool. --BryanDavis (talk) 17:01, 3 January 2017 (UTC) - @BryanDavis: Here is the issue: --Tobias47n9e (talk) 17:29, 3 January 2017 (UTC) EXPLAIN The inability to use EXPLAIN seriously sucks. I imagine that's some view-related limitation, but surely there is some way to allow safe execution of explain queries... --tgr (talk) 08:53, 15 October 2017 (UTC) - The documentation on using the database is split a bit between two pages, this one and Help:MySQL_queries. This one focuses more on the Toolforge-specific parts, while the latter is relevant labs-wide. Well, that's the theory anyway. The latter page has a section on EXPLAINing queries, Help:MySQL_queries#Optimizing_queries, with a link to an on-line tool that helps to run the EXPLAIN queries (basically by running the query and using the show-me-the-query-plan-of-a-running-query function). valhallasw (Merlijn van Deen) (talk) 09:34, 15 October 2017 (UTC) - I made some edits to Help:MySQL_queries#Optimizing_queries that will hopefully be helpful. --BryanDavis (talk) 23:06, 15 October 2017 (UTC) - As far as allowing EXPLAIN, it's an upstream bug. MySQL would never fix it because of some secret requirement from their commercial side, see phab:T50875 for details. I don't know if anyone ever asked the MariaDB people if they'd fix it. Anomie (talk) 13:45, 16 October 2017 (UTC) Identifying lag right now the example query is SELECT lag FROM heartbeat_p.heartbeat JOIN meta_p.wiki ON CONCAT(shard, '.analytics.db.svc.eqiad.wmflabs') = slice WHERE dbname = 'fawiki'; shouldn't it be SELECT lag FROM heartbeat_p.heartbeat JOIN meta_p.wiki ON CONCAT(shard, '.labsdb') = slice WHERE dbname = 'fawiki'; ? -- Seth (talk) 17:33, 9 May 2018 (UTC) - The *.labsdbservice names are deprecated. They are literally DNS CNAME pointers to the *.analytics.db.svc.eqiad.wmflabsservice names which are actively maintained. New code and examples should use *.analytics.db.svc.eqiad.wmflabsand/or *.web.db.svc.eqiad.wmflabsas appropriate rather than the legacy labsdb service names. --BryanDavis (talk) 19:22, 9 May 2018 (UTC) - Hi BryanDavis! - If I login at tools-login.wmflabs.org and start sql de, then the first sql query (with .analytics....) results in an empty set. The second sql query (with labsdb) results in the wanted row as expected. - What am I doing wrong? -- seth (talk) 22:24, 9 May 2018 (UTC) - You're not doing anything wrong -- the situation is a bit confusing. Historically, we used to have 's3.labsdb' hostnames for the database servers. These names then made their way into the `meta_p.wiki` table (that table should contain just 's3', but this is now difficult to change). More recently, the database servers were revamped, including a hostname change from 's3.labsdb' to 's3.analytics.etc'. However, the wiki table still contains the old name, which cannot be used as-is anymore... - So either we have to change the slice column in wiki table (which might break tools that split the shard on '.labsdb', as well as people who use the query which was on the documentation page originally), or we have to add a new column that just contains the 's3' identifier. phab:T186675 might be a good moment to also fix this issue. - I have now rewritten the example query to no longer be dependent on a specific postfix -- instead, the query just splits on the first '.' and only uses the first component. valhallasw (Merlijn van Deen) (talk) 09:05, 10 May 2018 (UTC) Access to user database from paws or quarry Magnus Manske has published a database called "s52680__science_source_p", see. I can get a hold on the database with user@tools-bastion-03:~$ mysql --defaults-file=$HOME/replica.my.cnf -h tools.db.svc.eqiad.wmflabs MariaDB [(none)]> USE s52680__science_source_p; MariaDB [s52680__science_source_p]> SHOW TABLES; But how do I get access to that database with Quarry or PAWS? In Quarry, the following command USE s52680__science_source_p; results in "Access denied for user '<user>'@'%' to database 's52680__science_source_p'". In PAWS I tried conn = pymysql.connect( host="tools.db.svc.eqiad.wmflabs", user=os.environ['MYSQL_USERNAME'], password=os.environ['MYSQL_PASSWORD'], database='s52680__science_source_p', charset='utf8' ) but get 'OperationalError: (1044, "Access denied for user '<user>'@'%' to database 's52680__science_source_p'")'. --Fnielsen (talk) 21:34, 2 August 2018 (UTC) - The answer for Quarry is to wait for task T151158 to be implemented. Currently there is no mechanism to change database servers from the Wiki Replica server to the ToolsDB server. - For Paws, the feature request task is task T188406. --BryanDavis (talk) 23:11, 2 August) 01:19, 16 November 2018 (UTC)
https://wikitech.wikimedia.org/wiki/Help_talk:Toolforge/Database
CC-MAIN-2019-18
refinedweb
1,770
58.58
I am using spring MVC and would like to expose default validator for javascript to use. I have a bunch of controllers extending a common abstract class and bunch of validators implementing ... How do you handle the case where you want user input from a form to be htmlEscape'd when you are binding to a command object? I want this to sanitize input data ... I use the following custom editor in MANY Spring-MVC controllers according to: A controller binder.registerCustomEditor(BigDecimal.class, new CustomNumberEditor(BigDecimal.class, NumberFormat.getNumberInstance(new Locale("pt", "BR"), true)); binder.registerCustomEditor(BigDecimal.class, new CustomNumberEditor(BigDecimal.class, NumberFormat.getNumberInstance(new Locale("pt", ... I have a typical scenario - I have read many articles on this and dynamic addition seems to work fine. I could not get elegant solution for dynamic delete. I have some objects like the two below public class SavedSearch { String title; ArrayList<SearchParameters> params; } public class SearchParameter { String field; int operator; String match; } On the JSP page in the input form, I use <input type="text" name="title"> and ... Here's how my method looks like: @RequestMapping(value = "/form", method = RequestMethod.POST) public String create(@ModelAttribute("foo") @Valid final Foo foo, final BindingResult result, final Model model) { ... I´m trying to change a spring jsp example to use freemarker. I changed all fields in a form with something like this: <@spring.formInput "account.name" /> Everything worked nicely. Form binded to the object ... I have following (simplified to the bone) Controller: @Controller public class TestController { @RequestMapping(value = "/test.htm", method = RequestMethod.GET) public String showForm(final ModelMap map) { final TestFilter filter = ... I have a Class which models a User and another which models his country. Something like this: public class User{ private Country country; //other attributes and ... I have an object like so: public class FormFields extends BaseObject implements Serializable { private FieldType fieldType; //checkbox, text, radio private List<FieldValue> value; //FieldValue contains simple string/int information, id, value, label //other properties and getter/setters } In my controller I added an ArrayList to my model with the attribute name "users". Now I looked around and this is the method I found (including a question here): <form:form action="../user/edit" method="post" ... In a web application I'm working on using Spring 2.5.6.SEC01, I essentially have an Integer field that takes a number to determine which page to scroll to. The requirements changed, ... i got exception in my controller class using bindexception.using reject method how i can print the message in my jsp. I am using Spring MVC for a web application and I am working with a simple form that deals with selecting clients and showing contact information. One of the problems I ... I'm building a Spring MVC app with Spring 3.0.3. I have data binding of my form working just fine, but one of the form fields is a list of items. Hypothetically ... How do I show validation errors NEXT to each input/component? Validator: @Override public void validate( final Object obj, final Errors e ) { ValidationUtils.rejectIfEmpty( e, "firstname", "error.firstname.empty" ); } <form:label <spring:message ... I am using spring annotations I have written one method public ModelAndView showTestPage(@RequestParam("firstInstanceId") String id1, @RequestParam("secondInstanceId") String id2, HttpSession session) { ModelAndView mv = new ModelAndView("showCompareItemList"); mv.addObject("pageTitle", ... Type is enum property in object. jsp: <form:radiobutton public enum TestType { Male, Female; } I've went thru Spring documentation and source code and still haven't found answer to my question. I have these classes in my domain model and want to use them as backing form ... In this example, I don't understand what the BindingResult is for and what it means to say if (binding.hasErrors()) below. BindingResult if (binding.hasErrors()) @RequestMapping(value = "/test", method = RequestMethod.POST) public final String submit(@ModelAttribute(TEST) @Valid final Test ... tl;dr: I have a custom object that isn't a Collection. How can I get Spring to bind it to a multiple select? I have an object Field that contains a field ... Field I'm working with Spring MVC and I'd like it to bind a a persistent object from the database, but I cannot figure out how I can set my code to make ... I'm learning SpringMVC 2 and i have a form that i need to bind to an object (the command). But what if i need this command object to be an interface ... For request parameters representing string, number, and boolean values, the Spring MVC container can bind them to typed properties out of the box. How do you have the Spring MVC container bind ... I have a controller that allows users to add or edit an entity. I've removed myForm.myEntity.name from myForm but spring still shows it when the spring:bind tag is used. See the ... Using Spring 3 I have two forms: adding Item and invalidating Item I have ItemAddEditCommand that references Item and some other data. Adding Item works great but I have problem with invalidating Item. In ... I have simple spring mvc web application.I want to bind list into drop down.In this case list items are normally bind into drop down; but If I select item and click ... I'm using a domain object as a command object in the web layer. In one case, this command object is backing a form that represents a partial update of the domain ... It appears to me that Spring MVC cannot bind properties of primitive wrapper types(e.g. Integer, Boolean, etc). When it tries to bind such properties, it throws the following exception. javax.servlet.ServletException: javax.servlet.jsp.JspException: javax.servlet.jsp.JspException: ... I have a several command objects of the same type to bind, each of which represents a row from a form. How do I bind these in an annotation based controller? ... Simple application using Spring 2.5. Have a left hand side multi select box and I have a right hand side multi select box. I have no issues in populating the left side from ... I'm trying to understand the concept of data binding in Spring-MVC with Velocity (I'm learning this framework and porting an app to this platform). I'm used to getting form variables using request.getParameter("username"), ... request.getParameter("username") I have a form: <form action="/processform"> <input name="firstname" value="john" /> <input name="lastname" value="doe" /> </form> public class Person { private String firstname; ... I have a page that has a form on it where the user can save or delete a bean. The form backing object has a bean as its property. When the ... Could please anybody figure out a way how to use Spring data binders for getting values of a few fields (integers) : year, month, day, hour, minute from request and bind ... When submitting a form I get the message: com.xxx.mvc.reports.ReportController: Data binding errors: 6 {||||||| - |} I have a small question regarding spring's MVC databinding capabilities. I do have the following controller class: @Controller @RequestMapping("/foo") public class FooController() { // … some init stuff // @RequestMapping(value = "/{id}/edit.{format}", method ... I have added <mvc:annotation-driven/> to my Spring configuration, and according to the documentation it will provide: <mvc:annotation-driven/> Support for validating @Controller inputs with @Valid, if a JSR-303 Provider is present on ... Imagine a code like this one: @RequestMapping(value="/users", method=RequestMethod.GET) public String list(Model model) { ... } @InitBinder("user") public void initBinder(WebDataBinder binder) { binder.setDisallowedFields("password"); // Don't allow user to override the ... I'm following this scheme in a Spring application. Why is spring not binding the values on my nested object? The SecurityQuestion object on the RegistrationBean is set with question and answer as null, null, respectively, despite setting then in ... public class MyForm { private String username; //getter...setter } @Controller public class MyController { @RequestMapping("/handleForm") public String handleForm( MyForm form, Model model ){ //do something } } I have the following command object: public class User { private Department department; private String firstName; private String lastName; private ... Is there a way to Spring bind values in a map? For instance, I have a Map<String,String> and I want to spring bind specific values in it. The user will type ... Map<String,String> If I have a RequestMapping in a Spring controller like so... @RequestMapping(method = RequestMethod.GET, value = "{product}") public ModelAndView getPage(@PathVariable Product product) I submit a form, lets say this form contains <input name="address" ..> <input name="billingAddress" ..> class Address { String address; ... I have a form and I have registered the CustomNumberEditor for the float numbers of my objects. @InitBinder public void initBinder(WebDataBinder binder){ NumberFormat numberFormat = NumberFormat.getInstance(); binder.registerCustomEditor(Float.class, ... I'm using Spring CustomNumberEditor editor to bind my float values and I've experimented that if in the value is not a number sometimes it can parse the value and no error ... Here is one of my controller method, @ResourceMapping(value="petPortalAction") @RequestMapping(params={"transactionType=BUY_PET"}) public String handlePetPurchaseAction( ... I am wondering whether you can use a <form:errors> tag to display an error that doesn't have a binding to a field in the command object. Basically I want to do ... I'm working on page that allows users to edit profile info. I want them to be able to edit their public info, but not allow them to change system flags ... I can't fully understand the purpose of data binding in jsp of spring. Does someone have a full understanding of it? When registering a customerEditor in spring to format a number with a given numberFormat instance, it is easy to apply this to a specific field in the jsp, e.g.: NumberFormat numberFormat = ... I'm trying to bind a nested object with Spring 3, and I'm having issues. JSP: <portlet:actionURL <form:form <input name = "obj.a"...> <input name = "obj.b"...> ... I have not been able to solve my binding problem. I have one class Person Class Person { Private fname; private lname; public Address address; class Address { private ... This is the code on internet for init binder @InitBinder public void initBinder(WebDataBinder binder) { SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd"); ... I have been trying for last 3 days still i am not able to solve my problem I have Person Class @SuppressWarnings("rawtypes") @OneToMany(cascade = CascadeType.ALL, fetch=FetchType.LAZY, mappedBy="person") @JoinColumn(name="person_id") public Set<Book> books = new ... If i want to use custom date editor i need few things to ask 1)In database do i have to set variable startDate as datetime or varchar 2)IN Person class startDate is refered ... I'm trying to convert a struts 1 application to Spring MVC 3.0. We have an form with quite a few parameters, most of which where automatically binded in struts. However there ... I'd like to use the autowiring magic of @ModelAttribute for obtaining and passing around reference data. The problem in doing this is that anything added to the model with @ModelAttribute is ... @ModelAttribute What would be the best way how to bind a form data to model? I mean I have a simple model class: public class LoginCommand { private String login; ... so lets say I have some model class: public class MyRequestParams { private Long val = Long.valueOf(0); // default value // ... plus some other stuff } I have Spring Controller, and a method like this: public ModelAndView getItems() { ModelAndView mav = new ModelAndView("myView"); Item entity = new ... ModelAndView mav = new ModelAndView("myView"); Item entity = new ... I am using Spring 3 MVC Annotation Validation. I know how to do it in JSP(using modelAttribute property in JSP), but I don't have any clue how to apply this kind ... modelAttribute Hi guys is it possible to bind selected value and label at the same time on spring form? I am trying something similar to that. <form:select id="selectionCity" path="targetAddress.cityid" ... I am not sure if this is possible, but I need to do some odd binding with Spring MVC. We have to dynamically generate a page which is a precursor ... I am using spring mvc portlet for one of my applications. I have a problem in binding a dynamically filled list box with the List collection in Controller. Conference.java public class Conference { ... I have a command object associated with a spring form controller: public class PluginInstance { private Set<PluginParameter> pluginParameters = new HashSet<PluginParameter>(); ... some other string/long properties and getter setters... } I've been using binding frameworks for a while now and i'd like to know how you handle this cases. You have a newspaper that has some attributes like Is it possible to bind a form element to a List<Long>? ie. <form:input binding to an element in List<Long> formValues; in the form backing object? When I try this, it fails ... List<Long> <form:input List<Long> formValues; User.java public class User{ private String name; private List<Link> links; } public class Link{ private String ... I'm getting the following error when I try to retreive the form results in controller method.. org.springframework.validation.BindException: org.springframework.validation.BeanPropertyBindingResult: 1 errors Field error in object 'search' on field 'clients': rejected ... I need to set a "time submitted" field on one of my domain objects. Validation rules require that property to be there, but I can't have it set by data binding ... I am trying to use Spring validation to validate my model populated by Jackson converter. So I have a java class, class MyClass(){ private String myString; } I'm trying to create a form using Spring @MVC 3.0. This form is supposed to let the user update the flavor of a soup, but I'm running into some issues. ... How do you show non binding related errors on SpringMVC 3? For example, I want to show a message when a certain entity cannot be shown to a specific user on ... I have an issue binding the AutoPupulating List in a form to update the data. I was able to save the data using Autopopulating list though. Here is the form backing model. public ... I have a controller with 2 methods that return related objects via the @ModelAttribute annotation: @ModelAttribute("site") public Site getSite(){ ..... return site; } @ModelAttribute("document") public Document getDocument(){ ..... return document; } I have a simple html form, <form id="marketplaceForm" enctype="multipart/form-data" method="post"> <select name="category"> <option selected ></option> <option value="Sales">Sales</option> <option value="Marketing" >Marketing</option> <textarea type="text" id="marketplaceDesc" name="description" value="" ... I have several thoroughly unit-tested and finely crafted rich DDD model classes, with final immutable invariants and integrity checks. Object's instantiation happens through adequate constructors, static factory methods and even via ... I've got a DTO (bean) with ArrayList field: ArrayList public MyDTO { ... private List<MyThing> things; ... ... getters, setters and so on } @InitBinder public void initBinder(WebDataBinder ... In my spring mvc application, I have levels that users can create. With these levels, there are various requirements that a level needs in order to be taken (need a car, ... [solved] binding nested model of list containing custom obj Hi.. i have a problem about a nested model object which is a list containing a collection of a custom class.. I ... MVC - binding with AutoPopulatingList doesn't work I have a dynamic list of input fields that I want to bind to an AutoPopulatingList. I followed various tutorials () on this but ... [Spring MVC] Binding Overwriting Object in ModelAndView I am using Spring MVC 2.5.5. I have a controller that saves an object from an editing form and then returns to that form ... Spring MVC dataType Binding issue Hi, I'm fairly new to spring mvc. We created a jsp form with input fields and we are trying to set that value to a Hibernate ... Spring MVC3. Bind addtional data before validation I am developing an application whith Spring MVC 3.0.5. I use the @InitBinder method to initialize the binder and @Valid to validate forms I ... Is it possible to bind a value to a Map (which is directly on the request, and not on a backing object)? I have implemented the code from this JIRA, and ... I have a component in my jsp page which binds to a collection. example Code: public class Test { private Sex sex; private String[] sexes = {"male", "female"}; public String[] ... Hi, I want to be able to bind a @RequestHeader (or @CookieValue) to a model object. This seems to not work (but it does work for @RequestParam). Am I missing something, ... Hi, As of now, I'm binding more than one datasets to my form using individual forms. For example: Code: @ModelAttribute("students") public Collection getStudents() { return this.sService.getStudents(); } @ModelAttribute("courses") public Collection getCourses() ... Binding many-to-many list object on Spring MVC form Hi, This should be a common scenario where we need to bind a list object to a spring form. However, I'm getting errors ... i saw an older post on this topic from 2008 and there was a discussion about adding setters and no arg-constructors to domain objects to allow for use at ui layer ... MVC binding to wrong fields? I have a controller with 2 methods that return related objects via the @ModelAttribute annotation: Code: @ModelAttribute("site") public Site getSite(){ ..... return site; } @ModelAttribute("document") public ... Simple Spring MVC 3 Bind Form Question I have a simple html form, how to bind errors in spring mvc 3.0.5 Hi, I have written a jsp page with the following error bindings: ... There are some cases in my application where my model would be much better modeled as a Set then a List (unique guaruntee). I've noticed that SpringMVC uses the List get(int) ... Hi, i m trying to bind date value (using calender). i am setting the field value through java script. value is set correctly, but never set in the bind model property. ...
http://www.java2s.com/Questions_And_Answers/Spring/MVC/binding-1.htm
CC-MAIN-2019-13
refinedweb
2,990
57.57
With the World Cup just 3 months away, the best bit of the tournament build up is upon us – the Panini sticker album. For those looking to invest in a completed album to pass onto grandchildren, just how much will you have to spend to complete it on your own? Assuming that each sticker has an equal chance of being found, this is a simple random number problem that we can recreate in Python. This article will show you how to create a function that allows you to estimate how much you will need to spend, before you throw wads of cash at sticker boxes to end with a half-finished album. Load up pandas and numpy and let’s kick on. import pandas as pd import numpy as np To solve this, we are going to recreate our sticker album. It will be an empty list that will take on the new stickers that we find in each pack. We will also need a few variables to act as counters alongside this list: - Stickers needed - How many packets have we bought? - How many swaps do we have? Let’s define these: stickersNeeded = 682 packetsBought = 0 stickersGot = [] swapStickers = 0 Now, we need to run a simulation that will open packs, check each sticker and either add it to our album or to our swaps pile. We will do this by running a while loop that completes once the album is full. This loop will open a pack of 5 stickers and check whether or not it is featured in the album already. To simulate the sticker, we will simply assign it a random number within the album. If this number is already present, we add it to the swap pile. If it is a new sticker, we append it to our album list. We will also need to update our counters for packets bought, stickers needed and swaps throughout. Pretty simple process overall! Let’s take a look at how we implement this loop: while stickersNeeded > 0: #Buy a new packet packetsBought += 1 #For each sticker, do some things for i in range(0,5): #Assign the sticker a random number stickerNumber = np.random.randint(0,681) #Check if we have the sticker if stickerNumber not in stickersGot: #Add it to the album, then reduce our stickers needed count stickersGot.append(stickerNumber) stickersNeeded -= 1 #Throw it into the swaps pile else: swapStickers += 1 Each time you run that, you are simulating the entire album completion process! Let’s check out the results: {"Packets":packetsBought,"Swaps":swapStickers} {'Packets': 939, 'Swaps': 4013} 939 packets?! 4013 swaps?! Surely these must be outliers… let’s add all of this into one function and run it loads of times over. As the number of stickers in a pack and the sticker total may change, let’s define these as arguments that we can change with future uses of the function: def calculateAlbum(stickersInPack = 5, costOfPackp = 80, stickerTotal=682): stickersNeeded = stickerTotal packetsBought = 0 stickersGot = [] swapStickers = 0 while stickersNeeded > 0: packetsBought += 1 for i in range(0,stickersInPack): stickerNumber = np.random.randint(0,stickerTotal) if stickerNumber not in stickersGot: stickersGot.append(stickerNumber) stickersNeeded -= 1 else: swapStickers += 1 return{"Packets":packetsBought,"Swaps":swapStickers, "Total Cost":(packetsBought*costOfPackp)/100} calculateAlbum() {'Packets': 1017, 'Swaps': 4403, 'Total Cost': 813.6} So our calculateAlbum function does exactly the same as our instructions before, we have just added a total cost. Let’s run this 1000 times over and see what we can truly expect if we want to complete the album: a=0 b=0 c=0 for i in range(0, 1000): a += calculateAlbum()["Packets"] b += calculateAlbum()["Swaps"] c += calculateAlbum()["Total Cost"] {"Packets":a/1000,"Swaps":b/1000,"Total Cost":c/1000} {'Packets': 969.582, 'Swaps': 4197.515, 'Total Cost': 773.4824} 970 packets, over 4000 swaps and the best part of £800 on the album. I think we’re going to need some people to swap with! Of course, as you run these arguments, you will have different answers throughout. Hopefully here, however, our numbers are quite close together. Summary In this article, we have seen a basic example of running simulations with random numbers to answer a question. We followed the process of replicating the album experience and running it once, then 1000 times to get an average expectation. As with any process involving random numbers, you will get different answers each time, so through running it loads of times over, we get an average that should remove the effect of any outliers. We also designed our simulations to take on different parameters such as number of stickers needed, stickers in a pack, etc. This allows us to use the same functions when World Cup 2022 has twice the number of stickers! For more examples of random numbers and simulations, check out our expected goals tutorial.
https://fcpython.com/blog/much-cost-fill-panini-world-cup-album-simulations-python
CC-MAIN-2022-33
refinedweb
807
60.35
Method Overloading Method overloading allows you to declare two methods with the same name but has different signatures or parameter set. The program will automatically detect which method you are calling by looking at the arguments you are passing to the method. The signature of a method shows the order and type of each parameter. Let’s take a look at the method below: void MyMethod(int x, double y, string z) would have a signature of MyMethod(int, double, string) Note that the return type and the names of the parameter are not included in the signature of the method. The example below demonstrates method overloading. using System; namespace MethodOverloadingDemo { public class Program { static void ShowMessage(double number) { Console.WriteLine("Double version of the method was called."); } static void ShowMessage(int number) { Console.WriteLine("Integer version of the method was called."); } static void Main() { ShowMessage(9.99); ShowMessage(9); } } } Example 1 – Method Overloading Demo Double version of the method was called. Integer version of the method was called. The program defined two methods with similar names. If method overloading is not supported by C#, then the program will have a hard time choosing which method to use when the name of the method is called. The secret is in the types of the parameters of a method. The compiler can differ two or more methods if they have a different set of parameters. When we called the method, it reads the type of the argument. On the first call (line 19), we passed a double value so the ShowMessage() (line 7-10) with double as the parameter was executed. On the second method call (line 20), we passed an int argument so the other ShowMessage() (line 12-15) with int as the argument was executed. It’s essential to know method overloading. The big purpose of method overloading is when multiple methods do the same task but only matters on the data that they require. A lot of methods in .NET Classes are overloaded. For example, the Console.WriteLine() method has multiple overloads. You saw that it can accept one string argument which is the string to display, and another version of it accepts 2 or more arguments, the first being the format string.
https://compitionpoint.com/method-overloading/
CC-MAIN-2021-21
refinedweb
375
65.93
Send a Gadget a Custom Directive from a Skill If you want to trigger gadget behaviors from a custom skill, you can do that by sending one or more custom directives as part of your skill response. By using custom directives, your skill can send your gadget arbitrary payloads that only your gadget can understand. To support custom directives, you first define a Custom Interface and, in your gadget's firmware, include code that can decode the custom directives that you defined. Then, to send custom directives from your skill code, you use the Custom Interface Controller. This is an interface that you add to your skill when you set it up using the Alexa Skills Kit developer console or the Alexa Skills Kit Command-Line Interface (ASK CLI). This topic describes how to query the available gadgets and send a custom directive to the available gadgets from a skill. For instructions on how to define your interface and what you need to do on your gadget to support it, see Learn About Custom Interfaces. You can also send custom events from your gadget to a skill, as described in Receive a Custom Event from a Gadget. For sample code, see the Alexa Gadgets Raspberry Pi Samples in GitHub. Custom gadgets that provide smart home functionality will not pass certification. - Overview - Prerequisites - Step 1: Skill launch - Step 2: Query for available gadgets - Step 3: Respond to Alexa with a directive targeted to a gadget - Step 4: Decode the directive on the gadget Overview The following figure, and the description below it, provide an overview of the interaction between a skill and a gadget. The steps, which are described in more detail later on this page, are as follows: - Skill launch – After enabling the skill in the Alexa app, a user launches the skill with a phrase such as "Alexa, open [skill invocation name]". This causes Alexa to send the skill a request. This is usually a launch request, although a user may invoke your skill with a specific intent also. - Query for available gadgets – When the skill launches, your skill code needs to query an Alexa service endpoint to find out whether there are gadgets available to the skill. (This is detailed later in this topic.) "Available" means that the gadget is currently connected to the user's Echo device, and the gadget is registered in the developer portal under the same developer account you used to create the skill. The gadget query procedure has three parts: - Get information from Alexa's request – The skill extracts information from the context object that Alexa's request contained. - Call the Endpoint Enumeration API – The skill uses the extracted information to make an HTTPS GET request to the Endpoint Enumeration API. The Endpoint Enumeration API returns the endpoint ID (gadget ID), name, and capabilities for all available gadgets. - Store the endpoint IDs – The skill stores the returned endpoint IDs, if any, in the skill's session attributes so that it can send directives to those gadgets throughout the skill session. It does not need to query for gadgets within this skill session again. - Skill response – The skill responds to Alexa. The contents of the response depend on whether the skill found any available gadgets in the previous step: - Gadgets are available – If the gadget query returned one or more gadgets, then throughout the skill session, your skill can respond to any request from Alexa with a custom directive targeted to those gadgets. Alexa then sends the directive to the Echo device. - No available gadgets – If the gadget query does not return any gadgets, then the skill code reacts according to its design. For example, if a gadget is not required, the skill might continue. If a gadget is required, the skill might tell the user that they need a gadget, and then exit. - Gadget receives a directive – When a skill sends a custom directive targeted to an endpoint ID retrieved in the query, the Echo device passes the custom directive to the gadget over Bluetooth in binary format described by .proto files. The gadget extracts the payload, which is a string, and reacts accordingly. Prerequisites Before you add skill code to send custom directives. Step 1: Skill launch A skill session begins when a user invokes your skill. This causes Alexa to send your skill a request. This is usually a launch request, although a user may invoke your skill with a specific intent also. The request contains a context object. Step 2: Query for available gadgets This step has three parts: the skill must first get information from Alexa's request, then call the Endpoint Enumeration API, and then store the endpoint IDs that the Endpoint Enumeration API returned. Get information from Alexa's request From the context object in Alexa's request, you must get the values of two fields: the apiEndpoint and apiAccessToken. - The apiEndpointis the base URL of the Alexa endpoint, which depends on the geographic location of your skill. For example, the US endpoint is. - The apiAccessTokenencapsulates the permissions granted to your skill. It is in the Systemobject, which is nested in the contextobject. The following example, which is a LaunchRequest, shows an apiEndpoint and apiAccessToken. { "version": "1.0", "session": { "new": true, "sessionId": "amzn1.echo-api.session.1", "application": { "applicationId": "amzn1.ask.skill.1" }, "user": { "userId": "amzn1.ask.account.1" }, "attributes": {} }, "context": { "AudioPlayer": { "playerActivity": "IDLE" }, "System": { "application": { "applicationId": "amzn1.ask.skill.1" }, "user": { "userId": "amzn1.ask.account.1" }, "device": { "deviceId": "amzn1.ask.device.1", "supportedInterfaces": { "AudioPlayer": {} } }, "apiEndpoint": "", "apiAccessToken": "someToken" } }, "request": { "type": "LaunchRequest", "requestId": "amzn1.echo-api.request.1", "timestamp": "2018-05-11T17:33:01Z", "locale": "en-US", "shouldLinkResultBeReturned": false } } Call the Endpoint Enumeration API Now that it has the apiEndpoint and apiAccessToken, the skill can query for available gadgets. For a gadget to be available to the skill, the gadget must meet the following conditions: - The gadget is currently connected to the user's Echo device. - The gadget is registered in the developer portal under the same developer account as the skill. To get a list of all the gadgets that meet these conditions, you call the Endpoint Enumeration API with the apiEndpoint and apiAccessToken that you retrieved in the previous step. You only need to call the Endpoint Enumeration API once per skill session, because the endpoint IDs are valid for the entirety of the session. Normally, the endpoint ID for a given gadget will remain consistent across skill sessions also; however, there are situations where the endpoint ID for a gadget may change (for example, if a user disables and re-enables your skill). To call the Endpoint Enumeration API, you can use the Endpoint Enumeration Service client in an Alexa Skills Kit SDK or make an HTTPS GET request to the <apiEndpoint>/v1/endpoints endpoint, where <apiEndpoint> is the apiEndpoint that you retrieved from Alexa's request. The format of the GET request is as follows: GET <apiEndpoint>/v1/endpoints HTTP/1.1 Authorization: Bearer <apiAccessToken> For example, if apiEndpoint is and apiAccessToken is abcde12345, your skill's request to the Endpoint Enumeration API would look similar to the following: GET HTTP/1.1 Authorization: Bearer abcde12345 If Alexa is able to process the request, Alexa returns a response with an HTTP 200 status code that contains a list of available endpoints (gadgets) that your skill can send directives to. Each endpoint has an ID, name, and a list of capabilities that the gadget defined in its Alexa.Discovery.Discover.Response event. Endpoints are in the following format. { "endpoints": [ { "endpointId": "amzn1.ask.endpoint.ABC123", "friendlyName": "Gadget123", "capabilities": [ { "type":"AlexaInterface", "interface":"Custom.CustomInterface1", "version":"1.0" } ] }, { "endpointId": "amzn1.ask.endpoint.XYZ789", "friendlyName": "Gadget789", "capabilities": [ { "type":"AlexaInterface", "interface":"Custom.CustomInterface1", "version":"1.0" } ] } ] } The following are the fields of a /v1/endpoints response for a gadget. If the user's Echo device is not connected to any available gadgets, the list of endpoints will be empty. Store the endpoint IDs Store the endpoint IDs (gadget IDs) in the skill's session attributes. The endpoint IDs are valid for the entirety of the skill session; you do not need to query for them again. Step 3: Respond to Alexa with a directive targeted to a gadget Now that you have a list of gadgets, you can send a gadget a directive. To do so, you include a CustomInterfaceController.SendDirective directive in your response to Alexa's request. Note that: - Although the Endpoint Enumeration API returns all capabilities that the gadget supports, skills can only send gadgets directives that belong to Custom Interfaces (that is, interfaces that begin with Custom.). - Each directive is targeted to one gadget (that is, one endpoint ID), so if you want to send the same directive to multiple gadgets, include one directive for each gadget (within the same response). That is, include multiple directives into a single response. - The maximum size of the skill's response is 24 KB. This limit encompasses all directives and speech in the response. - The payloadof a CustomInterfaceController.SendDirectivedirective cannot exceed 1000 bytes. The following is an example of a response that sends the Custom.Robot.Spin directive to two gadgets. { "version": "1.0", "sessionAttributes": {}, "response": { "outputSpeech": {}, "card": {}, "reprompt": {}, "shouldEndSession": true, "directives": [ { "type": "CustomInterfaceController.SendDirective", "endpoint": { "endpointId": "amzn1.ask.endpoint.ABC123" }, "header": { "namespace": "Custom.Robot", "name": "Spin" }, "payload": { "direction": "clockwise", "times": 5 } }, { "type": "CustomInterfaceController.SendDirective", "endpoint": { "endpointId": "amzn1.ask.endpoint.XYZ789" }, "header": { "namespace": "Custom.Robot", "name": "Spin" }, "payload": { "color": "clockwise", "times": 5 } } ] } } For the full response format, see Response Format in the Request and Response JSON Reference. The following are the fields of a CustomInterfaceController.SendDirective directive. Step 4: Decode the directive on the gadget After your skill includes a custom directive in its response to Alexa, Alexa passes the directive to the Echo device, which passes the directive to the gadget over Bluetooth. As with the Alexa Gadgets Toolkit directives, your gadget receives the directive in a binary format described by .proto files. Your gadget must then decode and process the directive. For more information, see Write code to handle the directive on your gadget.
https://developer.amazon.com/fr-FR/docs/alexa/alexa-gadgets-toolkit/send-gadget-custom-directive-from-skill.html
CC-MAIN-2021-17
refinedweb
1,677
54.12
This page gives you information on how to start with the PUMA library. It covers the following topics. - How to use the binaries - How to compile and install from source How to use the binaries Get the binaries Please follow the instructions on the download page to download the PUMA library binaries for Linux, Windows, or MacOSX. The Puma namespace The PUMA library defines namespace Puma to enclose all its classes, types, constants, and functions. Thus an application may either add before using any PUMA classes etc., or use full qualified names, like: Compiling and linking An application using the PUMA library needs to be compiled and linked with the following compiler options, assuming variable $PUMA points to the root directory of the PUMA installation: How to compile and install from source Tools required The following tools are required to build the library. - GNU make – to run the build process - GNU compiler collection (g++, gcc) – to compile the source files - GNU binutils (ar, strip) – to collect and finish the library - AspectC++ compiler (ac++, ag++) – to apply aspects to the source files - sed, perl – for some build magic - (optional) doxygen – to build the reference manual Get the sources Please follow the instructions on the download page to obtain a fresh copy of the PUMA library sources. The sources are available in two variants, i.e. - Sources - The source code from the repository. You need an AspectC++ compiler to compile these sources. - Woven Sources - The aspect woven source code. No AspectC++ compiler is needed to compile these sources. This is useful especially to compile the library on platforms for which no AspectC++ compiler is available. This variant can not be compiled with others than the default set of extensions. Compile the sources Execute the following command from within the PUMA root directory to weave and compile the library for Linux without debugging information. Use the following command instead to compile aspect woven sources. Most of the build steps can run simultaneously. To speed up the build process on multiprocessor systems you can let make run multiple jobs. The following command starts 4 jobs to weave and compile the library. Supported target platforms The PUMA library can be built for different target platforms. The variable TARGET specifies the target platform and whether debugging mode is enabled or not. Following targets are supported. To make a Linux build of the library including debugging information invoke make like this. Building PUMA for other target platforms may require changes on the file vars.mk in the root directory of PUMA. Additional build and compilation flags can be set using the following variables. Make targets These are the supported make targets. Extensions The PUMA library can be built with some extensions. These extensions are defined in the file extensions.mk in the PUMA root directory. Extensions are enabled by setting variable EXTENSIONS. This command builds the library including GNU C/C++ language extensions and tracing code. Other extensions are disabled. Install the binaries After the library was successfully built it can be installed to /usr/local as follows. If you like to change the install location for the library, just set variable PREFIX accordingly. Here is an example for installing the library below the user’s home directory.
https://puma.aspectc.org/getting-started/
CC-MAIN-2021-39
refinedweb
541
56.35
I have been working on a small OpenGL project lately and just started using Codewarrior. I have had some problems getting Codewarrior to link the program properly when using .h/.cpp files in combination. Everything works if I put a #include "file.cpp" at the end of my "file.h" header file, (or if I put all of the code in a .cpp file and include that one in the main program) but otherwise it doesn't seem to find the .cpp file at all. Everything compiles nicely, but the linker gives me the following error: Link Error : undefined 'M_Filter3f::~M_Filter3f()' (descriptor) Referenced from '__sinit_TerraTester3_cpp' in TerraTester3.cpp Does anybody know what could be wrong?
https://www.opengl.org/discussion_boards/showthread.php/159079-Problems-using-h-files
CC-MAIN-2015-35
refinedweb
115
68.87
Lucene is a Java library that adds text indexing and searching capabilities to an application. It is not a complete application that one can just download, install, and run. It offers a simple, yet powerful core API. To start using it, one needs to know only a few Lucene classes and methods. Lucene offers two main services: text indexing and text searching. These two activities are relatively independent of each other, although indexing naturally affects searching. In this article I will focus on text indexing, and we will look at some of the core Lucene classes that provide text indexing capabilities. Lucene was originally written by Doug Cutting and was available for download from SourceForge. It joined the Apache Software Foundation's Jakarta family of open source server-side Java products in September of 2001. With each release since then, the project has enjoyed more visibility, attracting more users and developers. As of November 2002, Lucene version 1.2 has been released, with version 1.3 in the works. In addition to those organizations mentioned on the "Powered by Lucene" page, I have heard of FedEx, Overture, Mayo Clinic, Hewlett Packard, New Scientist magazine, Epiphany, and others using, or at least evaluating, Lucene. Related Reading Java Enterprise Best Practices By The O'Reilly Java Authors Like most other Jakarta projects, Lucene is distributed as pre-compiled binaries or in source form. You can download the latest official release from Lucene's release page. There are also nightly builds, if you'd like to use the newest features. To demonstrate Lucene usage, I will assume that you will use the pre-compiled distribution. Simply download the Lucene .jar file and add its path to your CLASSPATH environment variable. If you choose to get the source distribution and build it yourself, you will need Jakarta Ant and JavaCC, which is available as a free download. Although the company that created JavaCC no longer exists, you can still get JavaCC from the URL listed in the References section of this article. CLASSPATH Before we jump into code, let's look at some of the fundamental Lucene classes for indexing text. They are IndexWriter, Analyzer, Document, and Field. IndexWriter Analyzer Document Field IndexWriter is used to create a new index and to add Documents to an existing index. Before text is indexed, it is passed through an Analyzer. Analyzers are in charge of extracting indexable tokens out of text to be indexed, and eliminating the rest. Lucene comes with a few different Analyzer implementations. Some of them deal with skipping stop words (frequently-used words that don't help distinguish one document from the other, such as "a," "an," "the," "in," "on," etc.), some deal with converting all tokens to lowercase letters, so that searches are not case-sensitive, and so on. An index consists of a set of Documents, and each Document consists of one or more Fields. Each Field has a name and a value. Think of a Document as a row in a RDBMS, and Fields as columns in that row. Now, let's consider the simplest scenario, where you have a piece of text to index, stored in an instance of String. Here is how you could do it, using the classes described above: String import org.apache.lucene.index.IndexWriter; import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.document.Document; import org.apache.lucene.document.Field; /** * LuceneIndexExample class provides a simple * example of indexing with Lucene. It creates a fresh * index called "index-1" in a temporary directory every * time it is invoked and adds a single document with a * single field to it. */ public class LuceneIndexExample { public static void main(String args[]) throws Exception { String text = "This is the text to index with Lucene"; String indexDir = System.getProperty("java.io.tmpdir", "tmp") + System.getProperty("file.separator") + "index-1"; Analyzer analyzer = new StandardAnalyzer(); boolean createFlag = true; IndexWriter writer = new IndexWriter(indexDir, analyzer, createFlag); Document document = new Document(); document.add(Field.Text("fieldname", text)); writer.addDocument(document); writer.close(); } } Let's step through the code. Lucene stores its indices in directories on the file system. Each index is contained within a single directory, and multiple indices should not share a directory. The first parameter in IndexWriter's constructor specifies the directory where the index should be stored. The second parameter provides the implementation of Analyzer that should be used for pre-processing the text before it is indexed. This particular implementation of Analyzer eliminates stop words, converts tokens to lower case, and performs a few other small input modifications, such as eliminating periods from acronyms. The last parameter is a boolean flag that, when true, tells IndexWriter to create a new index in the specified directory, or overwrite an index in that directory, if it already exists. A value of false instructs IndexWriter to instead add Documents to an existing index. We then create a blank Document, and add a Field called fieldname to it, with a value of the String that we want to index. Once the Document is populated, we add it to the index via the instance of IndexWriter. Finally, we close the index. This is important, as it ensures that all index changes are flushed to the disk. fieldname As I already mentioned, Analyzers are components that pre-process input text. They are also used when searching. Because the search string has to be processed the same way that the indexed text was processed, it is crucial to use the same Analyzer for both indexing and searching. Not using the same Analyzer will result in invalid search results. The Analyzer class is an abstract class, but Lucene comes with a few concrete Analyzers that pre-process their input in different ways. Should you need to pre-process input text and queries in a way that is not provided by any of Lucene's Analyzers, you will need to implement a custom Analyzer. If you are indexing text with non-Latin characters, for instance, you will most definitely need to do this. Pages: 1, 2 Next Page © 2018, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://archive.oreilly.com/pub/a/onjava/2003/01/15/lucene.html
CC-MAIN-2018-09
refinedweb
1,046
56.66
In a few of my controllers I have an action that does not have a corresponding route because it is accessed only via a render ... and return in other controller actions. For example, I have an action def no_such_page # displays a generic error screen end In my RSpec controller test, how do I 'get' that method and look at the response body? If I try: get :no_such_page response.status.should be(200) it of course gives the error No route matches {:controller=>"foo", :action=>"{:action=>:no_such_page}"} Update Looking back over your question, it doesn't make sense to me now since you say that you are only accessing this action via render ... and return, but render renders a view, not an action. Are you sure that you even need this action? I think a view spec is the place for this test. Original answer It doesn't make sense to test the response code of an action which will never be called via an HTTP request. Likewise get :no_such_page doesn't make sense as you can't "get" the action (there is no route to it), you can only call the method. In that sense, the best way to test it would be to treat it just like any other method on a class, in this case the class being your controller, e.g. PostsController. So you could do something like this: describe PostsController do ... other actions ... describe "no_such_page" do it "displays a generic error screen" do p = PostsController.new p.should_receive(:some_method).with(...) p.no_such_page end end end But in fact, judging from what you've written, it sounds to me like your action has nothing in it, and you're just testing the HTML output generated by the corresponding view. If that's the case, then you really shouldn't be testing this in controller specs at all, just test it using a view spec, which is more appropriate for testing the content of the response body.
https://www.codesd.com/item/in-rspec-how-to-test-a-controller-action-that-does-not-have-a-route.html
CC-MAIN-2021-04
refinedweb
329
67.49
21 February 2007 08:25 [Source: ICIS news] SINGAPORE (ICIS news)--European gas majors Linde and Air Liquide on Wednesday signed agreements to acquire each other’s shares in six Asian joint ventures. ?xml:namespace> This was done in accordance with a European Commission (EC) anti-trust regulation following Linde’s acquisition of rival BOC Group in September 2006. Linde had committed itself to the EC’s condition that it was to terminate joint ventures between BOC and Air Liquide either by selling the BOC shares it held or by purchasing shares from Air Liquide, the German company said. Under the agreements, Linde will purchase Air Liquide’s shares in gas companies Malaysian Oxygen and Hong Kong Oxygen & Acetylene, while selling its stake in Singapore Oxygen, Thailand-based Eastern Industrial Gases, Vietnam Industrial Gases and Brunei Oxygen to the French company. Once the transactions have been completed, Linde will receive a net purchase price of €275m ($361.8m). The pro-rata revenue that Linde will acquire and sell through the transactions amounts to approximately €125m and €110m respectively. Following the acquisitions, Linde will have full ownership of Hong Kong Oxygen & Acetylene and a 45% holding in Malaysian Oxygen. It is required to make a takeover bid for the remaining 55% of the Malaysian joint venture, worth about €249m. In a similar move last December, Linde sold its 45% stake in Japan Air Gases to Air Liquide for €590
http://www.icis.com/Articles/2007/02/21/9008120/linde-air-liquide-sign-jv-acquisition-deals.html
CC-MAIN-2013-48
refinedweb
238
54.36
Sometime in late 2012 I was discussing dissertation project ideas with my girlfriend, as she was coming up to her final year of a computing bachelors. The usual option chosen by many graduates would be to just build a website or an app, or do some form of market research. We decided to encompass all 3 to produce something that works, but ultimately something that could be of value. If I had the time, energy, and funds I’d pursue this as it has potential for a startup, but I don’t, so the important thing that I’ve taken away is the experience working with groovy, grails, and android. The Idea… There are 2 main business drivers behind this project. Firstly we wanted to provide a service whereby restaurant owners can register, create surveys, and make them accessible to staff such as printing QR codes onto the back of their menus. Secondly, we wanted to approach this from the end users point of view, whereby customers sitting in the restaurant could download an app for free off the public market places, scan the said QR code, and be presented with the survey the restaurant owner had created. They would fill it in via the app and submit, the restaurant owner then has immediate access to the results in the form of statistics and graphs. The outcomes that we’re after: - Better visibility to restaurant owners on how their customers feel - Easy and seamless access to surveys for the customers - A scalable application which can handle increasing users as demand grows - A platform for advertising new products and features There are 3 components to this solution: - Grails web application - Rest API (built into the grails application) - Android app Why Grails? - Develop in groovy, so very accessible to java developers. - Quick to prototype with “convention over configuration” - Views auto generated if using scaffolding. - Easily deployable into the cloud, package as a war and deploy to cloudbees The Prototype… If you’re starting out with grails, I’d highly recommend that you get a copy of IntelliJ ultimate edition (and a copy of Grails in action), the support for grails is fantastic and I found it far easier than using eclipse. Whilst there are some excellent tutorials on grails out there (the official documentation is also very good) I’ll hold off and just jump right into how the application works. One of the awesome features of grails is that it follows the “convention over configuration”, which simply means that if you follow the convention implied by the framework, you don’t have to be concerned about configuration. You can’t escape configuration entirely, but boilerplate plumbing can be inferred by convention. An example of that is if you name your controllers like “SurveyController”, grails automatically knows its a controller for the survey class, based on naming conventions. A similar convention applies for views. Domain model Our data model is quite simple. We have a user, the user has some surveys, each survey has a number of questions, and each questions has a number of predefined responses. The domain classes are self explanatory, but it’s probably worth mentioning a few tweaks I made. class User { String firstName String lastName String emailAddress String companyName String facebookPageLink String twitterHandle static hasMany = [surveys: Survey] static constraints = { facebookPageLink nullable: true twitterHandle nullable: true } } By default, all fields are mandatory, however in the above example of the User class we can override these constraints to set them as nullable. There are various other constraints that you can set, have a look at the documentation. class Survey { Integer id String title static hasMany = [questions: Question] static mapping = { questions lazy: false } static constraints = { id() title() questions() } String toString(){ return title } } The relationships between the classes are defined by the “static hasMany”. This basically says that one Survey has relationships to many Questions, and this relationship is identified by “questions”. The mapping block instructs the questions to be eagerly loaded, so once a survey is loaded into memory, so are all of its questions, opposed to just the Ids which would then be loaded lazily. It’s also useful to override the toString method on your domain objects, particularly if you have relationships as the scaffolding will create drop down lists in your views. If you don’t override toString with something sensible, you’ll just see the object hash codes instead, which isn’t very useful to the user. Controllers It’s the responsibility of the controllers to manipulate the underlying data model (via services for example), and respond with views to the user. You can read more about the MVC pattern here. To get started, you could simply enable scaffolding like so. class LoginController { static scaffold = true def index = { render(view: "login.gsp") } } Scaffolding is an excellent feature of grails to get you started. Grails knows the structure of your domain object, therefore it is able to dynamically create controller CRUD operations, and views to manipulate your objects. That one small line of code and you can create, updated, delete, and view your objects! Fantastic eh?! The bad news… Whilst scaffolding is great to get you started, the moment you want to do something out of the ordinary, or customisation on views, scaffolding becomes a bit useless, and you’ll have to implement your own controllers (and possibly views). Fortunately, grails is quite flexible so you can leave scaffolding on and just override the methods that you want to customise. As with views, you’re best off getting grails to generate them for you and then customise them, to save you having to write the entire controller/view from scratch. The methods you can override, and there general uses are: - index – default action, usually just redirects to list - list – list all of the objects, handle pagination, filtering etc - create – render view to create new object - save – handle creation of new object, validation etc. - edit – render view to edit an object - update – handle update of object - delete – delete an object You can read more about the controller actions on the grails documentation. You can see in the show() method on the SurveyController that I’ve customised it to to add some charts into the response model. You can see how I generate the chart data by looking at the source code in github. The view can then render these as javascript charts (which I’ll come onto in a moment) def show(Long id) { def surveyInstance = Survey.get(id) if (!surveyInstance) { flash.message = message(code: 'default.not.found.message', args: [message(code: 'survey.label', default: 'Survey'), id]) redirect(action: "list") return } def charts = getCharts(surveyInstance) [surveyInstance: surveyInstance, charts: charts] } Views Being quite fond of the default views that grails generates, and not wanting to invest a great deal of time with customisation for this prototype, I chose to generate the views and then just tweak as I needed. In reality, the only customisation I needed to do was to place a “generate QR code” link, and to insert some javacript charts for displaying survey statistics. Having assessed HighCharts, D3, and the Google visualisation API, I opted for the latter as I felt it was far simpler to use and I didn’t have any need for the advanced features that HighCharts and D3 come with, and there was a plugin for gvisualisation. Displaying charts was straightforward, after installing the visualisation plugin, add this snippet of code to iterate over the charts that were added to the model and display a barCoreChart. <g:each <gvisualization:barCoreChart <div id="chart-${item.question_id}" align="center"></div> </g:each> This would then display something like the following, you can change various elements of the charts such as the chart type, axis labels, sizes and titles, please refer to the documentation. QR codes QR codes make it incredibly easy to share data to android devices, my intention was to embed a user ID in a QR code, when scanned the app can request all surveys pertinent to the user ID. Generating QR codes is easy with the qrcode plugin. I have provided a link on the users view to generate a QR code: <span class="property-value" aria- <g:linkGenerate QR Code</g:link> </span> This is bound to the generateQrCode action on the user controller, which will create a QR code from a user id and display it def generateQrCode(Long id){ println "Generate QR code here..." String data = "$id" int qrSize = 500 QRCodeRenderer qrcodeRenderer = new QRCodeRenderer() qrcodeRenderer.renderPng(data, qrSize, response.outputStream) } As you can see, it is as simple as providing the data to be encoded, the size (x==y), and the output stream, in this case the response. When you click the link, you should see the following: API The website element is designed for the restaurant owners, the end users will be using an android app to complete surveys. Whilst I could have developed a mobile responsive page, I felt that an android app would bring a better overall experience to the user. I have created a controller, ApiController that enables users to request surveys, and post responses. Firstly, I created the URL mappings for this new controller static mappings = { "/api/customer/$customerid"(controller: "api", action: 'getCustomer') "/api/survey/$surveyid"(controller: "api", action: [GET: 'getSurvey']) "/api/survey"(controller: "api", action: [POST: 'surveyComplete']) "/$controller/$action?/$id?"{ constraints { // apply constraints here } } "/"(controller: "home") "500"(view:'/error') } Requests on /api/customer/$customerid, such as /api/customer/123 are routed to the getCustomer method on the api controller. The same is true for the second mapping, however the action is a GET on getSurvey (in hindsight, the first mapping should be restricted to the GET method too). The third mapping is a POST on /api/survey which will be invoked when the user has completed a survey on their device. def getCustomer(){ User u = User.get(params.customerid) def surveysToPresent = [:] u.surveys.each { surveysToPresent << [title: it.title, id: it.id] } render(contentType: 'text/json') {[ 'company_name': u.companyName, 'twitter' : u.twitterHandle, 'facebook' : u.facebookPageLink, 'surveys' : [surveysToPresent] ]} as JSON } The getCustomer method finds the user from the customerid on the request path, retrieves the surveys and transforms them to a map containing the title and id (we don’t need the entire survey object when the user is presented with a list of surveys to select). The render statement enables us to return a json response very easily, we just return a map and grails (jackson) takes care of the json marshalling. def getSurvey(){ Survey s = Survey.get(params.surveyid) def questionsToPresent = [:] questionsToPresent = s.questions.collect { [ id: it.id, text: it.text, responses : it.responses.collect{ resp -> [id: resp.id, text: resp.text] } ] } render(contentType: 'text/json') {[ 'id' : s.id, 'title': s.title, 'questions' : questionsToPresent ]} as JSON } The getSurvey method behaves in a similar manner to getCustomer, it builds a map and renders as json. def surveyComplete(){ def jsonObject = request.JSON Survey theSurvey = Survey.findById(jsonObject.id) jsonObject.responses.each{ response -> theSurvey.questions.find {it.id == response.question_id}.responses.find {it.id == response.response_id}.numberOfPeopleSelected++ } theSurvey.save(flush: true, failOnError: true) render(status: 204) } The surveyComplete will retrieve a survey by id, find the responses the user has provided, and increment a count. The survey is then saved and a “204 No Content” is returned. I’ll cover how the android app consumes these services in my next post. Deployment As this project is just a prototype, I decided to host it on a free Cloudbees instance. The application doesn’t have any persistence layer, and all data is held in memory (which is fine for its current purpose), so when Cloudbees hibernates the instance after a period of inactivity, all user data will be lost. Deploying is simple, build the war using grails war Then upload the war file from the target directory to your cloud bees account, or use the command line cloud bees SDK. View source code on Github View live demo (if the live demo link doesn’t work, try again in 10 minutes as the instance will be waking from hibernation)
http://www.jameselsey.co.uk/blogs/techblog/tag/api/
CC-MAIN-2017-22
refinedweb
2,012
50.57
[jasspa] Re: ISO accent weirdness (and fix) Expand Messages - Okay, I knew this would bite one day! Please note the following, it will help in understanding the explanation of whats going on: * Internally ME uses ISO, i.e. for spelling. * ME knows that your display font is not (OEM say). * For spelling and word operations etc ME does an ISO to OEM conversion. It appears that Thomas is perverting this, that is to say you are setting the display font to OEM and then using ANSI text, hence the solid boxes etc. A simple example of a problem this causes is word movement. Because you set your display to OEM ME knows that OEM accented letters are word characters but not ISO, net effect is that when moving forward a word it will stop in the middle at the solid box (which is obvious not a letter). ME tries to help users with this problem by providing two translation commands user-to-latin-font and latin-to-user-font (probably very similar to your S-f3 command). The iso-accents-mode tries to be smart, it knows that "o translates to ISO 0246 but thats no use to the user (or so I thought) if they're using an OEM font so it translates it to OEM and if the font doesn't support o umlaute then it 'should' translate to a simple 'o' char (best that can be done). The idea here is that the user writes in their displayed font and then when finished uses user-to-latin-font to convert it to ISO. As far as a solution to Thomas' problem, I think the enclosed fix will cause as many problems as its solves for the reasons outlined above. So if any one has any ideas I'm all ears. The only suggestion I have is introducing a flag to disable the conversion, but I'd like other opinions. I hope that clears up MEs weirdness! Steve > Subject: [jasspa] ISO accent weirdness (and fix) > From: Thomas Hundt <thundt@...> > Date: Thu, 07 Oct 1999 12:04:08 -0700 > To: "JASSPA MicroEmacs Mailing List" <[email protected]> > > ME exhibits some strange behavior surrounding iso-accents-mode. > > Displaying > ---------- > We all know that not all the ISO (international) characters can be displayed > in all the fonts. The TrueType fonts (e.g., Lucida Console) tend to have all > the characters; the older system fonts (e.g., Fixedsys) do not. If the ISO > characters are in your file, they'll usually appear as solid boxes. That's > fine, as long as I know the character is there, I can hit Shift-F3 a couple of > times (which I have mapped to a macro that switches fonts) and I'll be able to > view it. > > Entry > ----- > You can, at all times, enter these characters from the keyboard using the > Windows keypad codes, e.g., an � (umlauted o) is typed as Alt-0246 (the > numbers must be typed on the numeric keypad, not the top of the keyboard). � > (sz) is typed as Alt-0223. > > ME has nifty features to make it easier to type these characters: > > ; Make accents available. "-a-esc-esc makes an � > iso-accents-mode > !force global-bind-key iso-accents-expand "esc esc" > > This enables typing of the above characters as " o esc esc for the � and s z > esc esc for the �. (The spaces I put in there for clarity. You're really > typing "o and then hitting esc twice.) > > Problem > ------- > Most of these shortcuts just plain don't work. For example, sz *never* works. > "o *always* works. The ones that don't work put in some other strange-valued > character. > > The code is in abbrev.emf (not in langutl -- docsbug!). The tables look right > to me (lists of the abbreviations and of the replacement characters). > Something in the guts of the program is failing to map the proper character. > > After some messing around, I found that you have to comment out the line > ;; 0 set-char-mask "L" ; FIX > and everything becomes groovy. > > Perhaps I'm doing something wrong (char-mask? not sure what that is) when I > switch fonts. That code is below, after the routine that has the line above. > > These lists could stand to be augmented to include characters like the German > quotes that look like << and >> (I'm too lazy to look up the code right now). > I'll probably get around to this one of these days. > > > -Th > > > [from abbrevs.emf] > ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; > ; iso-accents-expand routines > > define-macro iso-accents-expand > !if ¬ &set #l0 &lfind > "|+-|12|14|34|ae|AE|`a|^a|'a|\"a|~a|.a|co|`e|^e|'e|\"e|`i|^i|'i|\"i|~n|oe|`o|^o|'o|\"o|~o|/o|rg|sz|tm|`u|^u|'u|\"u|\"y|'y|`A|^A > &mid @wl &sub $window-col 2 2 > !abort > !endif > ;; ml-write &spr "found %s at %s" &mid @wl &sub $window-col 2 2 #l0 > backward-delete-char > backward-delete-char > set-variable #l0 &lget > " > #l0 > ;; ml-write &spr "the new char is %d %s" #l0 #l0 > ;; 0 set-char-mask "L" ; FIX > !if &set #l1 &sin #l0 $result > 0 set-char-mask "U" > set-variable #l0 &mid $result #l1 1 > !endif > insert-string #l0 > !emacro > > > > > Here is my code to switch fonts: [thundt.emf] > ;------------------------------------------------------------ > ; cycle-font: cycles through font sizes > ;------------------------------------------------------------ > define-macro cycle-font > !if &seq %os "mswin" > ; change-font "name" charSet weight width height > set-variable %font-number &mod &add %font-number 1 7 > !if &equ %font-number 1 > 1 change-font "Lucida Console" 1 0 0 15 > ml-write "[font Lucida Console 15]" > !elif &equ %font-number 2 > 1 change-font "Lucida Console" 1 0 0 12 > ml-write "[font Lucida Console 12]" > !elif &equ %font-number 3 > 1 change-font "Lucida Console" 1 0 0 11 > ml-write "[font Lucida Console 11]" > !elif &equ %font-number 4 > 1 change-font "vt100" 1 0 0 16 > ml-write "[font vt100 16]" > !elif &equ %font-number 5 > 1 change-font "vt100" 1 0 0 10 > ml-write "[font vt100 14]" > !elif &equ %font-number 6 > 1 change-font "vt100" 1 0 0 12 > ml-write "[font vt100 12]" > !else > 1 change-font "Fixedsys" 1 0 0 15 ; used in my mewin > ml-write "[font Fixedsys]" > !endif > !elif &seq %os "unix" > set-variable %font-number &mod &add %font-number 1 6 > !if &equ %font-number 1 > 1 change-font "6x10" > ml-write "[font 6x10]" > !elif &equ %font-number 2 > 1 change-font "7x13" > ml-write "[font 7x13]" > !elif &equ %font-number 3 > 1 change-font "8x13" > ml-write "[font 8x13]" > !elif &equ %font-number 4 > 1 change-font "9x15" > ml-write "[font 9x15]" > !elif &equ %font-number 5 > 1 change-font "10x20" > ml-write "[font 10x20]" > !else > 1 change-font "-b&h-*-bold-r-*-*-14-*-72-72-m-*-*-*" > ml-write "[font -b&h-*-bold-r-*-*-14-*-72-72-m-*-*-*]" > !endif > !else > ml-write "[No fonts available for this platform]" > !endif ; unix > > > !emacro > global-bind-key cycle-font S-f3 > > > > > ------------------------------------------------------------------------ > __________________________________________________________________________ > > > > This is an unmoderated list. JASSPA is not responsible for the content of > > any material posted to this list. > > > > > > > Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/jasspa/conversations/topics/86?o=1&d=-1
CC-MAIN-2015-27
refinedweb
1,200
74.19
This article will take a look at the pinout of a basic 16×2 LCD module. Then, it’ll discuss some important instructions for the common LCD modules that are compatible with the HD44780 LCD controller/driver chip. Finally, the article will give an example C code to interface an AVR ATMEGA32 microcontroller with a 16×2 LCD. The Module Pinout The 1602A is a 16 character, 2 line display that is similar to many other 16x2 displays in use today. Each character is displayed in a 5 column × 8 row dot matrix or a 5 column × 10 row dot matrix. These pixels should be controlled correctly so that we can display the desired characters. Directly controlling all of these pixels using a microcontroller is not easy, that’s why we usually use LCD modules that have a controller/driver chip to facilitate connecting the LCD to a processor. A common LCD driver is HD44780. The pinout for these LCD modules is usually as shown in Figure 1 below. Figure 1. The common pinout for a 16×2 LCD module. Image courtesy of AAC. The GND and Vcc (+5 V) pins are the power supply pins. The VEE pin is used to adjust the display contrast. We can use a potentiometer to connect VEE to a suitable positive voltage below +5 V. The Led+ and Led- pins are used to turn on the display backlight (connect them to +5 V and ground, respectively). The RS pin is the Register Selector pin for the LCD controller. The HD44780 has two registers: an Instruction Register (IR) and a Data Register (DR). The RS pin is a control pin that specifies whether the IR or DR should be connected to the Data Bus (DB0 to DB7 pins). When RS is low, the IR is selected and DB7-DB0 are treated as an instruction code. For example, the instruction code can represent a “display clear” command. When the RS is high, the DR is selected and DB7-DB0 are treated as data. In this case, DB7-DB0 can be the code for representing a character such as “a”. The R/W pin specifies whether we are writing to the module (R/W=0) or reading from it (R/W=1). The E pin (for “Enable”) starts a read/write operation and will be discussed in the next section. The Timing Diagram for a Write Operation Although we can both write and read from the data bus, a write operation is more common. That’s why, in this section, we’ll examine the timing diagram of a write operation which is shown in Figure 2 below. The definition of the different parameters and the expected values are given in Table 1. Figure 2. Timing diagram of a write operation. Image courtesy of HITACHI. Table 1 Courtesy of HITACHI. The timing diagram shows that we should set the RS and R/W pins to appropriate values and wait for tAS( which should be greater than 40 ns) before setting the E pin to logic high. According to the table, the E signal should have a width (PWEH) greater than 230 ns. Then, the E signal should have a high-to-low edge which starts a write operation. Note that tDSW before this edge the data must be valid. Besides, after the falling edge of E, the control signals and the data should not change for some time denoted by tAH and tH in the figure. Another important issue is the “Enable Cycle Time” which should be greater than 500 ns. This shows that we should wait for some time before starting to do the next read or write operation. To summarize, a high-to-low transition on E starts data read or write but there are certain timing conditions that must be met. When interfacing the LCD module with an MCU, we’ll have to take these considerations into account. Important Instructions You can find the complete list of the instructions for an HD44780-compatible LCD module on page 24 of this datasheet. Here, we’ll only use some of these instructions to do some basic operations. Clear Display This instruction clears the display. You’ll have to set both RS and R/W to the logic low and perform a write operation which applies the hexadecimal value 0x01 to the data bus. Moreover, the datasheet states that the “clear display” command “sets DDRAM address 0 in the address counter”. What does this mean? Figure 3. Courtesy of HITACHI. The Display Data RAM (DDRAM) is a RAM that stores the ASCII code for the characters that we send to the LCD module. The DDRAM can store up to 80 characters (it has a capacity of 80×8 bits). However, only some of these 80 characters are displayed on the LCD. For example, in the case of a 16×2 LCD, only 32 of these memory locations are displayed. The relationships between the displayed DDRAM addresses and the LCD positions are shown in Figure 4. Figure 4. Courtesy of HITACHI. According to Figure 4, if we write a particular character to the DDRAM address 0x00, it will be displayed in the first cell of the upper line. Similarly, if we write a character to address 0x40, that will appear in the first cell of the lower line. To go to a particular address of the DDRAM, we can write the desired address to the Address Counter (AC). Moreover, the AC determines the position on the LCD that a character entered by a write operation goes to. Note that LCDs support shift operations that can change the relationships that were shown in Figure 4. For example, a left shift applied to the default status of Figure 4 will lead to Figure 5. For more information, please refer to the datasheet. Figure 4. Courtesy of HITACHI. Now that you’re familiar with the DDRAM and AC, the description of the “Clear Display” command should make sense to you. The “Clear Display” command “sets DDRAM address 0 in the address counter”, hence, it will return the cursor to the home position (the first cell of the upper line). Return Home Figure 6 gives the code for this command and its description. Figure 6. Courtesy of HITACHI. This command also brings the cursor back to the home position and returns the display to its original status if it was shifted. For this command, DB0 is a don’t-care. Entry Mode Set The details of this command are given in Figure 7. Figure 7. Courtesy of HITACHI. When “I/D” is 1, the cursor position is incremented by one (it moves to the right on the display) after a write operation. When “I/D” is 0, the cursor position is decremented by one (it moves to the left). The S bit specifies whether to shift the display or not (A shift changes the DDRAM addresses that are displayed on the LCD). When S is 0, the display does not shift. For the shift options (when S=1), please refer to the datasheet. In many cases, we want the cursor position to increment after a write operation while the display remains still (the shift option is not utilized). For such applications, the command code for DB7-DB0 will be the hexadecimal value 0x06. Display ON/OFF Control The details of this command are given below. Figure 8. Courtesy of HITACHI. By setting the D bit to 1 or 0, we can respectively turn the display on and off. Similarly, the C bit can be used to turn the cursor on/off. B controls the blinking capability of the cursor position. Hence, if we write the hexadecimal value 0x0C to the DB7-DB0 as an instruction, the LCD will turn on and the cursor will be off. Function Set The following figure gives details of the “Function Set” command. Figure 9. Courtesy of HITACHI. The DL bit specifies the data length for the LCD module. If DL=1, the data is sent and received as an 8-bit word on the data bus (DB7 to DB0). When DL=0, the data is sent and received in 4-bit lengths (DB7 to DB4). To keep things simple, we’ll use the 8-bit option in this article. The N bit specifies the number of display lines. For a single line display, N should be 0. For two lines and more, N should be 1. “F” determines the character font and most often is 0. Hence, when working with a 16×2 LCD that receives and sends data in 8-bit lengths, the “Function Set” code for DB7-DB0 will be the hexadecimal value 0x38. Set DDRAM Address This instruction sets the address of the DDRAM. It can be used to write a character in a particular cell of the LCD. For example, sending the hexadecimal value 0x80 to the data bus will make the cursor to move to the first cell of the upper row. Figure 10. Courtesy of HITACHI. Now, we will use the above commands to operate a 16×2 LCD. The following table summarizes the commands discussed above. Table 2 Interfacing the LCD with an AVR Now, we will write some functions to connect a 16×2 LCD to a ATMEGA32. Assume that, as shown in Figure 11, port A is connected to the LCD data bus and the first three pins of port B are used to control the RS, RW, and E pins of the LCD. Note that the connections for the VSS, VDD, and VEE are not shown in Figure 11. Figure 11 We need two functions to write commands and data to the LCD module. Before that, let’s define the following three constants: const unsigned char RS_Pin=0x01; const unsigned char RW_Pin=0x02; const unsigned char E_Pin=0x04; These constants will be used throughout the code to specify the PORTB pin numbers that are connected to the control pins of the LCD. For example, RS is connected to the first pin of port B in Figure 11 so the RS_Pin is 00000001. E is connected to the 3rd pin so E_Pin is 00000100. In this way, we can easily modify the constants to adapt the code for a future project that uses a different pin connection. We can send instructions to the LCD using the following function: void lcd_write_instruc (unsigned char instruc) { delay_ms(2); PORTB=PORTB & (~(RS_Pin)); //it is an instruction rather than data PORTB=PORTB & (~(RW_Pin)); //it is write operation PORTB=PORTB & (~(E_Pin)); //set E to 0 (see Figure 1) PORTA=instruc; //put the instruction on the data bus PORTB=PORTB | (E_Pin); //set E to 1 (see Figure 1) PORTB=PORTB & (~(E_Pin)); // set E to 0 to generate a falling edge } Here, “instruc” is the command code that must be sent to the LCD data bus. The first line of the code uses the delay_ms() function from “delay.h” library to introduce a delay of 2 ms. We need to give the LCD some time to finish its current job (if there is any). This delay is introduced to take the “Enable Cycle Time” constraint of Figure 2 into account. Similarly, we can write a function to send a character to the LCD: void lcd_write_char (unsigned char c) { delay_ms(2); PORTB=PORTB | (RS_Pin); //it is data rather than an instruction PORTB=PORTB & (~(RW_Pin)); PORTB=PORTB & (~(E_Pin)); PORTA=c; PORTB=PORTB | (E_Pin); PORTB=PORTB & (~(E_Pin)); } “c” is the data that must be sent to the LCD data bus. The following function initializes the LCD by sending some commands from Table 2. It also specifies PORTA and PORTB of the MCU as outputs. void lcd_init (void) { delay_ms(2); DDRA=0xFF; DDRB=0xFF; lcd_write_instruc(0x06); //Increment mode for the cursor lcd_write_instruc(0x0C); //The display on, the cursor off lcd_write_instruc(0x38); //An 8-bit data bus, two-line display } The following function clears the display: void lcd_clear(void) { delay_ms(2); lcd_write_instruc(0x01); //Clear the display lcd_write_instruc(0x02); //returns the display to its original status if it was shifted. } To set the AC to a given address, we can use the following function: void lcd_goto(unsigned char column, unsigned char row) { delay_ms(2); if(row==0) lcd_write_instruc(0x80 + column); //see Figures 4 and 10 if(row==1) lcd_write_instruc(0xC0+ column); //see Figures 4 and 10 } And, finally, to write a string of characters, we can successively use our lcd_write_char() function: void lcd_write_string(char *s) { delay_ms(2); while(*s != 0) { lcd_write_char(*s); s++; } } Using these functions we can have the basic functionality of the LCD module. The following code shows the main() function of an example: void main(void) { lcd_init(); lcd_clear(); lcd_goto(0,0); lcd_write_string(" All About "); lcd_goto(0,1); lcd_write_string(" Circuits "); while (1) { continue; } } The output of the above code compiled and simulated using CodeVision and Proteus tools is shown in Figure 12. Figure 12 It’s worth to mention that the MCU delay to perform the different lines of the above code provides sufficient delay to satisfy the different timing constraints of Figure 2 particularly those related to the E signal. In fact, I have used the above functions with even faster 32-bit MCUs but if you run into any trouble, you can introduce a small delay in the appropriate lines of the code to make sure that the timing requirements are met. In this article, we looked at the pinout of a basic 16×2 LCD module. We also examined some of the most important instructions for the HD44780 compatible LCD modules. The example C code given in the article can be adjusted to be used with MCUs from other vendors. To see a complete list of my articles, please visit this page. 2 CommentsLogin In the section titled “Interfacing the LCD with an AVR” when assigning the E_Pin to a pin value doesn’t make sense to me, “E is connected to the 3rd pin so E_Pin is 00000100”. Shouldn’t it be “E_Pin is connected to the 3rd pin on the Atmega32 so E_Pin is 00000011” and the code should be “const unsigned char =0x03”. Please advise. Great tutorial, it provides just enough information to make me curious. Copying and pasting this code directly into AVR Studio 7 requires minor changes to the code. Most of the time was spent configuring AS7 to work with Sparkfun’s AVR pocket programmer. Instructions can be found here : 1. 2. 3. I had no success with RS_Pin, RW_Pin, and E_Pin as const unsigned chars assigned to 0x01, 0x02, or 0x03. I bit-twiddled (bit shifting) instead of bit masking. I used PB0, PB1, and PB2 for RS_Pin, RW_Pin, and E_Pin and PD0-PD7 for D0-D7 on the LCD respectively. For example change the following from the write_char function: PORTB=PORTB & (~(RS_Pin));—> to PORTB=PORTB & (~(1<<0)); // clearing PB0 bit of PORTB aka RS_Pin PORTB=PORTB & (~(RW_Pin));—> to PORTB=PORTB & (~(1<<1)); // clearing PB1 bit of PORTB aka RW_Pin PORTB=PORTB & (~(E_Pin));—> to PORTB=PORTB & (~(1<<2)); // clearing PB2 bit of PORTB aka E_Pin Anywhere there is reference to the const unsigned chars replace with the shift operations. You can then comment out the const unsigned chars. In the code provided by this tutorial, the function delay_ms() needs to have a leading underscore. So change all delay_ms() to _delay_ms(). The leading underscore is to signify internal library use, i.e. delay.h. Lastly, define the CPU frequency as such in this order: #include <avr/io.h> #define F_CPU 16000000UL #include “util/delay.h” Have fun.
https://www.allaboutcircuits.com/technical-articles/how-to-a-162-lcd-module-with-an-mcu/
CC-MAIN-2019-39
refinedweb
2,587
70.63
Detecting water with radar ¶ Sign up to the DEA Sandbox to run this notebook interactively from a browser Compatibility: Notebook currently compatible with the DEA Sandboxenvironment only Products used: s1_gamma0_geotif_scene Background¶ Over 40% of the world’s population lives within 100 km of the coastline. However, coastal environments are constantly changing, with erosion and coastal change presenting a major challenge to valuable coastal infrastructure and important ecological habitats. Up-to-date data on the position of the coastline is essential for coastal managers to be able to identify and minimise the impacts of coastal change and erosion. While coastlines can be mapped using optical data (demonstrated in the Coastal Erosion notebook), these images can be strongly affected by the weather, especially through the presence of clouds, which obscure the land and water below. This can be a particular problem in cloudy regions (e.g. southern Australia) or areas where wet season clouds prevent optical satellites from taking clear images for many months of the year. Sentinel-1 use case¶ Radar observations are largely unaffected by cloud cover, so can take reliable measurements of areas in any weather. Radar data is readily available from the ESA/EC Copernicus program’s Sentinel-1 satellites. The two satellites provide all-weather observations, with a revisit time of 6 days. By developing a process to classify the observed pixels as either water or land, it is possible to identify the shoreline from radar data. Description¶ In this example, we use data from the Sentinel-1 satellites to build a classifier that can determine whether a pixel is water or land in radar data. Specifically, this notebook uses an analysis-ready radar product known as backscatter, which describes the strength of the signal recieved by the satellite. The worked example takes users through the code required to: Load Sentinel-1 backscatter data for an area of interest Visualise the returned data. Perform pre-processing steps on the Sentinel-1 bands. Design a classifier to distinguish land and water. Apply the classifier to the area of interest and interpret the results. Investigate how to identify coastal change or the effect of tides. datacube import numpy as np import xarray as xr import matplotlib.pyplot as plt from scipy.ndimage.filters import uniform_filter from scipy.ndimage.measurements import variance import sys sys.path.insert(1, '../Tools/') from dea_tools.plotting import display_map Connect to the datacube¶ Activate the datacube database, which provides functionality for loading and displaying stored Earth observation data. [2]: dc = datacube.Datacube(app="Radar_water_detection") Analysis parameters¶ The following cell sets the parameters, which define the area of interest and the length of time to conduct the analysis over. The parameters are latitude: The latitude range to analyse (e.g. (-11.288, -11.086)). For reasonable loading times, make sure the range spans less than ~0.1 degrees. longitude: The longitude range to analyse (e.g. (130.324, 130.453)). For reasonable loading times, make sure the range spans less than ~0.1 degrees. If running the notebook for the first time, keep the default settings below. This will demonstrate how the analysis works and provide meaningful results. The example covers Melville Island, which sits off the coast of the Northern Territory, Australia. The study area also contains an additional small island, which will be useful for assessing how well radar data distinguishes between land and water. To run the notebook for a different area, make sure Sentinel-1 data is available for the chosen area using the DEA Sandbox Explorer. [3]: # Define the area of interest latitude = (-11.288, -11.086) longitude = (130.324, 130.453). [4]: display_map(x=longitude, y=latitude) [4]: Load and view Sentinel-1 data¶ The first step in the analysis is to load Sentinel-1 backscatter data for the specified area of interest. Since there is no time range provided, all available data will be selected. Please be patient. The data may take a few minutes to load. The load is complete when the cell status goes from [*] to [number]. [5]: # Specify the parameters to query on query = { "x": longitude, "y": latitude, "product": "s1_gamma0_geotif_scene", "output_crs": "EPSG:4326", "resolution": (0.0001356, 0.0001356) } # Load the data ds_s1 = dc.load(**query) Once the load is complete, examine the data by printing it in the next cell. The Dimensions argument revels the number of time steps in the data set, as well as the number of pixels in the latitude and longitude dimensions. [6]: print(ds_s1) <xarray.Dataset> Dimensions: (latitude: 1490, longitude: 952, time: 27) Data variables: vh (time, latitude, longitude) float32 0.0025751486 ... 0.00067797 vv (time, latitude, longitude) float32 0.15903875 ... 0.0044414997 Attributes: crs: EPSG:4326: Visualise VH band¶ [7]: # Scale to plot data in decibels ds_s1["vh_dB"] = 10 * np.log10(ds_s1.vh) # Plot all VH observations for the year ds_s1.vh_dB.plot(cmap="Greys_r", robust=True, col="time", col_wrap=5) plt.show() /usr/local/lib/python3.6/dist-packages/xarray/core/computation.py:565: RuntimeWarning: invalid value encountered in log10 result_data = func(*input_data) [8]: # Plot the average of all VH observations mean_vh_dB = ds_s1.vh_dB.mean(dim="time") fig = plt.figure(figsize=(7, 9)) mean_vh_dB.plot(cmap="Greys_r", robust=True) plt.title("Average VH") plt.show() What key differences do you notice between each individual observation and the mean? Visualise VV band¶ [9]: # Scale to plot data in decibels ds_s1["vv_dB"] = 10 * np.log10(ds_s1.vv) # Plot all VV observations for the year ds_s1.vv_dB.plot(cmap="Greys_r", robust=True, col="time", col_wrap=5) plt.show() [10]: # Plot the average of all VV observations mean_vv_dB = ds_s1.vv_dB.mean(dim="time") fig = plt.figure(figsize=(7, 9)) mean_vv_dB.plot(cmap="Greys_r", robust=True) plt.title("Average VV") plt.show() What key differences do you notice between each individual observation and the mean? What about differences between the average VH and VV bands? Take a look back at the map image to remind yourself of the shape of the land and water of our study area. In both bands, what distinguishes the land and the water? Preprocessing the data through filtering¶ Speckle Filtering using Lee Filter¶ You may have noticed that the water in the individual VV and VH images isn’t a consistent colour. The distortion you’re seeing is a type of noise known as speckle, which gives the images a grainy appearence. If we want to be able to easily decide whether any particular pixel is water or land, we need to reduce the chance of misinterpreting a water pixel as a land pixel due to the noise. Speckle can be removed through filtering. If interested, you can find a technical introduction to speckle filtering here. For now, it is enough to know that we can filter the data using the Python function defined in the next cell: [11]: # Adapted from def lee_filter(da, size): img = da.values img_mean = uniform_filter(img, (size, size)) img_sqr_mean = uniform_filter(img**2, (size, size)) img_variance = img_sqr_mean - img_mean**2 overall_variance = variance(img) img_weights = img_variance / (img_variance + overall_variance) img_output = img_mean + img_weights * (img - img_mean) return img_output Now that we’ve defined the filter, we can run it on the VV and VH data. You might have noticed that the function takes a size argument. This will change how blurred the image becomes after smoothing. We’ve picked a default value for this analysis, but you can experiement with this if you’re interested. [12]: # Set any null values to 0 before applying the filter to prevent issues ds_s1_filled = ds_s1.where(~ds_s1.isnull(), 0) # Create a new entry in dataset corresponding to filtered VV and VH data ds_s1["filtered_vv"] = ds_s1_filled.vv.groupby("time").apply(lee_filter, size=7) ds_s1["filtered_vh"] = ds_s1_filled.vh.groupby("time").apply(lee_filter, size=7) Visualise filtered data¶ We can now visualise the filtered bands in the same way as the original bands. Note that the filtered values must also be converted to decibels before being displayed. Visualise filtered VH band¶ [13]: # Scale to plot data in decibels ds_s1["filtered_vh_dB"] = 10 * np.log10(ds_s1.filtered_vh) # Plot all filtered VH observations for the year ds_s1.filtered_vh_dB.plot(cmap="Greys_r", robust=True, col="time", col_wrap=5) plt.show() [14]: # Plot the average of all filtered VH observations mean_filtered_vh_dB = ds_s1.filtered_vh_dB.mean(dim="time") fig = plt.figure(figsize=(7, 9)) mean_filtered_vh_dB.plot(cmap="Greys_r", robust=True) plt.title("Average filtered VH") plt.show() Visualise filtered VV band¶ [15]: # Scale to plot data in decibels ds_s1["filtered_vv_dB"] = 10 * np.log10(ds_s1.filtered_vv) # Plot all filtered VV observations for the year ds_s1.filtered_vv_dB.plot(cmap="Greys_r", robust=True, col="time", col_wrap=5) plt.show() [16]: # Plot the average of all filtered VV observations mean_filtered_vv_dB = ds_s1.filtered_vv_dB.mean(dim="time") fig = plt.figure(figsize=(7, 9)) mean_filtered_vv_dB.plot(cmap="Greys_r", robust=True) plt.title("Average filtered VV") plt.show() Now that you’ve finished filtering the data, compare the plots before and after and you should be able to notice the impact of the filtering. If you’re having trouble spotting it, it’s more noticable in the VH band. Plotting VH and VV histograms¶ Another way to observe the impact of filtering is to view histograms of the pixel values before and after filtering. Try running the next two cells to view the histograms for VH and VV. [17]: fig = plt.figure(figsize=(15, 3)) ds_s1.filtered_vh_dB.plot.hist(bins=1000, label="VH filtered") ds_s1.vh_dB.plot.hist(bins=1000, label="VH", alpha=0.5) plt.legend() plt.xlabel("VH (dB)") plt.title("Comparison of filtered VH bands to original") plt.show() [18]: fig = plt.figure(figsize=(15, 3)) ds_s1.filtered_vv_dB.plot.hist(bins=1000, label="VV filtered") ds_s1.vv_dB.plot.hist(bins=1000, label="VV", alpha=0.5) plt.legend() plt.xlabel("VV (dB)") plt.title("Comparison of filtered VV bands to original") plt.show() You may have noticed that both the original and filtered bands show two peaks in the histogram, which we can classify as a bimodal distribution. Looking back at the band images, it’s clear that the water pixels generally have lower VH and VV values than the land pixels. This lets us conclude that the lower distribution corresponds to water pixels and the higher distribution corresponds to land pixels. Importantly, the act of filtering has made it clear that the two distributions can be separated, which is especially obvious in the VH histogram. This allows us to confidently say that pixel values below a certain threshold are water, and pixel values above it are land. This will form the basis for our classifier in the next section. Designing a threshold-based water classifier¶ Given that the distinction between the land and water pixel value distributions is strongest in the VH band, we’ll base our classifier on this distribution. To separate them, we can choose a threshold: pixels with values below the threshold are water, and pixels with values above the threshold are not water (land). There are a number of ways to determine the threshold; one is to estimate it by looking at the VH histogram. From this, we might guess that \(\text{threshold} = -20.0\) is a reasonable value. Run the cell below to set the threshold. [19]: threshold = -20.0 The classifier separates data into two classes: data above the threshold and data below the threshold. In doing this, we assume that values of both segments correspond to the same water and not water distinctions we make visually. This can be represented with a step function: Visualise threshold¶ To check if our chosen threshold reasonably divides the two distributions, we can add the threshold to the histogram plots we made earlier. Run the next two cells to view two different visualisations of this. [20]: fig = plt.figure(figsize=(15, 3)) plt.axvline(x=threshold, label=f"Threshold at {threshold}", color="red") ds_s1.filtered_vh_dB.plot.hist(bins=1000, label="VH filtered") ds_s1.vh_dB.plot.hist(bins=1000, label="VH", alpha=0.5) plt.legend() plt.xlabel("VH (dB)") plt.title("Histogram Comparison of filtered VH bands to original") plt.show() [21]: fig, ax = plt.subplots(figsize=(15, 3)) ds_s1.filtered_vh_dB.plot.hist(bins=1000, label="VH filtered") ax.axvspan(xmin=-40.0, xmax=threshold, alpha=0.25, color="green", label="Water") ax.axvspan(xmin=threshold, xmax=-0.5, alpha=0.25, color="red", label="Not Water") plt.legend() plt.xlabel("VH (dB)") plt.title("Effect of the classifier") plt.show() If you’re curious about how changing the threshold impacts the classifier, try changing the threshold value and running the previous two cells again. Build and apply the classifier¶ Now that we know the threshold, we can write a function to only return the pixels that are classified as water. The basic steps that the function will perform are: Check that the data set has a VH band to classify. Clean the data by applying the speckle filter. Convert the VH band measurements from digital number (DN) to decibels (dB) Find all pixels that have filtered values lower than the threshold; these are the waterpixels. Return a data set containing the waterpixels. These steps correspond to the actions taken in the function below. See if you can determine which parts of the function map to each step before you continue. [22]: def s1_water_classifier(ds, threshold=-20.0): assert "vh" in ds.data_vars, "This classifier is expecting a variable named `vh` expressed in DN, not DB values" filtered = ds.vh.groupby("time").apply(lee_filter, size=7) water_data_array = 10 * np.log10(filtered) < threshold return water_data_array.to_dataset(name="s1_water") Now that we have defined the classifier function, we can apply it to the data. After you run the classifier, you’ll be able to view the classified data product by running print(ds_s1.water). [23]: ds_s1["water"] = s1_water_classifier(ds_s1).s1_water [24]: print(ds_s1.water) <xarray.DataArray 'water' (time: 27, latitude: 1490, longitude: 952)> array([[[ True, True, ..., False, False], [ True, True, ..., False, False], ..., [ True, True, ..., True, True], [ True, True, ..., True, True]], [[ True, True, ..., False, False], [ True, True, ..., False, False], ..., [ True, True, ..., True, True], [ True, True, ..., True, True]], ..., [[ True, True, ..., False, False], [ True, True, ..., False, False], ..., [ True, True, ..., True, True], [ True, True, ..., True, True]], [[ True, True, ..., False, False], [ True, True, ..., False, False], ..., [ True, True, ..., True, True], [ True, True, ..., True, True]]]) Assessment with mean¶ We can now view the image with our classification. The classifier returns either True or False for each pixel. To detect the shoreline, we want to check which pixels are always water and which are always land. Conveniently, Python encodes True = 1 and False = 0. If we plot the average classified pixel value, pixels that are always water will have an average value of 1 and pixels that are always land will have an average of 0. Pixels that are sometimes water and sometimes land will have an average between these values. The following cell plots the average classified pixel value over time. How might you classify the shoreline from the average classification value? [25]: # Plot the mean of each classified pixel value plt.figure(figsize=(15, 12)) ds_s1.water.mean(dim="time").plot(cmap="RdBu") plt.title("Average classified pixel value") plt.show() Interpreting the mean classification¶ You can see that our threshold has done a good job of separating the water pixels (in blue) and land pixels (in red). You should be able to see that the shoreline takes on a mix of values between 0 and 1, highlighting pixels that are sometimes land and sometimes water. This is likely to due the effect of rising and falling tides, with some radar observations being captured at low tide, and others at high tide. Assessment with standard deviation¶ Given that we’ve identified the shoreline as the pixels that are calssified sometimes as land and sometimes as water, we can also see if the standard deviation of each pixel in time is a reasonable way to determine if a pixel is shoreline or not. Similar to how we calculated and plotted the mean above, you can calculate and plot the standard deviation by using the std function in place of the mean function. If you’d like to see the results using a different colour-scheme, you can also try substituting cmap="Greys" or cmap="Blues" in place of cmap="viridis". [26]: # Plot the standard deviation of each classified pixel value plt.figure(figsize=(15, 12)) ds_s1.water.std(dim="time").plot(cmap="viridis") plt.title("Standard deviation of classified pixel values") plt.show() Interpreting the standard deviation of the classification¶ From the image above, you should be able to see that the land and water pixels almost always have a standard deviation of 0, meaning they didn’t change over the time we sampled. Areas along the coastline however have a higher standard deviation, indicating that they change frequently between water and non-water (potentially due to the rise and fall of the tide). With further investigation, you could potentially turn this statistic into a new classifier to extract shoreline pixels. If you’re after a challenge, have a think about how you might approach this. An important thing to recognise is that the standard deviation might not be able to detect the difference between noise, tides and ongoing change, since a pixel that frequently alternates between land and water (noise) could have the same standard deviation as a pixel that is land for some time, then becomes water for the remaining time (ongoing change or tides). Consider how you might distinguish between these different cases with the data and tools you have. Detecting change between two images¶ The standard deviation we calculated before gives us an idea of how variable a pixel has been over the entire period of time that we looked at. It might also be interesting to look at which pixels have changed between any two particular times in our sample. In the next cell, we choose the images to compare. Printing the dataset should show you that there are 27 time-steps, so the first has an index value of 0, and the last has an index value of 26. You can change these to be any numbers in between, as long as the start is earlier than the end. [27]: start_time_index = 0 end_time_index = 26 Next, we can define the change as the difference in the classified pixel value at each point. Land becoming water will have a value of -1 and water becoming land will have a value of 1. [28]: change = np.subtract(ds_s1.water.isel(time=start_time_index), ds_s1.water.isel(time=end_time_index), dtype=np.float32) # Set all '0' entries to NaN, which prevents them from displaying in the plot. change = change.where(change != 0) ds_s1["change"] = change Now that we’ve added change to the dataset, you should be able to plot it below to look at which pixels changed. You can also plot the original mean VH composite to see how well the change matches our understanding of the shoreline location. [29]: plt.figure(figsize=(15, 12)) ds_s1.filtered_vh_dB.mean(dim="time").plot(cmap="Blues") ds_s1.change.plot(cmap="RdBu", levels=2) plt.title(f"Change in pixel value between time={start_time_index} and time={end_time_index}") plt.show() Coastal change or tides?¶ Tides can greatly affect the appearance of the coastline, particularly along northern Australia where the tidal range is large (up to 12 m). Without additional data, it is difficult to determine whether the change above is due to the coastline having eroded over time, or because the two radar images were captured at different tides (e.g. low vs. high tide). The radar water classifier in this notebook could potentially be combined with tidal modelling from the Coastal Erosion notebook to look into this question in more detail. Drawing conclusions¶ Here are some questions to think about: What are the benefits and drawbacks of the possible classification options we explored? How could you extend the analysis to extract a shape for the coastline? How reliable is our classifier? Is there anything you can think of that would improve it? Next steps¶ When you are done, return to the “Analysis parameters” section, modify some values (e.g. latitude and longitude) and rerun the analysis. You can use the interactive map in the “View the selected location” section to find new central latitude and longitude values by panning and zooming, and then clicking on the area you wish to extract location values for. You can also use Google maps to search for a location you know, then return the latitude and longitude values by clicking the map. If you’re going to change the location, you’ll need to make sure Sentinel-1 data is available for the new location, which you can check at the DEA Explorer.: [30]: print(datacube.__version__) 1.7+164.gbdf45994.dirty Browse all available tags on the DEA User Guide’s Tags Index Tags: sandbox compatible, sentinel 1, display_map, real world, speckle filtering, water, radar, coastal erosion, intertidal
https://docs.dea.ga.gov.au/notebooks/Real_world_examples/Radar_water_detection.html
CC-MAIN-2022-05
refinedweb
3,478
57.98
Johan Grönqvist schrieb: > think it is not possible to realize something like braces in C or let-bindings in python. But here are some Ideas to work around this problem: 1) If you define all this in the global namespace you could remove your temporary variables afterwards, e.g. spam = 1 ham = (spam,) del globals()['spam'] This only works for the global namespace and not for the local! I wouldn't recommend it. 2) create a method, which initializes ham def make_ham(): spam = {...} egg = {...} return (egg, spam) ham = make_ham() This makes it easier to reuse ham in other situations and wouldn't expose spam or egg. 3) Make a class for your ham data structure If ham is so complex that you have to split it up, it may makes sense to create a class for it. Of course this would need refactoring of the code, but it's more readable and extensible than packing your data in tuples, lists or dictionaries. - Patrick
https://mail.python.org/pipermail/python-list/2009-September/551314.html
CC-MAIN-2016-30
refinedweb
163
68.5
01 August 2011 09:34 [Source: ICIS news] Correction: In the ICIS news story headlined "?xml:namespace> SHANGHAI (ICIS)-- The producer is concerned that its propylene supply might be further disrupted as a result of the fire, because its feedstock is supplied by its sister company, Formosa Petrochemical Corp (FPCC) which operates three crackers at Mailiao, the source said. FPC’s PP plants have been running at below capacity because of a shortage of propylene feedstock, the distributors said. FPC produces injection grade PP at its 170,000 tonne/year PP plant in The absence of FPC offers will not affect the PP market in
http://www.icis.com/Articles/2011/08/01/9481314/taiwans-formosa-plastics-suspends-pp-offers-in-china-after-mailiao.html
CC-MAIN-2014-52
refinedweb
105
54.56
Hi Paul Please keep all replies on the list by using "reply all" Advertising A simple solution, which we use quite a lot, is the following: we use the "dynamic_options" attribute, eg: <inputs> <param name="foo" type="select" label="what" help="Use tickboxes to select " display="radio" dynamic_options="ds_fooOptions()"/> </inputs> <outputs> <data format="fasta" name="output" label="more foo" /> </outputs> <code file="extra_code_for_foo_list.py" /> <help> </help> </tool>and then we have a little python script ("extra_code_for_foo_list.py") with the "ds_fooOptions" function, which can read your file (ie your list of databases), eg def ds_fooOptions(): """List available foos as tuples of (displayName,value)""" foos = <whatever python code is required to generate the tuples> return foos I hope this helps, Hans On 06/29/2011 08:09 PM, Admins de Galaxy wrote: Hi Hans, yes that's it. We are offering the list of databases as options to select in the GUI, before executing the script which compares the selected database with the sequence. Paul 2011/6/29 Hans-Rudolf Hotz<[email protected]>Hi Paul You probably need to be a bit more specific...at what stage is this '.txt file' read (or rather should be read)? - are you offering the (growing) list of databases as options to select in the GUI? Hans On 06/29/2011 10:20 AM, Admins de Galaxy wrote:Hello everyone, we have a problem with one of our selfwritten tools. We have a tool, that compares sequence with a database. The List of the available databases is loaded from a .txt file. One of our other tools, manages that a new database is added to the .txt file. But Galaxy doesn't recognize the change. It would be nice if someone could give us an advice. Best regards Paul K. Deuster @ Technische Hochschule Mittelhess:
https://www.mail-archive.com/[email protected]/msg01777.html
CC-MAIN-2017-34
refinedweb
299
63.9
Overview: In this article, we will discuss how to check whether a number is positive or negative in C programming language. Now, First of all, I would like to explain the number, Integer, types of the Integer, positive integer and negative Integer. After all, I will demonstrate the logic of the C Program to Check Whether a Number is Positive or Negative and explain the working of its internal execution with output. Table of contents: - What is a Number? - What is Whole Number? - What is Integer? - What are the types of the Integer? - What is positive Integer? - What is negative Integer? - Logic to Check Whether a Number is Positive or Negative - Demonstration to Check Whether a Number is Positive or Negative - C Program to Check Whether a Number is Positive or Negative - Conclusion What is a Number? The Number is a combination of digits and decimal point. The repetition of digits does not matter but the decimal point does not repeat. For instance, a Number 1123.171 uses 4 digits i.e. 1,2,3,7,and one decimal point. What is Whole Number? Whole numbers we mean numbers without fractions or decimals. It is a collection of positive integers and zero. It is represented by the symbol “W” and the set of numbers are {0, 1, 2, 3, 4, 5, 6, 7, 8, 9,……………}. All the natural numbers are considered as whole numbers, but all the whole numbers are not natural numbers. Thus, the negative numbers are not considered as whole numbers. What is Integer? Integer is a number that can be written without a fractional component. Integers are generally denoted as Z and can be represented as follows: Z = {∞……-8,-7,-6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8,……∞} What are the types of the Integer? Mainly Integers are 3 types such as Positive, negative counting numbers, as well as zero. Z = {∞..., −3, −2, −1, 0, 1, 2, 3, ...∞}. Rational numbers that can be expressed as a ratio of an integer to a non-zero integer. All integers are rational, but the converse is not true. What is positive Integer? Positive integers are all the whole numbers greater than zero. All the numbers we use for counting are positive integers. Positive integers are actually part of a larger group of numbers called integers. Positive Integers are generally known as Natural numbers. It is denoted by N and represented as N = {1, 2, 3, 4, ....∞} For instance,. In other words, A positive integer is a whole number from zero onwards with no fractions or decimals right of the number line. What is negative Integer? A negative integer is one of the integers ..., -4, -3, -2, -1 obtained by negating the positive integers. A number which is less than zero but not a fraction or a decimal is called a Negative Integer. The negative integers are commonly denoted as Z(-). It is represented by putting a '-' sign before the number. It is shown to the left of zero on a number line. Logic to Check Whether a Number is Positive or Negative - Step 1: User gives an input - Step 2: Input is stored in an int type variable say num, num is first checked for being 0. - Step 3: if(num == 0) - Step 4: prep is then checked for being greater than 0. - Step 5: if(num > 0) - Step 6: If num is greater than 0 then the input is a positive number. Otherwise the number is a negative number. Demonstration to Check Whether a Number is Positive or Negative A number is positive if it is greater than 0 N > 0, number is positive. Otherwise the number is negative N< 0, number is negative. Note: 0 is neither positive nor negative. C Program to Check Whether a Number is Positive or Negative #include <stdio.h> int main() { int number; printf("Enter any number: "); scanf("%d",&number); if (number > 0) printf("%d is positive number", number); else if (number < 0) printf("%d is a negative number.", number); else printf(" You entered value zero."); return 0; } The output of the Program with code: Enter a number: 6 You entered a positive number. Explanation of the C Program to Count Number of Digits in an Integer with output. Above C program to find Positive or Negative Number is structured as if the first condition fails, it will traverse to the second condition. If both the conditions fail, the else statement will execute. Let’s see the working principle of both the conditions below: First Condition(if (number > 0)): Checks whether the given number is greater than 0. If this condition is true, the given value is a positive number. Second condition(else if (number < 0)): Checks whether the given number is less than 0. If this condition is true, it is a negative number If both the above conditions fail, the number is equal to 0. In this program the user entered 6 which is a positive integer displayed on the screen. Conclusion:.
https://www.onlineinterviewquestions.com/blog/c-program-to-check-whether-a-number-is-positive-or-negative/
CC-MAIN-2021-17
refinedweb
838
66.33
Important changes to forums and questions All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com. 6 years, 7 months ago. Equivalent to Arduino millis() Hi, Arduino has the fairly convenient `millis()` function which returns the number of milliseconds since boot. What is the situation for that on mBed. As far as I can tell mBed has these: - clock() - Returns number of CPU ticks, as defined by CLOCKS_PER_SEC. Unfortunately that is a) not constant, and b) not very fine-grained. For example on nRF51822 CLOCKS_PER_SEC is 100. - time_t time(...) - Standard POSIX time call, but unfortunately this just returns 0 on boards without a real RTC. - us_ticker_read() - This always returns microseconds since boot, and is required for mBed, but it has an ugly name and is an internal API. - As mentioned in the answer below, you can make an instance of Timer, but a) you have to manually start it, b) it isn't global, c) the name of the instance isn't fixed, d) you have to put it in your own Globals.h header (or similar). This seems like a sorry situation for mBed's otherwise great API. I would suggest adding some non-internal functions like this: float clock_s() { return us_ticker_read() / 1000000.0f; } uint64_t clock_ms() { return us_ticker_read() / 1000; } uint64_t clock_us() { return us_ticker_read(); } Who's with me? (By the way the name is a bit unfortunate - it could cause confusion with clock() - but I couldn't think of anything better.) Solution Ok I decided to implement those functions, and while I was at it I made them not overflow after an hour like all the existing mBed time functions do (us_ticker_read(), Timer, Ticker, etc.). This code should last thousands of years without overflowing. It assumes that the internal microsecond ticker wraps at 32 bits, which I'm *fairly* sure is always the case. Clock.h #pragma once #include <stdint.h> // Return the number of seconds since boot. float clock_s(); // Return the number of milliseconds since boot. uint64_t clock_ms(); // Return the number of microseconds since boot. uint64_t clock_us(); Clock.cpp #include <mbed.h> // The us ticker is a wrapping uint32_t. We insert interrupts at // 0, 0x40000000, 0x8000000, and 0xC0000000, rather than just 0 or just 0xFFFFFFFF because there is // code that calls interrupts that are "very soon" immediately and we don't // want that. Also because if we only use 0 and 0x80000000 then there is a chance it would // be considered to be in the past and executed immediately. class ExtendedClock : public TimerEvent { public: ExtendedClock() { // This also starts the us ticker. insert(0x40000000); } float read() { return read_us() / 1000000.0f; } uint64_t read_ms() { return read_us() / 1000; } uint64_t read_us() { return mTriggers * 0x40000000ull + (ticker_read(_ticker_data) & 0x3FFFFFFF); } private: void handler() override { ++mTriggers; // If this is the first time we've been called (at 0x4...) // then mTriggers now equals 1 and we want to insert at 0x80000000. insert((mTriggers+1) * 0x40000000); } // The number of times the us_ticker has rolled over. uint32_t mTriggers = 0; }; static ExtendedClock _GlobalClock; // Return the number of seconds since boot. float clock_s() { return _GlobalClock.read(); } // Return the number of milliseconds since boot. uint64_t clock_ms() { return _GlobalClock.read_ms(); } // Return the number of microseconds since boot. uint64_t clock_us() { return _GlobalClock.read_us(); } Edit: I updated it to use 4 timer inserts instead of 2 - some implementations (the good ones) execute events that are 'in the past' immediately, so there is a chance that insert(0x80000000) when we are at 0 will happen immediately. This fixes that. Also, if you are using nRF51 there are some bugs in the us ticker implementation that cause this to crash. See this bug and the pull requests on it: 3 Answers 5 years, 11 months ago. Hi, I need solution to ported millis() to mbe but problem is, how to make in class. Timer t should started when first object will be created then rest of objects will use same timer. 5 years, 11 months ago. Hello, I'm using the following implementation of Arduino millis() when really needed: millis.h #ifndef MILLIS_H #define MILLIS_H void millisStart(void); unsigned long millis(void); #endif millis.cpp #include "mbed.h" #include "millis.h" volatile unsigned long _millis; void millisStart(void) { SysTick_Config(SystemCoreClock / 1000); } extern "C" void SysTick_Handler(void) { _millis++; } unsigned long millis(void) { return _millis; } When using the online compiler the above functions are also available in the millis library. Make sure you call millisStart() before using: main.cpp #include "mbed.h" #include "millis.h" Serial pc(USBTX, USBRX); int main() { millisStart(); while(1) { pc.printf("millis = %d\r\n", millis()); wait(1.0); } } What a shame! Anyway, thank you Tim for the info. Nevertheless, I confirm that it works with some NUCLEO boards and, since it's a global function, it can be used inside class implementation.posted by 02 Jun 2016 6 years, 7 months ago. Hi, there's abolutely no need for another such function, as mbed already hast a much better implementation for such things than Arduino. You can setup a timer in your program and then start it at the beginning of main(). Then you can read the timer at any time you wish as seconds, milliseconds or microseconds. Actually the timer functions are internally based on the us_ticker. Look at Handbook -> Timer for more information. #include "mbed.h" ... Timer t; int main() { t.start(); ... float f = t.read(); uint32_t m = t.read_ms(); uint32_t u = t.read_us(); ... t.stop(); } Best regards Neni Yes I'm aware of `Timer`, but it isn't as convenient as `us_ticker_read()` because it isn't global, and its instance doesn't have a standard name (makes it awkward to use in libraries and examples). Perhaps if the mbed.h header had an `extern Timer deviceTime;` or something like that...posted by 28 Sep 2015 That would be a waste in setups which don't need this global timer. Also that you think the us_ticker_read is an ugly name is a bit irrelevant, you can redefine it as any name you want in your own code. A more significant problem could be that it overflows after an hour or so. And agreeing with Nenad: One of the things I really miss in Arduino are the very poor Timer functions compared to mbed. What do you need this for in the first place?posted by 28 Sep 2015
https://os.mbed.com/questions/61002/Equivalent-to-Arduino-millis/
CC-MAIN-2022-21
refinedweb
1,057
67.04
In the last couple weeks we’ve used Persistent to store a User type in a Postgresql database. Then we were able to use Servant to create a very simple API that exposed this database to the outside world. This week, we’re going to look at how we can improve the performance of our API using a Redis cache. One cannot overstate the importance of caching in both software and hardware. There's a hierarchy of memory types from registers, to RAM, to the File system, to a remote database. Accessing each of these gets progressively slower (by orders of magnitude). But the faster means of storage are more expensive, so we can’t always have as much as we'd like. But memory usage operates on a very important principle. When we use a piece of memory once, we’re very likely to use it again in the near-future. So when we pull something out of long-term memory, we can temporarily store it in short-term memory as well. This way when we need it again, we can get it faster. After a certain point, that item will be overwritten by other more urgent items. This is the essence of caching. Redis 101 Redis is an application that allows us to create a key-value store of items. It functions like a database, except it only uses these keys. It lacks the sophistication of joins, foreign table references and indices. So we can’t run the kinds of sophisticated queries that are possible on an SQL database. But we can run simple key lookups, and we can do them faster. In this article, we'll use Redis as a short-term cache for our user objects. For this article, we've got one main goal for cache integration. Whenever we “fetch” a user using the GET endpoint in our API, we want to store that user in our Redis cache. Then the next time someone requests that user from our API, we'll grab them out of the cache. This will save us the trouble of making a longer call to our Postgres database. Connecting to Redis Haskell's Redis library has a lot of similarities to Persistent and Postgres. First, we’ll need some sort of data that tells us where to look for our database. For Postgres, we used a simple ConnectionString with a particular format. Redis uses a full data type called ConnectInfo. data ConnectInfo = ConnectInfo { connectHost :: HostName -- String , connectPort :: PortId -- (Can just be a number) , connectAuth :: Maybe ByteString , connectDatabase :: Integer , connectMaxConnection :: Int , connectMaxIdleTime :: NominalDiffTime } This has many of the same fields we stored in our PG string, like the host IP address, and the port number. The rest of this article assumes you are running a local Redis instance at port 6379. This means we can use defaultConnectInfo. As always, in a real system you’d want to grab this information out of a configuration, so you’d need IO. fetchRedisConnection :: IO ConnectInfo fetchRedisConnection = return defaultConnectInfo With Postgres, we used withPostgresqlConn to actually connect to the database. With Redis, we do this with the connect function. We'll get a Connection object that we can use to run Redis actions. connect :: ConnectInfo -> IO Connection With this connection, we simply use runRedis, and then combine it with an action. Here’s the wrapper runRedisAction we’ll write for that: runRedisAction :: ConnectInfo -> Redis a -> IO a runRedisAction redisInfo action = do connection <- connect redisInfo runRedis connection action The Redis Monad Just as we used the SqlPersistT monad with Persist, we’ll use the Redis monad to interact with our Redis cache. Our API is simple, so we’ll stick to three basic functions. The real types of these functions are a bit more complicated. But this is because of polymorphism related to transactions, and we won't be using those. get :: ByteString -> Redis (Either x (Maybe ByteString)) set :: ByteString -> ByteString -> Redis (Either x ()) setex :: ByteString -> ByteString -> Int -> Redis (Either x ()) Redis is a key-value store, so everything we set here will use ByteString items. But once we’ve done that, these functions are all we need to use. The get function takes a ByteString of the key and delivers the value as another ByteString. The set function takes both the serialized key and value and stores them in the cache. The setex function does the same thing as set except that it also sets an expiration time for the item we’re storing. Expiration is a very useful feature to be aware of, since most relational databases don’t have this. The nature of a cache is that it’s only supposed to store a subset of our information at any given time. If we never expire or delete anything, it might eventually store our whole database. That would defeat the purpose of using a cache! It's memory footprint should remain low compared to our database. So we'll use setex in our API. Saving a User in Redis So now let’s move on to the actions we’ll actually use in our API. First, we’ll write a function that will actually store a key-value pair of an Int64 key and the User in the database. Here’s how we start: cacheUser :: ConnectInfo -> Int64 -> User -> IO () cacheUser redisInfo uid user = runRedisAction redisInfo $ setex ??? ??? ??? All we need to do now is convert our key and our value to ByteString values. We'll keep it simple and use Data.ByteString.Char8 combined with our Show and Read instances. Then we’ll create a Redis action using setex and expire the key after 3600 seconds (one hour). import Data.ByteString.Char8 (pack, unpack) ... cacheUser :: ConnectInfo -> Int64 -> User -> IO () cacheUser redisInfo uid user = runRedisAction redisInfo $ void $ setex (pack . show $ uid) 3600 (pack . show $ user) (We use void to ignore the result of the Redis call). Fetching from Redis Fetching a user is a similar process. We’ll take the connection information and the key we’re looking for. The action we’ll create uses the bytestring representation and calls get. But we can’t ignore the result of this call like we could before! Retrieving anything gives us Either e (Maybe ByteString). A Left response indicates an error, while Right Nothing indicates the key doesn’t exist. We’ll ignore the errors and treat the result as Maybe User though. If any error comes up, we’ll return Nothing. This means we run a simple pattern match: fetchUserRedis :: ConnectInfo -> Int64 -> IO (Maybe User) fetchUserRedis redisInfo uid = runRedisAction redisInfo $ do result <- Redis.get (pack . show $ uid) case result of Right (Just userString) -> return $ Just (read . unpack $ userString) _ -> return Nothing If we do find something for that key, we’ll read it out of its ByteString format and then we’ll have our final User object. Applying this to our API Now that we’re all set up with our Redis functions, we have the update the fetchUsersHandler to use this cache. First, we now need to pass the Redis connection information as another parameter. For ease of reading, we’ll refer to these using type synonyms ( PGInfo and RedisInfo) from now on: type PGInfo = ConnectionString type RedisInfo = ConnectInfo … fetchUsersHandler :: PGInfo -> RedisInfo -> Int64 -> Handler User fetchUsersHandler pgInfo redisInfo uid = do ... The first thing we’ll try is to look up the user by their ID in the Redis cache. If the user exists, we’ll immediately return that user. ... If the user doesn’t exist, we’ll then drop into the logic of fetching the user in the database. We’ll replicate our logic of throwing an error if we find that user doesn’t actually exist. But if we find the user, we need one more step. Before we return it, we should call cacheUser and store it for the future. maybeUser <- liftIO $ fetchUserPG pgInfo uid case maybeUser of Just user -> liftIO (cacheUser redisInfo uid user) >> return user Nothing -> Handler $ (throwE $ err401 { errBody = "Could not find user with that ID" }) Since we changed our type signature, we’ll have to make a few other updates as well, but these are quite simple: usersServer :: PGInfo -> RedisInfo -> Server UsersAPI usersServer pgInfo redisInfo = (fetchUsersHandler pgInfo redisInfo) :<|> (createUserHandler pgInfo) runServer :: IO () runServer = do pgInfo <- fetchPostgresConnection redisInfo <- fetchRedisConnection run 8000 (serve usersAPI (usersServer pgInfo redisInfo)) And that’s it! We have a functioning cache with expiring entries. This means that repeated queries to our fetch endpoint should be much faster! Conclusion Caching is a vitally important way that we can write software that is often much faster for our users. Redis is a key-value store that we can use as a cache for our most frequently used data. We can use it as an alternative to forcing every single API call to hit our database. In Haskell, the Redis API requires everything to be a ByteString. So we have to deal with some logic surrounding encoding and decoding. But otherwise it operates in a very similar way to Persistent and Postgres. Be sure to take a look at this code on Github! There’s a redis branch for this article. It includes all the code samples, including things I skipped over like imports! We’re starting to get to the point where we’re using a lot of different libraries in our Haskell application! It pays to know how to organize everything, so package management is vital! I tend to use Stack for all my package management. It makes it quite easy to bring all these different libraries together. If you want to learn how to use Stack, check out our free Stack mini-course! If you’ve never learned Haskell before, you should try it out! Download our Getting Started Checklist!
https://mmhaskell.com/blog/2017/10/16/a-cache-is-fast-enhancing-our-api-with-redis
CC-MAIN-2018-51
refinedweb
1,624
64.1
Roger Leigh <[email protected]> writes: > On Mon, Aug 10, 2009 at 07:52:23AM -0700, Steve Langasek wrote: >> On Sun, Aug 09, 2009 at 05:42:04PM -0700, Russ Allbery. > Could we not just use a "-ddbg" suffix for "detached debug" information, > perhaps with a new archive section to match? This will not conflict > with existing practice for -dbg, so could go into Policy without > violating any prexisting namespace conventions. We could also ask the existing -dbg packages that are not detached debugging symbols to rename their packages. That would probably mean less total migration effort, given that the vast majority of -dbg packages currently in the archive are detached debugging symbols, but it would cause more pain for those particular packages. > Reading through this thread, I don't see a compelling reason for using a > .ddeb extension given that they are just regular .debs, Agreed. >. -- Russ Allbery ([email protected]) <>
https://lists.debian.org/debian-devel/2009/08/msg00315.html
CC-MAIN-2016-07
refinedweb
152
63.8
Eric Lemings wrote: > > All of the test cases for the %{Ao}, %{Ad}, and %{Ax} directives (in > 0.printf.cpp) specify a width (width of integer elements to be exact). > Does this mean the width is required? I can't easily discern this by > reading the code. I don't know off the top of my head but from the output of the program below (on x86 hardware) I'm guessing the default width is 1 and the precision is taken to be the number of initial non-zero elements of the array (each of the given width). It might be worthwhile to enhance the test to exercise this. $ cat t.cpp && ./t #include <rw_printf.h> int main () { const int a [] = { 0x12345678, 0 }; rw_printf ("%{Ax}\n", a); } 78,56,34,12 Martin
http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200805.mbox/%[email protected]%3E
CC-MAIN-2017-17
refinedweb
131
81.73
Many people ask about how to do this, so I thought I'd write up a sample. This sample basically searches the inbox of a user and then does an HTTP GET against the DAV:href to get the EML file and then does an HTTP PUT to upload it to a SharePoint document library. using System; using System.Web; using System.Xml; using System.Net; using System.Text; namespace UploadEmlToSharePoint { /// <summary> /// Summary description for Class1. /// </summary> class Class1 { static System.Net.CredentialCache MyCredentialCache; /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main(string[] args) { System.Net.HttpWebRequest Request; System.Net.WebResponse Response; string" + "<D:sql>SELECT \"DAV:displayname\" FROM \"" + strRootURI + "\"" + "WHERE \"DAV:ishidden\" = false AND \"DAV:isfolder\" = false" + "</D:sql></D:searchrequest>"; // Create a new CredentialCache object and fill it with the network // credentials required to access the server. MyCredentialCache = new System.Net.CredentialCache(); MyCredentialCache.Add( new System.Uri(strRootURI), "NTLM", new System.Net.NetworkCredential(strUserName, strPassword, strDomain) ); MyCredentialCache.Add(new System.Uri(strSPSRootURI), "NTLM", new System.Net.NetworkCredential(strUserName, strPassword, strDomain)); // Create the HttpWebRequest object. Request = (System.Net.HttpWebRequest)HttpWebRequest.Create(strRootURI); // Add the network credentials to the request. Request.Credentials = MyCredentialCache; // Specify the method. Request.Method = "SEARCH"; // Encode the body using UTF-8. bytes = Encoding.UTF8.GetBytes((string)strQuery); // Set the content header length. This must be // done before writing data to the request stream. Request.ContentLength = bytes.Length; //Set the translate header to false Request.Headers.Add("Translate","f"); // Get a reference to the request stream. RequestStream = Request.GetRequestStream(); // Write the SQL query to the request stream. RequestStream.Write(bytes, 0, bytes.Length); // Close the Stream object to release the connection // for further use. RequestStream.Close(); // Set the content type header. Request.ContentType = "text/xml"; // Send the SEARCH method request and get the // response from the server. Response = (HttpWebResponse)Request.GetResponse(); // Get the XML response stream. ResponseStream = Response.GetResponseStream(); // Create the XmlDocument object from the XML response stream. ResponseXmlDoc = new XmlDocument(); ResponseXmlDoc.Load(ResponseStream); ResponseNodes = ResponseXmlDoc.GetElementsByTagName("a:response"); if(ResponseNodes.Count > 0) { Console.WriteLine("Non-folder item hrefs..."); // Loop through the display name nodes. for(int i=0; i<ResponseNodes.Count; i++) { // Display the non-folder item displayname. XmlNode responseNode = ResponseNodes[i]; XmlNode hrefNode = responseNode.SelectSingleNode("a:href",xmlnsm); XmlNode displayNameNode = responseNode.SelectSingleNode("a:propstat/a:prop/a:displayname",xmlnsm); //Downloads the EML file from the specified URL byte[] emlFile = GetBytesFrom(hrefNode.InnerText); //Uploads the EML file to the SharePoint document library UploadToSPS(emlFile,strSPSRootURI + System.Web.HttpUtility.UrlPathEncode(displayNameNode.InnerText)); } } else { Console.WriteLine("No non-folder items found..."); } // Clean up. ResponseStream.Close(); Response.Close(); } catch(Exception ex) { // Catch any exceptions. Any error codes from the SEARCH // method request on the server will be caught here, also. Console.WriteLine(ex.Message); } Console.WriteLine("Done."); Console.Read(); } static byte[] GetBytesFrom(string DavURL) { Console.WriteLine(DavURL); byte[] buffer; System.Net.HttpWebRequest Request; System.Net.HttpWebResponse Response; System.IO.Stream ResponseStream; Request = (HttpWebRequest)HttpWebRequest.Create(DavURL); Request.Credentials = MyCredentialCache; Request.Headers.Add("Translate","f"); Response = (HttpWebResponse)Request.GetResponse(); ResponseStream = Response.GetResponseStream(); buffer = new byte[Response.ContentLength]; ResponseStream.Read(buffer,0,(int)Response.ContentLength); ResponseStream.Close(); Response.Close(); return buffer; } static void UploadToSPS(byte[] fileBytes, string URL) { Console.WriteLine("Uploading " + fileBytes.Length.ToString() + " bytes to " + URL); System.Net.HttpWebRequest Request; Request = (HttpWebRequest)HttpWebRequest.Create(URL); Request.Credentials = MyCredentialCache; Request.ContentLength = fileBytes.Length; Request.Method = "PUT"; System.IO.Stream str = Request.GetRequestStream(); str.Write(fileBytes,0,fileBytes.Length); str.Close(); Request.GetResponse(); } } } ResponseNodes = ResponseXmlDoc.GetElementsByTagName("a:response"); PingBack from Currently I can recieve but in eml format, will this fix so I recieve as email to open in my outlook client as a regular email. I need to have this work, we are intergrating the document library to our exchange 2007. I was told I needed a mapi client installed on the sharepoint 2007 server. But not sure if any special configuration is involved. Any help would be great. mmm.. nice design, I must say.. So the reason you need a MAPI client is that Outlook is a MAPI client and reads MAPI data from a MAPI store. MSG is the file format for MAPI data. These types of files would open in Outlook. It would involve a completely different code path to get the MAPI data from the store and save it as MSG. You can use MAPI Editor's source code available from this KB article () to do so. However, please note that MAPI code is not supported inline with .NET code. You would have to do this in C++ unmanaged code and wrap it in a COM+ component that you could call into from .NET. Outlook Object Model can also save messages as MSG files, but be careful again because Outlooks OMG (Object Model Guard) can prompt you with security prompts. Once you save the msg file to the disk, you can use the same UploadToSPS code I have above to push it into SharePoint. The reason I went after the EML (MIME content) files is that WebDAV is supported under .NET. We implement a VERY similar technique, although we found you have to loop read the bytes as on larger emails stuff gets lost/truncated. See my blog entry for what I mean. You will find you'll need to read the email stream in chunks to avoid loosing stuff and ending up with a truncated email (especially true when there are attachments, my blog has code for our very similar implementation. Hi Patrick, Good article. What I am trying to do is write an event handler for an email enabled document library. So, when my document library receives the eml file, I want to open the file and find out who the sender is, the subject, cc fields, body and what the attachments are. Do you know of any way to parse eml files through code? Thanks -pritam Hi Pritam. Yes, CDOSYS is an effective API for parsing eml files. I'm new to webDAV. How would you use this in a project? Hi Patrick Creehan, I am also working in SharePoint 2007. I am facing some problem with images while sending it thru mail. Our problem is, we are sending Emails (watch word - with image) to end users on daily basis. Currently we are doing it manually. Now we need to automate this using SharePoint designer work flow. The problem is background images are not displayed while getting the emails in outlook. There is no problem with the text. We are in hot stove now. Could you please help me to solve this? Thanks. Saharathendral Hi Saharathendral, Are you getting the red X where your image should be or are you just seeing a plain text email? If you are seeing the red X, then it is most likely that your images are stored in a location that requires authentication to access. You should put them somewhere they can be accessed anonymously. You'll also want to look at these articles: In Outlook 2003, there was an entirely different security mechanism in Tools/Options that governed how and if images were displayed in HTML emails. Patrick your reply helps us a lot to solve our problem. Thanks a lot. ;-) Hi Patrick! I'm not sure whether this is a good place to ask about this, so sorry if not, but here goes anyway. We have a "company-general" email address, that actually goes to just one person's mailbox. Currently the person then forwards the messages to whoever he guesses the message concerns. We would like to have the messages sent to this address be somehow automatically moved to a place in Sharepoint where other people could see them if they like and put alerts to get notification etc. Ideally this place could be a Sharepoint discussion forum so that the messages could be seen similar to as in an email reader. We could of course just forward the mails coming to the address to all those who want to receive them, but then the decision of whether to receive them or not would be more of an on-off decision, all or nothing. If the messages (even only text body would be enough) would automatically appear on a Sharepoint page, then the people could check them when they want (for example when they expect something concerning them to be inbound) and then ignore them in other times. Finally, in our location we don't have the Sharepoint server, it is managed by IT in another location so it would be best if something like this could be done without any actual extra coding, just utilizing existing features, adding a suitable web part or something similar. Do you think anything like this would be reasonably easily doable? -Antti I am getting the remote server returned an error 403 forbidden after it reach this line // Send the SEARCH method request and get the // response from the server. Response = (HttpWebResponse)Request.GetResponse(); I am getting a 403 error here on this line // Get a reference to the request stream. RequestStream = Request.GetRequestStream(); I have a https URI web email access string strRootURI = ""; string strSPSRootURI = "";
http://blogs.msdn.com/b/pcreehan/archive/2007/02/06/howto-get-email-messages-from-exchange-to-sharepoint.aspx
CC-MAIN-2014-35
refinedweb
1,522
51.44
Opened 6 years ago Closed 6 years ago #19819 closed Bug (fixed) Django highlights incorrect lines for template errors Description This issue is bugging me for a while now and after searching for existing tickets (and finding none) i've decided to create a ticket for it. Basically Django highlights the wrong lines when a template error occurs. Sometimes it even highlights a seemingly random line in the parent template (when an error occurs in the template which is included). I'll attach a screenshot with an example. Attachments (1) Change History (5) Changed 6 years ago by comment:1 Changed 6 years ago by Pullrequest at comment:2 Changed 6 years ago by The patch looks quite good, but it needs a bit of polish. First, I suggest to redo the PR against master. You must support Python 2.6+ and 3.2+. Specifically, use the syntax except TemplateSyntaxError as e: to catch exceptions. To test an exception, there's assertRaisesRegexp. Since it was renamed in Python 3, you must use this slightly contrived syntax: from django.utils import six with six.assertRaisesRegex(self, TemplateSyntaxError, ...): # raise exception here Finally, make sure the code is clean — there a suprious print statement and a stray newline in a docstring. Thanks! comment:3 Changed 6 years ago by Thanks for the feedback! Here is a new PR against master. Tested it with python 2.6, 2.7 and 3.2: Example
https://code.djangoproject.com/ticket/19819
CC-MAIN-2019-22
refinedweb
239
66.54
Understanding the shape of your model is sometimes non-trivial when it comes to machine learning. Look at convolutional neural nets with the number of filters, padding, kernel sizes etc and it’s quickly evident why understanding what shapes your inputs and outputs are will keep you sane and reduce the time spent digging into strange errors. TensorFlow’s RNN API exposed me to similar frustrations and misunderstandings about what I was expected to give it and what I was getting in return. Extracting these operations out helped me get a simple view of the RNN API and hopefully reduce some headaches in the future. In this post, I’ll outline my findings with a few examples and. Firstly the input data shape: batch size is part of running any graph and you’ll get used to seeing None or ? as the first dimension of your shapes. RNN data expects each sample to have two dimensions of it’s own. This is different to understanding that images have two dimensions, RNN data expects a sequence of samples, each of which has a number of features. Lets make this clearer with an example: import numpy as np # Batch size = 2, sequence length = 3, number features = 1, shape=(2, 3, 1) values231 = np.array([ [[1], [2], [3]], [[2], [3], [4]] ]) # Batch size = 3, sequence length = 5, number features = 2, shape=(3, 5, 2) values352 = np.array([ [[1, 4], [2, 5], [3, 6], [4, 7], [5, 8]], [[2, 5], [3, 6], [4, 7], [5, 8], [6, 9]], [[3, 6], [4, 7], [5, 8], [6, 9], [7, 10]] ]) If you understand that an RNN will feed each timestep into the cell, taking the second example, the first timestep takes [1, 4] as input, second step [2, 5] etc. Understanding that even a sequence of single numbers needs to have the shape of (batch_size, seq_length, num_features) took me a while to get. If you have such a sample as a sequence of single numbers (say [[1, 2, 3], [2, 3, 4]]), you can do np.reshape(2, 3, 1) to reshape from (2, 3) into a sequence dataset tf.nn.dynamic_rnn To understand the output of an RNN cell, you have to think about the output of the RNN cell over the input sequence. This is where the unrolling comes from and in TensorFlow for dynamic_rnn is implemented using a while loop. If we use our data from values231 above, lets understand the output from an LSTM through a TensorFlow RNN: import tensorflow as tf tf.reset_default_graph() tf_values231 = tf.constant(values231, dtype=tf.float32) lstm_cell = tf.contrib.rnn.LSTMCell(num_units=100) outputs, state = tf.nn.dynamic_rnn(cell=lstm_cell, dtype=tf.float32, inputs=tf_values231) print(outputs) # tf.Tensor 'rnn_3/transpose:0' shape=(2, 3, 100) dtype=float32 print(state.c) # tf.Tensor 'rnn_3/while/Exit_2:0' shape=(2, 100) dtype=float32 print(state.h) # tf.Tensor 'rnn_3/while/Exit_3:0' shape=(2, 100) dtype=float32 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) output_run, state_run = sess.run([outputs, state]) outputs: shape = (batch_size, sequence_length, num_units). If you’ve ever seen an LSTM model, this is h(t) output for every timestep (In the image below, a vector of [n0, h1, h2]. The last time step is the same value as state.h validated by running output_run[:,-1] == state_run.h. state c and h: shape= (batch_size, num_units). This is the final state of the cell at the end of the sequence and in the image below is h2 and c Unrolled LSTM with output [h0, h1, h2] and state output (h=h2, c=c) This is even easier if you are using a GRUCell as state is just a single vector instead of the tuple from the LSTMCell. In this case running np.all(output_run[:,-1] == state_run) to verify the state is equal to the output at the last timestep tf.nn.bidirectional_dynamic_rnn The bidirectional RNN is very similar apart from obviously there is an RNN pass going both forwards and backwards through the sequence. Thus you need a cell for each separate pass and the outputs and state have a tuple pair for each RNN. In the example below, I’ve used a different num_units for the LSTM cell in each direction so it’s clear where that value is showing up in the output shape. I have also deconstructed the returned output and state into the forward and backward pairs to be clear. You may not wish to do this if you want to cleanly apply a concat operation which I will show later. import tensorflow as tf tf.reset_default_graph() tf_values231 = tf.constant(values231, dtype=tf.float32) lstm_cell_fw = tf.contrib.rnn.LSTMCell(100) lstm_cell_bw = tf.contrib.rnn.LSTMCell(105) # change to 105 just so can see the effect in output (output_fw, output_bw), (output_state_fw, output_state_bw) = tf.nn.bidirectional_dynamic_rnn( cell_fw=lstm_cell_fw, cell_bw=lstm_cell_bw, inputs=tf_values231, dtype=tf.float32) print(output_fw) # tf.Tensor 'bidirectional_rnn/fw/fw/transpose:0' shape=(2, 3, 100) dtype=float32 print(output_bw) # tf.Tensor 'ReverseV2:0' shape=(2, 3, 105) dtype=float32 print(output_state_fw.c) # tf.Tensor 'bidirectional_rnn/fw/fw/while/Exit_2:0' shape=(2, 100) dtype=float32 print(output_state_fw.h) # tf.Tensor 'bidirectional_rnn/fw/fw/while/Exit_3:0' shape=(2, 100) dtype=float32 print(output_state_bw.c) # tf.Tensor 'bidirectional_rnn/bw/bw/while/Exit_2:0' shape=(2, 105) dtype=float32 print(output_state_bw.h) # tf.Tensor 'bidirectional_rnn/bw/bw/while/Exit_3:0' shape=(2, 105) dtype=float32 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) output_run, state_run = sess.run([output_fw, output_bw, output_state_fw, output_state_bw]) Understanding the outputs from the single direction RNN above, these values should make sense when thinking it’s just two RNNs. As mentioned, you may want to use the combined output state of both RNNs and concatenate the outputs. In this case, you want concat along axis=2 being the cell outputs such as tf.concat((output_fw, output_bw), axis=2, name='bidirectional_concat_outputs') Conclusion Without digging into applied samples and complex NLP or other sequence problems, these simple examples helped me understand the shapes of tensors passing through an RNN. And also what the output and state represent for each of these runs. I can definitely recommend taking a step back and running these operations on their own (outside of a more complex model) to more simply understand what’s going on You can see the Jupyter notebook for my investigation at my GitHub Gist Understanding TF RNNs.ipynb
https://www.damienpontifex.com/2017/12/06/understanding-tensorflows-rnn-inputs-outputs-and-shapes/
CC-MAIN-2019-04
refinedweb
1,067
56.35
I am trying to use IDEA’s flex compiler and am having some trouble with it. My project is setup with 2 modules: 1)a flex client module and 2)a java server module with spring and web facets. I am building the java module with my ant script, but want to use IDEA’s builder for the flex module. The flex compiler settings has a place for the “Main class”. It’s a webapp project, and a flex module to boot, so does not have a class with a main() method in the application. Without the Main class set, running the flex module results in a popup prompting me to choose a Main class. Any suggestion/insight out there would be appreciated. I've attached snapshots of the flex compiler settings and the run/debug configuration for the flex module. I am using the Maia build 90.116. Is the video demo’ing working with flex in Maia available yet? Thanks, -Mary Attachment(s): runDebugConfiguration.docx flexCompilerSetting.docx Main class in Flex is not the class that has main() method. It is usually a class (defined either in *.as or in *.mxml file) that is inherited from mx.core.Application or mx.modules.Module class. Main class is a Flex compilation setting so whatever gui or command line tool you use to compile Flex - in any case you specify main Flex class or file with main Flex class. That would be a file named "video.mxml". However, when I tried to specify that file in the Choose Main Class popup of the Flex Compiler Settings, the OK button remains disabled. What is there to do? -Mary Welcome to Flex language programming Mxml files define Flex classes. The name of the class is equal to the name of the *.mxml file without extension. Root tag (like <mx:Application/>) specifies parent class (as if you have wrote public class video extends mx.core.Application{...}) By the way Flex coding conventions recommend to start class name with an uppercase letter: So the answer is 'video' - this is the name of your main class. By the way the text field 'Main class' in Flex Compiler Settings has a button to the right from it - it opens class chooser and suggests correct main class for you. Yes, I tried using the "..." control next to the textbox, but as you can see from my attached file, the OK button remains disabled in both the Search by Name and the Project tab, indicating that the video.mxml file is not acceptable. I also tried just typing in "video" in the textbox, but at runtime, I am told that there is a Flex Compiler Problem of "Main class 'video' for module 'video_flex' is not found.". What else do I need to do? Thanks, -Mary Attachment(s): chooseMainClass.docx At your screenshot I see that flex_src folder is marked as a library home. But it should be you source folder. Please open 'Sources' tab for your Flex module settings and configure flex_src to be your source root. Then after pressing OK and indexing complete you'll be able to select 'video' class as main class. If you still face any problems please attach screenshot of 'Sources' and 'Dependencies' tabs.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206836965-using-Maia-s-flex-compiler?page=1
CC-MAIN-2019-47
refinedweb
541
74.29
Interfaces are a contract between a service provider and a service consumer. The C++ Core Guidelines has 20 rules to make them right because "interfaces is probably the most important single aspect of code organization". Before I dive into the rules, here is an overview of the 20 rules. Expects() Ensures() T* not_null I will make my discussion of the rules not so elaborated because there are too many rules. My idea is that I write in this post about the first ten rules and in the next post about the remaining 10. So, let's start. This rule is about correctness and means: assumptions should be stated in an interface. Otherwise, they are easily overlooked and hard to test. int round(double d) { return (round_up) ? ceil(d) : d; // don't: "invisible" dependency } For example, the function round does not express that its result depends on the invisible dependency round_up. This rule is kind of obvious but the emphasis lies on mutable global variables. Global constants are fine because they cannot introduce a dependency into the function and cannot be subject to race conditions. Singletons are global objects under the hood, therefore, you should avoid them. The reason for this rule makes it clear: "Types are the simplest and best documentation, have a well-defined meaning, and are guaranteed to be checked at compile time." Have a look at an example: void draw_rect(int, int, int, int); // great opportunities for mistakes draw_rect(p.x, p.y, 10, 20); // what does 10, 20 mean? void draw_rectangle(Point top_left, Point bottom_right); void draw_rectangle(Point top_left, Size height_width); draw_rectangle(p, Point{10, 20}); // two corners draw_rectangle(p, Size{10, 20}); // one corner and a (height, width) pair How easy is it to use the function draw_rect in the incorrect way. Compare this to the function draw_rectangle. The compiler guarantees that the argument is a Point or a Size object. You should, therefore, look in your process of code improvement for functions with many built-in type arguments and even worse, for functions that accept void* as parameter. If possible, preconditions such that x in double sqrt(double x) must be non-negative, should be expressed as assertions. Expects() from the Guideline support library (GSL) let you express your precondition directly. double sqrt(double x) { Expects(x >= 0); /* ... */ } Contracts, consisting of preconditions, postconditions, and assertions may be part of the next C++20 standard. See the proposal p03801.pdf. That is similar to the previous rule, but the emphasis is on a different aspect. You should use Expects() for expressing preconditions and not, for example, an if expression, a comment, or an assert() statement. int area(int height, int width) { Expects(height > 0 && width > 0); // good if (height <= 0 || width <= 0) my_error(); // obscure // ... } The expression Expects() is easier to spot and may be checkable by the upcoming C++20 standard. In accordance to the arguments of a function, you have to think about its results. Therefore, the postcondition rules are quite similar to previous precondotion rules. We will get with high probability with C++20 concepts. Concepts are predicates on template parameters that ca be evaluated at compile time. A concept may limit the set of arguments that are accepted as template parameters. I already wrote four posts about concepts, because there is a lot more to concepts. The rule of the C++ Core Guidelines is quite easy. You should apply them. template<typename Iter, typename Val> requires InputIterator<Iter> && EqualityComparable<ValueType<Iter>>, Val> Iter find(Iter first, Iter last, Val v) { // ... } The generic find algorithm requires that the template parameter Iter is an InputIterator and the underlying value of the template parameter Iter is EqualityComparable. If you invoke the find algorithm with a template argument that does not satisfy this requirement, you will get a readable and easy to understand error message. Here is the reason: "It should not be possible to ignore an error because that could leave the system or a computation in an undefined (or unexpected) state." The rule provides a bad and a good example. int printf(const char* ...); // bad: return negative number if output fails template <class F, class ...Args> // good: throw system_error if unable to start the new thread explicit thread(F&& f, Args&&... args); In the bad case, you can ignore the exception and your program has undefined behaviour. If you can't use exceptions, you should return a pair of values. Thanks to C++17 feature structured binding, you can do it quite elegantly. auto [val, error_code] = do_something(); if (error_code == 0) { // ... handle the error or exit ... } // ... use val ... That is quite easy to guess. In the next post, I write about the remaining rules to pointers, initialisation of globals objects, function parameters, abstract classes, and ABI (application binary interface)? There is a lot to know about good interface design.42 All 1930515 Currently are 390 guests and no members online Kubik-Rubik Joomla! Extensions Read more... Read more...
http://www.modernescpp.com/index.php/c-core-guidelines-interfaces
CC-MAIN-2019-22
refinedweb
823
56.96
WTF is a Type Class? I have a hard time understanding about type class a couple of weeks back while reading about Scala with Cats. After countless research on the internet and other tutorial explaining Type Classes, I also decided to share my take and the way I look at them. By knowing that the author of Cats Library used this technique to implement a lot of the functionality, I think it is handy as it was a widely used technique in designing the FP application. This technique does not only limited to FP, but OOP can also use this technique to create a more modular system. Type class is an interface that defines some behavior. It is a way to create any behavior from an existing class without modifying anything on that source code. Instead, it creates a type class to implement the new behavior. Type class usually define in 3 things: - Trait Type Class - Instances of Type class: Defining the type class that we care about (concrete with implicit). - Interface objects or interface syntax A simple example from Alvin Alexander blog: Let’s say you have these existing data types: Assumed that one day you need to add new behavior to Dog because it can speak like humans, you want to add a new speak behavior, but without really changing the source code you already have, and also not changing the behavior for Cat and Bird Create a Trait that has a generic parameter: trait SpeakBehavior[A] { def speak(a:A):Unit } Create the instances of the type class that you want to enhance. object SpeakLikeHuman{ implicit val dogSpeakLikeHuman: SpeakBehavior[Dog] = new SpeakBehavior[Dog] { def speak(a:Dog): Unit = { println(s"I'm a Dog, my name is ${dog.name}) } } } Create an API for the consumers(caller) of this type class The caller will be calling like this: BehavesLikeHuman.speak(aDog) // interface object aDog.speak // interface syntax (using implicit class) Interface Object (more explicit): object BehavesLikeHuman { def speak[A](a:A)(implicit instance: SpeakLikeHuman[A]) = instance.speak(a) } To call this type class: import SpeakLikeHuman._ // import all the implicits val dog = Dog("Rover") BehavesLikeHuman.speak(dog) Interface Syntax (implicit): object BehavesLikeHumanSyntax { implicit class BehavesLikeHumanOps[A](a:A) { def speak(implicit instance:SpeakLikeHuman[A]): Unit = { instance.speak(a) } } } Calling this: import SpeakLikeHuman._ // import the instances import BehavesLikeHumanSyntax.BehavesLikeHumanOps val dog = Dog("Rover") dog.speak // because implicitly gets it from the implicit class BehavesLikeHumanOps To me, calling Interface syntax is not as readable as calling Interface objects, especially when you are trying to read a large codebase. Implicit in Scala can be a double-edged sword, and for someone who is not used to reading Scala code, it can be hard for the first time to read a large codebase with multiple implicit. However, if you are using interface syntax on your API, the caller can invoke a new behavior directly on the instance — it looks like you added a new behavior to the Dog instance without changing any of the source code. 3 Main Takeaways - Type Class is like inheritance, but in an FP way, you get to create new behavior on the model without modifying the existing source code. - There are 3 steps to create a Type Class — the Interface, the Type-class instance, and the API - Be aware of interface syntax, and implicit in Scala because sometimes it can be hard to debug if you used it too often in a large codebase. Full Source Code on the example above is in here. Like this Article? Other Articles The 2020 Java Developer RoadMap Hello guys, first of all, I wish you a very Happy New Year 2020. I have been sharing a lot of roadmaps to become a Web… javarevisited.blogspot.com
https://medium.com/javarevisited/wtf-is-a-type-class-a5472230487b?source=post_internal_links---------0----------------------------
CC-MAIN-2021-25
refinedweb
627
51.07
Overall, timeseries per ELB. Within Datadog, when you are selecting ‘min’, ‘max’, or ‘avg’, you are controlling how multiple timeseries are combined. For example, requesting system.cpu.idle without any filter would return one series for each host that reports that metric and those series need to be combined to be graphed. On the other hand, if you requested system.cpu.idle from a single host, no aggregation would be necessary and switching between average and max would yield the same result. If you would like to collect the Min/Max/Sum/Avg from AWS (Component Specific - Ec2, ELB, Kinesis, etc.) reach out to [email protected]. Enabling this feature would provide additional metrics under the following namespace format: aws.elb.healthy_host_count.sum aws.elb.healthy_host_count.min aws.elb.healthy_host_count.max Note, enabling this feature increases the number of API requests and information pulled from CloudWatch and may potentially impact your AWS billing. More information on this behavior and AWS billing can be found here: Do you believe you’re seeing a discrepancy between your data in CloudWatch and Datadog? How do I monitor my AWS billing details?
https://docs.datadoghq.com/integrations/faq/additional-aws-metrics-min-max-sum/
CC-MAIN-2019-18
refinedweb
188
57.16
#include <Thread_Manager.h> #include <Thread_Manager.h> Inheritance diagram for ACE_Thread_Descriptor: Do nothing destructor to keep some compilers happy. Do nothing but to acquire the thread descriptor's lock and release. This will first check if the thread is registered or not. If it is already registered, there's no need to reacquire the lock again. This is used mainly to get newly spawned thread in synch with thread manager and prevent it from accessing its thread descriptor before it gets fully built. This function is only called from ACE_Log_Msg::thr_desc. Register an object (or array) for cleanup at thread termination. "cleanup_hook" points to a (global, or static member) function that is called for the object or array when it to be destroyed. It may perform any necessary cleanup specific for that object or its class. "param" is passed as the second parameter to the "cleanup_hook" function; the first parameter is the object (or array) to be destroyed. Returns 0 on success, non-zero on failure: -1 if virtual memory is exhausted or 1 if the object (or arrayt) had already been registered. Register an At_Thread_Exit hook and the ownership is retained for the caller. Normally used when the at_exit hook is created in stack. Register an At_Thread_Exit hook and the ownership is acquire by Thread_Descriptor, this is the usual case when the AT is dynamically allocated. 1 [private] Pop an At_Thread_Exit from at thread termination list, apply the at if apply is true. 0 Push an At_Thread_Exit to at thread termination list and set the ownership of at. Run the AT_Thread_Exit hooks. Dump the state of an object. This cleanup function must be called only for ACE_TSS_cleanup. The ACE_TSS_cleanup delegate Log_Msg instance destruction when Log_Msg cleanup is called before terminate. Reset this thread descriptor. Unique handle to thread (used by Win32 and AIX). Unique thread id. Set/get the next_ pointer. These are required by the ACE_Free_List. Terminate realize the cleanup process to thread termination. [friend] Reimplemented from ACE_Thread_Descriptor_Base. The AT_Thread_Exit list. Stores the cleanup info for a thread. Thread_Descriptor is the ownership of ACE_Log_Msg if log_msg_!=0 This can occur because ACE_TSS_cleanup was executed before terminate. Registration lock to prevent premature removal of thread descriptor. Keep track of termination status. Pointer to an ACE_Thread_Manager or NULL if there's no ACE_Thread_Manager>
https://www.dre.vanderbilt.edu/Doxygen/5.4.3/html/ace/classACE__Thread__Descriptor.html
CC-MAIN-2022-40
refinedweb
382
60.61
As a Java developer, lot of times I have to play around with file system. Sometimes I have to copy files/directories from one location to another; sometimes have to process certain files depending on certain pattern. In one of my test program, I wanted to calculate available disk space using Java. Lot of code snippets are available for this task. I liked the one using Apache Commons IO library. Here is a simple trick for Java developers to calculate free diskspace. We have used Apache Commons IO library to calculate this. Apache Commons IO library contains a class org.apache.commons.io.FileSystemUtils which can be used to calculate the free disk space in any system. Let us see the Java code for this. package net.viralpatel.java; import java.io.IOException; import org.apache.commons.io.FileSystemUtils; public class DiskSpace { public static void main(String[] args) { try { //calculate free disk space double freeDiskSpace = FileSystemUtils.freeSpaceKb("C:"); //convert the number into gigabyte double freeDiskSpaceGB = freeDiskSpace / 1024 / 1024; System.out.println("Free Disk Space (GB):" + freeDiskSpaceGB); } catch (IOException e) { e.printStackTrace(); } } } Output: Free Disk Space (GB): 40.145268 In above code we used FileSystemUtils.freeSpaceKb( ) method to get the free space in kilo byte. This method invokes the command line to calculate the free disk space. You may want to call this method in following way to get free disk space in Windows and Linux. FileSystemUtils.freeSpaceKb("C:"); // Windows FileSystemUtils.freeSpaceKb("/volume"); // *nix The free space is calculated via the command line. It uses ‘dir /-c’ on Windows, ‘df -kP’ on AIX/HP-UX and ‘df -k’ on other Unix. In order to work, you must be running Windows, or have a implementation of Unix df that supports GNU format when passed -k (or -kP). If you are going to rely on this code, please check that it works on your OS by running some simple tests to compare the command line with the output from this class. That functionality is already available in Java 6. In Java 1.6 there is getFreeSpace method in File class. Very good content Hi, The code works good for windows platform, on linux it doesn’t return expected result….(: .Nikhil
https://viralpatel.net/blogs/java-calculate-free-disk-space-java-apache-commons-io/
CC-MAIN-2019-09
refinedweb
367
68.16
The objective of this post is to explain how to launch a thread on MicroPython running on the ESP32. The tests were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board. Introduction The objective of this post is to explain how to launch a thread on MicroPython running on the ESP32. This will be a very simple initial example where we will define a function that will be executed by our thread and periodically prints a “hello world” message. The tests were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board. The MicroPython IDE used was uPyCraft. The code We start by importing the thread module, which makes available the function needed to launch threads. Note that the module is called _thread (the underscore is not a typo). We will also import the time module, so we can use the sleep function to introduce some delays on our program. import _thread import time Next we will define the function that will execute in our thread. Our function will simply print a “hello world” message in an infinite loop, with some delay in each iteration. We will introduce the delay using the mentioned sleep function of the time module, which receives as input the number of seconds to delay. I’m using a 2 seconds delay, but you can use a different value. def testThread(): while True: print("Hello from thread") time.sleep(2) It’s important to consider that when the function returns, the thread exits [1]. Nonetheless, in our case, this will never happen since our thread will run on an infinite loop. Finally, to start our thread, we simply call the start_new_thread function of the _thread module, specifying as first argument our previously defined function and as second a tuple which corresponds to the thread function arguments. Since our function expects no arguments, we will pass an empty tuple. An empty tuple is declared using empty parenthesis [2]. _thread.start_new_thread(testThread, ()) You can check the full source code below. import _thread import time def testThread(): while True: print("Hello from thread") time.sleep(2) _thread.start_new_thread(testThread, ()) Testing the code To test the code, simply upload the previous script to your board and run it. You should get an output similar to figure 1, which shows the output of our thread. It should print the message with the periodicity defined in the code. Figure 1 – Output of the script. References [1] [2] How do you kill a thread? LikeLiked by 1 person Hi, I haven’t yet found a way to do it and I haven’t found anything related in the documentation: The only thing that I’ve found so far is that when the thread function return, then the thread exits. If I found some way of doing it I will make a post about it. Best regards, Nuno Santos Pingback: ESP32 MicroPython: Passing arguments to a thread function | techtutorialsx
https://techtutorialsx.com/2017/10/02/esp32-micropython-creating-a-thread/
CC-MAIN-2017-43
refinedweb
496
63.29
28 March 2012 09:29 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> This was largely because prices of polyvinyl chloride (PVC) resin, one of the company’s main products, fell in the end of the third quarter in 2011, said the statement. Tianye’s operating income in 2011 rose by 5.3% year on year to CNY3.6bn, according to the statement. The company aims to produce 280,000 tonnes of PVC resin, 220,000 tonnes of caustic soda and 300,000 tonnes of calcium carbide in 2012, the statement said. In 2011, Tianye produced 260,000 tonnes of PVC resin, a decrease of 8.1% from 2010, and 191,700 tonnes of ion-exchange membrane caustic soda, a decrease of 11.4% from 2010, according to the statement. The statement did not provide the volume of ion-exchange membrane caustic soda Tianye aims to produce in 2012 and the volume of calcium carbide produced by the company in
http://www.icis.com/Articles/2012/03/28/9545423/chinas-xinjiang-tianye-posts-13.9-fall-in-2011-net-profit.html
CC-MAIN-2014-42
refinedweb
159
73.88
Question about the weekly releases. Is there a way to get these using git? The packaged downloads are great, but it would be more convenient if we could just get an update to save downloading a new source tree every week or so. If this is possible could you outline how to do it, or point me in the direction of something that would help? I couldn't find anything in the software section that outlined this. For those of us who would still like updates but are not in Commons labs at the moment this would be a big help. Thanks in advance, I miss you guys. Ben If you still have RosettaCommons GitHub access, the weekly releases are available under the weekly_releases branch namespace. For example to get the 2013 week 33 weekly release, you'd simply do something like "git fetch origin; git checkout -t origin/weekly_releases/2013-wk33". If you don't have access to the RosettaCommons git repository anymore, I do not believe we have any setup to allow for git access to the weekly releases. (As I understand things, Git & GitHub makes it hard to isolate access to just the weekly release branches.) Though as a RosettaCommons alumn, you may be able to negotiate read-only git access with your former PI, even if you might no longer have push privileges - though don't quote me on that.
https://www.rosettacommons.org/node/3436
CC-MAIN-2022-05
refinedweb
234
69.31
This is the mail archive of the cygwin mailing list for the Cygwin project. Corinna Vinschen wrote: > Unfortunately it doesn't work for variables. We can hide the timezone > function, but how do we alias timezone to _timezone in libcygwin.a? Why does the variable need to be renamed? Can't we continue to call it _timezone internally and then "#define timezone _timezone" in a public header? It looks like this is already what we get in <cygwin/time.h> if we simply stop defining __timezonefunc__. Or is pulluting the namespace with a macro called "timezone" too hideous? In that case we could try declaring it "extern long timezone asm("_timezone");" in the header. Brian -- Unsubscribe info: Problem reports: Documentation: FAQ:
http://cygwin.com/ml/cygwin/2007-12/msg00001.html
CC-MAIN-2019-30
refinedweb
121
68.26
I want a legend without the black border. I've tried a few things that have been suggested on this forum and elsewhere to no avail. According to what I've seen, it should be as simple as: import matplotlib.pyplot as plt import numpy as np N = 5 Means1 = (20, 35, 30, 35, 27) Means2 = (25, 32, 34, 20, 25) ind = np.arange(N) # the x locations for the groups width = 0.20 # the width of the bars fig = plt.figure() ax = fig.add_subplot(111) rects1 = ax.bar(ind, Means1, width, color='k') rects2 = ax.bar(ind+width, Means2, width, color='w') ax.legend( (rects1[0], rects2[0]), ('set1', 'set2'), frameon=False ) plt.show() It all works except for "frameon=False" I get this: /usr/lib/pymodules/python2.7/matplotlib/axes.pyc in legend(self, *args, **kwargs) 4042 4043 handles = cbook.flatten(handles) -> 4044 self.legend_ = mlegend.Legend(self, handles, labels, **kwargs) 4045 return self.legend_ 4046 TypeError: __init__() got an unexpected keyword argument 'frameon' I've also checked my matplotlibrc under the "Legend" section and I don't see a "legend.frameon" line. It must be something simple that I am doing wrong. Any ideas? ··· -- View this message in context: Sent from the matplotlib - users mailing list archive at Nabble.com.
https://discourse.matplotlib.org/t/legend-border-frameon-keyword/16203
CC-MAIN-2019-51
refinedweb
213
70.19
Øystein Grøvlen wrote: >>>>>>"DJD" == Daniel John Debrunner <[email protected]> writes: > > > DJD> [As an aside traditionally Cloudscape did not use synchronized *static* > DJD> methods as there were locking issues related to class loading - this was > >> From jdk 1.1 days, have those issues been fixed now?] > > I am not an expert on this, but Doug Lea says in "Concurrent > Programming in Java": > > The JVM internally obtains and releases the locks for Class > objects during class loading and intialization. Unless you > are writing a special class ClassLoader or holding multiple > locks during static initialization sequences, these internal > mechanics cannot interfere with the use of ordinary methods > and blocks synchronized on Class objects. Thanks. The "Unless ... holding multiple locks.. " part is a little troubling. And exactly what is meant by 'Class objects', as while you can have a synchronized static method, I don't see how you could have a block synchronized in the same way. The Derby code uses the workaround of creating a static final member referencing an object to be used as the synchronization object and uses that to synchronized static methods. That may still be the safest approach. E.g. public class X { private static final Object syncMe = new Object(); public void static someMethod() { synchronized (syncMe) { // ... code } } } > > I found an example of how one catches a StandardException. I guess in > my case it should be something like this: > > ConstantAction csca = new CreateSchemaConstantAction(schemaName, (String) null); > try { > csca.executeConstantAction(activation); > } catch (StandardException se) { > if (se.getMessageId().equals(SQLState.LANG_OBJECT_ALREADY_EXISTS)) { > // Ignore "Schema already exists". > // Another thread has probably created it since we checked for it > } else { > throw se; > } > } > > Does this look OK? Yes. Dan.
http://mail-archives.apache.org/mod_mbox/db-derby-dev/200504.mbox/%[email protected]%3E
CC-MAIN-2016-22
refinedweb
274
54.83
Calculating Average and high/lowest test score Hello, I am getting frustrated because I am currently trying to figure out how to solve this program: Write a program that prompts the user for test scores (doubles). The user enters -1 to stop the entry. After all of the test scores have been entered, calculate the average, the highest and the lowest test score. So far this is what I have: #include <iostream> using namespace std; int main() { double score, average, highest, lowest; int counter = -1; do { counter++; cout << "Please enter a score (enter -1 to stop): "; cin >> score[counter]; } while (scores[counter] >= 0); cout << "Highest is "; cin >> highest; cout << "Lowest is "; cin >> lowest; // I am stuck at this point which is to CALCULATE AND DISPLAY THE AVERAGE, // THE HIGHEST AND LOWEST TEST SCORE }
https://www.daniweb.com/programming/software-development/threads/299365/calculating-average-and-high-lowest-test-score
CC-MAIN-2017-17
refinedweb
133
55.41
Upgrading from rpy2 2.0.x to rpy2 3.0.x Problem: rpy2 v2 interface is arcane. Solution: Update to rpy v3! Background My Massive eco-evolutionary synthesis simulations python software uses a small chunk of R code internally (yes I’m ashamed of mixing programming languages, but there’s nothing I can do rn). Basically, all the ‘good’ phylogenetic tree packages are written in R and I need to simulate a big ol’ tree and evolve some traits on it. So in MESS I need to run some R code in the most compartmentalized way possible. rpy2 is a binding layer between python and R, where you can run an R instance and make calls to it and fetch data from it. Well! This is always an ugly process and we initially developed the MESS code using the rpy2 2.x branch. The old code looked like this: import rpy2.robjects as robjects from rpy2.robjects import r, pandas2ri ## define the function you want to call in R (This is just a skel) make_meta = """makeMeta <- function(a,b,c,d){}""" # Call the function make_meta_func = robjects.r(make_meta) res = pandas2ri.ri2py(make_meta_func(J, S_m, speciation_rate, death_proportion, trait_rate_meta)) # Unpack the results tree = res[0][0] traits = pandas2ri.ri2py(res[1]) abunds = pandas2ri.ri2py(res[2]) Only god knows what’s going on will all this craziness…. I don’t know, maybe I was doing it wrong before, but it did work. Anyway, it’s one of those things where you figure it out after hours and hours and hours of work, and then you leave it alone forever and hope it never breaks. Well today I accidentally updated rpy2 to version 3.x and it broke. :(. Here’s the error message (just the relevant part): File "/home/isaac/Continuosity/MESS/MESS/Metacommunity.py", line 191, in _simulate_metacommunity res = pandas2ri.ri2py(make_meta_func(J, S_m, speciation_rate, death_proportion, trait_rate_meta)) AttributeError: module 'rpy2.robjects.pandas2ri' has no attribute 'ri2py' Converting my code to rpy2 3.x So I kind of squinted at this for a while, and the rolled up my sleeves. Turns out the new interface is GREAT! But the documentation is still lacking (part of why I’m adding this post). After some trial and error I figured out the new way: from rpy2 import robjects # Same function as before, the R code didn't change, just the python interface make_meta = """makeMeta <- function(a,b,c,d){}""" # Call the function make_meta = robjects.r(make_meta) res = make_meta(1000, 100, 2, 0.7, 1) # Unpack the results tree = res[0][0] traits = np.array(res[1]) sad = np.array(res[2]) Literally a 100% improvement. The only exposure you have to the rpy2 interface is when you create the robjects.r thing (also technically having to cast the results to a numpy aray is an extra step). It is so transparent now it’s insane. If you are reading this and you are an rpy2 developer: “You did a great job! Thank you.”
https://isaacovercast.github.io/blog/rpy2-2.0-3.0-interface-change/
CC-MAIN-2021-43
refinedweb
498
67.65
A canvas specialized to set attributes. It contains, in general, TGroupButton objects. When the APPLY button is executed, the actions corresponding to the active buttons are executed via the Interpreter. See examples in TAttLineCanvas, TAttFillCanvas, TAttTextCanvas, TAttMarkerCanvas Definition at line 19 of file TDialogCanvas.h. #include <TDialogCanvas.h> DialogCanvas default constructor. Definition at line 35 of file TDialogCanvas.cxx. DialogCanvas constructor. Definition at line 44 of file TDialogCanvas.cxx. DialogCanvas constructor. Definition at line 55 of file TDialogCanvas.cxx. DialogCanvas default destructor. Definition at line 66 of file TDialogCanvas.cxx. Called when the APPLY button is executed. Definition at line 73 of file TDialogCanvas.cxx. Create APPLY, gStyle and CLOSE buttons. Definition at line 100 of file TDialogCanvas.cxx. Automatic pad generation by division. Pads are automatically named canvasname_n where n is the division number starting from top left pad. Example if canvasname=c1 , nx=2, ny=3: Once a pad is divided into sub-pads, one can set the current pad to a subpad with a given division number as illustrated above with TPad::cd(subpad_number). For example, to set the current pad to c1_4, one can do: Note1: c1.cd() is equivalent to c1.cd(0) and sets the current pad to c1 itself. Note2: after a statement like c1.cd(6), the global variable gPad points to the current pad. One can use gPad to set attributes of the current pad. Note3: in case xmargin <=0 and ymargin <= 0, there is no space between pads. The current pad margins are recomputed to optimize the layout. Definition at line 53 of file TDialogCanvas.h. Definition at line 37 of file TDialogCanvas.h. Definition at line 38 of file TDialogCanvas.h. Set world coordinate system for the pad. Definition at line 125 of file TDialogCanvas.cxx. Recursively remove object from a pad and its sub-pads. Definition at line 134 of file TDialogCanvas.cxx. Definition at line 41 of file TDialogCanvas.h. Definition at line 54 of file TDialogCanvas.h. Set Lin/Log scale for X. Definition at line 55 of file TDialogCanvas.h. Set Lin/Log scale for Y. Definition at line 56 of file TDialogCanvas.h. Set canvas name. In case name is an empty string, a default name is set. Reimplemented from TCanvas. Definition at line 45 of file TDialogCanvas.h. Definition at line 46 of file TDialogCanvas.h. Definition at line 47 of file TDialogCanvas.h. Deprecated: use TPad::GetViewer3D() instead. Definition at line 57 of file TDialogCanvas.h. Pointer to object to set attributes. Definition at line 26 of file TDialogCanvas.h. Pad containing object. Definition at line 27 of file TDialogCanvas.h.
https://root.cern.ch/doc/v622/classTDialogCanvas.html
CC-MAIN-2021-21
refinedweb
439
54.9
Changes: * Wed Aug 13 2001 Bob Tanner - Rev 2.2.D9 Changes: * Fri Aug 10 2001 Nate Carlson - Changed real-time.com in the .mc to 'localhost.localdomain' * Fri Aug 03 2001 Nate Carlson - Enable libmilter. * Wed Jun 13 2001 Nate Carlson - Removed FEATURE(`relay_based_on_MX'), used access for relay. * Mon Jun 11 2001 Nate Carlson - Fixed /usr/lib/sasl/Sendmail.conf * Fri Jun 08 2001 Nate Carlson - Updated to include /usr/lib/sasl/Sendmail.conf... read more * Mon Jul 23 2001 Bob Tanner <[email protected]> - Rev up to 1.4.2 The JavaTM Cryptography Extension (JCE) is a set of packages that provide a framework and implementations for encryption, key generation and key agreement, and Message Authentication Code .) Jlint will check your Java code and find bugs, inconsistencies and synchronization problems by performing data flow analysis and building lock graphs. Jlint is able to detect syntax and semantical problems. Antic tries to find bugs in tokens, operator precendences and statement bodies. - support for all IANA encoding aliases which have a clear mapping to encodings recognized by Java - fixes for degradations in DTD validation performance caused by the schema implementation - support for setAttribute/getAttribute in JAXP - two new parser properties permitting an application writer to associate schema documents with specific namespaces without relying on instance documents. * Mon Jun 11 2001 Nate Carlson <[email protected]> - Fixed /usr/lib/sasl/Sendmail.conf * Fri Jun 08 2001 Nate Carlson <[email protected]> - Updated to include /usr/lib/sasl/Sendmail.conf * Fri Jun 01 2001 Nate Carlson <[email protected] - Updated to Sendmail 8.11.4 * Tue Apr 24 2001 Nate Carlson <[email protected]> - Changed AUTH_OPTIONS to 'Ap' to fix bug with Exchange... read more JSwat is an extensible, standalone, graphical Java debugger front-end, written to use the Java Platform Debugger Architecture (JPDA) package provided by JavaSoft. The program is licensed under the GNU General Public License and thus is freely available in binary as well as source code form. You should also check out the page on the Open-Source Directory. The Element Construction Set is a Java API for generating elements for various markup languages it directly supports HTML 4.0 and XML, but can easily be extended to create tags for any markup language. It is designed and implemented by Stephan Nagy and Jon S. Stevens.). This version of the MM.MySQL driver is a beta release. BETA means that while there are no known show-stopping bugs, the features of this driver are still immature enough to cause the end-user to be exposed to some level of risk when using the driver. You should only use BETA versions of MM.MySQL if you are comfortable with the inherent risk involved with using a BETA product. MM.MySQL is an implemntation of the JDBC API for the MySQL relational database server. It strives to conform as much as possible to the API as specified by JavaSoft. It is known to work with many third-party products, including Borland JBuilder, IBM Visual Age for Java, SQL/J, the Locomotive and Symantec Visual Cafe. The Jakarta-ORO Java classes are a set of text-processing Java classes that provide Perl5 compatible regular expressions, AWK-like regular expressions, globexpressions, and utility classes for performing substitutions, splits, filteringfil. Daniel will continue to participate in their development under the Jakarta Project.. The Semantic Bovinator is a lexer, parser-generator, and parser. It is written in Emacs Lisp and is customized to the way Emacs thinks about language files, and is optimized to use Emacs' parsing capabilities. The Semantic Bovinator's goal is to provide an intermediate API for authors of language agnostic tools who want to deal with languages in a generic way. It also provides a simple way for Mode Authors who are expert's in their language, to provide a parser for those tool authors, without knowing anything about those tools. EIEIO is an Emacs lisp program which implements a controlled object-oriented programming methodology following the CLOS standard. EIEIO also has object browsing functions, and custom widget types. It has a fairly complete manual descrbing how to use it. EIEIO also supports the byte compiler for Emacs and XEmacs. Because of the nature of byte compiling, EIEIO is picky about the version of emacs you are running. It supports Emacs 19.2x+, Emacs 20.x, and XEmacs 19.1x. Byte compiling EIEIO is VERY IMPORTANT if performance is important.... read more There's no sense in reinventing the wheel--here are some servlet support classes I wrote that you can use. Most famous is the file upload package MultipartRequest and MultipartParser.. Because of the requirements, I was able to change many of the *.jar files to be %{javalibdir}/*.jar - Start of the security patches. Do not think it's a very good idea to have tomcat run as root, made changes to run tomcat as user tomcat4. - Patch so javadoc build correctly. - Patch to remove the example context from server.xml - Latest snapshot of tomcat requires jmx, jndi, and jsse, added them as secondary source files. - Found a couple issued with B5, went to B6 build date 20010523 - Server layout in B5 changed to have a common/lib directory and tomcat wants that directory, so created it. - Change the name to jakarta-tomcat to follow the official project name - Change from mkdir -p to install -d (better platform support) - Used macros for build root, in case people wish to change it - Made permission settings in 1 location. I personally like them in the %file section rather then specifing them with install --mode. - Cleaned up setup, rpm is smart about unpacking files - Added Build dependencies, so check for a jdk is not necessary - Formatting clean up. I use emacs and the shell-script major mode with rpm minor mode to keep a "standard" format. Apache SOAP v2.2 has been released. It is posted on the web-site (), and the release candidates have been removed. There is a detailed list of changes on the site, and in the distribution. Thanks, Apache SOAP Development Team
https://sourceforge.net/p/rte/news/?source=navbar&page=1
CC-MAIN-2017-22
refinedweb
1,019
57.98
Hi Edward, Edward Rozendal <[email protected]> writes: > I have an application with Xerces 2.4.0 that parses XML using SAX2. My > client has supplied me with XML schema's that include namespaces. When I > create an XML file with XMLSpy it will look like: > > <?xml version="1.0" encoding="UTF-8"?> > <!-- edited with XMLSpy v2006 sp2 U () --> > <tns:keepalive xmlns: > > The application does accept and validate this XML file. However I have been > asked if it is also possible to accept and validate XML files that look like > this: > > <keepalive/> I am pretty sure this is not possible. Element namespace is part of the element id (just like name). The best you can do is this: <keepalive xmlns=""/> hth, -boris -- Boris Kolpackov Code Synthesis Tools CC Open-Source, Cross-Platform C++ XML Data Binding
http://mail-archives.apache.org/mod_mbox/xerces-c-users/200701.mbox/%[email protected]%3E
CC-MAIN-2016-18
refinedweb
136
74.39
# + sarge? tags 220814 + fixed-in-experimental thanks mate On Fri, Nov 14, 2003 at 07:54:58PM +0000, Matthew Wilcox wrote: > Branden, I initially thought this was an hppa problem. It's not, it's > a kernel-headers problem. I suspect it will also affect unstable, > but I'm not sure. > >? Matt, we've added patches for this to both trunk and branches/4.3.0/sid; 4.3.0-0pre1v5 will (does?) have this patch, however it appears it's broken for 2.4 kernels (see James's message below). On Fri, Nov 14, 2003 at 08:17:35PM +0000, James Troup wrote: > I haven't had a chance to investigate this yet, so no bug, but I > thought I'd at least warn you. This was the 3rd attempt on vore. The > first had an out-of-date linux-kernel-headers installed in the chroot, > so I freshened the chroot and retried. #2 got bitten by the sparc32 > fuckage (see sparc-utils changelog for details). #3 (below) was in an > up-to-date chroot (with working sparc32) > > | Automatic build of xfree86_4.2.1-14 on vore by sbuild/sparc 1.170.4 > | Build started at 20031114-1051 > | ****************************************************************************** > > [...] > > | ** Using build dependencies supplied by package: > | Build-Depends: dpkg (>= 1.7.0), cpp-3.2, flex-old, bison, bsdmainutils, groff, zlib1g-dev | libz-dev, libncurses5-dev | libncurses-dev, libpam0g-dev | libpam-dev, libfreetype6-dev, libpaperg, libstdc++5-dev | libstdc++-dev, tetex-bin, po-debconf, debhelper (>= 4.1.16), html2text, libglide2-dev (>> 2001.01.26) [i386], libglide3-dev (>> 2001.01.26) [alpha i386], kernel-headers-2.4 | hurd | freebsd | netbsd | openbsd > > [...] > > | lnx_io.c: In function `KIOCSRATE_ioctl_ok': > | lnx_io.c:128: error: structure has no member named `period' > | lnx_io.c:130: error: structure has no member named `period' > | lnx_io.c:131: error: structure has no member named `period' > | make[8]: *** [lnx_io.o] Error 1 > > A complete build log can be found at > Hmm, that's pretty bizzare, given this stanza: /* Deal with spurious kernel header change */ #if defined(LINUX_VERSION_CODE) && defined(KERNEL_VERSION) # if LINUX_VERSION_CODE >= KERNEL_VERSION(2,5,42) # define rate period # endif #endif AFAICT, the only kernel headers on there are 2.4.21-sparc, and it should only be tripping #define rate period if the version is >= 2.5.42 (this is the case for both branches/4.3.0/sid and trunk, FWIW, and the only mention of 'period' in the extracted directories is protected by the L_V_C #if. Shit, *sigh*. I take it you don't still have the build tree kicking around? If not, could you please install the build-deps somewhere on vore so I can have a poke at this? Cheers, Daniel -- Daniel Stone <[email protected]> Debian X Strike Force: Attachment: pgpJKEHCECDYJ.pgp Description: PGP signature
https://lists.debian.org/debian-x/2003/11/msg00338.html
CC-MAIN-2015-11
refinedweb
458
68.77
I have a small Python script written in 2.7 that sends a subprocess call to run another small python script written in 3.6 to export some maps. While the scripts appear to work in a stand a lone environment, nothing seems to happen when I add the call script to my model. It indicates that it was run, but nothing seems to be ran on the back in. Here is my subprocess call script: import subprocess,os from subprocess import calloutPT = "C:/GIS/gisData/data/vector/sigacts/sigactsSIMS/output/DailyUpdate/provTripolitania.png" exit_code = call('C://Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/pythonw.exe C://GIS/gisData/toolsNscripts/exportMaps_PRO_Daily.py')if os.path.isfile(outPT): print "New file exists..." else: print "File not located..." Does anyone know why it will not run when in modelbuilder?
https://community.esri.com/t5/modelbuilder-questions/subprocess-call-via-modelbuilder-not-working/m-p/859647
CC-MAIN-2022-27
refinedweb
139
52.26
I have written a class which query a Date from a table of the Database with getString(). When I deploy the class in the Database & run the class from a Java Stored Procedure, I get an incorrect value for the day 21/March/2002 (I obtain 20/March/2002 23:00:00) The result of the select from SQL/plus is correct: SQL> select c1,to_char(c1,'yyyy-mm-dd hh24:mi:ss') c1fmt from prueba; c1 c1fmt -------- ------------------- 20/03/02 2002-03-20 00:00:00 21/03/02 2002-03-21 00:00:00 22/03/02 2002-03-22 00:00:00 21/10/02 2002-10-21 00:00:00 When I run the class from JDeveloper or from command line directly, the result is the same: 2002-03-20 00:00:00.0 2002-03-21 00:00:00.0 2002-03-22 00:00:00.0 2002-10-21 00:00:00.0 But when I run the Java Stored Procedure associated to the class in the database the result is wrong: 2002-03-20 00:00:00.0 2002-03-20 23:00:00.0 <-- 2002-03-22 00:00:00.0 2002-10-21 00:00:00.0 ¿There is any idea what occurs with the day: 21/March/2002? Any help would be greatly appreciated. ------------------------------------------------- Here is the code of the class: import java.sql.*; import java.sql.Connection; import java.sql.DriverManager; import oracle.jdbc.driver.OracleDriver; public class Prueba{ public main() { try{ DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver()); // To deploy in DB: Connection conn = new oracle.jdbc.driver.OracleDriver().defaultConnection(); // To run from Jdeveloper: // Connection conn = DriverManager.getConnection("jdbc:oracle:thin:teseonet/teseonet@jerte:1521:ccdo2"); } catch (SQLException ex) { ex.printStackTrace(); } String fecha; try { // Obtiene el número de envios cargados. String strSQL = "SELECT c1 from prueba"; Statement stmt = conn.createStatement (); ResultSet res = stmt.executeQuery(strSQL); while (res.next()){ System.out.println(res.getString(1)); } stmt.close(); res.close(); conn.close(); } catch (Exception ex) { // Se capturan los posibles errores SQL try { conn.close(); } catch (SQLException ex_conn) { } } } } ------------------------------------------------- Discussions General J2EE: Retrieving Data problem Retrieving Data problem (2 messages) - Posted by: Patricia Palenzuela - Posted on: June 25 2003 04:28 EDT Threaded Messages (2) - Retrieving Data problem by Kalin Komitski on June 25 2003 06:32 EDT - I have solved the problem. by Patricia Palenzuela on June 27 2003 02:05 EDT Retrieving Data problem[ Go to top ] This problem should be related to "daylight savings changes". - Posted by: Kalin Komitski - Posted on: June 25 2003 06:32 EDT - in response to Patricia Palenzuela Make sure all machines have the same time zone. It could solve the problem. Unfortunately it could be just a bug. For example I had similar problem with an old JDK version. You might need to write a workaround for the problem if changing the version is not an option. Kalin I have solved the problem.[ Go to top ] I have solved the problem. - Posted by: Patricia Palenzuela - Posted on: June 27 2003 02:05 EDT - in response to Patricia Palenzuela Indeed it could be a bug, because the Time Zone of the database was correct but when I get Time Zone from my program with: TimeZone.getDefault() I obtained GMT+04:30. Its impossible! I have solved the problem changing the Timezone of the JVM after establishing database connection: conexion = new OracleDriver().defaultConnection(); TimeZone.setDefault(TimeZone.getTimeZone("Europe/Madrid")); Thanks.
http://www.theserverside.com/discussions/thread.tss?thread_id=20013
CC-MAIN-2014-15
refinedweb
574
58.69
CSS Modules and React. Article Series: - What are CSS Modules and why do we need them? - Getting Started with CSS Modules - React + CSS Modules = 😍 (You are here!) In the previous post we set up a quick project with Webpack that showed how dependencies can be imported into a file and how a build process can be used to make a unique class name that is generated in both CSS and HTML. The following example relies heavily on that tutorial so it’s definitely worth working through those previous examples first. Also this post assumes that you’re familiar with the basics of React. In the previous demo, there were problems with the codebase when we concluded. We depended on JavaScript to render our markup and it wasn’t entirely clear how we should structure a project. In this post we’ll be looking at a more realistic example whereby we try to make a few components with our new Webpack knowledge. To catch up, you can check out the css-modules-react repo I’ve made which is just a demo project that gets us up to where the last demo left off. From there you can continue with the tutorial below. Webpack’s Static Site Generator To generate static markup we'll need to install a plugin for Webpack that helps us generate static markup: npm i -D static-site-generator-webpack-plugin Now we need to add our plugin into `webpack.config.js` and add our routes. Routes would be like / for the homepage or /about for the about page. Routes tell the plugin which static files to create. var StaticSiteGeneratorPlugin = require('static-site-generator-webpack-plugin'); var locals = { routes: [ '/', ] }; Since we want to deliver static markup, and we’d prefer to avoid server side code at this point, we can use our StaticSiteGeneratorPlugin. As the docs for this plugin mentions, it provides: a series of paths to be rendered, and a matching set of index.html files will be rendered in your output directory by executing your own custom, webpack-compiled render function. If that sounds spooky hard, not to worry! Still in our `webpack.config.js`, we can now update our module.exports object: module.exports = { entry: { 'main': './src/', }, output: { path: 'build', filename: 'bundle.js', libraryTarget: 'umd' // this is super important }, ... } We set the libraryTarget because that’s a requirement for nodejs and the static site plugin to work properly. We also add a path so that everything will be generated into our `/build` directory. Still inside our `webpack.config.js` file we need to add the StaticSiteGeneratorPlugin at the bottom, like so, passing in the routes we want to generate: plugins: [ new ExtractTextPlugin('styles.css'), new StaticSiteGeneratorPlugin('main', locals.routes), ] Our complete `webpack.config.js` should now look like this: var ExtractTextPlugin = require('extract-text-webpack-plugin'); var StaticSiteGeneratorPlugin = require('static-site-generator-webpack-plugin') var locals = { routes: [ '/', ] } module.exports = { entry: './src', output: { path: 'build', filename: 'bundle.js', libraryTarget: 'umd' // this is super important }, module: { loaders: [ { test: /\.js$/, loader: 'babel', include: __dirname + '/src', }, { test: /\.css$/, loader: ExtractTextPlugin.extract('css?modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]'), include: __dirname + '/src' } ], }, plugins: [ new StaticSiteGeneratorPlugin('main', locals.routes), new ExtractTextPlugin("styles.css"), ] }; In our empty `src/index.js` file we can add the following: // Exported static site renderer: module.exports = function render(locals, callback) { callback(null, '<html>Hello!</html>'); }; For now we just want to print Hello! onto the homepage of our site. Eventually we’ll grow that up into a more realistic site. In our `package.json`, which we discussed in the previous tutorial, we already have the basic command, webpack, which we can run with: npm start And if we check out our build directory then we should find an index.html file with our content. Sweet! We can confirm that the Static Site plugin is working. Now to test that this all works we can head back into our webpack.config.js and update our routes: var locals = { routes: [ '/', '/about' ] }; By rerunning our npm start command, we’ve made a new file: `build/about/index.html`. However, this will have "Hello!" just like `build/index.html` because we’re sending the same content to both files. To fix that we'lll need to use a router, but first, we'll need to get React set up. Before we do that we should move our routes into a separate file just to keep things nice and tidy. So in `./data.js` we can write: module.exports = { routes: [ '/', '/about' ] } Then we’ll require that data in `webpack.config.js` and remove our locals variable: var data = require('./data.js'); Further down that file we’ll update our StaticSiteGeneratorPlugin: plugins: [ new ExtractTextPlugin('styles.css'), new StaticSiteGeneratorPlugin('main', data.routes, data), ] Installing React We want to make lots of little bundles of HTML and CSS that we can then bundle into a template (like an About or Homepage). This can be done with react, and react-dom, which we’ll need to install: npm i -D react react-dom babel-preset-react Then we’ll need to update our `.babelrc` file: { "presets": ["es2016", "react"] } Now in a new folder, `/src/templates`, we’ll need to make a `Main.js` file. This will be where all our markup resides and it’ll be where all the shared assets for our templates will live (like everything in the <head> and our site’s <footer>: import React from 'react' import Head from '../components/Head' export default class Main extends React.Component { render() { return ( <html> <Head title='React and CSS Modules' /> <body> {/* This is where our content for various pages will go */} </body> </html> ) } } There are two things to note here: First, if you’re unfamiliar with the JSX syntax that React uses, then it’s helpful to know that the text inside the body element is a comment. You also might have noticed that odd <Head /> element—that’s not a standard HTML element—it’s a React component and what we’re doing here is passing it data via its title attribute. Although, it’s not an attribute it’s what’s known in the React world as props. Now we need to make a `src/components/Head.js` file, too: import React from 'react' export default class Head extends React.Component { render() { return ( <head> <title>{this.props.title}</title> </head> ) } } We could put all that code from `Head.js` into `Main.js`, but it’s helpful to break our code up into smaller pieces: if we want a footer then we would make a new component with `src/components/Footer.js` and then import that into our `Main.js` file. Now, in `src/index.js`, we can replace everything with our new React code: import React from 'react' import ReactDOMServer from 'react-dom/server' import Main from './templates/Main.js' module.exports = function render(locals, callback) { var html = ReactDOMServer.renderToStaticMarkup(React.createElement(Main, locals)) callback(null, '<!DOCTYPE html>' + html) } What this does is import all our markup from `Main.js` (which will subsequently import the Head React component) and then it’ll render all of this with React DOM. If we run npm start once more and check out `build/index.html` at this stage then we’ll find that React has added our `Main.js` React component, along with the Head component, and then it renders it all into static markup. But that content is still being generated for both our About page and our Homepage. Let’s bring in our router to fix this. Setting up our Router We need to deliver certain bits of code to certain routes: on the About page we need content for the About page, and likewise on a Homepage, Blog or any other page we might want to have. In other words we need a bit of software to boss the content around: a router. And for this we can let react-router do all the heavy lifting for us. Before we begin it’s worth noting that in this tutorial we’ll be using version 2.0 of React Router and there are a bevy of changes since the previous version. First we need to install it, because React Router doesn’t come bundled with React by default so we’ll have to hop into the command line: npm i -D react-router In the `/src` directory we can then make a `routes.js` file and add the following: import React from 'react' import {Route, Redirect} from 'react-router' import Main from './templates/Main.js' import Home from './templates/Home.js' import About from './templates/About.js' module.exports = ( // Router code will go here ) We want multiple pages: one for the homepage and another for the About page so we can quickly go ahead and make a `src/templates/About.js` file: import React from 'react' export default class About extends React.Component { render() { return ( <div> <h1>About page</h1> <p>This is an about page</p> </div> ) } } And a `src/templates/Home.js` file: import React from 'react' export default class Home extends React.Component { render() { return ( <div> <h1>Home page</h1> <p>This is a home page</p> </div> ) } } Now we can return to `routes.js` and inside module.exports: <Route component={Main}> <Route path='/' component={Home}/> <Route path='/about' component={About}/> </Route> Our `src/templates/Main.js` file contains all of the surrounding markup (like the <head>). The `Home.js` and `About.js` React components can then be placed inside the <body> element of `Main.js`. Next we need a `src/router.js` file. This will effectively replace `src/index.js` so you can go ahead and delete that file and write the following in router.js: import React from 'react' import ReactDOM from 'react-dom' import ReactDOMServer from 'react-dom/server' import {Router, RouterContext, match, createMemoryHistory} from 'react-router' import Routes from './routes' import Main from './templates/Main' module.exports = function(locals, callback){ const history = createMemoryHistory(); const location = history.createLocation(locals.path); return match({ routes: Routes, location: location }, function(error, redirectLocation, renderProps) { var html = ReactDOMServer.renderToStaticMarkup( <RouterContext {...renderProps} /> ); return callback(null, html); }) } If you’re unfamiliar with what’s going on here then it’s best to take a look at Brad Westfall’s intro to React Router. Because we’ve removed our `index.js` file and replaced it with our router we need to return to our webpack.config.js and fix the value for the entry key: module.exports = { entry: './src/router', // other stuff... } And finally we just need to head over to `src/templates/Main.js`: export default class Main extends React.Component { render() { return ( <html> <Head title='React and CSS Modules' /> <body> {this.props.children} </body> </html> ) } } {this.props.children} is where all our code from the other templates will be placed. So now we can npm start once more and we should see two files being generated: `build/index.html` and `build/about/index.html`, each with their own respective content. Reimplementing CSS Modules Since the <button> is the hello world of CSS, we’re going to create a Button module. And although I’ll be sticking with Webpack’s CSS loader and what I used in the previous tutorial, there are alternatives. This is the sort of file structure that we’d like in this project: /components /Button Button.js styles.css We’ll then import this custom React component into one of our templates. To do that we can go ahead and make a new file: `src/components/Button/Button.js`: import React from 'react' import btn from './styles.css' export default class CoolButton extends React.Component { render() { return ( <button className={btn.red}>{this.props.text}</button> ) } } As we learnt in the previous tutorial, the {btn.red} className is diving into the CSS from styles.css and finding the .red class, then Webpack will generate our gobbledygook CSS modules class name. Now we can make some simple styles in `src/components/Button/styles.css`: .red { font-size: 25px; background-color: red; color: white; } And finally we can add that Button component to a template page, like `src/templates/Home.js`: import React from 'react' import CoolButton from '../components/Button/Button' export default class Home extends React.Component { render() { return ( <div> <h1>Home page</h1> <p>This is a home page</p> <CoolButton text='A super cool button' /> </div> ) } } One more npm start and there we have it! A static React site where we can quickly add new templates, components and we have the added benefit of CSS Modules so that our classes now look like this: You can find a complete version of the demo above in the React and CSS Modules repo. If you notice any errors in the code above then be sure to file an issue. There are certainly ways in which we could improve this project, for one we could add Browsersync to our Webpack workflow so we don’t have to keep npm installing all the time. We could also add Sass, PostCSS and a number of loaders and plugins to help out, but in the sake of brevity I’ve decided to leave those out of the project for now. Wrapping up What have we accomplished here? Well, although this looks like an awful lot of work we now have a modular environment to write code. We could add as many components as we like: /components Head.js /Button Button.js styles.css /Input Input.js style.css /Title Title.js style.css Consequently, if we have a .large class inside the styles for our Heading component then it won’t conflict with the .large styles from our Button component. Also, we can still use global styles by importing a file such as `src/globals.css` into each component, or simply by adding a separate CSS file into the <head>. By making a static site with React we’ve lost a great deal of the magical properties that React gives us out of the box, including managing state, yet it’s still possible to serve two kinds of website with this system: you can make a static site as I’ve shown you above and then progressively enhance everything with React superpowers after the fact. This workflow is neat and tidy but there are many instances when this combination of CSS Modules, React and Webpack would be complete overkill. Depending on the size and scope of the web project it would be borderline crazy to spend the time implementing this solution—if it was only a single web page, for instance. However, if there are lots of people contributing CSS to the codebase everyday then it might be extraordinarily helpful if CSS Modules prevented any errors that are thanks to the cascade. But this might lead to designers having less access to the codebase because they must now learn how to write Javascript, too. There are also a lot of dependencies that have to be supported for this method to work correctly. Does this mean that we’ll all be using CSS Modules in the near future? I don’t think so, because—as with all front-end techniques— the solution depends on the problem and not all problems are the same. Article Series: - What are CSS Modules and why do we need them? - Getting Started with CSS Modules - React + CSS Modules = 😍 (You are here!) Great series. If you’re using CSS Modules with React, I highly recommend taking a look at react-css-modules. Very nice walkthrough. If you want to go even further (PostCSS, Redux, etc) I like this starter project: Hi, the best thing i can do is to recommand you they are the best at this. If you write CSS Modules in this style, ES CSS Modules may also be useful! Awesome article! Just what I was looking for :D Well expained, thanks a lot! Static file generation makes deployments easy since you only need to upload the files and be done with it, no hassle with git hooks or specialised deploys depending on the host/setup you have. I’m getting to quite like that approach, for a lot of cases this is plentiful. Static file generation could be quite useful for my universal-dev-toolkit. I already have hashed files, browsersync, (s)css-modules, server-side rendering and a bunch of other goodies in there so adding this shouldn’t be too hard! This is pretty cool! I’ll be sure to take a closer look at this project. Thanks so much for this series of articles/tutorials Robin, I spotted a couple of issues with this latest one which have hindered me a little though. Just thought I should highlight them: Your code example for the .babelrcfile above has "presets": ["es2016", "react"]however previously in part 2 we were using "presets": ["es2015"]so it causes errors if you are following on from part 2. Also, there’s a typo in your router.jsexample above: should be: (took me a while to figure that one out!) Anyways, hopefully you can update those when you get a chance. Thanks! Thanks Chuck! I’ve gone ahead and fixed the post now. Yo! Thanks for such a gentle introduction to CSS Modules. Going through the basics was great, though I still feel like a Part IV is definitely missing… wink ;) Taking into account more complex scenarios like CSS specificity examples would be tremendously helpful. Also, don’t forget to update your .babelrcsample code on this part, which still contains a reference to es2016instead of es2015!
https://css-tricks.com/css-modules-part-3-react/
CC-MAIN-2019-35
refinedweb
2,928
57.47
![if gte IE 9]><![endif]> Hi, I have an issue to using my own table in sitefinity, for example i want to upload data by SSMS and display it in sitefinity widget or edit the data in my own table through sitefinity. What tools should i install? what skill should i have(may be MS LINQ)? or any others requirments should i have? Anyone can give me simple example about this? And secondly, if i used standard sitefinty widget(for example blog widget), the posts must be stored somewhere in database, but i don't know to extract it for reporting purpose for example. Thanks before.. Regards, Eddy You need to create Dynamic Module. It will automatically create table in DB and you will be able to edit information through Sitefinity backend. And show it for visitors in fronteng with autogenerated widget for this dynamic module You can find more useful information here: Hi Victor, Thanks for the explanations and suggestions, it's very useful, i am exploring it for this moment. Regards, Eddy I have tried using Module Builder for creating Dynamic Module (just say: myModule), Dynamic Content Type (just say:MyType), and Dynamic Content Items, and i can see that the table structure following the structure of my Dynamic Content Type plus an additional field "BASE_ID" that defined by Sitefinity. At this point, i can insert or edit something like a post or a transaction using Sitefinity Backend (create Content Items), and the table is reflected the changes. But i still can not insert rows directly to my table because i dont have the GUID number for filling BASE_ID field, and my table has relation to table SF_DYNAMIC_CONTENT too. My purpose is to insert many rows (as a master table in my application) at once. How to achieve this? I think this link would be a good entry point for me, thanks a lot. Hi Eddy. You need to do it through sitefintiy API or webservice. As example, you can check this article Hi Eddy, you need to add this using: using Telerik.Sitefinity.Model; And i want to tell you small hint, what can help you with API Go to Administration -> Module Builder -> Choose your module -> In the right sidebar click to "Code reference..." On this page you will find many code samples for your module I have a little problem when applying it. So, when i used DynamicContent class, it seems there no SetString procedure, am I missing something? here are my snipped codes: using Telerik.Sitefinity.DynamicModules.Model; ... public static void CreateStore(Store s) ... DynamicContent storeItem = dynamicModuleManager.CreateDataItem(storeType); // This is how values for the properties are set storeItem.SetString("Title", s.Title, cultureName); storeItem.SetString("Phone", s.Phone, cultureName); ... Ok, thanks for the enlightenments, very helpful..
https://community.progress.com/community_groups/sitefinity/frontandbackend-development/f/296/t/48411
CC-MAIN-2019-09
refinedweb
462
54.83
I'm having some difficulty exposing custom styles in the 2010 Rich-Text-Editor in browsers other than IE. Followed the standard naming conventions... The custom styles show up in the Markup Styles and Styles dropdowns in the Ribbon for IE but not in any other browser because the -ms-name property is a custom CSS extension in the MS namespace. I tried adding properties like -moz-name and -webkit-name and even name but they had no effect. View Complete Post Hi All, How can I customize my RSS viewer webpart in MOSS 2007. I am talking about the styles like, alternate row colors, etc.. Thanks in advance... Hello, I am making a custom site definition. I am able to embed master page in my site definition. This is corresponding thread for it. Now in same way I want to embed style sheets with site definition. How I can do it ? Hi I have been trying to create a custom Styles select button for the HTMLEditor control based on the examples given in the CustomButtonsAndPopups.cs and CustomEditor.cs files. I have had some success: The select button is displayed and the functionality of selecting a style (eg. H1, H2, P etc) and changing the style of selected text works but there are two problems that make it different from the Font Name select button: There is no label next to the button and there is no default text: The style button is on the left. The Font Name select button is on the right Any suggestions about how to add the label and the default option? Here is my version of the CustomButtonsAndPopups.cs file:. Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/36124-custom-rte-styles-browsers-other-than-ie.aspx
CC-MAIN-2016-44
refinedweb
292
64.81
History of Microsoft's AntiCompetitive Behaviorjabjoe (1042100) writes "Groklaw is highlighting a new document from the ECIS about the history of Microsoft's antiCompetitive behavior. It contains such gems as: "[W]e should just quietly grow j++ share and assume that people will take more advantage of our classes without ever realizing they are building win32-only java apps." --Microsoft's Thomas Reardon As well as the Gates 1998 Deposition" Link to Original Source White House Petition To Investigate Dodd For Bribery Agreed, it is bribery. But it wouldn't be a problem if we all had the same amount of money to give. If we did, it would be ridiculous to call it bribery. But we don't, so it is a problem. The wealthy can give more than the poor, much more. So much more that the poor, despite their number cease to matter. This is not how a democracy should work as it utterly breaks it in favour of the wealthy few. I don't understand how ScentCone can possibly think it's ok and get score:5 Insightful! Ubuntu Tablet OS To Take On Android, iOS LOL! You think? stuff to Android? If they hate X, help with Wayland. If users want normal graphical Unix, we can at least install the Wayland X server. Secret BBC Documents Reveal Flimsy Case For DRM My brother told me of a story (year or two ago) where he downloaded pirate version of a game he has bought because the DRM failed to allow him to play the game. The pirate version worked fine.... You also missing the point. DRM only need to be broken by one smart person, then everyone just copies the unprotected copy. You don't have to be the one to crack it to get a copy. Ubuntu Heads To Smartphones, and Tablets. :-) The best computer upgrade I've ever done was: The Mhz increase alone would have been good, but it was so much more. The BBC BASIC interpreter even fitted in the bigger instruction cache. As a kid writing BBC BASIC, this meant a massive speed up. I clocked it up to 287Mhz (and fitted a 486 heatsink and fan) to make it even faster. No upgrade I've ever done since then, short of replacing the computer, has made such a big difference. Old Arguments May Cost Linux the Desktop "Until the engineers get a clue, open source projects will never be more than a closet of hobbyist projects." What world are you living in? Even if you think everything you run is closed (and it probably isn't), there is a massive sea of things running free software, not just open source, that you will interactive with everyday. It's not unlikely you own a few. There are plenty of big companies and government departments that even the Linux desktop is used. Wake up. It's everywhere already. Microsoft Dilutes Open Source, Coins 'Open Surface' I've did a compete namespace and had the same issues. MS only documented at all after being taken to court by the EU. The docs are barely enough and new interfaces have been added since XP, and not all of them documented. Many don't even have a name you can get at, let alone a interface definition, just a GUID. Some, others have worked out, some, no one outside Microsoft knows. My eyes are now open to this crap way of working. The "magic blackbox" thinkings are naive if they think documenting the surface of the blackboxs are enough. The more complex the thing, the less well documentation is going to be able to cover all the combinations. I'm not saying source instead of docs, I'm saying source as a fall back for when the docs are not enough. Microsoft Dilutes Open Source, Coins 'Open Surface' You missed the point. You can have implementations doing exactly what the standard says but unable to work together because of a whole in the docs. Unix systems have the advantage that much of them are open, avoiding problems as implementations can look at what each other do. Also there is almost always multiple implementations so problems show up quickly. In Windows world you have the worse case, one, closed implementation. Any other later implementation has to use that as a reference implementation. When the docs fail, you have to find exactly what that implementation does and do the same, or at least what is compilable. Worse, if the docs say one thing and the closed reference implementation does another, you must do what the closed implementation does. Worse still, Microsoft have a history of deliberately making life for other implementations even harder. It's a nightmare situation. It's why I feel pushing Wine and Mono as things that make Microsoft technology cross platform is stupid. It's a game you can not win or even draw. Microsoft Dilutes Open Source, Coins 'Open Surface' Docs aren't perfect. We all know this. To me, if there isn't a open reference implementation I'm not sure it's a open. Only an implementation covers everything required. Yes, in theory, the docs should be an implementation written in English, but that fails as it can't be run and tested, so it's always an incomplete implementation. Also, personally, I often find it easier to dig out exact details for code from other code, rather than from written English. Microsoft Dilutes Open Source, Coins 'Open Surface' Yer, only if the specification is perfect. Which it never is. There is always things not documented that projects like Wine, Mono, Samba, etc must work out. Most of the time I don't think it's a deliberate MS policy (though I bet some of the time it is), it's just the nature of software. Really you want not just a published specification but a open reference implementation. MS, intentionally or otherwise, use this in both directions. PuTTY 0.61 Released Your right, I don't develop with cygwin. I just use it as a user. Anything I install, I install from it's repositories, and so far, it's always worked fine. I have compiled one or two things with it, and that's all just worked like it was a real Unix. This doesn't negate what you are saying, but I'm using it as an isolated environment and not trying to redistribute anything. If I did, I would try to redistrubute through the package management system, so may not hit the issues you have. But perhaps the repositories aren't controled like real Unix ones and I've just been lucky, I've not looked. I've not found speed an issue for my uses. I use the Unix like environment because I prefer it (and it has history longer than that session), but the standard console (Window's running cygwin) is terrible. MinTTY is amazing when compared with the standard console. So am I going to use PuTTY to connect to a real Unix environment? No. I'm going to use normal ssh like I was on a Unix environment. I did use PuTTY and normal Windows userland for quite a while, but I found cygwin + MinTTY works best for me. The piping stuff is an example where I can forget I'm on Windows and just do as I would normally. Don't get me wrong, I use PuTTY when on other people's machines, or don't have admin, but if it's a Windows machine that's for me, I'm going to setup and use MinTTY + cygwin because it gives me more than PuTTY. I just wish it's package management was more like APT. I miss apt-get when on cygwin. PuTTY 0.61 Released Here here! I didn't know Powershell still uses the crap default terminal! Haven't gone near it as I'm happy with cygwin and MinTTY, plus I try and avoid Windows specific stuff. PuTTY 0.61 Released Then those improvements make all the difference. Plus, you can use it to use normal ssh, normally. PuTTY 0.61 Released That is how I do still use PuTTY from time to time. When it's not your machine, it's polite to only use PuTTY rather than install anything, and if you don't have admin, it's the only option. But I don't often do this, I use cygwin+mintty as preference, like on my work machine. PuTTY 0.61 Released I find the PuTTY terminal isn't much better. Mintty is the best I found for Windows. Shame it's not like Guake, but stuff I found to do that on Windows hasn't been functional. PuTTY 0.61 Released Much nicer console and gives just standard command line ssh, which is all I want/need. I stopped using Putty years ago... /dev/video0 -f matroska pipe:" | ffplay pipe. Who Killed the Netbook? You probably can get more out of it to because it's probably using softfp instead of hardfp. Microsoft Releases Kinect SDK For Windows Hehe, I take you know it was Linux having drivers that probably started this. :-) MS probably hated all those Linux Kinect YouTube videos. C++ the Clear Winner In Google's Language Performance Tests That's the old JIT argument, and while in theory it might have some merit, in the last decade it shown not to. Christ, a great deal of stuff still targets i386 just to ensure it runs on everything, and yet those apps still out perform Java/C# apps. Why? Because the core instructions are still the core instructions and the old RISC rule holds true, most of the work is done using a few key instructions. Plus where the JIT argument breaks down is with things like DLLs. The DLL can be very specific for the computer, and old applications link in that DLL to do the work, and thus the work in question for the old application is done with the latest, computer specific, stuff. If you want speed, use pre-canned stuff, if you want productivity use something like python, if you want both, probably best use a language for each. For instance core logic in C, and GUI stuff in python. Or use C++ and except the complexity that adds. If you want to have both but are willy to compromise to have both, then maybe that is where JIT languages come in. Zero Install Project Makes 1.0 Release In Debian (and no doubt other package managers) you can have A and B use different versions of a shared lib, but one uses the "somelib.so" and the other "somelib.1.0.so". Normally a version of lib is standardize as the version of a lib. Other versions are used with version number as part of the name. If there is a conflict, then yes you can have only one or the other, but I don't see how you get out of that. For instance /usr/bin/convert and /usr/local/bin/convert is still a conflict in my book, one (local) overrides the other even if it's not overwritten it. You could hack something up with chroot, but it all starts getting ugly. Unless I'm missing something of course.
http://beta.slashdot.org/~jabjoe
CC-MAIN-2014-35
refinedweb
1,896
72.76
Visual Basic is quite easy. It gets a lot of flack for being so easy. It makes learning the concepts of programming very simple. I used to be a programming trainer and used the BASIC language as the first stepping stone into the world of software development, and it has worked. It has worked for me, technically. Let me tell you the boring story: I finished school back in 1996. By that time, my parents had already started a computer college. Needless to say, things were a lot different then. I didn't want to be involved, honestly. Not having many other qualifications basically forced me into it. A few years on, I decided to venture into programming. I attempted to start with Visual C++. It was horrible. I struggled. Add to that the fact that the book I used was literally full of errors and I had to find the errata online (with dial-up Internet). If it wasn't for Codeguru, I would have ended up somewhere else. I persisted, even though I had no idea what I was doing. I bought a book about Visual Basic. The language just made it simple enough for me to understand what I was doing. Only after I fully grasped Visual Basic, I could venture back to Visual C++ and understand it better. The funny part is that now I am employed as a C# developer. Enough about me. The aim of this article is not to bore you; it is to dig into some of the intricacies of Visual Basic. Today you will learn about Scope, Modules, and Accessibility Modifiers in Visual Basic. Scope Scope, in programming terms, refers to the visibility of assets. These assets include variables, arrays, functions, classes, and structures. Visibility in this case means which parts of your program can see or use it. Essentially, there are four levels of scope in Visual Basic. These are: - Block scope: Available only within the code block in which it is declared - Procedure scope: Available to all code within the procedure in which it is declared - Module scope: Available to all code within the module, class, or structure in which it is declared - Namespace scope: Available to all code in the namespace in which it is declared Block Scope A block is a set of statements enclosed within starting and ending declaration statements, such as the following: - If and End If - Select and End Select - Do and Loop - While and End While - For [Each] and Next - Try and End Try - With and End With If you declare a variable within a block, such as the above-mentioned examples, you can use it only within that block. In the following example, the scope of the SubTotal variable is the block between the If and End If statements. Because the variable SubTotal has been declared inside the block, you cannot refer to SubTotal when execution passes out of the block. If Tax > 15.0 Then Dim SubTotal As Decimal SubTotal = Tax * SubTotal 'Correct' End If 'Wrong. Will Not Work Because SubTotal is Out of Scope!' MessageBox.Show(SubTotal.ToString()) Procedure Scope Procedure scope refers to an an element that is declared within a procedure is not available outside that procedure. Only the procedure that contains the declaration can use it. Variables at this level are usually known as local variables. In the following example, the variable strWelcome declared in the ShowWelcome Sub procedure cannot be accessed in the Main Sub procedure: Sub Main() ShowWelcome() Console.WriteLine(strWelcome) 'Will not work!' Console.WriteLine("Press Enter to continue...") Console.ReadLine() End Sub Sub ShowWelcome()] Dim strWelcome = "Hello friend" Console.WriteLine(strWelcome) End Sub Module Scope You can declare elements at the Module level by placing the declaration statement outside of any procedure or block but within the module, class, or structure. When you create a variable at module level, the access level you choose determines the scope. More on Access Modifiers later. Private elements are available to every procedure in that particular module, but not to any code in a different module. In the following example, all procedures defined in the module can refer to the string variable strWelcome. When the second procedure is called, it displays the contents of the string variable strWelcome in a messagebox. Private strWelcome As String 'Outside of all Procedures' Sub StoreWelcomeGreeting() strWelcome = "Welcome to the world of Programming" End Sub Sub SayWelcome() MessageBox.Show(strWelcome) End Sub Namespace Scope Namespace scope can be thought of as project scope. By declaring an element at module level using either the Friend or Public keyword, it becomes available to all procedures throughout the namespace in which the element is declared. An element available from within a certain namespace also is available from within any namespaces that are nested inside that namespace. Public elements in a class, module, or structure are available to any project that references their project, as well. If I had changed the declaration of strWelcome (from the previous example) to: Public strWelcome As String 'Outside of all Procedures' strWelcome would have been accessible throughout the entire namespace. Modules A module is simply a type whose members are implicitly Shared and scoped to the declaration space of the standard module's containing namespace. This means that the entire Namespace can access items in the Module. Fully Qualified Names A fully qualified name is an unambiguous name that specifies which function, object, or variable is being referred to. An object's name is fully qualified when it includes all names in the hierarchic sequence above the given element as well as the name of the given element itself. Members of a standard module essentially have two fully qualified names if: - One fully qualified name is the name without the standard module name in front. - One fully qualified name is one including the standard module name. More than one module in a namespace may contain a member with the same name. Unqualified references to it outside of either module are ambiguous. For example: Namespace Namespace1 Module Module1 Sub Sub1() End Sub Sub Sub2() End Sub End Module Module Module2 Sub Sub2() End Sub End Module Module Module3 Sub Main() Sub1() 'Valid - Calls Namespace1.Module1.Sub1' 'Valid - Calls Namespace1.Module1.Sub1' Namespace1.Sub1() Sub2() 'Not valid - ambiguous' Namespace1.Sub2() 'Not valid - ambiguous' 'Valid - Calls Namespace1.Module2.Sub2' Namespace1.Module2.Sub2() End Sub End Module End Namespace Differences Between Modules and Classes The main difference between Modules and Classes is in the way they store data. There is never more than one copy of a module's data in memory. This means that when one part of your program changes a public object or variable in a module, and another part subsequently reads that variable, it will get the same value. Classes, on the other hand, exist separately for each instance of the class—for each object created from the class. Another difference is that data in a module has program scope. This means that the data exists for the entire life of your program; however, class data for each instance of a class exists only for the lifetime of the object. The last difference is that variables declared as Public in a module are visible from absolutely anywhere in the project, whereas Public variables in a class can only be accessed if you have an object variable containing a reference to a particular instance of a class. Accessibility Modifiers The access level of an object is what code has permission to read it or write to it. This level is determined not only by how you declare the object itself, but also by the access level of the object's container. The keywords that specify access level are called access modifiers. Visual Basic includes five (5) Access Levels, and they are: - Public - Protected - Friend - Protected Friend - Private Public Public indicates that the objects, functions, and variables can be accessed from code anywhere in the same project, or from outside projects that reference the project, and from any assembly built from the project. The next code segment creates a Public Course Class and accesses it from somewhere else in the project: Public Class Course Public CourseName As String End Class Public Class Student Public Sub Enroll() Dim c As New Course() c.CourseName = "Introduction to Programming" End Sub End Class Protected Protected indicates that the objects, variables, and functions can be accessed only from within the same class, or from a class derived from this class. A derived class is simply a class that inherits features from another class, which is called the base class. The following segment shows that if a class is not derived from a base class, it cannot access its members: Public Class Course Protected Duration As Integer End Class Public Class ProgrammingCourse Inherits Course Public Sub ChooseCourseDuration() Duration = 12 'OK' End Sub End Class Public Class WebDesignCourse Public Sub ChooseCourseDuration() Dim c As New Course() 'Inaccessible Because of Protection Level' c.Duration = 12 End Sub End Class Friend Friend specifies that the objects, functions, and variables can be accessed from within the same assembly, but not from outside the assembly. Assembly 1 Public Class Course Friend Cost As Double End Class Public Class GraphicDesignCourse Public Sub SetCost() Dim c As New Course() c.Cost = 5000 'OK' End Sub End Class Assembly2 Public Class BasicCourse Public Sub SetCost() Dim c As New Course() 'Cannot Access from external Assembly' c.Cost = 4000 End Sub End Class Protected Friend Protected Friend specify that objects, functions, and variables can be accessed either from derived (inherited) classes or from within the same assembly, or both. Private Private indicates that the objects, variables, and functions can be accessed only from within the same module, class, or structure. Conclusion Understanding Scope, Modules, and Access Levels is crucial in building any decent application. The sooner you know these, the better. There are no comments yet. Be the first to comment!
https://www.codeguru.com/vb/gen/vb_misc/algorithms/visual-basic-basics-modules-scope-and-accessibility-modifiers.html
CC-MAIN-2018-43
refinedweb
1,655
61.26
I::. Hi Luiz, Unfortunately J# doesn't support partial classes, which the Web Application Project uses heavily. Hope this helps, Hi Mandar, Yep -- you can definitely do this (covert VS 2005 Web Site Project to a VS 2005 Web Application Project). I have a tutorial on that shows how to-do this (both in Vb and C#). Scott Hi Jan, If you can send me email directly ([email protected]) I can see whether the patch for your particular language is ready. Thanks, Hi Dennis, Probably the easiest way to-do this is to copy the control declaration from the .designer.cs file into your .cs code-behind file and add the XML comment there. The designer will then not re-generate the field in the .designer.cs file (since it saw that you added it in the other part of your partial class), and you'll be free to modify the comment however you want. So Microsoft deploys Visual Studio 2005, we are all so exited but in the process we all complain about Hi Lastone, The mobile templates are actually included with the VS 2005 Web Application Project tempaltes (as opposed to being a separate project template). Try doing a Add New Item and you should see the "Mobile Web Form" item template that you can use to add a mobile-enabled page to the project. Hi Nick, This blog post describes how you change the default browser you use in VS: Beside all the things already being said, I'm really really glad you decided to support both ways. Most of my projects consist of multiple solutions files, some for developing, one for building setup packages, etc. In my opinion, the web project approach is way too inconsistent (since when did we ever store project-specific data in a solution file?). However, I see the point that developers who are forced to use a checkout-lock-based version control system were not happy at all with the web application approach. Thanks for giving back the much more configurable and consistent possibility of web application projects for those who use a more sophisticated build process (e.g. DEBUG and RELEASE versions), true project references and update-commit-based version control :-) Christoph Hi Howard, Can you send me email ([email protected]) with more details of this issue? I can then loop people in on my team to help. Hi Ryan, Can you send me email with more details about your exact configuration and scenario? thanks, Hi Robin, Can you send me email directly ([email protected])? I'll then loop you in with someone who can help. ASP.NET, Visual Studio, Tips and Tricks OK, so I have an ASP.NET 2.0 Web Application Project (and here) -- opposed to a Web Site Project --that... Unfortunately MSDN has recently changed a bunch of links. You can download it now from here: Hi James, There unfortunately was an installer bug with the SP1 beta that sometimes causes problems when adding new templates if you have previously installed WAP on the machine. The good news is that it is easy to fix. Just open up the Visual Studio 2005 Command Prompt Window and type this command: “devenv.exe /InstallVSTemplates”, and hit Enter. This will create the template cache correctly for you. Hi Hakan, Strongly typed datasets and TableAdapters are definitely supported with web application projects. Wendy's article here: details a few of the differences when using them with web application projects instead of web-site projects. There are some differences because of the different project model behaviors, but all the features of Typed DataSets work in both projects. Two weeks ago we released the VS 2005 SP1 Beta. You candownload it for free by visiting and registering Hi Paulo, Mohammed wrote up an article that describes how to make this work on Vista RC1 here: Hi Matt, You can create a strongly typed Profile object using this add-in: With Web Application Projects the Profile object isn't automatically generated for you. Instead, you need to use this utility to generate the Profile object to compile in your project: The C# Team needs your help! The Visual Studio 2005 SP1 Beta needs more testers doing more testing. We Hey, just getting into VS2005 and ASP.NET 2.0. Like the idea of the web application like in VS2003 but not sure which I should use. Printed the doc with the chart on pros and cons and that helps, but... Anyway, Guess I am too new to asp 2.0 to realize what I might be missing if I use the Web Application piece. Thanks for putting this out as people like their comfort zones. Hi Fregas, Are you using WAP with a VS 2005 Web Server, or are you trying to use it with IIS? Note that edit and continue doesn't work with IIS based proejcts. Hi Talz, It sounds like you don't have the web application project download installed. Make sure you download and install it from above and then you should see "ASP.NET Web Application" show up in the New Project dialog. Hi Jay, Have you built your project before running the ObjectDataSource wizard? This isn't required with web-site projects, but is needed with web application projects for them to show up as classes you can bind to. Hi Roland, Can you double check that the /app_globalresources directory and the .resx files it contains was copied correctly to the target location? What I suspect is happening is that the .resx files have not been set as "content" files within your project - in which case the Publish wizard within Visual Studio won't copy them to the target directory when you publish. They would then fail to resolve at runtime. If this is the case, then select the .resx files in the VS solution explorer and modify them to be "content" files within the property grid. Hi John, One approach you can use instead of using a WAP project is to add a web deployment project to your solution. You can then pipe the output of this to a VS 2005 Web Setup Project. This blog post describes how: We have a Web Service project developed in 1.1 and in the process of migrating it to 2.0. I am little confused with the above article since some times it talks Web Application project & Web Site project. Can you please tell me how to upgrade a Web Service project from 1.1 to 2.0? Regards, Raghavendra I'm using WAP, i did NOT get the Beta SP1 as I'm waiting for the 'real' thing... I'm getting this error when debugging: "Microsoft.visualstudio.web.application.dll" has encountered a problem... Will this be fixed with SP1?? Hi Camilla, I haven't heard of the issue you are reporting. Can you send me email about it and I'll loop someone in to investigate? Hi Raghavendra, You can absolutely use the VS 2005 Web Application Project option to upgrade a web-service projects from VS 2003. I have a tutorial on that walksthrough how to-do this. Scott- I'm not getting that error anymore but I do get the compiler error in this article: which there's a fix for it... If I get the dll error again, I'll capture a screen shot... is SP1 (final release) close? Thanks. Spoke too soon, I did get the WAP dll error again. Scott- I'm emailing you the screen shots. thanks. Hi Scott, any news on the localized versions? :-) --Christian Hi. When we will have the spanish version? It's important for me to know Hi Christian and Raul, The localized versions should be available soon as part of VS 2005 SP1. In the meantime, if you want to use the english version with a localized build of visual studio you can follow the below steps: 1) Install the English version of Visual Studio 2005. You can do this by using the free “trial” edition that can be found at the following location - 2) Install the Visual Studio update that enables WAP - 3) Install the WAP v1.0 Add-In - 4) Uninstall the English version of Visual Studio 2005 Trial Edition I need my web site to have PDB but DEBUG=FALSE. How can i set this kind of behaviour using WDP? Are there any downside in distrbuting the PDB (when DEBUG=FALSE) Hi Scott, I too am experiencing problems with finding the Mobile Form Templates when using the Web Application Template. I have the following items installed (i have left out the VB, c# etc)... Microsoft Visual Studio 2005 Version 8.0.50727.42 (RTM.050727-4200) Microsoft .NET Framework Version 2.0.50727 Installed Edition: Professional Microsoft Web Application Projects 2005 Version 8.0.60504.00 Thanks for any help in advance, Jose I am having the same problem as Roland with resources not working. It works perfectly on my local machine, but when I upload to the server I get the error: The resource object with key '..' not found. I don't quite understand how to implement your suggestion to Roland to set the resx files as 'content' Thanks for your help, Gail Hi Gail, Can you check on your remote server to make sure that the app_globalresources and app_localresources directories copied? I suspect what might be happening is that those files are not marked as "content" files within the solution explorer - in which case the publish wizard won't automaticlaly copy the files over. I tried copying the app_GlobalResources directory (I don't have an app_LocalResources directory) to the server, and then I get this error: The directory '/App_GlobalResources/' is not allowed because the application is precompiled. I have looked at the dll using Reflector and the resource files are definitely embedded in there... By the way Scott, in case it's not clear to you from the error, I am using a web deployment project with all outputs merged to a single assembly. Any chance you could send me an email with more details of this problem? I'll then loop in some folks from my team who can help. I wrote earlier in this thread that I couldn't find the "Mobile Web Form" template (for a Web Application Project). I read that these would be available with SP1. I have installed SP1 and still they are not available. Should this be the case? What are options I face? Thanks for any help in advance. Hi Scott Ryan wrote: I am experiencing the same problems as Perry did and did not see anything further from you on a solution. We do not experience a problem when we use Cassini, it is only when we deploy to IIS that the problem shows itself (basically no images, validators, etc are available...anything that is using Webresource.xsd). Any help you could provide would be greatly appreciated. Thanks. I have a same problem... Simple page sometimes raise error: WebResource.axd handler must be registered in the configuration Config: W2003 SP1, IIS 6 Hi Hratchia, What this usually means is that the directory on IIS isn't marked as an "application". You need to go into the IIS admin tool and make sure the directory is an application and not a virtual directory. I have a strange problem referencing standalone classes in my web application project. I added a class file to my project and created a public class and a public enum but I can't reference either one anywhere else in the project. I have a base class (base.vb) that all my webforms inherit and I have no problems there. If I paste my new stand-alone public class and public enum into base.vb then I can access them from elsewhere in the project regardless of whether base is inherited or not. Also, having the same class declared twice (in two different files) does cause a runtime error (of course) but not a compile error. So I can access different standalone classes as long as there all in the same class file. This will work but it’s not a very nice way to manage code. Do you have a solution to this problem? Thanks for the help, Mitch Hi Scott, I posted yesterday about issues referencing public standalone classes in a web application project. I've figured out what was going on but maybe you can explain why it happens. Even though this is a web application project, I created an App_Code folder for my base class and other standalone classes just because I don't want them in the project root and I'm use to the naming convention. If I created the class outside of App_Code and then moved it into the folder, everything worked fine. But if I created the class inside of App_Code, it was not recognized by the rest of the project. Also, references to other classes in a "working" class inside App_Code were problematic. Thanks, Hi Mitch, When using a web application project, you want to avoid storing classes under the /app_code directory - since classes within this directory are compiled dynamcially by ASP.NET at runtime, and aren't available to classes/code compiled in your web application project. I think that is why you are seeing some of the behavior you are seeing above. I converted the projecct to the new model, but I would like to revert the changes now to the old model ... Is that possible .. all I'm getting is the compiler errors and instructions that Codebehind is no longer supported and I should update to a new website model .... Hi Nebojsa, There isn't an automatic way to convert from a web application project to a web-site project. Did you follow these steps when doing the web-site to web application project migration: If you follow these steps exactly you should have no problems with it. I need to know the integration with Visual Studio Professional 2005 with Team Foundation server with Windows Framework 3.0 Lastone, Robin, Jose, I found this post regarding the Mobile Web Form templates and SP1: Direct link to template files here: Hi, I have the VS 2005 SP1 installed and I created a new C# Web Application Project. When I attempt to use Profile, the project does not recognize it. I thought that SP1 enabled Profile like it works with Web Site Apps? Is there something I need to do to get Profile working with Web Application Projects? Thank you. Hi BrianD, To use the Profile object with Web Application Projects, you'll need to use this add-in to generate the profile object: Why is that add-in needed? I thought that Profiles were supported with the release of SP1? thanks. I post here, but they never seem to appear. Why do I need the add-in? I thought the Profile was fixed for WAP in SP1? Thanks for the link Scott, much appreciated. Sorry for the delay in your comment being posted. I've been in Europe the last few days and am just catching up on comments now. To answer your question - unfortunately you still do need the add-in to enable profiles with SP1. This isn't built-in to the web application project just yet. Hi, Business Integration Services Project Template is missing in my VS 2005. How to add this Template. I tried uninstalling and reinstalling VS 2005, executing ‘devenv /installvstemplate’ but nothing works. Help... please We had a number of problems migrating from vs.net 2003 to vs.net 2005 and my blog documents most of them. We are currently using a website project. The website project is pretty cool but I have one big problem. 1.If a developer adds a file to the website project and then both developers get the latest version. If one of the developers then deletes the file the other developer when he checks the files they have checked out it will detail the file that the other developer deleted needs to be checked in. Have you come accross this problem before. We are currently using visual source safe 6.0d. Also the web site project compiles each page, user control to individual dlls. Is there an easy way to combine them into one dll? If not this could work to our advantage as we could just publish the dll that had a code fix as long as the dll did not change name when being published. Thanks James Both of the feautres you mentioned are solved if you use the web application project option instead of the web-site one. This tutorial walksthrough how you can change the project type to be a web application project: You will then be able to compile everything into one assembly just like with VS 2003. I'm getting this error when I Ctrl-F5 my application: "Cannot Create/Shadow Copy '<projectname>' when that file arleady exists" Using WAP. I close the window, run it again and it's fine. But it's happened more than once today... Any ideas? That is pretty odd. Is there any chance your disk might be full? If not, and it keeps happening, can you send me an email with more details? I'll then loop someone in to help you with it. Scott - Disk is not full. I have 66 GB free disk space on my C Drive. Will email you if it happens again. It happened like 4 or 5 times the other day. I cant find ASP.net Web Application in new project menu what to do? Hi Rauf, Have you installed either VS 2005 SP1 or the Web Application Project download? I Have problem with custom build provider of sitemap. i have the error impossible to load the type ... it works if i put my class in a folder named app_code but it's not good; Ronny the May 25, 2006 seems to have the same problem.$ Could you help me please ? Scott - found my answer to: "Cannot Create/Shadow Copy '<projectname>' when that file arleady exists" error. Added <hostingEnvironment shadowCopyBinAssemblies="false" /> to my web.config. Is it possible to create control event hookups in the server side code without going into the aspx 'design' view? I have never trusted clicking on the design view tab since VS2003, and I don't think I ever will. The only time that I have to use design mode is to add a new server side event bind to a control and cringe whenever I have todo it. I usually check my code into SVN then do a diff whenever I do this just to be sure, and to date VS2005 has not mangled my code as VS2003 used to, however I do not wish to use design mode in any way, so of there was a method to bind control events using the new .net 2.0 WAP model I would be very interested to hear of it. -Steve Oh man, now I get another error. I cant even make a change to my project and comiple. Nothing.... Same error as here: Do I need to be removing "<hostingEnvironment shadowCopyBinAssemblies="false" />"? The poster metions " have heard that sometimes you need to shadow copy dlls in order to get around this. " What to do?? Yes, I had to remove: <hostingEnvironment shadowCopyBinAssemblies="false" /> to fix "Unable to copy file XXXX.dll. The process cannot access the file XXX.dll because it is being used by another process" Error! But now without shadowCopyBinAssemblies, I get "Cannot Create/Shadow Copy '<projectname>' when that file arleady exists" error." when I click control-F5 to run without debug. Not good. Can you send me an email ([email protected]) that describes this problem more? I will then have someone help you. Pingback from Slow compilation of large ASP.NET sites on IIS « JAK Pingback from unlimited music downloads Pingback from » Visual Studio 2005 Web Application Project Clint Modien
http://weblogs.asp.net/scottgu/archive/2006/05/08/VS-2005-Web-Application-Project-V1.0-Released.aspx
crawl-002
refinedweb
3,344
72.76
Originally posted by rahul_mkar: hi, i would like to know 1) what is the sequence of variable creation, is it static variables first and then instance variables or vice versa for example class x{ static int i=5; int z=3; } would i be created first and then z or z first and then i or both at almost the same time; class MBT{ MBT(int i){ System.out.println(i); } } public class Stat { static MBT m1=new MBT(1); MBT m2=new MBT(2); static MBT m3=new MBT(3); public static void main(String[] s){ Stat st=new Stat(); } } Try out this program you have your answer. 2)if all variables created first and then initialized as the written sequence then in my first example "private int i = j; private int j = 10; comiler complains about a Forward Referencing error" in the above case variables i and j should have been initialized to 0 and then when it comes to the first statement ie "private int i=j" then i should have been assigned 0 and then in the next statement it should have been assigned 10. However this does not happen. please clarify especially on how the two questions give different results.
http://www.coderanch.com/t/191440/java-programmer-SCJP/certification/referencing
CC-MAIN-2015-22
refinedweb
205
50.03
I'm having some trouble getting the correct solution for the following problem: Your goal is given a positive integer n, find the minimum number of operations needed to obtain the number n starting from the number 1. # Failed case #3/16: (Wrong answer) # got: 15 expected: 14 # Input: # 96234 # # Your output: # 15 # 1 2 4 5 10 11 22 66 198 594 1782 5346 16038 16039 32078 96234 # Correct output: # 14 # 1 3 9 10 11 22 66 198 594 1782 5346 16038 16039 32078 96234 # (Time used: 0.10/5.50, memory used: 8601600/134217728.) def optimal_sequence(n): sequence = [] while n >= 1: sequence.append(n) if n % 3 == 0: n = n // 3 optimal_sequence(n) elif n % 2 == 0: n = n // 2 optimal_sequence(n) else: n = n - 1 optimal_sequence(n) return reversed(sequence) input = sys.stdin.read() n = int(input) sequence = list(optimal_sequence(n)) print(len(sequence) - 1) for x in sequence: print(x, end=' ') You are doing a greedy approach. Whey you have n == 10 you check and see it's divisible by 2 so you assume that's the best step, which is wrong in this case. What you need to do is proper dynamic programming. v[x] will hold the minimum number of steps to get to result x. def solve(n): v = [0]*(n+1) # so that v[n] is there v[1] = 1 # length of the sequence to 1 is 1 for i in range(1,n+1): if not v[i]: continue if v[i+1] == 0 or v[i+1] > v[i] + 1: v[i+1] = v[i] + 1 # Similar for i*2 and i*3 solution = [] while n > 1: solution.append(n) if v[n-1] == v[n] - 1: n = n-1 if n%2 == 0 and v[n//2] == v[n] -1: n = n//2 # Likewise for n//3 solution.append(1) return reverse(solution)
https://codedump.io/share/48wkMdk5guw2/1/primitive-calculator---dynamic-approach
CC-MAIN-2018-05
refinedweb
315
66.67
>> Over years of looking at tons of Python code, I've seen several variants of >> enums. In *every* case, they were just a toy recipe and were not be used >> in anything other than demo code. [Mark Summerfeld] > Did you also see lots of examples of people creating their own set > types? > Until something is part of the language or library people will > toy with their own version but probably won't commit to it. But once > something is standardised it gets used---if it is general purpose. This is true. Unfortunately, it means that if you add something to the language that shouldn't be there, it will get used also. When a tool is present, it suggests that the language designers have thought it through and are recommending it. It then takes time and experience to unlearn the habit (for example, it takes a while to learn that pervasive isinstance() checks get in the way of duck typing). This is doubly true for features that are found in compiled languages but of questionable value in a dynamic language. IMO, the hardest thing for a Python newbie is to stop coding like a Java programmer. >> There are already so many reasonable ways to do this or avoid doing it, >> that it would be silly to add an Enum factory. > > I thought that one of Python's strengths was supposed to be that in > general there is just one best way to do things. It's strange that you bring that up as argument for adding yet another type of namespace. > None of the examples you gave provides any level of const-ness. That is not a virtue. It is just psuedo const-ness. It is dog slow and not compiled as a constant. All you're getting is something that is harder to write to. That doesn't make sense in a consenting adults environment. Take a look through the standard library at how many times we make an attribute read-only by using property(). It is just not our style. BTW, the rational module needs to undo that choice for numerator and denominator. It makes the code slow for basically zero benefit. [Jonathan Marshall] > we find that experienced python programmers are use to the > absence of a standard enum type but new python programmers > are surprised by its absence I think this is a clear indication that new programmers all wrestle with learning to write Pythonically instead of making shallow translations of code they would have written in statically compiled languages. Adding something like Enum() will only reinforce those habits and slow-down their progress towards becoming a good python programmer. Raymond
https://mail.python.org/pipermail/python-ideas/2008-January/001350.html
CC-MAIN-2016-36
refinedweb
444
71.55
. Here are some tips for creating your own backdoors for use in penetration testing: TIP #1: Do your reconnaissance. Know what antivirus software target system personnel are running. While it is certainly possible to make a backdoor that evades all antivirus software products, there is no need to waste those cycles if your target is only running one product, a significant likelihood. Narrow down your options by getting this information from target system personnel by asking, looking for information leakage such as e-mails footers that proclaim the AV product, or even a friendly social engineering phone call if such interaction is allowed in your rules of engagement. TIP #2: If. Alternatively if your target is using one of the nine AV products scanned by VirusNoThanks, you could use and be sure to select "Do no distribute the sample" at the bottom of the page. TIP #3: KISS — Keep it simple, shell-boy. I'm a minimalist when it comes to remote access. I just need enough to get in, disable antivirus (if the rules of engagement will allow it), and then move in with more full-featured tools. This approach requires less coding on my part and there is less of a chance that I will incorporate something that antivirus doesn't like. TIP #4: You don't have to COMPLETELY reinvent this wheel. Metasploit has templates in the data/templates/src directory for DLLs, EXEs, and Windows Services. Start with them and modify them only as required to avoid your target's defenses. For example: $ cat data/templates/src/pe/exe/template.c #include <stdio.h> #define SCSIZE 4096 char payload[SCSIZE] = "PAYLOAD:"; char comment[512] = ""; int main(int argc, char **argv) { (*(void (*)()) payload)(); return(0); } You can set the payload[SCSIZE] array to any shell code that meets your needs and compile it. There are plenty of options out there for shell code. You can get several examples of shell code from exploit-db () and many of them do not trigger antivirus software. Or, you can also use msfpayload or msfvenom from Metasploit to generate C shell code and plug that into the template. For example: $ ./msfpayload windows/shell_bind_tcp C This generates C shell code to bind a shell to TCP port 4444. Compile it, and check to see if the AV product running in your lab detects it. If the compiled program is detected, you have a lot of flexibility in source code. You can try: - Moving part of your shell code to a different data segment - Compile it to different PE, Old EXE, or COM (yes? I said .COM) formats - Break the shell code up into smaller strings and mix the order in the source code. Then reassemble it into a variable in memory in the correct order before calling it - Use timed events or wait() functions to delay the payload execution to avoid heuristic engines - Create your own simple encoding engine to mask the bytes... it is easier than you think! Check out I like writing in Python, then using pyinstaller to create an exe out of my Python script. Here is a Python template I wrote that does the same thing as the C template provided with Metasploit: from ctypes import * shellcode = '<-ascii shell code here ex: \x90\x90\x90->' memorywithshell = create_string_buffer(shellcode, len(shellcode)) shell = cast(memorywithshell, CFUNCTYPE(c_void_p)) shell() If you want to use a Metasploit payload as your shell code, you can easily turn C source into a Python-compatible string by deleting all the double quotes and new lines using the handy tr command as follows: $ ./msfpayload windows/shell_bind_tcp C | tr -d '"' | tr -d '\n' If you generate a multi-stage payload, just grab the string for stage one. For example, to create a Metasploit framework reverse Meterpreter, I would do the following: $ ./msfpayload windows/meterpreter/reverse_tcp LHOST=127.0.0.1 C | tr -d '"' | tr -d '\n' | more Then grab the string produced for STAGE1 and plug it into my template as follows:7f\x00\x00\x01\x68\x02\x00\x11\x5c' memorywithshell = create_string_buffer(shellcode, len(shellcode)) shell = cast(memorywithshell, CFUNCTYPE(c_void_p)) shell() Next, I'll compile my new backdoor with pyinstaller with the following options: $ python configure.py $ python makespec.py --onefile --noconsole shell_template.py $ python build.py shell_template\shell_template.spec To use the new payload we setup the Metasploit framework with the multi-handler "exploit". Once our program is run on the target, it connects back to the framework where stage2 is delivered. msf > use multi/handler msf exploit(handler) > set payload windows/meterpreter/reverse_tcp payload => windows/meterpreter/reverse_tcp msf exploit(handler) > set LHOST 127.0.0.1 LHOST => 127.0.0.1 msf exploit(handler) > exploit I hope you find these techniques useful as you help organizations better understand their security risks and improve their defenses through your penetration testing work! Posted October 13, 2011 at 2:22 PM | Permalink | Reply chao-mu your #include stdio.h got cut out, probably due to the angle brackets. Posted October 13, 2011 at 3:00 PM | Permalink | Reply Ed Good catch. I did use escapes in it appropriately, but I think it is outbound filtering on server that snags them. (BTW, outbound filtering is a good idea, but sometimes it bites you like this). Working on it. But, thanks!
https://pen-testing.sans.org/blog/2011/10/13/tips-for-evading-anti-virus-during-pen-testing?reply-to-comment=96
CC-MAIN-2018-13
refinedweb
882
62.17
I need to use Google App Engine to text my girlfriend23 Nov 2016 This is the story of how I had to build and deploy a freaking app just so I can text my girlfriend when I’m at the office. Perhaps it’ll help others who are also subject to the arbitrary rules of IT departments everywhere. (Dilberts of the world, unite!) For some two years now my messaging app of choice has been Telegram. It’s lightweight, end-to-end encrypted, well designed, and free; it’s impossible not to love it. Now, I hate typing on those tiny on-screen keyboards, so most of the time what I actually use is Telegram’s desktop app. Problem is, I can’t use it when I’m at work. My organization’s IT department blocks access to Telegram’s servers (dont’ ask). I can install the app, but it doesn’t connect to anything; it can’t send or receive messages. So, I looked into Telegram’s competitors. I tried WhatsApp, but its desktop version is blocked as well at my organization. And in any case I tried it at home and it’s sheer garbage: the desktop app needs your phone to work (!) and it crashes every ~15 minutes. (I keep pestering my friends to switch from WhatsApp to Telegram but WhatsApp is hugely popular in Brazil and network externalities get in the way.) Then it hit me: why not Slack? The IT department doesn’t block it and I already use Slack for professional purposes. Why not use it to talk to my girlfriend too? I created a channel, got her to sign up, and we tried it for a couple of days. Turns out Slack solved the desktop problem at the cost of creating a mobile problem. I don’t have any issues with Slack’s web interface - I keep my channels open on Chrome at all times and that works just fine. But when I switch to mobile… boy, that’s one crappy iOS app. Half the time it just doesn’t launch. Half the time it takes forever to sync. Granted, my iPhone 5 is a bit old. But the Telegram iOS app runs as smooth and fast as it did two years ago, so the hardware is not at fault here. As an aside, turns out Slack’s desktop app is also ridiculously heavy. I don’t really use it - I use Slack’s web interface instead -, but that’s dispiriting nonetheless. I tried Facebook’s Messenger. Blocked. I tried a bunch of lesser-known alternatives. Blocked. Eventually I gave up on trying different messaging apps and asked the IT department to unblock access to Telegram’s servers. They said no - because, well, reasons. (In the words of Thomas Sowell, “You will never understand bureaucracies until you understand that for bureaucrats procedure is everything and outcomes are nothing”.) The IT guys told me I could appeal to a higher instance - some committee or another -, but I’ve been working in the government for a while and I’ve learned to pick my fights. Also, I believe in Balaji Srinivasan’s “don’t argue” policy. So, I rolled up my sleeves and decided to build my own solution. I don’t need to build a full-fledged messaging app. What I need is extremely simple: a middleman. Something that serves as a bridge between my office computer and Telegram’s servers. I need a web app that my office computer can visit and to which I can POST strings and have those strings sent to my girlfriend’s Telegram account. That app needs to be hosted somewhere, so the first step is choosing a platform. I briefly considered using my personal laptop for that, just so I didn’t have to deal with commercial cloud providers. But I worry about exposing to the world my personal files, laptop camera, browser history, and the like. Also, I want 24/7 availability and sometimes I have to bring my laptop to the office. I settled on Google App Engine. I used it before (to host an app that lets people replicate my Ph.D. research) and I liked the experience. And, more importantly, it has a free tier. GAE has changed quite a bit since the last time I used it (early 2014), but it has an interactive tutorial that got me up to speed in a matter of minutes. You can choose a number of programming languages on GAE. I picked Python because that’s what I’m fastest at. (In hindsight, perhaps I should’ve used this as a chance to learn some basic Go.) Instead of starting from scratch I started with GAE’s default “Hello, world!” Python app. The underlying web framework is Flask. That’s my go-to framework for almost all things web and that made things easier. Using Flask, this is how you control what happens when a user visits your app’s homepage: # this is all in the main.py file of GAE's default "Hello, world!" Python app from flask import Flask app = Flask(__name__) @app.route('/') def hello(): return 'Hello, world!' I don’t want a static webpage though, I want to communicate with Telegram’s servers. In order to do that I use a Python module called telepot. This is how it works: you create a Telegram bot account and then you use telepot to control that bot. (In other words, the sender of the messages will not be you, it will be the bot. When you create your bot you receive a token credential, which you will then pass to telepot. import telepot bot = telepot.Bot('YOUR_TOKEN') bot.getMe() You can now make your bot do stuff, like sending messages. Now, Telegram enforces a sort of Asimovian law: a bot cannot text a human unless it has been texted by that human first. In other words, bots can’t initiate conversations. So I created my bot, told my girlfriend its handle ( @bot_username), and had her text it. That message (like all Telegram messages) came with metadata (see here), which included my girlfriend’s Telegram ID. That’s all I need to enable my bot to text her. girlfriend_id = 'SOME_SEQUENCE_OF_DIGITS' bot.sendMessage(girlfriend_id, 'How you doing?') Now let’s merge our web app code and our telepot code in our main.py file: import telepot from flask import Flask app = Flask(__name__) bot = telepot.Bot('YOUR_TOKEN') bot.getMe() girlfriend_id = 'SOME_SEQUENCE_OF_DIGITS' @app.route('/') def textGirlfriend(): bot.sendMessage(girlfriend_id, 'How you doing?') return 'message sent!' (This can be misused in a number of ways. You could, say, set up a cron job to text ‘thinking of you right now!’ to your significant other at certain intervals, several times a day. Please don’t.) The rest of the default “Hello, world!” Python app remains the same except for two changes: a) you need to install telepot; use pip install with the -t option to specify the lib directory in your repository; and b) you need to add ssl under the libraries header of your app.yaml file. So, I created a web app that my IT department does not block and that texts my girlfriend when visited. But I don’t want to text ‘How you doing?’ every time. So far, the app doesn’t let me choose the content of the message. Fixing that in Flask is quite simple. We just have to: a) add a text field to the homepage; b) add a ‘submit’ button to the homepage; and c) tell the app what to do when the user clicks ‘submit’. (We could get fancy here and create HTML templates but let’s keep things simple for now.) import telepot from flask import Flask from flask import request # so that we can get the user's input app = Flask(__name__) bot = telepot.Bot('YOUR_TOKEN') bot.getMe() girlfriend_id = 'SOME_SEQUENCE_OF_DIGITS' @app.route('/') def getUserInput(): return '<form method="POST" action="/send"><input type="text" name="msg" size="150"><br><input type="submit" value="submit"></form>' @app.route('/send', methods = ['POST']) def textGirlfriend(): bot.sendMessage(girlfriend_id, request.form['msg']) return 'message sent!' And voilà, I can now web-text my girlfriend. Yeah, I know, that would hardly win a design contest. But it works. This is where I’m at right now. I did this last night, so there is still a lot of work ahead. Right now I can send messages this way, but if my girlfriend simply hit ‘reply’ her message goes to the bot’s account and I just don’t see it. I could have the app poll the bot’s account every few seconds and alert me when a new message comes in, but instead I think I’ll just create a Telegram group that has my girlfriend, myself, and my bot; I don’t mind reading messages on my phone, I just don’t like to type on my phone. Another issue is that I want to be able to text-app my family’s Telegram group, which means adding radio buttons or a drop-down menu to the homepage so I can choose between multiple receivers. Finally, I want to be able to attach images to my messages - right now I can only send text. But the core is built; I’m free from the tyranny of on-screen keyboards. This is it. In your face, IT department.
http://thiagomarzagao.com/2016/11/23/the-tyranny-of-it-departments/
CC-MAIN-2017-30
refinedweb
1,575
74.39
Abstract This document describes the implementation of a geometric algebra module in python that utilizes the sympy symbolic algebra library. The python module GA numpy and the sympy modules.,Lasenby]. The elements of the geometric algebra are called multivectors and consist of the linear combination of scalars, vectros, and the geometric product of two or more vectors. The additional axioms for the geometric algebra are that for any vectors , , and in the base vector space: By induction these also apply to any multivectors. Several software packages for numerical geometric algebra calculations are available from Doran-Lazenby group and the Dorst group. Symbolic packages for Clifford algebra using orthongonal bases such as , where is a numeric array are available in Maple and Mathematica. The symbolic algebra module, GA, developed for python does not depend on an orthogonal basis representation, but rather is generated from a set of arbitrary symbolic vectors, and a symbolic metric tensor . In order not to reinvent the wheel all scalar symbolic algebra is handled by the python module sympy. The basic geometic algebra operations will be implemented in python by defining a multivector class, MV, and overloading the python operators in Table 1 where A and B are any two multivectors (In the case of +, -, *, ^, |, <<, and >> the operation is also defined if A or B is a sympy symbol or a sympy real number). Table 1. Multivector operations for symbolicGA The option to use < or << for left contraction and > or >> is given since the < and > operators do not have r-forms (there are no __rlt__() and __rlt__() functions to overload) while << and >> do have r-forms so that x << A and x >> A are allowed where x is a scalar (symbol or integer) and A is a multivector. With < and > we can only have mixed modes (scalars and multivectors) if the multivector is the first operand. neccessarily (1) where each of the is a symbol representing all of the dot products of the basis vectors. Note that the symbols are named so that since for the symbol function . Note that the strings shown in equation 1 are only used when the values of are output (printed). In the GA module (library) the symbols are stored in a static member list of the multivector class MV as the double list MV.metric ( = MV.metric[i][j]). The default definition of can be overwritten by specifying a string that will define . As an example consider a symbolic representation for conformal geometry. Define for a basis basis = 'a0 a1 a2 n nbar' and for a metric metric = '# # # 0 0, # # # 0 0, # # # 0 0, 0 0 0 0 2, 0 0 0 2 0' then calling would initialize Here we have specified that and are orthonal to all the ‘s, , made available to the programmer for future calculations. In addition to the basis vectors the are also made available to the programer with the following convention. If a0 and a1 are basis vectors, then their dot products are denoted by a0sq, a2sq, and a0dota1 for use as python program varibles. If you print a0sq the output would be a0**2 and the output for a0dota1 would be (a0.a1) as shown in equation 1. If the default value are overridden the new values are output by print. For examle if then “print a0sq” would output “0.” More generally, if metric is not a string, but a list of lists or a two dimension numpy array, it is assumed that each element of metric is symbolic variable so that the could be defined as symbolic functions as well as variables. For example instead of letting we could have where we use a symbolic function. Note Additionally MV.setup has an option for an othogonal basis where the signature of the metric space is defined by a string. For example if the signature of the vector space is (Euclidian 3-space) set metric = '[1,1,1]' Likewise if the signature is that of spacetime, then define metric = '[1,-1,-1,-1]'. In our symbolic geometric algebra we assume that all multivectors of interest to us where and . We call these multivectors bases and represent them internally with the list of integers . The bases are labeled, for the purpose of output display, with strings that are concatenations of the strings representing the basis vectors. So that in our example [1,2] would be labeled with the string 'a1a2' and represents the geometric product a1*a2. Thus the list [0,1,2] represents a0*a1*a2.For our example the complete set of bases and labels are shown in Table 2 Note The empty list, [], represents the scalar 1. MV.basislabel = ['1', ['a0', 'a1', 'a2'], ['a0a1', 'a0a2', 'a1a2'], ['a0a1a2']] MV.basis = [[], [[0], [1], [2]], [[0, 1], [0, 2], [1, 2]], [[0, 1, 2]]] Table 2. Multivector basis labels and internal basis representation. Since there are bases and the number of bases with equal list lengths is the same as for the grade decomposition of a dimension geometric algebra we will call the collections of bases of equal length psuedogrades. The critical operation in setting up the geometric algebra module is reducing the geomertric product of any two bases to a linear combination of bases so that we can calculate a multiplication table for the bases. First we represent the product as the concatenation of two base lists. For example a1a2*a0a1 is represented by the list [1,2]+[0,1] = [1,2,0,1]. The representation of the product is reduced via two operations, contraction and revision. The state of the reduction is saved in two lists of equal length. The first list contains symbolic scale factors (symbol or numeric types) for the corresponding interger list representing the product of bases. If we wish to reduce the starting point is the coefficient list and the bases list . We now operate on each element of the lists as follows: These processes are repeated untill every basis list in is in normal (ascending) order with no repeated elements. Then the coefficents of equivalent bases are summed and the bases sorted according to psuedograde and ascending order. We now have a way of calculating the geometric product of any two bases as a symbolic linear combination of all the bases with the coefficients determined by . The base multiplication table for our simple example of three vectors is given by (the coefficient of each psuedo base is enclosed with {} for clarity): (1)(1) = 1 (1)(a0) = a0 (1)(a1) = a1 (1)(a2) = a2 (1)(a0a1) = a0a1 (1)(a0a2) = a0a2 (1)(a1a2) = a1a2 (1)(a0a1a2) = a0a1a2 (a0)(1) = a0 (a0)(a0) = {a0**2}1 (a0)(a1) = a0a1 (a0)(a2) = a0a2 (a0)(a0a1) = {a0**2}a1 (a0)(a0a2) = {a0**2}a2 (a0)(a1a2) = a0a1a2 (a0)(a0a1a2) = {a0**2}a1a2 (a1)(1) = a1 (a1)(a0) = {2*(a0.a1)}1-a0a1 (a1)(a1) = {a1**2}1 (a1)(a2) = a1a2 (a1)(a0a1) = {-a1**2}a0+{2*(a0.a1)}a1 (a1)(a0a2) = {2*(a0.a1)}a2-a0a1a2 (a1)(a1a2) = {a1**2}a2 (a1)(a0a1a2) = {-a1**2}a0a2+{2*(a0.a1)}a1a2 (a2)(1) = a2 (a2)(a0) = {2*(a0.a2)}1-a0a2 (a2)(a1) = {2*(a1.a2)}1-a1a2 (a2)(a2) = {a2**2}1 (a2)(a0a1) = {-2*(a1.a2)}a0+{2*(a0.a2)}a1+a0a1a2 (a2)(a0a2) = {-a2**2}a0+{2*(a0.a2)}a2 (a2)(a1a2) = {-a2**2}a1+{2*(a1.a2)}a2 (a2)(a0a1a2) = {a2**2}a0a1+{-2*(a1.a2)}a0a2+{2*(a0.a2)}a1a2 (a0a1)(1) = a0a1 (a0a1)(a0) = {2*(a0.a1)}a0+{-a0**2}a1 (a0a1)(a1) = {a1**2}a0 (a0a1)(a2) = a0a1a2 (a0a1)(a0a1) = {-a0**2*a1**2}1+{2*(a0.a1)}a0a1 (a0a1)(a0a2) = {2*(a0.a1)}a0a2+{-a0**2}a1a2 (a0a1)(a1a2) = {a1**2}a0a2 (a0a1)(a0a1a2) = {-a0**2*a1**2}a2+{2*(a0.a1)}a0a1a2 (a0a2)(1) = a0a2 (a0a2)(a0) = {2*(a0.a2)}a0+{-a0**2}a2 (a0a2)(a1) = {2*(a1.a2)}a0-a0a1a2 (a0a2)(a2) = {a2**2}a0 (a0a2)(a0a1) = {-2*a0**2*(a1.a2)}1+{2*(a0.a2)}a0a1+{a0**2}a1a2 (a0a2)(a0a2) = {-a0**2*a2**2}1+{2*(a0.a2)}a0a2 (a0a2)(a1a2) = {-a2**2}a0a1+{2*(a1.a2)}a0a2 (a0a2)(a0a1a2) = {a0**2*a2**2}a1+{-2*a0**2*(a1.a2)}a2+{2*(a0.a2)}a0a1a2 (a1a2)(1) = a1a2 (a1a2)(a0) = {2*(a0.a2)}a1+{-2*(a0.a1)}a2+a0a1a2 (a1a2)(a1) = {2*(a1.a2)}a1+{-a1**2}a2 (a1a2)(a2) = {a2**2}a1 (a1a2)(a0a1) = {2*a1**2*(a0.a2)-4*(a0.a1)*(a1.a2)}1+{2*(a1.a2)}a0a1+{-a1**2}a0a2 +{2*(a0.a1)}a1a2 (a1a2)(a0a2) = {-2*a2**2*(a0.a1)}1+{a2**2}a0a1+{2*(a0.a2)}a1a2 (a1a2)(a1a2) = {-a1**2*a2**2}1+{2*(a1.a2)}a1a2 (a1a2)(a0a1a2) = {-a1**2*a2**2}a0+{2*a2**2*(a0.a1)}a1+{2*a1**2*(a0.a2) -4*(a0.a1)*(a1.a2)}a2+{2*(a1.a2)}a0a1a2 (a0a1a2)(1) = a0a1a2 (a0a1a2)(a0) = {2*(a0.a2)}a0a1+{-2*(a0.a1)}a0a2+{a0**2}a1a2 (a0a1a2)(a1) = {2*(a1.a2)}a0a1+{-a1**2}a0a2 (a0a1a2)(a2) = {a2**2}a0a1 (a0a1a2)(a0a1) = {2*a1**2*(a0.a2)-4*(a0.a1)*(a1.a2)}a0+{2*a0**2*(a1.a2)}a1 +{-a0**2*a1**2}a2+{2*(a0.a1)}a0a1a2 (a0a1a2)(a0a2) = {-2*a2**2*(a0.a1)}a0+{a0**2*a2**2}a1+{2*(a0.a2)}a0a1a2 (a0a1a2)(a1a2) = {-a1**2*a2**2}a0+{2*(a1.a2)}a0a1a2 (a0a1a2)(a0a1a2) = {-a0**2*a1**2*a2**2}1+{2*a2**2*(a0.a1)}a0a1+{2*a1**2*(a0.a2) -4*(a0.a1)*(a1.a2)}a0a2+{2*a0**2*(a1.a2)}a1a2 In terms of the bases defined an arbitrary multivector can be represented as a list of arrays (we use the numpy python module to implement arrays). If we have basis vectors we initialize the list self.mv = [0,0,...,0] with integers all zero. Each zero is a placeholder for an array of python objects (in this case the objects will be sympy symbol objects). If self.mv[r] = numpy.array([list of symbol objects]) each entry in the numpy.array will be a coefficient of the corresponding psuedo base. self.mv[r] = 0 indicates that the coefficients of every base of psuedo grade are 0. The length of the array self.mv[r] is the binomial coefficient. For example the psuedo basis vector a1 would be represented as a multivector by the list: a1.mv = [0,numpy.array([numeric(0),numeric(1),numeric(0)]),0,0] and a0a1a2 by: a0a1a2.mv = [0,0,0,numpy.array([numeric(1)])] The array is stuffed with sympy numeric objects instead of python integers so that we can perform symbolically manipulate sympy expressions that consist of scalar algebraic symbols and exact rational numbers which sympy can also represent. The numpy.array is used because operations of addition, substraction, and multiplication by an object are defined for the array if they are defined for the objects making up the array, which they are by sympy. We call this representation a base type because the r index is not a grade index since the bases we are using are not blades. In a blade representation the structure would be identical, but the bases would be replaced by blades and self.mv[r] would represent the r grade components of the multivector. The first use of the base representation is to store the results of the multiplication tabel for the bases in the class variable MV.mtabel. This variable is a group of nested lists so that the geometric product of the igrade and ibase with the jgrade and jbase is MV.mtabel[igrade][ibase][jgrade][jbase]. We can then use this table to calculate the geometric product of any two multivectors. Since we can now calculate the symbolic geometric product of any two multivectors we can also calculate the blades corresponding to the product of the symbolic basis vectors using the formula where is a multivector of grade and is a vector. For our example basis the result is shown in Table 3. Bases blades in terms of bases. The important thing to notice about Table 3 is that it is a triagonal (lower triangular) system of equations so that using a simple back substitution algorithym we can solve for the psuedo bases in terms of the blades giving Table 4. Bases in terms of basis blades. Using Table 4 and simple substitution we can convert from a base multivector representation to a blade representation. Likewise, using Table zero for a base representation and 1 can be decomposed into pure grade multivectors (a linear combination of blades of all the same order) so that in a -dimensional vector space The geometric product of two pure grade multivectors and has the form where projects the grade components of the multivector argument. The inner and outer products of and are then defined to be and Likewise the right ( ) and left ( ) contractions are defined as and The MV class function for the outer product of the multivectors mv1 and mv2 is @staticmethod def outer_product(mv1,mv2): product = MV() product.bladeflg = 1 mv1.convert_to_blades() mv2.convert_to_blades() for igrade1 in MV.n1rg: if not isint(mv1.mv[igrade1]): pg1 = mv1.project(igrade1) for igrade2 in MV.n1rg: igrade = igrade1+igrade2 if igrade <= MV.n: if not isint(mv2.mv[igrade2]): pg2 = mv2.project(igrade2) pg1pg2 = pg1*pg2 product.add_in_place(pg1pg2.project(igrade)) return(product) The steps for calculating the outer product. For the inner product of the multivectors mv1 and mv2 the MV class function is @staticmethod def inner_product(mv1,mv2,mode='s'): """ MV.inner_product(mv1,mv2) calculates the inner mode = 's' - symmetic (Doran & Lasenby) mode = 'l' - left contraction (Dorst) mode = 'r' - right contraction (Dorst) """ if isinstance(mv1,MV) and isinstance(mv2,MV): product = MV() product.bladeflg = 1 mv1.convert_to_blades() mv2.convert_to_blades() for igrade1 in range(MV.n1): if isinstance(mv1.mv[igrade1],numpy.ndarray): pg1 = mv1.project(igrade1) for igrade2 in range(MV.n1): igrade = igrade1-igrade2 if mode == 's': igrade = igrade.__abs__() else: if mode == 'l': igrade = -igrade if igrade >= 0: if isinstance(mv2.mv[igrade2],numpy.ndarray): pg2 = mv2.project(igrade2) pg1pg2 = pg1*pg2 product.add_in_place(pg1pg2.project(igrade)) return(product) else: if mode == 's': if isinstance(mv1,MV): product = mv1.scalar_mul(mv2) if isinstance(mv2,MV): product = mv2.scalar_mul(mv1) else: product = None return(product) The inner product is calculated the same way as the outer product except that in step 4, i1+i2 is replaced by abs(i1-i2) or i1-i2 for the right contraction or i2-i1 for the left contraction. If i1-i2 is less than zero there is no contribution to the right contraction. If i2-i1 is less than zero there is no contribution to the left contraction. is the geometric product of vectors then the reverse of designated is defined by The reverse is simply the product with the order of terms reversed. The reverse of a sum of products is defined as the sum of the reverses so that for a general multivector A we have but (2) which is proved by expanding the blade bases in terms of orthogonal vectors and showing that equation 2 holds for the geometric product of orthogonal vectors. The reverse is important in the theory of rotations in -dimensions. If is the product of an even number of vectors and then is a composition of rotations of the vector . If is the product of two vectors then the plane that defines is the plane of the rotation. That is to say that rotates the component of that is projected into the plane defined by and where . may be written , where is the angle of rotation and is a unit blade that defines the plane of rotation. If we have linearly independent vectors (a frame), , then the reciprocal frame is where , is the Kronecker delta (zero if and one if ). The reciprocal frame is constructed as follows: Then where indicates that. Additionaly there is the function reciprocal_frame(vlst,names='') external to the MV class that will calculate the reciprocal frame of a list, vlst, of vectors. If the argument names is set to a space delimited string of names for the vectors the reciprocal vectors will be given these names. If is a multivector field that is a function of a vector (we are using the summation convention that pairs of subscripts and superscripts are summed over the dimension of the vector space) then the geometric derivative is given by (in this section the summation convention is used): If is a grade- multivector and then Note that can only contain grades and so that also can only contain those grades. For a grade- multivector the inner (div) and outer (curl) derivatives are defined as and For a general multivector function the inner and outer derivatives are just the sum of the inner and outer dervatives of each grade of the multivector function. Curvilinear coordinates are derived from a vector function where where the number of coordinates is equal to the dimension of the vector space. In the case of 3-dimensional spherical coordinates and the coordinate generating function is A coordinate frame is derived from by . The following show the frame for spherical coordinates. The coordinate frame generated in this manner is not necessarily normalized so define a normalized frame by This works for all since we have defined . For spherical coordinates the normalized frame vectors are The geometric derivative in curvilinear coordinates is given by where are the connection multivectors for the curvilinear coordinate system. For a spherical coordinate system they are list). offset is an integer that is added to the multivector coefficient index. For example if one wishes to start labeling vector coefficient indexes at one instead of zero then set offset=1. Additionally, MV.setup() calculates the pseudo scalar, and its inverse, and makes them available to the programmer as MV.I and MV.Iinv. After MV.setup() is run one can reinialize the MV class with curvilinear coordinates using: A typical usage of MV.rebase for generating spherical curvilinear coordinate is: metric = '1 0 0,0 1 0,0 0 1' MV.setup('gamma_x gamma_y gamma_z',metric,True) coords = make_symbols('r theta phi') x = r*(sympy.cos(theta)*gamma_z+sympy.sin(theta)*\ (sympy.cos(phi)*gamma_x+sympy.sin(phi)*gamma_y)) x.set_name('x') MV.rebase(x,coords,'e',True) The input parameters for MV.rebase are x: Vector function of coordinates (derivatives define curvilinear basis) coords: List of sympy symbols for curvilinear coordinates debug: If True printout (LaTeX) all quantities required for derivative calculation debug_level: Set to 0,1,2, or 3 to stop curvilinear calculation before all quatities are calculated. This is done when debugging new curvilinear coordinate systems since simplification of expressions is not sufficiently automated to insure success of process of any coordinate system defined by vector function x To date MV.rebase works for cylindrical and spherical coordinate systems in any number of dimensions (until the execution time becomes too long). To make it work for these systems required creating some hacks for expression simplification since both trigsimp and simplify were not general enough to perform the required simplification. If str_mode=0 the string representation of the multivector contains no newline characters (prints on one line). If str_mode=1 the string representation of the multivector places a newline after each grade of the multivector (prints one grade per line). If str_mode=2 the string representation of the multivector places a newline after each base of the multivector (prints one base per line). In both cases bases with zero coefficients are not printed. Note This function directly affects the way multivectors are printed with the print command since it interacts with the __str__() function for the multivector class which is used by the print command. Now that grades and bases have been described we can show all the ways that a multivector can be instantiated. As an example assume that the multivector space is initialized with MV.setup('e1 e2 e3'). Then the vectors e1, e2, and e3 are made available (broadcast) for use in the program . Warning This is only true if the statement set_main(sys.modules[__name__]) appears immediately after the from sympy.galgebra.GA import * statement. So that multivectors could be instantiated with statements such as (a1, a2, and a3 are sympy symbols): x = a1*e1+a2*e2+a3*e3 y = x*e1*e2 z = x|y w = x^y or with the multivector class constructor: mvname is a string that defines the name of the multivector for output purposes. value and type are defined by the following table and fct is a switch that will convert the symbolic coefficients of a multivector to functions if coordinate variables have been defined when MV.setup() is called: If the value argument has the option of being a string s then a general symbolic multivector will be constructed constrained by the value of mvtype. The string s will be the base name of the multivector symbolic coefficients. If coords is not defined in MV.setup() the indices of the multivector bases are appended to the base name with a double underscore (superscript notation). If coords is defined the coordinate names will replace the indices in the coefficient names. For example if the base string is A and the coordinates (x,y,z) then the coefficients of a spinor in 3d space would be A, A__xy, A__xz, and A__yz. If the latex_ex is used to print the multivector the coefficients would print as , , , and . If the fct argrument of MV() is set to True and the coords argument in MV.setup() is defined the symbolic coefficients of the multivector are functions of the coordinates. __call__ returns the the igrade, ibase coefficient of the multivector. The defaults return the scalar component of the multivector. Convert multivector from the base representation to the blade representation. If multivector is already in blade representation nothing is done. Convert multivector from the blade representation to the base representation. If multivector is already in base representation nothing is done. If r is a integer return the grade- components of the multivector. If r is a multivector return the grades of the multivector that correspond to the non-zero grades of r. For example if one is projecting a general multivector and r is a spinor, A.project(r) will return only the even grades of the multivector A since a spinor only has even grades that are non-zero. Return the even grade components of the multivector. Return the odd grade components of the multivector. Return the reverse of the multivector. See section Reverse of Multivector. Return true if multivector is pure grade (all grades but one are zero). Return the partial derivative of the multivector function with respect to variable . Return the geometric derivative of the multivector function. Return the outer (curl) derivative of the multivector function. Equivalent to curl(). Return the inner (div) derivative of the multivector function. Equivalent to div(). Warning If A is a vector field in three dimensions = A.grad_int() = A.div(), but = -MV.I*A.grad_ext() = -MV.I*A.curl(). Note that grad_int() lowers the grade of all blades by one grade and grad_ext() raises the grade of all blades by one. Set the multivector coefficient of index (grade,base) to value. All the following fuctions belong to the MV class and apply the corresponding sympy function to each component of a multivector. All these functions perform the operations inplace (None is returned) on each coefficient. For example if you wished to simplify all the components of the multivector A you would invoke A.simplify(). The argument list for each function is the same as for the corresponding sympy function. The only function that differs in its action from the sympy version is trigsimp() in its case the function TrigSimp is applied (see documentation on TrigSimp()). These are functions in GA, but not in the multivector (MV) class. set_main() passes the argument main_program from the main program to the GA module. The argument must be sys.modules[__name__] and the call should be placed immediately after sys and GA are imported. The purpose of this call is to allow GA to broadcast to the main program sympy variables and multivectors created by calls to GA. It is used by MV.setup() and make_symbols(). make_symbols() creates a list of sympy symbols with names defined by the space delimited string symnamelst. In addition to returning the symbol list the function broadcasts the named symbols to the main program. For example if you make the call: syms = make_symbols('x y ab') Not only will syms contain the symbols, but you can also directly use x, y, and ab as symbols in your program. Warning You can only directly use x, y, and ab as symbols in your program if the statement set_main(sys.modules[__name__]) appears immediately after the from sympy.galgebra.GA import * statement. set_names() allows one to name a list, var_lst, of multivectors enmass. The names are in var_str, a blank separated string of names. An error is generated if the number of name is not equal to the length of var_lst. reciprocal_frame() implements the proceedure described in section Reciprocal Frames. vlst is a list of independent vectors that you wish the reciprocal frame calculated for. names is a blank separated string of names for the reciprocal vectors if names are required by you application. The function returns a list containing the reciprocal vectors. In general sympy.trigsimp() will not catch all the trigonometric simplifications in an sympy expression. Neither will TrigSimp(), but it will catch a lot more of them. TrigSimp() is so simple it is show below in its entirety. All it does is apply sympy.trigsimp() to the expressions generated by sympy.cse(). def TrigSimp(f): (w,g) = sympy.cse(f) g = sympy.trigsimp(g[0]) for sub in reversed(w): g = g.subs(sub[0],sub[1]) g = sympy.trigsimp(g) return(g) S() instanciates a scaler multivector of value x, where x can be a sympy variable or integer. This is just a shorthand method for constructing scalar multivectors and can be used when there is any ambiguity in a multivector expression as to whether a symbol or constant should be treated as a scalar multivector or not. The following examples of geometric algebra (not calculus) are all in the file testsymbolicGA.py which is included in the sympy distribution examples under the galbebra directory. The section of code in the program for each example is show with the respective output following the code section. This is the header of testsymbolicGA.py that allows access to the required modules and also allow variables and certain multivectors to be broadcast from the GA module to the main program. import os,sys,sympy from sympy.galgebra.GA import set_main, make_symbols, types, MV, ZERO, ONE, HALF from sympy import collect set_main(sys.modules[__name__]) def F(x): """ Conformal Mapping Function """ Fx = HALF*((x*x)*n+2*x-nbar) return(Fx) def make_vector(a,n = 3): if type(a) == types.StringType: sym_str = '' for i in range(n): sym_str += a+str(i)+' ' sym_lst = make_symbols(sym_str) sym_lst.append(ZERO) sym_lst.append(ZERO) a = MV(sym_lst,'vector') return(F(a)) if __name__ == '__main__': Example of basic geometric algebra operation of geometric, outer, and inner products. MV.setup('a b c d e') MV.set_str_format(1) print 'e|(a^b) =',e|(a^b) print 'e|(a^b^c) =',e|(a^b^c) print 'a*(b^c)-b*(a^c)+c*(a^b) =',a*(b^c)-b*(a^c)+c*(a^b) print 'e|(a^b^c^d) =',e|(a^b^c^d) print -d*(a^b^c)+c*(a^b^d)-b*(a^c^d)+a*(b^c^d) print (a^b)|(c^d) e|(a^b) = {-(b.e)}a +{(a.e)}b e|(a^b^c) = {(c.e)}a^b +{-(b.e)}a^c +{(a.e)}b^c a*(b^c)-b*(a^c)+c*(a^b) = {3}a^b^c e|(a^b^c^d) = {-(d.e)}a^b^c +{(c.e)}a^b^d +{-(b.e)}a^c^d +{(a.e)}b^c^d {4}a^b^c^d {(a.d)*(b.c) - (a.c)*(b.d)}1 Examples of comformal geometry [Lasenby,Chapter 10]. The examples show that basic geometric entities (lines, circles, planes, and spheres) in three dimensions can be represented by blades in a five dimensional (conformal) space. print '\nExample: Conformal representations of circles, lines, spheres, and planes' metric = '1 0 0 0 0,0 1 0 0 0,0 0 1 0 0,0 0 0 0 2,0 0 0 2 0' MV.setup('e0 e1 e2 n nbar',metric,debug=0) MV.set_str_format(1) e = n+nbar #conformal representation of points A = make_vector(e0) # point a = (1,0,0) A = F(a) B = make_vector(e1) # point b = (0,1,0) B = F(b) C = make_vector(-1*e0) # point c = (-1,0,0) C = F(c) D = make_vector(e2) # point d = (0,0,1) D = F(d) X = make_vector('x',3) print 'a = e0, b = e1, c = -e0, and d = e2' print 'A = F(a) = 1/2*(a*a*n+2*a-nbar), etc.' print 'Circle through a, b, and c' print 'Circle: A^B^C^X = 0 =',(A^B^C^X) print 'Line through a and b' print 'Line : A^B^n^X = 0 =',(A^B^n^X) print 'Sphere through a, b, c, and d' print 'Sphere: A^B^C^D^X = 0 =',(A^B^C^D^X) print 'Plane through a, b, and d' print 'Plane : A^B^n^D^X = 0 =',(A^B^n^D^X) Example: Conformal representations of circles, lines, spheres, and planes a = e0, b = e1, c = -e0, and d = e2 A = F(a) = 1/2*(a*a*n+2*a-nbar), etc. Circle through a, b, and c Circle: A^B^C^X = 0 = {-x2}e0^e1^e2^n +{x2}e0^e1^e2^nbar +{-1/2 + 1/2*x0**2 + 1/2*x1**2 + 1/2*x2**2}e0^e1^n^nbar Line through a and b Line : A^B^n^X = 0 = {-x2}e0^e1^e2^n +{-1/2 + x0/2 + x1/2}e0^e1^n^nbar +{x2/2}e0^e2^n^nbar +{-x2/2}e1^e2^n^nbar Sphere through a, b, c, and d Sphere: A^B^C^D^X = 0 = {1/2 - 1/2*x0**2 - 1/2*x1**2 - 1/2*x2**2}e0^e1^e2^n^nbar Plane through a, b, and d Plane : A^B^n^D^X = 0 = {1/2 - x0/2 - x1/2 - x2/2}e0^e1^e2^n^nbar Example shows the calculation of the reciprocal frame for three arbitrary vectors and verifies that the calculated reciprocal vectors have the correct properties. MV.setup('e1 e2 e3',metric) print 'Example: Reciprocal Frames e1, e2, and e3 unit vectors.\n\n' E = e1^e2^e3 Esq = (E w = (E1|e2) w.collect(MV.g) w = w().expand() print 'E1|e2 =',w w = (E1|e3) w.collect(MV.g) w = w().expand() print 'E1|e3 =',w w = (E2|e1) w.collect(MV.g) w = w().expand() print 'E2|e1 =',w w = (E2|e3) w.collect(MV.g) w = w().expand() print 'E2|e3 =',w w = (E3|e1) w.collect(MV.g) w = w().expand() print 'E3|e1 =',w w = (E3|e2) w.collect(MV.g) w = w().expand() print 'E3|e2 =',w w = (E1|e1) w = w().expand() Esq = Esq.expand() print '(E1|e1)/E^2 =',w/Esq w = (E2|e2) w = w().expand() print '(E2|e2)/E^2 =',w/Esq w = (E3|e3) w = w().expand() print '(E3|e3)/E^2 =',w/Esq Example: Reciprocal Frames e1, e2, and e3 unit vectors. E = e1^e2^e3 E^2 = -1 - 2*(e1.e2)*(e1.e3)*(e2.e3) + (e1.e2)**2 + (e1.e3)**2 + (e2.e3)**2 E1 = (e2^e3)*E = {-1 + (e2.e3)**2}e1+{(e1.e2) - (e1.e3)*(e2.e3)}e2+{(e1.e3) - (e1.e2)*(e2.e3)}e3 E2 =-(e1^e3)*E = {(e1.e2) - (e1.e3)*(e2.e3)}e1+{-1 + (e1.e3)**2}e2+{(e2.e3) - (e1.e2)*(e1.e3)}e3 E3 = (e1^e2)*E = {(e1.e3) - (e1.e2)*(e2.e3)}e1+{(e2.e3) - (e1.e2)*(e1.e3)}e2+{-1 + (e1.e2)**2}e3 E1|e2 = 0 E1|e3 = 0 E2|e1 = 0 E2|e3 = 0 E3|e1 = 0 E3|e2 = 0 (E1|e1)/E^2 = 1 (E2|e2)/E^2 = 1 (E3|e3)/E^2 = 1 Examples of calculation of distance in hyperbolic geometry [Lasenby,pp373-375]. This is a good example of the utility of not restricting the basis vector to be orthogonal. Note that most of the calculation is simplifying a scalar expression. print 'Example: non-euclidian distance calculation' metric = '0 # #,# 0 #,# # 1' MV.setup('X Y e',metric) MV.set_str_format(1) L = X^Y^e B = L*e Bsq = (B*B)() print 'L = X^Y^e is a non-euclidian line' print 'B = L*e =',B BeBr =B*e*B.rev() print 'B*e*B.rev() =',BeBr print 'B^2 =',Bsq print 'L^2 =',(L*L)() make_symbols('s c Binv M S C alpha') Bhat = Binv*B # Normalize translation generator R = c+s*Bhat # Rotor R = exp(alpha*Bhat/2) print 's = sinh(alpha/2) and c = cosh(alpha/2)' print 'R = exp(alpha*B/(2*|B|)) =',R Z = R*X*R.rev() Z.expand() Z.collect([Binv,s,c,XdotY]) print 'R*X*R.rev() =',Z W = Z|Y W.expand() W.collect([s*Binv]) print '(R*X*rev(R)).Y =',W M = 1/Bsq W.subs(Binv**2,M) W.simplify() Bmag = sympy.sqrt(XdotY**2-2*XdotY*Xdote*Ydote) W.collect([Binv*c*s,XdotY]) W.subs(2*XdotY**2-4*XdotY*Xdote*Ydote,2/(Binv**2)) W.subs(2*c*s,S) W.subs(c**2,(C+1)/2) W.subs(s**2,(C-1)/2) W.simplify() W.subs(1/Binv,Bmag) W = W().expand() print '(R*X*R.rev()).Y =',W nl = '\n' Wd = collect(W,[C,S],exact=True,evaluate=False) print 'Wd =',Wd Wd_1 = Wd[ONE] Wd_C = Wd[C] Wd_S = Wd[S] print '|B| =',Bmag Wd_1 = Wd_1.subs(Bmag,1/Binv) Wd_C = Wd_C.subs(Bmag,1/Binv) Wd_S = Wd_S.subs(Bmag,1/Binv) print 'Wd[ONE] =',Wd_1 print 'Wd[C] =',Wd_C print 'Wd[S] =',Wd_S lhs = Wd_1+Wd_C*C rhs = -Wd_S*S lhs = lhs**2 rhs = rhs**2 W = (lhs-rhs).expand() W = (W.subs(1/Binv**2,Bmag**2)).expand() print 'W =',W W = (W.subs(S**2,C**2-1)).expand() print 'W =',W W = collect(W,[C,C**2],evaluate=False) print 'W =',W a = W[C**2] b = W[C] c = W[ONE] print 'a =',a print 'b =',b print 'c =',c D = (b**2-4*a*c).expand() print 'Setting to 0 and solving for C gives:' print 'Descriminant D = b^2-4*a*c =',D C = (-b/(2*a)).expand() print 'C = cosh(alpha) = -b/(2*a) =',C Example: non-euclidian distance calculation L = X^Y^e is a non-euclidian line B = L*e = X^Y +{-(Y.e)}X^e +{(X.e)}Y^e B*e*B.rev() = {2*(X.Y)*(X.e)*(Y.e) - (X.Y)**2}e B^2 = -2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2 L^2 = -2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2 s = sinh(alpha/2) and c = cosh(alpha/2) R = exp(alpha*B/(2*|B|)) = {c}1 +{Binv*s}X^Y +{-(Y.e)*Binv*s}X^e +{(X.e)*Binv*s}Y^e R*X*R.rev() = {Binv*(2*(X.Y)*c*s - 2*(X.e)*(Y.e)*c*s) + Binv**2*((X.Y)**2*s**2 - 2*(X.Y)*(X.e)*(Y.e)*s**2) + c**2}X +{2*Binv*c*s*(X.e)**2}Y +{Binv**2*(-2*(X.e)*(X.Y)**2*s**2 + 4*(X.Y)*(Y.e)*(X.e)**2*s**2) - 2*(X.Y)*(X.e)*Binv*c*s}e (R*X*rev(R)).Y = {Binv*s*(-4*(X.Y)*(X.e)*(Y.e)*c + 2*c*(X.Y)**2) + Binv**2*s**2*(-4*(X.e)*(Y.e)*(X.Y)**2 + 4*(X.Y)*(X.e)**2*(Y.e)**2 + (X.Y)**3) + (X.Y)*c**2}1 (R*X*R.rev()).Y = S*(-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2) + (X.Y)*Binv*C*(-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2) + (X.e)*(Y.e)*Binv*(-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2) - (X.e)*(Y.e)*Binv*C*(-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2) Wd = {1: (X.e)*(Y.e)*Binv*(-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2), S: (-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2), C: (X.Y)*Binv*(-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2) - (X.e)*(Y.e)*Binv*(-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2)} |B| = (-2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2)**(1/2) Wd[ONE] = (X.e)*(Y.e) Wd[C] = (X.Y) - (X.e)*(Y.e) Wd[S] = 1/Binv W = 2*(X.Y)*(X.e)*(Y.e)*C + (X.Y)**2*C**2 + (X.e)**2*(Y.e)**2 - (X.Y)**2*S**2 + (X.e)**2*(Y.e)**2*C**2 - 2*C*(X.e)**2*(Y.e)**2 - 2*(X.Y)*(X.e)*(Y.e)*C**2 + 2*(X.Y)*(X.e)*(Y.e)*S**2 W = -2*(X.Y)*(X.e)*(Y.e) + 2*(X.Y)*(X.e)*(Y.e)*C + (X.Y)**2 + (X.e)**2*(Y.e)**2 + (X.e)**2*(Y.e)**2*C**2 - 2*C*(X.e)**2*(Y.e)**2 W = {1: -2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2 + (X.e)**2*(Y.e)**2, C**2: (X.e)**2*(Y.e)**2, C: 2*(X.Y)*(X.e)*(Y.e) - 2*(X.e)**2*(Y.e)**2} a = (X.e)**2*(Y.e)**2 b = 2*(X.Y)*(X.e)*(Y.e) - 2*(X.e)**2*(Y.e)**2 c = -2*(X.Y)*(X.e)*(Y.e) + (X.Y)**2 + (X.e)**2*(Y.e)**2 Setting to 0 and solving for C gives: Descriminant D = b^2-4*a*c = 0 C = cosh(alpha) = -b/(2*a) = 1 - (X.Y)/((X.e)*(Y.e)) The calculus examples all use the extened LaTeXoutput module, latex_ex, for clarity. In geometric calculus the equivalent of the electromagnetic tensor is where a spacetime vector is given by where , the pseudoscalar , and are four vectors where the time component is zero and the spatial components equal to the electric and magnetic field components. Then Maxwell’s equations can be all written as with the four current. This example shows that this equations generates all of Maxwell’s equations correctly (in our units:math: ) [Lasenby,pp229-231]. Begin Program Maxwell.py from sympy import * from sympy.galgebra.GA import * from sympy.galgebra.latex_ex import * set_main(sys.modules[__name__]) if __name__ == '__main__': metric = '1 0 0 0,'+\ '0 -1 0 0,'+\ '0 0 -1 0,'+\ '0 0 0 -1' vars = make_symbols('t x y z') MV.setup('gamma_t gamma_x gamma_y gamma_z',metric,True,vars) LatexPrinter.format(1,1,1,1) I = MV(1,'pseudo') print '$I$ Pseudo-Scalar' print 'I =',I B = MV('B','vector',fct=True) E = MV('E','vector',fct=True) B.set_coef(1,0,0) E.set_coef(1,0,0) B *= gamma_t E *= gamma_t J = MV('J','vector',fct=True) F = E+I*B print ' ' print '$B$ Magnetic Field Bi-Vector' print 'B = Bvec gamma_0 =',B print '$F$ Electric Field Bi-Vector' print 'E = Evec gamma_0 =',E print '$E+IB$ Electo-Magnetic Field Bi-Vector' print 'F = E+IB =',F print '$J$ Four Current' print 'J =',J gradF = F.grad() print 'Geometric Derivative of Electo-Magnetic Field Bi-Vector'' xdvi(filename='Maxwell.tex') End Program Maxwell.py Begin Program Output Pseudo-Scalar Magnetic Field Bi-Vector Electric Field Bi-Vector Electo-Magnetic Field Bi-Vector Four Current Geometric Derivative of Electo-Magnetic Field Bi-Vector All Maxwell Equations are Div and Curl Equations Curl and Div equations End Program Output The geometric algebra/calculus allows one to formulate the Dirac equation in real terms (no ). Spinors are even multivectors in space time (Minkowski space with signature (1,-1,-1,-1)) and the Dirac equation becomes . All the terms in the real equation are explined in Doran and Lasenby [Lasenby,pp281-283]. Begin Program Dirac.py #!/usr/local/bin/python #Dirac.py from sympy.galgebra.GA import * from sympy.galgebra.latex_ex import * from sympy import * set_main(sys.modules[__name__]) if __name__ == '__main__': metric = '1 0 0 0,'+\ '0 -1 0 0,'+\ '0 0 -1 0,'+\ '0 0 0 -1' vars = make_symbols('t x y z') MV.setup('gamma_t gamma_x gamma_y gamma_z',metric,True,vars) parms = make_symbols('m e') Format('1 1 1 1') I = MV(ONE,'pseudo') nvars = len(vars) psi = MV('psi','spinor',fct=True) A = MV('A','vector',fct=True) sig_x = gamma_x*gamma_t sig_y = gamma_y*gamma_t sig_z = gamma_z*gamma_t print '$A$ is 4-vector potential' print A print r'$\bm{\psi}$ is 8-component real spinor (even multi-vector)' print psi dirac_eq = psi.grad()*I*sig_z-e*A*psi-m*psi*gamma_t dirac_eq.simplify() print 'Dirac equation in terms of real geometric algebra/calculus '+\ r'$\lp\nabla \bm{\psi} I \sigma_{z}-eA\bm{\psi} = m\bm{\psi}\gamma_{t}\rp$' print 'Spin measured with respect to $z$ axis' Format('mv=3') print r'\nabla \bm{\psi} I \sigma_{z}-eA\bm{\psi}-m\bm{\psi}\gamma_{t} = ',dirac_eq,' = 0' xdvi(filename='Dirac.tex') End Program Dirac.py Begin Program Output is 4-vector potential is 8-component real spinor (even multi-vector) Dirac equation in terms of real geometric algebra/calculus Spin measured with respect to axis End Program Output Curvilinear coodinates are implemented as shown in section Geometric Derivative. The gradient of a scalar function and the divergence and curl of a vector function ( is the curl in three dimensions in the notation of geometric algebra) to demonstrate the formulas derived in section Geometric Derivative. Begin Program coords.py #!/usrlocal/bin/python #EandM.py from sympy.galgebra.GA import * from sympy.galgebra.latex_ex import * from sympy import * import sympy,numpy,sys set_main(sys.modules[__name__]) if __name__ == '__main__': metric = '1 0 0,'+\ '0 1 0,'+\ '0 0 1' MV.setup('gamma_x gamma_y gamma_z',metric,True) Format('1 1 1 1') coords = make_symbols('r theta phi') x = r*(sympy.cos(theta)*gamma_z+sympy.sin(theta)*\ (sympy.cos(phi)*gamma_x+sympy.sin(phi)*gamma_y)) x.set_name('x') MV.rebase(x,coords,'e',False) #psi = MV.scalar_fct('psi') psi = MV('psi','scalar',fct=True) #psi.name = 'psi' dpsi = psi.grad() print 'Gradient of Scalar Function $\\psi$' print '\\nabla\\psi =',dpsi #A = MV.vector_fct('A') A = MV('A','vector',fct=True) #A.name = 'A' print 'Div and Curl of Vector Function $A$' print A gradA = A.grad() I = MV(ONE,'pseudo') divA = A.grad_int() curlA = -I*A.grad_ext() print '\\nabla \\cdot A =',divA Format('mv=3') print '-I\\lp\\nabla \\W A\\rp =',curlA xdvi(filename='coords.tex') End Program coords.py Begin Program Output Gradient of Scalar Function Div and Curl of Vector Function End Program Output
https://docs.sympy.org/0.7.0/modules/galgebra/GA/GAsympy.html
CC-MAIN-2019-30
refinedweb
7,252
57.47
. Process[] pArr = Process.GetProcessesByName foreach(Process pTemp in pArr) { pTemp.Kill(); } include the system.Diagnostics namespace in your code i am putting at the end of my code this: doc.Close(Word.WdSaveOptio doc = Nothing WordApp = Nothing is there anything that i need to add?? Broken down into practical pointers and step-by-step instructions, the IT Service Excellence Tool Kit delivers expert advice for technology solution providers. Get your free copy now. doc.Close(Word.WdSaveOptio WordApp .Quit(Word.WdSaveOptions.w doc = Nothing WordApp = Nothing Experts Exchange Solution brought to you by Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial Dim parr As Process() = Process.GetProcessesByName Imports System.Diagnostics ..... parr.Length is always = 0 Dim parr As Process() = Process.GetProcessesByName For i As Integer = 0 To parr.Length - 1 parr(i).Kill() any idea why is that?? Your method worked...thanks for your help!
https://www.experts-exchange.com/questions/21827957/kill-the-winword-exe-process-in-the-task-manager.html
CC-MAIN-2018-30
refinedweb
167
55
MODULE 43 NETWORK PROGRAMMING SOCKET PART V Advanced TCP/IP and RAW SOCKET My Training Period: hours Note: This is a continuation from Part IV, Module42 . Working program examples compiled using gcc , tested using the public IPs, run on Fedora 3, with several times of update, as root or suid 0. The Fedora machine used for the testing having the " No Stack Execute " disabled and the SELinux set to default configuration. This Module will concentrate on the TCP/IP stack and will try to dig deeper till the packet level. The protocols: IP, ICMP, UDP and TCP To fabricate our own packets, what we all need to know is the structures of the protocols that need to be included. We can define our own protocol structure (packets’ header) then assign it with new values or we just assign new values for the standard built-in structures’ elements. Below you will find detail information of the IP, ICMP, UDP and TCP headers. Unix/Linux systems provide standard structures for the header files, so it is very useful in learning and understanding packets by fabricating our own packet by using a struct, so we have the flexibility in filling the packet headers. We can always create our own struct, as long as the length of each field is correct. In building our program later on, note also the little endian (Intel x86) notation and the big endian based machines (some processor architectures other than Intel x86 such as Motorola). The following sections try to analyze header structures that will be used to construct our own packet in the program examples that follows, so that we know what values should be filled in and which meaning they have. The data types that we need to use are: unsigned char (1 byte/8 bits), unsigned short int (2 bytes/16 bits) and unsigned int (4 bytes/32 bits). Some of the information presented in the following sections might be a repetition from the previous one. IP The following figure is IP header format that will be used as our reference in the following discussion. Figure 23: IP header format. The following is a structure for IP header example. Here we try defining all the IP header fields. struct ipheader { unsigned char iph_ihl:4, ip; unsigned int iph_dest; }; The. Page 1 of 19 Element/field Description iph_ver 4 bits of the version of IP currently used, the ip version is 4 (other version is IPv6). iph_ihl 4 bits, the ip header (datagram) length in 32 bits octets (bytes) that point to the beginning of the data. The minimum value for a correct header is 5. This means a value of 5 for the iph_ihl means 20 bytes (5 * 4). Values other than 5 only need to be set if the ip header contains options (mostly used for routing). iph_tos 8 bits, type of service controls the priority of the packet. 0x00 is normal; the first 3 bits stand for routing priority, the next 4 bits for the type of service (delay, throughput, reliability and cost). It indicates the quality of service desired by specifying how an upper-layer protocol would like a current datagram to be handled, and assigns datagrams various levels of importance. This field is used for the assignment of Precedence, Delay, Throughput and Reliability. These parameters are to be used to guide the selection of the actual service parameters when transmitting a datagram through a particular network. Several networks offer service precedence, which somehow treats high precedence traffic as more important than other traffic (generally by accepting only traffic above certain precedence at time of high load). The major choice is a three way tradeoff between low-delay, high-reliability, and high-throughput. Bits 0-2: Precedence. 111 - Network Control 110 - Internetwork Control 101 - CRITIC/ECP 100 - Flash Override 011 - Flash 010 - Immediate 001 - Priority 000 – Routine Bit 3: 0 = Normal Delay, 1 = Low Delay. Bit 4: 0 = Normal Throughput, 1 = High Throughput. Bit 5: 0 = Normal Relibility, 1 = High Relibility. Bit 6-7: Reserved for Future Use. 0 1 2 3 4 5 6 7 Precedence D T R 0 0. iph_len The total is 16 bits; total length must contain the total length of the ip datagram (ip and data) in bytes. This includes ip header, icmp or tcp or udp header and payload size in bytes. The maximum length could be specified by this field is 65,535 bytes. Typically, hosts are prepared to accept datagrams up to 576 bytes (whether they arrive whole or in fragments). iph_ident The iph_ident sequence number is mainly used for reassembly of fragmented IP datagrams. When sending single datagrams, each can have an arbitrary ID. It contains an integer that identifies the current datagram. This field is assigned by sender to help receiver to assemble the datagram fragments. iph_flag. The Control Flags: Bit 0: reserved, must be zero. Page 2 of 19 Bit 1: (DF) 0 = May Fragment, 1 = Don't Fragment. Bit 2: (MF) 0 = Last Fragment, 1 = More Fragments. 0 1 2 0 DF MF ihp_offset The fragment offset is used for reassembly of fragmented datagrams. The first 3 bits are the fragment flags, the first one always 0, the second the do-not-fragment bit (set by ihp_offset = 0x4000) and the third the more-flag or more-fragments-following bit (ihp_offset = 0x2000). The following 13 bits is the fragment offset, containing the number of 8-byte big packets already sent. This 13 bits field indicates the position of the fragment's data relative to the beginning of the data in the original datagram, which allows the destination IP process to properly reconstruct the original datagram. iph_ttl 8 bits, time to live is the number of hops (routers to pass) before the packet is discarded, and an icmp error message is returned. The maximum is 255. It is a counter that gradually decrements down to zero, at which point the datagram is discarded. This keeps packets from looping endlessly. iph_protocol 8 bits, the transport layer protocol. It can be tcp (6), udp (17), icmp (1), or whatever protocol follows the ip header. Look in /etc/protocols or RFC 1700 for more. It indicates which upper-layer protocol receives incoming packets after IP processing is complete. iph_chksum 16 bits, a checksum on the header only, the ip datagram. Every time anything in the datagram changes, it needs to be recalculated, or the packet will be discarded by the next router. It helps ensure IP header integrity. Since some header fields change, e.g., Time To Live, this is recomputed and verified at each point that the Internet header is processed. iph_source 32 bits, source IP address. It is converted to long format, e.g. by inet_addr(). Can be chosen arbitrarily (as used in IP spoofing). iph_dest 32 bits, destination IP address, converted to long format, e.g. by inet_addr(). Can be chosen arbitrarily.. Padding Variable. The internet header padding is used to ensure that the internet header ends on a 32 bit boundary. The padding is zero. Table 9: IP header fields description. Fragmentation Fragmentation, transmission and reassembly across a local network which is invisible to the internet protocol (IP) are called intranet fragmentation. Fragmentation of an internet datagram is necessary when it originates in a local network that allows a large packet size and must traverse a local network that limits packets to a smaller size to reach its destination. An internet datagram can be marked "don't fragment". When the internet datagram is marked like that, it is not to be internet fragmented under any circumstances. If internet datagram that has been marked as "don't fragment" cannot be delivered to its destination without fragmenting it, it will be discarded instead. Page 3 of 19: IMCP). Element/field Description icmph_type The message type, for example 0 - echo reply, 8 - echo request, 3 - destination unreachable. Look in for all the types. For each type of message several different codes are defined. An example of this is the Destination Unreachable message, where possible messages are: no route to destination, communication with destination administratively prohibited, not a neighbor, address unreachable, port unreachable. For further details, refer to the standard . icmph_code This is significant when sending an error message (unreach), and specifies the kind of error. Again, consult the include file for more. The 16-bit one's complement of the one's complement sum of the ICMP message starting with the ICMP type. For computing the checksum, the checksum field should be zero. icmph_chksum The checksum for the icmp header + data. Same as the IP checksum. Note: The next 32 bits in an icmp packet can be used in many different ways. This depends on the icmp type and code. The most commonly seen structure, an ID and sequence number, is used in echo requests and replies, but keep in mind that the header is actually more complex. icmph_ident An identifier to aid in matching requests/replies; may be zero. Used to echo request/reply messages, to identify the request. icmph_seqnum Sequence number to aid in matching requests/replies; may be zero. Used to identify the sequence of echo messages, if more than one is sent. Page 4 of 19 Table 10: ICMP header fields description. The following is an example of the ICMP header format as defined in the above structure for Echo or Echo Reply Figure 25: An example of IP header format for Echo or Echo Reply Message. The description: Field Description Type 8 - For echo message; 0 - for echo reply message. Code 0. Checksum The checksum is the 16-bit ones’. Identifier If code = 0, an identifier to aid in matching echoes and replies, may be zero. Sequence Number If code = 0, a sequence number to aid in matching echoes and replies, may be zero. The data received in the echo message must be returned in the echo reply message. The identifier and sequence number may be used by the echo sender to aid in matching the replies with the echo requests. For example, the identifier might be used like a port in TCP or UDP to identify a session, and the sequence number might be incremented on each echo request sent. The echoer returns these same values in the echo reply. Code 0 may be received from a gateway or a host. Table 11: IP header fields for Echo or Echo Reply Message description. UDP The User Datagram Protocol is a transport protocol for sessions that need to exchange data. Both transport protocols, UDP and TCP provide 65535 (2 16 ): Element/field Description udph_srcport The source port that a client bind()s to, and the contacted server will reply back to in order to direct his responses to the client. It is an optional field, when meaningful, it indicates the Page 5 of 19 port of the sending process, and may be assumed to be the port to which a reply should be addressed in the absence of any other information. If not used, a value of zero is inserted. udph_destport The destination port that a specific server can be contacted on. udph_len The length of udp header and payload data in bytes. It is a length in bytes of this user datagram including this header and the data. (This means the minimum value of the length is eight.) udph_chksum The checksum of header and data, see IP checksum. It is the 16-bit one's complement of the one's complement sum of a pseudo header (shown in the following figure) used in TCP. If the computed checksum is zero, it is transmitted as all ones (the equivalent in one's complement arithmetic). An all zero transmitted checksum value means that the transmitter generated no checksum (for debugging or for higher level protocols that don't care). Table 12: UDP header fields: Element/field Description tcph_srcport The 16 bits source port, which has the same function as in UDP. Page 6 of 19 tcph_destport The 16 bits destination port, which has the same function as in UDP. tcph_seqnum The 32 bits sequence number of the first data octet in this segment (except when SYN is present). If SYN is present the sequence number is the initial sequence number (ISN) and the first data octet is ISN+1. It is used to enumerate the TCP segments. The data in a TCP connection can be contained in any amount of segments (= single tcp datagrams), which will be put in order and acknowledged. For example, if you send 3 segments, each containing 32 bytes of data, the first sequence would be (N+)1, the second one (N+)33 and the third one (N+)65. "N+" because the initial sequence is random. tcph_acknum 32 bits. If the ACK control bit is set this field contains the value of the next sequence number the sender of the segment is expecting to receive. Once a connection is established this is always sent. Every packet that is sent and a valid part of a connection is acknowledged with an empty TCP segment with the ACK flag set (see below), and the tcph_acknum field containing the previous tcph_seqnum number. tcph_offset The segment offset specifies the length of the TCP header in 32bit/4byte blocks. Without tcp header options, the value is 5. tcph_reserved 4 bits reserved for future use. This is unused and must contain binary zeroes. tcph_flags This field consists of six bits flags (left to right). They can be ORed. TH_URG - Urgent. Segment will be routed faster, used for termination of a connection or to stop processes (using telnet protocol). TH_ACK - Acknowledgement. Used to acknowledge data and in the second and third stage of a TCP connection initiation. TH_PSH - Push. The systems IP stack will not buffer the segment and forward it to the application immediately (mostly used with telnet). TH_RST - Reset. Tells the peer that the connection has been terminated. TH_SYN - Synchronization. A segment with the SYN flag set indicates that client wants to initiate a new connection to the destination port. TH_FIN - Final. The connection should be closed, the peer is supposed to answer with one last segment with the FIN flag set as well. tcph_win 16 bits Window. The number of bytes that can be sent before the data should be acknowledged with an ACK before sending more segments. tcph_chksum. It is the checksum of pseudo header, tcp header and payload. The pseudo is a structure containing IP source and destination address, 1 byte set to zero, the protocol (1 byte with a decimal value of 6), and 2 bytes (unsigned short) containing the total length of the tcp segment. The checksum also covers a 96 bit pseudo header (shown in the following figure). tcph_urgptr Urgent pointer. Only used if the TH_URG flag is set, else zero. It points to the end of the payload data that should be sent with priority. Table 13: TCP header fields description. Figure 29: TCP pseudo header format. The TCP Length is the TCP header length plus the data length in octets (this is not an explicitly transmitted quantity, but is computed), and it does not count the 12 octets of the pseudo header. Building and injecting datagrams program examples Page 7 of 19 [root@bakawali testraw]# cat rawudp.c // ----rawudp.c------ // Must be run by root lol! Just datagram, no payload/data #include <unistd.h> #include <stdio.h> #include <sys/socket.h> #include <netinet/ip.h> #include <netinet/udp.h> // The packet length #define PCKT_LEN 8192 // Can create separate header file (.h) for all // headers' structure // IP header's structure struct ipheader { unsigned char iph_ihl:5, iph_ver:4; unsigned char iph_tos; unsigned short int iph_len; unsigned short int iph_ident ; unsigned char iph_flag; unsigned short int iph_offset; unsigned char iph_ttl; unsigned char iph_protocol; unsigned short int iph_chksum; unsigned int iph_sourceip; unsigned int iph_destip; } ; // UDP header's structure struct udpheader { unsigned short int udph_srcport; unsigned short int udph_destport; unsigned short int udph_len; unsigned short int udph_chksum; }; // total udp header length: 8 bytes (=64 bits) // Function for checksum calculation // From the RFC, the checksum algorithm is: // "The checksum field is the 16 bit one's complement of the one's // complement sum of all 16 bit words in the header. For purposes o f // computing the checksum, the value of the checksum field is zero." unsigned short csum(unsigned short *buf, int nwords) { // unsigned long sum; for(sum=0; nwords>0; nwords--) sum += *buf++; sum = (sum >> 16) + (sum &0xffff); sum += (sum >> 16); return (unsigned short)(~sum); } // Source IP, source port, target IP, target port from // the command line arguments int main(int argc, char *argv[]) { int sd; // No data/payload just datagram char buffer[PCKT_LEN]; // Our own headers' structures struct ipheader *ip = (struct ipheader *) buffer; struct udpheader *udp = (struct udpheader *) (buffer + sizeof(struct ipheader)); // Source and destination addresses: IP and port); } // Create a raw socket with UDP protocol Page 8 of 19 sd = socket(PF_INET, SOCK_RAW, IPPROTO_UDP); if(sd < 0) { perror("socket() error"); // If something wrong just exit exit(-1); } else printf("socket() - Using SOCK_RAW socket and UDP protocol is OK.\n"); // The source is redundant, may be used later if needed // Address family sin.sin_family = AF_INET; din.sin_family = AF_INET; // Port numbers sin.sin_port = htons(atoi(argv[2])); din.sin_port = htons(atoi(argv[4])); // IP addresses sin.sin_addr.s_addr = inet_addr(argv[1]); din.sin_addr.s_addr = inet_addr(argv[3]); // Fabricate the IP header or we can use the // standard header structures but assign our own values. ip->iph_ihl = 5; ip->iph_ver = 4; ip->iph_tos = 16; // Low delay ip->iph_len = sizeof(struct ipheader) + sizeof(struct udpheader); ip->iph_ident = htons(54321); ip->iph_ttl = 64; // hops ip->iph_protocol = 17; // UDP // Source IP address, can use spoofed address here!!! ip->iph_sourceip = inet_addr(argv[1]); // The destination IP address ip->iph_destip = inet_addr(argv[3]); // Fabricate the UDP header // Source port number, redundant udp->udph_srcport = htons(atoi(argv[2])); // Destination port number udp->udph_destport = htons(atoi(argv[4])); udp->udph_len = htons(sizeof(struct udpheader)); // Calculate the checksum for integrity ip->iph_chksum = csum((unsigned short *)buffer, sizeof(struct ipheader) + sizeof(struct udpheader)); // Inform the kernel do not fill up the packet structure // we will build our own... if(setsockopt(sd, IPPROTO_IP, IP_HDRINCL, val, sizeof(one)) < 0) { perror("setsockopt() error"); exit(-1); } else printf("setsockopt() is OK.\n"); // Send loop, send for every 2 second for 100 count printf("Trying...\n"); printf("Using raw socket and UDP protocol\n"); printf("Using Source IP: %s port: %u, Target IP: %s port: %u.\n", argv[1], atoi(argv[2]), argv[3], atoi(argv[4])); int count; for(count = 1; count <=20; count++) { if(sendto(sd, buffer, ip->iph_len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) // Verify { perror("sendto() error"); exit(-1); } else { printf("Count #%u - sendto() is OK.\n", count); sleep(2); } } close(sd); return 0; } Page 9 of 19 [root@bakawali testraw]# gcc rawudp.c -o rawudp [root@bakawali testraw]# ./rawudp - Invalid parameters!!! - Usage ./rawudp <source hostname/IP> <source port> <target hostname/IP> <target port> [root@bakawali testraw]# ./rawudp 192.168.10.10 21 203.106.93.91 8080 socket() - Using SOCK_RAW socket and UDP protocol is OK. setsockopt() is OK. Trying... Using raw socket and UDP protocol Using Source IP: 192.168.10.10 port: 21, Target IP: 203.106.93.91 port: 8080. Count #1 - sendto() is OK. Count #2 - sendto() is OK. Count #3 - sendto() is OK. Count #4 - sendto() is OK. Count #5 - sendto() is OK. Count #6 - sendto() is OK. Count #7 - sendto() is OK. ... You can use network monitoring tools to capture the raw socket datagrams at the target machine to see the effect. The following is a raw socket and tcp program example. [root@bakawali testraw]# cat rawtcp.c //---cat rawtcp.c--- // Run as root or suid 0, just datagram no data/payload #include <unistd.h> #include <stdio.h> #include <sys/socket.h> #include <netinet/ip.h> #include <netinet/tcp.h> // Packet length #define PCKT_LEN 8192 // May create separate header file (.h) for all // headers' structures // IP header's structure char tcph_flags;; } ; // Simple checksum function, may use others such as // Cyclic Redundancy Check, CRC Page 10 of 19 unsigned short csum(unsigned short *buf, int len) { unsigned long sum; for(sum=0; len>0; len--) sum += *buf++; sum = (sum >> 16) + (sum &0xffff); sum += (sum >> 16); return (unsigned short)(~sum); } int main(int argc, char *argv[]) { int sd; // No data, just datagram char buffer[PCKT_LEN]; // The size of the headers struct ipheader *ip = (struct ipheader *) buffer; struct tcpheader *tcp = (struct tcpheader *) (buffer + sizeof(struct ipheader));); } sd = socket(PF_INET, SOCK_RAW, IPPROTO_TCP); if(sd < 0) { perror("socket() error"); exit(-1); } else printf("socket()-SOCK_RAW and tcp protocol is OK.\n"); // The source is redundant, may be used later if needed // Address family sin.sin_family = AF_INET; din.sin_family = AF_INET; // Source port, can be any, modify as needed sin.sin_port = htons(atoi(argv[2])); din.sin_port = htons(atoi(argv[4])); // Source IP, can be any, modify as neede d sin.sin_addr.s_addr = inet_addr(argv[1]); din.sin_addr.s_addr = inet_addr(argv[3]); // IP structure ip->iph_ihl = 5; ip->iph_ver = 4; ip->iph_tos = 16; ip->iph_len = sizeof(struct ipheader) + sizeof(struct tcpheader); ip->iph_ident = htons(54321); ip->iph_offset = 0; ip->iph_ttl = 64; ip->iph_protocol = 6; // TCP ip->iph_chksum = 0; // Done by kernel // Source IP, modify as needed, spoofed, we accept through // command line argument ip->iph_sourceip = inet_addr(argv[1]); // Destination IP, modify as needed, but here we accept through // command line argument ip->iph_destip = inet_addr(argv[3]); // TCP structure // The source port, spoofed, we accept through the command line tcp->tcph_srcport = htons(atoi(argv[2])); // The destination port, we accept through command line tcp->tcph_destport = htons(atoi(argv[4])); tcp->tcph_seqnum = htonl(1); tcp->tcph_acknum = 0; tcp->tcph_offset = 5; tcp->tcph_syn = 1; tcp->tcph_ack = 0; tcp->tcph_win = htons(32767); Page 11 of 19 tcp->tcph_chksum = 0; // Done by kernel tcp->tcph_urgptr = 0; // IP checksum calculation ip->iph_chksum = csum((unsigned short *) buffer, (sizeof(struct ipheader) + sizeof(struct tcpheader))); // Inform the kernel do not fill up the headers' // structure, we fabricated our own if(setsockopt(sd, IPPROTO_IP, IP_HDRINCL, val, sizeof(one)) < 0) { perror("setsockopt() error"); exit(-1); } else printf("setsockopt() is OK\n"); printf("Using:::::Source IP: %s port: %u, Target IP: %s port: %u.\n", argv[1], atoi(argv[2]), argv[3], atoi(argv[4])); // sendto() loop, send every 2 second for 50 counts unsigned int count; for(count = 0; count < 20; count++) { if(sendto(sd, buffer, ip->iph_len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) // Verify { perror("sendto() error"); exit(-1); } else printf("Count #%u - sendto() is OK\n", count); sleep(2); } close(sd) ; return 0; } [root@bakawali testraw]# gcc rawtcp.c -o rawtcp [root@bakawali testraw]# ./rawtcp - Invalid parameters!!! - Usage: ./rawtcp <source hostname/IP> <source port> <target hostname/IP> <target port> [root@bakawali testraw]# ./rawtcp 10.10.10.100 23 203.106.93.88 8008 socket()-SOCK_RAW and tcp protocol is OK. setsockopt() is OK Using:::::Source IP: 10.10.10.100 port: 23, Target IP: 203.106.93.88 port: 8008. Count #0 - sendto() is OK Count #1 - sendto() is OK Count #2 - sendto() is OK Count #3 - sendto() is OK Count #4 - sendto() is OK ... Network utilities applications such as ping and Traceroute (check Unix/Linux man page) use ICMP and raw socket. The following is a very loose ping and ICMP program example. It is taken from ping-of-death program. [root@bakawali testraw]# cat myping.c /* Must be root or suid 0 to open RAW socket */ ; Page 12 of 19 struct sockaddr_in dst; int offset; int on; int num = 100; if(argc < 3) { printf("\nUsage: %s <saddress> <dstaddress> [number]\n", argv[0]); printf("- saddress is the spoofed source address\n"); printf("- dstaddress is the target\n"); printf("- number is the number of packets to send, 100 is the default\n"); exit(1); } /* If enough argument supplied */ if(argc == 4) /* Copy the packet number */ num = atoi(argv[3]); /* Loop based on the packet number */ for(i=1;i<=num;i++) { on = 1; bzero(buf, sizeof(buf)); /* Create RAW socket */ if((s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW)) < 0) { perror("socket() error"); /* If something wrong, just exit */ exit(1); } /* socket options, tell the kernel we provide the IP structure */ if(setsockopt(s, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on)) < 0) { perror("setsockopt() for IP_HDRINCL error"); exit(1); } if((hp = gethostbyname(argv[2])) == NULL) { if((ip->ip_dst.s_addr = inet_addr(argv[2])) == -1) { fprintf(stderr, "%s: Can't resolve, unknown host.\n", argv[2]); exit(1); } } else bcopy(hp->h_addr_list[0], &ip->ip_dst.s_addr, hp->h_length); /* The following source address just redundant for target to collect */ if((hp2 = gethostbyname(argv[1])) == NULL) { if((ip->ip_src.s_addr = inet_addr(argv[1])) == -1) { fprintf(stderr, "%s: Can't resolve, unknown host\n", argv[1]); exit(1); } } else bcopy(hp2->h_addr_list[0], &ip->ip_src.s_addr, hp->h_length); printf("Sending to %s from spoofed %s\n", inet_ntoa(ip->ip_dst), argv[1]); /* Ip structure, check the /usr/include/netinet/ip.h */ ip->ip_v = 4; ip->ip_hl = sizeof*ip >> 2; ip->ip_tos = 0; ip->ip_len = htons(sizeof(buf)); ip->ip_id = htons(4321); ip->ip_off = htons(0); ip->ip_ttl = 255; ip->ip_p = 1; ip->ip_sum = 0; /* Let kernel fills in */ dst.sin_addr = ip->ip_dst; dst.sin_family = AF_INET; icmp->type = ICMP_ECHO; Page 13 of 19 icmp->code = 0; /* Header checksum */ */ /* sending time */ if(sendto(s, buf, sizeof(buf), 0, (struct sockaddr *)&dst, sizeof(dst)) < 0) { fprintf(stderr, "offset %d: ", offset); perror("sendto() error"); } else printf("sendto() is OK.\n"); /* IF offset = 0, define our ICMP structure */ if(offset == 0) { icmp->type = 0; icmp->code = 0; icmp->checksum = 0; } } /* close socket */ close(s); usleep(30000); } return 0; } [root@bakawali testraw]# gcc myping.c -o myping [root@bakawali testraw]# ./myping Usage: ./myping <saddress> <dstaddress> [number] - saddress is the spoofed source address - dstaddress is the target - number is the number of packets to send, 100 is the default [root@bakawali testraw]# ./myping 1.2.3.4 203.106.93.94 10000 sendto() is OK. sendto() is OK. ... ... sendto() is OK. sendto() is OK. Sending to 203.106.93.88 from spoofed 1.2.3.4 sendto() is OK. ... You can verify this ‘attack’ at the target machine by issuing the tcpdump –vv command or other network monitoring programs such as Ethereal. SYN Flag Flooding By referring to the previous "three-way handshake" of the TCP, when the server gets a connection request, it sends a SYN-ACK to the spoofed IP address, normally doesn't exist. The connection is made to time-out until it gets the ACK segment (often called a half-open connection). Since the server connection queue resource is limited, flooding the server with continuous SYN segments can slow down the server or completely push it offline. This SYN flooding technique involves spoofing the IP address and sending multiple SYN segments to a server. In this case, a full tcp connection is never established. We can also write a code, which sends a SYN packet with a randomly spoofed IP to avoid the firewall blocking. This will result in all the entries in our spoofed IP list, sending RST segments to the victim server, upon getting the SYN-ACK from the victim. This can choke the target server and often form a crucial part of a Denial Of Service (DOS) attack. When the attack is launched by many zombie hosts from various location, all target the same victim, it becomes Distributed DOS (DDOS). In worse case this DOS/DDOS attack might be combined with other exploits such as buffer overflow. The DOS/DDOS attack also Page 14 of 19 normally use transit hosts s a launching pad for attack. This means the attack may come from a valid IP/Domain name. The following is a program example that constantly sends out SYN requests to a host (Syn flooder). [root@bakawali testraw]# cat synflood.c #include <unistd.h> #include <stdio.h> #include <sys/socket.h> #include <netinet/ip.h> #include <netinet/tcp.h> /* TCP flags, can define something like this if needed */ /* #define URG 32 #define ACK 16 #define PSH 8 #define RST 4 #define SYN 2 #define FIN 1 */; } ; /*; Page 15 of 19); sin.sin_addr.s_addr = inet_addr(argv[1]); /* zero out the buffer */ memset(datagram, 0, 4096); /* we'll now fill in the ip/tcp header values */ iph->iph_ihl = 5; iph->iph_ver = 4; iph->iph_tos = 0; /* just datagram, no payload. You can add payload as needed */ iph->iph_len = sizeof (struct ipheader) + sizeof (struct tcpheader); /* the value doesn't matter here */ iph->iph_ident = htonl (54321); iph->iph_offset = 0 ; iph->iph_ttl = 255; iph->iph_protocol = 6; // upper layer protocol, TCP /* set it to 0 before computing the actual checksum later */ iph->iph_chksum = :o() Page 16 of 19 { Ethereal. has been introduced and becomes part of the Linux kernels, in order to protect. To protect your system from SYN flooding, the SYN Cookies have to be enabled. 1. echo 1 > /proc/sys/net/ipv4/tcp_syncookies to your /etc/rc.d/rc.local script. 2. Edit /etc/sysctl.conf file and add the following line: net.ipv4.tcp_syncookies = 1 3. Restart your system. Session Hijacking Raw socket can also be used for Session Hijacking. In this case, we inject our own packet that having same specification with the original packet and replace it. As discussed in the previous section of the tcp connection termination,. Before trying to hijack a TCP connection, we need to understand the TIME_WAIT state. Consider two systems, A and B, communicating. After terminating the connection, if these two clients want to communicate again, they Page 17 of 19: 1.); }. ---------------------------------------------------------------Break----------------------------------------------------------------- Note:. ---------------------------------------------------------------Break----------------------------------------------------------------- SYN Handshakes Port scanner/sniffer such as Nmap use raw sockets to the advantage of stealth. They use a half-way-SYN handshake that basically works like the following steps: ▪ Host A sends a special SYN packet to host B. ▪ Host B sends back a SYN/ACK packet to host A. ▪. --------------------------------------------------------End-------------------------------------------------------- Further interesting reading and digging: Secure Socket Layer (SSL) Page 18 of 19 R OpenSSL , the open source version to learn more about SSL. One of real project example that implements the SSL is Apache web server ( apache-ssl ). Information about program examples can be obtained at opens SSH ssh program which replaces rlogin and telnet, scp which replaces rcp, and sftp which replaces ftp. Also included is sshd which is the server side of the package, and the other basic utilities like ssh-add , ssh-agent , ssh- keysign , ssh-keyscan , ssh-keygen and sftp-server . OpenSSH supports SSH protocol versions 1.3, 1.5, and 2.0. -----------------------------------------Real End----------------------------------------- ------ More reading and digging: 1. Check the best selling C/C++, Networking, Linux and Open Source books at Amazon.com . Page 19 of 19 Log in to post a comment
https://www.techylib.com/en/view/hollowtabernacle/c_socket_programming_program_examples_based_on_tcp_udp_ip
CC-MAIN-2017-30
refinedweb
5,105
64.1
Plot doesn't show up when running the code Hi all, I'm relatively new to Python. I'm doing my first steps with Backtrader and tried to run the sample code. I got a problem concerning the plot. The code runs fine but when it comes to the plot I only get the following output but not plot: [[<Figure size 640x480 with 5 Axes>]] I'm using the Anaconda-Distribution with Python 3.7 and Spyder on Win 10. Probably it's a noob question but I don't know how to solve this 'problem'. Thanks in advance - backtrader administrators last edited by @knolch said in Plot doesn't show up when running the code: [[<Figure size 640x480 with 5 Axes>]] That's the result of this in the code cerebro.plot() Because plotdoes actually return the created matplotlibfigure, should the user desire to do something with it. If you were assigning the return value to a variable, you wouldn't see that. The problem is Spyder. The configuration you have is preventing the chart from being displayed. Your options: Try cerebro.plot(iplot=False) This disables automatic inline plotting detection in backtrader, but this is probably not going to help. Have a look at the charting options in Spyderfor plotting. If set to automatic, change it to inlineand viceversa. It's the problem with this Python kernel hijacking shells: they think they know better. @backtrader Thanks for your reply. I followed your steps but unfortunately it didn't work. I then switched to the standard Python IDLE as you mentioned it's a problem with Spyder and it worked out. I will not use Spyder now for developing and testing my trading strategies but do you have any recommendations what I could use instead? I mean IDLE works but it's very uncomfortable. Which Editor are you using for backtesting? Thanks for your help! - backtrader administrators last edited by @knolch said in Plot doesn't show up when running the code: I will not use Spyder now for developing and testing my trading strategies but do you have any recommendations what I could use instead? I mean IDLE works but it's very uncomfortable. Which Editor are you using for backtesting? Apparently people like PyCharmfor which a Free Community edition is available. Unless you are comfortable with vimor emacsand don't mind using a standard shell (which for all intent and purposes happens to be bashthese days) I use Spyder and here is my solution (Some hack). - Change the spyder setting : "Preferences" -> "IPython console" -> "Graphics" tab -> "Graphics backend" -> "Backend" -> "Automatic" - Restart the current Console (The one inside spyder) - Import below in sequence import matplotlib import matplotlib.pyplot as plt import backtrader as bt import backtrader.indicators as btind import backtrader.analyzers as btanalyzers import backtrader.feeds as btfeeds import backtrader.strategies as btstrats import backtrader.plot matplotlib.use('Qt5Agg') plt.switch_backend('Qt5Agg') - Plot with iplot = false cerebro.plot(height= 30, iplot= False) - Run your code and you should see a separate windows pop up @sobeasy It seems to me that your solution should work. But it blows up for me (Spyder 3.3.6, Anaconda3, Win10 64 bit). Somehow, it really wants tkinter! ImportError: Cannot load backend 'TkAgg' which requires the 'tk' interactive framework, as 'qt5' is currently running Time to try VS Code; PyCharm's footprint is too heavy.
https://community.backtrader.com/topic/1563/plot-doesn-t-show-up-when-running-the-code
CC-MAIN-2020-10
refinedweb
562
66.64
* avaFX Author Submitting form data to a new screen using javafx Chris Creed Ranch Hand Joined: Feb 27, 2009 Posts: 66 I like... posted Oct 08, 2013 17:09:25 0 I'm currently working on a desktop app using javafx. IN the introductory screen, I have a form where the user would fill in various information, and then submit the form contents to a new screen, discarding the old screen as the user would not be heading back there in the application logic flow. However I'm at a loss as to how this would be done. I was thinking it would be like Android where I could call a get new screen function, calling the class for the new screen's constructor and it'd pass over to there, but when trying that, it just created a new window, which obviously isn't what I'm after. Was wondering if anyone knew of a better way that I could move the application to a new screen, and still easily pass the form information over to that screen for further processing. John Damien Smith Ranch Hand Joined: Jan 26, 2012 Posts: 96 6 posted Oct 08, 2013 20:50:27 0 I think this question is (essentially) a duplicate of another coderanch question: switch between scenes (not stage) in javafx Have a look over the links in the answers to that question and see if it answers your question. Jamie Coleshill Greenhorn Joined: Oct 18, 2013 Posts: 8 I like... posted Oct 18, 2013 14:01:46 0 I would use FXML and make multiple scenes. If you are not to sure on how to handle multiple scenes, I would suggest you goto the following two links and read over them. Angela's Blog getting started with javafx I myself made a login screen based somewhat on those codes as well as another one that I cant remember at the moment, but how I went about it is as follows... Key parts in my main class are public class MPClient extends Application { private Stage stage; private User user; private int screenWidth, screenHeight; private static MPClient instance; String sep = System.getProperty("file.seperator"); private Scene scene; public MPClient() { instance = this; } public static MPClient getInstance() { return instance; } @Override public void start(Stage primaryStage) { Toolkit tk = Toolkit.getDefaultToolkit(); Dimension d = tk.getScreenSize(); screenWidth = d.width; screenHeight = d.height; user = new User(); try { stage = primaryStage; stage.initStyle(StageStyle.UNDECORATED); stage.centerOnScreen(); gotoLogin(); stage.show(); } catch (Exception ex) { Logger.getLogger(MPClient.class.getName()).log(Level.SEVERE, null, ex); } } public void gotoLogin() { try { LoginController login = (LoginController) replaceSceneContent("login/Login.fxml"); login.setApp(this); login.setUser(user); login.setSize(screenWidth, screenHeight); } catch (Exception ex) { Logger.getLogger(MPClient.class.getName()).log(Level.SEVERE, null, ex); } } public void gotoClientHome(User user1) { try { this.user = user1; ClientHomeController clientHome = (ClientHomeController) replaceSceneContent("clienthome/ClientHome.fxml"); clientHome.setApp(this); clientHome.setUser(user); } catch (Exception ex) { Logger.getLogger(MPClient.class.getName()).log(Level.SEVERE, null, ex); } } private Initializable replaceSceneContent(String fxml) throws Exception { FXMLLoader loader = new FXMLLoader(); InputStream isin = MPClient.class.getResourceAsStream(fxml); loader.setBuilderFactory(new JavaFXBuilderFactory()); loader.setLocation(MPClient.class.getResource(fxml)); StackPane page = new StackPane(); try { page = (StackPane) loader.load(isin); } finally { isin.close(); } scene = new Scene(page, screenWidth, screenHeight, Color.BLACK); stage.setScene(scene); stage.sizeToScene(); return (Initializable) loader.getController(); } This is only part of the code I have used and is also only from my main class, but what I have done is when the program starts it does the gotoLogin first. The login screen's controller is made aware of the main screen via the method shown in Angela's blog, and when a user successfully logs in the controller calls on the main screen's method of gotoClientHome and pass it a user object. In your case you would be passing the information that the user had entered. I agree. Here's the link: subject: Submitting form data to a new screen using javafx Similar Threads MVC & Swing - How model classes work together ? Non Struts App -> Struts App - use splash screen and auto forward? Opening a new browser window from the applet and supplying data via POST Passing data from jsp to jsp Having forms and tables in success page and send it to action class on submit ? All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/621496/JavaFX/java/Submitting-form-data-screen-javafx
CC-MAIN-2014-10
refinedweb
728
54.02
Created on 2016-07-25 06:27 by mattrobenolt, last changed 2021-02-26 19:10 by eryksun. This also affects socket.getaddrinfo on macOS only, but is fine on Linux. I've not tested on Windows to see behavior there. Given the IP address `0177.0000.0000.0001`, which is a valid octal format representing `127.0.0.1`, we can see varying results. Confirmed in both python 2.7 and 3.5. First, socket.gethostbyname is always wrong, and always returns `177.0.0.1`: ``` >>> socket.gethostbyname('0177.0000.0000.0001') '177.0.0.1' ``` This can be seen on both Linux and macOS. With `socket.getaddrinfo`, resolution is correct on Linux, but the bad 177.0.0.1 on macOS. Linux: ``` >>> socket.getaddrinfo('0177.0000.0000.0001', None)[0] (2, 1, 6, '', ('127.0.0.1', 0)) ``` macOS: ``` >>> socket.getaddrinfo('0177.0000.0000.0001', None)[0] (2, 2, 17, '', ('177.0.0.1', 0)) ``` This behavior exists in both 2.7.12 and 3.5.2 at least. I haven't tested many others, but I assume pretty universal. This would appear to be a platform OS issue. Is it "broken" also for FreeBSD? (I put broken in quotes because interpreting ocatal isn't part of the posix speck for gethostbyname. It could even be an accident that it works on Linux. I'm not going to close this yet, since it might be worth a doc issue, or at least documenting here what the status of this is on FreeBSD. To clarify: by platform OS issue, I mean that the octal-conversion-or-not is none of Python's doing, it is done by the C library call that gethostbyname is a thin wrapper around. On Linux, it seems it's not an accident. inet_addr(3) explicitly says it can handle octal or haxadecimal forms. Sorry, to add a data point, in C, `gethostbyname` also does the correct thing on macOS. See: ``` #include <stdio.h> #include <errno.h> #include <netdb.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> int main(int argc, char *argv[]) { int i; struct hostent *lh = gethostbyname("0177.0000.0000.0001"); struct in_addr **addr_list; if (lh) { addr_list = (struct in_addr **)lh->h_addr_list; for (i=0; addr_list[i] != NULL; i++) { printf("%s", inet_ntoa(*addr_list[i])); } printf("\n"); } else { herror("gethostbyname"); } return 0; } ``` So I'm not sure this is platform specific. Either way, `socket.gethostbyname` is wrong on both linux and macOS. I'm a bit lost with what's going on here though, admittedly. :) And lastly, it seems that `socket.gethostbyname_ex` _does_ work correctly on both platforms. ``` >>> socket.gethostbyname_ex('0177.0000.0000.0001') ('0177.0000.0000.0001', [], ['127.0.0.1']) ``` Hmm. Since gethostbyname is a deprecated interface, perhaps there is nothing to do here. However, if someone wants to investigate further and finds a fix, we will evaluate it. Is it worth investigating the different behavior then with `getaddrinfo` between platforms? As far as I know, that's the only method that works with both ipv6 and will tell you "here are all the IP addresses this resolves to". A similar bug report can be seen at. There someone makes a conclusion that getaddrinfo (Python seems to use getaddrinfo to implement gethostbyname) doesn't work correctly with octal form. They finally ignore this inconsistent behaviour. Ah, I just confirmed broken behavior in macOS as well using `getaddrinfo()` in C. I guess I'd be ok with python ignoring this as well. Maybe worth a change to documentation to note this? socket.gethostbyname calls the internal function setipaddr, which tries to avoid a name resolution by first calling either inet_pton or inet_addr. Otherwise it calls getaddrinfo. Windows ------- setipaddr calls inet_addr, which supports octal [1]. ctypes example: ws2_32 = ctypes.WinDLL('ws2_32') in_addr = ctypes.c_ubyte * 4 ws2_32.inet_addr.restype = in_addr >>> ws2_32.inet_addr(b'0177.0000.0000.0001')[:] [127, 0, 0, 1] 3.5+ could call inet_pton since it was added in Vista. However, it does not support octal: >>> addr = in_addr() >>> ws2_32.inet_pton(socket.AF_INET, b'0177.0000.0000.0001', addr) 0 >>> ws2_32.inet_pton(socket.AF_INET, b'127.0.0.1', addr) 1 >>> addr[:] [127, 0, 0, 1] socket.inet_pton instead calls WSAStringToAddressA, which does support octal: >>> list(socket.inet_pton(socket.AF_INET, '0177.0000.0000.0001')) [127, 0, 0, 1] socket.gethostbyname_ex calls gethostbyname since gethostbyname_r isn't defined. This does not support octal and errors out: >>> socket.gethostbyname_ex('0177.0000.0000.0001') Traceback (most recent call last): File "<stdin>", line 1, in <module> socket.herror: [Errno 11001] host not found getaddrinfo also does not support octal and errors out: >>> socket.getaddrinfo('0177.0000.0000.0001', None)[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Program Files\Python35\lib\socket.py", line 732, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno 11001] getaddrinfo failed >>> ctypes.FormatError(11001) 'No such host is known.' [1]: @David The symptoms from FreeBSD look a little different: Only gethostbyname affected only on 2.7 and 3.3 on all freebsd versions (9, 10, 11). Python 3.2 was not tested (freebsd port was deleted), but likely affected as well Feels/Appears like a gethostbyname fix or other change affecting gethostbyname in 3.4, missing merges to 3.3, (likely 3.2) and 2.7. Full test matrix attached For what it is worth: the relevant standard says that octal and hexadecimal addresses should be accepted (POSIX getaddrinfo refers to inet_addr for numeric IP addresses and that says that octal and hexadecimal numbers are valid in IP addresses), see: Adding a implementation note to the documentation might be useful, but it should IMHO only mention that the platform getaddrinfo is used in the implementation for the Python functions and should not mention specific platforms because we don't have the processes to keep such specific notes up-to-date. I don't understand the point of the issue. Is it a documentation issue? Python doesn't parse anything: it's a thin wrapper on top of the standard C library. If you want to complain, report the issue to the maintainers of your C library ;-) > However, if someone wants to investigate further and finds a fix, we will evaluate it. IMHO the best fix is to document that the exact behaviour depends on the platform, and that only IPv4 decimal and IPv6 hexadecimal are portable. Corner cases like IPv4 octal addresses are not portable, you should write your own parser. Note: I checked ipaddress, it doesn't seem to support the funny octal addresses format. Why do you need octal addresses? What is your use case? :-p > Why do you need octal addresses? What is your use case? :-p I didn't, but an attacker leveraged this to bypass security. We had checks against `127.0.0.1`, but this resolved to `177.0.0.1` incorrectly, bypassing the check. We were using `socket.gethostbyname` which yielded this. See for a little bit more context. > I didn't, but an attacker leveraged this to bypass security. Ah, that's a real use case. Can you please rephrase the issue title to make it more explicit? Because in this issue, it's not obvious to me if octal addressses must be accepted on all platforms, or rejected on all platforms. There's also the fact that Eryk pointed out that there are different ways to implement this on Windows, so there might be something we want to "fix" there. It seems like we're not consistent in how we handle addresses in the various socket module functions. koobs' results are also interesting, since they indicate that *something* changed on the python side that affected this for freebsd. Update from my previous comment in 2016: in Python 3.7+, the socket module's setipaddr() function calls Winsock inet_pton() instead of inet_addr(), and falls back on getaddrinfo(). Neither supports octal addresses. At least using octal fails instead of mistakenly parsing as decimal.
https://bugs.python.org/issue27612
CC-MAIN-2021-17
refinedweb
1,339
61.33
Take the first step in raising your robot army and meet the Arduino, the microcontroller designed to be approachable and fun. From blinky lights to motors, temperature sensors to wi-fi, RFID to MIDI, you can make your code do stuff. Learn to look at projects in terms of outputs and inputs as this article walks you through the code and electronics to blink an LED and react to a pushbutton. You'll also see how simple it is to control a servo motor. Your experience as a developer will apply directly to the Arduino, which uses a language that extends C++. You'll also need to learn about electronics, and the Arduino makes a great platform for doing so. I'll introduce the basics here, to start your journey. Use to see descriptions of the different Arduino models, find places to buy one, and refer to the language reference. Your First Output: Blink an LED Making an LED (light-emitting diode) blink is the “hello, world” of electronics. You can think of your Arduino projects as inputs (buttons and sensors that feed facts about the world into your code), outputs (lights, buzzers, and motors that manifest your code’s intent), and your code running on the Arduino microcontroller as the conductor between them. See Figure 1 for an illustration. With that perspective, let’s start by controlling your first output. Components You’ll Need - Arduino - USB cable - Computer running the Arduino IDE - LED Connect the LED You connect components to your Arduino via the pin headers running down both sides of it. Small text printed on the green circuit board labels the pins; use that to find the pin headers for pin 13 and the GND (ground) pin. See Figure 2. An LED has two wires, called leads, and by convention they are different lengths. Plug the shorter lead (called the cathode) into the GND pin header and the longer lead (the anode) into the 13 pin header. If you get it backwards, the LED simply won’t light, but it’s otherwise harmless. If your LED is not lighting up when you get to the final step of testing your project, try flipping it around. Write the Code Arduino programs are called sketches. Every sketch must contain a setup() function and a loop() function. For reuse and readability, you can extract your logic into additional functions, but you have to have at least those two. The setup() function runs once, when your sketch starts, and the loop() function runs continuously, like the event loop in a game. This sketch blinks the LED in a syncopated rhythm. Enter the code into the Arduino IDE, shown in Figure 3. I’ll discuss the sketch line by line. #define led 13 void setup() { pinMode(led, OUTPUT); } void loop() { digitalWrite(led, HIGH); delay(700); digitalWrite(led, LOW); delay(300); } The first line in our blinky LED sketch is a precompiler directive to define the text “led” as a stand-in for “13”, to make the code easier to read by giving the pin a meaningful name, and easier to change if you decide to use a different pin. The setup() function calls the Arduino’s pinMode() function to establish pin 13 as an output pin, meaning the sketch writes instructions out to that pin. Later, when you add a pushbutton to the project, you’ll set up the button’s pin as an input, a pin from which yopu’ll read information. For an explanation of what the Arduino does electrically when you call pinMode(), see. Inside the loop() function, the sketch turns the LED on, pauses the thread, turns the LED off, and pauses again. The Arduino calls this function repeatedly, for as long as it has power. The digitalWrite() function takes two arguments: the pin to control, and the signal to send. As you might imagine, the Arduino also supports a digitalRead() function, which you’ll use later when you read the state of a button, and analogWrite() and analogRead() functions, for values that are from a range instead of binary. Lastly, the delay() function causes the program’s execution to wait for the specified number of milliseconds. Without the delay, you wouldn’t see the LED blink, because it would switch on and off faster than you could perceive. Upload the Sketch I used to think getting code onto a microcontroller was too daunting to attempt-something those hardware people do. The Arduino makes this delightfully simple. Use a USB cable to connect the Arduino to your computer, as in Figure 4, and click the Upload button in the Arduino IDE. The Getting Started guide at provides operating-system-specific instructions for selecting the right port and uploading your sketch. The chip on an Arduino comes preinstalled with a bootloader program. When the board gets power (such as via your computer’s USB port), the bootloader listens for new instructions coming over the wire. If it receives a new sketch, it stores that and starts it; if not, it runs the last sketch it stored. Expected Results Here are the steps you just completed. - You plugged an electronics component into the pin headers on your Arduino board; in this case, an LED serves as the output for your sketch. - You wrote a sketch in the Arduino IDE to tell the microcontroller how to control the LED. - You uploaded the sketch to your Arduino via USB, and that USB cable is now supplying power to the Arduino. You should see the LED blinking, on for 700 milliseconds then off for 300 milliseconds. Experiment with making additional calls to digitalWrite() and changing the delay() durations. Now that your sketch is stored on the Arduino, you could unplug the USB cable and power the board via a 9-volt battery instead. Listen for an Input: React to a Pushbutton If you’re going to make a tractable robot army, you need to make it respond to your commands. The Arduino can listen for inputs you control, such as buttons, joysticks, and flex sensors, as well as environmental sensors like light, temperature, and humidity sensors. Let’s add a button to the project to turn the LED on and off manually. Components You’ll Need - Pushbutton - 10K O resistor - Breadboard - Jumper wires for making connections on the breadboard - The setup from the LED project above Connect the Button You were able to plug the LED directly into the Arduino because the pins You needed, GND and 13, are side by side. For the button, You need more room to work, so you’ll use an electronics breadboard to make connections between the components. See Figure 5 and the sidebar “Breadboards for Prototyping.” In addition to the button, you also need a pull-up resistor. “Pull-up” describes,not the type of resistor, but the role it plays in this circuit. See the sidebar “Pull-Up Resistors” for more on that. Figure 6 is the schematic, including the button and its pull-up resistor, plus the LED from the first part of this project. Figure 7 shows how to lay the components out on the breadboard. Edit the Code Let’s add to the blinky LED sketch. Add another precompiler directive, to define “button” as an alias for pin 2. Also declare an integer variable named buttonState, to store the state of the button when you read it in, and initialize the variable with a value of 0. In the setup() function, add a call to pinMode() that sets the button pin to be an input. In the loop() function, read from the button and assign the result to the buttonState variable. buttonState = digitalRead(button); Next, write an if/else statement that lights the LED depending on the state of the button. Because you are using a pull-up resistor instead of a pull-down resistor, pin 2 will report that it is HIGH when the button is not pressed, and LOW when it is pressed. Therefore, your if statement will be backward from what you might have expected. if (buttonState == 1) { digitalWrite(led, LOW); } else { digitalWrite(led, HIGH); } No, I don’t like if/else statements, either, so let’s refactor that to a single line. digitalWrite(led, 1 - buttonState); It is 1 minus buttonState in order to invert the value, which is needed because of the pull-up resistor. 1 - 0 = 1, and 1 - 1 = 0, so that statement writes a LOW voltage to the LED when the button’s pin is HIGH, and vice versa. See Listing 1 for the final sketch. Upload the Sketch When you first connect the USB cable, you will see the LED blinking as the Arduino runs the prior sketch. When you upload this one, you will overwrite the old sketch. Expected Results Here are the steps you just completed. - You used a breadboard to add additional components to your circuit. - You read a schematic to understand what to build. - You used a pull-up resistor to make your digital pin read predictable, reliable results. - You edited your sketch to read from an input, evaluate conditional logic, and respond by controlling an output. The LED should remain off by default. When you press and hold the button, the LED will light for as long as you are holding the button. Continue to Play Now that you have completed “hello, world,” what else will you build with this small computer that manifests your code in the real world? Add a Motor The Arduino IDE ships with a number of libraries that make it straightforward to interact with more complex components, such as a servo motor. Beyond that, a vibrant open-source community contributes hundreds of libraries. Unlike a motor that spins continuously, a servo motor can be set to a specific position, usually between 0° and 180°. The Arduino’s Servo library lets you declare a Servo object and tell it to “write” to a specific location. This sketch uses the Servo library and instructs a servo motor to wave its arms back and forth. #include <Servo.h> Servo robot; void setup() { robot.attach(9); } void loop() { robot.write(105); delay(500); robot.write(75); delay(500); } The #include <Servo.h> statement brings the library into your sketch. The Servo robot; line declares a variable named robot of type Servo. In the setup() function, the robot is initialized to write to pin 9. In the loop() function, the Servo class’s write() method takes a value between 0 and 180, causing the Arduino to send the corresponding voltages to the servo motor so that it moves to the right position. Connecting the servo motor is also straightforward. A servo has three wires: one goes to power, one to ground, and one to the Arduino pin that will give it instructions. Figure 8 shows the schematic. The datasheet for the servo will tell you which wire is which. More Outputs and Inputs You’ve seen two outputs, an LED and a motor, and one input, a pushbutton. And there are so many more. From LCDs, speakers and MIDI, motors and relays, to GPS, bar code readers, magnetic field sensors, and touch sensors, your Arduino can sense the world around it and execute your code to change its environment. The Arduino Playground, a community-edited wiki, presents an imagination-spurring list of outputs and inputs at. Good Habits On your journey, two strategies will accelerate your progress, and they both relate to hanging on to what you’ve learned. First, keep a notebook. When you figure something out or learn a new concept, write yourself a note about it. Keep your drawings and schematics as you design and plan projects. It’s surprising how quickly and thoroughly you can forget something when you don’t work with it every day. Give yourself a reference so that when you sit back down to a work in progress, you can remind yourself where you left off. Second, use source control and commit often. (I open-source my sketches on GitHub.) Work a project incrementally, getting a small part working before going onto the next, and check that code into source control with a descriptive comment at each small step. Similar to your notebook, your commit history serves as an annotated reference as you learn. What’s Next? With outputs and inputs, you have the building blocks you need to make fun, interactive projects. What will you do next? Are there sensors and alerts that might be handy to have around the house? Electronics could play a role in your art. Consider adding some brains to your next Halloween costume. What projects might the kids in your life like to build with you? Build things. Share them. Have fun.
http://www.codemag.com/article/1305081
CC-MAIN-2016-30
refinedweb
2,127
71.04
java.lang.Object org.jboss.cache.Fqn<E>org.jboss.cache.Fqn<E> @Immutable public class Fqn<E> A Fully Qualified Name (Fqn) is a list of names (typically Strings but can be any Object), which represent a path to a particular Node or sometimes a Region in a Cache. ROOT), or relative to any node in the cache. Reading the documentation on each API call that makes use of Fqns will tell you whether the API expects a relative or absolute Fqn. For instance, using this class to fetch a particular node might look like this. (Here data on "Joe" is kept under the "Smith" surname node, under the "people" tree.) FqnAlternatively, the same Fqn could be constructed using a List abc = Fqn.fromString("/people/Smith/Joe/"); Node joesmith = Cache.getRoot().getChild(abc);
http://docs.jboss.org/jbosscache/3.2.1.GA/apidocs/org/jboss/cache/Fqn.html
CC-MAIN-2015-18
refinedweb
134
63.29
I am very new to C++. I have been reading some tutorial mainly on the cprogramming tutorial page. I wanted to but the very basic things I have learned to use and see if I could create just a little program. I ran into a problem and I looked for the answer but didn't get the answer I need. Here is my code and if you could tell me what I was doing wrong I would be very grateful. I am thinking it may be very off the wall. I just started learning C++ yesterday and I wrote all this without referencing back to anything. When I was done I tried to run it but it didn't work, so I went back and tried to find out what I had wrong but like I said it didn't work. So anways here it is. Code:#include <iostream> using namespace std; int main() { int enteredname; int sexofchar; cout<<"Welcome to Lords of the Land. This is a text-based Role-Playing game that will immerse you into a world with countless people to meet and activities to do. You will be able to create your character, hunt with your character, build your character anyway you fancy, join guilds and form alliances, and any other thing your imagination can conjure. Prepare to enter a world of pure imagination!!"; cout<<"First off I need your to pick your Characters name. So if you would please enter your characters name."; cin>> enteredname; cin.ignore(); cin.get(); cout<<"From now okay I shall call you "<<enteredname<<"\n"; cout<<"Next I need to know the gender of your character. Please type in either M or F. Letters are case sensitive."; cin>> sexofchar; cin.ignore(); cin.get(); if ( sexofchar = M ) { cout<<"Sir: "<<enteredname<<",I'm glad you could come join us on this Journey."; } else if ( sexofchar = F ) { cout<<"Madam: "<<enteredname<<",I'm glad you could come join us on this Journey."; } cin.get(); return 0; }
http://cboard.cprogramming.com/cplusplus-programming/59444-help-would-appreciated.html
CC-MAIN-2014-10
refinedweb
331
83.25
My new Flatiron School team, Anthony, Ei-lene, Eugene, and I are working on HandRaise, an application to handle student questions during lab or working sessions. HandRaise will help us keep track of which student questions are up next for teachers to answer. In this post, I explain how we implemented some basic authorization to allow or restrict specific users in taking certain actions. Authentication: Before we assigned user permissions, we first implemented authentication so that users may sign in and sign out of the application. We followed the RailsCast tutorial, Authentication from Scratch. Defining Permissions: Once we had authentication running, we defined user roles. In our User model, we defined users roles as admins and students. class User < ActiveRecord::Base attr_accessible :role USER_ROLES = { :admin => 0, :student => 10 } Then we wrote instance methods in our User model that allowed us to define which users were admins and students. def set_as_admin self.role = USER_ROLES[:admin] end def set_as_student self.role = USER_ROLES[:student] end Authorization: Next, we created methods to check if a user can edit the issue, destroy the issue, or mark the issue as resolved. We knew we wanted our authorization methods to read like English so that the other team members would be able to easily understand what is happening in the code. We wanted the following methods to work: def can_edit?(issue) true if owns?(issue) || admin? end def can_destroy?(issue) true if owns?(issue) || admin? end def can_resolve?(issue) true if owns?(issue) || admin? end In order for the methods above to work, we had to create the methods admin? and owns?(issue). The admin? method checks if the user is an admin. The owns?(issue) method let’s us pass an issue as an argument to see if the user created it. def owns?(issue) true if self.id == issue.user_id end def admin? true if self.role_name == :admin end def role_name User.user_roles.key(self.role) end def self.user_roles USER_ROLES end After testing in the console to make sure our new methods worked, we added them to our view. The links to edit, resolve, or delete issues are only shown to users that are authorized to access those actions. Below is an example of how we used our new authorization methods in the view to check if a user has the ability to edit an issue: <% if @current_user.can_edit?(issue) %> <%= link_to 'Edit', edit_issue_path(issue) %> <% end %> Next Steps: As you can see, we have some repetition in our authorization methods. The conditions for the can_edit?, can_resolve?, and can_delete? are all the same. Our next steps will include refactoring this code so we keep it dry. Resources: - Ryan Bates wrote an awesome authorization gem for Ruby on Rails called CanCan - RailsCast on how to implement Authorization using CanCan - While writing this post, I listened to this mixtape by Diplo from the Diplo & Friends radio show on BBC 1xtra. It’s good.
http://janeeats.github.io/blog/2013/03/23/authorization-basics-in-rails/
CC-MAIN-2017-30
refinedweb
485
57.87
04 January 2007 20:23 [Source: ICIS news] By Joseph Chang ?xml:namespace> KeyBanc Capital Markets analyst Michael Sison had the winning pick in 2006 with For 2007, the analyst said he likes specialty chemical and aerospace materials maker Cytec Industries, which trades at around $57 (€43)/share. “Cytec has underperformed recently, but there is a lot of earnings power potential if they can turn around their Surface Specialties division,” Sison said. “Coupled with commercial aerospace exposure, the environment for Cytec looks good.” The analyst has a price target of $69 on Cytec and expects earnings per share (EPS) to rise by 16% from 3.46 in 2006 to $4 in 2007. Lehman Brothers analyst Sergey Vasnetsov highlighted Celanese as his top pick for 2007 after having success with the same stock in 2006. Celanese trades at around $25/share. “Celanese is our best long idea among commodity chemicals,” he said. “The company will continue to capitalise on its strong market share and advantaged cost position in acetic acid and VAM, along with improvements from its restructuring programme.” Vasnetsov said he expects Celanese’s EPS to rise by 9% from $2.85 in 2006 to $3.10 in 2007. Deutsche Bank Securities analyst David Begleiter also doubled down on Celanese as his top pick for two years in a row. “It is mispriced at just 8.9 [times] 2007 EPS versus its differentiated peers at 13-14 times,” said the analyst. “The overhang by the private equity sponsors has been the key issue for Celanese, but Blackstone is now down to just 14% - a negligible position.” Banc of America Securities analyst Kevin McCarthy also highlighted Celanese as his top pick, with a $29/share price target. “The company has an attractive hybrid business model, a low-cost position with proprietary technology and the greatest sales concentration in both Asia and ($1 = €0.76) (Look for the full story on Wall Street’s top picks for 2007 in the 8 January issue of ICIS Chemical Business Americ.
http://www.icis.com/Articles/2007/01/04/1117925/outlook+07+wall+street+unveils+top+picks.html
CC-MAIN-2013-20
refinedweb
336
63.59