text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Why does Java Webstart take so long to start downloading the application? I used Netbeans to launch a local Java Webstart application and I see the following: "Java6 ..." logo comes up in under a second [4-8 seconds elapse] "Downloading application" displays for a second My application begins running My question is what is "Java6 ..." doing for 4-8 seconds? The HD light barely flickers during this time, however I noticed 1 out of my 4 CPU cores registers 85% activity. Two questions there: 1) What could possibly be so cpu-intensive when the application hasn't launched yet? Maybe "Java6 ..." just a GIF and the actual JVM is taking 4-8 seconds to launch? 2) Secondly, why isn't Webstart using multiple cores? I mean, if you could launch 4 times faster by using all 4 of my cores that would sure help ;) Thanks, Gili Roger, Please download (38MB) and let me know when this is done. If anyone else downloads this file I will be forced to take it offline sooner, so please guys leave this file for Roger alone :) Here is what I see: - If I rename "deployment.dirty" to C:\Users\Gili\AppData\LocalLow\Sun\Java\Deployment then running the testcase causes "Java6..." to display 3 dots on average. - If I rename "deployment.clean" to C:\Users\Gili\AppData\LocalLow\Sun\Java\Deployment then running the testcase causes "Java6..." to display 1 dot on average. This gets worse the more times I run the application (say one dot per 5 minutes I try this). In order to reproduce the problem faster I modified the testcase to depend on: JFreeChart javamail jaxen joda-time but you can generally speaking throw any JAR files at it on your end. The more JAR files, the quicker performance will degrade. I hope the snapshot of my cache directory will give you a hint of what is wrong. Gili I have opened a CR for this issue: CR 6720393. I seem to be able to reproduce it with 6u10-b25 only, but not b26 or later. Need to investigate further on this. I have included both review ID under this CR. Thank you for reporting the issue and the information provided! -- RY Gilli, I am trying to reproduce the issue with your sample. But I cannot reproduce it. Do you have metrics about the usual time and longest time your provided code launches? I encountered zombie process left behind if I try to launch your app many times (), but not able to reproduce the long startup time issue. Regards, RY Roger, I am still trying to find a more reliable way to reproduce the problem but Vista makes it very difficult. I'm trying to avoid SuperFetch and disk access to "C:\System Volume Information" from interfering with these tests :) In the meantime I found another bug which might be related. Up until now I have been running the application sequentially but I just tried running a few instances concurrently and I ran into a race-condition bug. If you launch the same application multiple times simultaneously you will get the following exception: java.io.EOFException: encoding.error.not.xml at com.sun.deploy.xml.XMLEncoding.decodeXML(XMLEncoding.java:48) at com.sun.javaws.jnl.XMLFormat.parse(XMLFormat.java:62) at com.sun.javaws.jnl.LaunchDescFactory.buildDescriptor(LaunchDescFactory.java:59) at com.sun.javaws.jnl.LaunchDescFactory.buildDescriptor(LaunchDescFactory.java:68) at com.sun.javaws.Launcher.updateFinalLaunchDesc(Launcher.java:266) at com.sun.javaws.Launcher.prepareToLaunch(Launcher.java:167) at com.sun.javaws.Launcher.launch(Launcher.java:111) at com.sun.javaws.Main.launchApp(Main.java:306) at com.sun.javaws.Main.continueInSecureThread(Main.java:210) at com.sun.javaws.Main$1.run(Main.java:107) at java.lang.Thread.run(Thread.java:619) Once that happens, the Webstart cache is corrupt and the application cannot be run again. It is worth noting that when I opened up the "Java Cache Viewer" it showed same JNLP file twice under the "resources" tab, something which I thought was supposed to be impossible. The only way I could fix the problem was by running "javaws.exe -uninstall". Should I file a bug report for this one too? Gili Gili, I recommend you please do post that bug. Btw, may name is Roger also, but I know you are addressing RY :-) ... RB How confusing ;) So which one of you is Roger from Sun? Okay, I've posted a bug report to BugParade. It's Review ID is 1279784. Re the "failure to launch and leave no trail", I can reproduce this problem with any webstart app using b27. Although the problem is intermittent and may be related to the other issues discussed in this thread, I have filed a bug, 6726716, for this specific problem that should appear in a couple of days (unless they determine it is a duplicate). I see one link that isn't 6u10. This is (for me anyway) only a 6u10 issue. 6u6 works fine. I see a reference to 6u10 b26. I am on b25 and don't see any b26 out yet. Can I assume that was a typo? I see the problem in IE7 and Firefox 3. I cannot provide a test case because I cannot reliably reproduce it, especially not now. I just spent half the day trying to figure out why Vista would not let me rename my cache directory even though I have every priv and I could not find anything that admitted to having anything in the cache opened. I ended up cleaning up my Vista and stopping about 20-30 startup programs and services. After the reboot I could rename cache (not a shock), did so, and download is fast. I haven't gotten up to about 20 times yet. We'll see if it changes. So right now I cannot reproduce the problem. I actually have another problem now, which is some applications download fine, but fail to launch and leave no trail. I am going to study the JNLP files to see if there is any difference between those that work and those that don't. More on that later if I can define the problem. There are no large images involved. ...RB rb5563, Process Explorer allows you to find out which process has a file or directory locked: Gili Yes, and it said nothing was opened. Vista first popped up the window requesting confirmation, then the UAC window asking for permission, and then finally an error saying "You need permission". It does not say anything is locked (it would be nice if it gave me a "file access conflict" if that was the case). I own the directory, have every permission, and have unselected the permission interlocks with parent folders. This too is not reproducible at will, but is also obviously not a Java issue. The problem I'm seeing goes away if the Java cache is emptied. Is it the case for you? Try running javaws -uninstall or just renaming or deleting your cache directory (C:\Users\username\AppData\LocalLow\Sun\Java\Deployment by default under Vista I think). I wonder if this is related to the slow downloading of remote images/files I mentioned at Does emptying your cache help? Which browser are you using? What is your network connection setting? Are you seeing similar issue as: Thanks, RY I don't think it's the same bug. I only ever use FireFox, and on my machine both FF and IE have proxies completely disabled. Could you please provide a test case where we can reproduce the issue? Thanks, RY Hi Roger, Here is what worked for me: WebstartBug.java ----------------------------------- public class Main { public static void main(String[] args) { System.exit(1); } } launch.jnlp ------------------------------------ ------------------------------------------------- The JAR dependencies seem to be the key. If you run the program 20 times or so you will notice the startup times increasing (slowly). I am expecting the "Java6..." animation to display for at most 1 second. It adds one dot per animation so eventually I see the startup time go from one dot all the way up to 3 dots (after about 20 runs) and continue to grow the more I run it. Again, all files are running from a local codebase and the delay occurs *before* Webstart begins downloading updates from the server. Gili Gili, When you say "here is what worked" you mean this is what reproduces the problem/issue, per RY's request, right? Or do you mean this solved it? When you say the "JAR dependencies seem to be the key" how do you mean that? Could you elaborate? Btw, when I was getting the problem, it was in all phases (Java 6 splash with the dot dot dot, and download (which would stall), and also launching the application post download). ...RB "here is what worked" means here is a testcase for reproducing the problem. It seems that when you have "JAR dependencies" the cache seems to take longer at the "Java6..." phase. If you have a program without any dependencies I'm not sure the bug will occur (or if it does it would take 100s of runs to notice it). The more dependencies you have, the easier it seems to be to reproduce it. I don't know about the download stall because as I mentioned in my case I'm using a local codebase. My personal testcase is only for the "Java6..." part. It could very well be that solving my problem will solve it for you as well but you need to come up with a testcase for reproducing it 100% of the time. Gili, Your observation that running the same application 20 times (serially, I suppose, not concurrently) causes slowdown intrigues me, are you noticing that your cache directory grows over this period? I would have thought that after downloading the first time nothing should change, right? Robert I would have thought the same, but apparently this isn't the case. What I'm wondering is if the cache grows or just changes structure in some way. Comparing the two cache directories I would say the following: - Many new *.idx files added (though they are tiny - 221 bytes) - *.lap file indicates that the dirty cache saw 55 launches of the testcase whereas the clean one only saw 3. I believe the IDX files are the culprit because I see 55 idx files in the bad cache versus 3 in the good cache. It looks as if Webstart adds a new file per run and the overhead of processing an ever-growing list of indexes (regardless of how small each one is) is killing it :) What I find even more interesting is that the bug is CPU-bound not disk-bound. You didn't mention O/S I am seeing extreme slowness on Windows Vista but no problem on XP. Downlod keeps going into "stalled" state and takes over a minute when it should take about 3 sconds. Anyone else seeing this? I am running Vista 64-bit on my end. I can't comment on the download time because I'm running a local Webstart application (file://...) When you think about it, that makes it even worse :) Webstart has no excuse to take so long to start up when all my components are local. I would further point out that if I invoke "javaws -uninstall", run the same local application 20 times the problem comes back. At this point the cache should only contain references to my local application so it isn't clear why it would become slow. Secondly, I think taking 3 seconds to start up or download a local application (if it's already cached) is unreasonable. Ideally Webstart should start in under a second, two seconds being the limit. Anything longer than a second is very noticeable from an end-user point of view so it should really be the exception not the rule. I have some good news and bad news. The good news is that running "javaws -uninstall" reduced the delay to 1 second, so we know the problem is somehow related to the Java Webstart cache. The bad news is that I just destroyed my reproducible testcase so I can't track down the exact source of the problem. I think we can safely assume that as the cache grows, the delay somehow increases. Maybe the algorithm doesn't scale well as the number of entries increases? That being said, I am really worried by this behavior because I had almost no applications in my cache (I rarely use Webstart). Is it possible that Webstart's cache tries contacting the web servers of each entry in the cache on startup? I really hope not, but that would definitely explain what I've been seeing. Has anyone else been able to reproduce this? We have some beta testers here who are having the same problem. Extreme slowness when starting applets and Web Start applications which disappears when "javaws -uninstall" is run. If any Sun people are interested, I have asked one of our users not to apply the fix, so I have a broken system available for experimentation. Bug filed, review ID 1279673 Oops, I also filed a bug report ;) Mine has review ID 1279666 Hopefully Sun will merge the two. A (presumably) related symptom is that launching the cache viewer jdk1.6.0_10/bin/javaws -viewer takes twice or 3 times as long as jdk1.6.0_06/bin/javaws -viewer I've attached HPROF sampling profiles.
https://www.java.net/node/679511
CC-MAIN-2015-18
refinedweb
2,261
65.01
EclipseLink/UserGuide/JPA/Basic JPA Development/Caching/Query Cache EclipseLink JPA Key API Native API Contents Query Results Cache The EclipseLink query results cache allows the results of named queries to be cached, similar to how objects are cached. By default in EclipseLink all queries access the database, unless they are by Id, or by cache indexed fields. The resulting rows will still be resolved with the cache, and further queries for relationship will be avoided if the object is cached, but the original query will always access the database. EclipseLink does have options for querying the cache, but these options are not used by default, as EclipseLink cannot assume that all of the objects in the database are in the cache. The query results cache allows for non-indexed and result list queries to still benefit from caching. The query results cache is indexed by the name of the query, and the parameters of the query. Only named queries can have their results cached, dynamic queries cannot use the query results cache. As well, if you modify a named query before execution, such as setting hints or properties, then it cannot use the cached results. The query results cache does not pick up committed changes from the application as the object cache does. It should only be used to cache read-only objects, or should use an invalidation policy to avoid caching stale results. Committed changes to the objects in the result set will still be picked up, by changes that affect the results set (such as new or changed objects that should be added/removed from the result set) will not be picked up. The query results cache supports a fixed size, cache type, and invalidation options. Configuring Query Results Cache The query results cache is configured through query hints. Query results cache annotation example ... @Entity @NamedQuery( name="findAllEmployeesInCity", query="Select e from Employee e where e.address.city = :city", hints={ @Hint(name="eclipselink.query-results-cache", value="true") @Hint(name="eclipselink.query-results-cache.size", value="500") }) public class Employee { ... } Query results cache XML example <?xml version="1.0"?> <entity-mappings <entity name="Employee" class="org.acme.Employee" access="FIELD"> <named-query <hint name="eclipselink.query-results-cache" value="true"/> <hint name="eclipselink.query-results-cache.size" value="500"/> </named-query> ... </entity> </entity-mappings> Query results cache query example Query query = em.createNamedQuery("findAllEmployeesInCity"); query.setParameter("city", "Ottawa"); List<Employee> employees = query.getResultList();
http://wiki.eclipse.org/index.php?title=EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Query_Cache&amp;action=info
CC-MAIN-2019-22
refinedweb
406
57.16
I needed to create a script of medium complexity that goes along these lines: 1. Take a list of databases. 2. Each database has an SVN folder, list of files to execute (with possible wildcards), and list of files to skip. 3. For each database, perform “get” on the SVN folder, ignore the excluded files and run files from the “execute” list in the order specified, respecting the wildcards. What do they use to write such scripts these days? My first attempt was in NAnt. We already use NAnt for continuous integration and it seems to work reasonably well with files and external programs. The problem I had with NAnt is that it lacks adequate control structures and data types. All properties are strings, and I needed structured data. The only type of for loop is iteration over a delimited string (or files in a directory). There are no parameterized function calls, although they can be simulated (poorly) with setting global properties and calling a target. Then I briefly thought about writing it in C#, but rejected the idea, since I wanted the script to be easily modifiable, and the compilation step gets in the way of that. Also, in compiled languages like C# script parameters are traditionally passed either via command line or via config file that must be parsed. My parameters were too big for the command line, and writing a parser was not in my plans. It is possible to do dynamic compilation in C#, but it is quite cumbersome. You don’t want your config file to start with something like using System; class Config { ConfigItem[] Data = ... , do you? My next thought was Powershell, but this one is shipped only with the newest version of Windows Server, and we are not there yet. On older versions it requires installation with admin rights. It may be easier to get an audience with the Pope of Rome than to install an app with admin rights on a production box in a huge financial corporation. I then settled on Javascript (WSH), and it worked, but I was plagued by various issues. First off, WSH does not seem to have a built-in way to execute an external command that sends output to current console window and wait for the result. You get either asynchronous execution without wait in current window ( Exec), or optionally synchronous execution in external window ( Run). I ended up writing a piece of code that uses Exec and then polls the process for the exit status. Also, WSH version of JavaScript lacks support for include files. My script became big enough that I wanted it, primarily to separate config from code. One can use an eval call to read and execute external file, or use WSF files, but both of these options greatly mess up line numbers and make detecting errors virtually impossible. var fileSystem = new ActiveXObject("Scripting.FileSystemObject"); function include(fileName) { eval(fileSystem.OpenTextFile(fileName, 1).ReadAll()); } There are, of course, other alternatives: maybe I will try Python next time. But the situation with a scripting solution available on all currently active version of Windows is bleak. Powershell would be a reasonably good answer if not for Microsoft’s love of admin-rights installers. Requiring an admin right installer is a huge demotivator in the corporate world. 1 Comment Permalink Hi Ivan, Look at this Regards, Oleg.
https://ikriv.com/blog/?p=1228
CC-MAIN-2021-43
refinedweb
562
62.98
FPC Advantages From Free Pascal wiki Advantages of the Free Pascal compiler This page should clarify the strong points of the Free Pascal compiler (FPC), and present some of the benefits of using it. - No Makefiles – Unlike most programming languages, FPC does not need Makefiles. You can save a huge amount of time, the compiler just figures out by itself which files need to be recompiled. - Very fast compilation – Pascal compilers are Fast with a big F and FPC is no exception. Just hit the compile key and it's done, even for large programs. - Each unit has its own identifiers – In FPC you never need to worry about polluting the namespace, like in C where an identifier needs to be globally unique across the entire program. No, FPC helds for each unit a very own namespace and that's very relaxed. - Integrated development environment – FPC comes with an IDE which works on several platforms. You can write, compile and debug your programs within the IDE. You will save huge amounts of time using the IDE, the best programming friend you have. - Smartlinking – FPC does smart linking leaving out any variable or code that you do not use. That makes small programs small with a big S, while they are still statically linked, avoiding DLL hell! - Distribution independence (Linux) – As a result of this, software compiled by the Linux version of FPC.
http://wiki.freepascal.org/FPC_Advantages
CC-MAIN-2018-30
refinedweb
232
70.43
6.3 Assignment statements Assignment.3). ‘a, b = "xy"’ is now legal as long as the string has the right length.) Assignment of an object to a single target is recursively defined as follows. - If the target is an identifier (name): - If the name does not occur in a globalstatement in the current code block: the name is bound to the object in the current local namespace. - Otherwise: the name is bound to the object in the current global namespace. -' (for example ‘a, b = b, a’ swaps two variables), overlaps within the collection of assigned-to variables are not safe! For instance, the following program prints ‘[0, 2]’: x = [0, 1] i = 0 i, x[i] = 1, 2 print x
http://www.network-theory.co.uk/docs/pylang/Assignmentstatements.html
crawl-001
refinedweb
120
71.55
10 April 2013 23:26 [Source: ICIS news] HOUSTON (ICIS)--US-based producer Ecolab has completed its acquisition of ?xml:namespace> The total transaction value was about $2.3bn (€1.7bn), Ecolab said in a press release. Ecolab had said earlier this week that it was awaiting approval by the federal government for the acquisition after the US Department of Justice’s (DOJ) antitrust division filed a civil lawsuit in district court to block the proposed deal. But Ecolab reached a settlement with the antitrust regulators that requires the divestiture of Champion’s chemical-management services for deepwater oil and gas wells in the The Champion deal was first announced in Oct
http://www.icis.com/Articles/2013/04/10/9657840/us-ecolab-completes-purchase-of-champion-technologies.html
CC-MAIN-2014-42
refinedweb
112
50.57
App::CamelPKI::SysV::Apache - Modeling the Camel-PKI web server. use App::CamelPKI::SysV::Apache; use App::CamelPKI::Error; my $apache = load App::CamelPKI::SysV::Apache($directory); $apache->set_keys(-certificate => $cert, -key => $key, -certification_chain => [ $opcacert, $rootcacert ]); $apache->https_port(443); try { $apache->start(); } catch App::CamelPKI::Error::OtherProcess with { die "Dude, your Apache is out of whack!" if $apache->is_wedged; die "Could not start Apache: " . $apache->tail_error_logfile(); }; $apache->update_crl($crl); $apache->stop(); Instances of App::CamelPKI::SysV::Apache each represent an Apache Web server that serves the App-PKI application. App::CamelPKI::SysV::Apache encapsulates all the system- and distribution-specific knowledge needed to run an Apache web server: it knows how to create a configuration file, start and stop it, manage its PID file, log files, and so on. In the current implementation, an instance of App::CamelPKI::SysV::Apache only listens to one TCP port in HTTP/S, and the URLs are (mostly) interpreted relative to App::CamelPKI's standard URL namespace. The essential feature of App::CamelPKI::SysV::Apache that the default, Catalyst-provided server lacks ( camel_pki_server.pl) is the support for client-side authentication using SSL certificates. Thanks to this feature, App-PKI is able to use itself for its own authentication needs. Creates and returns an instance of App::CamelPKI::SysV::Apache by loading it from the file system. Like all constructors that take a directory as argument, load is subdued to capability discipline using App::CamelPKI::RestrictedClassMethod. $directory is the path where the server's various persistent files are stored (configuration file, PID file, cryptographic keys etc). $directory must exist, but can be empty. Gets or set the port on which the daemon will listen for HTTP/S requests. The default value is 443 if the current process is privileged enough to bind to it, or 3443 otherwise. This port number is persisted onto disk and therefore only needs to be set once. Gets, sets or disables the test PHP script directory in this instance of App::CamelPKI::SysV::Apache. The default is to disable this feature, which only serves for Camel-PKI's self-tests (unit and integration). The value of test_php_directory is persisted to disk, so that it need not be reset at each construction. It only takes effect the next time the server is restarted with "start". Gets or sets the "has App-PKI" flag, which defaults to true. Instances of App::CamelPKI::SysV::Apache that have has_camel_pki() set to false do not contain the Camel-PKI application. Again, this is only useful for tests. The value of has_camel_pki is persisted to disk, so that it need not be reset at each construction. It only takes effect the next time the server is restarted with "start". Installs key material that will allow this Apache daemon to authenticate itself to its HTTP/S clients ($cert and $key, which must be instances of App::CamelPKI::Certificate and App::CamelPKI::PrivateKey respectively), and also to verify the identity of HTTP/S clients that themselves use a certificate (@chain, which is a list of instances of App::CamelPKI::Certificate; see also "update_crl"). If $cert is a self-signed certificate, -certification_chain and its parameter \@chain may be omitted. Returns true if and only if the ad-hoc cryptographic material has been added to this Web server using "set_keys". Returns the Web server's SSL certificate, as an instance of App::CamelPKI::Certificate. Given $crl, an instance of App::CamelPKI::CRL, verifies the signature thereof and stores it into this Apache server if and only if it matches one of the CAs previously installed using "set_keys"' -certificate_chain named option, and $crl is older than any CRL previously added with update_crl(). If these security checks are successful and Apache is already running, it will be restarted so as to take the new CRL into account immediately. Note that a Web server works perfectly without a CRL, and therefore calling update_crl is optional. However, remember that CRLs have expiration dates: once a CRL has been installed using this method, one should plan for a suitable mechanism (e.g. a crontab entry) that will download updated CRLs on a regular basis and submit them using update_crl(). Starts the daemon synchronously, meaning that start will only return control to its caller after ensuring that the Apache process wrote its PID file and bound to its TCP port. start() is idempotent, and terminates immediately if the serveur is already up. An "App::CamelPKI::Error::OtherProcess" in App::CamelPKI::Error exception will be thrown if the server doesn't answer within "async_timeout" seconds. An "App::CamelPKI::Error::User" in App::CamelPKI::Error exception will be thrown if one attempts to start() the server before providing it with its certificate and key with "set_keys". Available named options are: Starts Apache under the strace debug command, storing all results into $strace_logfile. Starts Apache with the -X option, which causes it to launch only one worker and to not detach from the terminal. Starts Apache under the GNU debugger attached to tty $tty (or the current tty, if the value 1 is specified). Incompatible with -strace. If this option is specified, start() will not time out after "async_timeout" seconds, but will instead wait an unlimited amount of time for the server to come up. Don't fork a subprocess, use the exec system call instead (see "exec" in perlfunc) to run Apache directly (or more usefully, some combination of Apache and a debugger, according to the above named options). The current UNIX process will turn into Apache, and the start method will therefore never return. Stops the daemon synchronously, meaning that stop will only return control to its caller after ensuring that the Apache process whose PID is in the PID file is terminated, and the TCP port is closed. Like "start", this method is idempotent and returns immediately if the server was already down. An exception of class "App::CamelPKI::Error::OtherProcess" in App::CamelPKI::Error will be thrown if the server still hasn't stopped after "async_timeout" seconds. Note that the "started" or "stopped" state is persisted to the filesystem using the usual UNIX PID file mechanism; therefore it is not necessary to use the same Perl object (or even the same process) to "start" and stop() a given server. Returns true iff the PID file currently contains the PID of a live Apache process, and one can connect to the TCP port. Returns true iff the PID file (if it exists at all) contains something that is not the PID of a live Apache process, and the TCP port is closed. Returns true iff neither "is_stopped", nor "is_started" are true (e.g. if the TCP port is taken, but not by us). One cannot call "start" or "stop" against an instance of App::CamelPKI::SysV::Apache that is_wedged() ("App::CamelPKI::Error::OtherProcess" in App::CamelPKI::Error exceptions would be thrown). More generally, neither can one call any method that act upon other processes such as "update_crl". The systems administrator therefore needs to take manual corrective action to get out of this state. Returns true if Apache id installed and has perl support as a static or shared module, false otherwise. Returns true if Apache id installed and has php support as a static or shared module, false otherwise. Returns true iff the Perl interpreter we're currently running under is a mod_perl belonging to this object's App::CamelPKI::SysV::Apache instance. Returns true iff the Perl interpreter currently running is mod_perl. Contrary to "is_current_interpreter", this method returns true even if called from within another Apache container; in other words it doesn't look at $self, and indeed it can be called as a class method too. Gets or sets the maximum time (in seconds) that "start" and "stop" will wait for the Apache server to come up (resp. down). The default value is 20 seconds; it does not get persisted, and therefore must be set by caller code after each "load". Returns the amount of text that was appended to the error log file since the object was created since the previous call to tail_eror_logfile() (or barring that, to "load"). Returns undef if the log file does not exist (yet). In App::CamelPKI::Apache's current implementation, only Apache 2 for Ubuntu Edgy is supported. However, the encapsulation of the class makes it easy to support other environments, without changing anything in the rest of Camel-PKI.
http://search.cpan.org/~grm/App-CamelPKI-0.07/lib/App/CamelPKI/SysV/Apache.pm
CC-MAIN-2015-48
refinedweb
1,404
51.99
Queue: Queues are dynamic collections which have some concept of order. A Queue is a collection where elements are processed first in, first out (FIFO). The item is put first in the queue is read first. The item in the end is read last. Real life Example of queues – 1) Patients waiting outside the doctor's clinic: The patient who comes first visits the doctor first, and the patient who comes last visits the doctor last. Therefore, it follows the first-in-first-out (FIFO) strategy of queue. 2) Queue in the Bank 3) Queue at the bus stand 4) Queue at the Ration Examples of queues in Software systems : 1) 1) Print jobs waiting to be processed in a print queue 2) 2) A thread waiting for the CPU in a round-robin fashion. Often there are queues where the elements processed differ in their priority. For example, in the queue at the Bank, Gold card customers are processed before common customers. Here, multiple queues can be used, one queue for every priority. In the bank this can easily be found out, because there are separate queues for Gold card and common customers. You can have an array or a list of queues where on item in the array stands for a priority. Within every array item there is a queue, where processing happens with the FIFO principle. Implementing classes and Interfaces : In C# queue is implemented with Queue<T> class in the namespace System.Collections.Generic. It implements the interfaces IEnumerable & ICollection. Some useful methods of queue : 1) To create a queue: Queue<int> p = new Queue<int>(); 2) To add element to it : p.Enqueue(i * 10); 3) To remove element from queue : p.Dequeue( ); 4) To count total elements in queue: p.Count; using System;using System.Collections.Generic; namespace ConsoleApplication17{ class Class1 { public static void Main() { int i; Queue<int> p = new Queue<int>(); for (i = 0; i < 5; i++) { p.Enqueue(i * 10); } Console.WriteLine("Total elements : " + p.Count); Console.WriteLine("Elements is queue are :"); for (i = 0; i < 5; i++) { Console.WriteLine("Element {0}: {1} ", i, p.Dequeue()); } Console.WriteLine("Total elements : "+ p.Count); } }} Latest Articles Latest Articles from Gajananbok Login to post response
http://www.dotnetfunda.com/articles/show/1692/introducing-queue-in-csharp
CC-MAIN-2017-09
refinedweb
371
66.74
Animation Node experiment - script node spiral For this animation I used the python script node of the Animation-Node-Addon to create a list of coordinates for a bezier-curve. you can download the blend file here First script node and connected it to a loop I used the following script to return 3 float lists containing the x, y and z coordinates of the control points. import math lenght = 100 zlist = [ x/4 for x in range(-lenght,lenght)] ylist = [math.sin(x) * math.exp( - x**2/1000) for x in range(-lenght,lenght)] xlist = [math.cos(x) * math.exp( - x**2/1000) for x in range(-lenght,lenght)] Then I used a loop to add the points to the curve using this node setup. I´m pretty sure this is not the most elegant way to solve the problem, but it does the job The I duplicated the resulting curve and used a emission shader on the first one and a slightly transparent diffuse shader on the second curve. The animation is created using a build modifier on each node. See also: AN experiment - delayed instanciation Animation Node experiment - circular sound visualizer Animation Node experiment - polar coordinates Animation Node experiment - cube grid Thank you very much for the 'how to' explanations under the last few of your experiments. Much appreciated, as I am trying to come to grips with the Animation-Node-Addon myself at the moment.The guy, who created it, is brilliant! I am looking forward to seeing more of your experiments with this addon and will be grateful for more explanations! Thanks also for all your previous experiments and for all your blends which I always download. I've learnt a lot from them!
https://www.local-guru.net/blog/2015/11/19/Animation-Node-experiment---script-node-spiral
CC-MAIN-2019-09
refinedweb
289
50.87
To understand this example, you should have the knowledge of following C programming topics: #include <stdio.h> int main() { int n, i; float num[100], sum = 0.0, average; printf("Enter the numbers of elements: "); scanf("%d", &n); while (n > 100 || n <= 0) { printf("Error! number should in range of (1 to 100).\n"); printf("Enter the number again: "); scanf("%d", &n); } for(i = 0; i < n; ++i) { printf("%d. Enter number: ", i+1); scanf("%f", &num[i]); sum += num[i]; } average = sum / n; printf("Average = %.2f", average); return 0; } Output Enter the numbers of elements: 6 1. Enter number: 45.3 2. Enter number: 67.5 3. Enter number: -45.6 4. Enter number: 20.34 5. Enter number: 33 6. Enter number: 45.6 Average = 27.69 This program takes the number of elements in the array and stores in the variable n. Then, the for loop gets all the elements from the user and stores the sum of the entered numbers in sum. Finally, the average is calculated by dividing sum by the number of elements n. It takes a lot of effort and cost to maintain Programiz. We would be grateful if you support us by either: Disabling AdBlock on Programiz. We do not use intrusive ads. orDonate on Paypal
https://www.programiz.com/c-programming/examples/average-arrays
CC-MAIN-2019-09
refinedweb
214
70.7
ASP.NET WebGrid - Get the Most out of WebGrid in ASP.NET MVC By Stuart Leeks | July 2011 Earlier this year Microsoft released ASP.NET MVC version 3 (asp.net/mvc), as well as a new product called WebMatrix (asp.net/webmatrix). The WebMatrix release included a number of productivity helpers to simplify tasks such as rendering charts and tabular data. One of these helpers, WebGrid, lets you render tabular data in a very simple manner with support for custom formatting of columns, paging, sorting and asynchronous updates via AJAX. In this article, I’ll introduce WebGrid and show how it can be used in ASP.NET MVC 3, then take a look at how to really get the most out of it in an ASP.NET MVC solution. (For an overview of WebMatrix—and the Razor syntax that will be used in this article—see Clark Sell’s article, “Introduction to WebMatrix,” in the April 2011 issue at msdn.microsoft.com/magazine/gg983489). This article looks at how to fit the WebGrid component into an ASP.NET MVC environment to enable you to be productive when rendering tabular data. I’ll be focusing on WebGrid from an ASP.NET MVC aspect: creating a strongly typed version of WebGrid with full IntelliSense, hooking into the WebGrid support for server-side paging and adding AJAX functionality that degrades gracefully when scripting is disabled. The working samples build on top of a service that provides access to the AdventureWorksLT database via the Entity Framework. If you’re interested in the data-access code, it’s available in the code download, and you might also want to check out Julie Lerman’s article, “Server-Side Paging with the Entity Framework and ASP.NET MVC 3,” in the March 2011 issue (msdn.microsoft.com/magazine/gg650669). Getting Started with WebGrid To show a simple example of WebGrid, I’ve set up an ASP.NET MVC action that simply passes an IEnumerable<Product> to the view. I’m using the Razor view engine for most of this article, but later I’ll also discuss how the WebForms view engine can be used. My ProductController class has the following action: The List view includes the following Razor code, which renders the grid shown in Figure 1: Figure 1 A Basic Rendered Web Grid The first line of the view specifies the model type (for example, the type of the Model property that we access in the view) to be IEnumerable<Product>. Inside the div element I then instantiate a WebGrid, passing in the model data; I do this inside an @{...} code block so that Razor knows not to try to render the result. In the constructor I also set the defaultSort parameter to “Name” so theWebGrid knows that the data passed to it is already sorted by Name. Finally, I use @grid.GetHtml() to generate the HTML for the grid and render it into the response. This small amount of code provides rich grid functionality. The grid limits the amount of data displayed and includes pager links to move through the data; column headings are rendered as links to enable paging. You can specify a number of options in the WebGrid constructor and the GetHtml method in order to customize this behavior. The options let you disable paging and sorting, change the number of rows per page, change the text in the pager links and much more. Figure 2 shows the WebGrid constructor parameters and Figure 3 the GetHtml parameters. Figure 2 WebGrid Constructor Parameters Figure 3 WebGrid.GetHtml Parameters The previous Razor code will render all of the properties for each row, but you may want to limit which columns are displayed. There are a number of ways to achieve this. The first (and simplest) is to pass the set of columns to the WebGrid constructor. For example, this code renders just the Name and ListPrice properties: You could also specify the columns in the call to GetHtml instead of in the constructor. While this is slightly longer, it has the advantage that you can specify additional information about how to render the columns. In the following example, I specified the header property to make the ListPrice column more user-friendly: Often when you render a list of items, you want to let users click on an item to navigate to the Details view. The format parameter of the Column method allows you to customize the rendering of a data item. The following code shows how to change the rendering of names to output a link to the Details view for an item, and outputs the list price with two decimal places as typically expected for currency values; the resulting output is shown in Figure 4. Figure 4 A Basic Grid with Custom Columns Although it looks like there’s some magic going on when I specify the format, the format parameter is actually a Func<dynamic,object>—a delegate that takes a dynamic parameter and returns an object. The Razor engine takes the snippet specified for the format parameter and turns it into a delegate. That delegate takes a dynamic parameter named item, and this is the item variable that’s used in the format snippet. For more information on the way these delegates work, see Phil Haack’s blog post at bit.ly/h0Q0Oz. Because the item parameter is a dynamic type, you don’t get any IntelliSense or compiler checking when writing your code (see Alexandra Rusina’s article on dynamic types in the February 2011 issue at msdn.microsoft.com/magazine/gg598922). Moreover, invoking extension methods with dynamic parameters isn’t supported. This means that, when calling extension methods, you have to ensure that you’re using static types—this is the reason that item.Name is cast to a string when I call the Html.ActionLink extension method in the previous code. With the range of extension methods used in ASP.NET MVC, this clash between dynamic and extension methods can become tedious (even more so if you use something like T4MVC: bit.ly/9GMoup). Adding Strong Typing While dynamic typing is probably a good fit for WebMatrix, there are benefits to strongly typed views. One way to achieve this is to create a derived type WebGrid<T>, as shown in Figure 5. As you can see, it’s a pretty lightweight wrapper! public class WebGrid<T> : WebGrid { public WebGrid( IEnumerable<T> source = null, ... parameter list omitted for brevity) : base( source.SafeCast<object>(), ... parameter list omitted for brevity) { } public WebGridColumn Column( string columnName = null, string header = null, Func<T, object> format = null, string style = null, bool canSort = true) { Func<dynamic, object> wrappedFormat = null; if (format != null) { wrappedFormat = o => format((T)o.Value); } WebGridColumn column = base.Column( columnName, header, wrappedFormat, style, canSort); return column; } public WebGrid<T> Bind( IEnumerable<T> source, IEnumerable<string> columnNames = null, bool autoSortAndPage = true, int rowCount = -1) { base.Bind( source.SafeCast<object>(), columnNames, autoSortAndPage, rowCount); return this; } } public static class WebGridExtensions { public static WebGrid<T> Grid<T>( this HtmlHelper htmlHelper, ... parameter list omitted for brevity) { return new WebGrid<T>( source, ... parameter list omitted for brevity); } } So what does this give us? With the new WebGrid<T> implementation, I’ve added a new Column method that takes a Func<T, object> for the format parameter, which means that the cast isn’t required when calling extension methods. Also, you now get IntelliSense and compiler checking (assuming that MvcBuildViews is turned on in the project file; it’s turned off by default). The Grid extension method allows you to take advantage of the compiler’s type inference for generic parameters. So, in this example, you can write Html.Grid(Model) rather than new WebGrid<Product>(Model). In each case, the returned type is WebGrid<Product>. Adding Paging and Sorting You’ve already seen that WebGrid gives you paging and sorting functionality without any effort on your part. You’ve also seen how to configure the page size via the rowsPerPage parameter (in the constructor or via the Html.Grid helper) so that the grid will automatically show a single page of data and render the paging controls to allow navigation between pages. However, the default behavior may not be quite what you want. To illustrate this, I’ve added code to render the number of items in the data source after the grid is rendered, as shown in Figure 6. Figure 6 The Number of Items in the Data Source As you can see, the data we’re passing contains the full list of products (295 of them in this example, but it’s not hard to imagine scenarios with even more data being retrieved). As the amount of data returned increases, you place more and more load on your services and databases, while still rendering the same single page of data. But there’s a better approach: server-side paging. In this case, you pull back only the data needed to display the current page (for instance, only five rows). The first step in implementing server-side paging for WebGrid is to limit the data retrieved from the data source. To do this, you need to know which page is being requested so you can retrieve the correct page of data. When WebGrid renders the paging links, it reuses the page URL and attaches a query string parameter with the page number, such as (the query string parameter name is configurable via the helper parameters—handy if you want to support pagination of more than one grid on a page). This means you can take a parameter called page on your action method and it will be populated with the query string value. If you just modify the existing code to pass a single page worth of data to WebGrid, WebGrid will see only a single page of data. Because it has no knowledge that there are more pages, it will no longer render the pager controls. Fortunately, WebGrid has another method, named Bind, that you can use to specify the data. As well as accepting the data, Bind has a parameter that takes the total row count, allowing it to calculate the number of pages. In order to use this method, the List action needs to be updated to retrieve the extra information to pass to the view, as shown in Figure 7. public ActionResult List(int page = 1) { const int pageSize = 5; int totalRecords; IEnumerable<Product> products = productService.GetProducts( out totalRecords, pageSize:pageSize, pageIndex:page-1); PagedProductsModel model = new PagedProductsModel { PageSize= pageSize, PageNumber = page, Products = products, TotalRows = totalRecords }; return View(model); } With this additional information, the view can be updated to use the WebGrid Bind method. The call to Bind provides the data to render and the total number of rows, and also sets the autoSortAndPage parameter to false. The autoSortAndPage parameter instructs WebGrid that it doesn’t need to apply paging, because the List action method is taking care of this. This is illustrated in the following code: <div> @{ var grid = new WebGrid<Product>(null, rowsPerPage: Model.PageSize, defaultSort:"Name"); grid.Bind(Model.Products, rowCount: Model.TotalRows, autoSortAndPage: false); } @grid.GetHtml(columns: grid.Columns( grid.Column("Name", format: @<text>@Html.ActionLink(item.Name, "Details", "Product", new { id = item.ProductId }, null)</text>), grid.Column("ListPrice", header: "List Price", format: @<text>@item.ListPrice.ToString("0.00")</text>) ) ) </div> With these changes in place, WebGrid springs back to life, rendering the paging controls but with the paging happening in the service rather than in the view! However, with autoSortAndPage turned off, the sorting functionality is broken. WebGrid uses query string parameters to pass the sort column and direction, but we instructed it not to perform the sorting. The fix is to add the sort and sortDir parameters to the action method and pass these through to the service so that it can perform the necessary sorting, as shown in Figure 8. public ActionResult List( int page = 1, string sort = "Name", string sortDir = "Ascending" ) { const int pageSize = 5; int totalRecords; IEnumerable<Product> products = _productService.GetProducts(out totalRecords, pageSize: pageSize, pageIndex: page - 1, sort:sort, sortOrder:GetSortDirection(sortDir) ); PagedProductsModel model = new PagedProductsModel { PageSize = pageSize, PageNumber = page, Products = products, TotalRows = totalRecords }; return View(model); } AJAX: Client-Side Changes WebGrid supports asynchronously updating the grid content using AJAX. To take advantage of this, you just have to ensure the div that contains the grid has an id, and then pass this id in the ajaxUpdateContainerId parameter to the grid’s constructor. You also need a reference to jQuery, but that’s already included in the layout view. When the ajaxUpdateContainerId is specified, WebGrid modifies its behavior so that the links for paging and sorting use AJAX for the updates: <div id="grid"> @{ var grid = new WebGrid<Product>(null, rowsPerPage: Model.PageSize, defaultSort: "Name", ajaxUpdateContainerId: "grid"); grid.Bind(Model.Products, autoSortAndPage: false, rowCount: Model.TotalRows); } @grid.GetHtml(columns: grid.Columns( grid.Column("Name", format: @<text>@Html.ActionLink(item.Name, "Details", "Product", new { id = item.ProductId }, null)</text>), grid.Column("ListPrice", header: "List Price", format: @<text>@item.ListPrice.ToString("0.00")</text>) ) ) </div> While the built-in functionality for using AJAX is good, the generated output doesn’t function if scripting is disabled. The reason for this is that, in AJAX mode, WebGrid renders anchor tags with the href set to “#,” and injects the AJAX behavior via the onclick handler. I’m always keen to create pages that degrade gracefully when scripting is disabled, and generally find that the best way to achieve this is through progressive enhancement (basically having a page that functions without scripting that’s enriched with the addition of scripting). To achieve this, you can revert back to the non-AJAX WebGrid and create the script in Figure 9 to reapply the AJAX behavior: $(document).ready(function () { function updateGrid(e) { e.preventDefault(); var url = $(this).attr('href'); var grid = $(this).parents('.ajaxGrid'); var id = grid.attr('id'); grid.load(url + ' #' + id); }; $('.ajaxGrid table thead tr a').live('click', updateGrid); $('.ajaxGrid table tfoot tr a').live('click', updateGrid); }); To allow the script to be applied just to a WebGrid, it uses jQuery selectors to identify elements with the ajaxGrid class set. The script establishes click handlers for the sorting and paging links (identified via the table header or footer inside the grid container) using the jQuery live method (api.jquery.com/live). This sets up the event handler for existing and future elements that match the selector, which is handy given the script will be replacing the content. The updateGrid method is set as the event handler and the first thing it does is to call preventDefault to suppress the default behavior. After that it gets the URL to use (from the href attribute on the anchor tag) and then makes an AJAX call to load the updated content into the container element. To use this approach, ensure that the default WebGrid AJAX behavior is disabled, add the ajaxGrid class to the container div and then include the script from Figure 9. AJAX: Server-Side Changes One additional point to call out is that the script uses functionality in the jQuery load method to isolate a fragment from the returned document. Simply calling load(‘’) will load the contents of the URL. However, load(‘ #someId’) will load the content from the specified URL and then return the fragment with the id of “someId.” This mirrors the default AJAX behavior of WebGrid and means that you don’t have to update your server code to add partial rendering behavior; WebGrid will load the full page and then strip out the new grid from it. In terms of quickly getting AJAX functionality this is great, but it means you’re sending more data over the wire than is necessary, and potentially looking up more data on the server than you need to as well. Fortunately, ASP.NET MVC makes dealing with this pretty simple. The basic idea is to extract the rendering that you want to share in the AJAX and non-AJAX requests into a partial view. The List action in the controller can then either render just the partial view for AJAX calls or the full view (which in turn uses the partial view) for the non-AJAX calls. The approach can be as simple as testing the result of the Request.IsAjaxRequest extension method from inside your action method. This can work well if there are only very minor differences between the AJAX and non-AJAX code paths. However, often there are more significant differences (for example, the full rendering requires more data than the partial rendering). In this scenario you’d probably write an AjaxAttribute so you could write separate methods and then have the MVC framework pick the right method based on whether the request is an AJAX request (in the same way that the HttpGet and HttpPost attributes work). For an example of this, see my blog post at bit.ly/eMlIxU. WebGrid and the WebForms View Engine So far, all of the examples outlined have used the Razor view engine. In the simplest case, you don’t need to change anything to use WebGrid with the WebForms view engine (aside from differences in view engine syntax). In the preceding examples, I showed how you can customize the rendering of row data using the format parameter: The format parameter is actually a Func, but the Razor view engine hides that from us. But you’re free to pass a Func—for example, you could use a lambda expression: Armed with this simple transition, you can now easily take advantage of WebGrid with the WebForms view engine! Wrapping Up In this article I showed how a few simple tweaks let you take advantage of the functionality that WebGrid brings without sacrificing strong typing, IntelliSense or efficient server-side paging. WebGrid has some great functionality to help make you productive when you need to render tabular data. I hope this article gave you a feel for how to make the most of it in an ASP.NET MVC application. Stuart Leeks is an application development manager for the Premier Support for Development team in the United Kingdom. He has an unhealthy love of keyboard shortcuts. He maintains a blog at blogs.msdn.com/b/stuartleeks where he talks about technical topics that interest him (including, but not limited to, ASP.NET MVC, Entity Framework and LINQ). Thanks to the following technical experts for reviewing this article: Simon Ince and Carl Nolan Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
https://msdn.microsoft.com/en-us/magazine/hh288075.aspx
CC-MAIN-2018-30
refinedweb
3,103
53
01 October 2012 10:53 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> There is no information of when operations at the MA plant will resume and investigations of the explosion is ongoing, the source said. At the moment, although there are some MA stocks in their inventory, shipping activity for all products has been suspended on the back of the explosion. Hence, MA cargoes will not be available to its customers, the source added. A separate producer said it received a few enquiries from traders on Monday for cargoes of 50-100 tonnes to Spot MA prices were assessed at $1,740-1,780/tonne (€1,357-1,388/tonne) CFR (cost & freight) southeast (SE) Asia in the week ended 28 September, while September contract prices of MA were assessed at yen (Y) 150-190/kg DEL (delivered) Japan,
http://www.icis.com/Articles/2012/10/01/9599830/Japans-Nippon-Shokubai-shuts-MA-plant-at-Himeji-on.html
CC-MAIN-2014-35
refinedweb
139
54.56
Sorry – still new to Python, but having trouble with my code working. I can’t really figure out where to move the declared variables and getting it to work. However, if I put them outside the function, the other functions are ignored and go straight to n’s input request. Any way to fix this? EDIT: Added the entire code and also the errors I am getting at the bottom, I don’t know if this is a simple indentation error. # menu selector from tkinter.tix import Select select = 0 def DisplayMenu() : print ("enter your choice") print ("1 for a Linear Search") print ("2 for a Binary Search") print ("3 for a Bubble Sort") print ("4 for a Selection Sort") print ("5 for a Insertion Sort") def SelectRoutine() : global select DisplayMenu() select = int(input()) if (select == 1) : print ("Call the Linear Search Routine") LinearSearch() elif (select == 2) : print ("Call the Binary Search Routine") BinarySearch() elif (select == 3) : print ("Call the Bubble Sort Routine") BubbleSort() elif (select == 4) : print ("Call the Selection Sort") SelectionSort() elif (select == 5): print ("Call the Insertion Sort") InsertionSort() else : print("invalid selection") def LinearSearch() : elements = [10, 20, 80, 70, 60, 50] x = int(input("please enter the number to search: ")) found = False for i in range(len(elements)) : if(elements[i] == x) : found = True print("%d found at %dth position" % (x, i)) break if (found == False) : print("%d is not in list" % x) def BinarySearch(n, sortedlist, x) : start = 0 end = n - 1 for i in range(n) : sortedlist.append(int(input("Enter %dth element: " % i))) while(start <= end) : mid = int((start + end) / 2) if (x == sortedlist[mid]) : return mid elif(x < sortedlist[mid]) : end = mid - 1 else : start = mid + 1 return -1 n = int(input("Enter the size of the list: ")) sortedlist = [] x = int(input("Enter the number to search: ")) position = BinarySearch(n, sortedlist, x) if (position != -1) : print("element number %d is present at position: %d" % (x,position)) else : print("element number %d is not present in the list" % x) SelectRoutine() enter your choice 1 for a Linear Search 2 for a Binary Search 3 for a Bubble Sort 4 for a Selection Sort 5 for a Insertion Sort 2 Call the Binary Search Routine Traceback (most recent call last): File "/Users/uglycode.py", line 68, in <module> SelectRoutine() File "/Users/uglycode.py", line 22, in SelectRoutine BinarySearch() TypeError: BinarySearch() missing 3 required positional arguments: 'n', 'sortedlist', and 'x' >Solution : You should pass n, sortedlist, position variables when you run BinarySearch function. - You should define variables n, sortedlist, position from out of the BinarySearch function. - Your indent under while loop is wrong. You should run binary search logic under your while function. - If you want to run BinarySearch function when you want, make it all to one function as run_binary_search() If 1. and 2. are not modified, the code will fall into an infinite loop. If you apply it to SelectRoutine() then, you should define def BinarySearch() function like it. def BinarySearch(): def _binary_serach(n, sortedlist, x): start = 0 end = n - 1 while (start <= end) : mid = int((start + end) / 2) if (x == sortedlist[mid]): position = mid break elif(x < sortedlist[mid]): end = mid - 1 else: start = mid + 1 position = -1 if (position != -1) : print("element number %d is present at position: %d" % (x,position)) else : print("element number %d is not present in the list" % x) return position n = int(input("Enter the size of the list: ")) sortedlist = [] for i in range(n): sortedlist.append(int(input("Enter %dth element: " % i))) x = int(input("Enter the number to search: ")) position = _binary_serach(n, sortedlist, x) return position
https://devsolus.com/2022/06/24/binarysearch-missing-required-positional-arguments/
CC-MAIN-2022-27
refinedweb
602
52.12
Download presentation Presentation is loading. Please wait. Published byDella Fleming Modified over 4 years ago 1 8 Shell Programming Mauro Jaskelioff 2 Introduction Environment variables –How to use and assign them –Your PATH variable Introduction to shell programming –Permissions and making your file executable –Input to and Output from shell scripts –Control structures If then else For loops –Booleans - test –Controlling input from within the shell 3 Environment Variables Environment variables are pieces of information used by the shell and by other programs Useful for customising your working environment and for shell programming Some examples: –PATH - the directories the system searches to execute commands –TERM - The type of terminal (most commonly xterm and vt100) –HOME - Your home directory –PS1 – The format of the prompt 4 Using Environment Variables 5 Using Environment Variables Environment variables can be used by any program public class EnvDemo1 { public static void main(String args[]) { String s = System.getenv(“PATH"); System.out.println(s); } 6 Assigning Environment Variables at the Prompt VAR=value The change is only visible in the current shell Child processes don’t automatically inherit the environment variables from their parent. We use export to let child processes get the changed value. export VAR=value To add something to your PATH: export PATH=$PATH:new stuff 7 Your PATH Variable) 8 UNIX Command Line. 9 What is a Shell Script?). 10 Common Shell Script Components The first line (for bourne shell) is usually #!/bin/sh # is also used for comments Hence, this line is a special kind of comment: it tells the shell which program to use to execute the commands in the file 11 Writing and Running a Shell Script 12 Making your File Executable ls -l tells you if files are readable, writable and/or executable and by whom You can change these permissions by using chmod [zlizmj@unnc-cslinux ~]$ ls -l Foo -rw-r--r-- 1 zlizmj Domain U 0 Mar 12 10:23 Foo [zlizmj@unnc-cslinux ~]$ chmod who?what filename 13 Chmod Revisited who is one of u (user), g (group) or o (other) –can also have a (all) ? is one of + (add a permission) or - (remove a permission) what is one of r (read permission), w (write permission) or x (execute permission) 14 Examples of chmod) 15 A Simple Shell Script Each command appears on a separate line #!/bin/sh ls echo "done " #This is a comment 16 The Simple Shell Script in Action [zlizmj@unnc-cslinux 1]$ ls done.sh [zlizmj@unnc-cslinux 1]$ chmod +x done.sh [zlizmj@unnc-cslinux 1]$./done.sh done.sh done [zlizmj@unnc-cslinux 1]$ 17 Input and Output The first argument to a shell script is called $1 The second argument to a shell script is called $2 ….etc… Shell uses echo like Java’s println 18 Input and Output: An Example [zlizmj@unnc-cslinux 1]$ cat firstarg.sh #!/bin/sh #Print first argument echo $1 [zlizmj@unnc-cslinux 1]$./firstarg.sh hello world hello 19 Control Structures Control structures are built in syntax for controlling the order in which execution happens Common structures are conditionals (if-then-else) and loops (for loops) Keywords should appear at the start of a line 20 Conditionals NOTE: the else is optional… if …. then …. else …. fi 21 Example Using Conditionals #!/bin/sh if javac $1 then echo "compilation done" else echo "compilation failed" fi 22 Booleans if needs something true or false Often this means you want to compare things This is more complicated in shell than in most languages Need to use test if test $1 -ge $2 –succeeds if the first argument is “greater than or equal to the second” 23 Some test inputs if test $1 = $2 –if $1 is equal to $2 (for strings) if test $1 -eq $2 –if $1 is equal to $2 (for numbers) -ge –(greater or equal) -gt –(greater) if test -f $FILE –if $FILE exists and is a normal file 24 More test inputs You don’t have to use “test” –if [$1 -ge $2] Is just syntactic sugar for –if test $1 -ge $2 To learn more about test: man test 25 For Loops NOTE: There are more complex forms of loop in Bash. for IDEN in list do …. done 26 A Simple For Loop Generally this script will look in the current directory: If you want it to look elsewhere, you need to put in the full path #!/usr/bin/sh # This is list_shell.sh for IDEN in *.sh do echo "$IDEN" done 27 The For Loop in Action $./list_shell.sh done.sh echo_three.sh edit_shell.sh list_shell.sh new.sh simple.sh 28 Your.bash_profile is a shell script # set up personal bin directories PATH=$HOME/bin:$PATH: EDITOR=emacs export PATH EDITOR 29 Variables and Shell 30 Interacting with the User To get input use read followed by a variable $./interact.sh please type: hello computer #!/bin/sh # This is interact.sh echo "please type:" read ANS echo $ANS 31 Variables and read read can have more than one argument –e.g. read COMMAND ARGUMENTS It will bind the first word of input to the first variable and bind the rest to the second This acts like a list or array – so can be used with for 32 More Complex read Example #!/bin/sh # This is interact2.sh echo "please type:" read COMMAND ARGUMENTS for ARG in $ARGUMENTS do $COMMAND $ARG done 33 The Example in Action $./interact2.sh please type: cat interact.sh simple.sh #!/bin/sh echo "please type:" read ANS echo $ANS #!/bin/sh echo $1 34 Controlling Input from Within the Shell << tells a command to use input from within a shell script Syntax is command << end where end is some string which will tell the command to stop taking input (<< EOF is most common) This is useful when testing programs – you can automatically run them on sample input 35 Example of Input Control $./interact3.sh please type: hello goodbye hello #!/bin/sh # This is interact3.sh./interact2.sh <<EOF echo hello goodbye hello EOF 36 Summary Environment Variables Running Shell Programs Command Line Arguments If-then-else and for loops. Controlling Input and Output. Similar presentations © 2020 SlidePlayer.com Inc.
http://slideplayer.com/slide/6990983/
CC-MAIN-2020-05
refinedweb
1,036
55.68
how to sort an simple array logically (for both string & int) ? without using Arrays.sort(arr) or any comparators. how to sort an simple array logically (for both string & int) ? without using Arrays.sort(arr) or any comparators. Use Arrays.sort(). See the API documentation of class java.util.Arrays. Example: import java.util.Arrays; // ... String[] arr = new String[] { "one", "two", "three" }; Arrays.sort(arr); edit - Ok, you've edited your question and added "without using Arrays.sort()". Is this homework? Are you supposed to implement your own sorting algorithm? Then just do some research on sorting algorithms and implement one yourself. To add a little bit more to their answers above - as you mentioned "sorting logically", you can implement your own Comparator and use Arrays.sort(array, comparator). Is there a specific reason you want to avoid Arrays.sort()? Here is a nice selection including code examples for the various ways to sort arrays and/or collections: Rosetta Code Write a quicksort or mergesort algorithm. Smells homework, too. Have a look at Insertion Sort.
http://ansaurus.com/question/3279447-java-array-sorting
CC-MAIN-2017-47
refinedweb
175
62.54
Here we go again! Round 4! This time we're diving into Advanced Hooks. As the whole world seems to have moved onto Zoom my connection seems to be getting worse and worse. So, I struggled with a lot of lag today. I don't know if that was because everyone in my area was online or just the way it happened to fall. These workshops are coming thick and fast. I know I'll revisit these posts as I revisit the learning over the coming months. So, see this as a work in progress of my learning and thinking on these topics. This is a deliberately simple example to allow us to focus on the API. Why not just use useState with an object when you want to deal with multiple states? The reducer hook has a more intuitive way to deal with more complex state - it basically has a better API. Here's an example of using useReducer to manage the value of a name in an input. function nameReducer(previousName, action) { return action } const initialNameValue = 'Joe' function NameInput() { const [name, setName] = React.useReducer(nameReducer, initialNameValue) const handleChange = event => setName(event.target.value) return ( <> <label> Name: <input defaultValue={name} onChange={handleChange} /> </label> <div>You typed: {name}</div> </> ) } It was good to get into the workings of this API and see the things built up from a useState basic state. State needs to be treated immutable. As always, Kent has blog posts to answer most questions that come his way. How to use React Context effectively Implementing a simple state machine in JS Immer - to deal with immutable state The first exercise here was to create a custom hook to generalize some async behaviour. For this to work, there was an issue that passing in the dependency array can't be linted or type checked. We use useCallback to memoize a callback function so that we can depend on the same function being called for the same values. This stops infinite rerendering when we pass the function itself as a dependency. Sharing state without having to prop drill - this is noisy and creates some issues with maintainance. You mightn't need to use context as soon as you expect. Michael Jackson suggests composition is a useful pattern. Interesting to create a CountProvider component rather than having the state managed in the app. Then the state can be handled away from the component logic. How to use context effectively How to optimize your context value Kent continues the blogpost provision :) Application State Management Having separate providers allows us to modularise our state and not have unexpected breaks. It makes it easier to reason about and makes things more performant. react-table library There are two ways to tell React to run side-effects after it renders: useEffect useLayoutEffect The difference about these is subtle (they have the exact same API), but significant. 99% of the time useEffect is what you want, but sometimes useLayoutEffect can improve your user experience. To learn about the difference, read useEffect vs useLayoutEffect Basically, for visual effects we use useLayoutEffect. Most effects are not visibly observable. I hadn't seen this hook before and it seems pretty interesting. This allows us to pass back imperative handlers if the user passes a ref to the component. This was achieved in class components with creating a reference inside the instance of the component, but this isn't possible in functional components. Now, we create a ref and forward it to the component. The component then uses that ref to pass back handler function that can be used to imperatively direct the component. We can add debug messages to our hooks to be able to see what is being referenced. There is a formatCountDebugValue that can be used to give a better output.
https://www.kevincunningham.co.uk/posts/advanced-hooks/
CC-MAIN-2020-29
refinedweb
635
55.44
The concept of Interfaces and its practical implementation may be rather daunting for beginners. It may require immense time and patience to assimilate the idea of interfaces. Interfaces are an abstraction that helps you to achieve the following objectives with your code: 1. Clean & readable. 2. Scalable. An interface defines a set of methods without any implementation code. So, an interface is simply an abstraction and can't contain any variables or code. Declaration format of interfaces: method1 (param list) return type method1 (param list) return type } As per the official recommendation: - In the above format, Logger is the name of interface. It is idiomatic in Go to suffix the name of interface with er in most of the cases. There are exceptions but a common practice for naming an interface is: - method name + er - Interfaces with one or two methods are common in Go code. More than three methods in an Interface is not considered idiomatic. - A type can implement multiple interfaces. Interface can be implemented by any type. But an interface can't implement itself. - Multiple types can implement the same interface. - Like many other languages, in Go also, we must implement all the methods in the interface. Here's a code snippet (Code Ref# 1) that finds the area of a rectangle and triangle using functions. The same exercise is refactored in Code Ref#2 using structs, methods and interfaces. Code Ref# 1. Play Here package main import "fmt" func AreaRectangle(l, w int) int { return l * w } func AreaTriangle(b, h int) int { return (b * h) / 2 //Area of a triangle is (base*height)/2 } func main() { fmt.Println("Area of Rectangle:", AreaRectangle(3, 4)) fmt.Println("Area of Triangle:", AreaTriangle(5, 20)) } Output Area of Rectangle: 12 Area of Triangle: 50 Area of Triangle: 50 Code Ref# 2. Play Here package main import "fmt" type Rectangle struct { width, height int } type Triangle struct { base, height int } type Shaper interface { Area() int } // You must implement all the methods in the interface func (s Rectangle) Area() int { return s.width * s.height } func (t Triangle) Area() int { return (t.base * t.height) / 2 } // Variable declared as an interface type can call methods that are in the named interface. // Generic 'measure' function that works on any Shape. func measure(s Shaper) { fmt.Println(s) fmt.Println(s.Area()) } func main() { s := Rectangle{width: 3, height: 4} t := Triangle{base: 5, height: 20} // The Rectangle struct type implements the 'Shaper' interface so we can use instances // of these structs as arguments to measure. measure(s) measure(t) } {3 4} 12 {5 20} 50P.S. Interfaces are a bit difficult to understand. A beginner must persist with it, practice as many examples as possible to understand and assimilate the concept. I'll review this post every now and then and may add a few more code samples.
http://www.golangpro.com/2015/10/go-interfaces-absolute-beginners.html
CC-MAIN-2017-26
refinedweb
477
64.81
Talk:System software From OLPC The following was removed from the main page since it seems more appropriate for the Discussion. Operating System Selection I would like to seriously question using Linux as the basis of the laptop. It's a path of constant tweaking and tuning. Linux is also further away from the "pure" microkernel architectures than the BSDs. In short Linux is like a F1 tuned for racing - but not as reliable. When working on a familiar hardware platform the BSDs become very robust and nice platform to work with. And more importantly without the horrible quality problems of the Linux kernel. FreeBSD kernel is also magnitudes more secure - it just is cleaner and better thought and developed with the focus first on stability and reliablity. Unlike Linux.. The bottom line is that the project has got the possibility to choose from others than just Linux as well. I'd go for Frisbee or mentioned reasons but there are others too. - Hah. FreeBSD recently got around to fixing a kernel bug that would let anyone become root via the LDT control interface. (you could zero memory... like the memory holding your UID) Either via stupidity or an attempt to hide the problem, the fix went into CVS with a commit message indicating that the bug was merely a way to crash the system. Nobody was told to upgrade. This is not a secure operating system. - Ubuntu has proven very solid and user friendly for me. It recognized all hardware on my ibm thinkpad to include wireless device and the install was seemless. Upgrades are managed very efficiently, as well. - (While I agree that *BSD might be a better option -- perhaps OpenBSD in particular for reasons of stability, et cetera -- I think the disparaging remarks about Linux-based operating systems were poorly conceived, exaggerated, and unnecessary as presented. - Chad Perrin) - OpenBSD lacks any form of Mandatory Access Control, which is needed to implement Bitfrost. You can't protect a user from trojans on OpenBSD. - Indeed: BSD is a good alternative. Another alternative may be an older Linux kernel. The 2.4 kernel line is pretty stable and good also. The current Linux kernel version is 2.6 - The 2.4 kernel is slow, obsolete, and without many power-saving features. - (User-contributed) Yes, the older the Linux kernel is, the more stable it becomes. For example, the current 1.08 version of PuppyLinux is based on the 2.4 kernel, and can confidently deliver satisfactory GUI experience despite the resource constraints in the OLPC machine. - BSD is just as monolithic as Linux is, and the reverse - and their developers target reliability and security just as much as others, proved by millions of servers running it and some big companies wasting lot of engineering resources in making it even more stable (for example Linux distros are already shipping advanced security frameworks like SElinux, while FreeBSD (not openbsd) is still working on it) Why UNIX(like) at all? Recently Negroponte stated that a "slimmer Linux" is needed for the $100 laptop. Like others here, I question why it has to be Linux. But I'll take that notion even further: Why does the OS even need to be Unix or a Unix work alike? Why is Negroponte so bent on Linux? There are other FOSS options that are much more focused on the desktop, yet still have Posix compliance. - Haiku-OS is an FOSS implementation of the BeOS tool kits with a FOSS kernel. It maintains binary and source level compatibility with BeOS R5, so all pre-existing free R5 software will work on it. (There is quite a bit) - BeOS??? Be serious. There very are few BeOS-only apps, especially open ones. There are many Linux-only apps, and many more POSIX-like apps that will need something a bit more full-featured than a buggy bare-bones POSIX. - Syllable OS Is a lot like BeOS. (But it's not BeOS) It's getting pretty mature. Both these options are FOSS. If you really want the Unixy goodness, both also happen to have Posix compliant shells, so much of the Unix CLI stuff is portable. Both have fast native GUI systems and very modern design behind the internals. Both are extremely light weight compared to BSD or Linux + X + desktop env. And if one was really concerned with the stability of the kernel, it might be possible to run one of these alternative environments on top of a BSD or Linux kernel instead of their native kernels. - I suppose an Apple-like environment on X is available via GNUstep, but that would need some serious work. The current GNUstep stuff is nasty-looking and woefully out of date. Fixing that could take forever. - Why UNIX(cont'd). Availability of applications and the fact that they are likely to see this in the real world. More and more countries are using linux as their preferred OS for state agencies and we are likely to se linux continue to gain popularity within business community. - Why UNIX(like)?: What is the point of running a non-Unix kernel just to run Unix software using compatibility layers? A Unixy operative system will make things much easier (POSIX is native there and software is already running there and it's the main development platform) and will allow developers to concentrate on more serious issues like ram usage. It might also be possible to buy out the BeOS R5 codebase from Palm Inc. or have them donate it to the project. They don't seem interested in doing anything with it. Then the OLPC project could release the the source under GPL and get help from the Haiku and Yellow Tab guys to whip it into shape for the $100 laptop. I really can't convey how great BeOS or Haiku would be project. It's a perfect match for a system with the $100 laptops specs. - Yeah, but... BeOS sucks. It did a few things well. Linux does most things well. The need for a distributed OS The deployment environment is a school, not a business. So it needs cheap, easy system administration above all, as well as painless system integration. It would also be good if it distributed the file system so the spare storage on the laptops can be aggregated. The logical choice is Plan 9. Plan 9 is open-sourced, designed for graphics, lightweight, network-centric, has a UNIX api (glued on the side, but it's there), and is designed to integrate and implement distributed file systems. It should be easy to aggregate the file systems of many laptops just by merging their name spaces. - Plan 9 is designed for graphics of a decade or two ago. It's obsolete. - Recent Linux kernels already support the 9P protocol and per-process namespaces to some degree. While Plan9 is a beautiful operative system, its sadly more of a research OS than the real alternative that OLPC needs - most of the *nix software have never been ported and it doesn't gets ported by just pressing a button. > ... The logical choice is Plan 9 ... - Yes; isn't Inferno the current incarnation? Better still, A2. ... Peasthope 15:24, 2 November 2011 (UTC) > ... designed for graphics of a decade or two ago. It's obsolete. - Yes; for the OLPC program, the OLPC interface is needed, regardless of what other interfaces are available. ... Peasthope 15:24, 2 November 2011 (UTC) I think FreeDOS is a good alternative. Free, simple, small, effective. Peer to Peer distribution of books This was copied from the main page since it is discussion. It should really be sumamrized into the main page and the first comment really should join the rest here. I believe one of the most useful purposes of the laptop will be to distribute electronic text (i.e. e-books). The distribution of any information without a persistent network connection will be difficult. I imagine a situation where 1 out of 100 or 1000 kids may have access to a network connection. The peer to peer network could be used to distribute e-books from that single network connection to 1000 kids. I'd like to propose the design of a peer-to-peer network client designed specifically for this purpose. A simple, graphical language-localized client would be designed to present a catalog of e-books. The client would pull down a listing of books available in a certain age-range for a targeted language. The student would pick texts that he or she has interest in. This list of requests would consist of a very small packet of data, perhaps a unique identifier of the device and a unique identifier of the text. When the device sees another device, it would off load it's packet to the peer device and vice-versa. Each device would contain a listing of requests from all of the peers that it came in contact with. The next time a device connects to the Internet, it would pull as many texts as allowable by pre-defined memory limits (say 3-5 meg). As the device comes into contact with other devices, it would deliver the texts to the other devices. Hopefully, over time, the requestor would be delivered some of the texts originally requested. As each text is delivered, a delivery or cancellation notice would be sent back through the peer network. The peer to peer network should gather performance intelligence over time. It should be able to guess which routes have better chances of making a request and returning a delivery. If a proof of concept proves to work well, the peer to peer network would be extended to handle two-way communication for interaction such as email or the submission and grading of assignments. --65.7.133.163 04:41, 27 January 2006 (EST) Jason Hoekstra - [email protected] To continue on with what Jason and Volburger said, there is a fundamental conflict in OLPC. On one hand, you need to keep costs down and must therefore have small persistent storage. On the other hand, the purpose of the laptop is for learning, and learning requires the storage of information. What a child needs to learn with is really nothing more than an encyclopaedia and a simple way to navigate it, but it is probably not feasible to store an entire encyclopaedia in the available space, especially if you include multimedia, which you definitely should. The approach suggested by Jason, which is not a bad one at all, is to retrieve information based on interest. I have some experience in mobile, ad hoc, sensor, and peer to peer networking, and what he's proposing is something similar to rumor routing and directed diffusion. There are a couple of problems with the suggestion, though. The first is that you will likely have a lot more requests for books than space to keep them. In a store and forward network with limited space and unpredicatable mobility, you are unlikely to hang onto enough books that you will be able to satisfy the requests. The second problem is that in all likelihood, some children will have far more Internet access than others, due to geographical location, being able to afford transport, or whatever the case may be. This may form a book distribution tree rooted at a few children with many children as leaves, which will cause significant distribution problems. Basically, a few children will need to store and forward books for a large number of children, and will quickly fill up their space without satisfying many requests. This depends on the particular country and situation of course, but in general, the branching factor and depth of the distribution tree can have serious repercussions on how many of the requests are satisfied. I personally agree with Jason that a distributed, peer to peer filesystem is necessary. You may only have half a gigabyte of space, but there are a lot of half gigabytes running around, hopefully within a few hops of each other. So step one is that you need a routing protocol to form multihop ad hoc networks. There is a lot of literature on this; look at MobiHoc if you need a starting point. Step two is that you need a discovery protocol to learn what books are available and who has them, the aforementioned catalogue. Note that this catalogue needs to be updated as well, but the updates can be distributed using controlled flooding. And the reason I'm writing this whole thing is that I think you need a decent data (book) dissemination protocol. You are dealing with a sparsely connected network of resource starved nodes, with intermittent connectivity and esoteric mobility patterns. I believe you need some sort of centralised logic, such as a tracker in the Bittorrent protocol. If the computers belonging to the children that have Internet access can cooperate on which books to download and store, and if the computers themselves can try to form a rough topology of who is connected to who (in terms of the books they want), then you are in a much better position to allocate your resources to satisfy the most requests. I hope this helps and I wish you the very best of luck in your endeavours. Emerson Farrugia (emerson AolpcT runelands D0T net) - March 17th, 2006 Automated Language Localization of some Preset Sentences I have for some time been interested in whether it would be of practical use (rather than just fun in researching what can and cannot be done) to have a collection of sentences and part sentences defined and translated into many languages, each sentence or part sentence having a code number, with the idea that an author may construct a message using one such code number or a sequence of such code numbers and then the code numbers could be used by a software system in the computer of a recipient of the message in conjunction with a small database of code numbers and the text of the sentences in a chosen language so as to produce a localized message displayed for the recipient. For example, suppose that there were only two sentences from which to choose and that these have been encoded as sentences 21011 and 21012. The English database would contain the following. 21011 It is raining. 21012 It is snowing. The French database would contain the following. 21011 Il pleut. 21012 Il neige. The database could be translated into as many languages as desired and possible. So, if someone whose preferred language is English is authoring a message and wishes to send the message "It is raining." then he or she looks throgh the database using whatever search tools that are available at his or her location and encodes the message as 21011 and then sends it. So, if someone whose preferred language is French receives the message then the text "Il pleut." can be displayed automatically. So, if there were more sentences than that and also sentences with a parameter such as for "The temperature here is P1 degrees Celsius." where the value of parameter 1 is sent as a digit string (possibly including a decimal point) to accompany the 21852 code of the parameterized sentence, and that list of sentences were available in many languages, then, for example, weather information could be broadcast on a pan-European basis on an interactive television channel and localized automatically in interactive televisions in, for example, England, France, Italy, Finland and Latvia. As to how to encode such a system, well there are various possibilities. I started off using a deliberately unusual yet valid sequence of regular Unicode characters to act as a key that would be most unlikely to occur in any other use context, namely a comet, a circumflex accent and an enclosing keycap design. I have also looked at using Unicode Private Use Area characters. It has been suggested to me that XML would be the best approach, though I have reservations as I would like a system where a short sequence could be added into a plain text file without having to restructure the whole document, however I am unsure of that so it is possible that XML would be the way to go. I am wondering whether the technique, whether using the key or the Private Use Area codes, or using XML, or otherwise, could be useful for autolocalizing some part of the education process. For example, a sentence such as "Please tell your teacher that you have now completed the task." and such as "You have chosen the correct answer." and "Well done.". I did a little with the idea theoretically some time ago. I never got it beyond English! A later development was to incorporate the LOCODE concept so as to specify names of places that were to be localized, such as the way Firenze is expressed as Florence in English and London is expressed as Londres in French. William Overington 17 March 2006 Light-wight scripting language (Lua?) Instead of measuring Python memory usage, why not provide a lighter scripting language from day one? Lua would fit this kind of a machine perfectly; it's been tested and used in embedded, as well as games development for years. Lua 5.1 has a module system, which allows it to be used as a system (end application) programming platform just as Python and/or Ruby. Only, it weighs no more than 100kB in the binary (with basic modules, no bindings). People interested in bringing full OLPC programming support for Lua, please contact me for future plans. I already have a project going for Cairo & Lua, OLPC could simply be its "physical" incarnation? :) This is not to say Python on the device is pointless, it is not. But both solutions can co-exist, and provide a term of healthy competition with each other, that's all. :) Asko Kauppi 10:18, 18 Jul 2006 (EET) - I think use of Python for Sugar, etc is pretty set in stone, with C (for certain applications) being the only alternative anyone has spoken with any kind of seriousness. --SamatJain 11:30, 18 July 2006 (EDT) - That's what I'd like to challenge, or rather extend (Python may well have the top position, no problems with that). My current standing is to observe the project, and kick in a Lua API binding if/when I see the time is ripe. Can be done as a one-man work, I'm sure. :) --Asko Kauppi 23:30, 18 July 2006 (EET) Why do We Assume there will Always be a School Server Nearby? It will ultimately limit distribution if OLPC can only be used in areas with organized education. To plan for as broad a distribution as possible, expect isolated OLPC's, and plan on running successfully in meshes without a school hub, or even without any other mesh participants. - I would assume that the first or initial drive/waves will try to benefit/reuse whatever infrastructure is already available—school's infrastructure & people mainly; so it seems a reasonable assumption. - As the laptop deployment process pushes farther and farther, to where no laptop has gone before, setting the server (afaik, could be a laptop configured differently plus some extra hardware) within a town limits (ie: the elder's meeting place or the house of a local 'responsible' person—remember that you need the 'handler' for activation and other issues) should not pose too many differences... except that currently the national education bureaucracies know who the principal of a given school is, but ignore the social dynamics of a town they are not in contact with... --Xavi 06:40, 8 March 2007 (EST) > It will ultimately limit distribution if OLPC can only be used in areas with organized education. ... expect isolated OLPC's, ... - This could help.. Regards, ... Peasthope 14:49, 2 November 2011 (UTC)
http://wiki.laptop.org/index.php?title=Talk:System_software&oldid=265665
CC-MAIN-2014-49
refinedweb
3,323
61.16
The Hadoop archiving tool can be invoked using the following command: hadoop archive -archiveName name -p <parent> <src>* <dest> Where -archiveName is the name of the archive you would like to create. The archive name should be given a .har extension. The <parent> argument is used to specify the relative path to the location where the files are to be archived in the HAR. Example. Archiving does not delete the source files. If you would like to delete the input files after creating an archive to reduce namespace, you must manually delete the source files. Although the hadoop archive command can be run from the host file system, the archive file is created in the HDFS file system from directories that exist in HDFS. If you reference a directory on the host file system rather than in HDFS, you will get the following error: The resolved paths set is empty. Please check whether the srcPaths exist, where srcPaths = [</directory/path>] To create the HDFS directories used in the preceding example, use the following series of commands: hdfs dfs -mkdir /user/zoo hdfs dfs -mkdir /user/hadoop hdfs dfs -mkdir /user/hadoop/dir1 hdfs dfs -mkdir /user/hadoop/dir2
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_hdfs-administration/content/creating_hadoop_archive.html
CC-MAIN-2018-39
refinedweb
198
55.68
I've looked over other questions/answers but I just can't find one that addresses my issue. I'm having trouble initiating Bootstrap Tour on a multipage tour once I get to the second page. I have the tour start with a click event and localStorage false $(document).ready(function () { // Instance the tour var tour = new Tour({ name: "CiteTour", steps: [{ element: "", title: "#1", content: "You can find help with formatting citations on the Guide page. Click 'Next' to go there now.", placement: "" }, { element: "#CiteTour2", title: "#2 - Citation Resources", content: "There are several options for getting help with formatting citations. Once on the Guide page, look for the box labeled 'Citation Help.'", placement: "right", path: "/newpath", onNext: function (tour) { tour.init(); tour.restart(); } }, { element: "#CiteTour3", title: "#3", content: "This site can help format your research paper and references in APA, MLA, and the other major citation formats.", placement: "right", }, { element: "#AskTour1", title: "#4 - Ask-a-Librarian", content: "If you still have questions about citations or citation format, feel free to contact the librarians. Good luck!", placement: "left", }], storage: false, }); // Initialize the tour tour.init(); $('#CiteTour-go').on('click', function () { // Start the tour tour.start(); }); }); First, you must make sure you are using storage: window.localStorage, which uses the Storage API. This is the default tour option, so all you have to do is not override it to false as you have done. What this does is allow Bootstrap Tour to persist the current step information across multiple pages within the same domain. Want Proof? - Open up your dev tools and see: Second, if you are specifying the path option for any step, you should specify it for all steps. When a single page tour starts, it doesn't need to worry about navigating to different pages, but as soon as you've moved to a new page, if you haven't specified paths for previous steps, bootstrap tour has no way of knowing where to navigate back to. Furthermore, you need to use an absolute-path reference by prefxing the url with a single slash so it is relative to the root directory. If you use relative paths, the path will be changed as you move through pages/steps. For more info, see my section at the bottom on the Infinite Page Refreshing Issue Third, as long as you define the tour object and initialize it, the tour will pickup automatically on a new page. Let's look at a simplified version of what init() does: Tour.prototype.init = function(force) { // other code omitted for brevity if (this._current !== null) { this.showStep(this._current); } }; So once you've initialized the tour, as long as it notices that a tour has started and has not yet ended (i.e. it has a current step), it will automatically start-up that step. So you don't need to initialize by tapping into the onNext event on your second step. Editable Plunk | Runnable Demo $(function() { // define tour var tour = new Tour({ steps: [{ path: "/index.html", element: "#my-element", title: "Title of my step", content: "Content of my step" }, { path: "/newPage.html", element: "#my-other-element", title: "Title of my step", content: "Content of my step" }] }); // init tour tour.init(); // start tour $('#start-tour').click(function() { tour.restart(); }); }); <!DOCTYPE html> <html> <head> <title>Multipage Bootstrap Tour - Page 1</title> <link rel="stylesheet" href="bootstrap.css" /> <link rel="stylesheet" href="bootstrap-tour.min.css"> </head> <body> <div class="container"> <h1>First Page</h1> <button class="btn btn-lg btn-primary" id="start-tour"> Start Tour </button><br/><br/> <span id="my-element"> My First Element </span> </div> <script src="jquery.min.js"></script> <script src="bootstrap.js"></script> <script src="bootstrap-tour.min.js"></script> <script src="script.js"></script> </body> </html> <!DOCTYPE html> <html> <head> <title>Multipage Bootstrap Tour - Page 2</title> <link rel="stylesheet" href="bootstrap.css" /> <link rel="stylesheet" href="bootstrap-tour.min.css"> </head> <body> <div class="container"> <h1>New Page</h1> <span id="my-other-element"> My Second Elemennt </span> </div> <script src="jquery.min.js"></script> <script src="bootstrap.js"></script> <script src="bootstrap-tour.min.js"></script> <script src="script.js"></script> </body> </html> Where you've brought in the following libraries: In a lot of configurations, you'll get into a loop where the page infinitely refreshes, continually attempting to resolve to the path of the current step. Here's a look into why this issue occurs and how to fix it. How does Bootstrap Tour go to the next step? When you hit the Next Button, the tour will call showStep(i) for the next step Here's a simplified version of showStep: Tour.prototype.showStep = function (i) { // other code omitted for brevity // get step path path = tour._options.basePath + step.path; // get current path - join location and hash current_path = [document.location.pathname, document.location.hash].join(''); // determine if we need to redirect and do so if (_this._isRedirect(path, current_path)) { _this._redirect(step, path); return; } }; So, if the current path in the document is different than the path for the next step, then tour will automatically redirect to the next step. Here's a simplified form of the redirection that just takes into account string values: I've omitted regex based paths although Bootstrap Tour also supports them Tour.prototype._isRedirect = function(path, currentPath) { var checkPath = path.replace(/\?.*$/, '').replace(/\/?$/, ''); var checkCurrent = currentPath.replace(/\/?$/, ''); return (checkPath !== checkCurrent); }; Tour.prototype._redirect = function(step, path) { this._debug("Redirect to " + path); return document.location.href = path; }; Note: The regex is just there to remove query parameters ( /\?.*$/) and trailing forward slashes (//?$/`) When any page loads, it's not sure if Bootstrap Tour has redirected it, or you're just coming back and trying to pickup the tour where you left off. So on any page when you initialize the tour: In other words, it knows how to get to where it needs to go next, but has no way of confirming if that's the case once it gets there. Take this situation for example with a step that looks like this: var step = { path: "index.html", element: "#my-element", title: "Title of my step", content: "Content of my step" } It can be redirected to the relative reference just fine, but when the page loads again, and checks that it has been loaded at the correct address, this will happen: "KyleMit", you might protest, "can't it just figure out what I want?" If you rely on relative paths for redirection, when it's loading a step, it can't gaurantee that you've actually arrived at the step and it will try to redirect you again. That's because, in web addresses, "index.html" !== "\index.html". They are two different paths! One is guaranteed to be at the domain root, while the other could be anywhere. Imagine you have some nested views like this: When navigating between pages, how can bootstrap know if it's arrived at the correct destination if you've only told it the correct page name. Which brings us to the resolution of this issue: Tip: Get a better sense of what's going on by passing in debug:truewhen creating your tour, which will log every redirect:
https://codedump.io/share/Lhd0USKvlsce/1/bootstrap-tour---initiating-multipage
CC-MAIN-2017-04
refinedweb
1,203
56.15
If you were excited about Node.js, Vert.x could be the next big thing for you: a similarly architected enterprise system that is built on the JVM. This installment of the Open source Java projects series introduces Vert.x with two hands-on examples based on the newly released Vert.x 2.0: First, build a simple Vert.x web server, then discover for yourself how the Vert.x event bus handles publish/subscribe and point-to-point messaging for effective enterprise integration. When Node.js emerged a few years ago, many developers were excited about its unusual approach to building scalable server-side applications. Rather than starting heavyweight containers that would service requests using multiple threads, Node.js starts multiple lightweight, single-threaded servers and routes traffic to them. Now a similar framework has emerged, which deploys servers inside a JVM, using JVM facilities to manage traffic to lightweight server processes. In this installment of the Open source Java projects series you'll learn about Vert.x, an event-driven framework similar to Node.js, that builds on the JVM and also extends it in some important new ways. Highlights of Vert.x Vert.x applications are event-driven, asynchronous, and single-threaded. Vert.x processes communicate via an event bus, which is a built-in piece of Vert.x's event-driven architecture. Combining asynchronous processing, single-threaded components, and an event bus yields a high degree of scalability, and writing single-threaded applications can be a relief for Java developers accustomed to multithreaded concurrency. Arguably, the best part of Vert.x is its modular JVM-based architecture. Vert.x applications can run on virtually any operating system, and they can be written using any supported JVM-compatible programming language. A Vert.x application can be written entirely in a single language, or it could be a mash-up of modules written in different programming languages. Vert.x modules are integrated on the Vert.x event bus. Event-based programming in Vert.x Like other tools and frameworks I've recently covered in this series, Vert.x speaks the language of modern enterprise development, but puts its own emergent spin on familiar technology. Vert.x's event-based programming model is a mix of standard and unique features. Vert.x applications are largely written by defining event-handlers, which do things like manage HTTP requests and pass messages through the event bus. Unlike traditional event-based applications, however, Vert.x applications are guaranteed not to block. Rather than opening a socket to a server, requesting a resource, and then waiting (blocking) for the response, Vert.x sends the response to your application asynchronously, via an event handler. Vert.x's programming framework includes some vernacular that will be helpful to know when you work through the two demo applications later in this article: - A verticle is the unit of deployment in Vert.x. Every verticle contains a main method that starts it. An application may be a single verticle or may consist of multiple verticles that communicate with one another via the event bus. - Verticles run inside of a Vert.x instance. Each Vert.x instance runs in its own JVM instance and can host multiple verticles. A Vert.x instance ensures that verticles are isolated from each other by running each one in its own classloader, so that there is no risk of one instance modifying another's static variables. A host may run a single Vert.x instance or multiple ones. - A Vert.x instance guarantees that each verticle instance is always executed in the same thread. Concurrency in Vert.x is single-threaded. - Internally, Vert.x instances maintain a set of threads (typically one for each CPU core) that executes in an event loop: check to see if there's work to do, do it, and go to sleep. - Verticles communicate by passing messages using an event bus. This message-passing strategy closely resembles the Actor model employed by the Akka framework, which I profiled in May 2013. - While you might assume that shared data and scalability are diametrically opposed, that's only true when data is mutable. Vert.x provides a shared map and a shared-set facility for passing immutable data across verticles running in the same Vert.x instance. - Vert.x uses relatively few threads to create an event loop and execute verticles. But in some cases a verticle needs to do something either computationally expensive, or that might block, such as connecting to a database. When this happens Vert.x allows you to mark a verticle instance as a worker verticle, in which case it will be executed by a background thread pool. Vert.x ensures that worker verticles will never be executed concurrently, so you want to keep them to a minimum, but they are there to help you when you need them. Figure 1 shows the architecture of a Vert.x system consisting of Vert.x instances, verticles, JVMs, the server host, and the event bus. Figure 1. Architecture of a Vert.x system (click to enlarge) Vert.x functionality can be divided into two categories: core services and modules. Core services are services that can be directly called from a verticle and include clients and servers for TCP/SSL, HTTP, and web sockets; services to access the Vert.x event bus; timers, buffers, flow control, file system access, shared maps and sets, logging, access configuration, SockJS servers, and deploying and undeploying verticles. Core services are fairly static and not expected to change, so all other functionality is provided by modules. Vert.x applications and resources can easily be packaged into modules and shared via the Vert.x public module repository. Interacting with modules is asynchronous via the Vert.x event bus: send a module a message in JSON and your application will receive a response. This decoupling between modules and integration through the service bus means that modules can be written in any supported language and used by any other supported language. So, if someone writes a module in Ruby that you want to use in your Java application, nothing is stopping you from doing it! More articles in the Open source Java projects series Write a Java-based Vert.x web server We'll start getting to know Vert.x by setting up an environment that we can use to develop our two examples for this article: a basic web server application and a message-passing system. First download Vert.x; as of this writing the latest version is 2.0.0.final. Decompress it locally and add its bin folder to your PATH. Note that you will need to install Java 7 if you haven't already. If you're a Maven person like me, then you can simply add the following dependencies to your POM file: Listing 1. Maven POM for Vert.x <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-core</artifactId> <version>2.0.0-final</version> </dependency> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-platform</artifactId> <version>2.0.0-final</version> </dependency> Vert.x's "Hello, World" application is a web server, a good starter application for getting to know Vert.x's event-based programming model. The in-house Vert.x tutorial demonstrates how to create a web server that serves content from a directory called webroot with just a few lines of code. I've written a Java-based variation on the demo as an introductory exercise. Listing 2 shows the contents of my Server.java file, which is very similar to the one found on the Vert.x homepage. Download the source code for this article to see the complete file. Listing 2. Server.java package com.geekcap.vertxexamples; import org.vertx.java.core.Handler; import org.vertx.java.core.http.HttpServerRequest; import org.vertx.java.deploy.Verticle; public class Server extends Verticle { public void start() { vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() { public void handle(HttpServerRequest req) { String file = req.path.equals("/") ? "index.html" : req.path; req.response.sendFile("webroot/" + file); } }).listen(8080); } } The first few lines in Listing 2 import the required Vert.x classes: - Handler is the base class for all handlers; in short: something happened asynchronously, so handle it! - HttpServerRequest represents a server-side HTTP request in Vert.x. An instance of this class will be created for each request that is handled by the server, then passed to your application via the Handlerinstance (which you will have registered with the HttpServer). - Verticle is the primary unit of deployment in a Vert.x application. In order to use Vert.x, you need to extend the Verticleclass and override the start()method, which is the entry-point to your Verticle. See the Vert.x Javadoc to learn more about these classes. Notice the vertx variable in Listing 2: What is it for? The Verticle class defines vertx as a protected member variable (that your Verticle inherits), which provides access to the Vert.x runtime. The vertx variable is of type Vertx, which exposes the following methods: createHttpClient()creates an HTTP/HTTPS client createHttpServer()creates an HTTP/HTTPS server createNetClient()creates a TCP/SSL client createNetServer()creates a TCP/SSL server creatSockJSServer()creates a SockJS server that wraps an HTTP server eventBus()provides your application access to the event bus fileSystem()provides your application access to the file system sharedData()provides your application access to the shared data object, which can be used to share data between Verticles The code in Listing 2 creates a new HTTP server, retrieves its request-handler reference, and sets the request handler to a newly created HttpServerRequest handler. The Handle interface defines a method named handler() and uses generics to define the type of instance that is passed to it in the class definition; in this case HttpServerRequest. The HttpServerRequest then defines the following fields: - method is a Stringcontaining the method of the given request, such as GET, PUT, DELETE, and so forth. - path is a Stringcontaining the requested path, such as /index.html. - query is a Stringcontaining the query parameters, such as the part that follows the question mark in the following: ?key=value. - response is a reference to an HttpServerResponsethat represents the response to the HTTP request. - uri is the complete URI of the request. Listing 2 completes by mapping an empty request -- " /" -- to index.html, and then invoking the HttpServerResponse's sendFile() method to tell Vert.x to stream the specified file back to the caller. In summary, the Server class accesses the Vert.x runtime, asks it to create a new HTTP server, and registers a Handler (that expects an HttpServerRequest variable) with the HttpServer. In the Handler's handle() method, the Server class serves files from the filesystem located in the webroot directory, which is relative to wherever you launched the Server Verticle from. Building the web server Let's build the sample application, then we'll use Vert.x to execute it. The Maven POM file for this project is shown in Listing 3.
https://www.javaworld.com/article/2078838/mobile-java/open-source-java-projects-vert-x.html
CC-MAIN-2018-30
refinedweb
1,839
58.08
Evan Kirkconnell Mon, 27 Aug 2007 13:36:38 -0700 Are you creating the problem elements with code or are they in your loaded document? Jon Brisbin wrote: > Searched in the archives and didn't find what I was looking for (but not > really sure what I'm looking for, so don't throw things at me if this > has been asked before). > > I have an XML template that I read into a DOM4J (using 1.6) document > when I begin processing. The default namespace is set in the document > and I set it again, just for good measure: > > SAXParser p = SAXParserFactory.newInstance().newSAXParser(); > > DocumentFactory df = DocumentFactory.getInstance(); > df.setXPathNamespaceURIs( namespaces ); > > SAXContentHandler ch = new SAXContentHandler(); > p.parse( ClassLoader.getSystemResourceAsStream( TEMPLATE_FILE ), ch ); > > Document doc = ch.getDocument(); > doc.getRootElement().addNamespace( "", EFILE_NS ); > > > When I process the raw data, I do XPath lookups on nodes in the template > and set them like so: > > XPath xp = root.createXPath( xpath ); > xp.setNamespaceURIs( namespaces ); > Element e = (Element) xp.selectSingleNode( root ); > if ( e != null ) { > e.setText( value ); > return e; > } else { > return root; > } > > But when the file is generated, some of the elements below the root > element (but not all of them) have namespaces set to '' (blank). This > causes the other party's XML processing to blow up with the error that > the element with the blank namespace isn't in the E-file namespace, like > it's supposed to be. > > If the namespace is set in the template, then I would expect it work > correctly when I've read that document in and set the namespaces > accordingly. That's not happening and some elements in the DOM are being > "taken out" of the default namespace. > > Can someone shed some light on why this is happening? > > -------------------------------------------------------------------------
http://www.mail-archive.com/dom4j-user%40lists.sourceforge.net/msg02660.html
crawl-001
refinedweb
287
55.84
Distributed Pandas on a Cluster with Dask DataFrames This work is supported by Continuum Analytics the XDATA Program and the Data Driven Discovery Initiative from the Moore Foundation Summary Dask Dataframe extends the popular Pandas library to operate on big data-sets on a distributed cluster. We show its capabilities by running through common dataframe operations on a common dataset. We break up these computations into the following sections: - Introduction: Pandas is intuitive and fast, but needs Dask to scale - Read CSV and Basic operations - Read CSV - Basic Aggregations and Groupbys - Joins and Correlations - Shuffles and Time Series - Parquet I/O - Final thoughts - What we could have done better Accompanying Plots Throughout this post we accompany computational examples with profiles of exactly what task ran where on our cluster and when. These profiles are interactive Bokeh plots that include every task that every worker in our cluster runs over time. For example the following computation read_csv computation produces the following profile: >>> df = dd.read_csv('s3://dask-data/nyc-taxi/2015/*.csv') If you are reading this through a syndicated website like planet.python.org or through an RSS reader then these plots will not show up. You may want to visit directly. Dask.dataframe breaks up reading this data into many small tasks of different types. For example reading bytes and parsing those bytes into pandas dataframes. Each rectangle corresponds to one task. The y-axis enumerates each of the worker processes. We have 64 processes spread over 8 machines so there are 64 rows. You can hover over any rectangle to get more information about that task. You can also use the tools in the upper right to zoom around and focus on different regions in the computation. In this computation we can see that workers interleave reading bytes from S3 (light green) and parsing bytes to dataframes (dark green). The entire computation took about a minute and most of the workers were busy the entire time (little white space). Inter-worker communication is always depicted in red (which is absent in this relatively straightforward computation.) Introduction Pandas provides an intuitive, powerful, and fast data analysis experience on tabular data. However, because Pandas uses only one thread of execution and requires all data to be in memory at once, it doesn’t scale well to datasets much beyond the gigabyte scale. That component is missing. Generally people move to Spark DataFrames on HDFS or a proper relational database to resolve this scaling issue. Dask is a Python library for parallel and distributed computing that aims to fill this need for parallelism among the PyData projects (NumPy, Pandas, Scikit-Learn, etc.). Dask dataframes combine Dask and Pandas to deliver a faithful “big data” version of Pandas operating in parallel over a cluster. I’ve written about this topic before. This blogpost is newer and will focus on performance and newer features like fast shuffles and the Parquet format. CSV Data and Basic Operations I have an eight node cluster on EC2 of m4.2xlarges (eight cores, 30GB RAM each). Dask is running on each node with one process per core. We have the 2015 Yellow Cab NYC Taxi data as 12 CSV files on S3. We look at that data briefly with s3fs >>> import s3fs >>> s3 = S3FileSystem() >>> s3.ls('dask-data/nyc-taxi/2015/') ['dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-02.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-03.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-04.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-05.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-06.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-07.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-08.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-09.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-10.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-11.csv', 'dask-data/nyc-taxi/2015/yellow_tripdata_2015-12.csv'] This data is too large to fit into Pandas on a single computer. However, it can fit in memory if we break it up into many small pieces and load these pieces onto different computers across a cluster. We connect a client to our Dask cluster, composed of one centralized dask-scheduler process and several dask-worker processes running on each of the machines in our cluster. from dask.distributed import Client client = Client('scheduler-address:8786') And we load our CSV data using dask.dataframe which looks and feels just like Pandas, even though it’s actually coordinating hundreds of small Pandas dataframes. This takes about a minute to load and parse. import dask.dataframe as dd df = dd.read_csv('s3://dask-data/nyc-taxi/2015/*.csv', parse_dates=['tpep_pickup_datetime', 'tpep_dropoff_datetime'], storage_options={'anon': True}) df = client.persist(df) This cuts up our 12 CSV files on S3 into a few hundred blocks of bytes, each 64MB large. On each of these 64MB blocks we then call pandas.read_csv to create a few hundred Pandas dataframes across our cluster, one for each block of bytes. Our single Dask Dataframe object, df, coordinates all of those Pandas dataframes. Because we’re just using Pandas calls it’s very easy for Dask dataframes to use all of the tricks from Pandas. For example we can use most of the keyword arguments from pd.read_csv in dd.read_csv without having to relearn anything. This data is about 20GB on disk or 60GB in RAM. It’s not huge, but is also larger than we’d like to manage on a laptop, especially if we value interactivity. The interactive image above is a trace over time of what each of our 64 cores was doing at any given moment. By hovering your mouse over the rectangles you can see that cores switched between downloading byte ranges from S3 and parsing those bytes with pandas.read_csv. Our dataset includes every cab ride in the city of New York in the year of 2015, including when and where it started and stopped, a breakdown of the fare, etc. >>> df.head() Basic Aggregations and Groupbys As a quick exercise, we compute the length of the dataframe. When we call len(df) Dask.dataframe translates this into many len calls on each of the constituent Pandas dataframes, followed by communication of the intermediate results to one node, followed by a sum of all of the intermediate lengths. >>> len(df) 146112989 This takes around 400-500ms. You can see that a few hundred length computations happened quickly on the left, followed by some delay, then a bit of data transfer (the red bar in the plot), and a final summation call. More complex operations like simple groupbys look similar, although sometimes with more communications. Throughout this post we’re going to do more and more complex computations and our profiles will similarly become more and more rich with information. Here we compute the average trip distance, grouped by number of passengers. We find that single and double person rides go far longer distances on average. We acheive this one big-data-groupby by performing many small Pandas groupbys and then cleverly combining their results. >>> df.groupby(df.passenger_count).trip_distance.mean().compute() passenger_count 0 2.279183 1 15.541413 2 11.815871 3 1.620052 4 7.481066 5 3.066019 6 2.977158 9 5.459763 7 3.303054 8 3.866298 Name: trip_distance, dtype: float64 As a more complex operation we see how well New Yorkers tip by hour of day and by day of week. df2 = df[(df.tip_amount > 0) & (df.fare_amount > 0)] # filter out bad rows df2['tip_fraction'] = df2.tip_amount / df2.fare_amount # make new column dayofweek = (df2.groupby(df2.tpep_pickup_datetime.dt.dayofweek) .tip_fraction .mean()) hour = (df2.groupby(df2.tpep_pickup_datetime.dt.hour) .tip_fraction .mean()) We see that New Yorkers are generally pretty generous, tipping around 20%-25% on average. We also notice that they become very generous at 4am, tipping an average of 38%. This more complex operation uses more of the Dask dataframe API (which mimics the Pandas API). Pandas users should find the code above fairly familiar. We remove rows with zero fare or zero tip (not every tip gets recorded), make a new column which is the ratio of the tip amount to the fare amount, and then groupby the day of week and hour of day, computing the average tip fraction for each hour/day. Dask evaluates this computation with thousands of small Pandas calls across the cluster (try clicking the wheel zoom icon in the upper right of the image above and zooming in). The answer comes back in about 3 seconds. Joins and Correlations To show off more basic functionality we’ll join this Dask dataframe against a smaller Pandas dataframe that includes names of some of the more cryptic columns. Then we’ll correlate two derived columns to determine if there is a relationship between paying Cash and the recorded tip. >>> payments = pd.Series({1: 'Credit Card', 2: 'Cash', 3: 'No Charge', 4: 'Dispute', 5: 'Unknown', 6: 'Voided trip'}) >>> df2 = df.merge(payments, left_on='payment_type', right_index=True) >>> df2.groupby(df2.payment_name).tip_amount.mean().compute() payment_name Cash 0.000217 Credit Card 2.757708 Dispute -0.011553 No charge 0.003902 Unknown 0.428571 Name: tip_amount, dtype: float64 We see that while the average tip for a credit card transaction is $2.75, the average tip for a cash transaction is very close to zero. At first glance it seems like cash tips aren’t being reported. To investigate this a bit further lets compute the Pearson correlation between paying cash and having zero tip. Again, this code should look very familiar to Pandas users. zero_tip = df2.tip_amount == 0 cash = df2.payment_name == 'Cash' dd.concat([zero_tip, cash], axis=1).corr().compute() So we see that standard operations like row filtering, column selection, groupby-aggregations, joining with a Pandas dataframe, correlations, etc. all look and feel like the Pandas interface. Additionally, we’ve seen through profile plots that most of the time is spent just running Pandas functions on our workers, so Dask.dataframe is, in most cases, adding relatively little overhead. These little functions represented by the rectangles in these plots are just pandas functions. For example the plot above has many rectangles labeled merge if you hover over them. This is just the standard pandas.merge function that we love and know to be very fast in memory. Shuffles and Time Series Distributed dataframe experts will know that none of the operations above require a shuffle. That is we can do most of our work with relatively little inter-node communication. However not all operations can avoid communication like this and sometimes we need to exchange most of the data between different workers. For example if our dataset is sorted by customer ID but we want to sort it by time then we need to collect all the rows for January over to one Pandas dataframe, all the rows for February over to another, etc.. This operation is called a shuffle and is the base of computations like groupby-apply, distributed joins on columns that are not the index, etc.. You can do a lot with dask.dataframe without performing shuffles, but sometimes it’s necessary. In the following example we sort our data by pickup datetime. This will allow fast lookups, fast joins, and fast time series operations, all common cases. We do one shuffle ahead of time to make all future computations fast. We set the index as the pickup datetime column. This takes anywhere from 25-40s and is largely network bound (60GB, some text, eight machines with eight cores each on AWS non-enhanced network). This also requires running something like 16000 tiny tasks on the cluster. It’s worth zooming in on the plot below. >>> df = c.persist(df.set_index('tpep_pickup_datetime')) This operation is expensive, far more expensive than it was with Pandas when all of the data was in the same memory space on the same computer. This is a good time to point out that you should only use distributed tools like Dask.datframe and Spark after tools like Pandas break down. We should only move to distributed systems when absolutely necessary. However, when it does become necessary, it’s nice knowing that Dask.dataframe can faithfully execute Pandas operations, even if some of them take a bit longer. As a result of this shuffle our data is now nicely sorted by time, which will keep future operations close to optimal. We can see how the dataset is sorted by pickup time by quickly looking at the first entries, last entries, and entries for a particular day. >>> df.head() # has the first entries of 2015 >>> df.tail() # has the last entries of 2015 >>> df.loc['2015-05-05'].head() # has the entries for just May 5th Because we know exactly which Pandas dataframe holds which data we can execute row-local queries like this very quickly. The total round trip from pressing enter in the interpreter or notebook is about 40ms. For reference, 40ms is the delay between two frames in a movie running at 25 Hz. This means that it’s fast enough that human users perceive this query to be entirely fluid. Time Series Additionally, once we have a nice datetime index all of Pandas’ time series functionality becomes available to us. For example we can resample by day: >>> (df.passenger_count .resample('1d') .mean() .compute() .plot()) We observe a strong periodic signal here. The number of passengers is reliably higher on the weekends. We can perform a rolling aggregation in about a second: >>> s = client.persist(df.passenger_count.rolling(10).mean()) Because Dask.dataframe inherits the Pandas index all of these operations become very fast and intuitive. Parquet Pandas’ standard “fast” recommended storage solution has generally been the HDF5 data format. Unfortunately the HDF5 file format is not ideal for distributed computing, so most Dask dataframe users have had to switch down to CSV historically. This is unfortunate because CSV is slow, doesn’t support partial queries (you can’t read in just one column), and also isn’t supported well by the other standard distributed Dataframe solution, Spark. This makes it hard to move data back and forth. Fortunately there are now two decent Python readers for Parquet, a fast columnar binary store that shards nicely on distributed data stores like the Hadoop File System (HDFS, not to be confused with HDF5) and Amazon’s S3. The already fast Parquet-cpp project has been growing Python and Pandas support through Arrow, and the Fastparquet project, which is an offshoot from the pure-python parquet library has been growing speed through use of NumPy and Numba. Using Fastparquet under the hood, Dask.dataframe users can now happily read and write to Parquet files. This increases speed, decreases storage costs, and provides a shared format that both Dask dataframes and Spark dataframes can understand, improving the ability to use both computational systems in the same workflow. Writing our Dask dataframe to S3 can be as simple as the following: df.to_parquet('s3://dask-data/nyc-taxi/tmp/parquet') However there are also a variety of options we can use to store our data more compactly through compression, encodings, etc.. Expert users will probably recognize some of the terms below. df = df.astype({'VendorID': 'uint8', 'passenger_count': 'uint8', 'RateCodeID': 'uint8', 'payment_type': 'uint8'}) df.to_parquet('s3://dask-data/nyc-taxi/tmp/parquet', compression='snappy', has_nulls=False, object_encoding='utf8', fixed_text={'store_and_fwd_flag': 1}) We can then read our nicely indexed dataframe back with the dd.read_parquet function: >>> df2 = dd.read_parquet('s3://dask-data/nyc-taxi/tmp/parquet') The main benefit here is that we can quickly compute on single columns. The following computation runs in around 6 seconds, even though we don’t have any data in memory to start (recall that we started this blogpost with a minute-long call to read_csv.and Client.persist) >>> df2.passenger_count.value_counts().compute() 1 102991045 2 20901372 5 7939001 3 6135107 6 5123951 4 2981071 0 40853 7 239 8 181 9 169 Name: passenger_count, dtype: int64 Final Thoughts With the recent addition of faster shuffles and Parquet support, Dask dataframes become significantly more attractive. This blogpost gave a few categories of common computations, along with precise profiles of their execution on a small cluster. Hopefully people find this combination of Pandas syntax and scalable computing useful. Now would also be a good time to remind people that Dask dataframe is only one module among many within the Dask project. Dataframes are nice, certainly, but Dask’s main strength is its flexibility to move beyond just plain dataframe computations to handle even more complex problems. Learn More If you’d like to learn more about Dask dataframe, the Dask distributed system, or other components you should look at the following documentation: The workflows presented here are captured in the following notebooks (among other examples): What we could have done better As always with computational posts we include a section on what went wrong, or what could have gone better. - The 400ms computation of len(df)is a regression from previous versions where this was closer to 100ms. We’re getting bogged down somewhere in many small inter-worker communications. - It would be nice to repeat this computation at a larger scale. Dask deployments in the wild are often closer to 1000 cores rather than the 64 core cluster we have here and datasets are often in the terrabyte scale rather than our 60 GB NYC Taxi dataset. Unfortunately representative large open datasets are hard to find. - The Parquet timings are nice, but there is still room for improvement. We seem to be making many small expensive queries of S3 when reading Thrift headers. - It would be nice to support both Python Parquet readers, both the Numba solution fastparquet and the C++ solution parquet-cpp blog comments powered by Disqus
http://matthewrocklin.com/blog/work/2017/01/12/dask-dataframes
CC-MAIN-2020-29
refinedweb
2,977
55.95
Why you should avoid using Python Lists? Introduction Yes, you heard that right, you should avoid using Python lists. Even though they might be arguably the most popular of the Python containers, a Python List has so much more going on behind the curtains. Lists are so popular because of their diverse usage. A list of integers can be created like this: Not just integers, we can create lists with strings too: L2 = [ str(c) for c in L ] L2[ : 5 ] > [ "0", "1", "2", "3", "4" ] type(L2[0]) > str Since Python is dynamically typed, we can create mixed lists also: L3 = [ True, 3, "6", 9.0 ] [ type(ele) for ele in L3 ] > [ bool, int, str, float ] But all this flexibility does not come for free. First, to understand why we should avoid using lists, we have to look under the hood to see how Python actually works. Understanding Python Data Types To become efficient in Data-driven programming and computation requires a deep understanding of how data is stored and manipulated, and as a Data Scientist, this will help you in the long run. More and more programmers are drawn to Python because of its ease of use, one piece of which is Dynamic typing. While in statically typed languages like C++ or Java, all the variables have to be declared explicitly, a dynamically typed language like Python skips this step. Let us take a code snippet in C++ : int sum = 0 ; for( int i = 0 ; i <= 100 ; i++ ) sum += i ; court<< sum ; > 5050 The same program can be written in Python like: sum = 0 for i in range(101): sum += i print( sum ) > 5050 The main difference we can see here is that in C++, all the variable types have to be explicitly declared, whereas in Python the types are dynamically inferred. Thus we can assign any type of data to any variable. This flexibility points out that a Python variable is more than just a value, it contains extra information about its type. Let’s explore that in the next section. Why Python Integer is not just an Integer The Python interpreter is itself written in C, thus all the Python objects are a disguised version of C structures, therefore it contains not only its value but other information as well. For example, if we declare a variable in Python like: x = 10 x is not just a raw integer, but rather a pointer to a compound C structure containing several values. If we dig deeper, we can find out how this C structure looks like. struct _longobject { long ob_refcnt; PyTypeObject *ob_type; size_t ob_size; long ob_digit[1]; }; Thus it contains 4 pieces: 1) ob_refcnt, a reference count which handles allocation and deallocation of memory. 2) ob_type, type of variable. 3) ob_size, size of the data members. 4) ob_digit, the actual value the variable represents. All this extra information means more overhead charges in terms of memory and computational power. Thus a Python int object is essentially a pointer to a position in memory containing all the information regarding that variable, including the memory bytes which contain the actual integer value. All this extra information is what lets you code in Python soo freely. Not just integers but all the data types in Python comes with this overhead cost, however, this cost becomes significant in structures that combine many of these objects i.e Lists! Why Python List is not just a List Now let’s consider what happens when we use a standard Python container, consisting of multiple elements. The standard Python container is a list, much like an Array in C, both are mutable, but as we discussed earlier, Lists can be heterogeneous. But this flexibility is quite costly. To be heterogeneous, each of the elements of the list must contain its own type info, reference count, and all the other information as well. In other words, each item is a complete Python object. So if we break it down further, a Python list contains a pointer, which points to another block of pointers, and within that block, all these pointers in turn point to a separate full Python object like the one we saw earlier. This is like one big nesting doll! When all the variables are of the same type, much of this information becomes redundant. Won’t it be more efficient to store data like this in a container that does not have such redundancy or not so useful information, and still retain the useful functionalities of lists? Well, the alternatives I’ve got for you are in the next section. Alternatives for Lists 1) Array Python has a built-in module named ‘array‘ which is similar to arrays in C or C++. In this container, the data is stored in a contiguous block of memory. Just like arrays in C or C++, these arrays only support one data type at a time, therefore it’s not heterogenous like Python lists. The indexing is similar to lists. The type of the array has to be specified using the typecode provided in the official documentation. Here is a short tutorial cum examples of using the array module: from array import array a = array( "l", range(10) ) print( a ) >array( 'l', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ) The “l” we used is to specify the typecode of the array we want i.e signed long. Some of the useful methods that can be used with arrays are: - array.typecode – returns typecode of the array - array.itemsize – returns length in bytes of one array element. - array.append(x) – appends a new element x to the right of the array. - array.count(x) – returns the number of times x occurs in the array. - array.extend(iterable) – appends all the items to the tight of the array. There are many more useful operations and methods about which you can read about in the official documentation here. 2) Numpy Arrays Numpy arrays are even faster than the arrays from the array module. Numpy arrays take up less space than lists since it contains homogenous data. Since the last decade, Python’s popularity increased and thus the need for faster scientific computation was needed. This gave rise to Numpy, which is mainly used for different mathematical calculations. The reasons why NumPy arrays are faster than lists are : - Numpy arrays are homogenous and contiguous, whereas the lists due to their flexibility need much more space and are not contiguous. - In NumPy, all the tasks are broken down into small segments and then all these segments are processed parallelly. - All the NumPy functions and methods are implemented in languages like C, C++, and Fortran, which have very very less execution time than python Here is a nice comparison of time taken for different operations by Python lists and Numpy arrays: So as you can see, one can side with so much more efficiently in terms of memory usage and speed while using alternatives for Lists like arrays and Numpy arrays. Knowing about these small minuscule details is what separates a great Data scientist from a good Data Scientist. if you are looking to optimize your code further, I would suggest you look into the Python module called Numba. Here is my article about Numba, which can make your Python code run 1000X times faster!! – I hope you enjoyed reading the blog as much as I enjoyed writing it, and I hope it helps you in your Data Science journey. Cheers!! The media shown in this article are not owned by Analytics Vidhya and is used at the Author’s discretion. Leave a Reply Your email address will not be published. Required fields are marked *
https://www.analyticsvidhya.com/blog/2021/05/why-you-should-avoid-using-python-lists/
CC-MAIN-2022-40
refinedweb
1,286
59.64
First time here? Check out the FAQ!! Thanks! This is a great solution. Here is an implementation combining the other two ideas here: var('t,z') def op(func,variable,n_max): """ Input a function, a variable, and a number Returns operator applied to function """ output = func for i in range(1,n_max+1): output = variable*output.derivative(variable) - i*output return output If you just want to see the operator, you can apply op to a symbolic function: op sage: symb_f = function('f',t) sage: op(symb_f,t,2) # the first two operators composed t^2*D[0, 0](f)(t) - 2*t*D[0](f)(t) + 2*f(t) sage: op(symb_f,t,3) # the first three operators composed t^3*D[0, 0, 0](f)(t) - 3*t^2*D[0, 0](f)(t) + 6*t*D[0](f)(t) - 6*f(t) Here's how to use it for your example: sage: g = op(z*t*exp(t/z),t,5) sage: g t^5*(t*e^(t/z)/z^4 + 5*e^(t/z)/z^3) - 5*t^4*(t*e^(t/z)/z^3 + 4*e^(t/z)/z^2) + 20*t^3*(t*e^(t/z)/z^2 + 3*e^(t/z)/z) - 60*t^2*(t*e^(t/z)/z + 2*e^(t/z)) - 120*t*z*e^(t/z) + 120*(t*e^(t/z) + z*e^(t/z))*t You can simplify symbolic expressions like this: sage: g.simplify_full() t^6*e^(t/z)/z^4 This is not what I meant. I've added a clarification in my post. Is the clarification clearer?}$.? This is a follow-up to DSM's answer. Just for future reference, I found that I can turn the Trues and Falses into 1's and 0's in the following way: sage: M = Matrix([[1,2,3],[0,4,5],[0,0,6]]) sage: 1 * ( M.numpy() == 0 ) array([[0, 0, 0], [1, 0, 0], [1, 1, 0]]) That's how numpy matrices work, so one possibility is to write: numpy sage: M = Matrix([[1,2,3],[0,4,5],[0,0,6]]) sage: M.numpy() array([[1, 2, 3], [0, 4, 5], [0, 0, 6]]) sage: M.numpy() == 0 array([[False, False, False], [ True, False, False], [ True, True, False]], dtype=bool) sage: M.numpy() == zero_matrix(ZZ, 3) array([[False, False, False], [ True, False, False], [ True, True, False]], dtype=bool)? So, I asked my question just a few minutes too soon. I had two problems: I needed to define the TestRingElement before TestRing, and I needed to specify the Element attribute of TestRing. After the right things are imported, the following code runs as expected: Element class TestRingElement(CombinatorialFreeModuleElement): def __init__(self, M, x): super(TestRingElement, self).__init__(M, x) self.data = x class TestRing(CombinatorialFreeModule): Element = TestRingElement def __init__(self, R, G): super(TestRing, self).__init__(R, G) I am still curious, though, if anyone can answer my other question about passing in lists to the constructor.: a b data TestRingElement? TestRing Great answer! Thanks.: solve?
https://ask.sagemath.org/users/258/shacsmuggler/?sort=recent
CC-MAIN-2021-31
refinedweb
511
55.74
AutoCAD Crack + With Registration Code For Windows (Final 2022) A user works on a drawing in AutoCAD. Anatomy of AutoCAD There are several common ways in which AutoCAD applications can be used. AutoCAD is designed primarily to help the user plan and execute a design project. It can be used to quickly and efficiently prepare and evaluate preliminary drawings for the design stage of a project. It is not an application that is typically used to create detailed drawings. Tools included in AutoCAD are similar to those found in drafting applications. These include various types of drafting commands and tools such as line, arc, circle, text, dimension, and others. When you start the application, a new project is created, unless you have already started a project. The project includes a set of drawings for a particular purpose. It is possible to have more than one project open at a time. Some people use AutoCAD as a reference tool, using the keyboard and mouse. Others use it with a graphics tablet or digital pen. For example, it can be used as a spreadsheet or to take notes on a tablet. Before you begin using AutoCAD, you need to familiarize yourself with it and with AutoCAD terminology. If you have worked with AutoCAD before, there is a section called AutoCADpedia that provides information on AutoCAD terminology, commands, and procedures. Quick Start AutoCAD offers several guided tutorials on using AutoCAD. These tutorials will take you step-by-step through the application’s various features and tools. After you have started, you can follow these tutorial links: What’s New in AutoCAD 2018 The new AutoCAD 2018 software includes many new features and enhancements. This includes a new 3D modeling environment, completely revamped image tools, improved usability, enhanced data management capabilities, an improved user interface, and more. AutoCAD 2018 is available for Windows, Mac, and Linux operating systems. It is available as a desktop app and can be used for both stand-alone and cloud based work. What’s New in AutoCAD 2018 Desktop? While other features are updated, the new features available in AutoCAD 2018 for the desktop are: 3D Modeling Environment: AutoCAD’s new 3D modeling environment is based on Revit, an application made by the same company, Autodesk. Many of the same commands are available for 3D model AutoCAD Crack + Free Applications which use such APIs to create their functionality are referred to as Autodesk-compatible applications. This includes, in particular, 3D rendering applications such as Visual Studio, Maya, Rhinoceros and Cinema4D, as well as non-3D applications, such as Audition, PowerDirector, MXPaint, Photoshop, etc. Most Autodesk-compatible applications also run under the Autodesk Encompass platform. * Category:Autodesk Category:Autodesk AutoCADQ: Different behavior for lambda function and for the class function on the same class in Python I am trying to create a class that acts as a wrapped decorator in Python 2.7. I have the following code: from functools import wraps class Wrapper: def __init__(self, func): self.func = func def __call__(self, *args, **kwargs): “”” Wraps the decorated function, to be used in the class context “”” @wraps(self.func) def wrap(*args, **kwargs): return self.func(*args, **kwargs) return wrap I have a function that accepts a class and a function and I want it to act as a decorator, to be able to apply the decorated function on any class that has that function in it: def apply_decorator(func): “”” Returns a decorator that wraps the decorated function in the class context “”” @wraps(func) def wrap(cls, *args, **kwargs): return cls(func(*args, **kwargs)) return wrap Now, if I run my code as it is above, it works fine, as expected: @apply_decorator def my_func(): print(“This is my_func()”) class MyClass: @my_func 5b5f913d15 AutoCAD Open Autodesk Autocad From the menu bar, go to Add-ins Click on Autodesk Autocad Click on More Click on Autocad Data Manager On the left hand side of the data manager, click on the Insert button. Select Autocad Click on OK. In the autocad software, follow the path to “My Projects”. Inside that folder, click on Data Manager. Click on Data file for opening your datafile. Here is how my datafile looks like. Q: How to pass parameters to a Linq query using var arguments I have a method that has a string parameter like: public void ChangeRole(string RoleName) { //some code } How can I use LINQ to pass a variable role name, i.e., instead of using the string parameter “RoleName”, I want to use a variable called roleName. A: var roleName = “someName”; ChangeRole(roleName); using var also causes the variable to be made “real” i.e. not a compile time type A: Pass in the string as a parameter, or use a lambda. It’s one of those two choices. ChangeRole(string roleName); OR ChangeRole(() => “SomeRoleName”); (thanks to Piter for noticing the lambda syntax) A: If you’re working in.NET 4.0 you can now use the var keyword for this: var roleName = “someName”; ChangeRole(roleName); Protection: 100% Safety: 100% 5. Don’t give your child a share of what they’ve done I want my children to feel proud of their work, but I don’t want them to see my life as their responsibility. I’m very clear with them that they are in this world by the will of God and the privilege of my wife and me, not because of anything they’ve done. You don’t want them to make assumptions about how much work their dad does at home, so when they ask you to buy them a pet, give them a real answer: that’s not a favor you’ What’s New In? Extend your thinking by giving feedback as you draw. Markup Assist is a feature of CAD software that allows you to input text, symbols, and line styles directly into your drawings. (video: 1:40 min.) Help and assistant Create diagrams with help and assistant. The built-in help authoring tool offers support for doing tasks efficiently and precisely. (video: 2:16 min.) Automate your workflows and routines. Make it easier to create and track batch jobs, and connect processes with the help of a job database. (video: 2:53 min.) Get real-time insights from working drawings. Analyze your designs more accurately and efficiently with the ability to view live data, and the ability to filter and sort multiple dimensions and attributes. (video: 2:27 min.) Digital day: You can view and work with AutoCAD drawings on your computer, tablet, and mobile devices—and switch between the latest versions of the software and the drawing. A single license gives you all of the tools, data, and features that you need for the entire year. and the drawing. You can view and work with AutoCAD drawings on your computer, tablet, and mobile devices—and switch between the latest versions of the software and the drawing. A single license gives you all of the tools, data, and features that you need for the entire year. Includes: 2-year maintenance 2023 AutoCAD has many new features and enhancements. From paper and material handling to modeling and 3D work, AutoCAD 2023 is packed with new features and enhancements. It’s even easier to work with those changes than ever before. AutoCAD 2023 is the perfect match for your CAD skills and the business goals you want to achieve. Download a copy to experience new features for yourself, and sign up for a free 30-day trial. A Free 30-Day Trial Now you can use all the latest and greatest AutoCAD software features right away, free of charge. Start your free 30-day trial today. Download AutoCAD Software 2-year maintenance 2-year maintenance AutoCAD has been engineered to last. In addition to the advanced new features and enhancements, we back AutoCAD with a 2-year maintenance policy, which covers AutoCAD and AutoCAD LT System Requirements For AutoCAD: Phenomena Resources Phenomena Resources: A quick note before the walkthrough. For these, I have considered all three difficulties of the game: Novice, Pro, and Elite. There are a lot of things to consider for these, but I’ll try and guide you through what’s relevant, rather than gloss over everything, and make sure you know what to do at each difficulty level. Basics Start by figuring out your concept of the game. That means knowing what will happen in each step of the game,
https://www.luthierdirectory.co.uk/autocad-crack-license-code-keygen/
CC-MAIN-2022-33
refinedweb
1,407
61.97
Welcome to this exciting Java tutorial! Those looking to get into game programming with java will likely find this tutorial more interesting than others. This tutorial will cover: Collision detection, movement via keyboard, double buffered animation, and a basic game loop. We are basically going to create a window with swing, actively render two double-buffered rectangles, move them around the screen, and check for collisions between them, all while using a basic game loop. I am going to be as Object Oriented (OO) as I can, so we will be creating a few classes. The very first thing we need to do is get something on the screen. Let's create a window with Swing. I am going to assume that everyone knows at least a little swing. Create a new java class named Gui.java with your favorite ide or editor. I'm a newb, so I use netbeans. package collide; import javax.swing.*; public class Gui { JFrame window; public Gui() { window = new JFrame("Collision detection, movement, double buffering, and a game loop!"); window.setSize(800,600); window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); window.setVisible(true); } public static void main(String[]args) { Gui game = new Gui(); } } Go ahead and compile/run. This should pop up a window. If not, check your code and fix it. O.K. good. Next, we need a surface on which to draw and in this case we'll use a JPanel. Go ahead and create another class, name it DrawPanel.java, then add this code: package collide; import java.awt.event.*; import javax.swing.*; public class DrawPanel extends JPanel implements KeyListener { public DrawPanel() { setIgnoreRepaint(true); addKeyListener(this); setFocusable(true); } public void keyTyped(KeyEvent e) { } public void keyPressed(KeyEvent e) { } public void keyReleased(KeyEvent e) { } } You'll notice that when we create our DrawPanel class, we did some other things; we made it "extend" JPanel, and "implement" KeyListener. When we declare that one class "extends" another, we are telling the JVM that our class, DrawPanel, is a sub-class of JPanel. This way we can have access to JPanel's graphics object which will allow us to draw to the screen. One of the first things we need to do is to tell awt not to repaint the JPanel. We will use a custom paint method later. We implement KeyListener to let the JVM know that we are going to be processing keyboard input. KeyListener is an interface, and whenever we implement an interface in java, we must also override all of that interface's methods. In this case, there are only 3, which are: keyTyped(), keyPressed(), and keyReleased(). I am sure you can figure out what they do just by reading their names. We need to do two more things to ensure that our keyListener works, and they are both shown in DrawPanel's constructor. addKeyListener(this) and setFocusable(true). the method addKeyListener(this) does just what it looks like. It tell the JVM to listen for keys in this class. SetFocusable(true) says "Hey, concentrate on me!" Now we need to create a DrawPanel object and add it to the content pane of the JFrame. Our Gui.java code changes to this: package collide; import javax.swing.*; /** * * @author Tom */ public class Gui { JFrame window; DrawPanel panel; public Gui() { window = new JFrame("Collision detection, movement, double buffering, and a game loop!"); panel = new DrawPanel(); window.setSize(800,600); window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); window.getContentPane().add(panel); window.setVisible(true); } public static void main(String[]args) { Gui game = new Gui(); } } Check out the changes. We've created a drawPanel reference named panel, instantiated it in Gui's constructor, and also called getContentPane().add() to add to to the JFrame. You won't see any difference from the last time you compiled and ran, but go ahead and do it again just to make sure things work. If you get errors, check your code! Now we need to get things drawn to the JPanel, but first, I'd like to get the outline for our game loop going. A typical game loop basically consists of this: 1. Initialize the game. 2. Update the game. 3. Check for collisions. 4. Draw to the screen. 5. Wait for a bit, then do again and again. Our DrawPanel class will handle all of this, so add these methods to it: public void Initialize() { } public void update() { } public void checkCollisions() { } public void drawBuffer() { } public void drawScreen() { } public void startGame() { } Now we need to tell our panel to execute these 6 methods, wait for a little bit, the do it again...and again...and again. I like to use a while loop. Add this into the startGame() method: initialize(); while(true) { try { update(); checkCollisions(); drawBuffer(); drawScreen(); Thread.sleep(15); } catch(Exception e) { e.printStackTrace(); } } When the startGame method is run, it will constantly loop these methods. You will see how this is important. The biggest games in the world all use this simple game loop technique. O.K. now we're going to start drawing things on the screen. We're going to use a tried and true technique called double buffering. Double buffering will help us to avoid flickering when we start moving things around. The idea behind double buffering is this: Instead of drawing all of our movement updates directly to the screen, which takes a lot of time and causes annoying, ugly flicker, we draw out movements to an image that is the same size as our window 800x600 in ram, then just copy that single image to our screen. It takes much less time drawing to an image in memory than it does to the screen, so this technique avoids that annoying flicker effect. The first thing we need to do is to create an image the size of our window. I like to use a BufferedImage. We need to make an instance variable at the beginning of our DrawPanel class, just above the DrawPanel() constructor. BufferedImage buffer; then we need to import java.awt.image.* and java.awt.event.* at the top of our class with the other imports. If you're using netbeans, you can just hit ctrl+shift+i Now, in our initialize method(), we do: buffer = new BufferedImage(800,600,BufferedImage.TYPE_INT_RGB); This creates a buffered image in which we can draw our game updates Now we need to get a graphics2d object from our DrawPanel class that we can use to draw into the BufferedImage (buffer). In our drawBuffer() method, add this: Graphics2D b = buffer.createGraphics(); Now we need another graphics2d object that we can use to draw directly to the screen with, so in our drawScreen() method, add this: Graphics2D g = (Graphics2D)this.getGraphics(); *Quick explanation: We must explicitly cast this.getGraphics() with (Graphics2D) as this.getGraphics only returns an object of type Graphics. We have almost everything we need to start doing some drawing, but first we need to make a few modifications to our Gui.java class. Open Gui.java and above our main method, add: public void go() { panel.startGame(); } then add game.go(); to Gui.java's main method, right under Game gui = new Game(); Go back to DrawPanel.java. We're going to add some drawing code to make sure that things are working. In the drawBuffer() method, under the Graphics2D object, add: b.setColor(Color.bkack); b.fillRect(0,0,800,600); b.dispose(); then in our drawScreen() method we're going to add this code under our Graphics2D object: g.drawImage(buffer,0,0,this); Toolkit.getDefaultToolkit().sync(); g.dispose(); This will draw whatever we draw to our buffer image to the screen. From here on out we will only change drawing code in our drawBuffer method(). Go ahead and compile and run the program. You should now see a black window. If you do, you have succeeded. If not, backtrack to see where you screwed up. This concludes the double buffering portion of this tutorial. Let's move on to some more Drawing. We're going to create two rectangles on screen. One will represent our player, and the other will represent an enemy. You'll be able to move the player with the up, down, left, and right arrow keys. When our player bumps into our obstacle, our player will be stopped in his tracks! The first thing we need to do is create a new class to represent our player and our enemy. I am calling this new class Entity.java. Here is the code for it: package collide; import java.awt.Rectangle; public class Entity { int x,y,speed,width,height; boolean up, down, left, right,stop; public Entity(int x, int y) { this.x = x; this.y = y; speed = 3; width = 100; height = 100; up = false; down = false; left = false; right = false; } public int getX() { return x; } public int getY() { return y; } public int getWidth() { return width; } public int getHeight() { return height; } public Rectangle getBounds() { return new Rectangle(getX(),getY(),getWidth(),getHeight()); } public void move() { if (up) y -= speed; if (down) y += speed; if (left) x -= speed; if (right) x += speed; } } I feel the need to explain a few things. Obviously the int variables for x, y, are the object's location on the screen, and speed is how fast it will move. The up, down, left, and right booleans are directions of movement. Check the move() method to get a better idea about how the object moves. If up is true, the we move the y value of the player upwards by subtracting speed from y. Confusing? Yeah. I know. You may need to read the move() method a few times before you get the hang of it. The getBounds() method simply returns a new rectangle that encompases our player or enemy. It will be used for collision detection. getX(), getY(), getWidth(), getHeight() shoudl all be self-explanatory. Let's get our player and enemy draw on the screen. Open up DrawPanel.java and under BufferedImage buffer; add: Entity player; Entity enemy; Now, in the initialize method() we add: player = new Entity(100,100); enemy = new Entity(400,400); Now head on over to the drawBuffer method so we can draw our player and enemy. Our player and enemy are just rectangles so we use the fillRect() method to draw them. We want to make sure that we can tell the difference between them, so we also want to use the setColor() method for each as shown below. Your drawBuffer() should look like this: public void drawBuffer() { Graphics2D b = buffer.createGraphics(); b.setColor(Color.black); b.fillRect(0,0,800,600); b.setColor(Color.red); b.fillRect(player.getX(),player.getY(),player.getWidth(),player.getHeight()); b.setColor(Color.blue); b.fillRect(enemy.getX(),enemy.getY(),enemy.getWidth(),enemy.getHeight()); b.dispose(); } Compile and run to make sure you have done everything correctly. If not, check your code. Now that you have 2 rectangles on screen representing both our player and enemy, it is time to make our player move. Remember those keyPressed() and keyReleased() methods in DrawPanel? We're going to use them now. Head on over to the keyPressed() method in DrawPanel. The first thing you should notice is that the method accepts an argument, which is KeyEvent e. When you press or release a button on the keyboard, java fires off a KeyEvent object to whichever method is appropriate. If you just pressed a key, it sends it to keyPressed(). If you released a key, it sends an event to keyReleased(). Our keyPressed() and keyReleased methods should look like this. While this is not technically the ideal way to move objects, as I am breaking an important rule of encapsulation, I found it to be easier to understand when I was first learning. public void keyPressed(KeyEvent e) { int key = e.getKeyCode(); if (key == KeyEvent.VK_LEFT) player.left = true; if (key == KeyEvent.VK_RIGHT) player.right = true; if (key == KeyEvent.VK_UP) player.up = true; if (key == KeyEvent.VK_DOWN) player.down = true; } public void keyReleased(KeyEvent e) { int key = e.getKeyCode(); if (key == KeyEvent.VK_LEFT) player.left = false; if (key == KeyEvent.VK_RIGHT) player.right = false; if (key == KeyEvent.VK_UP) player.up = false; if (key == KeyEvent.VK_DOWN) player.down = false; } As you can see when a key is presed, a boolean from the player's Entity class is toggled to true, and when a key is released, the corresponding direction is set to false. Now we need to modify DrawPanel's update method so that we can actually see our movement on screen. This is done by running the player's move() method in DrawPanel's update() method like so: public void update() { player.move(); } Go ahead and compile and run. Move your arrow keys. Do you see movement? Notice how you move right through the other rectangle? if so, you've done everything correctly. If not, you know the drill. Now it's time to check for collisions! First, we have to make a few small changes to our Entity class,though. Add a boolean instance variable to the Entity class. Find where we have: boolean up, down, left, right; Now make it: boolean up, down, left, right,collision; Then in the constructor of Entity, add: collision = false; Open up our DrawPanel class and find the checkCollisions() method. Add: if (player.getBounds().intersects(enemy.getBounds())) player.collision = true; else player.collision = false; Java has a nice little method to determine whether two objects are "intersecting" one another, and that is the intersects() method. It's a great little tool that you will use all of the time in java game programming. It's so cool that even MS copied it on over to C# as the "IntersectsWith()" method...along with everything else in java. Now, head on over drawBuffer() method. We'll use the value of player.collision to do something when it's true, this way we'll have a visual indicator showing that both objects are colliding. Modify the drawBuffer() method as such: public void drawBuffer() { Graphics2D b = buffer.createGraphics(); b.setColor(Color.black); b.fillRect(0,0,800,600); if (player.collision == false) { b.setColor(Color.red); b.fillRect(player.getX(),player.getY(),player.getWidth(),player.getHeight()); b.setColor(Color.blue); b.fillRect(enemy.getX(),enemy.getY(),enemy.getWidth(),enemy.getHeight()); b.dispose(); } else b.setColor(Color.white); b.drawString("C O L L I S I O N !",350,300); b.dispose(); } And that's that. I hope this little? tutorial gave you some insight on the topics covered. Remember, there is always more than one way to do something, and I can promise you that my movement code is far from desirable. Can you think of a better way to handle movement? I can, but that is for another time. Edited by farrell2k, 16 March 2012 - 12:28 AM.
http://forum.codecall.net/topic/47096-double-buffering-movement-and-collision-detection/
CC-MAIN-2015-06
refinedweb
2,465
66.64
You can subscribe to this list here. Showing 2 results of 2 I was thinking a bit about what you were talking about with forms, and I thought that it would make sense to fold data conversion in with validation. Also, there should be a hook for doing client-side validation in addition. I was thinking about something like: {"field1": (NotEmpty(), AsInteger(greater_than=0))} Only one of the restrictions could convert, the others would simply validate. Anyway, AsInteger would be implemented something like: -------------------untested code snippet--------------------- class AsInteger: # This should probably be a subclass of something, reducing # the amount of code in this actual class -- but this is # imaginary code, so that isn't going to happen. def __init__(self, greater_than = None, less_than = None, error_description = None): # There should be less_than_or_equal_to, etc... self.greater_than = greater_than self.less_than = less_than self.error_description = error_description def convert(self, value): try: i = int(value) except ValueError: if self.error_description: raise ValueError, error_description else: raise ValueError, "Please enter an integer number" if (not self.greater_than is None) and i <= self.greater_than: if self.error_description: raise ValueError, error_description else: if self.less_than: raise ValueError, "Please enter a number between %s and %s" \ % (self.greater_than, self.less_than) else: raise ValueError, "Please enter a number greater than %s" \ % (self.greater_than) # yadayada, same thing for less_than return i def javascript(self, form_name, field_name, field_description): js = """ var value; value = document.forms['%(form_name)s']['%(field_name)s']; if (int(value) == 0 && value != "0") { return "%(field_description)s: Please enter an integer number"; } """ if not self.greater_than is None: js = js + """ if (int(value) <= %(greater_than)s) { return "%(field_description)s: Please enter a number greater than %(greater_than)s"; } """ # Same thing for less_than... js = js + """ return false; // false == found no problems """ js = js % {"form_name": form_name, "field_name": field_name, "field_description": field_description, "greater_than": self.greater_than, "less_than": self.less_than} return js -------------------/untested code snippet--------------------- Anyway, the form generator inserts the javascript in the page (if possible) and would also have to keep track of form and field names, and package it all up in something that gets executed onSubmit. Then at the end it converts values, catching ValueErrors to give error messages. When you are done, you have the data in the form you want it (more or less). It would be fairly easy to generate new classes for new behavior (e.g., checking usernames against a database and converting to user id). The JavaScript is a convenience, for more responsive feedback, but is never relied upon to actually work. I'm not sure how this would fit into form generation exactly -- it *should*, since there'll be a lot of overlap in the required information. Maybe it would involve something like: Field(name="age", description="Age", validate=NotEmpty(), convert=AsInteger(greater_than=0)) And then the Field instance would somehow generate the HTML. Perhaps select boxes and the like would be subclasses. I'm not entirely sure how best the flow could work, as you display errors and the like -- potentially even as for more information. E.g., I'm considering a situation where a manager could enter a user by entering their email, username, or full name -- but there might not exist anybody by that description, in which case maybe a search form or a few guesses might come up -- or there might be several people (since full name isn't unique), and there'd be a select box for which one. That's almost like a wizard, and I don't know if it makes sense to include that sort of generality. Anyway, those are just some thoughts about how I might be inclined to implement something like this. Hopefully it will give some ideas to others. -- Ian Bicking 4869 N. Talman Ave., Chicago, IL 60625 (773) 275-7241 ianb@... The@...
http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200103&viewday=17
CC-MAIN-2016-07
refinedweb
621
55.84
- Postgresql. My favorite. Works on Microsoft Windows, too (with the driver from [1 ]). Create a database named 'bank' and a user 'bankuser' with password 'bankpass', or change the variables in the demo to something else. (BAS Note that there is also a Microsoft Windows binary here [2 ]) with libpq statically linked in as well. - Sqlite. Extremely capable embedded-SQL database. Download TCLsqlite from . That is tclsqlite - not just 'sqlite'!. No extra config needed. The demo will try to create bank.dat in the current working directory. - ODBC. Most useful on Microsoft Windows; on Unix this is troublesome. It will allow you to run the demo with a Microsoft SQL server database, Access database file or any other ODBC database. Create a user or system ODBC datasource to your favorite database. Call it 'bank' and make sure 'bankuser' with 'bankpass' may connect, or change the configuration variables near the top of the demo. # Choose a driver to run the demo #set driver odbc set driver sqlite #set driver postgres # Alter the datasource to your needs if you use driver other than odbc, sqlite or postgres # See for the correct # content of this variable. set datasource localhost:5432:bank set username bankuser set password bankpass set odbc_dsn bank package require nstcl namespace import nstcl::* if { $driver eq "sqlite" } { set loaded "" foreach version {sqlite3 sqlite} { if { [catch { package require $version set loaded $version } E] } { if { [file exists tcl$version[info sharedlibextension]] } { load tcl$version[info sharedlibextension] tcl$version set loaded $version } else { puts "$version not found." } } if { $loaded ne "" } {break} } if { $loaded eq "" } { puts "Sqlite not installed. Just drop the libsqlite[info sharedlibextension] \ or libsqlite3[info sharedlibextension] in the same directory as this demoscript." exit 1 } if { $loaded eq "sqlite3" } { interp alias {} sqlite {} sqlite3 } puts "Loaded $loaded." } elseif { $driver eq "postgres" } { if { [catch { #The next generation Postgresql driver package require Pgtcl } E] } { puts "Could not load Pgtcl (pgtcl-ng): $E" puts "Trying libpgtcl..." package require libpgtcl } } # Before you can use any type of database, nstcl needs to load the appropriate # database drivers. For this example, we use either odbc or sqlite. # # Load driver will automatically load the tclodbc package. sqlite needs to be # preloaded, as no default package require exists for it. # # Doc: nstcl::load_driver $driver # nstcl works with database pools, later on, we will use the pool name for # our statements. # # Doc: # ::nstcl::configure_pool ?-immediately? ?-default? driver poolname connections ?datasource? ?username? ?password? ?verbose? switch $driver { odbc { # driver pool #connections DSN user password nstcl::configure_pool odbc bank 1 $odbc_dsn $username $password catch { db_dml bank:drop "drop table account;" db_dml bank:drop "drop table orders;" } } postgres { # driver pool #connections DSN user password nstcl::configure_pool postgres bank 1 $datasource $username $password catch { db_dml bank:drop "drop table account;" db_dml bank:drop "drop table orders;" } } sqlite { catch { file delete [file join [file dirname [info script]] bank.dat] file delete [file join [file dirname [info script]] bank.dat-journal] } # driver pool #connections sqlite-file nstcl::configure_pool sqlite bank 1 [file join [file dirname [info script]] bank.dat] } default { # driver pool #connections DSN user password nstcl::configure_pool $driver bank 1 $datasource $username $password catch { db_dml bank:drop "drop table account;" db_dml bank:drop "drop table orders;" } } } # We now have a working database connection! # Lets create some tables: db_dml bank:table_accounts { create table account ( id integer, name varchar(200), balance numeric(10,2) ); } # There is no reason to have only one statement in a db_dml # Putting in more than one does *NOT* make it a transaction! db_dml bank:statement2 { create table orders ( id integer, account integer, description varchar(200), amount numeric(10,2) ); } db_dml bank:statement2 { insert into account (id, name, balance) values (1, 'Pascal Scheffers', 100.00); } #So far so good. You see you can execute properly formatted sql statements. #That is no big surprise. # For data entry, there is something better, however. Some values may need # quoting, and depending on the database type, quoting may differ between # database types and drivers. nstcl takes care of that. set accounts { 4 "Arjen Markus" 500.00 2 "Jean-Claude Wippler" 23.15 3 "Julian Scheffers" 56.87 } foreach {accountno name amount} $accounts { db_dml bank:new_accounts { insert into account (id, name, balance) values (:accountno, :name, :amount); } } # Okay, we have something in the database. Lets display the content: # in a proc so we can do it again! proc list_accounts {} { db_foreach bank:accounts { select id as a_id, name, balance from account order by id; } { # In the db_foreach loop, the column names are available as a normal # tcl variable. Becareful they don't clash with local variables! # rename them if you must! puts [format " %4d %8.2f %s" $a_id $balance $name] } } list_accounts # A very convenient function is db_string, which will let you get a single # value from the database: set total [db_string bank:all_accounts_total "select sum(balance) from account"] puts "The total in the bank is: [format %8.2f $total]\n" # db_foreach is one of my personal favorites, but you may need something # different. # # The important ones are: # # db_list # Obtains a list of the first rows of the query: set idList [db_list bank:ids "select id, name from account"] puts "Account numbers: $idList\n" # Note that the name column was dropped from the result! # # If you want the column names too, use db_list_of_lists: set idNameList [db_list_of_lists bank:idAndNames "select id, name from account"] puts "Account numbers and names: $idNameList\n" # # Now, db_foreach sets names variables for each row/column retrieved. # I find that I frequently need the variables for a single row: proc single_account { id } { # Note the bind variable again! db_1row bank:oneaccount "select name, balance from account where id=:id" puts "Account : $id" puts "Name : $name" puts "Balance : [format %.2f $balance]" puts "" } single_account 2 single_account 4 # # db_1row will raise an error if the statement does not return exactly 1 row # there is a companion function, db_0or1row which allows for checking the existence # of a row. # Both of these functions can also set an array, instead of the variables # this is very convenient, as it won't clutter your local variables! proc have_account? { id } { if { [db_0or1row bank:oneaccount "select id, name, balance from account where id=:id" \ -column_array row] } { puts "Account : $row(id)" puts "Name : $row(name)" puts "Balance : [format %.2f $row(balance)]" } else { puts "Account $id does not exist!" } puts "" } have_account? 1 have_account? 5 # # Similarly, [db_string] has a -default option, so it does not throw an error # but returns the default value: puts "Account 6 is owned by [db_string bank:oneacct "select name from account where id=6" -default "nobody"]" puts "Account 3 is owned by [db_string bank:oneacct "select name from account where id=3" -default "nobody"]" # # # That covers most of the data access and modification functions. # With a banking system, transactions are important: here is some code which # demonstrates transactions: proc transfer {fromAcct toAcct amount} { catch { db_transaction { set initial_balance [db_string bank:total "select sum(balance) from account"] db_dml bank:transferfrom "update account set balance=balance-:amount where id=:fromAcct" db_dml bank:transferto "update account set balance=balance+:amount where id=:toAcct" set final_balance [db_string bank:total "select sum(balance) from account"] if { [format %.2f $initial_balance] ne [format %.2f $final_balance] } { puts "Balance mismatch: [format %.2f $initial_balance] ne [format %.2f $final_balance] abort!" db_abort_transaction } else { puts "Transfered 25.00 from account $fromAcct to account $toAcct" } } } } puts "\nTransfer money (correctly):" list_accounts transfer 1 2 25.00 list_accounts puts "\nTransfer money (incorrectly):" transfer 4 5 105.22 list_accountsLES 13-08-2007: The above script didn't quite work for me. Although I have the tclsqlite extension installed with ActiveTcl, also in another directory in my PATH and also in the same directory where I ran the script, the line 59 nstcl::load_driver $driver gives me an error: couldn't load file "tclsqlite.so": tclsqlite.so: cannot open shared object file: No such file or directory while executing "load [::nstcl::find_shared_library tclsqlite]" (procedure "::nstcl::database::sqlite::load_driver" line 3) invoked from within "::nstcl::database::${driver}::load_driver $args" (procedure "::nstcl::database::load_driver" line 3) invoked from within "nstcl::load_driver $driver" (file "./nsdb.tcl" line 28)Eventually, I edited /path/ActiveTcl/lib/nstcl-1.2/nstcl-database-sqlite.tcl, changed load [::nstcl::find_shared_library tclsqlite]to load [file normalize [::nstcl::find_shared_library tclsqlite]]...and it worked.But then the transfer proc towards the end of the script gives me an error too: could not allocate 1 handle(s) from pool "bank" while executing "::nstcl::ns_db gethandle $pool" (procedure "::nstcl::database::api_get_dbhandle" line 37) invoked from within "::nstcl::database::api_get_dbhandle $statement_name" (procedure "db_foreach" line 29) invoked from within "db_foreach bank:accounts { select id as a_id, name, balance from account order by id; } { # In the db_foreach loop, the column nam..." (procedure "list_accounts" line 2) invoked from within "list_accounts" (file "./nsdb.tcl" line 244)I haven't been able to fix that one. I'll try again later. cd zzz now.
http://wiki.tcl.tk/13185
CC-MAIN-2017-04
refinedweb
1,457
53.71
The application I’m developing includes an XML feed, in which one of the items is a URL Unfortunately, using the {% url %} tag in the template only gives the path portion of the URL. Time for a little research. Many web posts pointed me to the Sites framework included with Django, as it can return the domain for the site. I tried it out, and it worked, but I didn’t like how it would have to be implemented in my multiple server scenario (laptop, DEV server, QA server, Production). Each would have to have any entry in the sites table, and the SITE_ID in the settings.py file for each instance would have to point to the correct entry. This would be awkward at best, and would limit the possibility of making the application reusable. A couple of bloggers mentioned using RequestSite instead. It shares the same API with the Sites class, but doesn’t use the sites table. Instead, the domain name is pulled from the request object. Here’s a sample of the view logic: from django.contrib.sites.models import RequestSite site_name = RequestSite(request).domain return render_to_response('template.xml', {'entries': queryset, 'site_name' : site_name}, context_instance=RequestContext(request), mimetype='text/xml') After pulling the domain from the request, it is passed along to the template in the context dictionary. In the template, I can reference the variable ‘ site_name‘ in front of the {% url %} tag. However, before I have a full URL, I also need to know the protocol. Another blog post demonstrated the is_secure attribute of the request object. Testing this attribute can help us determine is the protocol is http or https. The temple code for a full URL is: {% if request.is_secure %}https{% else %}http{% endif %}://{{ site_name }} {% url entry e.pk %} Maybe a little clumsy, but it gets the job done without hard coding anything. I’d love to hear your thoughts in the comments. You dont need to use RequestSite class for this. Instead site_name = RequestSite(request).domain just write site_name = request.get_host() It’s simplier and does exactly same thing. Class RequestSite was created as drop in replacement for Site class for those who dont have Site application installed . @waspoza Thanks. I’ll give that a try.
http://dashdrum.com/blog/2010/03/full-url-in-django/
CC-MAIN-2014-52
refinedweb
373
67.25
HTTP handlers and modules are explained as well as their use in ASP.Net. Really, I always like to work and read about ASP.Net request handling mechanisms. From the outside, it's just request & response tricks. But, if you go in-depth, then you will understand some interesting internal things.Most developers have experienced questions on ASP.Net page life cycle in interviews.I had that experience when I started my carrier. In this article, I am going to explain about the core part of the ASP.Net request processing. i.e HTTP Handler and HTTP Module.What are HTTP handlers?The ASP.Net life cycle starts with a request sent by a browser to the web server. When the web server receives the request, it will pass the request to an instance of the HTTP Runtime class. The HTTP Runtime object observes and figures out from which application it has been sent (virtual directory). Then it uses the HTTP Application factory to create a HTTP Application Object to process the request. The HTTP Application objects holds the collection of HTTP Module objects. HTTP Modules are filters that can modify the content of the HTTP request and response messages. The next process is done by the HTTP Handler object. When a request passes through the pipeline, it looks at the file name extension. Depending upon the extension, the corresponding ISAPI extension (ISAPI extensions are implemented by using Win32 DLLs. These are typically developed using C/C++). If it is related to an ASP.Net extension such as aspx, then this will be handled by the HTTP handler which is the .Net equivalent of ISAPI extensions.So, the HTTP handler is a program to handle a particular page type. HTTP Module is another component; it also participates in the ASP.Net request process. But they work before and after the HTTP Handler does its work.The HTTP Module is responsible for hooking events like this way, HttpModule and Httphandler takes a important role while processing request.So, that explains the purpose of a HTTP Handler. Now I want to create one extension file like .arv and want to use it in my application.Step 1:First, we need to create a handler, so create a class library using Visual Studio and name it MyHandler.It contains one property and one function. The IsReusable property specifies whether the same instance of HTTP Handler can be used to fulfill another request of the same type.Process Request: This method is called when processing ASP.Net requests. Here you can perform all the things related to processing the request.Here I have written a message to return.Step 2:Build the application and you will get MyHandler dll. Step 3:Create one Visual Studio 2010 web application. Add a reference to the MyHandler class library. You can add assembly (dll) instead of class library.I am adding a dll/class library to the web application because, I want to use this handler within this application only. If you want to use it with all ASP.Net applications, then you have to add this assembly into IIS (Handler Mapping).Step 4:Add one web form to the web application and rename its extension aspx to arv as shown bellow.Step 5:Register handler in web config.Under section System.web, I have inserted one more section called httphandlers.Inside httphandlers, I have added one more tag<add verb="*" path="*.arv" type="MyHandler.MyHandler, MyHandler" />VERB: Verb attribute specifies the HTTP verbs supported by a handler. If the handler supports all of the HTTP verbs, simply use "*" , otherwise you can specify particular ones like "GET,POST".PATH: Path is nothing but, just tell the handler to which type of file should be handled.TYPE: Information about handled by whom. Here, MyHandler.MyHandler is namespace.class name. and MyHandler is the handler name.Step 6:Add the reference of the MyHandler class library to web application. Run the application.Browse the newly added web form (new extension).ThanksEnjoy the coding! View All
https://www.c-sharpcorner.com/UploadFile/aravindbenator/http-handlers-and-modules/
CC-MAIN-2020-45
refinedweb
676
61.02
by Gerrit Grunwald A components library that attempts to fill a gap in the JavaFX list of features. I love graphical programming, especially controls that visualize data. My favorite kind of controls are gauges, and I've spent a lot of time on them during the last seven years. It all started with the need for a gauge control when I was working for the semiconductor and nanotechnology industry back in 2009. From that point on, I was infected with the urge to create controls for all kind of platforms, such as Swing, JavaFX, HTML 5, and Android. When JavaFX 2 came out, I was totally thrilled by the ease of use of the new APIs. Because JavaFX 2 came with charts, I hoped that gauges would be part of a future version of JavaFX, but that hasn't happened. So I decided to add some controls to the JFXtras project and to also create my own libraries, for example, SteelSeriesFX and Enzo. The problem with all the gauges I've created during the last four years is that they always come with a special design, which might or might not fit your needs. In 2013, it got even worse when Apple released iOS 7, which was completely based on a flat UI design—which meant I also had to create controls with a flat UI design. Long story short, when I started my vacation in December 2015, I decided to create another library for gauges...Medusa. Figure 1. Medusa from Greek mythology I thought about the most important features that all gauges have in common, and I came up with this little list of features: - Title, subtitle, and unit - Tickmarks - Tickmark labels - Needle and/or bar - Colored sections and/or areas In principle, that's all a gauge needs (of course, there are some more things, but they are not that important). The Gauge Control If you use a standard Medusa gauge in JavaFX, you will get something like what's shown in Figure 2. Figure 2. Standard Medusa gauge This gauge doesn't look really fancy, but the big advantage of this approach is that you can embed this gauge into your own control very easily. That means that if you have your own style/design, you simply create a control that contains the frame and the background that you need and embed the Medusa standard gauge. The FGauge Control I've created a JavaFX control that demonstrates how embedding the Medusa standard gauge could look: the FGauge (F stands for Frame). Figure 3 is an example of what the FGauge looks like. Figure 3. The FGauge control As you can see, the inner part is similar to the Medusa standard gauge, but here we also have a frame around the gauge, a background, and a foreground. So the main idea of Medusa is to use one gauge control class that contains all the properties a gauge needs and create skin classes for the different use cases, such as rich or flat gauges. No CSS If you take a look at the source code of Medusa, you will quickly figure out that I don't use any CSS for the gauges. The reasons for not using CSS for Medusa are the following. Using CSS for styling an application is a great idea and works pretty well in JavaFX, but it's not always nice to use CSS for controls. In my experience, you have to create a lot of boilerplate code to make a control styleable, especially if you have a lot of different parameters. In addition, the parsing of CSS in JavaFX is not always as fast as when you use code directly. Especially on embedded devices, you can see the difference when using a lot of CSS compared to using a pure-code approach. I agree that using CSS makes sense for styling applications and also for styling some controls, but I decided to not use it for the gauges. If you really want to make use of CSS, you could always create a control that contains all the styleable properties you need and pass them to an embedded instance of a Medusa gauge. Another reason to not use CSS for the gauges was the fact that nearly all of the time, programmers, not designers, will style the controls (that was originally the plan), and the CSS implementation in JavaFX is not really standard CSS but rather a subset that is based on CSS 2.1 that also uses different names for CSS properties (for example, background in CSS is -fx-background in JavaFX). Some Code Now that you know some things about the background and features of Medusa, let's have a look at some code. I loved the builders that came with JavaFX 2 but were deprecated with JavaFX 8, so I've created a builder for most of the classes. So if you would like to create a standard Medusa gauge with a range from 0 to 100 and a title, a subtitle, and a unit, you simply can use the following code snippet: Gauge gauge = GaugeBuilder.create() .title("Title") .subTitle("SubTitle") .unit("Unit") .build(); This will give you the gauge that you saw in Figure 1. Because the gauge control contains a lot of features, it's really not easy to understand all the different parameters that are available. Therefore, I created a GaugeDemo class that mainly contains one standard Medusa gauge and a builder with most of the available features. With this class, you can easily play around with the different parameters to see their meaning. Listing 1 shows what the "full-size" GaugeBuilder looks like: Gauge gauge = GaugeBuilder .create() .prefSize(500,500) // Set the preferred size of the control // Related to Foreground Elements .foregroundBaseColor(Color.BLACK) // Defines a color foreground elements // Related to Title Text .title("Title") // Set the text for the title .titleColor(Color.BLACK) // Define the color for the title text // Related to Sub Title Text .subTitle("SubTitle") // Set the text for the subtitle .subTitleColor(Color.BLACK) // Define the color for the subtitle text // Related to Unit Text .unit("Unit") // Set the text for the unit .unitColor(Color.BLACK) // Define the color for the unit // Related to Value Text .valueColor(Color.BLACK) // Define the color for the value text .decimals(0) // Set the number of decimals for the value/lcd text // Related to LCD .lcdVisible(false) // Display a LCD instead of the plain value text .lcdDesign(LcdDesign.STANDARD) // Set the design for the LCD .lcdFont(LcdFont.DIGITAL_BOLD) // Set the font for the LCD // Related to scale .scaleDirection(ScaleDirection.CLOCKWISE) // CLOCKWISE, COUNTER_CLOCKWISE .minValue(0) // Set the start value of the scale .maxValue(100) // Set the end value of the scale .startAngle(320) // Start angle of your scale (bottom -> 0, direction -> CCW) .angleRange(280) // Angle range of your scale starting from the start angle // Related to Tick Labels .tickLabelDecimals(0) // Number of decimals for the tick labels .tickLabelLocation(TickLabelLocation.INSIDE) // Tick labels in- or outside the scale .tickLabelOrientation(TickLabelOrientation.HORIZONTAL) // ORTHOGONAL, TANGENT .onlyFirstAndLastTickLabelVisible(false) // Show only first and last tick label .tickLabelSectionsVisible(false) // Sections for tick labels should be visible .tickLabelSections(new Section(75, 100, Color.RED) // Sections to color tick labels .tickLabelColor(Color.BLACK) // Color for tick labels // Related to Tick Marks .tickMarkSectionsVisible(false) // Sections for tick marks should be visible .tickMarkSections(new Section(75, 100, Color.RED) // Sections to color tick marks // Related to Major Tick Marks .majorTickMarksVisible(true) // Major tick marks should be visible .majorTickMarkType(TickMarkType.LINE) // LINE, DOT, TRIANGLE, TICK_LABEL .majorTickMarkColor(Color.BLACK) // Color for the major tick marks // Related to Medium Tick Marks .mediumTickMarksVisible(true) // Medium tick marks should be visible .mediumTickMarkType(TickMarkType.LINE) // LINE, DOT, TRIANGLE .mediumTickMarkColor(Color.BLACK) // Color for the medium tick marks // Related to Minor Tick Marks .minorTickMarksVisible(true) // Minor tick marks should be visible .minorTickMarkType(TickMarkType.LINE) // LINE, DOT, TRIANGLE .minorTickMarkColor(Color.BLACK) // Color for minor tick marks // Related to LED .ledVisible(false) // LED should be visible .ledType(LedType.STANDARD) // STANDARD, FLAT .ledColor(Color.rgb(255, 200, 0)) // Color of the LED .ledBlinking(false) // LED should blink // Related to Needle .needleShape(NeedleShape.ANGLED) // ANGLED, ROUND, FLAT .needleSize(NeedleSize.STANDARD) // THIN, STANDARD, THICK .needleColor(Color.CRIMSON) // Color of the needle // Related to Needle behavior .startFromZero(false) // Needle should start from the 0 value .returnToZero(false) // Needle should always return to the 0 value // Related to Knob .knobType(KnobType.STANDARD) // STANDARD, METAL, PLAIN, FLAT .knobColor(Color.LIGHTGRAY) // Color that should be used for the center knob .interactive(false) // Should be possible to press the center knob .onButtonPressed(buttonEvent -> System.out.println("Knob pressed")) .onButtonReleased(buttonEvent -> System.out.println("Knob released")) // Related to Threshold .thresholdVisible(false) // Threshold indicator should be visible .threshold(50) // Value for the threshold .thresholdColor(Color.RED) // Color for the threshold .checkThreshold(false) // Check each value against threshold .onThresholdExceeded(thresholdEvent -> System.out.println("Threshold exceeded")) .onThresholdUnderrun(thresholdEvent -> System.out.println("Threshold underrun")) // Related to Gradient Bar .colorGradientEnabled(false) // Gradient filled bar should be visible .gradientLookupStops(new Stop(0.0, Color.BLUE), // Gradient for gradient bar new Stop(0.25, Color.CYAN), new Stop(0.5, Color.LIME), new Stop(0.75, Color.YELLOW), new Stop(1.0, Color.RED)) // Related to Sections .sectionsVisible(false) // Sections will be visible .sections(new Section(50, 75, Color.ORANGE) // Sections that will be drawn .checkSectionsForValue(false) // Check current value against each section // Related to Areas .areasVisible(false) // Areas will be visible .areas(new Section(75, 100, Color.RED)) // Areas that will be drawn // Related to Markers .markersVisible(false) // Markers will be visible .markers(new Marker(75, "Marker 1", Color.HOTPINK) // Markers that will be drawn // Related to Value .animated(false) // Needle will be animated .animationDuration(500) // Speed of the needle in milliseconds (10 - 10000 ms) .onValueChanged(o -> System.out.println(((DoubleProperty) o).get())) .build(); Listing 1. Code for "full-size" GaugeBuilder That's a lot of code, but it describes nearly every property that is available in the standard Medusa gauge. If you then want to use this gauge within the FGauge control at a size of 500x500 pixels, you can use the following code. FGauge fGauge = FGaugeBuilder .create() .prefSize(500, 500) .gauge(gauge) .gaugeDesign(GaugeDesign.METAL) .gaugeBackground(GaugeBackground.CARBON) .foregroundVisible(true) .build(); This should help you to play around with the Gauge and FGauge in Medusa, but there's more. I've also created some additional skins that might come in handy. They are shown in Figure 4 through Figure 16. To make it more convenient for you to use these skins, I've added a skin() parameter to the GaugeBuilder. In addition, each of these skins needs some of the gauge parameters to be set to specific values. These presets will also be done by the GaugeBuilder class. Let's take a look at the ModernSkin as an example. One way of using it is shown in Listing 2. Gauge gauge = new Gauge(); gauge.setSkin(new ModernSkin(gauge)); gauge.setTitle("TITLE"); gauge.setUnit("UNIT"); gauge.setDecimals(0); gauge.setValueColor(Color.WHITE); gauge.setTitleColor(Color.WHITE); gauge.setSubTitleColor(Color.WHITE); gauge.setBarColor(Color.rgb(0, 214, 215)); gauge.setNeedleColor(Color.WHITE); gauge.setThresholdColor(Color.rgb(204, 0, 0)); gauge.setTickLabelColor(Color.rgb(151, 151, 151)); gauge.setTickMarkColor(Color.BLACK); gauge.setTickLabelOrientation(TickLabelOrientation.ORTHOGONAL); Listing 2. ModernSkin example But a more convenient way would be using the GaugeBuilder, as follows: Gauge gauge = GaugeBuilder.create() .skin(ModernSkin.class) .title("TITLE") .unit("UNIT") .build(); So, for all the available skins, the easiest way is to use the GaugeBuilder and set only the values that are needed. A Simple Dashboard With this approach, you could easily create a little Internet of Things (IoT) dashboard using around 100 lines of code, as shown in Listing 3: public class Main extends Application { private GridPane pane; private Gauge steps; private Gauge distance; private Gauge activeCalories; private Gauge foodCalories; private Gauge weight; private Gauge bodyFat; @Override public void init() { GaugeBuilder builder = GaugeBuilder.create().skin(SlimSkin.class); steps = builder.decimals(0).maxValue(10000).unit("STEPS").build(); distance = builder.decimals(2).maxValue(10).unit("KM").build(); activeTopicBox("STEPS", Color.rgb(77,208,225), steps); VBox distanceBox = getTopicBox("DISTANCE", Color.rgb(255,183,77), distance); VBox foodCaloriesBox = getTopicBox("FOOD", Color.rgb(129,199,132), foodCalories); VBox weightBox = getTopicBox("WEIGHT", Color.rgb(149,117,205), weight); VBox bodyFatBox = getTopicBox("BODY FAT", Color.rgb(186,104,200), bodyFat); VBox actvCaloriesBox = getTopicBox("ACTIVE CALORIES", Color.rgb(229,115,115), activeCalories); pane = new GridPane(); pane.setPadding(new Insets(20)); pane.setHgap(10); pane.setVgap(15); pane.setBackground(new Background(new BackgroundFill(Color.rgb(39,44,50), CornerRadii.EMPTY, Insets.EMPTY))); pane.add(stepsBox, 0, 0); pane.add(distanceBox, 1, 0); pane.add(actvCaloriesBox, 0, 2); pane.add(foodCaloriesBox, 1, 2); pane.add(weightBox, 0, 4); pane.add(bodyFatBox, 1, 4); } @Override public void start(Stage stage) { Scene scene = new Scene(pane); steps.setValue(5201); distance.setValue(3.12); activeCalories.setValue(347); foodCalories.setValue(1500); weight.setValue(78.7); bodyFat.setValue(14.2); stage.setTitle("Medusa Dashboard"); stage.setScene(scene); stage.show(); } @Override public void stop() { System.exit(0); } private VBox getTopic(Color.rgb(39,44,50); GAUGE.setAnimated(true); VBox vBox = new VBox(bar, label, GAUGE); vBox.setSpacing(3); vBox.setAlignment(Pos.CENTER); return vBox; } public static void main(String[] args) { launch(args); } } Listing 3. Example IoT dashboard The code in Listing 3 will give you the simple dashboard shown in Figure 17. Figure 17. Example IoT dashboard Conclusion I hope this Medusa library will be useful. I will add more skins and documentation (Javadoc, blogposts, and demos) in the future. If you have any needs for a special skin, let me know: [email protected]. I will try to add your suggested skin to the library, if it makes sense for others, too. The Medusa library is available under the Apache 2.0 license, which will give you all the freedom you need—whether you would like to use it in a commercial project or in an open source project. See Also - The Medusa project is hosted on GitHub. - The Medusa binaries are available on the Maven Central Repository (search for "Medusa") and on Bintray. About the Author Gerrit Grunwald is a software engineer with more than ten years of experience in software development. He has been involved in Java desktop application and controls development. Gerrit is interested in Java on the desktop and Java-driven embedded technologies based on Oracle Java SE Embedded. He is a true believer in open source and has participated in popular projects such as JFXtras.org as well as creating his own projects (Medusa, Enzo, SteelSeries Swing, and SteelSeries Canvas). Gerrit blogs regularly on subjects related to the IoT, Java, and JavaFX, and he is an active member of the Java community, where he founded and leads the Java User Group Münster (Germany). He is a JavaOne Rock Star and a Java Champion. He is also a speaker at conferences and user groups internationally and writes for several magazines. Join the Conversation Join the Java community conversation on Facebook, Twitter, and the Oracle Java Blog!
https://community.oracle.com/docs/DOC-992746
CC-MAIN-2019-47
refinedweb
2,506
50.23
(for example disused=yes) is now discouraged for many features. not always correct. It was used for the same purpose as described above, but the syntax was unhelpful because automated consumers of the data would have had to be rewritten. - Some disused features will be tagged in the database as disused=yes. This tag alone is not always sufficient to describe the object's status consistently. - Make any tags which no longer have current meaning as a result of the disuse unavailable to software. This can be done by prefixing their keys with the namespace disused: rather than using disused=yes. Updating older tagging Uses of the simple tag should ideally be updated as follows: Note that for some objects disused=yes is a good idea. For example disused quarries should be tagged landuse=quarry + disused=yes, not with disused:landuse=quarry. The difference is that shop=* describes only active shops (where you may buy things or services) while landuse=quarry describes type of landscape, not a company operating there. The same applies also to for example man_made=adit or building=*. Disused railways Disused rail lines have a special method of tagging which is in use: See also - Lifecycle prefix - Comparison of life cycle concepts, for a discussion of how to tag features through their life-cycle from proposal through operation to complete obliteration. Possible Tagging Mistakes Automated edits are strongly discouraged unless you really know what you are doing!
https://wiki.openstreetmap.org/wiki/Disused
CC-MAIN-2020-45
refinedweb
240
53.71
The to persist an actor state but it’s integration needs to be well thought as it can greatly impact your application design. It fits nicely with the actor model and distributed system design – but is quite different from what a “more classic” application looks like. In this post I am going to gloss over the different components of Akka Persistence and see how they influence the design choices. I’ll also try to cover some of the common pitfalls to avoid when building a distributed application with Akka Persistence. Although Akka Persistence allows you to plug in various storage backends in this post I mainly discuss using the Cassandra backend. Persisting data Akka persistence is a way to persist data and it’s is not a way to access that data! The persisted data is serialised and saved as raw bits. Which means it’s not readable nor queryable. It’s possible to query the metadata (persistence id, timestamps, sequence number, …) but not the data itself as it is stored in a blob. It we look at the akka_journal table definition in Cassandra we can see that the application data is stored as a blob into the event field. CREATE TABLE warehouse.akka_journal ( persistence_id text, partition_nr bigint, sequence_nr bigint, timestamp timeuuid, timebucket text, event blob, event_manifest text, message blob, ser_id int, ser_manifest text, tag1 text, tag2 text, tag3 text, used boolean static, writer_uuid text, PRIMARY KEY ((persistence_id, partition_nr), sequence_nr, timestamp, timebucket) ) Of course, it’s still possible to store the data somewhere else but it’s a different problem. We’ll come back to that later but the key thing to understand is that these 2 concerns shouldn’t be mixed together. It means that the actor that is persisting data with Akka Persistence shouldn’t try to write the same data into another table. This is just going to make the actor’s logic unnecessary complicated (and probably slow down the performances as well). Imagine an actor that is persisting financial transactions. For sure we need to have this information stored elsewhere so that we can query it, audit it or even derive other information from it like balance account information, …. All these operations must not be added into the persisting actor. They don’t belong here. In fact all these other tables that we need are just different views (or queries) of the persisted data and they form the ‘read’ side of the data while Akka Persistence provides the ‘write’ side of the data. We want to keep them separated from each other. The main reason is that the may likely have very different load profiles. For instance, our system can persist thousands of transactions per second (write intensive) but the read side is accessed much less frequently (e.g. only when a customer logs in into its account or to generate a daily report, …). Separating the reads and the writes accesses is known as the CQRS principle (Command-Query-Responsability-Segragation). We’ll go back to the read-part later on. For now let’s focus on what of data needs to be persisted. Event sourcing Akka persistence provides a way to persist the actor state so surely enough the persisted data is just the actor state. That’s one way to do it but not always the most efficient one. Let’s imagine that we have an actor that receive all the transactions of a given customer. Its state is the sequence of all the transactions executed over a day. If it persists its state for every transaction it receives it will persist the same transactions many times. That is not efficient (but Akka persistence allows to it anyway as it doesn’t care of what is persisted). A more common pattern is to persist the events. In this example it means that the actor is persisting every single transaction. It can then rebuild its state (the daily list of transactions) by replaying all the persisted transactions. Now it’s the recovery that is inefficient as the actor needs to replay all the events just to build up the list of the last day of transactions. In practice the 2 approaches are often combine together. The actor persist every single events and once in a while it persists its entire state (the snapshot). On recovery it needs to load the latest snapshot and all the events that occurred after this snapshot. Persisting the events (and replaying them when needed) is known as event-sourcing. In event sourcing there is a distinction between a command and an event. A command is a request to perform an action whereas the event is the outcome of that action. Let’s consider an actor in charge of placing orders. It receives PlaceOrder command, checks if all the products are available and if so persists and emits an OrderPlaced event. import akka.actor._ import akka.persistence._ class OrderProcessor extends Actor with PersistentActor { val persistenceId: String = "OrderProcessor" var turnover: Long = 0L def receiveCommand: Receive = { case command: OrderPlaced => if (checkAvailability(command)) { val orderPlaced = OrderPlaced(command) persist(orderPlaced) { event => turnover = turnover + event.totalPrice sender() ! event } } } def receiveRecover: Receive = { case event: OrderPlaced => turnover = turnover + event.totalPrice } } In event-sourcing only the events are persisted (and not the commands). It makes it easier to rebuild the system state as the events represents all the actions that already happened. The state can be updated directly from the events. It the commands were persisted we’d need to re-apply the commands to recover the state (and if the command fails during the replay – but not when they ware received – we’d end up with an inconsistent state). The key point here is to really be careful about which data you persist. Calling persist is quite easy and it’s possible to persist anything serialisable. Contrast this situation with a schema database (SQL, Cassandra, …) where you need to define your table schema, write your query, go through any ORM-layer (Object-Relational-Mapping) before actually storing any data. Don’t go this slippery road and think carefully about the persisted data. Ideally only the events should be persisted as it makes it easy to rebuild the actor state (along with the snapshot states). So always question why a data needs to be persisted. Serialisation Now that we know which data to persist, let’s focus on the serialisation mechanism. In fact the serialisation mechanism isn’t really part of Kaka-Persistence but provided by Akka-serialisation. It’s the same mechanism that is used when a message needs to be sent over the network (e.g. to another JVM). It’s not mandatory to use serialisation. For instance if your persistent storage supports JSON it’s possible to store a JSON representation of your data and skip the serialisation. (That being said, serialising to/from JSON is certainly not the most efficient solution). The default serialisation mechanism is the classic Java serialisation. The only advantage is that it works out of the box (and it’s probably the only reason why it’s the default option). The main drawbacks are that is both slow and not efficient in terms of serialised data size, plus it doesn’t provide a way to deal with schema evolution. Even the official documentation makes it clear that you shouldn’t go on prod with this serialisation option. While the performance both in terms of speed and data size does matter, for me, the key-factor is the way we deal with schema evolution. Why is it so important? Simply because as the application changes so does the data model of the persisted data. It means that we need to be able to read different versions of our data model when an actor recovers. As of today the most appropriate solution for serialisation seem to be Protobuf and Avro. They are quite similar in terms of performances but the main difference is the way they deal with schema evolution. It’s definitely something you want to know before making a decision here. The subject is worth a blog post and Martin Klepmann already wrote a really good one on this. The read side If you read this far, it should be pretty clear how to persist data. However we also need to read or query this data which wasn’t possible so far. For this matter Akka-persistence provides us with persistence-queries. At the beginning I didn’t quite get it. A persistence query … is it a query that runs forever ? … like a ‘SELECT’ statement that returns all the data and keeps waiting for new ones instead of terminating. It turns out it’s pretty much what it is. Another way to look at it, is to see it as a stream of persisted events. And in fact this is exactly what Akka-Persistence provides: an Akka-Stream of persisted events. This is great we can now subscribe to the events of a given persistence id and write them in a dedicated table that’s going to support our queries. The code looks something like this: implicit val system: ActorSystem = ... implicit val materialiser: Materializer = ... CassandraReadJournal .instance .eventsByPersistenceId("OrderProcessor", 0L, Long.MaxValue) .runForeach(saveEvent) def saveEvent(envelope: EventEnvelope): Unit = envelope.event match { case orderPlaced: OrderPlaced => // Store the orderPlaced in a table we can query .... } And we can use Akka-Streams to do pretty much anything (log, write, aggregate, …). However we get only the events of a single actor (because of the single persistent id). What if we need the messages from different actors? There are actually 2 other type of streams available: - allPersistenceId: A stream of all the persistence ids used - eventsByTag: A stream of all the events tagged with a given value The first one makes you aware of all the persistence IDs available in the system. It might be useful if you need to dynamically subscribe to new ‘eventsByPersistenceId’ streams. The second one is much more interesting as it let’s you combine the events from multiple actor in a single stream. This is especially useful to perform aggregation. However it requires that the events are tagged when persisted on the write side. It means the persistent actor should wrapped the event into a ‘Tagged’ envelope: persist(Tagged(orderPlaced, "b2bOrder")) { event => turnover = turnover + event.totalPrice sender() ! event } Alternatively it’s possible to put in place a ‘WriteEventAdapter’ that can wrap the event in the ‘Tagged’ envelope. It avoids to change to persistent actor code and plug-in more/different behaviour as needed. Now back to the read side: CassandraReadJournal .instance .eventsByTag("b2bOrder") .runForeach(saveEvent) In case you’re wondering how the events flow from the write-side to the read-side it relies on the Akka-PubSub to notify the poller (‘PersistenceIdEventPoller’) of new available events. Additionally the poller also regularly queries the ‘akka-journal’ table (typically every 5 seconds). The support of the tagged events depends on the persistent backend. The Cassandra persistence plugin supports it but only with 1 tag per event. (In case you need to apply multiple tags to a single event you can duplicate the message applying one tag to each copy). Behind the scene the eventsByTag query is backed by a Cassandra materialised view. That’s great as it keeps the writes atomic: Any event written to the table ‘akka_journal’ is automatically replicated by Cassandra into the materialised view. CREATE MATERIALIZED VIEW warehouse.eventsbytag1 AS SELECT tag1, timebucket, timestamp, persistence_id, partition_nr, sequence_nr, event, event_manifest, message, ser_id, ser_manifest, writer_uuid FROM warehouse.akka_journal WHERE persistence_id IS NOT NULL AND partition_nr IS NOT NULL AND sequence_nr IS NOT NULL AND tag1 IS NOT NULL AND timestamp IS NOT NULL AND timebucket IS NOT NULL PRIMARY KEY ( (tag1, timebucket), timestamp, persistence_id, partition_nr, sequence_nr ) As we’ve seen the stream of events for a given persistence ID (i.e. the query eventsByPersistenceId) forms an ordered sequence of events indexed by the ‘sequenceNr’. It is a simple structure and provides some useful guarantees. If events A occurred before B, A will be before B in the events sequence. It’s a useful property if there is a causal dependency between A and B and your application expects to see A before B. The usage of a sequence number makes sure there is no hole in the sequence (it’s not possible to receive event 2 before event 1). By using the eventsByTag query we lose such guarantees. The is no longer a unique sequence number for the stream (a sequence number is per persistence ID – not per tag). Therefore there is no guarantee that if event A1 is received before event B2 it actually occurred in this order (B2 might occur first and then A1). That means there is no more causal consistency. Finally the Cassandra materialised view does replicate any data from the base table to the view but it doesn’t guarantee they are written in the same order. (The final order in the view is defined by the clustering key but there is no guarantee on the order in which the data is written). It means the event stream may receive event B before event A even though A was persisted before B. In most cases this is not a problem but if your application requires such consistency guarantees it’s good to know how the system works and what assumptions hold. Conclusion And that concludes this guidelines on Akka-Persistence. This is a very interesting framework to build distributed application. It is quite flexible as it allows you to choose your storage backend, … It also requires that you really think through the design of your application as it doesn’t really prevent you from doing the wrong thing. Building a distributed system is a complex thing and you need to understand how the system your using works in order to know which assumptions you can make regarding the consistency guarantees.
http://www.beyondthelines.net/computing/akka-persistence/
CC-MAIN-2017-26
refinedweb
2,316
54.32
Forms Data Controls :: Binding Parent Repeater Item Index In Child Repeater Control?Jun 17, 2010 I want to bind parent repeater item index in child repeater control using inline code not code behind side. For example [Code].... I want to bind parent repeater item index in child repeater control using inline code not code behind side. For example [Code].... I have two repeater which are nested..Means there is one repeater and inside that i have another repeater. I want data in following format: *Transaction Id:1xasd2* Product1 2 500 *Transaction Id:2asd21* Product2 1 100 Product3 2 200 So how can i achieve this? protected void Page_Load(object sender, EventArgs e) [Code].... I am using nested repeaters and wanted to access a value from my parent repeater within the child...is this possible?View 3 Replies have some Objects, lets say Employee and Role defined as below and I have defined relationships in my database that gives me a list of objects say employees and thanks to my framework each employee object also has a Role object linked via the RoleIDID, UserName, Password, Email, RoleIDRoleID, RoleNameSo in code I can do something like this Employee emp = dataService_GetEmployeeByID(1); string RoleName = emp.Role.RoleName; Now here is my problemI can bind any object in a repeater and it works fine for the first level in my relationship For instance <%# Eval("UserName") %> But I need to be able to show the details for my child objects as well (Role) so something like this (which does not work) <%# Eval("Role.RoleName") %> Edit: I have a working solution already - I would just like to know why my original attempt didn't work. My original attempt is the code below. I'm using the approach I found here:[URL] 306154 to implement a nested Repeater. Each parent item has one or more children items (the point of having the nested Repeater) with a dropdown horizontally aligned to each child item. In an effort to re-use the nested part of the Repeater I wanted to develop that piece as a user control but couldn't get it to work. I am wondering if it is even possible and if so how? Here is my user control apsx: <asp:Repeater .... <ItemTemplate> <tr class="text" id="RepeaterItemRow" runat="server"> <td> <%#DataBinder.Eval(Container.DataItem, "Name") %> </td> <td> </td> <td> </td> <td> </td> <td> </td> <td> </td> </tr> </ItemTemplate> And here is my code behind for the user control. I noticed when I debugged NestedDataSource was null even though in the parent page load the data is there: public DataSet NestedDataSource { get; set; } protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { NestedRepeater.DataSource = NestedDataSource; NestedRepeater.DataBind(); } } In the parent Repeater in the `<ItemTemplate> I have: <asp:RepUC And finally the page code behind: protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { BuildWBS(); } } private void BuildPage() { DataSet ds = new DataSet(); ds = DataAccessLayer.GetData("System"); ds.Relations.Add("nestedrel", ds.Tables[0].Columns["Id"], ds.Tables[1].Columns["ParentId"]); ParentRepeater.DataSource = ds; ParentRepeater.DataBind(); } The page loads but nothing relating to the repeater appears on the screen. So, can you see anything wrong with what I have? Is the way I tried doing this even possible? How can I get current viewed itemindex of a repeater item template, have tried a few stuffs i got from google search, but it seems not to work. I have items like: 1 text link1 2 text link2 3 text link3 4 text link4 where 1, 2, 3, 4 are the itemindex, i want to be able to get 1 if text link1 is clicked, or 2 if text link two is clicked etc. Here is my repeater html"..... Right now I'm using the following code in my markup: <asp:HiddenField ID="TheName" runat="server" Value=<%#Eval("SpeakerName")%> /> I would like to use: <asp:HiddenField ID="TheName" runat="server" Value=<%#Eval(0)%> /> I would like to be able to call it by index instead of explicitly by "SpeakerName". Is there a way to do this in ASP.NET 4.0? how to loop check index in repeater control ?View 1 Replies How to get all item from repeater control that already bidding ? Code error as below : Dim tt Dim item As RepeaterItem For Each item In Repeater1.Items MsgBox(item.DataItem.ToString). I have this code protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { ArrayList olist = new ArrayList() { "aa", "bb", "cc", "dd" }; [Code] .... I want to find the index of the button inside the repeater. It's look something like this .... am trying to get the index of the DataItem from the DataTable and insert that into the repeater. I tried this solution: [URL] but that does not give me what I want. that solution only gives me the location of the item within the Repeater, but I want the location of the item within its source DataTable. The reason for this is because I want to number my search results, and if I use the above solution then the numbers reset on pagination.View 1 Replies I want to add a class to a div inside my repeater control based on whether the query string value is true or false, so that I can style it differently.View 6 Replies I am using a repater whose item template is having a dropdownlist. Now i want to access that dropdownlist from a button click event. Here is the code iam using : protected void btn1_Click(object sender, EventArgs e) { foreach (RepeaterItem item in rptWord.Items) { DropDownList ddl1 = (System.Web.UI.WebControls.DropDownList)rptWord.FindControl("ddlWord"); } } But m getting ddl1 as null. For this i created a function which is as follows: public void myFunction(object sender, RepeaterItemEventArgs e) { foreach (RepeaterItem item in rptWord.Items) { DropDownList la = (System.Web.UI.WebControls.DropDownList)e.Item.FindControl("ddlWord"); } } using this function iam able to access the repeater but i guess its not possible to call this function on button click event . I am trying to use a repeater control that will display a hyperlink control. For the text of that Hyperlink control I would like to concatenate to fields from my data source (lets say First Name and Last Name). How would I do this appropriately for the Hyperlink control within an ItemTemplate?View 3 Replies I have an aspx form containing many individual controls like and one repeater control. The repeater control items are basically having dropdowns, textboxes etc. next to each other. My problem is I can set the tab index of each individual element easily but I don't know how to set the tab index of the first element in of the first repeater item. That is why first I need to click the item and then the Tab index inside the repeater control works fine. Do you know how can I fix this? Should I handle this on the server side? or jQuery etc?View 1 Replies i need to Filter and display Google Map Markers from database based on DropDownList selection. for that I need to pass the selected value of DropDownList to the query.What should i do in the following code to do the above task? using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; [code]... I title 2 </div> <div> title 3 title 4 </div> <div> title 5 title 6 </div> How would I do this in the ItemDataBound event of repeater control? The title pretty much explains it. I want to bind a single item to a detail type control. I can bind to a repeater perfectly fine and obviously only one item will be displayed. It seems like there would be a better suited control for this. I know about FormView and DetailsView but they both generate a table which I don't really want. Something similar to the Repeater since it doesn't generate any content other than what you put in the template.View 5 Replies
https://asp.net.bigresource.com/Forms-Data-Controls-Binding-parent-repeater-item-index-in-child-repeater-control--MIKsTbrqQ.html
CC-MAIN-2021-31
refinedweb
1,333
64.51
Deploy Swift HTTP Serverless Container to Google Cloud Run in 5 minutes Published at Apr 23, 2019 >. The service will be invocable via the HTTP request from the client and the payment will be using pay per use model. There are many features that Google Cloud provides for the Cloud Run, such as: - Fast autoscaling, automatically scales our app up and down based on the traffic. Container based, using Docker container to build and deploy our service. - No DevOps. All we need to do is deploy our container and Google Cloud will manage all the rest for us. - Based on Knative, the portability means we can also deploy to Google Kubernetes Engine (GKE Cluster) across platforms. - Integrated logging and monitoring with StackDriver. - Ability to use our own custom domains. You can learn more about the Cloud Run directly from Google with the official link Cloud Run | Google Cloud. What we will build In this article, we will deploy a simple Swift HTTP Server app to Google Cloud Run using Dockerfile. We will use the Google Cloud SDK with Command Line for this. There are only 4 main tasks that we need to perform: - Prepare our HTTP Swift app. - Build the Dockerfile. - Upload to Container Registry. - Deploy container to Google Cloud Run. Setting up Google Cloud Before you begin, here are the things that you require to have: 1. Register and Sign in to the Google Cloud Platform. (). 2. Download and install Google Cloud SDK to your machine. (). 3. Create a new project from the Google Cloud Console. 4. Make sure to follow all these steps in here to activate the Google Cloud Run API for your project. (). Prepare our HTTP Swift Application Open your terminal/shell, create a new directory named hello-swift-cloudrun and navigate to that directory. mkdir hello-swift-cloudrun cd hello-swift-cloudrun Inside the directory, create a new swift package. swift package init --type executable Next, open Package.swift and copy the following code into the file. We will add the Swifter tiny HTTP server library as the dependency to run our HTTP Server in Swift. / swift-tools-version:5.0 // The swift-tools-version declares the minimum version of Swift required to build this package.import PackageDescriptionlet package = Package( name: "hello-swift-cloudrun", dependencies: [ .package(url: "", .upToNextMajor(from: "1.4.6")) ], targets: [ .target( name: "hello-swift-cloudrun", dependencies: ["Swifter"]), .testTarget( name: "hello-swift-cloudrunTests", dependencies: ["hello-swift-cloudrun"]), .] ) Next, open the main.swift file from the Sources directory. Copy the following code. import Swifter import Dispatch import Foundation let dateFormatter = DateFormatter() dateFormatter.locale = Locale(identifier: "en_US") dateFormatter.dateStyle = .full dateFormatter.timeStyle = .full let server = HttpServer() server["/html"] = { req -> HttpResponse in return .ok(.html(""" <h1>Swift Hello World from Google Cloud Run Serverless</h1> <p>Current time is \(dateFormatter.string(from: Date()))</p> """)) } server["/api"] = { req -> HttpResponse in return .ok(.json([ "result": """ Swift Hello World from Google Cloud Run Serverless\n Current time is \(dateFormatter.string(from: Date())) """ ] as AnyObject)) } let semaphore = DispatchSemaphore(value: 0) do { let port: Int = Int(ProcessInfo.processInfo.environment["PORT"] ?? "8080") ?? 8080 try server.start(UInt16(port)) print("Server has started ( port = \(try server.port()) ). Try to connect now...") semaphore.wait() } catch { print("Server start error: \(error)") semaphore.signal() } Here are the things that it performs: - Register 2 routes /html and /api. These routes will return the response containing the current date that is formatted using DateFormatter. The html path will return the text in HTML format, while the api path will return the response in JSON format. - Retrieve the PORT from the environment variable. - Start the HTTPServer passing the PORT to listen for the request in the port. Try to build and run the server by typing these commands the terminal. swift build swift run To test, open your browser and navigate to the address. You should see the text printed with the current time in your browser. . Build the Dockerfile Next, we will containerize our app by creating the Dockerfile. Create the file and copy the following code below. FROM ibmcom/swift-ubuntu:latest WORKDIR /usr/src/app COPY . . RUN swift build --configuration release CMD [ "swift", "run", "--configuration", "release" ] This will copy all the file to the container image, then run the swift build using release configuration. It will also run the server after the build has been finished. Upload to Container Registry Next, we need to upload our container to Cloud Registry. Make sure to retrieve your project id for your project. Run the command below. gcloud builds submit --tag gcr.io/[PROJECT-ID]/hello-swift-cloudrun Wait for the container builds process to finish and then uploaded to container registry. It will print the success message to the terminal. You can check the list of the successfully uploaded container using this command. gcloud container images list Deploy Container to Google Cloud Run At last, we need the deploy the image to the Google Cloud Run. Type these following command. gcloud config set run/region us-central1 gcloud beta run deploy --image gcr.io/[PROJECT-ID]/hello-swift-cloudrun --memory 512M --allow-unauthenticated Here are several things that it performs: - Set the region of deployment to us-central1. - Deploy the image from the Container Registry path in this case hello-swift-cloudrun. - Configure to use 512M of memory to use. - Allow unauthenticated request to invoke the HTTP. You can configure other things, from memory, concurrency, and request timeout. Check the link at Configuring memory limits | Cloud run | Google Cloud. After the deployment finished successfully, the terminal will print the URL endpoint of the deployed service that we can use. Open your browse and navigate to: - - Monitoring through Dashboard You can view all your deployed services to Cloud Run fromt the console dashboard. In here you can also manage the custom domains, delete and create services, and view the logs of your deployed services. !!!Make sure to delete all the resources that you have created after you finish this article to avoid billings!!! Conclusion You can clone the completed project in the repository at alfianlosari/SwiftCloudRun. That’s it, in just a simple steps we have deployed our serverless backend using Docker to the Google Cloud Run managed autoscaling service. The serverless paradigm provide us speed and reliability to execute rapidly as the size of our application grows over time ⚡️⚡️⚡️.
https://www.alfianlosari.com/posts/deploy-swift-http-containt-to-google-cloud-run-in-5minutes/
CC-MAIN-2021-17
refinedweb
1,051
58.99
|< Windows Process & Threads: C Run-Time 3 | Main | Windows Process & Threads: C Run-Time 5 >| Site Index | Download | MODULE S PROCESSES AND THREADS: C RUN-TIME Part 4: Program Examples The _beginthread() function creates a thread that begins execution of a routine at start_address. The routine at start_address must use the __cdecl calling convention and should have no return value. When the thread returns from that routine, it is terminated automatically. _beginthreadex() resembles the Win32 CreateThread() API more closely than _beginthread() does. _beginthreadex() differs from _beginthread() in the following ways: ▪ _beginthreadex() has three additional parameters: initflag, security, threadaddr. The new thread can be created in a suspended state, with a specified security (Windows NT only), and can be accessed using thrdaddr, which is the thread identifier. ▪ The routine at start_address passed to _beginthreadex() must use the __stdcall calling convention and must return a thread exit code. ▪ _beginthreadex() returns 0 on failure, rather than –1L. ().. Compiler Options for Multithread program For Visual C++: The using of the /MD, /ML, /MT, /LD compiler options (Run-Time Library), the explanation is given in the following Table. The /MD[d], /ML[d], /MT[d], /LD[d] options select either single-threaded or multithreaded run-time routines, indicate if a multithreaded module is a DLL, and select retail or debug versions of the run-time library.. To set this compiler option in the Visual Studio development environment for Visual C++. For Visual Studio .Net (C++ .Net - shown in the following figures): Project menu (or other short-cut keys) → your_projet_name Properties… → C/C++ folder → Code Generation. Program Example The following example uses _beginthread() and _endthread(). // mythread.cpp // compile with: /MT /D "_X86_" /c for Visual C++/.Net #include <windows.h> /* _beginthread, _endthread */ #include <process.h> #include <stddef.h> #include <stdio.h> #include <stdlib.h> #include <conio.h> /* Function prototypes... */ void Bounce(void *ch); void CheckKey(void *dummy); /* GetRandom() returns a random integer between min and max. */ #define GetRandom(min, max) ((rand() % (int)(((max) + 1) - (min))) + (min)) /* Global repeat flag and video variable */ BOOL repeat = TRUE; /* Handle for console window */ HANDLE hStdOut; /* Console information structure */ CONSOLE_SCREEN_BUFFER_INFO csbi; int main(int argc, char *argv[]) { CHAR ch = 'A'; hStdOut = GetStdHandle(STD_OUTPUT_HANDLE); if (hStdOut == INVALID_HANDLE_VALUE) printf("GetStdHandle() failed, error: %d.\n", GetLastError()); else printf("GetStdHandle() is OK.\n"); /* Get display screen's text row and column information. */ if (GetConsoleScreenBufferInfo(hStdOut, &csbi) == 0) printf("GetConsoleScreenBufferInfo() failed, error: %d.\n", GetLastError()); else printf("GetConsoleScreenBufferInfo() is OK.\n"); printf("--------ENJOY THE SHOW-------\n"); /* Launch CheckKey() thread to check for terminating keystroke. */ _beginthread(CheckKey, 0, NULL); /* Loop until CheckKey() terminates program. */ while(repeat) { /* On first loops, launch character threads. */ _beginthread(Bounce, 0, (void *) (ch++)); /* Wait one second between loops. */ Sleep(1000L); } return 0; } /* CheckKey() - Thread to wait for a keystroke, and then clear repeat flag. */ void CheckKey(void *dummy) { printf("Press any key to stop.\n"); _getch(); /* _endthread implied */ repeat = 0; } /* Bounce - Thread to create); printf("Thread ID: %d.\n", _threadid); newcoord.X = GetRandom(0, csbi.dwSize.X + 2); newcoord.Y = GetRandom(0, csbi.dwSize.Y - 4);(-2, 2); newcoord.Y += GetRandom(-2, 2); /* Correct placement (and beep) if about to go off the screen. */ if (newcoord.X < 0) newcoord.X = 1; else if (newcoord.X == csbi.dwSize.X) newcoord.X = csbi.dwSize.X - 4; else if (newcoord.Y < 0) newcoord.Y = 1; else if (newcoord.Y == csbi.dwSize.Y) newcoord.Y = csbi.dwSize.Y - 4; /* If not at a screen border, continue, otherwise beep. */ else continue; Beep(((char) ch - 'A') * 100, 175); } /* _endthread given to terminate */ _endthread(); } The output sample: Verifying the threads creation through Windows Task Manager. Another Example. // mythread.cpp // compile with: /MT – Multithreaded, Visual C++/.Net #include <windows.h> #include <stdio.h> #include <conio.h> #include <process.h> unsigned Counter; unsigned __stdcall SecondThreadFunc(void* pArguments) { printf("In second thread...\n"); while (Counter < 1000000) Counter++; _endthreadex(0); return 0; } int main(int argc, char *argv[]) { HANDLE hThread; unsigned threadID; printf("Creating second thread...\n"); printf("Thread ID: %d.\n", thread); return 0; } Other functions and structures definitions used in the previous program examples are presented in the following section. _doserrno, errno, _sys_errlist, and _sys_nerr These global variables hold error codes used by the perror() and strerror() functions for printing error messages. Manifest constants for these variables are declared in STDLIB.H as follows: extern int _doserrno; extern int errno; extern char *_sys_errlist[ ]; extern int _sys_nerr;. On an error, errno is not necessarily set to the same value as the error code returned by a system call. For I/O operations only, use _doserrno to access the operating-system error-code equivalents of errno codes. For other(). The following errno values are compatible with 32-bit Windows applications. Only ERANGE and EDOM are specified in the ANSI standard. Take note that ANSI standard has been superseded by ISO/IEC standard.
http://www.tenouk.com/ModuleS.html
crawl-001
refinedweb
798
52.05
1007. Maximum Subsequence Sum (25) GivenSample Output: 10 1 4 #include<stdio.h> using namespace std; int buf[10002]; int main() { freopen("F://Temp/input.txt", "r", stdin); int n; scanf("%d", &n); for(int i = 0; i < n; i ++) scanf("%d", &buf[i]); bool neg_flag = true;//是否全部是负数 for(int i = 0; i < n; i ++) if(buf[i] >= 0) { neg_flag = false; break; } if(neg_flag) { printf("0 %d %d\n", buf[0], buf[n-1]); return 0; } int start, end; int max = -1, tmp_sum = 0; for(int i = 0; i < n; i ++) { tmp_sum += buf[i]; if(tmp_sum > max) { max = tmp_sum; end = i; } else if(tmp_sum < 0) tmp_sum = 0; } max = -1, tmp_sum = 0; for(int i = end; i >= 0; i --) { tmp_sum += buf[i]; if(tmp_sum > max) { max = tmp_sum; start = i; } } printf("%d %d %d\n", max, buf[start], buf[end]); return 0; }
https://blog.csdn.net/caicai_zju/article/details/49947817
CC-MAIN-2018-30
refinedweb
140
51.86
Writing software that is concurrent, scalable and fault-tolerant is hard. To achieve concurrency developers have to manage multiple threads, which can be tricky and error-prone. This post looks at how the Actor model makes it easier to write concurrent code. Specifically it renders a Mandelbrot Set using the Akka framework and investigates how Akka can be used with Scala to create highly concurrent and scalable systems. What is the Actor Model? The Actor model allows the developer to write concurrent and distributed systems by abstracting the low-level problems of having to deal with locking and thread management. The model is made up from a set of components known as Actors. They communicate by sending messages, with some content, to one another. There are several key points about how Actors behave: * When a message is received they decide what to do based upon the message type and content. * In addition to normal operations, they can create new Actors or send messages to other Actors. * They each have an address. Messages can only be sent to Actors whose address is known. * They have a mailbox, all messages received go to the mailbox. * Typically, they act upon messages in the order they arrived in the mailbox. * If the mailbox is empty the Actor will handle a new message as soon as it arrives. * Since the Actors can act asynchronously there is no guarantee what order the messages will arrive in. The diagram below shows multiple Actors in an Actor system communicating through message passing. There are various implementations of the Actor model. Akka is a framework available for both Java and Scala, in this post I have used it with Scala. Scala used to have its own implementation of the Actor model but this was deprecated in Scala 2.10 in favour of the Akka framework’s implementation. Implementing the Actor Model using Akka To demonstrate the Actor model I have looked at the problem of defining which complex numbers lie in the Mandelbrot Set. Specifically, I have used the Escape Time Algorithm to see if each point lies in the set. The algorithm iteratively applies a set of mathematical operations to an input value and terminates when the value reaches a specified threshold or after a maximum number of iterations. The number of iterations performed for each complex number is the value that I have associated with it. This problem is well suited for the Actor model as it is known as an ‘embarrassingly parallel’ problem. Since the calculations for each individual complex number can be done completely without state and independent of one another, this problem is easy to parallelise. I have split the grid of complex numbers into several ‘horizontal’ segments which can each be calculated concurrently. I used the Sbt build tool for this Scala project and my build.sbt is as follows: name := "akka-scala" version := "1.0" scalaVersion := "2.11.2" resolvers ++= Seq( "Typesafe Repository" at "" ) libraryDependencies ++= Seq( "com.typesafe.akka" %% "akka-actor" % "2.3.4" ) This file defines the project name, project version number, the version of Scala to be compiled against and any dependencies I need. For this project the only dependency is the Akka framework, of which I’m using version 2.3.4. I can now start creating my Actor model, in which I will use three different Actors: - Master Actor - keeps track of results for each complex number and will handle the forwarding of segments of work to the workers. - Worker Actor - performs the necessary calculations for each of the segments of complex numbers. - Result Handler Actor - handles the result when all points have been calculated. I have started by creating the following: object Mandelbrot extends App { calculate(numWorkers = 4, numSegments = 10) sealed trait MandelbrotMessage case object Calculate extends MandelbrotMessage def calculate(numWorkers: Int, numSegments: Int) { val system = ActorSystem("MandelbrotSystem") val resultHandler = system.actorOf(Props[ResultHandler], name = "resultHandler") val master = system.actorOf(Props( new Master(numWorkers, numSegments, resultHandler)), name = "master") master ! Calculate } } Here I have created an object which extends App so that it is executed when the program is run. The object starts by calling a calculate method with the number of workers and segments to use. numWorkers will become clear later, numSegments is how many ‘horizontal’ segments I have split the grid into. The first thing the calculate method does is create an Actor system, this is a collection of Actors that can share configuration. The method then creates a Result Handler Actor, the address of which is passed into the Master Actor. The final line of the method sends a Calculate message to the Master Actor - this message is used to tell the it to calculate the Mandelbrot Set. Messages in an Akka model should be lightweight and immutable, this makes case objects/classes perfect. I have therefore created a Calculate case object. It extends a trait which will be used for all my messages. I now add the Master Actor to the Mandelbrot object: import akka.actor._ import akka.routing.RoundRobinPool import scala.concurrent.duration._ import scala.collection.mutable object Mandelbrot extends App { ... case class Work(start: Int, numYPixels: Int) extends MandelbrotMessage val canvasWidth: Int = 1000 val canvasHeight: Int = 1000 ... class Master(numWorkers: Int, numSegments: Int, resultHandler: ActorRef) extends Actor { var mandelbrot: mutable.Map[(Int, Int), Int] = mutable.Map() val workerRouter = context.actorOf(Props[Worker].withRouter(RoundRobinPool(numWorkers)), name = "workerRouter") def receive = { case Calculate => val pixelsPerSegment = canvasHeight/numSegments for (i <- 0 until numSegments) workerRouter ! Work(i * pixelsPerSegment, pixelsPerSegment) } } } There is a lot going on here so I’ll go through it step by step. The Master Actor is implemented in the Master class which extends Akka’s Actor class. The constant values canvasHeight and canvasWidth are used to define the range of complex numbers I will use. The Master Actor starts by creating a mutable map, which is used to hold the results for each complex number. Each entry in the map has a tuple as the key (containing the co-ordinates of the point) and the number of iterations it takes to ‘escape’ the algorithm as the value. I have used a mutable map so I can easily add new results as and when they are received. A round robin router is then created; each time a message is passed to the router it will be forwarded to the next Actor, i.e. if the router contains two Actors the first message it receives will go to the first Actor, the second to the second Actor, the third to the first Actor and so on. numWorkers defines how many Worker Actors are available in the router. Every Actor in Akka must implement the receive method, this method is called when a message is received. In the Master Actor the message is handled using pattern matching; if the message is a Calculate object then it passes Work messages to the worker router. This is where the problem is parallelised by splitting the complete set of complex numbers into the specified number of segments. Each Work message sent to the worker router states the segment of numbers it should calculate. If the receive method receives a message that is not a Calculate object then it will do nothing and ignore it - in a production environment you would more likely want to throw an exception if this happened. Next I need to create the Worker Actor: case class Result(elements: mutable.Map[(Int, Int), Int]) extends MandelbrotMessage val maxIterations: Int = 1000 ... class Worker extends Actor { def calculateMandelbrotFor(start: Int, numYPixels: Int): mutable.Map[(Int, Int), Int] = { var mandelbrot: mutable.Map[(Int, Int), Int] = mutable.Map() for (px <- 0 until canvasWidth) { for (py <- start until start + numYPixels) { // Convert the pixels to x, y co-ordinates in // the range x = (-2.5, 1.0), y = (-1.0, 1.0) val x0: Double = -2.5 + 3.5*(px.toDouble/canvasWidth.toDouble) val y0: Double = -1 + 2*(py.toDouble/canvasHeight.toDouble) var x = 0.0 var y = 0.0 var iteration = 0 while (x*x + y*y < 4 && iteration < maxIterations) { val xTemp = x*x - y*y + x0 y = 2*x*y + y0 x = xTemp iteration = iteration + 1 } mandelbrot += ((px, py) -> iteration) } } mandelbrot } def receive = { case Work(start, numYPixels) => sender ! Result(calculateMandelbrotFor(start, numYPixels)) } } This might initially look quite complicated but it isn’t actually doing that much. As with all Actors I need to implement the receive method - if a Work message is received then values are calculated for the complex numbers. The calculation happens in the calculateMandelbrotFor method. Given a start value and the number of values to compute the method calculates the values as defined by the Escape Time Algorithm and returns them in a mutable map. Once calculated, the results are sent back to the Master Actor. This is done using the sender reference which is always passed with a message in Akka. It is a reference to the sender of the current message. The results are sent using the newly created Result class. The Master Actor can now be updated to handle this message: case class MandelbrotResult(elements: mutable.Map[(Int, Int), Int], duration: Duration) extends MandelbrotMessage ... class Master(numWorkers: Int, numSegments: Int, resultHandler: ActorRef) extends Actor { var numResults: Int = 0 val start: Long = System.currentTimeMillis() ... def receive = { ... case Result(elements) => mandelbrot ++= elements numResults += 1 if (numResults == numSegments) { val duration = (System.currentTimeMillis() - start).millis resultHandler ! MandelbrotResult(mandelbrot, duration) context.stop(self) } } } Here I have added a new case to the receive method which handles Result messages. The first thing this does is add the new elements to the map. A count of the number of results received is then incremented. Results from all Workers are received once the number of Work messages sent equals the number of Result messages received. When all results have been received the length of time it took for the calculations to be performed is stored. A message is then passed to the Result Handler Actor. This message contains the map of calculated values and the time it took to complete the calculations. At this point the Master Actor is no longer needed so it can be stopped. Stopping an Actor will also stop all of its child Actors, in this case the Worker Actors that it was using. The final step is to create the Result Handler Actor: class ResultHandler extends Actor { def receive = { case MandelbrotResult(elements, duration) => println("completed in %s!".format(duration)) context.system.shutdown() } } This is the simplest of the Actors, when it receives a MandelbrotResult message it prints a line in the console displaying the amount of time the computation took. The Actor system is then no longer needed and can be shut down. The Result Handler Actor has been sent the elements in the Mandelbrot Set but currently does nothing with them. It would be nice if there was some way to visualise the results. I have done this using a JFrame: import javax.swing.JFrame import java.awt.{Graphics, Color, Dimension} import scala.collection.mutable class MandelbrotDisplay(points: mutable.Map[(Int, Int), Int], height: Int, width: Int, maxIterations: Int) extends JFrame { setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE) setPreferredSize(new Dimension(height, width)) pack setResizable(false) setVisible(true) override def paint(g: Graphics) { super.paint(g) var histogram: Array[Int] = new Array[Int](maxIterations) for(px <- 0 until width) { for (py <- 0 until height) { val numIters = points(px,py) histogram(numIters-1) += 1 } } var total = 0 for(i <- 0 until maxIterations) { total += histogram(i) } for(px <- 0 until width) { for (py <- 0 until height) { val numIters = points(px,py) var colorVal = 0.0 for(i <- 0 until numIters) { colorVal += histogram(i).toFloat / total.toFloat } val rgb = Color.HSBtoRGB(0.1f+colorVal.toFloat, 1.0f, colorVal.toFloat*colorVal.toFloat) g.setColor(new Color(rgb)) g.drawLine(px, py, px, py) } } } } This algorithm is based on histogram colouring. All it does is extend a JFrame and overrides the paint method. In this implementation each pixel is looped through and given a colour based on the number of iterations it took to ‘escape’ from the algorithm. This can be created by adding one more line to the Result Handler: class ResultHandler extends Actor { def receive = { case MandelbrotResult(elements, duration) => println("completed in %s!".format(duration)) context.system.shutdown() new MandelbrotDisplay(elements, canvasHeight, canvasWidth, maxIterations) } } If I now run the code I see an image similar to the one below, pretty cool! To understand more about what the image shows you should read about Mandelbrot Sets. If you want to see the entirety of my code and try it for yourself you can find it here. I have now implemented a simple Actor model using the Akka framework and I hope you’ll agree that it was a lot simpler than the alternative of explicitly handling thread management. The workers are the part of the system that act concurrently to calculate the value of points in the set. Here I have used four Worker Actors that work at the same time to calculate values. What do you think will happen if I change the value of the numWorkers parameter in the call to the calculate method? Performance Considerations The main advantage of using an Actor model is to easily create highly concurrent and distributed systems, as such you would expect to see a performance benefit from using it. By changing the number of Actors in the router pool in this example I can change the performance of the system. The more Actors in the pool the quicker the calculations will be. I have run the project on a computer with a quad core processor. This means that four Actors is roughly the best performance I can get. Four Actors equates to one thread per core, allowing the core to put all its resources onto that thread. Therefore there is no benefit if I create a pool of eight Actors, as two threads would run on each core, but each thread would have fewer resources and would take longer to calculate. I executed the program a number of times with one to four Actors in the pool and measured the average time it took for the program to execute. The graph below shows the results. With one Actor the average time was 1790ms, this time was drastically decreased to 788ms when four Actors were used. There is a significant performance improvement each time the number of Actors in the router pool is increased and there was more than a 50% reduction in the execution time by changing from one to four actors in the pool. In this example I have used Akka’s RoundRobinPool for delegating messages between the Workers. The round robin pool passes messages to each Actor in the pool in turn. This might not provide the best performance or be appropriate, depending on the use case. Akka offers several other routing methods, including: RandomPool- selects Actors at random to pass the message to. BalancingPool- attempts to distribute work evenly between Actors. SmallestMailboxPool- sends messages to the Actor with the fewest messages in its mailbox. BroadcastPool- The router sends all messages to all Actors in the pool. When creating an Actor system you should decide which routing method is most appropriate for the situation. It is also possible to create custom routers should none of the routers provided by the framework suit your needs. Another consideration when using Akka is that of message delivery. There are three basic categories for message delivery; the default, which has been used in this example, is at-most-once delivery. at-most-once delivery means that the message will be sent once and will be received either once, or it will be lost during delivery and will not be received. This is the cheapest message delivery method with the highest performance. The second message delivery method is at-least-once delivery, multiple messages could be sent, such that at least one is delivered - it indeed could be the case that multiple messages are received. The final method has the worst performance - exactly-once delivery. For this method both the sender and receiver need to keep state to check that duplicate messages are neither sent nor received. When designing an Actor model you should decide which delivery method is most suitable for your needs, sometimes the performance trade-off will be necessary to ensure that every message is delivered the correct number of times. So, Should You Use an Actor Model? Actor models make concurrent software much easier for developers to write, as the developer does not have to deal with thread management and locking. They can write in simple, high level terms of message passing between Actors and let the framework deal with thread management. This will allow developers to produce correct concurrent software much quicker than was previously possible. If you have a problem that can be parallelised then I would suggest you consider using an Actor model. The performance benefit can be substantial and the implementation is relatively straightforward. This benefit can be more substantial when used in a production environment when your program is not running on a multi-core processor but on multiple distributed servers. This post has only scratched the surface of the Actor model and Akka framework but hopefully you have seen their benefits. I would encourage you to try using an Actor model when you next get the chance.
https://blog.scottlogic.com/2014/08/15/using-akka-and-scala-to-render-a-mandelbrot-set.html
CC-MAIN-2022-33
refinedweb
2,891
55.34
Hello, this is my first article here, so please, if you can, bare with me. Also, my English can be "different", and I apologize for any mistakes. OK, first of all, the problem: I needed to map an object's properties to a set of ListView columns and store the object itself within a ListViewItem. Why? Because one of the greatest things of OOP is that you can create a class that would contain a set of data and methods that operate with that data, and use it in a very elegant way. Because I like working with objects instead of any other method of storing data, I wanted to make the ListViewItem hold an object and automatically map its properties to a set of columns from a ListView control. ListView ListViewItem The solution: The ListViewExtendedItem class. This class inherits ListViewItems, and adds a new property, a different constructor, and a (two if you count the overload) method. ListViewExtendedItem Well, because I make my own classes that hold the data I'm interested in keeping, I have full control over how I build them up. Also, I have complete control over what columns to display and their order. Because of this, I can use custom attributes to mark the properties of the data classes with the corresponding column name. A simple solution, but with a big disadvantage: no other class may be used except those that have been written with this custom attribute. ListViewColumnAttribute This is the attribute we use to mark the properties with the column name where they should be mapped to. It is a very basic attribute, as you can see below. It only has one constructor that takes a string as a parameter (the column name) and overrides the ToString() method so that when we later check for the column name, it is a bit easier. ToString() public class ListViewColumnAttribute : Attribute { private string _columnName; public ListViewColumnAttribute(string columnName) { _columnName = columnName; } public override string ToString() { return _columnName; } } So far so good, nothing too fancy going on. We will use this attribute when we write our classes like this: [ListViewColumn("some column")] This is the class that does all the work. It inherits the ListViewItem class and adds the following: ListViewExtendedItem(ListView parentListView, object data) parentListView data Data object Print() Update Update(object data) Update(object data, ListView listView) OK, the actual code that does all the work is contained within the body of the second overload of the Update method. Both the constructor and the first overload calls this one to do the job. Here is the code: public void Update(object data, ListView listView) { //Clear all the subitems. this.SubItems.Clear(); //Get the type of the data object. Type typeOfData = data.GetType(); //Define this to keep track of whats happening bool completed_column = false; foreach (ColumnHeader column in listView.Columns) { completed_column = false; //Get all the properties of the object's type. foreach (PropertyInfo pInfo in typeOfData.GetProperties()) { //Get all the custom attributes for the //current property, the use of true here tells //the runtime that you wish to check the inherited //class also, this may be usefull if //your objects inherit from a base one. foreach (object pAttrib in pInfo.GetCustomAttributes(true)) { //Check to see if the type of the attribute //is that of ListViewColumnAttribute. if (pAttrib.GetType() == typeof(ListViewColumnAttribute)) { //Check to see if the column names coincide. if (pAttrib.ToString() == column.Name) { //Check to see if it has to update it's own //Text property, or if it has to add subitems. if (column.DisplayIndex == 0) { this.Text = pInfo.GetValue(data, null).ToString(); completed_column = true; break; } else { this.SubItems.Add(pInfo.GetValue(data, null).ToString()); completed_column = true; break; } } } } if (completed_column) { break; } } } //Keep the object here so that it can be easyley retrieved //when the user performs some action on the ListView. _data = data; } First, we clear all the SubItems, so there won't be any other columns besides the ones needed. Then, we get the Type of the data object, and we define a bool variable that will help optimize the method a bit. Then, for each column the provided ListView control has, we have to check the object's properties if one of them has been marked with a custom attribute of type ListViewColumnAttribute and if the attribute has the same column name as the column we are currently searching for. The code is not really pleasant, three nested foreach and some ifs along the way are not pleasant to the eyes. However, it does the job done, and for the time being, I cannot think of another way to check the members of a type. After one member is found that has been marked with a matching attribute and the column name matches the one stored by the attribute, we proceed to check if the column we are populating is the first one (DisplayIndex=0) because the item's own text property is displayed, instead of the text held by a SubItem. SubItem Type bool foreach if DisplayIndex=0 Notice that there is an if statement checking the value of completed_column. This is to ensure that after a match has been found, the code does not linger and search any more, but goes to the next column. After all the columns are done, the internal _data object is assigned the one passed to the function. completed_column _data This is pretty much all there is to it. The ListViewExtendedItem can be added to the Items collection of the ListView control, and it will display the marked properties on the right column. Items The first obvious annoying thing is that when you retrieve an extended item, you have to type cast it to ListViewExtenededItem in order to make use of it, and also, you have to unbox the Data property to access its members. This can all be resolved with a generic implementation of the class; however, for the time being, several things about generics escape me. ListViewExtenededItem Note that I have not tested if this works while using Virtual Items in the ListView, but it should work, I don't see what would break it... Also, if the data object properties have some obscure type that does not override the ToString() method, the results are not going to be very helpful. This can be solved by using attributes again to store the name of a certain field you want to show from that obscure object (maybe add an interface that would provide a method to retrieve a property name so that it can be changed at runtime). And, the last annoying thing, Visual Studio won't update the Name property of the ColumnHeaders used in the ListView, you have to set them up in code. Name ColumnHeader In the download (link above), you will find a demo app that makes use of this class. The usage is pretty straightforward. Use the New button to create a new object instance, modify its properties with the property grid, then click Add to save it in the list view. If you click on an item in the list view, you will see the data object's properties in the property grid. After you modify them, click on Update to save the changes and display it in the list view control. The ListViewColumnAttribute can be found in the file with the same name, and the same goes for the ListViewExtendedItem class. Thanks for reading this article. I hope it helps. Let me know if you like/don't like something about the extended item.
http://www.codeproject.com/Articles/36443/Extending-the-ListViewItem?fid=1540730&df=90&mpp=25&sort=Position&spc=Relaxed&tid=3041703
CC-MAIN-2016-44
refinedweb
1,255
67.18
swift-url-routingswift-url-routing A bidirectional URL router with more type safety and less fuss. This library is built with Parsing. Learn MoreLearn More This library was discussed in an episode of Point-Free, a video series exploring functional programming and the Swift programming and the Swift language, hosted by Brandon Williams and Stephen Celis. MotivationMotivation URL routing is a ubiquitous problem in both client-side and server-side applications: - Clients, such as iOS applications, need to route URLs for deep-linking, which amounts to picking apart a URL in order to figure out where to navigate the user in the app. - Servers, such as Vapor applications, also need to pick apart URL requests to figure out what page to serve, but also need to generate valid URLs for linking within the website. This library provides URL routing function for both client and server applications, and does so in a composable, type-safe manner. Getting StartedGetting Started To use the library you first begin with a domain modeling exercise. You model a route enum that represents each URL you want to recognize in your application, and each case of the enum holds the data you want to extract from the URL. For example, if we had screens in our Books application that represent showing all books, showing a particular book, and searching books, we can model this as an enum: enum AppRoute { case books case book(id: Int) case searchBooks(query: String, count: Int = 10) } Notice that we only encode the data we want to extract from the URL in these cases. There are no details of where this data lives in the URL, such as whether it comes from path parameters, query parameters or POST body data. Those details are determined by the router, which can be constructed with the tools shipped in this library. Its purpose is to transform an incoming URL into the AppRoute type. For example: import URLRouting let appRouter = OneOf { // GET /books Route(.case(AppRoute.books))) { Path { "books" } } // GET /books/:id Route(.case(AppRoute.book(id:))) { Path { "books"; Digits() } } // GET /books/search?query=:query&count=:count Route(.case(AppRoute.searchBooks(query:count:))) { Path { "books"; "search" } Query { Field("query") Field("count", default: 10) { Digits() } } } } This router describes at a high-level how to pick apart the path components, query parameters, and more from a URL in order to transform it into an AppRoute. Once this router is defined you can use it to implement deep-linking logic in your application. You can implement a single function that accepts a URL, use the router's match method to transform it into an AppRoute, and then switch on the route to handle each deep link destination: func handleDeepLink(url: URL) throws { switch try appRouter.match(url: url) { case .books: // navigate to books screen case let .book(id: id): // navigate to book with id case let .searchBooks(query: query, count: count): // navigate to search screen with query and count } } This kind of routing is incredibly useful in client side iOS applications, but it can also be used in server-side applications. Even better, it can automatically transform AppRoute values back into URL's which is handy for linking to various parts of your website: appRouter.path(for: .searchBooks(query: "Blob Bio")) // "/books/search?query=Blob%20Bio" Node.ul( books.map { book in .li( .a( .href(appRouter.path(for: .book(id: book.id))), book.title ) ) } ) <ul> <li><a href="/books/1">Blob Autobiography</a></li> <li><a href="/books/2">Blobbed around the world</a></li> <li><a href="/books/3">Blob's guide to success</a></li> </ul> For Vapor bindings to URL Routing, see the Vapor Routing package. DocumentationDocumentation The documentation for releases and main are available here: LicenseLicense This library is released under the MIT license. See LICENSE for details.
https://swiftpackageregistry.com/pointfreeco/swift-url-routing
CC-MAIN-2022-40
refinedweb
634
59.64
3043/how-can-we-use-selenium-with-python How do I set up Selenium with Python? I want to write scripts in Python as well as execute them. Please help me out. First Install Python based on the Operating System you are using. Then Install it with the following command: pip install -U selenium then use this in your code: from selenium import webdriver You can also use many of the following as required from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import Select from selenium.common.exceptions import NoSuchElementException The below code containing Keys.ENTER might just ...READ MORE There are many resources for selenium - ...READ MORE Recently Selenium launched v3 and if you ...READ MORE Yup its possible. Instead of a new ...READ MORE The only reason for a timeout error ...READ MORE "disable-infobars" flag has been deprecated, but you ...READ MORE The title of the page will not ...READ MORE Try resetting value of y. Put y="" ...READ MORE I have found a solution to this ...READ MORE Selenium IDE works with all Firefox versions, ...READ MORE OR
https://www.edureka.co/community/3043/how-can-we-use-selenium-with-python
CC-MAIN-2019-22
refinedweb
186
62.85
Peivend Ghayori485 Points I don't get this. I tried absolutely everything. I have tried everything on this one... func fizzBuzz(n: Int) -> String { // Enter your code between the two comment markers for i in 1...100{ if (n%3 == 0) && (n%5 == 0) { return("FizzBuzz") } else if (n%3 == 0) { return("Fizz") } else if (n%5 == 0) { return ("Buzz") } else { return(n) } } // End code return "\(n)" } 1 Answer KRIS NIKOLAISENPro Student 51,735 Points You are close. Some hints from the instructions: 1) Do not worry about the default case (so no else) 2) The challenge also does not need you to loop over a range of values
https://teamtreehouse.com/community/i-dont-get-this-i-tried-absolutely-everything
CC-MAIN-2019-51
refinedweb
109
80.31
Navigation trail: TurbineProjectPages - JakartaTurbine2 - JakartaTurbine2Faq Turbine 2.x Frequently Asked Questions This document serves as a catch-all for questions and problems commonly encountered in Turbine 2.x. Please be sure to read this document fully before asking repetitive questions on the turbine-user mailing list. It will save the entire Turbine community, including yourself, a little time. You are encouraged to improve this document as you see fit! Q: Why do I get a { { { "java.lang.VerifyError" } } } when trying to set up the TDK 2.x? A: This is because of a specific version of an XML related JAR that is incompatible with TDK 2.1 located somewhere in your classpath. The fixes that are known to work usually involve trying one (or more) of the following steps: See if there is a <code>XercesImpl.jar</code> in your <code>ANT_HOME\lib</code> directory, if so, try removing it (maybe move it into another directory for now) and running 'ant init' again. - Try searching for anything jaxp-related jar files from your system classpath (which may even include your JDK installation directory). People have had success after 'moving' out of the system classpath the following files: - xercesImpl.jar - sax.jar - xsltc.jar - xalan.jar - dom.jar Note: by 'moving' these files to allow the TDK to run, any other project that you are using Ant for _may_ not work, so it's best to keep the moved JAR files into a directory for safe keeping. Q: I would like to extend { { { TurbineUser } } } to include...how do I go about that? A: *There is a detailed how-to document available to get you started. It is possible to add additional attributes (fields) to the extended object which will be persisted to the database, as well as add new foreign key references with the TURBINE_USER table in your <code>project-schema.xml</code> file (cf. Torque). The turbine-user list archives contains a large library of tips, suggestions and solutions concerning extending { { { TurbineUser } } }. This is probably the most frequently discussed topic in the list, so please be sure to educate yourself thoroughly. Q: How do I capture uploaded files? A: Read the documentation for the UploadService. Q: What is with the 4 versions of Turbine currently available? Why? And Which do I use? A: See the Turbine project page for the current version information. Q: Despite what you people say, Intake does not work! How do I get it to do what you claim it does? A: Please refer to the page dedicated to common issues and stumbing points with Intake: /CommonIntakeProblems Q: I created my own version of the login action. Why doesn't Turbine use it? A: Turbine handles the login and logout actions differently than normal actions. The settings <code>action.login</code> and <code>action.logout</code> in { { { TurbineResources.properties } } } are used to tell Turbine which actions are login and logout. Q: What are Fulcrum and Torque? A: In Turbine 2.1, the code for all of the services (such as Intake, Velocity, Database, Logging, etc) were tightly coupled to Turbine. In order to allow this code to be reused in other projects Torque and Fulcrum were created. They are both still considered to be under the Turbine umbrella. Torque is the database access layer. Turbine 2.2 uses this new decoupled version. More information on Torque can be found on the Torque project page. Fulcrum contains all of the decoupled service code. Turbine 2.2 still uses the coupled services although you can switch to the Fulcrum version for some services. Fulcrum does not have a released version. It is suggested that you do not use the Fulcrum services at this time. More information can be found on the Fulcrum project page. Q: Why do I have to use a comma with the $link tool? A: Turbine's method of parsing URL parameters relies on everything being seperated by the "/" character. If you will notice the URL generated by $link.setPage("login.vm"), reads something like <code></code>. The output of $link.setPage("blah/login.vm") looks something like <code></code>. When turbine parses the second URL, it thinks that you have requested a template named "blah". The way to be able to request blah/login.vm is to replace the "/" character with a comma. This is the preferred way of using the $link pull tool. This does not mean that you MUST do it this way. Some people have complained that it is confusing to the people writing the view. If you really don't like the comma, you can use a different version of the pull tool. There is another version supplied in the org.apache.turbine.util.template package called TemplateLinkWithSlash. This version of the pull tool enables you to use the slash anyway. When it generates the URL for you, it will replace the "/" character with the comma. Of course, this is inefficient since there is a search and replace going on everytime you generate a URL. You are also free to create your own implementation of the $link pull tool. Simply override the setPage() method to replace whatever character you want to use as the seperator with the commas. If you do decide to use a different version of the pull tool than the one configured by default, you will need to change the <code>tool.request.link</code> setting in your <code>TurbineResources.properties</code> file to the correct class name of the version that you want to use. Q: Turbine only initializes during the first request to the application. How do I make it initialize on startup? A: You can use the <load-on-startup> tag in your web.xml to accomplish this. Example: {{{ <servlet> <servlet-name>turbine</servlet-name> <servlet-class>org.apache.turbine.Turbine</servlet-class> <init-param> <param-name>properties</param-name> <param-value>/WEB-INF/conf/TurbineResources.properties</param-value> </init-param> <load-on-startup/> </servlet> }}} Q: I keep getting the following error message: log4j:WARN No appenders could be found for logger... A: If you are using Tomcat and you do not get this message everytime you start the server, it is probably because Tomcat is attempting to reload the sessions that were serialized during the last shutdown. If this is the source of your problem, restarting Tomcat will ALWAYS fix the problem. It will seem to come back on every other restart. To fix this, you can modify your startup script/batch file to delete *.ser from Tomcat's work directory. Q: Can I use my IDE's debugger with Turbine? A: Yes. The actual procedure of how to accomplish this will vary depending on the IDE and your servlet container though. Check the JakartaTurbineIDEDebugging page to see if someone has posted instruction for your environment. If not, please add the appropriate text to that page after you figure it out Q: How can I test my Turbine actions? A: Use Cactus! Using Cactus to perform in container testing is much simpler then trying to use Junit directly, manually start up Turbine, and then try and obtain access to all the resources that you need. Write your Cactus test cases as if you were just writing code that was running inside of Turbine. The only extra step is to create your own { { { RunData } } } and Context objects. For example, I have an action called { { { DaughterboardDisplay } } } . To test it I would write: {{{ { { { <snip> // Junit imports import junit.awtui.TestRunner; import junit.framework.Test; import junit.framework.TestSuite; // Cactus imports import org.apache.cactus.ServletTestCase; import org.apache.cactus.WebRequest; import org.apache.cactus.WebResponse; // Turbine imports import org.apache.turbine.util.RunData; import org.apache.turbine.services.TurbineServices; import org.apache.turbine.services.rundata.RunDataService; import org.apache.velocity.context.Context; import org.apache.turbine.services.velocity.TurbineVelocity; <snip> public class TestDaughterboardDisplay extends ServletTestCase { private DaughterboardDisplay daughterboardDisplay; private RunData data= null; private Context context = null; <snip> - public void setUp() - throws Exception { super.setUp(); // Create the context objects to be used during testing. data = ((RunDataService) TurbineServices.getInstance().getService(RunDataService.SERVICE_NAME)).getRunData(request, response, config); context = TurbineVelocity.getContext(data); // Create the Action daughterboardDisplay = (DaughterboardDisplay) ActionLoader.getInstance().getInstance("DaughterboardDisplay"); - } /** - Tests if an daughterboard displays nicely. - / - public void testSimpleDisplayofDaughterboard () throws Exception { // Add parameters to the RunData that will be used to display a daughterboard data.getParameters ().setString ( "daughterboard_id", "636" ); // Call the do* method daughterboardDisplay.doPerform(data, context); // Evaulate the results Daughterboard db = (Daughterboard)context.get("daughterboard"); assertEquals ( "Loaded up the daughterboard into the context.", new Integer("636"), db.getDaughterboardId()); <snip> } } } }}} Note: You must have a web.xml that automatically starts up Turbine when the container starts up. See the FAQ section " Turbine only initializes during the first request to the application. How do I make it initialize on startup? ". Note: I have a TurbineTestCase that I inherit from. In the setUp() method is where all the rundata stuff goes. Then I have it for all my test cases. Any other global stuff, like resetting the database using DbUnit goes there as well. -EricPugh Q: How can I start Turbine in Cactus? A: If you are testing services, or don't want to use a custom web.xml to startup Turbine on load, or are testing different config files, you can manually start up Turbine like this: {{{ { { { - /** - This setup will be running server side. We startup Turbine and - get our test port from the properties. This gets run before - each testXXX test. - / - throws Exception - super.setUp(); config.setInitParameter("properties", - "/WEB-INF/conf/TurbineCacheTest.properties"); - After each testXXX test runs, shut down the Turbine servlet. - / - throws Exception - turbine.destroy(); super.tearDown(); Q: Why do I see jsessionid=xxxx on my URL when I access my application? A: This should only be happening when you first access your application after a restart. If you are on a unix platform you can add a command to the startup.sh file to make lynx access your application. The following example code was submitted by Jeff Painter: {{{ <tdkroot>/bin/catalina.sh - elif [ "$1" = "start" ] ; then - ..snipped... # # pause 15 seconds... wait for turbine to come up # /bin/sleep 15 # # open first login page.. dump to /dev/null # lynx -source > /dev/null # elif[ "$1" = "stop" ] ; then ...end snip... }}} Q: How can I tune Turbine to perform better A: There are quite a few things you can do to help Turbine run better: - If you are running Tomcat as a windows NT Service, check out this link: Q: How can I build Turbine 2.3 with maven-1.0-beta10 A: Henning has posted a solution to this problem: - Follow this link:
https://wiki.apache.org/jakarta/JakartaTurbine2Faq?highlight=WebResponse
CC-MAIN-2017-17
refinedweb
1,764
60.11
If you have an e-commerce application, a payment gateway lets you process payments on your website on the fly. With all the modern payment gateway solutions available these days, there are many ways you can integrate payments and charge your users for your product or services. In this tutorial, we are going to build a landing page that lets the end user purchase products from a web application. The page looks like this: Live Demo: Integrate Payments Source Code: Integrate Payments Source Code Some of the popular payment gateways that are available are: Today, we are going to learn how to integrate Razorpay with a Next.js (React) application and understand how the flow actually works. Tech Stack For our Stack, we are going to use the following technologies: - Next.js – A framework for React that gives access to serverless functions and React architecture. - TailwindCSS – A utility-based CSS framework for easy styling - Razorpay – A payment gateway system that lets users access payments. - Vercel – For hosting our Next.js application (if not already hosted) - Tailwind Master Kit – For easily accessible Tailwind Components Project Setup If you already have a project, then you can directly skip to the integration part of the article. If not, let’s get started by creating a Git repository and hosting our project on Vercel. How to Set Up a Next.js Repository and Website First, head over to Vercel and create a hobby account for yourself. (If you’re going to use it for a commercial project, make sure you buy their plan. Hobby accounts are just for testing and creating playgrounds.) Once the account is created, click on New Project Then, select Next.js from the available options and create a Git Repository on the platform itself. Your site will be deployed within seconds and you will get a URL for the live website. How to Set Up TailwindCSS Now since the website is setup, you can directly go to GitHub, clone the repository, to run it in your local environment. For that, follow these simple steps: - Go to GitHub and find your newly created repository - Click on the codesection and copy the repository URL. - Open your terminal on the desktop and write git clone <repo_name>. This will clone the repository in your local environment so that you can start working. - Once the repository is cloned/copied in your local environment, open the project in your favourite code editor (VSCode is the best in my opinion). - In the terminal, open the location of the application and write npm install. This will install all the related node modules. - You can start the local development server by writing npm run dev. Now the project is up and running in your local environment. To access your website locally, open localhost:3000 in your browser and you will be able to see the boilerplate website already there for you. Setting up tailwind is very simple. Their documentation makes it even simpler. Check out their docs for reference and more on TailwindCSS as a framework. To setup Tailwind on your local environment, follow the below steps: npm install -D tailwindcss postcss autoprefixer– This will install TailwindCSS along with other important dependencies for compiling and running your Tailwind code. npx tailwindcss init -p– This will initialize a tailwind.config.jsfile that is just an object which can be manipulated according to the user’s needs. - In the tailwind.config.jsfile, paste the below code which basically tells Tailwind to compile the code present in the /pagesand /componentsdirectories. module.exports = { content: [ "./pages/**/*.{js,ts,jsx,tsx}", "./components/**/*.{js,ts,jsx,tsx}", ], theme: { extend: {}, }, plugins: [], } - Open the globals.cssfile present in the /stylesdirectory and paste the following code. These code snippets import all the Tailwind related setup code: @tailwind base; @tailwind components; @tailwind utilities; - Restart your website by quitting the terminal and writing npm run devon the terminal. Now you’re ready to harness the power of TailwindCSS. Now that Tailwind and our website are setup, let’s jump right into developing the page and integrating payments. Landing Page Development The landing page that we are going to use is directly taken from the Tailwind Master Kit that lets you access components built with TailwindCSS. Let’s break down the code and understand it better. Navbar.js import React from "react"; export const Navbar = () => { return ( <div className="flex flex-row items-center justify-between px-20 py-10"> <div className="flex flex-row items-center"> <h1 className="font-bold italic text-2xl text-white mr-10">Payments</h1> <ul className="flex flex-row space-x-10"> <li> <a href="#" className="text-gray-400 text-sm tracking-wide font-light" > Pricing </a> </li> <li> <a href="#" className="text-gray-400 text-sm tracking-wide font-light" > Product </a> </li> <li> <a href="#" className="text-gray-400 text-sm tracking-wide font-light" > Team </a> </li> <li> <a href="#" className="text-gray-400 text-sm tracking-wide font-light" > Sales </a> </li> </ul> </div> <div className="flex flex-row space-x-10 items-center"> <a href="#" className="text-gray-400 text-sm tracking-wide font-light"> Sales </a> <button className="bg-[#272A30] text-gray-300 px-8 text-sm py-2 rounded-md shadow-xl drop-shadow-2xl"> Sign in </button> </div> </div> ); }; Building the Navbar is simple. It’s a Flexbox container with links and unordered list items aligned in a row. The button, however, is interesting. It used the new TailwindCSS drop shadow class which drops a background shadow. (We can also use colored shadows in TailwindCSS 3.0+ versions – pretty cool.) Hero.js const Hero = ({ onClick }) => { return ( <div className="relative z-10 flex flex-col md:flex-row mt-10 items-center max-w-6xl justify-evenly mx-auto"> <div className="md:w-1/3 mb-20 md:mb-0 mx-10"> <h1 className=" text-white font-bold text-5xl mb-10"> Integrate{" "} <span className="bg-clip-text text-transparent bg-gradient-to-r from-pink-500 to-violet-500"> payments </span>{" "} in less than 10 minutes. </h1> <p className="text-sm text-gray-300 font-light tracking-wide w-[300px] mb-10"> Learn how to integrate a Payment Gateway with your Next.js and React application. </p> <div className="bg-gradient-to-r from-[#3e4044] to-[#1D2328] p-[1px] rounded-md mb-4"> <button onClick={onClick} Purchase Now! </button> </div> <div className="bg-gradient-to-r from-[#3e4044] to-[#1D2328] p-[1px] rounded-md"> <button className="bg-gradient-to-r from-[#1D2328] to-[#1D2328] rounded-md w-full py-4 shadow-sm drop-shadow-sm text-gray-400 font-light"> Read Blog </button> </div> </div> {/* <div className="w-2/3 bg-white flex-shrink-0 relative"> */} <img className="w-full md:w-[36rem] h-full" alt="stripe payment from undraw" src="/payments.svg" /> {/* </div> */} </div> ); }; The hero section contains our Purchase Now button which will initialise the payments for us (we’ll look at the implementation in the next section). The layout contains two sections: the Left section contains all the text and the Right Section contains a large image (taken from Undraw, a free and open source illustrations website). The onClick action on the button is important since it is responsible for triggering the action that will initialise the payments. The onClick is nothing but a callback that calls the function which is passed down as a prop to the component. That’s pretty much it for the UI part. Let’s jump into the payments section and understand how to setup a developer account on Razorpay and use their SDK to make payments on our website. How to Set Up a Razorpay Account and Retrieve API Keys For integrating payments (that is, receiving payments on our website), we need to have two things: - A Razorpay account - A set of API Keys that lets us access their services. Let’s create an account and retrieve the API keys. - Head over to Razorpay and sign up for an account - After signing up you can access the Dashboard where you will find all the necessary details that are required for integrating payments. - For now, we will be in Test mode so that we can test our payments before we actually go live. - In the left panel, scroll down to Settings– There you will find the API keys section along with the configurations you can make to your payments UI. - Since you will be doing it for the first time, click on Generate API Keysand the download will automatically start. The downloaded file contains Razorpay API Keyand Razorpay API Secret. Now you’re all set with the API keys and setting up the platform. Let’s jump directly into how to actually trigger the Razorpay API and make payments. How to Integrate Payments with Razorpay For our payments to be integrated, we need a button click that actually initializes the Razorpay Purchase Now the calls a function onClick that is nothing but a callback. Let’s see the actual implementation and understand the code behind it. To initialise a payment, we need to add Razorpay’s checkout script into our code. In React, we can simply do it using the document.body.appendChild(script) code. initializeRazorpay() const initializeRazorpay = () => { return new Promise((resolve) => { const script = document.createElement("script"); script.src = ""; script.onload = () => { resolve(true); }; script.onerror = () => { resolve(false); }; document.body.appendChild(script); }); }; Now, we are using a promise to achieve this task. We do this because later on, we are going to use the initializeRazorpay() in such a way that every time Purchase Now is clicked, the payments are initialised. We simply have to await this function to create and append a script into the DOM. Let’s look at the main function which is responsible for creating and initializing payments on the page. makePayment() function const makePayment = async () => { const res = await initializeRazorpay(); if (!res) { alert("Razorpay SDK Failed to load"); return; } // Make API call to the serverless API const data = await fetch("/api/razorpay", { method: "POST" }).then((t) => t.json() ); console.log(data); var options = { key: process.env.RAZORPAY_KEY, // Enter the Key ID generated from the Dashboard name: "Manu Arora Pvt Ltd", currency: data.currency, amount: data.amount, order_id: data.id, description: "Thankyou for your test donation", image: "", handler: function (response) { // Validate payment at server - using webhooks is a better idea. alert(response.razorpay_payment_id); alert(response.razorpay_order_id); alert(response.razorpay_signature); }, prefill: { name: "Manu Arora", email: "[email protected]", contact: "9999999999", }, }; const paymentObject = new window.Razorpay(options); paymentObject.open(); }; The makePayment() method is responsible for initialising and opening the Razorpay popup. The makePayment() function does the following operations: Initializes the Razorpay Checkout script and appends it to the body. This was handled by the initializeRazorpaymethod as we saw earlier. Makes a call to the /api/razorpay.jsserverless function. (This we will talk about in a minute). Creates an Object which has 4 important keys: currency– The currency in which we want the transaction to happen amount– The amount in which the transaction has to happen. Note that it has to be the smallest denomination. Example if you’re from the USA, then the amount will be in cents. order_id– This will be generated from the serverless API which we are going to talk about in a minute. handler– When the payments are successful, this callback function is called. Finally, a paymentObjectis created with the optionspassed down as the parameters to the window.Razorpaymethod. This is available to us because of the We looked at the above makePayment() method and saw a line of code which is: const data = await fetch("/api/razorpay", { method: "POST" }).then((t) => t.json() ); But what does it mean? Next.js allows us to access serverless functions with the help of apis that are available to us in the api folder within Next.js. The serverless APIs are nothing but Lambda Functions that act as a back-end for our JAMStack applications. Here, we can write our back-end related code easily without having to create a separate back-end. Here, we need serverless because the order_id that we saw in the makePayments() code is unique and has to be generated at the backend. Not only this but the amount and currency also comes from the backend. This is to ensure that no one can manipulate the amount and the currency and the portal is secure for payments. Let’s have a look at the serverless API code and understand it better. /api/razorpay.js const Razorpay = require("razorpay"); const shortid = require("shortid"); export default async function handler(req, res) { if (req.method === "POST") { // Initialize razorpay object const razorpay = new Razorpay({ key_id: process.env.RAZORPAY_KEY, key_secret: process.env.RAZORPAY_SECRET, }); // Create an order -> generate the OrderID -> Send it to the Front-end const payment_capture = 1; const amount = 499; const currency = "INR"; const options = { amount: (amount * 100).toString(), currency, receipt: shortid.generate(), payment_capture, }; try { const response = await razorpay.orders.create(options); res.status(200).json({ id: response.id, currency: response.currency, amount: response.amount, }); } catch (err) { console.log(err); res.status(400).json(err); } } else { // Handle any other HTTP method } } This of razorpay.js as your route which leads to /api/razorpay. Every file you create in the API folder becomes a serverless route. Just like we create APIs in the back-end, we create files here in the APIs folder which becomes a route for us. For example: let’s say you create a file in the /api folder named posts.js. So the route will become /api/posts which can return anything you want depending upon the use case. For our case, we need to make a POST request to our back-end that will create an order_id for us along with amount and currency that can be returned to the front-end for making payments. Let’s understand the flow for this API. - First we need to install the razorpaymodule along with shortidfor generating short unique ids. To do that, head over to your terminal and write npm install razorpayand npm install shortid - Now, to access a POSTrequest, we check the request object and access the method by using the below snippet: export default async function handler(req, res) { if (req.method === "POST") { // Initialize razorpay object const razorpay = new Razorpay({ key_id: process.env.RAZORPAY_KEY, key_secret: process.env.RAZORPAY_SECRET, }); // rest of the code... } 3. Here, request.method checks for the method. If the method is POST we go ahead and initialize the Razorpay object. 4. The Razorpay object takes in 2 parameters: key_id and key_secret. Remember when we downloaded the keys from Razorpay dashboard? Let’s put them to use. 5. Open/create the .env file in your folder structure’s root and paste the following code: RAZORPAY_KEY=YOUR_KEY_HERE RAZORPAY_SECRET=YOUR_SECRET_HERE Here, you can plug in your API key and secret and you will be good to go. Note: Make sure you restart your development server – otherwise the changes won’t be reflected. Once the razorpay object is setup, it takes in three important options: receipt, amount and currency. const payment_capture = 1; const amount = 499; const currency = "INR"; const options = { amount: (amount * 100).toString(), currency, receipt: shortid.generate(), }; Note that amount and currency are being declared in our back-end so that there’s no way for attackers to tamper with it. Once the options are setup, we can create orders with Razorpay’s razorpay.orders.create(options) method. try { const response = await razorpay.orders.create(options); res.status(200).json({ id: response.id, currency: response.currency, amount: response.amount, }); } catch (err) { console.log(err); res.status(400).json(err); } Here, we simply await the create() method provided by Razorpay. When the create method is successful, we get an id which is nothing but the order_id that we need to supply to the front-end in order to generate unique payments. Once everything is successful, we send a 200 response with id, currency and amount fields. This is all what is required by the front-end to process payments. How to Make Payments with Razorpay Once everything is integrated and is in place, we can start using Razorpay’s charging methods – there are various options available. With this, you can start charging for your services and products by simply accepting payments on your website. The whole popup is customisable and can be edited directly from Razorpay’s dashboard portal. Since you’re in Test mode, to start using their services in production, you need to complete their Identification process by submitting your proof documents and simply toggle between test mode and live mode. That’s all you need to do from the coding side to make the transition from test to live. Environment Variables To make sure that our changes are reflected in our live production website, we need to add the same environment variables that we added in the code on the Vercel platform as well. For that: - Head over to Vercel and open your project - Click on settings - Click on environment variables. - You will get 2 input fields – Name and Value. - First, enter RAZORPAY_KEYand add the API key - Second, enter RAZORPAY_SECRETand add the secret value - Redeploy the website and you will be able to make payments in the live environment as well. Live Demo and Source Code The entire source code for the application can be found here. The live demo of the website is here. Conclusion Integrating payments is easy, thanks to Razorpay’s excellent documentation that is easy to understand. I enjoyed coding this website and integrating payments. You can also see a snippet of the code at my website: Manu Arora’s Code Snippets If you liked this blog, try implementing it in your own website so you can reach out to your end-users and make payments an easy task for yourself. If you’d like to give any feedback, reach out to me at my Twitter handle or visit my Website Also thanks to Tailwind Master Kit for the beautiful Landing Page UI. Happy Coding. 🙂
https://envo.app/how-to-set-up-a-payment-gateway-in-next-js-and-react-with-razorpay-and-tailwindcss/
CC-MAIN-2022-33
refinedweb
2,994
55.44
- Calendar This article is created to cover some programs in Python, to display calendar of a month or all months (whole year). Here are the list of programs: - Display Calendar of a Month using year value and month number entered by user - Display All Months (Whole Year's) Calendar Display Calendar of a Month To display calendar in Python, you have to ask from user to enter the Year and a month. The month must be entered as its number. For example, to display calendar of January, 2021, then enter 2021 as year input and 1 as month input like shown in the program and its output given below: import calendar print("Enter Year: ") yy = input() print("\nEnter Month Number (1-12): ") mm = input() y = int(yy) m = int(mm) print("\n", calendar.month(y, m)) Here is the initial output produced by this Python program: Now supply the input say 2021 as year, press ENTER key and then type 1 as month, again ENTER key to display the calendar of January, 2021 as shown in the snapshot given below: Note - The calendar module in Python allows us to work with calendar. That is, calendar.month() prints calendar of a month. For example, calendar.month(2021, 4) prints calendar of fourth month of year 2021. Modified Version of Previous Program This is the modified version of previous program. The end= used in this program to skip inserting an automatic newline using print(). This program also uses try-except to handle with invalid input. import calendar print("Enter Year: ", end="") try: yy = int(input()) print("\nEnter Month Number (1-12): ", end="") try: mm = int(input()) if mm>=1 and mm<=12: print("\n", calendar.month(yy, mm)) else: print("\nInvalid Month Number!") except ValueError: print("\nInvalid Input!") except ValueError: print("\nInvalid Input!") Here is its sample run with user input, 2021 as year and 2 as month number: In above program, the following code: if mm>=1 and mm<=12: is used to check whether the value of month number entered by user is a valid input or not. That is, if user enters month number from 1 to 12, then it is a valid month number, otherwise will treated as an invalid input. Display All Months (Whole Year) Calendar Now this program receives an input as year, and prints calendar of all months of given year. For example, if user enters 2022 as year, then whole year's calendar gets printed like shown in the program and its output given below: import calendar print("Enter Year: ", end="") try: yy = int(input()) print() mm = 1 while mm<=12: print(calendar.month(yy, mm)) mm = mm+1 except ValueError: print("\nInvalid Input!") Here is its sample run with year input 2021: The snapshot given above is upper part of complete output. Here is the lower part of complete output produced by above program with exactly same user input, that is 2021: The dry run of following block of code (from above program): mm = 1 while mm<=12: print(calendar.month(yy, mm)) mm = mm+1 goes like: - Initial values, mm=1, yy=2021 (entered by user as in sample run) - The condition (of while loop) mm<=12 or 1<=12 evaluates to be true, therefore program flow goes inside the loop - Inside the loop, using calendar.month(2021, 1), the calendar of first month of 2021 gets printed - Now the value of m gets incremented by 1. So m=2 - Now program flow again evaluates the condition of while loop. This time also, the condition mm<=12 or 2<=12 evaluates to be true, therefore program flow again goes inside the loop and prints the calendar of second month of 2021 - This process continues until the condition of while loop evaluates to be false, or until the value of i becomes greater than 12 « Previous Program Next Program »
https://codescracker.com/python/program/python-program-display-calendar.htm
CC-MAIN-2022-21
refinedweb
646
57.61
#1 Posted 17 June 2006 - 11:56 AM Got an annoying problem. I've been using Visual Studio 2002 and now switched to VS 2005. Everything compiles fine. However, when i try to link my program, it complains, that the following symbols are undefined: nvDXTlibMT.lib(NormalMapGen.obj) : error LNK2019: Undefined extern Symbol ""public: void __thiscall std::_String_base::_Xran(void)const " (?_Xran@_String_base@std@@QBEXXZ)" in Funktion ")". nvDXTlibMT.lib(NormalMapGen.obj) : error LNK2019: Undefined extern Symbol ""public: void __thiscall std::_String_base::_Xlen(void)const " (?_Xlen@_String_base@std@@QBEXXZ)" in Funktion ""protected: bool __thiscall std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >::_Grow(unsigned int,bool)" (?_Grow@?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@IAE_NI_N@Z)". I use the nvDXTlib to load DDS files. Now, suddenly, after switching to VS 2005 i get these two error-messages. It seems, as if the programmers of the nvDXTlib used an implementation of the STL, that is not compatible to the one shipping with VS 2005. Did anyone else encounter this problem? Any ideas how to work around it? Note, that i downloaded the most recent nvDXTlib, but there seems to be no update, that fixes this problem. Jan. #2 Posted 17 June 2006 - 09:32 PM #3 Posted 18 June 2006 - 12:56 PM #4 Posted 18 June 2006 - 04:28 PM #5 Posted 18 June 2006 - 07:18 PM Jan said: If nobody has a good alternative to nvdxt, the best route is to just keep bugging nVidia. #6 Posted 18 June 2006 - 10:14 PM I mean, if they would release their source-code, everyone could just compile it himself. I was really happy to get free code, that would deal with DDS file loading, since i don't want to do that myself. But in the end all this is counter-productive. I already send them an "error-report" (try sending them the error-message if you are not allowed to send them more than 900 characters!). I don't expect an answer, though. Jan. #7 Posted 19 June 2006 - 12:51 AM I know, it's a dirty hack, but... #8 Posted 19 June 2006 - 02:03 AM #9 Posted 19 June 2006 - 10:01 PM Hope I'm not mudding the issue. #10 Posted 19 June 2006 - 11:57 PM I think it's a piece of ****. The code simply does not load DDS files. At least not files created with nVidias Photoshop plugin. I put some debug output into the loading code and the "load" function, that is supposed to read the DDS file fails at recognizing the DDS file. First it checks whether "DDS" is written in the file, which seems not to be the case (at least not where it tries to read it). Of course all subsequent checks (DXT FOURCC Code, ...) also fail. I really need code, that reads a DDS file and is able to decode it, because i need the raw, uncompressed data. Exactly what the nvDXTlib does. I rewrote my whole texture-loading code, to at least be able to directly upload the data, because the Cg lib is not able to decode the loaded data. But then it simply didn't read the file at all, GRRRR !!! Cunning as i am, i now SOLVED the problem :-D Yeah, i know, i am great. How i did it? I removed the DDS loading code and now load TGA files, again. Great, isn't it. Maybe i will reinstall VS 2002 only because of this. Or i switch to D3D. So many options! So, in the end i ask you: Did nVidia really do us a favor in releasing a library, that first allowed us to use the DDS format, but then dropping support for it, so that we are now hanging in the air and need to somehow do all this on our own? I think not. I'm mighty disappointed. Jan. #11 Posted 20 June 2006 - 01:25 AM If you wanted to develop your own DXT reading library, you could maybe start with the Cg stuff and then add decoding logic based on the information in this document, which describes the precise format of the compressed data (in the appendix). #12 Posted 09 August 2006 - 05:08 PM // // Crap to make the nvDXTLib link with the new VS8. // namespace std { // // From old VS7 xstring. // class _CRTIMP2 _String_base { public: void _Xlen() const; void _Xran() const; }; // // From new VS8 xstring. // // //(); // }; // // // From old VS7 string.cpp // _CRTIMP2 void _String_base::_Xlen() const { // // _THROW(length_error, "string too long"); // throw("string too long"); } _CRTIMP2 void _String_base::_Xran(void) const { // // _THROW(out_of_range, "invalid string position"); // throw("invalid string position"); } } #13 Posted 09 August 2006 - 05:36 PM There is an API to encode whole images (even though the webpage doesn't mention it). It doesn't have all the features of the nVidia encoder and doesn't deal with resampling image, but it does a good job of encoding and is about as fast as the nVidia encoder. As for decoding... well apart from it being trivial, I can see no good reason to need a software decoder. The whole point of s3tc is to let the HW do it. As for a dds loader... well the loader shouldn't decode the DXT blocks... just leave them in-tact. My own dds loader does happen to flip the image rows (cuz we're using OpenGL not DX) but that can be done without decoding the blocks... just massaging them a little. tga's are great. Maybe consider having your pipeline use Squish or nvDXTLib to encode and generate the mipmap levels automatically. #14 Posted 09 August 2006 - 07:27 PM Great, thanks! Of course S3TC is about decoding that stuff in hardware. But sometimes you need a software-decoder, because you want to make a tool that processes image data. DDS is mainly about getting the textures into the hardware efficiently, but sometimes you need to work on the raw image-data. Jan. #15 Posted 10 August 2006 - 06:43 PM Hope the code worked for you. I suppose the discussion of a software dxt decoder is a bit of a philosophical one... but I think reasonably important. I suppose I agree that there are rare cases where a decoder could be justified... like for a dds viewer where you don't want the image to look like what the HW would give you (ex: older nVidia cards use 16bit interpolation... giving that ugly banding). Another reasonable use might be validating or computing error metrics for various compression schemes (HW or SW). However, since s3tc is lossy... and badly lossy at that (much worse than, say, a jpg, at the same file-size), I think it's a mistake to process the dxt blocks except for simple lossless manipulations (row flips, mirror images, etc). Personally I don't even like doing edits to jpg's, and the generation loss with a jpg is much lower than with a s3tc image. Processing dds files in general is reasonable though... there are many other supported pixel formats... many of which are lossless in an editable way. Anyways, these are clearly just my opinions... obviously Photoshop with the NV dxt plugin allows editing dds's with dxt data in them... so I suppose there are smart people out there that disagree with me ;) ah well. #16 Posted 12 September 2006 - 01:57 PM has anyone gotten tri's snippet code from above to run? If I put it in my cpp file, I get class redifinition errors (right where _String_base gets defined). Can anyone help? Cheers, Sebastian #17 Posted 13 September 2006 - 06:49 PM Create a new module and define LEAN_AND_MEAN before including windows.h. Then use the code above. 1 user(s) are reading this topic 0 members, 1 guests, 0 anonymous users
http://devmaster.net/forums/topic/5046-nvdxtlib-on-visual-studio-2005/
crawl-003
refinedweb
1,312
73.88
This is another in a series of posts looking at how to add multi-tenancy to ASP.NET Core applications using SaasKit. SaasKit is an open source project, created by Ben Foster, to make adding multi-tenancy to your application easier. In the last two posts I looked at how you can load your tenants from the database, and cache the TenantContext<AppTenant> between requests. Once you have a tenant context being correctly resolved as part of your middleware pipeline, you can start to add additional tenant-specific features on top of this. Theming and static files One very common feature in multi-tenant applications is the ability to add theming, so that different tenants can have a custom look-and feel, while keeping the same overall functionality. Ben described a way to do this on his blog using custom Views per tenant, and a custom IViewLocationExpander for resolving them at run time. This approach works well for what it is trying to achieve - a tenant can have a highly customised view of the same underlying functionality by customising the view templates per tenant. Similarly, the custom _layout.cshtml files reference different css files located at, for example /themes/THEME_NAME/assets, so the look of the site can be customised per tenant. However this is relatively complicated if all you want to do is, for example, serve a different file for each tenant - it requires you to create a custom theme and view for each tenant. Also, in this approach there is no isolation between the different themes, the templates just reference different files. It is perfectly possible to reference the files of one theme from another, by just including the appropriate path. This approach assumes there is no harm with a tenant using theme A accessing files from theme B. This is a safe bet when just used for theming, but what if we were serving some semi-sensitive file, say a site logo. It may be that we don't want Tenant A to be able to view the logo of Tenant B, without explicitly being within the Tenant B context. To demonstrate the problem, I created a simple MVC multi-tenant application using the default template and added SaasKit. I added my AppTenant model shown below, and configured the tenant to be loaded by hostname from configuration for simplicity. You can find the full code on GitHub. public class AppTenant { public string Name { get; set; } public string Hostname { get; set; } public string Folder { get; set; } } Note that the AppTenant class has a Folder property. This will be the name of the subfolder in which tenant specific assets live. Static files are served by default from the wwwroot folder; we will store our tenant specific files in a sub folder of this as indicated by the Folder property. For example. for Tenant 1, we store our files in /wwwroot/tenants/tenant1: Inside of each of the tenant-specific folders I have created an images/banner.svg file which will we show on the homepage for each tenant. The key thing to keep in mind is we don't want tenants to be able to access the banner of another tenant. First attempt - direct serving of static files The easiest way to show the tenant specific banner on the homepage is to just update the image path to include AppTenant.Folder. To do this we first inject the current AppTenant into our View as described in a previous post, and use the property directly in the image path: @inject AppTenant Tenant; @{ ViewData["Title"] = "Home Page"; } <div id="myCarousel" class="carousel slide"> <div class="carousel-inner" role="listbox"> <div class="item active"> <img src="~/tenant/@Tenant.Folder/images/banner.svg" alt="ASP.NET" class="img-responsive" /> </div> </div> </div> Here you can see we are creating a banner header containing just one image, and injecting the AppTenant.Folder property to ensure we get the right banner. The result is that different images are displayed per tenant Tenant 1 (localhost:5001): Tenant 2 (localhost:5002): This satisfies our first requirement of having tenant-specific files, but it fails at the second - we can access the Tenant 2 banner from the Tenant 1 hostname (localhost:5001): This is the specific problem we are trying to address, so we will need a new approach. Forking the middleware pipeline The technique we are going to use here is to fork the middleware pipeline. As explained in my previous post on creating custom middleware, middleware is essentially everything that sits between the raw request constructed by the web server and your application behaviour. In ASP.NET Core the middleware effectively sits in a sequential pipe. Each piece of middleware can perform some operation on the HttpContext, and then either return, or call the next middleware in the pipe. Finally it gets another chance to modify the HttpContext on the way 'back through'. When you use SaasKit in your application, you add a piece of TenantResolutionMiddleware into the pipeline. It is also possible, as described in Ben Foster's post, to split the middleware pipeline per tenant. In that way you can have different middleware for each tenant, before the pipeline merges again, to continue with the remainder of the middleware: To achieve our requirements, we are going to be doing something slightly different again - we are going to fork the pipeline completely such that requests to our tenant specific files go down one branch, while all other requests continue down the pipeline as usual. Building the middleware Before we go about building the required custom middleware, it's worth noting that there are actually lots of different ways to achieve what I'm aiming for here. The approach I'm going to show is just one of them. - Tenant resolution should happen at the start of the pipeline - Requests for tenant specific static files should arrive at the static file path, with the AppTenant.Foldersegment removed. e.g. from the example above, a request for the banner image for tenant 1 should go to /tenant/images/banner.svg. - Register a route which matches paths starting with the /tenant/segment. - If the route is not matched, continue on the pipeline as usual. - If the route is matched, fork the pipeline. Insert the appropriate AppTenant.Foldersegment into the path and serve the file using the standard static file middleware. UseRouter to match path and fork the pipeline The first step in processing a tenant-specific file, is identifying when a tenant-specific static file is requested. We can achieve this using the IRouter interface from the ASP.NET Core library, and configuring it to look for our path prefix. We know that any requests to our files should start with the folder name /tenant/ so we configure our router to fork the pipeline whenever it is matched. We can do this using a RouteBuilder and MapRoute in the Startup.Configure method: var routeBuilder = new RouteBuilder(app); var routeTemplate = "tenant/{*filePath}"; routeBuilder.MapRoute(routeTemplate, (IApplicationBuilder fork) => { //Add middleware to rewrite our path for tenant specific files fork.UseMiddleware<TenantSpecificPathRewriteMiddleware>(); fork.UseStaticFiles(); }); var router = routeBuilder.Build(); app.UseRouter(router); We are mapping a single route as required, and also specifying a catch-all route parameter which will match everything after the first segment, and assign it to the filePath route parameter. It is also here that the middleware pipeline is forked when the route is matched. We have added the static file middleware to the end of the pipeline fork, and our custom middleware just before that. As the static file middleware just sees a path that contains our tenant-specific files, it acts exactly like normal - if the file exists, it serves it, otherwise it returns a 404. Rewriting the path for tenant-specific files In order to rewrite the path we will use a small piece of middleware which is called before we attempt to resolve our tenant-specific static files. public class TenantSpecificPathRewriteMiddleware { private readonly RequestDelegate _next; public TenantSpecificPathRewriteMiddleware( RequestDelegate next) { _next = next; } public async Task Invoke(HttpContext context) { var tenantContext = context.GetTenantContext<AppTenant>(); if (tenantContext != null) { //remove the prefix portion of the path var originalPath = context.Request.Path; var tenantFolder = tenantContext.Tenant.Folder; var filePath = context.GetRouteValue("filePath"); var newPath = new PathString($"/tenant/{tenantFolder}/{filePath}"); context.Request.Path = newPath; await _next(context); //replace the original url after the remaining middleware has finished processing context.Request.Path = originalPath; } } } This middleware just does one thing - it inserts the AppTenant.Folder segment into the path, and replaces the value of HttpContext.Request.Path. It then calls the remaining downstream middleware (in our case, just the static file handler). Once the remaining middleware has finished processing, it restores the original request path. That way, any upstream middleware which looks at the path on the return journey through will be unaware any change happened. It is worth noting that this setup makes it impossible to access files from another tenant's folder. For example, if I am Tenant 1, attempting to access the banner of Tenant 2, I might try a path like /tenant/tenant2/images/banner.svg. However, our rewriting middleware will alter the path to be /tenant/tenant1/tenant2/images/banner.svg - which likely does not exist, but in any case resides in the tenant1 folder and so is by definition acceptable for serving to Tenant 1. Referencing a tenant specific file Now we have the relevant infrastructure in place we just need to reference the tenant-specific banner file in our view: @{ ViewData["Title"] = "Home Page"; } <div id="myCarousel" class="carousel slide"> <div class="carousel-inner" role="listbox"> <div class="item active"> <img src="~/tenant/images/banner.svg" alt="ASP.NET" class="img-responsive" /> </div> </div> </div> As an added bonus, we no longer need to inject the tenant into the view in order to build the full path to the tenant-specific file. We just reference the path without the AppTenant.Folder segment in the knowledge it'll be added later. Testing it out And that's it, we're all done! To test it out we verify that localhost:5001 and localhost:5002 return their appropriate banners as before. Tenant 1 (localhost:5001): Tenant 2 (localhost:5002): So that still works, but what about if we try and access the purple banner of Tenant 2 from Tenant 1? Success - looking at the developer tools we can see that the request returned a 404. This was because the actual path tested by the static file middleware, /tenant/tenant1/tenant2/images/banner.svg, does not exist. Tidying things up Now we've seen that our implementation works, we can tidy things up a little. As a convention, middleware is typically added to the pipeline with a Use extension method, in the same way UseStaticFiles was added to our fork earlier. We can easily wrap our router in an extension method to give the same effect public static IApplicationBuilder UsePerTenantStaticFiles<TTenant>( this IApplicationBuilder app, string pathPrefix, Func<TTenant, string> tenantFolderResolver) { var routeBuilder = new RouteBuilder(app); var routeTemplate = pathPrefix + "/{*filePath}"; routeBuilder.MapRoute(routeTemplate, (IApplicationBuilder fork) => { fork.UseMiddleware<TenantSpecificPathRewriteMiddleware<TTenant>>(pathPrefix, tenantFolderResolver); fork.UseStaticFiles(); }); var router = routeBuilder.Build(); app.UseRouter(router); return app; } As well as wrapping the route builder in an IApplicationBuilder extension method, I've done a couple of extra things too. First, I've made the method (and our TenantSpecificPathRewriteMiddleware) generic, so that we can reuse it in apps with other AppTenant implementations. As part of that, you need to pass in a Func<TTenant, string> to indicate how to obtain the tenant-specific folder name. Finally, you can pass in the tenant/ routing template prefix, so you can name the tenant-specific folder in wwwroot anything you like. To use the extension method , we just call it in Startup.Configure, after the tenant resolution middleware: public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) { //other configuration app.UseMultitenancy<AppTenant>(); app.UsePerTenantStaticFiles<AppTenant>("tenant", x => x.Folder); app.UseStaticFiles(); //app.UseMvc(); etc } Considerations As always with middleware, the order is important. Obviously we cannot use tenant specific static files if we have not yet run the tenant resolution middleware. Also, it's critical for this design that the UseStaticFiles call comes after both UseMultitenancy and UsePerTenantStaticFiles. This is in contrast to the usual pattern where you would have UseStaticFiles very early in the pipeline. The reason for this is that we need to make sure we fork the pipeline as early as possible when resolving paths of the form /tenant/REST_OF_THE_PATH. If the static file handler was first in the pipeline then we would be back to square one in serving files from other tenants! Another point I haven't addressed is how we handle the case when the tenant context cannot be resolved. There are many different ways to handle this, which Ben covers in detail in his post on handling unresolved tenants. These include adding a default tenant (so a context always exists), adding additional middleware to redirect, or returning a 404 if the tenant cannot be resolved. With respect to our fork of the pipeline, we are explicitly checking for a tenant context in the TenantSpecificPathRewriteMiddleware, and if one is not found, we are just returning immediately. Note however that we are no setting a status code, which means that the response sent to the browser will be the default 200, but with no content. The result is essentially undefined at this point, so it is probably wise to handle the unresolved context issue immediately after the call to UseMultitenancy, before calling our tenant-specific static file middleware. As I mentioned previously, there are a number of different ways we could achieve the end result we're after here. For example, we could have used the Map extension on IApplicationBuilder to fork the pipeline instead of using an IRouter. The Map method looks for a path prefix ( /tenant in our case) and forks the pipeline at this point, in a similar way to the IRouter implementation shown. It's worth nothing there's also a basic url-rewriting middleware in development which may be useful for this sort of requirement in the near future. Summary Adding multi-tenancy to an ASP.NET Core application is made a lot simpler thanks to the open source SaasKit. Depending on your requirements, it can be used to enable data partitioning by using different databases per client, to provide different themes and styling across tenants, or to wholesale swap out portions of the middleware pipeline depending on the tenant. In this post I showed how we can create a fork of the ASP.NET Core middleware pipeline and to use it to map generic urls of the form PREFIX/path/to/file.txt, to a tenant-specific folder such as PREFIX/TENANT/path/to/file.txt. This allows us to isolate static files between tenants where necessary.
https://andrewlock.net/forking-the-pipeline-adding-tenant-specific-files-with-saaskit-in-asp-net-core/
CC-MAIN-2021-04
refinedweb
2,479
52.39
® New ® Application command for a classic VCL-based Windows program and with the File ® New ® CLX Application command for a new CLX-based portable application. ® ®_USERSoftwareBorlandDelphi7.0). This Registry key uses a string (in place of Boolean values), where '-1' and 'True' indicate true and '0' and 'False' indicate false. ®7.0CodeCompletionExtraUnits.. The editor has many more shortcut keys that depend on the editor style you've selected. Here are a few of the lesser-known shortcuts: ® ®Delphi7.0 key of the Registry in the HKEY_CURRENT_USERSoftware. Delphi's multitarget Project Manager (View ® Of all the projects in the group, only one is active; this is the project you operate on when you select a command such as Project ® File (*.rc) as the file type. This resource file will be bound to the project automatically, even without a corresponding $R directive. Delphi saves the project groups with the .BPG extension, which stands for Borland Project Group. This feature comes from C++Builder and from past Borland C++ compilers; this history is clearly visible when you open the source code of a project group, which is basically that of a makefile in a C/C++ development environment. Here is a simple example: #————————————————————————————— VERSION = BWS.01 #————————————————————————————— !ifndef ROOT ROOT = $(MAKEDIR).. !endif #————————————————————————————— MAKE = $(ROOT)inmake.exe -$(MAKEFLAGS) -f$** DCC = $(ROOT)indcc32.exe $** BRCC = $(ROOT)inrcc32.exe $** #————————————————————————————— PROJECTS = Project1.exe #————————————————————————————— default: $(PROJECTS) #————————————————————————————— Project1.exe: Project1.dpr $(DCC) The Project Manager doesn't provide a way to set the options of two different projects at one time. Instead, you can invoke the Project Options dialog from the Project Manager for each project. The first page of Project Options (Forms) lists the forms that should be created automatically at program startup and the forms that are created manually by the program. The next page (Application) is used to set the name of the application and the name of its Help file, and to choose its icon. Other Project Options choices relate to the Delphi compiler and linker, version information, and the use of run-time packages. There are two ways to set compiler options. One is to use the Compiler page of the Project Options dialog. The other is to set or remove individual options in the source code with the {$X+} and {$X-} directives, where you replace X with the option you want to set. This second approach is more flexible, because it allows you to change an option only for a specific source-code file, or even for just a few lines of code. The source-level options override the compile-level options. All project options are saved automatically with the project, but in a separate file with a .DOF extension. This is a text file you can easily edit. You should not delete this file if you have changed any of the default options. Delphi also saves the compiler options in another format in a CFG file, for command-line compilation. The two files have similar content but a different format: The dcc command-line compiler cannot use .DOF files, but needs the .CFG format. Another alternative for saving compiler options is to press Ctrl+O+O (press the O key twice while keeping Ctrl pressed). This key combination inserts, at the top of the current unit, compiler directives that correspond to the current project options (including all of the new compiler warning settings), as in the following listing: {$A8,B-,C+,D+,E-,F-,G+,H+,I+,J-,K-,L+,M-,N+,O+,P+,Q-,R-,S-,T-,U-,V+,W-,X+,Y+,Z1} {$MINSTACKSIZE $00004000} {$MAXSTACKSIZE $00100000} {$IMAGEBASE $00400000} {$APPTYPE GUI} {$WARN SYMBOL_DEPRECATED ON} {$WARN SYMBOL_LIBRARY ON} {$WARN SYMBOL_PLATFORM ON} {$WARN UNIT_LIBRARY ON} {$WARN UNIT_PLATFORM ON} {$WARN UNIT_DEPRECATED ON} {$WARN HRESULT_COMPAT ON} {$WARN HIDING_MEMBER ON} {$WARN HIDDEN_VIRTUAL ON} {$WARN GARBAGE ON} {$WARN BOUNDS_ERROR ON} {$WARN ZERO_NIL_COMPAT ON} {$WARN STRING_CONST_TRUNCED ON} {$WARN FOR_LOOP_VAR_VARPAR ON} {$WARN TYPED_CONST_VARPAR ON} {$WARN ASG_TO_TYPED_CONST ON} {$WARN CASE_LABEL_RANGE ON} {$WARN FOR_VARIABLE ON} {$WARN CONSTRUCTING_ABSTRACT ON} {$WARN COMPARISON_FALSE ON} {$WARN COMPARISON_TRUE ON} {$WARN COMPARING_SIGNED_UNSIGNED ON} {$WARN COMBINING_SIGNED_UNSIGNED ON} {$WARN UNSUPPORTED_CONSTRUCT ON} {$WARN FILE_OPEN ON} {$WARN FILE_OPEN_UNITSRC ON} {$WARN BAD_GLOBAL_SYMBOL ON} {$WARN DUPLICATE_CTOR_DTOR ON} {$WARN INVALID_DIRECTIVE ON} {$WARN PACKAGE_NO_LINK ON} {$WARN PACKAGED_THREADVAR ON} {$WARN IMPLICIT_IMPORT ON} {$WARN HPPEMIT_IGNORED ON} {$WARN NO_RETVAL ON} {$WARN USE_BEFORE_DEF ON} {$WARN FOR_LOOP_VAR_UNDEF ON} {$WARN UNIT_NAME_MISMATCH ON} {$WARN NO_CFG_FILE_FOUND ON} {$WARN MESSAGE_DIRECTIVE ON} {$WARN IMPLICIT_VARIANTS ON} {$WARN UNICODE_TO_LOCALE ON} {$WARN LOCALE_TO_UNICODE ON} {$WARN IMAGEBASE_MULTIPLE ON} {$WARN SUSPICIOUS_TYPECAST ON} {$WARN PRIVATE_PROPACCESSOR ON} {$WARN UNSAFE_TYPE OFF} {$WARN UNSAFE_CODE OFF} {$WARN UNSAFE_CAST OFF} There are several ways to compile a project. If you run the project (by pressing F9 or clicking the Run toolbar icon), Delphi will compile it first. When Delphi compiles a project, it compiles only the files that have changed. If you select Project ® Build All instead, every file is compiled, even if it has not changed. You should only need this second command infrequently, because Delphi can usually determine which files have changed and compile them as required. The only exception is when you change some project options, in which case you have to use the Build All command to put the new options into effect. To build a project, Delphi first compiles each source code file, generating a Delphi Compiled Unit (DCU). (This step is performed only if the DCU file is not already up to date.) The second step, performed by the linker, is to merge all the DCU files into the executable file, optionally with compiled code from the VCL library (if you haven't decided to use packages at run time). The third step is binding into the executable file any optional resource files, such as the RES file of the project, which hosts its main icon, and the DFM files of the forms. You can better understand the compilation steps and follow what happens during this operation if you enable the Show Compiler Progress option (in the Preferences page of the Environment Options dialog box). The Compile command can be used only when you have loaded a project in the editor. If no project is active and you load a Pascal source file, you cannot compile it. However, if you load the source file as if it were a project, that will do the trick and you'll be able to compile the file. To do this, simply select the Open Project toolbar button and load a PAS file. Now you can check its syntax or compile it, building a DCU. I've mentioned before that Delphi allows you to use run-time packages, which affect the distribution of the program more than the compilation process. Delphi packages are dynamic link libraries (DLLs) containing Delphi components. By using packages, you can make an executable file much smaller. However, the program won't run unless the proper DLLs (such as vcl70.bpl, which is quite large) are available on the computer where you want to run the program. If you add the size of this dynamic library to that of the small executable file, the total amount of disk space required by the apparently smaller program built with run-time packages is much larger than the space required by the apparently bigger stand-alone executable file. Of course, if you have multiple applications on a single system, you'll end up saving a lot, both in disk space and memory consumption at run time. The use of packages is often but not always recommended. I'll discuss all the implications of packages in detail in Chapter 10. In both cases, Delphi executables are extremely fast to compile, and the speed of the resulting application is comparable to that of a C or C++ program. Delphi compiled code runs at least five times faster than the equivalent code in interpreted or "semicompiled" tools. Compiler Message Helpers and Warnings As I mentioned at the beginning of this chapter (in the section "Extended Compiler Messages and Search Results in Delphi 7"), in addition to the classic compiler messages, Delphi 7 provides a new window with additional information about some error messages. This window is activated using the View ® Additional Message Info menu command. It displays information stored in a local file, which can be updated by downloading a new version from Borland's website. Another change in Delphi 7 relates to the increased control you have over compiler warnings. The Project Options dialog box now includes a Compiler Messages page where you can choose many individual warnings. This feature was probably introduced due to the fact that Delphi 7 has a new set of warnings related to compatibility with the future Delphi for .NET tool. These warnings are quite extensive, and I've disabled them as shown in Figure 1.10. Figure 1.10: The new Compiler Messages page of the Project Options dialog box You can also enable or disable some of these warnings using compiler options like these: {$Warn UNSAFE_CODE OFF} {$Warn UNSAFE_CAST OFF} {$Warn UNSAFE_TYPE OFF} In general, it is better to keep these settings outside the source code of the program—something Delphi 7 finally allows you to do. Delphi has always included a tool to browse the symbols of a compiled project, although this tool's name has changed many times (from Object Browser to Project Explorer and now to Project Browser). In Delphi 7, you activate the Project Browser window using the View ® Browser menu command, which displays the window shown in Figure 1.11. The browser allows you to see the hierarchical structure of the project's classes and to look for its symbols and the source-code lines where they are referenced. Figure 1.11: The Project Browser Unlike the Code Explorer, the Project Browser is updated only as you recompile the project. This browser allows you to list classes, units, and globals, and lets you choose whether to look only for symbols defined within your project or for those from both your project and VCL. You can change the settings of the Project Browser and those of the Code Explorer in the Explorer page of the Environment Options or by selecting the Properties command in the shortcut menu of the Project Explorer. Some of the categories you see in this window are specific to the Project Browser; others relate to both tools. In addition to the IDE, when you install Delphi you get other, external tools. Some of them, such as the Database Desktop, the Package Collection Editor (PCE.exe), and the Image Editor (ImagEdit.exe), are available from the Tools menu in the IDE. In addition, the Enterprise edition has a link to the SQL Monitor (SqlMon.exe). Other tools that are not directly accessible from the IDE include many command-line utilities you can find in the Delphi bin directory. For example, these tools include a command-line Delphi compiler (DCC32.exe), a Borland resource compiler (BRC32.exe and BRCC32.exe), and an executable viewer (TDump.exe). Finally, some of the sample programs that ship with Delphi are actually useful tools that you can compile and keep at hand. I'll discuss some of these tools in the book, as needed. Here are a few of the useful and higher-level tools, most of which are available in the Delphi7in folder and in the Tools menu: Web App Debugger (WebAppDbg.exe) The debugging web server introduced in Delphi 6. It is used to keep track of the requests sent to your applications and to debug them. This debugger was rewritten in Delphi 7: It is now a CLX application and its connectivity is based on sockets. I'll discuss this tool in Chapter 20. XML Mapper (XmlMapper.exe) A tool for creating XML transformations to be applied to the format produced by the ClientDataSet component. You'll find more on this topic in Chapter 22. External Translation Manager (etm60.exe) The stand-alone version of the Integrated Translation Manager. This external tool can be given to external translators and was available for the first time in Delphi 6. Borland Registry Cleanup Utility (D7RegClean.exe) A tool that helps you remove all the Registry entries that Delphi 7 adds to a computer. TeamSource An advanced version-control system provided with Delphi, starting with version 5. The tool is very similar to its past incarnation and is installed separately from Delphi. Delphi 7 ships with version 1.01 of Team Source, the same version available after applying an available patch to the Delphi 6 version. WinSight (Ws32.exe) A Windows "message spy" program available in the bin directory. Database Explorer A tool that can be activated from the Delphi IDE or as a stand-alone tool, using the DBExplor.exe program of the bin directory. Because it is meant for the BDE, the Database Explorer is not used much nowadays. OpenHelp (oh.exe) The tool you can use to manage the structure of Delphi's own Help files, integrating third-party files into the help system. Convert (Convert.exe) A command-line tool you can use to convert DFM files into the equivalent textual description and vice versa. Turbo Grep (Grep.exe) A command-line search utility, which is much faster than the embedded Find In Files mechanism but not as easy to use. Turbo Register Server (TRegSvr.exe) A tool you can use to register ActiveX libraries and COM servers. The source code for this tool is available under DemosActiveXTRegSvr. Resource Explorer A powerful resource viewer (but not a full-blown resource editor) you can find under DemosResXplor. Resource Workshop An old 16-bit resource editor that can also manage Win32 resource files. The Delphi installation CD includes a separate installation for Resource Workshop. It was formerly included in Borland C++ and Pascal compilers for Windows and was much better than the standard Microsoft resource editors then available. Although its user interface hasn't been updated and it doesn't handle long filenames, this tool can still be very useful for building custom or special resources. It also lets you explore the resources of existing executable files. Delphi produces various files for each project, and you should know what they are and how they are named. Basically, two elements have an impact on how files are named: the names you give to a project and its units, and the predefined file extensions used by Delphi. Table 1.1 lists the extensions of the files you'll find in the directory where a Delphi project resides. The table also shows when or under what circumstances these files are created and their importance for future compilations. Besides the files generated during the development of a project in Delphi, many others are generated and used by the IDE itself. In Table 1.2, I've provided a short list of extensions worth knowing about. Most of these files are in proprietary and undocumented formats, so there is little you can do with them. I've just listed some files related to the development of a Delphi application, but I want to spend a little time covering their actual format. The fundamental Delphi files are Pascal source code files, which are plain ASCII text files. The bold, italic, and colored text you see in the editor depends on syntax highlighting, but it isn't saved with the file. It is worth noting that there is a single file for the form's whole code, not just small code fragments. For a form, the Pascal file contains the form class declaration and the source code of the event handlers. The values of the properties you set in the Object Inspector are stored in a separate form description file (with a .DFM extension). The only exception is the Name property, which is used in the form declaration to refer to the components of the form. The DFM file is by default a text representation of the form, but it can also be saved in a binary Windows Resource format. You can set the format you want to use for new projects in the Designer page of the Environment Options dialog box, and you can toggle the format of individual forms with the Text DFM command on a form's shortcut menu. A plain-text editor can read only the text version. However, you can load DFM files of both types in the Delphi editor, which will, if necessary, first convert them into a textual description. The simplest way to open the textual description of a form (whatever the format) is to select the View As Text command on the shortcut menu in the Form Designer. This command closes the form, saving it if necessary, and opens the DFM file in the editor. You can later go back to the form using the View As Form command on the shortcut menu in the editor window. You can edit the textual description of a form, although you should do so with extreme care. As soon as you save the file, it will be parsed to regenerate the form. If you've made incorrect changes, compilation will stop with an error message; you'll need to correct the contents of your DFM file before you can reopen the form. For this reason, you shouldn't try to change the textual description of a form manually until you have good knowledge of Delphi programming. In addition to the two files describing the form (PAS and DFM), a third file is vital for rebuilding the application: the Delphi project file (DPR), which is another Pascal source code file. This file is built automatically, and you seldom need to change it manually. You can see this file with the Project ® View Source menu command. Some of the other, less relevant files produced by the IDE use the structure of Windows INI files, in which each section is indicated by a name enclosed in square brackets. For example, this is a fragment of an option file (DOF): [Compiler] A=1 B=0 ShowHints=1 ShowWarnings=1 [Linker] MinStackSize=16384 MaxStackSize=1048576 ImageBase=4194304 [Parameters] RunParams= HostApplication= The same structure is used by the Desktop files (DSK), which store the status of the Delphi IDE for the specific project, listing the position of each window. Here is a small excerpt: [MainWindow] Create=1 Visible=1 State=0 Left=2 Top=0 Width=800 Height=97 Del. Figure 1.12: The first page of the New Items dialog box, generally known as the Object Repository. The Empty Project Template When you start a new project, Delphi automatically opens a blank form, too. However, if you want to base a new project on one of the form objects or wizards, you don't need this form. To solve this problem, you can add an Empty Project template to the Gallery. The steps required to accomplish this are simple: When you select this project from the Object Repository, you gain two advantages: You have your project without a form, and you can pick a directory where the project template's files will be copied. There is also a disadvantage—you have to remember to use the File ® Save Project As command to give a new name to the project, because saving the project any other way automatically uses the default name in the template.. Installing New DLL Wizards Technically, new wizards come in two different forms: They may be part of components or packages, or they may be distributed as stand-alone DLLs. In the first case, they are installed the same way you install a component or a package, using the Components ® Install Packages menu command and then clicking the Add button. When you've received a stand-alone DLL, you should add the name of the DLL in the Windows Registry under the key SoftwareBorlandDelphi7.0Experts. Simply add a new string key under this key, choose a name you like (it doesn't really matter what it is), and use as text the path and filename of the wizard DLL. You can look at the entries already present under the Experts key to see how the path should be entered. When. This chapter has presented an overview of the new and more advanced features of the Delphi 7 programming environment, including tips and suggestions about some lesser-known features that were already available in previous Delphi versions. I didn't provide a step-by-step description of the IDE, partly because it is generally simpler to begin using Delphi than it is to read about how to use it. Moreover, there is a detailed Help file describing the environment and the development of a new simple project; and you might already have some exposure to one of the past versions of Delphi or a similar development environment. Now we are ready to spend the next chapter looking into the Delphi programming language. Then we'll proceed by studying the run-time library (RTL) and the class library included in Delphi.
http://flylib.com/books/en/2.37.1/delphi_7_and_its_ide.html
CC-MAIN-2018-05
refinedweb
3,480
60.14
Simplifying the Data Access Layer with Spring and Java Generics 1. Overview This. 2.1. A Generic DAO Instead of having multiple implementations – one for each entity in the system – a single parametrized DAO can be used in such a way that it still takes full advantage of the type safety provided by generics. Two implementations of this concept are presented next, one for a Hibernate centric persistence layer and the other focusing on JPA. These implementation are by no means complete – only some data access methods are included, but they can be easily be made more thorough. 2.2. The Abstract Hibernate DAO) this.getCurrentSession().get( this.clazz, id ); } public List< T > findAll(){ return this.getCurrentSession() .createQuery( "from " + this.clazz.getName() ).list(); } public void save( T entity ){ this.getCurrentSession().persist( entity ); } public void update( T entity ){ this.getCurrentSession().merge( entity ); } public void delete( T entity ){ this.getCurrentSession().delete( entity ); } public void deleteById( Long entityId ){ T entity = this.getById( entityId ); this.delete( entity ); } protected Session getCurrentSession(){ return this previous post of the series. 2.3. The Abstract JPA D ); } } Similar to the Hibernate DAO implementation, the Java Persistence API is used here directly, again not relying on the now deprecated Spring JpaTemplate. 2.4. The Generic DAO Now, the actual implementation of the generic DAO is as simple as it can be – it contains no logic. Its only purpose is to be injected by the Spring container in a service layer (or in whatever other type of client of the Data Access Layer): @Repository @Scope( BeanDefinition.SCOPE_PROTOTYPE ) public class GenericJpaDAO< T extends Serializable > extends AbstractJpaDAO< T > implements IGenericDAO< T >{ // } @Repository @Scope( BeanDefinition.SCOPE_PROTOTYPE ) public class GenericHibernateDAO< T extends Serializable > extends AbstractHibernateDAO< T > implements IGenericDAO< T >{ // } First, note that the generic implementation is itself parametrized – allowing the client to choose the correct parameter in a case by case basis. This will mean that the clients gets all the benefits of type safety without needing to create multiple artifacts for each entity. Second, notice the prototype scope of these generic DAO implementation. Using this scope means that the Spring container will create a new instance of the DAO each time it. 3. The Service ){ this.dao = daoToSet; this.dao.setClazz( Foo.class ); } // ... } Spring autowires the new DAO insteince using setter injection so that the implementation can be customized with the Class object. After this point, the DAO is fully parametrized and ready to be used by the service. 4. Conclusion This article discussed the simplification of the Data Access Layer by providing a single, reusable implementation of a generic DAO. This implementation was presented. The next article of the Persistence with Spring series will focus on setting up the DAL layer with Spring 3.1 and JPA. In the meantime, you can check out the full implementation in the github project. If you read this far, you should follow me on twitter here. (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) Vitaliy Morarian replied on Thu, 2012/01/05 - 10:53am Vitaliy Morarian replied on Thu, 2012/01/05 - 10:53am Eugen Paraschiv replied on Thu, 2012/01/05 - 1:59pm in response to: Vitaliy Morarian In short, the PROTOTYPE scope is needed because otherwhise there would be a single DAO instance (singleton) in the Spring container, not one for each Entity class. For a in depth discussion on this (PROS and CONS), see the comment section of the article. Thanks. Eugen. Lutz replied on Fri, 2012/01/06 - 8:33am in response to: Eugen Paraschiv <bean name="fooDao" class="dao.AbstractHibernateDAO" > <parameter name="clazz" value="domain.Foo" /> </bean> this way you would only ever have one DAO instance per domain class, and can inject the appropriate DAOs in your services. Caesar Ralf Fra... replied on Fri, 2012/01/06 - 12:08pm Eugen Paraschiv replied on Sat, 2012/01/07 - 8:48am in response to: Lutz I prefer making good use of Spring 3 annotations, which means that the DAOs are not defined in XML at all. That being said, the way the class is injected seems to be generating a lot of comments. Yes, there is more than one way to do it: in XML, with reflection, by hand, etc. I prefer doing it by hand simply because it a simple thing and doesn't require to facy of a solution - I prefer the readability and clearity of doing it plainly, but I will make sure to update the article with the other options as well. Thanks you for the interesting feedback. Eugen. Eugen Paraschiv replied on Sat, 2012/01/07 - 8:54am in response to: Caesar Ralf Franz Hoppen Yes, you can also obtain the class that way. My own preference is towards the simpler solution, as I find it more readable and easy to follow. Also, this exact issue has already been discussed in the comments of the original article. Regards, Eugen. Gökhan Ozar replied on Mon, 2012/02/20 - 4:58am
http://java.dzone.com/articles/simplifying-data-access-layer
CC-MAIN-2013-48
refinedweb
837
54.63
ASF Bugzilla – Bug 48956 SSI regular expressions not working Last modified: 2012-06-08 12:16:47 UTC I was trying to use Tomcat SSI filter: While it generally works, I have discovered that SSI regular expressions are not supported. As much as I can see from tomcat source code, this feature is just not implemented. For example, those expressions always return "did not match": <!--#if expr="abc = /abc/" -->matches<!--#else -->did not match<!--#endif --> <!--#if expr="abc = /[a-z]/" -->matches<!--#else -->did not match<!--#endif --> This bug is a showstopper for me, because in my application I need SSI regular expressions support. Patches for enhancement requests are always welcome Fixed in 7.0.x and will be included in 7.0.17 onwards. I don't see this being back-ported to 6.0.x or 5.5.x. Hi, I found a bug working with Tomcat, SSI and regular expression that is still unresolved and my work is essential for proper operation: For example: <!--#set var="aux" value="aa12" --> <!--#if expr="$aux=/^aa([a-zA-Z0-9\-_]*)/" --> <!--#set var="aux2" value="$1" --> <!--#endif --> Resulting value:<!--#echo var="aux2" --> In a html file works correctly, but when working on Tomcat, a complex regular expression fails, and the page returns nothing from it. Regards and await your response. (In reply to comment #3) > Hi, I found a bug working with Tomcat, SSI and regular expression that is > still unresolved and my work is essential for proper operation: I do not understand your description. What do you mean by "fails" and what do you mean by "returns nothing"? Please provide: 1. What are the steps to reproduce your problem 2. What is the actual behaviour that you are observing. 3. What do you expect the correct behaviour to be. ======================================= By the way: 1. The actual implementation Regexp matching is in org/apache/catalina/ssi/ExpressionParseTree.java, in method compareBranches() 2. The feature was implemented by r1136231 and r1136399 3. r1136399 was result of discussion in "Re: r1136231" thread on dev@. In archives: 4. If you can provide a patch for ExpressionParseTree#compareBranches(), it would be faster. (In reply to comment #3) I think I understood what you are asking for. It is a separate issue, so I filed it separately -> bug 53387 I am re-closing this issue as FIXED.
https://bz.apache.org/bugzilla/show_bug.cgi?id=48956
CC-MAIN-2016-44
refinedweb
390
60.72
Today’s Programming Praxis problem is an easy one: all we have to do is convert a string that contains numbers and number ranges to a full list of numbers. The original PL/SQL solution is 49 lines, and the scheme solution has 9. Let’s see if we can bring that down further still. Our import: import Data.List.Split What we have to do is pretty simple: first we split the string on the commas. The resulting chunks are split on the dash. The resulting numbers are converted to Ints. Since we need two numbers to define a range, we cycle these numbers, and take the range between the first two numbers. modOut :: String -> [Int] modOut = concatMap ((\(a:b:_) -> [a..b]) . cycle . map read . sepBy "-") . sepBy "," A quick test reveals that everything’s working correctly: main :: IO () main = print $ modOut "1-6,9,13-19" And so we’ve reduced the solution size by another factor of three. That will do nicely. Tags: Haskell, kata, mod, out, praxis, programming, system
https://bonsaicode.wordpress.com/2009/06/23/programming-praxis-the-mod-out-system/
CC-MAIN-2017-30
refinedweb
173
74.9
Selection sort in Java is used to sort the unsorted values in an array. In selection sorting algorithm, the minimum value in an array is swapped to the very first position in that array. In the next step the first value of array is left and minimum element from the rest of the array is swapped to second position. This procedure is repeated till the array is sorted completely. Selection sort is probably the most spontaneous sorting algorithm. Selection sort is simple and more efficient when it comes to more complex arrays. The complexity of selection sort in worst-case is Θ(n2), in average-case is Θ(n2), and in best-case is Θ(n2). Following example shows the selection sort in Java. In selection sort algorithm, first assign minimum index in key as index_of_min=a. Then the minimum value is searched. Index of minimum value in key is assigned as index_of_min=b. Minimum value and the value of minimum index are then swapped. In the next step the first element (value) is left, and remaining values are sorted by following same steps. This process is repeated till the whole list is sorted. Example of Selection Sort in Java: public class selectionSort{ public static void main(String a[]){ int i; int array[] = {15, 67, 40, 92, 23, 7, 77, 12}; System.out.println("\n\n RoseIndia\n\n"); System.out.println(" Selection Sort\n\n"); System.out.println("Values Before the sort:\n"); for(i = 0; i < array.length; i++) System.out.print( array[i]+" "); System.out.println(); selection_srt(array, array.length); System.out.print("Values after the sort:\n"); for(i = 0; i <array.length; i++) System.out.print(array[i]+" "); System.out.println(); System.out.println("PAUSE"); } public static void selection_srt(int array[], int n){ for(int x=0; x<n; x++){ int index_of_min = x; for(int y=x; y<n; y++){ if(array[index_of_min]<array[y]){ index_of_min = y; } } int temp = array[x]; array[x] = array[index_of_min]; array[index_of_min] = temp; } } } Output: C:\array\sorting>javac selectionSort.java C:\array\sorting>java selectionSort RoseIndia Selection Sort Values Before the sort: 15 67 40 92 23 7 77 12 Values after the sort: 7 12 15 23 40 67 77 92: Selection Sort in Java Post your Comment
http://roseindia.net/java/beginners/arrayexamples/selection-sort-in-java.shtml
CC-MAIN-2016-18
refinedweb
379
50.43
Created on 2016-10-13 10:36 by arigo, last changed 2017-08-02 14:08 by dubiousjim. This issue is now closed. Follow-up on. Another crash of using WeakValueDictionary() in a thread-local fashion inside a multi-threaded program. I must admit I'm not exactly sure why this occurs, but it is definitely showing an issue: two threads independently create their own WeakValueDictionary() and try to set one item in it. The problem I get is that the "assert 42 in d" sometimes fails, even though 42 was set in that WeakValueDictionary on the previous line and the value is still alive. This only occurs if there is a cycle of references involving the value. See attached file. Reproduced on Python 2.7, 3.3, 3.5, 3.6-debug. Here is simpler reproducer for Python 3. One thread updates WeakValueDictionary in a loop, other threads runs garbage collecting in a loop. Values are collected asynchronously and this can cause removing new value by old key. Following patch fixes this example (or at least makes race condition much less likely). But it doesn't fix the entire issue. If add list(d) after setting a new value, the example fails again. I'll admit I don't know how to properly fix this issue. What I came up with so far would need an atomic compare_and_delete operation on the dictionary self.data, so that we can do atomically: + elif self.data[wr.key] is wr: del self.data[wr.key] Following patch fixes more cases. But I don't think it fixes race conditions, it just makes them less likely. Increased priority because this bug makes weakref.WeakValueDictionary unusable in multithread program. One possibility would be to always delay removals (always put them in _pending_removals). We would then have to enforce removals from time to time, but synchronously. (or we bite the bullet and add a C helper function for the atomic test-and-delete thing) Here is a pure Python patch. Note the issue with this patch is that it may keep keys (with dead values) alive longer than necessary: - this may prevent memory consumption from decreasing - this may keep alive some system resources This is ok when keys are small simple objects (strings or tuples), though. Here is a patch showing the "atomic C function" approach. It will avoid the aforementioned memory growth in the common case, in exchange for a small bit of additional C code. issue28427-atomic.patch: is it still necessary to modify weakref.py so much, then? What I had in mind was a C function with Python signature "del_if_equal(dict, key, value)"; the C function doesn't need to know about weakrefs and checking if they are dead. The C function would simply call PyObject_GetItem() and PyObject_DelItem()---without releasing the GIL in the middle. Hi Armin, > is it still necessary to modify weakref.py so much, then? Not sure. I'll take a look again. Modifying __len__() at least is necessary, as the previous version took into account the length of _pending_removals (and could therefore return wrong results). I'm inclined to be a bit defensive here. >(). I think the issue of __len__() is a different matter. More below. >>(). Right, I see your point: your version will, in any case, remove only a dead weakref as a dictionary value. +1 Now about __len__() returning a wrong result: it is a more complicated issue, I fear. I already patched weakref.py inside PyPy in order to pass a unit test inside CPython's test suite. This patch is to change __len__ to look like this: def __len__(self): # PyPy change: we can't rely on len(self.data) at all, because # the weakref callbacks may be called at an unknown later time. result = 0 for wr in self.data.values(): result += (wr() is not None) return result The problem is the delay between the death of a weakref and the actual invocation of the callback, which is systematic on PyPy but can be observed on CPython in some cases too. It means that code like this may fail: if list(d) == []: assert len(d) == 0 because list(d) might indeed be empty, but some callbacks have not been called so far and so any simple formula in __len__() will fail to account for that. ('issue28427-atomic.patch' does not help for that, but I might have missed a different case where it does help.) > Now about __len__() returning a wrong result: it is a more complicated issue, I fear. I already patched weakref.py inside PyPy in order to pass a unit test inside CPython's test suite. This patch is to change __len__ to look like this: [...] Thanks for the explanation. Yes, you are right on the principle. But there is also a general expectation that len() on an in-memory container is a O(1) operation, not O(n) - this change would break that expectation quite heavily. I don't know how to fix len() without losing O(1) performance. It seems we're in a bit of a quandary on this topic. However, we can still fix the other issues. Agreed about fixing the other issues. I'm still unclear that we need anything more than just the _remove_dead_weakref function to do that, but also, I don't see a particular problem with adding self._commit_removals() a bit everywhere. About the O(1) expectation for len(): it's still unclear if it is better to give a precise answer or if an over-estimate is usually enough. I can see of no reasonable use for a precise answer---e.g. code like this while len(d) > 0: do_stuff(d.popitem()) is broken anyway, because a weakref might really die between len(d) and d.popitem(). But on the other hand it makes tests behave strangely. Maybe the correct answer is that such tests are wrong---then I'd be happy to revert the PyPy-specific change to __len__() and fix the test instead. Or maybe weakdicts should always raise in __len__(), and instead have a method .length_upper_bound(). The dict implementation in 3.6 has become very complicated, so I'd like someone to review the attached 3.6 patch. Serhiy, Inada? New changeset b8b0718d424f by Antoine Pitrou in branch '3.5': Issue #28427: old keys should not remove new values from New changeset 97d6616b2d22 by Antoine Pitrou in branch '3.6': Issue #28427: old keys should not remove new values from New changeset e5ce7bdf9e99 by Antoine Pitrou in branch 'default': Issue #28427: old keys should not remove new values from New changeset 9acdcafd1418 by Antoine Pitrou in branch '2.7': Issue #28427: old keys should not remove new values from I've pushed the fixes now. It does introduce a small amount of additional code duplication in dictobject.c, but nothing unmanageable. Sidenote: all branches now have a different version of dict object each, which makes maintenance really painful... I tested this in a freshly-built 3.4.6. Although it reproduced the behavior you're complaining about--it threw the assert in Armin's test.py, and Serhiy's issue28427.py prints an admonishing FAIL--neither test *crashes* CPython. So I'm not convinced either of these is a *security* risk. This is a bug, and 3.4 isn't open for bugfixes, so I don't plan to accept a backport for this in 3.4. If this is a crashing bug, please tell me how to reproduce the crashing bug with 3.4.6. In response to Issue #7105, self._pending_removals was added to WeakValueDictionaries (and also WeakKeyDictionaries, but they're not relevant to what I'm about to discuss). This was in changesets 58194 to tip and 58195 to 3.1, back in Jan 2010. In those changesets, the implementation of WeakValueDictionary.setdefault acquired a check on self._pending_removals, but only after the key lookup had failed. (See lines starting 5.127 in both those changesets.) In changeset 87778, in Dec 2013, this same patch was backported to 2.7. More recently, in response to the issue discussed above (Issue #28427), similar checks were added to WeakValueDictionary.get, but now BEFORE the key lookup. This was in changesets 105851 to 3.5, 105852 to 3.6, 105853 to tip, and 105854 to 2.7, in Dec 2016. Notably, in the last changeset, the check on self._pending_removals on WeakValueDictionary.setdefault is also moved to the top of the function, before the key lookup is attempted. This parallels the change being made to WeakValueDictionary.get. However, that change to WeakValueDictionary.setdefault was only made to the 2.7 branch. If it's correct, then why wasn't the same also done for 3.5, 3.6, and tip?
https://bugs.python.org/issue28427
CC-MAIN-2020-40
refinedweb
1,463
75.91
In this section, we'll describe the AngularJS sandbox, explain how exploits can escape from the sandbox, and spell out how content security policy (CSP) can be bypassed in the context of the AngularJS sandbox. The AngularJS sandbox is a mechanism that prevents access to potentially dangerous objects, such as window or document, in AngularJS template expressions. It also prevents access to potentially dangerous properties, such as __proto__. Despite not being considered a security boundary by the AngularJS team, the wider developer community generally thinks otherwise. Although bypassing the sandbox was initially challenging, security researchers have discovered numerous ways of doing so. As a result, it was eventually removed from AngularJS in version 1.6. However, many legacy applications still use older versions of AngularJS and may be vulnerable as a result. The sandbox works by parsing an expression, rewriting the JavaScript, and then using various functions to test whether the rewritten code contains any dangerous objects. For example, the ensureSafeObject() function checks whether a given object references itself. This is one way to detect the window object, for example. The Function constructor is detected in roughly the same way, by checking whether the constructor property references itself. The ensureSafeMemberName() function checks each property access of the object and, if it contains dangerous properties such as __proto__ or __lookupGetter__, the object will be blocked. The ensureSafeFunction()function prevents call(), apply(), bind(), or constructor() from being called. You can see the sandbox in action for yourself by visiting this fiddle and setting a breakpoint at line 13275 of the angular.js file. The variable fnString contains your rewritten code, so you can look at how AngularJS transforms it. A sandbox escape involves tricking the sandbox into thinking the malicious expression is benign. The most well-known escape uses the modified charAt() function globally within an expression: 'a'.constructor.prototype.charAt=[].join When it was initially discovered, AngularJS did not prevent this modification. The attack works by overwriting the function using the [].join method, which causes the charAt() function to return all the characters sent to it, rather than a specific single character. Due to the logic of the isIdent() function in AngularJS, it compares what it thinks is a single character against multiple characters. As single characters are always less than multiple characters, the isIdent() function always returns true, as demonstrated by the following example: isIdent= function(ch) { return ('a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || '_' === ch || ch === '$'); } isIdent('x9=9a9l9e9r9t9(919)') Once the isIdent() function is fooled, you can inject malicious JavaScript. For example, an expression such as $eval('x=alert(1)') would be allowed because AngularJS treats every character as an identifier. Note that we need to use AngularJS's $eval() function because overwriting the charAt() function will only take effect once the sandboxed code is executed. This technique would then bypass the sandbox and allow arbitrary JavaScript execution. So you've learned how a basic sandbox escape works, but you may encounter sites that are more restrictive with which characters they allow. For example, a site may prevent you from using double or single quotes. In this situation, you need to use functions such as String.fromCharCode() to generate your characters. Although AngularJS prevents access to the String constructor within an expression, you can get round this by using the constructor property of a string instead. This obviously requires a string, so to construct an attack like this, you would need to find a way of creating a string without using single or double quotes. In a standard sandbox escape, you would use $eval() to execute your JavaScript payload, but in the lab below, the $eval() function is undefined. Fortunately, we can use the orderBy filter instead. The typical syntax of an orderBy filter is as follows: [123]|orderBy:'Some string' Note that the | operator has a different meaning than in JavaScript. Normally, this is a bitwise OR operation, but in AngularJS it indicates a filter operation. In the code above, we are sending the array [123] on the left to the orderBy filter on the right. The colon signifies an argument to send to the filter, which in this case is a string. The orderBy filter is normally used to sort an object, but it also accepts an expression, which means we can use it to pass a payload. You should now have all the tools you need to tackle the next lab. Content security policy (CSP) bypasses work in a similar way to standard sandbox escapes, but usually involve some HTML injection. When the CSP mode is active in AngularJS, it parses template expressions differently and avoids using the Function constructor. This means the standard sandbox escape described above will no longer work. Notice that the from() function is used, which allows you to convert an object to an array and call a given function (specified in the second argument) on every element of that array. In this case, we are calling the alert() function. We cannot call the function directly because the AngularJS sandbox would parse the code and detect that the window object is being used to call a function. Using the from() function instead effectively hides the window object from the sandbox, allowing us to inject malicious code. This next lab employs a length restriction, so the above vector will not work. In order to exploit the lab, you need to think of various ways of hiding the window object from the AngularJS sandbox. One way of doing this is to use the array.map() function as follows: [1].map(alert) map() accepts a function as an argument and will call it for each item in the array. This will bypass the sandbox because the reference to the alert() function is being used without explicitly referencing the window. To solve the lab, try various ways of executing alert() without triggering AngularJS's window detection. To prevent AngularJS injection attacks, avoid using untrusted user input to generate templates or expressions.
https://portswigger.net/web-security/cross-site-scripting/contexts/angularjs-sandbox
CC-MAIN-2020-24
refinedweb
998
53.21
"Ben Collins-Sussman" wrote > > [...] > > Would you like to write a new python test for that? :-) I am trying to write a small test case without success. test_svn_ls('') => Traceback (most recent call last): File "<pyshell#22>", line 1, in -toplevel- test_svn_ls('') File "C:/Dev/pye/repo_open.py", line 36, in test_svn_ls from libsvn.client import svn_client_ls File "C:\Python23\lib\site-packages\libsvn\client.py", line 4, in -toplevel- import _client ImportError: DLL load failed with error code 182 Is there something special I need to do to import _client? e Thursday, October 7, 2004, 4:38:51 PM, I wrote: >. > Issue 2011 describes a problem with working-copy paths. > UNC paths also don't work in repository URIs. > It would be nice if > > translated to UNC > \\host\path\to\repos > on Windows. > There may be some objection to putting a host name in the file spec; an > alternative is > > but svn strips away one of the leading slashes. > Is it svn_path_canonicalize that is stripping away one of the leading > // in the UNC pathname? If so, perhaps svn_path_canonicalize could > be changed so that > /* If this is an absolute path, then just copy over the initial > separator character. */ > if (*src == '/') > { > *(dst++) = *(src++); > absolute_path = TRUE; > } > becomes > /* If this is an absolute path, then just copy over the initial > separator character. */ > if (*src == '/') > { > *(dst++) = *(src++); > absolute_path = TRUE; /* not used ? */ > /* for Windows only!? a second slash means this is a UNC path */ > if (*src == '/') > { > *(dst++) = *(src++); > UNC_path = TRUE; /* not used ? */ > } > } > e --------------------------------------------------------------------- To unsubscribe, e-mail: [email protected] For additional commands, e-mail: [email protected] Received on Sat Oct 9 22:28:09 2004 This is an archived mail posted to the Subversion Dev mailing list.
https://svn.haxx.se/dev/archive-2004-10/0421.shtml
CC-MAIN-2019-22
refinedweb
286
67.76
#include <Track.h> List of all members. A track has a Track::Theme and Paths to drive on. Definition at line 34 of file Track.h. Intialise by loading data from a stream, sharing an already loaded theme. Make an empty track. Get a graph showing connections between faces that AI can drive on. Call update_ai_mesh() first. Return a mesh of the faces the ai drives on. Faces from different sources that line up will be joined together, allowing the mesh to be used as a navigatable surface. If you have changed the track, the ai mesh does not update automatically. Call update_ai_mesh() to update all references to the ai mesh. Precondition: update_ai_mesh() has been called since the last edit. Postcondition: get_ai_mesh() reflects the surface the AI should consider. Return the collision shape for the entire track. Get the filename previously set by set_filename. Return the collision shape for the part of the track you can drive on. Return the length of the lap. Requires update_ai_mesh() was called since the last change to the track. Get a the path without modification. Find position coordinates for the AI mesh. Requires that m_ai_mesh is up to date, but invalidates m_ai_graph. Abuses u texture coordinate to mean how far through the lap a point is. Also sets the m_lap_length. In editor mode, scan the meshes in the theme and set them to display wireframe. Set the filename to record in replay files. Recalculate the surface that the AI can drive on, and its connectivity graph. Generated at Mon Sep 6 00:41:19 2010 by Doxygen version 1.4.7 for Racer version svn335.
http://racer.sourceforge.net/classTrack_1_1Track.html
CC-MAIN-2017-22
refinedweb
271
78.85
This tutorial assumes you are using an up-to-date Raspbian install, have access to either LXTerminal or SSH and have an internet connection! We're going to go through the steps on how to use a GPS module with your Raspberry Pi! In this tutorial we're going to use the HAB GPS HAT! By default, the Raspberry Pi serial port console login is enabled. We need to disable this before we can use the serial port for ourselves. To do this, simply load up the raspberry pi configuration tool: sudo raspi-config Then go to option 8 – Advanced Options Then go to option A8 – Serial Over to “No” And finally “Ok” Now go to “Finish”. If you are using a Raspberry Pi 3, there are some additional steps to free up serial. If you are not using a RPi3, skip to the "Power off your Pi with:" section. First we need to edit the boot config file sudo nano /boot/config.txt and change the line: enable_uart=0 to: enable_uart=1 Then we need to add the following lines: dtoverlay=pi3-miniuart-bt force_turbo=1 Next, we need to edit the cmdline txt file sudo nano /boot/cmdline.txt and remove any "console=" references, for example, if your cmdline txt file looks like this: dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait change it to this: dwc_otg.lpm_enable=0 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait Lastly, we need to edit the hciuart service file: sudo nano /lib/systemd/system/hciuart.service Change this line: After=dev-serial1.device to this: After=dev-ttys0.device And then change this line: ExecStart=/usr/bin/hciattach /dev/serial1 bcm43xx 921600 noflow - to this: ExecStart=/usr/lib/hciattach /dev/ttys0 bcm43xx 460800 noflow - Power off your Pi with: sudo shutdown -h now With the Raspberry Pi powered off, we can now plug our GPS HAT in and attach an aerial. Once everything is plugged in, we can power up the Pi. Before we go any further we need to make sure our GPS HAT has a “lock”. To find this out, you’ll need to refer to your GPS HAT manual, or if you are using the HAB Supplies GPS HAT, look for a blinking green led, labelled “timepulse”. Keep in mind that it can take a long time for the HAT to get a lock, so be patient. If you are struggling to get a lock after 30mins try moving you’re aerial. For best results make sure the aerial is outside and has direct line of sight to the sky. Once we have a GPS lock, we can do a quick test to make sure our Pi is able to read the data provided by the HAT. So, log in to your Pi. You can do this via SSH or via the normal method! Please Note. We're running Raspian from Terminal and have an internet connection! Start by setting up the serial port: stty -F /dev/ttyAMA0 raw 9600 cs8 clocal -cstopb Now simply run: cat /dev/ttyAMA0 You should see something like this: What you are seeing here is the raw GPS “NMEA sentence” output from the GPS module. The lines we are interested in are the ones beginning with $GNGGA (again, this might differ depening on your GPS HAT you have, but look for the line that has “GGA” at the beginning.) If your $GNGGA lines are looking a little empty, and contains a lot of commas “,” with nothing in between them, then you don’t have a GPS lock. Now it’s time to access this information in a python script! We are going to use 2 libraries in our script: - serial - pynmea2 The first one, serial, we don’t need to install anything, this is a default library and will be pre-installed with Raspbian. The second one, pynmea2, we need to install. So let’s do that! (pynmea2 is an easy to use library for parsing NMEA sentences. We could write our own parser, but why re-invent the wheel!) If you don’t already have “pip” installed, start by installing it: sudo apt-get install python-pip Once pip is installed we can then go ahead and install pynmea2 using pip: sudo pip install pynmea2 Now we're going to start logging our GPS data using a Python script. This is a basic script that reads the serial port, passes each line to our pynmea2 parser and simply prints out a formatted string containing some information./GPS We now need to browse to the repo we just downloaded. So change the directory to the GPS folder: cd GPS We can now run our Python script! To start, simply type: sudo python gps.py You should see some results like these: That's it! You're now tracking your GPS data! CODE import serial import pynmea2 def parseGPS(str): if str.find('GGA') > 0: msg = pynmea2.parse(str) print "Timestamp: %s -- Lat: %s %s -- Lon: %s %s -- Altitude: %s %s" % (msg.timestamp,msg.lat,msg.lat_dir,msg.lon,msg.lon_dir,msg.altitude,msg.altitude_units) serialPort = serial.Serial("/dev/ttyAMA0", 9600, timeout=0.5) while True: str = serialPort.readline() parseGPS(str)
https://www.modmypi.com/blog/tutorials/raspberry-pi-gps-hat-and-python
CC-MAIN-2018-22
refinedweb
879
64
A Value is a general-purpose, abstract value entity. More... #include <Value.hh> A Value is a general-purpose, abstract value entity. There are two Value categories: a primitive Value - Integer, Real or String - has a corresponding type of datum; an Array is a list of Values. A digits member is provided for use in creating a text representation of a primitive Value: the precision of a Numeric Value or the field width of a String. A base member is also provided for specifying the radix in creating a text representation of a numeric Value. A pointer to a parent value is provided for navigating Array hierachies; a Value with no parent is not a member of an Array. Arrays list may have Values of mixed types and contain Arrays of mixed sizes. Numeric Values can be directly converted to/from their primitive datum types. This enables them to be used in mathematical expressions. A String, when it represents a numerical value, can also be converted to an integer or real data type. And, of course, a String can be directly converted to/from a std::string. A set of logical operators is also provided for all Values so they can be meaningfully compared. N.B.: The storage precision of Integer and Real values is determined at the time the PVL library is compiled by the High_Precision_Integer_type (and High_Precision_Unsigned_Integer_type) and High_Precision_Floating_Point_type typedefs defined in the Utility/Types.hh file. These are expected to be either long or long long and double or long double, respectively. To ensure complete type conversion coverage is provided the idaeim_LONG_LONG_INTEGER is defined if long long is used and idaeim_LONG_DOUBLE is defined if long double is used. The storage precision of an Integer Value is independent of the storage precision of a Real Value. Any Value, including an Array, may have a units string that provides an arbitrary description for the Value. Provision is made for a Value to inherit the units of its parent. A Parser is used for interpreting the Parameter Value Language syntax from text into Value objects. A Lister is used for generating Parameter Value Language syntax from Value objects. Subtype identifiers. There are four specific implementations of Value: Integer, Real, String and Array. However, a String and Array can also be distinguished in the way they are represented with the PVL syntax. Since this distinction is not functional - in fact is a characteristic that can be readily changed within the specific type - additional subclasses are not used. Instead, a Type code is provided and used as a "subtype" characteristic, in addition to offering a convenient means of identifying the general type (class) of a Value. The STRING Type codes are: The ARRAY Type codes are: INTEGER and REAL Type codes are also provided for the Integer and Real classes for consistency. These latter two classes are both categorized as NUMERIC as an aid to the application in identifying how it will use a Value. Type specification codes are bit flags. These are organized such that the general Type bit is present for all specific members: Both INTEGER and REAL contain the NUMERIC bit; IDENTIFIER, SYMBOL, TEXT and DATE_TIME contain the STRING bit; and both SET and SEQUENCE contain the ARRAY bit. Note: Value Type codes are guaranteed to be distinct from Parameter Type codes: they occupy completely separate bit fields and can not be accidently confused. The range of allowable radix base values. Virtual destructor. Gets the Value's parent. Note: The parent of a Value should only be set by entering the Value into an Array Value; it should only be cleared by removing the Value from its Array. References Value::Parent. Casts the Value to an unsigned int. References Value::operator int(). Casts the Value to an Integer_type. Implemented in Integer, Real, String, and Array. Referenced by String::operator int(), and Value::operator Unsigned_Integer_type(). Casts the Value to an Unsigned_Integer_type. References Value::operator Integer_type(). Casts the Value to a String_type. Implemented in Integer, Real, String, and Array. Sets the Type of the Value. Gets the Type of the Value. Implemented in Integer, Real, String, and Array. Gets the name for the Type of the Value. Implemented in Integer, Real, String, and Array. Referenced by Array::type_name(), String::type_name(), Real::type_name(), and Integer::type_name(). Gets the name for a Type. The name of Type is identical to its enum symbol except only the first character is uppercase. If the argument is not a recognized Type, then the name returned will be "Invalid". Gets the numeric base. References Value::Base. Gets the units description for the Value. References Value::Units. Gets the "nearest" units description for the Value. The units string of the Value is returned. However, if the units string is empty and the Value is a member of an Array, then the units of the parent Array will be recursively sought. The effect is to return the "nearest" non-empty units string, if there is one. Sets the units description for the Value. References Value::units(), and Value::Units. Referenced by Value::units(). Gets the number of digits for the Value representation. References Value::Digits. Sets the number of digits for the Value representation. How the number of digits is used is the responsibility of the specific type of Value. References Value::digits(), and Value::Digits. Referenced by Value::digits(). Referenced by Array::operator[](). Assigns another Value to this Value. Implemented in Integer, Real, String, and Array. Referenced by Array::operator=(), and String::operator=(). Assigns an int value to this Value. Implemented in Integer, Real, String, and Array. Assigns an Integer_type value to this Value. Implemented in Integer, Real, String, and Array. Assigns an unsigned int value to this Value. Implemented in Integer, Real, String, and Array. Assigns a Real_type value to this Value. Implemented in Integer, String, and Array. Assigns a C-string (char*) value to this Value. Implemented in Integer, Real, String, and Array. Adds another Value to this Value. The meaning of addition is the responsibility of the specific type of Value. Implemented in Integer, Real, String, and Array. Logically compares another Value to this Value. The meaning of the comparison is the responsibility of the specific type of Value. Implemented in Integer, Real, String, and Array. Referenced by Value::operator!=(), Value::operator<(), Value::operator<=(), Value::operator==(), Value::operator>(), and Value::operator>=(). Test if this Value is logically equivalent to another Value. References Value::compare(). Test if this Value is logically less than another Value. References Value::compare(). Test if a Value is logically not equivalent to another Value. References Value::compare(). Test if a Value is logically greater than another Value. References Value::compare(). Test if a Value is logically less than or equal to another Value. References Value::compare(). Test if a Value is logically greater than or equal to another Value. References Value::compare(). Writes the Value to an ostream. N.B.: The output is not terminated with an end-of-line. It is, however, flushed to the ostream. Referenced by Value::print(). Prints the Value ot an ostream. The Value is written to the stream with no indenting. References Value::write(). Assigns the next Value from a Parser to this Value. The next Value interpreted from PVL syntax is obtained from the Parser. The Value obtained is assigned to this Value. Warning: Unless the next PVL item available to the Parser is a Value, this method will fail. Also, the type of Value obtained by the must be assignable to this Value or the method will fail. Any type of Value may be assigned to an Array. Converts an integer value to its string representation. Converts a string to the integer value it represents. If the numeric base is 0, it will be intuited from the string: a leading sign is ignored; a leading "0x" or "0X" indicates hexadecimal (base 16) notation, a leading '0' alone indicates octal (base 8) notation, while anything else is taken to be decimal notation. To prevent padding with leading '0' characters to be interpreted as octal specify the base. A conversion that succeeds up to a decimal point character ('.') is accepted as a real number representation truncated to an integer. Thus the conversion can be fooled by an otherwise invalid string. Leading and trailing whitespace in the string is ignored. Attemps to intuit the numeric base of string. Leading whitespace is ignored and, after any whitespace, a leading sign character ('+' or '-') is also ignored; a leading "0x" or "0X" indicates hexadecimal (base 16) notation, a leading '0' alone indicates octal (base 8) notation, while anything else is taken to be decimal notation. An internal test conversion based on the tentative surmise of the base representation is attempted. If the conversion fails a value of 0 is returned indicating that the string does not represent a numeric value. An empty string is defined as base 10. Converts a floating point Real_type value to its string representation using optional format controls. The real value representation is generated by an ostringstream. If format flags and/or a precision value is specified they are applied to the ostringstream. N.B.: The showpoint format flag is always applied. If a non-zero precision is specified, it is also applied; otherwise, if neither the fixed nor scientific format flags are specified, at least one digit after the decimal point will be used. Converts a string to the floating point value it represents. Leading and trailing whitespace in the string is ignored. Referenced by Array::clone(). Class identification name with source code version and date. Convience constants for setting/testing is_signed. Referenced by Integer::operator=(). The Array of which this Value is a member, or NULL if not a member of an Array. Referenced by Value::parent(). The units description string. Referenced by Value::units(). The number of digits in the Value representation. Referenced by Value::digits(). The numeric base of the Value. Referenced by Value::base().
https://pirlwww.lpl.arizona.edu/software/idaeim/PVL/classidaeim_1_1PVL_1_1Value.html
CC-MAIN-2019-04
refinedweb
1,654
51.14
i have to do a program in BDK.my BDK and JDK are in the same folder java. i have set the path for jdk/bin in control panel.i have created the jar file, jar cfm [jar file name] [manifest file name] [java class name]. i didn't get any error there. but in jdk when i try to load the jar file,it is giving me error saying that "jar file doesn't have any beans.Each jar file needs to contain a manifest file describing which entries are beans.you should provide a suitable manifest file" my manifest file is first.mnf and the content is Name: first.class Java-Bean: True my java file is first.java and the coding is as follows import java.awt.*; import java.io.Serializable; import java.beans.*; public class first extends Canvas implements Serializable{ private Color color = Color.green; public Color getColor(){ return color; } public void setColor(Color newColor){ color = newColor; repaint(); } public void paint(Graphics g) { g.setColor(color); g.fillRect(20, 5, 20, 30); } public first(){ setSize(60,40); setBackground(Color.red); } } CAN ANYONE PLEASE HELP ME?
https://www.daniweb.com/programming/software-development/threads/146951/help-me-creating-jar-file
CC-MAIN-2017-26
refinedweb
189
61.22
Python: Get a list of dates between two dates Python Datetime: Exercise-50 with Solution Write a Python program to get a list of dates between two dates. Sample Solution: Python Code: from datetime import timedelta, date def daterange(date1, date2): for n in range(int ((date2 - date1).days)+1): yield date1 + timedelta(n) start_dt = date(2015, 12, 20) end_dt = date(2016, 1, 11) for dt in daterange(start_dt, end_dt): print(dt.strftime("%Y-%m-%d")) Sample Output: 2015-12-20 2015-12-21 2015-12-22 2015-12-23 2015-12-24 2015-12-25 2015-12-26 2015-12-27 2015-12-28 2015-12-29 2015-12-30 2015-12-31 ------- 2016-01-08 2016-01-09 2016-01-10 2016-01-11 Flowchart: Visualize Python code execution: The following tool visualize what the computer is doing step-by-step as it executes the said program: Python Code Editor: Contribute your code and comments through Disqus. Previous: Write a Python program to convert a string into datetime. Next: Write a Python program to generate RFC 3339 timestamp.
https://www.w3resource.com/python-exercises/date-time-exercise/python-date-time-exercise-50.php
CC-MAIN-2022-27
refinedweb
180
66.07
How we can install and use Spacy Models. Spacy is an open-source software library for advances natural language processing, and specifically designed for production use and helps to build applications that process understand large volumes of text. Also it can be used for information extraction. Spacy Models These are the models which are used for tagging, parsing and entity recognition. Let us see how to install spacy models and how to use them. !pip install spacy !python -m spacy download en_core_web_sm !python -m spacy download en !python -m spacy download en_core_web_sm-2.2.0 import spacy load_model = spacy.load("en_core_web_sm") doc = load_model("Hi my name is mak") doc Hi my name is mak
https://www.projectpro.io/recipes/install-and-use-spacy-models
CC-MAIN-2021-43
refinedweb
114
53.17
Language.Javascript.JQuery Description Module for accessing minified jQuery code (). As an example: import qualified Language.Javascript.JQuery as JQuery main = do putStrLn $ "jQuery version " ++ show JQuery.version ++ " source:" putStrLn =<< readFile =<< JQuery.file Documentation version :: Version Source The version of jQuery provided by this package. Not necessarily the version of this package, but the versions will match in the first three digits. A remote URL of the jQuery sources for version. The URL does not have a protocol prefix, so users may need to prepend either "http:" or "https:" (both work). The URL currently uses the jQuery CDN links at. Alternative CDN links are listed at.
https://hackage.haskell.org/package/js-jquery-3.1.1/docs/Language-Javascript-JQuery.html
CC-MAIN-2021-43
refinedweb
106
61.33
CodeGuru Forums > .NET Programming > C-Sharp Programming > Traffic Lights c# PDA Click to See Complete Forum and Search --> : Traffic Lights c# johnboy14 October 4th, 2008, 08:08 AM I am trying to create a set of traffic lights using visual c#, my problem is how to implement a timer between each light. I have 3 traffic lights and 2 pedestrian lights. Any help on the timer folks. TheCPUWizard October 4th, 2008, 01:26 PM Look at the Timer class..... :rolleyes: :rolleyes: johnboy14 October 4th, 2008, 04:51 PM where's that Mr Vague TheCPUWizard October 4th, 2008, 05:15 PM where's that Mr Vague Did you ever consider a really radical approach of going to the MSDN site and typing in "Timer"...The very first hit is what you need. In case that is too difficult: funnyusername October 5th, 2008, 11:34 AM LOL :D You guys crack me up! :D johnboy14 October 5th, 2008, 11:41 AM very useful, I am new to c# so all and visual c# so I ain't quite up todate on all the sites and best places for different stuff. Thanks anyway. eclipsed4utoo October 5th, 2008, 11:51 AM very useful, I am new to c# so all and visual c# so I ain't quite up todate on all the sites and best places for different stuff. Thanks anyway. maybe you should also try to use the little known search site called Google. a simple search for "c# timers" would have netted you plenty of results. TheCPUWizard October 5th, 2008, 12:03 PM msdn.microsoft.com is the official site for any Microsoft product related questions. google.com is also a useful resource, but one needs to be careful since the results can come from "anyone". There is alot of incorrect material on the internet, and much of the .Net postings contain incorrect or sub-optimal information. I usually recommmend starting with msdn then google if necessary. If you find something that looks good on google, you will have specific keywords that you can use to confirm the information back on msdn. johnboy14 October 5th, 2008, 12:06 PM Would I declare the timer before the main or as soon as I enter the main. You see I have a Button that activates the lights, I want it to wait 5 seconds then give me an amber light. TheCPUWizard October 5th, 2008, 12:08 PM You "Declare" the timer by dragging it from the toolbox onto your form. You can initialize the duration in the properties window, or in code. You can start and stop the timer with the enabled property. You write an event handler which is triggered when the timer expires. Did you download and step through the available samples? johnboy14 October 5th, 2008, 12:10 PM bits of it, I am familar with declaring variables and methods etc. but I programmed a set of traffic lights with an AVR board once in C but low level stuff to be honest. I'll read through them, p.s I was unaware that there was a timer on visual c#, so I could drag and drop. I can't seem to find it on my toobar though. johnboy14 October 5th, 2008, 12:43 PM In all honesty I'm not sure how to start. this below is my code for my start button, I set the timer to intervals of 5 seconds so am I free to place what what I want my lights to do now. private void button1_Click(object sender, System.EventArgs e) { timer1.Enabled = true TheCPUWizard October 5th, 2008, 01:03 PM Go back to design mode on your form. Select the timer. Select the Properties Windows. Switch to events view (looks like a lightning bolt). Double click on the event. This will add an empty routine to your code where you put the logic of what to do when the timer expires... I will ask one more time... Did you download and step through the available samples? johnboy14 October 5th, 2008, 01:09 PM No I Did Not. johnboy14 October 5th, 2008, 01:36 PM I want it to just run in a loop. until the button is pressed TheCPUWizard October 5th, 2008, 02:15 PM No I Did Not. Then do so. If you have specific problems understanding someting in one of the turorials, plase post the URL you got the tutorial from and your question. Until then.... johnboy14 October 5th, 2008, 03:59 PM I'm using this tutorial, not I can't figure out what to put in my timer event. I want to press button1 wait 5 seconds and the world "ON" should be placed in the text box of my choice wait another 5 and the word "ON" is allocated as sequence of traffic lights. TheCPUWizard October 5th, 2008, 04:15 PM Press button. In click even enable the timer. In the timer_tick method look to see if the light is on, if so turn it off, else turn it in. Thats it johnboy14 October 6th, 2008, 03:07 AM [QUOTE=In the timer_tick method look to see if the light is on, if so turn it off, else turn it in. Thats it[/QUOTE] I don't know what your talking about here mate. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace WindowsFormsApplication1 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void InitializeTimer() { //' Run this procedure in an appropriate event. // Set to 5 second. timer1.Interval = 5000; // Enable timer. timer1.Enabled = true; StartButton.Text = "Stop"; } private void timer1_Tick(object sender, EventArgs e) { } private void StartButton_Click(object sender, EventArgs e) { timer1.Enabled = true; red.Text = ("OFF"); amber.Text = ("ON"); green.Text = ("OFF"); } } } TheCPUWizard October 6th, 2008, 06:31 AM 1) Please use [ code ] tags. and use preview, so you dont have messed up posts. private void timer1_Tick(object sender, EventArgs e) { if (red.Text == "ON") { red.Text=="OFF" green.Text = "ON" } else if (green.Text == "ON") { green.Text = "OFF"; amber.Text = "ON"; } else { amber.Text = "OFF"; red.Text = "ON"; } } There are more efficient way using enums, and arrays, but this will work (sucject to typos, but you should get the ides.... johnboy14 October 6th, 2008, 06:39 AM that makes perfect sense, using bool expressions. Thanks again, I don't mean to be a dump *** but its all part of my learning methods. codeguru.com
http://forums.codeguru.com/archive/index.php/t-462528.html
crawl-003
refinedweb
1,083
75.71
The CS106A DrawCanvas system provides a few simple drawing functions for things like lines and rectangles. The code is built on top of Python's built-in TK GUI/Drawing system. To put a drawing on screen, create a DrawCanvas object of the desired width and height, and then it can be used with the various drawing functions listed below. The canvas is initially white. # Create a 500x300 canvas, draw a red line on it canvas = DrawCanvas(500, 300, title='Drawing Window') canvas.draw_line(0, 0, 100, 100, color='red') ... DrawCanvas.mainloop() Optionally, canvas creation can specify a title='Window Title' parameter to put a title on the window. When the code is done creating the canvas and drawing on it, it should call DrawCanvas.mainloop() - this puts up the drawing window up on screen and waits for the user to close it. An easy approach is placing the call to DrawCanvas.mainloop() as the last line in main(). For CS106A code, we typically include this call in the starter code. The top-left pixel of the canvas is at (0, 0), with x values growing to the right and y values growing going down in typical CS fashion. def draw_line(x1, y1, x2, y2): The draw_rect(x, y, width, height) function draws a 1-pixel wide frame with its upper-left pixel at x,y. The variant fill_rect(x, y, width, heigh) fills the whole rectangle instead of just drawing a frame. def draw_rect(x, y, width, height): def fill_rect(x, y, width, height): The draw_oval() and fill_oval() functions are analogous to the rectangle functions. The ovals will fill the theoretical enclosing rectangle, just touching the middle of each of the four sides. def draw_oval(x, y, width, height): def fill_oval(x, y, width, height): For the point x,y, draw the text of the given string on a horizontal line with x,y at its upper left. def draw_string(x, y, text): By default, drawing is in black. An optional color='red' parameter can be added to any of the draw functions to draw in that color. A list of available colors is in the constant DrawCanvas.COLORS: """A few of the TK color constant names""" COLORS = ['red', 'orange', 'yellow', 'green', 'blue', 'lightblue', 'purple', 'darkred', 'darkgreen', 'darkblue', 'pink', 'black', 'gray'] Instead of a color string, code may instead pass a red,green,blue tuple like (255, 255, 0) to specify a color that way. Here is some sample code which exercises the draw functions and below is what its output looks like. This code is in the test_draw() function in drawcanvas.py. If you run drawcanvas.py from the command line, it runs this code as a test. def test_canvas(width, height): """ Creates and draws on DrawCanvas as a test. """ canvas = DrawCanvas(width, height, title='Draw Test') canvas.draw_rect(0, 0, width, height, color='red') canvas.fill_oval(0, 0, width, height, color=(100, 100, 200)) # rgb tuple form n = 30 for i in range(n): x = (i / (n - 1)) * (width - 1) canvas.draw_line(0, 0, x, height - 1, color='blue') canvas.draw_string(10, 10, 'Behold my pixels ye mighty and despair!') By default, the canvas is created with its fast_mode option set to True. This defers the drawing to be done all at once when the window appears on screen or update() (below) is called, which works fine for CS106A. With fast_mode False, each drawing command is rendered to the screen immediately which is much, much slower and is not really useful. canvas = DrawCanvas(500, 300, fast_mode=False) We are not using DrawCanvas for animation in CS106A, supplying TK GUI code for those cases since it has the best performance. However it is possible to animate in DrawCanvas with these two functions. The erase() function removes all the built-up drawing on the canvas, setting it back to be blank. The update() function pushes any pending drawing to the screen. These can be used to do on-screen animation with something like the following. canvas = DrawCanvas(500, 300) x = 0 while x < canvas.width: x += 5 canvas.erase() canvas.fill_oval(x, 10, 10, 10) canvas.update() DrawCanvas was created by Nick Parlante for CS106A, and it just does the most basic sort of drawing, calling Python's underlying TK functions to do the drawing. More complicated drawing in later projects can use Python's TK system directly. The weird design of the TK drawing functions motivated us to create this simple system, in which x,y and width,height and color all have reasonable definitions.
http://web.stanford.edu/class/cs106a/handouts_w2021/reference-draw.html
CC-MAIN-2021-49
refinedweb
757
64.61
FileInfo Constructor Namespace: System.IONamespace: System.IO Assembly: mscorlib (in mscorlib.dll) The following example uses this constructor to create two files, which are then written to, read from, copied, and deleted. using namespace System; using namespace System::IO; int main() { String^ path = "c:\\MyTest.txt"; FileInfo^ fi1 = gcnew FileInfo( path ); if ( !fi1->Exists ) { / = String::Concat( path, "temp" ); FileInfo^ fi2 = gcnew namespace System; using namespace System::IO; int main() { // Open an existing file, or create a new one. FileInfo^ fi = gcnew = gcnew StreamReader( fi->OpenRead() ); while ( sr->Peek() != -1 ) Console::WriteLine( sr->ReadLine() ); } //This code produces output similar to the following; //results may vary based on the computer/file structure/etc.: // //This is a new entry to add to the file //This is yet another line to add... -.
http://msdn.microsoft.com/en-us/library/system.io.fileinfo.fileinfo.aspx?cs-save-lang=1&cs-lang=cpp
CC-MAIN-2014-52
refinedweb
128
50.23
I'm pretty new to programming, so this may be really obvious, but I'm having trouble with it. I'm trying to learn python, so I thought I'd make a simple program in python to help me write similar lines of java code faster, (I'm going to easily have hundreds of lines of this.) and I've got it all worked out, except one little thing, which I'm sure is very simple, but I searched around a bit for it, and couldn't find anything, so I thought I'd just ask here before going to bed. As you can probably tell, this will take the variables I set in the program, and then write out one line of code to the text file... but it always only writes to the first line, and overwrites everything in the file, even what is on other lines. I need it to automatically go to the next empty line, and write there instead. Could you guys give me a hand? (Time for bed. Zzz.... lol) def applybutton(self, event): X = XCOORD.GetValue() Y = XCOORD.GetValue() Z = YCOORD.GetValue() B = BLOCK.GetValue() file = open('Blueprints.txt','w') file.write('world.setBlockWithNotify(i ') file.write(`X`) file.write(', j ') file.write(`Y`) file.write(', k ') file.write(`Z`) file.write(', Block.') file.write(B) file.write('.blockID);') file.close()
https://www.daniweb.com/programming/software-development/threads/398898/automatically-writing-a-new-line-in-a-txt-file
CC-MAIN-2017-47
refinedweb
228
75.3
I actually have a simple question, but couldn't find an answer. Maybe you can point me to a duplicate. So, the question is: is it possible to tell cmake to instruct a compiler to automatically include some header at the beginning of every source file, so there would be no need to put #include foo.h? Thanks! CMake doesn't have a feature for this specific use case, but as you've hinted, compilers such as GCC have the -include flag which acts as if there was an #include "foo.h" in the source file, and since CMake can pass arguments to compilers, you can do it via add_definitions. This answer covers what the flag is for GCC, Clang and MSVC which should cover a lot of bases. So in CMake, detect what the compiler is and pass the appropriate flag. Here's what the CMake code might look like: if(MSVC) add_definitions(/FI"foo.h") else() # GCC or Clang add_definitions(-include foo.h) endif() In general, doing this is a bad idea. Code inspection tools (like IDEs, or doxygen) will be confused by it, not to mention other humans looking at the code. If not all source files actually require the definition, adding extra #includes will slow down compile time. If you actually do need the same header (and it's not a system header) in all your source files, it may be symptomatic of high coupling in your code. And for what benefit? Not having to add one line to your files? However, it's necessary to note that compilers support this for a reason; there are a few weird edge cases (example 1, example 2) where it's a useful thing to do. Just be aware that you're doing this for the right reasons.
https://codedump.io/share/DFAL04W9W232/1/cmake-include-header-into-every-source-file
CC-MAIN-2021-21
refinedweb
299
71.55
Hi, it means it works first then stop working? can you check you can still send message to AMQ broker (using AMQ directly), if not please check your AMQ config. - Romain 2012/8/11 almos <[email protected]> > Hello, > > I am facing a strange behavior with MDB's that are listening on topics. > After long application inactive time (in other words - system is not used > overnight (3-4 hours)) first JMS message send to the topic NEVER reaches > destination (I have couple of MDBs listening on the topic and neither of > them gets invoked). I.e MDB isn't invoked (according to the verbose logs). > All next subsequent send operations works fine and all messages reach > destinations. > According to our observations this doesn't happen with JMS Queues which > uses > the same infrastructure code referenced below. > > What might be a problem? What we are doing incorrectly? > > Here is a code I use to send messages to the topic (without spaces, looks > like forum engine strips some URLs) > http:// pastebin.com /kZ5W0kXS > > So having this code to send messages to topic/queue I use following > approach: > > 1. inject sender into ejb bean: > @EJB private IISMClient ismClient; > 2. invoke send function > ismClient.post(notification); > > Here is my tomee.xml (without spaces) > http:// pastebin.com /kcdBcgEw > > Here is how we use @MessageDriven annotation. And as I mentioned after long > user inactivity first message sent to the topic ismTopic never gets > delivered to the listeners: > > @MessageDriven(activationConfig = { > @ActivationConfigProperty( > propertyName = "destinationType", > propertyValue = "javax.jms.Topic"), > @ActivationConfigProperty( > propertyName = "destination", > propertyValue = "ismTopic") > }) > public class TestListener extends AGenericListener implements > MessageListener { > @Override > public void onMessage(Message msg) > { > TextMessage tm = (TextMessage) msg; > try { > String message = tm.getText(); > logger.info("TestListener [message]: " + message); > setRawPacket(message); > process(); > } catch (JMSException e) { > e.printStackTrace(); > } > } > } > > Sender and listeners resides currently under the same TomEE 1.0 instance on > the same machine. > > Could you please check what I might be doing wrong? > > Thanks, > Alex > > > > -- > View this message in context: > > Sent from the OpenEJB User mailing list archive at Nabble.com. >
http://mail-archives.apache.org/mod_mbox/openejb-users/201208.mbox/%3CCACLE=7PANQZkoK8_yq13WDTZYcDdY8DMpj_fueT4zDN_Q527eQ@mail.gmail.com%3E
CC-MAIN-2016-36
refinedweb
334
50.02
This Instructable will show you how to make a concentration game using a MicroBit. This is going to create a metal shape and wand to have a reaction when the wand touches the metal shape. Inspired by carnival games and wanted to challenge my students. Supplies: Items needed: MicroBit – any MicroBit will do. I have sets from FiriaLabs that come with everything you need, but you can also get MicroBits from their website or Amazon. (any option should include the battery pack) Battery Pack – should be included when you buy MicroBit Speaker (optional)- if you want it to make a sound when touched. I used the one that FiriaLabs puts in the box. Alligator Clips – (2-4 needed one should be long) I used 39″ long set from amazon. If you want a buzzer you will need 4 alligator clips. Metal Wire – I used floral wire from the DollarTree any should do, in store you can buy just one for a dollar. 3D printer for base – I made a base to hold all items in TinkerCad and printed on a MakerBot printer. Step 1: 1. Make Your Metal Shape I used floral wire from the Dollar Tree and searched Google to find paperclip shapes, you can also search for silhouettes. While this shape has a nice outline, I only need the outline and would want the start and stop point to be around the foot, so it will stand on my base. Start by bending your metal around your outline to make your shape. You need to have an extra couple inches hanging down from your shape for the beginning and end. This will allow you to connect the alligator clips to the shape. (I find it helpful to print the image and bend wire on a flat surface. Step 2: 2. Make a Wand Next you will need to make a wand. I use the same floral wire as in the first step. I just took a long piece of wire and bent in half turning the wire around to make the handle. Make sure to leave the loop large enough to go around the metal figure. You can also just make a loop at the end if you are short on wire. Step 3: 3. Make the Base to Hold Everything I used TinkerCad CodeBlocks to make this shape. The STL files are also attached. You will need to Click on “CodeBlocks” on the left hand side. Then you can use my code, I have made comments where you may need to change the code. I also made a base in regular TinkerCad (“Baker MicroBit Challenge Base”) also attached. Step 4: 4. Program Your MicroBit I used Firia Lab’s CodeSpace to write my code. This program has a free option but lessons and more options come when you purchase your MicroBit from them. This is a Python based code and you can see the code below. The code says to make a big X turning lights on and off. Then start with a smile face, if the wand (Which is attached to port 0) touches the shape (which is attached to port 1) then show X and make a sound. from microbit import* import music big_x = Image( “90009:” “09090:” “00900:” “09090:” “90009:”) def start_alarm(cause): music.play(music.FUNERAL, wait=False, loop=False) display.scroll(cause, wait=False, loop=True ) def stop_alarm(): music.stop() display.clear() pin1.set_pull(pin1.PULL_UP) # Make sure pin is “pulled high” normally while True: if pin1.read_digital() == 0: # Is this pin being “pulled low” by the wand? start_alarm(“Test”) display.show(big_x) else: display.show(Image.HAPPY) Step 5: 5. Connect Everything The metal shape will fit in the holes on the top of the base. You need to have the wire feed through the hole, connect an alligator clip to port 0 and the other end to the metal shape(there should be some hanging down inside the base). Then connect the wand to the GND (ground), I find it helpful to have a long alligator clip for this, it allows you more flexibility to go around the shape. If you are wanting to add sound, you will need to add a speaker with alligator clips to port 1 on one side and the other side of the speak is also clipped to the GND. The battery pack fits on the shelf in the bottom of the base. MicroBit will show a Smile until the wand touches the shape. Then you will see an X and if you have the speaker attached it will make a sound. Source: MicroBit Concentration Game
https://atmega32-avr.com/microbit-concentration-game/
CC-MAIN-2021-31
refinedweb
770
82.04
In this article I discuss using cookies to make an authenticated web call to an external domain. Over the past few months I have been tinkering with an Electron + React app. I had never worked with these technologies before so mainly my goal has been to learn. I am using the opportunity to improve my developer experience at work. I like to call it my developer console. Anyway, I wanted to create a small tool inside the developer console that would allow me to create reproducible data configurations for the software I work on, i.e., a one-button-click system for creating complex, reproducible test data scenarios. This required me to communicate and authenticate with my local development environment, something I hadn't previously had to do in the developer console. Communication was an easy fetch call, but the authentication element was the unknown. How could I send cookies for a domain as part of my fetch? A quick internet search didn't give me exactly what I wanted, but I was able to piece together what I needed from various sources. Below is a snippet for how to set the cookies for a domain in Electron, and how to include them in a fetch. import electron from 'electron'; function performExternalRequest() { const cookieJar = electron.remote.session.defaultSession.cookies; const cookie = { url: '', name: 'your-cookie-name', value: 'your-cookie-value' }; cookieJar.set(cookie) .then(() => { fetch('', { credentials: 'include' }) .then((response) => { console.log(response); }) }); } A gist for the above is here. The key elements in the above are: - adding the cookie to the cookies for the session; and - setting the credentialsproperty on the fetch request options to include. The former sets the cookie so it can be included in the request. The latter ensures the cookie is included in the request. Setting the credentials option warrants a bit of discussion. Depending on your browser version, the default value for credentials will either be omit or same-origin. If the former, then no cookie will be sent. If the latter, and if your electron app is on a different domain than the server you want to communicate with, which is likely, then also no cookies will be sent to the server. The only way to send the cookies to the server when your app is on a different domain is to set the credentials option as include. For more information, see: NB: While fetch is not exclusive to React, I mention React in this post as it was a key element of the context in which my questions existed. Discussion (0)
https://dev.to/olsnacky/send-cookies-in-electron-react-app-10el
CC-MAIN-2021-31
refinedweb
428
54.83
. Oh..you’re where I got that code from. Thanks! Posted by dsas on April 7th, 2007. dsas: no problem. :) Posted by sil on April 7th, 2007. if i have more than one row in header, I want to do sort by the last header row, how can I set it? thx Posted by margiex on April 8th, 2007. margiex: you can’t, I’m afraid. Sorttable doesn’t support tables with more than one header row, because it gets a bit confused by them. Posted by sil on April 8th, 2007. It would be nice if a third click on a header would revert the table back to the original sort, rather than just toggling ascending and descending on that column. Just a thought ;) Posted by mrben on April 8th, 2007. mrben: daresay it would, but that would mean caching the table structure, which I’d like to avoid :) If you want that, hit refresh. Posted by sil on April 8th, 2007. Why would it mean caching the table structure? Surely you could define the default sort somewhere, and then re-run that particular sort on every third click of a header? Or have I over-simplified? Posted by mrben on April 8th, 2007. mrben: there isn’t a “default sort”; there’s just “what the table looked like when it got served from the server”, which could be in any order at all or even no order. Posted by sil on April 8th, 2007. Ah. Point taken. Shame ;) Posted by mrben on April 9th, 2007. mrben: hence “hit refresh” to get that order back. Posted by sil on April 9th, 2007. [...] Follow any comments here with the RSS feed for this post. Post a comment or leave a trackback: Trackback URL. « Discussion List, Crossdomain.xml forFlash [...] Posted by log @ Make Data Make Sense » Sortable Tables with Totals and Averages on April 10th, 2007. Would it be possible to add some extra variables to the makeSortable() function, like: makeSortable($(’table’), ‘header’, ‘ASC’); Where the last two can the name of the header and sorting order you’d like to use for sorting immediately instead of just adding the event listeners. Posted by tommie on April 11th, 2007. tommie: Sorttable doesn’t support sorting the table as soon as the page is loaded. There are some notes as to why not in the old sorttable documentation (see “Sorting a table the first time the page is loaded”). Posted by sil on April 11th, 2007. In Netscape 8 this script close the window. There is a solution for this? Posted by Anonymous on April 13th, 2007. Stuart: I’ve just had the inevitable “client-side sorting” request on a new project I’m working on. Less than 5 minutes later, we had it working. Thanks a lot for this script, I owe you a drink. Posted by GaryF on April 14th, 2007. Nice job! If I could only get it to work :( When I include your script file in my Asp.net AJAX site I get the following error. Error Sys.ArgumentTypeException: Object of type ‘Object’ cannot be converted to type ‘Array’ Parametername:array This occurs even if I don’t have any sortables defined. Any Ideas? The error is occuring in a file calse ResourceScript.axd Here is the function Array.forEach = function Array$forEach(array, method, instance) { /// /// /// var e = Function._validateParams(arguments, [ {name: "array", type: Array, elementMayBeNull: true}, {name: "method", type: Function}, {name: "instance", mayBeNull: true, optional: true} ]); if (e) throw e; for (var i = 0, l = array.length; i Posted by ScottW on April 16th, 2007. I’ve narrowed it down further and the code below in the sorttable init function is causing the error. forEach(document.getElementsByTagName(’table’), function(table) { if (table.className.search(/\bsortable\b/) != -1) { sorttable.makeSortable(table); } }); I’m not as versed in Javascript as you are so any help would be appreciated. Posted by ScottW on April 16th, 2007. Got it working. It took 2 code changes. Basically it was erroring on function(table) and onsort it error on function(Cell). I’ve include my new code in case it helps someone else. // forEach(document.getElementsByTagName(’table’), function(table) { var tables = document.getElementsByTagName(”table”); for (var i=0; i Posted by ScottW on April 16th, 2007. Change 1 var tables = document.getElementsByTagName(”table”); for (var i=0; i Posted by ScottW on April 16th, 2007. Sorry guys, this forum keeps cutting off my post when I include the code. I basically changed each foreach loop into an index loop and accessed each table and cell using the appropriate array index. I also had the problem that my tables are loaded via AJAX after page initialization. A simple call to sorttable.makeSortable(MyNewTable) worked like a charm. Thanks again for the code! Scott Posted by ScottW on April 16th, 2007. GaryF: that’s what it’s for. :) Posted by sil on April 16th, 2007. ScottW: glad you got it sorted! I’d like to talk over this in more detail; could you drop me a mail? (See the contact link for my address.) Posted by sil on April 16th, 2007. you are very appreciated for you good job at sorttable.js. And i also have a question. To have the function ” when the table is loaded, one column is sorted”, how can i do it? thanks very much! RobertLee From Peking of China Posted by robert Lee on April 19th, 2007. Hey, I was tryin out the script, great job :) although having a few a problems, I think it is similar to ScottW’s problem, if you could somehow please provide me with the fix, I would be really thankful. I dont know if this is another problem when your table is in a javascript variable, and calling the html contents of the variable through innerHTML, but the script doesnt work on the tables then. (I think it works in IE, but not firefox and mozilla, really being odd like that) Posted by Varun on April 20th, 2007. To Varun: I have used sorttable.js. it does indeed work in IE. Posted by robert Lee on April 20th, 2007. Fantastic tool, thanks so much! I’m curious why you chose to use the non-standard sorttable_customkey="2"rather than a class attribute? I’m loath to use invalid markup. ;) That said, I can see myself using this in many places, particularly at my workplace, a community college in British Columbia, Canada. I’ve been creating complicated workarounds in my custom CMS but this is a far better solution all around. Well done! Posted by Jason Friesen on April 20th, 2007. Jason: two reasons. Firstly, a custom sortkey is emphatically not a class :) I feel a bit guilty about using class atrributes for things like column type overrides, but they are *sort of* appropriate. If you use a class of “2″ as a sortkey, then you’re suggesting that that table cell is like other cells with a class “2″, which it ain’t; it contains (something that maps to) the value 2, which is not the same thing. That’s the academic ivory-tower argument. The other argument is that if you’ve got more than one class applied to the cell then it won’t know which one to use as the sortkey, and setting class=”sorttable_sortkey_2″ seemed very silly :) I didn’t like the invalid markup either, I have to admit it, but it seemed the lesser of the evils. Posted by sil on April 20th, 2007. Fair enough. I’m working tangentially with Moodle that does all sorts of your second example to attempt to be fully compliant. I admit it’s a bit crazy to look at class="course_47 subsection_cheese_danish_12". I think I’d still prefer that, but I definitely see your point. Nevertheless, I sure appreciate the code. :) Posted by Jason Friesen on April 20th, 2007. I was wondering if there is any way to have the table sorted (instead of default) onload Posted by Varun on April 21st, 2007. Thank you so much for your script! Is there anyway to have the script ignore strings that start with “the”, “a”, and “an” when sorting? I have a list of movie titles, and it would be great if the sort ignored these words. I can use the custom sort keys, but it would be great if the script did this automatically. Thanks! Posted by Rhonda on April 22nd, 2007. Rhonda: it’s not possible to do that easily, I’m afraid. I’d recommend using the custom keys to get this; I won’t make sorttable do it automatically because some people may want A or The to be included. Posted by sil on April 22nd, 2007. FYI, I noticed there were some lines that included a conditional statement to use font face ="webdings"for IE. There seems to be semicolon missing after the & nbsp. I also had some trouble with some Windows boxes that didn’t have webdings installed, so I replaced them with images (ie only). I’ll try to put the code in below. original code: sortfwdind.innerHTML = stIsIE ? '& nbsp<font face="webdings">6</font>' : '& nbsp;& #x25BE;'; my replacement: sortfwdind.innerHTML = stIsIE ? '& nbsp;<img src=\'res/i/interface/arrow-down.gif\' alt=\'\' />' : '& nbsp;& #x25BE;'; Posted by Jason Friesen on April 23rd, 2007. (please note that I added spaces to the above comment; they shouldn’t be included in the original js, of course.) Posted by Jason Friesen on April 23rd, 2007. How can i get default sort on particular column on page load? Posted by Jai on May 2nd, 2007. Is there a quick way to change the color of the column that you are sorting? This would be very usefull for big tables. Posted by Mike on May 3rd, 2007. My application uses dates in the form ‘Jan 1, 2007′. Be default, sorttable treats this as alpha. I’ve managed to override this in my page with class=”sorttable_mmdd” which seems to sort chronologically instead and I thought this fixed my problem. The problem arrises when there are blank values in the list however. Instead of having all the blank values go to the beginning like I would have expected (and how the alpha and date sorts work), there are various dispersed blank values in the list. Is this a bug? Any suggestions would be welcome. Thanks. Posted by Scott on May 4th, 2007. Scott: I advise you, in that situation, to use custom sortkeys. Instead of <td>Mar 1, 2007</td> <td></td> use <td sorttable_customkey=”20070301″>Mar 1,2007</td> <td></td> and then sorttable will happily sort them all correctly. Posted by sil on May 4th, 2007. My dates are being pulled from a database. How do I set up these custom keys when the output is variable? I’d need to translate 356 possible values. Many of these fields have timestamps as well. I’m not sure if what I’m trying to do is possible with this utility. Thanks. Posted by Scott on May 4th, 2007. Scott: is you data stored in the database as the string “May 02 2007″, or is it stored as a date which you then format in the output? Most languages have something that will let you output ISO date format, which is designed for this sort of thing. Failing that, if sorttable isn’t good enough then don’t let m eprevent you writing your own, of course. Posted by sil on May 4th, 2007. I really like this. I was using v1 for some time and just happened to come back here (to look up something) and see v2 was put up. Great! One question; if the table (which is querie’d from a db) has no rows, i get some javascript errors. I tried to fix this myself, but got into hot water quite quickly. Not that javascript errors are terrible perse; but I know people around my company will come telling me about them. Posted by jimbo on May 5th, 2007. jimbo: hrm. I may patch sorttable to cope with this, but to be honest the answer to “sorttable doesn’t sort a table with no rows” is “don’t do that, then”, I think ;-) Posted by sil on May 5th, 2007. heheh, well, i do agree. of course, my data is coming from a mysql query, so i don’t really know how many rows are getting returned. of course, i guess i could just not display the table at all if there are no rows to display. hmmm.. not sure why i didn’t think of that before, actually. i guess i’ll do that. thanks. i really like this… Posted by jimbo on May 7th, 2007. I realize that you don’t support any ’sort on load’ function. I get lots of requests to restore a particular sort order when a page is loaded / refreshed though. In an ideal world I would create a cookie whenever a header is clicked and look for that when a page is loaded and sort accordingly. That would make my life much easier than having to pass a special ‘order by’ query value back each time. Just my $0.02. Thanks for the excellent script. Posted by j on May 8th, 2007. My solution to the “sort dates with blanks” problem: (line 271) sort_ddmm: function(a,b) { mtch = a[0].match(sorttable.DATE_RE); if(mtch==null) {dt1=0}; else { y = mtch[3]; Regards Posted by Erich Bakx on May 9th, 2007. Is there a quick way to change the color of the column that you are sorting? This would be very usefull for big tables. Posted by Mike on May 11th, 2007. jimbo, I use a function to check if there are rows come back from the db, if not I display a row, in the 1st column that says No Results. This will tell the user there is nothing, rather than just leaving it empty, and also solves your problem Sydney Posted by Sydney on May 16th, 2007. I have a table with styles which shows odd rows with one colour and even rows with another. The problem comes up when I sort the table, then styles keep in the same row and the table is shown with unordered colours. Please, let me know if you have a solution for this. Madrid, Spain Posted by Carlos on May 21st, 2007. A good way to deal with alternating rows is to use a modulo in the Append Child area. (There are two locations, the 2nd is for reversing the sort) Search for tb.appendChild(row_array[j][1]); and tbody.appendChild(newrows[i]); and add the follow above them (make sure to stay inside the for loop) if(j % 2) row_array[j][1].className = “tRowA”; else row_array[j][1].className = “tRowB”; – and – if(i % 2) newrows[i].className = “tRowB”; else newrows[i].className = “tRowA”; Note: You hafto swap the order in the second function. Posted by Johnny Moon on May 24th, 2007. Hi there, Is there a solution for begin the sort in a particular row?, for example I have other kind of information in the first row, and the header begin in the second row, I’ve tried to fix it but I dont found the solution… Posted by Max on June 4th, 2007. Hi, Love the script as it’s saved me a whole load of grief! I think I might have found a small bug though :-( When there is more than a single table on the page, say tables A and B, and each has columns, say 1, 2 & 3, when alternate table headers are clicked for sorting, with the third click on the same as the first header to reverse, I get an error on page and the sorting arrow appears in the wrong table. Using the table notation this would be: click B1/2/3 -> click A1 = Error>>. Does this relate at all to ScottW’s posts? Thanks Rich Posted by Rich G on June 5th, 2007. submission seems to have shredded part of the post order is A1 then B1/2/3 then A1 = Error Posted by Rich G on June 5th, 2007. When I attempt to use more than one table I am having the same issue as Rich G . The browser displays an “invalid argument” message. Debugging shows that this is being thrown from on of the following code segments depending upon the sort: this.removeChild(document.getElementById(’sortable_sortrevind)); and this.removeChild(document.getElementById(’sortable_sortfwdind)); Posted by Matt H on June 5th, 2007. Nice to know I haven’t gone mad ;-) I’ve come to the conclusion this is because the element found by document.getElementById does not have to be part of the same table as the calling cell and hence cannot be removed from its children in this case. I’m now working out how to ensure that: a: the element name is unique and known (or work outable) per table. b: only the cell’s siblings are searched and have the element removed as I want the sorting indicator to be possible on each table. Any thoughts? Cheers! Posted by Rich G on June 5th, 2007. Yes, more than one table on a page is broken, because I’m a bit of an idiot. It’s on the list for sorttable 3. Posted by sil on June 5th, 2007. Hey Rich G – I have to switch to another project for the next couple of days. If you come up with some possible ideas or dead-ends can you post them? If I can find some time, I will do the same. Thanks! Posted by Matt H on June 5th, 2007. Hi, Have a solution to this, the logic behind it is that the search by id is not necessary if the table head element is searched for children of a new sort_span type and any existing ones stripped. This ensures that the spans that have the arrows are removed for that table only. I’ve taken the liberty of using the same forEach method for the three cases for clarity. Hope that is all clear ;-) Changed code is: dean_addEvent(headrow[i],”click”, function(e) { if (this.className.search(/\bsorttable_sorted\b/) != -1) { // if we’re already sorted by this column, just // reverse the table, which is quicker sorttable.reverse(this.sorttable_tbody); this.className = this.className.replace(’sorttable_sorted’, ’sorttable_sorted_reverse’); /revind = document.createElement(’sort_span’); //sortrevind.id = ”sorttable_sortrevind”; sortrevind.innerHTML = stIsIE ? ’ <font face=”webdings”>5</font>’ : ’ ▴’; this.appendChild(sortrevind); return; } if (this.className.search(/\bsorttable_sorted_reverse\b/) != -1) { // if we’re already sorted by this column in reverse, just // re-reverse the table, which is quicker sorttable.reverse(this.sorttable_tbody); this.className = this.className.replace(’sorttable_sorted_reverse’, ’sorttable_sorted’); /fwdind = document.createElement(’sort_span’); //sortfwdind.id = ”sorttable_sortfwdind”; sortfwdind.innerHTML = stIsIE ? ’ <font face=”webdings”>6</font>’ : ’ ▾’; this.appendChild(sortfwdind); return; } // remove sorttable_sorted classes theadrow = this.parentNode; forEach(theadrow.childNodes, function(cell) { if (cell.nodeType == 1) { // an element cell.className = cell.className.replace(’sorttable_sorted_reverse’,”); cell.className = cell.className.replace(’sorttable_sorted’,”); //get the spans in each cell theNodes = cell.getElementsByTagName(’sort_span’); //and remove them for(var z = 0; z < theNodes.length; z++) { cell.removeChild(theNodes[z]); } } }); this.className += ’ sorttable_sorted’; sortfwdind = document.createElement(’sort_span’); sortfwdind.id = ”sorttable_sortfwdind”; sortfwdind.innerHTML = stIsIE ? ’ <font face=”webdings”>6</font>’ : ’ ▾’; this.appendChild(sortfwdind); Posted by Rich G on June 6th, 2007. just realised that the last sortfwdind.id = ”sorttable_sortfwdind”; should have been commented out Posted by Rich G on June 6th, 2007. Also fixed/tweaked the code for alternate table row styling in the reverse function suggected by Johnny Moon, as this didn’t take into account the fact that the mod of the last row number (and hence the first row) for the reversed table was not the same as the mod of the first row in the normal table if there were an even number of rows, which meant that the styling would be swapped on the first reverse sort click for the row and this would persist until another column was chosen for sorting or the page refreshed, a very minor bug but visually distracting. The fix checks whether the table has an odd or even number of lines and then allocates styling accordingly, note that only the reverse function was affected by this: for (var i=newrows.length-1; i>=0; i–) { //begin inserted statement’); } //end of inserted statement// tbody.appendChild(newrows[i]); Posted by Rich G on June 6th, 2007. Thanks Rich G! Posted by Matt H on June 6th, 2007. A bug in IE 6 causes a security warning when loading the script over https A fix is to replace this line: document.write(”<script id=__ie_onload defer src=javascript:void(0)><\/script>”); With: document.write(”<script id=__ie_onload defer src=’dummy.html’><\/script>”); The bug is documented all over the place. The html page given doesn’t even need to exist. —Alex Posted by Alex on June 6th, 2007. Ah thanks Alex I ran into that one last night and was about to look for a solution today and hey presto… Posted by Rich G on June 7th, 2007. Any plans to add scrollability to the code? Ideally, headers and footer stay fixed, body is scrollable. We have the first part worked out (IE6), but not the second. Posted by Jett on June 8th, 2007. Jett: I’m not planning on adding that to sorttable; it’s just for sorting tables. might be useful. Posted by sil on June 8th, 2007. In V2, how do I change the initial defualt sort direction from ascending to descending? Thanks! Posted by Mike on June 17th, 2007. Mike: this requires editing sorttable.js. After the line: row_array.sort(this.sorttable_sortfunction); add a new line: row_array.reverse(); Let me know if this works. Posted by sil on June 18th, 2007. Beautiful! That did work! Thank you very much for all the help, you do excellent work! Mike Posted by mike on June 18th, 2007. Hi Stuart, Just to say thanks for making the code available to everyone, it saved me quite some time from writing my own. Thanks again, Dan Posted by Dan on June 24th, 2007. Hi everyone, hi Rich G! You said you fixed the problem with multiple tables, but I really don’t know where to place the code you posted a few days ago. Could you be as kind as to post your full working code (or a link to it) here? I’d be very thankful! mark Posted by mark on June 26th, 2007. Hi mark, It replaces the dean_addEvent part of the existing makeSortable: function(table) function and starts from about line 93ish in my editor, the rest of the code is unchanged so I wont bother clogging up the rest of this page with it ;-) Let me know if you have any further probs Rich G Posted by Rich G on June 28th, 2007. Hi Rich, Thank you very much for your help! I%27ve finally got it!! I always just looked at the bottom of the source where the dean_addEvent() function is defined… Thanks for fixing this multiple table problem. It was quite important for me! Posted by mark on June 28th, 2007. No worries it solved quite a few probs for me too, and I only really tweaked the stuff that was already there, which made my life a whole lot easier ;-) BTW just got the spam blocking maths question wrong which is a little worrying :-( Posted by Rich G on June 28th, 2007. Here’s a kludge for ’sort on load’: <body onload=’loader();> function loader() { if (!document.all) { var fireOnThis = document.getElementById(’TH id value’); var evObj = document.createEvent(’MouseEvents’); evObj.initEvent( ‘click’, true, true ); fireOnThis.dispatchEvent(evObj); } else { var fireOnThis = document.getElementById(’TH id value’); fireOnThis.fireEvent(”onclick”); } } now put some unique ‘id=’ value in the <th> element for the column that you want to have sorted when the page loads. This works with FF and IE7. Posted by j on June 29th, 2007. Great script! One question though.. Would it be possible to sort rows in pairs? Essentially, i have the following structure: tr > td.price , td.date, td.blah tr > td.description .. repeat Unfortunately I need two rows to fit all of the information about one product, but this creates a number of problems when trying to sort the table – namely, the second row gets all out of order, any ideas, suggestions? Best, Ilya Posted by Ilya on July 1st, 2007. Ilya: sorttable won’t do that; it’d need some fairly hefty modifications to the script, I’m afraid. Posted by sil on July 1st, 2007. Great great code! Many thanks! Posted by Johnnie on July 1st, 2007. I just added this script to my 3D game comparison table but the script seriously screws the table up. I’d like to be able to sort by game name (top row) and feature (first, middle, and last columns). Any ideas? The table is very complex with nested col/row spans. Posted by on July 1st, 2007. eep2: sorttable can’t handle tables with colspans or rowspans, because working out how to sort a table like that is pretty close to impossible without knowing the details of the table. Posted by sil on July 1st, 2007. Well, how else to know the details of the table than to analyze the table prior to sorting? Posted by Eep² on July 1st, 2007. perhaps the best way would be to rethink the table structure :-( I’ve had a look at your page and it looks interesting but hard to follow from a first encounter (ie without knowing the background as to why you chose to do it that particular way) How is your table populated? Manually or via a db? Posted by Rich G on July 2nd, 2007. Manually. I chose the layout (with the games across the top) because that reduces how much horizontal scrolling is necessary (since vertical scrolling is more common and easier). I tried to get it into a database but it’s just too complicated for me (not being much of a programmer)–check the main page and news/updates for more info. I’d be happy just to use a flat-file system with some kind of character-delimited list that PHP can sort and tabularize (allowing x games and x features/sections to be displayed only, etc), too. Posted by Eep² on July 2nd, 2007. Sorry, the introduction was unreadable for me due to the colour scheme and font size :-( (and that’s on a 24″ widescreen at 1920 x 1200) Posted by Rich G on July 3rd, 2007. Er, so increase the text size in your web browser and override the colors if you don’t like them… Posted by Eep² on July 3rd, 2007. Sorry, inaccessibility of content is one of my pet hates, if i can’t read it I’m already browsing away Posted by Rich G on July 3rd, 2007. just realised that sounded very critical of your site sorry, it wasn’t personal, I was just trying to explain why I rarely bother resetting my system to cope with a specific designers decision, have a look at sites such as to see what clear and accessible design can achieve, its a good principle to design so that everyone can view your site rather than having to change their browser settings, Posted by Rich G on July 3rd, 2007. So where’s the option on that site to have a dark layout? I don’t like bright white–it hurts my eyes… If I knew how to set cookies and allow people to choose/design their own color scheme, I would, but I don’t. Posted by Eep² on July 4th, 2007. in the zen garden each scheme has a different layout but the same content (supplied by the site), its an exercise in separating style from content and allows you to see what layouts you like and what you don’t, you may well have a valid reason for choosing your scheme I was merely commenting that it’s very hard to read ;-) You could use alternative stylesheets to have a selector for viewing your site as white on black or black on white for example or completely change the layout for people with rotated widescreen monitors, css is both fun and amazingly irritating to play with! to get slightly back towards the original start of our discussion, you could make your table into xml and then style that using css thus making it easy to add new content (by changing the xml only) or new layouts (by changing the css only) it might conceivably help your sorting table layout issues too! Hopefully I’m being slightly helpful and you don’t feel too criticised by my random thoughts ;-) Posted by Rich G on July 4th, 2007. test Posted by Eep² on July 4th, 2007. Dunno why my comments aren’t being posted (aside from short ones–assuming this one will work). This is the third time now… Posted by Eep² on July 4th, 2007. Need a way to edit/delete comments… Posted by Eep² on July 4th, 2007. Blah…my last comment didn’t show up, for some reason (maybe cuz I had “a href” HTML links in it. Anyway, I had the thought of doing CSS hiding via this script (minus the animation) but I’m not sure if it works with tables (without them having to be separate tables per game, which defeats the purpose of having multiple games in a single table to compare, in the first place). Posted by Eep² on July 4th, 2007. But then there’s this sortable table script which has a filter, which might be configurable as a checkbox above each column/game field (and perhaps even each row/feature) to hide/show it. I don’t know XML but maybe HTML+CSS+Javascript will suffice. Posted by Eep² on July 4th, 2007. Oh and there’s a stylesheet switcher that I’ll look into adding to my site(s). Anyway, thanks for the replies. (OK, one link per comment is stupid…) Posted by Eep² on July 4th, 2007. They got moderated, which is why they didn’t show up. The one-link-per-comment thing is Wordpress, not me specifically, but the huge majority of comments I get with more than one link in are spam… Posted by sil on July 4th, 2007. if you know the column class in a table you can hide the entire column and the same for rows so you could have a separate class for each game and switch them on or off as you like Posted by Rich G on July 4th, 2007. But is that only for columns that don’t span multiple rows (and vice versa)? Cuz, ideally, I’d like to consolidate cells that have the same content (developer, “yes”, “no”, etc)… Posted by Eep² on July 5th, 2007. I’m taking rubbish of course! there is no concept of a column in a html table definition, just rows and cells (hangs head in shame). Styling rows with a unique css class will allow easy turning on and off but doing this by hand will become a bit more tedious each time you add stuff and for columns like you have with multiple cell spanning, you would have to know something like each cell location in the parent row -> child cells tree and to find this you would have the same problem as what you have already encountered :-( I think for a table like this it might be easier to have the duplication for the sake of easier coding as you would be able to rely on knowing the same number of cells per row etc. I would do it in ASPX myself and so would avoid the problem entirely by generating the table on the fly each time things were clicked on but that might be using a sledgehammer to avoid the nut! Hmm I might have just talked you back to where you were a few days ago :-( sorry! Posted by Rich G on July 5th, 2007. Of course there’s a column concept in HTML tables, otherwise there wouldn’t be a col element and colspan attribute, for one. For two, there’s a colgroup element. I don’t know ASPX (whatever that is) either. I’m looking into various content management systems but am not having much luck (trying Drupal right now–I’d post a link but stupid WordPress is too paranoid about “spam links” ). Anyway, I’m more informed than I was a few days ago but still pretty much stuck on what to do and how to implement it. Posted by Eep² on July 5th, 2007. I’ve learn’t something new too, never come across col attribute before, will have a play could be very useful! ASPX is basically dynamic server side generated webpages ie you code how you want to generate the page and when the user browses to it the page is generated for them. Posted by Rich G on July 5th, 2007. Right…more coding…blech. Posted by Eep² on July 5th, 2007. you know you enjoy it really, join us on the dark side: “Don’t be so proud of this technological terror you’ve created. The ability to destroy a planet is insignificant next to the power of the force.” - Darth Vader Posted by Rich G on July 5th, 2007. Patch for stable sort : sort_alpha: function(a,b) { if (a[0]<b[0]) return -1; if (a[0]>b[0]) return 1; // when equals keep previous order (=row index) if (a[1].rowIndex<b[1].rowIndex) return -1; else return 1 } Apply similar rule to sort_numeric, sort_ddmm, … Posted by opus27 on July 6th, 2007. Patch to show ’wait’ cursor while sorting : Insert this : dean_addEvent(headrow[i],”mouseover”, function(e) { this.style.cursor=’pointer’; }); dean_addEvent(headrow[i],”mousedown”, function(e) { this.style.cursor=’wait’; }); just before that : dean_addEvent(headrow[i],”click”, function(e) { … }); and add this line : this.style.cursor=’pointer’; before each return and after : delete row_array; in the same function Posted by opus27 on July 6th, 2007. I need a patch to sort single-digit dates (days, months, and even years) correctly, and not always in DD/MM/YY format either but MM/DD/YY, MM/D/YY, MM/D/Y, M/D/Y, M/DD/Y, etc. Posted by Eep² on August 31st, 2007. opus, what is “stable sort”? Posted by Eep² on August 31st, 2007. Hello? Anyone? Beuler, Beuler? Posted by Eep² on September 9th, 2007. That’s “Bueller”. You might get a patch if you asked a touch more nicely about it… Posted by sil on September 9th, 2007. Uh, how much more nicely can I ask “what is ’stable sort’”? Geez…too many damn overhypersensitive people online–I fucking swear it’s ridiculous! Posted by Eep² on September 10th, 2007. The definition is a google search away. The sorttable page itself links to the Wikipedia description of stable sorting. And I wasn’t talking about stable sorting, anyway, I was talking about you just saying “I need a patch to do X, Y, and Z”. Posted by sil on September 10th, 2007. Er, it’s not like I wrote “I demand a patch NOW!*($#” or something… I was being quite neutral about it, actually. Besides, why should I need to ask nicely for something that the script should already be able to do inherently? <buh-link> Anyway, stable sorting isn’t what I need… Posted by Eep² on September 11th, 2007. Single-digit date handling is on the list for sorttable v3. Posted by sil on September 11th, 2007. “Besides, why should I need to ask nicely for something that the script should already be able to do inherently?” Well thats how to win friends! “Anyway, stable sorting isn’t what I need…” Nope, quite right, what you need is to look through the code and fix it then share the fix with everyone else. Posted by Rich G on September 11th, 2007. Wondering if anybody has used sortable with checkboxes. I have a table with a column of checkboxes and labels set to sorttable_nosort. If I sort the table, check some boxes and submit the form, all the form elements get passed except for the checked boxes. If I submit the form without sorting the table,everything works as expected. Any ideas? Posted by Mark G on September 13th, 2007. Mark G: you’re not the first person to report this problem, but I’ve not been able to replicate it. Do you have a URL that demonstrates the problem? Which browser are you using? Posted by sil on September 13th, 2007. Mark G, All my table have checkboxes and work fine so I can’t say I’ve found a problem with it sorry :-( Posted by Rich G on September 13th, 2007. It’s an internal project so I don’t have a URL for you to look at. I’ve been able to replicate the problem on Firefox and Opera. It works fine on Safari. Thanks Posted by Mark G on September 13th, 2007. Thanks for the great script ! It help a lots. I have a minor problem, my table with 3 header rows (i understand it doesn’t work for this version) so I decided to change the script. But afetr some hard works, :o( I am wondering will I be able to achieve it ? I change like: headrow = table.tHead.rows[table.tHead.rows.length - 1] Posted by EngDian on September 19th, 2007. (oops, sorry, haven’t finish my typing… ) and make changes to row_array (to make it start from table.tbody, not counting the header…) Can I know am I in right direction ? or I am totally talking nonsense ? Thanks. Posted by EngDian on September 19th, 2007. EngDian: that’s certainly one approach to take. I haven’t yet decided how best to make sorttable support multiple header rows, which is why sorttable doesn’t do it yet. It’s planned for version 3. Posted by sil on September 19th, 2007. Sortable and AJAX.NET both has an Array.forEach function it’s just that they do not work the same way. To fix that just put a prefix before all the ForEach-functions in sorttable.js. Don’t forget to also change the resolve.STforEach(object, block, context); // ST is the prefix in the base forEach function. ScottW has another solution above but I think this one might be simpler. Posted by Jonas Elfström on September 25th, 2007. Jonas: quite right, and added to the bug list. Posted by sil on September 25th, 2007. i would like to order the table by binding 2 row together Posted by Anonymous on November 6th, 2007. Is there a way to skip a column from sorting? I have a first column with ranking (1 to whatever), and I want that to stay the same when sorting the table. Posted by Ryan on January 3rd, 2008. Ryan: I’m afraid not. This is a known bug, though; see for details. Posted by sil on January 3rd, 2008. Hi Stuart. Great code, very useful. I’m running into a problem using your sorttable v2 with BarelyFitz’s Tabber.js (), such that when I include a sorttable table, all tabs vanish (standard tabs w/cookies). When I use sorttable v1, it works fine. While I realize that its likely to be much to ask, I was wondering if you had any suggestions? (I’ve tried the prefix fix mentioned above to no avail) Also, does anyone have any advice for sorting a street address field, already merged into one field of Street # + Street Name? (where not all properties have a #?) Thanks a lot! Posted by josh on January 7th, 2008. Sadly, sorttable doesn’t seem to work if your doctype is XHTML 1.0 versus HTML 4.01. If you do upgrade the library, let me know! Posted by Kathy on January 11th, 2008. Kathy: that’s not supposed to happen. Do you have an example of a page that it doesn’t work on, so I can test? Feel free to contact me by email if you prefer. Posted by sil on January 11th, 2008. josh: have you got a page that exhibits the problem, so I can take a look? For sorting oddly-formatted fields, you’ll want to use sorttable’s custom sort keys. Posted by sil on January 14th, 2008. I spent some time searching for a solution to this until I came back and reread the whole thread. The bugfix link isn’t by the error message tha caused it. sys.argumenttypeexception object of type object cannot be converted to type array I fixed mine by doing a replace on forEach to STforEach Posted by phillipW on January 17th, 2008. Your script doesn’t work on IE7. Great with FF2. As an exemple, you tables on this page are not active with IE7 but are on FF2. Chears, Posted by Fabrice C on February 29th, 2008. Fabrice: oops. Added to my bug list. Thanks. Posted by sil on February 29th, 2008. Thanks for the script. I use it at work for sorting off of an Indexing Server with ASP. Is there any way to manipulate the script to allow for date modified? Since this is a search I cant simply alter the cell. I need to sort in the format mm/dd/yyyy HH:MM:SS AM/PM Thanks again for sharing this great script! Posted by Jared on March 3rd, 2008. Jared: if you can’t use custom keys, then the script will need to be modified, as you mention. It can be done; you’ll need to edit the date-parsing functions and the date regular expression. If you want me to do it, contact me and we can talk about rates and so on. Posted by sil on March 3rd, 2008. Here is a patch to make the sort stable. The modification is to set id to the original row index and sort by row.id when rows are equal (one drawback is that existing id:s on tr-elements are removed). The added code starts with ”+”: forEach(document.getElementsByTagName(’table’), function(table) { if (table.className.search(/\bsortable\b/) != -1) { sorttable.makeSortable(table); + if (table.className.search(/\bstablesort\b/) != -1) + sorttable.setrowids(table); } }); }, + setrowids: function(table) { + // Set id=rownumber, makes the sorting stable with the help of sort_by_id + sortfn = sorttable.sort_alpha; + for (var i=0; i<table.tBodies[0].rows.length; i++) + table.tBodies[0].rows[i].id = i+100000; + }, + sort_by_id: function(a,b) { + if (a[1].id==b[1].id) return 0; + if (a[1].id<b[1].id) return -1; + return 1; + }, sort_numeric: function(a,b) { aa = parseFloat(a[0].replace(/[^0-9.-]/g,”)); if (isNaN(aa)) aa = 0; bb = parseFloat(b[0].replace(/[^0-9.-]/g,”)); if (isNaN(bb)) bb = 0; + if (aa==bb) + return sorttable.sort_by_id(a,b); + else return aa-bb; }, sort_alpha: function(a,b) { if (a[0]==b[0]) + return sorttable.sort_by_id(a,b); if (a[0]<b[0]) return -1; return 1; }, sort_ddmm and sort_mmdd: + if (dt1==dt2) return sorttable.sort_by_id(a,b); Posted by Niklas on March 10th, 2008. Niklas: what was wrong with ? Posted by sil on March 10th, 2008. I sent you an email as requested about sorting mm/dd/yyyy hh:mm:ss AM/PM but I never heard back from you. Just wanted to make sure you got the message? Posted by Jared on March 11th, 2008. Jared: sorry, replied now! Posted by sil on March 11th, 2008. > Niklas: what was wrong with ? Differences: shaker_sort: * Uses custom sort function * multi-level sorting of data (when two rows are equal the previous sort order is used) My patch: * Uses JavaScript sort (sorting should be faster, but initial setup is slower) * When two rows are equal the original order is used * If you sort with sort_by_id, or on a column with all the same values, the result is the original order. So you could quite easily add a third state when clicking a third time on the column header – unsorted (i.e. remove the arrow and sort with sort_by_id) Note: To make my patch behave like shaker_sort (general way of making any sort stable): 1. skip setrowids (not needed) 2. add j as the third element in row_array for (var j=0; j<rows.length; j++) row_array[row_array.length] = [sorttable.getInnerText(rows[j].cells[col]), rows[j], j]; 3. Modify the sort_by_id function (or replace all “sort_by_id” with “a[2] – b[2]“): sort_by_id: function(a,b) { return a[2] – b[2]; }, 4. Use row_array.sort(this.sorttable_sortfunction); as normal Should give the same result (not tested though). Posted by Niklas on March 13th, 2008. hi great script!! i need to know how to do this: I have one column that I don’t want to be sorted at all, how do I do that? Thanks in advance! Posted by lau on March 24th, 2008. lau: see the documentation. Posted by sil on March 24th, 2008. Hi, just updated to v2. The speed increase is excellent. is there any documentation on any additional css stuff to interact with sortable.. I was thinking along the lines of a:hover.sortable but cant get anything to work. in v1 when you would hover over the heading the pointer would change… in v2 it doesn’t & it shows no obvious signs of being a “link” or sortable, or do I have mine incorrectly setup? many thanks Posted by gee on March 31st, 2008. gee: to style links in a sortable table, you want table.sortable a { styles } and to style your headers, use table.sortable th { cursor: hand; } Posted by sil on March 31st, 2008. Hi, Many thanks for the response.. I think I worded the question badly but it gave me some ideas for future use. just to check did you mean table.sortable thead {} rather than table.sortable th {} as I can’t get anything out of th …. what I have done though is this table.sortable thead { padding:3px; background-color:#52418C; color:white; font-weight: bold; text-decoration: none; height: 12.75pt; } table.sortable thead:hover { text-decoration: underline; background:#A5DD29; cursor: pointer; } note pointer rather than hand as I found out hand don’t work in ff what happens isn’t exactly what I’d expect in that when you hover over one header cell is the whole header row changes colour & underlines… which makes me think I’ve got something wrong & it’s where ’th’ comes in? off to research css a bit more, but any pointers much appreciated. Posted by gee on April 1st, 2008. doh… th won’t work unless you use <th></th> Posted by gee on April 1st, 2008. gee: ah, yeah, you should be using th inside a thead :) Posted by sil on April 1st, 2008. Awesome script!! It works in IE7 with Tabber.js (which Josh reported) for me. I have a display problem however. When the column header string is shorter than its values in tbody, adding the unicode triangle when sorting on the header works great. See example below. Name Employee A Employee B But when the column header string is longer than any of its text in tbody, the added triangle increases the th length and messes up my table format. The right table border is gone, and the tbody formatting (background color) doesn’t apply to the increased thead/th length. For example, Name A B Can you give me some tips on how to work around this? Thanks!!! Posted by Hana on April 10th, 2008. Hi, thanks for this script ! I use striped-tables, and I see that is in the wish list. As I did not want to wait, I have to add this to script : 1/ a new function striped: function(table){ for (var i=0; i<table.tBodies[0].rows.length; i++) { table.tBodies[0].rows[i].style.backgroundColor=(i%2==0)?table.getAttribute(’R1BgColor’):table.getAttribute(’R2BgColor’); } }, 2/ I call it in init just before makeSortable … sorttable.striped(table); sorttable.makeSortable(table); … 3/Another call before sort function … delete row_array; sorttable.striped(this.sorttable_tbody.parentNode); … 4/ I add two attributes to the table Note: Class instead og Bgcolor would be better, I think. What are you thinking about this ? Posted by Martial on May 3rd, 2008. Martial: that’s certainly a good way to solve your specific problem, and I’m glad sorttable could help you! Posted by sil on May 3rd, 2008. [...] have a sortable sub folder structure and sortable images per folder. Searching the web, I found an excellent javascript solution for sorting tables. The script works very well and I have adapted it for use in the Lazyest Gallery Manage page with [...] Posted by Brimosoft » Blog Archive » Sortable Tables on May 9th, 2008. Is there any “onsorted” event that gets fired after a sort is complete? If not, any recommendation on how to add support for that? I need to make an Ajax call after a sort to store the new order on the server. I’m thinking it would be most convenient if I could just add an onsorted=”myjsfunc();” to the table. Thanks, Russ Posted by Russ on May 30th, 2008. Thanx, It is a beautiful script works like charm in my scripts without much changes Posted by Maneesh on June 2nd, 2008. hey… i am trying to get sortable to work for textbox inputs…the values are entered dynamically each time for some further validations… the text i am entering is of a certain format eg. ‘PP_VER-RE_908′ the innerText code in the sorttable.js considers this as numeric…after that it just inverts the rows in the table…sort doesnt happen..any suggestions??? Posted by Vinnie on June 4th, 2008. Vinnie: what you’re talking about will, I think, require some custom (thus paid-for) enhancements to sorttable. Drop me an email (http;//) and we can talk about it. Posted by sil on June 4th, 2008. i need to know how to do this: I have one line not a column that I don’t want to be sorted at all (some ads), how do I do that? Thanks in advance! Posted by bastien on June 17th, 2008. Bastien: if you mean that you’d like that column to stay in the same order even if the rest of the table is sorted, then that’s which is a known request for enhancement. I’m not sure when I’ll have time to get to it, though. (You can get it up my list with money: see for details :)) Posted by sil on June 17th, 2008. Oh my God! I can’t believe it! All that functionality and it took me 5 seconds to drop it in and it just works! AWESOME!!! You just saved me a ton of time in dinking with database query strings.. You are the man! THANK YOU!!! -Hatch in Costa Rica… Posted by Hatch on June 20th, 2008. Hatch: no problem Glad you liked the script. :) Posted by sil on June 20th, 2008. It seems I ran into a bug with the date format dd.mm.yy. At least the sorting does not work properly and I cannot figure out why. It sorts in a very weird way. You can see it at The script seems to loaded right as sorting the other columns works fine. Unfortunately I am rather “code-blind” so I can’t see why it does work so wrongly. Tried Firefox 2.0.0.14 and Opera. Posted by Hannes on June 20th, 2008. I love it, it’s da bomb. Is there any way to sort an image column? No, I don’t mean sort them by what they contain (!), just if there’s an icon there or not. For instance, if there’s a star for “Featured” something or other, and a camera icon for “Has a Photo”, etc. it would be nice to be able to just bring all these to the top. I know, I know, give ‘em an inch… ;^) Posted by Table Sorta Guy on June 28th, 2008. ps: I mean besides inserting a number “1″ in front of the icons and giving them a css font size of 1 and a color the same as the background ;^) Because that works! (but it’s a little wacked)… Posted by Table Sorta Guy on June 29th, 2008. Table Sorta Guy: add a custom sort key of 1 or 0 to the cells in question. See for details. Posted by sil on June 29th, 2008. Awesome! Thank you !!! Posted by Table Sorta Guy on June 29th, 2008. whatever Posted by Sandy on July 3rd, 2008. I love this script, however, I have a checkbox that allows one to select certain records for processing via a post of the form information. I am trying to add an option to select or unselect all but I would like the select or unselect to select or unselect only visible rows, not all rows. Is there any easy way to do this that anyone knows of? Posted by Stephanie on July 25th, 2008. Stephanie: I’m not sure what you mean by “visible” rows? Sorttable doesn’t make rows visible or invisible. Posted by sil on July 31st, 2008. Hi, more stuff: <pre> * Sort detect empty as lowest ”value” </pre> Great job - rgds. Henrik at Lassen dot dk Solutions proposed ================== <pre> //* Sort detect empty as lowest ”value” sort_numeric: function(a,b) { aa = parseFloat(a[0].replace(/[^0-9.-]/g,”")); bb = parseFloat(b[0].replace(/[^0-9.-]/g,”")); return sorttable.sort_NaN(aa) - sorttable.sort_NaN(bb); }, </pre> Posted by Henrik on August 11th, 2008. Hi, here is what i am looking for (resend): * Embed script (One page HTML) * Sort detect IP addresses Great job - great speed - rgds. Henrik at Lassen dot dk PS. Never understod the difference between ”Submit comment” and ”Post comment” Solutions proposed ================== //Embed script: Use double qoutes //Sort detect IP addresses if (text != ”") { if (text.match(/^(\d{1,3}\.){3}\d{1,3}$/)) return sorttable.sort_ip; if (text.match(/^-?[£$¤]?[\d,.]+%?$/ )) return sorttable.sort_numeric; … sort_ip: function(a,b) { // Rgds HL aa = a[0].split(”.”); aaa = 0; for(i in aa) aaa = aaa*256+parseInt(aa[i]); bb = b[0].split(”.”); bbb = 0; for(i in bb) bbb = bbb*256+parseInt(bb[i]); return sorttable.sort_NaN(aaa) - sorttable.sort_NaN(bbb); }, sort_NaN: function (a) { if (isNaN(a)) return -Number.MAX_VALUE; return a; }, Posted by Henrik on August 11th, 2008. I just wanted to submit my tweak to enable the correct sorting of negative numbers that are denoted by enclosing them in parenthesis like (10.39) = -10.39 What I do is have it replace the opening parenth with a negative sign before stripping all other non-number chars. That way it retains the negative and sorts in true numeric rather than as an absolute value when using parenths. sort_numeric: function(a,b) { aa = parseFloat(a[0].replace(/\(/g,’-').replace(/[^0-9.-]/g,”)); if (isNaN(aa)) aa = 0; bb = parseFloat(b[0].replace(/\(/g,’-').replace(/[^0-9.-]/g,”)); if (isNaN(bb)) bb = 0; return aa-bb; } Posted by Scott Jilek on October 27th, 2008. Is there an equivalent to “sortbottom” that will keep a row at the top? (e.g. “sorttop”) Thanks. Posted by Jason on November 5th, 2008. Jason: I’m afraid not. You’d need to make a custom enhancement to sorttable to do that (or talk to me about rates for custom JavaScript work by dropping me a mail). Posted by sil on November 6th, 2008. Hey! I’ve implemented this script but am having trouble getting it to work. is my code, When I click on the header column titles an arrow comes, and if I repeatedly click it the arrow points up / down, but the column isnt actually sorting.. any idea why? Posted by kris on December 12th, 2008. Here’s a complete solution for the empty cell problem with dates. Thanks to Erich Bakx! Just add a check for each mtch-variable. After doing this it should look like the following code: sort_ddmm: function(a,b) { mtch = a[0].match(sorttable.DATE_RE); if(mtch==null) dt1=0; else { y = mtch[3]; m = mtch[2]; d = mtch[1]; if (m.length == 1) m = ’0′+m; if (d.length == 1) d = ’0′+d; dt1 = y+m+d; } mtch = b[0].match(sorttable.DATE_RE); if(mtch==null) dt2=0; else { y = mtch[3]; m = mtch[2]; d = mtch[1]; if (m.length == 1) m = ’0′+m; if (d.length == 1) d = ’0′+d; dt2 = y+m+d; } if (dt1==dt2) return 0; if (dt1<dt2) return -1; return 1; }, sort_mmdd: function(a,b) { mtch = a[0].match(sorttable.DATE_RE); if(mtch==null) dt1=0; else { y = mtch[3]; d = mtch[2]; m = mtch[1]; if (m.length == 1) m = ’0′+m; if (d.length == 1) d = ’0′+d; dt1 = y+m+d; } mtch = b[0].match(sorttable.DATE_RE); if(mtch==null) dt2=0; else { y = mtch[3]; d = mtch[2]; m = mtch[1]; if (m.length == 1) m = ’0′+m; if (d.length == 1) d = ’0′+d; dt2 = y+m+d; } if (dt1==dt2) return 0; if (dt1<dt2) return -1; return 1; }, Now the sort-functionality also works with dates and empty cells. Posted by Tobias on December 18th, 2008. I just came across the bug with https as described earlier. I tried the solution: A fix is to replace this line: document.write(””); With: document.write(””); but although the page loads securely – the table doesn’t sort any more! Am I missing something? Posted by Philip on January 6th, 2009. while doing the following perl ( very parred down example to save space ) #!/usr/bin/perl use CGI qw(:standard); $sy = 2009 ; $sm = ‘02′; $sd = 13 ; $st = ‘CIMIS’ ; print header; print start_html(-title=>”WeatherTRAK Climate Database Output”, -script=>{-src=>”sorttable.js”} ); print ”; print “”; open(OUT, “/data/www/pgm-bin/ete_rpt $sy$sm$sd $st |”); while() { print $_; } close OUT; exit; I GET THE FOLLOWING ERROR [Thu Feb 19 03:01:17 2009] [error] [client 10.0.0.8] (8)Exec format error: exec of ‘/var/www/cgi-bin/sorttable.js’ failed, referer: [Thu Feb 19 03:01:17 2009] [error] [client 10.0.0.8] Premature end of script headers: sorttable.js, referer: yet if I redirect the perl to a file wiht the > and then use the web browser on that file it always works.. IDEAS PLEASE ?? Posted by DanD on February 19th, 2009. DanD: you have sorttable.js in your cgi-bin folder, and Apache is therefore trying to execute it as a CGI, which doesn’t work. MOve it somewhere else. Posted by sil on February 20th, 2009. Hi there, great script works like a bomb. I was wondering how I can set the alternating row colors with sorttable. Is there a simple way to do this. thanks Posted by Articfox on February 23rd, 2009. Articfox: currently there isn’t. This is a known issue (). If you’re interested in having it fixed and want to pay me to do it, drop me a line. :) Posted by sil on February 23rd, 2009. Hi there, this is a really great script. I just had one uestion if i have alternating row colors how can i get them to stay alternating with this script cause mine colors stay with the sorteed data and the row colors get mixed up Posted by Articfox on February 23rd, 2009. Hello, For the alternating table row styles, I copied the code provided by Johnny Moon as such: above tb.appendChild(row_array[j][1]); (within the for loop) I placed: tb.appendChild(row_array[j][1]);’); } This made the table sort not work for me. Did I do this correctly? Posted by Barbara on February 27th, 2009. Oop….correction to the above comment: above tb.appendChild(row_array[j][1]); (within the for loop) I placed: if(j % 2) row_array[j][1].className = “tRowA”; else row_array[j][1].className = “tRowB”;’); } Posted by Barbara on February 27th, 2009. I trimmed down the fix to make alternating row styles down to a single added line (prefixed with + below). Hopefully this doesn’t get reformatted too much. tb = this.sorttable_tbody; for (var j=0; j<row_array.length; j++) { + row_array[j][1].className = j%2 ? “oddrow” : “evenrow”; tb.appendChild(row_array[j][1]); } Posted by Tom Brown on March 4th, 2009. You have reinforced my faith in human good nature (and skill). Thank you so much for putting your wonderful work in the public domain. It has saved me literally hours of menial work… and I probably would have indeed used a querystring and another call to SQL to generate a far less elegant solution to my problem. Actually, that would have been difficult because some of the columns were being generated from results from previous columns (which came from various sql tables, some incomplete)… Dude, you are a legend! Posted by Andrew Hood on March 4th, 2009. Hello, Are you planning to include fix suggested by ScottW that prevents script working with asp.net ajax. I did and it looks like working. You would save some time to others. Thanks, Velja Radenkovic Posted by Velja Radenkovic on March 6th, 2009. Velja: I can’t test that the fix works. I’m loath to include a fix that I myself haven’t tested. Besides, it’s in my bug list, which means it’s on my radar to fix. Posted by sil on March 6th, 2009. sil, You can’t test it? Yes you can. The fix is to replace foreach iteration with plain for loop and access the element of array using index. When I think better I think the problem is not in foreach iteration implementation but rather in ‘table’, ‘cell’ etc. variable names. Replacing iteration solved the problem because it eliminates variable named ‘table’ from js and uses tables[i] instead which is not in collision with asp.net ajax scripts. Generally speaking using table, cell, layer and similar as variable names is always bad idea in java script because it doesn’t have namespaces or packages or any other method of code separation. I have my piece of code working so don’t understand that I am pushing you because I need something. Its a good script and it works flawlessly with asp.net GridView. It would be a pity to discourage people from using it because of minor problem. Also alternating row colors is common thing in html tables and script doesn’t take care of that. That can be fixed easily too. Thanks, Velja Posted by Velja Radenkovic on March 8th, 2009. Velja: no, no, I meant that I can’t test that it no longer breaks ASP.NET because I can’t run any ASP.NET sites. The whole forEach implementation is going away in sorttable v3 anyway. Posted by sil on March 8th, 2009. Simply great script, and very useful. Used it for a company in an effort to minimize load on servers where tables where retrieved and likely to be sorted on different fields. One modification I made to the script though, I will suggest. This minor mod has worked quite well and have found no errors yet. Since I live in Iceland and our character set is a bit different, e.g. we have letters like ‘á’, ‘ð’,'þ’ and script did not give correct results. So what I tried is this with the sort_alpha method: sort_alpha: function(a,b) { a[0]=a[0].toLowerCase();b[0]=b[0].toLowerCase(); var regexp=/á/g;var regexp2=/ð/g;var regexp3=/é/g;var regexp4=/í/g; var regexp5=/ó/g;var regexp6=/ú/g;var regexp7=/ý/g;var regexp8=/æ/g;var regexp9=/ö/g; var regArray=new Array([regexp,'a{'],[regexp2,'d}'],[regexp3,'e~'], [regexp4,'i~'],[regexp5,'o¡'],[regexp6,'u¢'],[regexp7,'y£'], [regexp8,'þ¤'],[regexp9,'þ¥']); for ( var i=0; i<regArray.length; i++) { a[0]=a[0].replace(regArray[i][0],regArray[i][1]); b[0]=b[0].replace(regArray[i][0],regArray[i][1]); } if (a[0]==b[0]) return 0; if (a[0]<b[0]) return -1; return 1; }, Also I made an extra function to sort the names since they are sorted by the first name and last, anything in between is less important. Like to hear what you think and any comments are appreciated. In all, great script that works well, thanks Stuart. Posted by Max on March 14th, 2009. OOps, sorry! I forgot to explain the idea. The ‘unique’ characters were replaced with others, e.g. ‘¤’ that are not likely to be part of any name. Then finally, is the sort used. Posted by Max on March 14th, 2009. Max: the best way to do that is to use localeCompare, as described in the outstanding bug report which I plan to implement in sorttable v3. Posted by sil on March 14th, 2009. Ok. I did not know about this bug report. Thanks, you were quick with answers! Posted by Max on March 15th, 2009. [Tried emailing Stuart to no effect; here is the message] I have used your sorttable scripts on sites that I design and develop for The Nature Conservancy to good effect, and I thank you for making them available. In redoing one site, I have run into a problem in IE (6 and 7) on WinXP. I have not tested in Vista. This problem does *not* occur in FF or Safari or Google Chrome. In this new version, I have wrapped my sortable in a tag whose display is alternately “none” or “block” (Close/Open), controlled by another script. This does not affect the sort function in any browser or system besides IE. I have tested it now with the simplest possible show/hide script directly in the file head , with no luck, but I have the following observation (I think): (1) When I load the page into a new tab in IE, and Open the table, it does NOT sort (2) when I click on another page, then return to my page and Open the table, it DOES sort (3) Opening and Closing the table without reloading does not change the sortability (4) When I Reload the page without going elsewhere and returning, and Open the table, it does NOT sort [test and live site URLs deleted; please email me] Thanks for any help. I have several tables of this type. Posted by SusanB on March 16th, 2009. Great scripts. I got one problem. When my table uses and to specify the column widths, the sorting becomes very slow for a table with 600 rows. It is like 10 seconds vs. 2 seconds without colgroup. Can you shed some light if this is solvable? Thanks! Posted by Henry j. on March 25th, 2009. (Sorry to post again. HTML tags in my previous post have been removed by the page) Great scripts. I got one problem. When my table uses tags “colgroup” and “col width=xx” to specify column widths, the sorting becomes very slow for a table with 600 rows. It is like 10 seconds vs. 2 seconds without colgroup. Can you shed some light if this is solvable? Thanks! Posted by Henry j. on March 25th, 2009. Henry: cor, I’ve never tried that. Can you drop me a mail with more details and (ideally) a URL to an example? Posted by sil on March 27th, 2009. Wonderful scripts. It’s really helpful. I have one concern, I made a table inside a div that when query reaches 20 items, the div scrolls. But I want the table header to be excluded from the scrolling part so when the user scrolls down to the last item, the table head would still remain on top, that way the sorting would still be visible. How do i do that with your script? Thanks! Posted by Ryan on April 15th, 2009. Ryan: you should investigate. Posted by sil on April 15th, 2009. I made it! thanks… Posted by Ryan on April 15th, 2009. This is a great script! Very usable. FYI. I discovered that IE 6 & 7 does not like this script under SSL though. It complains that there are insecure items when using it. I have recently discovered that there is some detection of removeChild() as being erroneously detected as “unsafe”, but by replacing those with outerHTML=” it still detects this script as insecure. If anyone solves this issue (other than using a different browser ;) ) Posted by Crispy on April 30th, 2009. Crispy: see for a record of this bug and a possible workaround. Posted by sil on April 30th, 2009. Hi, Is it possible to make the small hand symbol appear each time we roll the mouse over the headers. Makes it more intuitive. Thanks! and great code ! Posted by Seb on May 6th, 2009. Seb: table.sortable th { cursor: hand; } in your CSS file. Posted by sil on May 6th, 2009. I was having a problem with IE7 on an XP machine. When the table headers were clicked to sort the rows, the entire table would disappear. In order to fix this problem I changed the behavior of sorttable.js to copy the table rows instead of reference the existing rows. Sorttable is great. I especially like the version 2 options to not sort some rows, and the custom sort keys. Here is the function to copy the rows. , copyTableRow: function(tableRow) { var tr = document.createElement(’tr’); columns = tableRow.getElementsByTagName(’td’); for (var i=0; i<columns.length; i++) { var td = document.createElement(’td’); td.innerHTML = columns[i].innerHTML; if (columns[i].getAttribute(”sorttable_customkey”) != null) { td.setAttribute(”sorttable_customkey”,columns[i].getAttribute(”sorttable_customkey”)); } tr.appendChild(td); } return tr; } —— I added it to the makeSortable function and reverse function… (In makeSortable, just before the shakersort option) for (var j=0; j<rows.length; j++) { var tr = sorttable.copyTableRow(rows[j]); row_array[j] = [sorttable.getInnerText(rows[j].cells[col]), tr]; } (and the beginning of the amended reverse function) reverse: function(tbody) { // reverse the rows in a tbody newrows = []; for (var i=0; i<tbody.rows.length; i++) { newrows[newrows.length] = sorttable.copyTableRow(tbody.rows[i]); } Posted by Greg Kontos on June 15th, 2009. Any release date for Version 3 ? I’m sure it’s been suggested, but sorting by multiple columns would be very useful. My company would pay $$$ for Version 3, since it is the least process intensive script for sorting we have come accross. Cheers once again for the great work Mick Posted by Mick on June 16th, 2009. Mick: I’d be happy to talk about custom paid enhancements to sorttable; those custom enhancements can go into sorttable v3, certainly. Do please drop me a line () to talk about the enhancements you’d like! Posted by sil on June 16th, 2009.
http://www.kryogenix.org/days/2007/04/07/sorttable-v2-making-your-tables-even-more-sortable
crawl-002
refinedweb
11,861
75.2
Naming classes, windows and controls So far, ODABA GUI resources do not support name spaces on global level, i.e. one cannot arrange GUI classes in different projects or name spaces. Hence, class names have not to be unique, only, but have to be defined in global name space. Since persistent GUI classes, i.e. GUI classes inheriting from persistent data type definitions, get the same name as the complex data type they are based on, persistent data types intended to be referred to in GUI applications should also be defined in global namespace, until ODABA GUI is supporting namespaces, too. In particular, this means, that complex data types intended to be used in DUI applications, should not contain local data type definitions. Windows and controls are defined in the local namespace of the complex data type the GUI class is based on, i.e. names for windows and controls have to be unique within the context of the GUI class, only. Referring to a window or control, always requires scoped names, or class name and window/control name. The typical way referring to controls in other classes a two level assignment including GUI class selection first and and selecting control in the selected GUI class afterward. Fields and regions in control definitions use physical referenced (by identity). Thus, one may rename classes or controls without destroying link information. GUI action definitions, however, use logical references just containing names for classes and windows. Those are not maintained when changing class or window names and have to be updated explicitly by the developer.
http://www.run-software.com/content/documentation/odabagui/ode/HierarchyTopics/OGTD_Naming_classes.html#727695
crawl-003
refinedweb
262
52.8
Wrappers for specific Multimedia Command (MMC) commands e.g., READ DISC, START/STOP UNIT. More... #include <cdio/mmc.h> Go to the source code of this file. Wrappers for specific Multimedia Command (MMC) commands e.g., READ DISC, START/STOP UNIT.. Get drive capabilities via SCSI-MMC GET CONFIGURATION Return results of media event status via SCSI-MMC GET EVENT STATUS Run a SCSI-MMC MODE SELECT (10-byte) command and put the results in p_buf. Run a SCSI-MMC MODE SENSE command (10-byte version) and put the results in p_buf Run a SCSI-MMC MODE SENSE command (6-byte version) and put the results in p_buf Request preventing/allowing medium removal on a drive via SCSI-MMC PREVENT/ALLOW MEDIUM REMOVAL. Issue a MMC READ_CD command. b_sync_header return the sync header (which will probably have the same value as CDIO_SECTOR_SYNC_HEADER of size CDIO_CD_SYNC_SIZE). The Header preceeds the rest of the bytes (e.g. user-data bytes) that might get returned. For CD-DA, the User Data is CDIO_CD_FRAMESIZE_RAW bytes. For Mode 1, The User Data is ISO_BLOCKSIZE bytes beginning at offset CDIO_CD_HEADER_SIZE+CDIO_CD_SUBHEADER_SIZE. For Mode 2 formless, The User Data is M2RAW_SECTOR_SIZE bytes beginning at offset CDIO_CD_HEADER_SIZE+CDIO_CD_SUBHEADER_SIZE. For data Mode 2, form 1, User Data is ISO_BLOCKSIZE bytes beginning at offset CDIO_CD_XA_SYNC_HEADER. For data Mode 2, form 2, User Data is 2 324 bytes beginning at offset CDIO_CD_XA_SYNC_HEADER. The presence and size of EDC redundancy or ECC parity is defined according to sector type: CD-DA sectors have neither EDC redundancy nor ECC parity. Data Mode 1 sectors have 288 bytes of EDC redundancy, Pad, and ECC parity beginning at offset 2064. Data Mode 2 formless sectors have neither EDC redundancy nor ECC parity Data Mode 2 form 1 sectors have 280 bytes of EDC redundancy and ECC parity beginning at offset 2072 Data Mode 2 form 2 sectors optionally have 4 bytes of EDC redundancy beginning at offset 2348. Request information about et drive capabilities via SCSI-MMC READ DISC INFORMATION Set the drive speed in K bytes per second using SCSI-MMC SET SPEED. Load or Unload media using a MMC START STOP UNIT command. Check if drive is ready using SCSI-MMC TEST UNIT READY command.
http://www.gnu.org/software/libcdio/doxygen/mmc__ll__cmds_8h.html
CC-MAIN-2014-49
refinedweb
372
55.54
"Conan is my role model." If I make that statement at the dinner table, my son would immediately think that I pattern myself after Conan the Barbarian, whereas my wife would think I want to be like the late-night talk show host, Conan O'Brien. This context confusion is known in IT as name collision. Many languages have a strategy to circumvent name collision and, with V5.3, so does PHP. PHP solves the name collision problem with its new namespaces feature. Of course, the names on which PHP resolves collision are not the names of people but rather the names of classes, functions, and constants. This article explains why you should consider using namespaces on your next project. It provides an overview of namespace semantics, provides best practices, and offers a sample Model-View-Controller (MVC) application that uses namespaces. The article then discusses namespace support in Eclipse, NetBeans, and Zend Studio, with specific instructions on using namespaces with Eclipse. Do I need namespaces? A strength of the PHP language is its simplicity. So if you are new to PHP, namespaces are yet another concept you will need to understand. But if any of the following are true, you should consider their use: - You are developing a large application with hundreds of PHP files. - Your application is being developed by a team of coders. - You are planning on using frameworks that use PHP V5.3 and namespaces. - You have used namespaces (or comparable functionality, such as packages) in other languages, such as the Java™, Ruby, or Python languages. If you are the sole developer of relatively small applications, namespaces may not be for you. But for the rest of us, namespaces provides a clean way to organize class structures and, of course, prevent name collision. These two reasons are why many framework developers are adopting the use of namespaces. Zend Framework (the 800-pound gorilla of PHP frameworks), for example, is using namespaces in Zend Framework V2.0. A quick overview A namespace provides a context for a name. For example, the two classes shown in Listing 1 have name collision. Listing 1. Two classes with the same name cause collision without namespaces class Conan { var $bodyBuild = "extremely muscular"; var $birthDate = 'before history'; var $skill = 'fighting'; } class Conan { var $bodyBuild = "very skinny"; var $birthDate = '1963'; var $skill = 'comedy'; } To specify a namespace, you simply add a namespace declaration as the first statement in the source, as shown in Listing 2. Listing 2. Two classes of the same name but with namespaces resolves collision <?php namespace barbarian; class Conan { var $bodyBuild = "extremely muscular"; var $birthDate = 'before history'; var $skill = 'fighting'; } namespace obrien; class Conan { var $bodyBuild = "very skinny"; var $birthDate = '1963'; var $skill = 'comedy'; } $conan = new \barbarian\Conan(); assert('extremely muscular born: before history' == "$conan->bodyBuild born: $conan->birthDate"); $conan = new \obrien\Conan(); assert('very skinny born: 1963' == "$conan->bodyBuild born: $conan->birthDate"); ?> The above code runs fine, but before I describe why the two Conans work well together, let me point out two things. First, I'm using assertions to prove that the code works as expected. And second, I'm doing something you should never do: declaring multiple namespaces in one source file. The namespace provides a unique qualifier for the two Conans. The code clearly states when I'm referring to the burly destroyer or the late-night talk show host. Notice that the syntax for the instantiation uses a backslash ( \) followed by the namespace name: $conan = new \barbarian\Conan(); and: $conan = new \obrien\Conan(); Those qualifiers kind of look like Windows®-style directory qualifiers, which is not a bad way to think about them because, for one, namespaces support both relative and absolute references (just like directories), and for another, it is a best practice to put the source for your class files in directories that match the namespaces. Using namespaces It would be more real-world to separate the two Conan classes into directories called barbarian and obrien, and then reference those classes from other PHP files. There are three ways to reference a PHP namespace: - Prefix the class name with the namespace - Import the namespace - Alias the namespace To use the first option, you simply prefix the class name with the namespace (after, of course, you the include the source file): include "barbarian/Conan.php"; $conan = new \barbarian\Conan(); That's pretty straightforward, but the issue with option one's strategy is that, given a large application, you will be constantly retyping the namespace. And besides all that typing, you are needlessly cluttering up your code base. With option two, you import the namespace with the PHP V5.3 reserved word use: include "barbarian/Conan.php"; use barbarian\Conan; $conan = new Conan(); Option three lets you specify an alias for the namespace: include "barbarian/Conan.php"; use \barbarian\Conan as Cimmerian; $conan = new Cimmerian(); (Cimmerian, by the way, is yet another moniker Conan the Barbarian is known by.) One issue I have with all three of the above examples is the use of the include statement. You can remove the need for the includes by using an __autoload function. The PHP magic method __autoload function is called whenever a class is referenced that has not yet been included in the source file. Place the code in Listing 3 in a file called autoload.php. Listing 3. A magic __autoload function dynamically includes source files <?php function __autoload($classname) { $classname = ltrim($classname, '\\'); $filename = ''; $namespace = ''; if ($lastnspos = strripos($classname, '\\')) { $namespace = substr($classname, 0, $lastnspos); $classname = substr($classname, $lastnspos + 1); $filename = str_replace('\\', '/', $namespace) . '/'; } $filename .= str_replace('_', '/', $classname) . '.php'; require $filename; } ?> Then import autoload.php into your source: require_once "autoload.php"; use \barbarian\Conan as Cimmerian; The big advantage of the auto-loader is that you won't have to create an include statement for every class. Note that although PHP's namespaces can be used for functions and constants as well as classes, the auto-loader technique only works for classes. The auto-loader is so handy that rather than coding functions, you can create methods in a appropriately named utility class and put your constants in immutable classes. Getting real with MVC Leaving O'Brien to ridicule The Destroyer while being slain, let's move on to a simple example MVC application. To benefit from namespaces, you should design your naming conventions before keying a line of code. A common best practice is to use a namespace tree. Understand that namespaces have high-level namespaces and sub-namespaces. If your company has multiple applications, it might be handy to have a high-level namespace that is your company name. Then, you would use a sub-namespace for the application. Next, you'd have a level that contains directories that in turn have names that specify the application functionality of the PHP classes contained therein. For example, let's say the high-level company namespace is denoncourt, the first sub-level is retail, and the third level has functional names, as shown in Listing 4. Listing 4. A design for namespaces can include nested sub-namespaces /denoncourt /retail /common /controller /model /utility /view The controller, model, and view sub-namespaces are obviously for the MVC architecture, but the utility and common sub-namespaces I threw in to be used for general classes that didn't cleanly fit in one of the other sub-namespaces. Let's jump right into the code for the mini-MVC application. Listing 5 provides the code for index.php, which is placed in the root folder. Listing 5. The MVC application's index PHP uses the controller class <?php require "autoload.php"; use denoncourt\retail\controller as Control; $controller = new Control\Controller(); $controller->execute(); ?> Notice the long namespace and the use of the alias name of Control. The use of aliases is my preferred method for using namespaces for two reasons: First, if I later rename the namespace, I only have one line of code to change per source file. And second, given that it is a best practice to fully qualify your namespace as you instance classes, my use of Control\Controller() is effectively the same thing as \denoncourt\retail\controller\Controller(). Note that I could have just as well created an alias for a higher-level namespace, and then used the names of the sub-namespace for the class instantiation: use denoncourt\retail as Retail; $controller = new retail\controller\Controller(); This is a handy feature for those times when you will be referring to multiple levels of your namespace in the same source file. In the denoncourt/retail/controller directory, I created Controller.php, which is shown in Listing 6. Listing 6. The MVC controller class predicates action based on user input <?php namespace denoncourt\retail\controller; use denoncourt\retail as retail; class Controller { public function execute() { switch ($_GET['action']) { case 'showItem' : $item = new retail\model\Item(); require "denoncourt/retail/utils/format.php"; require "denoncourt/retail/view/item.php"; break; } } } ?> In denoncourt/retail/model, I created Item.php. Listing 7 shows the code. Listing 7. The MVC Item class is in the model sub-namespace <?php namespace denoncourt\retail\model; class Item { public $itemNo = '123'; public $price = 2.45; public $qtyOnHand = 87; } ?> In denoncourt/retail/utils, I created format.php, which is shown Listing 8. Listing 8. The dollar PHP shows how a function can also be namespaced <?php namespace denoncourt\retail; function dollar($dollar) { return "\$$dollar"; } ?> Note that, as stated earlier, I would have preferred to put the format function in a utility class (so the auto-loader would handle the import of the code and I wouldn't have had to code the require statement for format.php). Finally, the item.php view page is in denoncourt/retail/views. Listing 9 shows the code. Listing 9. The item page displays the model instanced in the controller <html> <head> <style> dt { float:left; clear:left; font-weight:bold; margin-right:10px; width:15%; text-align: right; } dd { text-align:left; } </style> </head> <body> <dl> <dt>Item No:</dt><dd><?php echo "$item->itemNo"; ?></dd> <dt>Price:</dt><dd> <?php echo \denoncourt\retail\dollar($item->price); ?> </dd> <dt>Quantity On Hand:</dt><dd><?php echo "$item->qtyOnHand"; ?></dd> </dl> </body> </html> Notice how the item page qualifies the dollar function with \denoncourt\retail\ namespace. Fall back If a source file has a namespace declaration, then all references to classes, functions, and constants use the namespace semantics. When PHP encounters an unqualified class, function, or constant, it does what is know as fallback. A fallback on a user class causes the compiler to assume the current namespace. To refer to non-namespaced classes, you need to put a lone backslash. For example, to refer to PHP Exception class, you would use $error = new \Exception();. Keep that in mind as you use any of the Standard PHP Library classes (such as ArrayObject, FindFile, and KeyFilter). For functions and constants, if the current namespace does not contain that function or constant, PHP's fallback mechanism will fall back to the standard PHP function. So, for example, if you've coded your own strlen function, PHP would resolve to your function. But, if you also wanted to use the standard PHP strlen function (say, within your own strlen implementation), you'd need to precede the function invocation with a backslash, as Listing 10 shows. Listing 10. PHP standard functions can be qualified with a backslash to identify the global namespace <?php namespace denoncourt\retail; function strlen($str) { return \strlen(); } ?> The namespace global variable and strings If you like to code dynamic methods, you may be tempted to place a namespace in a double-quoted string: "denoncourt\retail\controller". But remember that you'll need to escape those slashes: "denoncourt\\retail\\controller". One workaround is simply to use single quotation marks: 'denoncourt\retail\controller'. As you do your dynamic programming, keep in mind that PHP V5.3 has a new global variable called __NAMESPACE__. Consider using the global variable rather than typing it: $echo 'I am using this namespace:'.__NAMESPACE__; IDE support for namespaces Most of the major IDEs already have support for PHP V5.3. NetBeans V6.8 has great support for namespaces. Not only does it have code completion but it also makes suggestions for improving your code with best practices. For example, it is a best practice with PHP namespaces to fully qualify your namespaces within your code using with absolute references rather than relative references. If you key code that uses relative namespace qualifiers, NetBeans displays a light bulb icon in the left-most code margin. If you hover over the icon, NetBeans shows a tool tip describing the suggested change. And if you then click the icon, NetBeans makes the code change for you. Zend Studio provides similar capabilities. If you are reticent to begin using namespaces, consider upgrading your IDE and try out namespaces with a bit of help from your favorite IDE. Note that you may find that you don't even have to upgrade your IDE, as many of them have provided PHP V5.3 features for more than a year now. PHP Development Tools (PDT) V2.1 also has solid support for namespaces. PDT is a plug-in for Eclipse. A link to the installation notes for PDT is provided in the Resources section. To enable namespace support, I first had to tell Eclipse/PDT to use PHP V5.3. To do that, from the application main menu, click Window > Preferences, as Figure 1 shows. Expand PHP in the tree pane, then choose PHP Interpreter. Then, change the PHP version to PHP 5.3, and click OK. Figure 1. The Eclipse PDT plug-in requires you to set the interpreter to PHP V5.3 You can create a PHP project by clicking File > New Project, expanding the PHP node, and then clicking PHP Project. To create a PHP file, simply right-click the project in PHP Explorer, then click PHP file. PDT uses appropriate syntax highlighting for the namespace keywords of namespace and use (see Figure 2). Figure 2. PDT uses syntax highlighting for namespace keywords and displays namespaces in the PHP Explorer and Outline views It's handy to have PDT show you the namespaces in the PHP Explorer and Outline views, as it helps you visualize how your namespaces are assigned to various classes. PDT also provides something we've come to expect with IDEs: code completion (see Figure 3). Code completion is invoked by PHP while keying the use statement. Figure 3. PDT provides code completion for namespaces PDT will also pop up a code completion window while you key class names. For example, if I type new Item, PDT will also show a window listing Item � denoncourt\retail\item. When I select denoncourt\retail\item, PDT inserts the required use statement and the qualifier on the instantiation line: use denoncourt\retail\model; new model\Item(); What's cool is when I type new Conan, PDT also shows a window listing: Conan � obrien Conan � barbarian Allowing me to select the appropriate Conan. And now that I've wandered back to my infatuation with the two Conans, perhaps it is time to wrap things up. Wrapping up If you are still hesitant to get started with namespaces, before you put off learning namespaces for another year, I suggest you load your favorite IDE with PHP V5.3 support and give namespaces a whirl. As to naming conventions, it's more important to set some simple ones than to agonize over coming up with the perfect strategy. Personally, with a long background in Java development, I like to follow the Java naming conventions. I use camel-cased names for my PHP namespaces and stay away from the underscores. By using namespaces in your next PHP project, your code will be cleaner and more organized. You will become acquainted with a facility that is common to most leading languages. And you will be prepared for using the wealth of frameworks already using PHP V5.3 — and namespaces in particular. Download Resources Learn - Read the PHP Architect article "PHP 5.3 namespaces for the rest of us" for more insight on namespaces. - Check out Nathan A. Good's developerWorks article "Creating better namespaces in PHP." - Get the basics on PHP namespaces. - Find answers to your PHP questions in the PHP Namespace FAQ. - Read more about PHP Namespace Support in NetBeans V6.8. - Get the PDT installation notes. - Eclipse PDT V2.1. - - Connect with other PHP developers by joining the PHP Developers group in the developerWorks.
http://www.ibm.com/developerworks/opensource/library/os-php-5.3namespaces/index.html
CC-MAIN-2016-22
refinedweb
2,759
55.54
Contents In Java there are no free functions, which simplifies lookup rules and code organization. Many C++ style guides have adopted the “only classes” style, prohibiting free functions. But C++ is not Java. First things first: This is no rant against Java. I am not a language zealot who thinks that “There Is Only One Language” and ignorantly ridicules all other languages. I think of both Java and C++ as different tools, suitable to tackle different problems. The Benefits of “Everything in a Class” The Java approach makes things easy. If every function is inside a class, then the compiler and the reader have a clear context for every piece of code. This makes lookup rules very simple, since any nonqualified function call has to be a method of the current class or one of its base classes. On qualified call the object or, in the case of static methods, the class is provided in the code, so lookup is easily simple. Code organization is easily simple: You either already have a class where a function you have to write clearly belongs to, or you create a new class. Since each class usually has its own source file in Java, you immediately know where to put the function. C++ lookup rules In C++, if you do not stick to “Everything in a Class”, lookup rules get fairly complicated. Qualified function calls behave similar to Java. But with unqualified function calls you can get lost quickly. The compiler will look up unqualified function calls in different places. At first it will look for a matching name in the same scope, i.e. the same class and its base classes for methods or the same namespace. Then it will go into the next enclosing scope, i.e. outer classes or namespaces, until it hits the global namespace. But it does not sop there. Enters argument dependent lookup (ADL). If the function has arguments, it looks into the namespaces of the types and base types of those arguments, if there is a free function that has a matching name. And the outer namespaces of those classes. And it looks for free friend functions of their base classes, even if those are in yet other namespaces. This can get very complicated very quickly. But does that justify the “Everything in a Class” rule? Drawbacks of the rule Banning free functions has several implications on how code gets structured and restricts the use of language features. I will list a few of them, but there are more. Artificial classes: Having to put everything in a class means you have to create artificial helper classes for functions that don’t belong into an existing class. Such classes often feel unnatural and irritating. Operator overloading: Many operators should be or even have to be free functions. Being strict about the rule means to cripple one of the language’s key abilities to design classes with a fluent and readable interface. Readability: A call to a well named free function often is enough to know what is going on, even if the function does not belong to the class where the call appears. Having to make a qualified call with the name of some helper class hurts the fluent readability of code. Fat interfaces and scope creep: In order to avoid artificial helper classes that due to other coding style rules would have to go into separate files, programmers sometimes tend to put into classes that are only closely related to the function itself, thereby needlessly augmenting the interface of that class. But we can’t just drop the rule, can we? No, we should not simply drop it. There is a reason it is part of Java. But since C++ is different, we should not blindly copy it. We should replace it by sensibly organizing our code. Keep closely related functions that do not need access to private members as free functions in the same file. Helper functions and operators that clearly belong to a class should be declared in the header that contains the class definition. If you have member functions that don’t need access to the class’ private or protected members, consider if they can be made free functions to decrease coupling. Keep free functions that are less closely related in the same namespace as the classes they work on. Functions that don’t directly belong to a class but work with objects of the class usually belong to the same group of functionality. Therefore they should be part of the same namespace. In other words, don’t rely on ADL too much. More generally speaking: Organize your code in a way that represents the dependencies of the classes and functions. Of course this does not mean you should drop helper classes altogether. Use them if it makes sense, e.g. if their names provide a context that otherwise would be missing. Choose disciplined and well-thought organization of your code over the dogmatic and overly restrictive “Everything in a Class” rule. 11 Comments Permalink Permalink Permalink In C++, just put the global/free functions into reasonably named namespaces. problem solved. That’s what all decent libraries and programs do, in case you haven’t noticed. Anything you’ve discussed beyond that is just a matter of coding style, not language restrictions. Permalink Of course. That’s what this blog is mostly about: Coding style. The “everything in a class” rule is also just a coding style rule. Permalink this stuff reminds me of my friend who writes trading software. he has a wealth of experience writing code, but all his colleagues graduated from the ivy leagues and other expensive private schools without any programming or computer science type classes — the company provided them with a 13 week crash course. my friend called their work products “Super Classes” because of their size and scope. He spent most of his time “cleaning up their mess” and basically doing their work for them. java super classes … Permalink Wow, I hate to see a compiler even compared with an interpreter. It like compoaring writing a program to running one. Permalink I have to admit I have no idea what you mean. Lookup rules and the “Everything in a class” rule are not specific to compilers or interpreters. Permalink He’s just mocking Java by calling it an interpreted language. Permalink Maybe. However, I won’t approve any of the incoming troll and anti-troll comments that are following. Permalink Permalink
https://arne-mertz.de/2015/05/everything-in-a-class-c-is-not-java/
CC-MAIN-2018-34
refinedweb
1,083
72.76
Introduction: Temperature Wand 1: Parts List 1 Arduino RBBB $10 Arduino USB$5 6 KTY81-250 Temperature sensors [email protected] 6 1.2K resistors $0.01@. Step 3: Electrical Design The KT81-250 Temperature Sensor is basically a resistor whose resistance changes with temperature. At room temperature the resistance is about 2000 ohms. The Arduino can’t read resistance but can read voltage so I built a voltage divider with 5 volts going to a 1.2K resistor then to the KTY81 sensor to ground. The connection between the resistor and sensor is attached to the appropriate Analog input of the Arduino. There are six of these circuits as shown in the schematic. The schematic also shows an 8.2K resistor going from 3.3V to Aref. I had a good reason for using an 8.2K resistor but don’t remember. It works fine. The Arduino software has to use analogReference(EXTERNAL) to use this. THEORY: The 1.2K resistors were selected to yield the best resolution for the Arduino. The resolution is about 0.6 degrees F. This means the Arduino can only detect a change of 0.6 degrees. The Arduino has a 10 bit resolution so there are only 1023 different steps. The resistor was also selected so that the temperatures between 50 to 100 degrees F will be less than 3.3V and readable by this Arduino. Six resistors are connected to the Analog inputs. The other ends are tied to 5 VDC. The Ethernet cable is wired to the Analog inputs and grounds. (See pictures) Step 4: Problems I will get back to the calibration step later. Problem: Well my first attempt was a disaster. The temperatures were all over the place and not very stable. Solution1: One of the problems was that I was powering the Arduino with 5V from the USB port of my computer through a 4 port hub. Well, this voltage measured about 4.65V and varied a lot. So I added a 5Volt regulator and capacitors and a 6 Volt power supply. See schematic. Some of you sharp-eyed readers will notice that the power supply (See picture) is rated at 6 VDC output and that the 7805 regulator is specified for a minimum of 7 VDC. Well, most of these power supplies actually put out more voltage than rates. With this one hooked up, the output was 7.5VDC and the regulator output was a consistent 5.1VDC output. Solution2: The software I first wrote sampled the temperature 10 times, averaged it then output it. It repeated this every minute. Well, I decided to implement a rolling average. The way this works is that it samples the voltage constantly and averages the last 25 samples and outputs it. Programmers: I created a 25 element array for each sensor and a pointer to the array. float TempArray[NUM_SAMPLES][MAX_TEMPS]; byte ArrPtr=0; Then I read in the counts for each sensor and stored it in the array bin pointed to by the ArrPtr. Then I incremented the ArrPtr++ and repeated the process. If the ArrPtr = 25 then it is set to 0. Each array is summed and averaged by dividing by 25, then this averaged count value is converted to a temperature. This is like a low pass filter. The sensor is averaged over about 50 seconds. The temperature shouldn’t change significantly in that period of time. Step 5: Calibration I’ve worked with temperature sensors before. I know they are hard to calibrate especially in air. It is very hard to get them to within a degree F of each other. As with most temperature sensors, the KTY81s are not very accurate and need to be calibrated. For example, the resistance at room temperature can vary from 1900 to 2100 ohms. If you want to build this, then you will need to calibrate your sensors. This involves Algebra, maybe a game killer for some readers. But I will try to take you through a simple two point linear fit. Procedure Theory: First you need to get the sensor (or in this case sensors) at a fixed and known temperature. The known temperature is measured by some known instrument. I used my IR thermometer. The Arduino will read the associated sensor and send out a number from 0 and 1023. The temperature and count is recorded. Then for a different temperature the whole process is repeated. Procedure Applied: I wrapped the hacked Ethernet cable in a coil and stuck them in a box, then closed it. (See pictures) I put it in a fairly stable environment on my floor and let it set for a while. Then I got a readout from the Arduino, just the raw counts from the Analog sensors and I measured the temperature in the box with my IR Thermometer.(See picture) Next I set the box outside at a hotter temperature and repeated the process. So now you should have two different temperatures and two different counts for each sensor. Algebra: So these sensors are fairly linear. That means the resistance changes pretty evenly with temperature. So I used a linear fit. TempF = Multiplier * count + Offset TempF is the temperature in Fahrenheit. Count is the Arduino count. Multiplier is a constant for each sensor. Offset is a different constant for each sensor. Once you figure out what the Multiplier and Offset is for each sensor then when the Arduino reads the count from the sensor, the software will multiply this by the Multiplier and add the Offset to give the temperature in Fahrenheit. To find the Multiplier and Offset for a sensor, you know the TempF and counts for two different points so you have two different equations. Example: At 83.5 degrees, the fifth sensor had 999.3 counts. At 75.5 degrees, the fifth sensor had 979.5 counts. The two equations are: 83.5 = M * 999.3 + O 75.5 = M * 979.5 + O M=Multiplier and O=Offset Using Algebra you can subtract the second equation from the first 83.5-75.5 = M * 999.3 - M * 979.5 + O - O Simplify: 8 = M * 999.3 - M * 979.5 8 = 19.8 * M M = 0.4040 So now you know what M is. To find O, just plug the M into one of the starting equations: 83.5 = M * 999.3 + O 83.5 = 0.4040 * 999.3 + O 83.5 = 403.7576 + O 83.5 - 403.7576 = O O = -320.258 To check your calculations you can plug the M and O into the other equation. Alert Readers: Some may wonder how I got a count of 999.3 when the Arduino only outputs 0 to 1023. That is correct but I am using an average value over 25 samples. The Offset is a negative number. This is okay as the computer knows that adding a negative number is the same as subtracting it. Repeat the above procedure for the other five sensors and plug the values into the software. By the way I used Excel to do the calculations. The Arduino software is attached. Software Notes: Once the software is loaded to the Arduino, the serial terminal is used to display the results. Each line contains the temperatures from the top sensor to the bottom separated by commas. The software will have to run through 25 samples before it starts averaging correctly. This will take about a minute. Code Notes: float TempArray[NUM_SAMPLES][MAX_TEMPS]; This is a double array, 25 samples x 6 sensors Under “void setup(void){“ you will see the Multipliers and Offsets for each sensor. Under “float getTemperatureF(unsigned int TempNum){“ there are two return statements. The one commented out “//return (SumTotal/NUM_SAMPLES);” is uncommented to get the average raw counts for calibration The other "return Multiplier[TempNum]*(SumTotal/NUM_SAMPLES)+Offset[TempNum];" returns the calibrated temperature. Step 6: In Use and Conclusions To use the Temperature wand, I bought ten feet of ½” PVC and a couple of couplers. I have a little car, so I had to cut the ten foot piece in half to get it into my car. Unwrap the coiled sensors. Cut PVC pieces to desired height and connect with couplers. Attach the top sensor to the top of the PVC. I used tie wraps. Plug in and run software. Results: The picture is a sample of a hot room. (I added some fudge factors into the software) It does shows the fairly wide gradient in temperatures. Conclusions: Well, I am not too happy with the results. When I have some free time, I think I will try a modified calibration procedure to see if I can get better results. But it does show the temperature gradients in a room. I think it would be a good project for someone with ceiling fans and/or attic fans. You could probably get a good idea on how effective they are. I might try some experiments with fans myself. Recommendations We have a be nice policy. Please be positive and constructive. 3 Comments Just a tip, you can use the DS18B20 temperature sensor and you can have multiple temperature sensors over the same data line because each sensor has its own unique id... I am planning on putting temperature sensors throughout my house similar to what you are doing... nifty set up. I may try something like that on a different scale Thanks. It should be easy to scale it up or down. LOG
http://www.instructables.com/id/Temperature-Wand/
CC-MAIN-2018-13
refinedweb
1,584
68.67
2206/iterating-over-multiple-lists import itertools for item in itertools.chain(listone, listtwo): #listnames ... The print() is getting called multiple times ...READ MORE key is just a variable name. for key ...READ MORE You can do it like this import pandas ...READ MORE Hii Kartik, You could do it in two ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE You can simply the built-in function in ...READ MORE You have to use the zip() function: for ...READ MORE You can loop over the dictionary and ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/2206/iterating-over-multiple-lists?show=2207
CC-MAIN-2020-34
refinedweb
119
88.84
Hi everyone! Within 48 hours I'll be having a "windy" situation here. So I prepare my Quick Check list in case of hurricane. I'll be updating it from time to time. Couple of extras and some additions provided by the community. - [_] Don't Panic! (yeah, sure...) - [X] Back up your SpiceWorks system - [_] Get a copy of the Disaster Recovery Plan (hopefully updated) - [_] Check phones and cell phones - [_] Grab a hardcopy of your contacts (Don't rely on the cell phone address book) - [_] Begin Backups (Critical Systems, follow by non critical and anything else if possible) - [_] Secure OS Disks and Licenses - [_] Secure Backups - [_] Check the generator, fuel, UPS, power cords, and multi-plugs - [_] Board everything up - [_] Shutdown (if possible) all systems - [_] Quick test your remote access - [_] Lock up the facility - [_] Secure yourself (after all, you are carrying the DRP on your head) and your family. - [_] Refresh your emergency kit (food, tools, water) - [X] (Recommended: Stock up your bacon reserves. Optional: (since most of us do this every time) stock up your pretzels and beer reserves :) ) Did I miss anything? Wish me luck! 14 Replies Aug 29, 2010 at 5:28 UTC akp982 is an IT service provider. Good luck!!! Aug 29, 2010 at 9:34 UTC Información Tech is an IT service provider. Hello Blueshore, What about the company must work 24/7 although the hurricane is comming? What about the telephone system? How the crisis team will work, how to document, how the communication will work? http:/ By the way I have to call Dominican Rep. to see if my family are already list Aug 29, 2010 at 11:02 UTC Greetings Andy: Thanks! Saludos Jose: Thanks for adding a couple of extra details. 24/7 Operations are hosted in the US (with east and west facilities hosting companies). Regarding the phones and communications I see that as a weak point, since we all depend on a single cell phone carrier. I'm using multiple carriers for data, so a redeployment of communications and network can be implemented within 24 hours. The team was activated at 9:00 am (I'm been running backups since 1 am, I still running non critical system backups as I'm writing). Lucky, we were reviewing the DRP last June, so we are up to date. I will add your recommendations to the list. NHC last report looks as scary as the previous one. My mayor worry is electrical power. The last storm in 2005 was a weak one and knock out power for almost two weeks. Running the datacenter on diesel was a logistics nightmare (no to mentions the fumes and the neighbors complains). Creo que los hermanos alla en el Republica Dominicana no tendran nada mas alla de marejadas, pero si me preocupaba Haiti. (our brothers on Dominican Republic will have only rough seas conditions, I was more worried about Haiti). Aug 29, 2010 at 12:00 UTC Información Tech is an IT service provider. Hello, Blueshore, I understand, a communication plan as part of a crisis plan in a hurricane is not easy, you never know what will happen and therefore we are concerned, even this does not depend on us but some services by an external supplier. By default the Energy companies cut power when we're on alert orange or almost red. For this we must have emergency power plants. I think we also have a good plan to recover as you mentioned: backup, in this also see the possibility of moving those backup to places where the forecast indicates that affect less. It's definitely good to have some written and planned, but a hurricane can become unpredictable. Allá en Dominicana esto es una fiesta, todo el mundo sale a comprar para estar listo, un juego de dominó, un asopao, chocolate y pan, etc. Es un fiestón.... ya sabes es algo ya normal de esta epoca del año... Esperemos en Dios que no pase nada en el área completa del Caribe que solo sea agua, aunque también se combierte en peligro por los derrumbe de tierra... In the Dominican this is a party, everyone goes to buy to get ready, a game of dominoes, a stew, chocolate and bread, etc. It is a bash .... you know it's something normal and this time of year ... Hope in God that nothing happens in the entire area of the Caribbean it is only water, but also combierte endangered by landslide ... Aug 29, 2010 at 2:07 UTC Well, I got most of the systems secure, backups are going smooth. Probably the worst thing will be to go to the store and get some bottled water and soups! (most people will be stocking critical stuff... like... beer and hard liquor ;) just kidding, but stores are starting to see more people as of now). Save me some of that asopao, for the looks of things, warm food might take a bit more to return (and yet I start smelling BBQ! Looks like the party started.) ¡Gracias! Aug 30, 2010 at 7:40 UTC We'll looking to get your wind once it's done with you. Thankfully we just moved everything to a new datacenter rated for cat 5 hurricane, so for me all i need to do is make sure my offsite in atlanta is ready to go. Aug 30, 2010 at 8:02 UTC We've got our eyes on Earl and I don't have a clue what our hurricane plan is. We back up offsite with some crappy backup software called SOS. It's takes us hours to restore one file while the software builds the catalog prior to restore. It's horrible. Contract is up in three months so I'll move to a real man's backup software. I think offsite backup is the best thing to do. I never feel comfortable about taking hard drives with me. Aug 30, 2010 at 9:27 UTC If I lived in a hurricane (or tornado) area my checklist would look like this: [ X ] Grab skillet, camp stove, cooler and bacon. [ X ] Grab the family and GTFO. . . *Edited for reality. Aug 30, 2010 at 9:55 UTC Información Tech is an IT service provider. If I lived in a hurricane (or tornado) area my checklist would look like this: [ X ] Grab the family and GTFO. Good action... Aug 30, 2010 at 10:05 UTC Forgot to secure your supply of bacon! Aug 30, 2010 at 10:27 UTC Información Tech is an IT service provider. Forgot to secure your supply of bacon! Yes, I commented to Blueshore, there in Dominican it is the begging of the party. Aug 30, 2010 at 10:47 UTC There we go. All fixed. Aug 31, 2010 at 10:11 UTC Well, Earl just arrived, have a couple of laughs, stock up for the road ahead (uh-oh), and leave us some extra stuff (rain, flooding, pocket blackouts and took up some gas station flags as parting gifts). Since the datacenter was seal up, and the generator and UPS were ready, we left the systems running. Only one power spike was detected at 12:35 am. All operations should return to normal tomorrow, and then we might need to set a table for Fiona. John, thanks for remind me to update point 14 on the checklist (Secure yourself! (after all, you are carrying the DRP on your head)). Sometime we forget that we have family and people that depend on us, so we have to turn into one man FEMA team. (and yes, I secure some supplies and tools, but I'm stuck on an Island... no much where to go :) ). Once the dust (or rain should I say) settles, I'm planning to test the restore procedures. My intention with the quick list was not create an entire DRP, but ensure a DRP can be started without a lot of delays. Aug 31, 2010 at 12:35 UTC Well good luck with everything. Sounds like you have everything shored up pretty well to begin with so recovery time should be down. All things considered it could always be worse. Good thing it's not.
https://community.spiceworks.com/topic/109283-quick-checklist-in-case-of-hurricane
CC-MAIN-2016-50
refinedweb
1,377
71.85
Talk:Proposed features/temporary Contents reason/cause annotation I suggest adding something to this proposal (if only in the examples) showing the recommended way to say what is causing this temporary change. In some cases it will be obvious from tags, (eg if you do temporary:construction=yes) but in other cases (like a street festival) people will probably want to know _why_ the street is closed. How should we tag that? temporary_note=*? Thanks, -- JasonWoof 22:29, 23 January 2011 (UTC) - Well, construction=* I would interpret as "a new (previously nonexistent) road is being built" - but I get your point. If we wanted to do something like temporary:cause=*, we'd need a near-exhaustive list of all possible causes - which I think is difficult for a potentially wide range of applications. The note=* tag is already around for notes on a map feature (intended for presentation to the map user), so indeed temporary:note=* may be a good idea for freeform text. Being potentially language-bound, I would also consider temporary:note:it=* and the like for extra languages (the main tag should be in the local language). --Stanton 23:01, 3) Example 6: Humanitarian Mapping Another example: Widespread flooding in raining season leads to (temporary) displacement of thousands of people, spontaneous camps are built in Hacienda San José. landuse=farmland temporary:tourism=camp_site temporary:refugee=yes name=Hacienda San José temporary:name=Hacienda San José Refugee Camp temporary:date_on=2012-11-01 temporary:date_off=2013-01-30 In this case, the tagging author (humanitarian organisation) should revise the date off. I support this proposal, it can help humanitarian mapping, which is generally time-limited. --Federico Explorador 21:18, 2 June 2012 (BST) Proposal status - I read the proposal (I've got the same idea this morning :-p) and it looks good for me, so when will start the RFC, and later the vote ? --Dri60 14:52, 3 April 2011 (BST) - Is someone going to get this proposal further? I Think it is very useful. --Hedaja (talk) 15:38, 7 November 2014 (UTC) - I would support it if the proposal was extended to support multiple temporary changes. For example maxspeed=80, temporary:1:maxspeed=50, temporary:1:date_on=..., temporary:2:maxspeed=30, temporary:2:date_on=.... Construction works are often performed in multiple phases, with different modifications during every phase. --Pbb (talk) 10:52, 17 September 2015 (UTC) - Absolutely agree this could use further development. My hometown is full of roadworks & construction stuff so my routeplanner sends me the wrong way regularly, so I would be really happy if OsmAnd could work with it. Looking at Proposal_process#Proposed, however, we first need to address some of the issues mentioned here and get the draft on the main page finalised. Glozzie (talk) 07:11, 13 May 2016 (UTC) Direction dependence I would suggest that in addition, a direction should be available to indicate if the restriction applies to only one direction of traffic. Chaz6 13:38, 25 January 2013 (UTC) - This would simply use the "temporary:" prefix on existing tags for permanent items that apply to only one direction. This is the beauty of this proposal: All the other osm tags can be added to the temporary conditions. For example one could indicate that a oneway restriction on a road is temporarily lifted, to allow two-way traffic while some other road is temporarily closed. - User:Jbohmdk 16:49, 8 August 2013 (UTC) Multiple temporal changes The system needs to be open for multiple temporal changes. Both with overlapping timeframes and successive. Say for example a road that is both closed for private cars during the summer, and is closed for reconstruction works during some period. -- Pbb (talk) 22:21, 2 April 2013 (UTC) - an option to achieve this could be via namespaces with tags like "temporary:x:*". x could be simply numeric or even a more telling string --GNius (talk) 16:13, 21 January 2015 (UTC) - I think the solution with time range added to the tag itself, and semicolon for multiple values, could work. See Talk:Proposed features/temporary (conditional)#Multiple temporal changes. --Pbb (talk) 08:01, 23 May 2016 (UTC) Why not use a relation We've been talking at lot about this on talk-gb after the flooding otf the Somerset Levels, where the floods were mapped as natural=water, and the destruction of the sea-wall at Dawlish which meant that the main railway was out of action. In the former everything was done with new ways, but in the latter the temporary closure of the railway requires splitting the line and affects major relations. In general this is likely to cause things to go wrong, and anyway it's not clear that we want that sort of edit anyway. My idea is instead of altering the existing way, one adds a new way sharing the nodes, and they are linked with a relation of type temporary. A temporary relation would have to have a finish date, and could be removed automatically by a bot. Ideally the additional way would also be marked in some way to distinguish it from the permanent feature (this needs to happen in the relation as well) which would allow the temporary way to carry a whole array of free-form tagging. In the former case there would be no permanent members of the relation just the temporary one with the natural=water as a transient member. In the latter the transient member could either carry additional tags: access=no, impassable=yes, or the full range of railway tags. Exactly how to process the latter cases for routing and rendering apps I have not worked out, although for rendering it could possibly be tweaked by adding a layer=5 tag. The real point is that the temporary relation would allow anything NOT interested in this info to ignore it, and the relation could be used for semi-automatic management of temporary objects. SK53 (talk) 16:36, 7 February 2014 (UTC) - This method has the problem that it would be highly susceptible to breakage by novice users or users with "simple" editors like iD. They often break polygons, relations and layer roads over roads even if there is a single object at the spot. Requiring the layering of objects on top of each other (like the example road) by design seems to just multiply the problem. I think automatic removal of tags can be done also with the proposed "temporary:" tag schema. Automatic removal of the object can also be done if a specific tag is set onto the object. But any automatic removal carries the risk that the temporary event does not finish on the expected date and the objects/tags are removed prematurely. Aceman444 (talk) 14:52, 5 January 2015 (UTC) - The relation idea just came to me as well. It also has the upside that it can applied to many objects at the same time. Besides, if there is a planned time sequence of properties to change (such as: the road is blocked for four weeks, then it is a one-way street for three weeks, then it is open for vehicles up to 3.5 t for a week, and then all restrictions are gone) could easily formulated with a bunch of relations, while the current proposal makes it impossible to have such kinds of sequences. - About the problem that novice users could break things: don't we have this problem always? --glglgl ✉ 15:35, 24 April 2018 (UTC) Avoid mapping very temporary stuff We actually need to make a general Good Practice guideline for mappers, to tell them to avoid mapping temporary things. But... well I'm not suggesting a blanket ban. There's shades-of-grey. I mean I think there's been agreement in the past that certain types of very temporary things don't belong in OpenStreetMap. But there's also been a fair amount of temporary stuff getting mapped in a way which is widely accepted. I asked a question about this here: Question:What is the recommended way to tag temporary road works and traffic situations?, and User:joto gave a good answer. It's one of those things where you have to draw the line somewhere, because you can think of silly extremes. There was a jokey mailing list discussion back when we first got access to imagery in OpenStreetMap. Hurray! We can map out where the sheep are in the farmer fields! but then... do we shuffle the nodes around as they move?? :-) OK so that's silly. Less extreme example: We could add and remove a road whenever tower bridge opens and closes. Also pretty silly! (certainly not what the OpenStreetMap database and API is designed for) . So where's the line that we're drawing there? So far we just trusted it to people's common sense. As I say, this needs fleshing out on a wiki page linked of Good Practice, with various examples I suppose. ...And then we should think about how it fits with this proposed tagging scheme. I mean I don't think it will have any affect on the proposal at all, except that, somewhere near the top of the text, it should acknowledge that some types of temporary data shouldn't be added, and link the wiki page explaining more. The tagging scheme itself seems useful, for those cases where the temporary data is welcomed. -- Harry Wood (talk) 04:12, 23 December 2015 (UTC) Automated deletion "A temporary relation would have to have a finish date, and could be removed automatically by a bot." - SK53 (talk) - "any automatic removal carries the risk that the temporary event does not finish on the expected date and the objects/tags are removed prematurely." - Aceman444 (talk) - Maybe instead of automatic deletion, a bot could automatically create a note? Then these can be picked up by regular users for removal/updating the date if applicable. On the other hand: I'm not too sure that in every area notes are handled quickly enough not to pollute the map: I think there might be a higher chance that temporary stuff doesn't get removed manually after the temporary situation has changed back to original situation (irl), than there is a chance that temporary things are removed (automatically) prematurely. Glozzie (talk) 07:04, 13 May 2016 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/temporary
CC-MAIN-2018-30
refinedweb
1,728
58.21
collective.workspace 1.0 collective.workspace Introduction collective.workspace package for providing ‘membership’ in specific areas of a Plone Site. It allows you to grant people access to areas of content using a membership group rather than local roles for each user, and to delegate control over that group to people who don’t have access to the site-wide user/group control panel. collective.workspace provides a behavior that can be enabled for any Dexterity content type. When enabled, it adds a “Roster” tab which is where you can manage the team. All the functionality takes place via an IWorkspace adapter, which can be overridden to specify: - A list of groups, and the roles that each group should receive. These groups are created automatically via a PAS plugin, and automatically granted local roles using a borg.localrole adapter. - The schema for which fields should be stored for each member in the roster. This includes checkboxes for the groups, to determine which groups the member is in. - Action links for each row in the roster. The default is an “Edit” link which brings up a popup to edit the fields for that person’s roster membership. - Action buttons at the bottom of the roster which apply to the rows the user selects. An example of this could be a ‘Send email’ action, so a roster admin can easily email users in the roster. Unlike similar previous packages (see slc.teamfolder and collective.local.*), collective.workspace supplies its own PAS groups plugin, instead of using standard Plone groups. This means that Workspace-specific groups do not appear in the sitewide group control panel. Some other features are: - Membership in a roster is indexed, so you can search the catalog for items of portal_type X that have a particular user in their roster. - Events are fired when roster memberships are added/modified/removed. Basic Installation - Add collective.workspace to your buildout eggs. - Install collective.workspace in the ‘Add-ons’ section of Plone’s Site Setup. - Enable the behaviour on your dexterity content type (Either using GenericSetup or Site Setup -> Dexterity Content Types). Custom Workspace Groups The default groups available on a workspace are ‘Members’ and ‘Admins’. You can customise the groups that are available and the default permissions they are given by adding a custom IWorkspace adapter: configure.zcml <adapter for="mypackage.MyContentType" provides="collective.workspace.interfaces.IWorkspace" factory=".adapters.MyWorkspace" /> adapters.py from collective.workspace.workspace import Workspace class MyWorkspace(Workspace): """ A custom workspace behaviour, based on collective.workspace """ # A list of groups to which team members can be assigned. # Maps group name -> roles available_groups = { u'Supervillians': ('Reader', ), u'Superheroes': ('Reader', 'Contributor', 'Reviewer', 'Editor',), } Contributors - David Glick - Original Author - Adam Forsythe-Cheasley - Documentation/Testing - Ben Cole - Documentation/Testing - Matthew Sital-Singh - Documentation Changelog 1.0 (2014-07-04) - Initial release - Downloads (All Versions): - 6 downloads in the last day - 39 downloads in the last week - 44 downloads in the last month - Author: David Glick - Keywords: plone workspace collaboration - License: gpl - Categories - Package Index Owner: mattss, adamfc, davisagli - DOAP record: collective.workspace-1.0.xml
https://pypi.python.org/pypi/collective.workspace/1.0
CC-MAIN-2015-11
refinedweb
513
56.25
Opened 3 years ago Closed 2 years ago #10068 closed defect (fixed) AttributeError: 'Environment' object has no attribute 'get_read_db' Description Hi, I compiled and installed the plugin and then did the trac instance upgrade as pointed by the installation. The "Projects" link appears in the admin panel but as soon as I click on it a message appears: Trac detected an internal error: AttributeError: 'Environment' object has no attribute 'get_read_db' First let me say that I actually don't know if this plugin is intended to be only used in trac 0.12.x instances. I have 0.11.7 installed running on FastCGI with SQLite. The reason I'm reporting this issue is because I did not find any information regarding the versio needed on the hacks page. That being said, I googled a little bit about the error, which is shown as a general Trac, error and found something similar for another hack in #8854, I followed the instructions on one of the comments to do this: import trac trac.__version__ # verify that this returns a trac 12 version >> '0.12.3dev-r10639' # verify you can connect to the database from python import trac.env e = trac.env.Environment('/var/trac/test') # enter your trac instance directory db = e.get_read_db() cur = db.cursor() cur.execute("SELECT 1") data = cur.fetchone() Then, as soon as I get to the db = e.get_read_db() line I get this exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Environment' object has no attribute 'get_read_db' Which is of course the same as the exception shown on the Trac page itself when clicking the Projects link. I just want to clear this since the guy on the #8854 ticket got something different after trying the same. My thoughts are that this is actually a 0.12 > plugin and it won't work on my trac. If this is the case please disregard my ticket but it would be good if the required version was mentioned on the hack page. I've seen some other tickets reported for this plugin with the 0.11 version selected so I still have hopes :) If you need anything else please let me know. Thanks in advance. Attachments (0) Change History (7) comment:1 Changed 3 years ago by grubio comment:2 Changed 3 years ago by crossroad hello, Thanks for reporting the problem The ticket # 8854, you said your problem solved by doing the following: Problem Was That the trac.ini file readable by not was the user the server is running as. Somewhere along the way I must-have changed the perms. Hopefully I can work on your trac :) comment:3 Changed 3 years ago by anonymous - Status changed from new to assigned Reviewing documentation of the api for obtaining a Connection suitable for read queries (i.e. SELECT) the env.with_transaction decorator, for using a db parameter. reviewing documentation of the api function 'get_read_db' was introduced with version 0.12 of trac. Would have to find a way to make it work with any such condition. comment:4 Changed 3 years ago by falkb comment:5 Changed 3 years ago by falkb comment:6 Changed 3 years ago by anonymous comment:7 Changed 2 years ago by falkb - Resolution set to fixed - Status changed from assigned to closed Just as a side note, I ran the trac-admin upgrade just in case and I got this:
http://trac-hacks.org/ticket/10068
CC-MAIN-2015-06
refinedweb
574
62.38
SwiftExif SwiftExif is a wrapping library for libexif and libiptcdata for Swift to provide a JPEG metadata extraction on Linux and macOS. SwiftExif was written to facilitate porting the Munin image gallery generator to run on both Linux and macOS (it previously required ImageIO/CoreGraphics). libexif is used to extract and format the EXIF data from the image, while libiptcdata extracts the "newer" IPTC standard. Requirements - Linux (Ubuntu 20.10 tested) or macOS (10.15 tested) - Swift 5.2 (or newer) - libexif 0.6.22 (available in Homebrew or Ubuntu 20.10) - libiptcdata 1.0.4 Installation On Ubuntu/Debian based Linux: apt install -y libiptc-data libexif-dev libiptcdata0-dev On macOS using brew: brew install libexif libiptcdata Swift Package Manager Add SwiftExif to your dependencies: dependencies: [ .package(url: "", from: "0.0.x"), ] Usage SwiftExif aims to provide some simple helper functions that essentially returns all the data as dictionaries. For example: import SwiftExif // Read a JPEG file and return an Image object // Note: current error behaviour is to return empty dictionaries, no error is thrown. let exifImage = SwiftExif.Image(imagePath: fileURL) // Get a [String : [String : String]] dictionary. The first dictionary has items // from the spec e.g. 0, 1, EXIF, GPS... // The values are returned in "human readable format". let exifDict = exifImage.Exif() // Get a [String : [String : String]] dictionary. The first dictionary has items // from the spec e.g. 0, 1, EXIF, GPS... // The values are returned in a "raw" format. let exifRawDict = exifImage.ExifRaw() // Get a [String : Any] dictionary. // Most items are String, however "Keywords" are [String] let iptcDict = exifImage.Iptc() In addition to the high-level functions, a set of lower-level functions is available in the different classes. Have a look at the code or the unit tests to see what else you can do.
https://swiftpack.co/package/kradalby/SwiftExif
CC-MAIN-2021-04
refinedweb
300
61.53
Friends: I am a professor doing research with undergraduates. I am turning to this forum because it seems MUCH more collegial, kind, helpful and thoughtful than any other forums which I have read. Here is the issue: I am testing the Time-of-Flight board for a project and am unclear about inserting a delay value into the code that is based on distance measurements. In summary, at long distances an ERM is activated in one pattern, and at short distances it is activated in another pattern. I need a delay between activation patterns that is proportional to distance but have been unsuccessful. Sample code is inserted with comments. The problem is that the delay must be very short if the distance drops suddenly. I have been unsuccessful in doing so. Suggestions about how to code that DELAY? Ideally, I’d like to compare a stored previous distance (“int inches”) with a new distance reading (“int new_inches”) but I have been unsuccessful. I would really appreciate specific suggestions of code syntax and placement. Thanks in advance. The portion of the sketch with the problem is noted by this line “//HERE IS THE PROBLEM PORTION**” #include <Wire.h> #include "Adafruit_VL53L0X.h" Adafruit_VL53L0X lox = Adafruit_VL53L0X(); int pin5 = 5;// to debug with LED int ERM = 6; int y = 10; int inches; void setup() { pinMode (pin5, OUTPUT);//to debug with LED Serial.begin(115200); // wait until serial port opens for native USB devices while (! Serial) { delay(1); } Serial.println(F("Adafruit VL53L0X test")); if (!lox.begin()) { Serial.println(F("Failed to boot VL53L0X")); while (1); } // power Serial.println(F("VL53L0X test\n\n")); } void loop() { VL53L0X_RangingMeasurementData_t measure; //Serial.print(F("Reading a measurement... ")); lox.rangingTest(&measure, false); // pass in 'true' to get debug data printout! if (measure.RangeStatus != 4) { // phase failures have incorrect data } //This is the IF condition that activates the ERM at long distances // 24 to 60 inches away. Inches = mm*0.039. if (measure.RangeMilliMeter >= 600 && measure.RangeMilliMeter <= 1524) { int inches = (measure.RangeMilliMeter * 0.04); //convert mm to inches int PWM = ((-27 * log(inches)) + 250); for (int x = 0 ; x <= PWM; x += 4) { analogWrite( 6 , x ); delay (y); analogWrite( 6 , 0 ); } for (int x = PWM; x >= 0; x -= 4) { analogWrite( 6 , x ); delay (y); analogWrite( 6 , 0 ); } Serial.print(F("long inches= ")); Serial.println (inches); Serial.println (); } //At this point, the sketch needs to reassess the distance measurement and //DELAY between stimuli (the ISI or inter-stimulus delay) based on that //new distance (i.e., long for far away but very short if distance //has suddenly dropped. However I'm not sure how best to do this. //If the distance has dropped suddenly after the last assessment //here, I need to read a new distance measurement from the incoming data stream. //I tried creating a new variable to compare to previous inches but have been unsuccessful. //The IF conditional below is not working as I'd like. I need the delay to be short //immediately if the distance drops suddenly but stay long if the //distance stays long. Ideas on how to code this? if (measure.RangeMilliMeter > 0 && measure.RangeMilliMeter < 600) { delay (50); } else if (measure.RangeMilliMeter > 600) { int inches = (measure.RangeMilliMeter * 0.04); // convert mm to inches int ISI01 = (151 * (exp(0.0732 * inches))); // calculate inter-stimulus interval #1 delay (ISI01); //inter-stimulus #1 } //This is the IF condition that activates the ERM at short distances // 0 to 24 inches away. Inches = mm*0.039. if (measure.RangeMilliMeter >= 0 && measure.RangeMilliMeter <= 600) { int inches = (measure.RangeMilliMeter * 0.04); // mm to inches int ISI02 = (151 * (exp(0.0732 * inches))); //e = 2.72; calculate ISI #2 int PWM = ((-27 * log(inches)) + 250); Serial.print(F("short inches= ")); Serial.println (inches); Serial.println (); analogWrite( 6 , PWM ); delay (70); analogWrite( 6 , 0 ); delay (ISI02) } // close if (measure.RangeMilliMeter 0-600) else { //Serial.print(F("waiting")); //Serial.println(); } } //closes Void loop
https://forum.pololu.com/t/vl53l0x-compare-values/15178/1
CC-MAIN-2018-30
refinedweb
646
60.11
Details Description Currently NameNode allows format command while it running. In this case the command is executed partially (Lock file is deleted) and an exception thrown. Because of this Name Node should be formatted after restart. This sort of cases can happen accidentally. To prevent such cases Name Node should not execute the format command partially while it is running. It can stright away throw exception/log saying, Name Node is running. Issue Links Activity - All - Work Log - History - Activity - Transitions -1 overall. Here are the results of testing the latest attachment against trunk revision 10822 -1 contrib tests. The patch failed contrib unit tests. +1 system test framework. The patch passed system test framework compile. Test results: Findbugs warnings: Console output: This message is automatically generated. NameNode should not format partially. If the lock file is present the format command should warn and exit. I believe this is what it does. Does it not? The lock file is the right way to determine that the NameNode is running, not the web server. Thanks to Konstantin for spending time and giving comments. I agree with you, But the problem here is if cluster shuts down abruptly then that lock file will still present in namespace directory. In this situation if user wanted to format, he can not format the NameNode at all. Thats why i have choosen this option. Please tell me your opinion. In that edge condition, I think it's better to make the user manually remove the lock file before proceeding. Using the web port really seems like a hack, and for rare conditions like this, it's best to make the user be very sure of what they're doing rather than allow them to do something stupid inadvertently. The purpose for the lock file "in_use.lock" is to hold a lock on that file while the NameNode is running. This is in order to prevent from accidentally starting another NN in the same directory, which can mess up the fsimage and the edits. When the NameNode shuts down abruptly, which it always does, as there is no other way to stop the NameNode but to kill it, the lock on the lock file will be released, and the format will be able to delete the file. Locking may be platform dependent, but is known to work for most traditional file systems. Also in_use.lock is deleteOnExit. It would be good if you could investigate why it is not deleted upon exiting. Thanks for your comments & sorry for the late reply. Lock file will be deleted for normal exiting of system ( kill <pid>) Lock file will not be deleted only for abrupt killing ( kill -9 <pid>) Below is from deleteOnExit() docs - Deletion will be attempted only for normal termination of the - virtual machine, as defined by the Java Language Specification. Why NN format command is deleting the in_use.lock file? Since hdfs format command will execute in separate JVM, it will add lock file into DeleteOnExitHook. FileLock tryLock() throws IOException { File lockF = new File(root, STORAGE_FILE_LOCK); lockF.deleteOnExit(); ………… As we know, Once format command completed,that particular JVM will exit. On exit of that JVM, this Hook will try to delete the files which are added to that DeleteOnExitHook. In Linux, we can delete the file even if other process using that. Delete api will return true in this case. I did the same Test in windows, Here we can not delete that lock file when other process using it. Just delete api will return false in this case. My proposal would. Please let me know your opinion. I think we can just skip checking the presence of in_use.lock during formatting and go directly to setting the lock on that file? The effect will be the same. If file does not exist, it will create and lock it. I think the problem with formatting is in StorageDirectory.clearDirectory(). It calls FileUtil.fullyDelete() if curDir exists, which deletes all files in the directory, but the order of deletion is not deterministic. This probably causes the partial formatting you mentioned before. If format() first deletes in_use.lock it will fail correctly if another NN is still running. If it first delete fsimage then in_use.lock, then it will also fail, but will leave the state unrecoverable. So I'd propose to modify StorageDirectory.clearDirectory() to first delete file STORAGE_FILE_LOCK=in_use.lock then the rest of it. I see now it's a bug. Hi Konstantin, Thankyou for the comments I think we can just skip checking the presence of in_use.lock during formatting and go directly to setting the lock on that file? The effect will be the same. If file does not exist, it will create and lock it. No. First itself i will check the in_use.lock file presence. After that only i am going for other conditions. Above mentioned proposal. So I'd propose to modify StorageDirectory.clearDirectory() to first delete file STORAGE_FILE_LOCK=in_use.lock then the rest of it. I see now it's a bug. But here the problem could be that , As i know in Linux, always i can delete the files, no matter other process is using it or not. One more idea i got, i.e, why con't we restrict besed on PID in script file itself ?. It is like, how hadoop start script will restrict to start the process when it is already running. Thanks Todd. I have seen it. One option is, if already web server started on one port , we can just ignore/throw exception for format command.Because NN will start the webserver mandatorily right.
https://issues.apache.org/jira/browse/HDFS-1690
CC-MAIN-2016-36
refinedweb
941
68.26
In this lab, we’re going to begin to look at what makes computers do their thing so to speak. It is rather insightful to look at how Wikipedia summarizes the computer: A computer is a programmable machine designed to sequentially and automatically carry out a sequence of arithmetic or logical operations. The particular sequence of operations can be changed readily, allowing the computer to solve more than one kind of problem. In other words, a computer is a calculator–and much more. Furthermore, the definition of a computer goes on to include access to storage and peripherals, such as consoles (graphical displays), printers, and the network. We already got a glimpse of this access when we explored Console.WriteLine in the first lab exercise. We have discussed all the syntax and concepts needed in recent sections on Arithmetic, Variables and Assignment, Combining Input and Output, and Casting. Also you can make things easier for yourself using Substitutions in Console.WriteLine to format output. Before writing your final program, you might like to review some of the parts, testing in the Csharp program, so you get immediate feedback for the calculations. We want to develop a program that can do the following: Your final program should work as in this sample run, and use the same labeled format: Please enter the numerator? 14 Please enter the denominator? 4 Integer division result = 3 with a remainder 2 Floating point division result = 3.5 The result as a mixed fraction is 3 2/4. For this lab the example format 3 2/4 is sufficient. It would look better as 3 1/2, but a general efficient way to reduce fractions to lowest terms is not covered until the section on the algorithm Greatest Common Divisor. To do the part requiring a decimal quotient you are going to need to have a double value, though your original data was of type int. You could use the approach in Casting, with an explicit cast. Another approach mentioned in that section was to do the cast implicitly in a double declaration with initialization from an int. If we already had int variables, numerator and denominator, that were previously assigned their values, we could use: double numeratorDouble = numerator; // implicit cast double quotientDouble = numeratorDouble/denominator; ... Remember: at least one operand in a quotient must be double to get a double result. To help you get started with your program code, we provided this simple stub in the example file do_the_math_stub/do_the_math.cs. You are encouraged to copy this into your own project as reviewed after the lab in Xamarin Studio Reminders and Fixes. The body of Main presently contains only comments, skipped by the compiler. We illustrate two forms (being inconsistent for your information only): //to the end of the same line /*to */through any number of lines. Save the stub in a project of your own and replace the comments with your code to complete it: using System; class DoTheMath { // Lab stub static void Main() { /* Prompt the user for the numerator using Console.Write(). Convert this text into int numerator using int.Parse(). Do the same for the denominator. Calculate quotient and remainder (as integers) Use Console.WriteLine() to display the labels as illustrated in the sample output in the lab. Do the same but using floating point division and not doing the remainder calculation. Create the sentence with the mixed fraction. Be careful of the places there are *not* spaces. */ } } Be sure to run it and test it thoroughly. Show your output to a TA. Be careful to open your Xamarin Studio solution and add a new C# Console project to it, and add your new file directly into the project (through the Solution pad). There are two main places to mess up here. We emphasize them and mention fixes if you make the easy mistakes: It is easy to select Empty Project instead of C# and Console Project. If you do that, a correct program will compile successfully, but it will run in limbo, with no console attached to it, and all Console.ReadLine() calls return null, which is likely to make the program have a run-time error. One way to fix it: If you discovered this while running your program, there is no good access to the running process. (You lack a console!) In this case you need to close your solution, ending the running process, and open the solution again. Double click on the project in the Solution Pad (if that does anything, or right-click it and select Options). An elaborate Project Options dialog window appears. In the left pane under Run, select General. In the right pane, two check boxes should appear. Make sure you have the first checked: Run on external console. That should check the second one automatically. Close the window and you should be set. Be careful, it is possible to uncomment the second checkbox, which makes your execution console close instantly at the end of your program, so you miss any last thing printed. Recheck if necessary. Another common error is to proceed like with most text processors, and go to the the top Xamarin Studio menu (not in the the Solution pad) and open a file using the application’s File menu item, and choose to open and edit a new file for your program. This does not put the file in your Xamarin project. Hence you cannot run this program from Xamarin Studio. The file you edit must show in the solution pad in Xamarain Studio, as a source file in your project. If you have a separate project set up, but without this file or any other showing in the Solutions pad, an attempt to run the project will say no Main method (in fact no program at all). The fix: Always use the Solution pad to add files to your project and open them to edit. If you lose the display of the Solution pad somehow, you can go to the View menu, select Pads, and then select Solution.
http://books.cs.luc.edu/introcs-csharp/data/lab-division-sentences.html
CC-MAIN-2019-09
refinedweb
1,009
63.19
Container Classes Introduction The. The Container Classes Qt provides the following sequential containers: QList, QLinkedList, QVector, QStack, and QQueue. For most applications, QList is the best type to use. Although it is implemented as an array-list, it provides very fast prepends and appends. If you really need a linked-list, use QLinkedList; if you want your items to occupy consecutive memory locations, use QVector. QStack and QQueue are convenience classes that provide LIFO and FIFO semantics. Qt also provides these associative containers: QMap, QMultiMap, QHash, QMultiHash, and QSet. The "Multi" containers conveniently support multiple values associated with a single key. The "Hash" containers provide faster lookup by using a hash function instead of a binary search on a sorted set.. The Iterator Classes. Java-Style Iterators), and printing them to the console: QList<QString> list; list << "A" << "B" << "C" << "D"; QListIterator<QString> i(list); while (i.hasNext()) qDebug() <<Debug() <<LinkedList's, QVector's, and(); STL-Style Iterators is done with reverse iterators: QList<QString> list; list << "A" << "B" << "C" << "D"; QList<QString>::reverse_iterator i; for (i = list.rbegin(); i != list.rend(); +: QVector<int> a, b; a.resize(100000); // make a big vector filled with 0. QVector<int>::iterator i = a.begin(); // WRONG way of using the iterator i: b = a; /* Now we should be careful with iterator i since it will point to shared data If we do *i = 4 then we would change the shared instance (both vectors) The behavior differs from STL containers. Avoid doing such things in Qt. */ a[0] = 5; /* Container a is now detached from the shared data, and even though i was an iterator from the container a, it now works as an iterator in b. Here the situation is that (*i) == 0. */ b.clear(); // Now the iterator i is completely invalid. int j = *i; // Undefined behavior! /* The data from b (which i pointed to) is gone. This would be well-defined with STL containers (and (*i) == 5), but with QVector this is likely to crash. */ The above example only shows a problem with QVector, but the problem exists for all the implicitly shared Qt containers. The foreach Keyword ones. In addition to foreach, Qt also provides a forever pseudo-keyword for infinite loops: forever { ... } If you're worried about namespace pollution, you can disable these macros by adding the following line to your .pro file: CONFIG += no_keywords Other Container-Like Classes Qt includes three template classes that resemble containers in some respects. These classes don't provide iterators and cannot be used with the foreach keyword. - QVarLengthArray<T, Prealloc> provides a low-level variable-length array. It can be used instead of QVector in places where speed is particularly important. - QCache<Key, T> provides a cache to store objects of a certain type T associated with keys of type Key. - QContiguousCache<T> provides an efficient way of caching data that is typically accessed in a contiguous way. - QPair<T1, T2> stores a pair of elements. QLinkedList::insert(). - Logarithmic time: O(log n). A function that runs in logarithmic time is a function whose running time is proportional to the logarithm of the number of items in the container. One example is qBinaryFind(). - Linear time: O(n). A function that runs in linear time will execute in a time directly proportional to the number of items stored in the container. One example is QVector:. Growth StrategiesString allocates 4 characters at a time until it reaches size 20. - From 20 to 4084, it advances by doubling the size each time. More precisely, it advances to the next power of two, minus 12. (Some memory allocators perform worst when requested exact powers of two, because they use a few bytes per block for book-keeping.) - From 4084 on, it advances by blocks of 2048 characters (4096 bytes). This makes sense because modern operating systems don't copy the entire data when reallocating a buffer; the physical memory pages are simply reordered, and only the data on the first and last pages actually needs to be copied..
https://doc.qt.io/archives/qt-5.11/containers.html
CC-MAIN-2021-39
refinedweb
671
57.57
NAME Data::CapabilityBased - Ask your data not what it is, but what it can do for your program SYNOPSIS use Data::Store; use Data::Collection; use Data::Stream; use Data::Query; DESCRIPTION The Data::CapabilityBased module itself is, and will always be, an empty placeholder providing an overview of the concepts of the project and links to the known distributions making use of this code. Sub namespaces of this module will likely contain helper modules to ease bla blah finish this bit when we find out if we need them. This distribution is uploaded in the absence of code in order to function as a central point to document the design of the capabilities as we flesh them out and start building the compliance suites; please see the Data::Store, Data::Collection, Data::Stream, Data::Query for the progress made so far towards this. MANIFESTO The principle behind this system is: when you're being passed something that's being treated as simply data, you shouldn't be thinking about -what- you've been passed but merely whether you can use it in the manner you need to. So, to pick an example I deal with every day, you have code that needs to process a set of objects. Think - method do_something ($to) { foreach my $target (@$to) { $to->frotz; } } Now, of course, this is great if $to is an arrayref. But otherwise you're in trouble. So, you think "hey, I'll add a type check:" method do_something ($to) { confess "Dammit, Jim, I'm a deckchair not an osculator" unless ref($to) eq 'ARRAY'; ... But now what happens if it's an object that arrayifies? BOOM. (a good example of this from my world would be a DBIx::Class::ResultSet). Well. We could test it's something that arrayifies: method do_something ($to) { confess "Out of Cleese error. Call stack has Goon away." unless (ref($to) eq 'ARRAY' || (blessed($to) && $to->can('(@{}'))); ... but for a start that's really ugly, and more importantly it only handles the case that we want to do @$to. Which is probably fine, except now we're going to slurp whatever contents of that resultset were into memory at once. If it contains a million records, we just made your computer cry (and probably your sysadmin developercidal). So, what's a better approach? Well, what if we could say to our data "hey, I know you're capable of returning me a series of objects, but I really just want to run something on all of them" - so, something like method do_something ($to) { $to->each(sub { $_->frotz }); } but then how do we know if this is something that can provide a suitable each method ... and what do we do about plain arrayrefs, which don't have methods at all? Well, given autoboxing can provide an ->each method on an arrayref, we can do something like: use Data::Collection::Capabilities qw(Eachable); use Data::Collection::Autobox; method do_something (Eachable $to) { and then an arrayref will be automatically autoboxed with an ->each method that supports this interface, and report that it provides the capability, and a collection object that declares its capabilities will pass the type test as well. AUTHOR Matt S. Trout (mst) <[email protected]> LICENSE This library is free software under the same license as perl itself.
https://metacpan.org/pod/Data::CapabilityBased
CC-MAIN-2015-35
refinedweb
554
57.3
A Wagtail add-on for supporting marketeer's in daily activities Project description About wagtail-marketing-addons An (opinionated) overview of all pages and their corresponding promotion settings. Support - Wagtail 2.11 and higher Use-case When dealing with large amounts of pages the content editor experience differs from a marketeer's editor experience. A marketeer would more likely want to see what page options were set for SEO and SEA purposes. In this case it can be quite a burden to go through the page explorer, verifying whether the page title, SEO title and search description were set properly. Solution The SEO Listing within the wagtail-marketing-addons will show: - An overview of all pages in a single list; - Relevant properties: page title, SEO title, search description; - A preview what it could look like in a search engine; - A basic score indicating how this would perform in terms of word and character count. Things to consider As stated this plugin contains an opinionated perspective on how you would handle your HTML rendering. With this use-case and solution we're assuming the following rationale on your page: <title>{% if self.seo_title %}{{ self.seo_title }}{% else %}{{ self.title }}{% endif %} | Your Site</title> <meta name="description" content="{% if self.search_description %}{{ self.search_description }}{% endif %}"> In this case your SEO title (when filled in) has a greater priority over the Page Title. Documentation For more information on getting started, an overview of all available settings and the rationale behind the scoring mechanism please see our documentation on Read the Docs. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/wagtail-marketing-addons/
CC-MAIN-2021-25
refinedweb
279
52.6
This site uses strictly necessary cookies. More Information I've been trying to find an elegant way to do this without much luck. Imagine the character is running around on the inside of this surface, upside-down and the like, but never falling off. The key thing is that despite not falling, gravity is still always pulling towards the world down, so this is not the Mario Galaxy effect. Rather, the character will slide down (world down) surfaces where you'd expect him to, except that it even happens when he's upside-down. For example, imagine the character is upside-down on the east-facing underside wall of that large valley. He won't fall off, because that's the effect I'm after, but the gravity pulling towards the world down will still cause him to slide down the underside of that valley. Standing at the bottom point of that valley, upside-down, is however effortless. Force cannot be used to keep the character pushed against the surface because of this: "Disconnecting" from the surface will often mean not falling back towards it. I have been working with an approach of duplicating the whole surface, leaving the duplicate in the same place and flipping its normals, and using the 2 surfaces as 2 mesh colliders. Then creating 2 sphere colliders, parented to a rigidbody, that are hooked around the surface. Maybe this is the best way to go, maybe it isn't. I find it requires transform.rotation to keep things in the right place, which is obviously not playing nice with the physics side of things. Is there a better way known to go about this? Or is coding in my own physics the only thing for it? Well I'd be going with the physics approach and apply just enough force to stay on the surface using the normal of the part hit. I'm not sure what you have attached to this surface, so can't comment on how great this force would need to be when opposing gravity. Real things which cling oppose gravity (hence its hard for massy things to cling). It should be reasonably easy to calculate the necessary force each fixed step. Edited the question in response. Basically, if the character is upside-down at an angle, the force is either going to push the character up the slope, which is bad, or pull the character down off the slope, which is bad. Aren't you really just wanting to have no gravity on this particular object only when touching the surface and align it to the surface normal at a given distance? Nope, the gravity must always be pulling the character down any surface, just not off it. I was suggesting previously that you apply a force to combat gravity compared to the surface angle, that would not make it slide if it were calculated correctly, but is efectively the opposite of gravity combined with the vector to the surface, just that seems to be easier to fudge. Answer by s_guy · Apr 24, 2013 at 08:16 AM It really depends on the feel of gameplay you're after. If you really want stuff to stick to the track or play relative to the track itself and not gamespace, I wouldn't use Unity physics or a generalized physics solution at all. Use Unity's character controller, roll your own, or use a kinematic rigid body. This way, you don't have to tune complex force interactions. Such a set up would likely be fragile. Instead, limit the movement direction explicitly and let your gravity calculations influence it. Set a movement vector to Vector3.zero for each Update Determine the current direction vector based on your character's position on the track and user input. Perhaps raycast to the surface under the character's collider (not necessarily "down") and look at the hitinfo's normal to work out effective character direction. Add to your movement vector, the direction vector multiplied by speed related to input Add to your movement vector, the the gravity factor; could just be addition of a constant negative value to the "vertical" axis of your movement vector Multiply this movement vector by Time.deltaTime for that Update Invoke your move, e.g. controller.Move(movementVector) With this approach you can have: global, unidirectional gravity complete control of allowed character movement characters perfectly "stuck" to the surface below them Also see restricting movement to a curved path Good luck! I believe this would also require seeking the ground then transform.position'ing the character any time they passed a convex curve. Doable, but I think it involves some guesstimating raycasts to find the ground. I'm not sure of the best way to go about that. I think that gravity factor is not just a constant, but it could be worked out. This is basically the core of your suggestion; work out the factor of gravity's acceleration along the current surface plane and subtract that from whatever the player's input plus inertia would produce. The same with the movement vector. I wouldn't want to set it to zero unless the character had no inertia at all. So I would need to also recalculate that for the new surface angle. You would need something to drop your character to the track or figure out how to start it flush with the track. I would probably use my own gravity solution and raycast from the "foot" of my controller to see if I've landed. I'm suggesting that gravity is always "down" and you manually manage inertia across Update events, if at all. Gravity plays no part in keeping you stuck to the track; you simply don't allow movement except along the track. Gravity just makes going uphill "hard" and accelerates you downhill, making it "easy". I would get a solution working with no inertia or gravity at first. Basically, your character would glide freely (and correctly) on the track with forward and backward input. Once you've got that, you can add complications. Setting the movement to zero each frame is just a methodology for starting with a clean slate before determining how you're going to influence the controller each update. If you want to later have some frame-to-frame influence, have fun! (but I suggest starting without that.) Regarding overshooting convex bends, here's an image describing that problem. The blue text indicates the issue with no gravity; the red, with gravity, which I of course want. What do you think? Allowing the discrepancy would mean allowing it for all convex bends, which I imagine could make motion around the outside of spheres be faster than it should. $$anonymous$$aybe not enough/too infrequent to notice. All I can think of for this is casting some arbitrary number of rays out behind the character in frame 2, and using the hit of the ray that travelled the least distance. Answer by harperrhett · Nov 02, 2020 at 02:23 AM I know this is an older post but I'd like to share my coded solution anyways just in case. It essentially does what was suggested above. Also not sure if it's the exact solution as I'm not sure what is being asked about sliding and additional forces. The player used has a rigid body on it and has gravity turned off btw, but to get what you're looking for you would probably just keep it on. It just takes the normals of the surface below the player, rotates the player to align with them, and then applies gravity in that direction. I also used lerping to smooth out the transition between surfaces. Using UnityEngine; [RequireComponent (typeof (Rigidbody))] public class GravityBody : MonoBehaviour { // Initialize private Rigidbody rb; private Vector3 normal = Vector3.down; private Vector3 targetDirection = Vector3.down; private Quaternion targetRotation = new Quaternion(0.0f, 0.0f, 0.0f, 0.0f); private const float GRAVITY = -10.0f; private const float RAYDISTANCE = 15.0f; private const float ROTATIONSPEED = 0.15f; // When script initializes private void Awake() { rb = GetComponent<Rigidbody>(); } // Physics update private void FixedUpdate() { // Set up ray to check the surface below player RaycastHit temp_hit; Ray temp_ray = new Ray(transform.position, -transform.up * RAYDISTANCE); // Gets the normal of the surface below the character, and also creates a target direction for gravity if (Physics.Raycast(temp_ray, out temp_hit)) { normal = temp_hit.normal; targetDirection = (transform.position - temp_hit.point).normalized; } // Finds desired rotation relative to surface normal targetRotation = Quaternion.FromToRotation(transform.up, normal) * transform.rotation; // Apply rotation and gravity transform.rotation = Quaternion.Lerp(transform.rotation, targetRotation, ROTATIONSPEED); rb.AddForce(targetDirection * GRAVITY); } } Answer by Belangia · Apr 24, 2013 at 08:36 AM I would use force for gravity, shoot a ray from the Models local bottom to the (LAND). I would then record that impact and use force to push towards the (LAND). Then I would have the model always look away from the direction of force so that the Model seems to always have its bottom side facing the (GROUND). I would use OnTriggerEnter() to check if force needs to shut off, if so also prevent the new translation on the transform so that you do not collide with the (LAND). Other than that I would use trial and error till you get the feel you want to have. Good Luck. Sorry I don't have Code to show for this but the logic is there. Can't use force because it will be denying the pull of gravity the player should feel. And we don't know where LAND is, except for your earlier suggestion of the last point at which contact was made... which is not something I'd want to align the character base to. I think because of the force issue, s_guy is right. Apart from my "2 colliders" approach, I can't just say "ignore gravity but obey gravity" to Unity. walk on walls (third person) 0 Answers Airplane Stall 0 Answers How can I rotate and move a cylindrical rod from one side with respect to other side? 1 Answer Apply Gravity to Car Physics 0 Answers Affecting FPS controller/Character Motor physics with game object 1 Answer EnterpriseSocial Q&A
https://answers.unity.com/questions/443938/whats-a-good-way-to-keep-a-character-connected-to.html
CC-MAIN-2021-25
refinedweb
1,712
62.78
Electronics > Mechanical & Automation Engineering Programming TSP in Python (Keithley DMM7510) (1/2) > >> Coolboy4999: Hello there. I've recently started a project on a DMM7510 :-DMM, connected by USB to a PC (Windows OS) where Python code is running, sending TSP commands to the device. (PyUSB, PyVISA and all the other libs already incorporated and working fine). My Python code successfully sends the TSP commands to the device and it executes them, but I'm looking at a way to get some return values from the TSP commands onto my Python code. For example, I'm using the DMM's display as an user interface (instead of the PC's CMD or a Python generated one [for now, at least]) and if I were to send a TSP commands like this... --- Code: ---my_instrument.write("display.input.prompt(display.BUTTONS_YESNO, \"Continue?\")") --- End code --- ... I'd want the Python code to know which YES/NO Button was pressed, since I'm placing all the "decision-making" there. Any tips? P.S.: I've received a reply on another forum () but I encountered another issue. Before you reply please do have a look at what was answered. Much appreciated :D jeremy: Can you just do something like: while 1: try: val = inst.read() break catch pyvisa.errors.VisaIOError as e: if was_timeout(e): continue else: raise You would need to implement “was_timeout” yourself; basically just inspect the exception to see if it was the timeout like you mentioned in your Tek forum post. Coolboy4999: hey jeremy Firstly I think you mistook Python for another language (Java, maybe, idk), because in Python the syntax is "try ... except". Anyways, now my code does seem to "wait" for an input but only at the first instance of running the code. On the following times, it fetches the previous value/button pressed as I had already mentioned. Maybe I need to clear the "read()" buffer before actually reading it, no? Also, this ".read()" function: --- Code: ---my_instrument = rm.open_resource(address) (...) val = my_instrument.read() --- End code --- It isn't the same as "dmm.measure.read()", "dmm.digitize.read()", "file.read()" or "tspnet.read()", right? I can't seem to find it on the Reference Manual () Also, not a Python expert, I'm not sure how to handle/check that Error "type" in the "was_timeout()" function. Tips? jeremy: Yes, you are correct about the except, C++ has been corrupting my brain lately… Yes, you probably need to empty the buffer at the beginning of the next iteration or run script. If there isn’t a function for it, you could just do a similar thing: while 1: try: val = inst.read() except pyvisa.errors.VisaIOError as e: if was_timeout(e): break else: raise jeremy: To check the error, it looks like you can do something like: def was_timeout(e): return e.abbreviation == "VI_ERROR_TMO" Edit: see implementation here: Navigation [0] Message Index [#] Next page
https://www.eevblog.com/forum/mechanical-engineering/programming-tsp-in-python-(keithley-dmm7510)/?wap2;PHPSESSID=e6ofrbn7fphju6jm67e11ugsb3
CC-MAIN-2022-33
refinedweb
482
61.36
#include <wx/graphics.h> A wxGraphicsMatrix is a native representation of an affine matrix. The contents are specific and private to the respective renderer. Instances are ref counted and can therefore be assigned as usual. The only way to get a valid instance is via wxGraphicsContext::CreateMatrix() or wxGraphicsRenderer::CreateMatrix(). Concatenates the matrix passed with the current matrix. Concatenates the matrix passed with the current matrix. Returns the component values of the matrix via the argument pointers. Returns the native representation of the matrix. For CoreGraphics this is a CFAffineMatrix pointer, for GDIPlus a Matrix Pointer, and for Cairo a cairo_matrix_t pointer. Inverts the matrix. Returns true if the elements of the transformation matrix are equal. Returns true if the elements of the transformation matrix are equal. Return true if this is the identity matrix. Rotates this matrix clockwise (in radians). Scales this matrix. Sets the matrix to the respective values (default values are the identity matrix). Applies this matrix to a distance (ie. performs all transforms except translations). Applies this matrix to a point. Translates this matrix.
https://docs.wxwidgets.org/3.0/classwx_graphics_matrix.html
CC-MAIN-2018-51
refinedweb
178
54.18
One of the new areas that came along with Windows 10 and the UWP was the ability for one application to offer a ‘service’ to another application. There was a //Build session about this back in May; with the discussion on app services starting at around the 25 minute mark. An app service is essentially a background task that an app offers out such that other apps can invoke that background task. You might think of examples such as a Twitter app which provides a service to get some portion of the user’s tweets or an Instagram app that offers a service to get photos or similar. Naturally, the app providing the service needs to take care to ensure that the service that it is offering isn’t open to abuse and there might be scenarios where it makes more sense for the app to support the ‘launch for results’ type of operation that’s also talked about in that //Build video above. But, app services are a welcome addition to the platform and I’ve been demo’ing them for quite a while with an app that goes off and performs my favourite API call of searching for photos on the flickR service. The screen capture below shows how this works – I didn’t have a microphone and so I just typed out what was going on… In terms of the way in which this is built, there are the 2 pieces as you’d expect – the app offering the service and the app consuming it. The App Service My app service app is very simple. It has 2 projects, one being the app and one being the custom WinRT component that offers up a background task; The app itself is somewhat unusual in that it doesn’t actually register any background tasks at all. There is no ‘app service trigger’ that I use to register the background task. What the app does have is a reference to the background task project (as an easy way of making sure that it gets deployed); and it has an entry for it in the app manifest; and then it just has some code which populates the UI (and the clipboard) with the details of the service; public sealed partial class MainPage : Page { public MainPage() { this.InitializeComponent(); this.Loaded += OnLoaded; } private void OnLoaded(object sender, RoutedEventArgs e) { this.txtPackageName.Text = Package.Current.Id.FamilyName; } void OnClick(object sender, RoutedEventArgs e) { DataPackage package = new DataPackage(); package.SetText(this.txtPackageName.Text); Clipboard.SetContent(package); } } Any remaining implementation is in the background task itself; using Windows.ApplicationModel.AppService; using Windows.ApplicationModel.Background; using Windows.Foundation.Collections; using System; using System.Net.Http; using Windows.Storage; using System.IO; using Windows.ApplicationModel.DataTransfer; public sealed class BackgroundTask : IBackgroundTask { BackgroundTaskDeferral deferral; public async void Run(IBackgroundTaskInstance taskInstance) { this.deferral = taskInstance.GetDeferral(); taskInstance.Canceled += OnCancelled; var appService = taskInstance.TriggerDetails as AppServiceTriggerDetails; if (appService != null) { // we should do more validation here if (appService.Name == "flickrphoto") {(); } } this does a couple of ‘interesting’ things if you haven’t seen app services before; - Line 19 grabs the TriggerDetails as an AppServiceTriggerDetails. - Line 24 does a basic check to make sure that the caller is expecting the “flickrphoto” service. - Line 26 sets up a RequestReceived handler to pick up requests for that service. The OnAppServiceRequestReceived handler does the rest of the work – the one thing I’d flag here is that the system uses ValueSet in order to pass data to/from the consumer of the app service and the service itself. ValueSet is just a dictionary and, initially, I thought this might be very limiting for my service because I wanted to pass a file back across the boundary between the service and its client. However, there’s a means to do exactly that – you’ll see that line 52 is making a temporary file, line 59 is using the SharedStorageAccessManager to get a token for that file and that token is passed back across the boundary to the client of the service such that it can be ‘redeemed’ for the file on the other side of the bounday. The Client The client’s a pretty simple thing here. One thing it does is to monitor the clipboard and copy and newly pasted items from it in the hope that it might be the package family name from the other app that offers the service; void OnLoaded(object sender, RoutedEventArgs e) { Clipboard.ContentChanged += OnClipboardChanged; this.OnClipboardChanged(null, null); } async void OnClipboardChanged(object sender, object e) { var packageView = Clipboard.GetContent(); if (packageView != null) { try { var text = await packageView.GetTextAsync(); if (!string.IsNullOrEmpty(text)) { this.txtPackageName.Text = text; } } catch { } } } the other thing that it does is to wait for a button press before attempting to call the app service. That code looks like; async void OnClick(object sender, RoutedEventArgs e) { AppServiceConnection connection = new AppServiceConnection(); connection.PackageFamilyName = this.txtPackageName.Text; connection.AppServiceName = this.txtServiceName.Text;(); } } and you’ll see that it makes use of the new AppServiceConnection class and sets up the necessary PackageFamilyName and AppServiceName parameters before attempting to OpenAsync() on that connection. You’d also spot that it sends a ValueSet across the boundary to the app service where (in my case) that value set simply contains the text of the query that we’d like to execute. When the service responds we attempt to get the token that was created for the response file and we use SharedStorageAccessManager.RedeemTokenForFileAsync() in order to turn that token back into a file that we can present as a bitmap on the screen. Simple! But powerful If you want the code for this, it’s here for download. If you compile it, you’ll find an error at the point where you need to insert your own API key for flickR.
https://mtaulty.com/2015/10/16/m_15973/
CC-MAIN-2021-25
refinedweb
966
50.87
How To Create Your First Web Application In Python Bottle is a great python web framework for building small web applications, rapid prototyping and getting started with web development. I like it very much and one of the reason is the fact that. How To Download And Install bottle In Your Machine You can easily install bottle in Ubuntu and Debian based systems with the pip python package manager. Install pip with the following command. sudo apt-get install python-pip Then install bottle. sudo pip install bottle Or install the python micro web-framework directly with the following command. sudo apt-get install python-bottle After the installation is finished open a new terminal, run python in interactive mode and try to see if you can import bottle. python import bottle If import bottle does not produce any error it means the installation of bottle is done right. Is there any way to use bottle without installing it? Yes, it is. You can download bottle into your project directory and start coding immediatly. Use the wget utility to download it like shown below. wget After the download is finished copy bottle.py to your project directory and start programming. Now that you have installed bottle it is time to start and code a simple web application. Simple Web Application In Bottle Create the app.py file with your favourite text editor. I will use vim for this tutorial. You can easily install vim in Ubuntu and debian based systems with the following command. sudo apt-get install vim Create app.py file and open it for editing. vim app.py In order to create our simple web application and make it work we need to import route and run from bottle like shown below. from bottle import route, run What is route? What about run? Why do we need to import them? How are we going to use them in our script? Can you explain that to us? route() is a decorator that will help us to bind a piece of code to an url path. For example the following code binds the my_app() function to '/myapp' path. The piece of code shown below is not part of our script. I am showing it to you only for learning purposes so you can easily understand how the route() decorator works later when we build our simple script. @route('/myapp') def my_app(): return "My first web app" run() will start a built-in development server which will run on localhost on port 8080 and will serve requests. Now that you know all that is needed for building our first web application in python using the bottle micro web-framework it is time to put everything you learned in practise and enjoy the results. from bottle import route, run Then type the piece of code shown in the following picture. We created a decorator and binded the test() function to /test path. Then run() calls the builtin development server like shown below. Save the app.py file, quit and run the following command to execute it. python app.py To see the results visit You should see the same output as shown in the following picture. You might also like Category: HOWTOS, PROGRAMMING
http://linoxide.com/how-tos/web-python/
CC-MAIN-2016-44
refinedweb
542
66.54
[SOLVED] High battery usage (Pro-Mini / RFM69 / Si7021) Hi all, A have a plain vanilla Temperature + Humidity sensor using all standard codes from MySensors. I saw a lot of posts mentioning +1year battery life, but I am getting a couple of weeks (11 days actually). I use a pro-mini 3V (removed the led and regulator), with a RFM69 (433MHz) and the SI7021. I also setup the battery level measurement setup as per the website (2 resistors, but have not used the capacitor). Below is the code I used. The only customization I made on the regular code was adding a way to measure the RSSI. I also tried to check the actual current, but without any success with my multimeter (maybe it is not sensible enough). Does disabling DEBUG reduce battery consumption (arduino is not connected to any serial port to receive debug anyway). Any though is much appreciated. /** *: Yveaux * * DESCRIPTION * This sketch provides an example of how to implement a humidity/temperature * sensor using a Si7021 sensor. * * For more information, please visit: * * */ // Enable debug prints #define MY_DEBUG // Atualizar para nó esperado #define MY_NODE_ID 3 // for frequency setting. Needed if you're radio module isn't 868Mhz (868Mhz is default in lib) //#define MY_IS_RFM69HW // Mandatory if you radio module is the high power version (RFM69HW and RFM69HCW), Comment it if it's not the case #include <MySensors.h> static bool metric = true; // Sleep time between sensor updates (in milliseconds) static const uint64_t UPDATE_INTERVAL = 60000; //10 minutos #include <SI7021.h> static SI7021 sensor; ); //Código novo - BatteryPoweredSensor int BATTERY_SENSE_PIN = A0; // select the input pin for the battery sense point int oldBatteryPcnt = 0; #endif #define CHILD_ID_HUM 0 #define CHILD_ID_TEMP 1 #define CHILD_ID_RSSI 2 //RSSI #define CHILD_ID_VOLT 3 //Battary Voltage int16_t rssiVal; //RSSI MyMessage msgHum(CHILD_ID_HUM, V_HUM); MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP); MyMessage msgRSSI(CHILD_ID_RSSI,V_VAR5); MyMessage msgVOLT(CHILD_ID_VOLT,V_VOLTAGE); void presentation() { // Send the sketch info to the gateway sendSketchInfo("Si7021_Tem_Hum_v03", "2.0"); // Present sensors as children to gateway present(CHILD_ID_HUM, S_HUM, "Humidity"); present(CHILD_ID_TEMP, S_TEMP, "Temperature"); present(CHILD_ID_RSSI, S_CUSTOM, "RSSI"); present(CHILD_ID_VOLT, S_MULTIMETER, "Voltage"); metric = getControllerConfig().isMetric; } void setup() { while (not sensor.begin()) { Serial.println(F("Sensor not detected!")); delay(5000); } // use the 1.1 V internal reference #if defined(__AVR_ATmega2560__) analogReference(INTERNAL1V1); #else analogReference(INTERNAL); #endif }); send(msgTemp.set(temperature, 2)); send(msgHum.set(humidity, 2)); #ifdef REPORT_BATTERY_LEVEL // get the battery Voltage int sensorValue =; float batteryV = sensorValue * 0.003216031; // medido #ifdef MY_DEBUG Serial.println(sensorValue);; static MyMessage msgVolt(CHILD_ID_VOLT, V_VOLTAGE); send(msgVolt.set(batteryV, 2)); } #endif rssiVal = _radio.readRSSI(); send(msgRSSI.set(rssiVal)); #ifdef MY_DEBUG Serial.print("RSSI: "); Serial.println(rssiVal); #endif // Sleep until next update to save energy #ifdef MY_DEBUG Serial.println("Sleep Start"); #endif sleep(UPDATE_INTERVAL); #ifdef MY_DEBUG Serial.println("Wake up!"); #endif } Duracell PlusPower (alkaline). An update: I was able to measure the currency using the powersource below (which is actually very helpful for testing) and it seem to be using 20 mA for 1s between sleeps and less than 1mA as currency is show zero (probably the 120uA estimated by MySensors). So consumption seem to be as expected. If I make all calculations with the above values, I should have around 6 months battery life. I will try to get new batteries, as those are probably +1 year old. any further thought is welcome. Thanks ******* IMPORTANTE UPDATE ********** Actually there is also a step-up converter, bought through the MySensors site (DC-DC Step Up Boost Module 3v3). What's the impact of the step-up converter on power consumption while the circuit is in sleeping mode? Thanks, @Oumuamua said in High battery usage (Pro-Mini / RFM69 / Si7021): Duracell PlusPower (alkaline). Well I actually meant what type of batteries, but I suppose you're using 2xAA, correct? The sketch indicates you started off from the sketch presented in Please follow the exact build instructions in this article (except for the radio connection ofcourse). You also don't need resistors to measure the battery level then, and a boost converter also isn't required with 2xAA. I personally have no experience with battery powered RFM6x sensors, but for Nrf24 the calculations come pretty close to reality so far; i have a pir sensor that has been running for 4.5 years on a single set of varta batteries and is still at 63%! Furthermore i wouldn't trust that current measurement. These usb sticks are ok to get a ballpark number, but they're likely milliamps off (at best). Invest in a good quality multimeter or eg a uCurrent. And last, the individual modules (arduino, si7021,...) can all have a higher than usual current consumption for whatever reason. If the consumption of the whole node is too high, you need to measure individual modules in isolation. But again, you need a decent current meter for that... @Oumuamua said in High battery usage (Pro-Mini / RFM69 / Si7021): What's the impact of the step-up converter on power consumption while the circuit is in sleeping mode? According to the datasheet 18-30μA Your sleep interval is set for 1 minute but the comment says 10 minutes. If you want 10 minutes between sends then you have to change the sleep_interval from 60000 to 600000. (note the extra '0'). Hi, Thanks for all replies. @skywatch : thanks! I was running a test with a 1 min loop but didn't change the description. @Yveaux : have you been able to run up to 63% without the booster? my sensor stops working when the battery is at 2.8v (2x AA) @eiten : this might be the latest issue I'm having. Even with my limited current measurement, the step-up is actually using a crazy 11mA (not uA) when idle. Same current as when not connected with the board. Will investigate further and post back. Thanks all, Guys, quick update: Seems the step-up had a problem. Changed to a new one does not use that much current. @Oumuamua said in High battery usage (Pro-Mini / RFM69 / Si7021): have you been able to run up to 63% without the booster Yes, just 2xAA, directly powering pro mini 8mhz & nrf24 What brown out level is set in the the atmega fuses? Mine are set to 1.8v (and yes, I realize I'm overclocking the atmega when batteries are low...) @Oumuamua said in High battery usage (Pro-Mini / RFM69 / Si7021): the step-up is actually using a crazy 11mA (not uA) when idle. Same current as when not connected with the board. That is not normal! Do you have a second step up? It seems to be... not normal @Oumuamua said in High battery usage (Pro-Mini / RFM69 / Si7021): Seems the step-up had a problem. Changed to a new one does not use that much current. Oh... Sorry. I should read all the new messages befor posting again.... Hi all, Thanks for all the help. After some testing and letting the node run for some days, it seems that the issue is partially solved: battery consumption is very low when the node is idle. However, the battery consumption seems to reach full runtime levels when something happens with the gateway. My follow-up question is: What exactly happens happens when the node sends a message to a gateways that is either unreachable or cannot pass the message on (to a MQTT server, for example)? Shouldn't the node try a couple of times and then move on to sleep as per the code? I have been working on my home network and had to disconnect / reboot the router, gateway, etc. Every time it happens, two situations occurred: gateway didn't report updates to the MQTT server (probably an issue with the gateway as the sensors are meters away and have -25 to -40 RSSI levels, I reseted the gateway in those few circumstances) message reaches the gateway and MQTT server, but the controller didn't record it (Openhab), in which case I rebooted the controller Many thanks again, Guys, Just quick update to help others that may find this topic relevant: problem is solved. Problem was the step-up (probably low quality). Sensor is working well and with very low battery usage (no change in 5 days). Thanks for the comments. - NeverDie Hero Member last edited by NeverDie The product listing for the step-up that the OP is using says: Sucks the juice out of your batteries. Sounds like it delivers on what it promises....
https://forum.mysensors.org/topic/11331/solved-high-battery-usage-pro-mini-rfm69-si7021
CC-MAIN-2022-33
refinedweb
1,401
55.84
For all programmers who are skilled in C-style languages, and beginners who fish for new experience with these! A short article showing how to handle events (click on a button, mouse move, ...) in Java. Posted by moci on Nov 2nd, 2010 Basic Client Side Coding. Working with events in Java is, to me, different from other languages I’ve seen. If you’ve ever worked with Visual Basic or Visual C#, and specially for me ActionScript 3 then you will have written event handlers like so: function someHandler(event:EventType) {} That’s it! In Java however you’re working with classes. As Java tries to be a full OO language you more or less always need to use a class for everything – which is a good thing most of the time! But in this case I had to take 5 extra minutes to get my handlers working! Here’s how. You can ‘listen’ to these events in 3 different ways Classes, Inner Classes, Anonymous Classes. A separate class With this method you will create a separate class for each kind of event. The drawback of this method is the fact that you will lose all connection to the variables used in the main application. A solution for this problem is just to add your main application class as a parameter. Almost every parameter for a function is a reference (this means: if you change something to that parameter it will change everywhere – look up ‘by value’ and ‘by reference’ if you want to learn more about this exciting topic). Here is an example of a small application class and a listener class. //Main application class public class ClassEvents extends JFrame { public static void main(String[] args) { new ClassEvents(); } public ClassEvents() { JButton button = new JButton(); button.setText("Click me"); button.addActionListener(new ExternalEventListener()); this.add(button); this.setDefaultCloseOperation(EXIT_ON_CLOSE); this.setPreferredSize(new Dimension(320, 240)); this.pack(); this.setVisible(true); } } //External listener class public class ExternalEventListener implements ActionListener { @Override public void actionPerformed(ActionEvent arg0) { System.out.println("External class sais YO!"); } } Via the ‘ActionEvent’ parameter you can check for the ‘target’ of this event. In this case a button. An inner clas This is the way I prefer to deal with events, inner classes are classes that you write inside of another class! This way you will remain inside the ‘parent’ class and be able to use all of the functions and parameters this class has to offer (think of this as the ‘function’ solution most other languages offer – actually the next method probably resembles the ‘function’ solution more…). Here’s how the inner class method works. //Main application class public class InnerClassEvent extends JFrame { private String message = "Inner class sais YO!"; public static void main(String[] args) { new InnerClassEvent(); } public InnerClassEvent() { JButton button = new JButton(); button.setText("Click me"); button.addActionListener(new InnerEventListener()); this.add(button); this.setDefaultCloseOperation(EXIT_ON_CLOSE); this.setPreferredSize(new Dimension(320, 240)); this.pack(); this.setVisible(true); } //Inner listener class class InnerEventListener implements ActionListener { @Override public void actionPerformed(ActionEvent arg0) { System.out.println(message); } } } Great isn’t it! Just add an inner class which can be treated like a real object, but it can still get all the variables and functions you created in the parent class. Another area where inner classes are a good solution is threads, but that’s for another article. An anonymous inner class You could argue that the inner class method and this method are practically the same – and you would be right. For me this method makes things a bit too chaotic when you’ve got listeners all over the place. With this method you are essentially creating an inline class (so creating a class inside a function). The problem here is that you might lose track of where this class was created when you’re scrolling through pages of code. Here’s an example of this method. //Main application class public class AnonymousEventListener extends JFrame { private String message = "Anonymous class sais YO!"; public static void main(String[] args) { new AnonymousEventListener(); } public AnonymousEventListener() { JButton button = new JButton(); button.setText("Click me"); button.addActionListener(new ActionListener() { //Anonymous listener class @Override public void actionPerformed(ActionEvent e) { System.out.println(message); } }); this.add(button); this.setDefaultCloseOperation(EXIT_ON_CLOSE); this.setPreferredSize(new Dimension(320, 240)); this.pack(); this.setVisible(true); } } You create a new instance of the listener and its function right inside another function. I would probably forget I did that the next time I opened up this project. (In a bigger project of course). Conclusion Java has a “different” approach to events, I haven’t used it enough to say that it’s better… for now all I can say is that it’s another way of doing the same thing. And because you’re always inside an actual object you’ve got benefits of for example variables inside the listener or functions that only the listener needs (and would otherwise just add to the main application code, without any real purpose). It’s up to you and your project to know what kind of method you’ll be using, but now you know the options. And because I’m not an expert and I only recently used these methods I’m probably skipping a few other ways of doing this. Have a good coding day.
http://www.moddb.com/groups/curly-bracket-programming-realm/tutorials/listening-to-events-in-java
CC-MAIN-2015-35
refinedweb
883
55.84
Bubble. After comparing if the first element is greater than the second, the two elements are swapped then. => Visit Here For The Complete C++ Course From Experts. What You Will Learn: Bubble Sort Technique Using the bubble sort technique, sorting is done in passes or iteration. Thus at the end of each iteration, the heaviest element is placed at its proper place in the list. In other words, the largest element in the list bubbles up. We have given a general algorithm of bubble sort technique below. General Algorithm Step 1: For i = 0 to N-1 repeat Step 2 Step 2: For J = i + 1 to N – I repeat Step 3: if A[J] > A[i] Swap A[J] and A[i] [End of Inner for loop] [End if Outer for loop] Step 4: Exit Here is a pseudo-code for bubble sort algorithm, where we traverse the list using two iterative loops. In the first loop, we start from the 0th element and in the next loop, we start from an adjacent element. In the inner loop body, we compare each of the adjacent elements and swap them if they are not in order. At the end of each iteration of the outer loop, the heaviest element bubbles up at the end. Pseudocode Procedure bubble_sort (array , N) array – list of items to be sorted N – size of array begin swapped = false repeat for I = 1 to N-1 if array[i-1] > array[i] then swap array[i-1] and array[i] swapped = true end if end for until not swapped end procedure The above given is the pseudo-code for bubble sort technique. Let us now illustrate this technique by using a detailed illustration. Illustration We take an array of size 5 and illustrate the bubble sort algorithm. Array entirely sorted. The above illustration can be summarized in a tabular form as shown below: As shown in the illustration, with every pass, the largest element bubbles up to the last thereby sorting the list with every pass. As mentioned in the introduction, each element is compared to its adjacent element and swapped with one another if they are not in order. Thus as shown in the illustration above, at the end of the first pass, if the array is to be sorted in ascending order, the largest element is placed at the end of the list. For the second pass, the second largest element is placed at the second last position in the list and so on. When we reach N-1 (where N is a total number of elements in the list) passes, we will have the entire list sorted. Bubble sort technique can be implemented in any programming language. We have implemented the bubble sort algorithm using C++ and Java language below. C++ Example Let us see a programming Example to demonstrate the bubble sort. #include<iostream> using namespace std; int main () { int i, j,temp,pass=0; int a[10] = {10,2,0,14,43,25,18,1,5,45}; cout <<"Input list ...\n"; for(i = 0; i<10; i++) { cout <<a[i]<<"\t"; } cout<<endl; for(i = 0; i<10; i++) { for(j = i+1; j<10; j++) { if(a[j] < a[i]) { temp = a[i]; a[i] = a[j]; a[j] = temp; } } pass++; } cout <<"Sorted Element List ...\n"; for(i = 0; i<10; i++) { cout <<a[i]<<"\t"; } cout<<"\nNumber of passes taken to sort the list:"<<pass<<endl; return 0; } Output: Input list … 10 2 0 14 43 25 18 1 5 45 Sorted Element List … 0 1 2 5 10 14 18 25 43 45 Number of passes taken to sort the list:10 Java Example class Main { public static void main(String[] args) { int pass = 0; int[] a = {10,-2,0,14,43,25,18,1,5,45}; System.out.println("Input List..."); for(int i=0;i<10;i++) { System.out.print(a[i] + " "); } for(int i=0;i<10;i++) { for (int j=0;j<10;j++) { if(a[i]<a[j]) { int temp = a[i]; a[i]=a[j]; a[j] = temp; } } pass++; } System.out.println("\nSorted List ..."); for(int i=0;i<10;i++) { System.out.print(a[i] + " "); } System.out.println("\nNumber of passes taken to complete sort:" + pass); } } Output: In both the programs, we have used an array of 10 elements and we sort it using the bubble sort technique. In both programs, we have used two for loops to iterate through the adjacent elements of the array. At the end of each pass (outer loop), the largest element in the array is bubbled up to the end of the array. We also count the number of passes that are required to sort the entire array. Complexity Analysis Of The Bubble Sort Algorithm From the pseudo code and the illustration that we have seen above, in bubble sort, we make N-1 comparisons in the first pass, N-2 comparisons in the second pass and so on. Hence the total number of comparisons in bubble sort is: Sum = (N-1) + (N-2) + (N-3)+ … + 3 + 2 + 1 = N(N-1)/2 = O(n2) => Time complexity of bubble sort technique Thus the various complexities for bubble sort technique are given below: The bubble sort technique requires only a single additional memory space for the temp variable to facilitate swapping. Hence the space complexity for bubble sort algorithm is O (1). Note that the best case time complexity for bubble sort technique will be when the list is already sorted and that will be O (n). Conclusion The main advantage of Bubble Sort is the simplicity of the algorithm. In bubble sort, with every pass, the largest element bubbles up to the end of the list if the array is sorted in ascending order. Similarly for the list to be sorted in descending order, the smallest element will be in its proper place at the end of every pass. Being the simplest and easy to implement sorting technique, bubble sort is usually taken for introducing sorting to the audience. Secondly, bubble sort is also used in application like computer graphics wherein filling of polygon edges, etc. require bubble sort to sort the vertices lining the polygon. In our upcoming tutorial, we will learn about Selection Sort in detail. => Visit Here To Learn C++ From Scratch.
https://www.softwaretestinghelp.com/bubble-sort/
CC-MAIN-2021-17
refinedweb
1,060
58.21
This is the third article out of four where I continue to discuss delegates and introduce events.Because the articles are logically and contextually connected, I would recommend reading them accordingly. In my first article, I discussed delegates and the way C# compiler treats them. We already know that a delegate is an object of a specific type. It is created by the compiler, it has references to one or many functions; when it calls the Invoke method, it actually calls the referenced function(s). This is the link to Delegates in C# - Attempt to Look Inside - Part 1. Invoke The second article explains how a delegate can use multithreading. The delegate can use the Invoke method to call the referenced function(s) synchronously or BeginInvoke/EndInvoke methods to make an asynchronous call.This is the link to Delegates in C# - Attempt to Look Inside - Part 2. BeginInvoke EndInvoke After I published the first two articles, I received some positive feedback. Many programmers liked the approach of explaining programming terms not only using definitions and code examples but also making up real life situations and to try to translate them into programming. When you create strong associations between real life and programming, it helps you program better. Your program comes to life; you can see where and how to use programming techniques that you've never used before. Let’s take delegates as an example. We use delegates in everyday life without even knowing it. Every now and then we order food by phone, buy things online, take online courses; talk with friends via email, etc... Look: Never thought about it? But it's true. A delegate is not only a person representing somebody at a conference or a meeting but also anything that can represent anything. The notion of phones, websites, etc... basically is a delegates declaration in programming world. You globally declare the phone delegate, website delegate, or email delegate. They can be accessed by everyone but they are not the real thing. To make them real, you have to instantiate them. In the real world, you cannot just call using a phone, you call using a specific phone number.So when you get a number for your call, you create an instance of the phone delegate. This number belongs to something (a restaurant for example). This is a method or function in the programming world. So when you actually call the number, you invoke the delegate. And we can pass the delegates. You can pass a phone number of a restaurant to a friend. Now your friend can use this delegate. But there is more in this.Using a phone, you can order not only food but also talk to people, order tickets, make an appointment, etc… So, one delegate – a phone – can represent many different things. It depends on the phone number and the message that you pass via the phone. We don’t need to have four different phones in our apartment to call for four different things. One phone is sufficient. Do you get the idea? The same delegate can be used for many different functions/methods. Just the signature should be observed. And how does this apply to programming? Imagine that you have a procedure that logs error messages into a file. It returns void and accepts a string parameter: error message. void string Another procedure logs error messages into a database. It returns void and accepts a string parameter: error message. Another procedure emails error messages to an admin. It returns void and accepts a string parameter: error message. You can declare a public delegate representing any of these procedures. It must have the procedures’ signature: it returns void and accepts a string parameter. public In the program, you create an instance of this delegate pointing to a procedure that is more appropriate. When you use the delegate’s instance, it invokes the proper procedure and logs an error message the way you want it. Using this approach, I want to take a close look at events. There is no doubt that a C# programmer uses events a lot. I cannot imagine any production program without events. That means that everyone knows how to use events and this creates a challenge for me to bring something knew into the topic. Nevertheless, I will try to find another angle in my explanation. What is an event? According to the Google dictionary, an event is a thing that happens, esp. one of importance. If we want to single out something really important to us, we would make an event for it. In order to bring the association closer to programming, let us assume that we don’t know when the event occurs. Example: A family is waiting for a baby. The exact delivery date is naturally unknown. They want family and friends to visit the future mother in the hospital. So they announce the event. They publish a message on Facebook asking potential participants to let them know if they are coming. If I am a friend and want to come, I would call them and give my phone number asking them to notify me when it happens. When the event occurs, they call and notify me and I will come and visit the family in the hospital. Do you want to plunge deeper into this? Follow me then. When the family announced the event, they actually brought a delegate in place. It is not instantiated yet. If it was instantiated, it would be used to notify all participants. The first caller creates an instance of this delegate. A special function must be created and the reference to this function is placed into the delegate. When an event occurs, they run the event’s Invoke method (if the event is not null: they have participants). Every participant will be notified by running the function created for the event. null In programming terms, the family published an event and participants subscribed for the event. An event is a delegate. Everything that we know about delegates, you can apply to an event. By publishing an event, you select a proper delegate type from previously declared delegates or from pre-constructed by the environment delegates. At the same time, an event modifies this delegate so we can say that the event keyword is a delegate modifier. Let us consider some differences. void() (object sender, EventArgs e) MyFamilyEventHandler is a delegate that we prepared to be a source for a future event: MyFamilyEventHandler // declare delegate for future event public delegate void MyFamilyEventHandler(object sender, EventArgs e); Family class declares an event OnBabyBorn. The method BabyBorn raises the event. It checks if the event is not null, in other words it checks that the event has subscribers. Family OnBabyBorn BabyBorn The event has a type of MyFamilyEventHandler.The base class it inherits from is MulticastDelegate (See Picture 1). This proves that an event is just a delegate by nature. MyFamilyEventHandler MulticastDelegate Here goes the Family class: /// <summary> /// Family that waits for a baby /// </summary> public class Family { ///<summary> /// declare event of MyFamilyEventHandler /// type /// </summary> public event MyFamilyEventHandler OnBabyBorn; public string Name { get; set; } ///<summary> /// constructor /// </summary> ///<param name="name"> public Family(string name) { Name = name; } ///<summary> /// raise OnBabyBorn event; /// </summary> public void BabyBorn() { if (null != OnBabyBorn) OnBabyBorn(Name, new EventArgs()); } } Now I created the Friend class.Friend class has a public property Family. Friend Family When this property is being assigned (within Set procedure), the friend subscribes for family event OnBabyBorn. Subscription is done by using the following code: friend family family.OnBabyBorn += new MyFamilyEventHandler(family_OnBabyBorn); As you already know, when a compiler meets this code, it creates an instance of the event (which means a delegate) and assigns the reference of the function family_OnBabyBorn to the delegate. From now on, when the event is raised (which means that it's the base delegate invoke), the function is called and executed. family_OnBabyBorn invoke Remember, the delegate resides at family class. So the friend provides the family with his method information through the delegate that will be invoked when event occurs. ///<summary> /// Family friend /// </summary> public class Friend { private Family family; public Family Family { get { return family; } set { family = value; //subscribe for the family OnBabyBorn event: family.OnBabyBorn += new MyFamilyEventHandler (family_OnBabyBorn); } } public string Name { get; set; } ///<summary> /// constructor /// </summary> ///<param name="name"> public Friend(string name) { Name = name; } ///<summary> /// run when OnBabyBorn event in /// family occurs /// </summary> ///<param name="sender"> ///<param name="e"> void family_OnBabyBorn(object sender, EventArgs e) { Console.WriteLine("{0}, go visit {1} family", Name, sender.ToString() ); } } Finally the Program class. Program It creates Family object and two Friend objects. It assigns the family to the friends. It runs family BabyBorn procedure, which triggers the OnBabyBorn event. friend family BabyBorn OnBabyBorn Both friends are notified that the event occurred and the friend’s pre-assigned function is called. class Program { static void Main(string[] args) { //create new family: Family family = new Family("Adams"); //create a friend: Friend Ed = new Friend("Ed"); //assign the family to the friend: Ed.Family = family; //create another friend: Friend Alex = new Friend("Alex"); //assign the family to the friend: Alex.Family = family; //baby is born family.BabyBorn(); Console.Read(); } } Here is the output: By associating programming subjects (delegates and events) with real world subjects, I tried to show you that in your code, you should use similar logic. Delegates and events in programming have the same meaning as in the real world. I intentionally created a very simple example of how to use events. The idea is just to show common sense behind events in programming. I hope this will help to better understand delegates and events and use them in your code.The next article I am working on is to introduce you to the modern world of delegates..
http://www.codeproject.com/Articles/115710/Delegates-in-C-Attempt-to-Look-Inside-Part-3
CC-MAIN-2015-32
refinedweb
1,637
57.47
In this tutorial, we are going to tackle the basics of using NgRx for state management in an Ionic/Angular application. If you are unfamiliar with the general concept of State Management/Redux/NgRx I would recommend watching this video: What is Redux? as a bit of a primer. The video is centered around Redux, but since NgRx is inspired by Redux the concepts are mostly the same. NgRx is basically an Angular version of Redux that makes use of RxJS and our good friends observables. To quickly recap the main points in the Redux video, we might want to use a state management solution like NgRx/Redux because: - It provides a single source of truth for the state of your application - It behaves predictably since we only create new state through explicitly defined actions - Given the structured and centralised nature of managing state, it also creates an environment that is easier to debug and the general concept behind these state management solutions is to: - Have a single source of “state” that is read-only - Create actions that describe some intent to change that state (e.g. adding a new note, or deleting a note) - Create reducers that take the current state and an action, and create a new state based on that (e.g. combining the current state with an action to add a new note, would return a new state that includes the new note) To demonstrate using some pseudocode (I am just trying to highlight the basic concept here, not how you would actually do this with NgRx), we might have some “state” that looks like this: { notes: [ {title: 'hello'}, {title: 'there'} ], order: 'alphabetical', nightMode: false } The data above would be our state/store that contains all of the “state” for our application. The user might want to toggle nightMode at some point, or add new notes, or change the sort order, so we would create actions to do that: ToggleNightMode CreateNote DeleteNote ChangeOrder An action by itself just describes intent. To actually do something with those actions we might use a reducer that looks something like this: function reducer(state, action){ switch(action){ case ToggleNightMode: { return { // New state with night mode toggled } } default: { return // State with no changes } } } NOTE: This is just pseudocode, do not attempt to use this in your application. This reducer function takes in the current state, and the action, and in the case of the ToggleNightMode action being supplied it will return a new state with the nightMode property toggled. Before We Get Started We will be taking a look at converting a typical Ionic/Angular application to use NgRx for state management. To do that, we are going to take the application built in this tutorial: Building a Notepad Application from Scratch with Ionic and add NgRx to it. You don’t need to complete that tutorial first, but if you want to follow along step-by-step with this tutorial you should have a copy of it on your computer. We are going to keep this as bare bones as possible and just get a basic implementation up and running - mostly we will be focusing on the ability to create notes and delete notes. My main goal with this tutorial is to help give an understanding of the basic ideas behind NgRx, and what it looks like. There is so much you can achieve with NgRx, and you can create some very advanced/powerful implementations, but that does come with an associated level of complexity. Looking at implementations of NgRx can be quite intimidating, so I’ve tried to keep this example as basic as possible (whilst still adhering to a good general structure). In the future, we will cover more complex implementations - if there is something, in particular, you would like to see covered, let me know in the comments. Finally, it is worth keeping in mind that solutions like NgRx and Redux are not always necessary. NgRx and Redux are both very popular (for good reason), but they are not a “standard” that everyone needs to be using in all applications. Simple applications might not necessarily realise much of a benefit through using this approach. 1. Installing NgRx We can easily install NgRx in an existing Angular application through the ng add command: ng add @ngrx/store As well as installing the @ngrx/store package it will also create a reducers folder with an index.ts file that looks like this: import { ActionReducer, ActionReducerMap, createFeatureSelector, createSelector, MetaReducer } from '@ngrx/store'; import { environment } from '../../environments/environment'; export interface State { } export const reducers: ActionReducerMap<State> = { }; export const metaReducers: MetaReducer<State>[] = !environment.production ? [] : []; We will be adding onto this implementation throughout the tutorial, but we’re mostly just going to leave it alone for now. One thing in particular that we haven’t covered yet, but you may notice here, is the concept of a “meta reducer”. A regular reducer is responsible for taking in the state and an action and returning a new state. A meta reducer would take in a reducer as an argument and return a new reducer (kind of like how we can pipe operators onto an observable and return an altered observable if you are familiar with that concept). We won’t be using this concept in this tutorial, but you could use a meta reducer to do things like create a logging service for debugging (e.g. create a meta reducer that logs out some value every time the ToggleNightMode action is triggered). The ng add command also adds the following line to your app.module.ts file: StoreModule.forRoot(reducers, { metaReducers }) This takes a global approach to implementing state management, but if you prefer you can also use StoreModule.forFeature in your individual lazy loaded modules to help keep things separate. There are many approaches you could take to structuring NgRx in your application. As I mentioned, I am trying to keep things simple here, so I would recommend taking a look at a few examples to see what style suits you best. We are also going to store our actions in an actions folder, but the command doesn’t create that for us automatically. Let’s do that now. Create an actions folder and note.actions.ts file at src/app/actions/note.actions.ts Let’s start looking into how we can implement our first action. 2. Creating the Actions To create an action we create classes that implement Action from the NgRx library. As we discussed, an action just describes intent. We won’t be adding any code to actually do anything here, we just want to describe what we want to do. Modify src/app/actions/note.actions.ts to reflect the following: import { Action } from "@ngrx/store"; import { Note } from "../interfaces/note"; export enum ActionTypes { CreateNote = "[Notes Service] Create note", DeleteNote = "[Notes Service] Delete note" } export class CreateNote implements Action { readonly type = ActionTypes.CreateNote; constructor(public payload: { note: Note }) {} } export class DeleteNote implements Action { readonly type = ActionTypes.DeleteNote; constructor(public payload: { note: Note }) {} } export type ActionsUnion = CreateNote | DeleteNote; First, we have an ActionTypes enumerated type that lists our various actions related to notes, and a description of what the action will do. Since these actions will be triggered from our existing notes service (you can trigger these actions elsewhere if you like) we make a note of the source of the action in the square brackets. This is purely to be more explicit/descriptive, it doesn’t serve a functional purpose. We then create classes for each of the actions. It implements Action, it defines a type so we can tell what kind of action it is, and we can optionally supply a payload for that action. In the case of creating and deleting notes we will need to send a payload of data to correctly add or delete a note, but some actions (like toggling nightMode) would not require a payload. Finally, we have the ActionsUnion which exports every action created in this file. 3. Creating the Reducers As we now know, actions don’t do anything by themselves. This is where our reducers come in. They will take the current state, and an action, and give us a new state. Let’s implement our first reducer now. Create a file at src/app/reducers/note.reducer.ts and add the following: import * as fromNote from "../actions/note.actions"; import { Note } from "../interfaces/note"; export interface NoteState { data: Note[]; } export const initialState: NoteState = { data: [] };; } } } Things are starting to look a little bit more complex now, so let’s break it down. There is some stuff we are going to need in our reducer from our note actions that we just created, so we import everything from that actions file as fromNote so that we can make use of it here (this saves us from having to import everything that we want to use from that file individually). We define the structure or “shape” of our note state as well as supply it with an initial state: export interface NoteState { data: Note[]; } export const initialState: NoteState = { data: [] }; The only data we are interested in are the notes themselves which will be contained under data, but we could also add additional state related to notes here if we wanted (like sort order, for example). The last bit is the reducer function itself:; } } } As you can see, the arguments for the reducer function are the state and the specific action which needs to be one of all the possible note related actions which we defined in our actions file. All we are doing here is switching between the possible actions, and we handle each action differently. Although we might run different code, the goal is the same: to return the new state that we want as a result of the action. In the case of the CreateNote action, we want to return all of the existing state, but we also want the data to contain our new note (as well as all of the existing notes). We use the spread operator ... to return a new state object containing all of the same properties and data as the existing state, except by specifying an additional data property we can overwrite the data property in the new state. To simplify that statement a bit, this: return { ...state }; Basically means “take all of the properties out of the state object and add them to this object”. In effect, it is the same as just doing this: return state; Except that we are creating a new object. This: return { ...state, data: [...state.data, action.payload.note] }; Basically means “take all of the properties out of the state object and add them to this object, except replace the data property with this new data instead”. We keep everything from the existing state, except we overwrite the data property. We do still want all of the existing notes to be in the array in addition to the new one, so we again use the spread operator (this time just on the data) to unpack all of the existing notes into a new array, and then add our new one. To reiterate, this: [...state.data] would mean “create a new array and add all of the elements contained in the data array to this array” which, in effect, is the same as just using state.data directly (except that we are creating a new array with those same value). This: [...state.data, action.payload.note] means “create a new array and add all of the elements contained in the data array to this array, and then add action.payload.note as another element in the array. To simplify even further, let’s pretend that we are just dealing with numbers here. If state.data was the array [1, 2, 3], and action.payload.note was the number 7, then the code above would create this array: [1, 2, 3, 7] Hopefully that all makes sense. Once again, the role of our reducer is to modify our existing state in whatever way we want (based on the action it receives) and then return that as the new state. Before we can make use of our actions/reducers, we need to set them up in our index.ts file. Modify src/app/reducers/index.ts to reflect the following: import { ActionReducer, ActionReducerMap, createFeatureSelector, createSelector, MetaReducer } from "@ngrx/store"; import { environment } from "../../environments/environment"; import * as fromNote from "./note.reducer"; export interface AppState { notes: fromNote.NoteState; } export const reducers: ActionReducerMap<AppState> = { notes: fromNote.reducer }; export const metaReducers: MetaReducer<AppState>[] = !environment.production ? [] : []; The important part that has changed here is: import * as fromNote from "./note.reducer"; export interface AppState { notes: fromNote.NoteState; } export const reducers: ActionReducerMap<AppState> = { notes: fromNote.reducer }; We set up our overall application state on the AppState interface (this is named State by default). We are just working with a single notes reducer here, but you could also have additional state, for example: export interface AppState { notes: fromNote.NoteState; photos: fromPhoto.PhotoState; } Our store or “single source of truth” creates a nested/tree structure that might look something like this: { notes: [ {title: 'hello'}, {title: 'there'} ], photos: [ {url: ''}, {url: ''} ] } When we are creating our notes actions/reducers we are just working within the notes “sub-tree” but it is still a part of the entire application state tree. We also add the reducer we created for our notes under the notes property in the reducers constant that is exported (and is in turn used in our app.module.ts by StoreModule.forRoot()). 4. Creating Selectors The last thing we are going to do before making use of our new state management solution in our notes service is create some selectors. A selector will allow us to read state from our store. To create a selector, we can use createSelector which is provided by NgRx. We just supply it with the state we want to select from (e.g. the “sub-tree” of our state that we want to access), and a function that returns the specific data that we are interested in. Add the following to the bottom of src/app/reducers/note.reducer.ts: export const getNotes = (state: NoteState) => state.data; export const getNoteById = (state: NoteState, props: { id: string }) => state.data.find(note => note.id === props.id); We are creating two functions here to use with createSelector. The getNotes function, when given the notes from our state tree, will return just the data property (which is the one we are interested in, since it is what actually contains the notes data). The getNoteById function will take in additional props that can be supplied when attempting to select something from the store, which will allow us to provide an id. This function will then look for a specific note in the data that matches that id and return just that note. Add the following to the bottom of src/app/reducers/index.ts: export const getNoteState = (state: AppState) => state.notes; export const getAllNotes = createSelector( getNoteState, fromNote.getNotes ); export const getNoteById = createSelector( getNoteState, fromNote.getNoteById ); With our functions created, we now just need to use them to create our selectors with createSelector. We first create a function called getNoteState to return the notes portion of our state tree. We then supply that, and the functions we just created, as arguments to createSelector in order to create our selector functions. 5. Accessing State and Dispatching Actions Now everything finally comes together, and maybe you can see some of the benefit of doing all of this leg work up front. With our selectors created, we can easily get the data we want from our store wherever we like in our application. We can also easily make use of our CreateNote and DeleteNote actions. To finish things off, we are going to keep the existing structure of the notes application, and just modify the methods in the NotesService. Modify src/app/services.notes.service.ts to reflect the following: import { Injectable } from "@angular/core"; import { Store } from "@ngrx/store"; import { Storage } from "@ionic/storage"; import { Observable } from "rxjs"; import { Note } from "../interfaces/note"; import * as NoteActions from "../actions/note.actions"; import { AppState, getAllNotes, getNoteById } from "../reducers"; @Injectable({ providedIn: "root" }) export class NotesService { public notes: Observable<Note[]>; constructor(private storage: Storage, private store: Store<AppState>) { this.notes = this.store.select(getAllNotes); } getNote(id: string): Observable<Note> { return this.store.select(getNoteById, { id: id }); } createNote(title): void { let id = Math.random() .toString(36) .substring(7); let note = { id: id.toString(), title: title, content: "" }; this.store.dispatch(new NoteActions.CreateNote({ note: note })); } deleteNote(note): void { this.store.dispatch(new NoteActions.DeleteNote({ note: note })); } } To get access to our notes data, we just call this.store.select(getAllNotes) using the selector we created and it will return an observable. This observable will update any time the data in the store changes. To get a specific note, we use our getNoteById selector, but since that selector also uses additional props (an id in this case) we pass that data along too: return this.store.select(getNoteById, { id: id }); This will allow us to grab a specific note. Then we just have our createNote and deleteNote methods which are able to create or delete notes just be triggering the appropriate action and passing the note along with it: this.store.dispatch(new NoteActions.CreateNote({ note: note })); this.store.dispatch(new NoteActions.DeleteNote({ note: note })); Since we now have our notes class member set up as an observable now, if we want to be able to display the data it contains in our template we will need to add the async pipe. Modify the <ion-item>in src/app/home/home.page.html to use the asyncpipe: <ion-item button detail * The ngOnInit in the detail page will also need to be updated to make use of the observable returned by the getNote method now (if you have been following along with the notes application tutorial): Modify ngOnInitin src/app/detail/detail.page.ts to reflect the following: ngOnInit() { let noteId = this.route.snapshot.paramMap.get("id"); this.notesService.getNote(noteId).subscribe(note => { this.note = note; }); } Summary Using NgRx looks a lot more complicated than just simply managing state yourself (and it is), but like a lot of things worth doing it’s a bit more upfront work for a longer-term payoff. The state in this example application could rather easily be managed without using an advanced state management solution like NgRx. However, as applications become more complex the upfront work in setting up NgRx can be a great investment.
https://www.joshmorony.com/using-ngrx-for-state-management-in-an-ionic-angular-application/
CC-MAIN-2021-04
refinedweb
3,100
51.48
The BufferedOutputStream class of the java.io package is used with other output streams to write the data (in bytes) more efficiently. It extends the OutputStream abstract class.. Hence, the number of communication to the disk is reduced. This is why writing bytes is faster using BufferedOutputStream. Create a BufferedOutputStream In order to create a BufferedOutputStream, we must import the java.io.BufferedOutputStream package first. Once we import the package here is how we can create the output stream. // Creates a FileOutputStream FileOutputStream file = new FileOutputStream(String path); // Creates a BufferedOutputStream BufferedOutputStream buffer = new BufferOutputStream(file); In the above example, we have created a BufferdOutputStream named buffer with the FileOutputStream named file. Here, the internal buffer has the default size of 8192 bytes. However, we can specify the size of the internal buffer as well. // Creates a BufferedOutputStream with specified size internal buffer BufferedOutputStream buffer = new BufferOutputStream(file, int size); The buffer will help to write bytes to files more quickly. Methods of BufferedOutputStream The BufferedOutputStream class provides implementations for different methods in the OutputStream class. write() Method write()- writes a single byte to the internal buffer of the output stream write(byte[] array)- writes the bytes from the specified array to the output stream write(byte[] arr, int start, int length)- writes the number of bytes equal to length to the output stream from an array starting from the position start Example: BufferedOutputStream to write data to a File import java.io.FileOutputStream; import java.io.BufferedOutputStream; public class Main { public static void main(String[] args) { String data = "This is a line of text inside the file"; try { // Creates a FileOutputStream FileOutputStream file = new FileOutputStream("output.txt"); // Creates a BufferedOutputStream BufferedOutputStream output = new BufferedOutputStream(file); byte[] array = data.getBytes(); // Writes data to the output stream output.write(array); output.close(); } catch (Exception e) { e.getStackTrace(); } } } In the above example, we have created a buffered output stream named output along with FileOutputStream. The output stream is linked with the file output.txt. FileOutputStream file = new FileOutputStream("output.txt"); BufferedOutputStream output = new BufferedOutputStream(file); internal buffer, we can use the flush() method. This method forces the output stream to write all data present in the buffer to the destination file. For example, import java.io.FileOutputStream; import java.io.BufferedOutputStream; public class Main { public static void main(String[] args) { String data = "This is a demo of the flush method"; try { // Creates a FileOutputStream FileOutputStream file = new FileOutputStream(" flush.txt"); // Creates a BufferedOutputStream BufferedOutputStream buffer = new BufferedOutputStream(file); // Writes data to the output stream buffer.write(data.getBytes()); // Flushes data to the destination buffer.flush(); System.out.println("Data is flushed to the file."); buffer.close(); } catch(Exception e) { e.getStackTrace(); } } } Output Data is flushed to the file. When we run the program, the file flush.txt is filled with the text represented by the string data. close() Method To close the buffered output stream, we can use the close() method. Once the method is called, we cannot use the output stream to write the data. To learn more, visit Java BufferedOutputStream (official Java documentation).
https://www.programiz.com/java-programming/bufferedoutputstream
CC-MAIN-2021-04
refinedweb
513
50.12