Desktop dreams: Ubuntu 11.10 reviewed
Ubuntu 11.10, codenamed Oneiric Ocelot, was released earlier this month. In …
- Oct 24, 2011 1:00 am UTC
Photo illustration by Aurich Lawson
Ubuntu 11.10, codenamed Oneiric Ocelot, prowled out of the development forest earlier this month. In our review of Ubuntu 11.04, released back in April, we took a close look at the strengths and weaknesses of the new Unity shell and compared it with GNOME 3.0. In this review, we're going to revisit Unity to see how much progress it has made over the past six months. We will also take a close look at the updated Software Center user interface and the transition from Evolution to Thunderbird.
The Unity desktop shell, which provides the heart of Ubuntu's user interface, is one of the distribution's key differentiating features. It was originally unveiled at an Ubuntu Developer Summit in 2010 and became a standard part of the desktop installation in version 11.04 earlier this year.
Although the Unity shell brought a number of significant aesthetic and usability improvements to the Ubuntu desktop, it suffered from some real drawbacks. The quality of the Unity environment that shipped in Ubuntu 11.04 was eroded by technical rough edges, questionable design decisions, and a handful of reliability issues. It was ready to ship, but not mature enough to shine.
Ubuntu's developers have done a great deal of work over the past six months to fill in the gaps and make the Unity experience better for end users. This effort has helped to flesh out previously incomplete parts of the user interface and has produced a noticeable improvement in Unity's robustness during day-to-day use. It's now more stable and its behavior is generally more predictable.
A number of design changes have also contributed to better usability in 11.10, but some of our major grievances with Unity still haven't been fully addressed. Unity remains a work in progress, and it looks like it needs one more cycle of refinement to reach full maturity.
A particularly large number of changes were made to Unity's Dash interface in Ubuntu 11.10. The Dash is an overlay that exposes all of the application launchers and provides quick access to files. In the previous version of Ubuntu, the main Dash views were accessible through separate "lenses" that had specific features. In 11.10, the standard lenses have all been consolidated into a more streamlined Dash.
The Ubuntu button used to activate the Dash was previously located on the left edge of the top panel, but in the new version it has been moved into the dock itself. It now appears as the top item in the dock. This seems like a good change because it increases consistency, making the function of the button more obvious.
Unity's Dash user interface
The user can optionally maximize the Dash in 11.10, making it fill the whole screen so that additional content is visible. This feature can be toggled by hitting the maximize button that appears in the top left-hand corner while the Dash is visible. I like to have the Dash maximized on my netbook, but favor the regular size on a desktop computer.
A view of the Dash interface when it is maximized to fill the screen
The main Dash view gives you quick access to the Web browser, mail client, music player, and photo management application. At the bottom of the Dash is a new context switcher that allows the user to easily change between the various lens views: application launchers, documents, music, and home. The application and document lenses work mostly like their equivalents in the previous version.
The music lens is a new feature designed to integrate with the Banshee music player and with Canonical's music store. The music lens will show you the songs and albums that you have in your media library. When you click one, it will launch Banshee and start playing, and the music is displayed with album cover art. You can use the built-in search feature to quickly find specific songs and albums. In addition to showing music in your local library, it will also show an additional section of matching songs that are available for purchase from the Ubuntu music store.
Searching for a song with the Dash's music lens
The file and application lenses got some improvements in 11.10. The file lens has a more sophisticated set of filtering options that make it easier to find specific files, and it has already proved useful by saving me a few trips to the file manager.
The new and improved file lens with advanced filtering features
In our review of Ubuntu 11.04, we singled out the abominable app lens for special criticism, accusing it of being one of the worst atrocities perpetrated in the history of desktop interface design since Microsoft Bob. The replacement of the wretched pseudo combobox with a nice and clean category filter interface has made the whole thing much less execrable in 11.10. It's actually starting to feel respectable.
When you select a category, it shows three sections: frequently used applications in the category, applications in the category, and applications that you can install in that category. The list of installable applications is still a mostly-useless and seemingly random assortment of things that I don't care about. There is also sometimes redundancy between the list of frequently used applications and the regular list.
The new app lens with advanced filtering. Notice the redundant entry for File Roller
The application lens will now thankfully remember when you expand the full list of installed applications, so you don't have to do so every time you want to launch something. Between that and the proper one-click category filtering, the application lens is a great deal more practical in 11.10. It's finally workable enough for day-to-day use.
The Dash's home lens allows a global search of Dash content, including files, applications, and music. It subjectively feels faster than the equivalent feature from the previous version, and Dash performance in general seems a bit snappier.
Global menubar
My criticisms of the global menubar implementation still stand in 11.10. The issues with inconsistent titling, title truncation, dialogs that make a parent window's menus inaccessible, and the inherent lack of discoverability haven't been addressed. Some applications with non-standard widgets, such as LibreOffice, still don't support global menubar integration.
I wrote in my 11.04 review that I was on the fence about the global menubar in Ubuntu but thought that it seemed promising. Seeing that none of the issues have materially been addressed in 11.10 was a bit disappointing. I'm hopeful that the developers will refine it further before Ubuntu 12.04 or consider rethinking it in favor of an approach that fits better with the per-window menu paradigm of the standard Linux widget toolkits.
Blizzard FAQ
Tours FAQ
What is Blizzard Entertainment?
Best known for blockbuster hits including World of Warcraft® and the Warcraft®, StarCraft®, and Diablo® series, Blizzard Entertainment, Inc. (www.blizzard.com), a division of Activision Blizzard (NASDAQ: ATVI), is a premier developer and publisher of entertainment software renowned for creating some of the industry's most critically acclaimed games. Blizzard Entertainment’s track record includes fourteen #1-selling games and multiple Game of the Year awards. The company's online-gaming service, Battle.net®, is one of the largest in the world, with millions of active players.
Where is Blizzard Entertainment located?
Our headquarters, where all game development takes place, are located in Irvine, California. In addition, we have offices in several other locations around the world to support players of our games in those regions. Check out some of the jobs located around the world.
How can I get a job or internship at Blizzard Entertainment?
We’re currently hiring qualified applicants for a number of positions. Please check our jobs page or our university relations page for full details.
Where can I buy your games?
Boxed copies of our games can be found at many gaming and electronics retailers. Alternatively, you can purchase digital copies from the Battle.net Shop and a variety of other related products from the Blizzard Gear store.
Can I come visit your office?
At this time, we're only able to offer tours for Blizzard HQ in Irvine, CA. Tours typically run once a month with a limited number of spots. For more information, please check out our Tours FAQ and contact us at [email protected].
What is BlizzCon®?
BlizzCon is a gaming festival celebrating the communities that have sprung up around our games. It offers hands-on playtime with upcoming titles, developer panels, tournaments, contests, and more. To learn more about BlizzCon, visit our BlizzCon website.
How is the Warcraft movie progressing?
We continue to work closely with Legendary Pictures on the Warcraft movie. Duncan Jones (Moon, Source Code) has signed on to direct the film.
How does Blizzard Entertainment feel about total conversions or mods of its games?
We've seen some very polished and fun mods and conversions for our games, and have no problems with them, so long as they are for personal, non-commercial use and do not infringe on the End User License Agreement included in our games, nor the rights of any other parties including copyrights, trademarks or other rights. If you have any other legal questions regarding Blizzard Entertainment or our products, please see our Legal FAQ.
Strategy you can't buy
Dan_Stapleton
The hottest trend in PC gaming these days is high-quality free-to-play games that make their revenue from advertising or microtransactions (players buying low-priced in-game items or currency), instead of a retail price tag. Right now, you can download and play free MMORPGs like Sword of the New World and Requiem: Bloodymare, and EA is gearing up for a bold experiment in free-to-play shooters with Battlefield Heroes. (You can also play the free-but-mediocre War Rock…though I don’t recommend it.) But so far, no one has made a true free-to-play AAA-quality strategy game. Where’s our free stuff?
We should be careful what we wish for. Could this crazy get-rich-eventually scheme even work in the strategy genre? That’s debatable; one of my favorite topics to rant on during the PC Gamer Podcast is how allowing players to use real-world cash to purchase an in-game advantage over other players is a terrible idea that will end in tears. The joy of online strategy gaming, and online gaming in general, is defeating an opponent who was just as likely to defeat you—so how much fun is a game if, no matter how good you are, you may get owned by some kid who blew his allowance on WMDs? If you don’t stand a chance in a “free” game without shelling out, then the game ain’t really free.
Spring is a free fan-made RTS. It’s a little tricky to get into a multiplayer game, though
RTS games in particular are poorly suited for an uneven playing field. Just look at the lopsided matches in the global conquest modes of games like Warhammer 40K: Dawn of War: Soulstorm or C&C3: Kane’s Wrath, which allow persistent armies to carry over between battles in order to see how those matches would play out. The player with the more powerful starting force immediately rushes, destroying or crippling the underdog within the first two minutes of play. Game over. Those modes are single-player-only for exactly this reason—it’s no fun to be on the receiving end of that. I think I’d avoid that kind of game, especially if it employed the same “buy more stuff or lose!” model you see in card games like Magic: The Gathering and tabletop games like Warhammer. I would, however, be all over a well-made free RTS that simply showed ads during loading screens and end-of-game score reports. Hell, I’d even accept an on-screen ad bug in the interface if it meant the playing field could remain level.
Persistent armies are a cool concept, but hopelessly unbalanced
Someone is definitely going to give free a chance, though, so the only question is who will do it first. Petroglyph (Star Wars: Empire at War and Universe at War: Earth Assault) announced in April that it has already begun development on a free-to-play microtransaction-based RTS, but any number of these projects could be in the works behind closed doors at various developers. By this time next year, I'd bet on at least one more surfacing.
In the meantime, a few free RTS games are available. Spring, the fan-made re-creation of Total Annihilation comes to mind. You can also play a free version of Saga, an indie MMORTS, though some features are gimped until you fork over $20.
July 16, 2008
Video Games vs Computer Games
By Geoffrey Morrison
Posted: Jul 22, 2011
Call me purist. Call me curmudgeon. For most, the terms "video game" and "computer game" are interchangeable. I disagree, and my complaint is more than just semantics.
You see, the difference is a simple one: video games are dumb.
That in itself isn't a bad thing. The problem is, they're making computer games dumber as well.
Let me explain. Video games, tracing their lineage back to Atari and Pong, are designed with the living room and TV in mind. Their "10-foot-interface" is large enough to be seen from the couch, and control requires only as many buttons as available on the standard controller (under a dozen, unless you were one of the three owners of an Atari Jaguar).
Then there's the audience. This has changed over the years, so it's not fair to say "computer" gamers are older than "video" gamers (though they typically are, by a couple of years). Modern video games, like those found on the PlayStation 3 and Xbox 360, are aimed at reaching the widest possible audience. As such, they lack the complexity and difficulty of many computer games. They have to. And in theory, this isn't a bad thing.
In addition to the limitations of the interface, one has to realize what vast sums of money go into publishing a game for the PS3 or 360. We're talking millions of dollars just to step up to the plate. Even though the games eventually retail for $50-$60, the royalties paid to Sony and Microsoft take a significant cut from every title sold. So making the development money back isn't a sure thing. That risk is reflected in the core of the games themselves: widest possible audience, highest likelihood of continued play.
The video game industry got hit by the global financial meltdown just like everyone else, so now the vast majority of games are released by only a handful of publishers. These big companies, like all big companies, are frantically afraid of risk. So the edict is mainstream or no stream. Rarely do you see "low budget" video games. Rarely do you see a game that tries something new. There can't be. The cost of entry, and the ease of failure, is too high.
This is also why nearly every game on the market now or coming out in the near future is a sequel or based on a known property. It's safe (if you see a parallel to the sorry state of the movie industry, it's for the same reasons).
The Wii takes this to another extreme, but as it was designed from the beginning for "casual" gamers, it belongs in its own category.
The biggest casualty of the dumbafication of games is the first person shooter (FPS), one of the most popular genres on any platform. Gaming controllers are imprecise tools, so concessions have to be made in terms of difficulty and level design. Games must be made easier to allow FPSes to work on the 360/PS3. Features like auto-aim may not seem like a big detractor, but removing the aiming challenge gives the gaming experience an "on rails" feel. Gamers move the thumbpad a little, shoot, move a little, shoot, with no real skill or activity. Worse, the games are designed with this style of play in mind, so disabling auto-aim makes them needlessly difficult.
Worse, knowing it's hard to move well with a controller, game designers implemented the dumbest invention in gaming ever: unlimited shields. Halo did this to the worst degree. Your character was essentially immortal. Getting shot too much? Duck behind cover and you'll be 100% in seconds! Gone is the strategy and tense gameplay. In its place, cut scenes broken up by glorified shuffleboard.
Digital Media Services > Multimedia Pre-Production
The goal of this document is to help students design and create storyboards that are useful when filming. The techniques shown will include: how to design storyboards, including how to show correct camera angles for the scene, writing your story, and how to use video transitions.
A storyboard is a comic book of a movie that allows you to easily explain the shots in your movie to other people. Storyboards are created from scripts, so write your script before continuing this tutorial.
Find some paper that has a box for an image and several lines beneath the image to write on. An example storyboard template is found here: Blank Storyboard Template
Type of shot (camera angle) – close up, wide shot, etc. Noting this in the storyboard helps establish what settings to use when filming later on.
Screen direction – what’s happening in the shot.
Dialogue – indicate who is saying what this includes any voices coming from off-camera
A traditional board may look like this:
A frame is a single image on the storyboard, like the box on the left.
A shot is continuous footage with no cuts – that is, from the time the camera is turned on to the time it’s turned off.
Shot Types
Listed are some basic camera angles and lengths that you will find useful when designing the visual scenes and giving instructions for filming with the camera. While only a few shots are listed here, more are discussed in a provided link.
Choose the angle framing (how wide the shot is) by asking yourself, “What do I need the audience to see right now?”
Establishing Shot
The first shot of the sequence is usually the establishing shot which sets the stage for the action. This shot orients the viewers and puts a map inside the mind of the audience. The establishing shot is a long distance, or “bird’s eye view,” shot. This shot typically has much more detail than other, following shots in the same location.
Long Shot
Used to stress the environment or setting, a long shot is set from a distance, but not a distance as great as the establishing shot.
Medium Shot
A medium shot is a shot that frames actors from the waist up. This is often used to focus attention on interaction between two actors.
Over-the-shoulder shot
An over-the-shoulder shot of one actor, taken over the shoulder of another. This shot is used when two characters are interacting face-to-face and focuses the audience’s attention to one actor at a time.
Close-up
Taken only inches away from an object or actor’s face, the close-up is designed to focus attention or give significance to an element. Close-ups should capture a story-point – a moment or action that’s very important to the story.
Don’t frame close-ups too tightly. Reserve the extreme close-up for rare occasions of extreme emotion. Further, avoid centering characters. Leave some extra room in the direction that they’re facing. This is sometimes called “look room.”
Storyboarding concepts
Panning is moving the camera to follow an object or action. Depicting this in storyboards can be done one of several ways, depending on the context.
Note: It’s best to pan when you’re following the action or a character; give the eye something to follow.
If the shot is one big scene and you want to pan to the side or up (or down) to show the rest of the scene, you can take a bigger box and draw the entire scene you plan on filming. Then draw boxes indicating where you’d like to pan: one box marks the start of the pan, and arrows drawn to the second box indicate where the pan finishes.
If the shot covers action, you can use separate frames to show the character(s) at different times.
In the first frame of the scene you want to pan, draw an arrow that runs to the last frame of the pan.
Staging & Movement
Always make sure to draw a clear background in the first panel of a scene. This panel orients the viewer, so it’s important to consider which objects will be present. Background details can be incorporated, if needed, in later frames of the storyboard.
When deciding where the actors stand on screen, keep space for necessary actors and movement in future panels. Don’t let your main objects bump against the sides of the panel – keep the white-space.
When an actor is moving, use arrows liberally to communicate the movement. Overall, use as few models as possible to communicate action.
When to cut
Cut IN (closer) when you need to see a specific expression or small action.
Always give a reason for the cut: This is called “Motivating the cut.” Possible reasons can be a voice off-screen voice or action. This action of cutting is used to create a sense of curiosity in the viewer.
At the first frame of each new sequence, put a number next to that frame. That number is the sequence number, and is used for clarity when cutting from one sequence to another. Anytime you cut from one scene to another, indicate it on the storyboard.
Storyboards are important! Storyboards are a smooth transition from a script to filming, and allow you see the film before filming. They can tell you where your script needs to be changed to look good in a film, saving you time, effort, and possibly money. A storyboard should be so complete that a director who sees a storyboard can, without reading the script or talking to anyone involved in the original project, create the film as it was intended to be. Use the techniques discussed in this document to create such a storyboard.
http://www.learner.org/interactives/cinema/directing2.html
http://www.mediaknowall.com/camangles.html
en.wikibooks.org/wiki/Movie_Making_Manual/Storyboarding
http://en.wikipedia.org/wiki/Storyboard
http://www.youtube.com/user/StoryboardSecrets
http://www.dummies.com/how-to/content/storyboarding-your-film.html
Meet the BSD Certification Group
The BSDCG (BSD Certification Group) is comprised of educators, writers and sysadmins who are well versed in, and passionate about BSD systems.
The Advisory Board is comprised of a group of respected voices within the Unix community who provide advice and wisdom on specific issues from time to time. Current Advisory Board members include former members of the CSRG at the University of Berkeley, developers, authors, trainers and speakers.
Officers and Members
Dru Lavigne, President
Dru has been teaching networking and routing certifications since 1998; these include MCSE, CNE, CCNA, CCSE, SCO UnixWare, Linux+, Network+, Security+ and A+. She has also designed courses and developed curricula at the post-secondary level in accordance with Ontario's Ministry of Education standards. She is the author of BSD Hacks, The Best of FreeBSD Basics, The Definitive Guide to PC-BSD and maintains the PC-BSD Users Handbook and the FreeNAS Users Guide. She has been a Director with the FreeBSD Foundation since 2009.
Jim Brown, Vice President and Treasurer
Jim has worked in the computer industry with continuous Unix involvement in development or administration since the early 1980s. His experience includes applications, systems and database programming, in a variety of languages. He started out as a NetBSD aficionado in the mid 1990s, but switched to FreeBSD (around 2.x) because of the larger number of ported/packaged applications. He has used OpenBSD as well since 1999. Currently, he works for Walmart in the Information Systems Division, and is located in Northwest Arkansas, USA.
George Rosamond, Vice President
George has been in technology for over 15 years and holds a SANS GSEC. After being a BSD user for several years, he initiated the New York City *BSD User Group (NYCBUG) in December 2003 which he continues to operate. His firm, Cee Tone Technology, formerly Secure Design & Development Inc., is based in New York City, with a focus on secure
Information Research, Vol. 7 No. 1, October 2001
Intelligence obtained by applying data mining to a database of French theses on the subject of Brazil
Kira Tarapanoff*, Luc Quoniam¶, Rogério Henrique de Araújo Júnior* and Lillian Alvares*
* Instituto Brasileiro de Informação em Ciência e Tecnologia Brazil
¶ Centro Franco-Brasileiro de Documentação Técnico-Científica Brazil
Abstract
The subject of Brazil was analyzed within the context of the French database DocThéses, comprising the years 1969-1999. The data mining technique was used to obtain intelligence and infer knowledge. The objective was to identify indicators concerning: occurrence of theses by subject areas; thesis supervisors identified with certain subject areas; geographical distribution of cities hosting institutions where the theses were defended; frequency by subject area in the period when the theses were defended. The technique of data mining is divided into stages which go from identification of the problem-object, through selection and preparation of data, and conclude with analysis of the latter. The software used to do the cleaning of the DocThéses database was Infotrans, and Dataview was used for the preparation of the data. It should be pointed out that the knowledge extracted is directly proportional to the value and validity of the information contained in the database. The results of the analysis were illustrated using the assumptions of Zipf's Law on bibliometrics, classifying the information as: trivial, interesting and 'noise', according to the distribution of frequency. It is concluded that the data mining technique associated with specialist software is a powerful ally when used with competitive intelligence applied at all levels of the decision-making process, including the macro level, since it can help the consolidation, investment and development of actions and policies.
The storage capacity and use of databases has increased at the same rate as advances in the new information and communication technologies. Extracting relevant information is, as a result, becoming quite a complex task. This 'panning for gold' process is known as Knowledge Discovery in Databases - KDD.
KDD can be regarded as the process of discovering new relationships, patterns and significant trends through painstaking analysis of large amounts of stored data. This process makes use of recognition technologies using statistical and mathematical patterns and techniques. Data mining is one of the techniques used to carry out KDD. Specific aspects of the technique are: the investigation and creation of knowledge, processes, algorithms and mechanisms for recovering potential knowledge from data stocks (Norton, 1999).
The discovery of knowledge in databases, KDD, is regarded as a wider discipline and the term 'data mining' is seen as a component concerned with the methods of discovery and knowledge (Fayyad et al., 1996).
The application of data mining permits testing of the premise of turning data into information and then into knowledge. This possibility makes the technique essential to the decision-making process. In order to achieve this result it is necessary to investigate the effective use of knowledge obtained by data mining in the decision-making process and the impact it has on the effective resolution of problems and on planned and executed actions.
This study intends to demonstrate the application of the data mining technique, using as a case study the DocThéses database, a catalogue of French theses. The study focuses on theses dealing with Brazil and also includes theses by Brazilians defended in France. The period studied is 1969-1999. The parameters of the study were:
Occurrence of theses related to Brazil by subject areas;
Thesis supervisors identified with certain subject areas;
Geographical distribution of cities hosting institutions where the theses were defended;
Frequency by subject area in the period when the theses were defended between 1969 and 1999.
The data mining process
Included in the concept of data mining (DM) are all those techniques that permit the extraction of knowledge from a mass of data which would otherwise remain hidden in large databases. In the first stage of DM we have pre-processing, in which data are collected, loaded and 'cleaned'. In order to do this successfully, it is necessary to know the database, which involves understanding its data, the cleaning process and preparation data in order to avoid duplication of content as a result, for example, of typing errors, different forms of abbreviation or missing values.
Data mining tools identify all the possibilities of correlation that exist in databases. By means of data-exploration techniques it is possible to develop applications that can extract from the databases critical information with the aim of providing maximum possible assistance in an organization's decision-making procedures.
The concept of data mining, according to Cabena et al. (1997) is: the technique of extracting previously unknown information with the widest relevance from databases, in order to use it in the decision-making process.
Figure 1: Diagram of data mining technique
Figure 2 shows the logical placing of the different phases of decision-making with their potential value in the areas of tactics and strategy. In general, the value of information to support the taking of a decision increases from the lower part of the pyramid towards the top. A decision based on data in the lower levels, in which there are usually millions of data items, has little added value, while one that is supported by highly abbreviated data at the upper levels of the pyramid probably has greater strategic value.
By the same token, we find different users at the different levels. An administrator, for example, working at an operational level, is more interested in daily information and routine operations of the 'what' type, found in records and databases at the bottom of the information pyramid. This information creates data. On the other hand, business analysts and executives responsible for showing the way forward, creating strategies and tactics and supervising their execution, need more powerful information. They are concerned with trends, patterns, weaknesses, threats, strong points and opportunities, market intelligence and technological changes. They need 'why' and 'and if' information. They need internal and external information. They are the creators and those who demand data analyzed with a high level of value added, information from the top of the pyramid.
Figure 2: Evolution of strategic value of database
(Source: based on Cabena et al., 1997 and Tyson, 1998)
A general view of the stages involved in DM is shown in Figure 3. The process starts with a clear definition of the problem - stage 1, followed by stage 2, which is the selection process aimed at identifying all the internal and external sources of information and selecting the sub-group of data necessary for the application of DM, to deal with the problem. Stage 3 consists of preparing the data, which includes pre-processing, the activity that involves the most effort. It is divided into visualization tools and data reformatting tools, which make up 60% of DM, a situation illustrated in Figure 4. This preparation is crucial for the final quality of the results and because of this, the tools used are very important. The software used at this stage must be capable of performing many different procedures, such as adding values, carrying out conversions, filtering variables, having a format for exporting data, working with relational databases and mapping entry variables. In general these stages resemble the information cycle or the information management process carried out within the thematic area of Information Science, particularly in the information retrieval process.
Figure 3: Stages in the data mining process (Source: Cabena et al., 1997)
Figure 4: Typical effort needed for each stage of data mining (Source: Cabena et al., 1997)
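Because the preparation stage dominates the effort, it is worth picturing what that cleaning work looks like in practice. The sketch below is only an illustration in modern Python; it is not the Infotrans procedure used in this study, and the field names and normalization rules are assumptions:

```python
import unicodedata

def normalize_name(name: str) -> str:
    """Normalize a supervisor/author name: collapse whitespace, strip accents, uppercase."""
    name = " ".join(name.split())                      # collapse internal whitespace
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    return name.upper()

def clean_records(records):
    """Drop exact duplicates and normalize the fields used for counting."""
    seen, cleaned = set(), []
    for rec in records:                                # rec is assumed to be a dict of fields
        rec = dict(rec)
        rec["supervisor"] = normalize_name(rec.get("supervisor", ""))
        rec["discipline"] = rec.get("discipline", "").strip().lower()
        key = (rec.get("author", "").strip().lower(),
               rec.get("title", "").strip().lower())
        if key not in seen:                            # skip records already seen
            seen.add(key)
            cleaned.append(rec)
    return cleaned

# Two spellings of the same name normalize to the same form
print(normalize_name("FÉBVRE,  Lucien ") == normalize_name("Febvre, Lucien"))  # True
```

This kind of normalization is what makes the later frequency counts trustworthy: two spellings of the same supervisor would otherwise be counted as two people.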
We now pass on to stage 4 in the analysis of results obtained through the DM process, two basic aspects of which have to be considered: giving information about new discoveries and presenting them in such a way that they can be potentially exploited. In this phase the participation of an expert in the area of databases is recommended in order to answer specific technical questions that may influence the analysis. Business managers and executives may be involved at this stage.
By applying data mining we may achieve various kinds of knowledge discovery. Among these, the discovery of associations, discovery of groupings, discovery of classifications, discovery of forecasting rules, classification hierarchies, discovery of sequential patterns, and discovery of patterns in categorized segmented and time series, which are found in Alvares (2000).
Case Study for the Application of Data Mining
The Database chosen to study data mining was DocThéses, the catalogue of theses defended in French universities. This catalogue is the responsibility of the Agence Bibliographique de l'Enseignement Supérieur - ABES, connected to the Department of Research and Technology of the French National Ministry of Education and its aim is to supply the University Documentation System, to locate and register the documentary resources of higher education libraries and also to monitor the regulation of cataloguing and indexing texts.
The DocThéses database is available on CD -ROM and the year 2000 version was used for this study. Theses that had Brazil as their research topic were extracted. The total sample was 1,355 theses (bibliographic records), among which were also included all theses written by Brazilians and defended in France between 1969 and 1999.
The format for each bibliographic reference (occurrence) followed the structure below; a small parsing sketch appears after the list:
Author;
Title;
Supervisor;
Discipline (subject area);
Keywords;
Year of Defense;
University or establishment where the work was presented, and
Complete text.
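As an illustration of how records with this structure might be loaded for bibliometric treatment, the following sketch in modern Python assumes a semicolon-delimited export; the layout, delimiter and field names are assumptions for illustration, not the actual DocThéses format:

```python
from dataclasses import dataclass

FIELDS = ["author", "title", "supervisor", "discipline",
          "keywords", "year", "institution", "text"]   # assumed order

@dataclass
class ThesisRecord:
    author: str
    title: str
    supervisor: str
    discipline: str
    keywords: list
    year: int
    institution: str
    text: str

def parse_record(line: str) -> ThesisRecord:
    """Parse one semicolon-delimited record into a structured object (assumed layout)."""
    parts = [p.strip() for p in line.split(";")]
    parts += [""] * (len(FIELDS) - len(parts))          # pad short records
    return ThesisRecord(
        author=parts[0],
        title=parts[1],
        supervisor=parts[2],
        discipline=parts[3],
        keywords=[k.strip() for k in parts[4].split(",") if k.strip()],
        year=int(parts[5]) if parts[5].isdigit() else 0,
        institution=parts[6],
        text=parts[7],
    )

sample = "Silva, J.; Agriculture au Brésil; Mauro, F.; Histoire; Brésil, agriculture; 1985; Paris X; ..."
print(parse_record(sample).discipline)   # 'Histoire'
```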
We chose to study various tendencies in procedure, created and chosen by means of applying the Dataview bibliometric software which will be the object of commentary and analysis in subsequent sections of this study.
Simplified Methodology
After the data preparation stage in which Infotrans Version 4.07 software was used, and once the working database had been prepared, we began data mining using Dataview, bibliometric software for extracting trend indicators developed by the Centre de Recherche Rétrospective de Marseille - CRRM of the Aix-Marseille III University, St. Jérôme Centre, Marseilles, France.
Dataview is based on bibliometric methods whose ultimate objective is to turn data into intelligence for decision-making by creating elements for statistical analysis. To achieve this, reformatting data is a basic condition for bibliometric treatment. After statistical analysis the information retrieved will have a decisive influence on generating knowledge and intelligence, a process in which two aspects will be considered.
Both value and validity of information will have a decisive influence in the search for knowledge in databases (KDD). This is the philosophy which must direct any study concerning data mining as well as generating knowledge. When applying Dataview, the importance of the previous phase of data preparation (data cleaning) done with Infotrans became obvious. The quality of the data generated by Infotrans produced clear results in the bibliometric analysis.
In Figure 5 we present the situation of Dataview in a bibliometric study. Another important characteristic of the Dataview software relates to the measurement characteristic of bibliometry established on numerical bases which in their turn are created by using occurrences. Thus, for each unit of bibliographic element, occurrence must be dealt with in three ways, a) primary state - simple location of occurrences, presence or absence of reference elements, b) condensed state - expansion of these occurrences or frequencies, and c) co-occurrence, which represents the combination of primary and condensed states. In this way lists will be created - occurrence frequency and co-occurrence - and frames - frameworks of presences and absences (Rostaing, 2000).
Figure 5: Position of Dataview in a bibliometric study (Source: Rostaing, 2000)
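The occurrence lists and co-occurrence tables described above can be pictured with a few lines of code. The sketch below is only an illustration of the idea in modern Python; it is not Dataview, and the keyword lists are invented:

```python
from collections import Counter
from itertools import combinations

# Each inner list holds the keywords indexed for one thesis (invented examples)
theses_keywords = [
    ["brazil", "economics", "trade"],
    ["brazil", "sociology"],
    ["brazil", "economics", "history"],
]

# Condensed state: frequency of each keyword across the corpus
occurrences = Counter(kw for kws in theses_keywords for kw in set(kws))

# Co-occurrence: how often two keywords appear in the same record
co_occurrences = Counter()
for kws in theses_keywords:
    for a, b in combinations(sorted(set(kws)), 2):
        co_occurrences[(a, b)] += 1

print(occurrences.most_common())                  # [('brazil', 3), ('economics', 2), ...]
print(co_occurrences[("brazil", "economics")])    # 2
```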
In Figure 6 we show a schematic view of the stages of a work session in Dataview.
Figure 6: Stages in a Dataview work session (Source: Rostaing, 2000)
To gain an understanding of data it is important to know the three basic laws of bibliometry:
1) Bradford's Law (or the Law of Dispersion): concentrates on the repetitive behavior of occurrences in a specific field of knowledge. Bradford chose periodicals for his analysis because of their characteristics of occurrence of themes and tendencies, and found that few periodicals produce many articles and many periodicals produce few articles.
2) Lotka's Law: analyses writers' scientific production, that is, it measures the contribution of each of them to scientific progress. Lotka's Law states the following: the number of writers who produce n works is in the proportion of 1/n² of the number of writers who produce only one work;
3) Zipf's Law: is called the fundamental quantitative law of human activity. It is sub-divided into Zipf's First Law, which relates to the frequency of words appearing in a text (number of occurrences of words). It is controlled by the following mathematical expression:
R × F = K
Where K = constant; R = word order (rank), and F = word frequency.
Zipf's Second Law identifies low-frequency words that occur in such a way that several words show the same frequency (Tarapanoff, Miranda & Araújo Jr., 1995).
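Zipf's First Law can be checked empirically: rank the words of a text by frequency and the product R × F should remain roughly constant. A minimal sketch in modern Python follows; the corpus file name is an assumption:

```python
import re
from collections import Counter

def rank_frequency(text: str):
    """Return (rank, word, frequency, rank*frequency) tuples, most frequent first."""
    words = re.findall(r"[a-zà-ÿ']+", text.lower())
    counts = Counter(words).most_common()
    return [(r, w, f, r * f) for r, (w, f) in enumerate(counts, start=1)]

with open("corpus.txt", encoding="utf-8") as fh:      # assumed input file
    table = rank_frequency(fh.read())

for rank, word, freq, product in table[:10]:
    print(f"{rank:>4} {word:<15} {freq:>6} K≈{product}")
```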
For this study, we shall look at the Zipf curve in the light of the Figure below:
Figure 7: The Zipf curve
According to Quoniam (1992), on the Zipf curve we have:
Zone I - Trivial information: defining the central themes of the bibliometric analysis;
Zone II - Interesting information: found between Zones I and III and showing both peripheral topics and also potentially innovative information. It is here that technology transfers related to new ideas should be considered, and
Zone III - Noise: characterized by containing concepts that have not yet emerged, in which it is impossible to say whether they will emerge or if they will remain merely statistical noise.
Zones I, II and III are represented on the Zipf curve as shown in the following Figure; a small sketch of how terms can be assigned to these zones appears after the figure caption.
Figure 8: Zones of distribution (Source: Based on Quoniam, 1992)
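One rough way to operationalize the three zones is to rank the indexing terms by frequency and cut the ranked list by cumulative share of occurrences. The sketch below is an illustration in modern Python; the cut-off values and the term counts are arbitrary choices for the example, not values taken from Quoniam (1992) or from this study:

```python
from collections import Counter

def split_into_zones(counts: Counter, cut1=0.6, cut2=0.95):
    """Assign terms to Zones I/II/III by cumulative share of total occurrences."""
    total = sum(counts.values())
    zones = {"I (trivial)": [], "II (interesting)": [], "III (noise)": []}
    cumulative = 0
    for term, freq in counts.most_common():
        cumulative += freq
        share = cumulative / total
        if share <= cut1:
            zones["I (trivial)"].append(term)
        elif share <= cut2:
            zones["II (interesting)"].append(term)
        else:
            zones["III (noise)"].append(term)
    return zones

counts = Counter({"term_a": 300, "term_b": 150, "term_c": 60,
                  "term_d": 40, "term_e": 5, "term_f": 2})
for zone, terms in split_into_zones(counts).items():
    print(zone, terms)
```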
Starting from this reference point, we chose to present the results of the data mining exercise as applied to the DocThéses database taking into account only Zones I and II by reason of their ability to define the central themes of the bibliometric analysis and of potentially innovative information, respectively. The results are presented in the following section.
Analysis of Results
Occurrence of theses containing the term 'Brazil', by subject area
Graph 1: Occurrence by subject area
A third of the total of theses which had Brazil as either the researcher's country of origin or as the topic of research were found in the areas of economics, sociology and technological sciences, closely followed by 101 and 98 theses in the areas of geography and biology respectively, as may be seen in Graph 1; this corresponds to Zone I - Trivial information.
Since France, together with Germany, has one of the most important and longstanding schools of sociology, it is a favorable location for the elaboration of academic studies in this area, as is shown in Graph 1. The same is true of economics, where we also find a strong interest in Latin American topics. These are topics that students researching Brazil look for and are of constant interest.
Thesis supervisors identified with certain subject areas
Table 1: Thesis supervisors identified with certain subject areas
In terms of the area of technology, France is one of the world leaders in technological development, having an efficient system of technological innovation that justifies its position in the rankings of this field of research. Of the supervisors represented in these areas, Table 1 shows that production is concentrated around those lecturers who together account for 20%, 18% and 7.1% of the total number of theses defended.
It should be pointed out that the areas analyzed were those in which the numbers of Brazilians in France grew during the period up to 1994, after which time there was a decline in demand.
Zone II - Interesting information, in its turn, represents those areas that are emerging, which is indicated by the areas of education, medical sciences, Latin American studies and history, which have been increasing in popularity since 1995. Some of the facts that have been creating interest in these areas of study are found in the influence of the new scientific and technological dimension as is the case of the areas of education and medicine, which are constantly affected by new discoveries and technologies that move them forward in the field of human knowledge. In the case of history, the fact of our living in a period of abrupt transition in this type of society, forces us to engage in a constant re -reading and search continually for explanations concerning new aspects of this society.
Within the area of history, Frédéric Mauro stands out, because among all the supervisors, he supervised the greatest number of theses between 1969 and 1999, with 25% of the total relative to the first group of supervisors (Zone II - interesting information), as Graph 2 illustrates. This performance results fundamentally from the strong influence of French historiography in Brazilian academic life. In the 1930s, a group from France, composed of several lecturers from different areas, brought to Brazil the eminent teacher Fernand Braudel, one of the creators of the first founding generation of the French Annales School, which still contains important figures in historiography such as Marc Bloch and Lucien Febvre. At this time, as a result of the French visit, the History Department of the University of São Paulo was founded, an event that began the decisive influence of French historiography in Brazil. In the particular case of Professor Frédéric Mauro, his great influence in this school of historiography, together with that of Georges Duby and Jacques Le Goff, among others, belongs to the second generation.
As a result of the facts mentioned above we may state that not only does the number of theses supervised by Mauro account for the significant number of works noted in the area of history, but that this is also clearly due to the fact that French historiography has been the main catalyst for the interest of Brazilian historians seeking training abroad.
Graph 2: Thesis supervisors identified with certain subject areas, by groups
Concentration of defended theses by French cities
When we looked at the careers of researchers in France, the result was Graph 3, which indicates that 62% of defended theses were presented in Paris. Of the remaining 38%, Montpellier, Toulouse, Marseilles, Grenoble and Bordeaux accounted for 50%. The remaining approximately 50% were in 30 other French towns.
Graph 3: Concentration of defended theses, by city
Period of Presentation of Theses between 1969 and 1999
By analyzing Table 3 we find that between 1974 and 1978, only the area of Law achieved high levels of interest, the greatest concentration that was found relative to all the other areas. This situation is noticeable and may be explained in part by the political circumstances prevailing in Brazil during the 1970s.
The coincidence of the high level of concentration of theses defended in France with the peak of the military dictatorship in Brazil from 1967 raised the level of interest in understanding the state of law imposed there, especially in relation to the citizen's basic rights and guarantees.
In the area of linguistics, it will be seen that it peaked between 1980 and 1984, with a tendency to recapture interest after 1995.
Table 3: Incidence of Subject Areas by Periods of Years (1969 - 1999)
By and large, in relation to the number of theses defended during the period in question, we may note that since 1996 the number has been falling rapidly, as may be seen in Graph 4. The reason for this is perhaps found in the fact that since 1999 there has been uncertainty about grants for overseas study in the areas of humanities and social studies, which has meant that the area of technology alone is not enough to keep the overall numbers high.
It is interesting to note that in the period of relative equilibrium in the curve, which oscillates between 36 and 58 theses defended between 1980 and 1990, an average of about 47 theses were defended each year, with the field of economics being especially prominent during this period.
Graph 4: Incidence of defense of theses by periods of years
In the field of information sciences, twelve theses were defended between 1974 and 1999. The golden age was between 1980 and 1984, with a total of five theses. Prominent among the supervisors is F. Ballet, followed by J. Meyriat. The other five, each responsible for one thesis, were P. Albert, M. Menou, M. Mouillard, J. Perriault and G. Thibault, the latter, based in Bordeaux, being the only one working outside Paris. M. Menou has worked in information sciences as an international consultant in Canada, where he has developed several lines of research on the impact of information on development. He has developed a wide-ranging consultancy network in Brazil in conjunction with the Instituto Brasileiro de Informação em Ciência e Tecnologia - IBICT, linked to the Brazilian Government's Ministry of Science and Technology.
With regard to Zone III - the so-called zone of noise, in spite of its not yet having established emerging concepts and because it is not a very conclusive area, it must be systematically monitored since it can show, or at least allow, in the analysis of weak signals, the inference of future interests in training and research. Thus we should not dismiss it a priori. In this zone are found art and archaeology, literature, political science, science and technology, philosophy, administration, information science and communication studies, among others.
The analysis of the DocThéses database in relation to retrieving the word 'Brazil' by means of data mining was revealing with regard to the chosen subject areas, related supervisors, chronological period of major concentration of theses defended and cities chosen.
Discovery of knowledge occurred gradually as the data mining process took shape. In the first stage - defining the problem - it was decided to explore the database related to Brazil both by keyword and by origin of supervisor. The second stage - cleaning the data - brought about the first contact with the data, extracting only those of potential interest in discovering a pattern. In the third stage - carrying out the data mining per se - it was decided to use the Dataview software which already had embedded in its system statistical rules and the ability to visualize data to find knowledge. The first analyses and findings come from this phase, in line with the aim of the research. In the fourth stage - analysis of data - new associations were created and knowledge emerged.
The results obtained are an illustration of how national organs for encouraging research and training high-level human resources, such as the Coordenação de Aperfeiçoamento do Pessoal de Nível Superior (CAPES) and the Conselho Nacional de Pesquisa (CNPq), can direct their investments into areas of knowledge that are felt to be relevant, by means of knowledge discovered in databases. On the academic side, the Brazilian Federal Universities have already started to use data mining in laboratory research and consultancy work using several software packages, among them Clementine (SPSS, 2001).
Although the utilization of data mining in Brazil is still in its initial phase, in the governmental and productive sector there are signs of its application. The Brazilian Programme of Industrial Technological Prospective (Programa Brasileiro de Prospectiva Tecnológica Industrial) makes use of the methodology of Technology Foresight, and uses data mining on historical and current databases to foresee probable futures.
The figures obtained and their application reinforce the aims of the data mining process by turning data into information and being used in the decision-making process of organizations that take decisions related to the preservation of and innovation in knowledge. Although economics, sociology and history are not priority areas for development in Brazil, they are essential to an understanding of the roles of the Brazilian economy, society and history, which have been strongly influenced by France from the point of view of theoretical and cultural orientation. The impact of this influence was seen in the Exhibition of the Re-Discovery of Brazil (2000), where many documents written by French travelers indicated their presence and influence in the country. Other areas such as technological sciences should be examined because they are diminishing, while areas in expansion should be examined from the point of view of elaborating bilateral technical, cultural and economic co-operation agreements.
It is impossible to deal with all the implications concerning political, technical, economic and cultural agreements that may be achieved through analyzing databases of the kind studied here. Other bases from other sources and other countries would provide different possible implications.
It is possible that the present article might be the start of a series dealing with the rise of interest in research in Brazil and other countries that seeks parallels and discovers knowledge from the results found by applying data mining as an effective managerial tool.
References
ACM special interest group on knowledge discovery in data and data mining. Available at http://www.acm.org/sigkdd/
Alvares, Lillian. (2001) Aplicação de data mining em bases de dados especializadas em ciência da informação para obtenção de informações sobre a estrutura de pesquisa e desenvolvimento em ciência da informação no Brasil. Brasília : UFRJ/ECO, MCT/INT/IBICT, 2001.
Brasil. Ministério do Desenvolvimento, Indústria e Comércio Exterior. Secretaria de Tecnologia Industrial. (1999) Programa Brasileiro de Prospectiva Tecnológica Industrial: plano de ação. Brasília: MDIC/STI.
Cabena, Peter, et al. (1998) Discovering data mining: from concept to implementation. Englewood Cliffs, NJ: Prentice Hall.
Datamation Magazine. Available at http://www.datamation.com
Fayyad, U.M., Piatetsky-Shapiro, G., Smyth, P. & Uthurusamy, R. (1996) Advances in knowledge discovery and data mining. Cambridge, MA: AAAI Press.
Information Discovery Inc. Available at http://www.datamining.com
International Conference on knowledge discovery & data mining. Available at http://www.digimine.com/usama/datamine/kdd99/
Mooers, C.N. (1951) "Datacoding applied to mechanical organization of knowledge". American Documentation, 2, 20-32.
Norton, M. Jay. (1999) "Knowledge discovery in databases" Library Trends, 48(1), 9-21.
Pritchard, Alan. (1969) "Statistical bibliography or bibliometrics?" Journal of Documentation, 25(4), 348-349.
Quoniam, Luc. (1992) "Bibliométrie sur des référence bibliographiques: methodologie" In: Desvals H.; Dou, H., Eds.. La veille technologique. (pp. 244-262) Paris: Dunod.
Quoniam, Luc. (1996) Les productions scientifiques en bibliométrie et dossier de travaux. Marseille: Université de Droit d'Economie et des Sciences d'Aix-Marseille III
Rostaing, Hervé. (1996) La bibliometrie et ses tecniques. Toulouse: Sciences de la Societé; Marseille: CRRM.
Rostaing, Hervé. (2000) Guide d'utilisation de Dataview: logiciel bibliométrique d'aide à l'élaboration d'indicateurs de tendances. Marseille: CRRM
SPSS Brasil (2001) Customer Relationship Management e Business Intelligence. Apresentação do software Clementine. São Paulo: SPSS.
Taylor, Robert S. (1986) Value-added process in information systems. Norwood, NJ: Ablex.
Tarapanoff, Kira; Araújo Jr., Rogério Henrique de; Cormier, Patricia Marie Jeanne. (2000) "Sociedade da informação e inteligência em unidades de informação". Ciência da Informação, 29(3), 91-100.
Tarapanoff, Kira. (Org.) (2001) Inteligência organizacional e competitiva. Brasília: Editora Universidade de Brasília.
Technology Foresight. Available at http://www.ics.trieste.it/foresight/technology-foresight/
Tyson, K.W.M. (1998) The complete guide to competitive intelligence. Chicago, IL: Kirk Tyson International.
Zanasi, Alessandro. (1998) "Competitive intelligence through data mining public sources." Competitive Intelligence Review, 9(1), 44 -54.
How to cite this paper:
Tarapanoff, Kira, et al. (2001) "Intelligence obtained by applying data mining to a database of French theses on the subject of Brazil" Information Research, 7(1). Available at: http://InformationR.net/ir/7-1/paper117.html
© the authors, 2001. Updated: 26th September 2001
Who Goes There? Measuring Library Web Site Usage
ONLINE, January 2000
Copyright © 2000 Information Today, Inc.
After all the work, time, and money that's invested in building and maintaining the library Web site, you and your staff will most likely want to know who, if anyone, is using it. Additionally, what features and resources do visitors use most often? Are the people accessing the site the same people who come into the library? How do people find the Web site? Do they use a search engine? These are usage questions, and librarians already have experience in gathering usage data. For example, librarians count the number of questions asked at a reference desk as a way of measuring its use. Like the reference desk, the library Web site represents a service point. The Web site service point, however, is electronic, and it requires new methods of measuring usage.
Understanding the basics of Web server technology and the data servers record is a good start in developing usage measurement techniques. After that, you can explore the software that exists to help you make sense of Web site statistics, and find the right software for your system.
WEB SERVER LOG FILES
Every transaction on the Internet consists of a request from a browser client and a corresponding action from the computer server. Each individual client/server transaction is recorded on the server in what is called a server log file. Its most basic form is called the common log file. [Note: In the examples that follow, log entries typically appear as a single line of text.]
The Common Log File
The common log file format is the standard set by the World Wide Web Consortium. The syntax of an entry in a common log file looks like the following:
remotehost rfc931 authuser [date] "request" status bytes
Broken out, each component of a common log file has its own meaning. remotehost: The name of the computer accessing the Web server.
rfc931: The name of the remote user. This field is often blank.
authuser: The login of the remote user. This field is also often blank.
[date]: Date and time of the access request.
"request": The URL of the file requested, exactly as the client requested it.
Status: the error code generated by the request and returned to the client.
Bytes: size in bytes of the document returned to the client.
A typical log entry might look something like:
gateway.iso.com - - [10/MAY/1999:00:10:30 -000] "GET /class.html HTTP/1.1" 200 10000
In this example, the remote host is gateway.iso.com. The next two fields, rfc931 and authuser, are blank (represented by dashes). The request was made on May 10, 1999 at 10 minutes after midnight. The file requested was class.html. The error code 200 (status OK) was returned, and the file requested was 10,000 bytes in size.
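If you want to work with these entries yourself, a common log line can be pulled apart with a short script. The following sketch in Python is a generic illustration; the regular expression covers the common log format shown above, not every server's variant:

```python
import re

CLF_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<rfc931>\S+) (?P<authuser>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_clf(line: str):
    """Return the fields of one common-log-format entry as a dict, or None."""
    match = CLF_PATTERN.match(line)
    return match.groupdict() if match else None

entry = 'gateway.iso.com - - [10/MAY/1999:00:10:30 -000] "GET /class.html HTTP/1.1" 200 10000'
fields = parse_clf(entry)
print(fields["request"])   # GET /class.html HTTP/1.1
print(fields["status"])    # 200
```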
The common log file format may be the standard, but variations of log files exist. Additional information may be stored in referrer and agent logs.
Referrer Log File
Many servers record information about the referrer site, or the URL a visitor came from immediately before making a request for a page at the current Web site. An entry in a referrer log might look like this:
08/02/99, 12:02:35, http://ink.yahoo.com/bin/query?p="sample+log+file"&b=21&hc=0&hs=0, 999.999.999.99, jaz.med.yale.edu
In this example, the referring page was a search engine, ink.yahoo.com, and the search used to find the requested page was "sample log file." (Many Web designers and marketers are interested in the search words that lead users to their sites.) Note that the IP address of the computer making the request, 999.999.999.99, is also recorded here.
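Pulling the search words out of a referrer entry is mostly a matter of reading the query string. A minimal sketch, assuming the comma-separated referrer format shown above and that the search engine passes its query in a parameter named p or q (an assumption that varies by engine):

from urllib.parse import urlparse, parse_qs

def search_terms(referrer_url):
    # Return the search phrase embedded in a referrer URL, if any.
    query = parse_qs(urlparse(referrer_url).query)
    for key in ("p", "q", "query"):
        if key in query:
            return query[key][0]
    return None

fields = '08/02/99, 12:02:35, http://ink.yahoo.com/bin/query?p="sample+log+file"&b=21, 999.999.999.99, jaz.med.yale.edu'.split(", ")
print(search_terms(fields[2]))   # prints the phrase that led the visitor to the site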
Agent Log File
A third type of recording is the agent log. An agent log records the browser and operating system used by a visitor. It will also record the name of spiders or robots used to probe your Web site. An example of a hit from a Northern Light search engine, recorded in an agent log, might look like:
07/09/99, 13:59:24, , 999.999.99.99, scooby.northernlight.com, [email protected], Gulliver/1.2
In addition to the standard information about the date, time, and IP address, [email protected] tells you that this hit came from a crawler.
A hit from a Web browser would reveal the browser name and version, such as Mozilla/4.0. This probably means the visitor's browser was Netscape version 4.0 (Mozilla was the code name for Netscape and is still used for a browser compliant with the open-source Netscape code.) Browser information, however, is not always considered reliable.
Common log files, referrer logs, and agent logs are sometimes combined into one log. Whatever format your Web server uses, the first thing you will need to do is determine what type of log file is being generated. The person responsible for the server should be able to tell you what format is used. In addition, there may be options in the log file that determine what data is recorded, and you may be able to use these options to increase or decrease the data collected, depending on your needs.
To sum up, some of the things you can learn from your Web server's log files are:
What pages on your site are requested.
The IP addresses of computers making requests.
The date and time of requests.
If a file transfer is successful or not.
The last page a requester visited before coming to your site.
The search terms which led someone to your site.
LOG FILE LIMITATIONS
Log files are designed to help server administrators gauge the demands on a server, and they are very good at this. However, log files are not designed to describe how people use a site. You cannot always distinguish, for example, if three requests for a file came from three different people, or from one person requesting the file three times. This is largely because many people use the Internet through dial-up connections to an ISP. In a practice called dynamic addressing, the ISP assigns an IP address to the user while online, but will reassign the IP to a new user when the first user disconnects. An individual cannot be identified with an IP, unless there's a direct Internet connection (called static addressing). Some log analysis software tries to identify a single visitor to a site by looking at requests from the same IP address during short periods of time, but it is still not possible to tell when the same person returns the next day with a different IP.
There are also concerns about how caching affects log files. Caching occurs when you visit a Web page and your browser stores that page in memory. The next time you request the same URL, your browser will search its memory for the URL. If it is cached, it will pull the page from memory, and the server will never receive the request. You are using the site, but that will not be recorded in the server's log files. ISPs also utilize caching, which exacerbates the problem.
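The "same IP address during short periods of time" approach can be sketched in a few lines. This is only an approximation, for the reasons just given, and the thirty-minute cutoff is an arbitrary but common choice:

from datetime import timedelta

SESSION_GAP = timedelta(minutes=30)   # arbitrary but common cutoff

def estimate_visits(requests):
    # requests: list of (ip_address, datetime) pairs taken from the log.
    # A new "visit" is counted when an IP appears for the first time or
    # after more than SESSION_GAP of inactivity. Dynamic addressing and
    # caching mean this is an estimate, not a true visitor count.
    last_seen = {}
    visits = 0
    for ip, when in sorted(requests, key=lambda pair: pair[1]):
        previous = last_seen.get(ip)
        if previous is None or when - previous > SESSION_GAP:
            visits += 1
        last_seen[ip] = when
    return visits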
A good rule to remember is that the log file measures requests for specific files on a server, not exact usage. The number of requests does not translate into number of unique visitors, and the numbers may not reflect all usage because of caching. Measuring usage requires extrapolating from what the log file tells us and entails some level of error. To gain more exact knowledge about Web site usage, other means of investigation, such as questionnaires or cookies, must be used.
The good news about log file limitations is that they represent at least some protection of user privacy. A log file never records the user's name, or home or email address. Such information can only be recorded if the Web site asks a user to register and then requires a login for each subsequent visit. A site that does this can then link log files to a database of user profiles to generate reports about individual usage. Analyzing log files, by themselves, can only provide data about groups of users.
Also remember that dynamic addressing masks some individual users because they are not associated with a unique IP address. Anyone, however, who connects directly to the Internet will have a unique, unchanging IP address. Even though a name is not recorded, access to an individual's IP address can reveal their actions on a Web site. There are currently no laws covering how to handle the information contained in a log file, but because log files can contain information about individual IP addresses, they should be considered confidential, much as circulation records are confidential. Any data the library makes public from its log files should mask individual IP addresses. Data can always be presented at the level of usage by large groups (such as users from a particular country or in-house versus outside users).
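One practical safeguard before publishing or sharing log data is to replace each IP address with a one-way hash, so groups can still be counted but individual machines cannot be identified. A minimal sketch; the salt value is a placeholder and should be kept private:

import hashlib

SALT = "choose-a-secret-value"   # placeholder; keep this out of published reports

def mask_ip(ip_address):
    # Replace an IP address with a short one-way hash before sharing log data.
    digest = hashlib.sha256((SALT + ip_address).encode("utf-8")).hexdigest()
    return digest[:12]

print(mask_ip("999.999.999.99"))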
However your library decides to analyze log files, the library Web site should carry a complete statement of what data is collected, who can see the data, and how that data is used.
CHOOSING LOG ANALYSIS SOFTWARE
Log files consist of thousands of lines of text. It is impossible to extract useful information by simply reading them. Log files can be analyzed by downloading their contents into a spreadsheet or database designed specifically for your site. Creating this type of application is time-consuming and very difficult. A far more common approach to analyzing log files is to use software designed to manipulate and analyze the data they contain.
Before you consider analysis software, make sure you understand what you really want to know about your library's Web-site usage. There is free and commercial software available for log analysis; each has advantages and disadvantages. In general, commercial software offers more features, enhanced graphics, and some level of customer support. If your needs are not too complex, a simpler, less expensive alternative may suit you as well as, or better than, the most full-featured analysis packages.
The following are not reviews of software. They are quick snapshots of some of the features of free and commercial software to acquaint you with what is available and the price range. No single software package is right for everyone. Performance of individual software will be affected by the types of log files your server produces, so you need to test your own system using your own log files to evaluate what works best in your environment. As you examine software options, keep one key point in mind. Log analysis software can aid in gathering, distilling, and displaying information from log files, but no matter how sophisticated the software, it cannot add to or improve on what is already available in the log file. The contents of the log file are the ultimate limiting factor in what log analysis software can do for you.
FREE LOG ANALYSIS SOFTWARE
TITLE: Analog Version 3.31
PRODUCER: Stephen Turner, University of Cambridge Statistical Laboratory
URL: http://www.statslab.cam.ac.uk/~sret1/analog/
CUSTOMER SUPPORT: No
LOG FILE FORMAT: Configurable to support many formats
PLATFORM: Windows, Macintosh, UNIX, and others
Analog is a very popular, freely available log analysis program developed by Stephen Turner. It produces a standard report, which can be configured to the specifications of the user, and offers a General Summary of requests to a Web server.
An important feature is the Request Report. This report displays the most requested Web pages on the Web site, from most to least. The Request Report lists the number of requests, the last date when the file was requested, and the file name.
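The same kind of tally can be produced directly from parsed log entries. A sketch only, reusing the parse_line function from the earlier example (an assumption of this illustration, not part of Analog):

from collections import Counter

def request_report(parsed_entries):
    # Count how many times each file was requested, most requested first.
    counts = Counter(entry["request"] for entry in parsed_entries if entry)
    return counts.most_common()

sample = [{"request": "GET /class.html HTTP/1.1"},
          {"request": "GET /index.html HTTP/1.1"},
          {"request": "GET /class.html HTTP/1.1"}]
for request, total in request_report(sample):
    print(total, request)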
In addition to the General Summary and the Request Report, Analog will display monthly, daily, and hourly summaries. This can help to identify the busiest month, day of the week, and hour of the day. Also, Analog can show the most common domain names of computers where requests for the server's pages originated. This can tell you, for example, that 35% of requests came from academic sites in the U.S. Analog makes no attempt to try to identify the number of visitors to a site.
Analog is widely used. It runs on a variety of platforms and can recognize many log files. It does not offer advanced graphics capabilities. The two titles below are also free and can be downloaded from the Internet.
TITLE: wwwstat
URL: http://www.ics.uci.edu/pub/Websoft/wwwstat/
PRODUCER: Roy Fielding
LOG FILE FORMAT: Common log file format
PLATFORM: UNIX
TITLE: http-Analyze 2.01
URL: http://www.netstore.de/Supply/http-analyze/default.htm
PRODUCER: RENT-A-GURU
PRICE: Free for educational or individual use
LOG FILE FORMAT: Common log file format, some extended log file formats
COMMERCIAL LOG ANALYSIS SOFTWARE
TITLE: WebTrends Log Analyzer
URL: www.Webtrends.com
PRODUCER: WebTrends Corporation
PRICE: $399, more expensive products also available
CUSTOMER SUPPORT: Free technical support, by phone, email, and Web server; online FAQ and documentation.
TRIAL: Free 14-day trial
LOG FILE FORMATS: Recognizes 30 log file formats
PLATFORM: Windows 95/98/NT
WebTrends is a powerful software package that attempts to simplify the process of log analysis. Log profiles and reports are created and edited in menu-driven systems, with wizards and online help available to ease the process. WebTrends lets you manage multiple log files across several servers. Generating a customized report is done easily through the Report Wizard. In the report creation module, you may elect to generate tables and graphics from General Statistics, Resources Accessed, Visitors & Demographics, Activity Statistics, Technical Statistics, Referrers & Keywords, and Browsers & Platforms. Including a table or graph is as easy as checking a box in the wizard process. Graphs can be further customized as pie charts, or bar or line graphs. Reports can be generated as HTML, Microsoft Word, or Microsoft Excel documents. Some of the WebTrends reports are similar to what is offered in free software. For example, WebTrends will generate a report of the most requested pages on the Web site. Notice that a graph is included and that file addresses are also identified by titles.
WebTrends has more reports available than the free software. In the area of Resources Accessed alone, WebTrends generates tables and graphs for entry pages, exit pages, paths through the site, downloaded files, and forms. The other report sections are also full of enhanced capabilities. Referrers & Keywords presents the top search engines sending hits to your site, and the search terms that found your site. WebTrends reports can be filtered to exclude or include particular data. For example, you can choose to exclude requests generated by library employees by filtering those IP addresses out of the report. Other filters can present data for only one page, for a particular day of the week or hour of day, or for a particular referrer page. This feature is helpful in controlling the amount of data presented and aids in more finely targeting your reports to a particular subject.
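That kind of exclusion is simple to picture in code as well. A minimal sketch only; the staff addresses are invented placeholders, and commercial products apply such filters through their own configuration screens rather than scripts:

STAFF_IPS = {"10.0.0.15", "10.0.0.16"}   # placeholder addresses for library staff machines

def exclude_staff(parsed_entries):
    # Drop requests that originate from staff machines before reporting.
    return [entry for entry in parsed_entries
            if entry["remotehost"] not in STAFF_IPS]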
WebTrends uses mathematical algorithms to try to distinguish the number of visitors to your site. There are difficulties in determining unique users from log files, and this information may not be credible. WebTrends itself states that the only way to determine a unique visitor to the site is to use authentication (i.e., logons and passwords). For sites that do require authentication, WebTrends offers the ability to link user profile information in databases to visitor activity on the site. WebTrends offers many easy-to-use features. In some ways, it's a bridge between low-cost or free utilities and very high-end software packages, which can cost from $7,000 to $10,000. Some of the advanced capabilities in WebTrends might be more than your library requires. The following two software packages present many of the same capabilities as WebTrends, such as predefined and customizable reports, data filtering, graphics, and friendly user interfaces.
TITLE: Netintellect V.4.0
URL: http://www.Webmanage.com/
PRODUCER: WebManage
CUSTOMER SUPPORT: Support by phone, email, and online, as well as an online tutorial and documentation
TRIAL: Free 15-day trial
LOG FILES: Recognizes 45 log file formats
PLATFORMS: Windows 95/98/NT
TITLE: FastStats
URL: http://www.mach5.com/fast/
CUSTOMER SUPPORT: Free technical support via email, no phone support available
Library Web sites will only become more important. They represent vital service points and investments of money and staff time. If a library wishes to measure usage of its Web site, server log analysis is one tool that should be employed. Libraries that wish to gain more in-depth knowledge about usage should investigate other means of data gathering, such as questionnaires and cookies.
Server logs were designed to measure traffic and demand loads on a computer server, and they work well for this purpose. When server log files are used to try to measure how people use a site, they don't work quite as well. They can, however, give you useful information about the relative usage of pages on your Web site, other sites that refer visitors to your site, and how search engines help people find your site, among other important data.
Although log analysis isn't perfect, few measures of usage are. For example, when we count people who come through the doors of our library, we don't know if they are there to read books or magazines, or just use the bathroom. When we circulate a book, we don't know why it was selected or even if it is read. Server log file analysis can be viewed in the same light, as a flawed but necessary measure of usage. The important thing is to educate yourself about the abilities and limitations of log file analysis so that you can make educated use of the data it produces.
Kathleen Bauer ([email protected]) is an Informatics Librarian at the Yale School of Medicine Library. Comments? Email letters to the Editor at [email protected].
1 dvd for installation on an 86_64 platform back to top | 计算机 |
2014-23/2168/en_head.json.gz/10552 | Linux Gets Faster with Splashtop
Mar 05, 2009 By LJ Staff
One of the nagging problems for Linux is that the most popular laptops are still co-designed by Microsoft and its OEMs. It's not for nothing that laptops come with stickers on the bottom that say, “Windows Vista—Business OEM Software” or whatever. These are not white boxes. You can get Linux running on them, but the hermit crab approach isn't the swiftest route to market leadership.
It's starting to look like that route may come through Splashtop, by DeviceVM. Splashtop starts a laptop in just a few seconds. Its Web site explains:
Splashtop is preinstalled on the hard drive or in the onboard Flash memory of new PCs and motherboards by their manufacturers. Splashtop is a software-only solution that requires no additional hardware. A small component of Splashtop is embedded in the BIOS of the PC—that's the part that runs as soon as you press the power button.
Within Splashtop, you have the choice of running one of its applications, such as the Splashtop Web Browser, or booting your operating system. Splashtop is compatible with any operating system, including Windows and Linux.
Splashtop has similar networking capabilities to what you find in other operating systems. It can connect to networks over Wi-Fi, LAN, xDSL and cable. WEP, WPA and WPA2 wireless security standards are supported.
Note that first line. Splashtop does for Linux what those old OEM deals did for Microsoft: gives it a leg up, an advantage right out of the startup gate (pun intended).
At the time of this writing, Splashtop is preinstalled on laptops from ASUS, VoodooPC and Lenovo, and on all motherboards from ASUS. Every one of them is winning where it counts most with users—by saving time.
Splashtop is also committed to open source. At the time of this writing, it's still building its SDK. Check the Developers page at www.splashtop.com for progress on that. Meanwhile, expect to see more news about how Linux is winning the battle for quick startup times.
Ultimate Typing 2nd Place Winner In 2013 Typing Software Review
Software review website TopTenReviews recently announced its 2013 top typing tutor software list, with Ultimate Typing™ software being placed at No. 2. In a brief statement today, representatives from Ultimate Typing's™ parent company eReflect commented on this placement, and explained how they will use the website's feedback to continue to improve its product.
New York City, NY (PRWEB) July 03, 2013 TopTenReviews.com, an established software review website, recently revealed its choices for the year’s most efficient typing software. Ultimate Typing™ was awarded the No. 2 place in view of its extensive, innovative features and solutions offered. The TopTenReviews experts closely evaluated the best typing software available in the market in an effort to give their visitors the most accurate and reliable overview of all typing tutor software products. Ultimate Typing™ was awarded the second top position, with its overall score being very similar to the No. 1 software’s score. The eReflect representatives looked at each category in the review, and discovered that the TopTenReviews team marked the product down slightly in the “goal setting” category, saying that it would be more helpful for users to have the software analyze their initial typing skill level and use that information to help users adjust their initial goals for speed and accuracy. eReflect has noted this suggestion, and the development team will factor this into future releases of the product.
Overall, as the eReflect representative reported today, the reviewers at TopTenReviews praised the software’s user-friendliness, intuitive interface, affordability, and extensive user support. The reviewers seemed particularly pleased with features like the detailed and easy-to-follow video tutorials, and the wide range of different activities, lessons and games. They also commended upon the efficiency of the step by step training which supports the user throughout the learning process. The reviewers commented that in view of its expert design and intuitive interface and lesson structure, Ultimate Typing™ can be equally useful and beneficial to all sorts of learners. Beginners, intermediate-level users, and even advanced typists can find the software’s lessons challenging enough. It was also noted that the software could also be of great help for younger, beginner typists who wish to start learning to type from a young age and perfect their typing skills the right way.
Ultimate Typing™ got an overall score of 9.5, right behind the No. 1 typing software product. The review concluded with the reviewers mentioning how the Ultimate Typing™ designers, the team at the e-learning software company eReflect, worked hard in focusing on improving both typing speed and accuracy in equal amounts so that learners can achieve a balanced typing performance by the end of their practicing sessions. This review of typing software and the suggestions from the review team will be added to the customer feedback database, said the eReflect representative, and the development team will use this information for future releases of the product, with the goal of reaching and maintaining a first-place status in the next review.
For more details on Ultimate Typing™ please visit http://www.ultimatetyping.com/.
About Ultimate Typing™
Ultimate Typing™ software is designed specifically for the improvement of typing skills. Created by eReflect, a world leader in e-learning and self-development software, Ultimate Typing™ has been informed by the latest developments in the science of touch typing.
Since its creation in 2006 by Marc Slater, the company has already catered to over 112 countries all over the world, offering products with the latest cutting-edge technology, some of which are among the world’s most recognized and awarded in the industry.
eReflect, +1 408 520 9803
Typesetting is dead. Long live type.
Bad Homburg, June 13th 2002 – An extraordinary thing has happened. Since the typesetting industry all but disappeared as a definable service within the printing industry, the demand for type has grown to unprecedented levels. It is less expensive, it is certainly more accessible (most of us have at least a 100 or so fonts on our own desktops) and the requirement for variety and diversity has never been higher. Type plays a critical role in a world awash with messages and communication by giving companies and publications an instant recognition factor in their material, in print or on screen.
Linotype Library bears one of the printing industry’s most famous and enduring brand names. As well as the company’s founder inventing the Linotype Linecasting machine that effectively established the modern newspaper industry, Linotype consistently led the innovative developments that still characterise the pre-press end of the printing industry. One of the milestones was Linotype’s partnering with Apple and Adobe to develop PostScript as the medium that has so changed the printing process from concept to paper.
But the constant in Linotype’s history has been the development of quality typefaces to feed the developing need for distinctive graphic design across the century. Linotype’s business is now solely type. Designing and manufacturing fonts for every conceivable application. From its headquarters in Bad Homburg, Germany, Linotype supplies professional graphic artists, printers and newspapers with more than 10,000 fonts every month. Its library of more than 5,200 different typefaces contains many of the world’s most important designs such as Helvetica™, Univers™ and Palatino™. A large part of Linotype’s business is licensing these typefaces and hundreds more to printer and RIP and other display manufacturers around the world, to ensure the consistency and integrity of the original designs, regardless of the output unit. But significantly, Linotype is constantly bringing the work of new type designers to the market. During the last two years alone, Linotype Library has introduced more new typefaces than in the entirety of its first half century in existence. Each of them engineered to the highest quality, using the latest technology and available through Linotype’s unique Font Explorer™ system (www.linotypelibrary.com). Linotype may not be in the linecasting business any longer, but it still leads the world in type.
Further information, plus examples of applications for fonts, can be found on the internet at www.linotypelibrary.com.
If you would like a demo CD for trying out some of the fonts, just let us know.
Linotype GmbH – a member of the Heidelberg Group. It offers state-of-the-art font technology and one of the world’s largest libraries of original fonts. Over 5,500 PostScript and TrueType fonts are currently available for Mac and PC. Linotype FontExplorer, a specially developed browser and navigation system, supports rapid font selection. All fonts are on CD, and are also available online for instant ordering and downloading.
The new Linotype Font Identifier is a patented system for rapid identification of individual fonts. Guiding the user through a simple sequence of questions, the Linotype Font Identifier pinpoints that unknown font fast. This useful tool is available free of charge on the internet at www.linotypelibrary.com.
This press release is available as a PDF file. Please download:
English version (261,3 kb) | 计算机 |
W3C RIF-WG Wiki
UCR: Collaborative Policy Development for Dynamic Spectrum Access
This is an archive of an inactive wiki and cannot be modified.
This use case demonstrates how the RIF leads to increased flexibility in matching the goals of end-users of a service/device with the goals of providers and regulators of such services/devices. The RIF can do that because it enables deployment of third-party systems that can generate various suitable interpretations and/or translations of the sanctioned rules governing a service/device.
This use case concerns Dynamic Spectrum Access for wireless communication devices. Recent technological and regulatory trends are converging toward a more flexible architecture in which reconfigurable devices may operate legally in various regulatory and service environments. The ability of a device to absorb the rules defining the policies of a region, or the operational protocols required to dynamically access available spectrum, is contingent upon those rules being in a form that the device can use, as well as their being tailored to work with devices in the same class having different capabilities.
In this use case we suppose a region adopts a policy that allows certain wireless devices to opportunistically use frequency bands that are normally reserved for certain high-priority users. (The decision by the European Union to allow "Dynamic Frequency Selection" (DFS) use of the 5 GHz frequency band by wireless systems, a band intermittently used by military and weather radar, is a recent example; see http://europa.eu.int/eur-lex/lex/LexUriServ/site/en/oj/2005/l_187/l_18720050719en00220024.pdf.) Suppose the policy states: A wireless device can transmit on a 5 GHz band if no priority user is currently using that band.
How does a device know that no priority user is currently using a band it wants to use? The answer will depend on the specific capabilities of the device. One type of device may answer this question by sensing the amount of energy it is receiving on that band. That is, it might employ the rule: If no energy is detected on a desired band then assume no other device is using the band.
A second type of device may get information from a control channel that lets it know whether the desired band is being used by a priority user. That is, it might employ the rule: If no control signal indicating use of a desired band by a priority user is detected then assume the band is available.
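The two operational definitions could be written down in any rule language. As a plain illustration (not RIF syntax), they might look like the sketch below, where the energy threshold and the control-message layout are invented placeholders:

ENERGY_THRESHOLD_DBM = -80.0   # placeholder detection threshold

def band_available_by_sensing(detected_energy_dbm):
    # Type 1 device: assume the band is free if no energy is detected on it.
    return detected_energy_dbm < ENERGY_THRESHOLD_DBM

def band_available_by_control_channel(control_messages, band):
    # Type 2 device: assume the band is free unless a control message
    # reports a priority user currently occupying it.
    return not any(message.get("band") == band and message.get("priority_user")
                   for message in control_messages)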
So each type of device will need to employ different "interpretations" or "operational definitions" of the policy in question. Now assume that there are 10 manufacturers of these 2 different types of wireless devices. Suppose that each of these manufacturers uses a distinct rule-based platform in designing its devices. Each manufacturer needs to write 2 interpretations of the policy (one for each of the two types of device). That means that 20 different versions of the policy must be written, tested and maintained.
Enter the RIF. The 10 manufacturers form a consortium. This is a third-party group that is responsible for translating regional policies into the RIF. When it does so, however, it provides different versions corresponding to the possible interpretations (operational definitions) of the policy. So in this case, 2 RIF versions of the DFS policy are provided for the 2 types of device mentioned above. Each of these RIF specifications can be automatically translated into the appropriate rule platform, provided a RIF compiler for the target device architecture exists. Clearly it will be in the interest of each device manufacturer to develop such compilers. That is because the manufacturer only needs to develop such a compiler once for every architecture it owns. Contrast that investment with having to produce, test, and maintain different versions of various policies over the lifetime of a product.
This arrangement also allows the overall process to be organized in a fashion that maintains the natural division of labor in the corresponding division of artifacts produced by that labor: the policy and its various interpretations are written and maintained in platform-independent artifacts (the RIF); knowledge about how to translate from the RIF to a particular device architecture is maintained in the compilers. A change in policy is inserted at the top level in the policy artifact hierarchy, where it should be; possible operational interpretations of that change are inserted at the next level down; and the implementation implications for the various device architectures are generated automatically at the lowest level.
Motivates: Default Behavior
The regulatory policies specify certain constraints, e.g., "if radar is sensed on a channel in use, the channel must be evacuated within 10 seconds," which can be viewed as a default for a device to be in compliance. However, the RIF-based specifications promulgated by the consortium will not simply state the constraint, but rather contain a set of implementable rules that make it possible for a suitably configured device to meet this constraint. For some configurations and device types these rules may go beyond simply ceasing transmission on the channel, e.g., the device might send a control message to a master device (an access point) asking if an alternate channel is available, etc. As long as these additional steps do not prevent devices from vacating the channel within 10 seconds (and do not violate any other constraints), they are allowed. So it would be worthwhile to allow the RIF-based specifications to "point" to a RIF-based version of the general 10-second constraint as a default behavior if the more detailed rules cannot be applied.
Different semantics
Depending upon the needs of an application, there are a number of ways that a formal representation of a policy can be achieved. A device may need to reason about what a policy requires, and it may also need to allow its behavior to be guided by the policy.
In the former case, deductive logic can be used to formulate statements and draw valid inferences as to what the policy entails. For example, relative to the 10-second channel evacuation requirement mentioned above, it turns out that if a device (or its associated master) checks for radar every 9 seconds then there will be enough time to evacuate the channel if needed. So a RIF-based specification might contain a declarative rule that states "if a channel is in use, and its last radar check was 9 seconds ago, then a radar check on that channel is due." The important thing to note here is that the rule is a statement (capable of being true or false) of what the implementation requires. In order to utilize such statements to guide the behavior of a device, connections must be forged between conclusions reached and actions to be taken. Production rules, specifically ECA rules, can be used to establish those connections. For example, "if it has been concluded that a radar check for a channel is due, then do <action>!" Since this use case envisions devices that are both capable of reasoning about policy requirements and being guided by them, we expect that these RIF-based specifications will require rules having both declarative and imperative semantics. (A sketch of this declarative/ECA pairing appears at the end of this use case.)
Limited Number of Dialects
As the use case states, RIF-based specifications are beneficial because they allow a group of interested parties (the consortium) to write machine-usable specifications that can be deployed to a wide variety of devices, provided the device manufacturer, or other party, writes a "RIF compiler," i.e., a translator, for the given device platform. If RIF-based specifications were themselves allowed to take on many different forms in a non-cohesive fashion, and specifications using them were generated, it is possible that this benefit would be compromised. In other words, a manufacturer or third party might find it necessary to invest too much time in maintaining translators to make use of the RIF worthwhile.
OWL data
The rules in these applications will utilize concepts that are defined in accordance with the definitions devised by standards organizations. The use of OWL ontologies is likely for that task. Moreover, it is possible that future protocol message payloads might contain OWL data.
Coverage
The rules require support for negation. The rules react to changes in the environment. Features from production rules and ECA rules, such as forward chaining, events, and actions, will be useful.
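As promised above, the pairing of a declarative rule with an ECA rule might be sketched as follows. This is an illustration only: the function names and data layout are invented, and a real RIF dialect would express these rules declaratively rather than in Python.

import time

RADAR_CHECK_INTERVAL = 9   # seconds, derived from the 10-second evacuation constraint

def radar_check_due(channel):
    # Declarative side: a statement about the channel that is either true or false.
    return channel["in_use"] and (time.time() - channel["last_radar_check"]) >= RADAR_CHECK_INTERVAL

def on_timer_tick(channel, perform_radar_check, evacuate_channel):
    # ECA side: when the conclusion "a radar check is due" holds, act on it.
    if radar_check_due(channel):
        radar_present = perform_radar_check(channel)
        channel["last_radar_check"] = time.time()
        if radar_present:
            evacuate_channel(channel)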
April 2002
Report Card: "Plays well with others"
Dave Reinke
InfoLink Project Manager
APWA Washington, D.C. Office
If you look closely, you can see the future. In the past year, technology has been advancing at its usual hectic pace, even if the speed of "Internet Time" has lessened just a bit during the economic slowdown. But some new developments are taking place that will have as much of an impact as any since the rise of the World Wide Web in the last decade. The future of the Internet is arriving now, and the good news is that this technological evolution lets things work together productively.
Now all of us know that hype coming from the computer/Internet sector is not unusual, and given the unfulfilled promises from now-defunct Internet companies, probably should be viewed with at least a little skepticism. What is different this time is the amount of coverage generated, its positive tone, and the focus of the technology on customer needs.
It's all about "Web Services." The name might not sound like much, but the major technology players like Microsoft, Sun, and IBM are all pushing their implementation as the "Next Big Thing." Microsoft is currently in the midst of a nationwide rollout for Microsoft ".Net," and the folks from Redmond are pinning their future on this technology (which gives you some idea of its relative importance). Microsoft is also releasing a suite of tools for developers of these services. As a major initiative from the largest software company in the world (love them or hate them, just don't underestimate their impact), web services are here. Now when it's this big (and complex) it can be a little hard to define, but the term basically refers to software components that can interact across the Internet. As an analyst from the Gartner Group said, while it's not "the Holy Grail of computing, web services will ultimately deliver more of the promises made by earlier technologies."
That's probably the real benefit of the technology—the ability to finally deliver on the potential that the Internet has always presented. Building on the foundation of all that's gone before, web services leverages existing networks, databases, and information to integrate previously separate systems and data. Just as InfoLink has built a solid base of users, regular site visitors, and information sources, these existing Internet communities can take the next step into useful collaboration and expansion of services.
Of course, like all things technical, it comes with an alphabet soup of XML, SOAP, and UDDI, working on processes like EAI, BI and CRM. But you don't need to know what they mean (actually stand for), just what they mean in the long term—a way for different computer systems in different locations to interoperate. We finally have different vendors of different systems agreeing to standards that allow it to happen.
Even the geographic layers of GIS/geospatial information are falling into a zone of interchangeability. Leading vendors of GIS software can import and export alternate formats, and the federal government is sponsoring a number of initiatives to ensure the data transportability of national mapping projects. The Federal Geographic Data Committee (FGDC), the National Spatial Data Infrastructure (NSDI), and the new E-Government Geospatial One-Stop are working toward the establishment of policies, standards and procedures to ensure the accessibility and interoperability of geospatial data layers by government, private, and academic groups, again boosting usefulness and value to existing and new communities of users.
For public works departments, what does this mean for you? It is not a question of if this impacts you, but when. Historically, perhaps, not the first to get equipped with computers for administrative tasks, e-mail, or Internet access, public works departments will be equipped with the technology and will become part of this larger data sharing universe. Overcoming institutional barriers and technical access problems might still take awhile, but as e-government initiatives spread, and more departments deliver their information and services to citizens electronically, streets, sanitation, and water information will be included.
One early example of this is eMontgomery.org, from Montgomery County, Maryland, which begins to show how web services matter when delivered to citizens. The site, among the others that can be accessed through www.apwa-infolink.com, consolidates information from a wide variety of departments, in a wide variety of formats, and makes them accessible in one place. The site is also a good example of the implementation of portal technology, increasingly being used for web delivery within companies and governments. "One of the keys to success for the eMontgomery site," says Kevin Novak, manager of the eMontgomery site, "is the ability to...find information that resides in many different locations and formats and deliver it to the user in a way that meets his or her needs." This tying together of documents, databases, and other information is notable not only for the technological ability to deliver, but for the focus on customer needs as well.
Web Services delivering Enterprise Application Integration is just a fancy way of saying that the tools you need do your work and deliver your services can work together. Which, if you think about it, makes you wonder what took so long. But now that we are finally clearing the hurdles of proprietary formats and competitive secrecy, the interconnectedness of the Internet can be put to greater use in delivering quality services, and APWA-InfoLink is positioned to do just that.
To reach Dave Reinke, call (202) 408-9541 or send e-mail to [email protected]. | 计算机 |
ICANN
Fadi Chehadé (CEO)
Focus: Manage Internet protocol numbers and Domain Name System root
Motto: One World. One Internet.
Website: www.icann.org
ICANN headquarters in Playa Vista
The Internet Corporation for Assigned Names and Numbers (ICANN, /ˈaɪkæn/ EYE-kan) is a nonprofit organization that is responsible for the coordination of maintenance and methodology of several databases of unique identifiers related to the namespaces of the Internet, and ensuring the network's stable and secure operation.[1]
Most visibly, much of its work has concerned the Internet's global Domain Name System, including policy development for internationalization of the DNS system, introduction of new generic top-level domains (TLDs), and the operation of root name servers. The numbering facilities ICANN manages include the Internet Protocol address spaces for IPv4 and IPv6, and assignment of address blocks to regional Internet registries. ICANN also maintains registries of Internet protocol identifiers.
ICANN performs the actual technical maintenance work of the central Internet address pools and DNS Root registries pursuant to the IANA function contract.
ICANN's primary principles of operation have been described as helping preserve the operational stability of the Internet; to promote competition; to achieve broad representation of the global Internet community; and to develop policies appropriate to its mission through bottom-up, consensus-based processes.[2]
ICANN was created on September 18, 1998, and incorporated on September 30, 1998.[3] It is headquartered in the Playa Vista section of Los Angeles, California. On September 29, 2006, ICANN signed a new agreement with the United States Department of Commerce (DOC) that moves the organization further towards a solely multistakeholder governance model.[4]
Before the establishment of ICANN, the IANA function of administering registries of Internet protocol identifiers (including the distributing top-level domains and IP addresses) was performed by Jon Postel, a researcher at the University of Southern California's Information Sciences Institute (ISI) who had been involved in the creation of ARPANET.[5][6] The Information Sciences Institute was funded by the U.S. Department of Defense, as was SRI International's Network Information Center, which also performed some assigned name functions.[7]
As the Internet grew and expanded globally, the U.S. Department of Commerce initiated a process to establish a new organization to take over the IANA functions. On January 30, 1998, the National Telecommunications and Information Administration (NTIA), an agency of the U.S. Department of Commerce, issued for comment, "A Proposal to Improve the Technical Management of Internet Names and Addresses." The proposed rule making, or "Green Paper", was published in the Federal Register on February 20, 1998, providing opportunity for public comment. NTIA received more than 650 comments as of March 23, 1998, when the comment period closed.[8]
The Green Paper proposed certain actions designed to privatize the management of Internet names and addresses in a manner that allows for the development of robust competition and facilitates global participation in Internet management. The Green Paper proposed for discussion a variety of issues relating to DNS management including private sector creation of a new not-for-profit corporation (the "new corporation") managed by a globally and functionally representative Board of Directors.[citation needed] ICANN was formed in response to this policy.[citation needed], and manages the Internet Assigned Numbers Authority (IANA) under contract to the United States Department of Commerce (DOC) and pursuant to an agreement with the IETF.[9]
ICANN was incorporated in California on September 30, 1998, with entrepreneur and philanthropist Esther Dyson as founding chairwoman.[3] It is qualified to d | 计算机 |
The DaVinci Institute
September 14, 2009 - Monday
Change Yourself to Change the World: The Science of Perception with Rennie Davis, Former Member of the Chicago Seven, Chairman of the Foundation for a New Humanity and Alexia Parks, Chris Kauza, Michael Chushman and Joy Milkowski
Mahatma Gandhi once said the person seeking to change the world must first be the change they seek for the world. Rennie Davis examines this historic philosophy and explores how the world works through the science of perception. On this evening, you will journey to the world’s new particle collider located outside Geneva in the Swiss Alps where the largest machine on Earth has been built to study the smallest particles in the universe. This is a science project to understand what happened at the origin of the universe, uncover the nature of this world, and find the ‘god particle’ that makes the empty space of the atom appear dense. Rennie argues that when the field of particle physics fully uncovers the mystery of the atom, it will be both thrilled and shocked. To understand the atom is to understand how this world operates on a mirror principle. No one is doing anything to you. The ‘god particle’ is not in the atom but in the eye of the beholder. Your own perception is your ‘god particle.’ Rennie Davis makes the compelling case that you are the figment of all your fantasies and the author of the story line you are presently living too. Your beliefs, truths and feelings form your own residual self image. How you see others is how you see yourself. How you see yourself is how you experience your world. The world is not solid, objective or real but a psychological construct whose origin is yourself. You live what you fear and enjoy what you love. In the 60,000 thinks that cross your brain every 24 hours, you will discover the source of all the experiences you are currently living.
EVENT: Night with a Futurist
DATE: September 14, 2009 - Monday
TIME: 06:30pm-09:00pm
WEBSITE: http://www.davinciinstitute.com/events/683/night-with-a-futurist-monday-january-13-2014
LOCATION: MADCAP Theater, 10679 Westminster Blvd, Westminster, CO 80020
DIRECTIONS: Driving Directions
COST: $0, Members: Free, SuperMembers: Free
TOPIC: Change Yourself to Change the World: The Science of Perception with Rennie Davis, Former Member of the Chicago Seven, Chairman of the Foundation for a New Humanity and Alexia Parks, Chris Kauza, Michael Chushman and Joy Milkowski
SPEAKERS: Rennie Davis, Joy Milkowski, Alexia Parks, Chris Kauza, Michael Cushman
SPEAKER: Rennie Davis
Former Member of the Chicago Seven – Chairman of Great Turning Solutions
In the 1960s, Rennie Davis was the coordinator of the largest anti-war and civil rights coalition in the United States. He remains a recognized spokesmen for his generation, featured on numerous network television documentaries and media forums, from the Legends series produced by CBS to Larry King Live, Barbara Walters, VH1, CNN and other network programs. His leadership in the socially responsible investment industry has been profiled in the Dow Jones Investment Advisor.
In the 1980s, he was the managing partner for a consulting company with an exclusive clientele of board members and officers of Fortune 500 companies and wealthy private families. His consulting company purchased and developed the 80-acre Greystone estate for the purpose of establishing a unique Colorado technology development center for inventors and scientists. He has supported start-up companies in capital development and taken them public. He has experience in technology deal structures and presented various technology projects to U.S. financial markets. He has also served as a valued consultant to directors and senior management of diverse Fortune 500 companies in executive search, leadership training, financial planning, team building, employee benefit design and executive outplacement. Clients have included the President of HBO, president of the Manville Corporation, officers of Time-Warner and IBM, the board of directors of Gates Rubber Company and people in the Forbes 400 Richest. He was a principal in the organization and development of TSL Incorporated where he secured agreements with Union Carbide. He also organized RBC Universal and served as the company’s Chief Operating Officer where he secured strategic agreements with Ford Motor Company, EverReady Battery Company and Ray-O-Vac.
He has spent his recent years fine-tuning his unique personnel empowerment strategies through hundreds of training workshops and coaching sessions. He currently supports newly organized companies with capital development and business planning. He guides companies that want to create peak performance teams with positive motivated personnel. He develops whole system designs for highly efficient, effective workplaces. He is Chairman of Great Turning Solutions, LLC.
MODERATOR: Joy Milkowski
Founder of Access Marketing Company
Joy Milkowski is the Founder of Access Marketing Company where she specializes in bringing search engine marketing and web marketing intelligence to her clients in a way they can understand and measure. She has over a decade of marketing, sales and professional copywriting experience in a variety of areas including tech, industrial, direct to consumer sales, commercial real estate ventures, professional and collegiate athletics and Internet-based marketing. In addition, Joy trains small businesses and entrepreneurs who want to leverage the power of search engines as an advertising tool to reach their audiences.
PANELIST: Alexia Parks
President of Votelink.com
For more than 30 years, the writing, work, and community service of Alexia Parks has had a focus in the fields of renewable energy, the environment, education, and communications. In 1995, she co-founded Votelink.com – the first electronic democracy website on the Internet - and continues as its president today. At its launch, Newsweek magazine called her “one of 50 people who matter most on the Net.”
In 2007, Alexia was the first accredited blogger for the United Nations conference on climate change in Bali. She is currently a blogger for the Huffington Post, Colorado/Denver edition; and also blogs for Intent.com.
As president of Votelink.com, Alexia Parks has applied her knowledge of communications and the Internet to offer an easy-to-use Online Town Hall. She is marketing this new voting and moderated discussion system to members of Congress so they can reach out to 100% of their constituents.
Alexia Parks is also author of seven books, including her latest:
OM Money Money, A Return to Sacred Money.
PANELIST: Chris Kauza
Vice President of Social Media and Marketing at Fett Marketing
Chris Kauza is Vice President of Social Media and Marketing at Fett Marketing (www.FettMarketing.com), a U.S. firm focused on enriching and growing business relationships. Previously, Chris was the Vice President for Technology and the Western Regions for ACS, where he led a 600-person organization that had broad responsibilities of aligning IT infrastructure and operations to client-specific business strategies. Prior to ACS, he was at Sun Microsystems, where he helped grow the Cloud Computing and Managed Services groups domestically and internationally.
Chris' background includes marketing, solution planning and delivery, business growth and management, partnership development and strategic sales. His industry experience includes: Telecommunications, Financial Services, Manufacturing, New Media, Entertainment and Gaming, Health Care, Retail, Government and Defense-related sectors. His credentials include an ITIL Manager's certification, Six Sigma Green Belt Certification, an MS in Public Policy Analysis from the University of Rochester and an MBA in Marketing from Pepperdine University.
Chris is a Founding Board member of Ubuntu Now (www.ubuntunow.org), a non-profit organization that actively promotes the practice and principles of peace and kindness, with direct focus on victims of trauma and violence, creating economic opportunities for marginalized and underprivileged classes, and support for other social causes that provide stability to our world.
PANELIST: Michael Cushman
President of Key Change Institute and a Senior Fellow at the DaVinci Institute
Michael Cushman is a Senior Fellow at the DaVinci Institute who loves helping others create the future. Over a 30 year career, as an executive, management consultant and thought leader, Michael has proven results in leading operational excellence. Michael's expertise is in human development, advanced learning technologies and techniques, as well as business effectiveness. He specializes in creating award-winning products, successfully introducing profitable technology and process improvements at over 50 companies, including Fortune 500 companies such as Verizon, BT, Revlon, and Chevron. In his role as an executive, Michael served in leadership positions at several startup companies on their way to market success.
Michael is the President of Key Change Institute. He often speaks on the future of learning, career management, nonverbal communications, and advanced change techniques. He is a sought out expert and has appeared as a spokesperson on local and national TV.
6:30 - 7:00 - Registration and networking
7:00 - 7:15 - Announcements and introductions
7:15 - 8:00 - Rennie Davis Keynote - Change Yourself to Change the World: The Science of Perception
8:00 - 8:45 - Panel discussion
8:45 - 9:00 - Networking
9:00 - Thank you for coming
Hasan Cam
US Army Research Lab, USA
Hasan Cam is a Computer Scientist at US Army Research Lab. He currently works on projects involving cyber security, metrics, and dataset generation in wired and wireless networks. His research interests include cyber security, network security, wireless sensor and cellular networks, mobile ad hoc networks, secure data aggregation, source coding and target tracking, and computer architecture. Cam has previously worked as a faculty member at various universities and as a senior research scientist in industry. He has been serving as a guest editor and an editorial board member for various journals and as a technical program committee member in numerous conferences, in addition to organizing symposiums. Cam received the Ph.D. degree in electrical and computer engineering from Purdue University in 1992 and the M.S. degree in computer science from Polytechnic Institute of New York in 1986. He is a Senior Member of IEEE.
Biography Updated on 2 January 2012
Creative Notes
Tips, tricks, and news for creative professionals
Photography software, animation, publishing, photo editing
Adobe unleashes Creative Suite 5
Adobe has launched the fifth incarnation of its Creative Suite collection of professional applications for print and Web designers and videographers. This set of coordinated programs, popularly called Adobe CS5, includes new versions of 14 products and their associated apps, four new online services, and a brand new interactive Web design product.
“As a technology that generates more than half of the company’s revenue, this is an incredibly important release for us,” John Loiacono, Adobe's senior vice president and general manager of Creative Solutions, told Macworld. “We’ve hit our stride not just with a speed bump in functionality and performance...this is a big leap ahead in some of the capabilities we have built in to CS5. We have enormous expectations on how this will perform in the market.”
More on Photoshop CS5
More on InDesign CS5
More on Illustrator CS5
More on Premiere Pro CS5
More on Dreamweaver, Fireworks, and Contribute CS5
More on Flash Catalyst and Flash Professional CS5, and Flash Builder 4
More on After Effects and Soundbooth CS5
Since 2003, when Adobe first gathered its print and Web tools into a suite (later adding its video package), the company has offered a steady parade of updates for its creative professional user base. The new CS5 veers in a somewhat different direction than earlier versions with a specific concentration on online services and Web analytics. Creative Suite 5 products, for the first time, include access to Omniture technologies—Web utilities that capture, store, and analyze information generated by Web sites and other sources.
The suite now hosts three discrete versions of Flash—the familiar Flash Professional, Flash Builder (previously called Flex Builder), and a brand new interactive design app called Flash Catalyst. There is now a greater emphasis on online services, which Adobe is relying on to bridge the gap between its 18- to 24-month upgrade cycles.
“One of the challenges that we have with product cycles that tend to be 18 to 24 months in length, is that they’re long in development. So we’re trying to update services much more rapidly...and decouple some of these features that we’re adding and manifest them as services, which allows us to move a lot quicker to modify and test them...I see the services as an extension of the applications,” Loiacono said.
The updates in CS5—more than 250 new features have been integrated throughout the Master Collection of all programs—address not only technical changes in hardware capabilities to make them faster and more efficient, but also strive to solve workflow problems. "Our beta testers are giving us high marks in hitting the mark on not just key functionality that they need but actually understanding their workflow," Loiacono said. "At the end of the day, building the next generation of really cool features is a requirement and it's expected, but it's not sufficient anymore. We can't just be pixel polishers. We have to look at the next generation of the workflow challenges that people are facing."
Technology advancements
Several CS5 apps have advanced technologically to keep pace with advances in Apple hardware. Photoshop, Premiere Pro, and After Effects are now 64-bit native to take better advantage of the increased memory built into the Mac’s new hardware, and Premiere Pro is now better optimized for multi-core Intel Macs. Improvements in Photoshop's OpenGL engine will make the new version faster and more responsive, as well.
As Adobe announced last year, CS5 will run only on Intel Macs and with only the most recent operating systems, such as Mac OS X 10.5.7 (Leopard) or 10.6 (Snow Leopard). In addition to native 64-bit support, Adobe has introduced the Mercury Playback Engine to Premiere Pro, its flagship video editing app.
The Mercury Playback Engine speeds up processing and rendering so editors can work on large, complex projects without delays. The key to this improvement is GPU (Graphics Processing Unit) acceleration.
Reviews - Final Fantasy Tactics Game
Square Enix | Released Jan 28, 1998
Final Fantasy Tactics is a tactical role-playing game developed and published by Square (now Square Enix) for the Sony PlayStation video game console. It was released in Japan in June 1997 and in the United States in January 1998. The game combines thematic elements of the Final Fantasy video game series with a game engine and battle system unlike those previously seen in the franchise. In contrast to other 32-bit era Final Fantasy titles, Final Fantasy Tactics uses a 3D, isometric, rotatable playing field, with bitmap sprite characters.
No user reviews have been posted matching the criteria provided. Check back later, perhaps someone will share their thoughts.
Speak your mind and have a rant. It will feel right! Or read other people's reviews.
Helvetica: The Movie
I finished out my stay in Austin yesterday with a slightly different rendition of what was, in many ways, the overriding SXSW Interactive theme: an idea crazy enough that it just might work. This time we were talking not about music-making dot-matrix printers or the next mind-blowing Web app but about a feature-length documentary on. . . a typeface.
Helvetica, which had its world premiere at the conference, presents the life story of something all of us encounter on a daily (or even hourly) basis. Created in 1957 by the Swiss modernist designer Max Miedinger as a response to the cluttered typography and design of the postwar era, Helvetica's clean neutrality and balanced use of the empty space surrounding letters quickly made it a go-to font for public signage, advertising, corporate logos and works of modernist design around the world. When it was licensed as a default font on every new Macintosh (itself a tool that revolutionized the design field), its position as the world's most ubiquitous typeface was solidified. In fact, saving any custom browser tweaks, you're looking at Helvetica right now on this blog (as well as the majority of all other sans-serif text on the Web). An interesting story, to be sure, but worthy of an entire 80-minute documentary? Really? Yes.
Filmmaker Gary Hustwitt revels in his fascination with something so commonplace that it blends almost entirely into a context-less background, becoming a detective of sorts to unveil the myriad everyday places Helvetica is hiding (“It's a disease,” Hustwitt said of his obsessive font-spotting). And he's clearly not alone. He has assembled a laundry list of heavy hitters in the graphic-design world to wax poetic on Helvetica—and we're talking extremely poetic: One describes experiencing Helvetica as “like crawling through the desert, having your mouth full of dust and dirt, and suddenly being presented with a cold, clean glass of water”; another accuses its corporate sameness of playing a role in the Vietnam War. And they're only sort of joking. The film treats all of this with earnestness but without forgetting the fun, revealing something I never assumed most graphic designers would have: great senses of humor. Helvetica begins its international screening tour this month. —John Mahoney
March 14, 2007 in SXSW | Permalink
Posted by popsci
Dan Rather at SXSW: “What a Steaming Blob of Horsehockey”
This afternoon's keynote speaker was none other than fearless newsman and recently converted HDTV acolyte Dan Rather. Forgive the length of this post, but I'm going to just put up a transcript of a segment of the discussion that I think was particularly important. In other words, we interrupt PopSci's regularly scheduled technology blog to bring you "The Future of Journalistic Integrity," after the jump. —Megan Miller
Continue reading "Dan Rather at SXSW: “What a Steaming Blob of Horsehockey”" »
So, Like, When Is the Matrix Going to Be Real?
Sunday's afternoon seminar, "Toward a Spatial Reality," delved into the mysteries of geo-tagging and included several instances of semantic amazingness. (At a certain point, one panelist complimented another's idea by remarking that he was "riding on a fascinating tiger," and at another point, an apparent lunatic in the audience started screaming about how the GeoWeb was soon going to be in the hands of mastermind criminals: Wa-ha-ha-ha-ha!) The room was filled with engineering whizzes and other people really excited about modeling a virtual 3D version of the real world and layering it on Google Earth's satellite maps in order to see every building in every city in eye-popping, textured detail. There was also much talk about the use of ComStat by police departments to track the location of cop cars. ComStat basically allows police to be held accountable when crime rates don't seem to be going down, say, in the Cherry Hill neighborhood of Baltimore, because all the officers are clustered around the Dunkin' Donuts on Howard Street. (You watch The Wire, right?) The big idea is that ComStat could be used in lots of cities for lots of problems, in a way similar to New York's use of 311, the municipal help line. But instead of dialing up on the telephone to report rats in your neighbor's trashcan, or a big pothole on Broadway, users would upload photos or stories about their issues to ComStat-like Google Earth layover software, and this would be monitored by city officials.
This sort of real-time information layover is being used right now by CBS to mashup breaking news reports with maps, so you can see exactly where in the world all the trouble is happening, and avoid those places. (Kidding, sort of.)
The seminar wrapped up with a Utopian vision of a future, maybe just a few years away, when cell phone GPS systems will not only act as map-based mobile Web browsers that give you (or allow you to submit) news and information about what's going on around you, but also act as negotiators on your behalf, pinging nearby businesses you might be interested in to get the best deals on products and services. In this future, we'll always be interacting simultaneously with the world around us, and with the reflection of the world displayed on our GPS systems and enhanced with user-submitted info. The upshot? We're getting closer and closer to entering the Matrix. —Megan Miller
The Nintendo DS Gets Artistic
Lots of cool stuff here this morning at the Game Perverts session, all focused on hacking videogame hardware and software—everything from using a Gameboy Advance's processor to control robots to altering the frequency of an ancient dot-matrix printer's shriek to make music. Most impressive and surprising, though, was artist and software designer Bob Sabiston's still-under-development paint and animations application for the Nintendo DS. Sabiston is most famous for designing Rotoshop, the software used to digitally create the distinctive rotoscoping animation used most prominently in director Richard Linklater's Waking Life and A Scanner Darkly. Also an accomplished illustrator, Sabiston saw unused potential in the Nintendo DS, with the device's dual screens and touch-sensitive, stylus-based interface naturally positioning it as a great platform for drawing. If you remember Mario Paint for the Super Nintendo, Sabiston's project will be right up your alley. Not only can you use it to create pixel- and vector-based illustrations; it also supports flip-book style animations and a sort of vector-graphics sequencer used to make more fluid animated works. No part of the DS's unique hardware is overlooked, as users will also be able to add recorded sound effects via the built-in microphone and upload their creations to the Web via Wi-Fi, providing near-infinite storage. Sabiston used the software to create the pixel illustration seen above (printed on a large canvas after additional image processing), with the DS's top screen showing the overall workspace and the bottom providing a zoomed workspace (more images are available on his Web site). As of now, there are no definite plans for release. The project is on Nintendo's radar, but failing a commercial release, Sabiston mentioned the possibility of making it available to homebrew hackers on the Web. Here's hoping this powerful DS app makes it to the stores, though; after today's demo, I can't wait to get my hands on it. See below for a video of the app in action. —John Mahoney
March 11, 2007 in Games, SXSW | Permalink
That's a Lotta Schwag
Schwag bags--single-handedly filling conference goers' hotel Dumpsters with reams of unnecessary papers since, well, the beginning of conference-going time. You can't go to a conference of any decent size without seeing them. Today, looking for the one actually useful piece of paper within (an hour-by-hour session schedule), I stumbled upon the schwag queen's hive. O the amount of wood pulp sacrificed to bring you these images!
Check out a few more after the jump. --John Mahoney
Continue reading "That's a Lotta Schwag" »
"Web 2.0 Is Toxic and Needs to Die"
Andy Budd and Jeremy Keith, of the U.K.-based superstar Web-design firm Clearleft, led a rousing and rather subversive seminar at SXSW Interactive this morning (which included the buzzword bingo game pictured at left—I didn't win) called "Bluffing Your Way through Web 2.0." The point was basically to make fun of the widespread abuse of the term "Web 2.0." What the hell does that mean, exactly? The term connotes different things to different people, depending on whether they work in the areas of business, design or development. To business people, it means the functionality of communities: getting users to rate stuff and comment; creating cool apps that you can sell to Google for millions of dollars. To designers, it means a certain style defined by bright colors, reflective surfaces, "lickable," candy-like logos, rounded corners and modern fonts. To developers it means API mashups and AJAX. Budd and Keith proposed abandoning the term altogether, since, though it was useful when it was introduced two years ago, it's actually becoming a hindrance to design firms like Clearleft, who now have to field requests for proposals that say things like "we want a total Web 2.0 site that operates according to all the Web 2.0 design standards." (There are standards for Web 2.0? Who knew?)
More useful is to think of Web 2.0 in terms of social media. In fact, maybe we should all just start saying "social media" instead, since the main point is to involve the community and provide a platform for user participation. My favorite takeaway from the panel—apart from the "toxic and needs to die" statement, from Mr. Budd—came from the development angle, however: "Don't ever learn any code if you can help it," Keith suggested. "Just copy someone else's. That's Web 2.0." —Megan Miller
PopSci @ South by Southwest Interactive
This weekend, team PopSci.com is temporarily relocating to warmer climes down south for the great digital mind-meld that is South by Southwest Interactive in Austin, Texas—the nerdier stepchild of the definitive SXSW music conference happening later this month. We're dusting off our conference caps to soak up anything and everything from keynote speakers including MAKE's Phillip Torrone, Dan Rather and the godfather of Spore, Will Wright, as well as sessions from just about anyone who's anyone in the Internet game. Chances are, if it's going to define the way we use technology and the Web in the next few years, it'll be talked about by someone in Austin this weekend. So obviously, we're pumped. Watch this special category page for our blog updates from the conference. —John Mahoney
Council chief's apology after SAP go-live
Officials in Somerset say that the first stage of a SAP go-live for two councils, a police authority and a fire brigade has had a "high degree of success", though an internal e-mail to staff concedes that there are multiple problems. The e-mail says that the SAP project team is working "relentlessly day and night to fix all the issues raised via the help desk, so that we can resume normal service as soon as possible".

IBM has been working for more than a year on the SAP implementation. The first phase went live on 1 April 2009 at Somerset County Council, Taunton Deane Borough Council, and Avon and Somerset Police, which are all members of SouthWest One, a joint venture run by IBM. Devon and Somerset Fire Brigade also went live.

The lead authority in SouthWest One, Somerset County Council, told Computer Weekly, "The launch of the system impacts upon every employee, covering everything from procurement activity to booking annual leave... As with any large-scale implementation, there are some teething difficulties to resolve for a small proportion of users, and there are plenty of mechanisms in place for users to report issues, with everyone up to the County Council's chief executive taking a daily interest in getting the entire system in place and on line as soon as possible."

The council added that "workarounds and contingency plans are being effectively used to ensure business continuity and no adverse impact to the public, or any of our range of service users".

Teething problems

Alan Jones, chief executive of Somerset County Council, has apologised to staff affected by the problems. He said in an e-mail to council staff on 7 April that the difficulties were "teething". He said, "The main engine of SAP is working - it has been tested, and some staff have been able to process and create orders on the system. However, some staff have not been able to do so for a number of reasons. Please accept my apologies if you are one of those affected."

Jones said that for some staff SAP has not yet gone live and is not yet capable of offering full functionality. "Despite all the detailed planning and preparation by our staff and those in South West One, many of these glitches can only be ironed out - frustrating though it is - during implementation. The set-up needs to be absolutely right or SAP will not perform as well as we all want it to in the longer term."

Technical issue being corrected

He said the single biggest issue was that, for some people, the correct SAP attributes for an individual's role had not been loaded. "This was a massive data load of over 30,000 records and some have failed. This problem was most prevalent within the Environment Directorate. We have had to reload this data and it will take some time to correct the attributes for all 3,670 users. We hope that the majority of these will be corrected in the next few days and apologise for this delay."

The e-mail said that some staff were unable to raise requisitions because they could not access e-catalogues. "This is a technical issue that has been referred back to SAP and we await its advice as to how to resolve this. In the meantime, we advise using the catalogues wherever possible on the vendor's website and then using free text orders as a temporary measure."

Somerset County Council said that when SAP is fully implemented, it will "streamline and supersede many of the County Council's existing systems and processes, in turn allowing greater levels of efficiency and benefits to the public than ever before".
Additional links:
Read the full email on the blog of Ian Liddell-Grainger, an MP in Somerset County Council's area who has campaigned against the setting up of SouthWest One.
Documentary on £400m IBM deal
Public Sector remains wary of SouthWest One
James Barlow on SouthWest One
The following terms govern the access to and use of www.theuntz.com (the Website), including without limitation, participation in its forums, chats, and all other areas (except to the extent stated otherwise on a specific page).
The Website is owned and operated by The Untz, a California Partnership. Hereinafter, the term User(s) is used to include both registered users and non-registered visitors to the Website.
You can access the Terms of Use any time at http://www.theuntz.com/terms-of-use.php. Your use of and/or registration on any aspect of the Website will constitute your agreement to comply with these terms. If you cannot agree to the terms set forth below, please do not access or use the Website.
In addition to reviewing the Terms of Use, please read our Privacy Policy. Your use of the Website constitutes agreement to its terms and conditions as well.
These Terms of Use may be modified from time to time. The most recent version will be posted on this page. Continued access of the Website by you will constitute your acceptance of and agreement to any changes or revisions to the Terms of Use.
A.USE OF THE SITE
Unless otherwise specified, the Website is intended for your personal use only. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes, any portion or use of, or access to the Website.
You may voluntarily submit certain personally identifiable information about yourself on the Website, including your name and personal contact information ("Personal Information"). All information gathered from Users of this website will be governed by our Privacy Policy, which is incorporated in these Terms of Use by reference. If there is a conflict between the terms of these Terms of Use and our Privacy Policy, the terms of the Privacy Policy will prevail. Please carefully review our Privacy Policy.
Nvidia: Intel Has No Particular Advantages in Heterogeneous Multi-Core Technologies
Nvidia's David Kirk Shares His Views on Heterogeneous Computing, Graphics Chips
David Kirk, an Nvidia Corp. fellow and the former chief scientist of the graphics company, admitted that a heterogeneous computing architecture is the most efficient since it allows each type of data to be processed in the best way. But Mr. Kirk is not sure that Intel Corp., which is developing its multi-core accelerator code-named Knights Corner for high-performance computing, will succeed in creating a viable heterogeneous multi-core platform.
At present, many companies working in fields such as oil and gas exploration, seismic processing, and financial services are employing graphics processing units (GPUs) and/or special compute accelerators based on them (such as AMD FireStream and Nvidia Tesla) for high-performance computing (HPC) instead of traditional central processing units (CPUs). Intel Corp., the world's largest maker of microprocessors, failed to deliver its own graphics chip code-named Larrabee and is currently working on the code-named Knights Corner chip that will combine many-core architecture with x86 compatibility. Still, in order to run modern operating systems, traditional CPUs will be required, and Mr. Kirk does not expect them to disappear.

"We find that most problems, if not all, are a mix of serial control tasks and parallel data and computation tasks. This is why we believe in heterogeneous parallel computing - both [parallel and serial] are needed. CPUs are commodity technology and there are multiple CPU vendors that we work with. In my opinion, Intel has no particular advantage in developing a hybrid system - in fact, they have had little success historically in designing either parallel machines or programming environments," said David Kirk on Thursday during a public interview.

Intel's HPC platforms featuring Knights Corner will consist of separate CPUs and many-core HPC accelerators that will be plugged into PCI Express sockets. Many HPC specialists believe that the PCIe bus is a bottleneck for such accelerators because of low bandwidth, and one of the things that could solve the problem is creating a chip that combines x86 cores with massively-parallel graphics cores; something that Advanced Micro Devices is doing with its Fusion project. But Mr. Kirk claims that PCI Express' bandwidth is not necessarily a bottleneck.

"Contrary to popular belief, the PCIe bandwidth is not often the bottleneck in most applications. The PCIE bandwidth is faster than many other data paths in the system, including the disk, the network, and in many systems, the system memory bus or front-side bus. That being said, there are certainly technical improvements we can make going forward [to solve PCIe bandwidth potential problems]. You'll have to wait and see," said David Kirk.

Just like other specialists in the field of HPC and parallel computing, the Nvidia fellow does not believe in the future of the Cell processor designed by IBM, Sony and Toshiba. Mr. Kirk, who currently focuses on CUDA and GPU computing education and research, claims that Cell has played some role in defining the current realities of the market, but that it does not represent a threat to modern GPUs in appropriate spaces.

"The Cell processor was a great innovation for its time. Many of the ideas in Cell, including heterogeneous processing and local memory, are part of a modern GPU compute architecture. Cell was a 'point product' though, which means that it did not continue to evolve. We continue to evolve and improve our GPU architectures every 6 months or so. This makes Cell no longer competitive," said the fellow of Nvidia.
Microsoft Office 2013 review: Nice upgrades, but save your cash
From: cnn.com | Published On: January 30, 2013, 13:21 GMT
Office productivity suites -- word processing, spreadsheets, presentations, databases and so on -- reached their platonic ideal more than a decade ago. None are in need of any radical reinvention.

That's why Microsoft has been content to incrementally update Office every couple of years. It makes sure its applications run smoothly, folds in the latest tech standards, and mostly leaves things alone. Now, though, it's facing an existential threat: Google Docs, a free suite that replicates most, if not quite all, of Microsoft Office's functionality.

With Office 2013, Microsoft (MSFT, Fortune 500) has to prove that its standalone applications are still worth paying for. It focused its efforts in two key areas: a substantial user-interface redesign, and adding Internet services through its "Office 365" package.

Improved looks, improved handling: Office was one of the last holdouts in Microsoft's crusade to bring its products into the 21st century, design-wise. Like Xbox 360, Windows 8 and Windows Phone 8, Microsoft Office now looks -- finally -- about as good as you can make a productivity suite look in 2013.

The texture-free graphics and stark contrast between colors make it easier to navigate through the various interfaces of Office's different apps. There are a few places where the new design seems like a hindrance, like the "File" drop-down menu that takes you to a whole new screen. That's an exception, though: Most of the changes streamline the Office experience.

Microsoft's attempt to make Office 2013 touch-friendly involved minor tinkering. Everything is a little more spaced out, which makes it easier to edit with your fingers. It works very well in some places, like highlighting cells in Excel or editing PowerPoint slides.

That said, this isn't a touchscreen revolution. Office 2013 is still best experienced with a physical keyboard and mouse.

One area where design seems to have gotten away from Microsoft is Outlook. Microsoft took a stab at a revamp, but the result is just as cluttered and busy as ever. The software has a more minimal look, but the lack of strong visual separation between panes and buttons makes everything feel incohesive and jumbled.

Office anywhere: The main functional improvements in Office 2013 come through Microsoft's new "Office 365" subscription service, which costs $100 per year. In a nutshell, it turns Office into an Internet-connected app, allowing you to save documents in the cloud, collaborate with others, receive regular software updates, and use a remote version of Office from any computer if you're in a pinch.

The beauty of Office 365 is that it isn't a completely separate piece of software. Its features are integrated straight into the same software that non-365 users have.

When you want to save a Word document to the cloud, the menu prompt sits right beside the option to save locally. If you're working on a spreadsheet with someone else, a little "refresh" graphic pops up over the save icon to let you know that changes have been made.

It's an experience that's mostly seamless -- but with room for improvement. Collaborative document editing, for example, doesn't pop up on screen in real-time as it does with Google Docs; changes only reveal themselves when you save or refresh. That minimizes distractions, but when you're working with collaborators, real-time updates are far more efficient.

Office 365 also offers a new feature called "Office on Demand" that lets you tap into Office even if you're working on a machine that doesn't have it installed.

Go to Microsoft's Office website, launch Office 365, and a little applet acts as a terminal between you and an Office version running on Microsoft's servers.

This is strictly for emergencies, though. Right now it's simply too laggy to use for any extended period without wanting to pull your hair out.

Should you buy it? Office 2013 and Office 365 are a clear improvement over previous iterations. All the core aspects function as advertised, and there's no major product flaw or shortcoming that should stop anyone from using the software.

But there's a larger question here about who Microsoft Office 2013 and Office 365 are really for.

Some students and professionals actually need all the bells and whistles. For small businesses, the free software upgrades and simplicity of having all users on a standard, shared apps suite make Office 365 an appealing option.

But for the person who doesn't need cloud support and just wants to print up a garage sale flier, or share a spreadsheet for managing the family finances, online services like the very good (and very free) Google Docs work just fine. An existing copy of Microsoft Office -- even one that's years out of date -- will also get the job done.

We all use word processors and spreadsheets in our day-to-day lives, but most of us don't need all the rich formatting options, plug-ins and cloud services that Office 2013 and Office 365 provide. Not for the $140 price tag (for Office 2013's most basic version) or $100 a year subscription fee (for Office 365) that Microsoft is charging.

This is one upgrade most of us can afford to skip.
Hunted Cow Reveals New Game Details for Eldevin
Games in this Article: Eldevin
Hunted Cow, the online role-playing game experts, today released the first gameplay footage and information about the studio's long-awaited, innovative new browser-based, massively multiplayer, 3D online role-playing game, Eldevin.
Eldevin is a new story-driven game, set in a corrupt fantasy world on the precipice of all out war. Players can join the Eldevin army, or the Mages of the Arcane Council, in a quest to recover the magical artefacts, which have taken the kingdom to the edge of disaster.
The game runs entirely within a web browser, using Java technology. This means it runs on all major web browsers, with no additional downloads or installations. Eldevin does not require high end graphics processors and will work on the majority of netbooks, laptops and desktops running Windows, Mac OS or Linux, making Eldevin accessible to almost every computer user.
Eldevin is vast. The game brings players one of the richest, most in-depth game experiences ever found in a browser-based game. It offers a powerful, classless, real-time combat system. There are 100 different abilities, 200 talents, several hundred different items, which can be collected or crafted by players within the game, by mastering up to 14 different professions.
From launch Eldevin will offer hundreds of hours of gameplay within a huge, diverse world. There are over 600 individual quests, as well as group dungeons and solo adventures to keep individual players and parties challenged and engaged for months. The game also offers player versus player combat, including 5 v 5 battlegrounds and free for all matches.
John Stewart, the studio manager of Hunted Cow, said, “Eldevin is our flagship project. We’re a small indie developer but we’ve been working on this game for the best part of eight years now. It incorporates everything we’ve learned from our previous games, such as Fallen Sword and Gothador, but adds a wide range of innovations and ideas we believe take the genre in a number of new directions. As huge MMO fans ourselves, our goal was simple – to build the best massively multiplayer online role playing game on the market. We’re incredibly proud of what we’ve accomplished and we’re ready to find out what players think of the game. We want to see everyone in-game for the closed beta test in March 2013, which will be followed closely by the open beta and full game launch.”
The development team is hard at work ensuring the game is ready for the beta period and subsequent launch. However, the team is already planning additional new content, new features, updates and expansions, to ensure that from launch and into the foreseeable future, Eldevin remains at the forefront of online role-playing gaming.
The Eldevin closed beta test launches in March. Players wishing to join in should visit www.eldevin.com/beta to sign up.
Myfip's Titan Rain connection
Bill Brenner
LURHQ researchers say the Myfip worm is a good example of the malcode Chinese hackers are using in the so-called Titan Rain attacks against U.S. government networks.
On the surface, Myfip is an underachiever that hasn't spread much since it was discovered last year. Look more closely, however, and it's the perfect example of malcode Chinese hackers are using to steal sensitive files from U.S. government networks in the so-called Titan Rain attacks. "Worms like this don't look like much by themselves, but in the big picture they're part of a larger threat," said Joe Stewart, senior security researcher with Chicago-based security management firm LURHQ Corp. Stewart and other researchers from LURHQ's Myrtle Beach, S.C.-based Secure Operations Center spent months picking the worm apart and recently issued a report on the findings. "Titan Rain is an example of what worms like this can do if focused properly."
Titan Rain is the code name U.S. investigators have attached to the attacks, in which Chinese Web sites targeted computer networks in the Defense Department and other U.S. agencies, compromising hundreds of unclassified networks. Though classified information hasn't been taken, officials worry that even small, seemingly insignificant bits of information can paint a valuable picture of an adversary's strengths and weaknesses when pulled together. According to The Washington Post, which broke the story last week, U.S. analysts are divided on whether the attacks are a coordinated Chinese government campaign to penetrate U.S. networks or the handiwork of other hackers using Chinese networks to disguise the origins of the attacks.

Below the radar

Stewart said these kinds of attacks are succeeding because hackers are using worms like Myfip. And the minimal media attention Myfip has received in the past year is a bonus for the bad guys. "It's in these guys' best interests to fly under the radar," Stewart said. "They don't want as many victims as possible. They want the right victims."

According to the LURHQ report, Myfip was first discovered in August 2004. "It didn't get an extreme amount of attention at the time, just a few articles talking about a new worm which stole .pdf files," the report said. "It wasn't terribly widespread or damaging, so it didn't rate very high on the antivirus companies' threat indicators."

Indeed, the researchers found imperfections. Based on the worm's behavior, Stewart said its author didn't appear to be very familiar with corporate firewalls. And there was no clever social engineering involved. But, he said, "Looking at the code itself, there's not a lot wrong with it. Given the right person sending it and more effort in the social engineering, this could be very effective." He said it doesn't take a tremendous amount of skill to construct worms like this, and that cyberspace will see more of its kind in the future.
Myfip and its successors might not spread like a Slammer or Blaster. But since it's designed to quietly go in and lift files from the network, the report said companies that are infected could suffer greatly. "If the wrong document leaves your network it could have devastating consequences," the report said. "Typically when we think of [data] theft, we think of the 'inside job.' However, it is hard to pin down these types of theft to know when and where and to whom it happens. We just don't know unless the theft is discovered after the fact. But Myfip is tangible; it's here and now, and could affect your network. And Myfip is by no means alone; we've seen over the last year a rash of targeted Trojans which appear to be designed solely for the purpose of intellectual property theft."

Stewart said the proof is in all the high-profile reports of data theft this year from the likes of Bank of America, BJ's Wholesale Club and Lexis-Nexis.

How it gets in and what it steals

Myfip typically arrives in an e-mail, an example of which is in the report that says, "If your employees are suspicious at all, they might notice the poor grammar and avoid opening the attachment. But, if they haven't been quite so diligent installing security updates for Internet Explorer, the embedded IFRAME tag in the e-mail might just go ahead and do the job for them."

Myfip doesn't spread back out via the Simple Mail Transfer Protocol (SMTP). "There is no code in the worm to do this," the report said. "From certain key headers in the message, we can tell that the attachment was sent directly to [users]." One element that stands out is that Myfip e-mails always have one of two X-Mailer headers: X-Mailer: FoxMail 4.0 beta 2 [cn] and X-Mailer: FoxMail 3.11 Release [cn]. Also, it always uses the same MIME boundary tag: _NextPart_2rfkindysadvnqw3nerasdf. "These are signs of a frequently-seen Chinese spamtool…," the report said.

When it runs, the worm sets up a few registry keys "to ensure that it will start at every boot," the report said. "For the most part, they have remained constant, so this is an easy way to spot a Myfip infection that might have already occurred on a machine."

The original Myfip only stole .pdf files. But Myfip-B and later variants steal any files with the following extensions:

.pdf - Adobe Portable Document Format
.doc - Microsoft Word Document
.dwg - AutoCAD drawing
.sch - CirCAD schematic
.pcb - CirCAD circuit board layout
.dwt - AutoCAD template
.dwf - AutoCAD drawing
.max - ORCAD layout
.mdb - Microsoft Database

"We can see that the Myfip author is now looking for Word documents and several types of CAD/CAM files," the report said. "This is the core of where many companies' intellectual property resides. We could liken these files to the crown jewels in many cases. And, of course, if you're going to steal a company's product designs, you might as well take their customer list or any other databases that might be lying around in .mdb files."

The China connection

Stewart said his team was easily able to trace the source of Myfip and its variants. "They barely make any effort to cover their tracks," he said. And in each case, the road leads back to China. Every IP address involved in the scheme, from the originating SMTP hosts to the "document collector" hosts, are all based there, mostly in the Tianjin province.

How far has Myfip's reach been so far? While Stewart believes its impact has been limited, he said it's difficult to come up with a specific number of infections.
"Nobody's willing to come forward and say they've been infected, so that makes it hard," he said. "No one wants to admit their intellectual property has been stolen." He said it's also hard to measure the spread because "AV companies will get something like this, update its signatures and move on." In the battle against Myfip and worms like it, Stewart said companies have to do everything they normally do to keep viruses out. IT administrators should also discourage widespread use of instant messaging programs, which have become increasingly popular in the corporate world. "IM could be a bigger problem in the future," he said. "We really encourage clients not to use IM to send to the outside and receive from the outside." He said a little paranoia doesn't hurt, either. "Companies really need to be paranoid about the attachments people can open," he said. "They really need to make their users aware of the social engineering that can be used to trick them into opening infected files." If enterprises don't take the threat seriously, he said attacks like Titan Rain will be repeated over and over again. | 计算机 |
Comparing an Integer With a Floating-Point Number, Part 1: Strategy
We have two numbers, one integer and one floating-point, and we want to compare them.
Last week, I started discussing the problem of comparing two numbers, each of which might be integer or floating-point. I pointed out that integers are easy to compare with each other, but a program that compares two floating-point numbers must take NaN (Not a Number) into account.
That discussion omitted the case in which one number is an integer and the other is floating-point. As before, we must decide how to handle NaN; presumably, we shall make this decision in a way that is consistent with what we did for pure floating-point values.
Aside from dealing with NaN, the basic problem is easy to state: We have two numbers, one integer and one floating-point, and we want to compare them. For convenience, we'll refer to the integer as N and the floating-point number as X. Then there are three possibilities:
N < X.
X < N.
Neither of the above.
It's easy to write the comparisons N < X and X < N directly as C++ expressions. However, the definition of these comparisons is that N gets converted to floating-point and the comparison is done in floating-point. This language-defined comparison works only when converting N to floating-point yields an accurate result. On every computer I have ever encountered, such conversions fail whenever the "fraction" part of the floating-point number — that is, the part that is neither the sign nor the exponent — does not have enough capacity to contain the integer. In that case, one or more of the integer's low-order bits will be rounded or discarded in order to make it fit.
To make this discussion concrete, consider the floating-point format usually used for the float type these days. The fraction in this format has 24 significant bits, which means that N can be converted to floating-point only when |N| < 2^24. For larger integers, the conversion will lose one or more bits. So, for example, 2^24 and 2^24+1 might convert to the same floating-point number, or perhaps 2^24+1 and 2^24+2 might do so, depending on how the machine handles rounding. Either of these possibilities implies that there are values of N and X such that N == X, N+1 == X, and (of course) N < N+1. Such behavior clearly violates the conditions for C++ comparison operators.
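A small test program makes the point concrete. (This assumes the common case of an IEEE 754 single-precision float with a 24-bit fraction; the exact values that collide can differ on other implementations.)

```cpp
#include <iostream>

int main() {
    int n = 16777216;                     // 2^24, exactly representable as a float
    float x = static_cast<float>(n + 1);  // 2^24 + 1 needs 25 bits, so it is rounded

    std::cout << std::boolalpha;
    std::cout << (static_cast<float>(n) == x) << '\n';      // typically true: N == X
    std::cout << (static_cast<float>(n + 1) == x) << '\n';  // true: N+1 == X
    std::cout << (n < n + 1) << '\n';                       // true, of course
}
```

With both N and N+1 comparing equal to the same X, the int-to-float conversion has already thrown away exactly the distinction the comparison was supposed to detect.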
In general, there will be a number — let's call it B for big — such that integers with absolute value greater than B cannot always be represented exactly as floating-point numbers. This number will usually be 2^k, where k is the number of bits in a floating-point fraction. I claim that "greater" is correct rather than "greater than or equal" because even though the actual value 2^k doesn't quite fit in k bits, it can still be accurately represented by setting the exponent so that the low-order bit of the fraction represents 2 rather than 1. So, for example, a 24-bit fraction can represent 2^24 exactly but cannot represent 2^24+1, and therefore we will say that B is 2^24 on such an implementation.
With this observation, we can say that we are safe in converting a positive integer N to floating-point unless N > B. Moreover, on implementations in which floating-point numbers have more bits in their fraction than integers have (excluding the sign bit), N > B will always be false, because there is no way to generate an integer larger than B on such an implementation.
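One way to obtain B without hard-coding 2^24 — a sketch that assumes the usual <limits> and <cmath> facilities — is to ask the implementation how many significand bits a float has:

```cpp
#include <cmath>
#include <limits>

// B = 2^k, where k is the number of significand bits in a float.
// On an IEEE 754 implementation, digits is 24, so B is 16777216.0f.
const float B = std::ldexp(1.0f, std::numeric_limits<float>::digits);
```

Every integer whose absolute value is at most B converts to float exactly; beyond B, the conversion may round.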
Returning to our original problem of comparing X with N, we see that the problems arise only when N > B. In that case we cannot convert N to floating-point successfully. What can we do? The key observation is that if X is large enough that it might possibly be larger than N, the low-order bit of X must represent a power of two greater than 1. In other words, if X > B, then X must be an integer. Of course, it might be such a large integer that it is not possible to represent it in integer format; but nevertheless, the mathematical value of X is an integer.
This final observation leads us to a strategy, which is sketched in code after the list:
If N < B, then we can safely convert N to floating-point for comparison with X; this conversion will be exact.
Otherwise, if X is larger than the largest possible integer (of the type of N), then X must be larger than N.
Otherwise, X > B, and therefore X can be represented exactly as an integer of the type of N. Therefore, we can convert X to integer and compare X and N as integers.
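Here is a rough sketch of that strategy in C++, for non-negative values only. It leaves out NaN, negative numbers, and mixed signs — the cases discussed next — so it is an illustration of the idea rather than the finished solution; the function name and structure are mine, and it assumes IEEE 754 float behavior with the default round-to-nearest conversion of INT_MAX.

```cpp
#include <cmath>
#include <limits>

// Is n < x, where n is a non-negative int and x a non-negative float?
bool int_less_than_float(int n, float x) {
    // B = 2^k for a k-bit fraction; integers below B convert to float exactly.
    const float B = std::ldexp(1.0f, std::numeric_limits<float>::digits);

    if (n < B)                              // step 1: n converts to float exactly
        return static_cast<float>(n) < x;

    // step 2: (float)INT_MAX rounds up to the smallest float >= INT_MAX,
    // so any x at least that large exceeds every int, including n.
    if (x >= static_cast<float>(std::numeric_limits<int>::max()))
        return true;

    // step 3: x fits in an int here, and any x large enough to exceed n
    // is necessarily a whole number, so integer comparison is exact.
    return n < static_cast<int>(x);
}
```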
I noted at the beginning of this article that we still need to do something about NaN. In addition, we need to handle negative numbers: If X and N have opposite signs, we do not need to compare them further; and if they are both negative, we have to take that fact into account in our comparison. There is also the problem of determining the value of B.
However, none of these problems is particularly difficult once we have the strategy figured out. Accordingly, I'll leave the rest of the problem as an exercise, and go over the whole solution next week.
Gamestop letting you pre-order digital download games
A lot of PC gamers that I know have gone the way of digital download purchases. Why go to the store unless you really want a physical copy of the game? It's just so much easier to sit at a computer and click through an app or webpage to purchase something.
Well, Gamestop wants in on the action and starting with Deus Ex: Human Revolution, you'll be able to pre-order a digital download of the PC game at the store.
So, they want me to leave the convenience of getting everything at home without having to fight traffic so that I can maybe fight the crowd of people at a store to pre-order a game that I don't have to go into the store to pick up? OK, so the only advantage of this is if you have some games to trade in towards a digital download, but other than that, why bother? I guess that's where pre-order exclusives come in. Still, I'll sit at home and pre-order through Steam or Origin, thank you very much.
GameStop Launches New In-Store Digital PC Game Purchase Method
GameStop (NYSE: GME), the world's largest multichannel video game retailer, is making it easier for PC gamers to get their hands on hot new releases. Exclusively at GameStop, customers can now use any accepted form of payment, including trade credit and GameStop gift cards, to purchase digital PC games at their local store and access the titles immediately at launch. Deus Ex(R): Human Revolution(TM) is the first of what will soon be many titles to support this new purchase method.
"This is a great illustration of how the digital distribution model and in-store experience really complement one another," said Steve Nix, GameStop's general manager of digital distribution. "We have seen great success selling DLC for console titles in our stores, so expanding on that model and helping customers discover digitally distributed PC games in stores is a natural fit."
In addition to immediate access to the game at launch, customers who pre-purchase the digital PC version of Deus Ex: Human Revolution at GameStop will also receive:
A digital version of Deus Ex(TM): Game of the Year Edition and Deus Ex: Invisible War(TM)
The Explosive Mission DLC Pack for in-game use in Deus Ex: Human Revolution
Double points for GameStop(R) PowerUp Rewards(TM) members
As an added bonus, PowerUp Rewards(TM) members who pre-order and purchase Deus Ex: Human Revolution will be entered to win a decked out living room in the Augment Your Living Room Sweepstakes. Visit www.poweruprewards.com/PUR/Index/Augment for complete details on the Augment Your Living Room Sweepstakes.
Deus Ex: Human Revolution, which launches Aug. 23, 2011, is available for pre-order now in GameStop stores nationwide and online at www.GameStop.com.
Deus Ex and Deus Ex: Human Revolution are registered trademarks or trademarks of Square Enix Ltd.
First-Person Perspective Concept
First-Person is a vantage point that attempts to simulate looking through a game character's eyes. It is most commonly found in First-Person Shooters and Racing Games, and to a lesser extent in other genres, such as RPGs and 3D Platformers.
The 7th Guest 3: The Collector
Third installment in The 7th Guest series that has been in development for many years.
An upcoming horror game from Frictional Games for PC and PlayStation 4, coming in 2015. It will explore themes of the self, mind, and consciousness.
Hellraid
Fight for the survival of mankind against the armies of Hell.
A procedurally generated space exploration game from the creators of Joe Danger, Hello Games.
Firewatch
Firewatch is a mystery set in the Wyoming wilderness developed by Campo Santo, where the protagonist's only emotional lifeline is the person on the other end of a handheld radio.
Devil's Third
An action game by Valhalla Game Studios, headed by ex-Tecmo producer Tomonobu Itagaki. It will be published by Nintendo as a Wii U exclusive.
Dying Light is a first-person, open world game set in a zombie apocalypse. The player character is able to free-run to get around the environment quickly.
An upcoming sci-fi first-person adventure.
From the creator of Braid, The Witness is an exploration-focused puzzle game that takes place on an uninhabited island.
Coma: A Mind Adventure
An upcoming first-person puzzle-adventure game that takes place entirely inside the mind of a comatose man.
Chaos;Child
Chaos;Child is the fourth major entry in the Science Adventure franchise of visual novels, set for a Japanese release in 2014.
Viscera Cleanup Detail
A space station cleaning simulator by RuneStorm.
A first-person action-RPG based heavily around spellcasting. The first game from independent developer Xaviant, and one of the first titles to use CryEngine 3.
A post-apocalyptic free-to-play MMO from the makers of EverQuest II and The Matrix Online. Set fifteen years after an initial horrifying outbreak, players must survive against the infected, wild animals, and other survivors.
StarCrawlers
Starcrawlers is an upcoming RPG that will play like a classic dungeon crawler in a futuristic sci-fi setting
The Talos Principle
A "first person philosophical puzzler" from Croteam.
Beasts of Prey
Beasts of Prey is a first person survival game. It is currently in development, but an early access version is available to play.
In the Kingdom
A deliberately retro horror FPS.
Black Ice is an indie FPS/RPG about hacking. Set in cyberspace, the player must defend himself from strange creatures while hacking into evil corporations to steal their software. Black Ice features a giant procedurally generated world, billions of items, and bright neon colors.
Ubisoft is set to publish a new game in the Far Cry series.
While investigating a mass murder, Detective Sebastian Castellanos descends into a gruesome, nightmarish world. This third-person survival horror game marks the debut of Tango Gameworks, a studio headed by Resident Evil progenitor Shinji Mikami.
A survival horror game set 15 years after the original Alien film. It stars Amanda Ripley, the daughter of film protagonist Ellen Ripley.
Doorways: The Underworld
Doorways: The Underworld is a horror adventure game. It is the standalone third chapter of Doorways.
The Fifth Day
The Fifth Day is a first person survival game by Touz.
Liberty City
This is a disambiguation page for Liberty City, which could mean a number of things.
Liberty City or City of Liberty is a city based primarily on New York City that appears in several games and has three different renditions.
Liberty City in GTA 1 - Only appearing in GTA 1.
Liberty City in GTA III Era - Appearing in GTA III, GTA Advance and GTA Liberty City Stories, with a minor appearance in GTA Vice City and GTA San Andreas.
Liberty City in GTA IV Era - Appearing in GTA IV and its episodes and GTA Chinatown Wars.
Liberty City is used in two titles:
Grand Theft Auto: Liberty City Stories - A game based in Liberty City.
Grand Theft Auto: Episodes from Liberty City- The title given to the disc version of The Lost and Damned and The Ballad of Gay Tony.
Spoiler Warning: Plot and/or ending details are in the text which follows.
Trivia
Liberty City is a real place. It is located in Miami, Florida, the same place that Grand Theft Auto: Vice City and Grand Theft Auto: Vice City Stories are based on.
Liberty City, in every era (along with Alderney in the fourth era), is the only main area in the Grand Theft Auto series to date where a protagonist hasn't (knowingly) died. A protagonist has died in Vice City, Blaine County, Ludendorff (faked), and two optional protagonist deaths can occur in Los Santos.
Apple releases completed Xcode 4, offers App Store version
updated 04:00 pm EST, Wed March 9, 2011
No Mac Developer membership required
After an extended production period, Apple has at last released the finished version of Xcode 4. The software is Apple's central tool for developers, capable of programming both Mac and iOS titles. Some upgrades over earlier releases include a new, mostly single-window interface, Fix-it auto-correction, full support for C++ in the LLVM compiler and integration of Interface Builder into the Xcode IDE.

Apple has also taken the unusual step of putting Xcode on the Mac App Store. The tool was previously available only through the Mac and iOS Developer programs, which cost $99 a year. At the Mac App Store Xcode is $5, and apparently detached from any membership requirements. The download measures 4.24GB and requires Mac OS X 10.6.6.
Xcode 4 has had a long and sometimes troubled history, going through numerous developer previews. It's in fact one of the few pieces of Apple software to have been given a second gold master release, attributable to serious bugs in the first one. Gold masters are intended to be completed code, simply made available to developers ahead of time.
by MacNN Staff
Clark Lane
Perspicacious Perspectives from CHEN PR
How Do You Make Money from Open Source Software?
David Skok, a partner at VC firm Matrix, sketched out the options at a recent Mass. Software Council session, “Open Source -- Is it Entering the Mainstream?” Matrix was the lead investor in JBoss, with the sole investor board seat, so he knows whereof he speaks. Let’s face it: VCs spend more time than the average bear on the making money topic.
Skok cited four models:
Paid support (e.g., Red Hat and JBoss) -- If you follow open source at all, you are probably familiar with the Red Hat and JBoss models, where most of their revenue derives not from selling software, but from varying levels of support packages.
Dual license (e.g., MySQL) -- The approach taken by the popular open source database company MySQL offers the software under the General Public License (GPL) for open source developers. The catch with the GPL license is that if you bind closely to GPL code in your application, you must also GPL your code. For companies that decide they want to sell their application that incorporates MySQL, the organization offers a traditional paid license. Visit their site for a detailed explanation.
Upgrade to proprietary software (e.g., SourceFire and Sun) -- I’m most familiar with this approach, as Sun uses this model with its tools line, offering an entry point with the open source IDE NetBeans. From there, if developers want all the bells and whistles, they can move up to Java Studio Creator or Java Studio Enterprise. The same holds true for OpenOffice.org; users who want support and advanced features buy StarOffice.
Offer a hosted service (e.g., SugarCRM) -- Skok noted that not long ago he’d felt application software would not be a likely area for open source to prosper, but he now feels that this startup may be onto something, with its hybrid model.
Skok noted that when a Forrester survey asked respondents about the benefits and concerns associated with open source software, 57% cited lack of support as a key concern. This explains why Red Hat and JBoss are doing well with their model. Skok says JBoss is getting 10,000 leads per month.
Another interesting point -- it’s a given that lifetime sales and marketing costs for a software product are high (up to 55% of the expenditure). Skok estimates that if you have a successful open source development community contributing to the software, you can cut maintenance costs by up to 20%.
According to an Evans Data Corp study, more than 1.1 million developers in North America are spending at least some of their time working on open source development projects. So it seems that there are plenty of developers out there willing to invest the time.
More on open source next time…
posted by Author: Barbara Heffner @ 1:04 PM 62 comments
A Full Measure
From CHEN PR Veep Randy Wambold...
The days when marketing was to some extent insulated from the same bottom line accountability of every other group in an organization are long behind us. An IDC conference held yesterday in snowy New York, "IDC Marketing Performance Measurement Summit for Business-to-Business Marketers," drove home the point.
In today's competitive climate, marketers are obligated to show ROI like everybody else. The challenge is, unlike other areas of the business -- say operations, or finance of course -- where well-understood, widely accepted industry standard metrics exist, marketing lacks these measurement tools.
Of course, if the problem was easy to solve, some 100 or more people from around the world wouldn't have felt compelled to travel to New York yesterday for a day of meetings on the topic. So in many ways yesterday's event was about intelligent, thought-provoking discussion on the topic more than coming away with all the answers.
Some particularly interesting points emerged from the discussion:
Marketing is being held to financial accountability in a way that was never the case in the past in the tech market. Consequently, if marketers want to have relevance and "a seat at the management table," they must learn to think and talk in business terms. The "I'm creative so it's hard to be accountable in that same way" mindset ain't gonna cut it any more. Though this increased accountability will cause some short term pain -- in no small part due to the lack of measurement metrics that were the focus of the conference -- longer term, this will help garner marketing the respectability it has lacked in many circles. Or, as one panelist memorably put it, it will help get marketing "out of the ghetto."
Brand awareness is paramount. It was a hot topic for all of the companies represented.
Marketing needs to become integrated and compatible with other areas of the business such as sales and finance. The historical skepticism and friction between marketing and other disciplines has hurt marketers. As one panelist nicely put it, "It's time for marketers to get over our victim mentality."
The customer is king. I'm reminded of the fact that when Scott McNealy of Sun sat on stage with Steve Ballmer of Microsoft in 2004 and answered the question of how two staunch rivals had come to make peace, the gist of his answer was: "Our customers asked us to do it, and the customer is in the driver's seat." This same mentality prevailed yesterday. A marketing program that doesn't rely on customer-focused data will fail on the face of it.
Your metrics are only as good as the data they rely on. In the afternoon we broke into small groups to discuss measurement, and my industry colleagues and I talked almost the entire session about the great need and the great difficulty in getting good data to use as the basis for good metrics.

On a more positive note, conference chairperson and IDC analyst Richard Vancil perceptively points out that there is a silver lining in the measurement challenge clouds for marketers. In the boom times of tech, the role of marketers was limited, he argued (rightly, in my opinion). Sure, marketers helped provide sales support and competitive positioning and increase brand awareness, etc. But at the end of the day, companies didn't really, truly need expert marketing because demand was so strong and capital so plentiful. In these post-boom days, with demand neither nearly as strong, nor capital nearly so plentiful, tech companies truly, urgently need marketing in a fundamental way that they haven't before. For those of us in tech marketing, this is a real career opportunity. And at a more macro level, it can only benefit the tech market long term to have the industry more marketing-focused.
On a separate note, I applaud IDC for donating $5 to the Make-a-Wish foundation for each evaluation survey turned in by an attendee. Classy move.
And finally, on a more personal note, the hotel in which the conference was held was a stone's throw from the World Trade Center. Though I have been back to New York since 9/11, this is the first time I've been back in that immediate area. I took the opportunity to walk over to Ground Zero on lunch. Though of course the area does not resemble the pile of rubble that is in many of our mind's eyes from the coverage immediately after the event -- in fact very early work has already begun on foundations for the new buildings -- the sunken crater was still a vivid, stirring reminder of the tragedy of that day that will be with all of us for the rest of our lives.
posted by Author: Barbara Heffner @ 10:56 AM 8 comments
Pew Project Nets it All Out
A post of Pew pearls from my partner Chris Carleton...
Have you had time yet to peruse the recently released Future of the Internet Report issued under the auspices of the Pew Internet and American Life Project? If not, it's a must-read for high tech PR folks.
In fact, it's a must read for just about everyone, since the Internet and related technologies affect us all.
The report is based on a broad-ranging survey of technology leaders, scholars, industry officials and interested members of the public. The 24-question survey was emailed out in September and generated responses from nearly 1,300 individuals.
In addition to the Grandpappy of All Things Net, Vint Cerf, respondents ranged from folks like Ethernet inventor, tech VC and all-around industry icon Bob Metcalfe to uber-journalists/industry pundits Esther Dyson and Dan Gillmor. Some, preferring to shield their identities, came from such institutions as MIT, The Federal Communications Commission, U.S. Department of State, Harvard, Google, Microsoft, AOL, Disney and IBM. And still others imparted their wisdom, but neither their names nor their affiliations.
The survey finding that has generated the most attention is that 66% agreed with the prediction that at least one devastating attack will occur in the next 10 years on the networked information infrastructure or the country's power grid. Most media outlets covering the report jumped all over that, since it has the F.U.D. factor that grabs eyeballs. That's not to say it didn't warrant the attention. We expect that the majority of us find this prediction as likely as it is frightening.
News and publishing organizations are expected to incur the most profound level of change. Proof of that pudding is perhaps reflected no more strongly than in the Blogoshere. Others trailing closely behind are educational institutions, workplaces and healthcare institutions. The least amount of change is expected in religious institutions.
The report contains lots of other goodies that deserve airtime:
59% believed that more government and business surveillance will occur as computing devices proliferate and become embedded in appliances, cars, phones and even clothes.
57% agreed that virtual classes will become more widespread in formal education and that students might sometimes be grouped with others who share their interests and skill levels rather than just their age.
56% think that as telecommuting and home-schooling expand, the boundary between work and leisure will diminish and family dynamics will change because of that.
Half believe that anonymous, free music file-sharing on peer-to-peer networks will still be easy to do a decade from now.
posted by Author: Barbara Heffner @ 5:20 PM 1 comments
'Vette Coup or Snafu?
Musings from my colleague Randy Wambold...
I'm always curious about the lives of PR people in industries other than tech. Are their day-to-day jobs roughly analogous to mine? Or is, say, PR for the automobile industry so different from tech PR that it might as well be a different profession?
An article in the Wall Street Journal on Friday entitled "GM's AWOL Corvette; How Car Maker Lost Control Of Its New Model's Rollout Shows Power of Web Fan Sites" shed light on this question. (WSJ.com is a paid subscription site so I can't link to it, but those of you with access will find it on-line.)
The article concerns the announcement of the new Corvette Z06. GM's communications planned to announce the car at the Detroit auto show this week. They placed media under embargoes accordingly. Trouble is, unauthorized photos of the car began appearing a couple of months ago. GM tried to stem the distribution, but as we know in this age, once the chain reaction gets started, it's near impossible to stop it. The unspoken question of the Journal article seems to me to be whether the actual announcement next week won't be a bit of a yawner coming on the heels of all the pre-announcement coverage.
I had several observations about this story from the perspective of a tech PR professional:
Like in tech PR, automotive industry PR has been reliant on media embargoes as a form of message control.
Like in tech PR, broken embargoes are a constant occurrence, and what to do about them a constant issue.
Like in tech PR, the embargo's effectiveness -- and to some extent its feasibility even -- in our current age is very much in question. New ways of thinking about communicating a message to the marketplace are called for. "The Z06 snafu is a high-profile illustration of how Detroit's decades-old tactics for generating buzz around a new model don't always mesh with the realities of the digital media universe," the article notes.
A reaction to an unplanned media event is sometimes as important as the event itself. After the photos began appearing, GM chose to try to finger the culprits and limit the "damage," by means that some interpreted as heavy-handed. This appears to have exacerbated the situation, alienating some of GM's most loyal customers ("GM should probably find a better use of their time than p-----g off current and future Corvette owners," the article quoted one fan group Web site manager as saying.) Easy for me to say as a Monday morning quarterback, but perhaps if GM had embraced and tried to leverage the interest the digital community was showing in the new Corvette, this might have been a PR coup rather than a PR snafu.
posted by Author: Barbara Heffner @ 8:45 PM 2 comments
Covert Propaganda
Today's Washington Post includes a chilling article. It details a story by Mike Morris that aired early last year on local TV news stations across the country about the dangers of drug abuse. The catch: "...Morris is not a journalist and his 'report' was produced by the government, actions that constituted illegal 'covert propaganda,' according to an investigation by the Government Accountability Office."
It's always a bit frightening to think about the number of government employees who must have been involved in this little venture. Didn't someone (with warning bells blaring) ask his/her colleagues: "Isn't this morally wrong? Isn't this what they mean by the manipulation of the media?"
There's comfort in the fact that checks and balances work, and the GAO stepped in. But it's cold comfort. It doesn't bring back our $155,000.
Thanks to Terri Molini at Sun for flagging this story earlier today.
"Life was simple before World War II. After that, we had systems."
-Rear Admiral Grace Hopper
Pubs and PR
Here's another guest entry from my colleagues Chris Carleton and Randy Wambold, based on their December London trip.
Part of the fun of traveling abroad is being reminded just how much of what we take for granted in the U.S. is specific to our culture. Stereotypes in both directions end up making for some entertaining exchanges.
For example, many of our industry colleagues presumed that we Americans would be less than willing to share a Guinness or two with them. We were told they thought that because of our country's often-over-the-top focus on health and diet and abstinence from most things enjoyable. We were only too happy to disabuse them of that notion!
On the flip side, we arrived back on U.S. soil with some of our British stereotypes held firmly in place. Their command of English, for example, really does make you want to dig out and dust off your grammar and usage handbook.
But, there are stereotypes and then there is plain fact.
With regard to the latter, we're here to tell you that Brits really are as incredulous about our president's re-election as you hear reported in the news. Combine that with the fact that mixing business with politics is less taboo in the U.K., and we found ourselves talking about taxes and Texas at least as much as technologies and trade pubs.
On the business front, we were struck by the similarities between tech PR in the U.K. and the States. It's not that differences don't exist, though. For example, after our meetings, our sense is that while the U.K. definitely got caught up in the tech boom hype of the late 90s, it wasn't to quite the same extent as in the U.S. In particular, there doesn't seem to have been as much of a VC spending spree. So when the bubble burst, as a generalization tech PR firms in the U.K., like the market on the whole, might not have had as far to fall.
Conversely, the U.S. market seems to remain a bit gun-shy as a result of the still-raw wounds from that rapid rise and equally rapid descent. Not so for our friends in the U.K., who bring an enthusiasm to the table hearkening back to the days when sock puppets were for fun in preschool classes rather than icons of failed on-line ventures lacking that little ingredient called a business model. Their enthusiasm was a shot in the arm coming as it does at a time when our own sense is that cautious optimism continues to grow in the U.S. tech market.
Also on the differences front, and perhaps indicative of the market dynamic just mentioned, there seems to exist a professional collegiality between tech journalists and PR professionals in the U.K. that may have waned in the U.S. We're talking in general terms here. But, whether it be due to U.S. journalists' own experiences getting "dot-bombed," or to the fact that during the boom in the U.S., any bloke with a computer and a press release template was billing himself or herself as a "strategic PR pro," sullying our profession in the process, in the U.K. market journalists and PR professionals seem to have a little more mutual respect and a little less of a wary eye for one another.
But as we say above, we were struck as much by the similarities between tech PR in the U.K. and tech PR in the States as by the differences.
posted by Author: Barbara Heffner @ 8:50 AM 1 comments
Is It Time To Revamp Your Traditional Outdated Email Newsletter?
When you think about Email Marketing one of the first things that comes to mind is a company’s email newsletter. Or in some circles, it’s called an “eNewsletter”. Regardless, a lot of businesses create and distribute them on a regular basis. Back in the early 2000’s, email newsletters became one of the must-have communication tools in a company’s marketing strategy. But with prevalence of real-time marketing and social media tools, I firmly believe that it is time to revamp the traditional format of a company’s email newsletter.
The traditional email newsletter typically leads with a main customer-focused article, followed by secondary articles that marketing or sales deemed important to promote to customers, users, constituents, etc. Each newsletter is focused on getting the majority of your audience interested in what you’re doing and saying – and to get them to take action! Go to this landing page, download this white paper or click to learn more, right?
The second topic for debate is how frequently a company should send out an email newsletter. Should it be weekly, monthly or quarterly? Do we have enough content for a monthly newsletter, or what? Will a quarterly newsletter be relevant, or has too much time passed?
Oh yes, let’s not forget about the email newsletter side bar. The multi-purpose area that contains information that never changes, like contact information, as well as links to articles that didn’t make the cut for the main body of the newsletter.
With our status updates, check-ins, blog posts, you-name-it feeds along with a constantly-connected mentality, the traditional email newsletter is not as valuable as it used to be. Today, customers opt-in and want their information delivered to them in real-time. Traditional newsletters are published and once the news is more than a week old, the information seems out of date and, more importantly, out of touch with technology and social networking.
In the new email newsletter format, there isn’t a main customer-focused article. It’s a list of the best-of-the-best published marketing content – blogs, articles, posts and tweets that you have written, promoted and or endorsed. This way, the frequency of the newsletter then depends on the amount real-time content the company aggregates during a certain period of time. Plus, instead of just tracking one-off newsletters articles, the overarching benefit is that each piece of content can be tracked from the moment it’s published in real-time, shared through social media networks and of course, re-published again within the newsletter.
This new email newsletter format is a snapshot of all the great content that was published in real-time, through status updates, check-ins and blog posts. For example, the newsletter would read more like a LinkedIn Network Update with comments such as:
“Customer X explains why they like Product Y (Review)”
“Product Z has an update (23 features, 72 Fixes)”
“Company A renewed and upgraded their support contract to Gold”
Newsletters like this would be published when new information is available versus publishing so-so content just to stay on a consistent timeline. The savvy customer/consumer today knows the difference between real content and filler. Maybe we can all apply some good-old mom logic to email communications: "If you don't have anything good to say, don't say anything at all."
Image courtesy of banlon1964’s flickr photostream
Tags: email, marketing, newsletters
This entry was posted on Wednesday, May 5th, 2010 at 8:09 PM and is filed under Newsletter.
Migration, security, and the economy top 2002 management scene
Jan Stafford
After a year that brought recession and disaster, change is in order in 2002. That's why more IT managers will join the systems migration and security enhancement movement that started in 2001. On a broader scale, an analyst predicts that hopes for a happy new year will be realized in an economic comeback.
Some things will change in the Windows management tools market in 2002, as this searchWindowsManageability year-end review will show. Some things, however, will remain the same.
In the Windows systems management market, migration tools -- particularly Active Directory -- were in hot demand in 2001 and won't cool off in 2002. They'll have to share the spotlight, though, because interest in data integration tools and storage will grow significantly in the next year. Further, the fallout from the Sept. 11th tragedy will make 2002 a busy year for security and disaster recovery product and services providers.
The current roster of major systems management software vendors, however, won't be sharing center stage. They'll continue to dominate the enterprise marketplace in 2002, but the slow economy will force them to work harder to maintain their top positions. Not surprisingly, vendor consolidation will continue, as the majors continue the practice of adding technologies to their lines by acquiring smaller vendors.
In 2001, many businesses invested in migration tools and began planning Active Directory migrations, according to Audrey Rasmussen, research director at Boulder, Colo.-based Enterprise Management Associates. However, she added that "most companies delayed going to AD because of its complexity."
The need to improve the functionality of the ubiquitous Windows user interface is driving interest in data integration tools, said Richard Ptak, senior vice president at the Framingham, Mass.-based analyst firm Hurwitz Group. "Everyone has their own user interface in Windows," he said. So, the data received from multiple tools needs to be accessible and viewable from one format. Tools will surface that will consolidate the data collected from different files, he said. Once the data is consolidated, the tools will manipulate, correlate, relate and present the data.
In the enterprise systems management software space, the competition will heat up, Ptak said. The current roster of major systems management software vendors -- Hewlett-Packard Co., Tivoli Systems, Inc., BMC Software, Inc., and Computer Associates International, Inc. -- will continue to dominate the enterprise marketplace in 2002. They won't be on easy street, however. Second-tier players, like Candle, Corp. and Compuware, Corp. are competing with the top four more aggressively, he said. Also, the slow economy will force top vendors to work harder to maintain their top positions. In a recession, however, even major vendors will need to work harder to keep their customers. They'll "focus their resources and tactics to be more sensitive to helping their customers immediate needs," said Ptak. They'll provide the services needed to build stronger business relationships, in hopes of keeping customers loyal.
Selling service will be very important, because sales increases of system management software won't set the world on fire for the next few years. Worldwide system management software revenues grew only from $8,901.3 million to $9,672 million between 2000 and 2001, according to San Jose, Calif.-based Gartner Dataquest. In 2002, revenues should reach $11,127.1 million. That pace should continue through 2005, when revenues will hit $17,595.3 million.
New innovators have sprung up in the performance management space and some consolidation will take place, Ptak predicted. Some major players will acquire smaller players. The major players may also try to acquire tools that focus on providing faster and more efficient utilization of the Internet infrastructure, he said. There are two ways that smaller performance management companies are currently providing tools that make using the Internet infrastructure more efficient, he said. One is to focus on problem identification and root cause analysis. The other is to focus on fixing the problems, while addressing the exact performance issue, whether it is the switching, host providers or load balancers. Ptak feels some of the larger players may try to buy the smaller players' tools rather than create their own.

Storage software such as storage area networks (SANs) will become more popular in 2002, predicted Mark Crawford, a network design specialist at Keystone Health Plan Central of Camp Hill, Penn. Costs will be coming down, he said, while the user base will increase. "SANs are not targeted at huge-sized companies anymore."
SANs allow for server consolidation, which saves money, too, Crawford said. Further, once a company has purchased a SAN, "incremental expansion is inexpensive because you just buy more hard drives." Crawford also predicted that IP-based storage networks may become a cost effective implementation. Keystone, in particular, is looking to implement network attached storage (NAS) devices and drives at an offsite facility in order to use the storage network to move data.
The Sept. 11 attack increased the demand for security, disaster recovery tools and services, and collaboration tools, Rasmussen and Ptak said. IT managers are seeking root-cause analysis and storage resource management tools that help them automatically resolve and head off problems.
On a more optimistic note, Ptak predicted that the damper the Sept. 11 attacks put on the economy will evaporate soon. "We will see a recovery in the economy toward the end of the first quarter 2002," he predicted. The built-up creative and innovative IT talent of today will be unleashed and address the business problems businesses are experiencing. "The restored growth to the industry and the economy will be a phenomenon," he concluded.
What new technologies are on your mind for 2002? Talk about it with your peers in our Management Tools Discussion Forum
Or maybe you'd like some expert advice. SearchWindowsManageability's experts are on call this holiday season to answer your technical questions.
Firefox 3: will Mac users switch from Safari?
Scot Finnie
Firefox has also had perennial stability issues, sometimes leading to loss of performance over time, as a result of memory leaks. To be honest, though, I've only ever seen or heard about that problem under Windows. Mozilla's developers were able to rid the Gecko 1.9 browser engine -- under development for almost three years -- of some of the reliability-robbing inefficiencies of its predecessors. According to Mozilla's Firefox release notes:
Memory usage: Several new technologies work together to reduce the amount of memory used by Firefox 3 over a Web browsing session. Memory cycles are broken and collected by an automated cycle collector, a new memory allocator reduces fragmentation, hundreds of leaks have been fixed, and caching strategies have been tuned.
The long list of new features in Firefox 3.0 is attractive in its own right. For example, there's a selection of welcome security tweaks, full-page zoom, better password management, a new download manager, and numerous improvements to address-bar auto-complete and bookmarks. New Mac integration includes a native OS X application look and feel, support for OS X widgets and support of some Growl notifications -- although the "green + button" still does a Windows-style maximize.
And then there are the intangibles: I have always liked the way Firefox feels. What does that mean? I can't really explain it. Safari doesn't have the fun factor that I get from Firefox. Safari may take you down the virtual highway with performance akin to a BMW M3, but while you're doing it, you'll feel like you're driving your father's Oldsmobile. (Is there any other kind anymore?) Firefox feels more like the M3, and now it comes close in the speed department. Of course, Apple has reportedly released a beta of Safari 4 to its developer community, so there's another chapter to come.
The catch: Bookmark synching
Despite the newfound performance and pleasant interface, I'm not necessarily dropping Safari like a hot potato in favour of Firefox. Apple has another ace up its sleeve with respect to Safari -- especially for people like me who live and work on multiple Macs. Apple's .Mac service (recently renamed MobileMe or .Me for short) can automatically synchronize browser bookmarks, usernames and passwords on all your Macs. This Apple service costs $99 a year, so it's not for everyone. But for those who do use it, it's another reason to stick with Safari.
I'm unaware of similar service for Firefox that works as seamlessly and automatically as MobileMe. There are several utilities and services that you can use to solve the problem. For example, you can get around the problem by using a Web-based service, like Google Bookmarks. (I'm not as fond as many people are of using Google and Yahoo for personal data like email, so it's not a method I'd prefer.)
I've recently come across two products that look promising: Foxmarks Bookmark Synchronizer (free) and Everyday Software's BookIt ($12). Neither of these products has the whole ball of wax. Foxmarks appears to handle everything I want it to, but only among Firefox browsers (including Firefox 3). BookIt is a manual synching tool (it doesn't work automatically), but it works with multiple browsers and even supports the iPhone (although in its current 3.75 release, BookIt does not support Firefox 3).
Firefox has caught up to Safari's performance but has not surpassed it in any notable way. What that means is that the decision is effectively a photo finish for the legions of Safari users on the Mac. It will probably come down to individual perceptions and predilections. As a previous Firefox user and supporter and also someone who has been using Safari for the better part of two years, I've got skin in the game on both sides of the question. It's going to take me some time to sort it out.
The question for me -- the decision point -- after I install Firefox 3 on one of my machines is: "Should I make Firefox my default browser?" So far, except for the purposes of testing, Safari is still winning. But that may just be muscle memory.
One thing is for sure: This is one Firefox upgrade that existing Firefox users don't want to miss. And whatever browser you use on your Mac, you'll want to check out Firefox 3. It's that good.
Office 365 Public Beta: A Web-Based Way to "Go Microsoft"
By Harry McCracken | Monday, April 18, 2011 at 10:58 am

Last October, Microsoft announced Office 365, a new product (replacing something called the Business Productivity Office Suite, or BPOS) that ties together an array of offerings into one Web-hosted service. Today, it's launching a public beta, which you can sign up for at Office365.com. It's letting folks into the service in batches, so expect a bit of a wait until you can try it out; the final version should go live later this year.
Office 365 enters the market as the instant archrival of Google’s Google Apps, but the two services are anything but exact counterparts. Philosophically, they’re at odds: Google Apps is based on the idea that you’ll do most or all of your work using Web-based apps, resorting to a traditional suite such as Microsoft Office either not at all or only in a pinch. (Google continues to acknowledge that many businesses aren’t ready to dump Office by introducing features designed to make Apps and Office work better together.)
Microsoft, oddly enough, thinks that most companies don’t want to get rid of Office in its traditional software form. So Office 365 is designed to supplement old-school Office rather than render it irrelevant, by making it easier to deploy and manage Microsoft Web-based services that complement the Office suite, including the Exchange e-mail platform, SharePoint collaboration, and Lync communications system. (You can either continue to buy the desktop suite the old fashioned way, or pay for it on a subscription basis as part of Office 365.)
Office 365 also builds in the Office Web Apps versions of Word, Excel, PowerPoint, and OneNote, junior-sized versions of their desktop antecedents which are still pretty limited when it comes to features, although they do a nice job of rendering documents properly. You can also use the Office Web Apps for free, but the freebie editions are aimed at consumers and tie into Hotmail and Microsoft’s SkyDrive service; the Office 365 ones work with Exchange and SharePoint and are therefore much better-suited to business use.
Various versions of Office 365 are aimed at organizations of all sorts, from one-person professional service firms to megacorporations. A version called Plan P1 is designed for small businesses, giving them hosted Exchange, SharePoint, Lync, and Office Web Apps for the reasonable price of $6 per user per month.
To riff on Google App’s oft-repeated concept of “Going Google,” Office 365 provides a one-stop way to Go Microsoft. I think that the small businesses who will find it most attractive are ones who are already serious, reasonably happy Office users and are comfortable with the idea of becoming even more Office-centric–not malcontents who are tempted by Google Apps. For instance, while you can use Outlook with any e-mail server, using Office 365 provides access to a bunch of features which are dependent on the Exchange server, such as full-blown workgroup calendaring, without requiring you to set up an Exchange server.
Me, I still find myself veering between Google Apps and desktop Office, depending on what I’m doing. I want it all: apps as powerful as old-school Office that live in the browser like Google Apps, with both Google’s painless collaboration features and the Office Web Apps’ slick document rendering. It’s going to be great to watch Office 365 and Google Apps duke it out, but both Microsoft and Google have a long, long way to go. Google, in fact, is still adding extremely basic stuff like pagination and doesn’t quite have an answer for the question “how do I stay productive when I don’t have an Internet connection?” And Microsoft is still financially and emotionally invested in desktop software in a way that may limit the ambition of the Office Web Apps. (We’ll know that’s changed when a fairly serious Office user can look at the Web apps and say to him or herself, “Hey, I could use this for 90% of my work.”)
For more details on Office 365, check out Elsa Wenzel’s story over at PCWorld.
Read more: E-Mail, Microsoft Office, Microsoft Office 365, Office Suites
Pashmina Says:
May 17th, 2011 at 7:29 pm Has anyone seriously vetted the collaboration aspects of Office 365 vs. Google yet? Or is it too early? If designed in the right way, where all my MS documents could be synced ala Dropbox but also accessible on the web ala Google style for collaboration, they tempt me to make the switch back. But trying all these new tools is exhausting and time consuming. Has it matured enough to give it a go?
Virtualisation for Beginners
It's not just for server jockeys, you know
Chris Bidmead,
Both VMware and Parallels also allow you to run the whole guest operating system either as a single entity in its own window, or full screen, which makes it look as if it owns the entire machine. In conjunction with the Mac OS X Spaces feature - a way of flicking immediately between virtual desktops with the touch of a key combination or a mouse gesture - this is a great way to switch almost instantly between multiple operating systems.

YouTube HD running smoothly on Windows 7 in a VirtualBox VM under Ubuntu Linux
One other side benefit of virtualisation should appeal to anyone who runs lengthy processes on their machine. It seems to happen all too often that you're re-encoding a movie as a background process, when you need to reboot your machine, to install a system update, say, or fix a hardware problem. If the conversion is running on the host operating system you'll either have to wait until it's finished, or stop it and start all over again from scratch. But if it's running in a virtual machine, you just suspend the virtual machine and then reboot the host or whatever else you have to do. When you restart the virtual machine, the conversion will pick up from exactly where you left it.

With a sufficiently powerful processor, running multiple guest operating systems becomes feasible. Virtualisation has a performance impact, of course - if you're thinking of using it for gaming, for instance, dream on - but for office and straight graphics work, my Intel Core 2 Quad easily runs Ubuntu Linux, Windows 7 and Windows XP in separate VMs under Snow Leopard. ®
Sun's VirtualBox is a free virtualisation utility available for Windows, Mac OS X and Linux. It lacks some of the finer features of VMWare and Parallels, but ot
National Center for Cognitive Informatics & Decision Making
Sameer Bhat
Sameer Bhat, Vice President of Sales and Co-founder, eClinicalWorks

After graduating from Karnataka University in 1993 with a Bachelor's degree in electronics and communications, Sameer Bhat began his career at Integra Microsystems, where he served as a lead engineer developing Web-based document management software from 1994-1996. In 1996, he moved to Novell, Inc., to develop applications for remote desktop and network management as a senior software engineer until 1999. Both of these positions helped lay the groundwork for eClinicalWorks.
In 1999, Bhat brought his experience to eClinicalWorks, a healthcare software company, as one of the co-founders. Bhat oversees the company’s technical direction and leads eClinicalWorks’ services efforts in the areas of architecture, design and sales. Using some of the expertise from Novell and Integra, he also serves as one of the original key contributors who designed and built the company’s Web-based technology architecture.
In addition to providing technical guidance, Bhat leads the company’s sales team and has made it one of, if not the, best in the industry. eClinicalWorks is currently debt-free with more than 40,000 medical providers using its software across all 50 states. Revenues have grown 100 percent year-over-year and were approximately $100 million in 2009.
Sameer Bhat's ability to use his technical expertise to meet the needs of customers has helped make eClinicalWorks a solid business that will be a driving force in the market for many years to come. As a co-founder, he has structured this profitable, debt-free company in a way that will outlive most software companies. This work was recognized by the Worcester Business Journal when it honored Bhat with its 2009 40 Under Forty Award for business leaders under 40 years of age.
Contents | Guideline 1 | Guideline 2 | Guideline 3 | Guideline 4 | Appendix A: Prompting | Glossary | References
Implementation Techniques for
Authoring Tool Accessibility Guidelines 2.0:
Appendix A: Techniques for user prompting
Group Working Draft 3 December 2002
http://www.w3.org/WAI/AU/2002/WD-ATAG20-TECHS-20021203/appa
http://www.w3.org/TR/ATAG-wombat-techs/appa
http://www.w3.org/WAI/AU/2002/WD-ATAG-WOMBAT-TECHS-20020820/appa
ATAG 1.0 Recommendation:
Editors of this chapter:
Copyright © 1999 - 2002 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.

Table of Contents
The ATAG Definition of "Prompting"
User Configurability
Prompting Before a Problem Exists
Input Field Order
Input Field Highlighting
Prompting After a Problem Exists
Immediate Prompting (Interruption)
Negotiated Prompting (Interruption)
Scheduled Prompting (Interruption)
Sequential Prompting
Prompting in a Real-Time (Live) Authoring Environment
The purpose of this appendix is to more fully explain the sense in which "prompting" is used in ATAG v1.0. This document will try to situate the concept of prompting in the scope of classical HCI paradigms, where it is linked to the psychological concept of user-interruption (McFarlane, 1999). Four user-interruption methods are usually identified: (1) immediate, (2) negotiated, (3) mediated, and (4) scheduled. In the case of web authoring, the third case, mediated, is not relevant. Several interface mockups are included for illustrative reasons. They are not intended as expressions of any AUWG or member opinion.
The ATAG Definition of "Prompting"

The concept of "prompting" is central to the practical implementation of the ATAG guidelines v1.0. The term appears several times in the guidelines themselves and dozens of times in the ATAG implementation techniques, including:
Guideline 3: From the introductory text: "... authoring tool developers should attempt to facilitate and automate the mechanics of [producing equivalent information]. For example, prompting authors to include equivalent alternative information such as text equivalents, captions, and auditory descriptions ..."
Checkpoint 3.1: From the checkpoint text: "Prompt the author to provide equivalent alternative information (e.g., captions, auditory descriptions, and collated text transcripts for video)."
Checkpoint 3.4: From the introductory text: "For example, prompt the author for a text equivalent of an image ..."
Checkpoint 4.1: From the introductory text: "Note: Accessibility problems should be detected automatically where possible. Where this is not possible, the tool may need to prompt the author to make decisions or to manually check for certain types of problems."

When ATAG 1.0 was first released there were some misunderstandings about whether the original definition of "prompting" implied that less intrusive mechanisms such as prominent input fields and in-line prompting would not qualify. In fact, the AUWG believes they should, so on 5 July 2000, an Errata was published that clarified the definition:
"In this document [ATAG v1.0] 'prompt' does not refer to the narrow software sense of a 'prompt', rather it is used as a verb meaning to urge, suggest and encourage. The form and timing that this prompting takes can be user configurable. 'Prompting' does not depend upon the author to seek out the support but is initiated by the tool. 'Prompting' is more than checking, correcting, and providing help and documentation as encompassed in [ATAG v1.0] guidelines 4, 5, 6. The goal of prompting the author is to encourage, urge and support the author in creating meaningful equivalent text without causing frustration that may cause the author to turn off access options. Prompting should be implemented in such a way that it causes a positive disposition and awareness on the part of the author toward accessible authoring practices."
In other words, "prompting" is used to denote any user interface mechanism that provides the author with the opportunity to add accessible content (Note: although the definition uses the phrase "equivalent text", this covers audio descriptions of video, etc.) or reminds them of the need to do so. The definition does not assume any particular prompting mechanism or require that the chosen mechanism be either irritating or aesthetically unpleasant. In fact, ATAG checkpoint 5.1 requires quite the opposite; that prompting be naturally integrated into the overall look and feel of the tool.
Remember: The ultimate goal of the "prompting" is to obtain correct and complete information with the assistance of the author. This is most likely to occur if the author has been convinced to provide the information voluntarily.
User Configurability

User acceptance of the accessibility features of an authoring tool will most likely depend on the degree to which the new features preserve existing author work patterns. That is why the ATAG definition clearly states that: "the form and timing that this prompting takes can be user configurable". In other words, the author should be able to control how and when prompting will appear in order to reconcile the additional accessibility authoring tasks with their regular work patterns. To achieve this, tools may offer the author a range of checking and prompting options (see Figure A-1). These might include allowing the author to specify:
which accessibility standards they wish to follow, and where applicable, to which level,
the nature and timing of the prompting (interruption scheduling),
the degree to which the prompts are highlighted in the interface (immediate or negotiated interruptions), and
the nature of the accessibility guidance they wish to receive.
Of course, as with all the examples in this document, this is just a suggestion. The Authoring Tools Working Group (AUWG) encourages developers to adapt and develop solutions that are suitable for their own tools.
Figure A-1: Accessibility options card.
(Source: mockup by Jan Richards)
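As a minimal sketch (not part of ATAG itself), the author preferences behind an options card like Figure A-1 might be represented inside a tool roughly as follows; the option names, values, and defaults are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AccessibilityPromptOptions:
    """Author-configurable checking and prompting preferences (illustrative only)."""
    standard: str = "WCAG 1.0"               # which accessibility standard to check against
    conformance_level: str = "Double-A"      # target conformance level, where applicable
    interruption_style: str = "negotiated"   # "immediate", "negotiated", or "scheduled"
    prompt_events: list = field(default_factory=lambda: ["save", "publish"])  # when scheduled prompts fire
    highlight_problems: bool = True          # mark offending elements in the editing views

# A tool would store these alongside its other user preferences, so the author
# decides how and when accessibility prompting appears.
options = AccessibilityPromptOptions(interruption_style="scheduled")
print(options)
```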
Prompting Before a Problem Exists
It is preferable to guide the user towards the production of accessible content. Otherwise, if the author is allowed free rein, they may be overwhelmed by the full weight of the accumulated problems once they are informed of them later in the authoring process. ATAG checkpoint 5.2 addresses this issue by recommending that input fields related to accessibility (as determined by WCAG) should be among the most obvious and easily initiated by the author. To a large extent, this means designing dialogs and other mechanisms so that the author's attention is drawn to the presence and purpose of accessibility-related input fields.
There are several ways this type of prompting might be achieved:
Input Field Order:
ATAG Checkpoint 5.2 does not require that accessibility related controls either obscure or hinder other controls. Instead, the checkpoint emphasizes that these controls should be allotted a screen presence that is appropriate for their importance. For example, some tools have floating properties bars that display input fields appropriate to the currently selected element (see Figure A-2). The relative importance of a property can be communicated to the author in two ways.
Reading Order: Without any other forms of organization, most people will read interface items in a "localized" reading order (i.e. in English, French, etc. this is left to right and top to bottom). The higher visibility of items that occur early in the reading order confers higher apparent importance.
Grouping: Grouping input fields can change the reading order and the related judgments of importance. For example, the "H" field in the figure below rises in the reading order due to a strong grouping with the "W" field.
Advanced Options: When the properties are explicitly or implicitly grouped into sets of basic and advanced properties, the basic properties will gain apparent importance. For example, in the figure below, the fact that the "alt" field appears in the default properties, rather than the collapsible portion of the properties bar, confers visibility and apparent importance.
Figure A-2: Floating properties bar (top: maximized, bottom: minimized) with prominent alt field.
(Source: Macromedia Dreamweaver 2.0)

Input Field Highlighting:
Visibility of input fields related to accessibility may be further enhanced by visual highlighting. For example, the fields may be distinguished from others using icons (see Figure A-3), color (see Figure A-4), underlining, etc. When these methods are used, it is important to ensure that they are consistent with the overall look and feel of the authoring tool interface (as per ATAG Checkpoint 5.1). For example, if an authoring tool uses an icon in the shape of a black dot to denote the required field, this convention might be extended so that a red dot is used to denote the accessibility-related fields. An additional consideration is that in order to meet ATAG Checkpoint 7.1, the highlighting must be implemented so that it is available through APIs, allowing an author with disabilities to access the highlighting through assistive devices (MSAA, Java Accessibility API, GNOME accessibility).
Figure A-3: Input field highlighting with an iconic reference to a note.
(Source: mockup by Jan Richards)
Figure A-4: Input field highlighting with colored input field.
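A tool could drive both the field ordering and the highlighting described above from a single set of field descriptions. The sketch below is one hypothetical way to do that for an image-properties dialog; the field names and flags are assumptions for illustration, not a required design.

```python
# Hypothetical field descriptors for an image-properties dialog; a real tool
# would derive these from its own dialog definitions.
IMAGE_PROPERTIES = [
    {"name": "src",    "label": "Image file",     "accessibility": False, "advanced": False},
    {"name": "alt",    "label": "Alternate text", "accessibility": True,  "advanced": False},
    {"name": "width",  "label": "W",              "accessibility": False, "advanced": False},
    {"name": "height", "label": "H",              "accessibility": False, "advanced": False},
    {"name": "border", "label": "Border",         "accessibility": False, "advanced": True},
]

def display_order(fields):
    """Place basic fields before advanced ones, and accessibility-related fields
    early within each group, so they gain visibility in the reading order."""
    return sorted(fields, key=lambda f: (f["advanced"], not f["accessibility"]))

for f in display_order(IMAGE_PROPERTIES):
    marker = "*" if f["accessibility"] else " "   # stands in for an icon or colored label
    print(marker, f["label"])
```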
Prompting After a Problem Exists
If, despite the prompting discussed in the previous section, accessibility problems are created (or are present when a user opens a document for editing), the properties dialogs or other insertion mechanisms may not be seen by the author again, substantially reducing the effectiveness of any prompting that they contain. In addition, some accessibility problems arise from the interaction between multiple elements and are therefore not well suited to prompting in any particular insertion mechanism. Therefore, it is necessary to implement prompting mechanisms that can operate more generally. Since the problems are already present in the markup, this system will require a robust automated accessibility checking system (see ATAG checkpoint 4.1) in order to detect the problems and alert the author to them.
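As a rough illustration of the automated detection this depends on, the following sketch scans an HTML fragment for img elements that lack an alt attribute and records where they occur; a real checker would of course cover many more checkpoints and document types.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the positions of img elements that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            line, col = self.getpos()
            self.problems.append((line, col, "img element has no alt attribute"))

checker = MissingAltChecker()
checker.feed('<p>Logo: <img src="logo.gif"> and <img src="photo.jpg" alt="A red barn"></p>')
for line, col, message in checker.problems:
    print(f"line {line}, column {col}: {message}")
```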
Immediate Prompting (Interruption)
An immediate interruption is the most intrusive form of prompting because the author's attention is actively diverted from the current editing task to highlight some accessibility issue (for instance, by an alert dialog, see Figure A-6). This type of alert presents multiple usability problems, and should be used sparingly because it interferes with the normal design process. Intrusive warnings are probably only appropriate when the window of opportunity for correcting a serious accessibility problem is about to close. An example of a closing window of opportunity for correction is when the author is publishing a document to their site. In general, we recommend using the less disruptive options that will be described in the following sections.
Figure A-6: Accessibility alert dialog.
Negotiated Prompting (Interruption)

The term "negotiated interruption" refers to interface mechanisms (icons, line or color highlighting of the element, audio feedback, etc.) that alert the author to a problem, but are flexible as to whether the author should take immediate action or address the issue at a later stage. This type of unintrusive alert can be better integrated into the design workflow. For example, a colored outline might be drawn around an object in a WYSIWYG view (see Figure A-7) that has unresolved accessibility issues, while the markup text for the same object might be highlighted by a different font color in the code view (see Figure A-8). In either case, when the author clicks on the highlighted text, they could be presented with several correction options. Besides being unintrusive, such indicators will have the added benefit of informing the author about the distribution of errors within the document without interrupting their editing process.
Of course, some authors may choose to ignore the alerts completely. In this case, the AUWG does not recommend that the tool force the author to fix the problem. Instead, it recommends that, at some major editing event (e.g., when publishing), the tool should remind the author of the continuing unresolved accessibility issues.
Figure A-7: Object highlighting of an accessibility problem in a WYSIWYG editor.
Figure A-8: Font highlighting of an accessibility problem in a text view.
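Internally, a negotiated approach might keep detected problems as non-blocking markers that the WYSIWYG and code views render as highlights, leaving the author free to act now or later. The sketch below is an assumption-laden illustration of such a marker store, not a prescribed design.

```python
class ProblemMarkers:
    """Holds outstanding accessibility problems without interrupting the author."""
    def __init__(self):
        self._markers = {}   # element id -> human-readable description

    def flag(self, element_id, description):
        self._markers[element_id] = description   # views draw an outline / colored font for these

    def resolve(self, element_id):
        self._markers.pop(element_id, None)        # highlight disappears once the author fixes it

    def outstanding(self):
        return list(self._markers.items())         # e.g. reported again at publish time

markers = ProblemMarkers()
markers.flag("img-17", "Image is missing a text equivalent")
markers.flag("table-2", "Data table has no header cells")
markers.resolve("img-17")
print(markers.outstanding())
```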
Scheduled Prompting (Interruption)

With scheduled prompting, the author can set the tool to alert them of accessibility issues on a configurable schedule (see Figure A-1). One option for the schedule might be to have the prompts associated with the interface mechanisms for significant authoring events (saving, exiting, publishing, printing, etc.). In this case, at the significant authoring event, the author is informed of the problem and is given the means to initiate the correcting actions (Note: The author should never be prevented from performing the significant authoring action itself). For example, a "save as" dialog could display an accessibility warning and an option to launch a correction utility after saving (see Figure A-9). A potential downside of this type of prompting is that by the time the prompt is displayed (publishing, etc.), the author may not have time to make the required changes, especially if they are extensive.
Figure A-9: A scheduled prompt as part of a "save as" dialog.
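A scheduled prompt of this kind might be wired into the tool's significant authoring events along the following lines; the event names, callbacks, and messages are illustrative assumptions. Note that the authoring action itself is never withheld.

```python
def on_publish(document, find_problems, warn, prompt_events=("save", "publish")):
    """Hypothetical publish handler: warn about outstanding problems, but never block publishing."""
    if "publish" in prompt_events:
        problems = find_problems(document)
        if problems:
            warn(f"{len(problems)} accessibility problem(s) remain; you can fix them before or after publishing.")
    publish(document)                      # the publishing action itself is always carried out

def publish(document):
    print("published:", document["title"])

# Example wiring with stand-in callbacks:
doc = {"title": "Home page", "body": '<img src="logo.gif">'}
on_publish(doc, find_problems=lambda d: ["img element has no alt attribute"], warn=print)
```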
Once the author has been made aware of a problem and chosen to correct it, the tool has at least two options. First, it might display the property editing mechanism for the offending element (see Figure A-2). This is the simplest solution, but it suffers from the drawback that it does not focus the author's attention on the required correction. The second option is to display a custom "correction" prompt that includes only the input field(s) for the information required as well as additional information and tips that the author may require in order to properly provide the requested information (see Figure A-10). Notice that in Figure A-10, a drop-down edit box has been used for the alt-text field. This technique might be used to allow the author to select from text strings used previously for the alt-text of this image (see ATAG Checkpoint 3.5 for more).
Figure A-10: Accessibility problem checker.
(Source: mockup by Jan Richards based on A-Prompt).
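A rough sketch of the reuse mechanism behind such a drop-down is given below; keying the history by image URL (rather than, say, a checksum of the image data) is an assumption made to keep the example short.

```typescript
// Sketch: remember the alternative text an author has previously supplied for an
// image, and offer those strings as suggestions the next time the image is used.

class AltTextStore {
  // Keyed by image URL here; a real tool might key by a checksum of the image data.
  private history = new Map<string, string[]>();

  record(imageKey: string, altText: string): void {
    const previous = this.history.get(imageKey) ?? [];
    if (!previous.includes(altText)) {
      this.history.set(imageKey, [...previous, altText]);
    }
  }

  suggestionsFor(imageKey: string): string[] {
    return this.history.get(imageKey) ?? [];
  }
}

// Example: populate the drop-down edit box of a correction prompt.
const store = new AltTextStore();
store.record("logo.png", "Company logo");
store.record("logo.png", "Acme Widgets logo");
console.log(store.suggestionsFor("logo.png")); // ["Company logo", "Acme Widgets logo"]
```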
Sequential Prompting:
In cases where there are many pieces of information missing, authors may benefit from a sequential presentation of correction prompts. This may take the form of a wizard or a checker. In the case of a wizard, a complex interaction is broken down into a series of simple sequential steps the user can complete one at a time. The later steps can then be updated on the fly to take into account the information provided by the user in earlier steps. A checker is a special case of a wizard in which the number of detected errors determines the number of steps. For example, word processors usually have checkers that display all the spelling problems one at a time in a standard template with places for the misspelled word, a list of suggested words, and the correct word. The user also has correcting options, some of which can store responses to affect how the same situation is handled later.
In an accessibility problem checker, sequential prompting is an efficient way of correcting problems. However, because of the wide range of problems the checker needs to handle (e.g., missing text, missing structural information, improper use of color), the interface template will need to be even more flexible than that of a spell checker. Nevertheless, the template is still likely to include areas for identifying the problem (WYSIWYG or markup-based according to the target audience of the tool), suggesting multiple solutions, and choosing between them or creating new ones. In addition, the dialog may include context-sensitive instructive text to help the author with the current correction.
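The sketch below outlines how such a checker loop might be structured. The problem kinds, the decision shape, and the prompt callback are invented for illustration and would differ in a real tool.

```typescript
// Sketch: a checker that walks detected problems one at a time, like a spell
// checker, collecting the author's decision for each before moving on.

interface DetectedProblem {
  kind: "missing-text" | "missing-structure" | "colour-only";
  description: string;
  suggestions: string[];
}

type Decision =
  | { action: "apply"; value: string }
  | { action: "skip" };

async function runChecker(problems: DetectedProblem[],
                          prompt: (p: DetectedProblem) => Promise<Decision>,
                          apply: (p: DetectedProblem, value: string) => void): Promise<void> {
  for (const problem of problems) {
    // The template stays the same; only the problem-specific details change.
    const decision = await prompt(problem);
    if (decision.action === "apply") {
      apply(problem, decision.value);
    }
  }
}

// Example usage with a stubbed prompt that accepts the first suggestion.
runChecker(
  [{ kind: "missing-text", description: "Image has no alt text", suggestions: ["Chart of Q1 sales"] }],
  async (p) => ({ action: "apply", value: p.suggestions[0] ?? "" }),
  (p, value) => console.log(`Fixed "${p.description}" with "${value}"`),
);
```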
When authoring tools produce real-time content, the luxury of prompting on a user configurable schedule is to a large degree lost. At the same time, due to the time pressure, authors in real-time environments tend to be less receptive to intrusive prompts. Nevertheless, tools that allow this kind of authoring should still take accessibility issues into account by supporting the following:
Determination of Participant Requirements: If a real-time communication takes place between individuals with no special communicative needs, there may be no need for real-time prompting. However, the author may not personally know all the special communicative needs of the participants (even if the author knows everyone personally). The tool might be able to facilitate a decision about whether supplements need to be provided by asking participants which types of supplemental material they wish to have made available (see "Request whiteboard descriptions" checkbox in Figure A-11) and then prompt the author (or see Assistant Author) to provide these (preferably during Preparation Time). In cases when it is not possible to know the needs of everyone participating in a communication, the tool should assume there are unidentified users with disabilities. Moreover, even if there are no individuals with special communicative needs participating in the original real-time communication, if the communication is archived there will always be a possibility that future users will experience accessibility problems with the material. Therefore, even when it has been determined that the original communication does not require supplements, if the author chooses to archive the communication, the authoring tool should guide the author through a configurable interruption process to check for and repair accessibility problems after the real-time session has ended, but prior to archiving.
Assistant Author: In some cases, it may be possible to designate a secondary author in the live community, who can receive and respond to the intrusive prompts for supplemental information generated as the primary author proceeds uninterrupted (see Figure A-11). The secondary author might be an unrelated specialist, analogous to a sign language interpreter, or a co-author (helpful for describing technical drawings, etc.).
Preparation Time: If the authoring tool allows the author time to pre-assemble materials for a live presentation (e.g. a professor preparing for an online class), this authoring is not considered real-time authoring. The authoring tool has the opportunity to provide both intrusive and unintrusive prompts and alerts as described elsewhere in this document. For example, when the professor imports an image to be used in her lecture, she could be prompted to provide an alternative representation of that image.
Figure A-11: Real-time presentation in a Whiteboard/Chat environment. Notice the functionality for requesting whiteboard descriptions, volunteering to be the secondary author (describer), and describing a whiteboard object even as the dialog continues. (source: mockup by Jan Richards).
If it has been determined that the author must provide real-time supplements, but neither preparation time nor an assistant author is available, then in addition to allowing the author control of the nature and timing of prompting, the authoring tool can facilitate the inclusion of supplements by:
Implementing the equivalent alternatives management functionality required for ATAG Checkpoint 3.5. Then, if the author uses an object that has been used before, the tool can suggest the previously stored alternative, which the author can quickly accept or decline without substantial workflow disruption.
Providing a voice recognition capability so that the author's real-time speech input can be converted into captioning.
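As a rough illustration of the captioning idea, the sketch below uses the prefixed webkitSpeechRecognition interface that some browsers expose; the interface name, its availability, and the recognition quality are all assumptions rather than guarantees.

```typescript
// Sketch: live captioning for a real-time session using the browser's speech
// recognition interface (prefixed, and not available in every browser).

type CaptionSink = (text: string) => void;

declare const webkitSpeechRecognition: new () => any;

function startCaptioning(postCaption: CaptionSink): () => void {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening for the whole session
  recognition.interimResults = false; // only forward finished phrases

  recognition.onresult = (event: any) => {
    const result = event.results[event.results.length - 1];
    postCaption(result[0].transcript); // send the recognised phrase to participants
  };

  recognition.start();
  return () => recognition.stop();    // caller can end captioning when the session ends
}

// Example: append captions to a transcript pane.
const stopCaptioning = startCaptioning((text) => console.log("[caption]", text));
// later, when the presentation finishes: stopCaptioning();
```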
McFarlane, Daniel C. (1999) "Coordinating the Interruption of People in Human-Computer Interaction", Human-Computer Interaction - INTERACT '99, pp. 295-303.
Don't Participate in Illegal E-mail Chain Letters
Upgrade To SecureIT Plus
Now With Parental Controls
Ask The Help Desk
How To Save Changes To Attachment Files Before Resending
Great Sites To Check Out In May!
Short Tutorial
Adding Contacts To Your Address Book
Welcome to the Homefront Reporter
E-mail has become part of our daily routines and it's the focus of this month's eNewsletter. We take a look at chain letters sent by e-mail and remind you that if money is involved, such letters are illegal. We also provide instructions on how to save changes you make to e-mail attachments, and how to add contact information to your e-mail address book. When you're finished with e-mail, browse through the Great Sites list. It includes an inspiring resource about Memorial Day, tips for fuel efficiency, and some fun ideas for your spare time. The goal of each of our monthly eNewsletters is to keep our subscribers informed regarding their Internet connection and to improve their Internet experience. To meet this goal, each monthly newsletter will usually contain information related to:
- Warnings on a recent virus, e-mail hoax or security issue that may affect you
- An update on new services and other local interests
- An answer to a frequently asked Internet related question
- Some fun, seasonal websites to check out
- A short, step-by-step tutorial on an e-mail or browser related task
We think you'll find the information contained in this newsletter to be a valuable tool for enhancing your Internet experience. If, however, you'd prefer not to receive these bulletins on a monthly basis, click HERE. To see what's inside this issue, take a look at the index to the left and thanks for reading! - The Homefront Reporter Team
E-Mail Scam - Don't Participate in Illegal E-mail Chain Letters
You probably receive them regularly — e-mailed chain letters that promise a big return on a small investment. Most contain a list of names and addresses and instruct you to send a few dollars to the person at the top of the list, remove that name from the list, and add your own name to the bottom. The fraudulent promise behind chain letters is that by the time your name gets to the top of the list, so many people will be involved that you'll receive a fortune. One recently circulated e-mail chain letter promised earnings of "$50,000 or more within the next 90 days of sending e-mail." These e-mail messages often falsely claim that, "this is not a chain letter, but a perfectly legal money-making opportunity." Or they may include personal testimonials that are hard to prove and often fabricated. The Federal Trade Commission (FTC) reminds you that chain letters that involve money or other valuables and promise big returns are illegal. If you start one or forward one, you are breaking the law and could be prosecuted for mail fraud. This applies whether a chain letter is sent by e-mail or regular mail. What should you do if you receive a potentially illegal e-mail chain letter? DO NOT REPLY OR PARTICIPATE. You can also report the scam to the FTC at [email protected]. If you receive a chain letter via regular mail, call the Postal Inspection Service toll-free at 1-888-877-7644.
Upgrade To SecureIT Plus - Now With Parental Controls
3 Rivers is pleased to announce that parental controls have been added to SecureIT Plus, our Internet security software. All of our customers who use this software now have the added protection of content filtering, access and time management controls, and monitoring/reporting capabilities, at no additional cost! Beginning May 1, 2007, all 3 Rivers SecureIT Plus users will automatically have their software upgraded with this new parental control enhancement during the regularly scheduled live update. The parental controls feature will initially be inactive until you activate it and proceed through the menus.
To enable Parental Controls:
1. Open the SecureIT Management Console by right-clicking on the gold padlock found in your icon tray.
2. At the Management Console, select the "Parental Controls" button on the left hand navigation bar.
3. At the Parental Controls page, click the white box entitled "Enable Parental Controls" to activate the service and begin setup.
If you're a 3 Rivers dial-up customer and are considering upgrading to high speed DSL broadband, now is the time. SecureIT Plus, with new parental controls, is free for one computer when you sign up for 3 Rivers' DSL service. It is available for purchase by dial-up customers or for additional computers for DSL users for $3.95 per month (assisted installation is available for a one-time fee of $4.95). SecureIT Plus is a comprehensive suite of fully managed and fully automated security software featuring antivirus, patch management, firewall, pop-up blocker, and spyware detection and removal, all with free 24/7 technical support. Visit www.3rivers.net or call a 3 Rivers customer service representative at 1-800-796-4567 for more information.
Ask The Help Desk - How To Save Changes To Attachment Files Before Resending
Question: I'm having a problem with e-mail attachments. When I receive an e-mail with an attachment (usually a Word document), I open the file, make and save my changes, and then forward the edited file to a co-worker. When my co-worker opens the Word document, none of my changes have been saved. What am I doing wrong?
Answer: It sounds like you are failing to first save the attached Word document to your computer's hard drive. Making changes to a file without first saving the file to your computer will not save your changes. Next time, save the file to your hard drive. Then you can open the document, make your edits, save the changes to your file, prepare an outgoing e-mail message, and then add the edited document as an attachment to the outgoing message. Your edited version of the file will be sent to the recipient.
Great Sites To Check Out This Month
A Grateful Nation http://www.thulix.com/memorial_day/ - Memorial Day, celebrated this year on May 28th, is a time to remember the ultimate sacrifice made by so many to support our country. This site is a comprehensive resource about Memorial Day, and includes information on the holiday's history and special events as well links to museums, memorials, and cemeteries honoring fallen heroes from the Civil War to the present day. You'll also find opportunities to offer support to U.S. troops currently deployed.
Will the Third Time Be a Charm? http://spiderman3.sonypictures.com - The upcoming release of Spider-Man 3 promises to deliver the excitement and fun we've come to expect from the first two movies. Spider-Man 3 reunites Tobey Maguire and Kirsten Dunst, and follows the adventures of the super hero as his suit turns from red to black, giving him greater powers but posing new challenges. Check out the official site to see the trailer and get the behind-the-scenes scoop.
Find Your Celebrity Look Alike http://www.starsinyou.com - Ever been told that you look like a famous celebrity? Now you can see for yourself in seconds ... and it's free. Simply go to this site and upload a full-face photo, and the search engine will compare your face with thousands of celebrities. You'll instantly receive photos of your celebrity matches, along with a percentage figure for each. You could be 76% Shania Twain, for example, and 67% Laura Linney. It makes a fun break on a busy day.
Discover a Unique Style of Vacation http://discoverhouseboating.org - You've been on a cruise. You've stayed in plenty of hotels, cabins, and cottages. But have you ever taken a houseboat vacation? Houseboating lets you (and your family and friends) enjoy the amenities of a luxurious condominium while you move to a different lakefront spot whenever you want a change in scenery. This site includes a variety of articles geared for houseboat beginners, and includes links to places you can rent a houseboat in the U.S. and Canada.
Slow Down and Save http://fueleconomy.gov - Every five miles an hour you drive over 60 mph is like paying an extra 20 cents per gallon of fuel. Clearly, it pays to slow down, especially since the cost of gasoline typically rises during the summer vacation season. You'll find lots more fuel-saving tips when you visit this site, including advice on driving more efficiently, keeping your car in shape, and planning and combining trips. You can also download the 2007 Fuel Economy Guide to help you choose a more efficient vehicle — a move that can save you hundreds of dollars every year.
Short Tutorial - Adding Contacts To Your Address Book
They're probably sitting on your desk right now--business cards or other printed materials containing important names, e-mail addresses, phone numbers, and so on. How do you add this contact information to your e-mail address book? Just follow the steps below for your e-mail program:
Adding Contacts To Your Address Book When Using Outlook Express 6 In Windows XP Home Edition:
1. With Outlook Express open, click on the "Addresses" button on the toolbar.
2. In the Address Book window, click on the "New" button and then click "New Contact" from the resulting drop-down menu.
3. Enter your new contact's information in the "Properties" window. If you'd like to enter more detailed information such as their home, business, and personal contact information, click on each corresponding tab to do so.
4. Click the "OK" button to save your new contact and close the "Properties" window. Your new contact will now be listed in the "Address Book" window.
5. Close the Address Book window by clicking on the red "X" in the upper right hand corner of the window.
Adding Contacts To Your Address Book When Using Thunderbird 1.5 On Windows XP Home Edition And Macintosh OS 10.4:
1. With Thunderbird open, click your cursor arrow on the "Address Book" button on the Thunderbird toolbar.
2. In the Address Book window, click on the "New Card" button.
3. Start by clicking on the Contact tab and fill in the fields. Continue with the fields in the other tabs if needed.
4. When you're finished, click on the "OK" button. Your new contact will show up in your address book list.
Adding Contacts To Your Address Book 4.0.4 When Using Mail 2.1.1 On Macintosh OS 10.4.9:
1. With your Address Book open, click on the "+" sign under the field that includes the list of contacts. A new card will appear in the right hand field.
2. Click on each labeled area and type in the information for your new contact.
3. Close the Address Book program by going to the Address Book menu and selecting "Quit Address Book."
We hope you found this newsletter to be informative. It's our way of keeping you posted on the happenings here. If, however, you'd prefer not to receive these bulletins on a monthly basis, click HERE. Thanks for your business! Best regards,
The Homefront Reporter Team
3 Rivers Communications
Offering advanced technology with a personal touch
202 5th St S
Fairfield, MT 59436 1-800-796-4567 406-467-2535 (We have used our best efforts in collecting and preparing the information published herein. However, we do not assume, and hereby disclaim, any and all liability for any loss or damage caused by errors or omissions, whether such errors or omissions resulted from negligence, accident, or other causes.) ©2007 Cornerstone Publishing Group Inc. Trademarks: All brand names and product names used in this eNewsletter are trade names, service marks, trademarks or registered trademarks of their respective owners.
The Author is an individual who has composed material in the form of text, audio, video or other multimedia files to complete a life Story. If you write a Story you are considered the Author of that Story. The Author of the Story owns the Story. See our Terms of Use for more information on copyright ownership and publications on the internet.
Written by Travis Huinker on 3/22/2012 for PC
Independent games are known for their novel approaches to design by introducing bold and innovative gameplay concepts. In what can almost be considered a resurgence of the independent community, it can often be challenging to sort through the vast number of releases in search of polished and worthwhile titles. Even more difficult is the discovery of a game that fully embraces a unique idea and produces a result that hasn't been seen before in the industry. These notions can be applied to the recently released independent game, Waveform. Developed by Ryan Vandendyck and his small team at Eden Industries, Waveform tasks gamers with controlling a wave of light through a galaxy of planets.
At the beginning, only a slight amount of information is provided regarding the game's premise and overall purpose for navigating the light wave. Tutorials explain the controls and direct gamers toward their goal of reaching the Sun from Pluto, a journey which traverses over 100 levels through 11 different worlds. As further levels are reached, the goal of reaching the Sun expands into stopping a singularity that threatens to darken the entire galaxy. More elements of the story are revealed in later levels through victory screens that often contain amusing references to popular science fiction franchises and global game statistics.
The question that most will ask is exactly how one controls a wave of light through space. Fortunately, the process is quite simple and only requires the click of the mouse and movement in the correct direction. The wave of light continually glides across the screen with gamers simply tasked with directing its arc and overall shape. The nature of its shape will depend on the various curves and locations of objects that are placed throughout levels. To earn better completion rates and stars for unlocking bonus levels, the wave of light must be positioned upon orbs of light to gain points. The wave’s health points are its rings of light that can either be gained by navigating through them in levels or lost by hitting obstacles, such as dark matter and bombs. Each level is a constant balancing act between collecting points versus avoiding an unfortunate demise. Checkpoints are placed at the midway points in levels or before challenging areas to avoid wave-induced frustration.
Along with light orbs and obstacles, an array of objects and scenarios will need to be utilized to survive and gain higher scores. Mirrors, portals, black holes, and other objects or scenarios will affect the course of the light wave. Their close placement often requires a rapid change in the shape of the light wave to avoid obstacles and collect further points. The new objects and scenarios that are continually introduced in each of the game’s worlds provide an increasing scale of difficulty. The controls are never the cause of difficulty, but instead the practice and finesse required for quickly shaping the light wave. As players master the skill and gain more points, the wave will increase in speed resulting in split second decisions. It’s an interesting concept to have the game increase in difficulty as the player becomes more proficient with the wave of light.
With the increase in difficulty and the variety of objects to keep in mind in each level, gamers will most likely experience some frustration in trying to reach the end. A few levels required frequent restarts to understand their rhythm and how to properly shape the light wave. However, these minor spikes in difficulty gave way to moments of extreme satisfaction once the challenges had been mastered through repeated practice and memorization.
The level of quality on display in Waveform is top notch for an independent studio's first game release. The sleek space visuals and zen-like music create a relaxing atmosphere that’s hard to step away from after completing each level. The soundtrack composed by Scott McFadyen contains an assortment of tracks that fit perfectly along with the soothing motion of the wave and star-filled backdrops. Minimalism works well in Waveform’s favor by including only the necessary visual and sound elements for an enjoyable and mesmerizing experience.
Just as impressive as the game's presentation is the staggering amount of content, which will easily ward off any periods of boredom. After gamers complete the 100-plus levels, they will be able to revisit each level in a new game plus mode that adds new effects and scenarios. In addition, each of the game worlds offers a Deep Space mode that acts as an endless survival mode with the goal of reaching a high score on the Steam leaderboards. With 60 achievements as well, Waveform offers gamers a great amount of worthwhile content.
Waveform is available now on Steam for Windows PC. The concept of controlling a wave of light is executed flawlessly through beautifully rendered levels and a challenging, but addictive gameplay experience. Gamers looking for a truly unique and innovative experience shouldn’t hesitate to add Waveform to their list of must-play games this year.
Rating: 9.5 Exquisite | 计算机 |
Next-Gen Spam: Quality Over Quantity
Oct 04, 2013 (05:10 AM EDT) Read the Original Article at http://www.informationweek.com/news/showArticle.jhtml?articleID=240162247
The sustained drop in spam volumes since 2010 has coincided with a change in tactics. Anti-spam nonprofit Spamhaus says just 100 operations generate 80% of spam, and these specialists are moving away from indiscriminate, high-volume mailings and instead fine-tuning their campaigns. Spammers know, for instance, that users are less likely to open messages during the weekend, so they've reduced the volume of messages sent on Saturday and Sunday by 25% compared with the rest of the week. It's marketing 101 -- make messages timely and relevant to the audience, send them at the most opportune moment, improve click-through rates.
Meanwhile, spammers are counteracting anti-spam measures. Botnets can check if an individual member's IP address is blacklisted as a sender of spam and if so, assign a different task to that machine. They can spread a spam campaign across a wide range of IP addresses, so each address sends only a small number of emails, thus minimizing the chance of the spam being identified by the telltale sending of large volumes of messages from a single source.
And where once spammers had to hang around underground forums to gain access to those botnets, a shadow economy devoted to servicing the industry has sprung up. The advent of professional spammer services means that access to high-volume "bulletproof" email servers is only a search engine query away. The unique selling point of these companies is that they send email without the inconvenience of having services withdrawn over complaints about spam, and hence, serve customers for whom a takedown-resistant service is important. Such services are widely advertised, with transparent pricing plans and service levels. Spammers can even pay by credit card. These services operate without fear of legal reprisals -- and with a willingness to go on the attack.
Spammers Bite Back
In March, Spamhaus added the bulletproof hosting provider CyberBunker to its directory of suspect IP addresses; this list is widely used to block connections suspected of sending spam. Shortly after, a major distributed denial-of-service attack was launched against Spamhaus's systems. Using a list of poorly configured DNS servers, the attackers sent small DNS requests, spoofed to appear to originate from Spamhaus's IP addresses, seeking large amounts of DNS data. Think of an attacker using a small Post-it note to deliver a copy of the Yellow Pages to a victim's address. Ultimately, the attack failed, Spamhaus was able to continue its work, and two alleged perpetrators have been arrested and charged in relation to the crime. But such battles are expensive, and arrests are few and far between.
Meanwhile, spammers have become adept at hijacking breaking news as an effective method of engaging with users. In the aftermath of the Boston Marathon bombing on April 15, spammers launched two large-scale campaigns with fake news bulletins designed to prey on interest around the attack. Both sets of emails contained links to malicious websites that attempted to install malware. At its peak on April 17, Boston Marathon bombing-related spam comprised 40% of all spam messages delivered worldwide. This is unusual. Typically, spammers prefer to send spam in lower volumes to avoid triggering security alerts, but spammers are keen to make the most of a topic with broad interest but possibly a short lifespan.
When current news events aren't compelling enough, spammers are not beneath inventing their own news to entice users to open their messages. During the period of discussion regarding the possibility of international military action against Syria, spammers sent fake news updates announcing that bombings had already taken place. Again, these emails linked to malicious websites serving malware rather than the "full story."
Protecting Users
Teaching users not to click links in suspicious emails is a cornerstone of security training. However, even the most highly trained user will be tempted to open a message when faced with a convincing news report that is compelling and consistent with current events. Recipients are less likely to see emails that are relevant to their interests as suspicious, so make sure you regularly apprise employees of current spam tactics.
Still, as the spam industry professionalizes, expect deep analysis by spammers of the types of messages that users are likely to open and with which they're likely to interact. The less spam looks like spam, the more important it is to keep malicious messages from ever hitting user inboxes. When evaluating anti-spam options, look for an advanced threat intelligence network that can consider the context of a message, the reputation of the sender, the reputation of the sending IP address, and the nature of websites hosting any links within the email.
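As a rough illustration of how such signals might be combined, the sketch below scores a message from a few reputation inputs; the weights, threshold, and signal names are placeholders, not values from any real product.

```typescript
// Sketch: combine several reputation signals into a single spam verdict.
// The weights and threshold are illustrative, not tuned values.

interface MessageSignals {
  senderReputation: number;       // 0 (known bad) .. 1 (known good)
  sendingIpReputation: number;
  linkHostReputation: number;     // worst reputation among hosts linked in the body
  looksLikeBreakingNews: boolean; // contextual cue, e.g. template matches current lures
}

function spamScore(signals: MessageSignals): number {
  let score = 0;
  score += (1 - signals.senderReputation) * 0.35;
  score += (1 - signals.sendingIpReputation) * 0.25;
  score += (1 - signals.linkHostReputation) * 0.30;
  if (signals.looksLikeBreakingNews) {
    score += 0.10; // suspicious context nudges the score; it does not decide alone
  }
  return score;
}

function verdict(signals: MessageSignals): "deliver" | "quarantine" {
  return spamScore(signals) >= 0.6 ? "quarantine" : "deliver";
}

console.log(verdict({
  senderReputation: 0.2,
  sendingIpReputation: 0.4,
  linkHostReputation: 0.1,
  looksLikeBreakingNews: true,
})); // "quarantine"
```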
Plan for a few malicious messages being missed by anti-spam systems or accessed through other means, such as via personal webmail accounts. In these cases, the next defense is a Web scanning tool that blocks access to links that are actively being distributed in spam campaigns; even if a user has accessed a message, the device can prevent the link from being opened.
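A minimal sketch of a click-time link check is shown below; the blocklist contents, the URL normalisation rule, and the fail-closed choice for malformed URLs are illustrative assumptions.

```typescript
// Sketch: a click-time check that refuses to open links currently being
// distributed in known spam campaigns. The blocklist source is a placeholder.

const activeCampaignUrls = new Set<string>([
  "http://example-bad-host.test/full-story",
]);

function normalise(url: string): string {
  const u = new URL(url);
  return `${u.protocol}//${u.hostname}${u.pathname}`; // ignore query-string noise
}

function isBlockedLink(url: string): boolean {
  try {
    return activeCampaignUrls.has(normalise(url));
  } catch {
    return true; // malformed URLs are treated as unsafe
  }
}

console.log(isBlockedLink("http://example-bad-host.test/full-story?utm=123")); // true
console.log(isBlockedLink("https://example.com/newsletter"));                  // false
```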
Blocking websites based on reputation is particularly useful to foil gangs that use shady hosting companies to spread malicious content, not only by email but also via social networking or other techniques. Obfuscated malware can be particularly difficult to detect using traditional signature-based antivirus techniques.
And remember that all this expensive technology is useless if the email is delivered to a junk mailbox, where someone can click on messages that look like legitimate email. If you don't have enough confidence in the false-positive rate of your anti-spam system to prevent users from having access to blocked spam email, reconsider your strategy.
One thing is for certain: Spammers aren't getting any dumber. The spam community, and the ecosystem that supports it, will continue to evolve in terms of range and professionalism. Spammers already convincingly spoof legitimate news sources; the next step must be to convincingly spoof messages from a user's employer, friends and family. Only by keeping these messages away from intended recipients can we hope to stop feeding the beast. | 计算机 |
Endgame blog
Amazingly interesting words from the Endgame crew
Fractured Soul – A neverending story
Posted on November 8, 2011 by grant.davies
With all the indie studios starting up around Australia and the world these days, I thought it might be interesting to provide an insight into the potential difficulties of getting a game onto a store shelf. For other folks, this tale might be good for a healthy dose of schadenfreude.
Way back in 2003, Nick and I started Endgame with the vague idea of making original games our way. When the Nintendo DS was announced in 2004, it
seemed like a great opportunity to produce some innovative original game concepts. We wrote up 5 ideas that each took advantage of the DS in a unique way. We then pitched them to the handful of publishers that we knew. One of the concepts was a very early version of Fractured Soul – called Slidatron at the time. The high concept was a shmup game much like Ikaruga; except where Ikaruga flipped colours on the same screen, Slidatron was split across both screens of the DS.
Partly because of our lack of solid publisher contacts, partly because of the inauspicious beginnings that the DS was experiencing at the time, and partly because all we had was a bunch of words on a page, Slidatron didn’t get very far.
We were convinced that gamers would embrace the concept, but without any funding it was going nowhere fast. Distracted by other fee-for-service work, it lay dormant until the end of 2005, by which time we had enough cash in the bank to pay for a smattering of concept art, so we packaged it up into a funding proposal to show to Film Victoria.
Fortunately for us, Film Victoria saw the merit in the idea and approved a small grant for us to develop a prototype. In early 2006, we began full-steam development of this prototype with a view to showing it to publishers at E3 that year. Unfortunately, development of the prototype was delayed, thanks to an incompetent solicitor and a previous customer who was suddenly unable to pay for months of work. We were cash-starved and unable to move the project forward until we could access the Film Victoria grant money. This unfortunate combination of events compacted the development time until E3 and no doubt had a significant impact on the prototype.
Early on in development, we came to the realisation that this game could work in a much more interesting way as a platform game. It suddenly seemed obvious that this was the right direction to move the project forward: we had substantial experience in developing platform games, including seeing how the industry leaders develop platform games having recently worked with the source code to Rayman 3, and this spin on the platform game genre had never been done before. Luckily for us, Film Victoria again saw the merit in this idea and approved the changes.
We built the basic technology on the DS, and then set about designing some puzzle scenarios. The first puzzles were extremely encouraging – they were fun to play, and added a totally new dimension to the platform game genre – a genre of which we were avid fans.
With some rough puzzle designs on various pieces of paper (possibly napkins), we started to concentrate on what would ultimately form the prototype level. The big question we kept coming back to was: “how difficult should we make the prototype?” There’s no doubt that switch-screen-platform gameplay can be extremely challenging yet it could also be extremely simple. “How quickly should the game ramp up?” we asked each other, “and how difficult should it become?”
The conclusions we came to were, sadly, terribly wrong. We reasoned that the switching mechanic was what was awesome about Slidatron. It was the selling point. It’s what differentiated it from every other game that had ever been made before it. Therefore, we concluded that there was no point in holding back: we should hit the publishers with the (most difficult) puzzles that best showed off the mechanic in its most unique form, and we should hit them with these puzzles from the get go.
It was fundamentally flawed reasoning on two counts: first, we were omitting the crucial part of teaching the player our radically new gameplay mechanic and simply launching them into the deep end, and second, we made the hopelessly misguided assumption that employees of publishing companies had core gaming competence.
With that decision made, we had sealed our fate for E3 and the immediate beyond. Like the Titanic casting off from Southampton, we had a fatally flawed design, and we were headed for iceberg E3.
Putting aside this level design misstep, and a few other fairly minor design kinks, we knew we had a solid demo. For someone competent enough to play it, it showed a unique and fun gameplay mechanic, on top of robust technology, and some pretty decent art to go with it.
Still with limited publisher contacts, we pre-booked as many meetings as possible for the show (which amounted to probably 8 or 9) and prepared to do the awkward cold-calling dance that so many developers know only too well.
We worked up until 4am on the morning of the flight to the US – grabbed 2 hours sleep, then boarded a flight to LA.
The range of responses at E3 was broad, though even with our poorly designed prototype, we received some overwhelmingly good responses – so goo | 计算机 |
NATO and cyber defence
Against the background of increasing dependence on technology and web-based communications, NATO is advancing its efforts to confront the wide range of cyber threats targeting the Alliance’s networks on a daily basis. NATO’s Strategic Concept and the 2012 Chicago Summit Declaration recognised that the growing sophistication of cyber attacks makes the protection of the Alliance’s information and communications systems an urgent task for NATO.
In June 2011, NATO adopted a new cyber defence policy and the associated Action Plan, which sets out a clear vision of how the Alliance plans to bolster its cyber defence efforts. This policy reiterates that the priority is the protection of the NATO network but that any collective defence response is subject to a decision by the North Atlantic Council, NATO’s principal political decision-making body.
The revised policy offers a coordinated approach to cyber defence across the Alliance. It focuses on the capability to better detect, prevent and respond to cyber threats against NATO’s networks. All NATO structures will be brought under centralised cyber protection to deal with the vast array of cyber threats it currently faces, integrating these defensive requirements into the NATO Defence Planning Process. This way, Allies will ensure that appropriate cyber defence capabilities are included as part of their planning to protect information infrastructures that are connected to the NATO network and critical for core Alliance tasks. The revised cyber defence policy also stipulates NATO’s cooperation with partner countries, international organisations, the private sector and academia.
Principal cyber defence activities
Assisting individual Allies
NATO’s top priority on cyber defence is protecting the communication systems owned and operated by the Alliance. The protection of national critical infrastructures remains a national responsibility, which requires nations to invest resources in developing their own capabilities. NATO is helping Allies in their efforts to build up cyber defences by sharing information and best practices and conducting cyber defence exercises in order to develop the necessary expertise to compliment the related technology. Allies are still discussing how NATO should further facilitate this collective effort and what support could be provided to Allies, if requested.
NATO requires a reliable and secure supporting infrastructure. To this end, it will work with national authorities to develop principles and criteria to ensure a minimum level of cyber defence where national and NATO networks interconnect. To achieve this, NATO will identify its critical dependencies on the Allies' national information systems and networks and will work with Allies to develop common minimum security requirements.
Integrating cyber defence into the NATO Defence Planning Process
In accordance with the Lisbon mandate, cyber defence began its integration into the NATO Defence Planning Process (NDPP) in April 2012. NDPP is a crucial tool to provide a framework within which national and Alliance defence planning activities can be harmonised to meet agreed targets in the most effective way.
Cyber defence has also been integrated into NATO’s Smart Defence initiative, endorsed at the 2012 Chicago Summit. Smart Defence is a new mindset, enabling countries to work together to develop and maintain capabilities they could not afford to develop or procure alone, and to free resources for developing other capabilities. To draw attention to models for ‘early engagement’ with industry by NATO and its constituent bodies, the NATO Industrial Advisory Group (NIAG) provided in 2012 an industry perspective on how a NATO-Industry Partnership can be achieved (see below).
Research and training
According to the revised policy, NATO will accelerate its efforts in training and education on cyber defence through its existing schools and the cyber defence center in Tallinn, Estonia. The Cooperative Cyber Defence Centre of Excellence (CCD CoE) in Tallinn, which was accredited as a NATO CoE in 2008, conducts research and training on cyber defence and has cyber defence staff, including specialists from the sponsoring countries. Further information on the CCD CoE can be found at http://www.ccdcoe.org/
The NATO Cyber Coalition Exercise (CC13) in November 2013 offered a good opportunity to exercise NATO crisis management and information-sharing procedures.
Cooperating with partners and international organisations
As cyber threats defy state borders or organisational boundaries, cooperation with partners and international organisations including the European Union (EU) on cyber defence is an important element of the revised NATO policy. Informal staff-level talks regarding cyber defence have continued with the EU. Engagement with partners is tailored and based on shared values and common approaches, with an emphasis on complementarity and non-duplication. Cyber defence goals and benchmarks have been incorporated into approximately 75 per cent of the bilateral cooperation programmes that have been agreed with individual Partners. Five partner nations (Austria, Finland, Ireland, Sweden and Switzerland) participated in CC13. The cyber defence staff of the European Union and New Zealand observed.
Further cyber defence engagement with partner countries and international organisations in areas such as crisis management, best practices, education, training and exercises will be conducted upon decision by Allies on a case-by-case basis.
Cooperating with industry
Developing genuine partnership with industry is broadly recognised as vital in ensuring effective cyber defence both within NATO countries and also for NATO.
In 2012, the NIAG examined how the private sector can best assist NATO in carrying out its responsibilities for cyber defence, particularly concerning NATO’s role in coming to the aid of member countries subject to a potential or actual cyber attack. It provided an industry perspective on how an enhanced, sustainable NATO-Industry Partnership could be achieved across a wide range of cyber-defence related activities, to include information exchange, crisis management, planning and exercises. In 2013 and 2014, the NIAG will conduct an in-depth study on actions NATO should take in collaboration with industry to facilitate NATO cyber defence during crisis. The NIAG is a high-level consultative body of senior industrialists of NATO member countries, acting under the Conference of National Armaments Directors (CNAD), and plays an important role in advising the CNAD on key issues regarding armaments cooperation policy and the industrial and technological base of the Alliance.
Coordinating and advising on cyber defence
The NATO Policy on Cyber Defence will be implemented by NATO’s political, military and technical authorities, as well as by individual Allies. According to the 2011 revised policy, the North Atlantic Council provides the high-level political oversight on all aspects of implementation. The Council will be apprised of major cyber incidents and attacks and exercises principal decision-making authority in cyber defence related crisis management. The Defence Policy and Planning Committee provides oversight and advice to Allies on the Alliance’s cyber defence efforts at the expert level. At the working level, the NATO Cyber Defence Management Board (CDMB) has the responsibility for coordinating cyber defence throughout NATO civilian and military bodies. The NATO CDMB comprises the leaders of the political, military, operational and technical staffs in NATO with responsibilities for cyber defence. This body operates under the auspices of the Emerging Security Challenges Division in NATO HQ (i.e. Chairmanship and staff support).
The NATO Consultation, Control and Command (NC3) Board constitutes the main body for consultation on technical and implementation aspects of cyber defence.
The NATO Military Authorities (NMA) and the NATO Communications and Information (NCI) Agency bear the specific responsibilities for identifying the statement of operational requirements, acquisition, implementation and operating of NATO’s cyber defence capabilities.
Lastly, the NCI Agency, through its NATO Computer Incident Response Capability (NCIRC) Technical Centre, is responsible for the provision of technical and operational cyber security services throughout NATO. NCIRC is a two-tier functional capability where the NCIRC Technical Center constitutes NATO’s principal technical and operational capability and has a key role in responding to any cyber aggression against the Alliance. It provides a means for handling and reporting incidents and disseminating important incident-related information to system/ security management and users. It also concentrates incident handling into one centralised and coordinated effort, thereby eliminating duplication of effort. The NCIRC Coordination Centre is located in NATO Headquarters in Brussels, Belgium. It is a staff element responsible for coordination of cyber defence activities within NATO and with nations, staff support to the CDMB, planning of an annual cyber coalition exercise and cyber defence liaison with international organisations such as the European Union, the Organization for Security and Co-operation in Europe (OSCE) and the United Nations/International Telecommunication Union (UN/ITU ).
Context and evolution
Although NATO has always been protecting its communication and information systems, the 2002 Prague Summit first placed cyber defence on the Alliance’s political agenda. Building on the technical achievements put in place since Prague, Allied leaders reiterated the need to provide additional protection to these information systems at their Riga Summit in 2006.
After the cyber attacks against Estonian public and private institutions in April and May 2007, the NATO Defence Ministers at a meeting in June 2007 agreed that urgent work was needed in this area. In the months to follow, NATO conducted a thorough assessment of its approach to cyber defence, and the findings of the assessment recommended specific roles for the Alliance as well as the implementation of a number of new measures aimed at improving protection against cyber attacks. It also called for the development of a NATO cyber defence policy. In the summer of 2008, the war in Georgia demonstrated that cyber attacks have the potential to become a major component of conventional warfare. The development and use of destructive cyber tools that could threaten national and Euro-Atlantic security and stability represented a strategic shift that had increased the urgency for a new NATO cyber defence policy in order to strengthen the cyber defences not only of NATO Headquarters and its related structures, but across the Alliance as a whole.
On 8 June 2011, NATO Defence Ministers approved a revised NATO Policy on Cyber Defence, a policy that sets out a clear vision for efforts in cyber defence throughout the Alliance, and an associated Action Plan for its implementation. In October 2011, ministers agreed on details of the Action Plan. This revised policy offers a coordinated approach to cyber defence across the Alliance with a focus on preventing cyber attacks and building resilience. In February 2012, a €58 million contract was awarded to establish an upgrade of the NCIRC, to be fully operational by autumn 2013. A Cyber Threat Awareness Cell is also being set up to enhance intelligence sharing and situational awareness. In April 2012, cyber defence began its integration into the NATO Defence Planning Process (NDPP). Relevant cyber defence requirements will be identified and prioritised through the NDPP.
At Chicago in May 2012, heads of state and government reaffirmed their commitment to improve the Alliance’s cyber defences by bringing all of NATO’s networks under centralised protection and implementing a series of upgrades to the NCIRC.
On 1 July 2012, against the background of the NATO Agencies Reform, which is part of an ongoing NATO reform process, the NATO Communications and Information (NCI) Agency was established. The agency will facilitate bringing all NATO bodies under centralised protection and provide significant operational benefits and long-term cost savings.
In April 2013, a critical implementation milestone was met when the core network defence management infrastructure and analytic capability was installed at the NCIRC Technical Centre in Mons, Belgium.
On 4 June 2013, in their first-ever meeting dedicated to cyber defence, NATO Defence Ministers agreed that the Alliance’s NCIRC should have its upgrade completed by autumn 2013. This includes the establishment of Rapid Reaction Teams to help protect NATO’s own systems. Defence ministers also agreed to continue the discussions at their next meeting in October 2013 on how NATO can support and assist Allies who request assistance if they come under cyber attack.
On 22 October 2013, NATO Defence Ministers concluded that the Alliance is on track in upgrading its ability to protect NATO’s networks.
Last updated: 22-Oct-2013 13:29 | 计算机 |
The skill PR pros will need in the future: HTML5
By Shel Holtz | Posted: November 16, 2012
You have a smartphone and maybe a tablet, both of which are loaded with apps.
There's a whole economy emerging around app development, and most apps are built specifically for an operating system, mainly iOS for Apple products, Android, and Windows. But a shift away from these "native" apps is inevitable. That shift will increasingly involve the use of HTML5.
Just as we communicators—once skeptical that we needed to know any kind of code at all—got to know at least the basics of HTML as the Web's popularity grew, we'll need to familiarize ourselves with this new standard.
HTML5 opens a lot of possibilities for traditional Web development. For example, it dispenses with the need for plugins to view video. But it’s in the mobile arena where HTML5 will have a real impact.
Developers are embracing responsive Web design as one fundamental way to ensure pages look right regardless of the screen size on which it’s displayed. HTML5—along with CSS3—is at the heart of the concept.
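Most of a responsive layout lives in CSS media queries, but the small sketch below shows the script-side hook that pairs with one; the breakpoint value and class name are assumptions for illustration.

```typescript
// Sketch: reacting to the same breakpoint that a CSS media query would use,
// e.g. to swap a desktop navigation menu for a compact one on small screens.

const smallScreen = window.matchMedia("(max-width: 600px)");

function applyLayout(matches: boolean): void {
  document.body.classList.toggle("compact-layout", matches);
}

applyLayout(smallScreen.matches);                   // set the initial state
smallScreen.addEventListener("change", (event) => { // update when the viewport crosses the breakpoint
  applyLayout(event.matches);
});
```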
But apps are where HTML5 development will have its greatest impact. Rather than pay for development of each version of an app, you only need to create it once. There’s also no need to push out updates, since any changes you make to the code kick in as soon as a user opens the app.
There are drawbacks, of course. Users need to have an Internet connection to run a Web app; they don’t work in airplane mode. Native apps also tend to look and feel slicker. And the standard hasn’t been finalized; work on various components is still underway with a final recommendation not even due until 2014, even as more and more developers apply it to their current projects.
Still, its dominance is inevitable for reasons that go beyond its being the primary alternative to native apps. HTML5 also offers geolocation functionality without using GPS and multiple video streams (Apple limits its devices to one video at a time), among other advantages. More than a third of the world's top 100 websites use HTML5.
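As a quick illustration of the geolocation point, the sketch below asks the browser for a position without caring whether it comes from GPS, Wi-Fi, or cell towers; the callback names are assumptions made for the example.

```typescript
// Sketch: HTML5 geolocation. The browser decides how to locate the device
// (Wi-Fi, cell towers, IP), so no GPS hardware is required.

function showPosition(onResult: (lat: number, lon: number) => void): void {
  if (!("geolocation" in navigator)) {
    console.warn("Geolocation is not available in this browser.");
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (position) => onResult(position.coords.latitude, position.coords.longitude),
    (error) => console.warn("Position unavailable:", error.message),
  );
}

showPosition((lat, lon) => console.log(`Reader is near ${lat.toFixed(2)}, ${lon.toFixed(2)}`));
```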
A lot of people talk trash about HTML5. Native app developers have a vested interest in dismissing its potential. Technology research firm Gartner—whose track record doesn’t rate taking any of its predictions for granted—believes widespread HTML5 adoption is a decade away. There was a lot of buzz when Facebook CEO Mark Zuckerberg called it a “mistake” to build the Facebook app on HTML5, but most people took the statement out of context. What he actually said was:
“It’s not that HTML5 is bad. I’m actually, on long-term, really excited about it. One of the things that’s interesting is we actually have more people on a daily basis using mobile web Facebook than we have using our iOS or Android apps combined. So mobile web is a big thing for us.”
Despite what you may hear from detractors, adoption of HTML5 by app developers is continuing apace. A recent survey of more than 4,000 developers shows outside influences aren’t slowing developers from embracing the standard. The study, from Kendo UI, showed more than half of developers believe HTML5 is important for their jobs right now; another 31 percent believe it’ll be important in the next 12 months.
Some notable organizations are opting for Web apps. Sabre, the travel reservation company, has switched its TripCase app from native versions for three platforms (including BlackBerry) to one that is based mostly on HTML5 and JavaScript. (Nick Heath goes deep on the TripCase shift to HTML5 in a TechRepublic article.)
Search engine optimization (SEO) is among the reasons communicators will need to pay attention to HTML5. Discoverability is a vital element of any online communication, so communicators have had to pay attention to SEO. HTML5 adds some complexity, according to Gerald Hanks, writing for Webdesigner Depot.
The content inside new markup tags for multimedia content (such as menus, audio and video) can boost your rankings. New values for the “rel” attribute of link tags should also “provide greatly improved search results,” Hanks says. You’ll get the benefit of these enhancements only if you know enough to incorporate them into your writing or issue appropriate instructions to your web developers.
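A small sketch of how a communicator (or their developer) might audit a page for these features is shown below; the tag list and the way rel values are collected are illustrative choices, not Hanks's recommendations.

```typescript
// Sketch: a quick audit of the HTML5 features the article ties to search
// visibility - semantic sectioning/media tags and rel values on links.

const semanticTags = ["article", "section", "nav", "figure", "audio", "video"];

function auditPage(doc: Document): void {
  for (const tag of semanticTags) {
    console.log(`${tag}: ${doc.getElementsByTagName(tag).length} element(s)`);
  }

  const relValues = new Set<string>();
  doc.querySelectorAll("a[rel], link[rel]").forEach((el) => {
    el.getAttribute("rel")?.split(/\s+/).forEach((value) => relValues.add(value));
  });
  console.log("rel values in use:", [...relValues].join(", ") || "none");
}

auditPage(document);
```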
Ultimately, communicators would be wise to bet on the Web in general, which has shown remarkable resilience against competing approaches to online access.
While some might argue that knowing the guts of HTML5 is a developer’s job, it has always been important for communicators to understand the basics. Back in the print days, if you didn’t know what presses were capable of and how they worked, you had a much harder time managing projects. So it is with HTML5, which will be the foundation for much of what we undertake online as we make the transition to mobility.
There are several good primers that can provide you with a basic understanding of HTML5. Smashing Magazine has an HTML5 Cheat Sheet. The World Wide Web Consortium created an exhaustive HTML5 reference guide. I enjoyed Sencha’s HTML5 Primer for the Overwhelmed. And Ants Magazine curated a list of 50 beginner tutorials for HTML5.
Get up to speed. Knowing the building blocks of HTML5 will soon be as important as it was 25 years ago to know what it meant to go four-up, two over two with perfect binding, and a blind emboss.
Shel Holtz is principal of Holtz Communication + Technology. A version of this story first appeared on his blog a shel of my former self. | 计算机 |
Flip HTML5: Self Publishing Platform Now Available With Unlimited Cloud Storage
Flip HTML5, a self digital publishing solution, now provides free cloud storage to users who do not have their own server.
Flip HTML5 Self Publishing Solution
(PRWEB) May 25, 2014 A self publishing platform, Flip HTML5 is now available with an Upload Online Service for independent publishers who want to share their content with readers worldwide. Users can upload as many as 500 books (100G) per month at no cost. Once they create a flip book in their Flip HTML5 account, customers can easily view it online using a PC, Mac, iPhone, iPad, or Android device.
The digital brochure publishing solution also makes it possible to share a digital brochure stored on the server with others via email, Facebook and Twitter. Publishers can embed the book on a web page by using simple codes if they like. A benefit of the Flip HTML5 self publishing solution is that users don't need a third-party tool such as FTP. Data protection is guaranteed courtesy of the Amazon S3 service, so the flipbook is always available.
All books uploaded to the free cloud storage are showcased on the Flip HTML5 website. A large number of readers are attracted to the website every day. Flipbooks can be set to be public or private; other management tools allow book details to be edited, for example. More storage space is available through four different payment plans, from the Pro Plan of 2000 books per month to the Enterprise Plan supporting an unlimited number of books per month.
With the digital publishing platform, it’s so easy to self publish articles, newspapers, reports, brochures, magazines and e-publications. Create a Flip HTML5 account, upload a PDF file, and then the platform will help to convert it to HTML5 flipbook format. Before “Publish”, users can edit the title and description details to make the book SEO friendly. It is a very simple process that does not require any expertise to complete.
In addition to the convenience of PDF conversions, full customization, branding, multimedia, and mobile compatibility provided by Flip HTML5, the Upload Online Service and free cloud storage are among its most outstanding selling points. For more information on the software and its great features, visit http://fliphtml5.com.
About FlipHTML5 Software Co., Ltd.
FlipHTML5 Software Co., Ltd. is a China-based company established in 2010. It provides digital self publishing tools and is now the leading provider of such software in the world. The company’s products are cost-friendly and ideal for both commercial and personal customers.
FlipHTML5 +86 13119535729
Flash Page Flipping Magazine
Special Publication 800-12: An Introduction to Computer Security - The NIST Handbook
Section I. Introduction & Overview
Chapter 1: INTRODUCTION
1.1 Purpose
This handbook provides assistance in securing computer-based resources (including hardware, software, and information) by explaining important concepts, cost considerations, and interrelationships of security controls. It illustrates the benefits of security controls, the major techniques or approaches for each control, and important related considerations.1
The handbook provides a broad overview of computer security to help readers understand their computer security needs and develop a sound approach to the selection of appropriate security controls. It does not describe detailed steps necessary to implement a computer security program, provide detailed implementation procedures for security controls, or give guidance for auditing the security of specific systems. General references are provided at the end of this chapter, and references of "how-to" books and articles are provided at the end of each chapter in Parts II, III and IV.
The purpose of this handbook is not to specify requirements but, rather, to discuss the benefits of various computer security controls and situations in which their application may be appropriate. Some requirements for federal systems2 are noted in the text. This document provides advice and guidance; no penalties are stipulated.
1.2 Intended Audience
The handbook was written primarily for those who have computer security responsibilities and need assistance understanding basic concepts and techniques. Within the federal government,3 this includes those who have computer security responsibilities for sensitive systems. For the most part, the concepts presented in the handbook are also applicable to the private sector.4 While there are differences between federal and private-sector computing, especially in terms of priorities and legal constraints, the underlying principles of computer security and the available safeguards -- managerial, operational, and technical -- are the same. The handbook is therefore useful to anyone who needs to learn the basics of computer security or wants a broad overview of the subject. However, it is probably too detailed to be employed as a user awareness guide, and is not intended to be used as an audit guide.
Definition of Sensitive Information
Many people think that sensitive information only requires protection from unauthorized disclosure. However, the Computer Security Act provides a much broader definition of the term "sensitive" information:
any information, the loss, misuse, or unauthorized access to or modification of which could adversely affect the national interest or the conduct of federal programs, or the privacy to which individuals are entitled under section 552a of title 5, United States Code (the Privacy Act), but which has not been specifically authorized under criteria established by an Executive Order or an Act of Congress to be kept secret in the interest of national defense or foreign policy.
The above definition can be contrasted with the long-standing confidentiality-based information classification system for national security information (i.e., CONFIDENTIAL, SECRET, and TOP SECRET). This system is based only upon the need to protect classified information from unauthorized disclosure; the U.S. Government does not have a similar system for unclassified information. No governmentwide schemes (for either classified or unclassified information) exist which are based on the need to protect the integrity or availability of information.
1.3 Organization
The first section of the handbook contains background and overview material, briefly discusses threats, and explains the roles and responsibilities of individuals and organizations involved in computer security.
Dota 2 | Table of Contents | Walkthrough
Laning
Warding
Neutral Creep Bestiary
DotA vs. Dota 2
Diretide
Greeviling
Radiant Strength
Radiant Agility
Radiant Intelligence
Dire Strength
Dire Agility
Dire Intelligence
MOBA / Action RTS
Dota 2 Cheats for PC
Dota 2 is a multiplayer action RTS game developed and published by Valve Corporation in 2012 for Windows. It is, more precisely, the main example of a new type of game: the MOBA, or Multiplayer Online Battle Arena. It is an indirect sequel to the original Defense of the Ancients (DotA) mod for Warcraft III: The Frozen Throne. It is developed by Valve as a stand-alone game with help from IceFrog, the most recent developer of the original DotA. Even though it is a sequel, most of the mechanics are taken straight from DotA.
Dota is now a trademark of Valve, though it is now just a word rather than an acronym for Defense of the Ancients. Therefore, this guide refers to the original mod as DotA, and to the game itself as "Dota 2", without the capital "A".
The game was released in 2013 and is free to download for Steam members.
Dota 2 takes place on a battlefield where two competing forces struggle for dominance. Players take on the role of powerful heroes that must level up and earn gold to become powerful enough to defeat the rival forces. A typical game involves five players on two teams. Each player gets to choose from one of the 103+ heroes available in the game. The goal of the game is to destroy the enemy Ancient, a large building defended by several towers, which each team must destroy first.
Retrieved from "http://strategywiki.org/w/index.php?title=Dota_2&oldid=683373". Categories: Guides at completion stage 2 | 2013 | Games | Valve Corporation | Action | RTS | Multiplayer | Windows. This page was last modified on 29 October 2013, at 14:24.
Linux CD turns Opteron PC into "gaming console on steroids"
Posted by: jonconley
Linux CD turns Opteron PC into "gaming console on steroids" - 10/02/03 12:35 AM
A new 64-bit Linux CD can instantly turn an AMD Opteron-equipped PC into the ultimate gaming console, according to Super Computer Inc. (SCI). The company has created a distribution of the popular America's Army multi-player strategy game on a bootable Linux CD that it says was developed in partnership with AMD, nVidia, and the US Army. According to SCI, the GameStorm CD boots directly into a gaming-console-like environment that maximizes hardware access for the game software and cuts out legacy operating system overhead, resulting in the feeling of "a gaming console on steroids."
SCI says its GameStorm technology fits onto a single CD and essentially turns the PC into an embedded Linux based "console-like" gaming system. The Linux OS scans the hardware, loads a custom distribution of 64-bit embedded Linux, and then runs the game software. The experience for the end-user is fast and powerful game playing that boots in under one minute, without the usual overhead from the legacy operating systems traditionally used in the gaming industry, SCI claims.
"It feels like a gaming console on steroids and even allows for online access so you can connect to online game servers for multi-player action," said Jesper Jensen, CEO of Super Computer, Inc. "With a pure 64-bit environment and no overhead, SCI has created a powerful single-CD showcase for both AMD and GameStorm technology!"
SCI's first GameStorm title, America's Army, originally debuted on July 4, 2002, becoming one of the most popular games online, according to SCI. The Army has recorded more than 1.6 million registered user accounts with more than 1 million players completing basic training. Gamers have played more than 130 million missions and the average number of completed missions per day is 450,000. "The fact that America's Army is available in 64-bit on the GameStorm CD allows gamers to get a taste of the next generation of gaming just by inserting a CD and powering up the computer," said Major Bret Wilson, Operations Officer for America's Army.
"With the AMD Athlon 64 processor and GameStorm technology, AMD is able to showcase a fully-integrated 64-bit environment that delivers performance and realism to the most demanding gamers," said Tim Wright, director, desktop marketing, AMD Computation Products Group. "AMD64 will revolutionize the gaming market by delivering immersive super-realistic environments."
Earlier this year, Super Computer Inc. unveiled what was claimed to be the world's first AMD Opteron processor-based gaming server cluster, featuring U.S. Army's "America's Army Game," at the 2003 Electronic Entertainment Expo. Company Marketing Manager Jay Majumdar says America's Army on GameStorm will be distributed free by AMD with Opteron-equipped PCs, and that the company is now working on porting several more 32-bit and 64-bit games to the GameStorm platform. Majumdar notes that Army recruiters will use the CD during recruiting events. "They can run the game on a floor model at Best Buy, and leave the hard drive untouched," he says. View article here @ LinuxDevices.com
The Elder Scrolls Online Impressions
Written by Travis Huinker on 2/7/2014 for PC
With the ever-increasing number of massively multiplayer online games moving to free-to-play, The Elder Scrolls Online stands against the crowd with a traditional subscription payment model. The question that remains is if the Elder Scrolls title can convince gamers to support what most consider a dying payment model. Fortunately, ZeniMax Online provided members of the press with an extended look at The Elder Scrolls Online beginning Friday, January 31. During the full weekend and partial week of gameplay with my Wood Elf Dragonknight character, I developed a love and hate relationship while exploring the vast lands of Tamriel. In this article, I'll discuss what I ultimately think worked and what didn't work quite as well during my adventures.
What Worked
Most importantly, The Elder Scrolls Online feels like, well, an Elder Scrolls game. Ranging from its presentation style to the narrative and its quests, the game incorporates enough elements from past games to be recognizable while also introducing new gameplay concepts. The fan service is ever-present from completing quests for the Fighters and Mages guilds to the instantly recognizable voice work of past actors from Skyrim and Oblivion. The quality of the game's narrative and its quests in particular were surprisingly well designed and unique from one another, most of which move away from the genre traditions of fetch-these-items or slay-those-monsters quest types. One quest in particular had me solving a murder mystery by collecting evidence and questioning other characters. Even though the traditional genre quests still exist in the game, the portions of the narrative that I've played thus far repeatedly introduced new locations and objectives that kept the gameplay continually interesting.
Also in the tradition of past Elder Scrolls games is the in-depth customization of player characters. All of the series' traditional races are included with returning customization options such as hair type and face tattoos, while also introducing a batch of new sliders ranging from full body tattoos to head adornments. The high level of customization should solve the common issue that plagues other MMOs in that most characters running around the world look awfully similar to one another. Another aspect of the game's character design that I found refreshing was that player characters blend together well with the non-playable characters that inhabitant Tamriel. Even higher-level characters with fancier armor never looked as if they seemed out of place standing among non-playable inhabitants. This in part can be contributed to the well-designed armor and clothing pieces that are especially lore friendly.
While most MMOs suffer from information overload in their various interface and menu elements, The Elder Scrolls Online fortunately takes many cues from Skyrim in employing an interface that only contains the necessary components. Nearly every element of the menu is easy to use straight from the beginning without much tutorial explanation. Skyrim veterans (PC players who use the SkyUI mod in particular) will feel right at home as everything is simple to use and scales well to specific resolutions. The simplified menu system also makes locating group members a quick and painless process as the world map always indicates their current location. Even with the game's fast travel system, I opted to travel by foot for most of my quests to discover new locations in the world. The map interface can also be switched between various zoom levels that range from the entire continent of Tamriel to the detailed area view of a particular region.
Combat in an MMO can either be an entertaining gameplay feature or simply serve as another tedious function for progressing in the game. Fortunately, The Elder Scrolls Online incorporates an assortment of elements into the combat system from light and heavy weapon attacks to blocking and interrupting enemy attacks. I had to break away from the genre tradition of simply standing in one spot and repeatedly clicking the attack button as my character would quickly die during enemy encounters. The timing of both attacks and blocks is crucial in battles, especially when various spells and special attacks are added into the mix. I particularly enjoyed the Dragonknight's special class attacks that included a fiery chain that could pull enemies toward my character and another that temporarily added dragonscale armor for increased defense. I'm looking particularly forward to experimenting with the game's character classes and their personalized attacks and spells.
What Didn't Work
While the press weekend for the beta contained far fewer players than there will be with the game's upcoming launch, it was obvious that many gameplay aspects still require extensive balance and difficulty revisions to ensure an enjoyable experience for both solo and group players. For my initial hours with the beta I felt confident in completing the various quests, but soon had to seek the help of groups to overcome a few tougher boss encounters. This issue was made worse with the odd balance of my character's leveling progression in relation to the amount of available quests for earning experience. I hit an experience road block on a few occasions in which I wasn't able to locate additional quests that were tailored to my character's current level. I'm hopeful that upon the game's release there will be a wider selection of quests as well as a better balanced progression of character levels.
Recent Elder Scrolls games have included the option to instantly switch between first and third-person views, both of which have their groups of advocates for the use of one perspective over the other. The addition of a first-person view to The Elder Scrolls Online was announced at a later date in the game's development as many fans advocated for its inclusion. Unfortunately, the current first-person view is best described as clunky and impractical during actual gameplay. This really didn't come as a surprise when considering that the MMO genre has always preferred the third-person view due to more dynamic and unpredictable gameplay elements. While not to say that the first-person view is completely useless, it just makes for a far more constricting view of both your character and the gameworld.
Some of the bugs I experienced on a few occasions ranged from falling through the ground to unresponsive combat and certain sound effects that would stop playing. I also experienced some odd timing of enemy mob respawns that often seemed inconsistent and unexpected when enemies would randomly appear on top of my character. Fortunately, the game is still in its beta stage and hopefully with more testing these issues will be solved before release.
Final Thoughts
Ultimately, I am optimistic for the launch in April after my experiences from the game's current beta stage. While there is still an array of features that require additional polish or further revision, the majority of the content is on par with past Elder Scrolls games in regards to both the gameplay and narrative. The Elder Scrolls name after all is simply a title and won't be able to solve gameplay issues on merit alone. One true indicator of any game, especially in regards to MMOs, is the urge of returning to the game and progressing just a little bit further. The Elder Scrolls Online was no exception as I found myself continually wanting to further level my Wood Elf character as well as explore more of Tamriel, which above all else is the core selling point of the series. Check back next Friday, February 14 for part two of our Elder Scrolls Online coverage which will cover the game's player-versus-player content.
The Elder Scrolls Online will be available on April 4 for Windows PC and Mac, and later in June for PlayStation 4 and Xbox One. * The product in this article was sent to us by the developer/company for review. | 计算机 |
Top 10 Tumblr Music Sites
Kick out the jams with these cutting-edge music blogs
Whether you’re chasing down the next hot band, an old favorite song or an interesting twist on a musical genre, you can find it on Tumblr. The microblogging platform has become a beacon of creatively curated sites covering every angle of finding, enjoying and dissecting songs and artists.
But with so much good stuff to choose from, how can you find the best Tumblr music sites for your specific sonic fix? We put our ears to the ground and came back with this guide of the top 10 Tumblr music sites for diehard fans.
Make some noise
Copycats: Out of the top 10 Tumblr music sites listed, Copycats provides one of the most interesting paths to discovering "new" music. The site's content consists exclusively of artists covering other artists, remixes and mash-ups. Recent posts included Gotye and The Little Stevies covering Paul Simon's "Graceland" and an inspired mash-up of Outkast/White Stripes' "Blue Orchid."
FreeIndie: Every few days, FreeIndie posts three perfectly legal downloads from an independent artist they think might interest their readers. These guilt-free pleasures are just the thing to jazz up the soundtrack of your life. (Recently featured band Tiger Waves would be perfect for your "lying out by the pool" mix.)
2N: Pronounced "tune," this site isn't quite as prolific as FreeIndie (only one song posted per week), and the focus is more about simple exposure rather than providing a free download. But 2N consistently posts deserving songs. It may be something new or old, highly regarded or completely under the radar; the only rule is that it's a song worth listening to. Even accounting for subjective tastes, 2N nearly always hits the mark. If you're looking to get hooked on 2N, recent postings from Grimes, LCD Soundsystem, Sharon Van Etten, College and Phantogram should do the trick.
One Week One Band: 2N and FreeIndie can get you started with a few new tracks, but if you want the full backstory on a band or artist you've just discovered, One Week One Band is the Tumblr for you. Each week, a trusted music aficionado will showcase an artist or musician that she or he feels is important for you to discover. It may be someone you've never heard of, or it could be an eye-opening history lesson involving a musician that you've known and loved for years.
Private Noise: Tumblr is as much about social networking as it is about blogging, and Private Noise is the perfect example of that. This site features "person on the street" photos of people listening to music, followed by a short interview with the subject of the photo explaining what they're listening to and why, along with a link to the song they named.
Rock & Roll Tedium: This site collects normal people's tales of banal, asinine run-ins with famous rock stars. Don't worry; it's funnier than it sounds. And, it totally disproves the notion that rock stars are doing crazy, wild things every second of the day. Sometimes, Thom Yorke just goes for a jog.
Break Up Your Band: Shockingly for those of us who grew up in the era, the 90s are back in style; and, writer John Frusciante (The Onion News Network, Cracked.com) does a fine job with this Tumblr dedicated to highlighting the best, worst and weirdest moments in 90s music history. Oh, and for all you Red Hot Chili Peppers fans out there: No, it's not that John Frusciante.
Lastly, for top genre-specific music Tumblr sites, give these a spin:
Hip-Hop Cassette: Great hip-hop tracks, new and old, to keep your head ringin’.
Both Kinds of Music: “Both kinds” refers to Country & Western, and as this site likes to point out, “This ain't your Dad's country music. It's your Granddad's!” Think Waylon, Willie, Hank and Johnny.
Holy Soul: The name says it all: a digital bible of the greatest soul music ever recorded. If you visit one Tumblr-music site today, make it this one.
Have music, will travel
The best thing about all the new music you’ll discover through these Tumblrs is that you can load it all on your favorite mobile device and take it anywhere you go. Just make sure your mobile security is up to date so that malware and viruses don’t bring your Tumblr-inspired dance party to a screeching halt.
By Jamey Bainer
Metroid Prime 2: Echoes Second Opinion
Andrew is entirely correct in saying that the first Metroid Prime was Nintendo's success story for the current generation. It defied all odds by being a truly great experience in the face of so many obstacles stacked against it, stuffing crow in the mouth of everyone (including myself) who thought that bringing Samus Aran to living, breathing 3D was an impossible task. On the subject of Echoes, however, Andrew and I couldn't have more diverging opinions. Where he has appreciation and tolerance, I have nothing but impatience and scorn. Simply put, Metroid Prime 2: Echoes was far and away the most tedious and inspiration-free game that I actually bothered to finish in 2005.
Instead of seeing Echoes as upholding and consolidating a legacy, I see it as a blatant cash-in on Prime's success. Its only reason for existence was to fill Nintendo's coffers with my money while offering an experience that was indistinguishably identical at best, and painfully unpleasant at worst.
Technically, Echoes is as polished and proud as Andrew claims. There aren't any rough edges or glitches rearing their ugly heads anywhere in the game's vast world. The graphics are nice, and the controls still work. The disc's biggest crime is not related to the quality of its production values, but instead that it does absolutely nothing that wasn't already accomplished the first time around. What reason is there to sit through hours of the exact same item-finding, object-scanning and constant backtracking when Prime did it all, and did it perfectly? Conceptually, the logic behind the game's plot and world are junk, hardly making any sense due to illogical inconsistencies and contrivances. The formula in Echoes is exactly the same as it was before: find items, open up new areas of the map, find more items, and repeat until the end. Many will argue that this play progression is vital and key to the identity of Metroid, but I say that it's high time the se
Friendster Sdn. Bhd. does not send Spam or sell email addresses.
About Friendster and the Information We Collect
Profile Information submitted by Members to Friendster
A Friendster Member creates his or her own profile, which contains the personal information that the Member chooses to include. This personal information includes such things as:
First name or first and last names of the Member (depending on the options selected by the Member),
Gender, age, birthday, and other similar personal information,
Location (e.g., city and state), and
Photos, videos or other shared content uploaded by the Member to his or her profile, to the extent that it includes personal information.
Other Information submitted by Members to Friendster
Friendster also collects Member-submitted account information such as name and email address to identify Members, send notifications related to the use of the Friendster service and conduct market research and marketing for Friendster's internal use only. You should also be aware that Members may reveal other personal information while communicating on other areas of the Friendster website, e.g., while participating in discussions on Friendster Groups, Forums, Chats and/or posting information on Friendster pages. This information, in turn, might be viewed by other Members or visitors to the Friendster website who are not Members.
Information Not Directly Submitted by Members or Other Website Users to Friendster
We also collect some information from Members, as well as from other visitors to the Friendster website, that is not personally identifiable, such as browser type and IP address. This information is gathered from all Members and visitors to the website.
Use of Information Obtained by Friendster
Display of Members' Information
Except where a Member might choose to include full name in his or her profile, a Member's email address and full name will only be used in the following circumstances:
When the Member invites a friend via email to become a Friendster Member,
When we send notifications to the Member relating to his or her use of the Friendster service or Friendster website, and
If the Member so chooses, when we send regular notifications, weekly updates, or other news regarding the Friendster service or the Friendster website.
Except when inviting or adding friends or as otherwise expressly set forth herein, a Member's email address will never be shared with or displayed to any other person. Members and their friends, and other Members within their personal networks communicate on Friendster with each other through the Friendster service, without disclosing email addresses.
Friendster's Use of Members' Information
We may use information relating to each Member's server, IP address, and browser type.
Sharing of the Information this Site Gathers/Tracks
Except as explained herein, or where you are expressly informed otherwise, we do not sell, rent, share, trade or give away any of your personal information unless required by law or for the protection of your Membership. However, Friendster may share profile information and aggregate usage information in a non-personally identifiable manner with advertisers and other third parties in order to present to Members and other users of the Friendster website more targeted advertising, products and services; provided that, in such situations, we will never disclose information that would personally identify you. As set forth above, Friendster may (i) use your email address to conduct market research for Friendster's internal use only; and/or (ii) use your email address to market Friendster to you (e.g., new features and applications, announcements, opportunities of interest). Friendster may contract with a third party to conduct such research and marketing; provided that such third party will be fully bound by an obligation of confidentiality to Friendster and may not use any personally-identifiable information provided by Friendster other than as expressly instructed by Friendster and in strict compliance with the terms of this Privacy Policy.
The Friendster Website contains links to other websites. This includes links that are placed there by Friendster (such as in advertisements), as well as by other Friendster Members. Please be aware that Friendster is not responsible for the privacy practices of any other website. We encourage all Members and other users of the Friendster website to be aware of when they leave our website, and to read the privacy policies of each and every website that collects personally identifiable information. This privacy policy applies solely to information collected by Friendster through the Friendster website.
A cookie is a small text file that is stored on a user's computer for record-keeping purposes. We use cookies on the Friendster website. However, we do not and will not use cookies to collect private information from any Member or other user of the Friendster website which they did not intentionally submit to us other than as stated in this Privacy Policy. We use both "session ID cookies" and "persistent cookies." We use session ID cookies to make it easier for you to navigate our site. A session ID cookie expires when you close your browser. A persistent cookie remains on your hard drive for an extended period of time. Note also that Members may optionally use a cookie to remember their email in order to automatically log in to our website. You can remove persistent cookies by following directions provided in your Internet browser's "help" file; however, please note that, if you reject cookies or disable cookies in your web browser, you may not be able to use the Friendster website.
YOU CAN ALWAYS CHANGE THE PERSONAL INFORMATION YOU'VE SUBMITTED TO THE FRIENDSTER WEBSITE, INCLUDING THE INFORMATION INCLUDED IN YOUR FRIENDSTER PROFILE, BY CLICKING THE "EDIT" LINKS ON YOUR FRIENDSTER SETTINGS PAGE.
Changes in Our Privacy Policy
If we change our privacy policy, we will post those changes on our web site so Members and other users of the Friendster website are always aware of what information we collect, how we use it, and under what circumstances, if any, we disclose it. If we are going to use Members' or other users' personally identifiable information in a manner different from that stated at the time of collection, we will notify those Members and users via email or by placing a prominent notice on our website.
If a Member elects to use our "Invite" feature to invite a friend to become a Member of the Friendster service, we ask them for that friend's email address. Friendster will automatically send the friend an email inviting them to join the site. Friendster stores this email address for the purpose of: (i) automatically adding the respondent to the "friends list" of the Member sending the invitation; (ii) sending reminders of the invitation and (iii) sending emails on recent updates by the Member on Friendster to the Member's invited friend. Friendster will never sell these email addresses or use them to send any other communication besides invitations and invitation reminders. Any person who receives an invitation may contact Friendster to request the removal of this information from our database.
The Friendster account of every Member is password-protected. Friendster takes every precaution to protect the information of the Members, as well as information collected from other users of the Friendster website. We use industry standard measures to protect all information that is stored on our servers and within our database. We limit the access to this information to those employees who need access to perform their job function such as our customer service personnel. If you have any questions about the security at our website, please contact Friendster.
Members who no longer wish to receive our weekly email updates or other email notifications may opt-out of receiving these communications by following the instructions contained in the applicable email or by logging-in and changing their settings in the "Account Settings" section of the Friendster website. You can access this page by clicking on the "Settings" link on the top right of your Friendster homepage.
The ads appearing on this Web site are delivered to Members by our Web advertising partners. Our Web advertising partners may use cookies. Doing this allows the ad network to recognize your computer each time they send you an online advertisement. In this way, ad networks may compile information about where you, or others who are using your computer, saw their advertisements and determine which ads are clicked on. This information allows an ad network to deliver targeted advertisements that they believe will be of most interest to you. Friendster does not have access to or control of the cookies that may be placed on the computer of any Member or other user of the Friendster website by the third-party ad servers or ad networks.
This privacy statement covers the use of cookies by Friendster only, and does not cover the use of cookies by any other party.
Members and Users Located Outside of Malaysia
We have made an effort to protect the personal information of all Members and other users of the Friendster website, and to the extent applicable, we attempt to comply with local data protection and consumer rights laws. If you are unsure whether this privacy policy is in conflict with the applicable local rules where you are located, you should not submit your personal information to the Friendster website.
IN ADDITION, IF YOU ARE LOCATED WITHIN THE EUROPEAN UNION, YOU SHOULD NOTE THAT YOUR INFORMATION WILL BE TRANSFERRED TO MALAYSIA, THE LAWS OF WHICH MAY BE DEEMED BY THE EUROPEAN UNION TO HAVE INADEQUATE DATA PROTECTION (see, for example, European Union Directive 95/46/EC of 24 October 1995, a copy of which can be found online).
Members and other users of the Friendster website located in countries outside of Malaysia who submit personal information do thereby consent to the general use of such information as provided in this privacy policy and to the transfer of that information to and/or storage of the information in Malaysia.
Contacting Friendster
If you have any questions about this privacy policy, Friendster's privacy practices, or your dealings with Friendster, please contact Friendster.
Submission Stats
Register for an RFG Account
Submit Game Additions / Edits
Submit Hardware Additions / Edits
You are either not logged in or not a registered member. In order to submit info to the site you must be a registered memeber and also logged in. If you are a registered member and would like to log in, you can do so via this link. If you are not a registered member and would like to register, please follow this link to register. Please note that there are perks to being logged in, such as the ability to see pending submissions and also the ability to view your submissions log.
Welcome to the RF Generation Submit Info Pages. These pages will allow you to submit info for all of the database entries, and will even allow you to submit entries to add. Use the menu to select the action that you would like to complete. If this is your first time visitng the submit pages I highly suggest that you visit the FAQ page to learn some very important info. I also suggest that you visit the FAQ if you are confused about these pages. Before you're overzealous to your home country
We know that you may or may not know whether or not the game you own is a region wide release, but we'd like for you to make a concerted effort to ensure that the title you are adding really exists. For example, you may live in the US. Therefore, you may think that all your games are US releases, right? Well, they are. But, more often than not, they are also Canadian and Mexican releases, and as such they are a North American Release. For the record, most modern releases in North America are region wide. I can actually look at the back of Mario Kart DS and see that this version of the game was not only authorized to be sold in the US but also Canada, Mexico, and Latin America! As a general rule of thumb, assume, unless you know otherwise, that the title that you are about to submit was a region wide release. Please Read the following regarding image submissions!
We appreciate all submissions that you are willing to give RF Generation, but we need to adhere to certain standards so that there is consistency in our database. As such, please take note that your scans should be 550 pixels on the short side! Your submissions could be rejected if they do not meet these size requirements! Please also note that there are exceptions to this rule, for example, you don't really need a 550 pixel wide scan of a DS or GBA game. Use proper judgement! If you have any questions please contact a staff member, as we are more than willing to help you decipher our standards. We appreciate all submissions that are made, we just want to make sure your submissions are not in vain.
Site content Copyright © rfgeneration.com unless otherwise noted. Oh, and keep it on channel three. | 计算机 |
2014-23/2155/en_head.json.gz/32462 | - easy customization & skinning, - very simply and easy to maintain, - compatible with web browsers, - built-in forum, - built-in file management and newsletters, - multilingual, - very stable and secured.
Buy now! ($49.99)
Demo (admin/demo1234)
Get Support!
If you did not try eazyPortal before, do not hesitate to visit our demo. Administrator login details are: admin / demo1234.
- Did you know ? (google for open source)
In general, open source refers to any program whose source code is made available for use or modification as users or other developers see fit. (Historically, the makers of proprietary software have generally not made source code available.) Open source software is usually developed as a public collaboration and made freely available. Open source is a certification mark owned by the Open Source Initiative (OSI). Developers of software that is intended to be freely shared and possibly improved and redistributed by others can use the open source trademark if their distribution terms conform to the OSI's Open Source Definition.
The idea is very similar to that behind free software and the copyleft concept of the Free Software Foundation. Open source is the result of a long-time movement toward software that is developed and improved by a group of volunteers cooperating on a network. Many parts of the Unix operating system were developed this way, including today's most popular version, Linux. Linux uses applications from the GNU project, which was guided by Richard Stallman and the Free Software Foundation. The Open Source Definition, spearheaded by Eric Raymond (editor of The New Hacker's Dictionary), is an effort to provide a branded model or guideline for this kind of program distribution and redistribution. The OSI considers the existing program distribution licenses used by GNU, BSD (a widely-distributed version of UNIX), X Window System, and Artistic to be conformant with the Open Source Definition.
- Did you know ? (google for content management system)
A content management system (CMS) is a software system used for content management. This includes computer files, image media, audio files, electronic documents and web content. The idea behind a content management system is to make these files available inter-office, as well as over the web. A content management system is often used for archival purposes as well. Many companies use a content management system to store files in a non-proprietary form. Companies using a content management system can share files with ease, as most systems use server-based software, further broadening file availability. As shown below, many content management systems include a feature for web content, and some have a feature for a "workflow process."
"Workflow" is the idea of moving an electronic document along for approval or for adding content. Some content management systems facilitate this process with email notification and automated routing. This is ideally a collaborative creation of documents. A CMS facilitates the organization, control, and publication of a large body of documents and other content, such as images and multimedia resources.
A web content management system is a content management system with additional features to ease the tasks required to publish web content to websites.
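To make the workflow idea above concrete, here is a minimal Python sketch of a document moving through an approval chain. The states, the allowed transitions and the notification hook are illustrative assumptions for the example only; they do not describe any particular CMS product.

```python
# Minimal illustration of a CMS-style approval workflow (hypothetical states).
ALLOWED_TRANSITIONS = {
    "draft": {"in_review"},
    "in_review": {"approved", "draft"},   # a reviewer can approve or send back
    "approved": {"published"},
    "published": set(),
}

def notify(doc, actor):
    # A real CMS would send email or route a task; here we just print.
    print(f"'{doc.title}' moved to {doc.state} by {actor}")

class Document:
    def __init__(self, title, body):
        self.title = title
        self.body = body
        self.state = "draft"
        self.history = [("draft", "author")]

    def move_to(self, new_state, actor):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append((new_state, actor))
        notify(self, actor)

doc = Document("Operator manual, ch. 3", "(draft text)")
doc.move_to("in_review", "author")
doc.move_to("approved", "editor")
doc.move_to("published", "site admin")
```

The history list plays the role of version tracking, and the notify hook is where the "event messaging" mentioned below would plug in.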
Web content management systems are often used for storing, controlling, versioning, and publishing industry-specific documentation such as news articles, operators' manuals, technical manuals, sales guides, and marketing brochures. A content management system may support the following features:
- Import and creation of documents and multimedia material,
- Identification of all key users and their content management roles,
- The ability to assign roles and responsibilities to different content categories or types,
- Definition of the content workflow tasks, often coupled with event messaging so that content managers are alerted to changes in content,
- The ability to track and manage multiple versions of a single instance of content,
- The ability to publish the content to a repository to support access to the content. Increasingly, the repository is an inherent part of the system, and incorporates enterprise search and retrieval.
Some content management systems allow the textual aspect of content to be separated to some extent from formatting. For example, the CMS may automatically set default colors, fonts, or layout.
Content management systems take the following forms:
- a web content management system - software for website management, which is often what is implicitly meant by this term,
- the work of a newspaper editorial staff organization,
- a workflow for article publication,
- a document management system,
- a single source content management system - where content is stored in chunks within a relational database.
Copyright © 2006 - 2014 eazyPortal.com, All rights reserved. | 计算机 |
MINE ACTION GATEWAY
E-MINE
THE UN MINE ACTION GATEWAY
Fourteen UN department, agencies, programmes and funds play a role in mine-action programs in 30 countries and three territories. A policy developed jointly by these institutions (Mine Action and Effective Coordination: the United Nations Inter-Agency Policy) guides the division of labor within the United Nations. Much of the actual work, such as demining and mine-risk education, is carried out by nongovernmental organizations. But commercial contractors and, in some situations, militaries, also provide humanitarian mine-action services. In addition, a variety of intergovernmental, international and regional organizations, as well as international financial institutions, also support mine action by funding operations or providing services to individuals and communities affected by landmines and explosive remnants of war.
The Strategy presents the common objectives and commitments that will guide the UN in mine action over the next 6 years. DOWNLOAD THE PDF
The vision of the United Nations is a world free of the threat of landmines and explosive remnants of war, where individuals and communities live in a safe environment conducive to development and where the needs of victims are met. The inter-agency partners working towards the achievement of this vision are: UN Department of Peacekeeping Operations (DPKO) DPKO integrates mine action into worldwide UN peacekeeping operations in line with a November 2003 Presidential Statement of the Security Council. Mr. Hervé Ladsous, the Under-Secretary-General for Peacekeeping Operations chairs the Inter-Agency Coordination Group on Mine Action, which brings together representatives from all UN mine-action entities. UNMAS provides direct support and assistance to UN peacekeeping missions. Careers & Business Opportunities
United Nations Mine Action Service (UNMAS) UNMAS is located in the Department of Peacekeeping Operations Office of Rule of Law and Security Institutions and is the focal point for mine action in the UN system. It is responsible for ensuring an effective, proactive and coordinated UN response to landmines and explosive remnants of war. Careers & Business Opportunities
United Nations Office for Disarmament Affairs (UNODA) UNODA advises and assists the UN Secretary-General in his work related to the Anti-Personnel Mine-Ban Treaty and the Convention on Certain Conventional Weapons. ODA promotes universal participation in international legal frameworks related to landmines and explosive remnants of war and assists countries in complying with their treaty obligations Careers & Business Opportunities
United Nations Development Programme (UNDP) Through its country offices and its New York-based Mine Action Team of the Bureau for Crisis Prevention and Recovery, UNDP assists mine-affected countries to establish or strengthen national and local mine action programmes. Careers & Business Opportunities
United Nations Children's Fund (UNICEF) UNICEF was created to work with others to overcome the obstacles that violence, poverty, disease and discrimination place in a child's path. This includes children in mine-affected countries globally. UNICEF supports the development and implementation of mine risk education and survivor assistance projects and advocacy for an end to the use of landmines, cluster munitions and other indiscriminate weapons. Careers & Business Opportunities
United Nations Office for Project Services (UNOPS) UNOPS is a principal service provider in mine action, offering project management and logistics services for projects and programmes managed or funded by the United Nations, international financial institutions, regional and sub-regional development banks or host governments. Careers & Business Opportunities
United Nations Mine Action is also supported by: Food and Agricultural Organisation (FAO) The FAO has a mandate to provide humanitarian relief, which sometimes requires the organization to participate in mine action in complex emergencies, particularly in rural areas. Office for the Coordination of Humanitarian Affairs (OCHA) OCHA shares information with all other organizations about the humanitarian impact of landmines and works with UNMAS on resource mobilization. OCHA is manager of the UN Central Emergency Revolving Fund and coordinator of the "Consolidated Appeal Process," both of which provide or mobilize financial resources for mine action. United Nations Entity for Gender Equality and the Empowerment of Women (UN Women) UN Women, among other issues, works for the elimination of discrimination against women and girls; empowerment of women; and achievement of equality between women and men as partners and beneficiaries of development, human rights, humanitarian action and peace and security. UN High Commissioner for Human Rights (OHCHR) The OHCHR does not have any specific mandate in the field of mine action, but it does carry out several relevant projects. OHCHR, for example, seeks to protect the rights people with disabilities, including survivors of landmines or unexploded ordnance. Office of the United Nations High Commissioner for Refugees (UNHCR) UNHCR's involvement in mine action ranges from contracting and mine clearance services, to training, advocacy against the use of anti-personnel mines and victim assistance. World Food Programme (WFP) WFP is involved in the clearance of landmines and unexploded ordnance to facilitate delivery of food assistance in emergency situations. World Health Organisation (Injuries and Violence Prevention Department) (WHO) WHO is primarily responsible for the development of standards, the provision of technical assistance and the promotion of institutional capacity building in victim assistance. It works with the ministries of health of affected countries and cooperates closely with UNICEF and the International Committee of the Red Cross. World Bank (WB) The World Bank helps address the long-term consequences of landmines and unexploded ordnance on economic and social development. It also plays a significant role in mobilizing resources. (Hosted by E-Mine)
UN MINE ACTION
Copyright 2013 United Nations | 计算机 |
Jon Udell on Instant Outlining
Tuesday, April 2, 2002 by Dave Winer.
Scratching our own itch
In November of last year we started working on a new piece of software designed to facilitate internal communication at UserLand. We have people in the US and Canada, on both coasts, and were largely using email to narrate our work. Like many groups who depend on email for communication, we were frustrated and inefficient, messages were lost, as were opportunities. It was hard for me as the manager of this group to know what people were doing, when they hit milestones, or dead-ends, when they needed help from me, and if they were in synch with the rest of the group. We desperately needed something to make our work flow more smoothly.
We've been using this tool, which we call an Instant Outliner, since November. We shipped Radio 8 with it. When we switched over, our workgroup's productivity soared. All of a sudden people could narrate their work. We've gotten very formal about how we use it. I can't imagine doing an engineering project without this tool. We decided that we had to make a product out of this. It's too good to keep to ourselves.
Bootstrapping the Instant Outliner
Over the New Year holiday we did the second revision, and in March the third, and we started deploying to users of Radio 8, which has a built-in outliner, in late March. We've been working quietly with people who use and study workgroup tools, including Jon Udell, who writes for ComputerWorld, InfoWorld, Byte and O'Reilly. This morning O'Reilly published an essay by Udell, which we think will be definitive of this category of software. So without further ado, here's Jon's piece, courtesy of O'Reilly.
A quote: "It's been clear to me for a long while that the only thing that might displace email would be some kind of persistent IM. That's exactly what instant outlining is. If it catches on, and it's buzz-worthy enough to do that, we'll have a framework within which to innovate in ways that email never allowed."
A big direction
I hope you read Jon's observations. I will of course be writing more about it myself in the coming weeks. I believe Instant Outlining is to Instant Messaging as the spreadsheet is to a desktop calculator. A calculator can edit one number, a spreadsheet can express relationships between numbers. An Instant Messaging client can transmit a single idea, an Instant Outlining client can express relationships between ideas.
Instant Outlining in Radio is still a beta and will be for a while. If you're an adventurous soul with high tolerance for user interface glitches, please give it a try. Everything Jon said is true, and what Tim said is true too. It's not for everyone. But it is for workgroups who want to get to the next level after email. Dave Winer
PS: To facilitate growth, the formats and protocols used in Instant Outlining are open, clonable and documented.
© Copyright 1994-2004 Dave Winer. Last update: 2/5/07; 10:50:05 AM Pacific. "There's no time like now." | 计算机 |
American Conquest: Fight Back (c) CDV Software
Windows, 450MHz processor, 64MB RAM, 1500MB HDD, 12X CD-ROM
Monday, November 24th, 2003 at 11:47 AM
By: Steven 'Westlake' Carter
American Conquest: Fight Back review
I’m pondering a new rule for computer games. How does this sound? If an expansion pack comes out less than a year after the original game was released, don’t buy it. I’m sure there’s a counter-example out there somewhere, but I can’t think of one now, and the rule covers most Heroes of Might & Magic expansion packs accurately, so it seems reasonable so far. And it’s relevant here because American Conquest: Fight Back was released a mere seven months after American Conquest, and it’s not worth buying.That’s not to say Fight Back is a terrible game. It’s just that, as a stand-alone expansion pack to American Conquest, its new content is a little underwhelming. According to the game’s web site, there are five new nations (including Germany, Russia, and the Alaska natives), 50 new units, 26 new missions in eight “thrilling” campaigns, and a new battlefield mode for people who don’t want to mess with bases and economics. The reason I had to refer to the web site is because most of those changes are difficult to detect, and if you install Fight Back you might think somebody messed up and gave you another copy of American Conquest.Consider the additions that should have had the greatest impact: the new nations and units. The problem with them is that American Conquest is a game in the same mold as Age of Empires II. That is, there are a bunch of playable factions, but there isn’t a great deal of difference between them. Really, the American Conquest factions only differed between the European nations (who could play defensively) and the American natives (who had to play offensively). The Fight Back nations don’t add a third way of playing. As far as I can tell, those nations and their “new” units are simple re-paintings of existing units, and they don’t add anything to gameplay.Worse, developer GSC Game World didn’t even tweak or balance the existing units. And so all the problems from American Conquest -- like forts and fortresses happily blowing up friendly buildings, and canoes taking out any number of infantry units (not to mention doing a fair job against caravels) -- are still there in Fight Back. If the engine changed at all, I didn’t see it, and I even went back and played a little American Conquest to see if I could tell the difference.That means the only two real reasons to buy Fight Back are the eight new campaigns and the battlefield mode. Let me start with the campaigns. There isn’t anything wrong with them; it’s just they’re not very exciting. The campaigns cover things like Cortez and the Aztecs, Pontiac’s rebellion, and Russia’s attempt to conquer Alaska, but that’s a far cry from, oh, the American Revolution. And once again GSC lets you play both sides of conflicts, but that just means the 26 new missions take place on about 15 new maps, and playing most maps twice is a little boring.Luckily, the final addition in the expansion pack, battlefield mode, is sort of fun. It’s a mode for people with ADD. You select one of ten battles, take 20 seconds to possibly buy upgrades for your units, and then start fighting. You don’t have to worry about bases or resources at all. It’s just a single battle, and the game keeps track of your score (evaluating things like the difficulty and how well you managed your units), and so you can play battles multiple times to perfect your strategy. Battlefield mode is also a decent primer for playing the campaigns (and multiplayer). 
If you just tell your units to attack the enemy, you’ll get beaten badly, and so you have to be careful about when you let your units fire, and how you position them, and things like that. But even here I’m not sure how worthwhile the mode is. American Conquest came with an editor (Fight Back does too), and I assume you could use it to create some pitched battles similar to the ones in battlefield mode.And so Fight Back is almost identical to American Conquest, except I didn’t like its campaign missions as much. And so even though I’m giving the expansion pack a decent score, I wouldn’t recommend you buy it. If you played American Conquest then you’ve already (essentially) played Fight Back, and if you haven’t played American Conquest yet, I’d recommend you play that game instead.
Written By: Steven 'Westlake' Carter
Ratings: Gameplay (36/50), Additions (20/30), Improvements (06/10), Technical (04/05), Documentation (04/05)
LMDS
More... Other Services:
Search All Issues, Conference Reports and Tutorials
Web Services Summit
Fair Use or Copyright?
Deregulation Smoke and Mirrors
Video Compression Tutorial
Video Compression Technology
At its most basic level, compression is performed when an input video stream is analyzed and information that is indiscernible to the viewer is discarded. Each event is then assigned a code - commonly occurring events are assigned few bits and rare events will have codes more bits. These steps are commonly called signal analysis, quantization and variable length encoding respectively. There are four methods for compression, discrete cosine transform (DCT), vector quantization (VQ), fractal compression, and discrete wavelet transform (DWT). Discrete cosine transform is a lossy compression algorithm that samples an image at regular intervals, analyzes the frequency components present in the sample, and discards those frequencies which do not affect the image as the human eye perceives it. DCT is the basis of standards such as JPEG, MPEG, H.261, and H.263. We covered the definition of both DCT and wavelets in our tutorial on Wavelets Theory. Vector quantization is a lossy compression that looks at an array of data, instead of individual values. It can then generalize what it sees, compressing redundant data, while at the same time retaining the desired object or data stream's original intent.
Fractal compression is a form of VQ and is also a lossy compression. Compression is performed by locating self-similar sections of an image, then using a fractal algorithm to generate the sections. Like DCT, discrete wavelet transform mathematically transforms an image into frequency components. The process is performed on the entire image, which differs from the other methods (DCT), that work on smaller pieces of the desired data. The result is a hierarchical representation of an image, where each layer represents a frequency band. Compression Standards MPEG stands for the Moving Picture Experts Group. MPEG is an ISO/IEC working group, established in 1988 to develop standards for digital audio and video formats. There are five MPEG standards being used or in development. Each compression standard was designed with a specific application and bit rate in mind, although MPEG compression scales well with increased bit rates. They include:
Designed for up to 1.5 Mbit/sec
Standard for the compression of moving pictures and audio. This was based on CD-ROM video applications, and is a popular standard for video on the Internet, transmitted as .mpg files. In addition, level 3 of MPEG-1 is the most popular standard for digital compression of audio--known as MP3. MPEG-1 is the standard of compression for VideoCD, the most popular video distribution format thoughout much of Asia.
Designed for between 1.5 and 15 Mbit/sec
Standard on which Digital Television set top boxes and DVD compression is based. It is based on MPEG-1, but designed for the compression and transmission of digital broadcast television. The most significant enhancement from MPEG-1 is its ability to efficiently compress interlaced video. MPEG-2 scales well to HDTV resolution and bit rates, obviating the need for an MPEG-3.
Standard for multimedia and Web compression. MPEG-4 is based on object-based compression, similar in nature to the Virtual Reality Modeling Language. Individual objects within a scene are tracked separately and compressed together to create an MPEG4 file. This results in very efficient compression that is very scalable, from low bit rates to very high. It also allows developers to control objects independently in a scene, and therefore introduce interactivity.
MPEG-7 - this standard, currently under development, is also called the Multimedia Content Description Interface. When released, the group hopes the standard will provide a framework for multimedia content that will include information on content manipulation, filtering and personalization, as well as the integrity and security of the content. Contrary to the previous MPEG standards, which described actual content, MPEG-7 will represent information about the content.
MPEG-21 - work on this standard, also called the Multimedia Framework, has just begun. MPEG-21 will attempt to describe the elements needed to build an infrastructure for the delivery and consumption of multimedia content, and how they will relate to each other.
JPEG stands for Joint Photographic Experts Group. It is also an ISO/IEC working group, but works to build standards for continuous tone image coding. JPEG is a lossy compression technique used for full-color or gray-scale images, by exploiting the fact that the human eye will not notice small color changes.
JPEG 2000 is an initiative that will provide an image coding system using compression techniques based on the use of wavelet technology. DV is a high-resolution digital video format used with video cameras and camcorders. The standard uses DCT to compress the pixel data and is a form of lossy compression. The resulting video stream is transferred from the recording device via FireWire (IEEE 1394), a high-speed serial bus capable of transferring data up to 50 MB/sec. H.261 is an ITU standard designed for two-way communication over ISDN lines (video conferencing) and supports data rates which are multiples of 64Kbit/s. The algorithm is based on DCT and can be implemented in hardware or software and uses intraframe and interframe compression. H.261 supports CIF and QCIF resolutions. H.263 is based on H.261 with enhancements that improve video quality over modems. It supports CIF, QCIF, SQCIF, 4CIF and 16CIF resolutions.
DivX Compression
DivX is a software application that uses the MPEG-4 standard to compress digital video, so it can be downloaded over a DSL/cable modem connection in a relatively short time with no reduced visual quality. The latest version of the codec, DivX 4.0, is being developed jointly by DivXNetworks and the open source community. DivX works on Windows 98, ME, 2000, CE, Mac and Linux. Terms
Lossy compression - reduces a file by permanently eliminating certain redundant information, so that even when the file is uncompressed, only a part of the original information is still there. ISO/IEC
International Organization for Standardization - a non-governmental organization that works to promote the development of standardization to facilitate the international exchange of goods and services and spur worldwide intellectual, scientific, technological and economic activity. International Electrotechnical Commission - international standards and assessment body for the fields of electrotechnology
Codec - A video codec is software that can compress a video source (encoding) as well as play compressed video (decompress). CIF - Common Intermediate Format - a set of standard video formats used in videoconferencing, defined by their resolution. The original CIF is also known as Full CIF (FCIF). QCIF - Quarter CIF (resolution 176x144)
SQCIF - Sub quarter CIF (resolution 128x96)
4CIF - 4 x CIF (resolution 704x576)
16CIF - 16 x CIF (resolution 1408x1152
Additional sources of information*
TECH Online Review - Video Compression Overview
DataCompression.info IGM - Desktop Video - Compression Standards
*The WAVE Report is not responsible for content on additional sites.
Page updated 1/24/07
Copyright 4th Wave Inc, | 计算机 |
2014-23/2156/en_head.json.gz/486 | Can big data be open data?
by Gavin Craig
@craiggav
Big data has finally come to the world of games. As designers now study players with more and more granularity, games themselves become mirrors of our own play preferences. But with big data comes big questions: If games become personalized experiences, how will we have water-cooler conversations about them? Where is the boundary between collecting information to better experiences and a deeper understanding of the real you? Over the next few weeks, iQ and Kill Screen will explore these questions. This past spring, in conjunction with a panel at PAX East looking back on the completion of the Mass Effect trilogy, BioWare released an infographic sharing for the first time a small set of aggregated user data from the third Mass Effect game. While some of the numbers were impressive—88.3 million hours played in the single-player storyline, and 10.7 billion enemies killed in multiplayer—little of the information was particularly revelatory. A large percentage of players preferred to play as a male Commander Shepard over his female counterpart (82% to 18%). Very few players (4%) completed the game at its most difficult “insanity” setting. But tucked in amidst all those stats was a particularly fascinating one: 39.8% of players earned a “Long Service Medal” for either importing a saved character for Mass Effect 2 and then completing Mass Effect 3, or completing Mass Effect 3 twice.
This is, of course, a dramatic number for an industry in which somewhere between 10 and 20% of players are expected to finish any given game, but it’s also a tellingly ambiguous piece of information. In a game series like Mass Effect, which is built on branching storylines, and promises a different experience to players who replay a different class, gender, or alignment, it would be enlightening to know how many players do actually complete a second playthrough, and how many of those players choose to pursue a dramatically different experience compared to how many hew closely to their original choices. By comparing this information to similar data from other branching narrative games like The Walking Dead or Heavy Rain, we could start to talk about how players interact with branching narratives, whether certain groups of players seems to be driven by an impulse to explore all the available possibilities and how others might seem to use multiple playthroughs to attempt to refine and perfect a particular performance.
BioWare’s infographic serves as a tantalizing hint that the data to answer such questions might be stored on one of the company’s servers. It’s also a clear expression of the fact that on the rare occasions that developers do choose to share data, it’s generally in a severely limited fashion as part of a marketing event, and it’s almost never the raw data itself. Many contemporary videogames generate huge amounts of data, some of which is shared with individual users on in-game player stat boards like in Red Dead Redemption, some of which is shared in a semi-public fashion in the form of player achievements and trophies, and some of which is simply stored in-game. As internet connectivity even for single-player games becomes the default for both console and PC gamers, more and more of that data can potentially be captured by developers.
A lot of that potential, however, remains to be realized. According to Ben Medler, a technical analyst for EA Games, it’s not yet standard for even AAA games to include the sort of robust data systems that could lead to useful internal analysis, much less eventual public sharing. “Developers always dream big, so at the beginning of projects you always hear promises to track every bit of data or to allow players to share everything. And it is not from a lack of trying and hoping that developers rarely deliver on these promises, it’s just that they run out of time. Deadlines slip, core game features take longer, engine re-writes are necessary, and all the extra features get cut or held back.” Even in cases where in-game data systems are constructed, there can be additional costs involved in formatting the data in a way that makes it usable outside the game, and even more in building an application program interface that could make the data more broadly accessible.
"At the beginning of projects you always hear promises to track every bit of data or to allow players to share everything." Dmitri Williams, Associate Professor at the University of Southern California and CEO of the game analytics firm Ninja Metrics, echoes Medler’s assessment. “We’re in an interesting transition where there’s all this data, and developers talk about wanting to be data-driven, but they really struggle to make it a priority. It’s just not a cultural norm, and it’s not usually a core competency of the teams. So they’ll often outsource it, or ignore it, or do it a little bit on their own. And it’s the very rare cases where developers devote the resources and do a good job of it on their own.”
Part of the challenge, Williams says, is the particular skill set required to draw useful information out of large datasets. “There aren’t a lot of PhDs who understand big data and player psychology and gaming. That Venn diagram is pretty small.”
The challenges may be substantial, but there are models for what big data sharing might look like, and what sort of benefits it could have for players, developers, and games as a whole. Since 2007, Blizzard has made World of Warcraft player information directly available to the public through its Armory website—4,500 variables on every active character, every day.
Nick Yee, a research scientist studying online games and virtual worlds, has spent years researching massively multiplayer online player behavior, work made possible in large part by the accessibility of Armory data. Before the Armory, Yee says, gathering information on MMOs was a matter of attempting to negotiate with individual developers, who often had concerns not only about sharing raw data, but also about what findings could be shared once research was completed. The Armory not only changed that dynamic—at least between researchers and Blizzard—it also had a huge impact on game studies as a whole. “The problem with studying online games up to World of Warcraft,” Yee says, “was that every game researcher was speaking their to own game, and their own game culture. And so there were a lot of papers and books on specific games but they were all different, and it was hard to compare. By releasing its data, World of Warcraft allowed academics to kind of have a lingua franca. Now suddenly everyone could speak WoW.”
Blizzard’s motivation in releasing World of Warcraft data, of course, had less to do with enabling academic study than with encouraging engagement with its player community and allowing players to develop their own modifications to use within the game. Player groups could use the Armory to set up their own rankings, find players to recruit, and perform basic “background checks” before admitting new members. New players could find out what gear players at higher levels were using. Popular user mods were sometimes incorporated by Blizzard into game code.
Blizzard took a risk in sharing player and functionality data—Yee suggests that many other developers have not been willing to do the same because it could allow competitors to reconstruct marketing and player retention data—but they also gave players the ability to improve the experience of the game as whole. Massively multiplayer online games are essentially projects in ongoing worldbuilding, and World of Warcraft shows the way that making as much of that process visible to the user can drive participation. Dmitri Williams compares it to the sense of ownership players feel for games like Minecraft. “There’s an appetite for game data, and when you expose it, the players become extremely loyal, and extremely attached to it. It’s because the tools are now theirs. When you give people stuff, it really does pay off.”
Of course, the analogy between massively multiplayer online games like World of Warcraft and single-player, narrative-driven games like Mass Effect is less than perfect. “Modding” is often a dirty word in the world of single player games, where the game itself functions as more of a closed environment. In single-player games too, however, player communities are constructed in chat groups and wikis, as well as screenshots and gameplay videos where players show off costume mods, glitches, and narrative branches other players may not have experienced. Even the marketing value of the PAX East infographic comes largely from the way as it functions as a community reinforcement exercise, allowing players to line up their choices with those of other players. It may not be terribly useful for academic study, but it’s great for adding new fuel to old player debates.
“There’s an appetite for game data" And it’s a step in the right direction, even if just a small one. Dmitri Williams suggests that large scale data sharing won’t become common until developers observe enough positive feedback to justify devoting resources to it. The ongoing development of increasingly user-friendly data tools may help ease the way a bit by allowing developers to more easily incorporate data collection and analysis into the development process, but developers also have to decide that the benefits of making game data public outweigh the risks. “There’s always a tension between the idea that data wants to be free and maintaining control, but I think there’s a trend toward accessibility,” he says. “It’s certainly paid off for everyone that I can think of who’s tried it.”
Gavin Craig
Gavin Craig is a freelance writer and critic whose writing has appeared in Bit Creature, Snarkmarket, and The Bygone Bureau.
Big data comes to games. How one company sees the future in personalized play.
Like it or not, big data is bulging. And if the blowback over the N.S.A.’s access to the metadata of cell phone calls and Facebook profiles is any indication, all these digital stats and figures make a lot of people uneasy. And for sound reasons. Massive data collection efforts often don’t have public interest at heart ...
Sony filed patent for biometric data collector
Via Joystiq, some excellent future-shock news for this first Monday of fall 2012. Just say that outloud. Fall 2012! We are actually already living in the future. Next year is flippin' 2013! Ok:
The patent, titled "Process and Apparatus for Automatically Identifying User of Consumer Electronics," describes the inclusion of fingerprint sensors that would read biometric data of its users ... | 计算机 |
2014-23/2156/en_head.json.gz/2251 | Building information...Developing informati...New information tech...Technology infrastru...Infrastructure techn...Key technology infra...Describe a typical t...Internet technology ...How to build an tech...
Types of information...Itil certificationItil certification o...
Information Systems Security Management Professional
Information Systems Technician
Information TV
Information TV 2
Information Technologist
Information Technology Act
Information Technology Architect Certification
Information Technology Association of America
Information Technology Association of Canada
Information Technology Audit
Information Technology Audit - Operations
Information Technology Audit - Regulation
Information Technology Audit Process
Information Technology Channel
Information Technology Consulting
Information Technology Controls
Information Technology Enabled Services
Information Technology High School
Information Technology Industry
Information Technology Infrastructure Library Version 3
Information Technology Lokam
Information Technology Management Reform Act
Information Technology Outsourcing
Information Technology Professional Examination Council
Information Technology Security Assessment
Information Technology Security Evaluation Criteria
Information Technology Student Organization
Information Technology and Innovation Foundation
Information Technology in India
Information Technology in a Global Society
Information Technology industry in Hyderabad, Andhra Pradesh
Information Technology industry in India
Information Technology portal
Information Telegraph Agency of Russia
Information Theoretic Entropy
Information Therapy
The Information Technology Infrastructure Library (ITIL) is a set of concepts and policies for managing information technology (IT) infrastructure, development and operations. ITIL is published in a series of books, each of which cover an IT management topic. The names ITIL and IT Infrastructure Library are registered trademarks of the United Kingdom's Office of Government Commerce (OGC). ITIL gives a detailed description of a number of important IT practices with comprehensive check lists, tasks and procedures that can be tailored to any IT organization.
CertificationITIL Certifications lead to the credentail of ITIL Foundation Associate, ITIL Practioner, ITIL Service Manager.ITIL Certifications are managed by the ICMB (ITIL Certification Management Board) which is comprised of the OGC, IT Service Management Forum (itSMF) International and two examinations institutes: EXIN (based in the Netherlands) and ISEB (based in the UK). The EXIN and ISEB proctor the exams and award qualifications at Foundation, Practitioner and Manager/Masters level currently in 'ITIL Service Management', 'ITIL Application Management' and 'ICT Infrastructure Management' respectively. A voluntary registry of ITIL-certified practitioners is operated by the ITIL Certification RegisterOrganizations or a management system may not be certified as "ITIL-compliant". An organization that has implemented ITIL guidance in ITSM, however, may be able to achieve compliance with and seek certification under ISO/IEC 20000.On July 20 2006, the OGC signed a contract with the APM Group to be its commercial partner for ITIL accreditation from January 1 2007..
ITIL History
Many of the concepts did not originate within the original UK Government's Central Computer and Telecommunications Agency (CCTA) project to develop ITIL. According to IBM: In the early 1980s, IBM documented the original Systems Management concepts in a four-volume series called A Management System for Information Systems. These widely accepted “yellow books,” ... were key inputs to the original set of ITIL books.The primary author of the IBM yellow books was Edward A. Van Schaik, who compiled them into the 1985 book A Management System for the Information Business (since updated with a 2006 re-issue by Red Swan Publishing). In the 1985 work, Van Schaik in turn references a 1974 Richard L. Nolan work, Managing the Data Resource Function which may be the earliest known systematic English-language treatment of the topic of large scale IT management (as opposed to technological implementation).
What is now called ITIL version 1, developed under the auspices of the Central Computer and Telecommunications Agency (CCTA), was titled "Government Information Technology Infrastructure Management Methodology" (GITMM) and over several years eventually expanded to 31 volumes in a project initially directed by Peter Skinner and John Stewart at the CCTA. The publications were retitled primarily as a result of the desire (by Roy Dibble of CCTA) that the publications be seen as guidance and not as a formal method and as a result of growing interest from outside of the UK Government. During the late 1980s the CCTA was under sustained attack, both from IT companies who wanted to take over the central Government consultancy service it provided and from other Government departments who wanted to break free of its oversight. Eventually CCTA succumbed and the concept of a central driving IT authority for the UK Government was lost. This meant that adoption of CCTA guidance such as ITIL was delayed, as various other departments fought to take over new responsibilities. In some cases this guidance was lost permanently. The CCTA IT Security and Privacy group, for instance, provided the CCTA IT Security Library input to GITMM, but when CCTA was broken up the security service appropriated this work and suppressed it as part of their turf war over security responsibilities.Though ITIL was developed during the 1980s, it was not widely adopted until the mid 1990s for the reasons mentioned above. This wider adoption and awareness has led to a number of standards, including ISO/IEC 20000 which is an international standard covering the IT Service Management elements of ITIL. ITIL is often considered alongside other best practice frameworks such as the Information Services Procurement Library (ISPL), the Application Services Library (ASL), Dynamic Systems Development Method (DSDM), the Capability Maturity Model (CMM/CMMI), and is often linked with IT governance through Control Objectives for Information and related Technology (COBIT). In December 2005, the OGC issued notice of an ITIL refresh , commonly known as ITIL v3, which became available in May 2007. ITIL v3 initially includes five core texts: Service Strategy
Service Transition
Service Operation
Continual Service Improvement
These publications update much of the current v2 and extend the scope of ITIL in the domain of service management.
ITIL alternatives
IT Service Management as a concept is related but not equivalent to ITIL which, in Version 2, contained a subsection specifically entitled IT Service Management (ITSM). (The five volumes of version 3 have no such demarcated subsection). The combination of the Service Support and Service Delivery volumes are generally equivalent to the scope of the ISO/IEC 20000 standard (previously BS 15000).Outside of ITIL, other IT Service Management approaches and frameworks exist, including the Enterprise Computing Institute's library covering general issues of large scale IT management, including various Service Management subjects.The British Educational Communications and Technology Agency (BECTA) has developed the Framework for ICT Technical Support (FITS) and is based on ITIL, but it is slimmed down for UK primary and secondary schools (which often have very small IT departments). Similarly, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps claims to be based on ITIL but to focus specifically on the biggest "bang for the buck" elements of ITIL.Organizations that need to understand how ITIL processes link to a broader range of IT processes or need task level detail to guide their service management implementation can use the IBM Tivoli Unified Process (ITUP). Like MOF, ITUP is aligned with ITIL, but is presented as a complete, integrated process model.Smaller organizations that cannot justify a full ITIL program and materials can gain insight into ITIL from a review of the Microsoft Operations Framework which is based on ITIL but defines a more limited implementation.The enhanced Telecom Operations Map eTOM published by the TeleManagement Forum offers a framework aimed at telecommunications service providers. In a joined effort, tmforum and itSMF have developed an Application Note to eTOM (GB921 V, version 6.1 in 2005, a new releases is scheduled for summer 2008) that shows how the two frameworks can be mapped to each other. It adresses how eTom process elements and flows can be used to support the processes identified in ITIL.
Overview of the ITIL 2 library
The IT Infrastructure Library originated as a collection of books each covering a specific practice within IT Service Management. After the initial publication, the number of books quickly grew within ITIL v1 to over 30 volumes. In order to make ITIL more accessible (and affordable) to those wishing to explore it, one of the aims of ITIL v2 was to consolidate the publications into logical 'sets' that grouped related process guidelines into the different aspects of IT management, applications and services. While the Service Management sets (Service Support and Service Delivery) are by far the most widely used, circulated and understood of ITIL publications, ITIL provides a more comprehensive set of practices as a whole. Proponents believe that using the broader library provides a comprehensive set of guidance to link the technical implementation, operations guidelines and requirements with the strategic management, operations management and financial management of a modern business. The eight ITIL version 2 books and their disciplines are:The IT Service Management sets
1. Service Delivery
2. Service SupportOther operational guidance
3. ICT Infrastructure Management
4. Security Management
5. The Business Perspective
6. Application Management
7. Software Asset ManagementTo assist with the implementation of ITIL practices a further book was published providing guidance on implementation (mainly of Service Management):
8. Planning to Implement Service ManagementAnd this has more recently been supplemented with guidelines for smaller IT units, not included in the original eight publications:
9. ITIL Small-Scale ImplementationITIL is built around a process-model based view of controlling and managing operations often credited to W. Edwards Deming. The ITIL recommendations were developed in the 1980s by the UK Government's CCTA in response to the growing dependence on IT and a recognition that without standard practices, government agencies and private sector contracts were independently creating their own IT management practices and duplicating effort within their Information and Communications Technology (ICT) projects resulting in common mistakes and increased costs. In April 2001 the CCTA was merged into the Office of Government Commerce (OGC), an office of the UK Treasury. One of the primary benefits claimed by proponents of ITIL within the IT community is its provision of common vocabulary, consisting of a glossary of tightly defined and widely agreed terms. A new and enhanced glossary has been developed as a key deliverable of the ITIL v3 (also known as the ITIL Refresh Project).
Overview of the ITIL v3 library
ITIL v3 (Information Technology Infrastructure Library Version 3), published in May 2007, comprises five key volumes:1. Service Strategy
4. Service Operation
5. Continual Service Improvement
Service Strategy
Service strategy is shown at the core of the ITIL v3.1 lifecycle but cannot exist in isolation to the other parts of the IT structure. It encompasses a framework to build best practice in developing a long term service strategy. It covers many topics including: general strategy, competition and market space, service provider types, service management as a strategic asset, organization design and development, key process activities, financial management, service portfolio management, demand management, and key roles and responsibilities of staff engaging in service strategy.
The design of IT services conforming to best practice, and including design of architecture, processes, policies, documentation, and allowing for future business requirements. This also encompasses topics such as Service Design Package (SDP), Service catalog management, Service Level management, designing for capacity management, IT service continuity, Information Security, supplier management, and key roles and responsibilities for staff engaging in service design.
Service transition relates to the delivery of services required by the business into liveoperational use, and often encompasses the "project" side of IT rather than "BAU" (Business As Usual). This area also covers topics such as managing changes to the "BAU" environment. Topics include Service Asset and Configuration Management, Transition Planning and Support, Release and deployment management, Change Management, Knowledge Management, as well as the key roles of staff engaging in Service Transition.
Best practice for achieving the delivery of agreed levels of services both to end-users and the customers (where "customers" refer to those individuals who pay for the service and negotiate the SLAs). Service Operations is the part of the lifecycle where the services and value is actually directly delivered. Also the monitoring of problems and balance between service reliability and cost etc are considered. Topics include balancing conflicting goals (e.g. reliability v cost etc), Event management, incident management, problem management, event fulfillment, asset management, service desk, technical and application management, as well as key roles and responsibilities for staff engaging in Service Operation.....
Continual Service Improvement (CSI)
Short Description Aligning and realigning IT services to changing business needs (because standstill implies decline).The goal of Continual Service Improvement is to align and realign IT Services to changing business needs by identifying and implementing improvements to the IT services that support the Business Processes. The perspective of CSI on improvement is the business perspective of service quality, even though CSI aims to improve process effectiveness, efficiency and cost effectiveness of the IT processes through the whole lifecycle. In order to manage improvement, CSI should clearly define what should be controlled and measured.CSI needs to be treated just like any other service practice. There needs to be upfront planning, training and awareness, ongoing scheduling, roles created, ownership assigned,and activities identified in order to be successful. CSI must be planned and scheduled as process with defined activities, inputs, outputs, roles and reporting.
Long Description Once an organization has gone through the process of identifying what its Services are, as well as developing and implementing the IT Service Management (ITSM) processes to enable those services, many believe that the hard work is done. How wrong they are!! The real work is only just beginning. How do organizations get buy-in for using the new processes? How do organizations measure, report and use the data to improve not only the new processes but to continually improve the Services being provided? This requires a conscious decision to adopt CSI with clearly defined goals, documented procedures, inputs, outputs and identified roles and responsibilities. To be successful, CSI must be embedded within each organization's culture.The Service Lifecycle is a comprehensive approach to Service Management: seeking to understand its structure, the interconnections between all its components,and how changes in any area will affect the whole system and its constituent parts over time. It is an organizing framework designed for sustainable performance.The Service Lifecycle can be viewed in a graphical manner, where it is easy to demonstrate the value provided, both in terms of "business contribution" and "profit"
The business contribution is the ability for an IT organization to support a business process, managing the IT service at the requested performance.
The profit is the ability to manage cost of service in relations to the business revenue.The Service Lifecycle can be viewed as a phased life cycle, where the phases are: Defining strategy for the IT Service Management (Service Strategy or SS)
Designing the services to support the strategy (Service Design or SD)
Implement the services in order to meet the designed requirements (Service Transition or ST)
Support the services managing the operational activities (Service Operation or SO)
The interaction between phases is managed through the Continual Service Improvement approach, which is responsible for measuring and improving the service and process maturity level.
After Comparison of all phases, a service period is concluded and another service period begins.The Continual Service Improvement phase is involved during all phases of the service lifecycle. It is responsible for measuring the service and the processes, (Service Measurement), and to document the results (Service Reporting) in order to improve the services quality and the processes maturity (Service Improvement).
These improvements will be implemented in the next period of the service lifecycle, starting again with Service Strategy and then Service Design and Service Transition; the Service Operation phase, of course, continues to manage operations during all service periods.
Details of the ITIL v2 framework
The Service Support ITIL discipline is focused on the User of the ICT services and is primarily concerned with ensuring that they have access to the appropriate services to support the business functions.To a business, customers and users are the entry point to the process model. They get involved in service support by: Asking for changes
Needing communication, updates
Having difficulties, queries.
The service desk is the single contact point for the customers to record their problems. It will try to resolve it, if there is a direct solution or will create an incident. Incidents initiate a chain of processes: Incident Management, Problem Management, Change Management, Release Management and Configuration Management (see following sections for details). This chain of processes is tracked using the Configuration Management Database (CMDB), which records each process, and creates output documents for traceability (Quality Management).
Service Desk / Service Request Management Tasks include handling incidents and requests, and providing an interface for other ITSM processes.Single Point of Contact (SPOC) and not necessarily the First Point of Contact (FPOC)
There is a single point of entry and exit
Easier for Customers
Communication channel is streamlinedThe primary functions of the Service Desk are:Incident Control: life cycle management of all Service Requests
Communication: keeping the customer informed of progress and advising on workarounds
The Service Desk function is known under various names .Call Center: main emphasis on professionally handling large call volumes of telephone-based transactions
Help Desk: manage, co-ordinate and resolve incidents as quickly as possible
Service Desk: not only handles incidents, problems and questions but also provides an interface for other activities such as change requests, maintenance contracts, software licenses, Service Level Management, Configuration Management, Availability Management, Financial Management and IT Services Continuity Management
The three types of structure that can be considered are:Local Service Desk: to meet local business needs - is practical only until multiple locations requiring support services are involved
Central Service Desk: for organizations having multiple locations - reduces operational costs and improves usage of available resources
Virtual Service Desk: for organizations having multi-country locations - can be situated and accessed from anywhere in the world due to advances in network performance and telecommunications, reducing operational costs and improving usage of available resourcesIncident Management
The goal of Incident Management is to restore normal service operation as quickly as possible and minimize the adverse effect on business operations, thus ensuring that the best possible levels of service quality and availability are maintained.
'Normal service operation' is defined here as service operation within Service Level Agreement (SLA) limits.Problem Management
The goal of 'Problem Management' is to resolve the root cause of incidents and thus to minimize the adverse impact of incidents and problems on business that are caused by errors within the IT infrastructure, and to prevent recurrence of incidents related to these errors.
A `problem' is an unknown underlying cause of one or more incidents, and a `known error' is a problem that is successfully diagnosed and for which either a work-around or a permanent resolution has been identified. The CCTA defines problems and known errors as follows:A problem is a condition often identified as a result of multiple Incidents that exhibit common symptoms. Problems can also be identified from a single significant Incident, indicative of a single error, for which the cause is unknown, but for which the impact is significant.A known error is a condition identified by successful diagnosis of the root cause of a problem, and the subsequent development of a Work-around.Problem management is different from incident management. The principal purpose of problem management is to find and resolve the root cause of a problem and prevention of incidents; the purpose of incident management is to return the service to normal level as soon as possible, with smallest possible business impact. The problem management process is intended to reduce the number and severity of incidents and problems on the business, and report it in documentation to be available for the first-line and second line of the help desk.
The proactive process identifies and resolves problems before incidents occur. These activities are: Trend analysis;
Targeting support action;
Providing information to the organization.
The Error Control Process is an iterative process to diagnose known errors until they are eliminated by the successful implementation of a change under the control of the Change Management process.The Problem Control Process aims to handle problems in an efficient way. Problem control identifies the root cause of incidents and reports it to the service desk. Other activities are: Problem identification and recording;
Problem classification;
Problem investigation and diagnosis.The standard technique for identifying the root cause of a problem is to use an Ishikawa diagram, also referred to as a cause-and-effect diagram, tree diagram, or fishbone diagram. An Ishikawa diagram is typically the result of a brainstorming session in which members of a group offer ideas to improve a product. For problem-solving, the goal will be to find the cause and effect of the problem.
Ishikawa diagrams can be defined in a meta-model.First there is the main subject, which is the backbone of the diagram that we are trying to solve or improve. The main subject is derived from a cause.
The relationship between a cause and an effect is a double relation: an effect is a result of a cause, and the cause is the root of an effect. But there is just one effect for several causes and one cause for several effects.
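To make the meta-model above a little more tangible, here is a minimal sketch of how it could be represented in code; the class names, fields and the example incident are illustrative choices, not part of ITIL.

```python
from dataclasses import dataclass, field

@dataclass
class Cause:
    description: str

@dataclass
class Effect:
    # The "main subject" (backbone) of the diagram is itself derived from a cause.
    description: str
    causes: list = field(default_factory=list)  # one effect, several causes

    def add_cause(self, cause: Cause):
        self.causes.append(cause)

# Example: diagnosing the root cause of a recurring incident.
effect = Effect("Users cannot log in between 09:00 and 09:15")
effect.add_cause(Cause("Authentication server overloaded at shift change"))
effect.add_cause(Cause("Password cache expires nightly"))

for cause in effect.causes:
    print(f"{effect.description} <- {cause.description}")
```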
The goal of Change Management is to ensure that standardized methods and procedures are used for efficient handling of all changes, in order to minimize the impact of change-related incidents and to improve day-to-day operations.
A change is “an event that results in a new status of one or more configuration items (CIs)”: approved by management, cost-effective, enhancing business processes (fixes), and carrying minimum risk to the IT infrastructure.

The main aims of Change Management are:
Minimal disruption of services
Reduction in back-out activities
Economic utilization of resources involved in the change

Change Management Terminology
Change: the addition, modification or removal of CIs
Request for Change (RFC): form used to record details of a request for a change and is sent as an input to Change Management by the Change Requestor
Forward Schedule of Changes (FSC): schedule that contains details of all the forthcoming ChangesRelease Management
Release Management is used for platform-independent and automated distribution of software and hardware, including license controls across the entire IT infrastructure. Proper software and hardware control ensures the availability of licensed, tested, and version-certified software and hardware, which will function as intended when introduced into the existing infrastructure. Quality control during the development and implementation of new hardware and software is also the responsibility of Release Management. This guarantees that all software meets the demands of the business processes.
The goals of release management are:Plan the rollout of software
Design and implement procedures for the distribution and installation of changes to IT systems
Effectively communicate and manage expectations of the customer during the planning and rollout of new releases
Control the distribution and installation of changes to IT systemsThe focus of release management is the protection of the live environment and its services through the use of formal procedures and checks.
Release Categories
A Release consists of the new or changed software and/or hardware required to implement approved changes
Releases are categorized as:Major software releases and hardware upgrades, normally containing large amounts of new functionality, some of which may make intervening fixes to problems redundant. A major upgrade or release usually supersedes all preceding minor upgrades, releases and emergency fixes.
Minor software releases and hardware upgrades, normally containing small enhancements and fixes, some of which may have already been issued as emergency fixes. A minor upgrade or release usually supersedes all preceding emergency fixes.
Emergency software and hardware fixes, normally containing the corrections to a small number of known problems.
Releases can be divided based on the release unit into:Delta Release: is a release of only that part of the software which has been changed. For example, security patches.
Full Release: means that the entire software program will be deployed. For example, a new version of an existing application.
Packaged Release: is a combination of many changes. For example, an operating system image which also contains specific applications.
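As a concrete illustration of the Delta versus Full release distinction above, the sketch below compares file hashes between two build directories to decide which files a delta release would need to ship. The directory layout and helper names are hypothetical.

```python
import hashlib
from pathlib import Path

def file_hashes(root):
    """Map each file's relative path to a content hash."""
    hashes = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            hashes[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

def delta_release(previous_build, new_build):
    """Files that are new or changed since the previous build: candidates for a delta release."""
    old, new = file_hashes(previous_build), file_hashes(new_build)
    return sorted(name for name, digest in new.items() if old.get(name) != digest)

# Hypothetical build directories:
# changed = delta_release("builds/v1.0", "builds/v1.1")
# print(changed)
```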
Configuration Management

Configuration Management is a process that tracks all of the individual Configuration Items (CI) in a system.

Service Delivery
The Service Delivery discipline is primarily concerned with the proactive and forward-looking services that the business requires of its ICT provider in order to provide adequate support to the business users. It is focused on the business as the customer of the ICT services (compare with: Service Support). The discipline consists of the following processes, explained in subsections below: Service Level Management
IT Service Continuity Management
Availability Management
Service Level ManagementService Level Management provides for continual identification, monitoring and review of the levels of IT services specified in the service level agreements (SLAs). Service Level Management ensures that arrangements are in place with internal IT Support Providers and external suppliers in the form of Operational Level Agreements (OLAs) and Underpinning Contracts (UCs). The process involves assessing the impact of change upon service quality and SLAs. The service level management process is in close relation with the operational processes to control their activities. The central role of Service Level Management makes it the natural place for metrics to be established and monitored against a benchmark.Service Level Management is the primary interface with the customer (as opposed to the user, who is serviced by the Service Desk). Service Level Management is responsible for
ensuring that the agreed IT services are delivered when and where they are supposed to be
liaising with Availability Management, Capacity Management, Incident Management and Problem Management to ensure that the required levels and quality of service are achieved within the resources agreed with Financial Management
producing and maintaining a Service Catalog (a list of standard IT service options and agreements made available to customers)
ensuring that appropriate IT Service Continuity plans have been made to support the business and its continuity requirements. The Service Level Manager relies on all the other areas of the Service Delivery process to provide the necessary support which ensures the agreed services are provided in a cost effective, secure and efficient manner.
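To make the monitoring side of Service Level Management concrete, here is a small sketch that checks measured monthly availability against agreed SLA targets; the service names and figures are invented for the example.

```python
SLA_TARGETS = {          # agreed availability per service, percent
    "email": 99.5,
    "payroll": 99.9,
}

measured = {             # availability actually delivered this month, percent
    "email": 99.7,
    "payroll": 99.4,
}

def sla_report(targets, actuals):
    """Return (service, target, actual, breached) tuples for the monthly service review."""
    report = []
    for service, target in targets.items():
        actual = actuals.get(service)
        report.append((service, target, actual, actual is not None and actual < target))
    return report

for service, target, actual, breached in sla_report(SLA_TARGETS, measured):
    status = "BREACH" if breached else "ok"
    print(f"{service:8s} target {target}%  actual {actual}%  {status}")
```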
Capacity Management supports the optimum and cost effective provision of IT services by helping organizations match their IT resources to the business demands. The high-level activities are Application Sizing, Workload Management, Demand Management, Modeling, Capacity Planning, Resource Management, and Performance Management.
Availability Management allows organizations to sustain the IT service availability in order to support the business at a justifiable cost. The high-level activities are Realize Availability Requirements, Compile Availability Plan, Monitor Availability, and Monitor Maintenance Obligations.Availability Management is the ability of an IT component to perform at an agreed level over a period of time.
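As a small illustration of “an agreed level over a period of time”, the sketch below computes percentage availability for one service from its downtime records, along with rough MTBF and MTRS figures; the numbers are invented.

```python
MINUTES_IN_MONTH = 30 * 24 * 60

# Invented downtime records (minutes) for one service over a month.
outages = [12, 45, 7]

downtime = sum(outages)
availability = 100 * (MINUTES_IN_MONTH - downtime) / MINUTES_IN_MONTH
mtbf = MINUTES_IN_MONTH / len(outages)   # rough mean time between failures
mtrs = downtime / len(outages)           # rough mean time to restore service

print(f"availability {availability:.3f}%  MTBF {mtbf:.0f} min  MTRS {mtrs:.1f} min")
```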
Reliability: how reliable is the service? Ability of an IT component to perform at an agreed level at described conditions.
Maintainability: The ability of an IT Component to remain in, or be restored to an operational state. Serviceability: The ability for an external supplier to maintain the availability of component or function under a third party contract.
Resilience: A measure of freedom from operational failure and a method of keeping services reliable. One popular method of resilience is redundancy.
Security: A service may have associated data. Security refers to the confidentiality, integrity, and availability of that data. Availability gives us the clear overview of the end to end availability of the system.

Financial Management for IT Services
Planning to implement service management
The ITIL discipline - Planning To Implement Service Management attempts to provide practitioners with a framework for the alignment of business needs and IT provision requirements. The processes and approaches incorporated within the guidelines suggest the development of a Continuous Service Improvement Programme (CSIP) as the basis for implementing other ITIL disciplines as projects within a controlled programme of work. Planning To Implement Service Management is mainly focused on the Service Management processes, but is also generically applicable to other ITIL disciplines.
create vision
analyze organization
implement IT service management

Security Management
The ITIL-process Security Management describes the structured fitting of information security in the management organization. ITIL Security Management is based on the code of practice for information security management also known as ISO/IEC 17799.A basic concept of the Security Management is the information security. The primary goal of information security is to guarantee safety of the information. Safety is to be protected against risks. Security is the means to be safe against risks. When protecting information it is the value of the information that has to be protected. These values are stipulated by the confidentiality, integrity and availability. Inferred aspects are privacy, anonymity and verifiability.The current move towards ISO/IEC 27001 may require some revision to the ITIL Security Management best practices which are often claimed to be rich in content for physical security but weak in areas such as software/application security and logical security in the ICT infrastructure.
ICT Infrastructure Management
ICT Infrastructure Management processes recommend best practice for requirements analysis, planning, design, deployment and ongoing operations management and technical support of an ICT Infrastructure. ("ICT" is an acronym for "Information and Communication Technology".)The Infrastructure Management processes describe those processes within ITIL that directly relate to the ICT equipment and software that is involved in providing ICT services to customers.
ICT Design and Planning
ICT Deployment
ICT Operations
ICT Technical Support
These disciplines are less well understood than those of Service Management and therefore often some of their content is believed to be covered 'by implication' in Service Management disciplines.
ICT Design and Planning provides a framework and approach for the Strategic and Technical Design and Planning of ICT infrastructures. It includes the necessary combination of Business (and overall IS) strategy, with technical design and architecture. ICT Design and Planning drives both the Procurement of new ICT solutions through the production of Statements of Requirement ("SOR") and Invitations to Tender ("ITT") and is responsible for the initiation and management of ICT Programmes for strategic business change. Key Outputs from Design and Planning are: ICT Strategies, Policies and Plans
The ICT Overall Architecture & Management Architecture
Feasibility Studies, ITTs and SORs
Business CasesICT Deployment Management
ICT Deployment provides a framework for the successful management of design, build, test and roll-out (deploy) projects within an overall ICT programme. It includes many project management disciplines in common with PRINCE2, but has a broader focus to include the necessary integration of Release Management and both functional and non functional testing.
ICT Operations Management
ICT Operations Management provides the day-to-day technical supervision of the ICT infrastructure. Often confused with the role of Incident Management from Service Support, Operations is more technical and is concerned not solely with Incidents reported by users, but with Events generated by or recorded by the Infrastructure. ICT Operations may often work closely alongside Incident Management and the Service Desk, which are not-necessarily technical in order to provide an 'Operations Bridge'. Operations, however should primarily work from documented processes and procedures and should be concerned with a number of specific sub-processes, such as: Output Management, Job Scheduling, Backup and Restore, Network Monitoring/Management, System Monitoring/Management, Database Monitoring/Management Storage Monitoring/Management. Operations are responsible for: A stable, secure ICT infrastructure A current, up to date Operational Documentation Library ("ODL") A log of all operational Events Maintenance of operational monitoring and management tools. Operational Scripts
Operational Procedures

ICT Technical Support
ICT Technical Support is the specialist technical function for infrastructure within ICT. Primarily as a support to other processes, both in Infrastructure Management and Service Management, Technical Support provides a number of specialist functions: Research and Evaluation, Market Intelligence (particularly for Design and Planning and Capacity Management), Proof of Concept and Pilot engineering, specialist technical expertise (particularly to Operations and Problem Management), creation of documentation (perhaps for the Operational Documentation Library or Known Error Database).
The Business Perspective is the name given to the collection of best practices that is suggested to address some of the issues often encountered in understanding and improving IT service provision, as a part of the entire business requirement for high IS quality management. These issues are:
Business Continuity Management describes the responsibilities and opportunities available to the business manager to improve what is, in most organizations one of the key contributing services to business efficiency and effectiveness. Surviving Change. IT infrastructure changes can impact the manner in which business is conducted or the continuity of business operations. It is important that business managers take notice of these changes and ensure that steps are taken to safeguard the business from adverse side effects. Transformation of business practice through radical change helps to control IT and to integrate it with the business. Partnerships and outsourcing This volume is related to the topics of IT Governance and IT Portfolio Management.
ITIL Application Management set encompasses a set of best practices proposed to improve the overall quality of IT software development and support through the life-cycle of software development projects, with particular attention to gathering and defining requirements that meet business objectives.
This volume is related to the topics of Software Engineering and IT Portfolio Management.
Software Asset Management

Software asset management (SAM) is the practice of integrating people, processes and technology to allow software licenses and usage to be systematically tracked, evaluated and managed. The goal of SAM is to reduce IT expenditures, human resource overhead and risks inherent in owning and managing software assets.

SAM includes maintaining software license compliance; tracking the inventory and usage of software assets; and maintaining standard policies and procedures surrounding the definition, deployment, configuration, use and retirement of software assets. SAM represents the software component of IT asset management, which also includes hardware asset management (to which SAM is intrinsically linked by the concept that without effective hardware inventory controls, efforts to control the software thereon will be significantly inhibited).
Small-Scale Implementation
ITIL Small-Scale Implementation provides an approach to the implementation of the ITIL framework for those with smaller IT units or departments. It is primarily an auxiliary work, covering many of the same best practice guidelines as Planning To Implement Service Management, Service Support and Service Delivery but provides additional guidance on the combination of roles and responsibilities and avoiding conflict between ITIL priorities.
Criticisms of ITIL
ITIL has been criticized on several fronts, including:
The books are not affordable for non-commercial users
Accusations that many ITIL advocates think ITIL is "a holistic, all-encompassing framework for IT governance"
Accusations that proponents of ITIL indoctrinate the methodology with 'religious zeal' at the expense of pragmatism.
Implementation and credentialing requires specific training
Debate over ITIL falling under BSM or ITSM frameworksAs Jan van Bon (author and editor of many IT Service Management publications) notes, There is a lot of confusion about ITIL, stemming from all kinds of misunderstandings about its nature. ITIL is, as the OGC states, a set of best practices. The OGC doesn’t claim that ITIL’s best practices describe pure processes. The OGC also doesn’t claim that ITIL is a framework, designed as one coherent model. That is what most of its users make of it, probably because they have such a great need for such a model...CIO Magazine columnist Dean Meyer has also presented some cautionary views of ITIL, including five pitfalls such as "becoming a slave to outdated definitions" and "Letting ITIL become religion." As he notes, "...it doesn't describe the complete range of processes needed to be world class. It's focused on ... managing ongoing services."The quality of the library's volumes is seen to be uneven. For example, van Herwaarden and Grift note, “the consistency that characterized the service support processes … is largely missing in the service delivery books.In a 2004 survey designed by Noel Bruton (author of 'How to Manage the IT Helpdesk' and 'Managing the IT Services Process'), ITIL adopting organizations were asked to relate their actual experiences in having implemented ITIL. Seventy-seven percent of survey respondents either agreed or strongly agreed that "ITIL does not have all the answers". ITIL exponents accept this, citing ITIL's stated intention to be non-prescriptive, expecting that organizations will have to engage ITIL processes with their existing overall process model. Bruton notes that the claim to non-prescriptiveness must be at best one of scale rather than absolute intention, for the very description of a certain set of processes is in itself a form of prescription.
(Survey: "The ITIL Experience - Has It Been Worth It", author Bruton Consultancy 2004, published by Helpdesk Institute Europe, The Helpdesk and IT Support Show and Hornbill Software.)
While ITIL addresses in depth the various aspects of Service Management, it does not address enterprise architecture in such depth. Many of the shortcomings in the implementation of ITIL do not necessarily come about because of flaws in the design or implementation of the Service Management aspects of the business, but rather from the wider architectural framework in which the business is situated. Because of its primary focus on Service Management, ITIL has limited utility in managing poorly designed enterprise architectures, and offers little guidance on how to feed back into the design of the enterprise architecture.
The BOFH notes that "ITIL manuals are like kryptonite to enthusiasm."
See also
Business Application Optimization (BAO, Macro 4)
IBM Tivoli Unified Process (ITUP)
Microsoft Operations Framework (MOF)
Run Book Automation (RBA)
Stratavia Data Palette
enhanced Telecom Operations Map (eTOM)
Grey Area Diagnosis
RPR Problem Diagnosis
References
External links
Official ITIL Website
The OGC website
IT Service Management Forum (International)
The ITIL definition site
The ITIL Forum
The ITIL Open Guide
American ITIL
Bitty Browser
Bitty is a small-form Web browser designed for use within Web pages and other documents. I launched Bitty in 2005, and received US patent number 7,284,208.
Chris Anderson, Editor-in-Chief, Wired: "Awesome hack!"
Steve Rubel, Micro Persuasion: "InformationWeek, a major tech trade publication, has launched an innovative advertising program for Microsoft using the Bitty Browser widget platform. The program is totally breakthrough. The sponsor gets to communicate their message in an innovative way, the reader is spared the hassle of linking off to another page and InformationWeek can measure the effectiveness of the campaign."
[NB: InformationWeek later reported: "We're seeing more than double the interaction rate of a traditional ad banner despite its placement well below the fold."]
I was invited to present Bitty to the Supernova and PC Forum technology conferences. Here I am (orange shirt) at PC Forum:
Andromeda: Drag and Drop, Meet Browse and Play
[Photo: George Clinton, Fat Albert, and my old Thinkpad running Andromeda]
[Photo: Andromeda gets a spread in PC User Magazine]
When I first set out to design Andromeda, the idea was to make it easy to play music over the Web. A few years later, with PHP and ASP versions for Windows, Linux, and Mac OS X, I'm happy to report that people seem to like it.
"A way-groovy app that allows you to stream your MP3 library over the Internet."
"It doesn't get much simpler than Andromeda. If you have a basic Web site and are capable of copying MP3s and the Andromeda script (PHP or ASP) into a folder on your server, you can offer streaming or downloadable music on the Internet or within any LAN."
"We're using Andromeda to distribute station promo audio enterprise-wide within Clear Channel. It's so simple to use that even I can figure it out. Our producers and programming staff love it... simple and powerful."
Customers now include organizations like Greenpeace, Creative Artists Agency, US Air Force, Clear Channel, Salvation Army, as well as loads of independent musicians, voice-over artists, sermonists, and regular music fans.
System > User > Scott > History
Atari 800 (1980)
Geek fun: My first big software project was for my dad, when I was about 12. At the time, he sold hardware speech synthesizers for the Atari, Apple, and Commodore computers. I wrote a function in Atari BASIC which converted numbers to their phonetic equivalents. So... 123.4 became:
W-UH-N H-UH-N-DR-IH-D AE-N-D TW-UH-N-T-EE TH-R-EE P-OY-N-T F-AW-R
It worked all the way up to 999,999,999.999999, which I thought was spectacular.
Firstview (1995)
Firstview provided comprehensive access to runway photos within hours of the fashion shows in Paris, Milan, London, and New York City. The site received worldwide media attention, from both trade and popular publications, including Time, People, Le Monde, WWD, The New York Times.
Curator (1995)
Curator was a standalone interactive graffiti kiosk that was exhibited in art galleries in Soho, NYC, and was awarded first place by the MIT Media Lab. It was kind of like Photoshop for gallery-goers.
[Image: Curator graffiti on a photo by artist Rikki Reich]
Incubator (1997)
Incubator was actually pretty spiffy for 1997. It was a Web service that supported an arbitrary number of catalogs, with a Web interface that merchants could use to update their catalogs themselves. Here's a mailer I designed:
Most Infamous Moment (2008)
True story: back in 1998 I designed the Bernard L. Madoff home page. And, amazingly, it stayed right there until December, 2008, when the Honorable Louis L. Stanton, U.S. Federal Judge, ordered that it be replaced.
Well, it may not seem like much to 2014 eyes, but here's what first turned me on to computers, way back in 1978:
10 FOR I = 1 TO 100
20 PRINT STR$(I) + " "
30 NEXT I
At the time I was a 9-year-old with a somewhat unhealthy interest in typing out page after page of sequential numbers on the family typewriter. The discovery that a FOR loop could reproduce weeks of work in mere seconds was a total game changer.
I got my first personal computer, an Atari 800, in 1980. My most memorable project was writing a function in Atari BASIC that converted any number into a sequence of speech synthesizer phonemes (speech synths don't natively know how to pronounce numbers). My old Atari BASIC cartridge remains one of my most-treasured artifacts.
I graduated from Cornell in 1992 with a major in Computer Science and a minor in Cognitive Studies. As a senior I lucked into a job with the Interactive Multimedia Group where I developed tools for authoring rich-media documents and sharing them over a network.
Following Cornell I spent two years at Technovations, a small communications firm where I produced touchscreen kiosks and digital video for clients like Pfizer and Prudential.
On August 3, 1994 I discovered the Web. (And now, thanks to the Web, I can determine that exact date: it was the same day as Lollapalooza in Providence, RI, right after MacWorld Boston.) But the company wasn't interested, so I set out on my own.
As a self-employed Web-wonk in the mid '90s, I coded pages for Netscape, Time, and General Electric. Firstview, a fashion photo archive, drew international media attention (New York Times, People, Time, Le Monde). I also served as Director of Technology for Gen Art, a national nonprofit that promotes young artists, fashion designers and filmmakers.
In 1995 the MIT Media Lab awarded me first place (in what turns out to have been the first interactive juried art show) for my digital graffiti project, Curator, which was exhibited in several art galleries in Soho, NYC. It was kind of like a collective, shared Photoshop for gallery-goers. The prize was a digital camera, and I celebrated by making this pre-Google Map tour of Soho. Curator was also licensed to help launch a new line of Canon printers.
I developed Incubator in 1997, which combined several ideas that were novel for the time: 1) it was a 'service' that I only had to build once, but could then license repeatedly, and 2) it enabled subscribers to log in and maintain their own catalogs through a Web interface.
I started collecting MP3s in the late '90s, and soon wanted a way to play them over the network. This led to my work on Andromeda starting in 1999. I then became active in debates on file sharing and intellectual property, including: an article for Salon.com, Baudio, a 'concept app' that was covered by Slashdot and LawMeme, some friendly sparring with Stanford law professor and "Free Culture" leader Larry Lessig, a few appearances on the nationally syndicated David Lawrence Show, an invitation to sit on a CMJ Music Marathon panel, and DRUMS, a rough sketch of a P2P panacea.
Bitty Browser was originally envisioned as a way to make it easy to embed Andromeda sites within other Web pages, but then I realized the concept of embedded browsing had many other applications. I was awarded US patent number 7,284,208 for my work on Bitty.
Over 2008/2009 I developed a new site/strategy for Electro-Harmonix, a leading manufacturer of guitar sound equipment. My solution was to apply "social media" concepts toward the marketing of physical products.
For reaction, see Wired, TechCrunch.
Flipjar, currently in beta, is a Web-based platform that encourages regular/repeat Web site visitors to "do things" that directly benefit their favorite sites.
The Home Front
Andromeda is the android name of my partner in life, Amy Harmon. We met at the book launch party for "Extra Life." Amy is a two-time Pulitzer Prize winning journalist for the New York Times: most recently for her series on the cultural impact of new genetic technologies (here's the award), and previously as a member of the "How Race is Lived in America" series team (here's her contribution).
I've cameoed in a few of Amy's travel stories — here's one about digital cameras and another about our trip to Tucson, where we stayed overnight at Kitt Peak National Observatory and took this picture of NGC 1042.
More recently, I've become interested in urban biking. I ride a custom Swift Folder, built by the original frame designer right here in Brooklyn, NYC.
On August 24, 2004 we welcomed our first little Andromite, Sasha Harmon Matthews, to the family. Bitty Browser is named in her honor. We all live happily ever after in New York City.
PROTECT IP / SOPA Act Breaks the Internet
4duhHUYNH | Joined: 23 Oct 2011 | Posts: 23 | Location: Sinfest Forums (duh)
Posted: Wed Nov 16, 2011 9:21 pm Post subject: PROTECT IP / SOPA Act Breaks the Internet
Ars Technica wrote: Imagine a world in which any intellectual property holder can, without ever appearing before a judge or setting foot in a courtroom, shut down any website's online advertising programs and block access to credit card payments. The credit card processors and the advertising networks would be required to take quick action against the named website; only the filing of a �counter notification� by the website could get service restored.
It's the world envisioned by Rep. Lamar Smith (R-TX) in today's introduction of the Stop Online Piracy Act in the US House of Representatives. This isn't some off-the-wall piece of legislation with no chance of passing, either; it's the House equivalent to the Senate's PROTECT IP Act, which would officially bring Internet censorship to the US as a matter of law.
Calling its plan a "market-based system to protect US customers and prevent US funding of sites dedicated to theft of US property," the new bill gives broad powers to private actors. Any holder of intellectual property rights could simply send a letter to ad network operators like Google and to payment processors like MasterCard, Visa, and PayPal, demanding these companies cut off access to any site the IP holder names as an infringer.
The scheme is much like the Digital Millennium Copyright Act's (DMCA) "takedown notices," in which a copyright holder can demand some piece of content be removed from sites like YouTube with a letter. The content will be removed unless the person who posted the content objects; at that point, the copyright holder can decide if it wants to take the person to court over the issue.
Here, though, the stakes are higher. Rather than requesting the takedown of certain hosted material, intellectual property owners can go directly for the jugular: marketing and revenue for the entire site. So long as the intellectual property holders include some "specific facts" supporting their infringement claim, ad networks and payment processors will have five days to cut off contact with the website in question.
The scheme is largely targeted at foreign websites which do not recognize US law, and which therefore will often refuse to comply with takedown requests. But the potential for abuse -- even inadvertent abuse -- here is astonishing, given the terrifically outsized stick with which content owners can now beat on suspected infringers.
One thing private actors can't do under the new bill is actually block a site from the Internet, though it hardly matters, because the government has agreed to do it for them. The bill gives government lawyers the power to go to court and obtain an injunction against any foreign website based on a generally single-sided presentation to a judge. Once that happens, Internet providers have 5 days to "prevent access by its subscribers located within the United States to the foreign infringing site."
The government can also go after anyone who builds a tool designed for the "circumvention or bypassing" of the Internet block. Such tools already exist as a result of the US government's ongoing campaign to seize Internet domain names it believes host infringing content; they can redirect visitors who enter the site's address to its new location. The government has already asked Web browser makers like Mozilla to remove access to these sorts of tools. Mozilla refused, so the new bill just tries to ban such tools completely. (Pointing your computer's browser to a foreign DNS server in order to view a less-censored Internet still appears to be legal.)
Search engines, too, are affected, with the duty to prevent the site in question "from being served as a direct hypertext link." Payment processors and ad networks would also have to cut off the site.
Finally, and for good measure, Internet service providers and payment processors get the green light to simply block access to sites on their own volition -- no content owner notification even needed. So long as they believe the site is "dedicated to the theft of US property," Internet providers and payment processors can't be sued.
"Industry norms"
The House bill is shockingly sympathetic to a narrow subsection of business interests. For instance, buried deep in the back of the >70-page document is a requirement that the US Intellectual Property Enforcement Coordinator prepare a study for Congress. That study should analyze "notorious foreign infringers" and attempt to quantify the "significant harm inflicted by notorious foreign infringers." (Talk about assuming your conclusions before you start.)
The report, which is specifically charged to give weight to the views of content owners, requests a set of specific policy recommendations that might "encourage foreign businesses to adopt industry norms to promote the protection of intellectual property globally." Should the bill pass, the US government would be explicitly charged with promoting private "industry norms" -- not actual laws or treaties -- around the world.
In the request for the report, we can also see the IP maximalist lobby preparing for its next move: shutting off access to US capital markets and preventing companies from "offering stock for sale to the public" in the US.
Call it what it is
Not all censorship is bad -- but we need to have an honest discussion about when and how to deploy it, rather than wrapping an unprecedented set of censorship tools in meaningless terms like "rogue site," or by calling a key section of the new bill the "E-PARASITE Act."
You don't have to support piracy -- and we don't -- to see the many problems with this new approach. Just today, the RIAA submitted to the US government a list of "notorious markets." As part of that list, the RIAA included "cyberlockers" like MegaUpload, which are "notorious services" that "thumb their noses at international laws, all while pocketing significant advertising revenues from trafficking in free, unlicensed copyrighted materials."
It's not hard to imagine how long it would take before such sites--which certainly do host plenty of user-uploaded infringing content--are targeted under the new law. Yet they have a host of legal uses, and cyberlockers like RapidShare have been declared legal by both US and European courts.
Not surprisingly, the new bill is getting pushback from groups like NetCoalition, which counts Google, Yahoo, and small ISPs among its members. "As leading brands of the Internet, we strongly oppose offshore 'rogue' websites and share policymakers' goal of combating online infringement of copyrights and trademarks," said executive director Markham Erickson in a statement.
"However, we do not believe that the solution lies in regulating the Internet and comprising its stability and security. We do not believe that it is worth overturning a decade of settled law that has formed the legal foundation for all social media. And finally, we do not believe that it is worth restricting free speech or providing comfort to totalitarian regimes that seek to control and restrict the Internet freedoms of their own citizens."
Dozens of law professors have also claimed the original PROTECT IP Act, which contains most of the same ideas, is unconstitutional. But the drumbeat for some sort of censorship is growing louder.
Heard about private companies controlling the internet a while back but didn't think it would actually happen until now.
_________________
This is my sig. Woo.
Avatar is from Kawaii Not. Visit!
Privacy Policy OverviewAmerican Water Works and its affiliates (collectively, “American Water,” “we” or “us”) are serious about protecting your privacy. We understand your concerns with regard to how information about you is used and shared, and we appreciate your trust that we will do so carefully and sensibly. This notice (this “Privacy Policy”) describes what information we collect about you, how we collect it, how we use it, with whom we may share it, and what choices you have regarding it. This Privacy Policy is incorporated into and is a part of the Terms of Use of the American Water Web site (www.amwater.com, or any replacement site, the “Site”). We encourage you to become familiar with the terms and conditions of both this Privacy Policy and the Terms of Use. By accessing and using the Site, you agree that you have read and understand this Privacy Policy, and you accept and consent to the privacy practices (and any uses and disclosures of information about you) that are described in this Privacy Policy.What Information Do We Collect?We may collect certain identifying information from or about you in connection with your use of or submissions to the Site (collectively, the “Collected Information”). For example, if you request information from us through the Site, you may submit your name and contact information, such as your email address, mailing address, or telephone number, to us. If you access your account or pay your bill through the Site, you may also provide information like your account number, credit card information, and bank account information. If you inquire about or apply for a job with us through the Site, you may provide us with information on your, educational background, employment history, or overall compensation.In addition, we may retain the content of, and metadata regarding, any correspondence you may have with us or our representatives, regardless of the mode of communication by which such correspondence was made. This information helps us to improve the Site and the online media, content, materials, opportunities, and services that we feature or describe on the Site, and to more effectively and efficiently respond to both current and future inquiries.As with many other Web sites, the Web servers used to operate the Site may collect certain data pertaining to you and the equipment and communications method that you use to access the Internet and the Site. For security reasons and to confirm the integrity of our data, American Water may combine components of this data with other sources of information which may identify you. Unless otherwise described in this Privacy Policy or our Terms of Use, such identifying information will be used solely for our internal business purposes. In addition, the information we collect may reveal such things as the Internet protocol (“IP”) address assigned to your computer, specific pages that you accessed on the Site or immediately prior to visiting the Site, and the length of time you spent at the Site. 
The purposes for which this information is collected and used include facilitating Site operation and system administration, generating aggregate, non-identifiable statistical information, monitoring and analyzing Site traffic and usage patterns, and improving the content and content delivery with regard to the Site and the online media, content, materials, opportunities, and services that we describe or make available on the Site.Although, like many other Web sites, our site may at any time, in our discretion, use “cookies” to help us recognize visitors when they return to the Site or as they move among the different portions of the Site, we do not currently use cookies, or other tracking mechanisms, to collect personal or individually identifiable information.While some aspects of the Site may provide information that is intended for children, we do not knowingly or intentionally collect personal or identifying information from children. We strongly recommend that children get their parent's or guardian's consent before giving out any personal information. If you are under 18, please use our Site only with the involvement of a parent or guardian.How Do We Use The Information That We Collect?In addition to the uses mentioned or described above, or otherwise described in our Terms of Use, we use the information that you submit to us to accomplish the purpose for which it was submitted. For example, if you submit your credit card information to make a payment on your account, we will use that information to make that payment. We may also use the information that we collect from or about you to analyze and improve the content, features, materials and opportunities that we make available on the Site, to notify you of changes made to the Site or new opportunities made available on or through the Site, to evaluate your needs and customize the Site content delivered to you according to those needs, to send you promotional materials that you request from us, and for other legitimate and lawful business purposes. If you contact us for support or assistance, we may use information that you provide or that we collect about you or your system for purposes such as verifying whether your system meets the minimum requirements needed to use the Site and our various services.With Whom Do We Share Information That We Collect?In addition to the uses mentioned or described above, or otherwise described in our Terms of Use, in the course of conducting our business, we may, as appropriate, transfer Collected Information, whether solicited or unsolicited, to our offices throughout the world, catalog and add such Collected Information to our databases, and transmit such Collected Information to our affiliates and contractors and, to the extent necessary to accomplish the purpose for which information was submitted, to other third parties (e.g., to the relevant financial institutions in making payments).We may from time to time utilize a number of trusted business and marketing partners in delivering the online media, content, materials, and opportunities available on or through the Site to you. To the extent necessary for purposes of communicating with you or fulfilling your requests for our products or services, or your subscriptions to such media, content, and materials, we may share information about you with these business partners. 
We may also produce reports on Site traffic or usage patterns and share these reports with our business partners and others, but the information contained in these reports is anonymous and does not allow identification of any specific individual. Please rest assured that, other than as expressly provided in this Privacy Policy, any Collected Information that we obtain about you will be reported outside our organization only in aggregated formats and will not be distributed in a manner that will identify or be attributable to any particular or specific individual or company.We may disclose information about you if we become subject to a subpoena or court order, or if we are otherwise legally required to disclose information. We may also use and disclose information about you to establish or exercise our legal rights, to enforce the Terms of Use, to assert and defend against legal claims, or if we believe such disclosure is necessary to investigate, prevent, or take other action regarding actual or suspected illegal or fraudulent activities or potential threats to the physical safety or well-being of any person.As American Water continues to grow and develop its business, it is possible that its corporate structure might change or that it might merge or otherwise combine with, or that it or portions of its business might be acquired by, another company. In any such transactions, customer information generally is, will most probably be, and should be expected to be, one of the transferred business assets.What Choices Do You Have?When corresponding with American Water or our representatives, or when making a request for information or otherwise interacting with American Water or others through the Site, you choose what information to supply, what questions to pose and comments to make, whether you wish to receive further information, and by what method of communication such information should be delivered. Please take care to share only such information as is needed or that you believe is appropriate.How Do We Protect Information Collected About You?American Water takes commercially reasonable measures to secure and protect information transmitted via or stored on the Site and transmitted to and from the Site. Nevertheless, no security system is impenetrable. We cannot and do not guarantee that information that users of the Site may happen to transmit or otherwise supply, or that any communications or any electronic commerce conducted on or through the Site, is or will be totally secure. You agree to immediately notify us of any breach of Site security, this Privacy Policy or the Terms of Use of which you become aware.Linked SitesFor your convenience, some hyperlinks may be posted on the Site that link to other Web sites not under our control. We are not responsible for, and this Privacy Policy does not apply to, the privacy practices of those sites or of any companies that we do not own or control. We encourage you to seek out and read the privacy policy of each Web site that you visit. In addition, should you happen to initiate a transaction on a Web site that our Site links to, even if you reached that site through our Site, the information that you submit to complete that transaction becomes subject to the privacy practices of the operator of that linked site. 
You should read that site’s privacy policies to understand how personal information that is collected about you is used and protected.Changes to Privacy PolicyFrom time to time, we may change our privacy practices, and this Privacy Policy, because of changes in relevant and applicable legal or regulatory requirements, our business practices, or in our attempts to better serve your needs and those of our other customers. Notice of such changes to our privacy practices will be given in the manner described in the Terms of Use and a revised Privacy Policy will be posted on the Site.Who Can You Contact For More Information?If you have any questions or suggestions about the Site, American Water, or our products, services, or privacy practices, please contact us at the numbers or address given below.American Water WorksAttn: Communications and External Affairs1025 Laurel Oak RoadVoorhees, NJ 08043Telephone: 856.346.8200Fax: 856.346.8360E-mail: [email protected] ACCESSING OR USING THE SITE, YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTAND, AND CONSENT TO THE PRIVACY PRACTICES, AND TO THE USES AND DISCLOSURES OF INFORMATION THAT WE COLLECT ABOUT YOU, THAT ARE DESCRIBED IN THIS PRIVACY POLICY, AND YOU AGREE TO BE BOUND BY THE TERMS OF USE REFERENCED ABOVE.NGEDOCS: 1209816.5 © 2014 American Water.
"Missouri American Water"
and the star logo are the registered trademarks of American Water Works Company, Inc. All rights reserved.
How to find Security Holes
by Kragen Sitaker
If a program has a bug in it that manifests under extreme circumstances, then normally, it's a minor annoyance. Usually, you can just avoid the extreme circumstances, and the bug isn't a problem. You could duplicate the effect of tickling the bug by writing your own program, if you wanted to. But sometimes programs sit on security boundaries. They take input from other programs that don't have the same access that they do. Some examples: your mailreader takes input from anyone you get mail from, and it has access to your display, which they probably don't. The TCP/IP stack of any computer connected to the Internet takes input from anyone on the Internet, and usually has access to everything on the computer, which most people on the Internet certainly don't. Any program that does such things has to be careful. If it has any bugs in it, it could potentially end up allowing other people -- untrusted people -- to do things they're not allowed to do. A bug that has this property is called a "hole", or more formally, a "vulnerability". Here are some common categories of holes.
Psychological problems
When you're writing a normal piece of software, your purpose is to make certain things possible, if the user does things correctly. When you're writing a security-sensitive piece of software, you also have to make certain things impossible, no matter what any untrusted user does. This means that certain parts of your program must function properly under a wide range of circumstances. Cryptologists and real-time programmers are familiar with doing things this way. Most other programmers aren't, and habits of mind from their normal-software work tend to make their software insecure.
Change of role hole
A lot of holes come from running programs in different environments. What was originally a minor annoyance -- or sometimes even a convenience -- becomes a security hole. For example, suppose you have a PostScript interpreter that was originally intended to let you preview your documents before printing them. This is not a security-sensitive role; the PostScript interpreter doesn't have any capabilities that you don't. But suppose you start using it to view documents from other people, people you don't know, even untrustworthy people. Suddenly, the presence of PostScript's file access operators becomes a threat! Someone can send you a document which will delete all your files -- or possibly stash copies of your files someplace they can get at them. This is the source of the vulnerabilities in most Unixes' TCP/IP stacks -- they were developed on a network where essentially everyone on the network was trustworthy, and now they're deployed on a network where there are many people who aren't. This is also the problem with Sendmail. Until it went through an audit, it was a constant source of holes. At a more subtle level, functions that are perfectly safe when they don't cross trust boundaries can be a disaster when they do. gets() is a perfect example. If you use gets() in a situation where you control the input, you just provide a buffer bigger than anything you expect to input, and you're fine. If you accidentally crash the program by giving it too much input, the fix is "don't do that" -- or maybe expand the buffer and recompile. But when the data is coming from an untrusted source, gets() can overflow the buffer and cause the program to do literally anything. Crashing is the most common result, but you can often carefully craft data that will cause the program to run it as executable code. Which brings us to . . .
Buffer-overflow holes
A buffer overflow occurs when you write a string (usually a string of characters) into an array, and keep on writing past the end of the array, overwriting whatever happened to be after the array. Security-problem buffer-overflows can arise in several situations: when reading input directly into a buffer; when copying input from a large buffer to a smaller one; when doing other processing of input into a string buffer. Remember, it's not a security hole if the input is already trusted -- it's just a potential annoyance. This is particularly nasty in most Unix environments; if the array is a local variable in some function, it's likely that the return address is somewhere after it on the stack. This seems to be the fashionable hole to exploit; thousands and thousands of holes of this nature have been found in the last couple of years. Even buffers in other places can sometimes be overflowed to produce security holes -- particularly if they're near function pointers or credential information.
Things to look for:
- Dangerous functions without any bounds-checking: strcpy, strlen, strcat, sprintf, gets.
- Dangerous functions with bounds-checking: strncpy, snprintf. Some of these will neglect to write a NULL at the end of a string, which can result in later copying of the result to include other data -- possibly sensitive data -- and possibly crashing the program; this problem does not exist with strncat, and I'm not clear on whether it exists in snprintf, but it definitely exists with strncpy.
- Misuse of strncat, which can result in writing a null byte one past the end of the array.
- Security-sensitive programs crashing -- any crash comes from a pointer bug, and perhaps the majority of pointer bugs in production code are from buffer overflows. Try feeding security-sensitive programs big inputs -- in environment variables (if environment variables are untrusted), in command-line parameters (if command-line parameters are untrusted), in untrusted files they read, on untrusted network connections. If they parse input into chunks, try making some of the chunks enormous. Watch for crashes. If you see crashes, see if the address at which the program crashed looks like a piece of your input.
- Incorrect bounds-checking. If the bounds-checking is scattered through hundreds of lines of code, instead of being centralized in two or three places, there's an extremely good chance that some of it is wrong.
A blanket solution is to compile all security-sensitive programs with bounds-checking enabled. The first work I know of on bounds-checking for gcc was done by Richard W. M. Jones and Paul Kelly, and is at http://www.doc.ic.ac.uk/~phjk/BoundsChecking.html. Greg McGary mailto:[email protected] did some other work. Announcement: http://www.cygnus.com/ml/egcs/1998-May/0073.html. Richard Jones and Herman ten Brugge did other work. Announcement: http://www.cygnus.com/ml/egcs/1998-May/0557.html. Greg compares different approaches in http://www.cygnus.com/ml/egcs/1998-May/0559.html.
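To make the pattern concrete, here is a minimal C sketch. It is an illustration rather than code from the original text, and the function names are invented. The first version copies untrusted input with no bound, the way a gets()-style program does; the second bounds the copy and always terminates the string, avoiding the missing-NUL pitfall that strncpy invites.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable: if `input` is longer than 63 characters plus the NUL,
 * strcpy() writes past the end of `name` and tramples whatever sits
 * after it on the stack -- possibly the return address. */
void greet_unsafe(const char *input)
{
    char name[64];
    strcpy(name, input);               /* no bounds check at all */
    printf("hello, %s\n", name);
}

/* Safer: bound the copy and terminate the string explicitly.
 * snprintf() always NUL-terminates when given a non-zero size. */
void greet_bounded(const char *input)
{
    char name[64];
    snprintf(name, sizeof name, "%s", input);
    printf("hello, %s\n", name);
}
```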
Confused deputies
When you give a filename to a regular program to open, the program asks the OS to open the file. Since the program is running with your privileges, if you're not supposed to be able to open the file, the OS refuses. No problem. But if you give a filename to a security-sensitive program -- a CGI script, a setuid program, a setgid program, any network server -- it can't necessarily rely on the OS's built-in automatic protections. That's because it can do some things you can't. In the case of a web server, what it can do that you can't may be pretty minimal, but it's likely that it can at least read some files with private info. Most such programs do some kind of checking on the data they receive. They often fall into one of several pitfalls:
- They check it in a time-dependent fashion that you can race. If a program first stat()s a file to see if you have permission to write it, and then (assuming you do) open()s it, it's possible you might be able to change the file to be something you don't have permission to write to in the meantime. (One possible solution is to stat() or lstat() the file before opening it, open it in a nondestructive fashion, then fstat() the open fd, then compare to see if you've got the same file you stat()ed. Credit Eric Allman, via Keith Bostic and BUGTRAQ.)
- They check it by parsing the filename, but they parse the filename differently than the OS. This has been a problem with lots of Microsoft OS web servers; the OS does some fairly sophisticated parsing on the filename to figure out what file it's actually referencing. Web servers look at the filename to determine what kind of access you have to it; often, you have access to run particular types of file (based on filename parsing), but not to read them. If the default access lets you read a file, then changing the filename so that the web server thinks it's a different kind of file, but the OS parses the filename to point to the same file, will give you the ability to read the file. This is a double-parsing problem, which we'll get into later, and also stems from fail-openness.
- They check it in an extremely complex way that has holes in it, due to the original author not understanding the program.
- They don't bother to check it at all, which is rather common.
- They check it in a simple way that has holes in it. For example, many older Unix web servers would let you download any file in someone's public_html directory (unless the OS barred them). But if you made a symlink or hardlink to someone else's private files, it was possible to download them if the web server had permission to do so.
At any rate, programs that have privileges you don't usually fail to limit what they do on your behalf to just what they're supposed to do. setfsuid(), setreuid(), etc., can help. Another problem is that frequently, standard libraries look in environment variables for files to open, and aren't smart enough to drop privileges while doing this. (Really, they can't be.) So we're forced to resort to parsing the filename to see if it looks reasonable. Some OSes dump core with the wrong privileges, too, and if you can make a setuid program crash, you can overwrite a file that the program's owner would be able to overwrite. (Dumping core with the user's privileges often results in the user being able to read data from the core file that they wouldn't be able to read normally.)
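The lstat()/open()/fstat() comparison described in the first pitfall above can be sketched in C roughly as follows. This is an illustrative sketch, not code from the original document: `open_checked` is an invented name, and O_NOFOLLOW is a later convenience flag the original text does not mention.

```c
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Look at the file first, open it without destroying anything, then
 * confirm that the object actually opened is the same one that was
 * checked.  Returns an open fd, or -1 if anything looks suspicious. */
int open_checked(const char *path)
{
    struct stat before, after;

    if (lstat(path, &before) == -1 || !S_ISREG(before.st_mode))
        return -1;                      /* refuse symlinks and oddities */

    int fd = open(path, O_RDWR | O_NOFOLLOW);   /* no O_TRUNC, no O_CREAT */
    if (fd == -1)
        return -1;

    if (fstat(fd, &after) == -1 ||
        before.st_dev != after.st_dev ||
        before.st_ino != after.st_ino) {
        close(fd);                      /* the file changed under us */
        return -1;
    }
    return fd;
}
```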
Fail-openness
Most security-sensitive systems fail to do the right thing under some circumstances. They can fail in two different ways: They can allow access when they shouldn't; this is called fail-open. They can refuse access when they shouldn't; this is called fail-closed. As an example, an electronic door lock that locks the door by holding it closed with a massive electromagnet is fail-open when the power goes out -- when the electromagnet has no power, the door will open easily. An electronic door lock that locks the door with a spring-loaded deadbolt that is pulled out of the way with a solenoid is fail-closed -- when the solenoid has no power, it's impossible to pull back the deadbolt.
CGI scripts commonly execute other programs, passing them user data on their command lines. In order to avoid having this data interpreted by the shell (on a Unix system) as instructions to execute other programs, access other files, etc., the CGI script removes unusual characters -- things like '<', '|', ' ', '"', etc. You can do this in a fail-open way by having a list of "bad characters" that get removed. Then, if you forgot one, it's a security hole. You can do it in a fail-closed way by having a list of "good characters" that don't get removed. Then, if you forgot one, it's an inconvenience. An example of this (in Perl) is at http://www.geek-girl.com/bugtraq/1997_3/0013.html.
Fail-closed systems are a lot less convenient than fail-open ones, if they fail frequently. They're also a lot more likely to be secure. Essentially every program I've seen to secure a Mac or Microsoft OS desktop computer has been fail-open -- if you can somehow disable the program, you have full access to the computer. By contrast, if you disable the Unix 'login' program, you have no access to the computer.
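As a C illustration of the same contrast (not from the original document; the function names and character sets are invented for the example), the first filter enumerates the characters it believes are bad and fails open, while the second enumerates the characters it knows are safe and fails closed.

```c
#include <ctype.h>
#include <string.h>

/* Fail-open: strip characters we know are dangerous.  Anything the
 * author forgot (backticks? newlines?) sails straight through. */
void sanitize_fail_open(char *s)
{
    const char *bad = "<>|&;\"'";
    char *out = s;
    for (; *s; s++)
        if (strchr(bad, *s) == NULL)
            *out++ = *s;
    *out = '\0';
}

/* Fail-closed: keep only characters we know are harmless.  Anything
 * the author forgot is dropped, which is an inconvenience rather
 * than a hole. */
void sanitize_fail_closed(char *s)
{
    char *out = s;
    for (; *s; s++)
        if (isalnum((unsigned char)*s) || *s == '.' || *s == '-' || *s == '_')
            *out++ = *s;
    *out = '\0';
}
```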
Resource starvation
Lots of programs are written with the pervasive assumption that enough resources will be available. (See Psychological Problems, above.) Many programs don't even think about what will happen if not enough resources are available, and sometimes they do the wrong thing. So look to see:
- what happens if there's not enough memory and some allocations fail, usually returning NULL from malloc or new
- if it's possible for untrusted users to use up all the resources (which can be a denial-of-service problem even if the program handles it without allowing intrusions; this problem is endemic throughout most software, though)
- what happens if the program runs out of fds (and whether it's possible) -- open() will return -1
- what happens if the program can't fork(), or if its child dies during initialization due to resource starvation
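A minimal sketch of what checking each of those failure paths looks like in C (an invented example, not from the original text; `handle_request` and its parameters are hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Every acquisition is checked, so running out of memory, file
 * descriptors or process slots becomes a refusal instead of a crash
 * or undefined behaviour. */
int handle_request(const char *logpath, size_t need)
{
    char *buf = malloc(need);
    if (buf == NULL)
        return -1;                  /* out of memory: refuse, don't crash */

    int fd = open(logpath, O_WRONLY | O_APPEND);
    if (fd == -1) {                 /* out of fds, or the file is missing */
        free(buf);
        return -1;
    }

    pid_t child = fork();
    if (child == -1) {              /* process table full */
        close(fd);
        free(buf);
        return -1;
    }
    if (child == 0)
        _exit(0);                   /* a real worker would run here */
    waitpid(child, NULL, 0);

    close(fd);
    free(buf);
    return 0;
}
```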
Trusting untrustworthy channels
If you send passwords in cleartext over an Ethernet LAN with untrusted people on it, if you create a world-writable file and later try to read back data from that file, if you create a file in /tmp with O_TRUNC but not O_EXCL, etc., you're trusting an untrustworthy intermediary to do what you want it to. If an attacker can subvert the untrustworthy channel, they may be able to deny you service by altering data in the channel; they may be able to alter the data without you noticing, causing bad things to happen (if the attacker makes that file in /tmp a symlink to a trusted file, you may end up destroying the contents of a privileged file instead of just creating a temporary file; gcc has some bugs of this kind, too, which can lead to an attacker inserting arbitrary code into programs you compile); and even if they can't do these things, they may be able to read data they shouldn't.
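For the /tmp point specifically, one common fix on a POSIX system is to let mkstemp() create and open the file in a single step with O_CREAT|O_EXCL, so an attacker cannot pre-plant a symlink at the chosen name. This is a hedged sketch, not code from the original document, and the function name is invented.

```c
#include <stdlib.h>
#include <unistd.h>

/* Create a scratch file safely: mkstemp() refuses to follow a
 * pre-planted symlink because it insists on creating a new file. */
int make_temp_file(void)
{
    char template[] = "/tmp/myapp.XXXXXX";
    int fd = mkstemp(template);
    if (fd == -1)
        return -1;
    unlink(template);   /* optional: keep it as an anonymous scratch file */
    return fd;
}
```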
Silly defaults
If there are non-obvious, but insecure, defaults, it's likely that people will leave them alone. For example, if you unpack an rpm and create some configuration files world-writable, you're not likely to notice unless you're actively looking for security holes. This means that most people who unpack the rpm will have a security hole on their system.
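A post-install audit can catch this kind of silly default mechanically. The following is a small illustrative C sketch (not from the original text; the program and function names are invented) that flags world-writable files passed on its command line:

```c
#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 if the file is world-writable, 0 if not, -1 on error. */
int is_world_writable(const char *path)
{
    struct stat st;
    if (stat(path, &st) == -1)
        return -1;
    return (st.st_mode & S_IWOTH) ? 1 : 0;
}

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++)
        if (is_world_writable(argv[i]) == 1)
            printf("%s is world-writable\n", argv[i]);
    return 0;
}
```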
Big interfaces
If the security interface is small, it is much more likely to be secure than if it is large. This is just common sense -- if I have one door people can enter my house through, I'm pretty likely to remember to lock it before I go to bed. If I have five doors in different parts of the house, all of which lead to the outside, I'm much more likely to forget one of them. Thus, network servers tend to be much more secure than setuid programs. Setuid programs get all sorts of things from untrustworthy sources -- environment variables, file descriptors, virtual memory mappings, command-line arguments, and probably file input, too. Network servers just get network-socket input (and possibly file input). qmail is an example of a small security interface. Only a small part of qmail (though much more than ten lines, contrary to what I previously said on the linux-security-audit mailing list) runs as "root". The rest runs either as special qmail users, or as the mail recipient. Internally to qmail, the buffer-overflow checking is centralized in two small functions, and all of the functions used to modify strings use these functions to check. This is another example of a small security interface -- the chance that some part of the checking is wrong is much smaller. The more network daemons you run, the bigger the security interface between the Internet and your machine. If you have a firewall, the security interface between your network and the Internet is reduced to one machine. The difference between viewing an untrusted HTML page and viewing an untrusted JavaScript page is also one of interface size; the routines in the JavaScript interpreter are large and complex compared to the routines in the HTML renderer.
Frequently exploited programs
Programs that have been frequently exploited in the past are likely to have holes in them in the future, and should sometimes just be replaced. /bin/mail was replaced in BSD with mail.local for this reason. If you're auditing, auditing such programs extra thoroughly is an excellent idea, but sometimes it's better just to rewrite them, or not to use them in the first place.
Poorly-defined security compartments
Any secure system is divided into security compartments. For example, my Linux system has numerous compartments known as "users", and a compartment known as the "kernel", as well as a compartment known as the "network" -- which is divided into subcompartments known as "network connections". There are well-defined trust relationships between these different compartments, which are based on system setup and authentication. (My user, kragen, trusts my network connection after I send my password over it, for example.) The trust relationships must be enforced at every interface between security compartments. If you're running a library terminal, you probably want the terminal to have access only to the library database (and read-only, at that). You want to deny them access to the Unix shell altogether. I'm not sure how to finish this paragraph -- I'm sure you can see what I'm getting at, though. Mirabilis ICQ trusts the whole Internet to send it correct user identifications. Obviously, this is not secure. At one point, tcp_wrappers trusted data it got from reverse DNS lookups, handing it to a shell. (It no longer does.) Netscape Communicator would sometimes insert a user-entered FTP password into the URL in the history list, when using squid as a proxy. JavaScript programs and other web servers can see this URL.
Neglected cases
Distrust logic. if-else and switch-case statements are dangerous, because they're hard to test. If you can find a branch of the code that no one has ever run, it's likely to be wrong. If you can find a logical dataflow combination -- for example, if there are two routines, each of which does one of two things, and the output from the first of which gets fed into the second, giving four combinations -- that hasn't been tested, it may also be a hole. Look at elses on ifs. Look at default: in switch statements. Make sure they're fail-closed. gcc -pg -a causes the program to produce a bb.out file that may be helpful in determining how effective your tests are at covering all branches of the code. I believe this has been the source of many of the recent IP denial-of-service problems.
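A tiny C illustration of a fail-closed default: branch (an invented example, not from the original text):

```c
/* The default: branch is exactly the kind of rarely-exercised path the
 * text warns about.  Making it deny access keeps an unanticipated value
 * from quietly granting something. */
enum access { ACCESS_NONE, ACCESS_READ, ACCESS_WRITE };

int may_write(enum access a)
{
    switch (a) {
    case ACCESS_WRITE:
        return 1;
    case ACCESS_READ:
    case ACCESS_NONE:
        return 0;
    default:
        return 0;       /* fail closed on values nobody ever tested */
    }
}
```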
Just plain stupid
Lots of people trust code that only a few people have reviewed. If the code to a piece of software has only been read by a few people, it's likely that it has lots of bugs in it; if the code is security-critical, it's likely to break security. The recent 3Com debacle, in which all of their CoreBuilder and SuperStack II hubs were revealed to have "secret" backdoor passwords which were revealed to customers in emergencies, is a perfect example. This should not be a major issue for the Linux security audit.
Information of interest to those interested in writing secure software
- SunWorld Online has an article on Designing Secure Software. While Sun doesn't have the world's best reputation for security, this article is worthwhile.
- BUGTRAQ announces new Unix security holes on a daily basis, with full details. geek-girl.com keeps some archives that go back to 1993. This is a very useful resource to learn about new security holes, or look up particular old security holes. It's a terrible resource for getting a list of security holes, though.
- Adam Shostack has posted some good code-review guidelines (apparently used by some company to review code to run on their firewall) at http://www.homeport.org/~adam/review.html.
- Cops comes with a setuid(7) man page, which is HTMLized at http://www.homeport.org/~adam/setuid.7.html, and includes guidelines for finding and preventing insecurities in setuid programs.
- John Cochran of EDS pointed me to the AUSCERT programming checklist: ftp://ftp.auscert.org.au/pub/auscert/papers/secure_programming_checklist
The Bob's Cube project began when our client Hostway, a leading web hosting company, expressed an interest in introducing itself to a wider audience, especially web designers. Hostway wanted to do something that would be unexpected and entertaining… something so compelling that someone who sees it, might forward it along to a friend. We felt that a strong and compelling story should drive the idea that we would come up with. Something involving, that would give the user a feeling of participation and discovery.
Brand Awareness
What was also important (and not to be forgotten), was also introducing the Hostway brand to new potential customers, helping them to understand who Hostway was and the services that they had to offer as well as communicating an impression of the character of the company. The idea of a strong narrative and the need to market a company had to play in balance with each other because we knew that if the site was to succeed people would have to be forwarded along to friends who hadn't seen it yet, and who would want to send a friend a marketing message. We also felt that the visual design of the experience shouldn't be too esoteric or idiosyncratic or a user wouldn't identify with it.
Time to start Brainstorming
This thinking gave us a good framework to start so we began the brainstorming process. It was during one of these brainstorming sessions that doing an interactive office cubicle came up. It started one of those tracks of conversation where everyone had a different story and it got quite lively, as we shared many a horror story about experiences in or around office cubicles. This really seemed to be the common thread that we were looking for and once we had the basic idea we began the process of really blowing it out.
Say hello to Bob and his Cubicle
We created a whole intricate back story on this everyman office worker, Bob, who works in a cubicle at some faceless corporation. We went into great detail, creating the background story behind this Bob character and this allowed us to build a really rich environment because we knew "who" Bob was and therefore, what he'd have in his cubicle. We began working on incorporating interactivity and how it would help tell the story. We wanted to create an environment where a user had to explore and through exploration they discovered new things, and unexpected surprises. We sketched out how we would photograph this and started to think about all the interactive surprises we could hide within the environment. We then created a storyboard presentation of the concept and presented the idea to Hostway--they loved it (lucky for us). They loved the attention to detail and saw the immediate potential that it had to get passed along. We talked over the schedule and when we agreed on what we would deliver and when, the real work began!
Making it happen
We rented an office cubicle from a nearby office furniture company and had it delivered and installed right in the middle of our office so it could become a disruptive force for everyone in the studio. We then propped it with random office junk that each of us collected over the years, and photographed Bob's environment. The photography process was quite detailed and we had all the areas and pieces mapped out so we photographed all the parts and prepared them for use in the interface. Once the photographing was done, we worked through the details of interface and interactive components, planning what they would do.
We looked at how they would be integrated, and also experimented with ways to make transitions between the different photos and renderings that were telling Bob's story. We started game development when photography finished, and never stopped creating content until the project was finished, developing some unusual things like Bob's diary, fake books, his emails, voice messages, and random doodles.
Having fun with the Client
We also felt that adding some video to the overall experience would give us another rich element for users to experience, so we asked our client to help us in the process. Hostway happily agreed to let us set up for a LONG afternoon and shoot video in their offices. It was such a perfect place because it was a space that they had taken over from a former consulting company, so it was the cube environment that Bob's Cube might be a part of. We also got some of the staff to help us by starring in the different videos -- all in all a whole lot of fun, and it really helped to get the entire Hostway organization excited about the idea.
The Complete Experience
We immersed ourselves in Bob's world creating games, interactive applications, content and art as well as the 3 different videos that are a part of the environment -- the project took a little over 8 weeks from concept to delivery. Bob's Cube was one of those great experiences where you have a really great project, a client who gets it, and a team that is totally energized and contributing to really make something unique and exciting.
About the author: Mark Rattin
President, 15 Letters
As 15 Letters' President and Creative Director, Mark Rattin is responsible for the creative and strategic development for its clients and brings more than 17 years of creative design experience and strategic thinking.
Throughout his years in the design industry, Mark's work has received numerous awards and industry recognitions and publications. He has also served as a judge for some of the industry's leading design and interactive competitions, such as the Communication Arts Interactive Annual, The London Advertising Awards, and the Web Advertising Awards.
Mark holds a B.F.A. in Visual Communications and a M.A. in Photography from Northern Illinois University.
This article may not be reproduced or used in any part without the prior written consent of the author. Reprints must credit FWA (theFWA.com) as the original publisher of this article and include a link to this site.
1st, 2nd, 3rd... 4th Dimension?!
Hybrid Sites: Why, How & What?
Mixing the perfect Mojito
Corpse Bride, Behind the Scenes
Personality: What, why & how?
FWA 5th Anniversary
Beer, Branding and Burdz
You want to be a dinosaur?
The Story of Crew9.net | 计算机 |
Try the Linux desktop of the future
Posted at 11:56am on Friday March 5th 2010
For the tinkerers and testers, 2010 is shaping up to be a perfect year. Almost every desktop and application we can think of is going to have a major release, and while release dates and roadmaps always have to be taken with a pinch of salt, many of these projects have built technology and enhancements you can play with now. We've selected the few we think are worth keeping an eye on and that can be installed easily, but Linux is littered with applications that are evolving all the time, so we've also tried to guess what the next big things might be. Take a trip with us on a voyage of discovery to find out exactly what's happening and how the Linux desktop experience is likely to evolve over the next 12 months...
KRunner
KRunner has been part of KDE for a long time. It's the tool you see when you press Alt+F2, and is commonly used to run applications quickly by typing their names rather than resorting to the launch menu. In the face of stiff competition from the likes of Gnome Do though, KRunner has had to up its game recently, and there are several neat enhancements for the KDE 4.4 release.
The most obvious change is that the KRunner dialog itself is now at the top of the screen rather than in the middle. This makes more sense, because it's now less likely to tread over some important application information or Slashdot story. You can also close the window again by pressing Alt+F2. Now that KDE 4.4 has a working search engine, the first new thing you can do with KRunner is search your desktop. Results are listed in the panel below. Everything else more or less looks the same until you click on the small spanner icon.
KRunner is better looking than Gnome Do - it's just a pity it doesn't have its amazing plugin support.
KRun to the hills
The window that appears holds the extra features hidden behind KRunner's austere GUI. It lists the type of items that are going to be probed and returned as results in the main window. This version for KDE 4.4 has four new additions. You can now terminate applications by typing kill followed by the name of the application. After you've typed kill, the applications that match the following text will be listed in the results panel. You can change the keyword by reconfiguring the Terminate Applications plugin. You can also list all removable devices on your system by typing solid, and you should be able to manage virtual desktops by typing window. We couldn't get this to work, despite the plugin being listed in the configuration window. There's still tons of other functionality you can get out of KRunner by using the older plugins, but what we'd really like to see is cross-compatibility with Gnome Do's plugins.
Docky: Next generation panel
Docky started off as an ambitious panel replacement tool integrated into the Gnome Do utility. This was a great partnership, because Gnome Do is fairly technical, requiring its users to understand what they need to do and what their machines are capable of. It also provides relatively little feedback. The Docky component, on the other hand, is a more traditional launch panel that sits at the bottom of your screen. It can be used to display information, launch apps and switch between running programs. There are alpha packages available for the new standalone version, dubbed version 2, but you can get almost all the same functionality by installing an older version of the Gnome Do package, which is far more likely to be provided by your distribution's package manager. If you do run Docky from Gnome Do, you need to make sure you open the preferences window and change the theme to 'Docky', otherwise it won't be visible.
You can drag apps from your system menu to add them to the Docky panel, from where they can then be launched. Applications that are already running have a small dot beneath their icons, and to the right of the panel you see the information applets. As you move your mouse across the panel, icons will smoothly scale up and down to indicate which is in focus. Right-click on the panel to bring up the preferences panel, from where you can add new applets to do menial tasks such as watch the weather or your CPU resources, or monitor your Google Mail account.
Step by step: Docky In Use
Launch Gnome Do: Docky needs to be run either from the standalone application or by selecting the Docky theme from Gnome Do.
App management: Drag applications onto the panel. When they're running, hover your mouse over for further information.
Desklets: There are several informative applets that are part of Docky. Click on them to reveal further information.
Cutting edges
How do you come up with a revolutionary new desktop while your users are wedded to the old familiar input ideas, tried and tested in the two decades since we all started using a keyboard and mouse? If Linux were run by Apple, the developers would work in secret for years before announcing the availability of their new desktop metaphor. But the open source community doesn't work in the same way. Innovation has to be hammered out on online forums, in developer channels and through software releases. It's trial by committee, and many things can and do go wrong with the process. Compositing effects are a good example. Almost as soon as David Reveman had finished his initial work on Compiz, patches could be integrated into almost any Linux desktop with no major changes. Users could install Compiz and start rotating their desktops within minutes. But the task of turning these patches into a homogeneous part of the desktop experience has taken considerably longer, and it's an ongoing process four years after the initial release. This is because the path to acceptance for Compiz has been slowed down by the community, with disagreement, forks, apathy and duplication all hindering its progress. And it's the same for many other projects. If you want to change the way people use their desktops, you have to change the underlying technology behind that desktop. Most developers interpret this to mean that they need a new release, with an all-new API and plenty of new technology for application developers to take advantage of. This is the theory behind KDE 4's glut of new libraries and frameworks, for example, but it also means that it takes time for developers to catch up, if they even feel so inclined.
Gnome development is more pragmatic. Version 2 was released at about the same time as KDE 3 in 2002, and broadly, it's still a version of this release that's the current version of Gnome. There have been no dramatic redesigns, API changes, feature overhauls or debugging marathons. Instead, there's been the steady march of progress, and while Gnome may be missing some of the more experimental aspects of KDE, the latest release, 2.28, is still very different to the 2.0 release. KDE is still pinning a lot of its hopes on the small, functional applications that the developers are calling Plasmoids.
This is partly because Gnome is more of a platform for applications than KDE. The user doesn't need to know that the F-Spot photo manager is written in Mono and uses C#, for example; the only important thing is that each Gnome application presents a standardised front-end by following Gnome's user interface guidelines. It's for this reason that Gnome has been going from strength to strength, even on other platforms and operating systems, and this kind of idea doesn't need to be updated when a new version is released. Gnome 3.0 is scheduled for release in September of this year, but like all version 2.x releases up to this point, it's unlikely to be a KDE 4-like revolution. Initially, there were plans for dramatic changes to be made, all falling under an umbrella term for Gnome 3.0 - ToPaZ (Three Point Zero). If you look at some of the plans touted for Topaz, especially the results from some of the original brainstorming sessions, you'll see that most of the ideas remain in the current plan. With the KDE 4 release, most of the development cycle for the revolutionary features that were supposed to make KDE 4 more attractive than version 3 actually occurred after the initial release. If KDE 4 were to be released now it would be hailed as a great success, rather than the stream of bugfixes and updates we've endured since 4.0 hit the mirrors in January 2008. But at the same time, developers have to balance expectation. Would many people still be using the KDE desktop if they had to stick to KDE 3-era applications? Fortunately, with the release of KDE 4.4, most of those criticisms and usability problems have been ironed out, and we finally have a KDE desktop that can replace KDE 3.5.
Next-gen tools: Gnome Do
We don't have to let ourselves be dictated to by desktop developers. We can pick and choose what we want to run regardless of what they bundle as the default environment or what they plan for the next major release. One revolutionary application you ought to consider, despite its omission in the plans for Gnome 3.0, is Gnome Do. Initially taking its inspiration from an OS X tool called Quicksilver, Gnome Do has quickly become the quickest and most powerful way to access the power of your Linux desktop. It gives you complete control over application launching, but it can also do so much more. Thanks to dozens of separate plugins, you can install applications, open remote connections, play music, browse the internet, send emails, play games, tweet and blog, all without lifting your fingers from the keyboard. And despite its Gnome heritage, it works almost as well on other desktop environments, including KDE. The last release was a little while ago, and this should mean that even the tardiest distribution should have Gnome Do packages available. After installation, you can normally run it from your system menu, and when it's running you can trigger the main application by pressing a special key combination - normally the Windows key and Spacebar, but this can be changed. This is the point where you might be tempted to discount Gnome Do as all hype and no substance, because there's very little to see - just two opaque boxes on the screen.
But with these two boxes you can accomplish almost anything. Begin typing the name of a bookmark in Firefox, for example, and you should see the full name appear in the box on the left. Pressing Return will then open the page in your browser. If you press Tab, the focus shifts to the box on the right, from where you can choose other options to perform on the URL using the cursor keys. By default, you can make a Tiny URL, open the URL or copy it to the clipboard.
What Gnome Do can do is controlled by a series of plugins. We used the Firefox plugin, and there are dozens of others to install and enable. With the Gnome Do window visible, click on the tiny down arrow in the top-right. This will display a small menu, and you can enable more plugins by clicking on the Preferences option and switching to the Plugins page that appears. To find out how to use each plugin, either click on the About button or take a look at the Gnome Do wiki (http://do.davebsd.com).
Step by step: Tweet from Gnome Do
Account details: Open up the plugin window, enable the Microblogging plugin and click on Configure to enter your account details.
Read tweets: New tweets will appear on your desktop, and you can post your own by typing a message into Gnome Do and pressing Tab.
Post tweets: Press Tab to switch to the other box and use the cursor keys to choose an action. Select Post To Twitter to perform the function.
Both Gnome and KDE are putting a great deal of emphasis on something they call 'activities'. These are really an extension of the virtual desktop idea, but rather than each desktop being a disconnected extension to your screen's real estate, activities become associated with a certain task. You might want to create a documentation activity, for example, and for that you'd need a desktop that provided quick access to a text or HTML editor, online resources and perhaps a dictionary or thesaurus. Like most other tasks, setting up this kind of environment would normally require the user to mess around with a launch menu as well as understand a certain amount about their computer's filesystem.
Most developers recognise that this process isn't ideal and that desktops of the future shouldn't require filesystem knowledge, or even an idea of how applications are organised and stored. The process of working with your data should be as intuitive as possible, and both major Linux desktops are trying their best to tackle this issue in their own special ways. With Gnome, for example, one of the key aims of the upgrade to version 3.0 has been to streamline the user experience. And the central user-facing technology that's going to help this happen is called Gnome Shell. This is an application that has seen rapid development over the last 18 months after Gnome's Vincent Untz posted some observations from discussions at a recent hackfest in late 2008. These observations mentioned that tasks such as finding a window were more difficult than they should be, that workspaces were powerful but not intuitive enough and that launching applications was too hard.
Gnome Shell has been developed to address these problems, as well as take advantage of some of the latest Linux technology. Like Moblin, Gnome Shell uses Clutter, a graphical library that can build smooth transitions and eye candy out of even the most humble graphics hardware. Tools like Gnome Shell could make the desktop launchbar obsolete.
The KDE team have been working on similar concepts throughout the entire KDE 4 development process. But it's fair to say that many of the ideas touted before the first release were judged too ambitious and too difficult to implement within the first few revisions. KDE 4.4 is designed to redress some of these issues by re-awakening the Nepomuk semantic desktop and by making desktop activities usable.
The Nepomuk semantic desktop, as we've written before, is designed to bridge the gap between online content and content in your hardware. Many components of the web can already be found in KDE applications like Dolphin, where you can add comments, tags and ratings to your own files, but until now there hasn't been a good reason to go to all this effort. With the release of KDE 4.4, you can finally use these fields of rich information to search your content, just as you would search the internet through Google. Another important aspect to user experience on the KDE desktop is the use of activities. Like Gnome Shell, this is the ability to meta-manage the arrangement of virtual desktops and applications according to what you want to work on. It's a feature that has been part of the KDE 4 desktop for a while, but with version 4.4, activities also become first-class citizens on the KDE desktop, perhaps in an attempt to steal some of Gnome's thunder from the wonderful Gnome Shell. But it's not quite as simple or as straightforward to use. Rather than attempting to replace the launch menu and file management duties of the desktop, KDE's activities are better at managing complex environments. It doesn't replace the panel or the launch menu, for example, it just lets you fire up a working environment in the same way that you click on a browser's bookmark. That's not a bad thing, it's just different. The best thing about Gnome Shell is that you can play with it today. And we'd suggest you give it a go, because it might just change the way you think about Gnome. Gnome Shell should be straightforward to install through your distribution's package manager. To run it though, you will probably need to open the command line and type gnome-shell --replace. If you've ever manually started Compiz, this command will feel familiar, as the replace argument is used by both projects to replace the currently running window manager. When Gnome Shell is running (depending on the version you've installed), you won't see any new windows on your desktop; the only indication that something has changed is the different style of window decoration, and if it's a recent version of Gnome Shell, a quick-launch dock attached to the top-left of your main window.
To see Gnome Shell in action, just move your mouse to the top-right of your screen. You should then see the current view zoom away into the middle distance, and the freed-up screen space used to display other virtual desktops to the right and a minimal launch menu on the left. This launch menu contains applications and files, and you can either click on one to load the corresponding application into the current desktop or drag the icon on to the desktop on which you wish the application to appear. But it's also much cleverer than first glance might suggest. If you drag a text file on to a new desktop, for example, Gnome Shell will automatically load that file into the default application for that file type. Each window on the virtual desktop will update to reflect any changing contents, and you can enlarge any window in the frame by using the mouse wheel while the pointer hovers over the window you want to enlarge.
Make sense of virtual desktops
While the activity functionality has been a part of KDE 4 for a while, it's only with the 4.4 release that it becomes an integral part of the desktop. This is mostly thanks to an addition to the Plasmoid menu, which now includes an Add Activity entry that switches you to a blank desktop background. You can now add Plasmoids, change the desktop appearance and launch apps. The best way we've found to switch between activities is to drag the Activity Bar widget on to the desktop and use this to switch between activities. You'll have to do this for every activity you use, although you can switch between activities by zooming out of the desktop from the Plasma menu or by creating a mouse action (right-click on the desktop background, select Desktop Activity Settings and switch to the Mouse Actions page). You can also reduce confusion by having multiple virtual desktops and different activities at the same time by combining the two into the same function. Open the Multiple Desktops configuration panel from the System Settings application and click on the Different Activity For Each Desktop checkbox. This will remove the specific virtual desktop functionality, but enable you to switch between activities in the same way, which is a more convenient solution for users who don't need more than one virtual desktop within a single activity.
Step by step: KDE activities
Plasma: Click on the Plasma Cashew at the top-right of the screen and select Add Activity. Your desktop will switch to the blank screen.
Populate workspaces: You can now add files, folders and desktop widgets to the new activity, and these will appear only on that activity.
Switch activities: You can switch activities by zooming out of the desktop, or using the desktop Plasmoid, or your virtual desktop pager.
Next-generation applications
If you're not already excited about what's coming up in the open source world in the coming 12 months, why not? Here's just a taste of what you can expect...
There's no doubt that both Gnome and KDE are stealing the limelight when it comes to feature upgrades for 2010. The other more common Linux desktops don't have any such big upgrades planned, and this is their strength, as they often like to capitalise on their ability to remain stable and relatively lightweight. Xfce is the best example of this: changes from one version to the next are generally small and lack the paradigm-shifting hype of other desktop environments. Xfce 4.8 only entered the planning stage in August last year, and as a result, the feature list is best described as nebulous.
It's hoped that the new version will include an enhanced menu system, icon routines and keyboard handling, but there aren't any ambitious plans to add masses of new features. The new menu system is hopefully going to make it much easier for users to edit the launch menu, a task that currently generates plenty of complaints, according to Xfce developers. Xfce should also be able to jump on to the on-screen notification bandwagon, with Xfce developer Jerome Guelfucci showing off patches that bring Gnome's notification system to the Xfce desktop. It looks really good too. The new file manager, Thunar, is also likely to become more powerful, although one of its great strengths is that it's super quick and not hampered by the cruft that plagues other file managers. The final version of 4.8 is due to be released on 12 April 2010.
Xfce is quickly becoming a Zen-like desktop in the face of KDE and Gnome's growing complexity.
The most comprehensive open source office suite is likely to go through something of a transformation this year, now that its principal sponsor, Sun Microsystems, is being taken over by Oracle. At the time of writing, the first release candidate of version 3.2 has just made it on to the mirrors. It promises faster startup times, almost halving the boot time for Writer from just over 11 seconds in version 3 to under six seconds in version 3.2, and should bring much better file compatibility with both the new ODF 1.2 specification as well as proprietary formats and the ability to save password-protected Microsoft Office documents. Version 3.3, which should be available by the end of the year, will be the first release to include the fruit from project Renaissance. This is a noble attempt by OpenOffice.org to overhaul the user interface of the various applications in the suite, hopefully pulling its appearance into the 21st century. This update is promised only for Impress, with the other applications getting the same treatment in later updates, but until we see a screenshot of the new design, we have yet to be convinced.
OpenOffice.org is going to enjoy a complete GUI overhaul later on this year, starting with Impress.
There's little doubt that the next 12 months are going to be particularly challenging for the Firefox web browser. Once the darling of the open source desktop, Firefox has suffered in the face of competition from Google's Chromium browser and its perceived lack of speed in the face of the growing dominance of WebKit-based browsing. As a result, future development is likely to focus on speed improvements and consolidating the initial reasons for Firefox's success, rather than adding feature after feature on to a browser that many users feel is already bloated. But so far, the current roadmap for Firefox couldn't exactly be described as exciting. There are several significant updates planned for Firefox this year, starting with version 3.6, which should be out as you read this. Beta versions of version 3.6 have shown decent JavaScript speed improvements as well as support for 'Personas', which is a theming engine similar to the one used in Google's Chrome. Version 3.7, available in the middle of the year, should make further performance improvements and include the latest version of the Gecko rendering engine.
Jetpack is also worth a mention. It's a way for web developers to build Firefox add-ons using the same skills they use for website construction, including HTML, CSS and JavaScript. But the best thing about Jetpack is that add-ons can be installed without requiring a tedious restart of Firefox. Finally, there's a small chance that Firefox version 4.0 could be seen on the mirrors before the end of the year. There doesn't seem to be much to get excited about - it's likely to feature the predictable makeover, faster JavaScript and a newer Gecko engine - but it might surprise us. With any luck, Firefox might not be staying like this forever...
After years languishing in the pool of applications known as 'loved and lost', Gimp looks like it may finally rise from the ashes of apathy and re-invent itself as the future of pixel editing on the free desktop. Version 2.6, released in October 2009, was a step in the right direction, but it's going to be version 2.8 that hopefully heralds the dawn of a new era. This is mainly because a brand-new, revised and re-imagined GUI is planned, finally consigning its multiple tiny dialogs and windows to the rubbish bin. Gimp 2.8 will include a single-window mode, just like its commercial competitor, and this should go a long way towards making it easier to use for most people.
In the words of one of the main developers on the project, Martin Nordholts, Gimp's UI feels rather cluttered. This is mainly because it uses so many windows, and the single window should solve most of these problems. But it's a big job. There are nine separate tasks required to make the modification work, with this feature alone taking up about 10% of the projected development time for the next release. Most people agree that it's going to be worth it. The remainder of the development time is going to be spent adding lots of other cool features. You'll be able to type text directly into the image canvas, for example, rather than using a text entry window first. You will also be able to group layers, making larger and more complex images vastly more manageable. But development on Gimp has always been dependent on its relatively small and dedicated team. In the past, this has meant there was a long gap between releases, and it's likely to be the same with 2.8. Martin Nordholts initially estimated that if they included all the features they wanted, 2.8 might not see the light of day until early 2012. He suggested a compromise, pulling ideas like vector layers and unified and free transform tools from the feature plan, and pulling the release forward to before the end of 2010.
There's been a slight shift in recent years from open source projects being built purely by the communities that use them, to applications that are developed and sponsored by a commercial endeavour. Google's Chrome browser falls into this category, and so does Nokia's development environment, Qt Creator. The result is that we've never had a better selection of web browsers, and if you enjoy programming, there are now more Linux-compatible development environments than ever to choose from. If you're a Qt/C++ developer, Qt Creator is going from strength to strength, and is likely to be the best choice if you're thinking of joining the throngs of developers writing applications for Nokia's various mobile phones. In a related field, KDevelop 4 is finally due to be released some time in the first half of 2010. This is one of the final KDE 3-era applications to have made the transition to KDE 4, and we hope it will be good enough to last a few years before the developers decide to start from scratch again.
KDevelop 4 uses CMake for project management, and lets you have more than one project open at a time. There's also some sophisticated refactoring, argument matching and support for distributed version control systems such as Git. But KDevelop will no longer enjoy the wide language support of its predecessor, as it becomes increasingly adept at the C++/Qt combination - a space now defiantly occupied by Qt Creator.
KDevelop has a lot of catching up to do if it's going to compete with Qt Creator.
For Gnome developers there are likely to be a couple of releases of the Anjuta IDE, the first of which will be version 2.29.2. MonoDevelop, the multilingual IDE that specialises in C#, is also going from strength to strength, with version 2.2 being released right at the end of the year. There are currently no plans for version 2.4, but at the current rate of releases, we'd expect another version before the end of the year.
First published in Linux Format magazine
Communications of the ACM
Communications of the ACM (CACM) is the monthly journal of the Association for Computing Machinery (ACM). Established in 1957, CACM is sent to all ACM members, currently numbering about 80,000. The articles are intended for readers with backgrounds in all areas of computer science and information systems. The focus is on the practical implications of advances in information technology and associated management issues; ACM also publishes a variety of more theoretical journals.
CACM straddles the boundary of a science magazine, professional journal, and a scientific journal. While the content is subject to peer review (and is counted as such in many university assessments of research output), the articles published are often summaries of research that may also be published elsewhere. Material published must be accessible and relevant to a broad readership. On the publisher's website, CACM is filed in the category "magazines".
http://cacm.acm.org/
Distracted drivers: Your habits are to blame
(Phys.org) —More than a decade of research has shown that using a handheld or hands-free phone while driving is not safe because the brain does not have enough mental capacity to safely perform both tasks at once.
Why rumors spread fast in social networks
Information spreads fast in social networks. This could be observed during recent events. Now computer scientists from the German Saarland University provide the mathematical proof for this and come up with a surprising explanation.
LabF
#include “lab.f”
Braindump: Live-Action Animation?
Wow, it’s been quiet around here.
There are some absolutely stunning realtime 3D engines available these days, primarily intended for games. A few groups also use these game engines to make short films, or machinima.
While perhaps not quite up to cinematic standards, some of the engines meet or exceed the graphical standards of what you'll see in TV animation. It's interesting that no-one seems to have attempted using machinima techniques for television shows. Admittedly, machinima relies on the game engine to handle the minutiae of animation, and uses pre-written scripts for the character keyframes, so as it stands it's quite limiting from a creative point of view. But does it need to be?
Motion capture can be done pretty cheaply these days with a couple of cameras and some ping pong balls; and some more advanced software might not even need the markers. Given a 3D character that has been properly rigged, it should be possible to map the motion data directly onto the model, without need for prescripted animation sequences. Most of the latest engines support facial animation as well; with a cheap knockoff of James Cameron’s face capture technique from Avatar, it should also be possible to automatically reproduce an actor’s expression on the model as well.
Usually the camera is controlled by another player in the game engine, but when appropriate, even this could be done via motion capture, as Peter Jackson did with Lord of the Rings. This would allow for vastly improved camera interaction with characters and scenery. Imagine “filming” a motion capture performance with an iPad, and seeing the final result right there on the screen!
Demonstration Division
Margot H. Ackley, Chief (Supervisory Physical Scientist)
Web Homepage: http://www-dd.fsl.noaa.gov/
Norman L. Abshire, Electrical Engineer, 303-497-6179
Leon A. Benjamin, Programmer Analyst, 303-497-6031
B. Carol Bliss, Program Support Specialist, 303-497-5866
Michael M. Bowden, Engineering Technician, 303-497-3260
Jeanna M. Brown, Data Technician, 303-497-5627
James L. Budler, Engineering Technician, 303-497-7258
James D. Bussard, Information Systems Specialist, 303-497-6581
Michael G. Foy, Programmer Analyst, 303-497-6832
David J. Glaze, Electrical Engineer, 303-497-6801
Seth I. Gutman, Physical Scientist, 303-497-7031
Kirk L. Holub, Systems Analyst, 303-497-6642
Bobby R. Kelley, Computer Specialist, 303-497-5635
Kathleen M. McKillen, Secretary, 303-497-6200
Scott T. Nahman, Logistics Engineer, 303-497-3095
Michael J. Pando, Information Systems Specialist, 303-497-6220
Brian R. Phillips, Senior Engineering Technician, 303-497-6990
Alan E. Pihlak, Computer Specialist, 303-497-6022
Michael K. Shanahan, Electrical Engineer, 303-497-6547
Scott W. Stierle, Systems Analyst, 303-497-6334
Douglas W. van de Kamp, Meteorologist, 303-497-6309
David W. Wheeler, Electronic Technician, 303-497-6553
(The above roster, current when document is published, includesgovernment, cooperative agreement, and commercial affiliate staff.)
NOAA Forecast Systems Laboratory, Mail Code: FS3
David Skaggs Research Center
Boulder, Colorado 80305-3328

Objectives
The Demonstration Division evaluates promising new atmospheric observing technologies developed by NOAA and other federal agencies and organizations and determines their value in the operational domain. Activities range from the demonstration of scientific and engineering innovations to the management of new systems and technologies.
Currently the division is engaged in five major projects:
Operation, maintenance, and improvement of the NOAA Profiler Network (NPN), including three systems in Alaska.
Assessment of the Radio Acoustic Sounding System (RASS) for temperature profiling.
Collection and distribution of wind and temperature data from Boundary Layer Profilers (BLPS) operated by other organizations.
Development and deployment of a surface-based integrated precipitable water vapor (IPWV) monitoring system using the Global Positioning System (GPS), known as ground-based GPS-Met.
Planning and support activities for a national Mesoscale Observing System initiative which will include profilers and GPS-Met systems.
The division comprises five branches organizationally; however, the branches work in a fully integrated team mode in supporting the overall objectives of the division.
Network Operations Branch – Monitors systems' health and data quality, and coordinates all field repair and maintenance activities.
Engineering and Field Support Branch – Provides high-level field repair, coordinates all network logistical support, and designs and deploys engineering system upgrades.
Software Development and Web Services Branch – Provides software support of existing systems, develops new software and database systems as needed, provides Web support of the division's extensive Web activities, and designs software to support a national deployment of profilers.
GPS-MET Observing Systems Branch – Supports development and deployment of the GPS-IPWV Demonstration Network, and provides software development and scientific support.
Facilities Management and Systems Administration Branch – Manages all computers, data communications, network, and computer facilities used by the staff and projects of the division.
Network Operations Branch
Douglas W. van de Kamp, Chief
The Network Operations Branch is responsible for all aspects of NOAA Profiler Network (NPN) operations and monitoring, including the coordination of logistics associated with operating a network of 34 radars and surface instruments (Figure 24). In addition to the NPN sites, which include GPS integrated precipitable water vapor (GPS-IPWV) capabilities, another 55 NOAA and other-agency sites are also monitored for timely GPS positions and surface observations to produce real-time IPWV measurements. This branch relies heavily on the other branches within the division to maintain and improve NPN real-time data availability to the National Weather Service (NWS) and other worldwide users.
Figure 24. The NOAA Profiler Network of 35 radars and surface instruments. The Alaska sites are shown at the bottom left.
Of the five people in the branch, three are involved with the day-to-day operations and monitoring tasks related to the hardware and communications aspects of the network. They also coordinate all field repair and maintenance activities, interact with field personnel, and log all significant faults. Another person monitors the meteorological data quality from the NPN, and yet another handles all financial aspects related to the continued operation of the NPN, including tracking land leases, communications, and commercial power bills for more than 30 profiler sites. All five continue to work with others in the division to support the operations and maintenance of the NPN, resulting in consistently high data availability statistics for the past six years. The high quality upper-air and surface observations are distributed in real time to a wide range of users, such as NWS forecasters and numerical weather prediction modelers.
The availability of hourly winds to the NWS remained high for Fiscal Year 2000. A summary of the overall performance of the network for the past 10 years is presented in Figure 25. A decrease in the availability of hourly winds can clearly be seen each year during the spring and summer months, compared to slightly higher availability during the fall and winter months. This pattern can be attributed to increased lightning activity and severe weather during the convective season, causing more commercial power and communications problems, along with profiler hardware damage from nearby lightning strikes, and site air conditioner failures during the summer. From this trend analysis, additional lightning suppression and communications equipment protection are being added to the profiler sites by the Engineering and Field Support Branch.
Figure 25. NOAA Profiler Network data availability from January 1991 to January 2001.
A very important component of the Network Operations Branch is the logging of all significant faults that cause an outage of profiler data. The duration of each data outage is broken down into many different states, including how long it took to initially identify a failure, diagnose and evaluate the problem, wait for repair parts to be sent and received, restore commercial power or communications, and when and how the fault was ultimately repaired. Analysis of these states reveals important information regarding operation of the network, as shown in the examples below.
Each profiler site's mean time between failure (MTBF) over the most recent, nearly five-year period is shown in Figure 26. Also shown are the maximum time (days) between failures (MaxTBF) and the total number (count) of failures for each site, including all data outages (i.e., power and communications, not just profiler hardware) lasting longer than 24 hours. The "better" sites are shown toward the right-hand side of the figure, with many operating longer than one year without an outage.
Figure 26. Mean time between failure for NOAA Profiler Network sites with outages over 24 hours.
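To illustrate how these reliability statistics can be derived from the fault log, the sketch below computes MTBF, maximum time between failures, and failure count per site from a list of outage records. It is a simplified, hypothetical example - the record layout, site labels, dates, and the 24-hour threshold are assumptions for illustration, not the actual Profiler Control Center software.

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Hypothetical fault-log records: (site_id, outage_start, outage_end).
# Only outages lasting longer than 24 hours are counted, as in Figure 26.
fault_log = [
    ("SITE_A", datetime(1999, 6, 1, 4), datetime(1999, 6, 3, 10)),
    ("SITE_A", datetime(2000, 2, 14, 0), datetime(2000, 2, 16, 6)),
    ("SITE_B", datetime(1998, 8, 20, 12), datetime(1998, 8, 23, 0)),
]

PERIOD_START = datetime(1996, 1, 1)
PERIOD_END = datetime(2000, 10, 1)
MIN_OUTAGE = timedelta(hours=24)

def reliability_stats(records):
    """Return {site: (mtbf_days, max_tbf_days, failure_count)}."""
    by_site = defaultdict(list)
    for site, start, end in records:
        if end - start > MIN_OUTAGE:          # ignore short outages
            by_site[site].append((start, end))

    stats = {}
    for site, outages in by_site.items():
        outages.sort()
        # Up-time intervals between consecutive outages (plus the period edges).
        prev_ends = [PERIOD_START] + [end for _, end in outages]
        next_starts = [start for start, _ in outages] + [PERIOD_END]
        uptimes = [(s - e).total_seconds() / 86400.0
                   for e, s in zip(prev_ends, next_starts)]
        stats[site] = (sum(uptimes) / len(outages),   # MTBF in days
                       max(uptimes),                  # MaxTBF in days
                       len(outages))                  # failure count
    return stats

for site, (mtbf, maxtbf, count) in reliability_stats(fault_log).items():
    print(f"{site}: MTBF={mtbf:.1f} d  MaxTBF={maxtbf:.1f} d  failures={count}")
```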
The NPN, currently noncommissioned by the NWS, is routinely monitored by personnel in the Profiler Control Center (PCC) only during normal working hours, 8:00 AM to 5:00 PM on weekdays (27% of the total hours in a week). The remainder of the time, the profilers, dedicated communication lines, and Hub computer system operate while unmonitored and unattended. Figure 27 shows the distribution of downtime (normalized over the past four years). Generally more than 50% of downtime was due to waiting for parts to arrive or for a repair person to arrive at the profiler site. Thus, it was determined that increased staffing of the PCC would have little impact on improved data availability; however, additional NWS maintenance staff would improve response time.
Figure 27. Distribution of NOAA Profiler Network downtime, normalized over four years from 1997 through 2000.
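As a quick check on the monitored-hours figure quoted above, weekday staffing from 8:00 AM to 5:00 PM works out to

$$\frac{9\ \mathrm{h/day}\times 5\ \mathrm{days}}{168\ \mathrm{h/week}}=\frac{45}{168}\approx 0.27,$$

that is, roughly 27% of each week is monitored and the remaining 73% is unattended.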
Figure 28 shows the total number of hours of profiler data lost by fault type (such as component failures, scheduled downtime for maintenance, and power and air conditioner failures) from 1 January 1996 to 1 October 2000. A further breakdown, by fault disposition, is shown in Figure 29. This information is monitored for trends that may be causing outages. The Fault Disposition Category indicates the corrective action that was required to restore normal profiler operations for each data outage in the past two years, and the number of hours of missing data attributed to each fault category. The largest category by far is the "Line Replaceable Units (LRUs) Replaced." This simply means that a piece of hardware, an LRU, had to be replaced to restore operations, along with the associated waiting time for a technician to respond to the site. The next largest category to restore operations is "Scheduled Down Time (SDT) Completed." This typically means that preventive system maintenance or antenna measurement/repair activities were completed. Note the significant number of lost hours of data attributed to the local breaker (main 200 amp) being tripped to the open position, usually caused by lightning-related power surges, and only needing to be reset. From this analysis, the Engineering and Field Support Branch designed and installed the capability to remotely reset the main breaker at each site. The Network Operations group routinely uses this method to restore profiler operations, as well as "power cycling" a site to sometimes clear other problems. The next largest category is "Profiler Maintenance Terminal (PMT) Restart." This type of outage is typically corrected by logging into the profiler's computer from the Profiler Control Center in Boulder and reentering critical system parameters that have been corrupted, or simply restarting the profiler's data acquisition cycle. Each profiler must have a current Search and Rescue Satellite Aided Tracking (SARSAT) inhibit schedule in order for the transmitter to radiate, and for the profiler to measure the winds. This inhibit schedule (the smallest category in Figure 29) expires very rarely, usually because of an extended primary communications link outage, causing the profiler's transmitter to shut down as a fail-safe mechanism to prevent possible interference to the SARSAT system.
Figure 28. NOAA Profiler Network data lost by fault type over fiscal years 1999 and 2000.
Figure 29. NOAA Profiler Network data lost by fault disposition over fiscal years 1999 and 2000.
The branch has made significant improvements in its ability to remotely monitor activity within the NPN via the World Wide Web. Activities that are now routinely monitored on the Web include information on profiler real-time status, data flow to the NWS Gateway, and ingest of profiler data into the Rapid Update Cycle (RUC) model at the National Centers for Environmental Prediction (NCEP).
A significant improvement was made to the quality control algorithm, primarily affecting the three Alaska profiler sites. The algorithm allows the removal of very specific radial velocities that come from multiple trip ground clutter returns. Overall data quality has been greatly improved.
The Bird Contamination Check algorithm, developed nearly five years ago within the division, was modified. The original algorithm analyzed only the north and east beams to detect the broader spectral widths caused by migrating birds. Now the spectral width from the vertical beam has also been incorporated into the algorithm. Although the spectral width bird signature is not as broad in the vertical beam, it is still apparent and improves the algorithm's detection ability.
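To make the idea concrete, a highly simplified version of such a spectral-width test is sketched below. The threshold value, the two-of-three voting rule, and the argument names are illustrative assumptions only; the operational algorithm works on the actual NPN moment data with its own tuned criteria.

```python
# Simplified sketch of a bird-contamination flag based on Doppler spectral width.
# Migrating birds broaden the spectral width; if enough beams show widths above
# a threshold at a given range gate, the gate is flagged as suspect.

WIDTH_THRESHOLD = 4.0   # m/s -- purely illustrative, not the operational value
MIN_BEAMS = 2           # flag if at least 2 of the 3 beams look contaminated

def bird_flag(width_north, width_east, width_vertical,
              threshold=WIDTH_THRESHOLD, min_beams=MIN_BEAMS):
    """Return True if the spectral widths suggest bird contamination."""
    beams = (width_north, width_east, width_vertical)
    suspicious = sum(1 for w in beams if w is not None and w > threshold)
    return suspicious >= min_beams

# Only one broad beam: not flagged.
print(bird_flag(width_north=5.2, width_east=3.0, width_vertical=2.5))  # False
# Broad oblique beams plus a somewhat broad vertical beam: flagged.
print(bird_flag(width_north=5.2, width_east=4.8, width_vertical=4.4))  # True
```

Including the vertical beam, as described above, simply adds a third vote to this kind of test, even though the bird signature in that beam is less pronounced.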
Staff continue to evaluate the Radio Acoustic Sounding System (RASS) for temperature profiling. When the Neodesha, Kansas profiler was recently relocated, the planned addition of RASS was a primary consideration in selecting the new site.
The division expects to receive its first DEC (now Compaq) Alpha this year to replace the aging MicroVAXes for the profiler sites. The new system will be tested and evaluated, and should lend itself to acquiring raw Doppler spectra, which will lead to improved data quality related to reduced ground clutter and internal interference, and bird rejection in addition to the existing bird detection.
An improved wind and RASS data quality control algorithm has been investigated, and is being designed primarily to detect and correctly flag data contaminated by internal interference.
Additional lower tropospheric (boundary layer) profiler data will be acquired from targets of opportunity around the country.
Staff will also continue adherence to sound operating principles that have produced high data availability rates.
Engineering and Field Support Branch
Michael K. Shanahan, Chief
The Engineering and Field Support Branch provides high-level field repair, coordinates all network logistical support, and designs and deploys engineering system upgrades. These activities lead to improved operation and maintenance of the NOAA Profiler Network (NPN) and help to increase data availability. The 35-site network is monitored to assure data quality and reliability. Working with others in the Profiler Control Center (PCC), branch staff identify problems using remote diagnostics to analyze the situation and pursue corrective action.
Through agreement with the National Weather Service (NWS), their electronics technicians perform most of the preventive and remedial maintenance. At the PCC in Boulder, staff use the profilers' remote diagnostic capabilities to detect failed components, order Line Replaceable Units (LRUs), and coordinate with the NWS electronics technicians to carry out field repairs. A team of specialized engineer/technicians, called rangers, who are experienced in the design and operation of the profiler systems, handle the more complex problems. As division employees based in Boulder, the rangers can be mobilized to repair the profilers on short notice.
NOAA Profiler Network
Alaska Profiler Network
The branch was involved with the transition of the Alaska profilers to the NWS. A Memorandum of Agreement was signed in 2000 by NWS headquarters, the NWS Alaska region, and the Office of Oceanic and Atmospheric Research/FSL for the implementation, support, maintenance, and operation of the profilers. NWS headquarters is providing coordination and support for the Alaska Region, and intends to use the three 449-MHz Alaska profilers as operational systems. FSL will continue to operate the profilers as part of the NPN, and the Alaska Region will assume responsibility for onsite maintenance, logistics, and funding of these systems.
The Alaska 449-MHz Profiler Network became operational in October 1999, and has recorded data availability to the NWS of over 90% since June 2000, and over 98% since August 2000 (Figure 30). When the Alaska profilers became operational, the wind profiles in the first five to seven range gates were corrupted due to receiver saturation. A delay of the transmitted pulse through a band pass filter in the transmitter caused a timing problem, whereby the receiver turned on while the RF pulse was still being transmitted. To solve this problem, the signal processor was reprogrammed to send the transmitted pulse sooner to compensate for the delay in the band pass filter. To ensure the integrity of the range gates, a calibration was performed before and after corrective action was taken.
Figure 30. Alaska 449-MHz network data availability to NWS from October 1999 to January 2001.
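The connection between transmit/receive timing and the lowest range gates follows from the basic radar range relation. As a rough illustration (these are not the actual 449-MHz system parameters),

$$r=\frac{c\,\Delta t}{2}\approx\frac{3\times 10^{8}\ \mathrm{m/s}\times 1\ \mu\mathrm{s}}{2}=150\ \mathrm{m},$$

so a timing offset of only a few microseconds between the end of the transmitted pulse and the opening of the receiver can shift or corrupt several of the lowest range gates, consistent with the first five to seven gates being affected before the signal processor timing was corrected.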
Accumulation of snow on the antennas in Alaska has caused some loss of data. The Talkeetna and Central profilers have experienced problems when large amounts of snow build up on the antenna and the temperature rises above freezing. The snow melts and then refreezes causing the antennas' electromagnetic pattern to become distorted with high side lobes and thus degrading the data. It is interesting to note that when the snow pack is dry and consistent, it has no effect on the antenna pattern. The Alaska electronics technicians have had to remove the snow from the Talkeetna antenna twice this year and once from the Central antenna.
Minor ground clutter problems have also occurred with the Alaska profilers. When the antennas were installed, they were raised higher than usual off the ground for easier maintenance. This caused the antenna to be closer to the top of the security fence, apparently making it more susceptible to ground clutter. The security fence seems to perform a dual purpose, also acting as a clutter fence to eliminate ground clutter. Experiments with a higher fence will be conducted in the spring to help diminish the clutter problem.
Profiler Site Relocation
The Neodesha, Kansas, profiler was relocated because the site's landowner changed and that location was unsuitable for the Radio Acoustic Sounding System (RASS) operations. A new site was found about six miles from the original site and relocation was completed last September. Although the move only took one week, data were not available for a month because a component failed and the site was in checkout mode for quality assurance.
Equipment Upgrades
In response to an NWS Service Assessment report following the 3 May 1999 tornado outbreak in Oklahoma, which stated that "The NWS should make a decision on how to support the existing profiler network so that the current data suite becomes a reliable, operational data source," all NPN profiler sites have been outfitted with a remote-control main breaker (Figure 31). Profiler main breaker trips are the second largest contributor to site downtime. In most cases, main breaker trips are caused by AC power fluctuations or power surges during storms. Although the site usually does not sustain any damage from these occurrences, data availability remains down until a site visit is made to reset the main breaker. To address this problem, the main breakers at all profiler sites were replaced with ones that can be controlled remotely using a touch-tone phone. This capability reduces expenses and increases data availability by eliminating the need for a technician to visit the site to simply reset the main circuit breaker.
Figure 31. A remote control main breaker installed at all NPN sites.
Alaska Profiler Network
Experiments will be performed to find a solution to the ground clutter problems at the Alaska profilers. Figure 32 is a photo of the profiler site at Glennallen, Alaska.
Figure 32. Profiler site at Glennallen, Alaska.
Other NPN Sites
The branch will continue providing prompt field repairs, appropriate coordination of network logistical support, and economical equipment upgrades to provide the meteorological community with quality NPN data. A continuing effort involves outfitting sites with wind and temperature profile capability, along with water vapor and surface meteorological measurements. Currently, 9 out of 35 sites are configured with RASS, and plans are to install these units at the Neodesha, Kansas, and Jayton, Texas, sites within the next year.
Two types of surface meteorological instruments, the Profiler Surface Observing System (PSOS) and the GPS Surface Observing System (GSOS), are now located at each profiler site. Plans are to replace these two units with one digital system, PSOS 11, at all profiler sites.
The grounding and lightning protection at all sites will be evaluated and upgraded to safeguard against lightning strikes. Existing ground networks will be tested and refurbished if necessary. Communications equipment, profiler components, and computers will be protected to isolate them from the damaging effects of lightning strikes.
Software Development and Web Services Branch
Alan E. Pihlak, Chief
The responsibilities of the Software Development and Web Services Branch are to provide software support of existing systems, develop new software and database systems as needed, provide Web support of the division's extensive Web activities, and design software to support a national deployment of profilers.
A recently implemented strategy concerns the development of new software to support future operations of the NOAA Profiler Network (NPN) and its infrastructure. A process reengineering effort during Fiscal Year 2000 indicated that a more effective way to view the NPN of the future is as a set of platforms offering convenient power and communications at which to install meteorological measuring equipment for both operational and research purposes. This strategy drives software development efforts in three areas: migrating profiler operations to the National Weather Service (NWS), continuing to support the record-setting reliability and availability of the NPN data, and becoming the prime focal point on the Web for profiler data.
The NOAA Profiler program was designated a NOAA "mission-critical system," which required that formal Y2K test plans be produced and executed by a transition team composed of members from the division representing FSL and other NOAA laboratories. These Y2K test plans were successful, and the NPN experienced no interruption in data delivery to NWS because of date-related problems. Tests were also performed on the systems for Y2K+1 and 2001 Leap Year, and both passed with almost no effect.
The branch was directly responsible for enabling NPN data availability and reliability to reach new heights during the fiscal year. In collaboration with the NWS, a monitoring system was implemented so that NWS electronics technicians could begin diagnosing profiler problems with limited support from the Profiler Control Center (PCC). In operation at the NWS Telecommunications Gateway, this system observes the delivery of wind profiler data from the NPN hub. If data are interrupted at a particular profiler, the system delivers a message via AWIPS to the NWS forecast office responsible for maintaining that site. The electronic technician may then diagnose the problem and initiate any action deemed necessary.
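The general idea behind such a delivery monitor can be sketched as a simple data-latency watchdog: track the time of the most recent bulletin received from each profiler and raise an alert for the responsible office once a threshold is exceeded. The code below is an illustrative sketch only; the station names, office identifiers, the 90-minute threshold, and the print-based notification are assumptions, and the real system issues its alerts through AWIPS.

```python
from datetime import datetime, timedelta

# Hypothetical mapping of profiler sites to responsible NWS forecast offices.
SITE_TO_OFFICE = {"Site One": "WFO-A", "Site Two": "WFO-B"}

MAX_LATENCY = timedelta(minutes=90)   # assumed threshold for "data interrupted"

def check_data_delivery(last_received, now=None):
    """last_received: {site: datetime of most recent bulletin received}."""
    now = now or datetime.utcnow()
    alerts = []
    for site, office in SITE_TO_OFFICE.items():
        latest = last_received.get(site)
        if latest is None or now - latest > MAX_LATENCY:
            alerts.append((office, site, latest))
    return alerts

# Example run with one stale site.
last = {"Site One": datetime.utcnow() - timedelta(minutes=20),
        "Site Two": datetime.utcnow() - timedelta(hours=3)}
for office, site, latest in check_data_delivery(last):
    print(f"Notify {office}: no data from {site} since {latest}")
```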
A "process reengineering" task was completed that involved documentation of the operations of the PCC, NPN hub, and NPN instrumentation. In this vital task, the computer captures the processes and systems so that their requirements can be extracted and documented, and then used to engineer new systems needed to accomplish goals. Using these requirements, branch personnel began to develop and test a "software toolkit" consisting of low-level software objects that will become the foundation for accelerating efforts in Fiscal Year 2001.
During 2000, staff serviced an average of 18,000 Web hits per month. Thirty percent of these Webpage views were from nongovernmental sites with network domains of ".net" and ".com." For the first time, raw profiler data were made available via the web. Significant amounts of internal operational documentation were converted to the Web and placed on the division's Intranet site. Informational materials (e.g., the "Facts Fax Bulletin" and the "Chiefs Report") formerly distributed via fax or e-mail were also converted to the Web.
It is envisioned that an NWS national profiler network will have a network architecture different from the current NPN. In the modernization of the NPN, the division will be taking various steps toward implementation of the new architecture which will facilitate transition of the NPN to a national network operated by NWS. The following discusses three phases that will take place soon.
The first phase of transition to NWS operations consists of altering the delivery mechanism for data acquired at NPN sites. The data are now delivered in five composite bulletins, each containing information from up to eight different sites. When one of these sites is unable to deliver data via landline, the entire bulletin is delayed until it can be transmitted via the backup communication system that uses the GOES satellites. This delay sometimes affects regular delivery of data to the numerical models operating at FSL and the National Centers for Environmental Prediction (NCEP). Beginning this spring, a single message will be sent from each type of measuring instrument, per site, per time period to alleviate the delays caused by message formatting. Part of this phase also includes collaboration with NWS regional and headquarters personnel, as well as with other FSL divisions, to produce decoders so that data in the new formats can be accessible on AWIPS.
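A minimal sketch of the difference between the two delivery schemes follows: with per-site messages, an outage at one site no longer holds back data from the other sites that used to share its composite bulletin. The message structure and site names below are invented for illustration and do not reflect the actual bulletin formats.

```python
def composite_bulletins(obs_by_site, group_size=8):
    """Old scheme: bundle up to group_size sites into one bulletin.
    If any site in the group is late, the whole bulletin waits."""
    sites = sorted(obs_by_site)
    return [{s: obs_by_site[s] for s in sites[i:i + group_size]}
            for i in range(0, len(sites), group_size)]

def per_site_messages(obs_by_site):
    """New scheme: one message per instrument, per site, per time period."""
    return [{"site": site, "instrument": instrument, "data": data}
            for site, instruments in obs_by_site.items()
            for instrument, data in instruments.items()]

# Toy example: two sites, each with wind-profiler and surface observations.
obs = {"Site One": {"wind_profile": [], "surface": []},
       "Site Two": {"wind_profile": [], "surface": []}}
print(len(composite_bulletins(obs)))   # 1 bulletin: everything waits together
print(len(per_site_messages(obs)))     # 4 independent messages
```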
A prototype of the Wind Profiler Processing Platform (WPPP), which will break the time-based dependency between the Lockheed-Martin manufactured wind profiler and the GOES Data Collection Platform (DCP), will be in place this year. This is important because the manufacturer of the particular GOES DCP for which the profiler was engineered is no longer in business. Additional functions of the WPPP will include moving quality control processing to the actual site, becoming a central collection platform for other collocated instrumentation such as GPS water vapor and RASS, and collecting data for the diagnosis of faults occurring in the collocated instrumentation. The WPPP will produce data ready for display on AWIPS and in a format ready for routing by the NWS and Global Telecommunications System.
The second phase of the transition is installation of a WPPP in the three Alaskan network sites, the addition of the WPPP to the database of Line Replaceable Units (LRUS) for NPN wind profilers, and training for NWS personnel, all to be completed in mid-2002.
The third and final phase involves installation of a WPPP at each remaining profiler site, again with training and documentation for NWS personnel. The upgrade of the network to an operating frequency of 449MHz will also be completed.
The branch will also be working on a joint project between the NWS and the University of Northern Florida to demonstrate the feasibility of remote wireless communications for meteorological measuring instruments. This project will also exemplify new technology including independent "smart" sensors and self-configuring networks.
In 2001, the staff will continue to look for opportunities to use the Web to make operations more efficient and cost effective. The division's Website will be improved in both appearance and operation. It will also be hosted on more modernized hardware, and will use new Java and other current software technologies.
GPS-Met Observing Systems Branch
Seth I. Gutman, Chief
The GPS-Met Observing Systems Branch develops and assesses techniques to measure atmospheric water vapor using ground-based GPS receivers. The branch was formed in 1994 in response to the need for improved moisture observations to support weather forecasting, climate monitoring, and research. The primary goals are to define and demonstrate the major aspects of an operational GPS integrated precipitable water vapor (IPWV) monitoring system, facilitate assessments of the impact of these data on weather forecasts, assist in the transition of these techniques to operational use, and encourage the use of GPS meteorology for atmospheric research and other applications. The work is carried on within the division at low cost and risk by utilizing the resources and infrastructure established to operate and maintain the NOAA Profiler Network (NPN).
To accomplish these goals, the branch collaborates with other NOAA organizations, government agencies, and universities to develop a 200-station demonstration network of GPS-Met observing systems by 2005. These collaborations allow the division to build, operate, and maintain a larger network of observing systems with lower cost, risk, and implementation time than would otherwise be possible using laboratory and division resources alone. The cornerstone of this effort is a "dual-use paradigm" within the federal government that allows leveraging of the substantial past and current investments in GPS made by other agencies such as the U.S. Coast Guard (USCG) and Federal Highway Administration (FHWA) for purposes such as high-accuracy surveying and improved transportation safety. From a technical standpoint, this is possible because of a fortuitous synergy between the use of GPS for precise positioning and navigation, and meteorological remote sensing. The branch is also taking advantage of the substantial effort that NOAA's National Geodetic Survey (NGS) has made to establish a growing network of Continuously Operating Reference Stations (CORS) in the Western Hemisphere. The CORS program collects, archives, and disseminates the GPS observations made by numerous organizations, including FSL, and distributes them to the general public. The branch contributes GPS and surface meteorological data acquired at 56 NPN and other NOAA sites. In fact, the fourth most requested dataset in the CORS network currently comes from the GPS-Met system on the roof of the David Skaggs Research Center in Boulder, Colorado (Figure 33).
Figure 33. The GPS-Met system on the roof of the David Skaggs Research Center.
Real-Time GPS Meteorology
For the past three years, the branch has been collaborating with the Scripps Orbit and Permanent Array Center (SOPAC) at the Scripps Institution of Oceanography, and the School of Ocean and Earth Science and Technology (SOEST) at the University of Hawaii at Manoa, to develop real-time orbit and data processing techniques for NOAA. The effort has resulted in the first-ever practical implementation of real-time GPS meteorology (GPS-Met). By the end of Fiscal Year 2000, GPS-IPWV observations with millimeter-level accuracy were being made at 56 sites every 30 minutes with about 20-minute latency. The realization of real-time GPS-IPWV for objective weather forecasting (near real time for subjective applications), with no significant loss of accuracy compared with results from post-processing techniques, was the last major milestone in the 1994 GPS-Met Project Plan to be achieved.
To accomplish real-time GPS-Met, three things had to be achieved: accurate satellite orbits had to be available in real time, and ways to acquire and to process GPS data from an expanding network of sites in the shortest possible time had to be developed, as described below.
Real-time Satellite Orbits
The calculation of integrated precipitable water vapor from GPS signal delays requires knowledge of the positions of the GPS satellites in Earth orbit with an error of less than 25 cm. In contrast, the accuracy of the orbit commonly available to civilian users of GPS is about 100 cm. Until this year, the improved satellite orbits were calculated once each day by seven orbit analysis centers belonging to the International GPS Service (http://igscb.jpl.nasa.gov/). These centers used data acquired over a 24-hour period at about 150 stations within the IGS global tracking network. They processed the data and produced improved orbits for high-accuracy GPS positioning within about 8 hours. As a consequence, users had to wait about 32 hours for sufficiently accurate orbits to be available, an unacceptable delay for real-time applications that include (but are not limited to) weather forecasting. Recognizing that more frequent data would be required for these applications, several of the orbit centers took steps to acquire data from as many of the IGS tracking stations as feasible in the shortest possible time. This resulted in the availability of hourly data from a sufficiently large subset of the network to generate improved satellite orbits every hour using a "sliding window" technique developed at SOPAC. Short-term (two-hour) orbit predictions, also implemented at SOPAC, were shown to be accurate enough for use in real-time GPS-Met. This breakthrough left data acquisition and data processing as the only impediments to real-time GPS-Met.
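As a rough illustration of the hourly sliding-window idea described above (not the SOPAC software, whose internals are not described here), the sketch below keeps the most recent 24 hourly batches of tracking data, refits the orbits each hour, and issues a short-term prediction. The two callables are placeholders introduced only for this example.

```python
from collections import deque

# Conceptual sketch of an hourly "sliding window" orbit update. The callables
# fetch_hourly_batch and fit_and_predict are placeholders for real data access
# and orbit estimation; they are assumptions for illustration only.
WINDOW_HOURS = 24

def sliding_window_orbits(fetch_hourly_batch, fit_and_predict, predict_hours=2):
    """Yield a short-term orbit prediction once per hourly update."""
    window = deque(maxlen=WINDOW_HOURS)
    while True:
        window.append(fetch_hourly_batch())      # newest hourly tracking data
        if len(window) == WINDOW_HOURS:          # a full 24-hour window exists
            # Refit orbits over the window, then extrapolate a couple of hours
            # ahead so downstream users never wait for the daily solution.
            yield fit_and_predict(list(window), predict_hours)
```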
Real-time Data Acquisition
When the GPS-Met project began, guidance from the Forecast Research Division was that the GPS-Met system should be able to provide timely and accurate data to the next generation of numerical weather prediction (NWP) models, which were expected to have a one-hour data assimilation and forecast cycle by 1998. Therefore, the data acquisition cycle at the NPN sites was designed and implemented at 30-minute intervals to allow for further improvements in NWP capabilities. Leveraging the use of other federal agencies' GPS assets, such as those of the U.S. Coast Guard, in 1995 permitted NOAA to develop the GPS Water Vapor Demonstration Network quickly with low cost and risk. Data from these sites were acquired and distributed by NGS only once each day, so in 1999 they began expanding their capabilities to acquire and distribute these data every hour to keep up with user demand for more timely high-accuracy GPS data. They made substantial upgrades to their communications and data processing capabilities. Working with the branch in 2000, they developed and implemented methods to send GPS and surface meteorological observations at CORS sites to FSL every half-hour, bringing these sites into alignment with the data acquisition capabilities required for a next-generation upper-air observing system. Staff worked with the Facilities Management and Systems Administration Branch to develop and implement advanced server and Internet data transfer capabilities to keep pace with the growing supply of observations needed to complete the 200-station demonstration network by 2005.
Real-time Data Processing
To meet the demands of an hourly NWP data assimilation cycle, observations must be available to the models with less than 20-minute latency. The challenge of acquiring timely data from an expanding network of GPS-Met sites was previously discussed, but data processing is an entirely different matter. The task here is to combine these 30-minute observations with improved real-time orbits and other parameters, to calculate the signal delays at each site using common constraints provided by four "long-baseline" fiducial sites. Once this has been accomplished, quality controls have to be applied, the derived wet signal delays have to be mapped into IPWV, and the observations and retrievals have to be made available to FSL and other groups within and outside NOAA. Figure 34 is a generalized diagram of this process. Until this year, "static" data processing was carried out once each day using expensive Unix workstations, and it took about 6 hours to calculate water vapor from 56 sites. The branch developed a scalable distributed processing system using fast and inexpensive personal computer workstations running the Linux operating system. Branch staff also implemented a "sliding window" data processing technique based on the one developed at Scripps in collaboration with the University of Hawaii to replace the old technique of generating static 24-hour solutions. This increased data processing speed by a factor of two, while reducing hardware costs by almost a factor of four. A high-level diagram of this system is shown in Figure 35.
Figure 34. Diagram of data processing for the GPS-Met sites.
Figure 35. Diagram of a scalable distributed data processing system.
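The final step in this chain, mapping the derived zenith wet delays into IPWV, is commonly done with a dimensionless factor of roughly 0.15 derived from the weighted mean temperature of the atmosphere. The sketch below is a minimal illustration of that standard conversion, not the branch's operational code; the refractivity constants and the 270 K mean temperature are assumed typical values.

```python
RHO_W = 1000.0    # density of liquid water, kg m^-3
R_V = 461.5       # specific gas constant of water vapor, J kg^-1 K^-1
K2_PRIME = 22.1   # refractivity constant k2', K hPa^-1 (assumed typical value)
K3 = 3.739e5      # refractivity constant k3, K^2 hPa^-1 (assumed typical value)

def ipwv_from_zwd(zwd_mm, tm_kelvin=270.0):
    """Convert a zenith wet delay (mm) into IPWV (mm).

    tm_kelvin is the weighted mean temperature of the atmosphere, usually
    estimated from surface temperature; 270 K is only a placeholder here.
    """
    # Dimensionless conversion factor, typically about 0.15.
    pi_factor = 1.0e8 / (RHO_W * R_V * (K2_PRIME + K3 / tm_kelvin))
    return pi_factor * zwd_mm

# Example: a 130 mm zenith wet delay maps to roughly 20 mm of IPWV.
```

A factor near 0.15 is also consistent with the delay-to-IPWV ratios quoted in the accuracy comparison below.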
Confirmation of GPS-IPWV Accuracy
Figure 36 shows a comparison between near real-time GPS-IPWV calculated every 30 minutes and static (24-hour) solutions during qualification testing of the new real-time data processing technique in April and May 2000. The 2,050 comparisons show a mean difference of 0.20 mm and a standard deviation of 1.02 mm of delay. This translates to an average difference of about 0.03 mm IPWV ±0.15 mm IPWV.
Figure 36. Comparison between near real-time GPS-IPWV calculated every 30 minutes and static 24-hour solutions.
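The qualification statistics quoted above amount to a simple paired comparison. The sketch below is a hypothetical check, not the software behind Figure 36: it computes the bias and scatter of the 30-minute retrievals against the static solutions and scales them into IPWV with the ~0.15 factor, which is how 0.20 mm and 1.02 mm of delay become roughly 0.03 mm and 0.15 mm of IPWV.

```python
import statistics

def compare_retrievals(rt_delays_mm, static_delays_mm, pi_factor=0.15):
    """Paired comparison of near real-time vs. static wet-delay retrievals.

    Returns the number of pairs, the mean difference and standard deviation
    in mm of delay, and the same two statistics scaled into mm of IPWV.
    """
    diffs = [rt - st for rt, st in zip(rt_delays_mm, static_delays_mm)]
    n = len(diffs)
    bias = statistics.fmean(diffs)
    spread = statistics.stdev(diffs)
    return n, bias, spread, bias * pi_factor, spread * pi_factor
```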
In September 2000, the branch participated in its third water vapor intensive observing period (WVIOP) experiment at the Department of Energy's Atmospheric Radiation Measurement (ARM) facility near Lamont, Oklahoma. The major difference between this WVIOP and previous ones was the availability of near real-time GPS-IPWV data to support on-the-fly comparisons with other instruments. Preliminary results indicate that GPS-IPWV measurement error is closer to 3.5% than the previous estimate of 5% that was determined in 1997. Figure 37 is a comparison of near real-time GPS data and rawinsonde measurements during WVIOP 2000.
Figure 37. Comparison of near real-time GPS data and rawinsonde measurements during WVIOP 2000.
GPS-IPWV Impact on Weather Forecasts
Evaluations of the impact of GPS-IPWV observations on weather forecast accuracy have been conducted by the Regional Analysis and Prediction Branch of FSL's Forecast Research Division since 1997. In 2000, a previously undetected problem with the assimilation of GPS data into the Rapid Update Cycle (RUC) was corrected, and all of the GPS-IPWV observations from 56 sites were made available to the 60-km RUC for parallel runs (with and without GPS). Table 1 summarizes the results, showing that improvements in 3-hour relative humidity forecasts using the 60-km RUC NWP model with GPS observations in parallel cycles using optimal interpolation are greatest at the lowest levels and are sensitive to the number of stations in the network. Table 2 shows the results for the lowest two levels, 850 hPa and 750 hPa.
Table 1. Improvement in RH forecast accuracy with number of stations, by pressure level (including 850 hPa). Results are expressed in terms of forecasts from the RUC, with verification from radiosondes. 18-station tests: 857 in 1998-1999; 56-station tests: 421 in 2000. Period of parallel test statistics: March 1998 to September 1999 and February to November 2000.
Table 2. Percentage of forecasts unchanged, improved, or made worse through the addition of GPS-IPWV data to the 60-km RUC model between 1998-1999 and 2000.
Expansion of the GPS Water Vapor Demonstration Network
An agreement between OAR/FSL and the Department of Transportation's Federal Highway Administration (FHWA) was signed this year that allows the installation of NOAA GPS Surface Observing System (GSOS) packages at all Nationwide Differential GPS (NDGPS) sites. There will be about 70 NDGPS sites in the water vapor demonstration network by 2004.
The branch added three sites to the network in 2000: one at the Nationwide Differential GPS (NDGPS) site at Driver, Virginia; a second at the last NPN site to receive a GPS water vapor observing system, Slater, Iowa; and the last at the Ground Winds Lidar facility at Bartlett, New Hampshire, operated by the Mount Washington Observatory. Figure 38 shows the network at the end of Fiscal Year 2000, along with stations identified for inclusion in 2001. A total of 56 sites were delivering data at the close of the year.
Figure 38. The GPS Water Vapor Demonstration Network at the end of Fiscal Year 2000, and stations identified for inclusion in 2001.
A major effort will be made to expand the number of stations in the Water Vapor Demonstration Network in 2001. GSOS payloads will be installed at about 19 more FHWA NDGPS sites, bringing the expected total of NDGPS sites in the network to about 23 next year. An agreement is under discussion with the U.S. Coast Guard and Army Corps of Engineers that will permit the division to install GSOS payloads at all remaining Maritime Differential GPS sites. This agreement will allow the addition of another 20 sites to the network, bringing the potential number of sites in the network to about 95.
The branch will work with various FSL divisions and NCEP to make GPS-IPWV data available to the wider NOAA forecaster and modeler communities, and will work with others in FSL to display GPS-IPWV on advanced NOAA workstations including FX-Net, W4, and AWIPS.
Another collaborative effort involves the UCAR SuomiNet program to add some of their sites to the NOAA/FSL GPS Water Vapor Demonstration Network. Of special interest are sites owned by the University of Oklahoma in and around the ARM site, and one belonging to Plymouth State College in New Hampshire.
In cooperation with the NWS forecast offices in Florida, branch staff will determine if GPS-IPWV data can improve weather forecasts during the convective storm season. This will be accomplished by working with the Florida Department of Transportation (FDOT) and the NWS forecast offices to install FDOT differential GPS receivers at all forecast offices in Florida, add the sites to the Water Vapor Demonstration Network, and retrieve, process, and provide the data to the forecasters in near real time. Plans are to work with the 45th Weather Squadron at Patrick Air Force Base and NASA at Kennedy Space Center to densify the GPS network in the Cape Canaveral/KSC area, and help them to use these data to address their primary weather challenge: lightning predictions.
The branch will collaborate with the NASA Langley Research Center (LARC) on the Clouds and the Earth's Radiant Energy System (CERES) project, a high-priority NASA satellite program with several important goals. The branch's involvement will be to assist LARC in installing a GPS-Met system on an offshore ocean platform and to provide them with accurate water vapor data in near real time. Since water vapor is a significant forcing function in radiative transfer processes within the atmosphere, GPS-IPWV data will be used to constrain their models of the incoming and outgoing shortwave and longwave radiation within the atmosphere.
Facilities Management and Systems Administration Branch
Bobby R. Kelley, Chief
The Facilities Management and Systems Administration Branch manages and supports the division's communications and computer requirements in operations, maintenance, and support. Duties include performing systems operations, systems maintenance, systems administration, network administration, and NOAA Profiler Network (NPN) telecommunications administration. These responsibilities cover a broad range of computers and communications equipment. NPN processing is accommodated on 13 MicroVAXes configured in two clusters, primary and backup. Other data processing and Web page hosting is accomplished on a Sun E3000 server. Two Sun Ultra I workstations and 10 PCs running Linux are used for data acquisition, processing and distribution for the GPS Integrated Precipitable Water Vapor demonstration project. Four other PCs running Linux are supporting software development and testing to modernize the NPN processing system. The division's file and e-mail server is a PC running Microsoft NT-4 that provides connectivity to 31 PCs running Microsoft Windows 9x or NT-4. Backup NPN data communications are provided through an Intel 386-based PC running SCO Unix that is connected to a DOMSAT satellite receiver. Telecommunication responsibilities cover 38 NPN data circuits within the lower 48 states and in Alaska. Day-to-day work includes installing and configuring new components and systems on the division network, network problem isolation and maintenance, coordination with other FSL and building network staff, modifying system configurations to meet division requirements, system problem isolation and maintenance, in-house telecommunications maintenance or coordination for contracted maintenance, peripheral installation and configuration, computer and network security, preventive maintenance, information technology purchasing, property control, and routine system backups.
Over the last 12 to 18 months, improvements have been implemented to ensure NPN data availability. To eliminate a single point of failure for electrical power, a 70 kVA uninterruptible power system (UPS) was installed and the division's computer facility was connected to the David Skaggs Research Center emergency generator. This work was completed prior to Y2K and has since served the division well on many occasions. The UPS maintains electrical power for the division computer facility until the building emergency generator is online, and the UPS buffers the computer and communications systems against power spikes, eliminating data loss due to electrical power problems. New branch personnel were trained on NPN operations, reemphasizing monitoring and preventive maintenance; this resulted in very few calls from division customers over the last 12 to 18 months. Also, the branch is now on call 24 hours a day, 7 days a week to handle data delivery problems to division customers. To ensure protection of computer facility equipment, temperature sensors connected to an automatic paging system were installed in the computer room to provide alerts around the clock when abnormal temperature increases are detected. Since the Facility Division is now a recipient of division products and its operations staff now provide 24 hours per day onsite coverage, additional monitoring of division data delivery is now in effect. Close coordination between operators in the branch and the Facility Division enables quick response to data delivery problems.
For calendar year 2000, the efforts taken to eliminate single points of failure and to minimize downtime and risk paid off: uptime for the NPN processing systems averaged 99.3%, communications systems uptime averaged 96.8%, and data delivery to the National Weather Service (NWS) averaged 94.4%. Data delivery to NWS in calendar year 2000 improved by more than 4% over calendar year 1999 (an effective improvement of approximately 5%). A summary of profiler data availability for 2000 is shown in Figure 39.
Figure 39. Summary of profiler data availability from January through December 2000.
Telecommunications risks and costs were minimized by requesting an exception to FTS-2001 for NPN data circuits. Department of Commerce approval of this request was obtained, and telecommunications services through AT&T were retained. This permitted continued use of existing local division equipment valued at approximately $130,000, ensuring continuous reliable NPN telecommunications services. A savings of approximately $750,000 over five years, compared to proposals for similar services by FTS-2001 providers, will be achieved using the existing AT&T equipment and circuits.
Plans are to maintain current operations and to ensure continued timely data delivery to all customers. Ongoing improvements are in support of software development and testing for the modernized NPN processing system. A low-cost, high-performance approach is being developed and tested using off-the-shelf PCs running the Linux operating system. The outdated NPN backup data communications system will be replaced with a modern PC running the Linux operating system and the new DOMSAT data communications software. To ensure rapid recovery from computer failures, an improved file backup system will be implemented. In all cases, the low-cost, high-performance approach will be the method employed to continue meeting mission requirements.
A few people have contacted us with questions about the recent update to The Snowman and The Snowdog, or how they can play the previous version.
We would just like to clarify that we did not develop the version that was released in 2013. We developed the version that was released in 2012.
For contractual reasons outside our control the version that we did develop is no longer available to download in the UK, although it is now available internationally.
For iOS users in the UK: if you backed up your device to a PC/Mac after you originally downloaded the app, you may be able to reinstall it using iTunes on your PC/Mac.
Develop Awards 2013
On the 10th July 2013, the Develop Awards will toast another diverse array of the finest European game development talent.
We are delighted to announce that Crash Lab has been nominated in 2 categories; Best Use of a Licence or IP (for The Snowman and The Snowdog), and Best Micro Studio.
Time to dust off our smartest suits and check if they still fit, it's been a while....
More ways to play Twist Pilot!
We've been a little quiet so far this year but we are happy to be able to announce that Twist Pilot is now available from
BlackBerry App World,
Samsung Apps
and the Mac App Store.
We've even launched a
free version for iOS for those who would like to try it for free.
Oh, and before I forget, the iOS version will be getting a 4th chapter very soon.
Notes from a veteran of the console wars
There was a time when debate regarding video games was limited to "What's better: Commodore 64 or ZX Spectrum?" Developers stayed out of it and left the bickering to fanboys.
Fast forward a quarter of a century and the debate still rages. However, the fanboys have been joined by developers and the discussions have become more fractious; the debates more divisive....
1 million downloads!
We are very pleased to announce that over the holiday period, The Snowman and The Snowdog was downloaded more than 1 million times in the UK alone. A huge thanks to everyone who downloaded it and we're glad you enjoyed it.
Out Now - The Snowman and The Snowdog!
We are very pleased to announce that The Snowman and The Snowdog is now available for free from the App Store, Google Play and the Amazon App Store. Sorry to our overseas fans - like the film that it is based on, it is only available in the UK and Eire this year.
Out Now - Twist Pilot!
We are very pleased to announce that Twist Pilot is now available in the App Store and has become a Featured Game (thanks, Apple). We've been overwhelmed with the response so far and thanks for all your messages of support. We hope you all continue to have fun playing it.
Coming Soon - The Snowman And The Snowdog
According to Develop we are working on a mobile game version of the upcoming film The Snowman And The Snowdog! We'll be adding it to the Games page in the near future.
With Twist Pilot about to be released across multiple platforms we thought it was high time we updated our website. We hope you enjoy the new look; be sure to check back as we continue to update with news of our other titles and some exciting collaborations we have in the pipeline. In the meantime check out our Twist Pilot game page and decide what device you are going to play it on!
Twist Pilot now coming to more platforms
We can now reveal that our debut title, Twist Pilot, will be released in October on iOS, Android and PlayStation Mobile.
Game Evaluation
Game evaluation is a sensitive subject. In theory, it's part of a mechanism to ensure the game is in an optimum state when released. In practice, it often leads to the Development team feeling alienated and disheartened.
My experience of the Game Evaluation process over the last 20 years has ranged from elation, to acquiescence, to depression.
Some time back in the 80's my parents bought me a Rubik's Cube. It was a simple but fiendishly difficult puzzle whose goal was to fill each face with squares of a single colour. I spent countless hours twisting and turning it, trying to work out what combination of moves would allow me to complete it. In all that time, one thing that I never did is ask why? Why do I need to make all of the faces a single colour? What's my motivation?
for when off-the-shelf just won't cut the mustard
we do custom
Wax Media is a digital agency with a particular focus on the design and development of complex, richly functional websites and web applications. Whatever the challenge, our expertise is in providing solutions that make the complex seem simple.
Custom system development isn't the whole story though. We provide a range of digital services to some of the UK's leading brands, including email marketing, design and web hosting. If you have a project to discuss, get in touch, we'd love to hear from you.
CHANTRELL POSTERS
Tom Chantrell was arguably the most famous of all British cinema poster artists. The mid 20th Century was a golden era for the art form - until the arrival of digital image manipulation, film studios invested heavily in original painted artwork for each new title, and Chantrell forged a wonderful career in the field.The Chantrell Poster website is backed by Shirley Chantrell (the late Tom Chantrell's wife) and daughters Louis & Jacqueline, together with author Sim Branaghan and Mike Bloomfield, one of the UK's leading poster dealers and long-term client of Wax Media. The site exists to provide information about the life and work of Tom Chantrell, and to make selected prints and even original artwork from Chantrell archive available for purchase.The site uses PayPal for payment processing, and features a custom e-commerce admin system to enable the site owners to manage the products available on the site, along with all other aspects of site content.Visit http://chantrellposter.com
THE CREATIVE BOOK
The Creative Book is an ambitious project that we have developed in collaboration with designer Sanj Sahota. It’s a platform for talented creatives from around the world to share their work and connect with others.Contributors come from a variety of creative professions, including art and design, film-making, photography, make-up and modellingThe platform is entirely bespoke, developed from the ground-up with a huge array of functionality - too much to list here! Check it out at http://thecreativebook.com
CIOB OPUS
We were commissioned by the Chartered Institute of Building to design and develop a custom platform that would enable their regional and international offices to collaborate and control the flow of work through the central marketing, press and communications office. The first iteration of the Opus platform was launched in 2009.Subsequent additions to the platform include a bespoke web-to-print system that enables non-technical users to edit and download print-ready PDF artwork for the Institute's library of marketing collateral using an intuitive online interface. An 'iStock' style stock image library was developed to interface with this system, enabling images within print artwork to be replaced, or simply downloaded in a variety of resolutions and formats.Opus continues to grow, with numerous updates and developments planned for the near future.
ESPA is the world's leading luxury spa products and treatments company. We have worked with ESPA International for the past three years, providing a variety of digital creative services, including HTML email design and development, Flash design, production of web assets and development of a micro-site for one of their suppliers.Latterly, our work with ESPA is particularly focussed on their email marketing programme. Following the launch of the new ESPAonline.com website, we worked with the internal marketing team to execute a successful eCRM programme, providing email design, development, campaign delivery and reporting services. Find out more about our HTML email development services.
The Art of Building photography competition is an international showcase for the very best digital photography of the built environment. The competition is run by the Chartered Institute of Building, with proceeds from the 2012 competition being donated to the Haiti School Project, an initiative from Article 25 with the aim of building two hurricane and earthquake resilient schools in Haiti.The Art of Building website forms the hub through which the entire competition is run - all entries (more than 2000 in 2013) are submitted via the site and a suite of custom systems enable the three rounds of private judging that whittle 120 semi-finalists down to six finalists, with a public vote to decide the ultimate winner.We built the Art of Building website, which was awarded 'Best Website' at the MemCom awards in early 2011, from the ground-up. We also designed the logo.Visit http://artofbuilding.org
TEREMA
Terema is an international organisation offering bespoke training and consultancy services, and are the leading provider of Human Factors and Team Resource Management training to the UK National Health Service.The new Terema website was launched in early 2014 with a slick, modern design and some neat features including HTML5 video, integrated booking systems for upcoming courses, and a searchable resource centre providing written and video material.The site features a custom Content Management System covering all aspects of site content through an intuitive, easy-to-use interface. Like many of our sites, it is hosted on our dedicated UK-based server.Visit http://terema.co.uk
NEATSMITH
Neatsmith specialise in the manufacture, supply and installation of the finest contemporary fitted wardrobes, sliding door wardrobes and bedroom furniture in the UK.We were commissioned to replace their existing website with one that showcased their high-end products in a suitably contemporary manner. Neatsmith operate in a competitive sector, and SEO was a key consideration for the new site.The delivered solution contains lots of high quality imagery and full details of the products and service they offer. A bespoke content management system was developed that allows site administrators to update details of all of the products and site content with ease.The site's functionality was developed using Microsoft ASP.NET 3.5 and SQL Server 2008 technologies and is hosted on our dedicated UK-based server.Visit http://neatsmith.co.uk
UNIMED ELECTRODES
Unimed are a UK-based company providing a vast catalogue of EEG supplies, electrodes and gels for neurophysiologists at highly competitive prices. We have worked with Unimed since 2011, having firstly provided a re-design of their logo and brand collateral, followed by an all-new website.The Unimed site is a custom e-commerce platform with a difference - it is designed to fit Unimed's very specific business model, featuring a bespoke ordering process linked to product admin systems.A comprehensive suite of management systems enable Unimed administrators to manage orders and products, site messaging, general content and graphics. Custom SEO management tools enable Unimed to rank well for highly competitive search terms, by providing unique metadata, titles, descriptions and URLs for each product.Visit http://unimed-electrodes.co.uk
MASS PLC
Mass are the UK's Leaders in ARCHIBUS and Facilities Management Software and related services. In 2013 we were appointed to provide a comprehensive overhaul of their online presence, starting with a brand new website. The new site features a highly functional custom content management system, enabling all areas of the site to be easily updated with text, images and video. A custom Media Centre provides case studies, news and a blog, with feeds pulling the latest entries into other relevant areas of the site. A full suite of social sharing devices enable Mass' content to be syndicated around the web by site visitors.A subsequent update to the site saw the addition of an integrated newsletter sign-up form, which links directly with our email marketing application. A fully editable HTML email template in the external ESP enables Mass administrators to easily produce professional HTML emails, deliver them to their subscribers, and view comprehensive analytics on each campaign.Visit http://mass-plc.com
FUTURA DEFENCE
Futura provide a suite of technical solutions to address the central problem facing defence leaders - how to coordinate the components of capability between land, sea and air programmes. Their solutions enable defence forces to maximise capability under existing resources through planning and forecasting.The new Futura website was launched in late 2013, to replace an out-of-date legacy site built in Flash. The site features our custom .NET Content Management System which enables site administrators to quickly and easily manage content of all kinds.Futura operate in three main territories - the UK, Australia and Canada. The armed forces for each of these territories have their own terminology, so to ensure that the site content is relevant to a given audience, the CMS enables administrators to provide dedicated content for each.Visit http://futuradefence.com
a tried & tested process
Every project begins with a briefing meeting. We believe in straightforward, open, plain communication - no bluster or jargon, just a useful conversation that helps us understand your business or project requirements.
Following our initial consultation we put our heads together and formulate a plan. This is provided as a formal proposal that outlines a technical specification, functionality, process & deliverables and costs.
Working from a brief, we map out the information architecture, user paths, interface elements and functionality. The resulting design visuals are presented for review and we undergo an iterative process of revision and refinement.
Everything is hand-coded from scratch, using ASP.NET, Microsoft SQL Server, xHTML, HTML5, CSS, CSS3 and Javascript. The platform is tested thoroughly and uploaded to our staging server for review.
We host many of our clients' sites and applications on our dedicated UK-based server, or work with their technical department to deploy to their preferred hosting environment. SUPPORT
Whether it's through a structured maintenance agreement, hosting plan or just ad-hoc telephone and email support, we continue to take care of our clients and help them make the most of their investment.
We like to throw a lot of challenges to the guys at Wax and they never let us down in terms of meeting our expectations. Putting all the amazing work and their can-do attitude aside, it really is quite simple why we keep giving Wax work - they are just great to work with!
THE CHARTERED INSTITUTE OF BUILDING
The team at Wax are among the best developers I have worked with during the last 18 years of my digital career. Not only are they extremely quick, but have a natural ability of mirroring quality standards, practices and even introduce new ideas into the mix (rather than only follow instruction). Can't recommend them enough!
Using Wax Media for our email marketing allows us to be dynamic and spontaneous. They are fast, responsive and provide us with an individual service tailored to our needs – something I have not experienced from other email marketing providers.
ESPA INTERNATIONAL
Wax Media have been a real asset to our business for online design work, their flexible, responsive approach suits us perfectly and they always get things turned around on budget and on time!
THOMAS SANDERSON
Since we started working with Wax Media our website has gone from strength to strength. The website is easier to navigate for its users, easier to keep up to date ourselves and the work that they have carried out on SEO has increased the number of people visiting our web site dramatically. All of this has resulted in an increase in new work flowing directly from our website which was exactly what we wanted to achieve. The work has been done extremely efficiently and at very reasonable cost. I would not hesitate to recommend them.
HERRINGTON & CARMICHAEL
We've been using the guys at Wax Media for the last couple of years for all of our design/web needs and quite simply, they rock. They are blessed with a particularly hard to find mix of creative flair, technical excellence and general all-round-good-to-deal-with-ness. If you're looking for an online media company to work with, you'll have a tough job finding anyone better.
Having worked with Wax Media over the last year we are genuinely impressed with the professional approach and the creative abilities they have displayed. From providing creative ideas and design input on our hard copy newsletters to the complete redesign of our corporate website we have been really pleased with the service they have provided. More importantly we've received genuinely positive feedback from our clients on the projects that they have been involved with. We look forward to continuing to work together and to recommending Wax Media to our key contacts.
Shere Marketing has worked with Wax Media since 2008 and we have always been very pleased with their responsiveness and quality of work. Their production expertise for HTML emailers has been greatly valued. I would certainly recommend them to other agencies and end clients alike.
SHERE MARKETING
If you want a site that works as well as just looking pretty I'd give Wax Media a call. Our new site has transformed our visibility and sales, and is by far the best money I've spent on marketing / promotion in 25 years of running a business.
GECKO HOME CINEMA
I've had four websites designed & developed by Wax Media and I'm extremely happy with the outcome. As a non-technician myself, it was very important to find a company that could understand my objectives, design a user-friendly system and hand-hold me through the process when needs be. Communication with Wax Media has been friendly and efficient and it is reassuring to know that a genuinely personal service is provided. I have absolutely no hesitation in recommending Wax Media to others.
MEM COLLECT
we are wax media
We are a friendly, plain speaking, hard working and innovative digital agency. We’re based in Wokingham, Berkshire, and work locally and internationally on a range of web projects. Wax was formed in 2007 as a partnership between Nick Higton and Ian McInnes. Our flexible team draws on the talents of various specialist collaborators according to the requirements of a given project.
NICK HIGTON
Nick does a little bit of most things at Wax - he’s the primary account manager, an interface and UX designer, HTML email developer and, on occasion, even dabbles in a bit of front-end web development. Outside of Wax, Nick has provided caricature illustrations for two books and has a vague ambition of one day writing a novel. Call Nick on 01189 778 578.
IAN MCINNES
Ian is the technical director at Wax and the man behind some mind-bogglingly complex functionality. From ASP.NET to HTML5, Ian brings the expertise that enable us to turn challenging briefs into usable systems. When he isn't knee-deep in code, Ian spends far too much time (and money) renovating his Edwardian home. Call Ian on 01189 778 594.
IMAGES FROM OUR WORLD
ready to say hello?
Wax New Media Ltd. / Innovation House / Molly Millars Close / Wokingham / Berkshire / RG41 2RX
Registered Company Number: 06979625 / VAT Registered Number: 983365091
POP IN FOR A CUPPA
Tech transfer Research projects Publications
Activities & Honors
I am a principal scientist at Adobe Systems, Inc., and an affiliate assistant professor at the University of Washington's Computer Science & Engineering
department, where I completed my Ph.D. in June 2006 after five years
of study; my advisor was David
Salesin. My areas of research are computer graphics, computer vision, and computational imaging. Specifically, I research computational techniques that can help us author more expressive imagery using digital cameras. I
spent three summers during my Ph.D interning at Microsoft Research, and my
time at UW was supported by a Microsoft
fellowship. Before UW, I worked for two years as a research
scientist at the legendary but now-bankrupt Starlab,
a small research company in Belgium. I completed my Masters and Bachelors at MIT majoring in computer
science; while there I was a research assistant in the Computer Graphics Group, and an intern at the Mitsubishi Electric Research
Laboratory (MERL) . As an undergraduate I did research at the MIT Media Lab.
I also spent much of the last year building a modern house in Seattle, and documented the process in my blog, Phinney Modern.
NR 10-22
Contact: Karen V. Gregory, Secretary (202-523-5725)
On Thursday, September 30th, you may notice some changes to the Commission’s website. That’s because the Commission is transitioning to an upgraded website operating platform, the first of two major phases in the Commission’s website upgrade and redesign process. These upgrades will provide more transparency and public access and input for the Commission’s activities, information, and services. “This new foundation will support our continuing work to make the Commission more open, useful, and responsive to our customers -- the public and the shipping industry that serves them. The Commission is fully committed to President Obama’s directive that government should be transparent; government should be participatory; and government should be collaborative,” said Chairman Richard A. Lidinsky, Jr.
We anticipate minimal service disruptions during the launch process. Should you experience any difficulties accessing information on the Commission’s website please contact the Commission's Office of the Secretary or call (202) 523-5725.
While many of the improvements to the upgraded platform will run in the background, some of the immediately noticeable front-end benefits to the public include:
Improved online visibility of the Commission’s website, making it easier for the public to locate and use Commission resources and services;
Better organization of current information to improve transparency and access;
Enhanced website search capabilities, particularly within the Electronic Reading Room;
Improved communication capabilities through RSS feeds and streaming video; and
Front-end graphic enhancements – wider pages and crisp graphics.
Users of the Commission's service contract filing system, SERVCON, may access this system either through the Commission's website or directly via https://servcon.fmc.gov. If SERVCON filers/publishers require access to SERVCON during the launch period, they are encouraged to access SERVCON directly via the above link.
The Federal Maritime Commission (FMC) is the independent federal agency responsible for regulating the nation's international ocean transportation for the benefit of exporters, importers, and the American consumer. The FMC's mission is to foster a fair, efficient, and reliable international ocean transportation system while protecting the public from unfair and deceptive practices.
Privacy-Enhancing and Voluntary
Secure and Resilient
Interoperable
Cost-Effective and Easy To Use
Enhancing Online Privacy
Government Adoption
Catalyzing the Marketplace
Identity Ecosystem Steering Group
Releases/Announcements/Documents
The Strategy (PDF)
Benefits of NSTIC
NIST Home > NSTIC > Launch of the National Strategy for Trusted Identities in Cyberspace Transcript
Launch of the National Strategy for Trusted Identities in Cyberspace, Transcript
Transcript: Chamber of Commerce, April 15, 2011, Launch of the National Strategy for Trusted Identities in Cyberspace
The National Strategy for Trusted Identities in Cyberspace is an important outgrowth of the Obama administration’s cyberspace policy review and its pursuit of a smart and practical approach to securing America's economic and national security.
What underpins the Chamber’s enthusiasm for today's event? Well, we recognize that the strength of our free enterprise system is directly tied to the prosperity and security of the Internet. The Internet, a global engine of creativity and economic growth, is responsible for roughly $10 trillion in annual online transactions. Thank you to Secretary Locke for that figure. However, passwords—basically our tickets to the web—can be inconvenient and very insecure. As you know, ID thieves can guess or steal your passwords or pretend to be you online. Online fraud and identity theft put economic growth and job creation at risk and creates problems for businesses and consumers alike. As more and more of our daily activities, from paying bills, to shopping, to texting your friends, communicating with colleagues, all of that is moving online and we want those activities to be safe, secure and trustworthy. In a nutshell, we want that sum of $10 trillion to continue to grow.
The Strategy proposes building a voluntary system, an identity ecosystem, if you will, where consumers and businesses conduct transactions with greater confidence in each other and the infrastructure that the transactions take place on. Though there's still much work to be done, the NSTIC excites me and others because one, it will be driven by the private sector with collaboration with our government partners. It will be voluntary. The focus will be on providing consumers and businesses choice in how they authenticate online. It recognizes that numerous cybersecurity efforts impact the security of online transactions and trusted digital identities are only one part of a smart and layered approach to security in cyberspace. So today, this morning, first we'll hear from Commerce Secretary Locke and then Homeland Security Deputy Secretary Lute. Director Sperling of the NEC, contrary to what your agenda says, will be joining us a bit later this morning, so we may need to improvise a little bit, so bear with us. We'll have a panel and NIST’s Jeremy Grant will lead that discussion on the NSTIC with colleagues from CDT, Harvard, Pay Pal and Google. Thank you for being here. They'll take questions from the audience and the media, so get those questions ready. And following the panel we'll hear from Howard Schmidt, the White House’s cybersecurity coordinator, as well as Senator Mikulski. We encourage you to stay and mingle for a few minutes after the presentations. It is Friday, after all, so check out the great exhibits at the back of the room.
Without further ado, I'm very pleased to introduce our first speaker, Commerce Secretary Locke. Secretary Locke, as you know, joined the Administration in March of 2009 after serving as Washington State’s governor. He's been President Obama's point person for advancing the Administration's efforts to double U.S. exports. The Chamber wants to recognize your leadership on export control modernization as well as your efforts to boost U.S. trade in emerging markets such as China, India and Brazil. The Chamber appreciates your efforts to hear from U.S. business leaders on a regular basis. Thank you, Secretary. From a cybersecurity standpoint, we appreciate the Internet Policy Task Force's outreach to the private sector on cybersecurity, innovation and the Internet economy as well. Unfortunately, Secretary Locke won't be with us for very long. As all of you probably know, President Obama has nominated him to be our next ambassador to China and I understand that you have joined us on a break from ambassador school, so thank you very much. We're very pleased to have you here with us this morning. Please give a warm welcome to Secretary Locke. [Applause] Well, thank you very much, Ann, for the introduction. Wow, it's really great to see so many people here, and I want to thank the U.S. Chamber for hosting this very important event and this discussion. I also want to welcome the many innovators, the trade associations, the companies, the consumer advocates that are represented here as we mark another important milestone on our mission to build a more secure online environment.
President Obama has made innovation a centerpiece of his economic agenda and there is perhaps no segment of the economy that has seen more innovation than IT and the Internet. Fifteen years ago, we saw the dawn of the commercial Internet. Flash forward to the year 2011 today. Nowadays, the world does an estimated, as Ann indicated, $10 trillion of business online and nearly every transaction you can think of can be done over the Internet.
Consumers paying their utility bills from smart phones; people downloading movies, music and books online; companies from the smallest local store to the largest multi-national corporation ordering goods, paying vendors and selling to customers all around the world over the Internet. U.S. companies have led every stage of the Internet revolution, from web browsing and e-commerce technology, to search and social networking.
But at critical junctures, the U.S. government has helped enable and support private-sector innovation in the Internet space. In the early 1990s, the government opened the door for commercialization of the net. In the late 1990s, the government's promotion of an open and public approach to Internet policy helped ensure that the net could grow organically and that companies could innovate freely. Recently, we've promoted the roll out of broadband facilities and new wireless connections in remote parts of the country.
Today we take another major step, this one to ensure that the Internet's security features keep up with the many different types of online transactions that people are engaged in. The fact is that the old user name and password combination that we often use to verify people is no longer good enough. And, in fact, it's so cumbersome, constantly having to change these passwords and having to keep so many somewhere stored that you often times forget, misplace them, and maybe lose them and make it vulnerable to theft. It leaves too many consumers, government agencies and businesses vulnerable to ID and data theft.
And this is why the Internet still faces something of a trust issue and why it cannot and will not reach its full potential, commercial or otherwise, until users and consumers feel more secure than they do today when they go online.
President Obama recognized this problem long ago, which is why the Administration's cyberspace policy review called for the creation of an identity ecosystem. An identity ecosystem where individuals and organizations can complete online transactions with greater confidence and where they can trust the identities of each other and the integrity of the systems that process those transactions. And I'm proud to announce that the President has signed, and that today we are publishing, the National Strategy for Trusted Identities in Cyberspace, or NSTIC.
The strategy is the result of many months of consultation with the public, including innovators and private-sector representatives like you in the audience right now. I'm optimistic that NSTIC will jumpstart a range of private-sector initiatives to enhance the security of online transactions.
This strategy will leverage the power and the imagination of entrepreneurs in the private sector to find uniquely American solutions. Because other countries have chosen to rely on government-led initiatives to essentially create national ID cards, we don't think that's a good model. And despite what you might have read on blogs frequented by the conspiracy theory set, to the contrary, we expect the private sector to lead the way in fulfilling the goals of NSTIC.
Having a single user of identities creates unacceptable privacy and civil liberties issues. We also want to spur innovation, not limit it. And we want to set a floor for privacy protection that is higher than we see today, without placing a ceiling on the potential of American innovators to make additional improvements over time. Behind you are a number of firms exhibiting technologies and applications that can make a real difference in our future, and some are already out in the market already. At the end of today's event, I just really hope that you'll take an opportunity to see all of them, but let me take a minute to highlight two in particular.
You know, each year, medical researchers make discoveries that save lives and improve the well-being of those afflicted with disease. Part of this rigorous scientific research is the review and approval of clinical trials, such as the cancer therapy evaluation program run by the National Institutes of Health. To conclude these trials, paper signatures are needed for approvals at every turn. And this adds hundreds of dollars of cost and, more importantly, weeks of time that could be better spent getting patients into treatments more quickly.
But the system has been stuck in paper, as the world moves digital, for a very simple reason, because there has been no reliable way to verify identity online. Passwords just won't cut it here, and they are too insecure and the stakes too high to risk fraud. The good news is that today NIH has come together with private-sector groups, including patient advocates, researchers and pharmaceutical firms, to eliminate this inefficient paper system through a new identity technology that enables all sides to trust the transaction.
With trusted identities, patients can be enrolled more quickly in potentially lifesaving therapy programs, saving hundreds of dollars per transaction and trusted identities enable trials to run faster, researchers to spend more time in the lab and a faster and cheaper way to move new therapies from the lab to treating cancer patients.
At the other end of the identity spectrum, we have the scourge of ID and data theft, with phishing schemes being among the most prevalent. Every second, phishing e-mails show up in people's inboxes asking unwitting consumers to type their user name and password into a fraudulent site. Kimberly Bonnie of Bethesda, who planned on being here today, was victimized by one of those schemes last year. She received an e-mail that she thought was from her Internet service provider telling her that her account was in danger of being closed. And the e-mail asked her to provide her password, which she did.
Then her coworkers, fellow members of her church and her landlord began receiving e-mails that appeared to be from her, stating that she was stuck overseas and needed a $2,800 loan to fly back home. It was, of course, a fraudulent e-mail. Kimberly had become one of the 8.1 million Americans who were victims of identity theft or fraud last year. And these crimes cost us some $37 billion a year. But companies are introducing technologies that can help us turn the tide. At least one leader in the U.S. technology sector has come up with a simple solution to stop scammers from accessing their customers' accounts with just a stolen password. They've recently rolled out a simple tool where verification codes are sent over mobile phone networks to a user's smart phone or wirelessly connected computer and when they want to access their online accounts, they have this additional and incredibly simple layer of protection. I urge you to take a walk around and see these displays and see for yourself how stronger identification technology can protect against identity theft and cyber crime.This is a difficult challenge. We're trying to improve security and convenience and privacy all at once. That's why it's so important that we're leveraging the power and the imagination of private-sector entrepreneurs. And the Commerce Department, led by Jeremy Grant at our National Institute of Standards and Technology, is staffing up to facilitate and encourage these private-sector efforts. And I'm looking forward to learning of your future successes. Our family is eager to use your technologies as consumers and as private individuals. And perhaps you can send me an e-mail, an authenticated e-mail, describing those successes to my new e-mail address at the U. S. embassy in China. That is, Senate willing.
But thank you again for your support. Keep up the great work. There is an urgent need for what you're all engaged in. And we look forward to your quick results and quick progress. Now let me turn it over to Jane Lute who is our Deputy Secretary at the Department of Homeland Security. Jane has over 30 years of military and senior executive experience, having served at the United Nations, on the National Security Council and in the U.S. Army. She understands how integral cybersecurity is to our national security as well as to our economic security. Now let's bring her up, for her, to hear some thoughts from Jane. Thank you very much. [Applause] Well, Secretary Locke has given us an awful lot to think about, and I know you have a lot you want to talk about. And Homeland Security is extremely pleased and privileged to be joining with the Department of Commerce and the U.S. Chamber and the vendors that you see here and many of the agency's offices and enterprises that are represented in this room.
Two years ago when we began the Administration and began our work in homeland security, we knew we were not beginning the nation's work on cybersecurity. One of the things that we wanted to do at DHS was to integrate the work on cybersecurity with the overall effort to help build a safe, secure, resilient place where the American way of life can thrive. And so, alongside the missions of preventing terrorism, securing our borders, enforcing our immigration laws and building national resilience, we established a mission that called out the importance of ensuring the nation's cybersecurity, and we are all here today because of the critical importance of this mission.
We see cybersecurity as an important aspect of a safe and secure homeland where our way of life can thrive. The Internet is an engine of immense wealth creation, a force for openness, transparency, innovation and freedom. It is, in essence, a civilian space, if not always to each of us every day quite a civil space. It is, nevertheless, a civilian space. It is the very endoskeleton of modern life, and no single actor has the capability to secure this distributed and largely privately owned space. Nor would that be desirable.
Government, the private sector and individuals all share in the responsibility for keeping cyberspace secure. We have to work towards that security with solutions that enable innovation and prosperity, as Secretary Locke has said, and that are designed from the start to protect openness, enhance privacy and protect civil liberties. Indeed, few changes would be more profound than the broad adoption of voluntary, interoperable, privacy-enhancing authentication. The NSTIC is a cornerstone of this approach. It has the potential to change the game and how we authenticate ourselves and when we need to.
We know that the challenge of security in cyberspace really only requires that we do two things: secure our information and secure our identities. As a great voice of another time and another age said, for another reason, "the rest is commentary." We only need to secure our information and our identities, reliably every time. The NSTIC underscores this tenet, and another fundamental tenet of our approach at Homeland Security, which is that where the market is capable of acting more speedily and effectively, it should be empowered to do so.
As you may be aware, we've recently published a paper on this point called "Enabling Distributed Security in Cyberspace," which looks at how prevention and defense can be enhanced through three security building blocks: automation, interoperability and authentication. We really are aiming for a broadly distributed system of automated self-help, where smart users and smart machines, supported by smart networks and reinforced with the kind of enabling standards and capabilities that make it possible, allow that to happen, as we say, at network speed.
This is an aspect of cyberspace where industry will continue to build the tools. The challenge and the role of government is to bring players together. At its broadest, the NSTIC asks: In cyberspace, can we still rely on our trust in each other? The goal here is confidence, not centralized control. It's about enabling trust. We have the ability to do this in cyberspace; we just need to put it together. And again, this is a shared responsibility. We need strong, functional partnerships between governments, industry and advocacy groups, including those for privacy and security, and, more broadly, we must include the public.
We must also support strong relationships between federal agencies like the Department of Homeland Security and the Department of Commerce, including the Department of Defense. We each have a role in cybersecurity and in this case, enabling the role of government here, rightfully sits in the Department of Commerce. Together, we must focus on innovating solutions to combat identity theft and online fraud. We know in this area where the private sector can and should lead, and we know that we have to raise the bar on privacy and security from where they are today.
That's what the President has called on us to do with NSTIC and that's why we all are here, you and I, and that's why we do what we do in Homeland Security every day, because it is impossible to imagine a safe, secure, resilient homeland without a safe, secure and resilient cyberspace where the American way of life can thrive. Thank you. [Applause] Thank you, Secretary Lute. And thank you, Secretary Locke. We appreciate you being here today. Now I'd like to ask our panel to come up and get miked up and let me introduce to you Jeremy Grant. I'm very pleased to introduce Jeremy. We've had the good fortune to work with him over the past several months. He's the Senior Executive Advisor for ID Management at the National Institute for Standards and Technology. As many of you know, Jeremy was tapped to manage the establishment of a national program office for National Strategy for Trusted Identities in Cyberspace, the NSTIC. He has been in his new role only for a few weeks and has certainly hit the ground running. Jeremy comes to NIST with a background in identity and cybersecurity issues, having served in a range of leadership positions spanning government and industry. So we are in good hands with him. And at this point I will turn it over to you, Jeremy, thank you.
Thank you, Ann, and thanks very much to the Chamber for the support that they've provided today in hosting this event for us. It's very exciting for us to be able to be here today. As the strategy which we have just released makes clear, the leadership of the private sector is going to be incredibly important in helping to fulfill the vision of an identity ecosystem where all Americans can engage in transactions online that are safer, more privacy-enhancing and more convenient. And we'll look forward to working with the Chamber and a lot of its members as we try to move that vision forward.
With us today, and I’ll do some quick introductions while everybody’s getting miked up, are a couple of folks who do come from industry, along with a couple of folks who have long been very effective advocates in the privacy community. I’ll start with the industry folks. To my right is Eric Sachs who is the product manager for Google security team, the counterpart to Google CIO. He's helped build a number of major systems at Google, including Google accounts, Google Health, orca.com. He also provides a lot of leadership in the standards community for standards like Open ID and OAuth.
Just next to him is Andrew Nash, who is the Senior Director of Identity Services at PayPal. He'd previously been CTO at Senoa Systems and Reactivity. In a prior life he was director of technologies at RSA Security, where he worked on a wide range of identity systems, including with the Liberty Alliance and its strong authentication expert group, and we're pleased to have him here today.
Next to me here is Susan Landau, who is currently a fellow at the Radcliffe Institute for Advanced Study at Harvard University. Her new book, Surveillance or Security? The Risks Posed by New Wiretapping Technologies, has just been published by MIT Press. Susan serves on the Computer Science and Telecommunications Board of the National Research Council, as well as the Advisory Committee for NSF’s Directorate for Computer and Information Science and Engineering. She is also a member of the Commission on Cybersecurity for the 44th Presidency.
And finally, Leslie Harris is president and CEO for the Center for Democracy and Technology where she is responsible for the overall vision and direction of the organization and serves as its chief strategist and spokesperson. Leslie’s known widely for her work on policy issues related to civil liberties, new technologies and the Internet, including free expression, government consumer privacy, cybersecurity and global Internet freedom. So, thank you, all of you, for being here today. I do want to start with a question--I have a couple of questions to get things started off and then we will open things up to the audience, and I will be the moderator. So a question for both Leslie and Susan. You’ve both been working at the intersection of technology security and privacy for a long time. Both working to ensure privacy is not an afterthought as the other two advance. I wonder if each of you, starting with Leslie, could give some opening thoughts as to how well NSTIC balances each of these three.
Thank you, Jeremy. I think NSTIC certainly sets out--it’s funny, I'm usually too quiet; that sounds very loud. It definitely sets out the right vision here because it gives consumers more control and more choice on their online identities. It makes clear that it's voluntary. It makes clear that consumers can have one or more choices. It leaves a strong space for anonymous speech, which I think is critical. And it puts the private sector rather than the government in the driver's seat for developing this. So, you know, as the Secretary said, this meme about, about somehow, some kind of a government ID, NSTIC is, in my view, exactly the opposite of that. From a consumer perspective, when we're juggling all of these IDs and online passwords, at the same time we're giving more and more information to more and more sites. We're using some of these passwords on multiple sites, and we really have sort of no framework, no trust framework for those transactions. So I think that there's no doubt that the vision is right. You look around the room at these technologies and others that I'm familiar with and you understand the innovation in the ID space is extraordinary. I think what this really is it's a question of whether or not industry can step up now and do the two things that are critical. One is clearly a serious governance model that has a trust framework in it that, you know, is going to bind all the parties in a way that's protective of privacy. I think that is absolutely critical, and I think that the government's role in that is also to have sort of the convening power and bully pulpit to make sure that we can stand something up here that consumers can trust.
So, I want to start by saying I'm delighted that NIST is the one that’s responsible for identity management. I was in the crypto wars in the 1990s and it took a long time before it actually really happened that NIST was in charge and we ended up with the advanced encryption standard and other standards that have seen worldwide adoption. NIST knows how to work with industry and NIST knows how to work internationally and that's really important in this domain. So I'm really pleased by that. I'm really pleased by the emphasis, as Leslie is, on privacy and the explicit call out to anonymity because there are many instances on the network where people sort of want to be anonymous. We know that any time you have to register to see something, all you have to do is type your name and your e-mail, registration viewing drops by half, immediately. So that support for anonymity is really good to see from the government.
I thought the NSTIC was a little enthusiastic and rosy about how easy it was going to be to secure browsers. There've been events in the last few weeks that made me doubt this even more. But there were two things that I wanted to push on within the NSTIC. There are hints of it, but I want to see it much more strongly. The first is identity management federation, which is the idea that, you know, the NSTIC is very clear, there will be different levels of authentication, and you'll have perhaps different identity providers for the different levels of authentication. But what you also want is potentially several identity providers within a level. Maybe I want to collect, you know, United points when I use the United identity provider at level two and I want to get free shipping when I’m using the Beans provider. Of course, we’re not actually going to use beans and united, but you understand the analogies. And I want to see much more support for identity federation. It’s better for privacy, it’s better for security if you have multiple identity providers even within a level, and they federate with each other. Sometimes I want to do my business over here and sometimes I want to do it over here, but I want to be able for these two to communicate about me, but pseudo anonymously at times. So I'd like to see more support there, and that can be in terms of how the federal government actually chooses to deploy things and push on identity federation.
The other thing, and this is hinted at only very slightly, even less so than the identity federation, is the whole idea of data accountability. We have ideas that, you know, you give your data and it should be used in the one place that it's been given and it won't be used elsewhere, and we know that often doesn't happen. I'd like to see the data traced. I'd like to see where the data goes. I want to know the information flows. So the FTC has done a very good job over the last years in following up with companies that have one privacy policy on paper or on the web and another privacy policy in fact.
And what has happened is that companies are actually racing to the top. They see somebody get in trouble with the FTC and they say, uh-oh, I better do not what they do, but the FTC next time is going to move to here, I better move over there first. And so I would like to see more push from the FTC and more push from the government. That's slightly hinted in the doc | 计算机 |
2014-23/2156/en_head.json.gz/23261 | EB Fangames (MORE!!) - by ozwalled
One of the first things I did when I arrived at this site was download every fanmade game that had something even remotely to do with EarthBound that I could get my hands on. I knew that I'd want to play through them at some point, to give me an EarthBoundy fix whenever I needed a slight change of pace from either EB or EB Zero.
I'll admit that I haven't really played many of them yet... But that doesn't mean that I don't want more. I *DO* want more. PLENTY more. To be honest, unless I'm seeing all the EarthBound I can eat, I don't think I'll be entirely happy or satisfied.
Of course, making a game is no easy task. I'm currently working on an EB game (so as not to be a hypocrite and all), but I'm still not past the very early phases of production. That being the case, I have some idea of what difficulties you might encounter while making your own EB game. No, it's not easy, especially if you want to make a quality product.
One decision I have made, though, is that when I release the game itself, it won't all be at the same time. In my scouring the 'Net for EB fangames, I found far too many ambitious-sounding projects that appeared to have been abandoned for a year or more. I'm hoping that a part of the key to success, then, will be putting the game out little by little, in the form of "chapters" of sorts. This way, hopefully each chapter of the game will be manageable enough to maintain my interest.
I've also looked at the various options available in terms of actually making *A* game, much less this one (my programming skills are essentially nonexistent, by the way). I took a look at RPG Maker 2000 briefly, as it seemed like an obvious choice for a starting point. But between looking at that and considering the small number of EB games actually completed (that I could find, anyway) with this program, I started considering a look elsewhere. Maybe I haven't really used it enough to come to a fair conclusion, but I didn't really much like RPG Maker 2000. Getting into it just seemed a bit too slow-going for my liking, somehow. I understand that a game-making program has to have limitations by design, but this one just seemed a bit too confining.
I began to consider another, more unlikely candidate more seriously. I'd already taken a spin or two with another game-creating program called Adventure Game Studio, put together by a gentleman by the name of Chris Jones. As its name suggests, though, this tool was designed with Adventure games in mind (i.e., point-and-click type games, the likes of LucasArts' first three Monkey Island games or Full Throttle, or Sierra games like the first Gabriel Knight and some of the last Space Quest games and such) [ALSO NOTE: I've chosen this game-making tool because it seemed to be one of the easier ones to get into. Plus, it doesn't really require a prior knowledge of programming. PLUS it has a fairly active and helpful community.]. But CAN EarthBound work as an Adventure Game? Somehow, I was convinced that it could (and still am). But what would OTHER fans think of the idea? With this question in mind, I made my first visits to the forum, and posted.
There was some unpleasantness involved, but it wasn't so bad. Of the few responses I got, the positive ones encouraged me that such a game might actually be played (I didn't want to make something NOBODY would want to even touch, after all). So I dove in (well, more like slowly, SLOWLY started dipping my toes in, followed by my foot, and then my ankle... Slowly now...) with some confidence.
Despite the snail's pace it's coming along at, I think things are working out quite well so far. No, this won't really resemble the EarthBound game(s) you've known and grown to love so much. First off, I'll not likely be incorporating combat in the traditional RPG sense and I'll probably not have any statistics for the player-controlled characters (No! NO! Please don't stop reading yet! Stay! Please stay!... Pretty please? With fuzzy pickles on top?).
Some might argue that it just won't be EarthBound without the fights, and I can understand that. I'm still CONSIDERING making an RPG-style combat system, as this IS possible with the Adventure Game Studio (AGS) engine... but unless I can do it fairly easily (by borrowing the source code from someone else who's already made such a combat system, for example), it probably won't happen. Why? Because I actually want to MAKE this game and not get stalled on making RPG-esque combat.
So if there won't be RPG-like combat, will there BE combat at all? Well, yes and no. It's very likely that there will be combat of sorts, but it's more likely to depend on inventory, environmental, conversational and character puzzles than on hit points, psi points or statistics. Think more along the lines of how Belch was beaten, and you'll be starting to see that it might just work (well, maybe).
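To give a rough idea of what I mean, here's a tiny illustrative sketch. It's written in C++ rather than actual AGS script, and the item and flag names are made up for the example (they're not from my real project), but it shows the design idea: the "fight" is won by having done the right things and carrying the right stuff, not by comparing stats.

// Hypothetical sketch (not AGS script): an encounter resolved the adventure-game
// way, by checking inventory and story flags instead of rolling against stats.
#include <iostream>
#include <set>
#include <string>

struct Player {
    std::set<std::string> inventory;   // items picked up so far
    std::set<std::string> flags;       // story events that have already happened
};

// True if the player has set up the right combination of items and events
// to get past the boss, Belch-style.
bool CanDefeatBoss(const Player& p) {
    return p.inventory.count("jar of fly honey") > 0 &&
           p.flags.count("heard the rumor about the boss") > 0;
}

int main() {
    Player p;
    p.inventory.insert("jar of fly honey");
    p.flags.insert("heard the rumor about the boss");

    if (CanDefeatBoss(p))
        std::cout << "The boss is distracted -- you win without a single hit point.\n";
    else
        std::cout << "You poke the boss with a baseball bat. Nothing happens.\n";
    return 0;
}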
But will this be enough like EarthBound for any of its fans to want to play it? Ultimately, I'll find out eventually, but I really do hope that I'll be able to keep with the spirit of the game, more than anything, and that will be a big enough hook to keep people interested.
Anyway, enough of talking about what *I* hope to do. I want to get back to what I want to SEE from others. I'm not saying that you have to make up an epic quest. I'm not even asking that you finish a game. What I will ask, though, is that you find something that feels like it might be able to work for you that'll help you make the game (be it AGS, RPG Maker 2000, the tools of PK Hack or whatever) and give things a shot. Just a try (I know that with sections like Fanfics and Poems out there, there's enough creativity around to try to make an EB fangame, anyway.). Heck, if you can, make a game that takes place entirely in Ness's room, or at Paula's place or any other small area. It'd be a step in the right direction. I'd like to see more short EarthBound fangames than none at all. Please?
***If you're interested in taking a gander at Adventure Game Studio (it's absolutely free) and some of the games made with it already (most all of them are free, too), type "Adventure Game Studio" (including the quotation marks) into a Google search, and I'm sure you can probably find the homepage for it. If not, feel free to ask me for it (my address is linked to through my name, above) Any other feedback on this zany idea of mine would be nice, too.***
PS- I'm not responsible for any legal issues that might arise if you do decide to make an EarthBound fangame. It's their copyrighted material and all. HOWEVER, in an e-mail conversation with Nintendo of America, while they hardly seemed keen on the idea, they didn't say that they would pursue legal action against me if I made an EB fangame (mind you, they didn't specifically say they wouldn't, either...).
PPS- Already making an EB fangame? Wanna' chat about it? Send me e-mail if ya' wanna'.
Vinyl Villain
A Mystical Record that's just waiting to be broken.
Hostile Elder Oak
Crotchety as heck and having more rings than you can shake a baseball bat at, it's just waiting to explode into flames.
Bill CosTree
Two for the price of one. Angela had commented on my Hostile Elder Oak picture and said that it looked like Bill Cosby. I couldn't just let such an idea go to waste, now could I?
I swear that some of these joke pictures are the most fun to do. Both were done with reference of course. http://www.dejarnettedesigns.com/samport/Bill-Cosby.jpg
The one on the left was done AFTER the one on the right, by which time I'd learned a bit about how to make it look better.
Special thanks to Angela for *ahem* planting the seeds for the idea.
Flying Man
A coloured sketch of a Flying Man.
Fierce Shattered Man
The Fierce Shattered Man has always been one sprite that I've wrestled with, mostly because of its colour. Pink. What is THAT all about? One would think it to be a mummy or a statue save for its pinkness. Even the clay model was pink.
Here, I devised an interpretation of that pinkness to make the Fierce Shattered Man a skinless behemoth of a man. | 计算机 |
2014-23/2156/en_head.json.gz/24260 | by Joel Spolsky
REALBasic comes tantalizingly close to solving a real problem; with a few minor changes, it could become the next great development tool for GUI programming. But it's not there yet. Here's my view on development tools for creating commercial software that needs to run on Windows and/or Mac (i.e., 99% of the desktop). There are some important axioms.

Axiom one. The universe of programming languages is more or less divided into two categories: programming languages with memory management built in (garbage collection, like Java, or reference counting, like Basic), and programming languages that make the programmer do the memory management (C/C++). Programmers are much more productive in languages that do the memory management for them.

Over the last two years I spent a lot of time writing GUI code in C++ (using MFC) and a lot of time writing GUI code in Visual Basic. Know what? My time writing Visual Basic was ALWAYS significantly more productive. Other programmers who have used both environments extensively have always agreed with me on this point. Programmers who don't agree with me have not used both. There are a lot of C++ only programmers who think of Basic as a toy language and disdain it. They think that they are cool coding jocks because they don't need wimpy languages. Frankly, they are wrong, and I don't have very much patience for people who get religious about tools and judge things that they haven't used.

Memory management languages (Basic, Java) tend to be slower. This may not matter in the long run. However, as of now, there are still lots of examples where you can speed up code several thousand fold by writing some tight C++ to replace Basic code. As an example, I once wrote some code in Visual Basic to convert the contents of a rich edit box to HTML. It took about 10 seconds for a page of rich text. When I converted it to C++, I was able to use much lower level messages to the underlying rich edit control, and the speedup was profound: instead of 10 seconds, it took a few milliseconds. But for most GUI tasks, which operate at "user speed" (most of the time is spent waiting for the next message from the user), there is no benefit to using C++.

Given axiom one, my preferred development environment would usually be something like Visual Basic.

Axiom two. Users have expectations for what makes up a "professional looking" application. If you violate these expectations, they will feel like your program is a piece of junk. For example, on the Mac, they expect the menus to be at the top of the screen. And they expect that while they are editing text, the whole screen won't flash, which means you absolutely have to use double-buffering. They usually expect you to use "native widgets". When programmers try to reinvent their own widgets (viz. Netscape 6), users will think the program looks clunky. If you use the JDK, and your Windows application uses the Java coffee cup as its icon, users will think that your program is not professional. People are less likely to buy a product that looks unprofessional. I don't know how many people have told me that they tried various applications which were written in Java and thought that they felt unprofessional. They were slow, they flashed a lot, their widgets behaved slightly wrong. Thus, there is a lot of value in giving software developers absolute control over the exact look and feel.
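(An illustrative aside, not code from the article: the sort of "lower level messages" being referred to is something like the Win32 EM_STREAMOUT message, which asks a rich edit control to hand over its entire contents through a callback in one shot, instead of pulling the text out piece by piece from Basic. A minimal sketch, with the actual RTF-to-HTML conversion left out:)

#include <windows.h>
#include <richedit.h>
#include <string>

// Callback invoked repeatedly by the rich edit control as it streams its contents out.
static DWORD CALLBACK StreamOutCallback(DWORD_PTR cookie, LPBYTE buffer,
                                        LONG bytesToWrite, LONG* bytesWritten)
{
    std::string* out = reinterpret_cast<std::string*>(cookie);
    out->append(reinterpret_cast<char*>(buffer), bytesToWrite);
    *bytesWritten = bytesToWrite;
    return 0;  // 0 tells the control to keep streaming
}

// Pull the raw RTF out of a rich edit control in one call; converting that RTF
// to HTML in tight C++ is then the fast part.
std::string GetRichTextAsRtf(HWND richEdit)
{
    std::string rtf;
    EDITSTREAM es = {};
    es.dwCookie = reinterpret_cast<DWORD_PTR>(&rtf);
    es.pfnCallback = StreamOutCallback;
    SendMessage(richEdit, EM_STREAMOUT, SF_RTF, reinterpret_cast<LPARAM>(&es));
    return rtf;
}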
When you add in axiom two, and the need for speed in certain critical areas, I would modify my statement a bit, and say that my preferred development environment would be Visual Basic, with the ability to drop into C++ to do certain things. VB gives you at least two good ways to do this: through DLLs or through COM objects. COM objects that are also controls even give you the ability to define new user interfaces.

Now let's talk about platforms. Windows has 90% or 95% of the desktop market. Macintosh has 5% or 10%. If you're talking about office users, Windows is even more dominant; Macintosh has a bit more of the home users, but probably not even near 15%. What this means is that if you are a software developer, the only thing that makes sense financially is to develop a Windows version first. Then, you need to evaluate the cost of doing a Mac version. If that cost is only 10% more, it's worth it. If that cost is something like 50% more, it's not worth it.

If I have a product that cost me $1,000,000 to develop, and 10,000 Windows users are using it, that's $100 per user. Now if I have to make a Mac version, and it's going to cost me $500,000 to port the Windows version, and the product is going to be just as popular among Mac users as Windows users, then I will have about 1000 Mac users. That means that my port cost me $500 per user. This is not a good proposition. I'd rather spend the money getting more Windows users, because they're cheaper.

What does this mean to you, a Mac tools developer? It means that your number one priority is making it possible for Windows developers to port to the Mac for less than 10% of the original cost. The big companies that have products on Mac and Windows (Macromedia, Quark, Adobe, etc) generally started on the Mac. That means that it was worth almost any amount of money to port their product to Windows, because the market was so much larger. The only big exception is Microsoft, which has several different portability layers that allow them to develop one product for both platforms. For products at Microsoft which already have portability layers (Project, Word, Excel), they can do a Mac version for less than 10% extra, so they do it. For other products (Access, FrontPage), the cost of a Mac port would be more than 10%, so they don't do it.

An interesting point is that today, try as one might, there is just no great way to create a Mac port for less than 10% of the cost of the Windows original. Which is why not much stuff shows up on the Mac anymore, unless it's strategic in some way (like the Real Media Player).

In my view, the "holy grail" of programming would be an environment which:
allowed me to develop GUIs in a high productivity GUI environment like Visual Basic
allowed me to drop into C++ and create COM objects for things which need to be fast or have a very particular look and feel

So far, this environment exists on Windows (VB with VC++), and almost on the Mac (REALBasic with CodeWarrior, although they are not glued together nicely with an architecture like COM Controls). Now, if you want me to make a Mac port, I need the third thing: cost of porting from Windows to Mac is less than 10%. Just because of the differences in fonts between Mac and Windows, I'm already going to have to lay out every dialog again (and swap the position of all the OK and Cancel buttons) and change words like "Directory" to "Folder". So that uses up, like, 8%, which means that porting the code itself better be super-duper easy, like, 2% of the effort.
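(Again purely as an illustration, not from the article -- the file and function names below are invented. The plain-DLL route is about this small on the C++ side: export a function with C linkage and the stdcall convention, and VB can Declare and call it.)

#include <cctype>

// A trivial "hot spot" moved down into C++: uppercase a caller-supplied buffer in place.
// On the VB side this would be declared roughly as:
//   Declare Sub FastUpperCase Lib "speedups.dll" (ByVal text As String, ByVal length As Long)
// (In practice you would also list the name in a .def file so it is exported undecorated.)
extern "C" __declspec(dllexport) void __stdcall FastUpperCase(char* text, long length)
{
    for (long i = 0; i < length; ++i)
        text[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(text[i])));
}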
And REALBasic can't do that today. But it is soooooooooo close, it's painful.

I want to be able to write code in VB on Windows, and have it just work in REALBasic. No complicated one-way conversion scripts, because when I'm coding and porting, I'm coding all day long. There CAN'T BE a conversion step, because if I find a bug in the original version, I can't start again with the port. I think that the number one priority for REALBasic would be to make it compile Visual Basic code directly.

There are too many cases where REALBasic is gratuitously different from Visual Basic. As a small example, it lacks a SET statement, instead using LET to do SET, which removes the possibility of using default properties. There are functions which have been in Basic since time immemorial, like LCase, which REALBasic calls "LowerCase" for absolutely no good reason... except to serve as an obstacle to porting to or from VB.

It needs to be able to do COM controls. I need to be able to set up an environment where I can code once for Windows, using VB and COM controls written in VC++, and have that code compile unchanged with REAL and CodeWarrior.

There are some cases where REALBasic includes a "neat" extra feature which somebody thought was cool, like "//" comments. These aren't bad features, but they need to be turned off by default. That way, I can be sure that if I write my code in REALBasic, it's going to work in Visual Basic later. Somebody needs to go over VB and REALBasic with a fine tooth comb, find every small difference, and fix them! The amount of work this will take is very small, which is why I said that REALBasic is sooooooooooo close.

There are other useful things besides pure compatibility: tools that take into account the fact that my code needs to run on both platforms. For example, since every dialog will have to be laid out twice, REALBasic should have a tool that lets me manage two versions of the layout of each form automatically. It should detect when one version has changed and remind me somewhere that I need to change it for the other platform. It should have good #ifdef capability which is 100% compatible with VB. It needs to provide small compatibility libraries, for example, for parsing file paths, that I can use in VB for Windows as well, so that I have a chance of writing portable code.

And the prospective gains are HUGE. Visual Basic is the single best selling programming language in history, by a large margin. If REALBasic can get that cost of porting below 10%, a flood of developers will suddenly discover that porting to the Mac is worthwhile, which will hugely improve the sales of REALBasic. In the long run, it will also be shockingly beneficial to Apple, which has its own chicken and egg problem trying to get software for the Macintosh. If Apple has any sense at all (and they don't), they would PAY REALBasic to be more VB compatible.
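(One last illustrative sketch, not from the article: the kind of tiny path-parsing "compatibility library" routine being asked for might look like this on the native side -- split a path on whichever separator the platform uses, so the Basic code sitting on top never has to care. Drive letters and other edge cases are ignored for brevity.)

#include <string>
#include <vector>

// Split a file path into components regardless of platform convention:
// ':' on classic Mac OS, '\' on Windows, '/' elsewhere.
std::vector<std::string> SplitPath(const std::string& path)
{
    std::vector<std::string> parts;
    std::string current;
    for (char c : path) {
        if (c == ':' || c == '\\' || c == '/') {
            if (!current.empty()) parts.push_back(current);
            current.clear();
        } else {
            current += c;
        }
    }
    if (!current.empty()) parts.push_back(current);
    return parts;
}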
Have you been wondering about Distributed Version Control? It has been a huge productivity boon for us, so I wrote Hg Init, a Mercurial tutorial—check it out!
Want to know more? You’re reading Joel on Software, stuffed with years and years of completely raving mad articles about software development, managing software teams, designing user interfaces, running successful software companies, and rubber duckies.
About the author. I’m Joel Spolsky,
co-founder of Trello and Fog Creek Software, and CEO of Stack Exchange. | 计算机 |
2014-23/2156/en_head.json.gz/26320 | About AdaCore
AdaCore is committed to being an active member of the Ada and software development communities. Below are some of the associations and initiatives we’re involved with.
ARG – Ada Rapporteur Group
The ISO/IEC JTC1/SC22/WG9 Ada Rapporteur Group (ARG) handles comments on the Ada standard (and related standards, such as ASIS) from the general public. These comments usually concern possible errors in the standard. The ARG is tasked with resolving the errors. To do so, it creates Ada Issues. Learn more about ARG…
ISO Ada Standardization Group
ISO (International Organization for Standardization) is the world’s largest developer and publisher of International Standards. Standards related to the Ada language are assigned to ISO/IEC JTC1 SC22/WG9. Completed standards are assigned labels such as the one for the Ada language itself, ISO/IEC 8652:1995. A completed document can be either an international standard (IS) or a technical report (TR). While under development, documents progress through a sequence of stages until they are finally approved. Learn more about ISO…
DO-178C Committee
Several AdaCore staff were involved in the DO-178C committee charged with developing the next generation certification process for flight software for the Federal Aviation Administration (FAA) and the European Aviation Safety Agency (EASA). Learn more about DO-178C…
Ada Organizations
SIGAda – Special Interest Group on Ada
SIGAda is the Special Interest Group on Ada, a part of ACM (Association for Computing Machinery). SIGAda is a powerful resource for the software community’s ongoing understanding of the scientific, technical and organizational aspects of the Ada language’s use, standardization, environments and implementations. Learn more about SIGAda…
Ada Europe
Ada-Europe is an international organization, set up to promote the use of Ada. It aims to spread the use and the knowledge of Ada and to promote its introduction into academic and research establishments. Above all, Ada-Europe intends to represent European interests in Ada and Ada-related matters.Learn more about Ada Europe…
Local Ada Chapters in Europe
AdaCore is involved with several of the local European Ada organizations.
Ada-Belgium
Ada-Deutschland
Ada-France
More Local Ada Chapters in Europe…
ARA – Ada Resource Association
Since 1990, the Ada Resource Association's principal mission has been "to ensure continued success of Ada users and promote Ada use in the software industry." Their efforts cover three areas of responsibility:
Provide consistent, high-quality, Ada-related information to the public
Ensure continuation of the Ada validation process
Ensure that Ada continues to be the highest quality programming language
Learn more about ARA…
System@tic Paris-Region (ICT CLUSTER)
SYSTEM@TIC PARIS-REGION brings together 480 key players in the Paris area, each of them working in the field of software-dominant systems with a strong social dimension. The goal of SYSTEM@TIC PARIS-REGION is to develop the regional economy, boost the competitiveness of local companies and support employment growth by leveraging innovation, training and partnership opportunities.
Learn more about System@tic Paris…
The Open-DO Initiative
Open-DO is an innovative Open Source initiative with the following goals:
Address the “big-freeze” problem of safety-critical software
Ensure wide and long-term availability of qualified open-source tools and certifiable components for the main aspects of safety-critical software development
Decrease the barrier of entry for the development of safety-critical software
Encourage research in the area of safety-critical software development
Increase the availability of educational material for the development of safety-critical software in particular for academics and their students
Foster cross-fertilization between open-source and safety-critical software communities
Learn more about Open-DO…
FSF – Free Software Foundation
The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom and to defend the rights of all free software users. Learn more about the FSF…
Eclipse Foundation
Eclipse is an open source community, whose projects are focused on building an open development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle. Learn more about the Eclipse Foundation…
April
Founded in 1996, April is the main French advocacy association devoted to promoting and protecting Free/Libre Software. With its 5498 members (5022 individuals, 476 businesses, associations and organizations), April is a pioneer of Free Software in France. Since 1996, it has been a major player in the democratization and the spread of free software and open standards to the general public, professionals and institutions in the French-speaking world. It also acts as a watchdog on digital freedoms, warning the public about the dangers of private interests keeping an exclusive stranglehold on information and knowledge. Learn more about April…
GCC – GNU Compiler Collection
GCC development is a part of the GNU Project, aiming to improve the compiler used in the GNU system including the GNU/Linux variant. The GCC development effort uses an open development environment and supports many other platforms in order to foster a world-class optimizing compiler, to attract a larger team of developers, to ensure that GCC and the GNU system work on multiple architectures and diverse environments, and to more thoroughly test and extend the features of GCC. Learn more about GCC…
GDB – The GNU Project Debugger
The GDB steering committee has been appointed by the FSF as the official GNU maintainer for GDB. It is in charge of GDB maintenance, but in practice delegates much of the work to various GDB developers who work according to procedures that the committee has established.Learn more about GDB…
Copyright © 2011 AdaCore. All Rights Reserved. | 计算机 |
2014-23/2156/en_head.json.gz/27319 | Nielsen Technical Services
Search Engine Optimization by NielsenTech
Low-Cost or Free Hand Submissions To Search Engines
Chris Nielsen is founder and chief executive officer of NielsenTech Corporation. The company uses simple strategy and technology to optimize the performance of its client's web sites. Chris and the NielsenTech team have been dedicated to helping small business owners succeed since 1988. Mr. Nielsen is the founder of Seoby.org and moderates or contributes to many discussion lists and Internet Forums.
Mr. Nielsen is almost never quoted by minor or major news media including the Wall St. Journal, Business Week, the San Jose Mercury News, or DM News. He has also never been invited as a speaker at industry conferences. He enjoys sharing tips, tricks and strategies in print and in person. He is largely self-taught and prefers working with small to medium businesses, since they have been shown to appreciate the level of service he provides.
Based in Minneapolis, Minnesota, Nielsen Technical Services was founded by Christian L. Nielsen in 1988 to provide computer repair, consulting, and data recovery services. At the time, Mr. Nielsen was working full time as a computer technician. He found that he enjoyed working for himself and the freedom to do as much as he wanted to take care of the client and ensure customer satisfaction. The goal has always been to provide good service at a fair cost and to have customer satisfaction on every project.
In 1997 the technology focus of the company changed, as the Internet was starting to change the IT industry as nothing else had before. Choosing to bypass web site development as a primary service offering, due to a reported lack of design skills, Mr. Nielsen clearly recognized that a broader technical service offering would bring the greatest advantage in a new and quickly evolving technology that was the Internet. Starting with general Webmaster and administration services, the company discovered one area that was virtually unknown by the web site development companies. That one area was search engine optimization (SEO) and search engine submissions. Most web development companies are focused on the construction and appearance of the sites they develop, and not on the ability of those sites to deliver in the marketplace.
The company specializes in delivering top value for small and medium-sized businesses that need search engine optimization beyond simple site submissions, but cannot afford to spend thousands of dollars to have their sites optimized and submitted. The company's method of General Optimization is the result of over three years of constant research into all of the available information on search engine optimization. The result of this research is a methodology that returns the best results for the least amount of effort and works across a wide range of search engines and Internet directories.
The company maintains several small directories, including the successful Consultant-Directory.com, launched in late 2003. The company also has plans for a number of specialized vertical search engines, and on February 23rd, 2006 the Mesothelioma Search Engine was launched.
This is a company you can relate to and communicate with. They are interested in your business and may get more excited than you about its potential. You will have trouble finding another company that so freely offers their help and the information you need to know to make the best decisions for your business. If you decide not to work with them, they will offer to help you find someone else, and NOT because there is any kind of finder's fee or commission involved. That's just the way they are.
When asked about this, Nielsen replied, "The best marketing tool I ever found is to selflessly help other people. Help others without a thought about what you might get and if it works for you like it has worked for me, you'll get everything you need in life".
Nielsen Tech - WAP
9145 N Ranier LN #NTS
+1-(952) 314-8351 GMT -6
[email protected]
JR Roberts is a respected expert witness for security legal cases. You want him on YOUR side, not theirs.
Nielsen Technical Services - "Be what they're looking for!"
NTS Network Sites include:
Arthritis Lawyer | Concussion Lawsuits | Expired Domains | Domain Incubation | Stain Treatment | Photodramatic | Nielsen Tech | Early Symptoms Of Mesothelioma | Overcapitalization | Single Robot | Open Source License |
©Copyright 2000 - Nielsen Technical Services All Rights Reserved Worldwide | 计算机 |
2014-23/2156/en_head.json.gz/29040 | The most recent additions and modification to my Home Page have annoying little notations next to them on the main page. This page gives a history of changes and additions as they happen.
This list only includes changes since I redesigned my pages late 1995. The What's Old page has a history of modifications and additions to the first version of my pages.
I've moved this site to a new Web hosting provider, Pair Networks.
I've started redesigning the site and gradually adding new content. The new home page is already online, along with the photo gallery. Look for more frequent updated here in the next few weeks!
Several outside submissions have been added to the He Who Smelt It page!
OK, OK, so I didn't update this page. But I did add material to the site, including a couple total revisions of the Bare Bones Guide to HTML.
Added the HTML FAQ to answer frequently-asked questions about HTML and Web page design. Unfortunately, I simply don't have time any more to answer individual questions that people email to me.
Added a link to my FCC Working Paper, "Digital Tornado: The Internet and Telecommunications Policy". The paper is the first comprehensive overview of the implications of the Internet for the communications industry and the FCC.
Updated my bio page, and added a photo of me. Some other scanned photos are also available through the "About Me" section of my home page.
Put my recent presentation on the Internet and Telecommunications Policy online. This was delivered at the Fall Internet World 1996 conference in New York City.
Rearranged the home page to replace the "Other Stuff" section with a separate section for Whirrled. I did this for two reasons: to give the new and improved Whirrled materials more exposure, and to avoid any possibility of infringing on other people's copyrights. Although the recycled material in the "Other Stuff" section was amusing, I think it's most important to focus on the original content on this site, and Whirrled is an important part of that.
Added new material to the Whirrled pages, including a new gateway page.
Put a humorous essay about why English is a crazy language into the other stuff section.
Added a page about my brother Adam, who is the President of the Sierra Club.
Some minor updates to the FAQ page and bio page, which were getting a bit long in the tooth. Also, added confirmation pages for all the fill-out forms on my site.
Added several awards to my awards page. Many of these happened a while ago but I either didn't know about them or didn't have time to update the page.
I've spent most of my time this month updating the Bare Bones Guide to HTML to version 3.0. The Guide now includes all the tags in the new HTML 3.2 specification, as well as new tags introduced with Netscape Navigator 3.0!
Updated my hotlist by adding many new sites, pruning others, and reorganizing things a bit.
I haven't had much time to work on my pages because I've been too busy in my new job coordinating Internet policy issues for the FCC.
My brother Adam, at age 23, got elected president of the Sierra Club!
Revised the WWW Help Page. Added some links, removed some others, and did a minor redesign to make the page more attractive.
Removed several outdated links and added some new entries to the Home Page Hall of Fame.
If you're wondering why I haven't added much to my site the past month, one reason is that I was busy redesigning the Web site of my employer, the Federal Communications Commission.
Moved everything to the werbach.com server, and installed new imagemap, counter, and form processing scripts. A mirror of these pages is still available on the Digital Express server, but the pages on the old server will not be updated, so please update your links. Now that I have access to CGI scripts and additional disk space, look out for lots of new additions in the near future!
Spruced up the graphics on the main home page, and added header graphics to the top-level pages of each the six sections. Improved the appearance of a few of the pages, including What Makes a Good Home Page, and added more of Bart Simpson's blackboard quotations. I also moved the Home Page Hall of Fame from the Lists section to the Web section, where I think it fits in better.
Added some graphics to significantly improve the appearance of the Home Page Hall of Fame, and added a new page or two to the list. Also enhanced the appearance of the Bare Bones Guide to HTML main page, and cleaned up several other things here and there. Finally, I completely redesigned the index page as a gateway to the major sections of this site.
The Whirrled page has been significantly revised, and I have added several more pages of material from the forthcoming Whirrled: A Survival Guide for the 21st Century.
I have revised the Home Page Hall of Fame, my list of outstanding home pages, to weed out some stale links and pages that no longer make the cut. The page now includes the first official Home Page Hall of Fame graphic, so people that get listed can proudly display it on their pages.
I have added a list of home pages of people who have visited this site. This page will include all home pages that people submit, in contrast to my selective list of "Outstanding Home Pages," which has been renamed as the Home Page Hall of Fame.
Everything is new! The -k- Page has been totally restructured and renovated, with new pages, design, and organization. Version 2 is now open for your viewing pleasure.
Copyright © 1995-2000 by Kevin Werbach. Last updated January 2, 2000. | 计算机 |
2014-23/2156/en_head.json.gz/29604 | Game Design at Full Sail University
"When you graduate from the Game Art & Design program, you will have the training and skills you need to compete for jobs in the game industry."
Video Game Design Schools & Game Design Colleges
Today more and more schools and colleges have started to offer specialized programs and degrees geared towards game design and game production. Getting an education related to digital game production will prepare you for a challenging job in this fast expanding industry. The skills you acquire at these game design schools and game design colleges are not just limited to the game industry. A career within film, TV and other forms of multimedia production is also a possibility after taking a game design degree. We have made a list of video game design schools that offer all kinds of different programs related to game design, and you can easily request more information from each game design school by filling out a short online form. Choosing an education is amongst the more important choices you will make in your life, and the key to making the right decision is to make sure you have the information you need. If you are serious about a career in digital game development, we recommend you spend a few moments filling out the request information forms from all the schools and video game design colleges that interest you. Those schools will then contact you.
After gathering information from many different schools you will be in a much better position when it comes to choosing the right school and program.
The Art Institute of Pittsburgh Online Division
The Art Institute offers Bachelor of Science degrees online in Game Art and Design and Media Arts and Animation. The game industry has grown enormously over the last couple of years, and more specialized educational programs geared toward game design, animation, digital landscaping and media arts were needed. The Art Institute is the leader in creative arts education online.
DeVry University - Bachelor's Degree, Game & Simulation Programming
DeVry's Game and Simulation Programming curriculum will prepare you for taking on various development roles in the game industry. These roles include programmer, software engineer and project coordinator.
Many people nowadays prefer to take their education online. Westwood College also offers an excellent online program, with a wide selection of different courses. If you are working towards a career in game design, the “game art and design” or “game software development” programs are both ones you should consider. Westwood College also offers a range of other multimedia and IT programs that will prepare you for a career working with the development of video games.
ITT Technical Institute
The ITT Technical Institute is one of the leading schools offering technology education. They have over 100 locations in 34 states and three online programs available in some of today’s hottest technical areas. If you plan to work with the production of games, the ITT Technical Institute can give you an education that will prepare you for this. This has become one of the most popular schools listed on this page.
Digital Media Arts College
3D computer animations are everywhere. You will find them in the latest movies, in TV shows and of course in computer and video games. The demand for skilled 3D animators is growing, and getting a proper education is more important than ever for those seeking a career as a 3D computer animator and artist. Amongst the very best 3D computer animation schools is Digital Media Arts College. Digital Media Arts College empowers you to expand your creative career fast: only 3 years to a BFA, and 1½ years to an MFA.
Study Game Design at Collins College
At Collins you can take a Bachelor of Arts Game Design Degree (BA), and learn everything you need to know about game production. Their game design program covers everything from the planning of a game to the finished product. With a Bachelor of Arts Game Design Degree you will be able to finally take your ideas and make them into real games. The Art Institutes
Looking for a career in game art and design, photography, broadcasting or visual effects? The Art Institutes provides you with more than just hands-on training using the latest software...
Westwood College of Technology
The game industry is looking for talented young people who have what it takes to develop tomorrow’s games. If you have the talent, Westwood’s program in game art & design can help prepare you for a challenging career in the game industry.
Game Institute
Do you love video games? Have you ever wanted to learn how to design video games? Why not take classes on something you really love to get you closer to that diploma?
Brown College
The Bachelor degree program in Game Design and Development at Brown College is designed to provide education in principles and techniques used to create interactive 2D and 3D computer games.
International Academy of Design & Technology
Prepare for a creative career in game design at the International Academy of Design & Technology. Convenient campus locations across the United States, offering several different bachelor degrees and certificates in game design.
Sanford-Brown College - St. Charles
At Sanford-Brown College - St. Charles you can prepare yourself for entry-level game design positions by building practical skills in the use of game design software and technology through The Computer Game Design program.
American Sentinel University
The online Bachelor of Science in Computer Science, Game Programming Specialization (BSCS-GP) program provides students with both a broad exposure to the field of computer science and specialized knowledge in the area of game programming.
Digital Entertainment and Game Design - Bachelor of Science Degree at ITT
The purpose of this program is to help graduates prepare for career opportunities in a variety of entry-level positions involving technology associated with designing and developing digital games and multimedia applications. Courses in this program offer a foundation in digital game design (through the study of subjects such as gaming technology, game design process, animation, level design) and general education subjects. Graduates of this program may pursue entry-level positions in a number of different digital entertainment and game design companies. Job functions may include working as part of a team to help design, develop, test and produce video games, or create animations and 3D scenes for use in video games.
Ex’pression College for Digital Arts
At Ex’pression College for Digital Arts, you’ll train using state of the art equipment, and you’ll obtain the skills it takes to succeed as graphics or sound artist in the game industry.
Prepare for a job in the game industry with an online degree from the Academy of Art University. The Academy of Art University, San Francisco is the leader in online art education.
Seneca College’s Animation Arts Centre
Seneca College’s Animation Arts Centre offers programs in animation arts (three-year diploma) and three one-year post-diploma programs in 3D character animation, gaming, and visual effects. Programs are designed to give the student the skills necessary to succeed in both the traditional and computer animation production industry. Developed to meet the specific demands of animation studios in need of highly-trained animation artists well versed in both traditional and computer forms of animation, the curriculum focuses on the development of individual creative expression using experimental and innovative animation techniques.
3D Training Institute (3DTi)
In just a few short years, 3D animation has become a 40 billion dollar industry, impacting movies, television, video games, business, the medical field, and more.
3D Training Institute (3DTi) offers an innovative 12-week, project-based program that can give you the skills you need to break into this booming field.
Brooks College - Long Beach
In the Multimedia Program you will learn about video game programming, game interface design, digital image manipulation, animation, 3D modeling and much more. The game industry is a fast growing industry that demands more and more advanced graphics and programming.
Cogswell Polytechnical College
Cogswell Polytechnical College is a four-year, WASC accredited institution, offering B.A. and B.S. degrees. They are located in the heart of Silicon Valley just minutes away from studios like PDI/Dreamworks, EA, Pixar, and campuses such as Apple, Google, and NASA Ames. DigiPen Institute of Technology
DigiPen's video game programming diploma course was the first of its kind in North America. Established in 1988 by Claude Comair, DigiPen started up their first game development program in 1994 after teaching computer animation production for several years. DigiPen saw the need for a more specialized program with an even stronger focus on game production, so in 1998 DigiPen opened its campus in Redmond, Washington to offer undergraduate degree-level training in the field of digital interactive entertainment technology.
Daniel Webster College
The Computer Game Development Certificate at Daniel Webster College will educate you in the areas of games software, multimedia and computer graphics. Graduates will leave the school with a broad set of skills and will be capable of developing complete gaming environments and scenarios.
GIT - Globe Institute of Technology
Globe Institute of Technology is a dynamic and personalized 4-year college specializing in computer programs. Learn why thousands of students have found GIT a great place to be.
DePaul University
DePaul's Bachelor of Science degree in Computer Game Development was the first of its kind in a Liberal Arts University, and was established in 2004. There are currently over 150 students majoring in the program. We have strong ties to the Chicago-area game development community, are a sponsor of the Chicago chapter of the IGDA and employ many working game industry professionals as adjuncts.
This six-quarter program is designed to allow you to gain hands-on experience with cutting edge equipment and programs used in the digital game creation world today.
The Florida Interactive Entertainment Academy (FIEA)
The Florida Interactive Entertainment Academy (FIEA) offers an industry-based graduate gaming education in a world-class facility in downtown Orlando. Work on the latest game engines and software and be mentored by faculty with decades of game-making experience.
Vancouver Institute for Media Arts
Take one and two-year diploma and 6-month certificate programs in Game Art & Design, 3D Animation, 2D Animation, Visual Effects, and Digital Film Production. Students at Vancouver will be well prepared for taking on a challenging career in the film and game industry.
Pacific Audio Visual Institute
Pacific Audio Visual Institute's Game Design & 3D Animation program presents a complete overview of all processes involved in creating a computer or video game. Students work individually and in teams to learn the technical, creative, project planning, and management skills required to produce a computer game from start to finish. PAVI's comprehensive game design program covers the creative, software and production skills the game industry seeks.
The Academy of Game Entertainment Technology
Working together with industry professionals and game developers, AGET can offer their students the best possible education. Here you will be working together with the finest instructors and advisors using the newest technology available.
Advancing Technology, University of
Get a game design degree at The Digital Animation Production (DAP). As a student at DAP you will learn the skills necessary to obtain entry-level positions in the gaming industry. You will be trained in 3D modeling, animation, lighting, special effects, and a lot more.
The School of Communication Arts
The School of Communication Arts offers programs related to digital filmmaking and game development. The school is an excellent choice for anyone who wants to learn animation and special FX for gaming, TV, film and industrial multimedia.
The Guildhall at SMU
If you dream about a career in digital game development, join the Guildhall digital game development program and study game design at SMU.
iD Tech Camps
iD Tech Camps has specialized in technology camps for students ages 7-17. They have several camps that include game programming and design.
Full Sail Real World Education
Full Sail's video game design course will teach you everything you will need to know about designing video games. It only takes 21 months of school to get started in the world of video game programming and design.
The Game Institute
Have you dreamt about working on the development of video games? The Game Institute provides professional training in the field of video game production and digital game development.
Emagination Game Design
Teens build an FPS game and learn video game design and development at Emagination Game Design. Located at Bentley College in Waltham, Massachusetts, this two-week program is for teens who are serious about game design. Students learn game design skills and join a development team to build a game and present it to industry experts.
Media Design School
Media Design School is a registered and approved private tertiary institute specialising in high-end undergraduate and graduate qualifications offering courses in Game Development and Interactive Gaming.
Online Computer Schools
Education Online lets you request information from five leading online computer schools. Taking your degree online is both an economic and time efficient choice. By filling out a short one page form Education Online will send you an information package from each school covering your field of choice.
Online Game Design Colleges and Schools
Westwood Online College
The Game Institute
The Art Institute Online
Westwood (online)
DigiPen
The Art Institute of Pittsburgh Online Division
Digital Media Arts College
Keva
Vancouver Institute
AGET
Advancing Technology, University of
The School of Communication Arts
The Guildhall at SMU
iD Tech Camps
Al Collins College
Emagination
Game Design Degree
All rights reserved GameDiscovery.com 2000-2005
Mandriva Linux 2009 Beta 1 has been released
Major new features
KDE 4.1
KDE has been updated to version 4.1 final. An initial implementation of the standard Mandriva widget theme, Ia Ora, for KDE 4 is also included. This still does not represent the final appearance of Mandriva Linux 2009, as the Ia Ora window manager theme and an overall visual theme (including desktop backgrounds and so on) have not yet been added, and other appearance details may still be changed. Translations are also not yet complete, so you may find some areas of the desktop are not translated into your language if you use a language other than English. This will be corrected prior to the final release. 4.1 is the version of KDE that will be included in the final Mandriva Linux 2009 release, so it is particularly important that you report any significant bugs in it to Mandriva so they can be fixed. KDE 3 is still available from the /contrib section.
GNOME 2.23.5
The latest development release of GNOME (working towards the stable 2.24 series, which will be used in Mandriva Linux 2009) is included. Please note that the Evolution mail client is somewhat buggy in handling IMAP accounts in this release.
Mozilla Firefox 3
The new version of Mozilla Firefox is now the default in this pre-release. As we are still in transition from Firefox 2 to Firefox 3, both versions are provided in this pre-release; version 2 is in the mozilla-firefox package, while version 3 is in the firefox3 package. This is a temporary measure during the transition.
Splashy replaces bootsplash
The Splashy system is now used for displaying a 'boot splash' (the graphical screen you see during the boot process), rather than the old bootsplash system. Splashy is a more modern and actively maintained system which does not require large amounts of code to be added to the kernel, as was the case with bootsplash. It also provides some new features that were not available in bootsplash. Please report any problems you notice with the boot image, as they are likely related to this change. Please note also that the graphical rendering is not final; the Splashy theme will be improved.
Synchronization support for Windows Mobile 2003 devices, tethering support for Windows Mobile 5+
This pre-release includes the new version 0.12 of the synce framework for supporting synchronization of Windows Mobile devices. With this update, support has been added for easy synchronization of Windows Mobile 2003 and earlier devices (previously, easy support was only available for Windows Mobile 5 and later devices). For instructions on synchronizing with Windows Mobile 2003 and earlier devices (and updated instructions for synchronizing with Windows Mobile 5 and later devices), see this page, but please note that unfortunately no synchronization is yet possible with KDE 4's PIM applications (kmail, kontact etc.) as there is no KDE 4 opensync plugin available, so synchronization is only possible with GNOME and KDE 3. Kernel support has also been added for tethering using Windows Mobile 5 and later devices - that is, using them as a modem, so you can get an internet connection on your computer using the data connection available on your Windows Mobile phone / PDA. It should be possible to configure this by using the Mandriva Control Center's network configuration tool, but this has not yet been fully tested.
Printed from Linux Compatible (http://www.linuxcompatible.org/news/story/mandriva_linux_2009_beta_1.html)
the88.net • View topic - The 88 in the Ventura County Star
the88.net
When this baby hits 88 mph you're gonna see some serious s*#t!
Advanced search Board index ‹ The 88 Discussion Forums ‹ The 88
The 88 in the Ventura County Star
Try to keep it 88 related please.
2 posts • Page 1 of 1 The 88 in the Ventura County Star
by admin on Sun Apr 19, 2009 8:04 am Happily out of major-label deal, The 88 help Salzer's celebrate Record Store DayFriday, April 17, 2009In case you’re too broke and the Coachella Valley Music and Arts Festival is too far a drive, check out the free parking lot concert Salzer’s Records is holding Saturday in celebration of Record Store Day.This is the second year independent music retailers around the world are banding together to remind folks that they’re still around and still selling cool music, both on vinyl and CD.Headlining Salzer’s bash will be The 88 along with local favorites Franklin for Short and Army of Freshmen. Miranda Cosgrove, the teen queen from Nickelodeon’s “iCarly” series, will be on hand to sign autographs.The 88, a pop-rock band based in Los Angeles, is fronted by Keith Slettedahl, whose pipes rival those of Chris Isaak, Roy Orbison and Harry Nilsson. That’s a great thing.The 88 have placed so much of their music in movies and television, they list their appearances by network. They also have released three albums — two indies, then one on giant Island Records. Now they’re back as indie artists with stories to tell, namely that getting signed by a major label may not be the be all and end all of a musician’s mission these days. Slettedahl expounded on that and other topics during a recent phoner.@TO 1-Text Ragged Right no indent:What’s the latest with The 88?We have a bunch of shows coming up. We’re going to be gone a lot doing some East Coast shows with The B-52s. We’ve been recording a lot as well.Tell me your big label nightmare story.I don’t think it’s a nightmare. If I look at it positively, it was a great learning experience. I don’t subscribe to that victim thing bands can get into. We made decisions and no one held a gun to our head. I never want to come off saying that particular label is evil and that the people we met are evil or anything like that. Put us back in that situation 100 times and I think we’d make the same decision.Did the Island album do better, worse or the same as your earlier efforts?I don’t even know, honestly. I just think it wasn’t a good fit. Looking back, that was pretty clear from the start. My wife was pregnant and was afraid that I had to do certain things. The record had to get made. The record had to come out and, you know, whenever you’re afraid and you start making decisions, the results aren’t that good.Indie band before and indie band again. Any difference?There’s no difference. There wasn’t even any difference when we weren’t an indie band. The way we feel as people about the music didn’t really change. But I do think it really is liberating to make music you really like without other opinions and overthinking.I know the Island publicists were all over me to do a story on you guys when your last album came out.We met a lot of really nice people. My only problem was with myself. I wasn’t believing in myself and I was believing a lot of other people knew more than I did.Is there more pressure to make the first album, the second album or the current album — or no pressure at all?No pressure at all, not anymore. I think for awhile we got kind of mixed up and tried to mold the music to fit our business situation and it’s got to be the other way around, you know? What I found out is that I want to be comfortable in my own skin; I want to be happy and the way the last record was made is not the way I ever want to make a record again. 
There was an obvious agenda to try to take what we do and make it fit a little more into the mainstream. At a certain point, it got even darker. They wanted us to write with a bunch of other writers.I think the first two albums are better.I can’t listen to stuff we did five years ago and say, “This is good or bad.” I just try to not even go there. Creatively, we’re more excited right now than we’ve ever been. We got out of the Island situation and we’ve made another record.When’s the new one coming out?I’m not sure how it’s going to come out. We did it all at home, so the whole approach was very different. We have to decide whether these songs will be a record. We just have to decide what we want to do and be creative about getting the music out there.What does The 88 sound like these days?It’s a lot like what we’ve always done and, hopefully, a little more interesting.Every musician wants to know this: How do you get stuff in movies?Well, we just got really lucky. We used to pass out CDs and fliers at local shows and, one night, I think it was at a Supergrass show at Spaceland, we gave a CD to a guy we didn’t know who took it home and liked it. He’s been placing our stuff ever since.How do you survive on the road?We’re just really straight-laced people.No 88 beers a night?No, we’re all married. I don’t drink and none of us indulge in any of the trappings of being in a rock band.Since you guys chose the name, how many meanings of 88 have you found?It can mean a lot of different things. We went with that name and it was just one of 200 on a list. The main reason we chose it is because when you hear it you don’t necessarily think “Oh, they’re this kind of band” or “They sound like this or that.” You can apply your own connotations to it. And to the four guys in the band at the time, it was the only name all of us could get behind. Our drummer at the time was really into older blues and jazz music, and there was the “Rocket 88.”Yeah, Jackie Brenston. What a great song.Yeah, he liked it for that reason. Obviously, the piano reference was important because that’s always been a big part of our sound.The first time I saw you guys was at the Mercury Lounge in Goleta. You were singing a Harry Nilsson song at soundcheck. No one does that stuff anymore.He’s one of my all-time favorites.Any sage advice for the youngsters?Try to have as much fun as you can and enjoy it while you’re doing it.— E-mail music writer Bill Locey at [email protected]
Re: The 88 in the Ventura County Star
by cwiertny2 on Mon Apr 20, 2009 9:04 pm excellent interview... Hi. The 88 rocks!
cwiertny2 Posts: 23Joined: Sat Jun 30, 2007 10:35 pmLocation: Orange County CA
2 posts • Page 1 of 1 Return to The 88
------------------ The 88 Discussion Forums
Not 88 Related
Does Twitter do enough?
Tuesday, December 18, 2007 by Dave Winer.
Evan Williams, the Blogger guy and Twitter co-founder, gave a talk at LeWeb3 about keeping software small, and how sometimes you can create a product by removing features from an existing product. He showed how Twitter is less than Blogger, no titles, comments, templates, etc. It's almost nothing compared to Blogger, but we're using it and liking it. It's not a new story. When I was coming of age in computer science, the newest computers were minicomputers. They were called that because they were smaller and did less than the mainframe computers that came before. They were followed by microcomputers which did even less and were a lot more popular than minicomputers (which of course were more popular than mainframes). Scaling things down can make them more useful. But it's a paradox because once a feature is in a product you can't take it out or the users will complain so loudly that you put it back in right away. I know, I tried, a number of times to back out of features that I thought of better ways to do. You can always add features to products, it will make the existing users happy. But it often comes at a cost of making the product more complicated for first-time users, and they don't have a voice, they can't complain, they just go somewhere else, usually quietly. So Evan has a point. Software design, if you're creating wholly new products, is like haiku. Find the smallest subset of a mature product that will attract people and ship it. But there are certainly features they could add to Twitter that would have no impact on the steepness of the learning curve (i.e. how easy it is for a new user to get started). For example users are good at skipping over prefs they don't understand. But you have to think carefully about what the default should be, so there's no penalty for not caring. Also features that only appear in the API have no cost in complexity of the user interface. They might make it possible for a developer to build a new product on top of the existing one. Since the user of the base product can't see the feature, it can't make it harder to learn. An example -- Flickr lets you build an RSS feed of recent pictures that have a certain tag, say snowstorm. It's nice to have, but only if it doesn't get in the way of other more basic features. Some users say they don't want new features, but I bet most of them would be very happy to use a new feature that made Twitter more fun or useful. And there are alot of users who don't say anything about it, and don't think much about it. Most people aren't interested in theories about why products catch on, they like it or they don't, and don't know why they do or don't. It's always good to ask questions about why things work, but if I could offer the Twitter folk any advice, I'd say don't hesitate too much to put in new features that will make users happy. Ultimately users like new features in products they use a lot. There's a reason why products tend to bloat over time, it's because users demand it. The trick is to not compromise too much on ease of learning. View the forum thread.
© Copyright 1994-2007 Dave Winer .
Last update: 12/18/2007; 1:38:11 PM Pacific.
"It's even worse than it appears." | 计算机 |
ISSN 1082-9873 Report on the Sixth European Conference on Digital Libraries
15 - 18 September 2002, Rome, Italy
George Buchanan
Research Fellow, Digital Libraries
[email protected] The Sixth European Conference on Digital Libraries (ECDL 2002) continues to draw participation from many nations worldwide.
This year's keynote speakers were Prof. Hector Garcia-Molina, from Stanford University, USA, and Prof. Derek Law, from Strathclyde University, U.K.. These two speakers covered very different aspects of the Digital Library (DL) spectrumGarcia-Molina's keynote dealt with technical aspects of digital library research, summarizing a number of technical features of web crawlers and archiving, whilst Law's keynote focused more on the social impacts and influences in his talk on the use of an image-based digital library.
Overall the conference focused more on the technical areas of DL research than on social and public policy areas of DLs, and so Prof. Law's presentation was a helpful counterpoint to the majority of the conference presentations. In creating the Glasgow Digital Library (http://gdl.cdlr.strath.ac.uk), the Centre for Digital Library Research, of which Prof. Law is a member, faced numerous non-technical challenges, particularly organizational co-ordination between a number of partners, licensing, and copyright concerns regarding online materials, among other challenges. From his perspective, Prof. Law argued that the technical challenges faced by digital library developers have, in general, already been recognized, and many have been solved, while the non-technical challenges encountered by his project were proving to be more enduring, complex and profound. Prof. Law's keynote also touched on scalability issuesfor instance, he discussed a 'help system' evaluated by his group that was intended to support library users. The help system was itself so counter-intuitive to use, it was a major cause of user difficulties.
It is unlikely that Professor Law's observations regarding the lack of technical challenges in digital libraries would be shared by those participating in the technical program at ECDL. However, I have heard a number of other digital library researchers make observations similar to Law's. Indeed, there seemed to be signs of a shifting of perspective across many of the papers. For example, the open-standards and open-source approach to DLs and DL researchpioneered by groups such as the New Zealand Digital Library with their Greenstone DL softwareseems to be gaining significant ground. Several groups making presentations at ECDL 2002 reported on new projects involving open-source or open-standards approachesfrom entire DL systems to inter-library communication protocols.
Members of the panel "OCKHAM: Coordinating Digital Library Development with Lightweight Reference Models" discussed how to co-ordinate the development of open DL systems. Such projects, moving towards higher levels of co-operation in digital library research, might reduce the numbers of future papers that report the construction of what might be termed 'yet another digital library' or 'a tool to translate from standard X to standard Y', both of which tend to exclude the development and testing of a hypothesis. Instead, a move towards less easily tractable, more research-centred, questions would be unavoidable.
What, then, may constitute the open research questions in the digital library field?
Architectural concerns are naturally related to "open" systems, in all their forms, and indeed architectures were a focus of research particularly highlighted in the ECDL 2002 Call for Participation. Three different sessions were organized under the heading of "Architecture". The first session included reports on work being carried out in two open-source DL projects, OpenDLib and the Cheshire II projects, neatly paving the way for a presentation in the second session by David Bainbridge of the New Zealand Digital Library group on "Importing documents and metadata into digital libraries: Requirements analysis and an extensible architecture". Bainbridge reported on the current form of the simple, yet powerful, framework used by the open-source Greenstone software to index many different document formats. Two other presentations in the same session dealt with the issue of metadata and its uses. The third session concerning architectures included a paper on an interesting visual interface for digital libraries, Daffodil, by a team from Dortmund University. The project is exploring novel means for facilitating user information seeking, centered on the stratagems and tactics from writings of information scientists such as Marcia Bates. The third session also included a paper by Ed Fox and Hussein Suleman, of which more later. Not surprisingly, the Web and related standards and activities also exerted a strong influence on the conference. As previously mentioned, Hector Garcia-Molina's keynote focused on the issues of web crawling, and web crawling was also discussed by Donna Bergmark in her paper "Focused Crawls, Tunneling and Digital Libraries". The continuing influence of the Internet and, in particular, the Web will play a significant role in the development of digital libraries. The Web could be seen as a 'rival' against which digital libraries must competesomething touched on by Hector Garcia-Molina when he argued that "the Web is a DL; Google is the card index." However, as in the delivery of digital library access, the Web can also be an ally. For example, one of the first sessions of the conference highlighted web archiving in conjunction with digital libraries (in papers authored by Abiteboul et al. from France and Rauber et al. from Austria).
A significant number of presentations focused on standards, particularly the Open Archives Initiative (OAI) metadata harvesting protocol. In a session specifically on OAI applications being developed at several organizations throughout the world, Herbert Van de Sompel and Carl Lagoze reported on the progress of the OAI in their paper "Notes from the Interoperability Front". In addition to the successes of OAI, limitations of this protocol that need to be addressed in future research were discussed in the paper presentation "Designing Protocols in Support of Digital Library Componentization", which Ed Fox co-authored with Hussein Suleman.
Turning to other areas of research, a second panel discussed usability, evaluation, and related social issues. Panelists included Ann Blandford (University College London), Teal Anderson (John Hopkins University) and Wilma Alexander (University of Edinburgh). The usability and evaluation theme was reflected in two additional paper sessions, one of which included a notable paper, "Goal-directed Requirements Specification for Digital Libraries", by David Bolchini and Paolo Paolini that presented a framework for organizing the discernment of Digital Library user requirements. The authors' utilization of use scenarios in the design process was particularly interesting and illuminating, tracking the refinement of requirements from the goals of stakeholders and placing significant emphasis upon the problematic area of user navigation through information tasks. The topic of usability was a continuing theme across the entire conferencetechnocratic or otherwisewith regard to the selection of appropriate evaluation methods for systems. More than in previous years, presentations focused on qualitative methods of evaluation, including a presentation on evaluating thesaurus-based retrieval of documents. However, agreement on what constitutes acceptable norms for qualitative evaluation does not yet seem to have been reached. In addition to the presentation of papers and the panel sessions, the conference programme offered six optional pre-conference tutorials and three post-conference workshops. I heard particularly positive responses to Dagobert Soergel's "Thesauri and Ontologies in Digital Libraries" tutorial and to Ian Witten's tutorial "How to build a digital library using open-source software". Posters and demonstrations were available for viewing throughout much of the conference. Of these, the number of usability-related presentations was particularly noticeable, including those by Mike Wright, Tamara Sumner and their colleagues from the University of Colorado, Boulder, and from Pete Dalton and Rebecca Hartland-Fox from the University of Central England.
Overall, the programme at ECDL 2002 continued a number of developing themes in digital library researchincluding usability, open standards and novel interactions, etc.against a backdrop of paper presentations on project implementations, and it was apparent that the subject of evaluation methods remains something of a 'hot potato'. It will be interesting to see in which directions the field of digital library research and development moves over the next few years. For ECDL 2002, architectural norms and standards presented a number of open questions. Questions thatin her presentation on usabilityAnn Blandford pointed out directly affects usability issues, clearly and indisputably a continuing "open" area of research.
The next European Conference on Digital Libraries will be held in Trondheim, Norway, in August 2003. Those accustomed to attending ECDL in the month of September need to pay particular attention to the schedule change to August, which means that the deadline for paper submissions will be moved up to March 10, 2003. I, and many others, look forward to next year's conference. For more information about ECDL 2003, see <http://www.ecdl2003.org>.
Copyright 2002 George Buchanan Top | Contents Search | Author Index | Title Index | Back Issues Previous Article | In Brief
Home | E-mail the Editor D-Lib Magazine Access Terms and Conditions DOI: 10.1045/october2002-buchanan | 计算机 |
2014-23/2156/en_head.json.gz/33150 | Return to Previous Press Release
Enter your name and a friend's email address in the fields below and click "Submit" to email this Press Release to a friend.
Finding Your Friends and Following Them to Where You Are
Your message will look like this:
[YOUR NAME HERE] thought you might be interested in this story from the University of Rochester.MEDIA CONTACT: Peter Iglinski [email protected]
A man, or person, is known by the company he keeps. That old proverb takes on new meaning in the 21st century.
Computer scientists at the University of Rochester have shown that a great deal can be learned about individuals from their interactions in online social media, even when those individuals hide their Twitter messages (tweets) and other posts. The paper, "Finding Your Friends and Following Them to Where You Are," by professors Henry Kautz and Jeffrey Bigham, and graduate student Adam Sadilek, won the Best Paper Award at the Fifth Association for Computing Machinery (ACM) International Conference on Web Search and Data Mining, held in Seattle, Washington.
The researchers were able to determine a person's location within a 100 meter radius with 85 percent accuracy by using only the location of that person's friends. They were also able to predict a person's Twitter friendships with high accuracy, even when that person's profile was kept private.
In one experiment, Sadilek, Kautz, and Bigham studied the messages and data of heavy Twitter users from New York City and Los Angeles to develop a computer model for determining human mobility and location. The users, who sent out 100 or more tweets per month, had public profiles and enabled GPS location sharing. The location data of selected individuals was sampled over a two-week period, and then was ignored as the researchers tried to pinpoint their locations using only the information from their Twitter friends. In more than eight out of ten instances, they successfully figured out where the individuals lived to within one city block.
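The published model is considerably more sophisticated than this, but the basic intuition of the experiment can be sketched in a few lines of code. The toy example below is not the authors' method or data; the coordinates and the simple median-and-average rule are assumptions made purely for illustration.

```python
# Toy illustration of "locate a user from friends' locations" -- not the authors' code.
# friends_locations maps each friend to a list of (latitude, longitude) samples.
from statistics import median

def predict_location(friends_locations):
    """Guess a hidden user's position from friends' GPS samples.

    Each friend votes with the median of their own samples (a rough "home base"),
    and the prediction is the average of those votes.
    """
    votes = []
    for samples in friends_locations.values():
        if not samples:
            continue
        lat = median(p[0] for p in samples)
        lon = median(p[1] for p in samples)
        votes.append((lat, lon))
    if not votes:
        raise ValueError("no friend locations available")
    avg_lat = sum(v[0] for v in votes) / len(votes)
    avg_lon = sum(v[1] for v in votes) / len(votes)
    return avg_lat, avg_lon

# Example with made-up coordinates for three hypothetical friends:
friends = {
    "alice": [(40.742, -73.989), (40.741, -73.990)],
    "bob": [(40.744, -73.987)],
    "carol": [(40.739, -73.992), (40.740, -73.991), (40.738, -73.993)],
}
print(predict_location(friends))
```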
"Once you learn about relationships from peoples' tweets, it makes senses that you can track them," said Sadilek, the project's first author. "My fianc�e may be a good predictor of my location because we have breakfast together every morning.\"
In the other experiment, the scientists used the same data sets from New York and Los Angeles, but ran the models in reverse. They made full use of individuals' location data and the content of their tweets, but ignored their lists of followers as they set out to predict people's Twitter friendships (mutual following). When they compared the predictions of their models with the actual network of friendships, the researchers found they were correct 90 percent of the time.
"If people spend a lot of time together online and talk about the same things," said Sadilek, "they're more likely to be friends."
The personal nature of the messages made it a little easier for the researchers to determine relationships. Sadilek explains that heavy Twitter users spend a great deal of time talking about themselves.
"It's harder than most people think it is to protect our privacy online," said Henry Kautz, chairman of the Department of Computer Science, "but there are ways to use this new reality for good."
The team will now apply their models to such tasks as tracking and predicting the spread of communicable diseases. If people and their friends in one location tweet about having a fever and not feeling well, it may be an indication of a flu outbreak.
About the University of Rochester
The University of Rochester (www.rochester.edu) is one of the nation's leading private universities. Located in Rochester, N.Y., the University gives students exceptional opportunities for interdisciplinary study and close collaboration with faculty through its unique cluster-based curriculum. Its College of Arts, Sciences, and Engineering is complemented by the Eastman School of Music, Simon School of Business, Warner School of Education, Laboratory for Laser Energetics, Schools of Medicine and Nursing, and the Memorial Art Gallery. PR 4017, MS 2425 | 计算机 |
Flash Comunicaci�n
Acceso's services can do a lot for you and your communication:
Acceso 360 & Media Tools - Manage your communication in an integratedmanner
Acceso Analysis - Transform information into knowledge"
Acceso Monitor - Monitor what media say about you
Free services - Publish your press releases for free
Do you want us to help you?"Try our recommendator
Why Acceso
yaencontre.com
Announcements There are no announcements
Tell us the user name you registered with and we will proceed to send you your password reminder. User
Your request has been stored sucessfully.We have sent you an email with an requested information.
PROTECTION OF DATA OF A PERSONAL NATURE. Pursuant to the Organic Law 15/1999 of December 13th, on the Protection of Data of a Personal Nature (“LOPD, according to its Spanish abbreviation ”), and in the Royal Decree 1720/2007, of December 21st, in which the Regulation for the Development of the Organic Law 15/1999 of December 13th, on the Protection of Data of a Personal Nature was passed (“LOPD, according to its Spanish abbreviation ”), the Company informs that the access and use of the web page does not imply a collection of personal data of the User by the Company. Nevertheless, the data that the User voluntarily provides will be included for its treatment in a duly registered file, owned by the Company, with the aim of being able to identify him/her and contact him/her, as well as provide him/her with any information he/she requires.
In any case, the User can exercise at any moment his/her access, rectification, cancelation and opposition rights, addressing the appropriate written request to the following address: ACCESO GROUP S. L.
Rambla Catalunya, 123, entresuelo 08008 Barcelona
The aforesaid request must include the following data: name and surnames of the User, address for the notifications and a photocopy of the passport or ID card. In the event of representation, it should be proved by legitimate document. Moreover, the petition in which the request is based on, the date, the signature of the User and the supporting documents of the petition formulated must be stated, where appropriate The Company informs that it has adopted adequate security levels and that it has moreover installed all the steps and measures in its scope in order to avoid the loss, misuse, alteration, unauthorised access and extraction of these at all times, bearing in mind the state of technology, the nature of the data stored and the risks to which they are exposed, stemming these from the human action or the natural or physical media.
The Company presumes that the data have been introduced by its holder or by the person authorized by him/her and that they are exact and correct. The User is the person responsible for the update of his/her own data. The Company will not be responsible for its inaccuracy in the event that the User does not communicate the changes that might have taken place (for example, change of e-mail address).
The computer on which the web is hosted uses cookies in order to improve the service rendered by the Company. These cookies get automatically installed in the computer used by the User, but they do not contain any information related to the User. Nevertheless, these cookies do register the browsing carried out by the User for statistical purposes and to make future connections from the same computer easier. However, the User has the possibility in most web browsers to deactivate the cookies. The Company commits itself to preserve the confidentiality of the information provided and to use it solely for the stated purposes.
Press pack:
Contact Acceso - +34 91 787 00 00 / +34 93 492 00 00
Acceso 360 & Media Tools
Acceso Analysis
Acceso Monitoring
Acces per sector
Large companies
Acces per target
Content analysis Search for information
Intellectual Property announcement
Feb 08, 2012 By Michael Reed inCool Projects
netboot
LAN parties offer the enjoyment of head to head gaming in a real-life social environment. In general, they are experiencing decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work.Varda has done his own write ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer.The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive.
Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site:"Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play."Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage.Linux still runs the server and all of the tools used are open source software. The hardware here is a Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients. In addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SDD.
When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has setup Linux network booting. DHCP pointed the clients to the server | 计算机 |
From Archiveteam
While they were cool-looking, convenient and even somewhat inexpensive after a while, Floppy Disks are out as a medium to store data on home computers. The USB stick, wireless access, the use of the internet, and a whole other range of options have rendered this medium obsolete. That said, a situation now exists where there are millions of these things out in the world, some of them containing rare or unusual pieces of history, and so there's a lot of benefit to getting all that old data off that medium.
This page is meant to be a clearinghouse for various options that a person or group of reasonable technical ability could use to rescue data from floppy disks. If any of these options seem daunting, a number of people have offered to accept floppy disks and pull the data using these tools. None of these options should be considered endorsements, and Archive Team does not earn commission from the sale of these items.
Some Basic Thoughts on Floppy Disks There are three different main sizes of floppy disks that had the most traction:
8" Disks
5 1/4" Disks
8" floppies fell out of favor relatively quickly in favor of the 5 1/4" versions. In the late 1980s, 3 1/2" overtook 5 1/4" as the dominant format, but a lot of machines, such as the Commodore 64, Atari 810, IBM PC, Kaypro, Apple II and II, and a range of others all supported the 5 1/4" primarily. All floppies work on the same principle: a magnetic disc with a hole in the middle is inside a case, and a disk drive reads the magnetic data off the disc. Some aspects changed - where 8" and 5 1/4" discs had no built-in protection for the magnetic face of the disc except a paper cover, the 3 1/2" versions had a small spring-loaded door that was opened by the disk drive.
Storage could make a huge difference in the lifespan of Floppies, and a pile of disks put inside a box that was stored in a low humidity, non-extreme-temperature environment could last a lot longer than a floppy used constantly that was left on top of a computer monitor for weeks.
We're going to assume you're just trying to take a pile of disks from however far back and transfer the data onto something more recent. In all cases, try and avoid throwing out the original disks after doing transfer, as you might find that the transfer you've done is missing information, or that technology might have shifted in the meantime, allowing better extraction of the data.
The Flippy Disk Problem Spend enough time with floppy disk nerds, and eventually you will hear weeping about the "Flippy Disk" situation. We'll use the FC5025 description of the issue here:
Many older computers recorded on only one side of the disk. So, people would fill one side of the disk and then flip it over to store more on the other side. Disks used this way are called "flippy" disks. 5.25" disks have a hole, called the index hole, that lets the drive know if the disk is rotating. (The index hole has other purposes also.) The problem with flippy disks is that when the disk is inserted upside-down, the drive cannot see the index hole. Many drives won't read from the disk unless they can see the index hole. If you have one of these drives, the FC5025 will be able to read from the first side of the disk only. When you flip the disk over to read the other side, the drive will not send any data to the FC5025, and the FC5025 will not be able to read that side. Please note: even the recommended TEAC FD-55GFR drive cannot read both sides of flippy disks. There is no recommended drive for reading flippy disks at this time.
The Copy Protection Problem What you're doing, here in the future, is just what the software companies of years past were terrified you would be trying to do: make multiple, potentially unlimited copies of the software on a floppy disk they sold you. To this extent, many companies selling software would enact one of many protection schemes to prevent duplication.
Some would use the documentation or included items in the package and have the software query the user to verify they paid. Some used hardware dongles (although generally this was high-end software, not, say, a game). And yet others implemented copy protection into the disks themselves.
An example of this might be spiral tracking, where a computer would start up off the boot sector of the disk, but then the booted "OS" (really just a control program) would force the drive head to act counter-intuitive to what any regular floppy would be expected to present. For example, a spiral. This meant that a standard disk-copy program would duplicate the drive as if it had regular tracks, but would totally fail on it, and the software was protected. And also un-preserved.
This means that software being run now to duplicate a floppy disk is one good at doing a magnetic copy, since all the other rules are out the window. It means that in cases where a drive shows lots of errors copying a disk, it might not be a bad disk, just the copy protection kicking in decades after it was dreamed up. It's a problem to keep in mind.
Methods of Transfer (Hardware) There are currently multiple ways to transfer a lot of floppy disks, some involving original hardware and others involving customized circuits to use modern hardware to pull the data off the disk.
DiscFerret
The DiscFerret is a device that reads magnetic flux data from disks at a sample rate of up to 100MHz. It has an interface port that can be connected to most common floppy drives, as well as MFM and RLL hard drives. This allows capture of all data, including copy protection, unusual formats, and mastering data. Though the hardware is quite powerful, the software is under heavy development at this time. A complete floppy format analyzer program is in development. All components of the board (including the hardware, firmware, and software) are under an open source license, with source code and hardware designs available.
Finding companies that work on content for Web sites is basically easy. Finding companies that can launch a firm into the world of e-commerce is a different matter. These companies are much more than consultants with a group of techies that know how to implement little else.Application service providers are a new breed of firms that pull it all together from behind the Web site. Blend together Web hosting, information technology outsourcing and consulting and you have an ASP.Industry analysts differ on how the new ASP businesses should be categorized and defined and, as a result, ASP market projections vary wildly but solidly in the billions of dollars. The Gartner Group estimates ASP projects for 2003 will reach $22.7 billion. International Data Corp. foresees $2 billion in ASP projects in 2003. The truth lies somewhere in between, depending on the services provided.What do these new businesses do? According to the experts, an ASP deploys, maintains, upgrades and supports software. And they serve just about any company with an Internet site or a data warehouse. It's interesting to note that an Internet site needs just as much support as a data warehouse. The internal resources of most IT departments are already overwhelmed with day-to-day support and developmental issues. U.S. companies spent $153 billion on e-business infrastructure in 1999 - seven times the 1999 Internet retail sales. According to the Internet Research Group and SRI Consulting, Internet infrastructure costs are expected to soar to $348 billion by 2003.Instead of handling all the Internet and data warehouse tasks inhouse, companies can pay a subscription fee to an ASP to manage these functions. To qualify, a firm must be a full-service company that will take total responsibility for everything associated with either its Internet, intranet or extranet applications.ASP start-ups include Usinternetworking, Corio and Future Link. Microsoft and Cisco said they're collaborating to provide ASPs with an end-to-end solution to deployoutsourced applications and services. Other companies plan to provide infrastructure and support while working with independent software vendors such as Great Plains, Clarus Corp. and Pivotal Corporate to enhance the service available to midsize businesses in corporate purchasing, business management and customer relationship management.This move is meant to help ASPs offer customer services that are an alternative to the resource-intensive process of deploying and managing complex applications inhouse. Crossover From ASP to CRMApplication hosting also is emerging as an option for customer relationship management. Oracle's Business Online initiative is a good example. Customers can purchase the Oracle CRM 3i suite as a service and host the application in an Oracle data center. A joint effort between Nortel Networks and the Sun-Netscape Alliance aims to provide a standards-based framework that will allow Internet service providers and ASPs to create new service bundles, deliver them, and manage them more efficiently.Market researcher The Yankee Group forecasts that the market for hosted CRM solutions will escalate from $50 million in 1999 to more than $1.5 billion in 2003, including both infrastructure and software revenues. An increasing number of organizations, primarily those focused on the Web as a customer channel, will be able to leverage the next generation of hosted solutions to react quickly to ever-changing CRM challenges. 
However, while The Yankee Group predicts that ASPs will be financially attractive for hosting certain CRM applications in the short term, customization requirements will present significant challenges for an enterprisewide CRM rental market.As recently as one year ago, companies had few options in the CRM area other than to license a client-server product, or build or augment existing capabilities. Within the past six months, software vendors, third parties (ASPs) and users have begun experimenting with hosted CRM solutions. The Yankee Group predicts there will be a transition period to hosted CRM solutions characterized by a range of products and pricing models.Some trends likely to take place are:• Two delivery models for hosted CRM solutions will emerge.• Self-hosted solutions (vendors hosting their own products).• Third-party solutions with a significant integration and consultative component, delivered by what The Yankee Group calls solution service providers that will leverage the hosted model to gain significant market share. | 计算机 |
Randy P
Randy is one of three owners and founders of Gearbox software and under his guidance as President and CEO, Gearbox has grown from an idea into a leading independent game development studio with titles that have sold over 30 million units worldwide earning over $1 billion. In 2000, Randy accepted the Academy of Interactive Arts and Sciences award for Best PC Action Game of the Year for Half-Life: Opposing Force (1999, PC), for which he served as Executive Producer and Director. With Gearbox, Randy also directed, produced and/or was executive producer for Half-Life: Blue Shift (2000, PC), Half-Life (2001, PS2), Counter-Strike: Condition Zero (2004, PC), 007 James Bond: Nightfire (2002, PC), Tony Hawk's Pro Skater 3 (2002, PC), Samba de Amigo (2008, Wii) and Halo: Combat Evolved (2003, PC). Randy served as Co-Director and Executive Producer of Gearbox's original franchise 'Brothers in Arms'. In March of 2005, Gearbox launched the franchise with publisher Ubisoft Entertainment on three platforms to achieve record sales, critical acclaim and numerous industry awards and accolades making it the best selling and highest rated WW2 action game ever released on the Xbox video game system. Since its launch, the Brothers in Arms series has released ten different games across all major console and mobile platforms generating over $300m in gross revenue. The most recent release, Brothers in Arms Hell's Highway, was a 12th Annual Interactive Achievement Awards nominee for Outstanding Achievement in Original Story.
In 2009, as Executive Producer, Randy launched Borderlands, the fastest selling new video game brand of the year. Borderlands was released on the Xbox 360, Playstation 3 and Windows PC and other streaming platforms worldwide in October of 2009 to earn universal critical acclaim and record-setting sales. Featuring seamless single player and cooperative game play, Borderlands, as Randy put it, "is the first, great shooter looter." Borderlands has been nominated for and has won multiple Game of the Year awards throughout the industry and was nominated in two categories in the 13th Annual Interactive Achievement Awards.
Borderlands 2 was launched in September of 2012 to record breaking sales and an average metacritic.com score across all platforms exceeding 90%.
Randy is also Executive Producer of Aliens: Colonial Marines launching on multiple platforms including the Nintendo Wii U in 2013.
Before becoming a full-time video game developer, Randy was a professional magician in Hollywood, occasionally performing at the famous Magic Castle between classes at UCLA. Under the handle "DuvalMagic" Randy is also a feverishly dedicated gamer who has earned a competitive reputation from the days of Doom to the end-game raids of World of Warcraft. His Xbox Live Gamer Score exceeds 80,000 points and he has similarly high marks with battle.net, Steam, Game Center and other platforms with meta game systems.
Length of time I have been employed in the industry
15 + years...
Game companies I have worked for
Gearbox Software and a couple of others...
Titles I have worked on
Every Gearbox title and a few others before Gearbox...
Resources I would recommend for those interested in getting into game development (books, tutorials, sites, etc)
Start by learning how to create. Then, strive to create something interesting. Internet, books, school, friends and the modification community are all great resources to use to discover your path. Never give up.
First game I remember playing
All-time favorite game(s)
Non-video gaming hobbies
Board Games, Piano, Guitar, Sleight of Hand
@DuvalMagic
© 2014 Gearbox Software. All rights reserved.
We interview the webmaster of Photoshop Disasters to find out exactly how designers are slipping up
Patrick Budmar (PC World) on 13 March, 2012 12:46
"I need this to look perfect."
Adobe's Photoshop suite has overcome competing products over the years to not only become the de facto standard for image editing, but also a household name. The popularity of digital photography both as an art form and for commercial purposes means that there are now more people using Photoshop to edit and touch up their images than ever before. The visual nature of image editing and manipulation means that it is very easy to spot any imperfections and touch it up through the myriad of filters and plug-ins that are available in the program. However, despite the best efforts of designers, both amateur and professional, mistakes are made and inappropriate images somehow get published.Ann Taylor: Minor misstep.Seemingly fed up with the lack of professionalism, skill and/or talent routinely displayed by certain Photoshop users worldwide, the Photoshop Disasters (PSD) blog (www.psdisasters.com) came up in 2008 as a way to document and shame those who had committed the sin of creating and releasing a "Photoshop disaster". The blog, in its original form, was started by a webmaster who went by the handle "Cosmo", who helped build up the popularity and awareness of the blog for the next two years until ownership was passed on to a new webmaster. When Vernon, who chose to remain anonymous for this story, took the reins of PSD from Cosmo, he aimed to continue building upon the blog's mission of highlighting mistakes in graphic design. "The main reason for continuing PSD is our love for the community, and we see it as moral right to expose a lot of immoral Photoshop [work] that happens in our society", he said.While Vernon admits to being a Photoshop user for several years, he does not feel that he should be classified as an expert on the subject, which is a common conclusion that people tend to jump to just because he runs the blog. "My first experience was through college and I used it on some project that I was working on at the time", he said. "Photoshop today is a lot easier to use than it was back then". This interest in Photoshop led Vernon to operate PSD, which actively sources "disastrous" images from the Internet and user submissions before posting them online for the whole world to judge.What makes the site more intriguing is that the broken images are often from professional publications, which not only have a tendency to employ skilled artists, but editors as well. Despite these gatekeepers, PSD has demonstrated time and time again that "Photoshop disasters" happen. "I'm not exactly sure why these disasters happen", Vernon admits, "but I guess a lot of these companies have a messed up view as to what true beauty is".While PSD's tongue-in-cheek posts often serve as a critique for badly designed images, the blog found itself threatened with legal action in late 2009 by renowned clothing brand, Polo Ralph Lauren, if it did not remove a post that featured a grossly manipulated image of model Filippa Hamilton. Polo Ralph Lauren issued a Digital Millennium Copyright Act (DMCA) takedown notice to the host of PSD, claiming the use of the image infringed on its copyright, but the blog decided to continue running with the image as it was deemed to be fair use. In addition to continuing to use the offending image, a mocking rebuttal was posted by PSD and the takedown notice was reposted for visitors to see. 
While Polo Ralph Lauren eventually apologised for its "outrageous bout of Photoshop exuberance", as the blog described it, the company did not go as far as to apologise for the original DMCA notice its lawyers sent to PSD.
As more artists dip their toes into Photoshop and the publishing industry continues to be obsessed with altering photos to match its image of what constitutes perfection, it is still too early to tell whether PSD is contributing to artists tightening up their Photoshop skills, or whether it is merely drawing more attention to flawed images that have always been around in one form or another. "I was expecting that submissions of disasters would start to drop, but we're actually seeing increasing submissions by our readers", Vernon said. "We might be seeing more disasters because our audience is growing and have learnt how to spot these disasters more easily".
PSD itself serves as a clear warning to both wannabe artists and seasoned professionals that careless mistakes in graphic design will not be tolerated but shamed. For those who may or may not be aware of the blog, Vernon's advice on how to avoid making a Photoshop disaster is quite simple: "It would be to not cut corners and manipulate something to the point where it doesn't look real anymore", he warned. "The PSD community is very large, and anything that gets published that looks manipulated beyond belief normally gets picked up by our site".
"Photoshopped" is not a verb, says Adobe
The rapid adoption and broad popularity of Adobe Photoshop have meant that the program has not only found itself on most computers around the world, but has also reached mainstream recognition: terms such as "Photoshopped" or "Shopped" have become verbs used to refer to images edited with the program. While one might think that entering the vernacular alongside the likes of Google would put Adobe in an enviable position, the company has actually gone to great lengths to discourage the practice.
Adobe's position is that its trademarks, such as Photoshop, are some of its most valuable assets, and they are something it protects worldwide. "We consider the use of Photoshop as a verb, or in any other way than as the name of our digital imaging software product, to be a misuse of our mark", an Adobe spokesperson explained. While this is the vendor's official stance, most users, such as PSD's Vernon, continue to use the terms daily. "I personally don't have any issue with these terms and I think they are quite a light hearted way in referring to images manipulated in the program", he said.
Top 10 Open Government WebsitesFederal agencies face White House orders to become more transparent. These websites expose new data sets, support public petitions, and reveal where taxpayer money goes.1 of 10 Dozens of websites have been created under the White House's Open Government Directive, announced in December 2009, with the goals of increasing transparency and fostering greater public participation and collaboration in government processes and policy.
Some sites, such as Data.gov, are far-reaching. Data.gov (pictured above) is an online repository of government data that's available to the public in a variety of formats. As of the middle of January, more than 390,000 data sets from 172 agencies and sub-agencies were available there. Among the most popular are U.S. Geological Survey statistics on earthquakes and the U.S. Agency for International Development's data on economic aid and military assistance to foreign governments.
Data.gov, more than a big database, serves as a platform for communities of interest, where researchers, application developers, and others brainstorm about public data. One recent example is the National Ocean Council's Ocean portal, with information on offshore seabirds and critical habitats for endangered species, among other data sets. There are also discussion forums and interactive maps.
Some specialty sites focus on a particular area of government activity. The Environmental Protection Agency recently released a cache of data on greenhouse gas emissions, along with an interactive map and tools for analysis.
Other sites built around narrow data sets are AirNow (focused on air quality), the Department of Energy's Green Energy site, the National Archives and Records Administration's virtual community for educators, and the National Library of Medicine's Pillbox site, for identifying tablet and capsule medications.
The White House has established a Web page where the public can track the progress of open government. A dashboard shows how agencies are faring in 10 areas, such as releasing data and publishing their plans.
InformationWeek has selected 10 federal websites that are the best examples of open government in all of its forms. (State and metropolitan open government sites aren't included in this overview.) Some put an emphasis on data transparency, while others encourage public participation or collaboration. Even so, there's room for improvement. The data on these sites isn't always timely or accurate, and public participation sometimes wanes, underscoring that open government needs constant attention to be effective.
DDDDDEEE,
re: Top 10 Open Government Websites
Open government websites sound interesting, but what about the privately held open sharing companies which handled before.
Design Patterns and Business Models for the Next Generation of Software
by Tim O'Reilly
Read this article in:
Oct. 2009: Tim O'Reilly and John Battelle answer the question of "What's next for Web 2.0?" in Web Squared: Web 2.0 Five Years On.
The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was overhyped, when in fact bubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum's rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other.
The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 Conference was born.
In the year and a half since, the term "Web 2.0" has clearly taken hold, with more than 9.5 million citations in Google. But there's still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom.
This article is an attempt to clarify just what we mean by Web 2.0.
In our initial brainstorming, we formulated our sense of Web 2.0 by example:
Web 1.0 → Web 2.0
DoubleClick → Google AdSense
Ofoto → Flickr
evite → upcoming.org and EVDB
domain name speculation → search engine optimization
directories (taxonomy) → tagging ("folksonomy")
The list went on and on. But what was it that made us identify one application or approach as "Web 1.0" and another as "Web 2.0"? (The question is particularly urgent because the Web 2.0 meme has become so widespread that companies are now pasting it on as a marketing buzzword, with no real understanding of just what it means. The question is particularly difficult because many of those buzzword-addicted startups are definitely not Web 2.0, while some of the applications we identified as Web 2.0, like Napster and BitTorrent, are not even properly web applications!) We began trying to tease out the principles that are demonstrated in one way or another by the success stories of web 1.0 and by the most interesting of the new applications.
1. The Web As Platform
Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.
Figure 1 shows a "meme map" of Web 2.0 that was developed at a brainstorming session during FOO Camp, a conference at O'Reilly Media. It's very much a work in progress, but shows the many ideas that radiate out from the Web 2.0 core.
For example, at the first Web 2.0 conference, in October 2004, John Battelle and I listed a preliminary set of principles in our opening talk. The first of those principles was "The web as platform." Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down in flames after a heated battle with Microsoft. What's more, two of our initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don't often think of it as "web services", but in fact, ad serving was the first widely deployed web service, and the first widely deployed "mashup" (to use another term that has gained currency of late). Every banner ad is served as a seamless cooperation between two websites, delivering an integrated page to a reader on yet another computer. Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion.
Nonetheless, these pioneers provided useful contrasts because later entrants have taken their solution to the same problem even further, understanding something deeper about the nature of the new platform. Both DoubleClick and Akamai were Web 2.0 pioneers, yet we can also see how it's possible to realize more of the possibilities by embracing additional Web 2.0 design patterns.
Let's drill down for a moment into each of these three cases, teasing out some of the essential elements of difference.
Netscape vs. Google
If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard bearer for Web 2.0, if only because their respective IPOs were defining events for each era. So let's start with a comparison of these two companies and their positioning.
Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.
In the end, both web browsers and web servers turned out to be commodities, and value moved "up the stack" to services delivered over the web platform.
Google, by contrast, began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage. No porting to different platforms so that customers can run the software on their own equipment, just a massively scalable collection of commodity PCs running open source operating systems plus homegrown applications and utilities that no one outside the company ever gets to see.
At bottom, Google requires a competency that Netscape never needed: database management. Google isn't just a collection of software tools, it's a specialized database. Without the data, the tools are useless; without the software, the data is unmanageable. Software licensing and control over APIs--the lever of power in the previous era--is irrelevant because the software never need be distributed but only performed, and also because without the ability to collect and manage the data, the software is of little use. In fact, the value of the software is proportional to the scale and dynamism of the data it helps to manage.
Google's service is not a server--though it is delivered by a massive collection of internet servers--nor a browser--though it is experienced by the user within the browser. Nor does its flagship search service even host the content that it enables users to find. Much like a phone call, which happens not just on the phones at either end of the call, but on the network in between, Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.
While both Netscape and Google could be described as software companies, it's clear that Netscape belonged to the same software world as Lotus, Microsoft, Oracle, SAP, and other companies that got their start in the 1980's software revolution, while Google's fellows are other internet applications like eBay, Amazon, Napster, and yes, DoubleClick and Akamai.
DoubleClick vs. Overture and AdSense
Like Google, DoubleClick is a true child of the internet era. It harnesses software as a service, has a core competency in data management, and, as noted above, was a pioneer in web services long before web services even had a name. However, DoubleClick was ultimately limited by its business model. It bought into the '90s notion that the web was about publishing, not participation; that advertisers, not consumers, ought to call the shots; that size mattered, and that the internet was increasingly being dominated by the top websites as measured by MediaMetrix and other web ad scoring companies.
As a result, DoubleClick proudly cites on its website "over 2000 successful implementations" of its software. Yahoo! Search Marketing (formerly Overture) and Google AdSense, by contrast, already serve hundreds of thousands of advertisers apiece.
Overture and Google's success came from an understanding of what Chris Anderson refers to as "the long tail," the collective power of the small sites that make up the bulk of the web's content. DoubleClick's offerings require a formal sales contract, limiting their market to the few thousand largest websites. Overture and Google figured out how to enable ad placement on virtually any web page. What's more, they eschewed publisher/ad-agency friendly advertising formats such as banner ads and popups in favor of minimally intrusive, context-sensitive, consumer-friendly text advertising.
The Web 2.0 lesson: leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
A Platform Beats an Application Every Time
In each of its past confrontations with rivals, Microsoft has successfully played the platform card, trumping even the most dominant applications. Windows allowed Microsoft to displace Lotus 1-2-3 with Excel, WordPerfect with Word, and Netscape Navigator with Internet Explorer.
This time, though, the clash isn't between a platform and an application, but between two platforms, each with a radically different business model: On the one side, a single software provider, whose massive installed base and tightly integrated operating system and APIs give control over the programming paradigm; on the other, a system without an owner, tied together by a set of protocols, open standards and agreements for cooperation.
Windows represents the pinnacle of proprietary control via software APIs. Netscape tried to wrest control from Microsoft using the same techniques that Microsoft itself had used against other rivals, and failed. But Apache, which held to the open standards of the web, has prospered. The battle is no longer unequal, a platform versus a single application, but platform versus platform, with the question being which platform, and more profoundly, which architecture, and which business model, is better suited to the opportunity ahead.
Windows was a brilliant solution to the problems of the early PC era. It leveled the playing field for application developers, solving a host of problems that had previously bedeviled the industry. But a single monolithic approach, controlled by a single vendor, is no longer a solution, it's a problem. Communications-oriented systems, as the internet-as-platform most certainly is, require interoperability. Unless a vendor can control both ends of every interaction, the possibilities of user lock-in via software APIs are limited.
Any Web 2.0 vendor that seeks to lock in its application gains by controlling the platform will, by definition, no longer be playing to the strengths of the platform.
This is not to say that there are not opportunities for lock-in and competitive advantage, but we believe they are not to be found via control over software APIs and protocols. There is a new game afoot. The companies that succeed in the Web 2.0 era will be those that understand the rules of that game, rather than trying to go back to the rules of the PC software era.
Not surprisingly, other web 2.0 success stories demonstrate this same behavior. eBay enables occasional transactions of only a few dollars between single individuals, acting as an automated intermediary. Napster (though shut down for legal reasons) built its network not by building a centralized song database, but by architecting a system in such a way that every downloader also became a server, and thus grew the network.
Akamai vs. BitTorrent
Like DoubleClick, Akamai is optimized to do business with the head, not the tail, with the center, not the edges. While it serves the benefit of the individuals at the edge of the web by smoothing their access to the high-demand sites at the center, it collects its revenue from those central sites.
BitTorrent, like other pioneers in the P2P movement, takes a radical approach to internet decentralization. Every client is also a server; files are broken up into fragments that can be served from multiple locations, transparently harnessing the network of downloaders to provide both bandwidth and data to other users. The more popular the file, in fact, the faster it can be served, as there are more users providing bandwidth and fragments of the complete file.
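To make the fragment-serving idea concrete, here is a minimal Python sketch, not the actual BitTorrent wire protocol, of how a file can be split into fixed-size pieces whose digests let a downloader verify data received from any peer. The piece size and function names are illustrative assumptions; the real protocol records piece hashes in a metainfo file and coordinates peers on top of that.

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB pieces; an assumed, typical order of magnitude


def make_piece_table(path, piece_size=PIECE_SIZE):
    """Split a file into fixed-size pieces and record a digest for each piece.

    Because every piece can be checked independently, fragments can be fetched
    from many different peers and still be trusted.
    """
    digests = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(piece_size)
            if not piece:
                break
            digests.append(hashlib.sha1(piece).hexdigest())
    return digests


def verify_piece(index, data, digests):
    """Return True if a piece received from some peer matches the table."""
    return hashlib.sha1(data).hexdigest() == digests[index]
```

The key point the sketch illustrates is architectural: once the piece table exists, any downloader holding it can also act as an upload source for the pieces it has already verified.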
BitTorrent thus demonstrates a key Web 2.0 principle: the service automatically gets better the more people use it. While Akamai must add servers to improve service, every BitTorrent consumer brings his own resources to the party. There's an implicit "architecture of participation", a built-in ethic of cooperation, in which the service acts primarily as an intelligent broker, connecting the edges to each other and harnessing the power of the users themselves.
2. Harnessing Collective Intelligence
The central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era appears to be this, that they have embraced the power of the web to harness collective intelligence:
Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound in to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.
Yahoo!, the first great internet success story, was born as a catalog, or directory of links, an aggregation of the best work of thousands, then millions of web users. While Yahoo! has since moved into the business of creating many types of content, its role as a portal to the collective work of the net's users remains the core of its value.
Google's breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results. (A small worked sketch of the idea appears after these examples.)
eBay's product is the collective activity of all its users; like the web itself, eBay grows organically in response to user activity, and the company's role is as an enabler of a context in which that user activity can happen. What's more, eBay's competitive advantage comes almost entirely from the critical mass of buyers and sellers, which makes any new entrant offering similar services significantly less attractive.
Amazon sells the same products as competitors such as Barnesandnoble.com, and they receive the same product descriptions, cover images, and editorial content from their vendors. But Amazon has made a science of user engagement. They have an order of magnitude more user reviews, invitations to participate in varied ways on virtually every page--and even more importantly, they use user activity to produce better search results. While a Barnesandnoble.com search is likely to lead with the company's own products, or sponsored results, Amazon always leads with "most popular", a real-time computation based not only on sales but other factors that Amazon insiders call the "flow" around products. With an order of magnitude more user participation, it's no surprise that Amazon's sales also outpace competitors.
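The PageRank idea mentioned above can be made concrete with a short sketch. The following is a toy power-iteration version in Python, not Google's production algorithm; the damping factor, iteration count, and example graph are illustrative assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Tiny PageRank over a dict mapping page -> list of outbound links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # dangling page: spread its rank evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank


# Pages that attract links from many (or highly ranked) pages end up ranked higher,
# which is exactly the sense in which link structure, not document content alone,
# determines the result ordering.
example = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(example).items(), key=lambda kv: -kv[1]))
```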
Now, innovative companies that pick up on this insight and perhaps extend it even further, are making their mark on the web:
Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond's dictum (originally coined in the context of open source software) that "with enough eyeballs, all bugs are shallow," to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
Sites like del.icio.us and Flickr, two companies that have received a great deal of attention of late, have pioneered a concept that some people call "folksonomy" (in contrast to taxonomy), a style of collaborative categorization of sites using freely chosen keywords, often referred to as tags. Tagging allows for the kind of multiple, overlapping associations that the brain itself uses, rather than rigid categories.
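The data structure behind such a tag system is simple enough to sketch in a few lines of Python. This is an illustration of the idea, not del.icio.us or Flickr code; the item names and tags are invented for the example.

```python
from collections import defaultdict

# A minimal tag index: each item may carry several overlapping tags,
# and retrieval combines tags instead of walking a single fixed hierarchy.
tag_index = defaultdict(set)


def tag(item, *tags):
    for t in tags:
        tag_index[t.lower()].add(item)


def items_with(*tags):
    """Items carrying every one of the given tags."""
    sets = [tag_index[t.lower()] for t in tags]
    return set.intersection(*sets) if sets else set()


tag("photo_123", "puppy", "cute", "beach")
tag("photo_456", "puppy", "training")
print(items_with("puppy"))          # both photos
print(items_with("puppy", "cute"))  # only photo_123
```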
PcTips Box
Tips and Tricks Central
Windows Vista SP1 Automated System Recovery
13. February 2008 The Automated System Recovery (ASR) is one of the aspects of Windows Vista that has evolved with the introduction of Vista Service Pack 1. A Windows application programming interface, ASR is designed to keep track of and record the configuration of disks and volumes on a system. ASR comes into play in bare-metal recovery scenarios: before the operating system and its associated content, including programs and data, are restored, ASR returns the disks and volumes to their original state. ASR labels disks as Critical or non-Critical, depending on whether or not they contain system state or operating system components.
“ASR in Vista and Server 2008 is tightly integrated with the Volume Shadow Copy Service (VSS) and presents a writer interface, which is a significant change from Server 2003 and XP. During a backup the ASR writer reports metadata describing all the disks and volumes on the system. During the restore the requester passes the same metadata back to the writer which recreates disks and volumes as necessary.”
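The backup-and-restore round trip described in that quote can be illustrated with a purely conceptual Python sketch. This is not the real VSS/ASR COM interface; every name and field below is invented for the illustration, and the point is only the flow: metadata is captured at backup time and handed back at restore time so the volume layout can be recreated, critical volumes first.

```python
# Conceptual illustration only: the real ASR writer is a native VSS component,
# not a Python API. Names and fields here are assumptions made for the sketch.
from dataclasses import dataclass, field


@dataclass
class Volume:
    letter: str
    size_mb: int
    critical: bool  # does it hold system state / OS components?


@dataclass
class AsrMetadata:
    volumes: list = field(default_factory=list)


def backup_phase(system_volumes):
    """At backup time, the writer reports the disk/volume layout as metadata."""
    return AsrMetadata(volumes=list(system_volumes))


def restore_phase(metadata):
    """At restore time, the same metadata is handed back so the layout can be
    recreated before any files are copied; critical volumes come first."""
    for vol in sorted(metadata.volumes, key=lambda v: not v.critical):
        print(f"recreate {vol.letter}: {vol.size_mb} MB (critical={vol.critical})")


meta = backup_phase([Volume("C", 80_000, True), Volume("D", 200_000, False)])
restore_phase(meta)
```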
Convert M4V to MP3, M4V to MP3 Converter
Convert M4V to MP3
MP4 MP3 Converter converts M4V to MP3 and supports more than 100 audio and video files. The software also supports batch conversion.
Free Download MP4 MP3 Converter
Install the software by instructions
Launch MP4 MP3 Converter
Choose M4V Files
Click "Add Files" button to choose M4V files and add them to conversion list.
Choose one or more M4V files you want to convert.
Choose "to MP3"
Click "Convert" to convert all M4V files into MP3 format.
The software is converting M4V files into MP3 format.
Play & Browse
Right-click converted item and choose "Play Destination" to play the destination file, choose "Browse Destination Folder" to open Windows Explorer to browse the destination file.
Top Free Download MP4 MP3 Converter
What is M4V?
M4V is a standard file format for the popular Apple iPod devices. There are two definitions for the term M4V. The first is that raw MPEG-4 Visual bitstreams are named .m4v. The second, and much more likely, is that you have legally downloaded a video file from the Apple iTunes store and it has the M4V extension. These files can be movies, TV shows or music videos and all will include Apple's FairPlay DRM copyright protection.
MPEG-1 Audio Layer 3, more commonly referred to as MP3, is a digital audio encoding format using a form of lossy data compression. It is a common audio format for consumer audio storage, as well as a de facto standard encoding for the transfer and playback of music on digital audio players. MP3's use of a lossy compression algorithm is designed to greatly reduce the amount of data required to represent the audio recording and still sound like a faithful reproduction of the original uncompressed audio for most listeners, but is not considered high fidelity audio by audiophiles. An MP3 file that is created using the mid-range bit rate setting of 128 kbit/s will result in a file that is typically about 1/10th the size of the CD file created from the original audio source. An MP3 file can also be constructed at higher or lower bit rates, with higher or lower resulting quality. The compression works by reducing accuracy of certain parts of sound that are deemed beyond the auditory resolution ability of most people. This method is commonly referred to as perceptual coding. It internally provides a representation of sound within a short term time/frequency analysis window, by using psychoacoustic models to discard or reduce precision of components less audible to human hearing, and recording the remaining information in an efficient manner. This is relatively similar to the principles used by JPEG, an image compression format.
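For readers comfortable with the command line, a common alternative to the GUI steps above is the open source ffmpeg tool, which can extract and re-encode the audio track directly (this only works on M4V files that are not protected by FairPlay DRM). The small Python wrapper below is a sketch, not part of the MP4 MP3 Converter product; the file names are placeholders and ffmpeg must already be installed and on the PATH.

```python
import subprocess


def m4v_to_mp3(src, dst, quality="2"):
    """Extract the audio track of a non-DRM .m4v file and encode it as MP3.

    -vn drops the video stream; -q:a selects the LAME encoder's variable
    bit rate quality (0 = best quality, 9 = smallest files).
    """
    subprocess.run(
        ["ffmpeg", "-i", src, "-vn", "-codec:a", "libmp3lame", "-q:a", quality, dst],
        check=True,
    )


m4v_to_mp3("clip.m4v", "clip.mp3")
```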
Copyright © 2008-2013 Hoo Technologies. All rights reserved.
Parallel Computing in Native Code: New Trends and Old Friends
Posted: Jan 12, 2009 at 11:48 AM
We've covered a lot of ground on both C++ and
on Channel 9 over the past few years. For C++ in particular, we've gone deep on many fronts with some of the main players in Microsoft's native programming world. Damien Watkins is one of these players and he's the brains behind most of the interviews you've
seen on C9 (he thought them up and set them up). But who is Damien and what does he do?Rick Molloy (PM) and Don McCrady(Development Lead) have been on Channel 9 before and they are both members of the native side of the parallel computing platform (PCP) house. It's no surprise that most teams who ship Microsoft software work closely with the
C++ team given that most of our products are written in native code. The C++ team produces the de facto compiler that most teams at MS use. The PCP team is no exception.
We figured it would be fun to get a C++ player (Damien is a PM on the front-end native compiler team) and some Parallel People together in a room to discuss the native
side of the Concurrency Problem (and possible solutions) and get a feel for the synergy between teams. The next version of C++, C++0x, will undoubtedly contain new language constructs that will make it easier to program many-core algorithms. We dig into some
of these here as well as reveal for the first time on C9 some new members of the C++ language that you may not have heard about yet....Enjoy. This is a great conversation among key thinkers who live in and innovate the native world.
tomkirbygreen
Brilliant interview. Particularly the history of the Visual C++ 2010 concurrency feature set and the deep thinking and u-turns taken in creating it. Only on Channel 9 could we have such a frank interview about the history and early 'mistakes' made in the creation of as yet unreleased technology. I really appreciate the transparency of such conversations. Oh, and loved Charles' joke about adding another double-underscore keyword
Charles
Glad you liked the conversation! I know I did. It's a treat to get to interact with so many smart and passionate people, digging into the lesser known aspects of what goes on inside the Happy Death Star. I love this job. Much more to come from Parallel Computing, C++ and other great teams in '09!
C
Security Bytes: Vista under the hackers' microscope
Vista under the hackers' microscope
Microsoft has great confidence in the security features of its upcoming Vista OS. So much confidence, in fact, that it plans to show them off in a den of hackers. At the August Black Hat confab in Las Vegas, the software giant will take to the stage and offer an entire series of sessions on its long-awaited overhaul of the Windows operating system. It will be the first presentation Microsoft has made at the hacker-oriented gathering. Microsoft security program manager Stephen Toulouse told eWeek that the idea is to provide deeply technical presentations on Vista security to the hacking community. "We submitted several presentations to the Black Hat event organizers and, based on the technical merit and interest to the audience, they were accepted," Toulouse told eWeek. John Lambert, group manager in Microsoft's Security Engineering and Communications Group, will also be on hand to discuss the security engineering process behind Vista. Specifically, he will show how Vista's engineering process differs from that of Windows XP, and he'll display new features designed to blunt memory overwrite flaws. RSA stock option grants under scrutiny
Bedford, Mass.-based RSA Security Inc. acknowledged Tuesday that it has been subpoenaed by the U.S. Attorney for the Southern District of New York for records from 1996 to the present related to the company's granting of stock options. The company told the Reuters
news agency it will cooperate fully with the office of the U.S. Attorney in its investigation of how RSA and other companies granted stock options. According to Reuters, the SEC and federal prosecutors in New York and California are looking at more than 40 companies to determine if they gave backdated stock options to top executives after a run-up in stock options. The majority of companies involved have to date been technology-based companies. Last Friday RSA said it received notification of a shareholder complaint alleging violations from October 1999 to present of state and federal laws relating to stock option grant practices. The company said in a filing with the U.S. Securities and Exchange Commission that its directors intend to review the allegations before responding. Shares in RSA dropped by 2.1% to $16.33 in Tuesday mid-day trading on the Nasdaq market. Report shows spike in spyware
The spyware threat has grown steadily, according to a report from Chicago-based security software firm Aladdin Knowledge Systems Inc. Among the findings, which cover 2005:
The number of spyware threats grew from 1,083 in 2004 to 3,389 in 2005, representing a huge spike of more than 213%. The number of malicious threats classified as Trojans -- a form of spyware -- grew from 1,455 in 2004 to 3,521 in 2005, representing a 142% spike. The number all other malicious threats grew from 6,222 in 2004 to 9,713 in 2005, representing a 56% increase. The latter statistic covers email worms and file infectors defined as self-replicating/propagating malicious applications. Unlike Spyware and Trojan horses, viruses and worms have self-spreading capabilities, using email, networks, instant messengers and other programs to propagate.
This article originally appeared on SearchSecurity.com.
Communications networks increase in value as they add members--but by how much? The devil is in the details
By Bob Briscoe, Andrew Odlyzko, Benjamin Tilly Posted 1 Jul 2006 | 18:15 GMT
Illustration: Serge Bloch
Of all the popular ideas of the Internet boom, one of the most dangerously influential was Metcalfe's Law. Simply put, it says that the value of a communications network is proportional to the square of the number of its users.
The law is said to be true for any type of communications network, whether it involves telephones, computers, or users of the World Wide Web. While the notion of "value" is inevitably somewhat vague, the idea is that a network is more valuable the more people you can call or write to or the more Web pages you can link to.
Metcalfe's Law attempts to quantify this increase in value. It is named for no less a luminary than Robert M. Metcalfe, the inventor of Ethernet. During the Internet boom, the law was an article of faith with entrepreneurs, venture capitalists, and engineers, because it seemed to offer a quantitative explanation for the boom's various now-quaint mantras, like "network effects," "first-mover advantage," "Internet time," and, most poignant of all, "build it and they will come."
By seeming to assure that the value of a network would increase quadratically--proportionately to the square of the number of its participants--while costs would, at most, grow linearly, Metcalfe's Law gave an air of credibility to the mad rush for growth and the neglect of profitability. It may seem a mundane observation today, but it was hot stuff during the Internet bubble.
Remarkably enough, though the quaint nostrums of the dot-com era are gone, Metcalfe's Law remains, adding a touch of scientific respectability to a new wave of investment that is being contemplated, the Bubble 2.0, which appears to be inspired by the success of Google. That's dangerous because, as we will demonstrate, the law is wrong. If there is to be a new, broadband-inspired period of telecommunications growth, it is essential that the mistakes of the 1990s not be reprised.
The law was named in 1993 by George Gilder, publisher of the influential Gilder Technology Report . Like Moore's Law, which states that the number of transistors on a chip will double every 18 to 20 months, Metcalfe's Law is a rough empirical description, not an immutable physical law. Gilder proclaimed the law's importance in the development of what came to be called "the New Economy."
Soon afterward, Reed E. Hundt, then the chairman of the U.S. Federal Communications Commission, declared that Metcalfe's Law and Moore's Law "give us the best foundation for understanding the Internet." A few years later, Marc Andreessen, who created the first popular Web browser and went on to cofound Netscape, attributed the rapid development of the Web--for example, the growth in AOL's subscriber base--to Metcalfe's Law.
There was some validity to many of the Internet mantras of the bubble years. A few very successful dot-coms did exploit the power of the Internet to provide services that today yield great profits. But when we look beyond that handful of spectacular successes, we see that, overall, the law's devotees didn't fare well. For every Yahooï»' or Google, there were dozens, even hundreds, of Pets.coms, EToys, and Excite@Homes, each dedicated to increasing its user base instead of its profits, all the while increasing expenses without revenue.
Because of the mind-set created, at least in small part, by Metcalfe's Law, even the stocks of rock-solid companies reached absurd heights before returning to Earth. The share price of Cisco Systems Inc., San Jose, Calif., for example, fell 89 percent--a loss of over US $580 billion in the paper value of its stock--between March 2000 and October 2002. And the rapid growth of AOL, which Andreessen attributed to Metcalfe's Law, came to a screeching halt; the company has struggled, to put it mildly, in the last few years.
Metcalfe's Law was over a dozen years old when Gilder named it. As Metcalfe himself remembers it, in a private correspondence with one of the authors, "The original point of my law (a 35mm slide circa 1980, way before George Gilder named it...) was to establish the existence of a cost-value crossover point--critical mass--before which networks don't pay. The trick is to get past that point, to establish critical mass." [See " " a reproduction of Metcalfe's historic slide.]
Metcalfe was ideally situated to watch and analyze the growth of networks and their profitability. In the 1970s, first in his Harvard Ph.D. thesis and then at the legendary Xerox Palo Alto Research Center, Metcalfe developed the Ethernet protocol, which has come to dominate telecommunications networks. In the 1980s, he went on to found the highly successful networking company 3Com Corp., in Marlborough, Mass. In 1990 he became the publisher of the trade periodical InfoWorld and an influential high-tech columnist. More recently, he has been a venture capitalist.
The foundation of his eponymous law is the observation that in a communications network with n members, each can make ( n –1) connections with other participants. If all those connections are equally valuable--and this is the big "if" as far as we are concerned--the total value of the network is proportional to n ( n –1), that is, roughly, n2. So if, for example, a network has 10 members, there are 90 different possible connections that one member can make to another. If the network doubles in size, to 20, the number of connections doesn't merely double, to 180, it grows to 380--it roughly quadruples, in other words.
If Metcalfe's mathematics were right, how can the law be wrong? Metcalfe was correct that the value of a network grows faster than its size in linear terms; the question is, how much faster? If there are n members on a network, Metcalfe said the value grows quadratically as the number of members grows.
We propose, instead, that the value of a network of size n grows in proportion to n log( n ). Note that these laws are growth laws, which means they cannot predict the value of a network from its size alone. But if we already know its valuation at one particular size, we can estimate its value at any future size, all other factors being equal.
The distinction between these laws might seem to be one that only a mathematician could appreciate, so let us illustrate it with a simple dollar example.
Imagine a network of 100 000 members that we know brings in $1 million. We have to know this starting point in advance--none of the laws can help here, as they tell us only about growth. So if the network doubles its membership to 200 000, Metcalfe's Law says its value grows by (200 0002/100 0002) times, quadrupling to $4 million, whereas the n log( n ) law says its value grows by 200 000 log(200 000)/100 000 log(100 000) times to only $2.1 million. In both cases, the network's growth in value more than doubles, still outpacing the growth in members, but the one is a much more modest growth than the other. In our view, much of the difference between the artificial values of the dot-com era and the genuine value created by the Internet can be explained by the difference between the Metcalfe-fueled optimism of n2 and the more sober reality of n log( n ).
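The arithmetic in that dollar example is easy to check with a few lines of Python. The snippet below reproduces the $4 million and roughly $2.1 million figures; the helper names are ours, and the $1 million base value at 100 000 members is the given starting point, not something the growth laws can predict.

```python
from math import log


def growth_factor(value_fn, n_old, n_new):
    """How much a value law says the network's worth multiplies as it grows."""
    return value_fn(n_new) / value_fn(n_old)


metcalfe = lambda n: n * n       # value grows like n^2
nlogn = lambda n: n * log(n)     # value grows like n log(n)

base_value = 1_000_000  # $1 million at 100,000 members (given, not derived)
for name, fn in [("Metcalfe n^2", metcalfe), ("n log n", nlogn)]:
    print(name, round(base_value * growth_factor(fn, 100_000, 200_000)))
# Metcalfe n^2 -> 4,000,000 ; n log n -> about 2,120,000 (the $2.1 million above)
```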
This difference will be critical as network investors and managers plan better for growth. In North America alone, telecommunications carriers are expected to invest $65 billion this year in expanding their networks, according to the analytical firm Infonetics Research Inc., in San Jose, Calif. As we will show, our rule of thumb for estimating value also has implications for companies in the important business of managing interconnections between major networks.
The increasing value of a network as its size increases certainly lies somewhere between linear and exponential growth [see diagram]. The value of a broadcast network is believed to grow linearly; it's a relationship called Sarnoff's Law.