Dataset columns: id (string, length 30-34), text (string, length 0-75.5k), industry_type (string class, 1 value)
2014-23/3291/en_head.json.gz/33028
SIAM International Conference on Data Mining

Temporal Dynamics and Information Retrieval
Susan T. Dumais, Microsoft Research, USA

Many digital resources, like the Web, are dynamic and ever-changing collections of information. However, most information retrieval tools developed for interacting with Web content, such as browsers and search engines, focus on a single static snapshot of the information. In this talk, I will present analyses of how Web content changes over time, how people re-visit Web pages over time, and how re-visitation patterns are influenced by changes in user intent and content. These results have implications for many aspects of information retrieval and management, including crawling policy, ranking and information extraction algorithms, result presentation, and systems evaluation. I will describe a prototype that supports people in understanding how the information they interact with changes over time, and new retrieval models that incorporate features about the temporal evolution of content to improve core ranking. Finally, I will conclude with an overview of some general challenges that need to be addressed to fully incorporate temporal dynamics in information retrieval and information management systems.

Biography

Susan Dumais is a Principal Researcher and manager of the Context, Learning and User Experience for Search (CLUES) Group at Microsoft Research. Prior to joining Microsoft Research, she was at Bellcore and Bell Labs for many years, where she worked on Latent Semantic Indexing (a statistical method for concept-based retrieval), interfaces for combining search and navigation, and organizational impacts of new technology. Her current research focuses on user modeling and personalization, context and information retrieval, temporal dynamics of information, interactive retrieval, and novel evaluation methods. She has worked closely with several Microsoft groups (Bing, Windows Desktop Search, SharePoint Portal Server, and Office Online Help) on search-related innovations. Susan has published more than 200 articles in the fields of information science, human-computer interaction, and cognitive science, and holds several patents on novel retrieval algorithms and interfaces. Susan is also an adjunct professor in the Information School at the University of Washington. She is Past-Chair of ACM's Special Interest Group in Information Retrieval (SIGIR), and serves on several editorial boards, technical program committees, and government panels. She was elected to the CHI Academy in 2005, named an ACM Fellow in 2006, received the SIGIR Gerard Salton Award for Lifetime Achievement in 2009, and was elected a member of the National Academy of Engineering (NAE) in 2011.
Computer
2014-23/3291/en_head.json.gz/34429
Development Resources/New Committer Handbook

This page is intended for new Eclipse committers, and contributors who plan to become committers. Having said that, there should be a little something in here for everybody. We are attempting to keep this page as concise as possible (using plain English), while conveying as much information as possible. This document is not the definitive source of information about being an Eclipse committer. It is just the starting point: it provides many links to more information. As a committer, you must be familiar with the following documents: the Eclipse Development Process and the Eclipse Intellectual Property Due Diligence Process. The Eclipse Resources page is intended as a reference into the rest of the documentation. If you have a question, this is a good place to start. If you can't find the help you need there, you should ask your project lead or PMC. We are continually trying to improve this page. If you think that something can be improved, or something should be added, please let us know by opening a bug against Community/Process. Or you can just edit this page: it is a wiki, after all!

Principles

Several important principles provide a foundation for the Eclipse Development Process.

Quality

Eclipse Quality means extensible frameworks and exemplary tools developed in an open, inclusive, and predictable process involving the entire community. From the "consumption perspective," Eclipse Quality means good for users (exemplary tools - cool/compelling to use, indicative of what is possible) and ready for plug-in developers (deliver usable building blocks - with APIs). From the "creation perspective," Eclipse Quality means working with a transparent and open process, open and welcoming to participation from technical leaders, regardless of affiliation. (Copied from Eclipse Quality.)

IP Cleanliness

Intellectual property (IP) cleanliness is a critical issue for Eclipse projects. IP is pretty much any artifact that is made available from an eclipse.org server (this includes source code management systems, the website, and the downloads server). Artifacts include (but are not limited to) such things as source code, images, XML and configuration files, documentation, and more; pretty much anything that a project makes available to the community in general. Fairly strict rules govern the way that we manage IP and your responsibilities as a committer (the links at the top of this page provide a lot more information). Code produced by an Eclipse project is used by organizations to build products. These adopters of Eclipse technology need to have some assurance that the IP they're basing their products on is safe to use.
"Safe" basically means that the organization or individuals who claim copyright of the code are the legitimate copyright holders, and the copyright holders legitimately agree to make the code available under the license(s) that the project works under. As a committer, you must be careful that you do not copy code and inadvertently claim it as your own. Transparency A project is said to be transparent if the project team makes it easy for a member of the community, who is not part of the core team, to participate in all decisions that affect the project. The project should make it easy for a non-team member to find all the information used and the decisions made by the project team. Copied from WTP Permeability and Transparency Checklist Openness A project is said to be permeable if the project team is receptive to new ideas from the community and welcomes new committers to its team. This is a measure of the 'responsiveness', 'friendliness' or 'sociability' of the project towards a participant who is not part of the project team. Copied from WTP Permeability and Transparency Checklist Terminology/Who's Who? Top-Level Project Top-level projects are "container projects" as defined by the Eclipse Development Project. As a container project, a top-level project does not contain code. Rather, a top-level project contains other projects. Each top-level project defines a charter that, amoung other things defines a scope for the types of projects that it contains. Top-level projects are managed by a Project Management Committed (described below). Project There are, essentially, two different types of projects at Eclipse (as defined in the Eclipse Development Process). "Container" projects are holders of other projects. You can think of a container project as a grouping of projects that make some logical sense together. Projects can be arbitrarily nested. A container project can itself contain container projects. Container projects do not include code and do not have committers. "Operating" projects are where the real work happens. Each operating project has code and committers. Operating projects may, but do not necessarily, have a dedicated website (many operating projects share a website through their container project). As a new committer, you are most likely associated with an operating project. Operating projects are sometimes referred to as "sub-projects" or as "components". The alternate terms are still used by some of our more seasoned committers, and in parts of our documentation. The Eclipse Development Process, however, treats the terms project, sub-project, and component as equivalent. Projects can be arbitrarily nested, forming a tree. The root of the tree is a single top-level project. There are numerous project-specific resources (including a list of all projects at Eclipse) on the Eclipse Projects Gateway page. Committer A committer is a software developer who has the necessary rights to write code into the project's source code repository. Committers are responsible for ensuring that all code that gets written into the project's source code repository is of sufficient quality. Further, they must ensure that all code written to an eclipse.org source code repository is clean from an intellectual property point of view. This is discussed with more detail below. Contributor Contributions to an Eclipse project take many forms. As a committer, you should do everything possible to encourage contribution by members of the community that surrounds your project. 
Contributions take the form of code, input into your project's wiki pages, answering questions on newsgroups, and more. Be aware that code contributions from outside of the project are best accepted through Bugzilla records. By attaching code to a Bugzilla record, the contributor implicitly grants rights to use the code. Those contributions are subject to our IP policy. Some of our documentation refers to contributors as "developers".

Project Management Committee (PMC)

Each top-level project is governed by a PMC. The PMC has one or more leads along with several members. The PMC has numerous responsibilities, including the ultimate approval of new committers, and approval of intellectual property contributions. Effectively, the PMC provides oversight for each of the projects that are part of the top-level project. If you have a question that your project lead cannot answer, ask the PMC.

Project Lead (PL)

The project lead is more of a position of responsibility than one of power. The project lead is immediately responsible for the overall well-being of the project. They own and manage the project's development process, coordinate development, facilitate discussion amongst project committers, ensure that the Eclipse IP policy is being observed by the project, and more. If you have questions about your project, the Eclipse Development Process, or anything else, ask your project lead.

Architecture Council Mentor

The Eclipse Architecture Council (AC) is a body of battle-hardened Eclipse committers. All new projects are required to have a minimum of two mentors taken from the ranks of the AC. Your project mentors will help you find answers to any questions you may have about the Eclipse Development Process and life-in-general within the Eclipse community. If your mentor doesn't have an answer to your question, they can draw on the wisdom of the full AC and the EMO.

Eclipse Management Organization (EMO)

In the strictest sense, the EMO consists of the Eclipse Foundation staff and the Councils (Requirements, Architecture, and Planning). The EMO is responsible for providing services to the projects, facilitating project reviews, resolving issues, and more. The EMO is the maintainer of the Eclipse Development Process. The main point of contact with the EMO is by email at [email protected]. If you have a question that cannot be answered by your project lead, mentor, or PMC, ask the EMO.

Executive Director (EMO/ED)

Contribution Questionnaire (CQ)

Eclipse

Now this is a tough one. For most people in the broader community, "Eclipse" refers to a Java IDE based on the JDT project and assembled by the Eclipse Packaging Project. However, the term "Eclipse" is also used to refer to the Eclipse Foundation, the eclipse.org website, the community, the eco-system, and—of course—The Eclipse Project (which is just one of the top-level projects hosted by the Eclipse Foundation). Confusing?

Community

Eco-System

A commercial eco-system is a system in which companies, organizations, and individuals all work together for mutual benefit. There already exists a vast eco-system of companies that base significant parts of their business on Eclipse technology. This takes the form of including Eclipse code in products, providing support, and other services. You become part of an eco-system by filling the needs of commercial interests, being open and transparent, and being responsive to feedback.
Ultimately, being part of a commercial eco-system is a great way to ensure the longevity of your project: companies that build their business around your project are very motivated to contribute to your project. Architecture Council Planning Council My Foundation Portal The My Foundation Portal is the primary mechanism for committers to interact with the Eclipse Foundation. Using the portal, you can update your personal information, manage information about your project, nominate new committers, and more. New functionality is added on an ongoing basis. The portal presents functionality in the form of individual components. The set of components depends on the roles assigned to you. You can use either your Eclipse Bugzilla or your committer credentials to login. Some committer-specific functionality is only available when you login using your committer credentials. Licenses, Intellectual Property Due Diligence, and other Legal Stuff The Eclipse Legal page is the main resource for legal matters. Janet Campbell, Legal Counsel & Intellectual Property Team Manager for the Eclipse Foundation, presented a talk, IP for Eclipse Committers, at EclipseCon 2009 that provides a great overview of the IP policy and process at Eclipse. As an Eclipse Committer, you should be familiar with the Eclipse Public License (EPL). Further, you should be aware of the Eclipse Distribution License (EDL). By default all code authored for an Eclipse project is subject to the EPL. The BSD-style EDL license is used by some Eclipse projects which require dual-licensing along with the EPL. Use of this license by an Eclipse project is on a case-by-case basis and requires unanimous approval of the Board of Directors. Managing intellectual property is an important part of being an Eclipse Committer. All committers should be familiar with the Intellectual Property Due Diligence process. Any time you accept code from any party (committer or non-committer), you should consult this poster to determine what course of action to take. As a general rule, code created on an ongoing basis by committers for an Eclipse project can simply be committed into the project's source code repository. Any library authored by a third-party that you intend to distribute from any eclipse.org resource, including those licensed under the EPL or EDL, are subject to the process. The Guidelines for the Review of Third Party Dependencies provides some guidance on dealing with these third party libraries. Code contributions of other forms from a third-party should be made through Eclipse Bugzilla. This includes things like patches, lines of code, or files. Any contributions made through Bugzilla are subject to the Eclipse Terms of Use which, among other things, implicitly grants us the right to use the contribution. Contributions of all forms are subject to the process (not just source files). The entry-point into the Eclipse IP Due Diligence Process is the Contribution Questionnaire (CQ). CQs are tracked in a separate Bugzilla instance known as IPZilla. Only committers and other authorized individuals may access IPZilla. You should create a CQ using the My Foundation Portal. By using the portal, you are given an opportunity to locate similar existing CQs; by leveraging an existing CQ, you may be able to hasten the approval process. The CQ describes the contribution, including important information like the license under which it is distributed and identity of the various authors. 
Once your CQ is created, you must attach the source code for the library to the record. Do not attach binaries. A separate CQ is required for each library. Do not commit any contribution to your project's source code repository until the CQ is explicitly marked as approved and a comment is added by the IP Due Diligence Team that explicitly states that you can commit the contribution. An incubating project may make use of the Parallel IP Process. Using the Parallel IP Process, a contribution can be committed into a project's source code repository while the due diligence process is undertaken. If, at the end of the due diligence process, the contribution is rejected, it must be removed from the source code repository. The Parallel IP Process "How to" page provides some insight. Mature projects can take advantage of the parallel IP process for certain libraries. Projects are encouraged, where possible, to reuse libraries found in the Orbit project. Leveraging these libraries will help to reduce redundancy across Eclipse projects. If you are unsure about what you need to do or have other questions, ask your PMC. How do I get Help? Projects have at least one mailing list. The mailing list is a good place for developers on a project to ask questions of other developers. That is, the mailing list is typically intended for communication amongst developers on a project. Your project's home page should provide some help in identifying the mailing list. Alternatively, you can find the comprehensive list of all mailing lists here. Your project teammates are probably your best source of pertinent information. If you are working on an incubating project (i.e. a project that has not yet done a 1.0 release), there should be mentors assigned to your project to help with questions. Don't be afraid to use your mentors. You can identify the mentors on the project summary page (which should be accessible in an entry labeled "About this Project" in the left-menu on your project's web site). For communication with another project, you should go through the community Forums. If you're not sure which forum to post your question on, try posting on the newcomer group and someone there will point you in the right direction. Note that the forums are also available through NNTP if you prefer to use a newsreader. Retrieved from "http://wiki.eclipse.org/index.php?title=Development_Resources/New_Commmitter_Handbook&oldid=178058"
Computer
2014-23/3291/en_head.json.gz/35131
Metro: Last Light is heading to Linux and Mac
by: Russell

Linux and Mac users are about to get their own dedicated versions of Metro: Last Light...well, Mac users anyway, as Linux users will have to wait a little bit longer. Deep Silver and 4A Games recently confirmed that a Mac version of Metro: Last Light will be released on September 10th via the App Store and Steam, while a Linux version will be released later this year. Both versions will receive the same DLC packages as their PC and console counterparts. What's more, the Steam versions will support Steam Play, which means that no matter which version of Steam you have, you'll be able to access Metro: Last Light on your account whether you're playing on a PC, Mac, or Linux. The Season Pass, Faction Pack DLC, and Tower Pack DLC will be available at launch for the Mac release, and the upcoming Developer and Chronicles Packs will launch alongside the PC and console releases.

Metro: Last Light is coming to Mac and Linux - 4A Games handling development in-house

Larkspur, Calif., August 27, 2013 - Deep Silver and 4A Games today confirmed that a dedicated Mac version of Metro: Last Light will be released on September 10th, 2013 via the App Store and Steam. A Linux version is also in development, scheduled for release later in the year. "The Mac and Linux versions of Metro: Last Light are further testimony to the power and flexibility of the 4A Engine," said Oles Shishkovstov, Chief Technical Officer at 4A Games. "Development was handled in house by 4A games, and we are very happy with the results. We hope that Mac & Linux gamers will appreciate our efforts to create the best possible version for their machines." Metro: Last Light on Steam will support Steam Play, meaning that owners of any Steam version will automatically find the game added to their PC, Mac and Linux Steam libraries. Metro: Last Light for Mac and Linux will receive the same DLC packages as the PC and console versions. The Metro: Last Light Season Pass, and the Faction Pack and Tower Pack DLC will be available at launch for the Mac release, and the upcoming Developer and Chronicles Packs will release on Steam and the App Store alongside the PC and console releases. For more information about Metro: Last Light, visit http://www.enterthemetro.com, like us at Facebook.com/MetroVideoGame or follow us on Twitter @MetroVideoGame.

About Metro: Last Light

It Is the Year 2034. Beneath the ruins of post-apocalyptic Moscow, in the tunnels of the Metro, the remnants of mankind are besieged by deadly threats from outside - and within. Mutants stalk the catacombs beneath the desolate surface, and hunt amidst the poisoned skies above. But rather than stand united, the station-cities of the Metro are locked in a struggle for the ultimate power, a doomsday device from the military vaults of D6. A civil war is stirring that could wipe humanity from the face of the earth forever. As Artyom, burdened by guilt but driven by hope, you hold the key to our survival - the last light in our darkest hour... The plot of Metro: Last Light was written by Dmitry Glukhovsky, acclaimed author of the novel Metro 2033, which inspired the creation of both video games. Glukhovsky personally contributed the vast majority of dialogue in Metro: Last Light. Glukhovsky's novels "Metro 2033" and "Metro 2034" have sold more than two million copies worldwide. In 2013, his latest book "Metro 2035" will also be available outside of Russia.
The film rights for "Metro 2033" were bought by Metro-Goldwyn-Mayer (MGM), and the movie is currently in development.

About Deep Silver

Deep Silver develops and publishes interactive games for all platforms, seeking to deliver top-quality products that provide immersive game experiences driven by the desires of the gaming community. The company has published more than 200 games worldwide since 2003, including the best-selling zombie action Dead Island(TM) franchise, the critically-acclaimed Metro(TM): Last Light and the fourth title in the action adventure open world Saints Row series, Saints Row IV(TM). Deep Silver is a wholly-owned subsidiary of Koch Media, GmbH, and includes the renowned development studio Deep Silver Volition, based in Champaign, IL. For more information, please visit www.deepsilver.com or follow us on twitter at @deepsilver. © Deep Silver, a division of Koch Media GmbH, Austria

About 4A Games

4A Games is a computer game development studio based in Kiev, Ukraine. The studio was established in December 2005 by the veterans of Ukrainian game industry, with the aim of developing premium quality computer games for high-end PCs and next generation consoles supported by the in-house developed "4A Engine". At the heart of the company are around 80 talented designers, programmers, artists, sound specialists and writers, with years of experience in software development and computer games in particular.
Computer
2014-23/3291/en_head.json.gz/35873
Thread: Multiplayer - A waste of money

Posted by Fon:

http://www.joystiq.com/2012/08/29/sp...-waste-of-mon/

Originally Posted by Joystiq: Spec Ops: The Line lead designer Cory Davis slammed his game's multiplayer in a recent interview, describing it as a "low-quality Call of Duty clone in third-person" and a "waste of money." Davis told The Verge the outsourced mode was just a financially motivated "checkbox" for publisher 2K Games, and that the low number of multiplayer users, coupled with the mode's distinct tone and feel, casts "cancerous" aspersions on the whole game. Davis revealed 2K insisted on the shooter having multiplayer, but the mode was far from a priority for developer Yager, and went against Davis' vision for the game. Nonetheless, the mode was greenlighted and then outsourced to Darkside Studios. Darkside is a small developer most notable for designing Borderlands' fourth add-on, "Claptrap's New Robot Revolution." Davis is clearly furious with the results. "It sheds a negative light on all of the meaningful things we did in the single-player experience," Davis said. "The multiplayer game's tone is entirely different, the game mechanics were raped to make it happen, and it was a waste of money. No one is playing it, and I don't even feel like it's part of the overall package. It's another game rammed onto the disk like a cancerous growth, threatening to destroy the best things about the experience that the team at Yager put their heart and souls into creating."

Per the title, I want to make this into a general discussion regarding multiplayer. In response to the article itself, I haven't had the chance to play Spec Ops: The Line, but I have heard a handful of negative comments, mainly about the multiplayer experience, so hearing those starts to bum people out and some end up disregarding the game completely (not all, of course). Personally I feel like not every game needs multiplayer - something that is completely irrelevant to your ideas should never be implemented and will only bring disastrous results. It's wasted resources and money. Above all it could potentially degrade the game's overall quality. What are your thoughts on the article and/or multiplayer in every game?
Computer
2014-23/3292/en_head.json.gz/244
Skullgirls destroys funding goal, Lab Zero offer to help fellow developers

Last year's fantastic 2D fighter Skullgirls lost all financial support from their publisher, Autumn Games, due to some costly legal battles back in June. Many of the team were laid off and began work on the PC version and their first new character DLC in their own time, but it simply wasn't enough. As a last ditch effort, the Lab Zero team decided to run an IndieGogo campaign to raise enough money for them to be able to continue work on something they truly love – Skullgirls. Here's DarkZero's Skullgirls review. Originally aiming for $150,000, the goal was quickly reached, so the team put together a bunch of stretch goals that included several voice packs and a second new character, 'Big Band'. Once again the stretch goal was soon hit, and so it was pushed even further. The next big goal was to get a new mystery character that the fans would vote on from a given list of designs. Here's where it gets interesting… Mane6, the developers behind My Little Pony: Fighting Is Magic, which was an almost complete 2D fighter, needed some help. They were sent a cease and desist order from Hasbro and had to shut down production of their game. They'd almost spent two year
Computer
2014-23/3292/en_head.json.gz/1020
4cr Plays – Super TIME Force

Be kind, please rewind.
by Gabriel Turcotte-Dubé

http://www.4colorrebellion.com/archives/2014/05/23/4cr-plays-super-time-force/

If you ever played a game with an emulator and used the save-state feature, or really any game that allows you to save at any time, chances are you abused the save-states at least once to get past a hard level or boss. Save each time you make some progress, and reload compulsively every time you make a mistake, no matter how small. Well, Super TIME Force by Toronto-based developer Capybara feels like a game that was designed entirely around that "trick."

At first glance, Super TIME Force looks like your typical run-and-gun game in the vein of Contra, Metal Slug or Gunstar Heroes. Run from left to right, shoot everything that moves, destroy the glowing parts on the bosses, die a lot. However, if you die or screw up in any way, you can rewind time and restart at any point of the level, with your "old self" still there performing the same actions you did just before. You can even save that "old self" from death to gain its power. Soon, you end up with dozens of characters running around and shooting everything. It's like a massively-multiplayer cooperative game… where you're playing by yourself.

Don't misunderstand; this mechanic is not a cheat or an option for the less experienced players. The whole game is designed around it, and you NEED to use it to progress. Many collectibles require multiple characters "cooperating" together, bosses won't go down until many, many characters are shooting at them, and the very strict time limit for each level means you will often rewind to make sure you go through each part as fast as possible. And of course, you can't abuse the rewind indefinitely: there's a reasonably generous number of times you can use it in a level until it's Game Over. It takes some time to get used to this peculiar time mechanic and to understand how and when to use it efficiently, but once it "clicks," it's very rewarding. The game is almost as much a puzzle game as it is an action game, as you'll sometimes have to stop and think to figure out how to solve a problem with time manipulation.

However, while the time travel gameplay is great and inventive, it creates some (mostly minor) problems. The biggest of these is that the game becomes too confusing when you have a lot of characters on-screen. It's not really problematic for most of the game, but it makes the screen simply way too crowded during the boss fights. When you have nearly 30 characters on the same screen jumping and shooting everywhere, it's very hard to follow what is going on, and you die often because you simply can't distinguish the enemy's attacks among all the chaos. Some efforts have obviously been made by Capybara to minimize that confusion (different colors for the enemies' bullets, greyed-out "past selves", etc.), but during intense boss fights, it's still not enough.

The other notable problem is the strict time limit. It's there to push you to use the rewind feature to be as efficient and fast as possible, which is a good idea, but it can make the game needlessly tedious. It happened way too often that I felt I played well during a level, only to run out of time at the last stretch before the goal.
I went back in time 15-20 seconds earlier to try to "optimize" my gameplay and shave off a few seconds on the timer, but after doing it again and again, it still wasn't enough. I lost too much time at the beginning of the level, and I pretty much had to restart from scratch because of that couple of missing seconds. It's not so bad, as replaying a level is still fun and you can do better because of what you learned on your first playthrough, but having to restart at the last moment of a level because of a time-out happens too often and it becomes annoying fast.

As you can see from the screenshots, Super TIME Force is clearly retro-styled. The pixel-based graphics are detailed and stylized, and there's a lot of visual variety between levels. This "old-school" theme can also be felt in the setting of the game itself. The whole game's tone is reminiscent of the "radical" attitude of the 80s and early 90s. For example, you can play as a skateboarding dinosaur in a level that takes place in a dialup-era website. Also, the Super TIME Force's goal is only to make the world "more awesome" by doing things like stopping the dinosaur extinction or bringing back Atlantis to be the 51st US state. It's intentionally very forced, so it'll either make you cringe or laugh out loud. I was (mostly) on the laughing side.

Even with a couple of problems inherent to the time-travelling mechanic, Super TIME Force is totally worth playing. It's rare to see a game that asks you to think in a completely different way like this. While it's disconcerting at first, planning more and more complex strategies with multiple versions of yourself is a unique and very enjoyable gaming experience. Oh, and there's a cameo of Tiny, the mascot of the awesome Tiny Cartridge!

Tags: capy, Indie, xbox 360, Xbox One

Comment from Mark: This game looks really good. I've seen it played by the best friends, and its game play is amazing, just as its writing.
Computer
2014-23/3292/en_head.json.gz/2048
DoNotCall.gov: Do Not Call it up With Firefox
By Roger V. Skalbeck, Published on September 29, 2007

On April 4th of this year, somebody forwarded me email that said in part "12 days from today, all cell phone numbers are being released to telemarketing companies and you will start to receive sale calls. .....YOU WILL BE CHARGED FOR THESE CALLS." I immediately assumed it was a hoax. If it wasn't, I was going to be worried. Instead of checking a rumor site like Snopes.com, I decided to go directly to the Do Not Call registry, which I knew was run by the Federal Trade Commission. Not knowing the best URL for the site, I typed "Do Not Call" on Google and chose "I'm Feeling Lucky." I didn't feel so lucky when I got to a page that looked like this:

Focusing on the text area at the top, take a look at how it displays in the Firefox browser:

Here you can't read the text leading up to the link "The Truth About Cell Phones and the Do Not Call Registry." Upon seeing this, I immediately thought I had gotten to a spoofed or phishing site. Since April Fool's Day had just passed, my second thought was that it was simply a good joke or parody. Beyond the inexcusable display problem, here are some reasons why the site didn't seem "official":

- The Do Not Call Registry is managed by the Federal Trade Commission, but there's no FTC logo on the site.
- Visually, it looks nothing like the FTC.gov website.
- There's no link to the FTC.
- It doesn't have a clear text indication that it's a United States Government site, and it just says "National Do Not Call Registry" without indicating which nation it is for.

Admittedly, the domain name ends in ".gov", which seemed to be the only reliable indicator that this is indeed an official U.S. government website. My biggest concern was that the message about "The Truth about Cell Phones and the Do Not Call Registry" cannot be read at all. I repeat: If you're using any browser other than Internet Explorer, you can't read a very simple and important message on the homepage. To show the page in several different browsers, following are screenshots created with the free online Browsershots.org site. At this site, you can submit a single URL to get screenshots in up to sixteen different web browsers. Interestingly, the site looks fine in Internet Explorer 5.5, which is often not the case. It doesn't look good in any non-Microsoft browser.

(Screenshots: Opera 9.23, Konqueror 3.5)
Computer
2014-23/3292/en_head.json.gz/2057
The community will nominate local nonprofit organizations to receive a Grand Prize of a balanced website for the organization valued at approximately $10,000. Following a 3-week nomination period, an LRS committee will select up to 10 nominees as finalists. Then the community will cast votes during a 2-week voting process for their favorite organization from among the ten finalists to receive the Grand Prize. A first runner-up organization will receive an LRS Social Media Package valued at approximately $500. A second runner-up organization will receive an online photo gallery valued at approximately $250.

QUALIFICATIONS OF NOMINEES

In order to be eligible for consideration as one of the ten finalists, the nominee must:

- Be a non-profit, 501(c)(3) organization.
- Provide services within one or more of the following designated counties in Central Illinois: Adams, Bond, Brown, Cass, Champaign, Christian, Coles, Dewitt, Douglas, Effingham, Fayette, Logan, Macon, Macoupin, Mason, McLean, Menard, Montgomery, Morgan, Moultrie, Peoria, Piatt, Sangamon, Schuyler, Scott, Shelby, or Tazewell.
- Have an existing domain name and live website.
- Not be a current LRS Web Solutions customer.
- Not have a member of LRS senior management as a board member or officer.
- Be willing to participate and receive a newly designed website in February 2012.
- Be willing and able to communicate with LRS between November 2011 - February 2012 regarding content, imagery, navigation, and functionality of their website.
- Be agreeable to abide by timelines set forth by LRS.
- Sign a participation agreement including agreement to allow use of their name and website for LRS promotional purposes if the nominee is a winner.

The top three finalists from the 2010 contest are ineligible for the 2011 contest.

Nominations will be accepted from September 15 through October 5, 2011. Nominees can nominate themselves, or third parties can nominate organizations. If a third party nominates an organization, the nomination form still must be completed by a representative of the organization in order for the nomination to be accepted. LRS will contact any organization nominated by a third party and notify them of the process. Nominations must be made online only at http://www.lrs.com/lrswebsolutions/makeover. The finalists who advance for voting will be selected from the list of eligible nominees by a selection committee established by LRS. All finalists will be notified by phone on or before October 7, 2011, and the list of finalists will be posted on the contest website.

SELECTION OF WINNER

Winners will be selected based on the total number of votes from the popular vote. Voting will be open from October 17 - October 30, 2011. (Early bird voting will be available for NHRA promotion.) It is the responsibility of each finalist to promote itself during the voting timeframe. LRS will provide all finalists with a packet of promotional items including a graphic and URL link for the finalists' websites that will link to the online voting forum. A specific email address can only vote one time per day. Voting will be tracked by a valid email checker in order to enforce this requirement. LRS reserves the right to determine the validity of all votes. In the event of a tie, LRS reserves the right to award prizes based on the recommendation of a selection committee.
GRAND PRIZE: There will be one Grand Prize winner who will receive a new website for their organization valued at approximately $10,000.

TWO RUNNER-UP PRIZES: There will be two runner-up prizes. The 1st runner-up will receive a free Social Media Package for their organization valued at approximately $500. The 2nd runner-up will receive a free online photo gallery valued at approximately $250.

LRS reserves the right to cancel the contest for any reason at any time.

ALLOCATION OF PRIZES

All winners will be notified by phone by November 1, 2011. All remaining finalists will be notified by email. In order to receive their prizes, all winners must sign and return an LRS Web Solutions contract on or before November 9, 2011. If LRS does not receive the signed agreement by that date, LRS reserves the right to allocate the prize to the finalist with the next highest number of votes. Winners are responsible for paying taxes, if any, on the prize value. Winners should consult their own tax professionals regarding the taxability of any prize. Winners will designate a qualified contact for this contest to include the Director, Public Relations Director, Marketing Manager, or any similar position within the organization. The Grand Prize website redesign/restructure will follow LRS Web Solutions project management protocol. The estimated website reveal date for the Grand Prize winner is February 2012 (subject to change). The Grand Prize website redesign will consist of up to $10,000 in time and resources for a balanced website design. Any requests by the Grand Prize winner for services other than those named above or for services or resources in excess of $10,000 will be charged at LRS' standard prices. In addition to the $10,000 website, the Grand Prize winner can choose to host their website with LRS at no cost for one year beginning on the date the website goes live. Any hosting service after the initial year will be by mutual agreement of the winner and LRS. However, the Grand Prize winner is under no obligation to host their website with LRS. The Big Website Makeover website is optimized for IE8, IE9, Safari, Firefox, and Mozilla. Older browsers may not interpret the website accurately.

ESTIMATED SCHEDULE

Nominations will be accepted September 15 - October 5, 2011. Voting on finalists shall be open from October 17 - 30, 2011. The winners will be announced November 1, 2011. Progress/status updates will be provided to the Grand Prize winner and the public via Facebook and emails. The Grand Prize website will be completed by February 2012 for a February 2012 reveal.
Computer
2014-23/3292/en_head.json.gz/2289
What Is Ethernet?

Ethernet is a standard of network communication using twisted pair cable, although coaxial cable was used in older versions of Ethernet. Ethernet was developed in 1973 by Bob Metcalfe at Xerox. It is the most widely used standard in network communication and runs at 10 to 1000 megabits per second (Mbps), with faster versions on the horizon. The IEEE standard 802.3 is used for Ethernet, but it has several different types or versions, starting with the original 10Base5 version. The 10 stands for 10 Mbps and "Base" describes the baseband communications it uses. The 5 stands for a maximum distance of 500 meters, which is how far the signal can travel before having to be repeated or regenerated. This type of Ethernet used coaxial wiring instead of the twisted pair cabling used by the newer versions. Other types of Ethernet include 10Base2, 10BaseT and 100BaseT, which is the most common type in use today. 100BaseT offers 100 Mbps with a distance of 100 meters and uses twisted pair cabling.
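Because the tip explains the naming scheme mechanically (speed, "Base", then a distance or medium code), a tiny illustrative sketch can make it concrete. The Python below was written for this page; the suffix table is an assumption limited to the variants named above, not an authoritative standards list.

```python
# Minimal sketch: decode IEEE 802.3 style names such as "10Base5" or "100BaseT".
# Suffix meanings cover only the variants mentioned in this tip.
SUFFIXES = {
    "5": "coaxial cable, 500 m maximum segment length",
    "2": "thin coaxial cable, roughly 185 m segment length",
    "T": "twisted pair cable, 100 m maximum distance",
}

def decode_ethernet_name(name: str) -> dict:
    """Split e.g. '100BaseT' into speed, signaling, and medium."""
    speed, _, suffix = name.partition("Base")
    return {
        "speed_mbps": int(speed),           # "10" -> 10 Mbps, "100" -> 100 Mbps
        "signaling": "baseband",            # "Base" = baseband signaling
        "medium": SUFFIXES.get(suffix, "unknown suffix: " + suffix),
    }

if __name__ == "__main__":
    for std in ("10Base5", "10Base2", "10BaseT", "100BaseT"):
        print(std, decode_ethernet_name(std))
```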
Computer
2014-23/3292/en_head.json.gz/2976
wireless mesh network - Computer Definition

(1) A network that relies on all nodes to propagate signals. Although the wireless signal may start at some base station (access point) attached to a wired network, a wireless mesh network extends the transmission distance by relaying the signal from one computer to another. Used on the battlefield to provide path diversity, it is also used for sensor networks and personal computers. See mobile ad hoc network, 802.11 and wireless LAN.

(Illustration: A Wireless Mesh for Users - When laptops are set up in a wireless mesh, it is called an "ad hoc" network.)

(2) A network that provides Wi-Fi connectivity within an urban or suburban environment. It comprises "mesh routers," which are a combination base station (access point) and router in one device. Also called "mesh nodes," they are typically installed on street light poles, from which they obtain their power.

Access Point and Backhaul Router

Like any Wi-Fi access point, the access point in the mesh router communicates with the mobile users in the area. The backhaul side of the device relays the traffic from router to router wirelessly until it reaches a gateway that connects to the Internet or other private network via a wired or wireless connection.

Routing Algorithms

A major benefit of wireless mesh networks is path diversity, which provides many routes to the central network in case one of the routers fails or its transmission path is temporarily blocked. The choice of routing algorithm is critical, and numerous mesh algorithms have been used over the years. Mesh routers can employ one, two or three radios. A single-radio router shares bandwidth between users and the backhaul. If two radios are used, one is dedicated to the frontside clients and the other to the backhaul. In three-radio routers, such as the systems from BelAir Networks (www.belairnetworks.com) and MeshDynamics (www.meshdynamics.com), two radios are used for the backhaul and can transmit and receive simultaneously over different Wi-Fi channels. See 802.11.

(Illustration: The Mesh Topology - This simulated wireless mesh is overlaid onto an aerial view of a metropolitan area, showing how the mesh routers are situated in a typical environment. Image courtesy of Tropos Networks, Inc., www.tropos.com)

(Illustration: On the Pole - Depending on foliage and topography, between 10 and 20 mesh routers are mounted on light poles or similar locations per square mile. In this image, the man is making a Wi-Fi voice call between his VoWi-Fi phone and a BelAir200 Wireless Multi-Service Switch Router. Image courtesy of BelAir Networks Inc., www.belairnetworks.com)
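To make the path-diversity point concrete, here is a minimal sketch (Python, written for this entry; the node names, the link table, and the use of plain breadth-first search are illustrative assumptions, not any vendor's actual mesh algorithm). It finds a multi-hop route from a mesh router to the nearest gateway and shows that traffic can still reach a gateway when one router fails.

```python
from collections import deque

# Hypothetical mesh: each router relays for its neighbors; G1 and G2 are wired gateways.
LINKS = {
    "A": ["B", "C"], "B": ["A", "D", "G1"], "C": ["A", "D"],
    "D": ["B", "C", "G2"], "G1": ["B"], "G2": ["D"],
}
GATEWAYS = {"G1", "G2"}

def route_to_gateway(start, links, gateways, failed=frozenset()):
    """Breadth-first search for the shortest hop path from `start` to any gateway."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in gateways:
            return path
        for nxt in links.get(node, []):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no gateway reachable

print(route_to_gateway("A", LINKS, GATEWAYS))                   # e.g. ['A', 'B', 'G1']
print(route_to_gateway("A", LINKS, GATEWAYS, failed={"B"}))     # falls back to ['A', 'C', 'D', 'G2']
```

Real mesh routing protocols weigh link quality and congestion rather than just hop count, but the fallback behavior when a node drops out is the same basic idea.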
Computer
2014-23/3292/en_head.json.gz/3070
Windows 7 UAC flaws and how to fix them

A number of security flaws have been found in Windows 7's streamlined UAC— …

Feb 9, 2009 12:00 am UTC

(Image caption: The Windows 7 UAC Slider)

Unlike many, I'm a big fan of Vista's User Account Control. Truth is, I don't get a lot of prompts asking me to elevate, and those that I do get are legitimate. Sure, the implementation isn't perfect; there are some scenarios that cause a rapid proliferation of prompts that are a little annoying (such as creating a folder in a protected location in Vista RTM), and there are even a few places where it forces elevation unnecessarily, but on the whole I think it's a good feature.

The basic purpose of UAC is to annoy you when your software needs Admin privileges. The reason for this is simple: a lot of Windows software demands Admin privileges not because it needs to be privileged for everything it does, but rather because it was the quickest, easiest way for the developer to do some minor task. For example, games with the PunkBuster anti-cheat system used to demand Administrator privileges so that PunkBuster could update itself and monitor certain system activity. This was bad design because it meant that the game was then running with Administrator privileges the whole time—so if an exploit for the game's network code was developed, for example, that exploit would be able to do whatever it liked. The solution to this kind of problem is to split the application up in one way or another. In the PunkBuster case, the privileged parts were split into a Windows service (which has elevated privileges all the time), leaving the game itself running as a regular user. There are a number of other approaches to tackling the same problem, but in general they all require an application to be restructured somewhat so that privileged operations can be separated from non-privileged ones.

As well as this "annoyance" role, UAC also provides a warning when software unexpectedly tries to elevate its privileges. UAC has heuristics to detect applications that "look like" installers, and it also traps important system utilities like Registry Editor. Though Microsoft has cited this kind of behavior as a benefit of UAC, the company has also said that UAC is not a "security boundary." That is to say, if a malicious program figures out a way of elevating without a UAC prompt (or by tricking the user into agreeing to the UAC prompt) then that's not a security vulnerability. If you want real security with UAC you have to run as a regular user and enter a username and password to elevate—the Admin approval click-through mode (the mode that's the default for the first user account created on any Vista system) is not intended to be secure.

The winds of change are blowing

Why bring this up? Well, first of all, Windows 7 brings some changes to UAC to try to reduce the number of prompts that Administrators see. The basic idea is that if you make a change through one of the normal Windows configuration mechanisms—Control Panel, Registry Editor, MMC—then you don't get a prompt asking you to elevate. Instead, the system silently and automatically elevates for you. Third party software will still trigger a prompt (to provide the warning/notification that it's raising its privileges), but the built-in tools won't. In this way, you don't get asked to confirm deliberate modifications to the system; the prompts are only for unexpected changes.
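For readers who want to see what this default looks like on their own machine, the slider's position is reflected in a few registry values. The sketch below is a read-only diagnostic written for this article in Python (it is not code from Microsoft or from the exploits discussed here); the value meanings in the comments reflect commonly documented behavior and should be treated as assumptions rather than a specification.

```python
import winreg  # standard library on Windows

# UAC policy values live under this key (commonly documented location).
KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

def read_uac_settings():
    """Read, but do not modify, the registry values behind the UAC slider."""
    settings = {}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        for name in ("EnableLUA", "ConsentPromptBehaviorAdmin", "PromptOnSecureDesktop"):
            try:
                settings[name], _ = winreg.QueryValueEx(key, name)
            except FileNotFoundError:
                settings[name] = None  # value not present on this system
    return settings

if __name__ == "__main__":
    # EnableLUA: 0 = UAC off entirely, 1 = UAC on.
    # ConsentPromptBehaviorAdmin: 2 = always prompt for consent ("Vista-style"),
    #   5 = prompt only for non-Windows binaries (the Windows 7 default discussed here),
    #   0 = never prompt, elevate silently.
    # PromptOnSecureDesktop: 1 = show the prompt on the dimmed secure desktop, 0 = normal desktop.
    print(read_uac_settings())
```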
In my naïveté, I initially assumed that perhaps the differentiation was made according to where the action initiated; keyboard and mouse input (i.e., user actions) rather than something more simplistic like trusting particular applications. After all, the computer knows that a keystroke or mouse click originated in the hardware (because a driver gets to handle it), so it can easily tell what's real and what's not. A trusted application, however, could be started up by a malicious program and made to do bad things. So surely that wasn't the route Redmond chose?

It turns out that is indeed the route Redmond chose. For a number of years now, Microsoft has attached digital signatures to the programs and libraries that make up Windows; these signatures allow you to verify that a program did indeed come from Microsoft just by looking at the program's properties. In Windows 7, most programs with Microsoft signatures are trusted by UAC and won't cause a prompt. Instead, they just silently elevate. Unfortunately, Microsoft hasn't done anything to resolve the problem with this approach—trusted applications can be tricked into doing bad things. A few programs such as cmd.exe, PowerShell, and Windows Scripting Host don't auto-elevate (because they're designed to run user code, rather than operating system code), but they're the exception. Everything else elevates, and is vulnerable to being abused.

This was noticed last week by Long Zheng at I Started Something. Together with Rafael Rivera, he put together an exploit for this silent elevation. The exploit programmatically passed keystrokes to an Explorer window, navigating to the UAC Control Panel, and setting the slider to disabled. Because Explorer is trusted, changing the setting doesn't cause a prompt. Instead, UAC is silently disabled. Sending keystrokes is a bit crude, so a second attack was developed. This second attack was more flexible; instead of merely disabling UAC, it allowed any code to run elevated without prompting the user. It does this by using a Windows program called rundll32. rundll32 has been part of Windows for a long time; its purpose is, as the name might imply, to allow DLLs to be run, almost as if they were normal programs. The exploit simply puts the malicious code into a DLL and tells rundll32 to run it. rundll32 is trusted, so it elevates automatically.

Together, these attacks mean that Windows 7's default UAC configuration is virtually worthless. Silently bypassing the prompts and acquiring Administrator privileges is as easy as putting code into a DLL. Windows Vista doesn't have a problem, because it doesn't trust any programs; the problems are purely due to the changes Microsoft has made to UAC in the name of convenience in Windows 7.

Dismissing instead of fixing

Given the importance of security and UAC, one might expect Microsoft to take note of this problem and do something to fix it. Unfortunately, the company's first response was to dismiss the behavior as happening "by design."
After the second exploit was disclosed, on Thursday a company representative made a lengthy blog post reiterating that UAC is not a security boundary and that the behavior is by design—it's awfully convenient, you see, so it doesn't matter if it's actually useful as a security measure. In essence, the argument Microsoft has made is that if a user runs malicious programs as an Administrator and those programs do malicious things, that's not a security flaw, because the user ran the programs as an Administrator, and an Administrator is allowed—by design—to do things that can break the system. What this argument misses is that, until elevated, the malicious program can't do all the nasty things that malicious programs tend to do; it can't modify system files, make itself run on startup, disable anti-virus, or anything like that. Choosing to run a program without elevation is not consent to running it elevated. Maybe this needs to be fixed after all Things then took a turn for the weird. A second post was made admitting that, well, the company had "messed up" with the first post, in two ways. First and foremost, the new UAC behavior is badly designed; second, the whole issue was badly communicated by the company. The Windows 7 team will change the UAC behavior from that currently seen in the beta to address the first flaw. This fix won't be released for the current beta, though, and we'll have to wait until the Release Candidate or even RTM before we can see it in action. When fixed, the UAC control panel will be different in two important ways. It will be a high integrity process—which will prevent normal processes from sending simulated keystrokes to it—and changes to the UAC setting will all require a UAC confirmation, even if the current setting does not otherwise require it. Though this will resolve the first exploit, it looks like it will have no impact on the second, and since the second exploit was the more useful anyway (as it can be used to do anything, not just change the UAC setting), this fix doesn't seem extensive enough. There is some irony in Microsoft's behavior to use a trusted executable model; the company knows damn well that trusted executables aren't safe, and uses this very argument to justify the UAC behavior in Vista. In short, trusting executables is a poor policy, because so many executables can be encouraged to run arbitrary code. There is some irony in Microsoft's behavior to use a trusted executable model; the company knows damn well that trusted executables aren't safe, and uses this very argument to justify the UAC behavior in Vista. A system using trusted executables will only be secure if all of those executables are unable to run arbitrary code (either deliberately or through exploitation). That clearly isn't the case in Windows 7; rundll32's express purpose is to run arbitrary code! Removing the auto-elevation from rundll32 may be unpalatable, too. While non-elevating programs like Windows Scripting Host and PowerShell are used predominantly for user code, rundll32 is used mainly for operating system code. Removing its ability to elevate would, therefore, reintroduce some of the prompts that Windows 7 is trying to avoid. And even if rundll32 lost its ability to elevate automatically, there are almost certain to be other trusted programs that can be abused in a similar way. So, in spite of the most recent blog post, this remains a poorly-designed feature. UAC is now only as strong as the weakest auto-elevating program. It equally remains poorly communicated. 
Fundamentally, the defense that UAC is not a security boundary just doesn't cut the mustard. Microsoft sells UAC as providing "heightened security", as a way of limiting the "potential damage" that malware can do. To then argue that users should not, in fact, expect UAC to keep them secure is insulting. Moreover, even if the purpose of UAC is just to keep application writers honest, these exploits mean it fails to achieve even that. The simple fact is that it's a lot easier to restructure an application to make it use rundll32 to automatically elevate than it is to do things the Right Way. The unscrupulous or lazy software vendor who just wants to do the simplest thing possible to make the prompts go away will surely prefer that option to actually fixing their application. As someone who thinks that UAC is a good idea, these efforts to undermine it are terribly disappointing. As things currently stand, Windows 7's default UAC settings render it pointless in Admin approval mode, as it's so trivially bypassed. It might as well be turned off completely for all the good it does. To break a security feature—boundary or no boundary, it's sold as a security feature, it acts like a security feature, so I'm certainly going to treat it as a security feature—for the sake of convenience is a grave mistake. Peter Bright / Peter is Technology Editor at Ars. He covers Microsoft, programming and software development, Web technology and browsers, and security. He is based in Houston, TX.
计算机
2014-23/3292/en_head.json.gz/3072
Cross-platform game development and the next generation of consoles By the end of 2006, all three of the next-generation game consoles should be … by Jeremy Reimer - Nov 8, 2005 2:00 am UTC The gaming industry has come a long way since its humble beginnings more than thirty years ago. From a time when people were thrilled to see a square white block and two rectangular paddles on the screen to today, where gamers explore realistic three-dimensional worlds in high resolution with surround sound, the experience of being a gamer has changed radically. The experience of being a game developer has changed even more. In the early 1980s, it was typical for a single programmer to work on a title for a few months, doing all the coding, drawing all the graphics, creating all the music and sound effects, and even doing the majority of the testing. Contrast this to today, where game development teams can have over a hundred full-time people, including not only dozens of programmers, artists and level designers, but equally large teams for quality assurance, support, and marketing. The next generation of consoles will only increase this trend. Game companies will have to hire more artists to generate more detailed content, add more programmers to optimize for more complex hardware, and even require larger budgets for promotion. What is this likely to mean for the industry? This article makes the following predictions: The growing cost of development for games on next-gen platforms will increase demand from publishers to require new games to be deployed on many platforms. Increased cross-platform development will mean less money for optimizing a new game for any particular platform. As a result, with the exception of in-house titles developed by the console manufacturers themselves, none of the three major platforms (Xbox 360, PS3 and Nintendo Revolution) will end up with games that look significantly different from each other, nor will any platform show any real "edge" over the others. Many games will be written to a "lowest common denominator" platform, which would be two threads running on a single CPU core and utilizing only the GPU. All other market factors aside, the platform most likely to benefit from this situation is the Revolution, since it has the simplest architectural design. The PC, often thought to be a gaming platform on the decline, may also benefit. Conversely, the platforms that may be hurt the most by this are the PlayStation 3 and the XBox 360, as they may find it difficult to "stand out" against the competition. These are bold statements, and I don't expect it to make it without at least attempting to back it up with a more detailed argument, nor do I expect it to go unchallenged. In fact, I reserved a section at the end of the article where I describe all the problems I could find with my theory. So the fullness of my argument can best be understood by reading through to the conclusion and would encourage readers to do that prior to engaging in conversation in the discussion thread. I should also add that I fully expect all three next-generation platforms and also the gaming PC to survive and do reasonably well. The console wars will require at least another round after the next one before they have any sort of resolution. Ultimately, platforms themselves may reach a point where they no longer matter, as most content will be available on every gaming device. 
Our grandchildren may look at us strangely when we recall the intense and urgent battles between Atari and Intellivision, Nintendo and Sega, and Microsoft and Sony. At least we will have the satisfaction of knowing that we lived through the period when gaming went through some of its greatest advances. Download the PDF (This feature for Premier subscribers only.) Expand full story Jeremy Reimer / I'm a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP. @jeremyreimer
计算机
2014-23/3292/en_head.json.gz/3565
Surfing the Net By David Mendosa Last Update: January 16, 2001 What sort of a library has millions of books but no card catalogue? The World Wide Web has more information online than almost any library, but no card catalogue. Actually, that's something we can do without. No card catalogue could index the wealth of online information. Unlike card catalogues with generally no more than one card per book, what we need is a way to find all the concepts buried in millions of Web pages. Follow the links. This uncatalogued information on the Web is growing at a staggering rate. Two years ago Steve Lawrence and C. Lee Giles of the NEC Research Institute estimated in Science that the Web contained at least 320 million pages. The most recent NEC Research survey, released in January, says there are now more than 1 billion pages. Of these, 87 percent were in English. French was second with 2 percent. The easiest way to start searching the Web is to follow the links on just about every website. You might start with the site containing the most pages of information about diabetes. That's the American Diabetes Association's site, www.diabetes.org. The "Internet Resources" page there, www.diabetes.org/internetresources.asp, is a directory that I wrote for the ADA of about 50 of the most important diabetes sites. One of them is the International Diabetes Federation, www.idf.org. If 50 links to top diabetes sites aren't enough, how about 800? That's about how many sites I describe and link on the 15 Web pages of "On-line Diabetes Resources," www.mendosa.com/faq.htm. Sooner or later, however, you probably will want to find specific information about diabetes that you can't easily find by following these links. This is where directories and search engines come in. What the Web has instead of a card catalogue is a large choice of search engines and directories. About 85 percent of Web users use them. These tools let you search a large amount of information much more efficiently than has ever been possible in a library. The most popular tool to search the Web isn't a search engine at all. It's the Yahoo! directory, which itself is a website, www.yahoo.com, and the nearest thing we do have to a card catalogue of the Web. At Yahoo! about 150 editors have categorized more than 1 million websites. Search engines, on the other hand, use computers to build their indexes based on whatever pages are linked together. The major search engines are usually better for hard-to-find information than Yahoo!. Yet, their coverage varies significantly. Therefore, it is sometimes necessary to use two or more search engines. Formerly called All The Web, FAST Search, based in Norway, aims to index the entire Web. It was the first search engine to break the 200 million web page index milestone. Its address is www.alltheweb.com. Other major search engines include www.AltaVista.com, www.NorthernLight.com, and www.Google.com. With the wealth of information on the Web, you will probably find what you are looking for. These tools and a little persistence are all that it takes. This article appeared in Diabetes Voice, Bulletin of the International Diabetes Federation, March 2000, page 38.
计算机
2014-23/3292/en_head.json.gz/4255
Office of Technology Evaluation (OTE) Office of Technology Evaluation (OTE)Transshipment Best PracticesData Mining and System EffectivenessBIS/Census AES Compliance TrainingTechnology AssessmentsIndustrial Base AssessmentsOpportunities for Industrial Base Partnerships and Assistance ProgramsSection 232 InvestigationsTechnical Advisory Committees (TAC)Contact Information BIS Privacy Policy Statement | Print | The kinds of information BIS collects Automatic Collections - BIS Web servers automatically collect the following information: The IP address of the computer from which you visit our sites and, if available, the domain name assigned to that IP address; The type of browser and operating system used to visit our Web sites; The Internet address of the Web site from which you linked to our sites; and The pages you visit. In addition, when you use our search tool our affiliate, USA.gov, automatically collects information on the search terms you enter. No personally identifiable information is collected by USA.gov. This information is collected to enable BIS to provide better service to our users. The information is used only for aggregate traffic data and not used to track individual users. For example, browser identification can help us improve the functionality and format of our Web site. Submitted Information: BIS collects information you provide through e-mail and Web forms. We do not collect personally identifiable information (e.g., name, address, phone number, e-mail address) unless you provide it to us. In all cases, the information collected is used to respond to user inquiries or to provide services requested by our users. Any information you provide to us through one of our Web forms is removed from our Web servers within seconds thereby increasing the protection for this information. Privacy Act System of Records: Some of the information submitted to BIS may be maintained and retrieved based upon personal identifiers (name, e-mail addresses, etc.). In instances where a Privacy Act System of Records exists, information regarding your rights under the Privacy Act is provided on the page where this information is collected. Consent to Information Collection and Sharing: All the information users submit to BIS is done on a voluntary basis. When a user clicks the "Submit" button on any of the Web forms found on BIS's sites, they are indicating they are aware of the BIS Privacy Policy provisions and voluntarily consent to the conditions outlined therein. How long the information is retained: We destroy the information we collect when the purpose for which it was provided has been fulfilled unless we are required to keep it longer by statute, policy, or both. For example, under BIS's records retention schedule, any information submitted to obtain an export license must be retained for seven years. How the information is used: The information BIS collects is used for a variety of purposes (e.g., for export license applications, to respond to requests for information about our regulations and policies, and to fill orders for BIS forms). We make every effort to disclose clearly how information is used at the point where it is collected and allow our Web site user to determine whether they wish to provide the information. Sharing with other Federal agencies: BIS may share information received from its Web sites with other Federal agencies as needed to effectively implement and enforce its export control and other authorities. 
For example, BIS shares export license application information with the Departments of State, Defense, and Energy as part of the interagency license review process. In addition, if a breach of our IT security protections were to occur, the information collected by our servers and staff could be shared with appropriate law enforcement and homeland security officials. The conditions under which the information may be made available to the public: Information we receive through our Web sites is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. For example, BIS policy is to share information which is of general interest, such as frequently asked questions about our regulations, but only after removing personal or proprietary data. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request. How e-mail is handled: We use information you send us by e-mail only for the purpose for which it is submitted (e.g., to answer a question, to send information, or to process an export license application). In addition, if you do supply us with personally identifying information, it is only used to respond to your request (e.g., addressing a package to send you export control forms or booklets) or to provide a service you are requesting (e.g., e-mail notifications). Information we receive by e-mail is disclosed to the public only pursuant to the laws and policies governing the dissemination of information. However, information submitted to BIS becomes an agency record and therefore might be subject to a Freedom of Information Act request. The use of "cookies": BIS does not use "persistent cookies" or tracking technology to track personally identifiable information about visitors to its Web sites. Information Protection: Our sites have security measures in place to protect against the loss, misuse, or alteration of the information on our Web sites. We also provide Secure Socket Layer protection for user-submitted information to our Web servers via Web forms. In addition, staff is on-site and continually monitor our Web sites for possible security threats. Links to Other Web Sites: Some of our Web pages contain links to Web sites outside of the Bureau of Industry and Security, including those of other federal agencies, state and local governments, and private organizations. Please be aware that when you follow a link to another site, you are then subject to the privacy policies of the new site. Further Information: If you have specific questions about BIS' Web information collection and retention practices, please use the form provided.
计算机
2014-23/3292/en_head.json.gz/4848
The Forge Forums General Forge Forums Actual Play Meta-Gaming Technique: Outside reading for DP Topic: Meta-Gaming Technique: Outside reading for DP (Read 2653 times) Adam Riemenschneider I also go by Capulet on other Forums. Meta-Gaming Technique: Outside reading for DP So, this is the first time I've use this particular Technique. We're about 5 sessions into the campaign, and most players are taking me up on it. Results, so far, are positive. Here's the deal: In an effort to really get my players thinking more about the game setting, I've decided to award them Development Points (DP for short) for reading books/comics that have a lot in common with the game world. My reasoning is that the more fluent the player is with the setting, the more I am comfortable with them having an accelerated character in the game. So we're working off of a reading list I'd prepared. Also, I've left open the possibility that they can suggest new material to be added to the reading list, pending my evaluation as to how much it fits the overall genre of the game. Lastly, if someone had already read one of the items, they have to re-read it in order to get DP credit. I plan on introducing this concept in a "to be published later" Ref's Guide for Factions. To date, benefits include players becoming "veterans" of the game world faster and an active book exchange on game day (even for those that aren't related to the game). Has anyone else tried this? -adam Creator and Publisher of Other Court Games.www.othercourt.comhttp://othercourt.livejournal.com/http://www.myspace.com/othercourt Re: Meta-Gaming Technique: Outside reading for DP This sounds like a great idea.The closest I've done to this in the past is using a series of reference material for in game actions.A dozen or so movies may be chosen as genre staples, or a mix of movies and graphic novels may be available to set a world somewhere between the settings depicted. If a player uses an action from one of these [movies/pieces of pop-culture], they get a free reroll or some similar advantage. Another player at a later time may not use the same specific action to get the bonus, but they can refer to another movie in the list where something similar happens. If a movie is used too many times (let's say a dozen), then it can be refreshed for new uses, or it can be removed and a new movie added to the list.Keeping the list refreshing itself keeps the genre of the game intact, while replacing with new movies can have the game evolve in other directions.This is similar to your concept because it gets players to become familiar with the movies and graphic novels.It's kept specifically visual for quick reference (rather than novels where you have to wade through pages of text to find a specific action sequence...and those action sequences can take a few pages to narrate as well).V Hituro Does this not maybe penalize those who already know more about the setting, because they read those things years ago?For example I'm playing in a game of Dark Heresy right now. The GM could, if he wished, award extra XP to players for going and reading WH40k fiction or sourcebooks, as they get more involved with the setting. However I've devoured all those books long ago, so I wouldn't be in line for that? My solution to this issue (I've already read that one!) is to have the player read it again. I know it sounds simple (and it is), but I think there's a good chunk of value in the exercise. 
The player may have read Novel "X" three years ago, and they've just gotten their head around the Game Setting Y. Going back and reading X means getting to re-evaluate what is going on in X, through the lens of seeing how it fits into the ideas of Y. In other words, the player gets to try to guess which Special Ability (from Y) the protagonist is using in X, or which group/Subfaction they would belong to. It's kind of like going to a movie, and trying to pick out which WoD Clan a character belongs to (in Blade), or which advantages or disadvantages the main villain has in game terms. -adam Hi Adam, The game Amber featured this technique. I have tried it myself using a variety of systems. It typically has not worked well for me. I've found that development points, or whatever they might be called in a particular game, are very poor incentive for behaviors outside of the fiction itself. In Big Model terms, this is an example of trying to affect the larger, outer process of forming the SIS via within-System techniques. It's trying to jump-start a cause by providing some of the effects. I'm thinking here of the discussion in Beating a dead horse? in which Nolan asks a key question about halfway down the first page. I talk there about an "arrow" which travels down/into the Big Model from Social Contract to Ephemera, and then back out again. I think your proposed idea, like so many others that try to "get the players to commit," is starting with the wrong arrow. However, that is pretty theoretical, and I will try to put it into more concrete terms. Basically, in my experience, people don't like homework unless they can see that it will be worthwhile outside of the context of that material. That includes the relatively self-interested reward of scoring class points as well as the more liberal-artsy reward of providing context and perspective on their personal knowledge, identity, and that stuff. I'll focus on the former. For real-life, academic homework, points to score may be an initial and perhaps adequate motivator, but the difference is that those points are consequential for something that the student does care about. In contrast, to have people study X so they can get points which are only useful in imagining a version of X is, well, isolated from any consequence outside of X. At best it's busy-work. I've also found that people are often inspired to pursue the source material on their own if the expression of that material in the game is itself exciting and intriguing. That has led me to the general habit of providing many examples and references to the players, sometimes as handouts, so that they may be just as informed about the influences on the upcoming game as I am. Again, though, my expectation is not that they read or view those influences first, but rather, they know where to go if they like what they see in the game. Before going on, I'd like to double-check that I understand the problem you are trying to solve, especially in the context of this particular group of people. Am I correct in thinking that you are not currently satisfied with the degree to which people are engaging with the material? Best, Ron Ah, Amber. A game I keep wanting to actually play, one day. Although I wouldn't characterize it as a problem per se, I would like my players to be as engaged with the material as possible. By this, I wouldn't say the players "don't get" the material; I'd like them to get more of it.
For example, if I were running a game set in the Civil War, I'd want to get the players as familiar with the period as possible. The more they'd know about the setting, the deeper they could imagine themselves in it. Myself, I find the Civil War to be an interesting time period, and have a few books about on the subject. Still, I wouldn't feel comfortable in running a game with my limited knowledge... I'd want to bone up a bit and "study," and I'd want my players to do so, too. I'm pushing for a deeper Actor stance, where the player is more familiar with what their character might know about the setting. Again, I'd like to stress that this outside reading is *not* being required of the players. Also, the readings are not dry tomes on Ritual Magic theory or the like, but are instead popular fictions which have a lot in common with the game setting (selected works by Neil Gaiman, Grant Morrison, Charles DeLint, and a few others). So, sure, it's homework, but I wouldn't call it busy work. I'm not doing this simply to keep the players busy... I want to get them thinking more about the setting. I suppose I could reframe my first post this way: "I like my players to have a lot of the same knowledge as their characters do regarding the setting, because I find heavy Actor/SIM play to be enjoyable. I'm currently using in-game rewards as an incentive to get my players to become more acquainted with the setting, by reading selected novels and comics. Has anyone else tried this, and what experience have you had? Do you have any other suggestions along these lines?" Logged SoftNum I think that your players are willing to go out and seek information about your setting off-line is as much a testament to your GMing as it is your reward.I currently play a World of Warcraft d20 game. I mostly get into it because my friends were doing it. I'm continuing to play because I'm engaged in the story, and I like the players and GM. But I have no desire to spend my non-gaming hours reading up on WoW information. If we're going to a new area, I'll generally try and find one of the players during non game time and ask general questions my character should reasonably know.But some groups I've been in would rebuke any off-line 'homework' that is assigned. Especially the larger a given group gets, the more likely there are people playing with friends with little interest in the story. I think this is mechanic just servers to push those people further out of the spotlight.Just my two cents.
计算机
2014-23/3292/en_head.json.gz/5702
6th December 1998 Archive ← 4th December 1998 7th December 1998 → Analysis: How Bill Gates discovered the Web Two and a half years ago Bill went surfing, and told his execs about the Internet Tidal Wave What did Microsoft think about the Web, and when did it think it? Establishing the truth of this is of vital importance to the DoJ's case and to Microsoft's defence. Bill Gates says the company started work on browsers in April 1994, just before Netscape was incorporated, but Microsoft documentation from 1995 shows the company still formulating its position, and Gates himself didn't go public with the Microsoft 'embrace the Web' strategy until late that year. Gates claimed in August of this year that he'd pinpointed the date as 5 April 1994, when he'd said at an executive retreat: "Hey, we're going to get it integrated into the operating system," and that on 16 April he'd given executives responsibility for developing browser technology for Windows. But by May 1995 it appears that plans weren't going particularly well; DoJ trial exhibits 20 and 21 show on the one hand Gates delivering a 'road ahead' directional statement to his executives, and on the other Ben Slivka detailing the chaotic state of Microsoft development, and proposing a strategy for a Microsoft-owned "SuperWeb." The Gates document (exhibit 20, dated 26th May 1995) is entitled The Internet Tidal Wave, and provides a picture of what Gates was thinking about Microsoft's strengths and weaknesses at the time were, how the company could move forward, and who the big competitors and threats were. He notes that competitors' Web sites are better than Microsoft's and that "Sun, Netscape and Lotus do some things very well. Amazingly, it is easier to find information on the Web than it is to find information on the Microsoft corporate network". At the time one of Microsoft's strategies for the Web was to incorporate browsing capabilities into its applications, and although Slivka (we'll get to exhibit 21 another day) was simultaneously pointing to this as being a blind alley, Gates doesn't yet seem ready to drop it. "All work we do here can be leveraged into the HTTP/Web world. The strength of the Office and Windows businesses today gives us a chance to superset the Web. One critical issue is runtime/browser size and performance. Only when our Office-Windows solution has comparable performance to the Web will our extensions be worthwhile. I view this as the most important element of Office 96 and the next major release of Windows." Office 96 became Office 97 when it shipped, while the next release was Windows 98. Microsoft had to some extent been trying to view the Web as territory it could extend its file formats into -- hence the association in "Office-Windows solutions". Size and bandwidth problems were clearly starting to show this wouldn't work, and although much of the world had already embraced Microsoft file formats for productivity applications, Microsoft was only just beginning to twig that the Web hadn't done so, nor was it about to. But Gates is getting there. Although at the time Microsoft was on the point of launching an online system, the Microsoft Network (MSN), he suggests that approach may already have been superseded. "The online business and the Internet have merged. What I mean by this is that every online service has to be simply a place on the Internet with extra value added. 
MSN is not competing with the Internet [he's almost there, but not quite], although we will have to explain to content publishers and users why they should use MSN instead of just setting up their own Web server." This is exactly the problem that faced the established online systems like AOL and CompuServe at the same time, and they weren't necessarily coming up with the right answers straight away. But as we see from Gates, they were in good company -- in May 95 Microsoft knows it has to deal with the Web, but thinks it can still go with a proprietary online system as well. So why use MSN? "We don't have a clear enough answer to this question today. For users who connect to the Internet some other way than paying us for the connection we will have to make MSN very, very inexpensive -- perhaps free." Note the significance of "inexpensive" -- one of the reasons Microsoft didn't want to let go of MSN entirely at that point was because it felt it could associate billing (if you'll pardon the expression) systems with it. The other online systems too were at the time trying to wrap their heads around how they billed or microbilled instead of giving stuff away for free. Now, note the recently unveiled Microsoft 'TV' model used as a justification for giving IE away. TV stations broadcast programmes for free, and recoup the cost in advertising, and that's what Microsoft says it started doing when it started shipping IE for free later in 1995. But here's what Bill thinks in June: "The amount of free information available today on the Internet is quite amazing. Although there is room to use brand names and quality to differentiate from free content, this will not be easy and it puts a lot of pressure to figure out how to get advertiser funding." Further on in the document he makes it clear he's not letting go of the MSN approach yet. "We need to determine a set of services that MSN leads in -- money transfer, directory, and search engines. Our high-end server offerings may require a specific relationship with MSN." We think this means linkage between NT Web server sales and the purchase of MSN services, but no doubt Bill can't remember what he meant, these days. The Competition Gates identifies some familiar, and some not so familiar names here. "Our traditional competitors are just getting involved with the Internet. Novell is surprisingly absent... however... Novell has recognised that a key missing element of the Internet is a good directory service. They are working with AT&T and other phone companies to use the NetWare Directory Service to fill this role." The AT&T project for a parallel and proprietary Internet was later abandoned, although today Novell is trying hard to leverage NDS on the Internet. "All Unix vendors are benefiting from the Internet since the default server is still a Unix box and not Windows NT, particularly for high-end demands. Sun has exploited this quite effectively." Other competitors are obvious because of the file formats, and it seems Bill is starting to get it here: "Browsing the Web, you find almost no Microsoft file formats. After ten hours browsing, I had not seen a single Word DOC, AVI file, Windows EXE (other than content viewers), or other Microsoft file format. I did see a great number of QuickTime files... Another popular file format on the Internet is PDF, the short name for Adobe Acrobat files... Acrobat and QuickTime are popular on the network because they are cross-platform and the readers are free." Then there's Netscape. 
"Their browser is dominant, with 70 per cent usage share, allowing them to determine which network extensions will catch on." Gates is worrying about the standards escaping from Microsoft control. "They are pursuing a multi-platform strategy where they move the key API into the client to commoditise the underlying operating system. They have attracted a number of public network operators to use their platform to offer information and directory services. We have to match and beat their offerings including working with MCI, newspapers, and others who are considering their products." Here Gates is still thinking in terms of online systems and 'pay-per-view' models. But can we see a 'get Netscape' strategy emerging? If the control moves out from the client onto the Web, with maybe Sun servers dominating the back end, what happens then? "One scary possibility being discussed by Internet fans is whether they should get together and create something far less expensive than a PC which is powerful enough for Web browsing. This new platform would optimise for the data types on the Web. Gordon Bell and others approached Intel on this and decided Intel didn't care about a low-cost device so they started suggesting that General Magic or another operating system with a non-Intel chip is the best solution." Ah yes, so before Oracle invented the NC and Intel (originally) turned it down, the idea was floated past Intel, and Gates knew about it. Around about this time Intel was engaged in its messy battle with Microsoft over NSP, so maybe in a parallel universe Intel jumped the other way, and is three years ahead of where our Intel is in appliance design, and operating software. Slivka is also worried about this -- he thinks Siemens and Matsushita might turn out to be the troublemakers. What Bill wanted done He faces two ways here, to some extent. Slivka identifies lack of co-ordination of efforts at Microsoft as a major problem, but you can see how this arises. Bill says "I want every product plan to try and go overboard on Internet features", and this basically drives Microsoft's different product groups further along the road of multiple overlapping efforts. But: "One element that will be critical is co-ordinating our various activities... Paul Maritz will lead the Platform group to define an integrated strategy that makes it clear that Windows machines are the best choice for the Internet. This will protect and grow our Windows asset. Nathan [Mhyrvold] and Pete will lead the Applications and Content group to figure out how to make money providing applications and content for the Internet. This will protect our Office asset and grow our Office, Consumer and MSN businesses." Note we're facing two ways here as well. Maritz seems to be elected to 'invent' the integration strategy, or at least to build on what unco-ordinated shreds of it already exist, while Mhyrvold, old Advanced Technology Group trusty and the man who drove the early Microsoft apps strategy (which was incidentally tanked by Lotus in the early 80s), is still trying to link Office, MSN and cash together into a winning formula. This approach was actually what Microsoft was talking publicly about at the time (eg. Steve Ballmer in an InfoWorld interview of the time: "I tend to think of [the Microsoft Network] as part of the extra-enterprise pitch that we make to every corporate account that we call on.") Gates, meanwhile, has several things he wants done on the Windows platform. 
"We need to understand how to make NT boxes the highest performance HTTP servers. Perhaps we should have a project with Compaq or someone else to focus on this... We need a clear story on whether a high volume Web site can use NT or not because Sun is viewed as the primary choice." Three years on, this "clear story" has yet to ship in its entirety. "We need to establish distributed OLE as the protocol for Internet programming [oops...]. A major opportunity/challenge is directory. If the features required for Internet directory are not in Cairo or easily addable without a major release we will miss the window to become the world standard in directory with serious consequences [oops again]. Lotus, Novell and AT&T will be working together [not, as it transpired,
计算机
2014-23/3292/en_head.json.gz/7454
Organizers: the Council of Europe (CoE), United Nations Economic Commission for Europe (UNECE) and the Association for Progressive Communication (APC) Discussing the proposal for a “Code of good practice on public participation, access to information and transparency in Internet governance”. Agenda Opening by Ms Maud de Boer-Buquicchio, Deputy Secretary General, Council of Europe (video message) Chair: Ms Anriette Esterhuysen, APC 1. Mr. William Drake, Centre for International Governance, Graduate Institute for International and Development Studies, Geneva 2. Prof. David Souter, UK (Consultant expert) Interactive discussion 1. Mr. Bill Graham, ISOC 2. Mr. Thomas Schneider, OFCOM, Switzerland 3. Mr. Massimiliano Minisci, ICANN 4. Mr. Paul Wilson, Number Resource Organization (NRO) Closing remarks by Mr. Hans A. Hansell, UNECE During the second Internet Governance Forum the CoE, the UNECE and the APC organized a best practice forum that discussed the possibility of using the UNECE Aarhus Convention as a benchmark for developing a code of conduct for Internet Governance. Following the positive response to the “Best Practice Forum”, a study was commissioned to develop a Code of Conduct that could serve as input to the Internet governance discussions. A consultation, entitled “Towards a code of good practice building on the principles of WSIS and the Aarhus Convention”, was organized jointly by UNECE, the Council of Europe and the Association for Progressive Communication on 23 May 2008 in Geneva. Subsequently a discussion paper was developed for the third Internet Governance meeting. The purpose of the workshop was to explore if a roadmap for how such a code could be developed. During the workshop several speakers from the audience expressed their support for the initiative. The main conclusion from the workshop was that Internet and its governance is made up from a large number of organizations, standards bodies and governments and in view of the concern about its governance it was felt that the quality and the inclusiveness of Internet Governance would be improved by making information about decision-making processes and practice more open and more widely available, and to facilitate more effective participation by more stakeholders. A practical way of achieving this could be the development of a code of good practice dealing with information, participation and transparency. Such a code should be based on the WSIS principles and the on existing arrangements in Internet Governance institutions. In this context the experience of developing and implementing the Aarhus convention could serve as a benchmark for the work. The first aim is to make it applicable across a broad range of decision making bodies which means that the code must be expressed in broad and general terms. However, it should not be a very comprehensive document but be restricted to a couple of pages. As a way forward it was suggested a first step could be a comparative assessment (mapping) of existing arrangements in a number of selected internet governance institutions that would agree to participate in such an exercise. However equally important would also be not only to listen to institutions but also to listen carefully to the users. In this case a bottom up approach is as important as the top down. Following such a mapping exercise a small working group could develop a work plan leading to a draft code which could then be presented for discussion in the wider Internet community. 
The UNECE offered to host a first working group meeting at the United Nations in Geneva.
计算机
2014-23/3292/en_head.json.gz/7464
Non-volatile memory's future is in software New memory technology to serve dual roles of mass storage and system memory Lucas Mearian (Computerworld (US)) on 25 October, 2012 09:56 There will be a sea change in the non-volatile memory (NVM) market over the next five years, with more dense and reliable technologies challenging dominant NAND flash memory now used in solid-state drives (SSDs) and embedded in mobile products. As a result, server, storage and application vendors are now working on new specifications to optimize the way their products interact with NVM, moves that could lead to the replacement of DRAM and hard drives alike for many applications, according to the Storage Networking Industry Association (SNIA) technical working group. "This [SNIA] working group recognizes that media will change in next three to five years. In that time frame, the way we handle storage and memory will have to change," said SNIA technical working group member Jim Pappas. "Industry efforts are under way to remove the bottleneck between the processor and the storage." Pappas, who is also the director of technology initiatives in Intel's Data Center Group, noted there are more than a dozen non-volatile memory competitors coming down the pike to challenge NAND flash. Those technologies include Memristor, ReRam, Racetrack Memory, Graphene Memory and Phase-Change Memory. IBM's phase-change memory chip uses circuitry that is 90 nanometers wide and could someday challenge NAND flash memory's market dominance. "What is happening across the industry with multiple competing technologies to NAND flash is the memory that goes into SSDs today will be replaced by something very close to the performance of system memory," Pappas said. "So now, it's the approximate speed as system memory, but yet it's also nonvolatile. So it's a big change in computing architecture." For example, last year IBM announced a breakthrough in phase-change memory that could lead to the development of solid-state chips that can store as much data as NAND flash technology but with 100 times the performance, better data integrity and vastly longer lifespan. SNIA's Non-Volatile Memory (NVM) Programming Technical Working Group, which includes a who's who of hardware and software vendors, is working on three specifications. First, the group wants to improve the OS speed by making it aware when a faster flash medium is available; secondly, it wants to give applications direct access to the flash through the OS; and lastly, it wants to enable new NVMs to be used as system memory. "Most significantly, when you use non-volatile memory in the future, you can use as part of it for your memory hierarchy and not just [mass] storage," Pappas said. Among the companies backing the specifications effort is IBM, Dell, EMC, Hewlett-Packard, NetApp, Fujitsu, QLogic, Symantec, Oracle, and VMware. NAND flash accessed like hard drives today Today, a processor accesses system memory (DRAM) directly in hardware through a memory controller. The memory controller is usually integrated into the microprocessor chip. There is no software necessary. It is all performed in hardware. By contrast, a microprocessor talks to NAND flash the same way that it accesses a hard drive. It does that through operating system calls which in turn drives the traditional storage software stack. From there the OS then transports the data to or from the flash memory (or hard drive) over independent storage interfaces such as SCSI, SAS or SATA interface hardware. 
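To picture the difference described here between memory-style and storage-style access, the sketch below contrasts a read() that travels the storage stack with load/store access to an mmap'ed region. This is an illustration only: it uses an ordinary temporary file, and genuine memory-semantics persistence would additionally require NVM hardware plus a DAX-capable filesystem rather than a page-cache-backed mapping.

```python
import mmap
import os
import tempfile

PAGE = 4096

# Stand-in data file. On real next-generation NVM you would map a region of a
# DAX-capable filesystem instead; an ordinary file only demonstrates the API shape.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * PAGE)

# Block-style access: each read() is a system call that walks the storage stack
# (file system, block layer, driver) before any data comes back.
with open(path, "rb") as f:
    block = f.read(PAGE)

# Memory-style access: after mmap(), the region is addressed like RAM, so reads
# are plain loads and writes are plain stores, with no per-access system call.
region = mmap.mmap(fd, PAGE)
first = region[0]        # a load
region[0] = first        # a store
region.flush()           # an msync-style flush is still needed for durability
region.close()

os.close(fd)
os.remove(path)
```

Roughly speaking, the "direct access" interfaces discussed later in the article aim to make the second style a well-specified, first-class option for applications.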
Once next generation NVM arrives, the interface will change; that is a product implementation decision that is outside the scope of the SNIA NVM Programming Technical Working Group, Pappas said. For example, one popular method that is already being used in multiple products today is connecting NVM directly to the PCI Express (PCIe) bus, which is usually directly connected to the processor. Solid-state memory vendor Fusion-io is among more than a half dozen companies selling NAND flash PCIe cards for servers and storage arrays. The company has also been working on software development kits and hardware products that will eventually allow its NAND flash cards to be used as system memory and mass storage in the same way SNIA's specifications will for the industry at large. Microsoft and Fusion-io have been working to develop APIs enabling SQL databases to use what Fusion-io calls its Virtual Storage Layer (VSL), which in turn allows developers to optimize applications for Fusion's ioMemory PCIe cards. Like any OS, SQL Server still uses NAND flash like spinning media, using a buffer and writing data twice to ensure resiliency. Fusion-io calls its interface effort the Atomic Multi-block Writes API. The API is an extension to the MySQL InnoDB storage engine that eliminates the need for a buffer or redundant writes, giving the application direct access to -- and control of -- the NAND flash media. "If we architect it to act like memory, and not like disk, we can do block I/O [reads and writes] and memory-based access," said Gary Orenstein, senior vice president of products at Fusion-io. "The APIs say to SQL, 'You have more capability than you think you have." The result is a 30% to 40% improvement in SQL database performance, half the number of writes, and twice the life for the NAND flash because it is storing half of the data it typically would, Orenstein said. "We're not saying flash will replace every instance of DRAM, but developers will have 10 times the capacity of DRAM at a little less performance and a fraction of the cost and power," Orenstein said. Products using the Atomic Multi-block Writes API are expected within a year, Orenstein said. Through new APIs, Fusion-io's 10TB ioDrive Octal PCIe module could someday play a dual role of system memory and mass storage. How NVM has affected data centers To understand the impact of NVM in a data center, it helps to look at what was there before it: hard drives and volatile system memory or DRAM. DRAM is extremely expensive and is volatile, meaning it loses all data when powered off unless it has a battery backup. DRAM has about six orders of magnitude the performance of hard drives, or about one million times, according to Pappas. In 1987, when NAND flash entered the picture, it offered a middle ground with about three orders of magnitude better performance than of disk drives, or about 1,000 times faster, Pappas said. Until recently, however, flash was not cheap enough to use as a mass storage device in servers and arrays. Now that it is, its popularity is soaring. Hardware manufacturers now use NAND flash as an additional tier of mass storage that provides faster performance for I/O-hungry applications such as online transaction processing and virtual desktop infrastructures. But NAND flash is typically not used as system memory, meaning a CPU does not access it as directly as it does DRAM memory. Today, storage infrastructures are built based on the performance of hard disk drives. 
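The orders-of-magnitude comparison above is easy to sanity-check with rough arithmetic; the latencies below are round, assumed figures chosen for illustration, not measurements from the article.

```python
import math

# Deliberately round, assumed latencies chosen to mirror the article's ratios;
# real parts differ (NAND page reads are commonly a few tens of microseconds).
DISK_S = 10e-3    # ~10 ms average seek + rotation
NAND_S = 10e-6    # ~10 us read, an optimistic round number
DRAM_S = 10e-9    # ~10 ns access

for name, t in [("NAND flash", NAND_S), ("DRAM", DRAM_S)]:
    speedup = DISK_S / t
    print(f"{name}: about {speedup:,.0f}x a hard disk "
          f"(~{math.log10(speedup):.0f} orders of magnitude)")
# -> NAND flash: about 1,000x a hard disk (~3 orders of magnitude)
# -> DRAM: about 1,000,000x a hard disk (~6 orders of magnitude)
```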
SNIA's efforts will promote an infrastructure that supports the type of performance that NVM can offer. SNIA's NVM Programming Technical Working Group was formed in July and promotes the development of operating system enhancements to support NVM hardware. "We're focusing on that shared characteristic of this next-generation memory. So we don't need to care which particular technology wins, we just need to design an infrastructure that is capable of using what that replacement technology will be," Pappas said. How new specifications address NVM performance SNIA's working group will first focus on optimizing OSes, so that software platforms and the file stack recognize when faster media is available. The idea behind the effort is to figure out how to speed up the performance of an OS so that any application would also benefit from the performance boost. "Another aspect not available in storage systems today is intelligent interrogation of what the capabilities of the storage is," he said. "That's pretty rudimentary. How can an OS identify what features are available and be able to load modules specific to the characteristics of that device." Secondly, the task force will work on new interfaces through the OS to applications, which would allow applications to have a "direct access mode" or "OS bypass mode" fast I/O lane to the NVM. A direct access mode would allow the OS to configure NVM so that it's exclusive to an application, cutting out a buffer and multiple instances of data, which adds a great deal of latency. For example, an OS would be able to offer a relational database application direct access to NVM. IBM with DB2 and Oracle have already demonstrated how their applications would work with direct access to NVM, according to Tony Di Cenzo, director of standards at Oracle and a SNIA task force member. By far, the most difficult job the task force faces is the development of a specification that allows NVM to be used a system memory and as mass storage at the same time. "This is still a brand new effort," Pappas said. "Realistically, the [new NVM] media will take several years to materialize. So what we're doing here is having the industry come together, identifying future advancements ... and defining a software infrastructure in advance so we can get full benefit of it when it arrives." NAND flash increasingly under pressure Although new NVM technology will available in the next few years, NAND flash is not expected to go anywhere anytime soon, since it could take years for new NVM media to reach the price point of NAND flash. But NAND flash is still under pressure due to technology limitations. Over time, manufacturers have been able to shrink the geometric size of the circuitry that makes up NAND flash technology from 90 nanometers a few years ago to 20nm today. The process of laying out the circuitry is known as lithography. Most manufacturers are using lithography processes in the 20nm-to-40nm range. The smaller the lithography process is, the more data can be fit on a single NAND flash chip. At 25nm, the cells in silicon are 3,000 times thinner than a strand of human hair. But as geometry shrinks, so too does the thickness of the walls that make up the cells that store bits of data. As the walls become thinner, more electrical interference, or "noise," can pass between them, creating more data errors and requiring more sophisticated error correct code (ECC). 
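ECC for NAND conventionally lives in a spare ("out-of-band") area alongside each page, so stronger codes translate directly into more spare bytes per page. The page geometries below are assumed, typical-looking examples for illustration rather than figures for any specific part.

```python
# Assumed, typical-looking page geometries (data bytes + out-of-band/spare bytes).
# These are illustrative only; actual layouts are part- and vendor-specific.
geometries = [
    ("older 2 KiB page", 2048,  64),
    ("4 KiB page",       4096, 224),
    ("8 KiB page",       8192, 640),
]

for name, data_bytes, spare_bytes in geometries:
    overhead = spare_bytes / (data_bytes + spare_bytes)
    print(f"{name}: {spare_bytes} spare bytes per {data_bytes}-byte page "
          f"-> {overhead:.1%} of the raw page")
# Stronger ECC needs more spare bytes per page, which is how the overhead creeps
# toward the several-percent figures quoted for modern small-geometry NAND.
```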
The amount of noise compared to the data that can be read by a NAND flash controller is known as the signal-to-noise ratio. The processing overhead for hardware-based signal decoding is relatively high, with some NAND flash vendors allocating up to 7.5% of the flash chip as spare area for ECC. Increasing the ECC hardware decoding capability not only boosts the overhead further, but its effectiveness also declines with NAND's increasing noise-to-signal ratio. Some experts predict that once NAND lithography drops below 10nm, there will be no more room for denser, higher-capacity products, which in turn will usher in newer NVM media with greater capabilities. Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected]. See more by Lucas Mearian on Computerworld.com. Read more about data storage in Computerworld's Data Storage Topic Center. Tags hardwarestorage networking industry associationData storageNVstorageIBMhardware systemsintelStorage Managementstorage software 13 pieces of advice for Yosemite beta testers F5 data center firewall aces performance test Hear from James Turner, IBRS on User Authentication - Improving user experience while ensuring business efficiency, 31st July. Are you in the apps race? Find out how you can succeed at CA Expo’14 Check your compliance with privacy law changes today download our app
计算机
2014-23/3292/en_head.json.gz/7714
free-bees.co.uk Quick Look: Xubuntu Beta 2 Monday 1st May 2006 Categories: Reviews, GNU/Linux, FLOSS So far, we have had Ubuntu, Kubuntu and Edubuntu, covering GNOME and KDE - the two major desktop environments. Now, for the Dapper Drake release, XFCE is to be added to the mix in the form of Xubuntu. Of course, XFCE has been installable before, but this is the first time it gets its own special treatment. Using the wonders of QEMU, you can quickly try out the LiveCD, although quickly might not be the right word if you haven't stuck kQEMU on the side. Anyway, the first image on the desktop that greets you is this little animated rodent having a fun run: The animated Xubuntu logo. Once the desktop had finished loading, the first thing that struck me is how much Xubuntu looks like... Ubuntu. If you take a look at xfce.org, especially the screenshots page, you can see how XFCE normally looks. Instead, Xubuntu looks like this: The Xubuntu Desktop. You can find plenty more screenshots of Xubuntu at osdir.com. It would appear that this has happened to ease the transition for anybody moving from GNOME, or perhaps even Windows. Running applications appear in the bottom panel rather than top, you get an applications menu in the top left, and so on. Of course, there's really only one important question: is it a good idea? The answer is: sort of. On the one hand, those that use XFCE already may be disappointed. On the other, it succeeds in essentially creating a lightweight version of Ubuntu. In my opinion, it is the latter option that is probably more important. Those that want to use XFCE in its usual form may only have to play around with some options, but it could be off-putting to some. After all, Xubuntu will probably gain more users by targeting new users unfamiliar with XFCE or even Linux, than by targeting existing XFCE users. But enough of that. Xubuntu differs from Ubuntu in more than just desktop environment - there's also the selection of packages. Gone is the somewhat heavyweight OpenOffice.org, instead replaced with Abiword. Gnumeric might also be nice to have, although we'll have to wait a month to see if it turns up in the end. Other than that, there are just the usual suspects, although relatively few of them - Firefox, Thunderbird, the GIMP, GAIM, and all the applications that come bundled with XFCE. One such application is Thunar, the file manager, which behaves suspiciously similarly to Nautilus. I didn't get the chance to test the installer, although the fact that the CD doubles up as both LiveCD and installation CD is something that I hope becomes more common. Xubuntu should also carry many of the strengths of Ubuntu, since, beyond the desktop environment and choice of office suite, they use the same base system (so far as I'm aware). This means that you can quite easily use any applications already available for Ubuntu in Xubuntu. If you're considering using Xubuntu in the future, it really depends on what you want. If you've fallen in love with XFCE, consider whether you like the changes that Xubuntu has made. If, on the other hand, you just want a lightweight distribution, Xubuntu may be worth keeping an eye on.
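For anyone who wants to repeat the QEMU LiveCD experiment mentioned near the top of the review, something along these lines should work; the ISO filename is a placeholder, the flags are standard QEMU options, and the kQEMU accelerator of that era has since been superseded by KVM on modern hosts.

```python
# Boot a live CD image inside QEMU from a script. The ISO path is a placeholder,
# and exact binary names/flags vary a little between QEMU versions and distros.
import subprocess

ISO = "xubuntu-6.06-desktop-i386.iso"   # placeholder filename

subprocess.run([
    "qemu-system-i386",
    "-m", "512",          # guest RAM in MiB
    "-cdrom", ISO,        # attach the live CD image
    "-boot", "d",         # boot from the CD-ROM device
    # "-enable-kvm",      # uncomment on a host with KVM for hardware acceleration
], check=True)
```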
Xubuntu Screenshots from osdir.com Xubuntu Beta 2 Download Page XFCE Screenshots at xfce.org
计算机
2014-23/3292/en_head.json.gz/8427
Rakesh Raul Rakesh Raul is from a small town in India, with a vision of doing something big in programming. He completed his first diploma in Programming at the age of 16, and continued his higher studies in Computer Software Development. He started his programming career with a small software development company in Mumbai. After 2 years of development in Visual Basic he was introduced to Microsoft Dynamics NAV version 3. For the initial 2-3 years he worked as a Microsoft Dynamics NAV developer and at the same time he learned all areas of the product and earned his first Microsoft Certification - Business Solutions Professional. He continues to stay updated with new releases of the product and is certified in multiple areas for versions 4.0, 5.0, 2009, and 2013. Apart from Microsoft Dynamics NAV, he also has a good handle on Microsoft SQL Server and Business Intelligence. His 7-year journey with Microsoft Dynamics NAV includes more than 30 implementations, plus the design and development of one horizontal and two vertical solutions. Currently, Rakesh works at Tectura, India, as a Senior Technical Consultant. Tectura is a worldwide provider of business consulting services delivering innovative solutions. Rakesh Raul has worked on the following Packt books: Microsoft Dynamics NAV 7 Programming Cookbook
计算机
2014-23/3292/en_head.json.gz/8773
HIS Radeon HD 5850 Review By Steven Walton on October 9, 2009 HIS Radeon HD 5850 In Detail As an early production model this Radeon HD 5850 is not your typical highly modified HIS graphics card. Rather, it closely follows the reference design and specifications, with pretty much the only difference being a HIS sticker on the fan shroud. The package bundle HIS has prepared for their Radeon HD 5850 card, however, is more generous than we have come to expect from graphics card manufacturers as of late. Besides the card itself, inside the box we found a CrossFireX bridge adapter, two 6-pin power cable adapters (in case your PSU does not have them), a DVI-VGA adapter, the usual quick reference manuals and a game coupon for DiRT 2. We recently featured this game as one of the titles to look forward to this holiday season on the PC. It's expected to be released sometime in December and will be the first shipping DirectX 11 title. As mentioned before, while the Radeon HD 5870 measured 28cm long, the Radeon HD 5850 is shorter at 24cm. That 4cm saving means that this new graphics card will fit in any case that can support a standard ATX motherboard. Although the Radeon HD 5850 is considerably smaller than other high-end graphics cards, it is still ~3cm larger than the Radeon HD 4850 and 4770 graphics cards. Cooling the "Cypress Pro" GPU is a fairly large aluminum heatsink made up of 36 fins measuring 10.5cm long, 6.0cm wide, and 2.5cm tall. Connected to the base of this heatsink are two copper heatpipes which help improve efficiency. Finally, there is a 75x20mm blower fan that draws air in from within the case and pushes it out through the rear of the graphics card. The Radeon HD 5850 shares the same remarkably low 27 watt idle consumption levels we saw on the 5870, allowing for similarly quiet operation in such scenarios. When we began to game, the fan kicked in and made some noise -- as you would expect. Noise levels were comparable to those of the Radeon HD 4870 or GeForce GTX 285 graphics cards -- nothing unusual here. The use of a 40nm design has allowed ATI to be quite aggressive with the GPU core speed, clocking it at 725MHz. Compared to the Radeon HD 5870, AMD has disabled two of the SIMDs and reduced the core clock speed by 125MHz. This translates to 10% less SIMD capacity and a 15% lower core clock speed. GDDR5 memory works at a frequency of 1000MHz on this particular model, or 17% slower than on the Radeon HD 5870. There's 1GB of memory in total spread across eight chips located on the front side of the graphics card. These memory chips are cooled via a large aluminum plate which is also used to cool the power circuitry. The GPU configuration features an impressive 1440 SPUs, 72 TAUs (Texture Address Units) and 32 ROPs (Rasterization Operator Units). While the core clock speed and memory frequency have been reduced significantly, the core configuration is not all that different from the Radeon HD 5870's.
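The percentages quoted above check out against the commonly published Radeon HD 5870 reference figures (850MHz core, 1200MHz GDDR5, 20 SIMD engines); taking those as given, a quick calculation:

```python
# Reference Radeon HD 5870 figures (as commonly published): 850 MHz core,
# 1200 MHz GDDR5, 20 SIMD engines. The HD 5850 values come from the review text.
hd5870 = {"core_mhz": 850, "mem_mhz": 1200, "simds": 20}
hd5850 = {"core_mhz": 725, "mem_mhz": 1000, "simds": 18}   # two SIMDs disabled

def pct_lower(reference, value):
    return (reference - value) / reference * 100

print(f"SIMD capacity: {pct_lower(hd5870['simds'], hd5850['simds']):.0f}% less")        # 10%
print(f"Core clock:    {pct_lower(hd5870['core_mhz'], hd5850['core_mhz']):.0f}% lower")  # ~15%
print(f"Memory clock:  {pct_lower(hd5870['mem_mhz'], hd5850['mem_mhz']):.0f}% lower")    # ~17%
```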
The Radeon HD 5850 can consume up to 151 watts of power when pushed hard. In order to feed the graphics card enough power, AMD has included a pair of 6-pin PCI Express power connectors. This is the same configuration that you will find on the Radeon HD 4870/4890 and GeForce GTX 285 graphics cards. As with all modern Radeons, in the standard position you'll find a pair of CrossFire connectors for bridging two or more cards together. The only other connectors can be found on the I/O panel. Our HIS card featured two dual-link DVI connectors, an HDMI port and a DisplayPort connection. It is worth noting that all Radeon HD 5850 graphics cards can support a maximum resolution of 2560x1600 on not one but three monitors (a feature ATI calls Eyefinity).
HP escapes fine for boardroom spying scandal

HP has settled allegations over its failure to disclose why one of its directors resigned in the midst of last year's boardroom mole fiasco. The computer giant has not been fined and said it neither admitted nor denied the Securities and Exchange Commission (SEC) findings, but has agreed to a cease and desist order, effectively …

COMMENTS

24 May 2007, Daniel: "One wonders....."
One wonders, of course, if the outcome of this case would have been different if it were a private individual committing the 'crimes' of which HP stood accused. I for one find it slightly disturbing that a large, influential corporation that gets caught with both hands firmly in the cookie drawer nevertheless receives in effect a let-off by the SEC, and is required to do nothing more than promise not to do it again. It's difficult to believe that the punishment would be as lenient if the defendant were of lesser means. Locked up with the key thrown away would seem more likely.
"Buffy the Vampire Slayer" Virtual World
Written by Nate Randall, Thursday, 02 October 2008 01:46

CENTURY CITY, CA (September 3, 2008) - Twentieth Century Fox Licensing & Merchandising and The Multiverse Network, Inc., a leading provider of virtual world development technology for Massively Multiplayer Online Games (MMOGs), educational and social worlds, and business collaboration environments, today announced the development of an original "Buffy the Vampire Slayer" MMOG. The virtual world will be based on the mythology and iconography made popular by the Emmy® Award-winning series. The announcement was made by Academy Award® winning producer and member of the Multiverse Advisory Board, Jon Landau, during his keynote speech today at the Virtual Worlds Conference and Expo.

The ground-breaking MMOG will offer a new experience for gamers, allowing them to play it either as a fully immersive 3D environment or as a Flash-based 2D game, where both types of players can interact. The game will be launched within "Multiverse Places," a new social virtual world from Multiverse. Currently under development, "Buffy" will go into "beta" testing later this year.

Landau, a Multiverse Advisory Board member, who is in production on Avatar—the widely anticipated Fox film from fellow Multiverse Advisor James Cameron—commented, "Multiverse has the vision and expertise to create the type of rich environment needed for the best possible game based on the 'Buffy' series. The resources are in place to develop a great MMOG."

Created by sci-fi and comic book legend, Joss Whedon, "Buffy the Vampire Slayer" racked up critical and popular acclaim during its seven seasons on television. The series also inspired a line of top-selling comic books and successful merchandising lines with an avid fan base at retail. Fox Licensing & Merchandising (Fox L & M) continues to develop the brand for a number of targeted programs that reach its core market.

"Every once in a while a show comes along that lives long after its run on television, and 'Buffy the Vampire Slayer' is that kind of show," commented Elie Dekel, Executive Vice President Licensing and Merchandising for Fox L&M. "We think that creating this virtual Buffy world is the perfect extension of the brand and will attract both fans of the show and newcomers interested in a great experience online."

"As a brilliant storyteller and world-maker, Joss Whedon crafts stories that expand perfectly into the new medium of virtual worlds," said Corey Bridges, co-founder and Executive Producer, Multiverse. "Not to give away too much, but when the 'Buffy' team finished the television series, they created the perfect launching point for an MMOG where everyone will feel like they're an important character in the ongoing story."

In related news, Fox's plans for the development of the previously announced Firefly MMOG have been delayed, but Fox looks forward to continuing its collaboration with Multiverse on this endeavor.

Since Multiverse's launch over four years ago, more than 25,000 development teams—ranging from garage developers to Fortune 100 companies to Hollywood legends—have registered to use the company's platform technology.
In addition, several hundred pre-qualified customers have already begun building MMOGs and non-game virtual worlds with the Multiverse Platform.

ABOUT TWENTIETH CENTURY FOX LICENSING & MERCHANDISING
A recognized industry leader, Twentieth Century Fox Licensing and Merchandising licenses and markets properties worldwide on behalf of Twentieth Century Fox Film Corporation, Twentieth Television and Fox Broadcasting Company, as well as third party lines. The division is aligned with Twentieth Century Fox Television, one of the top suppliers of primetime entertainment programming to the broadcast networks.

ABOUT THE MULTIVERSE NETWORK, INC.
The Multiverse Network, Inc. is creating a network of online video games and other 3D virtual worlds. Its unique technology platform will change the economics of virtual world development by empowering independent game developers to create high-quality, Massively Multiplayer Online Games (MMOGs) and non-game virtual worlds for less money and in less time than ever before. Multiverse solves the prohibitive challenge of game creation by providing developers with a comprehensive, pre-coded client-server infrastructure and tools, a wide range of free content - including a complete game for modification - and a built-in market of consumers. The Multiverse Network will give video game players a single program - the Multiverse Client - that lets them play all of the MMOGs and visit all of the non-game virtual worlds built on the Multiverse platform. For more information about the company, please visit www.multiverse.net.
Records Management in Microsoft SharePoint
By Antonio Maio, April 16, 2013

According to a 2011 AIIM survey, organizations are experiencing a 23% yearly growth in electronic records. This rapid growth presents a challenge to organizations that must comply with records management regulations while ensuring that the right people are accessing the right information. To address this challenge, many organizations are looking to Microsoft SharePoint. With SharePoint's powerful record-keeping capabilities, organizations can now manage their records using the same platform as they use for everyday collaboration and document management.

Records management is one of the most popular drivers for using Microsoft SharePoint. Despite how much has been written on this, records management is sometimes confused with document or content management, but it is in fact quite a distinct discipline with its own best practices and processes. Microsoft SharePoint provides some great features to enable these processes, and it provides enterprises with the appropriate controls for the data and documents that they declare to be corporate records.

A record refers to a document or some other piece of data in an enterprise (electronic or physical) that provides evidence of a transaction or activity taking place, or some corporate decision that was made. A record requires that it be retained by the organization for some period of time. This is often a legal or regulatory compliance requirement. As well, a record by definition must be immutable, which means that once a document or piece of data is declared to be a record, it must remain unchanged.

The period for which records are retained, along with the process followed once that time period has expired, is a critical requirement for records management. There are legal and business implications to consider when content is kept too long. The business policy could be that after X years a record is archived, and then after Y years from that point it is disposed of (which could include deletion or moving it to offline long-term storage). Again, establishing this policy requires planning and getting agreement from stakeholders, especially around any legal, regulatory compliance, revenue or tax implications.

The requirements for records immediately suggest certain processes that must be in place to ensure that records are managed appropriately from several perspectives: business, auditing/legal, tax, revenue, and even business continuity. As we often find, for business processes to be applied consistently across all SharePoint content or records, automation is a key requirement, as well as making appropriate use of metadata.

The first step in implementing records management in SharePoint is to define a file plan, which typically includes:

- A description of the types of documents that the organization considers to be records
- A taxonomy for categorizing the records
- Retention policies that define how long a record will be kept and how to handle disposition
- Information about who owns the record throughout its information lifecycle, and who should have access to the record

It is important to determine what type of content should be considered a record. For example, if I am working on a new HR policy for next year, my initial draft and its various iterations should likely not be considered records because they are still changing – they are not yet approved or final, nor can I make any decisions based on those preliminary versions.
But once my HR plan is approved or otherwise considered final, it can be declared a record because I can now base corporate decisions on it. Establishing a policy around what type of data is a record requires planning, meeting with appropriate stakeholders and agreeing on a policy that's communicated to everyone who may be declaring content as a record.

Once the organization has defined what information it wants to preserve as records, SharePoint 2010 provides several methods to declare a record and implement record retention policies. These include the Records Center site, which is a SharePoint site dedicated to centrally storing and managing records. It provides many features that are critical to implementing a records management system, including a dashboard view at the site level for records managers with searching capabilities and integration with the Content Organizer for routing records within the site. Depending on the business need, it may make sense to centralize records management and storage in the Records Center. This is particularly true if the business demands that a small number of users be considered "record managers" and it is their role alone to declare content as records.

A second method involves declaring records "in-place." This feature allows individual users to declare content as records in their current SharePoint location. Records do not need to be moved or added to a central Records Center site, nor do they need to be routed within the Records Center. This is a trend in the records management space, because it allows users to continue to find content where it resides, based on its business nature, topic or properties. One drawback of this approach is that end users - who are typically not records managers - may be apprehensive about declaring records, due to the official and legal nature of a record.

The powerful recordkeeping capabilities in SharePoint give organizations an effective enterprise records management system. SharePoint contains valuable features that can be used to define the appropriate records and retention policies for the business.

Antonio Maio is a Microsoft SharePoint Server MVP and senior product manager at TITUS in Ottawa, Ontario.
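To make the file plan and retention ideas above concrete, here is a minimal, hypothetical sketch of how such a plan could be written down as data and used to compute disposition dates. The record types, retention periods and field names are invented for illustration; they are not part of SharePoint's object model or of the author's own tooling.

```python
from datetime import date, timedelta

# A toy file plan: each record type maps to a retention rule.
# (Record types and periods below are made up for this example.)
FILE_PLAN = {
    "HR Policy":       {"archive_after_years": 3, "dispose_after_years": 7},
    "Signed Contract": {"archive_after_years": 5, "dispose_after_years": 10},
    "Meeting Minutes": {"archive_after_years": 1, "dispose_after_years": 3},
}

def disposition_schedule(record_type: str, declared_on: date) -> dict:
    """Return approximate archive and disposal dates for a declared record."""
    rule = FILE_PLAN[record_type]
    year = timedelta(days=365)  # close enough for a planning sketch
    return {
        "archive_on": declared_on + rule["archive_after_years"] * year,
        "dispose_on": declared_on + rule["dispose_after_years"] * year,
    }

# Example: an HR policy declared as a record once it is approved and final.
print(disposition_schedule("HR Policy", date(2013, 4, 16)))
```

In a real deployment the equivalent rules would typically live in SharePoint's information management policy settings rather than in code, but writing the plan out in a form like this is a convenient way to get stakeholders to agree on the X and Y values mentioned above before anything is configured.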
Electrical and Computer Engineering, School of Engineering and Computer Science

Research

Research areas: Blood Glucose Analysis; Configurable Fault Tolerant Processor; Energy Efficient Power Electronics Systems and Smart-Grid; F/A-18 Avionics Architecture Study; Intelligent Distributed Control of Power Plants; Microwave Applied Metrology; Neural Networks Architectures; Pulp Stock Consistency Calibrator; Reconfigurable Computing; Time Transit Tomography.

Blood Glucose Analysis: Sugar content strongly affects the complex electrical permittivity of blood. This project is part of the Microwave Applied Metrology Research.

The Configurable Fault Tolerant Processor is an experiment that explores the application of programmable systems on a chip in space environments. The project is being led by the Naval Postgraduate School in collaboration with the U.S. Naval Academy and Baylor University.

We develop and promote highly efficient energy conversion technologies for power electronics systems and renewable energy conversion systems for the smart grid.

Curves International is the parent company for the "Curves for Women" exercise franchises seen in every state and all over the world. As with any highly successful business endeavor, the company and its founder/CEO Gary Heavin recognize that continued success depends on innovation.

Military avionics systems represent some of the most complex real-time embedded computing systems in existence. The focus of this research is extending the performance and lifetime of these systems.

The objective of this research is the development of an Intelligent Distributed Control System (IDCS) for a large-scale power plant, coupled with a complex network of sensors/actuators. To operate a large-scale power plant, the monitoring and control systems are distributed and automated for each subsystem in the power plant. The approach is to use Multi-Agent Systems (MAS), which allows implementation of significantly more sophisticated measures to compensate for the insecure and non-robust properties plaguing traditional control systems.

Our research in microwave applied metrology focuses on the use of low-level microwave signals and ultra-wideband pulses of energy to measure the electromagnetic properties of materials and relates these measurements to biomedical and industrial sensing problems.

A multi-agent system is made of many agents. An agent is a computer software program that is autonomous and situated in a distributed environment in order to meet its design objectives. Since the agents are faced with different environments, they are designed differently and appropriately for the given environment. Moreover, the agent is intelligent because it is reactive, proactive, social, flexible, and robust. In a large-scale distributed complex system, the agent's autonomous and intelligent properties can reduce the complexity by reducing the coupling problems between the subsystems. Furthermore, the proactive, reactive, and robust properties can be well suited for applications in dynamic and unreliable situations.

The proposed project will focus on an investigation of a mathematical approach to extrapolation, using a combination of a system-type neural network architecture and semigroup theory. The target of the investigation will be a class of distributed parameter systems for which, because of their complexity, lack an analytic description.
Although the primary objective is extrapolation, this effort must begin with the development of an analytic description from the given empirical data, and then proceed to extend that analytic description into an adjoining domain space for which there is neither data nor a model. That is, given a set of empirical data for which there is no analytic description, we first develop an analytic model and then extend that model along a single axis. Semigroup theory provides the basis for the neural network architecture, the neural network operation and also for the extrapolation process.

Paper fibers are processed and transported throughout the paper mill in a water slurry. Precise measurement of the weight of fibers present in the slurry is required at each step of the process. This project is part of the Microwave Applied Metrology Research.

Reconfigurable computers use large field programmable gate arrays (FPGAs) to augment traditional microprocessors. The circuitry in the FPGAs can be reconfigured in a fraction of a second to implement a custom coprocessor that is optimized for a particular application. For some problems it is possible to achieve a speedup of over two orders of magnitude.

A swarm can be defined as a loosely coupled set of agents obeying simple rules that combine into an emergent behavior whose aggregate performance exceeds the sum of individual efforts. Mechanized autonomous swarm behavior has compelling characteristics, including fault tolerance, performance plasticity, and decentralized control.

Engineers the world over are familiar with the similarities and differences between continuous and discrete systems, manifested in the study of Laplace and Z transforms, continuous and discrete Fourier transforms, continuous and discrete Lyapunov equations and a host of other results.

Allan M. Cormack and Godfrey N. Hounsfield were recipients of the 1979 Nobel Prize in Physiology or Medicine for their independent inventions of computer tomography - a procedure whereby objects are reconstructed from their projections.
The Regulatorium and the Moral Imperative
Saturday, November 01, 2003

This is a copy of my November 2003 VON Magazine column.

Regulation is a world-wide concern, with the issue of scarcity as a key factor.

At VON Fall 2003, I gave a presentation which I entitled "Don't Ask" because I come from a software culture where you don't have to ask permission -- you just write your code and that's that. Now I find myself dealing with a complex system of rules and requirements that I call The Regulatorium. I could argue about each aspect but, basically, the system is self-perpetuating. Each rule is necessary because of another, related rule. You have to step back and see the system as a whole, and in order to talk about it we need to assign it a suitable name. Thus, The Regulatorium.

One advantage of attending VON is that I got a chance to meet a number of the regulators. Not all of them are clueless. In fact, some are very interested in understanding what is happening and coming to terms with it. While it's true that the incumbents want to fight to preserve their privilege, they wouldn't succeed if their arguments didn't make sense to the policymakers and to those who elect the policymakers.

The regulators have real concerns about societal issues such as emergency services and the responsibility to make sure that everyone is connected to the network. These are noble goals, but it is hard to speak about them because they've become so identified with the telecommunications industry. Or, at least, the telecommunications industry as it was codified in the 1930s.

The very first amendment to the United States Constitution guaranteed free speech. You needn't ask permission before speaking and the onus was on those listening to deal with what you said. The courts have been reluctant to put any restrictions on speech, at least if it is done over the same media that was available in the 1700s. The US benefited from this spirit because it allowed innovative ideas to be heard and instead of prejudging which were good and bad, we had a marketplace in which ideas could find a home and flourish. Those ideas which were not adopted simply faded away and could vie for attention later. Great ideas were embodied in the innovations that drove the economy. If no one listens or the marketplace doesn't buy into the idea, then you try again. Failure is a learning experience and not a moral failure.

Telecommunications was very different. It was born in poverty. Alexander Graham Bell was trying to find out how to share a scarce resource -- the telegraph line -- and failed. With voice we still have one conversation per pair of wires to this day! The idea behind sharing a telegraph line -- the harmonic telegraph -- did find a use in radio. It became spectrum allocation. Instead of one radio station, we could now have many. But "many" is not the same as "unlimited." We had to create a world wide system to ensure that no one owned a transmitter. Scarcity forces us to make choices; instead of a marketplace, we have to prejudge what are the good and bad uses of a technological resource.
This is a moral decision, not a technical one. We must rely on morality. And once it becomes a moral issue, it's hard to resist going a step further and having the good applications contribute to the moral campaign to ensure both access and safety.

In the last 50 years, thanks to Hollywood actress and inventor Hedy Lamarr, who gave us the beginning of spread spectrum as an alternative to frequency allocation, and to Claude Shannon, who demonstrated that there was no intrinsic relationship between the transport and the content, telecommunications has been able to join the rapid growth that had characterized the computer industry. Instead of having to focus on allocating scarcity we have found that demand actually creates supply and therefore we have abundance.

Technically we could embrace the new opportunity. But the moral imperative has made it difficult for the policy makers to embrace the opportunity to do far more for public safety than E911 could possibly do. Policy makers also seem stymied with regard to thinking creatively about universal access. We must recognize when doing what was good becomes bad.

MIT alumnus Bob Frankston was the co-developer (with Dan Bricklin) of the legendary VisiCalc. He later developed Lotus Express. During his 1990s tenure at Microsoft, Frankston initiated the home networking effort which has made it possible for you to buy a small router and easily connect your home with the Internet. He can be reached via email.
Metal Type in the 21st Century

'Making Faces: Metal Type in the 21st Century' is a fascinating design documentary by Richard Kegler that captures the personality and work process of the late Canadian graphic artist and type designer Jim Rimmer (1931-2010). Making Faces focuses on one man's dedication to his craft and relays the details of creating a metal typeface, while also conveying this passion to anyone who values the "hand-made" in today's world of convenience. Jim Rimmer's good humor and intelligent description of his process make it an enjoyable viewing experience for those who are even vaguely interested in how things are made.

In 2008, P22 type foundry commissioned Jim Rimmer to create a new type design, RTF Stern, that became the first-ever simultaneous release of a digital font and hand-set metal font. Rimmer was one of only a few who possessed the skills needed to create a metal font. Shot in High Definition, this film documents the creation of a new typeface from the preliminary sketches through the cutting and casting of a single letter. The film offers a unique opportunity to share Jim's knowledge with the world. Clearly anyone interested in type design, letterpress printing and graphic processes would see Making Faces as something inspiring and essential, as there are few films focusing on these topics.

Richard Kegler is founder and lead designer of P22 type foundry. He is also currently director of the WNY Book Arts Center in Buffalo, NY. Kegler has a Master's degree from the Department of Media Study at the University at Buffalo.

Here's a better sample -- the trailer for the film: This film is a unique opportunity to share Jim's knowledge, processes and passions with the world.

Region Free DVD Package includes:
- 45-minute documentary
- In-depth bonus features on the type-making process
- Newly digitized rare silent film from the 1930s: "The Creation of a Printing Type from the Design to The Print by Frederic W. Goudy"
More and less weather
October 18th, 2012 at 8:39 pm by Dr. Drang

Like everyone else, I went to the App Store yesterday and bought Check the Weather, the new app from David Smith. Unlike everyone else, I had a serious test for it: to help me decide when I should start my bike ride home from work to avoid the worst of the rain. We’ll get to that after a brief overview.

Check the Weather is an app that couldn’t have existed a few years ago. It’s not that it’s especially resource-hungry (not that I know of, anyway), it’s that its interface assumes a usage grammar that barely existed in the early days of the iPhone. Here’s the main screen:

There are, apparently, no buttons. If you want more information than the main screen gives you, you swipe. Swiping to the right reveals an hour-by-hour forecast for the rest of the day. Swiping to the left reveals a day-by-day forecast for the next couple of weeks. Swiping up reveals the local weather radar and the Dark Sky short-term rain forecast. Note that these supplemental screens don’t entirely replace the main screen; there’s always some of it still showing (albeit darkened) to remind you which way to swipe to get back.

How are you supposed to know to swipe in the first place? When Check the Weather is first launched, it takes you through a quick tutorial. It also asks if you’ll allow it to access Location Services, which you certainly should—that’s the easiest way to tell it where you are so you get the local forecast.

There is, by the way, one button on the main screen. If you tap the Current Location at the top of the screen, you’ll be taken to a screen that lets you create and choose from a series of saved locations, and set a couple of defaults (Fahrenheit vs. Celsius and 12-hour vs. 24-hour clock).

The information Check the Weather provides is nice, but a little less than I’d hoped for. I like that the main screen gives the sunrise and sunset times, but I wish it also provided the wind speed and direction. Wind chill and heat index, when appropriate, would be nice, too.

There’s more than I’d hoped for, too. The extended forecast is just silly—predictions beyond a few days are hopelessly unreliable. I get the feeling they’re present only because there’s a lot of vertical space to fill. (These screenshots are from an iPhone 5. I assume the forecast is truncated by a couple of days on shorter phones.)

The short-term rain forecast is a nice idea. I suspect Check the Weather is the first app to use Dark Sky’s API, and it does a nice job of presenting the forecast. The drops on the left side indicating the intensity of the rain is a nice touch, and it was a good idea to forgo the animated wiggly line in the graph. I like the wiggle in Dark Sky itself, but it would look out of place in Check the Weather.

Will it take the place of my own little weather CGI script? Yes and no. Certainly, it’ll be what I use when traveling, because my script is tied to where I live. But the lack of wind info in Check the Weather is a serious downside for me, at least during biking season when I often have to fight the wind.

As for Check the Weather’s performance yesterday, when I needed to time my bike ride home to avoid getting thoroughly soaked, its inclusion of the Dark Sky short-term rain forecast was a big help, but its depiction of the size of the storm has me worried about the accuracy of its radar, which it
Privacy Policy
Updated: March 2014

Legacy.com, Inc. provides various features and tools that allow users to access obituaries, express condolences and share remembrances of friends and loved ones. Legacy.com, Inc. offers these features and tools at www.legacy.com and other websites and applications ("App") powered by Legacy.com (collectively, the "Services"). Please read the following privacy policy ("Privacy Policy") carefully before using the Services.

1. WHAT THIS PRIVACY POLICY COVERS
3. INFORMATION WE COLLECT
4. HOW WE USE YOUR INFORMATION
5. PRIVACY ALERT: YOUR POSTINGS GENERALLY ARE ACCESSIBLE TO THE PUBLIC
6. CHILDREN’S PRIVACY
8. SECURITY MEASURES
9. CORRECTING OR UPDATING INFORMATION
10. OPT-OUT PROCEDURES
11. ADVERTISING AND LINKS
12. CHANGES TO THIS PRIVACY POLICY
13. ADDITIONAL INFORMATION

This Privacy Policy covers how Legacy.com, Inc. (collectively, "Legacy.com", "we", "us", or "our") treats user or personally identifiable information that the Services collect and receive. If, however, you are accessing this Privacy Policy from a website operated in conjunction with one of our affiliates ("Affiliates"), the privacy policy of the Affiliate will govern unless such policy states otherwise. Subject to the above, this Privacy Policy does not apply to the practices of companies that Legacy.com, Inc. does not own, operate or control, or to people that it does not employ or manage.

In general, you can browse the Services without telling us who you are or revealing any information about yourself to us. Legacy.com, Inc. does not collect personally identifiable information about individuals, except when specifically and knowingly provided by such individuals on interactive areas of the Services. "Personally Identifiable Information" is information that can be used to uniquely identify, contact, or locate a single person, such as name, postal address, email address, phone number, and credit card number, among other information, and that is not otherwise publicly available. Any posting made while using the Services, and any other information that can be viewed by the public, is therefore not considered "personal information" or "Personally Identifiable Information" and is not the type of information covered by this Privacy Policy.

Personally Identifiable Information. Examples of Personally Identifiable Information we may collect include name, postal address, email address, credit card number and related information, and phone number. We may also collect your date of birth, geo-location, social networking profile picture and the date of birth and date of death for the deceased person in connection with certain features of the Services. We also maintain a record of all information that you submit to us, including email and other correspondence. We may collect Personally Identifiable Information when you register to receive alerts or offerings, sponsor, access or submit information in connection with certain Services, post other content through the Services, purchase products or other services, opt-in to receive special offers and discounts from us and our selected partners or participate in other activities offered or administered by Legacy.com. We may also collect Personally Identifiable Information about your transactions with us and with some of our business partners. This information might include information necessary to process payments due to us from you, such as your credit card number.
Legacy.com allows certain social media platforms to host plug-ins or widgets on the Sites which may collect certain information about those users who choose to use those plug-ins or widgets. We do not intentionally collect Personally Identifiable Information about children under the age of 13. Please see the section on "Children’s Privacy" below.

Other Anonymous Information. Like most websites, Legacy.com also receives and records information on our server logs from your browser automatically and through the use of electronic tools such as cookies, web beacons and locally shared objects (LSOs). Our server logs automatically receive and record information from your browser (including, for example, your IP address, and the page(s) you visit). The information gathered through these methods is not "personally identifiable;" i.e., it cannot be used to uniquely identify, contact, or locate a single person. Some browsers allow you to indicate that you would not like your online activities tracked, using "Do Not Track" indicators ("DNT Indicators"), however we are not obligated to respond to these indicators. Presently we are not set up to respond to DNT Indicators. This means that we may use latent information about your online activities to improve your use of our Services, but such usage is consistent with the provisions of this Privacy Policy.

We will use your information only as permitted by law, and subject to the terms of our Privacy Policy.

Use of Personally Identifiable Information: We do not sell or share your Personally Identifiable Information with unrelated third parties for their direct marketing purposes. Personally Identifiable Information and other personal information you specifically provide may be used:

- to provide the Services we offer, to process transactions and billing, for identification and authentication purposes, to communicate with you concerning transactions, security, privacy, and administrative issues relating to your use of the Services, to improve Services, to do something you have asked us to do, or to tell you of Services that we think may be of interest to you;
- to communicate with you regarding the Services;
- for the administration of and troubleshooting regarding the Services.

Certain third parties who provide technical support for the operation of the Services (our web hosting service, for example) may need to access such information from time to time, but are not permitted to disclose such information to others.

We may disclose Personally Identifiable Information about you under the following circumstances:

In the course of operating our business, it may be necessary or appropriate for us to provide access to your Personally Identifiable Information to others such as our service providers, contractors, select vendors and Affiliates so that we can operate the Services and our business. Where practical, we will seek to obtain confidentiality agreements consistent with this Privacy Policy and that limit others’ use or disclosure of the information you have shared.
We may share your Personally Identifiable Information if we are required to do so by law or we in good faith believe that such action is necessary to: (1) comply with the law or with legal process (such as pursuant to court order, subpoena, or a request by law enforcement officials); (2) protect, enforce, and defend our Terms of Use, rights and property; (3) protect against misuse or unauthorized use of the Services; or (4) protect the personal safety or property of our users or the public (among other things, this means that if you provide false information or attempt to pose as someone else, information about you may be disclosed as part of any investigation into your actions).

Use of Anonymous Information: Certain information that we collect automatically or with electronic tools or tags (such as cookies) is used to anonymously track and measure user traffic for the Services and to enhance your experience with the Services and our business partners. For example:

IP Addresses/Session Information. We occasionally may obtain IP addresses from users depending upon how you access the Services. IP addresses, browser, and session information may be used for various purposes, including to help administer the Services and diagnose and prevent service or other technology problems related to the Services. This information also may be used to estimate the total number of users downloading any App and browsing other Services from specific geographical areas, to help determine which users have access privileges to certain content or services that we offer, and to monitor and prevent fraud and abuse. IP addresses are not linked to Personally Identifiable Information.

Cookies. A cookie is a small amount of data that often includes an anonymous unique identifier that is sent to your browser from a website’s computers and stored on your computer’s hard drive or comparable storage media on your mobile device. You can configure your browser to accept cookies, reject them, or notify you when a cookie is set. If you reject cookies, you may not be able to use the Services that require you to sign in, or to take full advantage of all our offerings. Cookies may involve the transmission of information either directly to us or to another party we authorize to collect information on our behalf. We use our own cookies to transmit information for a number of purposes, including to:

- require you to re-enter your password after a certain period of time has elapsed to protect you against others accessing your account contents;
- keep track of preferences you specify while you are using the Services;
- estimate and report our total audience size and traffic;
- conduct research to improve the content and Services.

We let other entities that show advertisements on some of our web pages or assist us with conducting research to improve the content and Services set and access their cookies on your computer or mobile device. Other entities’ use of their cookies is subject to their own privacy policies and not this Privacy Policy. Advertisers or other entities do not have access to our cookies.

Page Visit Data. We may record information about certain pages that you visit on our site (e.g. specific obituaries) in order to recall that data when you visit one of our partners’ sites. For example, we may record the name and address of the funeral home associated with an obituary to facilitate a flower order.
We may share anonymous information aggregated to measure the number of App downloads, number of visits, average time spent on the Services websites, pages viewed, etc. with our partners, advertisers and others. Your own use of the Services may disclose personal information or Personally Identifiable Information to the public. For example: Submissions and other postings to our Services are available for viewing by all our visitors unless the sponsor or host of a Service selects a privacy setting that restricts public access. Please remember that any information disclosed on a non-restricted Service becomes public information and may be collected and used by others without our knowledge. You therefore should exercise caution when disclosing any personal information or Personally Identifiable Information in these forums. When you post a message to the Services via message board, blog, or other public forum available through the Services, your user ID or alias that you are posting under may be visible to other users, and you have the ability to post a message that may include personal information. If you post Personally Identifiable Information online that is accessible to the public, you may receive unsolicited messages from other parties in return. Such activities are beyond the control of Legacy.com, Inc. and the coverage of this Privacy Policy. Please be careful and responsible whenever you are online. In addition, although we employ technology and software designed to minimize spam sent to users and unsolicited, automatic posts to message boards, blogs, or other public forums available through the Services (like the CAPTCHA word verification you see on email and registration forms), we cannot ensure such measures to be 100% reliable or satisfactory. Legacy.com, Inc. does not intentionally collect from or maintain Personally Identifiable Information of children under the age of 13, nor do we offer any content targeted to such children. In the event that Legacy.com, Inc. becomes aware that a user of the Services is under the age of 13, the following additional privacy terms and notices apply: Prior to collecting any Personally Identifiable Information about a child that Legacy.com, Inc. has become aware is under the age of 13, Legacy.com, Inc. will make reasonable efforts to contact the child’s parent, to inform the parent about the types of information Legacy.com, Inc. will collect, how it will be used, and under what circumstances it will be disclosed, and to obtain consent from the child’s parent to collection and use of such information. Although Legacy.com, Inc. will apply these children’s privacy terms whenever it becomes aware that a user who submits Personally Identifiable Information is less than 13 years old, no method is foolproof. Legacy.com, Inc. strongly encourages parents and guardians to supervise their children’s online activities and consider using parental control tools available from online services and software manufacturers to help provide a child-friendly online environment. These tools also can prevent children from disclosing online their name, address, and other personal information without parental permission. Personally Identifiable Information collected from children may include any of the information defined above as Personally Identifiable Information with respect to general users of the Services and may be used by Legacy.com, Inc. for the same purposes. 
Except as necessary to process a child’s requests or orders placed with advertisers or merchants featured through the Services, Legacy.com, Inc. does not rent, sell, barter or give away any lists containing a child’s Personally Identifiable Information for use by any outside company. A child’s parent or legal guardian may request Legacy.com, Inc. to provide a description of the Personally Identifiable Information that Legacy.com, Inc. has collected from the child, as well as instruct Legacy.com, Inc. to cease further use, maintenance and collection of Personally Identifiable Information from the child. If a child voluntarily discloses his or her name, email address or other personally-identifying information on chat areas, bulletin boards or other forums or public posting areas, such disclosures may result in unsolicited messages from other parties. Legacy.com, Inc. and/or the Affiliate Newspaper(s) are the sole owner(s) of all non-personally identifiable information they collect through the Services. This paragraph shall not apply to Material subject to the license granted by users to Legacy.com, Inc. pursuant to Section 3 of the Terms of Use governing the Services. The security and confidentiality of your Personally Identifiable Information is extremely important to us. We have implemented technical, administrative, and physical security measures to protect guest information from unauthorized access and improper use. From time to time, we review our security procedures in order to consider appropriate new technology and methods. Please be aware though that, despite our best efforts, no security measures are perfect or impenetrable, and no data transmissions over the web can be guaranteed 100% secure. Consequently, we cannot ensure or warrant the security of any information you transmit to us and you do so at your own risk. You may modify and correct Personally Identifiable Information provided directly to Legacy.com, Inc. in connection with the Services, if necessary. Legacy.com, Inc. offers users the following options for updating information: Send an email to us at Contact Us; or Send a letter to us via postal mail to the following address: Legacy.com, Inc., 820 Davis Street Suite 210, Evanston, IL 60201 Attention: Operations You may opt out of receiving future mailings or other information from Legacy.com, Inc. If the mailing does not have an email cancellation form, send an email to Contact Us detailing the type of information that you no longer wish to receive. 11. THIRD PARTY ADVERTISING AND AD DELIVERY This Service contains links to other sites that may be of interest to our visitors. This Privacy Policy applies only to Legacy.com and not to other companies’ or organizations’ Web sites to which we link. We are not responsible for the content or the privacy practices employed by other sites. Legacy.com works with third parties, including, but not limited to Adtech US, Inc. and Turn Inc. (collectively, the "Ad Delivery Parties:"), for the purpose of advertisement delivery on the Services including online behavioral advertising (“OBA”) and multi-site advertising. Information collected about a consumer’s visits to the Services, including, but not limited to, certain information from your Web browser, your IP address and your email, may be used by third parties, including the Ad Delivery Parties, in order to provide advertisements about goods and services of interest to you. 
These Ad Delivery Parties retain data collected and used for these activities only as long as necessary to fulfill a legitimate Legacy.com business need, or as required by law. The Ad Delivery Parties may also set cookies to assist with advertisement delivery services. For more information about Adtech US, Inc. cookies, please visit http://www.adtechus.com/privacy/. If you would like to obtain more information about the practices of some of these Ad Delivery Parties, or if you would like to make choices about their use of your information, please click here: http://www.networkadvertising.org/choices/ The Ad Delivery Parties adhere to the Network Advertising Initiative’s Self-Regulatory Code of conduct. For more information please visit http://www.networkadvertising.org/about-nai Legacy.com shall obtain your prior express consent (opt-in) before using any of your “sensitive consumer information” as that term is defined in the NAI Principles. Legacy.com may also share your social media identification and account information with the corresponding social media service to allow the service to provide you with advertisements about goods and services of interest to you. Please keep in mind that if you click on an advertisement on the Services and link to a third party’s website, then Legacy.com’s Privacy Policy will not apply to any Personally Identifiable Information collected on that third party’s website and you must read the privacy policy posted on that site to see how your Personally Identifiable Information will be handled. We may periodically edit or update this Privacy Policy. We encourage you to review this Privacy Policy whenever you provide information on this Web site. Your use of the Services after changes of the terms of the Privacy Policy have been posted will mean that you accept the changes. Questions regarding the Legacy.com Privacy Policy should be directed to Contact Us or (As provided by California Civil Code Section 1798.83) A California resident who has provided personal information to a business with whom he/she has established a business relationship for personal, family, or household purposes ("California customer") is entitled to request information about whether the business has disclosed personal information to any third parties for the third parties’ direct marketing purposes. Legacy.com, Inc. does not share information with third parties for their direct marketing purposes. If, however, you are accessing this Privacy Policy from one of our Affiliate sites, the privacy policy of our Affiliate will apply to the collection of your information unless the Affiliate’s privacy policy specifically states otherwise. You should review the privacy policy of the Affiliate to understand what information may be collected from you and how it may be used. California customers may request further information about our compliance with this law by emailing Contact Us. Today's The Eufaula Tribune Notices|
计算机
Ubuntu's Lonely Road "After using more than 15 distros for real, I can tell Ubuntu is good, and I would humbly say that a lot of derivative distros should be more respectful and thankful," suggested Google+ blogger Gonzalo Velasco C. "FOSS people should not attack other distros!" he added. "I am not using Ubuntu now, but I don't spit on the plate I have eaten from." It may be lonely at the top, as the old saying goes, but apparently it can also be lonely a few notches down -- at least if Ubuntu is any example. Though not currently in the No. 1 spot on DistroWatch -- for whatever that's worth -- Ubuntu is often credited with having achieved more mainstream acceptance than any other Linux distro so far. Nevertheless -- or perhaps as a result -- Ubuntu and Canonical are frequently singled out with sharp criticism here in the Linux blogosphere. To wit: "In the last couple of months, one thing is clear ... Canonical appears to be throwing the idea of community overboard as though it was ballast in a balloon about to crash," charged Datamation's Bruce Byfield in a recent post entitled, "The Burning Bridges of Ubuntu." Internally, "the shift can be seen in the recently released Ubuntu 13.10, a release so focused on Canonical's goal of convergence across form factors and so unconcerned with existing users that it has become the least talked-about version for years," Byfield wrote. "Meanwhile, elections for the Community Council, which is supposed to be Ubuntu's governing body, were apparently such low priority that they were held a month after the last Council's term of office expired." Them's grievous charges, to be sure -- but do they signify a distro unmoored? 'He's Slipped His Anchor' "Mark Shuttleworth has invested a lot of time and money in Ubuntu GNU/Linux," blogger Robert Pogson told Linux Girl down at the blogosphere's seedy Punchy Penguin Saloon. "It is his baby, but he's slipped his anchor in FLOSS. "Replacing consensus with dictate is not the right way to make FLOSS nor the right way to use it, for that matter," Pogson asserted. "The power of Ubuntu GNU/Linux lies not in the funding by Canonical but in the vibrant community that pitched in to contribute." It's true that "power corrupts, and Shuttleworth, on the eve of his victory over Wintel, gave up the fight for an easier path, relying on business 'partners' rather than users and developers to get things done," Pogson added. "It seems he now wishes to simply make Canonical a successful business using FLOSS instead of a successful business making FLOSS." 'Just Another Player in IT' In the long run, "that's OK," Pogson asserted. "The licenses permit that. "FLOSS, however, will survive and thrive despite Canonical's exit from the front lines of the battle," he added. "Debian and Red Hat and Mint etc. will carry on towards the inevitable celebration of Wintel's destruction on the battlefield and Freedom for software and its users." It remains to be seen "whether Wintel will be replaced by another set of bullies in IT," Pogson suggested. "I think the world has learned enough from Wintel never to accept that kind of lock-in again." Shuttleworth, meanwhile, "may attempt to become a tyrant, but he will always be just another player in IT," he concluded. "Ubuntu GNU/Linux, which began as a movement to treat people in IT right, may become much less, just another competitor in the IT business. That's disappointing, but we have to accept it. The world of FLOSS is bigger than that and will carry on." 
Indeed, "when one related group of people say something critical, you can dismiss it as politics," consultant and Slashdot blogger Gerhard Mack began. "But when the people complaining are people as unrelated as the kernel developers and KDE, you should consider if they have a point. "I have a lot of respect for Shuttleworth and his many contributions to FOSS, but he really needs to be better at realizing he is wrong about something," Mack said. 'I Can Tell Ubuntu Is Good' Google+ blogger Gonzalo Velasco C. had a different take. "This topic is getting me bored," Gonzalo Velasco C. told Linux Girl. "Once again, Ubuntu is on the gallows pole. It's nonsense." Ubuntu may not be a typical GNU/Linux distribution in that it has a company backing it up, he pointed out. "They have a goal. In this sense I don't blame them for pursuing those goals." Though no big fan of Unity, Gonzalo Velasco C. "thanks the *buntu family for what they do," he said. "After using more than 15 distros for real, I can tell Ubuntu is good, and I would humbly say that a lot of derivative distros should be more respectful and thankful. "FOSS people should not attack other distros!" he added. "I am not using Ubuntu now, but I don't spit on the plate I have eaten from." Two Versions Nothing grows forever, Gonzalo Velasco C. pointed out: "People change; things come and go... Mint is not growing that much either, any more; Fedora and Mageia go up and down; Debian and PCLinuxOS are steady there; smaller rising-star distros die young... it's life." Meanwhile, "Red Hat solved the community-company issue by having Fedora and RHEL versions," he added. "I would recommend Canonical do the same," he concluded: "Ubuntu Enterprise, where Mark and the inner circle can do what they want, and Ubuntu Community Version, were the community can rock and roll." 'The Best Chance We Have' Last but not least, "I am capable of holding two seemingly contradictory ideas at the same time," Google+ blogger Kevin O'Brien told Linux Girl. "First, I think Canonical has an undeniable problem with the wider Linux community," O'Brien suggested. "Second, I think it is the best chance we have for a breakthrough on getting Linux on to more desktops." Canonical "has certainly met the letter of the law on being open source, and I see no evidence that they will change that," he added. "At the same time, they now have a bullseye on their backs, and a large part of the Linux community is willing to go without sleep if necessary to find things to attack them on. "They may feel that is unfair," O'Brien concluded, "but it is a fact, and only Canonical can change it." Katherine Noyes is always on duty in her role as Linux Girl, whose cape she has worn since 2007. A mild-mannered ECT News Editor by day, she spends her evenings haunting the seedy bars and watering holes of the Linux blogosphere in search of the latest gossip. You can also find her on Twitter and Google+.
Cisco Backs Loggly; Watch Out VMware
By Charles Babcock | 9/3/2013 09:51 PM

Log analysis startup wins powerful new backer, just weeks after VMware announced new log analysis tools.

If log file management is a big data problem, Loggly wants to be the answer. So far, however, it's found the ranks of its competitors swelling. The most recent entrant was VMware, which entered the list July 11 with general availability of vCenter Log Insight.

In its latest round of financing, Loggly found a new backer: Cisco Systems. If VMware believes log file management is one of the keys to the virtualized network and data center, Cisco apparently does too. Loggly, a 23-employee company, recently raised $10.5 million, for a total of $20.9 million so far, from a group of backers that included Cisco and Data Collective Venture Capital, Loggly said Tuesday. Data Collective was an early backer of Couchbase and MemSQL.

Loggly CEO Charlie Oppenheimer said in an interview that his firm can't be posed as a direct competitor to vCenter Log Insight because VMware's product is oriented toward existing VMware enterprise customers. Loggly has been accumulating businesses that were born on the Web or are units of large companies doing business on the Web.

[ To see how early entrants into the art of extracting meaning out of the server log files viewed VMware's vCenter Log Insight, see VMware Analysis Tools: Small Step, Big Vision ]

With a product that's been on the market only since mid-2010, Loggly has 3,500 customers. Loggly is finding its initial round of success with "those companies that do the bulk of their business over the Internet," said Oppenheimer. Part of the reason for that is Loggly's log file management is available only as software-as-a-service. Companies with online systems as their primary business channel are the ones most comfortable with adopting SaaS, he said.

Loggly was founded in 2009 in San Francisco and wants to make server logs more accessible to the average systems administrator, operations manager or DevOps manager. Most log file management products require someone knowledgeable in the system to configure it, search for particular software events and draw up a report.

The second generation of the log analysis product, released Tuesday, will try to popularize a new term in IT operations, responsive log management (RLM), claimed Oppenheimer. Loggly doesn't require an agent or any other proprietary additions to a customer's operations. System admins can sign up and connect a server's log file system to the service through standard syslog protocols, including HTTP, RFC 5424 and RFC 3164. Once the data from the server is flowing into the Loggly service, "it's a point-and-click process, using commands like those in Excel," said Oppenheimer. Users may look for "low-memory events" which indicate a system is bogging down and "see what correlates with them," he said.

Users may visualize in a chart the sum total of particular types of events, such as the number of times the database executes commits in a particular time frame. You may look for minimum or maximum usage of resources or look for events that have a response time that falls within a particular range. "It's high-level assistance. You point and click to visualize the data and see the story that the data is trying to tell," he said.
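Since ingestion happens over standard syslog (RFC 5424/RFC 3164) or HTTP rather than through a proprietary agent, a minimal sketch of what that looks like from the sending side is shown below, using Python's standard-library SysLogHandler. The destination hostname and port are placeholders, not Loggly's actual ingestion endpoint, and any account token or source setup the service requires is omitted here.

```python
import logging
from logging.handlers import SysLogHandler

# Placeholder endpoint -- substitute whatever syslog destination your
# log management service gives you (host, UDP/TCP port, token, etc.).
SYSLOG_HOST = "logs.example.com"
SYSLOG_PORT = 514

handler = SysLogHandler(address=(SYSLOG_HOST, SYSLOG_PORT))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)
log.addHandler(handler)

# These lines go out as ordinary RFC 3164-style syslog datagrams,
# which is the kind of traffic a hosted log service can ingest directly.
log.info("user checkout completed in 412 ms")
log.warning("low-memory event: 92% of heap in use")
```

On the receiving side, this is the stream the article describes being parsed, indexed, and then explored through point-and-click queries.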
Unlike the first version of Loggly, generation 2 has a graphical Web interface that allows users to build their own reports, based on log-file data that has already been parsed and indexed. The first version gave customers a simple, command line interface with which to examine the data.

In a media release Tuesday, marketing director David Ewart wrote that the "stories" feature in version two means "that the service should not simply be a tool to inspect log data but rather should reveal the stories that the data tells. The stories are visual representations that provide insights in and of themselves as well as provide fine-pitch guidance as to where to focus further investigative effort." In an interview, he clarified that Loggly provides views into log file data but doesn't attempt to apply machine learning or conclusions deduced from artificial intelligence that would advise system admins on what to do next.

The service runs on both Loggly's colocation servers in San Francisco and on Amazon Web Services. Loggly uses AWS for the part of the service that shows elastic demand, the data collection service. The analytics and data visualization parts are done on Loggly servers. Both operations can be done in either location as a safeguard against a service outage, Ewart said.

The service is priced at $49 per month for the Developer version of the service, to process 1 GB or less of log file data per day; data is retained for only seven days. A Production version is priced at $349 a month for processing up to 7 GBs a day; data is retained for 15 days. If a customer has 50 GBs of data a day, the service would cost $2,600 a month. There is a free Light version for less than 1 GB of data a day.

Oppenheimer is the former CEO of startups Aptivia, purchased by Yahoo in 2000 as the basis for Yahoo Shopping, and Digital Fountain, a supplier of wireless video infrastructure, purchased by Qualcomm in 2009.

Learn more about SaaS by attending the Interop conference track on Cloud Computing and Virtualization in New York from Sept. 30 to Oct. 4.

Reader comments:

User Rank: Strategist | 9/5/2013, 12:00:41 AM | re: Cisco Backs Loggly; Watch Out VMware
Cisco and VMware seem to be butting heads at every turn.

re: Cisco Backs Loggly; Watch Out VMware
The 3,500 customers number includes users of Loggly's free service, so not all of them are paying customers. How many are paying customers? Oppenheimer declines to divulge that number. His response, "a substantial" share of them, is non-specific. Substantial compared to what?

User Rank: Ninja | 9/4/2013, 3:02:53 PM | re: Cisco Backs Loggly; Watch Out VMware
Wow, $2,600 a month for 50 GBs of data a day? That could add up fast for a larger company in the big data era, and "responsive log management" sounds like APM-lite.
计算机
2014-23/3292/en_head.json.gz/14494
Ten Myths of Internet Artby Jon Ippolito By the time the mainstream art world awakened to the telecommunications revolution of the 1990s, a new landscape of exploration and experimentation had already dawned outside its window. Art on this electronic frontier-known variously as Internet art, online art, or Net art-matured at the same breakneck pace with which digital technology itself has expanded. Less than a decade after the introduction of the first image-capable browser for the World Wide Web, online art has become a major movement with a global audience. It took twenty years after the introduction of television for video artists such as Nam June Paik to access the technology required to produce art for broadcast television. Online artists, by comparison, were already exchanging text-based projects and criticism before the Internet became a visual medium with the introduction of the Mosaic browser in 1993. By 1995, eight percent of all Web sites were produced by artists, giving them an unprecedented opportunity to shape a new medium at its very inception. Since that time, art on the Internet has spawned countless critical discussions on e-mail-based communities such as the Thing, Nettime, 7-11, and Rhizome.org. Encouraged by a growing excitement over the Internet as a social and economic phenomenon, proliferating news articles and museum exhibitions have brought online art to the forefront of the discussion on art's future in the 21st century. One of the reasons for the difficulty of adapting a museum to networked culture is that numerous misconceptions persist about that culture-even those who are savvy about art or the Internet do not often understand what it means to make art for the Internet. The following are ten myths about Internet art worth dispelling. Myth Number 1: The Internet is a medium for delivering miniature forms of other art mediums. Though you might never know it from browsing many of the forty million Web sites listed in an online search for the word "art," the Internet is more than a newfangled outlet for selling paintings. Granted, searching Yahoo for "Visual Art" is just as likely to turn up alt.airbrush.art as äda'web, but that's because Internet art tends to make its cultural waves outside of art-world enclaves, surfacing on media venues like CNN and the Wall Street Journal as well as on museum Web sites. More importantly, this art exploits the inherent capabilities of the Internet, making both more participatory, connective, or dynamic. Online renditions of paintings or films are limited not only by the fact that most people cannot afford the bandwidth required to view these works at their original resolution, but also because painting and cinema do not benefit from the Internet's inherent strengths: You would expect more art made for television than a still image. So when surfing the Web, why settle for a scanned-in Picasso or a 150-by-200 pixel Gone with the Wind? Successful online works can offer diverse paths to navigate, recombine images from different servers on the same Web page, or create unique forms of community consisting of people scattered across the globe. Myth Number 2: Internet art is appreciated only by an arcane subculture. Museum curators are sometimes surprised to discover that more people surf prominent Internet art sites than attend their own brick-and-mortar museums. To be sure, the online art community has developed almost entirely outside the purview of galleries, auction houses, and printed art magazines. 
Ironically, however, online art's disconnect from the mainstream art world has actually contributed to its broad appeal and international following. The absence of a gallery shingle, a museum lintel, or even a "dot-art" domain suffix that flags art Web sites means that many people who would never set foot in a gallery stumble across works of Internet art by following a fortuitous link. Without a Duchampian frame to fall back on, most online artworks look outside of inbred references to art history or institutions for their meaning. For these reasons, the Guggenheim's acquisition of online works into its collection is less a radical experiment in evaluating a new medium than a recognition of the importance of this decade-old movement. Myth Number 3: To make Internet art requires expensive equipment and special training. One of the reasons network culture spreads so quickly is that advances don't come exclusively from Big Science or Big Industry. Individual artists and programmers can make a difference just by finding the right cultural need and fulfilling it through the philosophy of "DIY: Do It Yourself." In the right hands, homespun html can be just as powerful as elaborate vrml environments. And thanks to View Source-the browser feature that allows surfers to see how a Web page is built and reappropriate the code for their own means-online artists do not need residencies in research universities or high technological firms to acquire the necessary skills. The requirement that online artworks must squeeze through the 14.4 kb/s modems of dairy farmers and den mothers forces online artists to forgo the sensory immersion of IMAX or the processing power of Silicon Graphics. However, constraints on bandwidth and processor speed can actually work to the advantage of Internet artists, encouraging them to strive for distributed content rather than linear narrative, and to seek conceptual elegance rather than theatrical overkill. Making successful art for the Internet is not just a matter of learning the right tools, but also of learning the right attitude. Myth Number 4: Internet art contributes to the "digital divide." The widening gap between digital haves and have-nots is a serious concern in many public spheres, from education to employment. But this bias is reversed for art. While it is true that artists in Ljubljana or Seoul have to invest in a computer and Internet access, finding tubes of cadmium red or a bronze foundry in those locales is even more challenging and much more expensive. Even in Manhattan, an artist can buy an iMac for less than the oils and large stretcher bars needed to make a single "New York-sized" painting. And when it comes to distributing finished works, there is no comparison between the democratizing contact made possible by the Internet and the geographic exclusivity of the analog art world. Only an extreme combination of luck and persistence will grant an artist entrance to gallery openings and cocktail parties that can make or break careers in the New York art world. But artists in Slovenia and Korea-outside of what are considered the mainstream geographic channels of the art world-have had notable success in making art for the Internet, where anyone who signs up for a free e-mail account can debate Internet aesthetics with curators on Nettime or take advantage of free Web hosting and post art for all to see. Myth Number 5: Internet art = Web art. The World Wide Web is only one of the media that make up the Internet. 
Internet artists have exploited plenty of other online protocols, including e-mail, peer-to-peer instant messaging, videoconference software, MP3 audio files, and text-only environments like MUDs and MOOs. It's tempting to segregate these practices according to traditional categories, such as calling e-mail art and other ephemeral formats "performance art." Yet the interchangeability of these formats defies categorization, as when, for example, the transcript of improvisational theater conducted via a chat interface ends up on someone's Web page as a static text file. Internet mediums tend to be technologically promiscuous: Video can be streamed from within a Web page, Web pages can be sent via e-mail, and it's possible to rearrange and re-present images and text from several different sites on a new Web page. These artist-made mutations are not just stunts performed by mischievous hackers; they serve as vivid reminders that the Internet has evolved far beyond the print metaphors of its youth. Myth Number 6: Internet art is a form of Web design. It may be fashionable to view artists as "experienced designers," but there is more to art than design. The distinction between the two does not lie in differences in subject matter or context as much as in the fact that design serves recognized objectives, while art creates its objectives in the act of accomplishing them. The online portfolios of Web design firms may contain dazzling graphics, splashy Flash movies, and other attractions, but to qualify as art such projects must go beyond just visual appeal. Design creates a matrix of expectations into which the artist throws monkey wrenches. Just as a painter plays off pictorial design, a Net artist may play off software design. Design is a necessary, but not sufficient, condition for art. Myth Number 7: Internet art is a form of technological innovation. Internet artists spend much of their time innovating: custom writing Java applets or experimenting with new plug-ins. But innovation in and of itself is not art. Plenty of nonartists discover unique or novel ways to use technology. What sets art apart from other technological endeavors is not the innovative use of technology, but a creative misuse of it. To use a tool as it was intended, whether a screwdriver or spreadsheet, is simply to fulfill its potential. By misusing that tool-that is, by peeling off its ideological wrapper and applying it to a purpose or effect that was not its maker's intention-artists can exploit a technology's hidden potential in an intelligent and revelatory way. And so when Nam June Paik lugs a magnet onto a television, he violates not only the printed instructions that came with the set, but also the assumption that networks control the broadcast signal. Today's technological innovation may be tomorrow's cliché, but the creative misuse of technology still feels fresh even if the medium might be stale. The combined megahertz deployed by George Lucas in his digitally composited Star Wars series only makes more impressive-and equally surprising-the effects Charlie Chaplin achieved simply by cranking film backwards through his camera. In a similar vein, the online artists JODI.org exploited a bug in Netscape 1.1 that allows an "improper" form of animation that predated Flash technology by half a decade. Myth Number 8: Internet art is impossible to collect. 
Although the "outside the mainstream" stance taken by many online artists contributes to this impression, the most daunting obstacle in collecting Internet art is the ferocious pace of Internet evolution. Online art is far more vulnerable to technological obsolescence than its precedents of film or video: In one example, works created for Netscape 1.1 became unreadable when Netscape 2 was released in the mid-1990s. Yet the Guggenheim is bringing a particularly long-term vision to collecting online art, acquiring commissions directly into its permanent collection alongside painting and sculpture rather than into ancillary special Internet art collections as other museums have done. The logic behind the Guggenheim's approach, known as the "Variable Media Initiative," is to prepare for the obsolescence of ephemeral technology by encouraging artists to envision the possible acceptable forms their work might take in the future. It may seem risky to commit to preserving art based on such evanescent technologies, but the Guggenheim has faced similar issues with other contemporary acquisitions, such as Meg Webster's spirals made of leafy branches, Dan Flavin's installations of fluorescent light fixtures, and Robert Morris's temporary plywood structures that are built from blueprints. Preserving those works requires more than simply storing them in crates-so too immortalizing online art demands more than archiving Web files on a server or CD-ROM. Along with the digital files corresponding to each piece, the Guggenheim compiles data for each artist on how the artwork is to be translated into new mediums once its original hardware and software are obsolete. To prepare for such future re-creations, the Guggenheim has started a variable media endowment, where work of interest is earmarked for future data migration, emulation, and reprogramming costs. Myth Number 9: Internet art will never be important because you can't sell a Web site. It is true that the same market that so insouciantly banged gavels for artworks comprised of pickled sharks and other unexpected materials has yet to figure out how to squeeze out more than the cost of dinner for two from the sale of an artist's Web site. The reason artists' Web sites have not made it to the auction block is not their substance or lack thereof, but their very origin (equally immaterial forms of art have been sold via certificates of authenticity since the 1970s). The Internet of the early 1990s, and the art made for it, was nourished not by venture capital or gallery advances but by the free circulation of ideas. Exploiting network protocols subsidized by the US government, academics e-mailed research and programmers ftp'd code into the communal ether, expecting no immediate reward but taking advantage nevertheless of the wealth of information this shared ethic placed at their fingertips. Online artists followed suit, posting art and criticism with no promise of reward but the opportunity to contribute to a new artmaking paradigm. Indeed, many artists who made the leap to cyberspace claimed to do so in reaction to the exclusivity and greed of the art market. It's not clear whether online art can retain its youthful allegiance to this gift economy in the profit-driven world. It is possible, however, to hypothesize a Web site's putative value independent of its price tag in an exchange economy. That value would be the sum total of money a museum would be willing to spend over time to reprogram the site to ward off obsolescence (see Myth Number 8). 
Myth Number 10: Looking at Internet art is a solitary experience. The Internet may be a valuable tool for individual use, but it is far more important as a social mechanism. Beyond the numerous online communities and listservs dedicated to discussing art, many of the best Internet artists reckon success not by the number of technical innovations, but by the number of people plugged in. The hacktivist clearinghouse ®™ark, for example, connects sponsors who donate money or resources for anticorporate protest with activists who promote those agendas. In online art, works as visually dissimilar as Mark Napier's net.flag and John F. Simon, Jr.'s Unfolding Object capture the traces of many viewers' interactions and integrate them into their respective interfaces. In some cases, viewers can see the effects of other participants reflected in the artwork in real-time. In most online art, however, as in most online communication, viewers' interactions are asynchronous - as though an empty gallery could somehow preserve the footprints of previous visitors, their words still ringing in the air.

Jon Ippolito is an artist and the Assistant Curator of Media Arts at the Solomon R. Guggenheim Museum, New York. His collaboration Fair e-Tales can be found at http://www.three.org. The Edge of Art, a book on creativity and the Internet revolution is forthcoming from Thames & Hudson.

Abstract
This article identifies ten myths about Internet Art, and explains the difficulties museums and others have understanding what it means to make art for the Internet. In identifying these common misconceptions, the author offers insight on successful online works, provides inspiration to Internet artists, and explains that geographical location does not measure success when making art for the Internet. The article also mentions that the World Wide Web is only one of the many parts that make up the Internet. Other online protocols include email, peer-to-peer instant messaging, video-conferencing software, MP3 audio files, and text-only environments like MUDs and MOOs. The author concludes his list of myths with the idea that surfing the Internet is not a solitary experience. Online communities and listservers, along with interactive Internet artworks that trace viewers and integrate their actions into respective interfaces, prove that the Internet is a social mechanism.

Jon Ippolito
Assistant Curator of Media Art
Guggenheim Museum
575 Broadway, 3rd Floor
New York NY, 10010 USA
email: [email protected]
计算机
2014-23/3292/en_head.json.gz/16677
Marketers victimized by tools: NCDM keynoter ORLANDO, FL -- In terms of marketing, the tools available to us have never been better, but the situation has never been worse. This was the first of several opinions Ryan Mathews offered in his keynote presentation called "Changing Times - Using Market Insights and Technology to Grow Your Business" Dec. 13 at the 2006 National Center for Database Marketing conference. "In some degree, we are being victimized by our tools," Mr. Matthews told the assembled database marketers. "We have greater and greater ability to measure and to segment and to target, but what we don't have is the analytics, particularly in terms of the Web, to catch up with this information." Mr. Mathews is founder/CEO of Black Monk Consulting, in East Pointe, MI. He is an international consultant and commentator on topics such as innovation, technology, global customer trends and retailing. He is also co-author of "The Myth of Excellence: Why Great Companies Never Try to Be the Best at Everything." In his presentation, Mr. Matthews looked at the dynamics driving change in the world of direct marketing and database management. He encouraged the audience to rethink the rules to remain competitive, understand the customer of the future, and develop a new model for the relationship between data analysts and marketers. Mr. Matthews said that as opposed to having great advances in hardware and software, the companies that will succeed in the future "will be those marketers that have different insights into consumers." Another assumption Mr. Matthews made was that in the future, uninvited access is going to be more and more difficult. So is the ability to segment people for marketers to take action. "The problem is, we are living in an era where people don't like to be put into nice, neat little boxes," he said. Mr. Matthews told the audience that while there are many devices through which people can be marketed to, such as GPS systems, cell phones, telephone or the Internet, the upside of this is it has created a new flock of technology devices as way to keep marketers out. Spam blockers and caller ID are good examples. "Companies are trying hard to break through all of this electronic clutter to get to the consumer," he said. Instead of focusing on trying to reach individuals in general, marketers might try targeting the multifaceted composite of an individual as defined by his or her attitudes at a specific time and place. Mr. Matthews referred to this as marketing to the "instavidual." "If you want to market correctly, you should market to an instavidual, so its important to know at what point you are intercepting that person, because that's going to be who they are at that moment, and they are going to see your offer as being effective or ineffective on how successful you are in matching the message to the moment." Mr. Matthews said people today are re-aggregating themselves into neo-tribes, and marketers should be aware of this. "They are not ribs they belong to, but tribes they want to belong to," he said. An example of this is motorcycle enthusiasts, who are accountants and lawyers during the week and bad guy bikers on the weekends. "Marketers have to understand how tribes work, because I suspect that marketing to tribes is going to be more effective than marketing to cohorts," Mr. Matthews said. "Because people volunteer for the tribe, they have passion for the tribe, they have belief in the tribe, and they want to be identified with the tribe." 
To illustrate this point, he discussed how the AARP changed its magazine title from Modern Maturity to My Generation in an effort to connect with its members who don't necessarily think of themselves as mature or old. Modern Maturity was a cohort, but My Generation targets the tribe. A key trend marketers should keep in mind is marketing to consumers' values as opposed to a ZIP or industry code. "Whole Foods is a champion of [this type of marketer], selling over-priced food to people who have a certain value system," Mr. Matthews said. The automotive industry, he said, is also trying to revive itself in this fashion by selling green cars to a group of consumers with a certain value system, "people who want to save the world." Also, visual Web sites are rapidly replacing text-based sites on the Internet, Mr. Matthews said, "so in the future, effective direct marketing will have to speak in the language of images, not words." This material may not be published, broadcast, rewritten or redistributed in any form without prior authorization.
计算机
2014-23/3292/en_head.json.gz/17204
The Linux Foundation Announces LinuxCon North America Keynote Speakers and 20th Anniversary Gala By Linux_Foundation - April 13, 2011 - 10:20pm NEWS HIGHLIGHTS Top mobile and enterprise Linux executives lead keynote agenda: Red Hat CEO Jim Whitehurst to address enterprise Linux at 20 years and HP Chief Technology Officer Phil McKinney to discuss WebOS Internet and society author Clay Shirky will illustrate how collaboration is shaping today’s global culture LinuxCon Gala to mark official 20th Anniversary of Linux celebration and gather an unprecedented who’s who of Linux’ past, present and future SAN FRANCISCO, April 14, 2011 – The Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux, today announced its keynote speakers for North America’s premier annual conference LinuxCon, taking place in Vancouver, B.C. August 17-19, 2011. The LinuxCon keynote lineup reflects major trends in the Linux market, from Linux in the enterprise and mobile computing, to its impact on today’s society and culture. The following keynote speakers have been confirmed: * Mark Charlebois, Director of Open Source Strategy at Qualcomm Innovation Center (QuIC), will discuss the role of Linux in mobile development and innovation. * Phil McKinney, Vice President and Chief Technology Officer at HP, will elaborate on the company’s WebOS platform strategy. * Marten Mickos, Chief Executive Officer at Eucalyptus Systems and former CEO of MySQL, is a recognized enterprise software entrepreneur and investor with a keen understanding of Linux and open source software. Mickos will discuss the changing enterprise Linux landscape, specifically as it relates to cloud computing. * Ubuntu’s Technical Architect Allison Randal will share how the vibrant Ubuntu community and development team are turning the vision for Linux into reality. Randal has more than 25 years of experience as a programmer and was the chief architect and lead developer on the open source project Parrot for many years. * Clay Shirky is an award-winning author and expert on how technology shapes culture. Also a New York University Professor on Internet and Society whose recent book releases include “Cognitive Surplus: How Technology Makes Consumers into Collaborators” and “Here Comes Everybody: The Power of Organizing Without Organizations,” Shirky will discuss how collaboration is impacting today’s culture. * More than 10 years after being the first Linux company to go public and as we approach the 20-year anniversary of Linux, Red Hat CEO Jim Whitehurst will detail the biggest challenges we still face and what the next 20 years looks like. The Call for Participation (CFP) deadline for LinuxCon is April 22, 2011. The Linux Foundation encourages all would-be speakers to submit a talk on technical, business or legal developments impacting Linux. To submit a proposal and be a part of this year’s historic event, please visit: http://events.linuxfoundation.org/events/linuxcon/cfp. Additional details, speakers and sessions will be announced after all proposals have been received. The event this year will also be co-located with the KVM Forum, as well as other community mini-summits. Registration is U.S. $500 through July 8, 2011. To register, please visit: http://events.linuxfoundation.org/events/linuxcon/register. 
LinuxCon Gala to Mark Formal Celebration of 20th Anniversary of Linux

The Linux Foundation today is also announcing additional details about the LinuxCon Gala, which will celebrate 20 years of Linux at the Commodore Ballroom in Vancouver. The LinuxCon Gala will take place the evening of August 17, 2011 and will celebrate the 20th Anniversary of Linux with a "Roaring 20s" theme. The Linux Foundation is assembling an unprecedented lineup of key personalities to represent Linux' past, present and future and will host a unique ceremony with special presentations of awards. The event will include a live band, casino, full dinner and open bar. Penguin suits (tuxedos) and "flapper" dresses will be available to rent onsite. For more information about the LinuxCon Gala, please visit: http://events.linuxfoundation.org/events/linuxcon/social.

The winner of this year's Linux Foundation Video Contest will also be revealed at LinuxCon. The contest is focused on the 20th Anniversary of Linux and is being judged by Linux creator Linus Torvalds. For more information and to submit your video, please visit: http://video.linux.com/20th-anniversary-video-contest. The Linux Foundation also produced the following "Story of Linux" video to help inspire submissions: http://www.youtube.com/watch?v=5ocq6_3-nEw. The 20th Anniversary Video Booth, which is traveling to Linux Foundation events throughout the year, will also be onsite for attendees to record personal messages to the rest of the community about Linux' past, present and future.

LinuxCon, which has sold out every year since its debut, is the world's leading conference addressing all matters Linux for the global business and technical communities. The LinuxCon schedule includes in-depth technical content for developers and operations personnel, as well as business and legal insight from the industry's leaders. The networking, problem-solving and deal-making opportunities at LinuxCon are unmatched for those involved in enterprise, desktop or mobile Linux. For more information, please visit the LinuxCon website.

About The Linux Foundation
The Linux Foundation is a nonprofit consortium dedicated to fostering the growth of Linux. Founded in 2000, the organization sponsors the work of Linux creator Linus Torvalds and promotes, protects and advances the Linux operating system by marshaling the resources of its members and the open source development community. The Linux Foundation provides a neutral forum for collaboration and education by hosting Linux conferences, including LinuxCon, and generating original Linux research and content that advances the understanding of the Linux platform. Its web properties, including Linux.com, reach approximately two million people per month. The organization also provides extensive Linux training opportunities that feature the Linux kernel community's leading experts as instructors. Follow The Linux Foundation on Twitter.

###

Trademarks: The Linux Foundation, Linux Standard Base, MeeGo and the Yocto Project are trademarks of The Linux Foundation. Linux is a trademark of Linus Torvalds.
计算机
2014-23/3292/en_head.json.gz/17508
Researchers aim for cheap peer-to-peer zero-day worm defense

Researchers claim peer-to-peer software that shares information about anomalous behaviour could be the key to shutting down computer attacks inexpensively

Tim Greene (Computerworld) on 14 January, 2009 12:10

Shutting down zero-day computer attacks could be carried out inexpensively by peer-to-peer software that shares information about anomalous behaviour, say researchers at the University of California at Davis.

The software would interact with existing personal firewalls and intrusion detection systems to gather data about anomalous behaviour, says Senthil Cheetancheri, the lead researcher on the project he undertook as a grad student at UC Davis from 2004 to 2007. He now works for SonicWall.

The software would share this data with randomly selected peer machines to determine how prevalent the suspicious activity was, he says. If many machines experience the identical traffic, that increases the likelihood that it represents a new attack for which the machines have no signature.

The specific goal would be to detect self-propagating worms that conventional security products have not seen before. "It depends on the number of events and the number of computers polled, but if there is a sufficient number of such samples, you can say with some degree of certainty that it is a worm," Cheetancheri says. For that decision, the software uses a well-established statistical technique called sequential hypothesis testing, he says.

The detection system is decentralized to avoid a single point of failure that an attacker might target, he says.

The task then becomes what to do about it, he says. In some cases, the cost of a computer being infected with a worm might be lower than the cost of shutting it down, in which case it makes sense to leave it running until a convenient time to clean up the worm, he says. In other cases, the cost to the business of the worm remaining active might exceed the cost of removing the infected machine from the network, he says.

That cost-benefit analysis would be simple to carry out, he says, but network executives would have to determine the monetary costs and enter them into the software configuration so it can do its calculations, he says. End users would not program or modify the core detection engine, he says. "We don't want to have humans in the loop," he says.

He says he and his fellow researchers have set up an experimental detection engine, but it would have to be modified to run on computers in a live network without interfering with other applications and without being intrusive to end users, Cheetancheri says. So far no one he knows of is working on commercializing the idea.

The software would be inexpensive because it would require no maintenance other than to enter the cost of each computer being disconnected from the network.
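The "sequential hypothesis testing" mentioned above has a compact textbook form, the sequential probability ratio test: each peer's yes/no answer nudges a running log-likelihood ratio until it crosses a "worm" or "benign" threshold. The sketch below only illustrates that statistical machinery; the probabilities and error rates are invented teaching values, not figures from the UC Davis work.

```python
# Illustrative sequential probability ratio test (SPRT) over peer reports.
# All numeric parameters here are made-up examples, not values from the research.
import math
import random

P_HIT_WORM = 0.8     # assumed chance a polled peer saw the same traffic if a worm is spreading
P_HIT_BENIGN = 0.1   # assumed chance of a matching report when the event is a local fluke
ALPHA = 0.01         # tolerated false-alarm rate
BETA = 0.01          # tolerated miss rate

UPPER = math.log((1 - BETA) / ALPHA)   # crossing this declares "worm"
LOWER = math.log(BETA / (1 - ALPHA))   # crossing this declares "benign"

def classify(peer_reports):
    """Accumulate evidence from peer answers (True = peer saw identical anomalous
    traffic) and stop as soon as either threshold is crossed."""
    llr = 0.0
    for polled, saw_it in enumerate(peer_reports, start=1):
        if saw_it:
            llr += math.log(P_HIT_WORM / P_HIT_BENIGN)
        else:
            llr += math.log((1 - P_HIT_WORM) / (1 - P_HIT_BENIGN))
        if llr >= UPPER:
            return "worm", polled
        if llr <= LOWER:
            return "benign", polled
    return "undecided", len(peer_reports)

if __name__ == "__main__":
    # Simulate polling 20 random peers during an actual outbreak.
    outbreak = [random.random() < P_HIT_WORM for _ in range(20)]
    verdict, polled = classify(outbreak)
    print(f"verdict={verdict} after polling {polled} peers")
```

The appeal for a decentralized defense is that the test typically commits after only a handful of peer answers, which keeps both the network chatter and the false-alarm rate low.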
计算机
2014-23/3292/en_head.json.gz/17643
Ed Key and David Kanaga do the chat
IGF Factor 2012: Proteus
By Alec Meer on January 27th, 2012 at 11:43 am.

Today in our series profiling (almost) all the PC/Mac-based finalists at this year's Independent Games Festival, we turn to wondrous freeform exploration game Proteus. Here, developer Ed Key and composer David Kanaga talk about the game's origins, the role of music in games, quitting work to go full time on Proteus, wandering hobos and their answers to the most important question of all.

RPS: Firstly, a brief introduction for those who may not know you. Who are you? What's your background? Why get into games? Why get into indie games?

Ed: Hello! I'm a coder who worked in the games industry for a while but dropped out a few years ago for a standard 9-5 job which just happened to let me work on some little prototypes and other larger obsessive projects, like this one. I grew up with 8-bit games so it's nice to resurrect those days when people weren't really sure what games were all about. I'm pretty much equally interested in trying to make weird, arty, surprising games and trying to reinterpret the games I grew up with, like XCOM and Warhammer.

David: I'm a musician. I improvise and I write for instruments and computers. A few years ago, I realized that the kinds of musical ideas I've been increasingly interested in– open forms, free improvisation, etc– are the same kinds of ideas that can be explored in videogames, which I've played on and off since I was young. Thinking about the huge potential of games and music and play made me very excited. I still am, but I'm also generally disappointed by the ways that existing games are using music… I'm hoping to explore alternatives to how things have been done by writing these dynamic scores, feeling out this process, trying to make all the music as interactive as possible without sacrificing its heart (ideally making the interaction itself the heart– like in the best jazz)… I guess sort of trying to destroy the boundary between composition and sound design– which maybe only exists because of that too-common sort of sad quest for "realism." I'm not interested in the way big budget games use music and sound, with a few exceptions (Nintendo is often doing some amazing things, there are others, too…). Indie games seem to be the only type of games that allow for much new thinking, not constrained in all sorts of ways to try to reach the huge existing audiences.

RPS: Tell us about your game. What were its origins? What are you trying to do with it? What are you most pleased about it? What would you change if you could?

Ed: Back in 2009 or so I was playing around with ideas for a "wandering hobo" themed roguelike, along with some world generation ideas. Through various drafts and dead-ends these turned into something like what you see now. At that point I didn't know what the game was going to be at all. Maybe an open world RPG about a chinese exorcist? Anyway after hooking up David for the music, it became an integral part. I wasn't sure how it'd be received without any traditional goals or rewards but many people loved it enough to make me remove any remaining hints of goals. I'm happy and a little surprised that it seems to be so refreshing to so many people. There are a bunch of philosophical things I was trying to do, but I'll spare you the exposition. If I could, I'd change: Everything. Or at least rewrite a load of internal stuff. The codebase is really crusty aft
计算机
2014-23/3292/en_head.json.gz/18496
International Standard Name Identifier

The International Standard Name Identifier (ISNI) is a method for uniquely identifying the public identities of contributors to media content such as books, TV programmes, and newspaper articles. Such an identifier consists of 16 numerical digits divided into four blocks. It was developed under the auspices of the International Organization for Standardization (ISO) as Draft International Standard 27729; the valid standard was published on 15 March 2012. The ISO technical committee 46, subcommittee 9 (TC 46/SC 9) is responsible for the development of the standard. ISNI can be used for disambiguating names that might otherwise be confused, and links the data about names that are collected and used in all sectors of the media industries.

Uses of an ISNI

The ISNI allows a single identity (such as an author's pseudonym or the imprint used by a publisher) to be identified using a unique number. This unique number can then be linked to any of the numerous other identifiers that are used across the media industries to identify names and other forms of identity. An example of the use of such a number is the identification of a musical performer who is also a writer both of music and of poems. Where he or she might currently be identified in many different databases using numerous private and public identification systems, under the ISNI system, he or she would have a single linking ISNI record. The many different databases could then exchange data about that particular identity without resorting to messy methods such as comparing text strings. An often quoted example in the English-language world is the difficulty faced when identifying 'John Smith' in a database. While there may be many records for 'John Smith', it is not always clear which record refers to the specific 'John Smith' that is required. If an author has published under several different names or pseudonyms, each such name will receive its own ISNI. ISNI can be used by libraries and archives when sharing catalogue information; for more precise searching for information online and in databases, and it can aid the management of rights across national borders and in the digital environment.

ORCID

ORCID (Open Researcher and Contributor ID) identifiers are a reserved block of ISNI identifiers for scholarly researchers,[1] administered by a separate organisation.[1] Individual researchers can create and claim their own ORCID identifier.[2] The two organisations coordinate their efforts.[1][2]

ISNI governance

ISNI is governed by an 'International Agency', commonly known as the ISNI-IA. This UK registered, not-for-profit company has been founded by a consortium of organisations consisting of the Confédération Internationale des Sociétés d´Auteurs et Compositeurs (CISAC), the Conference of European National Librarians (CENL), the International Federation of Reproduction Rights Organisations (IFRRO), the International Performers Database Association (IPDA), the Online Computer Library Center (OCLC) and ProQuest. It is managed by directors nominated from these organisations and, in the case of CENL, by representatives of the Bibliothèque nationale de France and the British Library.

ISNI assignment

ISNI-IA uses an assignment system comprising a user interface, data-schema, disambiguation algorithms, and database that meets the requirements of the ISO standard, while also using existing technology where possible.
The system is based primarily on the Virtual International Authority File (VIAF) service, which has been developed by OCLC for use in the aggregatio
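The text above gives the shape of an ISNI (16 characters in four blocks) but not the check-character rule, so the sketch below rests on an assumption: that the final character is an ISO 7064 MOD 11-2 check character computed over the first 15 digits, the scheme commonly described for ISNI and for the ORCID identifiers carved out of its range. Treat the algorithm choice and the sample value as illustrative rather than authoritative.

```python
# Hypothetical helper for formatting and sanity-checking a 16-character ISNI.
# Assumes the ISO 7064 MOD 11-2 check character commonly described for ISNI/ORCID;
# the passage above does not specify the scheme, so verify against ISO 27729.

def mod11_2_check_char(base15: str) -> str:
    """ISO 7064 MOD 11-2 over 15 digits; returns '0'-'9' or 'X'."""
    total = 0
    for ch in base15:
        total = (total + int(ch)) * 2
    result = (12 - (total % 11)) % 11
    return "X" if result == 10 else str(result)

def is_plausible_isni(raw: str) -> bool:
    """Accepts '0000 0001 2146 438X'-style input; strips spaces and hyphens."""
    compact = raw.replace(" ", "").replace("-", "").upper()
    if len(compact) != 16 or not compact[:15].isdigit():
        return False
    return mod11_2_check_char(compact[:15]) == compact[15]

def format_blocks(raw: str) -> str:
    """Render the identifier as four blocks of four, the usual display form."""
    compact = raw.replace(" ", "").replace("-", "").upper()
    return " ".join(compact[i:i + 4] for i in range(0, 16, 4))

if __name__ == "__main__":
    sample = "000000012146438X"   # illustrative digits only, not a claim about any real identity
    print(format_blocks(sample), is_plausible_isni(sample))
```

A disambiguation service such as the VIAF-based assignment system described above would sit on top of plumbing like this: the check character only catches typing errors, while matching public identities to records is the hard, data-driven part.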
计算机
2014-23/3292/en_head.json.gz/18799
FULL Install, All Data gone after reboot trijdmi I have made a full install, so only puppy linux is on the partition. Booting up without the cd works also. But have made a lot of adjustments, installed a lot of programs. But now everything is gone, after reboot! The destop also reverted to the initial state! What now? -----update--- looks like everything is still there, but the desktop reverted to initial state What version of Puppy? What is the specific problem now? _________________I have found, in trying to help people, that the things they do not tell you, are usually the clue to solving the problem. When I was a kid I wanted to be older.... This is not what I expected Back to top Maybe you are still booting from the CD... in that case it might (I don't know - never done it) ignore a full install. @disciple: that is exactly what happens -- boot from CD with a full install, full install is ignored.
计算机
2014-23/3292/en_head.json.gz/19188
Dance Studio still "on the list," say developersby Daniel Whitcomb Feb 28th 2010 at 2:00PM The dance studio is perhaps the most enduring piece of vapor ware in WoW to date. First hinted at in the original WotLK trailer and announced in the months leading up to release, it was conspicuously absent from the finished product, however, and now that we're on the final major patch of the expansion, it still hasn't made it in game. The closest we've gotten to any sort of official word on it, barring the dance battle April fool's joke, is from a Curse interview with the developers that suggests the dance studio won't be in Wrath at all. Still, the question comes up from time to time, and it came up again during the recent Twitter developer chat. The developers answered that it's still on the list of things to do, so it appears it hasn't been completely abandoned. They also mentioned that they always start expansions with more than they can do and prioritize from there. This, of course, brings up another question all on its own: What can we expect to be late in Cataclysm? Should the developers really announce things they don't think they'll have time to implement? Of course, it's part and parcel of the MMO game that features are pushed back a patch or two, but after over a year, there might be a time limit on how long you can be expected to wait. Still, I'm all for more character customization, so I'm glad it's not completely abandoned. With any luck, maybe we'll finally see it before patch 5.0.Email ThisTags: dance, dance-studio, dance-studios, dancing, dev-chat, developer-chat, developers, twitter, twitter-dev-chat, twitter-developer-chatFiled under: News items Reader Comments (Page 1 of 4) Rafa Feb 28th 2010 2:06PMlol ! I used this video last year, and I performed parts of it in a pan
计算机
2014-23/3292/en_head.json.gz/19523
Project Spark Official Release Date Set For PC, Xbox One
By William Usher, 2014-07-28 20:39:56

Microsoft has made the release date official for Team Dakota's Project Spark. The game design and creation tool has received quite a bit of positive press up to this point; and Microsoft has finally set a release date for the toolset. Gamers and game-designers-in-training can look to grab the retail starter pack of Project Spark beginning October 7th.

The news came from a recent post on Major Nelson's blog where it was detailed that the game would be exiting its beta and would be available as a full release this fall, joining many other big-name titles from first and third-party studios. As noted on Nelson's blog...

"...we'll also be making an Xbox One disc edition of Project Spark available at retail for $39.99 USD. The "Project Spark Starter Pack" is loaded with great content, including starter packs filled with sounds, effects, animations, and props, plus advanced creator features, offline content, and experience boosts which allow players to unlock new content that much faster."

Project Spark is a game creation software tool. It allows players to engage in simple and advanced forms of game creation. The basic editor allows players to select from a number of different preset materials, animations, characters and settings, and then build upon those themes a wide variety of different interactive games. It's possible to make platformers, fighting games, role-playing games or point-and-click adventure titles. Even more than that, it's possible to actually design interactive movies or cinematics, similar to the specially made interactive experience featuring Linkin Park, which was pretty impressive.

Even in my own initial impressions of the beta, there was a lot of power being put right there at the fingertips of gamers. It's a lot more intuitive than other game creation toolsets out there because it also doubles as a hub for enabling some gamers to simply browse through and experience creations made by other players. In this way, you could technically pick up a physical or digital copy of Project Spark and never have to worry about content-creation at all; you can just play stuff that everyone else already made.

It was also pretty smart of Microsoft to hold off on the release of the game well after it first entered into its beta phase. Why? Because by the time the game lands on store shelves there will be enough half-made, fully-made and mostly-playable games available for the average consumer to dabble into. Releasing Project Spark too soon probably would have been pretty detrimental for the casual gamer who picked up a copy and found very little there in terms of finished content. That's not to mention that a lot of people may not instantly get into Spark and want to build an entirely new game from the ground up.

Nevertheless, the game is scheduled to release for Windows 8 systems and the Xbox One on October 7th, both at physical retail outlets and digitally on the respective storefronts for both platforms. For more info, feel free to visit the official website.
计算机
2014-23/3292/en_head.json.gz/19709
Windows XP Timesaving Techniques For Dummies, 2nd Edition Woody Leonhard with Justin Leonhard Woody Leonhard: Curmudgeon, critic, and perennial “Windows Victim,” Woody Leonhard runs a fiercely independent Web site devoted to delivering the truth about Windows and Office, whether Microsoft likes it or not. With up-to-the-nanosecond news, observations, tips and help, AskWoody.com has become the premiere source of unbiased information for people who actually use the products. In the past decade, Woody has written more than two dozen books, drawing an unprecedented six Computer Press Association awards and two American Business Press awards. Woody was one of the first Microsoft Consulting Partners and is a charter member of the Microsoft Solutions Provider organization. He’s widely quoted — and reviled — on the Redmond campus. Justin Leonhard: Lives with his dad in Phuket, Thailand. Justin contributed to Windows XP All-in-One Desk Reference For Dummies. He frequently helps Woody with various writing projects and keeps the office network going. Justin is an accomplished scuba diver, budding novelist, and the best video game player for miles. He was admitted to Mensa International at the age of 14.
计算机
2014-23/3292/en_head.json.gz/20068
Facts Concerning the Technology Used for H.P. Lovecraft's Dagon
Received from indie developer Thomas
Platform: Mac, Windows, iPad, iPhone, iPod Touch, Other

June 21, 2013 - As you probably know by now I'm using the Wintermute Engine (WME) to create H.P. Lovecraft's Dagon, and as you maybe also know, this engine currently comes in two flavours: the original version, which runs on Windows platforms and supports more fancy stuff like 3d characters, lighting and geometry, and the later offspring called WME Lite. That version added support for both OS X as well as iOS, however it is a bit more limited when it comes to features, most notably no support for 3d characters -- which is not a problem for Dagon, since I'm not using 3d characters.

What does that mean?

That means that, in theory, I can offer the game for both Windows and OS X/iOS (iPhone/iPod touch/iPad). In theory because, while I'm currently developing for both Windows and OS X, the iOS version is different in terms of user interface and the performance of the devices, so, for now, I'm sticking to the desktop version and will port to mobile later.

So, what about Android? And Linux?

Good questions! Both platforms are becoming more and more important, and although WME doesn't support them natively at the moment, there is good news. First, since WME is open source, a developer has taken it upon himself to create a native Android port for WME Lite, and from my tests on Android 2.3 and 4.2 devices I can say that the results already look very impressive. Secondly, for the second year in a row, WME support for ScummVM is a part of Google's Summer of Code, meaning that a developer gets some funding to integrate WME in the ever growing list of engines supported by the great ScummVM project (see http://www.scummvm.org/ ). A lot of work was done on this last year, and this year, additional bug fixing and new features will be added. I have tested an early build and again, the results already look very promising.

What does that mean for me, the player?

Although I can't promise anything yet – because both projects are still works in progress – there is a very good chance that H.P. Lovecraft's Dagon will be available on more platforms than I initially hoped, most notably, of course Android, but also Linux (and ScummVM seems to run on anything that has a screen).
计算机
2014-23/3292/en_head.json.gz/21525
Oracle® Database Advanced Security Administrator's Guide The ability of a system to grant or limit access to specific data for specific clients or groups of clients. Access Control Lists (ACLs) The group of access directives that you define. The directives grant levels of access to specific data for specific clients, or groups of clients, or both. Advanced Encryption Standard (AES) is a new cryptographic algorithm that has been approved by the National Institute of Standards and Technology as a replacement for DES. The AES standard is available in Federal Information Processing Standards Publication 197. The AES algorithm is a symmetric block cipher that can process data blocks of 128 bits, using cipher keys with lengths of 128, 192, and 256 bits. See Advanced Encryption Standard An item of information that describes some aspect of an entry in an LDAP directory. An entry comprises a set of attributes, each of which belongs to an object class. Moreover, each attribute has both a type, which describes the kind of information in the attribute, and a value, which contains the actual data. The process of verifying the identity of a user, device, or other entity in a computer system, often as a prerequisite to granting access to resources in a system. A recipient of an authenticated message can be certain of the message's origin (its sender). Authentication is presumed to preclude the possibility that another party has impersonated the sender. A security method that verifies a user's, client's, or server's identity in distributed environments. Network authentication methods can also provide the benefit of single sign-on (SSO) for users. The following authentication methods are supported in Oracle Database when Oracle Advanced Security is installed: Secure Sockets Layer (SSL) Windows native authentication Permission given to a user, program, or process to access an object or set of objects. In Oracle, authorization is done through the role mechanism. A single person or a group of people can be granted a role or a group of roles. A role, in turn, can be granted other roles. The set of privileges available to an authenticated entity. auto login wallet An Oracle Wallet Manager feature that enables PKI- or password-based access to services without providing credentials at the time of access. This auto login access stays in effect until the auto login feature is disabled for that wallet. File system permissions provide the necessary security for auto login wallets. When auto login is enabled for a wallet, it is only available to the operating system user who created that wallet. Sometimes these are called "SSO wallets" because they provide single sign-on capability. The root of a subtree search in an LDAP-compliant directory. See certificate authority An ITU x.509 v3 standard data structure that securely binds an identify to a public key. A certificate is created when an entity's public key is signed by a trusted identity, a certificate authority. The certificate ensures that the entity's information is correct and that the public key actually belongs to that entity. A certificate contains the entity's name, identifying information, and public key. It is also likely to contain a serial number, expiration date, and information about the rights, uses, and privileges associated with the certificate. Finally, it contains information about the certificate authority that issued it. 
A trusted third party that certifies that other entities—users, databases, administrators, clients, servers—are who they say they are. When it certifies a user, the certificate authority first seeks verification that the user is not on the certificate revocation list (CRL), then verifies the user's identity and grants a certificate, signing it with the certificate authority's private key. The certificate authority has its own certificate and public key which it publishes. Servers and clients use these to verify signatures the certificate authority has made. A certificate authority might be an external company that offers certificate services, or an internal organization such as a corporate MIS department. certificate chain An ordered list of certificates containing an end-user or subscriber certificate and its certificate authority certificates. certificate request A certificate request, which consists of three parts: certification request information, a signature algorithm identifier, and a digital signature on the certification request information. The certification request information consists of the subject's distinguished name, public key, and an optional set of attributes. The attributes may provide additional information about the subject identity, such as postal address, or a challenge password by which the subject entity may later request certificate revocation. See PKCS #10 certificate revocation lists (CRLs) Signed data structures that contain a list of revoked certificates. The authenticity and integrity of the CRL is provided by a digital signature appended to it. Usually, the CRL signer is the same entity that signed the issued certificate. checksumming A mechanism that computes a value for a message packet, based on the data it contains, and passes it along with the data to authenticate that the data has not been tampered with. The recipient of the data recomputes the cryptographic checksum and compares it with the cryptographic checksum passed with the data; if they match, it is "probabilistic" proof the data was not tampered with during transmission. Cipher Block Chaining (CBC) An encryption method that protects against block replay attacks by making the encryption of a cipher block dependent on all blocks that precede it; it is designed to make unauthorized decryption incrementally more difficult. Oracle Advanced Security employs outer cipher block chaining because it is more secure than inner cipher block chaining, with no material performance penalty. A set of authentication, encryption, and data integrity algorithms used for exchanging messages between network nodes. During an SSL handshake, for example, the two nodes negotiate to see which cipher suite they will use when transmitting messages back and forth. cipher suite name Cipher suites describe the kind of cryptographics protection that is used by connections in a particular session. Message text that has been encrypted. cleartext Unencrypted plain text. A client relies on a service. A client can sometimes be a user, sometimes a process acting on behalf of the user during a database link (sometimes called a proxy). A function of cryptography. Confidentiality guarantees that only the intended recipient(s) of a message can view the message (decrypt the ciphertext). connect descriptor A specially formatted description of the destination for a network connection. A connect descriptor contains destination service and network route information. 
The destination service is indicated by using its service name for Oracle9i or Oracle8i databases or its Oracle system identifier (SID) for Oracle databases version 8.0. The network route provides, at a minimum, the location of the listener through use of a network address. See connect identifier connect identifier A connect descriptor or a name that maps to a connect descriptor. A connect identifier can be a net service name, database service name, or net service alias. Users initiate a connect request by passing a user name and password along with a connect identifier in a connect string for the service to which they wish to connect: CONNECT username@connect_identifier Enter password: password connect string Information the user passes to a service to connect, such as user name, password and net service name. For example: CONNECT username@net_service_name A user name, password, or certificate used to gain access to the database. See certificate revocation lists CRL Distribution Point (CRL DP) An optional extension specified by the X.509 version 3 certificate standard, which indicates the location of the Partitioned CRL where revocation information for a certificate is stored. Typically, the value in this extension is in the form of a URL. CRL DPs allow revocation information within a single certificate authority domain to be posted in multiple CRLs. CRL DPs subdivide revocation information into more manageable pieces to avoid proliferating voluminous CRLs, thereby providing performance benefits. For example, a CRL DP is specified in the certificate and can point to a file on a Web server from which that certificate's revocation information can be downloaded. CRL DP See CRL Distribution Point The practice of encoding and decoding data, resulting in secure messages. A set of read-only tables that provide information about a database. Data Encryption Standard (DES) An older Federal Information Processing Standards encryption algorithm superseded by the Advanced Encryption Standard (AES). (1) A person responsible for operating and maintaining an Oracle Server or a database application. (2) An Oracle user name that has been given DBA privileges and can perform database administration functions. Usually the two meanings coincide. Many sites have multiple DBAs. database alias See net service name Database Installation Administrator Also called a database creator. This administrator is in charge of creating new databases. This includes registering each database in the directory using the Database Configuration Assistant. This administrator has create and modify access to database service objects and attributes. This administrator can also modify the Default domain. database link A network object stored in the local database or in the network definition that identifies a remote database, a communication path to that database, and optionally, a user name and password. Once defined, the database link is used to access the remote database. A public or private database link from one database to another is created on the local database by a DBA or user. A global database link is created automatically from each database to every other database in a network with Oracle Names. Global database links are stored in the network definition. database password verifier A database password verifier is an irreversible value that is derived from the user's database password. This value is used during password authentication to the database to prove the identity of the connecting user. 
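As an illustration of how a connect identifier and credentials are used from application code rather than from SQL*Plus, the sketch below uses the third-party python-oracledb driver. The driver choice, the service name sales_db, and the credentials are assumptions made for the example; they are not part of this guide.

import oracledb  # third-party driver; any client that accepts a connect identifier behaves similarly

# "sales_db" stands in for a connect identifier: a net service name resolved
# through tnsnames.ora or directory naming, or an Easy Connect string.
connection = oracledb.connect(
    user="app_user",          # credentials, as defined above
    password="app_password",
    dsn="sales_db",           # the connect identifier
)

with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())

connection.close()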
Database Security Administrator The highest level administrator for database enterprise user security. This administrator has permissions on all of the enterprise domains and is responsible for: Administering the Oracle DBSecurityAdmins and OracleDBCreators groups. Creating new enterprise domains. Moving databases from one domain to another within the enterprise. The process of converting the contents of an encrypted message (ciphertext) back into its original readable format (plaintext). See Data Encryption Standard (DES) dictionary attack A common attack on passwords. The attacker creates a list of many common passwords and encrypts them. Then the attacker steals a file containing encrypted passwords and compares it to his list of encrypted common passwords. If any of the encrypted password values (called verifiers) match, then the attacker can steal the corresponding password. Dictionary attacks can be avoided by using "salt" on the password before encryption. See salt Diffie-Hellman key negotiation algorithm This is a method that lets two parties communicating over an insecure channel to agree upon a random number known only to them. Though the parties exchange information over the insecure channel during execution of the Diffie-Hellman key negotiation algorithm, it is computationally infeasible for an attacker to deduce the random number they agree upon by analyzing their network communications. Oracle Advanced Security uses the Diffie-Hellman key negotiation algorithm to generate session keys. A digital signature is created when a public key algorithm is used to sign the sender's message with the sender's private key. The digital signature assures that the document is authentic, has not been forged by another entity, has not been altered, and cannot be repudiated by the sender. directory information tree (DIT) A hierarchical tree-like structure consisting of the DNs of the entries in an LDAP directory. See distinguished name (DN) directory naming A naming method that resolves a database service, net service name, or net service alias to a connect descriptor stored in a central directory server. A directory naming context A subtree which is of significance within a directory server. It is usually the top of some organizational subtree. Some directories only permit one such context which is fixed; others permit none to many to be configured by the directory administrator. distinguished name (DN) The unique name of a directory entry. It is comprised of all of the individual names of the parent entries back to the root entry of the directory information tree. See directory information tree (DIT) Any tree or subtree within the Domain Name System (DNS) namespace. Domain most commonly refers to a group of computers whose host names share a common suffix, the domain name. A system for naming computers and network services that is organized into a hierarchy of domains. DNS is used in TCP/IP networks to locate computers through user-friendly names. DNS resolves a friendly name into an IP address, which is understood by computers. In Oracle Net Services, DNS translates the host name in a TCP/IP address into an IP address. encrypted text Text that has been encrypted, using an encryption algorithm; the output stream of an encryption process. On its face, it is not readable or decipherable, without first being subject to decryption. Also called ciphertext. Encrypted text ultimately originates as plaintext. 
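The Diffie-Hellman key negotiation entry above can be illustrated with a toy calculation. The parameters below are deliberately small and the variable names are invented; real deployments use primes of 2048 bits or more.

import secrets

# Publicly agreed values, visible to any eavesdropper: a prime modulus p and a base g.
p = 2**127 - 1        # a Mersenne prime, still far too small for real use
g = 5

a = secrets.randbelow(p - 2) + 1    # one party's private random number
b = secrets.randbelow(p - 2) + 1    # the other party's private random number

A = pow(g, a, p)      # sent over the insecure channel
B = pow(g, b, p)      # sent over the insecure channel

# Each side combines the other's public value with its own secret and
# arrives at the same number, which can then seed a session key.
shared_1 = pow(B, a, p)
shared_2 = pow(A, b, p)
assert shared_1 == shared_2

With realistically sized parameters, an attacker who sees only p, g, A, and B cannot feasibly recover the shared value, which is why this kind of exchange can be used to generate session keys.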
The process of disguising a message rendering it unreadable to any but the intended recipient. enterprise domain A directory construct that consists of a group of databases and enterprise roles. A database should only exist in one enterprise domain at any time. Enterprise domains are different from Windows 2000 domains, which are collections of computers that share a common directory database. Enterprise Domain Administrator User authorized to manage a specific enterprise domain, including the authority to add new enterprise domain administrators. enterprise role Access privileges assigned to enterprise users. A set of Oracle role-based authorizations across one or more databases in an enterprise domain. Enterprise roles are stored in the directory and contain one or more global roles. enterprise user A user defined and managed in a directory. Each enterprise user has a unique identify across an enterprise. The building block of a directory, it contains information about an object of interest to directory users. external authentication Verification of a user identity by a third party authentication service, such as Kerberos or RADIUS. Federal Information Processing Standard (FIPS) A U.S. government standard that defines security requirements for cryptographic modules—employed within a security system protecting unclassified information within computer and telecommunication systems. Published by the National Institute of Standards and Technology (NIST). See Federal Information Processing Standard (FIPS) A group of one or more Active Directory trees that trust each other. All trees in a forest share a common schema, configuration, and global catalog. When a forest contains multiple trees, the trees do not form a contiguous namespace. All trees in a given forest trust each other through transitive bidirectional trust relationships. forwardable ticket-granting ticket In Kerberos. A service ticket with the FORWARDABLE flag set. This flag enables authentication forwarding without requiring the user to enter a password again. global role A role managed in a directory, but its privileges are contained within a single database. A global role is created in a database by using the following syntax: CREATE ROLE role_name IDENTIFIED GLOBALLY; A computing architecture that coordinates large numbers of servers and storage to act as a single large computer. Oracle Grid Computing creates a flexible, on-demand computing resource for all enterprise computing needs. Applications running on the Oracle 10g grid computing infrastructure can take advantage of common infrastructure services for failover, software provisioning, and management. Oracle Grid Computing analyzes demand for resources and adjusts supply accordingly. Hypertext Transfer Protocol: The set of rules for exchanging files (text, graphic images, sound, video, and other multimedia files) on the World Wide Web. Relative to the TCP/IP suite of protocols (which are the basis for information exchange on the Internet), HTTP is an application protocol. The use of Secure Sockets Layer (SSL) as a sublayer under the regular HTTP application layer. The combination of the public key and any other public information for an entity. The public information may include user identification data such as, for example, an e-mail address. A user certified as being the entity it claims to be. The creation, management, and use of online, or digital, entities. 
Identity management involves securely managing the full life cycle of a digital identity from creation (provisioning of digital identities) to maintenance (enforcing organizational policies regarding access to electronic resources), and, finally, to termination. identity management realm A subtree in Oracle Internet Directory, including not only an Oracle Context, but also additional subtrees for users and groups, each of which are protected with access control lists. initial ticket In Kerberos authentication, an initial ticket or ticket granting ticket (TGT) identifies the user as having the right to ask for additional service tickets. No tickets can be obtained without an initial ticket. An initial ticket is retrieved by running the okinit program and providing a password. Every running Oracle database is associated with an Oracle instance. When a database is started on a database server (regardless of the type of computer), Oracle allocates a memory area called the System Global Area (SGA) and starts an Oracle process. This combination of the SGA and an Oracle process is called an instance. The memory and the process of an instance manage the associated database's data efficiently and serve the one or more users of the database. The guarantee that the contents of the message received were not altered from the contents of the original message sent. java code obfuscation Java code obfuscation is used to protect Java programs from reverse engineering. A special program (an obfuscator) is used to scramble Java symbols found in the code. The process leaves the original program structure intact, letting the program run correctly while changing the names of the classes, methods, and variables in order to hide the intended behavior. Although it is possible to decompile and read non-obfuscated Java code, the obfuscated Java code is sufficiently difficult to decompile to satisfy U.S. government export controls. Java Database Connectivity (JDBC) An industry-standard Java interface for connecting to a relational database from a Java program, defined by Sun Microsystems. See Java Database Connectivity (JDBC) Key Distribution Center. In Kerberos authentication, the KDC maintains a list of user principals and is contacted through the kinit (okinit is the Oracle version) program for the user's initial ticket. Frequently, the KDC and the Ticket Granting Service are combined into the same entity and are simply referred to as the KDC. The Ticket Granting Service maintains a list of service principals and is contacted when a user wants to authenticate to a server providing such a service. The KDC is a trusted third party that must run on a secure host. It creates ticket-granting tickets and service tickets. A network authentication service developed under Massachusetts Institute of Technology's Project Athena that strengthens security in distributed environments. Kerberos is a trusted third-party authentication system that relies on shared secrets and assumes that the third party is secure. It provides single sign-on capabilities and database link authentication (MIT Kerberos only) for users, provides centralized password storage, and enhances PC security. When encrypting data, a key is a value which determines the ciphertext that a given algorithm will produce from given plaintext. When decrypting data, a key is a value required to correctly decrypt a ciphertext. A ciphertext is decrypted correctly only if the correct key is supplied. 
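The role of the key can be shown in a few lines of Python. The sketch uses the third-party cryptography package purely as an illustration (it is not part of Oracle Advanced Security), and the plaintext is invented.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()      # a symmetric key: the same value encrypts and decrypts
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"salary review, Q3")   # plaintext in, ciphertext out
print(cipher.decrypt(ciphertext))                   # correct key: plaintext recovered

other = Fernet(Fernet.generate_key())               # a different key
try:
    other.decrypt(ciphertext)
except InvalidToken:
    print("decryption fails when the wrong key is supplied")

This is the behavior the definition describes: a ciphertext is decrypted correctly only if the correct key is supplied.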
With a symmetric encryption algorithm, the same key is used for both encryption and decryption of the same data. With an asymmetric encryption algorithm (also called a public-key encryption algorithm or public-key cryptosystem), different keys are used for encryption and decryption of the same data. key pair A public key and its associated private key. See public and private key pair keytab file A Kerberos key table file containing one or more service keys. Hosts or services use keytab files in the same way as users use their passwords. kinstance An instantiation or location of a Kerberos authenticated service. This is an arbitrary string, but the host Computer name for a service is typically specified. kservice An arbitrary name of a Kerberos service object. See Lightweight Directory Access Protocol (LDAP) ldap.ora file A file created by Oracle Net Configuration Assistant that contains the following directory server access information: Type of directory server Location of the directory server Default identity management realm or Oracle Context (including ports) that the client or server will use Lightweight Directory Access Protocol (LDAP) A standard, extensible directory access protocol. It is a common language that LDAP clients and servers use to communicate. The framework of design conventions supporting industry-standard directory products, such as the Oracle Internet Directory. A process that resides on the server whose responsibility is to listen for incoming client connection requests and manage the traffic to the server. Every time a client requests a network session with a server, a listener receives the actual request. If the client information matches the listener information, then the listener grants a connection to the server. listener.ora file A configuration file for the listener that identifies the: Listener name Protocol addresses that it is accepting connection requests on Services it is listening for The listener.ora file typically resides in $ORACLE_HOME/network/admin on UNIX platforms and ORACLE_BASE\ORACLE_HOME\network\admin on Windows. man-in-the-middle A security attack characterized by the third-party, surreptitious interception of a message, wherein the third-party, the man-in-the-middle, decrypts the message, re-encrypts it (with or without alteration of the original message), and re-transmits it to the originally-intended recipient—all without the knowledge of the legitimate sender and receiver. This type of security attack works only in the absence of authentication. An algorithm that assures data integrity by generating a 128-bit cryptographic message digest value from given data. If as little as a single bit value in the data is modified, the MD5 checksum for the data changes. Forgery of data in a way that will cause MD5 to generate the same result as that for the original data is considered computationally infeasible. message authentication code Also known as data authentication code (DAC). A checksumming with the addition of a secret key. Only someone with the key can verify the cryptographic checksum. message digest See checksumming naming method The resolution method used by a client application to resolve a connect identifier to a connect descriptor when attempting to connect to a database service. National Institute of Standards and Technology (NIST) An agency within the U.S. 
Department of Commerce responsible for the development of security standards related to the design, acquisition, and implementation of cryptographic-based security systems within computer and telecommunication systems, operated by a Federal agency or by a contractor of a Federal agency or other organization that processes information on behalf of the Federal Government to accomplish a Federal function. net service alias An alternative name for a directory naming object in a directory server. A directory server stores net service aliases for any defined net service name or database service. A net service alias entry does not have connect descriptor information. Instead, it only references the location of the object for which it is an alias. When a client requests a directory lookup of a net service alias, the directory determines that the entry is a net service alias and completes the lookup as if it was actually the entry it is referencing. net service name The name used by clients to identify a database server. A net service name is mapped to a port number and protocol. Also known as a connect string, or database alias. network authentication service A means for authenticating clients to servers, servers to servers, and users to both clients and servers in distributed environments. A network authentication service is a repository for storing information about users and the services on different servers to which they have access, as well as information about clients and servers on the network. An authentication server can be a physically separate computer, or it can be a facility co-located on another server within the system. To ensure availability, some authentication services may be replicated to avoid a single point of failure. network listener A listener on a server that listens for connection requests for one or more databases on one or more protocols. See listener See National Institute of Standards and Technology (NIST) non-repudiation Incontestable proof of the origin, delivery, submission, or transmission of a message. A process by which information is scrambled into a non-readable form, such that it is extremely difficult to de-scramble if the algorithm used for scrambling is not known. obfuscator A special program used to obfuscate Java source code. See obfuscation object class A named group of attributes. When you want to assign attributes to an entry, you do so by assigning to that entry the object classes that hold those attributes. All objects associated with the same object class share the same attributes. Oracle Context 1. An entry in an LDAP-compliant internet directory called cn=OracleContext, under which all Oracle software relevant information is kept, including entries for Oracle Net Services directory naming and checksumming security. There can be one or more Oracle Contexts in a directory. An Oracle Context is usually located in an identity management realm. Oracle Net Services An Oracle product that enables two or more computers that run the Oracle server or Oracle tools such as Designer/2000 to exchange data through a third-party network. Oracle Net Services support distributed processing and distributed database capability. Oracle Net Services is an open system because it is independent of the communication protocol, and users can interface Oracle Net to many network environments. Oracle PKI certificate usages Defines Oracle application types that a certificate supports. 
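Returning to the message authentication code entry above, the difference from plain checksumming is the shared secret key. A brief Python sketch follows; the key and messages are invented for the example.

import hashlib
import hmac

secret_key = b"shared-secret-known-to-both-ends"    # illustrative value only

def mac(message: bytes) -> bytes:
    # Keyed digest: without secret_key, a third party can change the message
    # but cannot produce a matching authentication code.
    return hmac.new(secret_key, message, hashlib.sha256).digest()

message = b"COMMIT batch 42"
tag = mac(message)

# The recipient recomputes the code and compares in constant time.
print(hmac.compare_digest(tag, mac(message)))             # True
print(hmac.compare_digest(tag, mac(b"COMMIT batch 43")))  # False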
Password-Accessible Domains List A group of enterprise domains configured to accept connections from password-authenticated users. Small credit card-sized computing devices that comply with the Personal Computer Memory Card International Association (PCMCIA) standard. These devices, also called PC cards, are used for adding memory, modems, or as hardware security modules. PCMCIA cards that are used as hardware security modules securely store the private key component of a public and private key pair and some also perform the cryptographic operations as well. peer identity SSL connect sessions are between a particular client and a particular server. The identity of the peer may have been established as part of session setup. Peers are identified by X.509 certificate chains. The Internet Privacy-Enhanced Mail protocols standard, adopted by the Internet Architecture Board to provide secure electronic mail over the Internet. The PEM protocols provide for encryption, authentication, message integrity, and key management. PEM is an inclusive standard, intended to be compatible with a wide range of key-management approaches, including both symmetric and public-key schemes to encrypt data-encrypting keys. The specifications for PEM come from four Internet Engineering Task Force (IETF) documents: RFCs 1421, 1422, 1423, and 1424. PKCS #10 An RSA Security, Inc., Public-Key Cryptography Standards (PKCS) specification that describes a syntax for certification requests. A certification request consists of a distinguished name, a public key, and optionally a set of attributes, collectively signed by the entity requesting certification. Certification requests are referred to as certificate requests in this manual. See certificate request An RSA Security, Inc., Public-Key Cryptography Standards (PKCS) specification that defines an application programming interface (API), called Cryptoki, to devices which hold cryptographic information and perform cryptographic operations. See PCMCIA cards An RSA Security, Inc., Public-Key Cryptography Standards (PKCS) specification that describes a transfer syntax for storing and transferring personal authentication credentials—typically in a format called a wallet. See public key infrastructure (PKI) Message text that has not been encrypted. A string that uniquely identifies a client or server to which a set of Kerberos credentials is assigned. It generally has three parts: kservice/kinstance@REALM. In the case of a user, kservice is the user name. See also kservice, kinstance, and realm In public-key cryptography, this key is the secret key. It is primarily used for decryption but is also used for encryption with digital signatures. See public and private key pair proxy authentication A process typically employed in an environment with a middle tier such as a firewall, wherein the end user authenticates to the middle tier, which thence authenticates to the directory on the user's behalf—as its proxy. The middle tier logs into the directory as a proxy user. A proxy user can switch identities and, once logged into the directory, switch to the end user's identity. It can perform operations on the end user's behalf, using the authorization appropriate to that particular end user. In public-key cryptography, this key is made public to all. It is primarily used for encryption but can be used for verifying signatures. See public and private key pair public key encryption The process where the sender of a message encrypts the message with the public key of the recipient. 
Upon delivery, the message is decrypted by the recipient using its private key. Information security technology utilizing the principles of public key cryptography. Public key cryptography involves encrypting and decrypting information using a shared public and private key pair. Provides for secure, private communications within a public network. public and private key pair A set of two numbers used for encryption and decryption, where one is called the private key and the other is called the public key. Public keys are typically made widely available, while private keys are held by their respective owners. Though mathematically related, it is generally viewed as computationally infeasible to derive the private key from the public key. Public and private keys are used only with asymmetric encryption algorithms, also called public-key encryption algorithms, or public-key cryptosystems. Data encrypted with either a public key or a private key from a key pair can be decrypted with its associated key from the key-pair. However, data encrypted with a public key cannot be decrypted with the same public key, and data encrypted with a private key cannot be decrypted with the same private key. Remote Authentication Dial-In User Service (RADIUS) is a client/server protocol and software that enables remote access servers to communicate with a central server to authenticate dial-in users and authorize their access to the requested system or service. 1. Short for identity management realm. 2. A Kerberos object. A set of clients and servers operating under a single key distribution center/ticket-granting service (KDC/TGS). Services (see kservice) in different realms that share the same name are unique. realm Oracle Context An Oracle Context that is part of an identity management realm in Oracle Internet Directory. A Windows repository that stores configuration information for a computer. remote computer A computer on a network other than the local computer. root key certificate See trusted certificate 1. In cryptography, generally speaking, "salt" is a way to strengthen the security of encrypted data. Salt is a random string that is added to the data before it is encrypted. Then, it is more difficult for attackers to steal the data by matching patterns of ciphertext to known ciphertext samples. 2. Salt is also used to avoid dictionary attacks, a method that unethical hackers (attackers) use to steal passwords. It is added to passwords before the passwords are encrypted. Then it is difficult for attackers to match the hash value of encrypted passwords (sometimes called verifiers) with their dictionary lists of common password hash values. See dictionary attack 1. Database schema: A named collection of objects, such as tables, views, clusters, procedures, packages, attributes, object classes, and their corresponding matching rules, which are associated with a particular user. 2. LDAP directory schema: The collection of attributes, object classes, and their corresponding matching rules. schema mapping See user-schema mapping Secure Hash Algorithm (SHA) An algorithm that assures data integrity by generating a 160-bit cryptographic message digest value from given data. If as little as a single bit in the data is modified, the Secure Hash Algorithm checksum for the data changes. Forgery of a given data set in a way that will cause the Secure Hash Algorithm to generate the same result as that for the original data is considered computationally infeasible. 
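The salt and dictionary attack entries above, together with the earlier database password verifier entry, describe the same defensive pattern: store a salted, one-way value instead of the password. The Python sketch below shows that generic pattern; it is not Oracle's actual verifier algorithm, and the hash choice and iteration count are assumptions for the example.

import hashlib
import hmac
import os

def make_verifier(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)    # random salt, stored alongside the verifier
    verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, verifier    # the password itself is never stored

def check_password(candidate: str, salt: bytes, verifier: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 200_000)
    return hmac.compare_digest(digest, verifier)

salt, verifier = make_verifier("correct horse battery staple")
print(check_password("correct horse battery staple", salt, verifier))   # True
print(check_password("guess123", salt, verifier))                       # False

Because the salt is random, two users with the same password produce different verifiers, so a precomputed dictionary of common password hashes no longer matches anything.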
An algorithm that takes a message of less than 2^64 bits in length and produces a 160-bit message digest. The algorithm is slightly slower than MD5, but the larger message digest makes it more secure against brute-force collision and inversion attacks. An industry standard protocol designed by Netscape Communications Corporation for securing network connections. SSL provides authentication, encryption, and data integrity using public key infrastructure (PKI). The Transport Layer Security (TLS) protocol is the successor to the SSL protocol. A provider of a service. 1. A network resource used by clients; for example, an Oracle database server. 2. An executable process installed in the Windows registry and administered by Windows. Once a service is created and started, it can run even when no user is logged on to the computer. For Kerberos-based authentication, the kservice portion of a service principal. service principal See principal service key table In Kerberos authentication, a service key table is a list of service principals that exist on a kinstance. This information must be extracted from Kerberos and copied to the Oracle server computer before Kerberos can be used by Oracle. service ticket A service ticket is trusted information used to authenticate the client, to a specific service or server, for a predetermined period of time. It is obtained from the KDC using the initial ticket. session key A key shared by at least two parties (usually a client and a server) that is used for data encryption for the duration of a single communication session. Session keys are typically used to encrypt network traffic; a client and a server can negotiate a session key at the beginning of a session, and that key is used to encrypt all network traffic between the parties for that session. If the client and server communicate again in a new session, they negotiate a new session key. session layer A network layer that provides the services needed by the presentation layer entities that enable them to organize and synchronize their dialogue and manage their data exchange. This layer establishes, manages, and terminates network sessions between the client and server. An example of a session layer is Network Session. See Secure Hash Algorithm (SHA) shared schema A database or application schema that can be used by multiple enterprise users. Oracle Advanced Security supports the mapping of multiple enterprise users to the same shared schema on a database, which lets an administrator avoid creating an account for each user in every database. Instead, the administrator can create a user in one location, the enterprise directory, and map the user to a shared schema that other enterprise users can also map to. Sometimes called user/schema separation. single key-pair wallet A PKCS #12-format wallet that contains a single user certificate and its associated private key. The public key is imbedded in the certificate. single password authentication The ability of a user to authenticate with multiple databases by using a single password. In the Oracle Advanced Security implementation, the password is stored in an LDAP-compliant directory and protected with encryption and Access Control Lists. single sign-on (SSO) The ability of a user to authenticate once, combined with strong authentication occurring transparently in subsequent connections to other databases or applications. Single sign-on lets a user access multiple accounts and applications with a single password, entered during a single connection. 
Single password, single authentication. Oracle Advanced Security supports Kerberos and SSL-based single sign-on. A plastic card (like a credit card) with an embedded integrated circuit for storing information, including such information as user names and passwords, and also for performing computations associated with authentication exchanges. A smart card is read by a hardware device at any client or server. A smartcard can generate random numbers which can be used as one-time use passwords. In this case, smartcards are synchronized with a service on the server so that the server expects the same password generated by the smart card. Device used to surreptitiously listen to or capture private data traffic from a network. sqlnet.ora file A configuration file for the client or server that specifies: Client domain to append to unqualified service names or net service names Order of naming methods the client should use when resolving a name Logging and tracing features to use Route of connections Preferred Oracle Names servers External naming parameters Oracle Advanced Security parameters The sqlnet.ora file typically resides in $ORACLE_HOME/network/admin on UNIX platforms and ORACLE_BASE\ORACLE_HOME\network\admin on Windows platforms. See single sign-on (SSO) System Global Area (SGA) A group of shared memory structures that contain data and control information for an Oracle instance. system identifier (SID) A unique name for an Oracle instance. To switch between Oracle databases, users must specify the desired SID. The SID is included in the CONNECT DATA parts of the connect descriptor in a tnsnames.ora file, and in the definition of the network listener in a listener.ora file. A piece of information that helps identify who the owner is. See initial ticket and service ticket. tnsnames.ora A file that contains connect descriptors; each connect descriptor is mapped to a net service name. The file may be maintained centrally or locally, for use by all or individual clients. This file typically resides in the following locations depending on your platform: (UNIX) ORACLE_HOME/network/admin (Windows) ORACLE_BASE\ORACLE_HOME\network\admin token card A device for providing improved ease-of-use for users through several different mechanisms. Some token cards offer one-time passwords that are synchronized with an authentication service. The server can verify the password provided by the token card at any given time by contacting the authentication service. Other token cards operate on a challenge-response basis. In this case, the server offers a challenge (a number) which the user types into the token card. The token card then provides another number (cryptographically-derived from the challenge), which the user then offers to the server. transport layer A networking layer that maintains end-to-end reliability through data flow control and error recovery methods. Oracle Net Services uses Oracle protocol supports for the transport layer. Transport Layer Security (TLS) An industry standard protocol for securing network connections. The TLS protocol is a successor to the SSL protocol. It provides authentication, encryption, and data integrity using public key infrastructure (PKI). The TLS protocol is developed by the Internet Engineering Task Force (IETF). trusted certificate A trusted certificate, sometimes called a root key certificate, is a third party identity that is qualified with a level of trust. The trusted certificate is used when an identity is being validated as the entity it claims to be. 
Typically, the certificate authorities you trust are called trusted certificates. If there are several levels of trusted certificates, a trusted certificate at a lower level in the certificate chain does not need to have all its higher level certificates reverified. trusted certificate authority trust point A name that can connect to and access objects in a database. user-schema mapping An LDAP directory entry that contains a pair of values: the base in the directory at which users exist, and the name of the database schema to which they are mapped. The users referenced in the mapping are connected to the specified schema when they connect to the database. User-schema mapping entries can apply only to one database or they can apply to all databases in a domain. See shared schema user/schema separation See shared schema user search base The node in the LDAP directory under which the user resides. Selective presentations of one or more tables (or other views), showing both their structure and their data. A wallet is a data structure used to store and manage security credentials for an individual entity. A Wallet Resource Locator (WRL) provides all the necessary information to locate the wallet. wallet obfuscation Wallet obfuscation is used to store and access an Oracle wallet without querying the user for a password prior to access (supports single sign-on (SSO)). Wallet Resource Locator A wallet resource locator (WRL) provides all necessary information to locate a wallet. It is a path to an operating system directory that contains a wallet. An authentication method that enables a client single login access to a Windows server and a database running on that server. See Wallet Resource Locator An industry-standard specification for digital certificates.
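Several of the preceding entries (SSL and TLS, cipher suite, session key, trusted certificate) come together whenever a client opens a secured connection. The Python sketch below shows the generic pattern using the standard library's ssl module; the host name and port are placeholders, and an Oracle client would perform the equivalent handshake through Oracle Net rather than through this module.

import socket
import ssl

context = ssl.create_default_context()      # trusts the platform's certificate authorities
# context.load_verify_locations("ca.pem")   # or trust one specific certificate authority

with socket.create_connection(("db.example.com", 2484)) as raw:   # placeholder host and port
    with context.wrap_socket(raw, server_hostname="db.example.com") as tls:
        # The handshake verified the server's certificate chain against the
        # trust store and negotiated a cipher suite and session key.
        print(tls.version())      # for example, 'TLSv1.3'
        print(tls.cipher())       # the negotiated cipher suite
        print(tls.getpeercert()["subject"])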
计算机
2014-23/3292/en_head.json.gz/21620
the international journal of computer game research The International Journal of Computer Game Research Our Mission - To explore the rich cultural genre of games; to give scholars a peer-reviewed forum for their ideas and theories; to provide an academic channel for the ongoing discussions on games and gaming. Game Studies is a non-profit, open-access, crossdisciplinary journal dedicated to games research, web-published several times a year at www.gamestudies.org. Our primary focus is aesthetic, cultural and communicative aspects of computer games, but any previously unpublished article focused on games and gaming is welcome. Proposed articles should be jargon-free, and should attempt to shed new light on games, rather than simply use games as metaphor or illustration of some other theory or phenomenon. Game Studies is published with the support of: The Swedish Research Council (Vetenskapsrådet) The Joint Committee for Nordic Research Councils for the Humanities and the Social Sciences IT University of Copenhagen If you would like to make a donation to the Game Studies Foundation, which is a non-profit foundation established for the purpose of ensuring continuous publication of Game Studies, please contact the Editor-in-Chief or send an email to: foundation at gamestudies dot org A Survey of First-person Shooters and their Avatars by Michael Hitchens A survey of over 550 first-person shooters. The titles are compared by year of release, platform and game setting. Characteristics of avatars within the surveyed titles are also examined, including race, gender and background, and how these vary across platform and time. The analysis reveals definite trends, both historically and by platform. [more] Against Procedurality by Miguel Sicart This article proposes a critical review of the literature on procedural rhetoric, from a game design perspective. The goal of the article is to show the limits of procedural rhetorics for the design and analysis of ethics and politics in games. The article suggests that theories of play can be used to solve these theoretical flaws. [more] The Pastoral and the Sublime in Elder Scrolls IV: Oblivion by Paul Martin The landscape in Elder Scrolls IV: Oblivion is seen here as a central aspect of the game's theme of good versus evil. The analysis looks at the game's distinction between the pastoral and the industrial realms and the way the player's encounter with the landscape transforms over the course of the game from the sublime to the picturesque mode. [more] Wrap Your Troubles in Dreams: Popular Music, Narrative, and Dystopia in Bioshock by William Gibbons The soundtrack of Bioshock includes popular music of the 1930s-50s, which serves several functions, signifying the time period of the game, yet ironically commenting on the dystopian environment. The lyrics also allow the music to remark obliquely on the game's action, spurring players to reflection without removing them from control. [more] ©2001 - 2011 Game Studies Copyright for articles published in this journal is retained by the journal, except for the right to republish in printed paper publications, which belongs to the authors, but with first publication rights granted to the journal. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.
计算机
2014-23/3292/en_head.json.gz/22008
LiveContent + Creative Commons with Red Hat gets LiveCDs and LiveDistro LiveContent - CcWiki LiveContent is an umbrella idea which aims to connect and expand Creative Commons and open source communities. LiveContent works to identify creators and content providers working to share their creations more easily with others. LiveContent works to support developers and others who build better technology to distribute these works. LiveContent is up-to-the-minute creativity, 'alive' by being licensed Creative Commons, which allows others to better interact with the content.LiveContent can be delivered in a variety of ways. The first incarnation of LiveContent will deliver content as a LiveCD. LiveCDs are equivalent to what is called a LiveDistro. LiveCDs have traditionally been a vehicle to test an operating system or applications live. Operating systems and/or applications are directly booted from a CD or other type of media without needing to install the actual software on a machine. LiveContent aims to add value to LiveDistros by providing dynamically-generated content within the distribution. San Francisco, CA — August 6, 2007Creative Commons today announced the release of LiveContent, a collaborative initiative to showcase free, open source software and dynamic, Creative Commons-licensed multimedia content. Red Hat's Fedora 7 will serve as the platform for Creative Commons LiveContent CD. The first LiveContent CD is now available at the Creative Commons and Fedora booths at the LinuxWorld Conference and Expo in San Francisco.The Fedora Project is a Red Hat-sponsored, community-based open source collaboration that provides the best of next-generation open source technologies. Its latest distribution, Fedora 7, features a new build capacity that allows for the creation of custom distributions and individual appliances."Fedora 7 features a completely open source build process that greatly simplifies the creation of appliances," said Jack Aboutboul, community engineer for Fedora at Red Hat. "We encourage Fedora 7 users to create custom distributions that fit their individual needs and are excited that Creative Commons is making use of this capability within Fedora 7 to enable the liberation of content and provide free licensed software to all. This is the first step in bringing Red Hat's open source community and Creative Commons' "share, reuse, remix" initiative together. Our communities have always been talking about a common vision of free software and free content – today we have both decided that it's time to start bridging those gaps."The Fedora 7 operating system boots directly from the LiveContent CD, making use of the open source tools found in the latest Fedora distribution like Revisor, Pungi and more. The CD features a variety of Creative Commons-licensed content including audio, video, image, text and educational resources. From the desktop, users can explore free and open content and learn more about businesses like Jamendo, Blip.tv, Flickr and others supporting creative communities through aggregation and search tools.Also included are a number of open source software applications including OpenOffice, The Gimp, Inkscape, Firefox, multimedia viewers, open document templates and others. The LiveContent CD is a product of collaboration across a number of organizations – Red Hat is providing in-kind engineering support via Fedora 7 and many open source community members collaborated on the included software applications. 
Worldlabel.com, member of the Open Document Format Alliance, is supplying ongoing support for the development and distribution of the LiveContent CD."When we decided to explore LiveContent, we knew we would need a reliable, community-driven platform on which to base our content," said Jon Phillips, community and business developer at Creative Commons. "We had a previous relationship with some of the engineers at Red Hat and knew the Company's solutions to be valuable, well-developed and reliable. We envision LiveContent to be a stepping stone to dynamic distribution of open content. Forthcoming versions of LiveContent aim to support autocurated packaging of Creative Commons-licensed content, allowing for the most up-to-date, 'living' content distribution. For this to happen, we're calling on community members and content curators to join the effort to help spread open media." For more information on Fedora, to download or to join this community effort, please visit: http://fedoraproject.org. Visit http://creativecommons.org/project/livecontent to learn more about the project and get involved with future versions of LiveContent. To obtain a copy of the LiveContent CD, visit the Fedora and Creative Commons booths at the LinuxWorld Conference and Expo in San Francisco.About Creative CommonsCreative Commons is a not-for-profit organization, founded in 2001, that promotes the creative re-use of intellectual and artistic works—whether owned or in the public domain. Creative Commons licences provide a flexible range of protections and freedoms for authors, artists, and educators that build upon the "all rights reserved" concept of traditional copyright to offer a voluntary "some rights reserved" approach. It is sustained by the generous support of various organizations including the John D. and Catherine T. MacArthur Foundation, Omidyar Network, the Hewlett Foundation, and the Rockefeller Foundation as well as members of the public. For general information, visit http://creativecommons.org.About Red Hat, Inc.Red Hat, the world's leading open source solutions provider, is headquartered in Raleigh, NC with over 50 satellite offices spanning the globe. CIOs have ranked Red Hat first for value in Enterprise Software for three consecutive years in the CIO Insight Magazine Vendor Value study. Red Hat provides high-quality, low-cost technology with its operating system platform, Red Hat Enterprise Linux, together with applications, management and Services Oriented Architecture (SOA) solutions, including the JBoss Enterprise Middleware Suite. Red Hat also offers support, training and consulting services to its customers worldwide. Learn more: http://www.redhat.com.ContactJon PhillipsCommunity + Business DeveloperCreative Commons(415) [email protected] CatallozziRed Hat (919) [email protected] Kithttp://creativecommons.org/presskit
计算机
2014-23/3292/en_head.json.gz/22066
Cartastrophe Project Linework somethingaboutmaps Opening the Vaults Posted by Daniel Huffman on 22nd November, 2011 Today I have decided to begin offering free PDFs of all the maps that I sell prints of. There’s a fine line that a lot of people walk when putting their art online. You want people to be able to see (or hear) your work, but you also want to maintain some control over your intellectual property so that people don’t go passing it off as their own or profiting from it while you see nothing. And, if you’re selling something, why would people pay you for it if they can get it free? But, then again, people are less likely to buy when you only share a sample of your work — they can’t be wholly sure of what they’re getting until they’ve handed over their money. And so the arguments go back and forth. Setting aside my fears, and feeling filled with a bit of faith in humanity, I have decided to embrace openness in the belief that the positive will outweigh the negative, that most people will not harm me, and they will be offset by those who will be kind to me. I have seen it work for others (though, it should be noted, that success stories tend to circulate; artists who are harmed by this model probably don’t get a lot of press). If you click the link near the top of the page that says “Storefront,” you can see a PDF of any of the works that I’m selling at any level of detail you want. If you want to download the PDF and pay nothing, so be it. If you wish, though, you can also voluntarily donate to me via PayPal based on what you think my work is worth (and what you can afford). So, if you’d like to just print the map yourself and pay me directly, rather than ordering through Zazzle, now this is easy to do. Or if you’d like to print the map off and pay me nothing, that’s fine, too. I also dreamed once of my river maps having some sort of educational use, so putting them out there free may encourage that far-off dream, as well. I admittedly have little to lose from this — I rarely sell prints, and I am making these for my own satisfaction first and foremost. I’m slowly generating an atlas, and while I may offer copies of it to interested purchasers, I’m mostly doing it because I want to be able to hold a book of maps in my hand and know that I made them all. But I’m also doing this because I’m secretly an idealist (with all the inherent irrationality), and I find the notion of a world in which people pay what they want for art to be attractive. Others have gone down this path, and I thought it was time I tried it, as well. Edit: Now with extra licensing! As per Marty’s suggestion below, I have marked the download links with a Creative Commons license, specifically the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Uncategorized On Generalization Blending for Shaded Relief Posted by Daniel Huffman on 18th October, 2011 I have nearly recovered sufficiently from an amazing NACIS conference, and I think I’m ready to get back to a little blogging. This time around, I’d like to present you all with an unfinished concept, and to ask you for your help in carrying it to completion. Specifically, I’d like to show you some attempts I’ve made at improving digital hillshades (I’ll be randomly switching terminology between ‘hillshade’ and ‘shaded relief’ throughout). 
Automated, straight-out-of-your-GIS hillshades are usually terrible, and it generally takes some extra cleanup work to get them to the point where they aren’t embarrassing or won’t burst into flames simply by being put next to a well-executed manual shaded relief. Here’s an example I stole from shadedreliefarchive.com which illustrates the problem: Digital vs. manual relief The computer doesn’t see the big picture — that every little bump in elevation can sum to a large mountain, or that some bumps are more critical than others. It treats everything the same, because it can’t generalize. What we’re left with is noise, rather than an image. But most of us, including myself, haven’t the talent to do a manual hillshade. We are left with two options: steal one from shadedreliefarchive.com, or do a digital one and try to find ways to make it look not terrible. In this post, I’m going to talk about some new (or, at least new to me) ways of doing the latter. To begin, here’s a bit of Mars, from a project I’m doing about Olympus Mons, given an automated hillshade through ArcMap’s Spatial Analyst tools. As in the earlier example, this image is way too noisy and detailed, especially in the rough area west of the mountain, Lycus Sulci. The common answer to these problems is to find ways of reducing the detail in the DEM so that those annoying little bumps go away, but the big stuff remains. Usually this is done by downsampling, blurring, median filters, and a few other more sophisticated methods that I don’t have time to explain in detail. For starters, check out Tom Patterson’s excellent tutorials at shadedrelief.com, and Bernhard Jenny’s gasp-inducing tools at terraincartography.com — both of these resources can take you a long way toward improving a digital hillshade. Downsample and median filter Run through Bernhard Jenny's Terrain Equalizer Both of these are an improvement over the original. The major valleys in Lycus Sulci become more apparent, and the flatter plateau regions there are no longer obscured by a myriad of tiny bumps. At the same time, though, while we’re losing unwanted details in the Sulci, we’re also losing desirable details elsewhere, especially along the escarpment of Olympus Mons and the gently sloping mountain face. In places like these, where the terrain is not so rough, we can support a finer level of detail than in the Sulci. Details of the three above images, in order. Note the loss of detail along the bottom edge and along the cliff face. What we need is a way to keep 100% of the original detail in the smooth places where we can support it, and to generalize the terrain where it’s too rough. To do this, we need a way of figuring out where the terrain is rough and where it isn’t. To do this, I originally started looking at variations in terrain aspect — which way things are facing, since the rough areas have a lot of variation in aspect, and the smooth areas have relatively constant aspect in one direction. But, that’s a somewhat complicated path to go down (though it works well), so instead I’m going with a simpler method that’s probably just as effective: I’m going to look at the variation in my initial hillshade, above. If I do some analysis to find out where the hillshade is seeing a lot of variation — many dark and light pixels in close proximity, then that will give me a mathematical way of separating the smooth from the rough areas. 
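For anyone who wants to play along in code, here's a rough sketch of that measurement in Python with NumPy and SciPy. It isn't exactly what I did (I used a 12px circular window on the hillshade raster), but a square window plus a little Gaussian blur gives you the same sort of noisiness index, scaled 0 to 100. The window size and blur amount here are just guesses to tinker with.

import numpy as np
from scipy import ndimage

def noisiness(hillshade, window=13, blur_sigma=3.0):
    # Local standard deviation of the hillshade: sqrt(E[x^2] - E[x]^2),
    # computed with moving-average filters over a square window.
    hs = hillshade.astype(float)
    mean = ndimage.uniform_filter(hs, size=window)
    mean_sq = ndimage.uniform_filter(hs ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    std = ndimage.gaussian_filter(std, sigma=blur_sigma)   # smooth it a bit
    return 100.0 * (std - std.min()) / (std.max() - std.min() + 1e-9)

Bright (high) values mean rough, noisy terrain; dark (low) values mean smooth terrain that can carry full detail.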
Here, I’ve calculated the standard deviation of the hillshade (using a 12px diameter circle window), and also blurred it a bit just to smooth things. The darker areas correspond to the smoothest terrain, and the bright areas are where we find a lot of jagged changes, such as in the rugged Sulci. Notice that even though the escarpment is steep, and the plain at the top center of the image is flat, both are dark because they’re relatively smooth and would be good places to keep lots of detail in our final image. In the end, what I’ve really done here is take a look at my initial, poor hillshade, and find out where the noisiest sections are. I think the analogy to image noise reduction is valuable here — we’re trying to reduce noise in our image, so that the major features become clear. So now I’ve got a data set which tells me a degree of ruggedness or noisiness for different parts of the terrain. There are other ways to get the same effect — you could do a high-pass filter, or the aspect analysis I mentioned above, or perhaps look at curvature. This is just my way of measuring things. Once I have this data set, I can move on to the fun part. What I want to do is use this to figure out where to keep details and where to lose them. I’m going to use this thing to do a weighted average of my original, high-detail DEM, and a much more generalized DEM. Where the terrain is very rough, I want the resulting data set to draw from the generalized DEM. Where it’s very smooth, I want it to use the detailed DEM. Where it’s in-between, I want it to mix both of them together, adjusting the level of detail in the final product based on the level of roughness in the terrain. In more mathematical terms, I want to use this thing as the weight in a weighted average of my original DEM and the generalized one. The general formula looks kind of like this: ((Generalized DEM * Weight) + (Detailed DEM * (WeightMax – Weight))) / WeightMax — where Weight is the value of our noisiness data set. Each pixel in the final output is a mix of the original DEM and the generalized one. Where there’s a lot of variation in the terrain, our Weight is very high, so we get a result that’s mostly the generalized DEM and very little of the detailed DEM. Where terrain is smooth, Weight is low, and we see mostly our detailed DEM. Here’s the output, once it’s been hillshaded: The smoothest areas retain all of their original detail, and the roughest areas are much more generalized. It’s a combination of the first two hillshades near the top of this post, with the best of both worlds. It could still use some tweaking. For example, in the Lycus Sulci, it’s still blending in some of the initial DEM into the generalized one, so I could tweak my setup a bit, by requiring the noisiness index to fall below a certain number before we even begin to blend in the detailed DEM. Right now the index runs from 0 to 100. So, an area with a noisiness of 80 would mean that we blend in 20% of the detailed DEM and 80% of the generalized. If I tweak the data set so that the new maximum is 40 (and all values above 40 are replaced with 40), then more of my terrain will get the highest level of generalization. Any place that’s at 40 (or was higher than 40 and has become 40) will get 100% of the generalized DEM and 0% of the detailed one. Here’s what we get: And here it is compared to the original relief: Improved hillshade on top, original on the bottom. 
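And for completeness, the blend itself is only a couple of lines of NumPy. This is just my own sketch of the formula above with the clamping tweak folded in; the variable names are mine.

import numpy as np

def blend_dems(detailed, generalized, weight, weight_max=100.0):
    # Weighted average of the two DEMs: where the terrain is rough (high weight),
    # the result leans on the generalized DEM; where it is smooth, on the detailed one.
    w = np.clip(weight, 0.0, weight_max)   # weight_max=40 reproduces the tweak above:
                                           # anything at or over 40 is fully generalized
    return (generalized * w + detailed * (weight_max - w)) / weight_max

# e.g. blended = blend_dems(detailed_dem, generalized_dem, noisiness(hillshade), weight_max=40)
# ...then hillshade the blended DEM as usual.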
Notice how seamlessly the two images blend together along the mountain slope — each of them has the same high level of detail. But in the Sulci, where we need more generalization, the improvement is manifest. For comparison, here’s my generalized DEM vs. the original before blending the two. The loss of fine texture detail on the mountain slope and especially along the cliff face becomes apparent here: So, there you have it. I feel like this is still a work in progress, that there are some other places it could go. Is this the best way to figure out how to blend the two DEMs together? Should I even be blending at all? Is this even a problem that needs solving? I am a bit unhappy with the median filter, I will say — it’s a classic of noise reduction, but it tends to leave things a bit…geometric. Here’s a more extreme example: There’s a balance still between cutting out detail and the artificial look of the median filter. I have also tried blurring, but then everything looks blurry, unsurprisingly. I’d like something that can cut out details, but keep sharpness. I may go back and use Terrain Equalizer some more to generate the blending base. But all this fits more on the side of “things you can use to blend into your detailed DEM,” and the main point I am writing about here is the blending concept. So, I invite you, gentle reader, to give me your input on where this can go, if it has any potential, and how to improve it. I think, after some weeks of work on this and a number of dead ends, my brain can take this no further without a break. Uncategorized On Projection Videos Posted by Daniel Huffman on 5th October, 2011 I’ve been meaning for some time to share these videos that I produced last year to assist in teaching projections to my students. Specifically, I wanted to use them to emphasize the importance of choosing projection parameters carefully to reduce distortions in the subject area, and to show how two different-looking maps can really be the same projection. The first video is of an Azimuthal Equidistant projection. The standard point moves around the map, beginning in the central US and ending near the southern end of Africa. I try to point out, when showing it, that the pattern of distortion remains the same because it’s the same projection, but that the location of those distortions on the earth changes as the standard point moves, and how the map at the beginning and the map at the end are appropriate for showing different locations. The second is of an Albers Equal Area Conic. First the central meridian moves, then the two standard parallels. Here I point out that the areas of the land features never change throughout the movie. Their shapes shift around significantly, but area is always preserved. The angle distortion moves with the standard parallels, and we can choose a set of standard parallels to best depict each area. We begin with a projection best suited for India and end with one adjusted for Sweden. By the time I show these videos, I’ve already gone over all these projection concepts — they’re just a nice way to reinforce what we’ve already discussed. Student responses suggest that the videos have been helpful in teaching distortions and the importance of choosing projection parameters. It can be a tough thing to get your head around, and I like to approach it from several different angles to make sure I’m reaching as many of them as I can. 
I made these using GeoCart (and Tom Patterson’s lovely Natural Earth raster), in a painstaking process which consisted of: 1) adjust projection parameters by a small amount (I think it was .25 degrees), 2) export image, 3) repeat 1-2 several hundred times, 4) use some Photoshop automation to mark the standard point/central meridian (though I had to add the standard parallels manually), 5) stitch together with FrameByFrame It took many hours. Soon thereafter daan Strebe, GeoCart’s author, pointed out at the 2010 NACIS meeting that he’d added an animation feature to the program, which probably would have saved me a lot of time. If you’d like the originals (each a bit under 40 MB, in .mov format), drop me a line. Uncategorized A Crosspost Posted by Daniel Huffman on 13th September, 2011 I made a post recently on my other blog, Cartastrophe, about the misuse of map elements. I feel like it belongs here, too, as it’s somewhat about cartography education, so here’s a link if you’d like to head on over. Uncategorized On Tweet Maps Posted by Daniel Huffman on 8th September, 2011 Gentle readers, welcome back. Forgive my prolonged absence (even lengthier on Cartastrophe). I’m unemployed, and it turns out that being unemployed can be a great deal of work, as I’ve been working harder these past couple months than when I was actually being paid. Much of my time has gone to building an atlas of my river transit maps, but I’ve also been taking some time to work on other projects. One of those projects which I’ve lately taken on as an amusing diversion is making Tweet Maps, which are simply maps that can be constructed within a post on Twitter. Here’s one I put up earlier today on my account, @pinakographos: Prime Meridian: North Sea (((GBR))) English Channel (((FRA-ESP))) Mediterranean Sea (((DZA-MLI-BFA-TGO-GHA))) Gulf of Guinea It’s a fun challenge, and it gives cause to think a bit more deeply about how representations are constructed, and what a map really is. Something I used to tell my students was that map readers are used to looking through maps — ignoring the representation and instead seeing the place it stands for. When most of us look at a map of Iceland, we don’t see patches of colors and lines and letters. We just see Iceland. But cartographers work in the layer of representation, and don’t have the luxury of looking through it. We have to create that transition between seeing bits of ink and imagining a territory. Making these Tweet Maps is a nice way for me to break out of the standard cartographic visual paradigm and think about how little it can really take to convey a space. I also hope that the unfamiliarity of this map style will make it just a bit harder for readers to simply look through the representation, and become more aware of that intermediate step that occurs between seeing some marks on a page and seeing the place that it symbolizes. But mostly I just do them because they amuse me. For more maps in the series, look for the #TweetMaps hashtag on Twitter. Uncategorized On The Ways of the Framers Posted by Daniel Huffman on 3rd May, 2011 We seem to like naming things after people; buildings, streets, awards, etc. Everywhere you look there are names on the landscape, meant to memorialize some historic figure deemed worthy. But it rarely works. Generations pass, and we no longer apprehend the significance of the fact that we live on Adams Street or walk past DeWaters Hall on our trip through campus. 
That’s just what they’re called; it doesn’t even occur to us that they share names with specific human beings. When James Doty platted the first streets of Madison, Wisconsin, in 1836, he named them after the signers of the U.S. Constitution. Today, though, that connection is lost on many of its residents. They have no idea whom their street was meant to honor, nor know that all the street names share this common theme. This last month I’ve been working on a map, The Ways of the Framers, which aims to reconnect Madison’s modern citizens with the people their city was intended to memorialize. The street grid is rendered using signatures traced off of a scan of the U.S. Constitution. Handwriting is personal, and it putting it on the map is a way to give the reader a more direct human connection with the historical figure. It’s different than simply reading a webpage about each figure. It puts a little bit of George Washington’s personality into the landscape, into the place where the reader lives. Only the streets which are named after the Framers of the Constitution are shown, so it’s probably not the best map to use for navigational purposes. I almost included other streets on the map, such as those named after non-Framers, but ultimately decided to keep it focused and simple. Fun fact: Three of the framers no longer have streets named for them. Robert Morris, Gouverneur Morris, and George Clymer. Morris St. was later renamed to Main St., and Clymer St. was renamed Doty St. Click on the thumbnails for a few images of the map. Click here to purchase a 36″ x 18″ print. Click to download a free PDF (~25MB), which you may use according to the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. If you wish, you can pay me what you think my work is worth: click here to donate via PayPal. Important: Zazzle will let you shrink the map down from the original size if you ask, but I cannot guarantee it will look good if you do. 10% of profits from the sale of prints of this map will be given to the University of Wisconsin Cartography Lab, which trains students in the cartographic arts. I was one such student, and I would not be where I am today without their support. It is a small way to repay the vast debt I owe them and help give other students similar opportunities. Note: This version of the map does not include streets that are not named for signers of the constitution. I have an alternate version, where those streets are shown with thin lines. Contact me if you would like to obtain a print or a PDF of this version. Uncategorized On Human Cartography Posted by Daniel Huffman on 20th April, 2011 At the instigation of my colleague Tim Wallace, the UW Cartography Diaspora has been lately abuzz with a debate on the role of art and science in cartography (particularly web cartography). Today’s post is my contribution to the discussion. For some background, I recommend you first read through the comments of my colleagues on the subject: Tim Wallace: “Web Cartography in Relation to Art & Science“ Tim Wallace: “On Art & Science in Web Cartography“ Andy Woodruff: “Apart from being dead, Art and Science are strong in web cartography“ Tim has challenged several of us to respond to him in writing, so more of my colleagues may be chiming in later. I’ll add their posts here as they come up. 
Onward to my own comments… I'm going to stray a bit from where my colleagues have focused and talk about art in cartography generally, not just where it fits in web cartography, because that's what caught my attention initially. For me, this whole debate started like this: Tim: "…my commentary is on the displacement of art in web cart[ography]." Me: "If art's being displaced from web cartography, that makes it not cartography anymore." Caveat: Tim may have been talking about horse carts and I just assumed he meant cartography. Among all this discussion of "what is the role of art in cartography," my proposition is this: cartography is a form of art. Art is not simply a component of cartography, alloyed with a liberal dose of science or technology or hackery. Art is what cartography is made of. It belongs on the same list as sculpture, as poetry, as painting. Mandatory Venn diagram What of science? Doesn't a lot of that go into mapmaking? We cartographers use fancy digital tools that can calculate and render smooth bezier curves or instantly translate a color from an RGB space into CMYK process colors and determine how much ink to lay down based on print materials and coatings, etc. We also use math to analyze and manipulate our data: map projections, interpolations, calculating buffer zones, etc. Does this make cartography a science as well as an art? Not necessarily. A ceramicist relies on redox chemistry in order to produce colorful glaze patterns, firing everything in carefully controlled kilns to ensure that they achieve a desired appearance. A metal sculptor welds and files and cuts with various modern technological implements. A painter employs different varieties of paints, blended with precision in modern factories. Does this mean that all of these pursuits rely on both art and science, sitting at the intersection of those two august concepts? The argument that cartography is, or involves, science boils down to two things: tools and data. Cartographers use tools and techniques that were developed through scientific experimentation and research. But so do other arts. The synthetic painter's brush didn't invent itself. The other half of the argument is that cartographers use math and science to manipulate data. Again, that doesn't make us unique. The data are our clay, the raw material input that our art requires. We manipulate our data the way a sculptor shapes their medium of choice into a final expressive work. I might use some mathematical formulae to transform a dataset, but a ceramicist will use a modern human-built kiln to change the chemical properties of their clay into something more desirable. Both require education and experience, and an understanding of the raw materials and how they are best manipulated. If cartography is both an art and science, so is sculpture. So is painting. So is photography. So is architecture. It goes on. We cannot declare that cartography is both an art and a science without claiming the same for many other fields. If we're all willing to do that, then, yes, I agree cartography is an art and science. But if sculpture is "just art," then so is cartography. There may be a science to the tools or the data or the materials, but the art is in what the artist does with those inputs. That is where cartography lies. Cartography is about creating something out of spatial data, just as painting is about creating something out of pigment and canvas. Art is in the doing. Back to Tim's prompt. 
If art is missing from web cartography, or is at least not as present as we’d like, it’s because art requires people. What’s really missing from web cartography, and a lot of digital cartography generally, is humanity. Cartography is a fundamentally human practice. Machines don’t need maps — they can understand their environment through a series of databases and formulae. They don’t need a visual expression of space to help them interpret and interact with places, the way that people do. For most of human history, the maps people read were made by fellow human beings who drew everything out by hand and with at least a modicum of thought to how it looked. Every mark on the page involved a decision and an intent; an artist making use of the inputs at hand to try and evoke the desired reaction from a reader — maybe to create an understanding (this is where the river is), a judgment (the country across the river is a threat), or a feeling (worry that said country is going to harm us). Now, however, we have machines that make the maps for us. Through automated or semi-automated processes, people are involved less and less in the creation of the final map. Click a button and the computer will place everything for you, and color it, too. Most of the marks on the final page can make it there with no active human decision behind them. No more intent. No human brain considering how the typeface or the line color or weight will affect a fellow human reader. There is less art now because there is less humanity. Machines do not express, or create, or understand how to evoke a reaction. Machines do not make art. When humans made maps for each other, the cartographer had at least some understanding of how their work might influence a reader’s thoughts and feelings, by virtue of being the same species. But now the creator of the map is part digital, a human-machine hybrid, and that connection with the reader is fading. So many maps today are unattractive because they are alienating, because they were not made by people, but by insensate machines. There is no sapience behind the lines they draw, no appreciation for mood, for aesthetics. The machine does not desire to make you think or feel or learn anything in particular, as the artist does, and this is the heart of what is wrong with so much of cartography today. Only humans can make maps for other humans. Digital tools are all well and good, but they must remain just that: tools, in the hands of a human capable of wielding them wisely and with a purpose. Therefore if there is no art in web cartography, it is no longer cartography, because cartography is an art. Instead, we are seeing something new, the rise of the map made without humans. That’s a recent development, and it certainly has its own value as far as things like production speed, accessibility, and cost go. But the lack of human intent, of art, means that it is a fundamentally different thing than cartography. Related, to be sure, but separate. I’ll leave it to someone else to think of a name for it. Just like I wouldn’t call it art when an automated algorithm paints a painting based on a digital photograph, it’s not cartography when a server tosses together a map based on a spatial database. Any art that inheres in that process was left there in the form of the lingering human intelligence of the programmers who helped the computer figure out how to make the map/painting, and that’s usually not too much. There is no art without creative intention. 
Therefore there is no cartography without a human creator. In the end, I and the other bloggers involved in this discussion are neither right nor wrong. There are a lot of different ways to think about cartography; this one is mine, based on my self-image as a spatial artist. I don’t think any major decisions need to be made about what cartography really is. There are just different models that help us all figure out what it is we’re doing, and how to do it better. In an increasingly digital world, this is how I am personally trying to articulate the relevance of my role as a human cartographer. EDIT: A tweet from @shashashasha points out that I neglected to say anything about that other tricky term, “design.” To me, design means making decisions based on goals. It’s again about using our human brains to see something we want to do, then making cartographic choices to get there. The random and the organic are undesigned. Where there is intelligence and intention, there is design, which ties back into most of what I said above. Uncategorized Remembering LineDrive Posted by Daniel Huffman on 8th March, 2011 Once upon a time, there was a website called MapBlast. This was during the wild frontier days of online road maps about ten years ago, when MapQuest was king and Google Maps was but a gleam in the eye of a couple of Danish guys in Australia. MapBlast never seemed to me to be more than a minor player during these days, but it had one special feature that made it my website of choice for route planning: LineDrive directions. Then, as now, all the other online mapping services gave you route maps that looked like this, a highlighted route drawn onto a standard road map: MapBlast, though, could give you what they called LineDrive directions, a linear cartogram of your trip that looked like this: LineDrive was developed by Maneesh Agrawala, Chris Stolte, and Christian Chabot at Stanford University, and they describe their system in a 2001 paper if you’re interested in the details. What is most interesting to me is the creators’ inspiration for LineDrive: hand-drawn route maps. While most online mapping services were, and still are, patterned after paper road atlases, LineDrive was designed to look like what you might quickly sketch on a napkin. Grabbed from the above-mentioned paper. Probably not done on an actual napkin. This starting point leads to something quite remarkable. Hand-drawn route maps are custom products — they’re for just a few people, and are about going between one specific place and another. A road atlas, on the other hand, is the same for everyone. It doesn’t change based on the situation. It’s multipurpose, which is quite valuable, but it’s not as effective for a given route as something customized. When the Web came along, the mapping services that came online simply translated the idea of a paper road atlas into its digital equivalent. They added a few enhancements — you could zoom in and out or draw routes on them, but it was fundamentally the same thing. It was still multipurpose, not customized. When mapping services today talk about customization, they mean that you get to draw a blue line on top of their map or add a picture of a pushpin. But the map is always the same for everyone. LineDrive’s most remarkable feature was that it gave everyone their own map, fully customized to their specific situation. It should have been revolutionary, but it turns out that nothing ever came of it, at least in the realm of online driving directions. 
I did not apprehend the significance of this development at the time. Instead, I merely loved the design. It was brilliant. Clean, simple, effective. It tells you everything you need to know about how to get from A to B, and it tells you nothing else. There is no clutter. It does not take up my time or printer ink with roads I won’t be using, or cities hundreds of miles from those I’ll be passing through. I can look at this map quickly while driving. I don’t have to hunt around the page to find the little blue line that contains the path information I need — everything on this page is there because it’s essential. The LineDrive map shows every detail of the whole route at once by distorting scale. It makes the short legs of the trip look longer, and the long legs shorter, so that everything is visible on the same page at once. With a more traditional road map, either in an atlas or printed off of Google/Bing/etc., I would actually need several maps at different scales to cover each part of the route — zooming in to Madison to show how I get to the highway, then zooming out to show the highway portion of the trip, etc. LineDrive fits everything on the same page, and it does so legibly. Again, it’s customized, which is what gives it value. Since the scale changes, the length of each leg drawn on the map doesn’t correspond to the same distance. Therefore each line is marked with its distance, so you don’t lose that valuable information. In fact, distance is presented more clearly than on a standard road map. If I wanted to figure out how long each leg of my trip was on Google’s map, I’d have to go compare each one to the scale bar in the corner, measuring it out bit by bit. Or, I’d have to check the written directions. It’s not usually presented clearly on the map itself. For me, LineDrive eliminates the need for a verbal listing of directions accompanying the map. They can be clearly read out from the map itself, distances and turn directions and all. This map doesn’t do everything, to be sure. It’s only good for getting from one place to another. It’s very purpose-specific, and there are plenty of things that more standard traditional print and online road maps can do that this can’t. It won’t tell you where the city you’re driving to actually is, or what’s nearby. You can’t deviate from the path, so you can’t react to road closures or changes of plan by switching roads. You don’t know the names of the towns you’ll be passing through. Within its particular niche, though, LineDrive was very, very effective, just like the hand-drawn maps it was inspired by. I have never understood why it did not catch on. Perhaps most people don’t usually use the map to drive the route — the verbal directions tell them where to go more clearly. Perhaps they only have the map to plan routes, not to follow them. The map may be used for reviewing the route before you begin, or re-routing in case of emergency. I’m not sure if any of this is true; it’s just idle speculation. If it is true, though, then LineDrive doesn’t offer any advantages — it doesn’t explain the route any more clearly than verbal directions, and it doesn’t let you do any route planning. Alas, it died too soon. Microsoft bought MapBlast in 2002 and closed them down. They took the LineDrive technology and kept it going on their own map service, MSN Maps & Directions, until 2005. The site stopped updating in 2005, however, and Microsoft’s subsequent mapping endeavors don’t appear to offer LineDrive as an option. 
You can still access the old 2005 site, however, at http://mapblast.com/DirectionsFind.aspx. (EDIT 10/19/11: Microsoft appears to have finally taken MapBlast offline. Maybe traffic from this post reminded them that they’d left it up.) In generating a sample image for this post, I noticed that the database has quite a few holes and errors in it, so I wouldn’t trust the directions it gives. It’s merely a relic of a different era. If you try it, be sure to check out the standard map directions, too, in addition to the LineDrive ones — it’s a nice reminder of just how far online road map design has come in the past few years. LineDrive was something truly different. MapBlast’s competitors offered a slightly enhanced version of the paper road atlas. MapBlast offered this, too, but they would also give you your own version of a hand-drawn route map for anywhere you wanted to go, at a moment’s notice. I feel like LineDrive made much more effective use of the power computers can bring to cartography than other online mapping services have. It was a more creative re-thinking of what the digital revolution could do for map-making. I never used to draw route maps by hand; I would simply write verbal directions and bring an atlas. Discovering LineDrive changed that, though. Every long and unfamiliar trip since then has started with me taking pen to paper and sketching out a simple linear cartogram to get me where I want to go. Uncategorized On Salvation Posted by Daniel Huffman on 2nd March, 2011 Let me tell you about how I was saved by maps. I used to be a chemist some years ago. I worked at a mom & pop pharmaceutical laboratory in my home town of Kalamazoo, Michigan. From the time I was ten or eleven, I had planned on this job. I blame Mr. Wizard — I loved watching all the seemingly magic things he could do with brightly-colored liquids in test tubes. Now that I had my childhood dream job, however, I was disillusioned. There was no magic, there was only routine humdrum. The work was hard, it was stressful, and it was frequently dull. I started the slide into depression. I came home every day feeling far too tired for the number of hours I was working. Time off seemed fleeting, and I spent the whole weekend worried about the fact that Monday was approaching, and wondering sometimes if I could face another week. I felt constantly pursued, and unable to relax even when given respite. I decided to get out. I had planned on going to graduate school right after college. I liked being in school and I succeeded there. It seemed natural to continue. But inertia kept me in the workforce for about three years before I finally managed to get enough forward momentum to return to school. My other love in school besides chemistry had been history. I can’t blame Mr. Wizard for this one, but I’ll blame my high school teacher, Mr. Cahow, instead. My bachelor’s degree at Kalamazoo College was in both subjects. At first I tried, and failed, to get in to graduate school to study classical history. The next year, though, I switched focus and applied to History of Science departments. Given my background, it seemed a pretty sensible fit, and the University of Wisconsin agreed. They let me in, and I was off to Madison in the summer of 2007. Coming along with me was my girlfriend of three years, an amazing and brilliant woman whom I had met in college, and with whom I was very much in love. We got a small apartment together about a mile from campus, she found a job, and we settled in to an unfamiliar city. 
It was about here that my life started completely unravelling. I did not fit in to my new graduate department at all. I was wholly out of my depth; my background was insufficient to match the demands of the program. I had gone to graduate school because I liked school, not because I was deeply passionate about the history of science in particular. It just seemed interesting. My peers, on the other hand, seemingly had been pursuing this track for far longer than I, and had put in enough extracurricular effort through their college years that by the time we all started at Madison they were talking over my head. I fell swiftly behind and lost heart. I realize now that while I might have had the relevant skills, I lacked the critical element of passion. I did not feel like spending all week trying to get through 400+ pages of reading, because the end goal was simply not enticing enough, nor was the journey greatly intriguing. This had been my grand escape plan. My job had depressed me, and I was going to return to school, something I was good at, and I was going to enjoy getting an advanced degree and then spend the rest of my life in academia. My depression returned, much stronger, as this escape plan crumbled away. My girlfriend, meanwhile, made a bunch of new friends in town, and started spending more and more time with them. Sometimes she wouldn’t come home, or even communicate with me, for days. Eventually, she told me that it was because she didn’t like being around me when I was depressed. She kept getting more distant, and moved into her own separate room. She would have parties at our apartment and introduce me as her “roommate,” and then close me off into one room so that I couldn’t hear what was going on, and so that no one could see me. Then she’d leave for a few days and I’d have to clean up the mess. I did not generally have the wherewithal to stand up for myself in the face of her steadily worsening treatment of me. Her behavior eventually reached the point where I started reading online to determine if I was in an emotionally abusive relationship. None of this helped my depression. If you’ve never been depressed, I’ll just say that it’s much worse than it sounds, and I imagine everyone manifests it a bit differently. I would stay in bed for hours. I would avoid doing any of my school work. It took a significant effort to scrape together the energy needed to do any sort of housework or cooking. I was constantly bored, but could not muster myself to do anything to make me less bored, and I did not have the courage to face the growing pile of assignments that I was falling behind on. I felt trapped and powerless. I made a lot of maps during that period; it was one of the only activities that gave me any sort of positive emotion. I had actually started a cartographic hobby a few months earlier, before I moved to Madison. As far back as I can recall, I liked reading maps. I used to be the navigator when my family would take trips. I like paging through atlases for fun. So it was natural enough that I eventually determined to learn a bit about how to make them. One of my first efforts was for Wikipedia, a map of the Kalamazoo River: I was terribly proud of that map. I still am, despite the many, many flaws I can see in it today. I kept up my new mapmaking hobby when I moved to Madison. It gave me something creative to do, and I have since learned that, for me, being creative is critical to keeping a positive emotional state. 
Making maps was the light of my day, in a time when my days were very, very dark. I probably talked more to my colleagues in History of Science about the maps I made than anything actually having to do with my graduate work. I did not construct anything particularly interesting or attractive. Mostly I just put together choropleths of census data to answer idle curiosities. But I was making something, and it felt good. And it involved learning geography, which was fun and new. It was avoidance behavior, to be sure — there were important things I really needed to be doing, and spending eight hours on a map was just a way of procrastinating, but I needed the escape. I could not always face my life outside of my cartographic refuge. I was still trapped in my ill-fitting graduate program. I was adrift, and didn’t know what to do. I did not thrive as a chemist; I did not thrive as a historian of science. What now? I had no other obvious options, and I considered dropping out of school. But one day a friend of mine in the History of Science department pointed out that, being as I liked maps so much, perhaps I could go to school to learn about them. And, it so happened that there was a first-rate cartography program at Wisconsin. I spent my second semester in graduate school taking a cartography and a GIS class while applying for a transfer to the cartography program. I found that I liked these new classes, and that it was no longer a massive chore to get up every day and try and get to campus and accomplish something. I had more energy. I felt like I had a future. Somehow, despite no background or training, despite a weak performance in History of Science, and despite an application letter that didn’t say much more than “I really like maps,” the University of Wisconsin-Madison Department of Geography accepted me, and I began a Master’s program in Cartography & GIS. They took a chance on me, and I cannot hope to repay them for that. I threw away everything I had ever thought I wanted to do with my life and leapt blindly into the unknown. I abandoned the safe, clear path I had been plotting since I was a child. As simple as it sounds, that was probably the most courageous thing I’ve ever done in my life. I thrived immediately in my new program. I found that I was part of something I had lacked before: a community. I was surrounded by supportive friends and colleagues from whom I learned much, and whose input can be seen in everything that I design. I had a place to belong, and a future that I was passionate about. I came out of my depression. I worked up the fortitude to bring about an end to my relationship with my girlfriend, who by now was living on her own yet was unwilling to formally let go. I began to move on from my old life, to the new one I have now. I am a cartographer, and a teacher (another chance that the Geography department took on me). I love what I do. I draw strength from it. It feels like I should have been doing this all along, like I was made for it. I write this story because I want people to understand what maps mean to me. Cartography is not just a hobby, or a job. It has taken me through the darkest times of my life. It has helped me overcome depression. It has given me a renaissance and a calling. It has given me a community which has enriched me personally and professionally. It has saved me from a life I would prefer not to contemplate, one which I cannot believe would be as fulfilling. I am very glad that I made that terrible map for Wikipedia, one winter in 2007. 
Uncategorized On Salutary Obfuscation Posted by Daniel Huffman on 1st February, 2011 Last week, a map which I made about swearing on Twitter gained its fifteen minutes of Internet fame. I heard a lot of comments on the design, and one of the things that many of the more negative commenters (on sites other than mine) were displeased by was the color scheme. It was, as they said, very hard to distinguish between the fifteen different shades of red used to indicate the profanity rate. This complaint was probably a good thing, because I did not particularly want readers to tell the shades of red apart and trace them back to a specific number. In designing the map, I took a couple of steps which made it more difficult for people to get specific data off of it. Before I can explain why I would want to do this, first you need a quick, general background on how the map was made. This is a map based on a very limited sample of tweets. Twitter will give you a live feed of tweets people are making, but they will only give you about 10% of them, chosen randomly. On top of that, I could only use tweets that are geocoded, which means the user had to have a smart phone that could add a GPS reading to the tweet. A third limitation was that I could only use tweets which were flagged as being in English, being as I don’t know any curse words in other languages besides Latin. Finally, there were occasional technical glitches in collecting the data, which caused the program my colleagues and I were using to stop listening to the live feed from time to time. If you add those four limitations up, it means that I made use of somewhere between about 0.5% and 1% of all tweets going on in the US during the time period analyzed. Possibly not a strongly representative sample, but still a large one at 1.5 million data points. In that limited sample, I searched for profanities. This is based on my subjective assessment of what may be a profanity (as many readers sought to remind me), and the simple word searches I did may have missed more creative uses of language. Once I had the number and location of profanities, I could start to do some spatial analysis. I didn’t want to make a map of simply the number of profanities, because that just shows where most people live, not how likely they are to be swearing. So, I set up some calculations in my software so that each isoline gives the number of profanities in the nearest 500 tweets, giving a rate of profanity instead of a raw total. Unfortunately, for places that are really sparsely populated, like Utah, the algorithm had to search pretty far, sometimes 100 miles, to get 500 tweets, meaning the lines you see there are based partially on swearing people did (or, didn’t) in places far away. If I hadn’t done this, then there would be too few data points in Utah and similar places to get a good, robust rate (counting the # of profanities in 10 tweets is probably not the most representative sample, we need something much bigger to be stable). Maybe I should have just put “no data” in those low areas, but that’s another debate. So, the map is based on a limited sample of tweets, and the analysis requires some subjective judgments of what’s a swear word, and then some heavy smoothing and borrowing of data from areas nearby in order to get a good number. What all that means is: you shouldn’t take this as a really precise assessment of the swearing rate in your city. 
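For the curious, the rate calculation itself is simple enough to sketch in a few lines of code. This is not the actual toolchain I used (the real work happened inside my GIS software); it is just an illustration of the idea, and it assumes the tweets are already sitting in arrays of projected coordinates plus a true/false profanity flag.

```python
import numpy as np
from scipy.spatial import cKDTree

# Assumed inputs (illustrative):
#   tweet_xy   -- (n, 2) array of projected tweet coordinates (e.g., meters)
#   is_profane -- (n,) boolean array, True where a tweet matched the word list
#   grid_xy    -- (m, 2) array of locations at which to estimate the rate
#                 (the surface that later gets contoured into isolines)

def profanity_rate(tweet_xy, is_profane, grid_xy, k=500):
    """Profanities per 100 tweets among the k nearest tweets to each grid point."""
    tree = cKDTree(tweet_xy)
    _, idx = tree.query(grid_xy, k=k)          # k nearest tweets to each point
    return is_profane[idx].mean(axis=1) * 100  # fraction profane, per 100 tweets

# rate = profanity_rate(tweet_xy, is_profane, grid_xy, k=500)
```

Changing k from 500 to 300, or swapping in a different word list, changes the surface, which is exactly the point I am about to make.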
If I had chosen to look for different words, or if the Twitter feed had sent a different random 10% of tweets, or if I had chosen to search profanities in the nearest 300, rather than 500 tweets, then the map would end up looking different. Peaks would drift around some and change shape. But my feeling is that the big picture would not change significantly. Under different conditions, you'd still see a general trend of more profanity in the southeast, a low area around Utah, etc. The finer details of the distribution are the most shaky. Okay, back to my main point about trying to make it difficult to get specific numbers. What I wanted readers to do is focus on that big picture, which I think is much more stable and reliable. And so I made some decisions in the design that were intended to gently push them toward my desired reading. First off is that color scheme, which has only small changes between each level of swearing, which makes it hard to look at your home and tell if it's in the zone for 12 or 13 or 14 profanities per 100 tweets. What's important is that you know your home is at the high end. Whether it measured at 12 or 14 doesn't matter, because that number is based on a lot of assumptions and limitations, and is likely to go up or down some if I collected on a different day. The color scheme makes the overall patterns pretty clear — bright areas, dark areas, medium areas, which is where I want the reader to focus. It's weaker in showing the details I would rather they avoid. The other thing I did was to smooth out the isolines manually. The isolines I got from my software had a very precise look to them. Lots of little detailed bends and jogs, which makes it look like I knew exactly where the line between 8 and 9 profanities per 100 tweets was. It lends an impression of precision which is at odds with the reality of the data set, so I generalized them to look broader and more sweeping. The line's exact location is not entirely reflective of reality, so there's no harm in moving it a bit, since it would shift around quite a bit on its own if the sample had been different. Original digital isolines Manually smoothed in Illustrator This is a subtler change, but I hope it helped make the map feel a bit less like 100% truth and more like a general idea of swearing. Readers have a rather frightening propensity for assuming what they see in a map is true (myself included), and I'd rather they not take the little details as though they were fact. Had I to do it over again I probably would have made it smaller (it's 18″ x 24″). Doing it at 8.5″ x 11″ would have taken the small details even further out of focus and perhaps kept people thinking about regional, rather than local patterns. Maybe I shouldn't have used isolines at all, but rather a continuous-tone raster surface. There are many ways to second-guess how I made the map. Anyway, the point I mostly want to make about all of this is that it's sometimes preferable to make the design focus on the big picture, and to do so you may need to obfuscate the little things. Certainly, though, a number of people were unhappy that I impaired their preferred reading of the map. People like precision, and getting specific information about places. But I didn't feel like I had the data to support that, and I would be misleading readers if I allowed them to do so easily.
计算机
2014-23/3292/en_head.json.gz/22873
Rumor: GTA 5 PC release in 'early 2014' If rumors are true, the 600,000 users who have signed the petition to bring Grand Theft Auto 5 to PC are about to be very happy. According to "multiple industry sources" who have spoken to Eurogamer, a PC version of GTA 5 will be available in early 2014. The sources have indicated a release during the first quarter of 2014 for the blockbuster hit, which released on Xbox 360 and PS3 in mid-September. The idea is to mirror the release of Grand Theft Auto 4, which launched on PS3 and Xbox 360 in April 2008 before hitting PC in December 2008. If Rockstar follows a similar plan, we can expect GTA 5 on PC sometime in May -- which isn't technically in the first quarter. So it seems we have our first crack in the rumor mill. Rockstar has yet to officially announce plans for a PC version of their game, which has generated over $1 billion in sales on current-gen consoles while breaking multiple Guinness World Records. However, the company's track record, along with various other rumors, seems to suggest the game will eventually come to PC. The big question is: when?
计算机
2014-23/3292/en_head.json.gz/23114
My Web Portfolio I'll be honest. Compared to full-time web design shops, my portfolio is modest. But I believe the quality of work speaks for itself. My customers like what I have done for them, and so do their site visitors. See the Customer Testimonials page to view comments from my customers. Customer Gallery Thanks to Project Seven for their Lightshow Magic widget. Click any thumbnail image to launch the slide show. In the slide show, clicking the image takes you to the actual site. This web site was my first professional project in 1996. The design has changed over the years, and now most pages of this very large web site are coded in a css-driven layout. My newest customer's site is a make-over of a long-running site. The new version improves on the previous site's usability by replacing HTML frames with a fixed-position div that mimics the functionality of the old frameset, keeping the navigation menu visible when users scroll down in page content. I no longer manage this site, as the club decided to produce the site themselves. I preserve it in this portfolio only as another example of my work. The link to the present site lets you decide which version of the site you like better. :-) This site, promoting the joys of driving a MINI in Manitoba, launched in 2006. Designed to the latest W3C standards, the site also features a slick little image gallery designed with the aid of Project Seven's "Image Gallery Magic" component for Dreamweaver. In the late summer of 2008, we launched this updated version of a long-standing web site to rave reviews from the site owner. Inspired by the art of Piet Mondrian, this design is one of the most colourful and fun projects we have ever worked on. As with all our newer projects, the ModernWorks site is 100% W3C standard XHTML and CSS. This site went live on the web before we ever worked on it. Dr. Leclair's former webmaster disappeared, and she came to us in the fall of 2003. In the late summer of 2007, we upgraded the site to use a pure css-driven layout and text-based navigation. Accordingly, the download time for this site's pages is half of what it was with the bloat of all those NetObjects Fusion Navigation button graphics. This site first went live on the web in 2001. The site owners decided they needed to have a local web firm in early 2012, so the site may soon no longer look like this screen shot. I maintain this link as a courtesy to a good former customer. I built this site in 2003, and the site owners decided they needed a local webmaster in 2009. I present it here as another sample of my work, and give you a link to the new site as a courtesy to a good former customer. This web site was my second professional project, in 1997. It belongs to a great musician and good friend, long may she wail on her flute... This is the web site of a jewel in Winnipeg's musical crown. This web site features an on-line database that allows students to enter the festival on line and the site administrators to publish the festival schedule directly from the database of entries. This saves untold hours of work every year when the festival is approaching. Oops, I almost forgot the most important one... Your new web site could be in the gallery above. Contact me if you're interested... © 1996-present K-C-P.com and Karl Strieby
计算机
2014-23/3292/en_head.json.gz/23237
Writing Linux History: Groklaw's Role in the SCO Controversy By David Halperin There's nothing like a good legal battle to whip up passions, and the SCO Group-versus-the-open-source-world dogfight is no exception. Rhetoric runs high. From the open-source advocates, it's "you're stifling free thought in the name of greed." SCO allies counter with "you're attacking the core values of capitalism." As SCO president and CEO Darl McBride himself has put it, "The stakes are extremely high. The balance of the software industry is hanging on this." The "this" in question is SCO's assertion that it owns some of the code now being used in Linux; its US$3 billion suit against IBM for copyright infringement; its attempts to convince enterprise Linux users to pay for licenses; and its threat to sue a noncomplying enterprise Linux end-user. The cases now reaching the courts are complex because SCO's actions affect not just one competitor, but the whole open-source software community. As open-source advocates see it, SCO threatens their entire collaborative philosophy, as well as the legal cornerstone of most open-source licensing, the General Public License (GPL). The Birth of Grok Many who felt they had an interest in the outcome found it hard, at first, to get information about the opposing players' positions or the laws that underpinned them. In the words of one open-source developer, "Many of us who hold a personal stake in Linux -- either we helped write it or our businesses depend upon it -- were completely in the dark as to what SCO was claiming and what it meant to Linux." Enter Pamela Jones and Groklaw. A paralegal, Linux programmer and self-described geek, Jones is passionate about the benefits of open, worldwide intellectual collaboration -- and dismissive of the ideas behind most proprietary development, which she says results in software that's "like petrified wood." "Grok," as most of you probably know, is a Martian verb coined by Robert A. Heinlein in his novel Stranger in a Strange Land, metaphorically meaning "to understand something so intimately as to become virtually one with it." When the SCO fracas began, Jones started a site where she presented the results of her legal research into the case. This site proved so popular with Linux and open-source regulars that she expanded the scope of the effort and moved it to a new Web site, Groklaw.net. Case Law Focal Point Groklaw.net hit the Web in May 2003, and, according to Jones, her readership doubled immediately. The site has become the primary resource for groups and individuals to research the legal labyrinths of SCO's disputes with IBM and others. Jones is now assisted by a team of legal researchers and Web gurus. At the beginning of February, Jones was named Director of Litigation Risk Research at Open Source Risk Management, LLC, on a one-year contract to help develop an insurance product aimed at protecting open-source users. She will, however, continue to edit Groklaw.net. But what is Groklaw's actual influence? Is it a soapbox for a bunch of wild-eyed zealots and naive idealists, or a serious attempt at -- in one supporter's words -- illuminating the darkness? 
While the site has attracted wide praise from most members of the open-source community, others dismiss or vilify it. One Linux developer noted, "By serving as the focal point for documenting the case -- for example, through transcripts -- Groklaw educated many of us as to the exact nature of what SCO claimed." He went on to suggest that until about last May, he thought SCO could be right, but said he has "no doubt now that SCO's directors are either mistaken, or they're crooks." He thanks Groklaw for that. Conversions like that one help explain why SCO defenders take a dim view of Groklaw. Blake Stowell, the company's director of public relations, says: "I think the unfortunate thing about Groklaw is that many people reference the site as a supposed 'credible resource' and take a lot of what is posted there as the absolute truth. I find that there is so much misinformation on Groklaw that is misconstrued and twisted that it's probably one step above a lot of the ranting and dribble that takes place on Slashdot." Show Us Your Code! One frequently raised legal point nicely illustrates the polarized views of the claimants. SCO has not yet revealed which portions of code it claims to own. Open-source spokespersons ask how SCO can press a case on the basis of evidence it won't reveal, and they have said that, if it is published and is indeed found to be the same as what's in the Linux kernel, they'll have no problem with removing it. SCO's Darl McBride disagrees, saying: "This kind of cleanup is an Exxon-Valdez kind of cleanup. It's not simple." In any case, one industry commentator points out that contractual agreements would prevent any public revelation of the code outside a trial. And, if it were removed from Linux, SCO -- its owner -- would never see a cent beyond initial damages. Long-Term Litigation That prospect pleases Pamela Jones, who has been quoted as saying, "Litigation isn't a long-term business strategy, even if you 'win.' It's a one-time payout. Then what? If you have no product people want, that's the final chapter, especially if people really don't like you and what you stand for." It may be that kind of intransigence that leads SCO's Blake Stowell to hint at darker motives. "Doesn't anyone find it the least bit ironic," he asks, "that Pamela Jones lives ... less than 10 miles from IBM's worldwide headquarters, and that Groklaw is hosted, free, by a nonprofit outfit called iBiblio, which runs on $250,000 worth of Linux-based computers donated by IBM and a $2 million donation from a foundation set up by Robert Young, founder of Red Hat?" "Call me crazy," adds Stowell, "but I somehow think that Pamela Jones isn't just a paralegal with nothing better to do with her life than host a Web site called Groklaw that is dedicated to bashing SCO. I think there is a lot more to her background and intentions than she is willing to reveal publicly. I believe that Big Blue looms large behind Pamela Jones."
计算机
2014-23/3292/en_head.json.gz/24321
The once and future e-book: on reading in the digital age An e-book veteran looks at the past, present, and future of the business. I was pitched headfirst into the world of e-books in 2002 when I took a job with Palm Digital Media. The company, originally called Peanut Press, was founded in 1998 with a simple plan: publish books in electronic form. As it turns out, that simple plan leads directly into a technological, economic, and political hornet's nest. But thanks to some good initial decisions (more on those later), little Peanut Press did pretty well for itself in those first few years, eventually having a legitimate claim to its self-declared title of "the world's largest e-book store." Unfortunately, despite starting the company near the peak of the original dot-com bubble, the founders of Peanut Press lost control of the company very early on. In retrospect, this signaled an important truth that persists to this day: people don't get e-books. A succession of increasingly disengaged and (later) incompetent owners effectively killed Peanut Press, first flattening its growth curve, then abandoning all of the original employees by moving the company several hundred miles away. In January of 2008, what remained of the once-proud e-book store (now called eReader.com) was scraped up off the floor and acquired by a competitor, Fictionwise.com. Unlike previous owners, Fictionwise has some actual knowledge of and interest in e-books. But though the "world's largest e-book store" appellation still adorns the eReader.com website, larger fish have long since entered the pond. And so, a sad end for the eReader that I knew (née Palm Digital Media, née Peanut Press). But this story is not just about them, or me. Notice that I used the present tense earlier: "people don't get e-books." This is as true today as it was ten years ago. Venture capitalists didn't get it then, nor did the series of owners that killed Peanut Press, nor do many of the players in the e-book market today. And then there are the consumers, their own notions about e-books left to solidify in the absence of any clear vision from the industry. The sentiment seeping through the paragraphs above should seem familiar to most Ars Technica readers. Do you detect a faint whiff of OS/2? Amiga, perhaps? Or, more likely, the overwhelming miasma of "Mac user, circa 1996." That's right, it's the defiance and bitterness of the marginalized: those who feel that their particular passion has been unjustly shunned by the ignorant masses. Usually, this sentiment marks the tail end of a movement, or a product in decline. But sometimes it's just a sign of a slow start. I believe this is the case with e-books. The pace of the e-book market over the past decade has been excruciatingly—and yes, you guessed it, unjustly—slow. My frustration is much like that of the Mac users of old. Here's an awesome, obvious, inevitable idea, seemingly thwarted at every turn by widespread consumer misunderstanding and an endemic lack of will among the big players. I don't pretend to be able to move corporate mountains, but I do have a lot of e-book related things to get off my chest. And so, this will be part editorial, part polemic, part rant, but also, I hope, somewhat educational. As for Apple, that connection will be clear by the end, if it isn't already. Buckle up. John Siracusa / John Siracusa has a B.S. in Computer Engineering from Boston University. 
He has been a Mac user since 1984, a Unix geek since 1993, and is a professional web developer and freelance technology writer. @siracusa
计算机
2014-23/3292/en_head.json.gz/24424
Trion Worlds and XLGAMES Team up to Bring Highly Anticipated ArcheAge® to Western Markets Leading online games company Trion Worlds and renowned South Korea-based game developer XLGAMES have entered into a strategic agreement for Trion to exclusively publish and operate ArcheAge® in the West. Created by Jake Song, best known for his hit game Lineage, the highly anticipated ArcheAge is poised to be the most polished massively multiplayer online role-playing game (MMORPG) coming out of Asia. Trion will host the game on its Red Door platform in North America, Europe, Turkey, Australia and New Zealand. “We’re thrilled to be working with Trion Worlds, a company who is setting a new standard for gaming by embracing original, high-quality IPs on a dynamic connected platform,” said Jake Song, CEO and founder of XLGAMES. “We are impressed with Trion’s track record as they have consistently delivered against an unwavering vision; we are confident partnering with Trion will help bring ArcheAge a successful game that satisfies audience both in East and West.” “We’re very impressed with ArcheAge, and the level of anticipation for the game is absolutely astonishing,” said Dr. Lars Buttler, CEO and founder of Trion Worlds. “We are very proud of the catalog of world-class games we will be offering through Red Door. Now, with ArcheAge, we are bringing the best in Asia onto our platform.” Having gone through more than six years of development and nearly two years of closed testing, ArcheAge introduces players to a fantasy sandbox world where they begin their journey on one of two continents: Nuia and Harihara. From there, everything else is up to the player, from what character they play, to where they go, and why. The game promises to remove the restrictions that have hindered other MMORPGs, especially concerning character classes and skills. ArcheAge started its commercial service on January 16 in South Korea. ArcheAge is the latest premium online game to join Trion’s growing roster of titles on the Red Door platform. Red Door is a full-scale publishing and development solution enabling unprecedented flexibility and control for game monetization teams. The platform offers a sophisticated server architecture as well as proprietary toolsets positioned to radically speed up the time-to-market for developers who want to create the next generation of AAA games. In 2012, Trion also announced a joint venture with leading European games company Crytek to co-publish and co-operate the award-winning online first-person shooter Warface® in the West through GFACE® powered by Trion’s online platform. For more information on ArcheAge, visit www.ArcheAgegame.com About Trion Worlds Trion Worlds is the leading publisher and developer of premium games for the connected era. Powered by a breakthrough development and publishing platform, Trion is revolutionizing the way games are developed, played and sold. Trion’s world-class team delivers high-quality, dynamic, and massively social games operated as live services across the biggest game genres and devices, including the critically acclaimed blockbuster, Rift® and the highly-anticipated End of Nations® and Defiance™. Trion is headquartered in Redwood City, Calif., with offices in San Diego, Calif., Austin, Texas, and at Trion Worlds Europe in London, UK. 
For more information, please visit www.trionworlds.com About XLGAMES XLGAMES was founded in 2003 by the iconic Korean developer Jake Song (Korean name – Song Jae Kyung), the developer of 'Lineage' and 'The Kingdom of the Winds'. XLGAMES developed the Dynamic MMORPG ArcheAge, which aims to be a game where players can pioneer and evolve their own world of their own free will. ArcheAge is signed for distribution in China with Tencent, in Japan with GameOn, and in Taiwan, Hong Kong, and Macao with FunTown. ArcheAge first introduced its signature features at G-Star, the largest game show in Korea, in 2010. For more information, please visit www.xlgames.com/en
Debian and the Creative Commons Short URL: http://fsmsh.com/1819 Wed, 2006-10-18 17:34 -- Terry Hancock Recently, I've become involved in the ongoing discussion between the Creative Commons and Debian over the "freeness" of the Creative Commons Public License (CCPL), version 3. Specifically, the hope is that Debian will declare the CC-By and CC-By-SA licenses "free", as most people intuitively feel they are. There are a number of minor issues that I think both sides have now agreed to, leaving only the question of "Technological Protection Measures" (TPM, also known as "Digital Rights Management" or "Digital Restrictions Management" or "DRM"). I myself have flip-flopped a couple of times on this issue. The inventors of TPM ("technological protection measures") also known as DRM ("digital rights management" or "digital restrictions management" as Richard Stallman has dubbed it) must be laughing at their cleverness now. The issue has stressed some of the key fracture lines in the free software and free culture communities. The problem essentially is this: none of the Creative Commons licenses have a "source" requirement (unlike the GPL, for example), because, being intended for creative content, it was generally felt that no definition of "source" was really workable, and what's worse, the intuitive rules for different media would likely be very different. Because of this, there is no "parallel distribution" requirement for "source" and "binary" versions of works in CC licenses. Instead, the licenses insist on a much milder requirement: the work must be distributed in a form that at least does not actively interfere with the end user's freedom to use, modify and use, or distribute the modified version. There has been a long-standing misconception that this provision would keep a user from applying TPM to a CC work in order to play it on a platform which requires TPM in order to play (a "TPM-Only Platform"). According to the CC representatives I've listened to, including General Counsel, Mia Garlick, this was not true in the previous CC licenses (that is: "yes you can TPM your own works on your own devices"). At the very least, the "fair use/fair dealing" provision is believed to provide this right in most jurisdictions, and the exact wording of the license is supposed to make it available generally. Nevertheless, there was agreement that the wording was too vague, and the CCPLv3 license has been revised to clear up the question (which I can vouch for myself, having read it -- though, of course, unlike Garlick, I am not a lawyer!). What Debian wants, however, is the ability to distribute packages in TPM form. They propose to obviate the concerns of TPM lock-in by requiring a parallel distribution of a non-TPM form of the work. This would be analogous to the way the GPL deals with source/binary distribution (in certain ways, binary distribution can be regarded as a kind of TPM distribution, since it is hard to reverse the binary to get usable source code for modifying the program). At first glance, this sounds like The Solution, and I argued pretty strongly for it on the cc-licenses list. However, another participant, Greg London, demonstrated an exploit that uses TPM distribution (even with parallel distribution) to break the copyleft: London's 'DRM Dave' Scenario A brief summary of Greg's problem case: Sam releases a work A under By-SA w/ parallel-distribution allowance. Dave, owner of a TPM-only platform, wraps work A in his TPM wrapper to create d[A] (the d[] represents the TPM wrapper). 
Under the "parallel distribution" rule, Dave can now sell d[A] through his channel, so long as he also provides A. Dave may choose to alter the work to create A†, and wrap that to create d[A†], which he may also sell (Dave has the right to modify and distribute). He also has to provide A†, of course. Bob, however, lacks this freedom. He downloads work d[A] and likes it. He decides that he wants to create a modified version, A‡, which he will (due to the platform's TPM-only nature) need to wrap to create d[A‡] so that he can play on the platform Dave owns (and which is presumably Bob's platform as a user). Unfortunately, he can't do this. Bob can download the non-TPM work, A. He can modify it to create A‡. But he has no means to create d[A‡]. Nor does anything we've done require Dave to give him those means! Bob's freedom to modify and distribute has been eliminated by Dave, in a clear breakage of copyleft. So Dave has secured a platform monopoly on Sam's work. He is able to charge for it under monopoly terms exactly as if he were the copyright owner. He is not required to distribute the work in a form that allows others to modify it and play it on his platform. He has managed to effectively revoke the users' "freedom 1" (FSF term), making the work non-free. Now, if you pay close attention, you can also see that this is basically identical to the problem of "tivo-ization": we need a special key, which has been withheld, in order to exercise our freedom to modify and distribute. The second draft of the GPLv3 attempts to rectify this problem by defining that key as part of the "corresponding source" for the work. This is equivalent to demanding that Dave release his encryption key (or provide an alternate key) to be used to encode works to play on his platform. But note that this is (from Dave's point of view) no more difficult than making his platform run non-TPM'd works. In fact, one way to implement that is to make a TPM-key-wrapper a part of his platform. So, if Dave wanted to create a TPM-only platform in the first place, he's not going to release the encryption key. Requiring him to do so is no less onerous than just asking him to let non-TPM works play on his platform. Furthermore, if such a key is published, Bob may use the published key to TPM his own private copy of A‡ (to create d[A‡]), and so may all users who receive A‡. IOW, having the key allows the platform to be freed to allow content to play on it, thus nullifying the objection that a non-TPM distribution requirement would restrict the user's use of the work. Well, I know that idea threw me for a loop. I had been snowed by the analogy between binary/source distribution and TPM/non-TPM distribution, but the example (and some other argument on both the cc-licenses and debian-legal mailing lists) has made it clear to me that this analogy is broken in at least two important ways. First, TPM is bound by law, not just code: it would be bad enough to block user freedoms by merely making it difficult to access the editable version of a work—that's the reason for the source distribution requirement in the GPL, but even if the user has a good tool for reversing or applying the TPM, it's illegal for him to use it (or possibly even to possess it!) under new laws like the DMCA, which essentially provide the legal definition of TPM (or DRM). Because of this, it isn't just "difficult" for Bob to apply TPM to his modified work to be able to play it, it's probably illegal, even if he figures out a way.
This kind of problem, IMHO, invokes the "liberty or death" principle, as it is described in the GPL: if you can't find a way to distribute within the license requirements, you can't distribute. TPM is intrinsically simpler than compilation (or rather linking) The desire for a right to distribute in TPM form is a natural-enough idea for people who are used to binary distribution. Compiling a binary from source is a difficult and error-prone process which requires a fair amount of expert skill. If you've ever tried it, I'm sure you realize this is the case. Now why should that be? The really difficult step turns out not to be the process of "compilation", but rather that of "linking". Typical programs reuse complex webs of libraries, static or shared, to do most of their "heavy lifting". And of course, each library is typically on a different development schedule, so there is a complex version-matching problem to make sure that the interfaces the program expects are actually supported by the versions of the libraries that you have. Even if this were not the case, the program may have complex interactions with the hardware (how many times have you had problems with a video or audio driver, for example?). But none of this makes sense for TPM. TPM is basically just a form of encryption. You need the key, and you need the algorithm for applying that encryption with the key (usually a program). The TPM itself is a simple data-to-data mapping. There's no outside dependencies or hardware variability to worry about (indeed, one of the few redeeming qualities of TPM-only platforms is that they are usually very consistent in design -- the person designing the TPM should be aware of all variants his system may need to run on). So while it is clear that general compilation and linking is doomed to be difficult, no matter how well-intentioned the developers are, the process of applying TPM is only going to be difficult if the TPM developer has intentionally made it so (perhaps to preserve a monopoly on being able to create works for the platform). Furthermore, such obstruction can be regarded as restricting the end-user from "freedom 1" to exactly the same degree that they restrict "freedom 0" if he has to apply the TPM in the first place. But that's the beauty of the CC solution. By not allowing TPM distribution, the CCPLv3 allows the works to be distributed and (easily) used by users, so long as the TPM can be (easily) applied by them. Providing an easy TPM wrapping application renders the work "free" in this sense. But if, on the contrary, applying the TPM for the user to have "freedom 0" (i.e. to play the work) is hard or impossible, then that's actually okay, because it is to the exact same degree "non-free" anyway, due to the restriction on "freedom 1". This is an elegant and much simpler solution than the intricate "Corresponding Source" requirements found in the new GPLv3 draft (which is not actually a criticism of GPLv3—for programs, the source/binary dichotomy already exists, so the possibility of TPM-imposed restrictions is already opened up). So at this point, I'm of the opinion that the CCPLv3 should be accepted as it is by Creative Commons, and I very much hope Debian will see sense and recognize it as a free license. However, even if it doesn't, CC is better off staying with a freer license than capitulating to an anti-copyleft position simply because of political pressure. 
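To make the "simple data-to-data mapping" point above concrete, here is a deliberately toy sketch. It is not any real DRM scheme: the XOR keystream stands in for a proper cipher, and the key name is invented. The only thing it is meant to show is that, given the key and the wrapping program, converting a work into its TPM'd form (and back) is a trivial, deterministic transformation, with none of the library, version, or hardware dependencies that make compilation and linking hard.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a repeatable pseudo-random byte stream from the key."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def wrap(work: bytes, key: bytes) -> bytes:
    """'Apply TPM': a pure data-to-data mapping, trivially reversible with the key."""
    return bytes(b ^ k for b, k in zip(work, keystream(key, len(work))))

# Unwrapping is literally the same operation, because XOR is its own inverse.
unwrap = wrap

if __name__ == "__main__":
    platform_key = b"dave-platform-key"     # a withheld key is the withheld freedom
    original = b"Sam's freely licensed work"
    wrapped = wrap(original, platform_key)  # d[A]
    assert unwrap(wrapped, platform_key) == original
```

The hard part is possession of the key, not the transformation itself, which is exactly why withholding the key is what removes Bob's freedoms in the scenario above.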
Referenced documents:
- Creative Commons Public License, version 3 draft - generic (PDF)
- CCPLv3 - US jurisdiction version (PDF)
- Replies from Mia Garlick to list questions about the draft (PDF)
- Debian Free Software Guidelines

This article may be copied under the terms of the Creative Commons Attribution-ShareAlike 2.5+ license, provided that proper attribution and a link to the original is provided (e.g. "By Terry Hancock, Originally at Free Software Magazine, CC-By-SA-2.5+").

Reader comments:

Nice analysis (Anonymous visitor, Fri, 2006-10-20): Thanks! Need more clear thinking like this attached to voices that can be heard.

"By-SA w/ parallel distribution allowance" (Terry Hancock, Sat, 2006-10-21): I just read a reference to this where it was apparently misunderstood that CCPLv3 "SA" with a "parallel distribution allowance" is a hypothetical license. That hypothetical license does not exist, because CCPLv3 does not (at least yet) contain a "parallel distribution allowance". Greg's "DRM Dave" scenario shows why such an allowance is a bad idea, because it demonstrates how it can be abused to break copyleft. So, this scenario cannot legally happen with the CCPLv3-SA as it is currently written (August 9th, 2006 draft).

Another opinion in support (Grigor Gatchev, Wed, 2006-12-06): I had the boldness to write the following opinion: http://www.gatchev.info/articles/en/TPM.html In short: it appears to me that DRM / TPM is fundamentally incompatible with most free licenses, and that giving a free license to something designated for a TPM platform and/or format means automatically violating the free license. It also appears to me that the idea of a free license for TPMed content (or a program) is a logical perpetuum mobile. The question is not if this or that specific attempt will be able to build one, but why it will fail. The "exploit" of Greg London shows this for one specific kind of this perpetuum mobile. Other kinds may have different "exploits", and some may prove very hard to refute. However, all of them will eventually be refuted, due to a simple principle: you cannot give freedom over a non-free platform or format if you are not the entity that controls this platform / format.
Research shows that computers can match humans in art analysis Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles. While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historians do. In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately 1,000 paintings of 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians. For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster. The experiment was performed by extracting 4,027 numerical image context descriptors – numbers that reflect the content of the image such as texture, color, and shapes in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities. According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism. “This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said. Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998. Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists. She also has used the computer program as a consultant to help a client identify bacteria in clinical samples. “The program has other applications, but you have to know what you are looking for,” she said. Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. “This is just the tip of the iceberg,” she said. At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said. She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology. “Everyone has the ability to apply themselves in different areas,” she said.
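The article does not include the actual code, but the pipeline it describes (numeric descriptors per painting, then statistical comparison of artists) can be sketched roughly as follows. This is a minimal illustration, not Shamir's algorithm: it assumes Pillow, NumPy and SciPy are available, uses hypothetical image file names, extracts only a handful of simple descriptors rather than the 4,027 mentioned above, and groups artists with off-the-shelf hierarchical clustering.

```python
import numpy as np
from PIL import Image
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

def descriptors(path):
    """A tiny stand-in for the thousands of texture/color/shape descriptors."""
    img = np.asarray(Image.open(path).convert("L").resize((256, 256)), dtype=float)
    gx, gy = np.gradient(img)                                  # crude edge/texture cues
    hist, _ = np.histogram(img, bins=16, range=(0, 255), density=True)
    return np.concatenate([hist, [img.mean(), img.std(), np.abs(gx).mean(), np.abs(gy).mean()]])

def artist_signature(paths):
    """Average the per-painting descriptors into one vector per artist."""
    return np.mean([descriptors(p) for p in paths], axis=0)

if __name__ == "__main__":
    # Hypothetical file layout, purely for illustration.
    corpus = {
        "Raphael":   ["raphael_01.jpg", "raphael_02.jpg"],
        "Vermeer":   ["vermeer_01.jpg", "vermeer_02.jpg"],
        "Rembrandt": ["rembrandt_01.jpg", "rembrandt_02.jpg"],
    }
    names = list(corpus)
    X = np.vstack([artist_signature(corpus[a]) for a in names])
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)          # normalize each descriptor
    distances = squareform(pdist(X))                           # artist-to-artist distances
    tree = linkage(pdist(X), method="average")                 # hierarchical grouping of painters
    print(distances, tree, sep="\n")
```

With real data the interesting part is which descriptors to extract and how to weight them; the clustering step that produces the "network of similarities" is comparatively routine.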
GNOME Discusses Becoming a Linux-only Project posted by Thom Holwerda on Thu 19th May 2011 18:59 UTC, submitted by fran Something's - once again - brewing within the GNOME project. While a mere suggestion for now, and by no means any form of official policy, influential voices within the GNOME project are arguing that GNOME should become a full-fledged Linux-based operating system, and that the desktop environment should drop support for other operating systems such as Solaris and the BSDs. I have a feeling this isn't going to go down well with many of our readers. Let's make the core issue clear first. Lennart Poettering, creator of systemd, an init replacement daemon for Linux, is proposing systemd as an external dependency for GNOME-Shell in GNOME 3.2. Since systemd has Linux as a dependency and won't be ported to other operating systems (which would be a very difficult undertaking anyway due to its Linux-specific nature), it would effectively make GNOME a Linux-specific desktop environment. Obviously, not everyone is happy with this idea. Debian, for instance, also has versions using the FreeBSD or HURD kernel, and on these versions, GNOME would no longer be able to run. Other than these niche versions of Debian, it would also sound the death knell for GNOME on Solaris and the BSDs. Jon McCann, Red Hat hacker and the main driving force behind GNOME-Shell, takes the idea even further. "The future of GNOME is as a Linux based OS. It is harmful to pretend that you are writing the OS core to work on any number of different kernels, user space subsystem combinations, and core libraries," he states, "That said, there may be value in defining an application development platform or SDK that exposes higher level, more consistent, and coherent API. But that is a separate issue from how we write core GNOME components like the System Settings." "It is free software and people are free to port GNOME to any other architecture or try to exchange kernels or whatever. But that is silly for us to worry about," he continues, "Kernels just aren't that interesting. Linux isn't an OS. Now it is our job to try to build one - finally. Let's do it. I think the time has come for GNOME to embrace Linux a bit more boldly." Of course, and I can't stress this enough, it's a mere suggestion at this point, and by no means any official policy or whatever. The fact of the matter is this, however: if systemd becomes an external dependency for GNOME-Shell, this is effectively what is happening anyway. GNOME-Shell requires systemd which requires Linux. For all intents and purposes, it would turn GNOME into a Linux-only project. Another important figure in the GNOME project, Dave Neary, isn't particularly enthusiastic about this idea. "This would be a major departure for the project, a big kick in the face for long-term partners like Oracle/Sun, and also for other free operating systems like BSD," he argues, "Are you sure you're not taking the GNOME OS idea a bit far here? Are we going to start depending on kernels carrying specific patch sets next? Or specifying which package management system we expect GNOME distributors to choose?" The argument in favour of just focussing on Linux exclusively goes like this: why should GNOME be held back by advances in technology simply because Solaris and the BSDs can't keep up with the fast pace of development in the Linux kernel?
I'm sure maintaining this kind of portability is a major pain in the bum for the GNOME project, as it can, at times, hold them back in embracing the latest and greatest the Linux community has to offer. All in all, I'm not really sure where I stand on this. I'm sure our readers will post some interesting insights which could help us undecided folk sway one way or the other.
It's happening again... Long rumored and much anticipated, The Elder Scrolls Online is finally being unveiled in the June issue of Game Informer. In this month's cover story we journey across the entire land of Tamriel, from Elsweyr to Skyrim and everywhere in between. Developed by the team at Zenimax Online Studios, The Elder Scrolls Online merges the unmatched exploration of rich worlds that the franchise is known for with the scale and social aspects of a massively multiplayer online role-playing game. Players will discover an entirely new chapter of Elder Scrolls history in this ambitious world, set a millennium before the events of Skyrim as the daedric prince Molag Bal tries to pull all of Tamriel into his demonic realm. "It will be extremely rewarding finally to unveil what we have been developing the last several years," said game director and MMO veteran Matt Firor, whose previous work includes Mythic's well-received Dark Age of Camelot. "The entire team is committed to creating the best MMO ever made – and one that is worthy of The Elder Scrolls franchise." An in-depth look at everything from solo questing to public dungeons awaits in our enormous June cover story – as well as a peek at the player-driven PvP conflict that pits the three player factions against each other in open-world warfare over the province of Cyrodiil and the Emperor's throne itself. Come back tomorrow morning for a brief teaser trailer from Zenimax Online and Bethesda Softworks, and later on in the afternoon for the first screenshot of the game. Over the course of the month, be sure to visit our Elder Scrolls Online hub, which will feature new exclusive content multiple times each week. You'll meet the three player factions, see video interviews with the creative leads, and much more. The Elder Scrolls Online is scheduled to come out in 2013 for both PC and Macintosh.
Karanbir Singh has announced the release of CentOS 5.1, a Linux distribution built by recompiling the source RPM packages for the recently released Red Hat Enterprise Linux 5.1: "We are pleased to announce the immediate availability of CentOS-5.1 for the i386 and x86_64 architectures. CentOS-5.1 is based on the upstream release 5.1, and includes packages from all variants including Server and Client. All upstream repositories have been combined into one, to make it easier for end users to work with. And the option to further enable external repositories at install time is now available in the installer. This is the first release where we are also publishing a special 'netinstall' ISO image that can be used to start a remote install." CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. CentOS conforms fully with the upstream vendor's redistribution policy and aims to be 100% binary compatible. (CentOS mainly changes packages to remove upstream vendor branding and artwork.) The release ships on 7 CDs for installation on the x86_64 platform.
HTML5 video in Internet Explorer 9: H.264 and H.264 alone It's already been announced that Microsoft will be supporting HTML5 video in … Microsoft has put its stake in the ground and committed to supporting H.264 in Internet Explorer 9. That the next browser version would support H.264 HTML5 video was no surprise (though the current Platform Preview doesn't include it, it was shown off at MIX10), but this is the first time that Microsoft has provided a rationale for its decision. More significantly, this is the first time the company has confirmed that H.264 will be the only video codec supported. H.264 certainly has some advantages. It's standardized, resulting in wide support in both software and hardware. This also provies a migration path of sorts from Adobe Flash; the same H.264 video file can be played both in Flash and via the native browser support, which allows site owners to target both HTML5 and Flash users with a single codec. But the biggest advantage cited by Microsoft was intellectual property: the IP behind H.264 can be licensed through a program managed by MPEG LA. Other codecs—the blog post named no names, but Theora is obviously the most widespread alternative for HTML5 video—may have source availability, but they can't offer the same clear IP rights situation. The codec choice is a long-standing issue with HTML5. The current draft specification refrains from mandating any specific codec, and the result is a schism. Firefox and Opera have gotten behind Theora, citing its openness; Safari and now Internet Explorer 9 have gone for H.264. Chrome supports both. It's not impossible that the spec will ultimately be changed to specify a specific codec—and that codec might not be H.264. If that turns out to be the case, Redmond will likely have to revisit the decision. H.264 has its problems, though. It's currently royalty-free for Web usage, but there are no guarantees that this will continue in the future. In contrast, Theora is perpetually royalty-free. The status of Theora might change if patent concerns emerge—though designed to avoid patented technology, there are no assurances in the murky world of software patents—and so there is some risk to browser vendors. But the unfortunate reality is that's true of H.264 too. MPEG LA may believe it holds all necessary IP rights to H.264, but one never knows when a patent troll might emerge unexpectedly from the deep. H.264 might be the safer option, but the difference is not as clear as Microsoft makes out. The decision may also be premature if Google opens up VP8, as is widely expected. If Google does indeed publish the source, and offers perpetual royalty-free usage of the VP8 patents, the codec may offer the same level of freedom and openness as Theora, combined with the corporate backing of Google, and assurances that both Google (and On2 before it) have done their due diligence. Google would also have the power to promote VP8 in other ways; as owner of YouTube, the company could, in principle, make VP8 one of the most widely used codecs on the Internet. Adoption by the company's own Chrome browser is all but certain, and if the terms are suitable, VP8 could also find its way into Firefox and Opera. Such a move would give VP8 considerable momentum. Microsoft's decision to support only one codec is also a little surprising when one considers the way in which video support will be implemented. 
Internet Explorer 9 will use the Media Foundation media decoding framework that's part of Windows Vista and Windows 7—another reason that the browser won't run on Windows XP. Just as with its predecessor, DirectShow, Media Foundation is an extensible framework that allows third-party codecs to plug in to the media infrastructure so that applications can use them automatically. If Redmond allowed IE9 to use Media Foundation without any restrictions, it would enable the browser to use whatever codecs were installed; H.264 is already included with Windows 7, so it might be a good default, but there's nothing preventing Google from producing a VP8 plugin for Media Foundation, for example. This would provide automatic support to IE9 users for VP8 video. That said, with the browser locked down in its protected mode sandbox, and the fact that Media Foundation codecs should all be modern (and hence, security-conscious), the risk seems marginal. Moreover, there are likely other ways to feed malicious video into insecure codecs (embedding Media Player, for example), so the level of protection this offers isn't clear. HTML5 video is an important part of Internet Explorer 9, and given the big-name industry support for H.264, Microsoft's decision was not altogether surprising. As welcome as the new capabilities are, the decision to rule out the more open codecs is somewhat disappointing.
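The earlier point about one H.264 file serving both HTML5-capable and Flash-only visitors is easy to illustrate. The sketch below simply generates the markup a site might emit; the file names and player URL are placeholders, and the codecs string is the commonly used H.264 Baseline/AAC identifier rather than anything specific to IE9.

```python
def video_markup(mp4_url: str, flash_player_url: str, width: int = 640, height: int = 360) -> str:
    """One H.264 file, two delivery paths: native <video> first, Flash as the fallback."""
    return f"""
<video width="{width}" height="{height}" controls>
  <source src="{mp4_url}" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  <!-- Browsers without HTML5/H.264 support fall through to the Flash player object -->
  <object type="application/x-shockwave-flash" data="{flash_player_url}"
          width="{width}" height="{height}">
    <param name="flashvars" value="src={mp4_url}">
  </object>
</video>
""".strip()

if __name__ == "__main__":
    print(video_markup("movie.mp4", "player.swf"))
```

Whether that convenience outweighs the licensing and openness concerns discussed above is exactly the argument between the two browser camps.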
Shifting Sands: The Fall of Facebook and Twitter?
Adrian Lee | November 27, 2012

Facebook and Twitter are not the sum total of the social media universe, particularly if brands are looking to engage effectively with the digital audience.

Facebook and Twitter - the twin Goliaths of social media. In the past few years, any talk of social media marketing has been dominated by the two online brands - especially in the Southeast Asian region where these properties have dominated traffic. However, this will change in the coming years.

Why So Dominating?

Of course, any savvy digital marketer would know that Facebook and Twitter are not the sum total of the social media universe. However, due to the sheer gravity of the traffic to these two sites, most marketers have been focused on building a presence and media on these two properties, and efforts on other smaller social media sites have at best been explored by the brave few brands and marketers that are adventurous. And it's hard to argue with the numbers. According to Quantcast, Facebook has 143 million monthly visitors, Twitter has 91 million - compared with Pinterest (the next nearest competitor) at 56 million. You can see Zipf's Law at work here, where the number one far outstrips the competitors by a long shot. Google+ logs in at an estimated 61 million visitors in March 2012, according to Mashable (the caveat is that Google+ traffic is hard to estimate due to its rollup into Google.com). Subsequent reasoning becomes straightforward - simple marketing prioritization. Budgets are limited, marketers have to choose, and the clearest way forward is to go with the publisher with the widest audiences. This of course works in the old mode of marketer philosophy of mass "push" communication hearkening back to the days of TV and print advertising. In that world, the audience numbers make the biggest difference. However, I believe in the next two years, we will see a seismic shift in brands' usage of social media, driven by the factors I will be exploring here: pricing and engagement needs.

The Price Isn't Right

If there is one thing that is fueling the shift to digital marketing, it is cost effectiveness. Digital marketing has long used the mantra of cost effectiveness to get itself into the marketing budgets. This in turn has fueled ever more complex advertising technology such as real-time bidding and ad retargeting, to name but a few. And this factor I believe will also fuel the shift in social media marketing behavior. Simply put, the costs of engagement are getting ever higher on Facebook and Twitter. CPCs on Facebook are increasing simply as a factor of demand, for one thing (as more competition gets onto Facebook, this drives up prices across the board). In fact, a TBG Digital study indicated that Facebook CPCs increased by 58 percent over the second quarter of 2012. Another reason feeding the increase in costs is the maturing audience. Consumers, to generalize a little bit, don't really appreciate force-fed advertising, and you don't need any data beyond the existence of the AdBlock application to see that holds some truth. As demonstrated by the Banner Blindness phenomenon, users consciously or sub-consciously learn to ignore ad formats in any form, and this includes the sifting out of any information (tweets, status updates) that reeks of brand-related, selling information.
This in turn means that marketers will have to spend more on creating specialized campaigns and applications just to maintain the attention of the already information-overloaded consumer. Ultimately, this will cause marketers to explore new avenues of reaching out to consumers as they try to optimize cost effectiveness in terms of the number of consumers they actually reach. The result will be the shift to cheaper ad formats and more specialized social media channels (such as Google+, which seems to have a more niche audience currently, and Pinterest). Which leads us neatly to the next factor - different verticals have rather different dynamics when it comes to engagement with their audience. Where their audiences hang out is different. The way they need to showcase their products is different as well. For example, a retail brand would probably find it more effective to showcase their product visually, where a B2B brand might find greater effectiveness in trying to present complicated abstract information (e.g., financial instruments) in a compelling manner - content vs. visuals, if you will allow me to over-simplify it. This will mean that, sooner or later, a one-solution-fits-all mentality will fail. Facebook has long recognized this, judging from the plethora of new ad formats and opportunities that pop up every few months. What this means for the future is that brands will start exploring and taking advantage of newer platforms that are currently nascent, or will appear from the horizon in the future - especially when audiences start exploring new ways of sharing and connecting with their friends. In fact, it has already started to happen with Pinterest, Foursquare, Facebook check-ins, etc. - all platforms that address very specific needs of both the audience and brands on top of a social graph.

Where Will We Go?

The upshot of this is that now is the perfect time to start investing marketing budgets in alternative platforms that suit your needs, both in the search for greater cost effectiveness and engagement effectiveness. Facebook and Twitter aren't going away (although admittedly the title might have been a bit of a hyperbole) and are still vital in the digital mix. However, for effectively engaging the digital audience, they are no longer enough. The audience knows this sub-consciously, as evidenced by the increase in usage of alternatives, and next year I believe that we will see the tipping point where a majority of brand marketers will start to realize this consciously.

Adrian is the chief of digital marketing and technology in Yolk, a Grey Group company, one of Asia's leading interactive and digital media agencies with over 40 employees headquartered in Singapore. Adrian joined Yolk in 2005 and helped shape the vision towards a company where creative and technology is inextricably linked to serve the higher purpose of marketing. With this approach, Yolk managed to secure regional accounts such as Microsoft, Cibavision, and Canon. Adrian has 12 years of experience in the digital industry with parts of those years spent in Microsoft being in charge of MSN Search, Portal, and advertising platforms, overseeing the expansion of MSN portal from a single market (Singapore) to five markets across Southeast Asia, part of the team that piloted Microsoft adCentre in Singapore and won "Global Product Manager of the Year" at Microsoft in 2004.
His technological background is well complemented by his five years of experience in the advertising and publishing industry. The technology solutions Adrian creates always serve the purpose of his clients in bridging the latest technologies with marketing strategies to boost their campaigns to their fullest potential. When not knee deep in technology, he produces electronic music under various monikers.
Study: Broken E-Mails Affect Response Rates

Images that don't show up in recipients' e-mails are still a major problem for marketers, according to a study released yesterday by e-mail marketing provider Silverpop, Atlanta. Forty percent of e-mails sent by 360 of the largest U.S. firms had missing graphics, virtually unchanged since Silverpop conducted the same study in 2002. Instead of e-mail service providers having difficulty displaying HTML code, graphics are being blocked because some ESPs include software that blocks the images. Google's Gmail, Microsoft Outlook 2003 and AOL 9.0 blocked most images in 2005 because their default setting blocks images from senders not in the recipient's address book or "friendly list." But marketers could avoid many of the problems by reminding customers to add them to their address book, said Elaine O'Gorman, vice president of strategy at Silverpop. "Despite the fact that there are broken images, they are still using single-image creatives and not asking to be put in [recipients'] address books," she said. E-mail marketers also should include a link to view images on a Web version of the message and include text so users know what the images are and are more likely to click to view them.

Other findings:
* E-mail delivery rates have improved significantly. Though 25 percent of e-mails sent to Yahoo or MSN Hotmail accounts three years ago ended up in bulk folders, only 10 percent of messages received by those providers went to bulk inboxes in 2005.
* More e-mail marketers are using HTML format -- 69 percent in 2005 -- compared to 47 percent in 2002. Marketers using HTML e-mails can provide a "richer product experience" for consumers, said Silverpop CEO Bill Nussey, but mis-rendered HTML messages can perform worse than text e-mails.
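The study's recommendations (descriptive text so blocked images still convey something, plus a link to a hosted copy of the message) map directly onto how an HTML e-mail is assembled. Below is a minimal sketch using Python's standard email module; the subject line, copy, and URLs are invented placeholders.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_newsletter(web_version_url: str, hero_image_url: str) -> MIMEMultipart:
    # Plain-text fallback for clients that render no HTML at all.
    text_part = "Spring sale: 20% off this week. View online: " + web_version_url
    html_part = f"""
    <p><a href="{web_version_url}">Trouble seeing this e-mail? View it in your browser.</a></p>
    <p><img src="{hero_image_url}" width="600" height="200"
            alt="Spring sale: 20% off all items this week"></p>
    <p>Add us to your address book so images display automatically.</p>
    """
    msg = MIMEMultipart("alternative")   # text version plus HTML version
    msg["Subject"] = "Spring sale starts today"
    msg.attach(MIMEText(text_part, "plain"))
    msg.attach(MIMEText(html_part, "html"))
    return msg

if __name__ == "__main__":
    print(build_newsletter("https://example.com/view/123", "https://example.com/img/hero.png"))
```

The alt text and the "view in browser" link are the two cheap defenses against image blocking that the article describes; the address-book reminder is the third.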
Books covering the history of Apple Computer, Inc. A review of the book Digital Deli Reviewed by Andy Molloy (first published in Juiced.GS, Vol. 12 No. 4, December 2007) A Digital Deli: The comprehensive, user-lovable menu of computer lore, culture, lifestyles and fancyAuthor: The Lunch Group & Guests; edited by Steve Ditlea Length: 384 pages MSRP: $12.95 (out of print) ISBN: 0-89480-591-6 Released: September 1984 Publisher: Workman Publishing, NY Digital Deli is one of the earlier books to delve into the culture and folklore surrounding personal computers during their birth from the late 1970s to early 1980s. The book is an eclectic and wide-ranging collection of writings produced by The Lunch Group, a self-described group of computer writers living in New York City that met monthly to dine together. They, in turn, invited contributors from across the US computer world and managed to get over 150 movers and shakers to contribute original material for the book. All told there are more than 200 separate pieces. I recognized many names from the large list of contributors on the back cover: Ralph Baer (inventor of the first home video game, the Magnovox Odyssey); Ray Bradbury (sci-fi author); Dan Bricklin (inventor of the first spreadsheet on the Apple II); Nolan Bushnell (founder of Atari); Lee Felsenstein (early Homebrew Computer Club moderator, inventor of the Processor Tech Sol computer); Doug Garr (wrote the first bio of Wozniak entitled Prodigal Son); Mitch Kapor (Lotus 123); Steven Levy (wrote Hackers and Insanely Great); Paul Lutus (author of Apple Writer); Frank Rose (wrote West of Eden about Apple), to name just a few. The book is an oversized softcover, and is loaded with black and white photos and illustrations. The layout is playful and informal, with each contributed piece ranging from one column in length to a few pages. It has examples of computer-generated art, screenshots, lots of people photos, and even comic strips. It is easy to open the book to practically any page and delve in. The Lunch Group "shared the joys and tribulations of setting up a first computer system, of learning a word processing program, of sending articles over telephone lines, of applying the personal computer revolution to our lives. We took pride in experiencing the frontiers of what many felt to be the most far-reaching and influential cultural development any journalist or author could hope to cover at this time." The book does an excellent job of showing how people's lives changed when they got hold of a computer that was all their own. Here you can read about the beginning of computer home banking, online communities, the computer replacing the typewriter, the first electronic games--but this isn't dry material--it's usually presented as personal stories of how a computer changed someone's life. The 12 major sections of the book are all plays on a food menu, starting with Appetizers covering "history's great computer eccentrics", the "rise and fall of the Altair" to "birthing microchips" to name a few; and ending with Just Desserts and Tomorrow's Specials. The authors have done a great job providing coverage of computers throughout US society. There's pieces covering programming, history, timelines, hardware, phone phreaking, hackers, computer magazines, computers stores, user groups, software piracy, religion, romance and dating, family computers, networking, on-line etiquette, philosophy, movies, music, computer camps...it just goes on and on. 
And the best part is that many occasions you are reading about the first intersections of personal computers with these subjects. Just comparing how things have changed in 25 years provides much food for thought. There is a plethora of Apple II lore throughout the book. The editor Steve Ditlea notes that the book owes much of its existence to two Apple II computers that were the workhorses in creating the book (he even tells us the serial numbers of the Apples used--this book brims with fondness for the machines). Steve Wozniak contributed my favorite piece in the book, a three-page story called "Homebrew and How the Apple Came to Be", which covers the development of the Apple I and II. Paul Lutus, author of the word processor Apple Writer, writes of the trials developing the program in "Cottage Computer Programming". There are short pieces on Apple Culture and Apple Totems. Apple II game author Bill Budge appears on the "Hardware and Software Star Trading Cards." Stephen Levy explains why he picked the Apple II. A fun part of the book is a section called Personal Choices. Eleven people each write about their favorite computer--why they love this or that computer over the rest. Even by 1984, some of the computer companies had gone out of business, so some of these pieces are odes to lost machines. There's passion here for Osbornes, Ataris, Kaypros, Commodores, Apple IIs, IBM PCs, Sinclairs, Macs, TSR-80s. A few pieces are a bit over the top. James Levine writes "A Family Computer Diary" about the introduction of an Apple II into his family. It's odd seeing a family picture of the parents and two kids with the Apple II in the middle. The parents each have one hand on the computer and one on the kids--something about this is just a little too weird for me. Maybe it's one of the kids comment at the end of the diary "I wouldn't mind if you gave it away..." And then there's Michael Graziano's piece "Ralph is Going Potty" about using his Apple II to computerize his home. This includes announcing throughout his house when his cat Ralph enters the little box. But, when you get to choose from a couple hundred pieces, you are bound to find some of interest. It's a giant smorgasbord and you can sample until you find something that tastes good. The best thing about this book is that editor Steve Ditlea has given permission to put the entire work online, complete with the graphics, photos and comics. As of this writing, used copies are available through Abebooks for less than $5.00. Main Page | Books - 1980s | Books - 1990s | Books - 2000s | Biographies | History Links | What's New Site (c) 1998-2011 by Andy Molloy
Digital Hair Manipulation Gets Dynamic Had your hair cut lately? Most of us probably can answer that one affirmatively. Use a brush or comb? Well, yeah, of course. Does your hair blow in the wind? Only when it’s windy.Such simplistic questions might have you scratching your head. In real life, the appearance of human hair is edited regularly, either by the elements or by ourselves. It’s something so natural, so normal, that we don’t even think about it.Lvdi Wang does, though. That’s because Wang, an associate researcher in the Internet Graphics Group at Microsoft Research Asia, has been working for the past year and a half on improving the appearance of hair in digital images, an enormously challenging task in technical terms. Providing Short Videos with Dynamic Looping—Automatically With today’s mobile devices, users can find shooting high-definition video as easy as snapping a photograph. That should mean, before long, that preserving and sharing bursts of video might become as commonplace as the current practice of exchanging still images.That’s the backdrop for a research project from the University of Illinois at Urbana-Champaign and Microsoft Research Redmond that captures a spectrum of looping videos with varying levels of dynamism, ranging from a static image to a highly animated loop.The research is detailed in a technical paper, written by Zicheng Liao of the University of Illinois at Urbana-Champaign and Neel Joshi and Hugues Hoppe of Microsoft Research Redmond, titled Automated Video Looping with Progressive Dynamism. The paper is among the 19 authored all or in part by Microsoft Research that have been accepted for presentation during the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH 2013), being held July 21 to 25 in Anaheim, Calif. Indian Researcher Helps Prove Math Conjecture from the 1950s On June 18, Adam Marcus and Daniel A. Spielman of Yale University, along with Nikhil Srivastava of Microsoft Research India, announced a proof of the Kadison-Singer conjecture, a question about the mathematical foundations of quantum mechanics. Ten days later, they posted, on Cornell University’s arXiv open-access e-prints site, a manuscript titled Interlacing Families II: Mixed Characteristic Polynomials and The Kadison-Singer Problem.Thousands of academic papers are published every year, and this one’s title wouldn’t necessarily earn it much attention beyond a niche audience … except for the fact that the text divulged a proof of a mathematical conjecture more than half a century old—and the ramifications could be broad and significant.The Kadison-Singer conjecture was first offered in 1959 by mathematicians Richard Kadison and Isadore Singer. In a summary of the achievement, the website Soul Physics says, “… this conjecture is equivalent to a remarkable number of open problems in other fields … [and] has important consequences for the foundations of physics!” Digital Assistance for Sign-Language Users Sign language is the primary language for many deaf and hard-of-hearing people. But it currently is not possible for these people to interact with computers using their native language.Because of this, researchers in recent years have spent lots of time studying the challenges of sign-language recognition, because not everyone understands sign language, and human sign-language translators are not always available. The researchers have examined the potential of input sensors such as data gloves or special cameras. 
The former, though, while providing good recognition performance, are inconvenient to wear and have proven too expensive for mass use. And web cameras or stereo cameras, while accurate and fast at hand tracking, struggle to cope with issues such as tricky real-world backgrounds or illumination when not under controlled conditions.Then along came a device called the Kinect. Researchers from Microsoft Research Asia have collaborated with colleagues from the Institute of Computing Technology at the Chinese Academy of Sciences (CAS) to explore how Kinect’s body-tracking abilities can be applied to the problem of sign-language recognition. Results have been encouraging in enabling people whose primary language is sign language to interact more naturally with their computers, in much the same way that speech recognition does. Gates: The Golden Age for Computer Science Is Now Imagine that you got a once-in-a-lifetime opportunity to ask a question to Bill Gates. What would that be? Would you ask about health care? Software? Education? Philanthropy?For several people, that’s exactly what happened July 15 in the opening keynotes of Microsoft Research’s 14th annual Faculty Summit, and all those topics were addressed—and more.More than 400 academic researchers from around the world got a rare opportunity to hear Gates, chairman of Microsoft, discuss a wide variety of topics, including all of the above, during a freewheeling session lasting more than an hour, most of which was spent answering questions from the assembled academics. He kept the audience enthralled, fielding the inquiries with thoughtful responses both illuminating and intriguing.
Developers Get Even More Productive Back in February, a post on this blog introduced Bing Code Search, a project to deliver new tools to save developers time and to make software development easier.Youssef Hamadi of Microsoft Research served as spokesman for his end of a collaboration that included, among others, the Bing and Visual Studio product teams. At the end of that post. Hamadi delivered a cryptic response to a question about what would be next for this work.“We are working on very surprising things in this area,” he said. “I cannot comment about them.”That was then. Now, after a couple of recent developments, we know at least some of those “very surprising things,” and as it turns out, Hamadi’s guarded comment has proved accurate. WindUp: Researching Patterns of Content Creation and Exchange Posted by Richard Harper Given erroneous press reports about our research, as the lead for a Microsoft Research project called WindUp, I want to clarify our project’s objectives. We released WindUp into the Windows Phone Storelast week as part of our ongoing research. Our goal is to learn how people create, share and converse about content online. WindUp is a mobile application for research purposes only that enables users to share images, videos, text, and audio snippets for a finite period of time, or for a designated number of views, before the content in question is deleted permanently. The application is designed to enable me and my team to explore patterns of content creation and exchange. It isn’t meant to compete with anyone else’s service, and it isn’t meant for commercial purposes. Microsoft Research at SIGGRAPH 2014 Microsoft researchers will present a broad spectrum of new research at SIGGRAPH 2014, the 41st International Conference and Exhibition on Computer Graphics and Interactive Techniques, which starts today in Vancouver, British Columbia. Sponsored by the Association for Computing Machinery, SIGGRAPH is at the cutting edge of research in computer graphics and related areas, such as computer vision and interactive systems. SIGGRAPH has evolved to become an international community of respected technical and creative individuals, attracting researchers, artists, developers, filmmakers, scientists, and business professionals from all over the world. The research presented from Microsoft was developed across our global labs – from converting any camera into a depth-camera, to optimizing a scheme for clothing animation, and pushing the boundaries on new animated high-fidelity facial expression and performance capture techniques. Platt Plenty Excited About AI Now, here’s an interesting one: The latest video in Channel 9’s Microsoft Research Luminaries series features John Platt (@johnplattml) and explores his work in the resurgent research area of artificial intelligence (AI), its close cousin, machine learning, and the impact of deep learning on those fields.Platt, a Microsoft distinguished scientist and deputy managing director of Microsoft Research Redmond, tells interviewer Larry Larsen that he has been with Microsoft for 17 years, but that he has spent no fewer than 32 years in the AI domain. At Larsen’s prompting, Platt then attempts to define and differentiate what is meant by the terms “AI” and “machine learning.” Moving Food-Resilience Data to the Cloud People need food, regularly and often. 
That’s such an obvious truth that’s it’s easy to lose sight of it—easy, that is, until calamity strikes and the food supply is endangered, as it could be in the wake of ongoing changes to Earth’s climate. That prospect is far from inviting.This is why Microsoft supports the White House’s inclusion of Food Resilience as one of the themes of its Climate Data Initiative (CDI), an effort to open, organize, and centralize climate-relevant data on Data.gov’s Climate website. And that’s why teams from across Microsoft—including the company’s research unit—are playing an intrinsic role in supporting the initiative.The CDI was announced on June 25, 2013, by U.S. President Barack Obama. Little more than a year later, progress is under way. Work on the Coastal Flooding Risks to Communities theme began in March, and in Washington, D.C., on July 29, the White House announced public and private partnerships and commitments in further support of CDI.
Natural language user interface

Natural Language User Interfaces (LUI or NLUI) are a type of computer-human interface where linguistic phenomena such as verbs, phrases and clauses act as UI controls for creating, selecting and modifying data in software applications. In interface design, natural language interfaces are sought after for their speed and ease of use, but most struggle with the challenge of understanding the wide variety of ambiguous input.[1] Natural language interfaces are an active area of study in the field of natural language processing and computational linguistics. An intuitive general natural language interface is one of the active goals of the Semantic Web. Text interfaces are 'natural' to varying degrees. Many formal (un-natural) programming languages incorporate idioms of natural human language. Likewise, a traditional keyword search engine could be described as a 'shallow' natural language user interface.

A natural language search engine would in theory find targeted answers to user questions (as opposed to keyword search). For example, when confronted with a question of the form 'which U.S. state has the highest income tax?', conventional search engines ignore the question and instead search on the keywords 'state', 'income' and 'tax'. Natural language search, on the other hand, attempts to use natural language processing to understand the nature of the question and then to search and return a subset of the web that contains the answer to the question. If it works, results would have a higher relevance than results from a keyword search engine.

Prototype NL interfaces had already appeared in the late sixties and early seventies:[2]
- SHRDLU, a natural language interface that manipulates blocks in a virtual "blocks world"
- Lunar, a natural language interface to a database containing chemical analyses of Apollo-11 moon rocks, by William A. Woods
- Chat-80, which transformed English questions into Prolog expressions that were evaluated against a Prolog database; the code of Chat-80 was circulated widely and formed the basis of several other experimental NL interfaces
- Janus, also one of the few systems to support temporal questions
- Intellect from Trinzic (formed by the merger of AICorp and Aion)
- BBN's Parlance, built on experience from the development of the Rus and Irus systems
- IBM Languageaccess
- Q&A from Symantec
- Datatalker from Natural Language Inc.
- Loqui from Bim
- English Wizard from Linguistic Technology Corporation
- iAskWeb from Anserity Inc., fully implemented in Prolog, which was providing interactive recommendations in NL to users in tax and investment domains in 1999-2001[3]

Challenges

Natural language interfaces have in the past led users to anthropomorphize the computer, or at least to attribute more intelligence to machines than is warranted. On the part of the user, this has led to unrealistic expectations of the capabilities of the system. Such expectations will make it difficult to learn the restrictions of the system if users attribute too much capability to it, and will ultimately lead to disappointment when the system fails to perform as expected, as was the case in the AI winter of the 1970s and 80s.
A 1995 paper titled 'Natural Language Interfaces to Databases – An Introduction' describes some challenges:[2]
Modifier attachment: the request "List all employees in the company with a driving licence" is ambiguous unless you know that companies can't have driving licences.
Conjunction and disjunction: "List all applicants who live in California and Arizona" is ambiguous unless you know that a person can't live in two places at once.
Anaphora resolution: resolving what a user means by 'he', 'she' or 'it' in a self-referential query.
Other goals to consider more generally are the speed and efficiency of the interface; for any algorithm, these two factors largely determine whether some methods are better than others and therefore have greater success in the market. Finally, regarding the methods used, the main problem to be solved is creating a general algorithm that can recognize the entire spectrum of different voices, while disregarding nationality, gender or age. The significant differences between the features extracted, even from speakers who say the same word or phrase, must be successfully overcome.

Uses and applications
The natural language interface gives rise to technology used for many different applications. Some of the main uses are
计算机
2014-35/1056/en_head.json.gz/13469
Digital Worlds – Interactive Media and Game Design
A blogged course production experiment…
Archive for the 'Business Models' Category

Making Casual Games Pay
Published October 28, 2008, Business Models

Like many other creative industries, the games industry is not just about helping people have fun. It's an industry, made up of businesses, and those businesses exist to make money. In the post Ad-Supported Gaming, I described three different models for using advertising as a way of making computer games pay. In this post, and the ones that follow it in the topic, we'll consider some of the other business models that support games and gaming, and look at how the distribution models for different sorts of games compare with the distribution of other digital media such as music, movies and even books. But for now – let's consider casual games, which in many cases need to (appear to) be "free" to the end-user, or they won't play them… And if they are being sold, then they need to be affordable (which means they need to be sold in volumes large enough to cover the cost of development and distribution, though not played in such large volumes as if they were purely ad-supported).

The following opinion piece – The Future For Casual Game Revenue Growth? – that appeared on the Gamasutra news site tries to identify the different ways in which the developer of a casual game can make a living. Try to answer the following questions based on your reading of it: What are the three main ways of covering the development costs of, and ideally securing a profit from, casual games that are identified in the article? How does the use of advertising in casual games compare with advertising on television?

The article identifies three main ways of raising revenue:
- In-game advertising, in which advertising space is sold within a game; the developer uses ad revenue to provide them with an income;
- "the direct route", whereby "a direct connection [is made] between independent developers and gamers"; here, the developer tries to sell direct to the end-user. This position is contrasted with 'selling out' to a publisher who is likely to market the game in a traditional way;
- "increase the perceived value of their games by upping the price": that is, sell the game as a "superior product", a counter-intuitive and potentially risky strategy in which differentiation of the game is achieved by pricing it above that of competitors, some of which are made to look cheap, and – one hopes – of lower perceived quality!

Several other approaches are mentioned in passing in the closing section: "promotional contests to award points to those who purchase new games, thereby increasing sales and loyalty. In-site ads, merchandising and game trailers, which are sold as advertising elsewhere". Casual games are seen to be similar to television sitcoms in that "…in exchange for the ability to play and be entertained for a short period of time, people are willing to watch ads" (these ads correspond to the interstitial or pre-roll ads that were described in Ad-Supported Gaming). However, it is also possible "to integrate dynamic in-game advertising platforms into the game. With the constant connection, the adverts can be altered based upon a player's moves, or even their geographic location, providing targeted and more effective advertising.
… It wouldn't be surprising if in-game ads soon become integral to the content of a game, offering clues, extra levels or other hidden rewards for the player who clicks through."

In-game advertising, even in casual games, offers the potential for interaction. By engaging the player emotionally in the game, they may well be forced to pay more attention to the promotional message or advertised goods (for example, if you have to go in search of the missing Nuvo Cola can…!)

Can you think of any other "routes to free" for casual games? Post your thoughts back as comments… Here are some ideas to get you started: Lions, Tigers, Free Games… Oh My!

Ad-Supported Gaming
Published October 27, 2008, Business Models, Interactive Media

One of the most influential business models – for web companies at least – over the last few years has been ad-supported publishing. So it is not surprising that adverts are also being used to generate revenue in the context of computer games. (Advertising also contributes significantly to underwriting the costs of traditional publishing. If you have ever wondered why many glossy magazines have so many high profile adverts, that's why!)

How do you think advertising could be used to provide an income stream for the publisher of a computer game? Think about whether any games you have played brought you in contact with adverts from other companies, or browse through some of the posts on Business & Games: The Blog, to give you some more ideas.

Making Games Pay the Advertising Way
Looking across the ad-supported gaming market as a whole, there appear to be three dominant ways of using adverts to support computer games: "Ads around the edges"; Advergames; In-Game Advertising. Let's look at some of those models in a little more detail.

"Ads Around the Edges"
Many online casual games are hosted on websites such as Kongregate or games.co.uk that contain adverts. The games are the hooks that pull people into the websites where they are forced to view adverts. The ads may appear as banner ads along the top of the screen, or in a sidebar alongside the game. Alternatively, the advert may appear as a "pre-roll" advert that plays in the game window before you are allowed to play the game. Not surprisingly, Google (which is an advertising sales company…) has got into the game advertising business with its Adsense for Games product that operates in both these ways, providing opportunities for publishers to place "appropriate" adverts alongside Flash games on gaming websites, as well as 'embedding' pre-roll and interstitial ("ad-break") adverts "within" the game.

Advergames are games that are heavily branded and as such essentially "are" the advert. Advergames typically present a game world that reflects the advertiser's branding, or at least the message the advertiser wants to communicate, and in so doing potentially engages the interest of the player for many valuable minutes in what advergame developer Skyworks calls "branded interactive entertainment". Advergames are typically casual games, although two extremes are possible: for example, a pre-existing game may be bought "off-the-shelf" and rebranded with a particular company logo (a digital equivalent of company branded giveaway pens!); or they may be custom designed for a particular campaign.
The custom design route is particularly evident in large corporate advertising campaigns, where the advergame is just part of a wider campaign, and is likely to have production values as high as the other parts of the campaign (photo ads, TV adverts, and so on). As you might expect, such advergames can be very expensive to develop. A good example of a game developed as part of a wider campaign is the Honda Problem Playground website. The rationale behind the website – and its role in the campaign – is described here: Honda Joy of Problems and how it got there.

Visit the Problem Playground website and play some of the games there. How would you know that this game is an advergame if you came across it whilst looking for a new online game to play? What message is the Problem Playground trying to communicate? Post your thoughts as a comment back here. Now read through the "How it got there" article – does the rationale for the game described there fit with your interpretation of the game?

Have a look round for some other high profile advergames and see if you can identify what sort of message they are trying to communicate. Here are a couple of examples to get you started: Stella Artois advergame and Guinness "Legend of the Golden Domino" advergame.

In-Game Advertising/Product Placement
In-game advertising places adverts within the game itself, either as an advert inside the game, or via product placement (giving a particular make or model of car a prominent place in a racing game, for example). Watch the following promotional video from IGA Worldwide, a video game advertising agency. As you are doing so, note down the different ways that adverts are placed into the games. Does the setting of particular genres of game make in-game advertising more or less appropriate? What would be a good example of "in-context" advertising within a game? And what might an inappropriate advert be? For more examples of contemporary in-game advertising, see the Case Study showreels from the IGA Worldwide advertising network.

Revenue streams for in-game advertising are determined in different ways for the different modes of in-game placement. For example, adverts shown on in-game billboards might be paid for using a "traditional" internet advertising model – "CPM" (cost per thousand impressions). For every 1000 views of the advert, the advertiser will be charged a certain amount.

For each of the three modes of ad-support described above, write down the pros and cons of each approach, either in a blog post that links back here, or as a comment to this post. Some of the things you might consider are: time/cost to produce the ad; time spent by the viewer watching the ad; likely reach of the ad (how many people are likely to engage with it, is it amenable to a "viral" (word-of-mouth) distribution model); and so on.

Further Reading: if you would like to learn more about ad-supported gaming, this History of In-Game Advertising is well worth a read (it includes video walkthroughs of several early advergames), as well as the more comprehensive Advertising in Computer Games MSc thesis (MIT), both of which are by Ilya Vedrashko.
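Since the post leaves the CPM arithmetic implicit, here is a small worked sketch of how billboard-style in-game ad revenue is computed under a cost-per-thousand-impressions model; the rate and the impression count are made-up figures, not taken from the article.

```python
def cpm_revenue(impressions, cpm_rate):
    """Revenue under a CPM model: the advertiser pays `cpm_rate`
    for every 1,000 ad impressions served in the game."""
    return (impressions / 1000) * cpm_rate

# Hypothetical month: 250,000 views of an in-game billboard at a $2.50 CPM.
print(cpm_revenue(250_000, 2.50))  # 625.0 -> $625 for the month
```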
计算机
2014-35/1056/en_head.json.gz/14712
Laplink Solves Hidden Dangers of Moving off Windows XP Laplink helps Windows XP users move to a new PC easily while avoiding the dangers of identity theft. There is a hidden danger behind moving off of Windows XP that many PC users aren’t considering. What about the data that is on the old PC’s hard drive? Bellevue, Wash. (PRWEB) February 27, 2014 Laplink® Software, Inc. today revealed the solution to the hidden danger behind moving off Windows® XP in preparation for end of XP support. End of support for Windows XP begins on April 8, 2014. Windows XP users who choose to stay on the operating system after April 8 will face an end to hot-fixes, technical assistance, and security updates. Remaining on Windows XP is risky; many reports point out that the lack of security updates will make it a big target for hackers, viruses, and other malware. When Microsoft® releases security updates for Windows 7 and 8, attackers will check and test Windows XP for these same vulnerabilities. For example, if after April 8 a vulnerability is patched in Windows 7 and that same vulnerability exists in Windows XP, it will not be patched in Windows XP, even though it is known. The result will be a rapidly increasing number of widely known vulnerabilities in Windows XP, creating easy targets for malicious attacks. “Studies suggest that Windows XP users will be six times more likely to suffer from a virus or other malware,” stated Thomas Koll, CEO of Laplink Software. “We highly recommend that XP users upgrade their current operating system or transfer to a new PC. Our flagship product, PCmover, helps users move into a new PC seamlessly, so there’s no reason to face the risks that Windows XP will carry.” PCmover® is the only utility that transfers all selected files, personalized settings, and even applications from an old PC to a new one. Its streamlined process handles everything automatically using a step-by-step wizard to guide the user through individual selections. Users can choose to transfer everything, or they can select which applications, files, and settings to leave behind. PCmover handles the rest. There’s no need to find old serial numbers, license codes, or installation disks because PCmover transfers most applications to the new PC installed and ready-to-use. PCmover is also the only migration tool that comes with 24/7 Free Transfer Assistance, making it the perfect solution for PC users of any skill level. Users can simply call the toll-free number and receive expert guidance from a certified migration specialist. The migration specialist will walk any user through the complete transfer process. “There is a hidden danger behind moving off of Windows XP that many PC users aren’t considering,” continued Koll. “What about the data that is on the old PC’s hard drive? Data that isn’t erased permanently can be retrieved, causing users to become victims of identity theft. And simply deleting files or reformatting the hard drive doesn’t prevent the data from being recovered. We provide the solution, Laplink SafeErase. PCmover plus SafeErase is the perfect combination to move off of XP and leave nothing behind.” Research shows that identity theft is the fastest growing crime in America and affects approximately 19 people per minute. The average victim of identity theft suffers an estimated loss of $500 and 30 hours to resolve the crime. Deleted data can be recovered; even when hard disks are formatted, data recovery software can be used to obtain personal confidential data. 
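The release does not say which "government recommended deletion methods" SafeErase uses, so the sketch below is emphatically not Laplink's algorithm; it only illustrates the general multi-pass overwrite idea behind secure-erase tools. Even this approach is no guarantee on SSDs or journaling file systems, where copies of the data can survive outside the file being overwritten.

```python
import os

def overwrite_file(path, passes=3):
    """Illustrative multi-pass overwrite: replace a file's bytes with random
    data several times before deleting it. NOT a substitute for a real
    secure-erase tool; SSD wear levelling and file-system journals can keep
    copies of the original data elsewhere on the disk."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite the file contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # push this pass out to the disk
    os.remove(path)
```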
Laplink SafeErase™ protects users by permanently deleting data so that no one can retrieve and use it. Laplink SafeErase is the fastest and most secure way to permanently delete sensitive information from an old PC before selling or recycling. Utilizing a series of government-recommended deletion methods, SafeErase completely wipes personal data from the hard drive, making it unrecoverable.

PCmover Ultimate – which includes PCmover Professional, SafeErase, and a high-speed transfer cable – can be purchased for under $60 at http://www.laplink.com or Amazon®, Best Buy®, Fry's™, Micro Center®, Office Depot®, Office Max®, Staples®, and other major software retailers and PC manufacturers in North America, Europe, and Japan. Laplink SafeErase is also sold separately for $29.95 at http://www.laplink.com and most software retailers.

About Laplink Software, Inc.
For over 30 years, Laplink has been the leader in providing software used for PC migration, remote access, file transfer, and synchronization. The privately held company was founded in 1983 and is headquartered in Bellevue, Washington.

Ashley Catlett, Laplink Software, Inc., +1 (425) 952-6027
计算机
2014-35/1056/en_head.json.gz/15395
ishush open notebook, excitable librarian da vinci code debate I haven't read The DaVinci Code yet... I might get around to it some day, but I'm more interested (and have been for years) in what Stephen Hoeller has to say about church origins than I am in what Dan Brown says about it. I don't mean to knock Mr. Brown -- I haven't read any of his novels. But if we're arguing about doctrine and history, let's go to the historians and theologians, no? Instead of the novelists? It's like, you know, entertainment and education aren't exactly the same thing.There's a metaphor now loose, though, and what ever mean-eyed arguments may continue erupting around the novel (and now the movie), it will be very dangerous to lots of Christian faithful -- the metaphor is something like "Christ is in us & he always was". Something like that.Will believers go to their libraries and look deeper than the pro and con shout-down-books? Will they read the Nag Hammadi scriptures, and decide for themselves? Our patrons are better off (that is: better enabled to dig deeper and empower their own arguments for or against the "code") if we dig a little deeper too... Pull out the Margaret Starbird alongside The DaVinci Deception... Put out the Bethany House stuff alongside the Disinformation guides...For my part, the fact that so many people have called for the movie to be censored pretty much guarantees that I'll go watch it. Are they trying to get me to see it? I don't cotton well to being told what information I shouldn't have access to... and if I like the movie, I might just read the book after all.--Watching: Ask A Ninja: "Net Neutrality" image problem? LISNews.org reports on a couple of famous authors (Rowling, Rushdie) getting out to support public libraries. It seems that libraries in the U.K. have an image problem?In the Guardian Unlimited article that the story's drawn from, Alain de Botton is quoted: "It's in walking into a library that most people first get the sense of how little they know... Surrounded by so many books, we are liable to feel how great our ignorance is, next to all the accumulated wisdom and insight of others."Huh. Don't quite know how to respond to that... except to say that I hope that's not the cause of Britons' avoidance of libraries. I don't agree with it -- I lived in England, and I never got the impression that folks were scared of their books, or intimidated by the insights of others. If that is the problem, they've got bigger worries than dwindling library circulation. Anybody know more about this issue? "ideological exclusion" Via The Kept-Up Academic Librarian:"Since passage of the USA Patriot Act after Sept. 11, 2001, a number of academics have been denied visas or had them revoked. Universities and scholarly associations have written letters to the government to no avail."So foreign scholars are being kept out of the U.S. seemingly because some in the government are worried about what they might say (link to Christian Science Monitor story) to us Americans. The DHS claims that these academics are kept out because the Patriot Act's section 411 gives the government the power to exclude anyone who espouses or endorses terrorism.How long before we librarians are given a list of illegal books -- books that are said to espouse or endorse the viewpoints of terrorists? And since the t-word has such a broad definition, I reckon we'll all have lots and lots of weeding to do.See also: Patriot Act in the U.K. 
meta / universal library

In a previous post, I went daydreaming about a sort of universal library. Kevin Kelly's "Scan This Book!" just over at New York Times Magazine is a good overview of the dream of a "universal library", a library collecting all information in all media (across all time). Of course such a thing is impossible, despite the apparent optimism in Kelly's article -- and I think he makes the complexities of implementation abundantly clear. See also The Gypsy Librarian for more good questions about this subject. Angel is right to point out the obvious hitch in these bold dreams: not many of us, on balance, have computers.

I don't think it's possible to detangle the politics from the question of universal access. Kelly doesn't get into it, but there is a reason (besides the obvious technical and litigious barriers) that such a universal library doesn't exist. There's a reason that such a library will probably never exist. Simply put, it's not in the interest of the powerful to allow universal access to the world's information.

Not until the nature of "power" and leadership is radically altered will true empowerment for all people be seen as a good by the world's governors. It would take a true democracy, at least (if not an even more radically distributive and open system), in which "all" the people who have an interest in access are able to legislate and execute such access for themselves -- in other words, a system where every person actually has meaningful self-governing powers.

We're lucky that we can even have such conversations about the possibilities of universal access -- most aren't so lucky. And Google as the great enabler? Google censors searches in China to keep cozy with those in power. As long as it makes better political/business sense to keep people and information apart, this "universal library" won't be possible.

Who benefits from a universal library? Who takes a hit when we get the information to the people with impunity, or when people get the information to themselves? File "universal libraries" under post-Singularity daydreaming.

But that doesn't mean we have to give up.

--Watching: Harold Bloom on BookTV
Feeling: churlish as hell
Surfing: swish-e.org

homework, librarians

Basic information literacy skills: for any book, website, lecture, film, e-mail, or conversation, you've got to be able to determine if it's accurate, authoritative, current, objective, and what the scope of it is. At least that's what we like to teach our patrons around here: think critically about your sources.

Here's a spot of homework, then... does the following video (embedded from Google Video -- 9/11 Loose Change - 2nd Edition) meet these criteria?
计算机
2014-35/1056/en_head.json.gz/15705
http://ee380.stanford.edu

The Quiet Revolution in Interactive Rendering
Matt Pharr, Neoptica

About the talk: The difference in image quality between images rendered at 60Hz for games and images rendered offline for movies has been rapidly shrinking over the past few years, to the point where claims of the arrival of interactive film-quality graphics are now not unreasonable. While these two types of rendering address problems with a number of different characteristics (many of them unique challenges for interactive rendering), it is not unusual for offline rendering for film to require a million times more processing time to generate a single image than an image rendered for a game. How could this be possible? In this talk I will discuss how the development of programmable graphics hardware over the past few years, currently delivering hundreds of gigaflops and tens of gigabytes per second of memory bandwidth, has in turn sparked the development of completely new algorithms for real-time rendering that are fundamentally well-suited to the characteristics of this hardware. Thanks to the enormous performance benefits of developing algorithms that map well to graphics hardware, researchers and developers have invented new approaches that borrow few ideas from offline rendering while still delivering excellent image quality. I'll survey these new approaches, contrasting how they address rendering tasks with corresponding approaches in offline rendering. I'll present some ideas about how these two types of rendering can draw from each other, speculate on how future developments in hardware architectures may impact interactive rendering, and argue that offline rendering will soon be a historical artifact, made irrelevant by the rapidly increasing quality of interactive graphics.

About the speaker: Matt Pharr recently cofounded Neoptica, a company devoted to developing software for advanced graphics on next-generation architectures. Previously, he was a member of the technical staff in the Software Architecture group at NVIDIA, where he also served as the editor of the book "GPU Gems 2: Programming Techniques for High-Performance Graphics and General Purpose Computation", a collection covering the latest ideas in GPU programming written by experts in the industry. He was one of the founders of Exluna, a company that developed rendering software and tools; Exluna was acquired by NVIDIA in 2002. He previously worked in the Rendering R+D group at Pixar, working on the RenderMan rendering system. Matt and Greg Humphreys are the authors of the textbook "Physically Based Rendering: From Theory To Implementation", which has been used in graduate-level computer graphics courses at over ten universities (including Stanford). He holds a B.S. from Yale University and a Ph.D. from Stanford University, where he researched theoretical and systems issues related to rendering.
计算机
2014-35/1056/en_head.json.gz/16573
Early Bird, Bubblehead, Catches the Worm in Her Apple
The largest street art locations in Los Angeles and New York are the galleries for artist Mike McNeilly and Bubblehead. She illustrates an Internet worm she calls "WRX", as captured on her new street art in Hollywood, "Bubblehead & WRX".
Bubblehead found her apple being nibbled by this very talented & creative internet worm named WRX.

Hollywood, California (PRWEB) June 15, 2013

A new series of Mystery Girl a.k.a. "Bubblehead" street art by artist Mike McNeilly has just been created on the streets of L.A. The street art & super murals with vital messages are portraits of a beautiful girl named Robyn, nicknamed "Bubblehead." She has a long history of inspiring massive street art, graffiti and wild postings with messages calling for the public to do the right thing on important issues that impact our lives.

"PLS DNT TXT + DRIVE", another Bubblehead SuperMural, along with many celebrities such as Oprah Winfrey's "No Phone Zone" and Justin Bieber, has helped send this important message imploring all of us not to text and drive. This Mystery Girl a.k.a. Bubblehead, created by artist Mike McNeilly, has been painted on the largest Manhattan wallscape on Park Avenue, raising awareness for organizations such as AMFAR, Project Angel Food and APLA. Her messages have been displayed on the Aircraft Carrier Intrepid in conjunction with the NYPD to give runaway kids a chance to call for help. On the Sunset Strip in Hollywood, the massive street art has reached out to "Stop the Violence, Save the Children" from guns, bullies, child abuse and drugs. She has also asked the public to support local organizations such as Hollygrove Orphanage, L.A.'s first orphanage and where Norma Jean Baker a.k.a. Marilyn Monroe lived as a child, as well as Children of the Night, a private non-profit organization dedicated to assisting children between the ages of 11 and 17 living on the streets, helping them with food to eat and a place to sleep.

The art was inspired by Bubblehead and her "Rocket Science" style when she encountered an aggressive internet worm she calls "WRX" that was invading her privacy. Her analysis: "An Internet worm is a type of malicious software that self-replicates and distributes copies of itself to its network. These independent virtual worm viruses spread through the Internet, break into computers, get embedded in software and penetrate most firewalls. Internet worms can be included in any type of virus, script or program. These worms typically infect systems by exploiting bugs or vulnerabilities that can often be found in legitimate software. Internet worms can spread on their own. This makes them extremely aggressive and dangerous." Lucky for all, Bubblehead takes a byte and catches this worm.

http://instagram.com/bubblehead13#
https://www.facebook.com/pages/Bubblehead/402655229812471
https://twitter.com/Bubblehead2013

Planet Earth, 310-860-4542
计算机
2014-35/1056/en_head.json.gz/17083
Internet Protocol Security (IPsec) is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session. IPsec includes protocols for establishing mutual authentication between agents at the beginning of the session and negotiation of cryptographic keys to be used during the session. IPsec can be used to protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).[1]

Internet Protocol security (IPsec) uses cryptographic security services to protect communications over Internet Protocol (IP) networks. IPsec supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and replay protection. IPsec is an end-to-end security scheme operating in the Internet Layer of the Internet Protocol Suite, while some other Internet security systems in widespread use, such as Transport Layer Security (TLS) and Secure Shell (SSH), operate in the upper layers, at the application layer. Hence, only IPsec can protect all application traffic over an IP network. Applications can be automatically secured by IPsec at the IP layer; without IPsec, protocols such as TLS/SSL must be added to each application individually to provide protection.

In December 1993, the Software IP Encryption protocol swIPe was researched at Columbia University and AT&T Bell Labs by John Ioannidis and others. In July 1994, Wei Xu at Trusted Information Systems continued this research, enhanced the IP protocols, and completed the work successfully on the BSDI platform. Wei quickly extended his development to SunOS, HP-UX, and other UNIX systems. One of the challenges was the slow performance of DES and Triple DES: the software encryption was unable to support a T1 speed under the Intel 80386 architecture. By exploring crypto cards from Germany, Wei Xu further developed an automated device driver, known today as plug-and-play. By achieving throughput of more than a T1, this work made the commercial product practically feasible; it was released as part of the well-known Gauntlet firewall and, in December 1994, was used in production for the first time, securing remote sites between the east and west coasts of the United States.[2]

Another IP Encapsulating Security Payload (ESP)[3] was researched at the Naval Research Laboratory as part of a DARPA-sponsored research project and was openly published by the IETF SIPP[4] Working Group, drafted in December 1993 as a security extension for SIPP. This ESP was originally derived from the US Department of Defense SP3D protocol, rather than from the ISO Network-Layer Security Protocol (NLSP). The SP3D protocol specification was published by NIST, but designed by the Secure Data Network System project of the US Department of Defense. The Security Authentication Header (AH) is derived partially from previous IETF standards work for authentication of the Simple Network Management Protocol (SNMP) version 2.
In 1995, the IPsec working group in the IETF was started to create an open, freely available and vetted version of protocols that had been developed under NSA contract in the Secure Data Network System (SDNS) project. The SDNS project had defined a Security Protocol Layer 3 (SP3) that had been published by NIST and was also the basis of the ISO Network Layer Security Protocol (NLSP).[5] Key management for SP3 was provided by the Key Management Protocol (KMP), which supplied a baseline of ideas for subsequent work in the IPsec committee.

IPsec is officially standardised by the Internet Engineering Task Force (IETF) in a series of Request for Comments documents addressing various components and extensions. It specifies the spelling of the protocol name to be IPsec.[6]
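As a rough illustration of the per-packet integrity and data-origin authentication that AH and ESP provide, the sketch below uses Python's standard library to attach and verify a keyed hash with a shared secret. It is only a sketch of the idea: it is not the AH/ESP wire format, and real IPsec negotiates keys and algorithms through IKE rather than hard-coding them.

```python
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # in real IPsec this material comes from IKE key negotiation

def protect(payload):
    """Append an HMAC-SHA256 tag so the receiver can detect tampering
    and confirm the packet came from a holder of the shared key."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(packet):
    """Split off the tag, recompute it, and reject modified packets."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet failed integrity check")
    return payload

packet = protect(b"example datagram payload")
print(verify(packet))   # b'example datagram payload'
# Flipping even a single byte of `packet` makes verify() raise ValueError.
```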
计算机
2014-35/1056/en_head.json.gz/17134
Quick note on Alan Wake review response

Alan Wake is certainly not a terrible title, but it is repetitive and fairly underwhelming, doubly so for a game that's been in development for more than five years. I give credit to the developers for some beautiful environments and a combat style that's sensible and engaging, but it's weak in most other areas. I also have to say that for a game that's been so heavily touted for its story, I found the plot to be poorly structured and unsatisfying.

(A side note to you preemptive Alan Wake fanboys who've been posting negative remarks on the review: At least have the decency to wait until the game hits retail before tearing my evaluation down. If you've got a reason to disagree with me, I'd love to hear it. Present your case. Tell me your rationale. I'm honestly interested. However, you really don't have a thing to say until you've actually played the game, you "every other site gave it a 9" tossers.)

In other games news, I've been spending a bit of time with Retro (mini) on PlayStation Portable. It's a new take on the old formula where players pilot a ship of some sort through obstacles and caves while struggling against gravity. I think the first game of its type I ever played was Solar
计算机
2014-35/1056/en_head.json.gz/18340
Jimdo now available for iOS. Your Website. Wherever you are.

SAN FRANCISCO, Aug. 22, 2013 /PRNewswire/ -- Jimdo, a leading website builder, today released a native iOS application. Available for free in the App Store, Jimdo's mobile app makes it possible to create and edit websites on an iPhone or iPad. All of Jimdo's existing 8 million websites can also be edited in the app.

"The desktop version of Jimdo is already powerful," said Christian Springub, Jimdo's co-founder. "By integrating the new app with our existing platform, users now have the freedom to decide when, where, and with which device they edit their sites. Users can even start on one device and finish on another."

Jimdo's iOS app opens up the possibility to add, modify, and delete content on the go. For instance, the app enables a photographer to add galleries at a shoot, a chef to add a new recipe page right from the kitchen, or a group of friends to create a travel website while on vacation. Anybody can use Jimdo for iOS to add new photos, galleries, and text; add, delete, and reorder pages; and track their website's traffic. All websites are optimized for mobile and desktop.

Jimdo focuses on developing a product that meets its customers' immediate needs and also sets the stage for long-term innovation. "We've seen people trying to log in to Jimdo on their iPhones and iPads. We know that the demand for a native app is there, and now the wait is over," said Stephen Belomy, Jimdo's U.S. CEO. "This app is more feature-rich than any other in our industry, and it's only going to get better."

The app has already received positive reviews from Jimdo users. Brent Thomas, owner of BikeWrappers.com, commented, "As a small business owner, I find Jimdo's app great for making quick changes while I'm traveling. Being able to keep fresh content on my website is extremely important to me, and this app allows me to manage my site and business without having to rely on a desktop."

Jimdo's iOS app is available for free on the App Store at http://jim.do/appstore

About Jimdo
Jimdo is the easiest way to create a website on a computer, smartphone, or tablet. With a simple intuitive interface, Jimdo enables anyone to create a customized online presence with a blog and online store. Founded in Germany in 2007 by Christian Springub, Fridtjof Detzner, and Matthias Henze, the company set a new standard in website creation. Profitable since 2009 without venture capital, Jimdo has a passionate team of 170 people in Hamburg, San Francisco, Tokyo, and Shanghai. Jimdo is available in 12 languages and has helped people build over 8 million websites. For more on Jimdo visit http://www.jimdo.com

SOURCE Jimdo
计算机
2014-35/1057/en_head.json.gz/37
Microsoft announces free antivirus, limited public beta
Microsoft has officially announced its new Microsoft Security Essentials …

Microsoft today officially announced Microsoft Security Essentials (MSE), its free, real-time consumer antimalware solution for fighting viruses, spyware, rootkits, and trojans. Currently being tested by Microsoft employees and a select few testers, MSE is Microsoft's latest offering intended to help users fight the threats that plague Windows PCs. Microsoft notes that the threat ecosystem has expanded to include rogue security software, auto-run malware, fake or pirated software and content, as well as banking malware, and the company is aiming to help the users who are not well protected.

A beta of MSE will be available in English and Brazilian Portuguese for public download at microsoft.com/security_essentials on June 23, 2009 for the first 75,000 users. This is a target number, but Microsoft is willing to increase it if necessary. After the first beta, Microsoft will release a second public build, either a Beta Refresh or a Release Candidate, for the summer. Finally, Microsoft is aiming to release the final product in the fall, though it may adjust that based on feedback. MSE will be available as standalone 32-bit and 64-bit downloads for Windows XP, Windows Vista, and Windows 7.

Microsoft has always recommended that its users use real-time antimalware protection, but the release by the end of this year will mark the company's first free solution. MSE was previously referred to as codename Morro when Microsoft first revealed it in November 2008. The announcement came as the company surprised everyone by saying it would be phasing out the pay-for Windows Live OneCare in favor of a free security solution. Sales of the Windows Live OneCare subscription service as well as Windows Live OneCare for Server on SBS 2008 are scheduled to end at the end of the month. While OneCare offered a Managed Firewall, PC Performance Tuning, Data Backup and Restore, Multi-PC Management, and Printer Sharing, MSE is really closer to Forefront Client Security, Microsoft's antivirus product for the enterprise.

Features and performance
Microsoft touts five features of Microsoft Security Essentials:
Remove most-prevalent malware
Remove known viruses
Real-time anti-virus protection
Remove known spyware
Real-time anti-spyware protection

You'll likely notice that the last two features can be attributed to Windows Defender, which is offered as a standalone download for Windows XP and Windows Server 2003, ships with Windows Vista, and will ship with Windows 7. During the MSE installation, Windows Defender is actually disabled as it is no longer needed with MSE installed. Nevertheless, the UI was based on Windows Defender's, and Microsoft emphasized that keeping the UI as simple as possible was very important. Below you can see two screenshots, with the first showing MSE when everything is nice and dandy while the second shows that a threat has been detected. While users can choose to clean the threat from the main MSE window, the more likely scenario is an alert popping up and a user choosing to clean the threat straight from the alert with a single click.

MSE's engine is actually identical to the one that ships with Forefront Client Security; in fact, Microsoft uses the same engine for all of its security products. Thus, engine updates to MSE will be delivered at the same time as they are delivered to Forefront.
Signature updates, on the other hand, can be delivered at different times and frequencies than Microsoft's other security software. New virus signatures for MSE will be downloaded automatically on a daily basis.

One of the most interesting features for MSE is Dynamic Signature Service (DSS). When MSE detects that a file is making suspicious actions (such as unexpected network connections, attempting to modify privileged parts of the system, or downloading known malicious content) and there is no virus signature for it, MSE will send a profile of the suspected malware to Microsoft's servers. If there is a new signature for it, one that has yet to be sent out to the MSE client, MSE will be told how to clean the file. It should be emphasized that this communication will only occur for malware found that is not in the current signatures. This is a completely new feature and indeed the next version of Forefront will also use DSS.

The actual security aspect aside, the most important part of security software is undoubtedly performance. Since MSE doesn't include many of the features of OneCare, this is an area that Microsoft has a chance to excel in. In fact, the company includes three features in MSE to keep it light: CPU throttling (the system will remain responsive to the user's tasks), idle-time scanning (scans and updates use a low-priority thread and only run when the PC is idle), as well as smart caching and active memory swapping (virus signatures not in use are not loaded into memory). It should also be noted that MSE is very small; when MSE first leaked out yesterday, we noted that the installer sizes range from just over 3MB to just over 7MB (the folder installed takes up about 11 MB). The leanness of MSE is also evident when looking at the system requirements:
For Windows XP, a PC with a CPU with clock speed of at least 500MHz and at least 256MB of RAM
For Windows Vista and Windows 7, a PC with a CPU with clock speed of at least 1.0GHz and at least 1GB of RAM
VGA (display): 800x600 or higher
Storage: 140MB of available hard-disk space
An Internet connection is required for installation and to download the latest virus and spyware definitions.

One other thing we noticed yesterday was that genuine validation was required during the installation of MSE. This seems slightly counterproductive since MSE is targeted at those who cannot pay for security solutions. These consumers are also likely to have pirated Windows instead of paying for it, and thus cannot use MSE because their copy is not genuine. Such a user will then either remain without a security solution or will decide to use another free alternative. When Ars asked about this, Theresa Burch, director of product management for Microsoft Security Essentials, responded that Microsoft's intent is to drive the market towards PCs with genuine copies of Windows, obviously for the sake of the bottom line, but also for the sake of security. Microsoft maintains that nongenuine copies of Windows are more likely to be compromised because they tend not to have the latest updates and they can be malware-ridden from the start. Further, she emphasized that Microsoft's intent is not to convert consumers from other security solutions and that the main goal is to keep consumers secure, regardless of whether that means using Microsoft's security solutions or third-party ones.

One last thing Ars discussed with Burch was the "Essentials" branding.
We've seen it before with Windows Live Essentials, but Burch says MSE will not be included in this suite, even though non-Windows Live applications like Silverlight are included. Microsoft is likely aiming to release MSE in time for Windows 7 (slated to arrive on October 22), but unlike Windows Live Essentials, Burch says there will be no download link for MSE included in the final version. This is a curious decision given that Redmond wants to push MSE out to all those that currently do not have a security solution (between 50 and 60 percent of Windows users, according to Microsoft). Nevertheless, it can be quite easily explained: Microsoft wants to avoid antitrust issues. MSE will be available for download directly from Microsoft, but the company will have to advertise it one way or another because these users aren't exactly going to flock excitedly to download a security suite, regardless of how bad or good it will end up being.

For now, MSE looks like a surprisingly solid free product, but we will reserve further judgment until the product makes its way out of beta.
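Microsoft describes the Dynamic Signature Service only in broad strokes, so the following outline is speculative rather than a description of MSE's actual implementation; the signature store, the heuristics, and the cloud lookup below are invented placeholders that just trace the flow described above (local signature check first, then a cloud query for suspicious unknowns).

```python
# Hypothetical sketch of a DSS-style lookup: if a file looks suspicious but has
# no local signature, ask a cloud service whether a newer signature exists.

LOCAL_SIGNATURES = {"d41d8cd9...": "Trojan.Example"}   # hash -> threat name (placeholder data)

def looks_suspicious(behavior):
    """Invented heuristics standing in for the behaviour monitoring described above."""
    return (behavior.get("unexpected_network_connections", 0) > 0
            or behavior.get("modifies_privileged_areas", False)
            or behavior.get("downloads_known_malicious_content", False))

def cloud_lookup(file_profile):
    """Placeholder for the round trip to the vendor's servers; a real service
    would return either a fresh signature with a remediation, or nothing."""
    return {"threat": "Worm.Hypothetical", "action": "quarantine"}

def scan(file_hash, behavior, file_profile):
    if file_hash in LOCAL_SIGNATURES:            # known locally: clean immediately
        return f"clean ({LOCAL_SIGNATURES[file_hash]})"
    if looks_suspicious(behavior):               # unknown but suspicious: ask the cloud
        verdict = cloud_lookup(file_profile)
        if verdict:
            return f"{verdict['action']} ({verdict['threat']})"
    return "no action"

print(scan("unknown-hash",
           {"unexpected_network_connections": 3},
           {"name": "example.exe"}))             # -> 'quarantine (Worm.Hypothetical)'
```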
计算机
2014-35/1057/en_head.json.gz/183
Information security, sometimes shortened to InfoSec, is the practice of defending information from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction. It is a general term that can be used regardless of the form the data may take (electronic, physical, etc.).[1]

Two major aspects of information security are:

IT security: Sometimes referred to as computer security, information technology security is information security applied to technology (most often some form of computer system). It is worthwhile to note that a computer does not necessarily mean a home desktop. A computer is any device with a processor and some memory. Such devices can range from non-networked standalone devices as simple as calculators, to networked mobile computing devices such as smartphones and tablet computers. IT security specialists are almost always found in any major enterprise/establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the technology within the company secure from malicious cyber attacks that often attempt to breach critical private information or gain control of the internal systems.

Information assurance: The act of ensuring that data is not lost when critical issues arise. These issues include, but are not limited to: natural disasters, computer/server malfunction, physical theft, or any other instance where data has the potential of being lost. Since most information is stored on computers in our modern era, information assurance is typically dealt with by IT security specialists. One of the most common methods of providing information assurance is to have an off-site backup of the data in case one of these issues arises.
A key concern for organizations is determining, from an economics perspective, the optimal amount to invest in information security. The Gordon-Loeb Model provides a mathematical economic approach for addressing this concern. For the individual, information security has a significant effect on privacy, which is viewed very differently in different cultures.

The field of information security has grown and evolved significantly in recent years. There are many ways of gaining entry into the field as a career. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, digital forensics, and so on.

Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands, but for the most part protection was achieved through the application of procedural handling controls. Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, and guarded and stored in a secure environment or strong box. As postal services expanded, governments created official organizations to intercept, decipher, read and reseal letters (e.g. the UK Secret Office and Deciphering Branch in 1653).

In the mid-19th century more complex classification systems were developed to allow governments to manage their information according to its degree of sensitivity. The British Government codified this, to some extent, with the publication of the Official Secrets Act in 1889. By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code making and breaking sections in diplomatic and military headquarters. In the United Kingdom this led to the creation of the Government Code and Cypher School in 1919. Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information. The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls. An arcane range of markings evolved to indicate who could handle documents (usually officers rather than men) and where they should be stored as increasingly complex safes and storage facilities were developed. Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g. U-570).

The end of the 20th century and the early years of the 21st century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful and less expensive computing equipment made electronic data processing within the reach of small business and the home user. These computers quickly became interconnected through the Internet.
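As a concrete aside on the Caesar cipher mentioned above, here is a minimal sketch using the modern 26-letter alphabet, so it is only a rough analogue of what Caesar's scribes actually did.

```python
def caesar(text, shift=3):
    """Shift each letter of the alphabet by a fixed amount (Caesar reputedly
    used a shift of three); non-letters pass through unchanged."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

message = "ATTACK AT DAWN"
secret = caesar(message, 3)          # 'DWWDFN DW GDZQ'
print(secret, caesar(secret, -3))    # decrypting is just shifting back
```

The same sketch also shows why such ciphers only deter casual readers: trying all 25 possible shifts recovers the message immediately.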
The rapid growth and widespread use of electronic data processing and electronic business conducted through the Internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process and transmit. The academic disciplines of computer security and information assurance emerged, along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems.

Definitions
[Figure: Information security attributes, or qualities: confidentiality, integrity and availability (CIA). Information systems are composed of three main portions (hardware, software and communications), identified so that information security industry standards can be applied as mechanisms of protection and prevention at three levels or layers: physical, personal and organizational. Essentially, procedures or policies are implemented to tell people (administrators, users and operators) how to use products to ensure information security within organizations.]

The definitions of InfoSec suggested in different sources are summarised below (adapted from [2]):
1. "Preservation of confidentiality, integrity and availability of information. Note: In addition, other properties, such as authenticity, accountability, non-repudiation and reliability can also be involved." (ISO/IEC 27000:2009)[3]
2. "The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability." (CNSS, 2010)[4]
3. "Ensures that only authorized users (confidentiality) have access to accurate and complete information (integrity) when required (availability)." (ISACA, 2008)[5]
4. "Information Security is the process of protecting the intellectual property of an organisation." (Pipkin, 2000)[6]
5. "...information security is a risk management discipline, whose job is to manage the cost of information risk to the business." (McDermott and Geer, 2001)[7]
6. "A well-informed sense of assurance that information risks and controls are in balance." (Anderson, J., 2003)[8]
7. "Information security is the protection of information and minimises the risk of exposing information to unauthorised parties." (Venter and Eloff, 2003)[9]
8. "Information Security is a multidisciplinary area of study and professional activity which is concerned with the development and implementation of security mechanisms of all available types (technical, organisational, human-oriented and legal) in order to keep information in all its locations (within and outside the organisation's perimeter) and, consequently, information systems, where information is created, processed, stored, transmitted and destroyed, free from threats. Threats to information and information systems may be categorised and a corresponding security goal may be defined for each category of threats. A set of security goals, identified as a result of a threat analysis, should be revised periodically to ensure its adequacy and conformance with the evolving environment. The currently relevant set of security goals may include: confidentiality, integrity, availability, privacy, authenticity & trustworthiness, non-repudiation, accountability and auditability."
(Cherdantseva and Hilton, 2013)[2]

Profession

Information security is a stable and growing profession: information security professionals are very stable in their employment; more than 80 percent had no change in employer or employment in the past year, and the number of professionals is projected to grow continuously by more than 11 percent annually from 2014 to 2019.[10]

Basic principles

Key concepts

The CIA triad of confidentiality, integrity, and availability is at the heart of information security.[11] (The members of the classic InfoSec triad, confidentiality, integrity and availability, are interchangeably referred to in the literature as security attributes, properties, security goals, fundamental aspects, information criteria, critical information characteristics and basic building blocks.) There is continuous debate about extending this classic trio.[2] Other principles such as accountability[12] have sometimes been proposed for addition; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts, and as regulation of computer systems has increased (particularly amongst the Western nations), legality is becoming a key consideration for practical security installations.

In 1992, and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks[13] proposed nine generally accepted principles: Awareness, Responsibility, Response, Ethics, Democracy, Risk Assessment, Security Design and Implementation, Security Management, and Reassessment. Building upon those, in 2004 the NIST's Engineering Principles for Information Technology Security[14] proposed 33 principles. From each of these, guidelines and practices have been derived.

In 2002, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian hexad are a subject of debate amongst security professionals.

In 2013, based on an extensive literature analysis, the Information Assurance & Security (IAS) Octave was developed and proposed as an extension of the CIA triad. The IAS Octave is one of four dimensions of a Reference Model of Information Assurance & Security (RMIAS). The IAS Octave includes confidentiality, integrity, availability, privacy, authenticity & trustworthiness, non-repudiation, accountability and auditability.[2][15] The IAS Octave, as a set of currently relevant security goals, has been evaluated via a series of interviews with InfoSec and IA professionals and academics. In [15], definitions for every member of the IAS Octave are outlined along with the applicability of every security goal (key factor) to six components of an Information System.

Integrity

In information security, data integrity means maintaining and assuring the accuracy and consistency of data over its entire life-cycle.[16] This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Integrity is violated when a message is actively modified in transit. Information security systems typically provide message integrity in addition to data confidentiality.
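As a minimal illustration of the integrity property just described, the sketch below (Python, standard library only) stores a SHA-256 digest alongside a piece of data and later recomputes it to detect modification. The record contents and function names are invented for the example. Note that a bare hash only detects change: an attacker who can rewrite the data can also rewrite the digest, which is why keyed constructions (HMAC) or the digital signatures discussed further below are used when deliberate tampering is the concern.

```python
# Illustrative sketch: detect unauthorized or accidental modification by
# comparing a stored digest against a freshly computed one.
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# At write time: store the data together with its digest.
record = b"amount=100.00;payee=ACME Corp"
stored_digest = sha256_digest(record)

# At read time: recompute the digest and compare in constant time.
def is_unmodified(data: bytes, expected_digest: str) -> bool:
    return hmac.compare_digest(sha256_digest(data), expected_digest)

print(is_unmodified(record, stored_digest))                 # True
print(is_unmodified(record + b";extra", stored_digest))     # False -> integrity violated
```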
Availability

For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system essentially forcing it to shut down.

Authenticity

In computing, e-business, and information security, it is necessary to ensure that the data, transactions, communications or documents (electronic or physical) are genuine. It is also important for authenticity to validate that both parties involved are who they claim to be. Some information security systems incorporate authentication features such as "digital signatures", which give evidence that the message data is genuine and was sent by someone possessing the proper signing key.

Non-repudiation

In law, non-repudiation implies one's intention to fulfill one's obligations to a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message and nobody else could have altered it in transit. The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised. The fault for these violations may or may not lie with the sender himself, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity and thus prevents repudiation. Electronic commerce uses technology such as digital signatures and public key encryption to establish authenticity and non-repudiation.
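To make the signature mechanism mentioned above concrete, here is a small sketch using the third-party Python `cryptography` package and an Ed25519 key pair. The message text and key handling are invented for illustration, and, as the section stresses, a mathematically valid signature does not by itself settle the legal question of non-repudiation.

```python
# Illustrative digital-signature sketch (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held only by the sender
public_key = private_key.public_key()        # distributed to verifiers

message = b"Order #1234: ship 10 units to warehouse B"
signature = private_key.sign(message)        # evidence the holder of the key signed this exact message

try:
    # verify() raises InvalidSignature if the message or signature was altered.
    public_key.verify(signature, message)
    print("Signature valid: message is authentic and unmodified")
except InvalidSignature:
    print("Signature invalid: reject the message")
```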
计算机
2014-35/1057/en_head.json.gz/573
Announcing the Web We Want
Dillon Mann

"Millions of people together have made the Web great. So, during the Web's 25th birthday year in 2014, millions of people can secure the Web's future. We must not let anybody – governments, companies or individuals – take away or try to control the precious space we've gained on the Web to create, communicate, and collaborate freely."

This was the message from Sir Tim Berners-Lee, founding director of the World Wide Web Foundation, as he addressed a UN gathering in Geneva today. Sir Tim used his address to unveil a new campaign – the Web We Want. During the Web's 25th birthday in 2014, the Web We Want campaign will ask everyone, everywhere to play a part in defining the Web's future, and then help to build and defend it. Ultimately, the Web We Want campaign hopes to see people's online rights on a free, open and truly global web protected by law in every country. This global initiative is co-ordinated by the World Wide Web Foundation and Free Press. More information is available at www.webwewant.org.

V. Selva Raj says:
Dec 16, 2013 at 1:04 am
I am so happy to come across your website and its founder, Rt Hon Gentleman Sir Tim Berners-Lee. Nearly 66 years after independence, India is still going back to the stone age vis-a-vis the neighbouring country China's manifold development in all aspects of human spirit, except for its belligerent attitude of forcible occupation of its neighbours' land, threatening countries like Formosa as to their independence and sovereignty, and also an aggressive attitude in the South China Sea, threatening Japan as well.

Ryne Barnish says:
Jan 14, 2014 at 1:53 pm
Truly, truly, conditionally and unconditionally, this means beauty. As an untypical Web user, or somebody who hardly ever uses the World Wide Web, I can still define this movement as something simply amazing! Hopefully, whoever is reading this text on this server (whether a user or an employee of this foundation) will find this comment somewhat meaningful. For the whole society I'm sadly surrounded by is a society that sees the Web as a direct synonym for the Internet. Then again, that's probably the society you're currently in too. But not only are they sickly mistaking the two concepts for something they're not, they also happen to be mistreating the Web as well. And when I say "they" I mean the modern-day youth. The minute one sees a desktop or a laptop, the vision of Bing, Facebook, Google, YouTube, and even PornHub.com pops right into their adolescent minds. It's a bittersweet addiction that has flipped the revolution of the Web into something hideous that's mainly run by a younger generation. Although, it wouldn't be a revolution if there wasn't an ugly side to it. And it's not always the idea of cyberbullying, hacking, viruses, fake identification, and other black episodes that flip our digital atmosphere around, but the fat cats themselves. Governments and commercial (and uncommercial) corporations have always been a disadvantage to the freedom of the medium. Radio, cinema, television, newspaper, telephone, books, you get the idea. Any technology through which information can be given or communicated is a red light for an authority. Now, our one and only Web happens to be the newest victim.
The modern world is filled with webbies, or people that can live without using Tim's artwork. It's also filled by people who work off the Web, so why take away this legendary experience they have? Why should our children create or be a part of a deformed vision? Tim, if you're reading this text at the moment, please know that there's an artistry behind the World Wide Web. And know that this foundation is worth carrying out the vision to save the ideas that helped smother the earth in 0's & 1's. This comment wouldn't be possible without you…

Kitty deane says:
Mar 12, 2014 at 5:35 am
Thank you for this campaign. I agree wholeheartedly with this vision. It is a shame, but without this type of lobbying there is erosion of accessibility, freedom, equality and privacy.
计算机
2014-35/1057/en_head.json.gz/953
The Lord of the Rings Online: First Impressions

I spent a few leisurely hours in Middle-earth, unsure if there would be anything new in an enterprise that's been too long under the sun. What caught my attention was a development team that (somewhat) turned off the Peter Jackson films, tuned into their own intuitions, and turned out a faithful hymn to a daunting, frighteningly high-profile venture.

The question is inherently ironic, but it bears asking nonetheless: How is The Lord of the Rings Online going to distinguish itself from the myriad other deep-pocketed, high-fantasy MMOs? When so much media (and just try to name a medium that hasn't been touched by The Lord of the Rings) relies on Tolkien's seminal trilogy as a basis for their own fantastical realms, what of Middle-earth is there left to explore? What angle could developers possibly exploit that hasn't already been lifted wholesale from the Rings franchise itself?

First off, it plays the Archeological card. As in, for Western Civilization, having your passport stamped in Middle-earth is like embarking on a pilgrimage to the Holy Land. No, Tolkien didn't invent the fantasy tropes that are so familiar and expected today; while he's arguably the father of modern fantasy literature, he's not the grandfather. So, by unfurling the Middle-earth map, players aren't experiencing some slipshod spin-off that took a little here, stole a little there, and made just enough cosmetic changes to not get sued by the Tolkien Estate. Players are finally experiencing a world translated straight from the source texts of modern western fantasy. And again, let's spare ourselves the literary snobbery of referencing The Odyssey, Beowulf, Norse tradition, etc., as source texts. Tolkien himself had already assimilated that intellectual property into his own writings for our enjoyment. (Wasn't it Oscar Wilde that said "Good writers borrow. Great writers steal"? Well, it turns out that ol' Uncle Wilde wasn't lying to you.) So, with Lord of the Rings Online, you're running an avatar through the playgrounds of the man that stole it best.

Don't over-read me and think that I'm a Rings fanboy, however. Not to incite a riot, but I find most of Tolkien's writing to be parched and eye-glazing. And while he's an author I certainly respect, he's not one that I've ever really liked. (Yes, we all know that we're supposed to deify Tolkien, but those of us that couldn't actually stomach reading the entire Rings trilogy -- you can breathe easier, knowing that you're among friends.)

Secondly, seven months of beta testing, along with Turbine, Inc.'s previous development experience with Asheron's Call and Dungeons and Dragons Online, is putting LOTRO's best foot forward. Some very recent online releases (*cough* Vanguard *cough*) haven't shown the level of spit-shine that Middle-earth seems to be pulling off effortlessly. Monitoring the chat channels, I haven't seen any complaints whatsoever as to bugs -- real or alleged -- broken quests, avatar appearance or disappearance issues, or any of the other showstoppers that are often par for the course with a new MMO launch. In about one month (which is when LOTRO reviews will trickle their way onto the net) we're not going to see the usual Buyer Beware! warnings that go hand-in-hand with lowered review scores.
We won't see the concluding paragraphs that advise "Wait six months until all the bugs are mashed," or huffy bloggers stating "I'm not gonna pay $15 bucks a month to beta test a game!" Could this (along with the superbly polished World of Warcraft: The Burning Crusade -- although that, by definition, isn't a new MMO release) represent a new standard for the opening day of an online game? But I'm being hasty. Sadly, I admit that my experience was only a few levels deep for the different races, so at least I can say with confidence that the introductory stages are nicely refined. From amongst the selectable races of, um, the Race of Man, Elves, Dwarves, and Hobbits, and deciding between the race-limited classes of Champions, Guardians, Hunters, Lore-masters, Minstrels, Burglars, and Captains, I opted to explore a magic-wielding human character for a healthy mix of the arcane and the mundane, respectively. As a lovely touch (one that D&D Online alumni will recognize) each race and class has a short introductory cinematic to view on the character creation screen. This is some nicely-sculpted help from the developers, since at least a couple of those classes won't sound immediately familiar. Sure, you'd expect the Minstrel to take up a bard's lute, but it's much less conventional to think of the Minstrel as the healer. Which it is. But all of these LOTRO-specific paradigm shifts are spelled out equally well in text on the character creation screen. To the extent that commonly used MMO terms are blatantly utilized in their descriptions: The Champion's role is for "area-of-effect damage and damage-per-second," the Guardian is a straight up "tank," the Hunter is a "nuker," the Burglar is a "buffer," while the human-only Captains are a "buffer/pets" class, and the wizardly Lore-master handles "crowd control/pets." By entering Middle-earth, you're also entering a vast, pre-established world with a veritable cast of thousands, and an already fully-realized map. Each race enjoys a unique starting point, but LOTRO entertains the idea of a character's history by having you pick out your character's old stomping grounds. Which is doubly cool since you get a suffix title from the very beginning of the game. The name floating above my head isn't just BillyJoeBob. It's BillyJoeBob of Rohan. Or Rivendell, or The Lonely Mountain. More obscure locales are available too, like the Stoor, Lindon, or the Dale-lands. But no matter where you hail from, you're treated to a single-player instance to start things off. And if there's something Turbine has learned how to do right off the bat, it's launch a motivating hook for the rest of your time in Tolkien-land. So you don't immediately lose track of what you're getting yourself into, you'll either run across Elrond, Gandalf, a cave troll, or one of the Dark Riders during your tutorial mission. And it's going to be in an all-hell's-breaking-loose fashion that you're introduced to them. No, silly hobbit, you won't be taking down one of the Nazgul in all of your level-one glory. But you will be getting a little too close for comfort, just as an appetizer. But just as Tolkien himself does, the missions' writings tend to overrate themselves, plodding on a bit haphazardly about too many non-player characters and settings you don't really care about yet. Although, every layman's argument against something unapologetically Middle-earthy can and should be argued against -- in favor of some thoroughbred Lord of the Rings immersion. 
And for every rote fantasy convention you will be prescient of, there'll be one step outside the box to counteract your expectations. I can't lie to you and tell you that I'm riveted. Middle-earth is too middle-of-the-road for me when it comes to high-fantasy. It fits into some unlabeled "realistic fantasy" genre that, all things considered, wouldn't be fantasy at all if it weren't for a few fireball spells and a couple magical denizens. If not for those elements, it'd be a somewhat droll, perhaps picaresque period piece set in Tudor-style England. But still, there's enough baked into the Rings environment to keep it all quite charming, if not in a commendably restrained manner. If you check your baggage at the door, The Lord of the Rings Online shapes itself as a thoughtful homage to the father of fantasy literature, made all the more digestible from a nigh-infallible interface, buttery controls, and palpable vistas to make newly-forming fellowships puff their pipe-weed in awe. Beyond exploring its own license it doesn't do anything jaw-dropping. It's simply a well-crafted game that will leave you craving elevensies.
计算机
2014-35/1057/en_head.json.gz/964
Computerisation of land records in India

Government Initiatives

The Government of India and the state governments have been seized with the recurring problem of an inadequately maintained land record system, as it had made the administration of land reforms difficult and had served to neutralise their benefits. A weak land record system had also been viewed as a systemic weakness that has helped the perpetration of atrocities on the Scheduled Castes and the Scheduled Tribes. The following are the major initiatives taken by the Government of India for computerisation of land records:

- The Conference of Revenue Ministers of states/UTs (1985) advocated that computerisation of land and crop-based data be taken up on a pilot project basis as a technology proving exercise in one Tehsil/Revenue Circle of each state/UT, as a Central sector scheme.
- A Study Group (1985) comprising representatives from the Ministry of Agriculture, the Central Statistical Organisation and from the Governments of Karnataka, Madhya Pradesh, Maharashtra, Tamil Nadu and Uttar Pradesh also recommended computerisation of Core Data in land records to assist developmental planning and to make their records more accessible to the people. However, the Planning Commission considered it premature to take up the scheme at that point of time.
- A workshop on computerisation of land records (1987) reviewed the experience of different states in computerisation of land records made at their own initiative and recommended that the Government of India should fund this programme on a pilot project basis. The Department of Rural Development selected 8 districts in 8 states. Morena was one of the districts selected for computerisation of land records, the others being Rangareddy in Andhra Pradesh, Mayurbhanj in Orissa, Sonitpur in Assam, Singbhum in Bihar, Wardha in Maharashtra, Dungarpur in Rajasthan and Gandhinagar in Gujarat.

While approving the pilot projects in 1988, the Government took the following decisions:

- The timeframe for the pilot project should not be more than 6-8 months.
- The states should clearly bring out the benefits that would accrue as a result of these pilot projects and these should be highlighted in the project reports.
- The state governments should show a clear commitment to computerisation of land records.
- An officer with knowledge, training and experience in handling computers should be made in charge of the project and should be posted in the district chosen for the pilot project.

Objectives of the "CLR" Scheme

Keeping in mind all the aforesaid ideas, the final list of objectives of the scheme as conceived in the Memorandum for Expenditure Finance Committee (EFC Memo), submitted in 1993 by the Ministry of Rural Development, was as given below:

- To facilitate easy maintenance and updating of changes which occur in the land database, such as changes due to availability of irrigation, natural calamities or consolidation, or on account of legal changes like transfer of ownership, partition, land acquisition, lease etc.
- To provide for comprehensive scrutiny to make land records tamper-proof, which may reduce the menace of litigation and social conflicts associated with land disputes.
- To provide the required support for implementation of development programmes for which data about distribution of land holdings is vital.
- To facilitate detailed planning for infrastructural as well as environmental development.
- To facilitate preparation of an annual set of records in the mechanised process, thereby producing accurate documents for recording details such as collection of land revenue, cropping pattern etc.
- To facilitate a variety of standard and ad-hoc queries on land data.
- To provide a database for the agricultural census.

Progress of the Scheme

The centrally-sponsored scheme on computerisation of land records was started in 1988-89 with 100% financial assistance as a pilot project in the eight districts/states mentioned above, with a view to removing the problems inherent in the manual system of maintenance and updating of land records and to meet the requirements of various groups of users. It was decided that efforts should be made to computerise the CORE DATA contained in land records, so as to assist development planning and to make records accessible to people, planners and administrators. By 1991-92, the scheme had been extended to 24 districts in different states, viz., Haryana, H.P., J&K, Karnataka, Kerala, Manipur, Punjab, Tamil Nadu, Tripura, Sikkim, Uttar Pradesh, West Bengal and Delhi UT.

During the Eighth Plan, the scheme was approved as a separate centrally-sponsored scheme on computerisation of land records. The total expenditure on the scheme during the Eighth Plan period was Rs. 59.42 crore, which was utilised for covering 299 new districts and also for providing additional funds for the on-going pilot projects. Thus, by the end of the Eighth Plan, 323 districts in the country were brought under the scheme with an expenditure of Rs. 64.44 crore.

The scheme is being implemented since 1994-95 in collaboration with the National Informatics Centre (NIC), which is responsible for the supply, installation and maintenance of hardware, software and other peripherals. NIC is also responsible for providing training to the revenue officials and technical support for proper implementation. The Ministry of Rural Development is providing funds to the state governments for site preparation, data entry work and for purchase of necessary furniture and other miscellaneous expenditure. Since inception of the scheme, the Ministry has released Rs. 109.37 crore up to March 31, 1999. The utilisation of funds reported by the states/UTs as on November 30, 1999 is Rs. 62.15 crore, which is approximately 57% of the total funds released.

During the first year of the Ninth Plan, i.e. 1997-98, Rs. 20.19 crore was released to states for covering 177 new project districts and also for providing funds for purchase of software, hardware and other peripherals for tehsil/taluk level operationalisation of the scheme. Accordingly, during 1997-98, 475 taluks/tehsils were brought under the programme of operationalisation (@ Rs. 2.20 lakh per tehsil/taluk). During 1998-99, Rs. 24.75 crore was released for covering 28 new districts and operationalisation of the scheme in 625 more tehsils/talukas. At present the scheme is being implemented in 544 districts of the country, leaving only those districts where there are no land records.
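Purely as an illustration of the kind of record keeping, mutation updating and ad-hoc querying that the objectives above call for, the sketch below uses Python's built-in sqlite3 module. The table layout, field names and sample values are hypothetical; they are not taken from the actual CLR databases maintained by NIC or the states.

```python
# Hypothetical sketch of a computerised land-record table with an update
# (a transfer of ownership, or "mutation") and an ad-hoc query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE land_record (
        survey_no          TEXT,
        village            TEXT,
        owner              TEXT,
        area_ha            REAL,
        crop               TEXT,
        last_mutation_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO land_record VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("101/2", "Example Village", "A. Sharma", 1.6, "wheat",   "1998-07-12"),
        ("101/3", "Example Village", "B. Verma",  0.9, "mustard", "1999-01-05"),
    ],
)

# Updating a record after a transfer of ownership.
conn.execute(
    "UPDATE land_record SET owner = ?, last_mutation_date = ? WHERE survey_no = ?",
    ("C. Gupta", "1999-03-31", "101/2"),
)

# An ad-hoc query: total holding area per owner in a village.
for owner, total in conn.execute(
    "SELECT owner, SUM(area_ha) FROM land_record WHERE village = ? GROUP BY owner",
    ("Example Village",),
):
    print(owner, total)
```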
计算机
2014-35/1057/en_head.json.gz/1547
Risk Management

Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.

Overview

The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®.

In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk. Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.

The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.

Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues.

The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials. The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments.
These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most.

Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.
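As a rough, generic illustration of the driver-based analysis described in the overview (establish objectives, identify drivers, analyze them, gauge overall mission risk), the sketch below scores a handful of drivers and rolls them up into a single number. This is not the SEI's actual MRD scoring scheme or scale; the driver names, probabilities and weights are invented for the example.

```python
# Generic driver-based mission-risk roll-up (illustrative only).
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    success_probability: float  # analyst's judgment that the driver is in its "success" state
    weight: float               # relative influence on the mission objectives

drivers = [
    Driver("Requirements are stable and understood", 0.7, 0.3),
    Driver("Schedule and staffing are realistic",    0.5, 0.3),
    Driver("Security practices are followed",        0.8, 0.2),
    Driver("Suppliers deliver components on time",   0.6, 0.2),
]

def mission_risk(drivers):
    """Weighted chance that the objectives will NOT be achieved (0 = no risk, 1 = certain failure)."""
    total_weight = sum(d.weight for d in drivers)
    expected_success = sum(d.success_probability * d.weight for d in drivers) / total_weight
    return 1.0 - expected_success

print(f"Overall mission risk gauge: {mission_risk(drivers):.2f}")
```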
计算机
2014-35/1057/en_head.json.gz/2124
Fool Me Twice: The Fool and His Money

Cliff Johnson's twice-long-awaited sequel to The Fool's Errand, at last is here, again

Category: Interview

"My goal was to be a worthy successor to The Fool's Errand, and according to my 50+ testers, I have succeeded." -- Cliff Johnson

"The puzzles, as always, are challenging in just the right way. Moreover, the graphics are gorgeous, the sound effects charming, and the whole enterprise worth waiting for." -- Stephen Sondheim

Ten years ago, Cliff Johnson, the creator of such early computer puzzle games as 3 in Three and At the Carnival, got the urge to make a sequel to his most groundbreaking "meta-puzzle" game The Fool's Errand. He'd been offering his earlier games (as he still does) for free download on his new website and the positive response -- and the promise of digital marketing and distribution -- led him to believe there was an audience for such a project. He thought back then it would take him a year to complete.

Five years later, in 2007, he was convinced he'd finally stamped out all the programming bugs and other unforeseen obstacles. So convinced that we did a long interview for JA on the at-long-last imminent release of his sequel, The Fool and His Money. Well, that deadline drifted past and another five long years later, finally, Cliff has his Fool's Errand sequel ready to release, on October 26th.

The game's production was so long that the CD-ROM has nearly disappeared in the interim. One big change since 2007 is the game's availability by digital download, though Cliff will still send his "True Believers" and others the CD he promised them eons ago. As he explained it to me, "I will ship a CD as a souvenir drink coaster . . . and because I said I would."

This time I know it's coming out, at least sometime soon, because I am in the midst of playing the "Omega" test version of the complete game. Cliff did release a tantalizing teaser containing a handful of puzzles a few years back, and that too is still available on his Fool and His Money website. But this is the full enchilada, all 77 puzzles, plus a new "meta-puzzle" like the original.

When I finish the game, God willing, I intend to write a proper review; but Cliff was good enough to answer a few follow-up questions to our 2007 interview about what's transpired since. If you want to get the full story of how he created The Fool and His Money, you should read the 2007 interview. I will, however, give a brief "preview" for those who want just the facts.

Much of the modus operandi of the original is back. This is a big puzzle game with an adventure story and feel. The main character, the Fool, is still wandering around the land of the Tarot card encountering all sorts of bewitched locals who present him with perplexing puzzles. The majority are word puzzles but there are Tarot card games, jigsaw puzzles, numbers puzzles, and some other clever graphics and logic puzzles tossed into the mix. There is, as in the original game, a perplexing map that slowly fills in as you complete the various puzzles. The Fool and His Money is no static crossword, however. There are numerous, ingenious flash animations and sounds to liven up the proceedings. There's even an occasional "action" challenge, as well as a novella-length story that's slowly "written" as you move through the game world.
I have been playing for two weeks and am still only about halfway done.

All true puzzle fans should be thrilled that Cliff Johnson has finally wrestled his programming demons to the mat and is releasing The Fool and His Money. When the original came out, the computer gaming world offered a fair number of full-length puzzle fests, such as Sierra's Dr. Brain series. Nowadays, all we get is match three and hidden object casual flash games. As the Fool once again wanders around the Kingdom of the Pentacles looking for his lost treasures, we should treasure this insanely long gestational effort by one of our great puzzle masters. Even more so because, unlike five years ago when he seemed enthusiastic about writing more Fool and even 3 in Three sequels, he now sounds uncertain. Or, perhaps, tired. Is this the last meta-puzzle game from Cliff Johnson? Let's hope not. But right now, your guess is as good as mine.

Wow, ten years, end nearly in sight. How do you feel?

A decade older and a day wiser.

What was the biggest, baddest holdup?

To borrow a quote from The West Wing, "the total tonnage of what I [now] know... could stun a team of oxen in its tracks."

Have the gameplay, puzzles, or story changed at all?

In 2003, I envisioned The Fool and his Money as a "rags to riches" tale. The game had five transformations: from Vagabond to Street Peddler to Shopkeeper to Shipping Merchant to Land Baron to Emperor. (I still use the Emperor pose in the first puzzle of the Game.) In the end, that version had uneven game play, half puzzles and half simulation, and more important, the new story had little relation to the original story.

The next version contained many of the elements of the final game with one exception: I decided to divide the play into four Kingdoms. You had to finish one Kingdom completely before you could enter the next Kingdom. Tied to this were the four floors of The Seventh House. You had to solve both the corresponding floor and Kingdom to arrive at the 1st, 2nd, 3rd Gateways and then move on. My earliest testers, The Alpha Team, gently suggested that this was an error of catastrophic proportions.

You'd think removing the boundaries would be an easy fix, but I've come to believe "nothing is easy" when it comes to computers. In fact, the easier it looks, the more complicated it becomes.

The final version begins with all the Kingdom of Swords puzzles available. After solving a few, new puzzles appear in the Kingdom of the Wands, then the Cups, and then the Pentacles. The Seventh House has become a Kingdom of its own where the antagonists lurk and scheme. The game is now wide open. No gateways. The Fool's Errand and 3 in Three were wide open. It was Merlin's Apprentice and Labyrinth of Crete that introduced gateways.

Did that extra time allow you to make any other general improvements to the version that nearly made it out the door back then? Or has it basically been a bug hunt?

It's not that the extra time allowed for changes. It's that the changes required extra time. The official bug hunt began in November 2011.

Ever feel your Fool and His Money production schedule was mirroring the confusion and obstacles your hero experiences wandering around both the original game and its sequel?

In years 3 – 7, my task felt like Sisyphus, perpetually rolling a boulder up a hill. In years 8 – 10, my task felt more like the twelve labors of Hercules. Incredibly difficult, but not impossible.

I think it's fair to say any commercial game publisher in your predicament would have thrown in the towel years ago.
What kept you going?

Commercial publishers have different goals than I do. My goal was to create a sequel to The Fool's Errand and that's what I've done. What keeps me going is always the same. I have a passion for creating things. I need to see that final result.

Has Adobe Flash given you a new appreciation for ye olde pen and paper puzzle?

To be precise, it is a Director app/exe using embedded Flash that miraculously gives me an identical product on both MAC and WIN. I congratulate Macromedia and Adobe for their persistence.

I never asked you, but what made you decide on puzzles when you first started writing games for the Macintosh in 1985? Had you long been a puzzle fanatic?

The first half of my professional life was in the film industry and I remain a filmmaker at heart, that is, a storyteller. I think of my computer games as "stories told by treasure hunts." Back in high school, my best friend subscribed to GAMES Magazine and he would give me his magazines after he had solved them. Then I could flip through the pages, see the challenges, and then nod my head at the solutions. Myself, I do not play puzzles, but I do appreciate their craft and their clever use of art direction.

After seeing the 1972 movie "The Last of Sheila," a murder mystery emerging from a party game, penned by Anthony Perkins and Stephen Sondheim, I was inspired to make my own Mystery Party Games. At that time, I chose to deliver the clues in paper-and-pencil puzzle format and then I used photographs, tape recordings, and movies to advance the story . . . once the clues were solved. I learned to program specifically to bring this idea to a computer. That became The Fool's Errand.

There were a fair number of full-length puzzle games when the PC was young. Now you're one of the last Mohicans. Of course, since 2007, the online "casual" puzzle game has exploded. Have any of these impressed you?

I have not played computer games since the Atari 2600. I might be the wrong person to ask.

Are there advantages to the long-form puzzle game over the more casual puzzle game?

I'd rather watch one 3-hour art film than 3 hours of ten-minute short films. In any presentation, a longer format allows for greater in-depth storytelling. That is my preference.

Compared to your earlier games, Fool and His Money has a lot more "help" built in.

INSTRUCTIONS would have been a better word, but it's an awfully long word. To me, HELP on a computer has always meant INSTRUCTIONS. True, in a game of puzzles, HELP does imply HINT. But, as I clarify on the opening screen, I consider HELP to be essential information to play the puzzle.

I know your website is further offering hints and solutions.

I think of it as "one-stop shopping." Why go somewhere else?

Could I ask you to "punditize" a little on the "challenging puzzle with help" versus the "easy puzzle that makes people feel clever" controversy?

I cannot accept the premise. I don't believe puzzles are a "one size fits all" medium. As to hints & answers, if a person is inclined to go online for quick solutions, they will do so. I feel it's better to offer them myself and then maintain some quality control over how that particular hint or answer is presented.

Still hoping to make Fool's Paradise, and 3's a Crowd and 3's the Charm?

Since I am a one-man band, I can see no reason why another product would not also take me 10 years to complete. This is not a casual decision.

The time between your two Fool games has bridged the gap between Apple's dominance in the graphic computer game world.
In the 80's the b&w GUI Mac was a miracle. And now the iPad is ruling the gaming 4G LTE waves. Any future plans to knock on the door of the App Store with Fool and His Money?

I do not own a smart phone or a tablet. I never owned a laptop. I like my computers heavy and my monitors huge. Why? Because when I am done with the computers, I can turn them off and walk away from them and do other things. Therefore, it is unlikely I will pursue any projects with equipment I do not own.

For years you've been generously offering your earlier games, Fool's Errand, At the Carnival, and 3 in Three, for free download on your Fool's Gold website. But a Classic Mac or a facility with emulators is required. All three seem ideal for the iPhone/iPad. Is converting them feasible? Weren't they all built on and for the original Macintosh?

Yes, Fool was made on a Mac 512K and Carnival & Three on the first color Mac. Converting them would require a total make-over. I'd rather get a hundred thousand paper cuts on my face.

Hey, I just realized the Amiga version of Fool's Errand is in color! When did that happen? You know, I actually own an original Amiga. No hard drive, 256k RAM. But I'd need an Amiga emulator for Windows to get this version to run nowadays.

The IBM, Atari, and Amiga conversions were all in 16-color, and have over a hundred errors and bugs, never fixed. Only the Mac original was black & white . . . and it is my favorite.

Isn't the iPad the second chance all these early PC and Mac games have been waiting for?

I'd prefer to work on new projects rather than rehash the old ones.

Dare I ask the same about Merlin's Apprentice and Labyrinth of Crete? Two more excellent puzzle games you made for the Philips early CD-i game system. Convertible? Or nigh on impossible?

I have the Philips products well-documented on DVD. The computer assets, owned by Philips, are lost and gone forever.

So, where do you go, creatively, from here?

A decade is a long time, and yet, being at the end of that decade, it seems like yesterday (as I say in The Prologue). I am delighted with The Fool and His Money. It has surpassed all my expectations. Creatively, I shall be fixing what's broken on my website and answering e-mail well into the New Year. If I sound unsure of the future, it's because I feel like I just went 15 rounds with Mike Tyson and somehow I'm still standing. This is not the ideal time to ask me whether I want to step into the ring again. By Spring 2013, I expect a full recovery.

And where are you going for your well-earned vacation?

I plan on a splendid New England Halloween, Thanksgiving, and Christmas.

Thank you, to one and all.
计算机
2014-35/1057/en_head.json.gz/2440
Classification Of Computers

According to the U.S. Census, "Forty-four million households, or 42 percent, had at least one member who used the Internet at home in 2000" (Home Computers 2). Today, no doubt, even more family members in the United States use computers. Most people are aware of the desktop computers which can be found in the home and in the workplace. What are the different types of computers and what are their purposes? Computers can be classified into three different categories: home computers, portable computers, and business computers, including workstations and supercomputers.

First, what is a computer? "While the term computer can apply to virtually any device that has a microprocessor in it, most people think of a computer as a device that receives input from the user through a mouse or keyboard, processes it in some fashion and displays the result on a screen" (What are the Different p. 1). Home computers are being used by children, teenagers, and adults. The PC or personal computer is designed to be used by one person. A Mac is also a PC, but most people associate the term with machines running Windows software such as Windows 98, Windows 2000, or Windows XP. A PC is actually a desktop that is designed to be used in one set location. "Most desktops offer more power, storage, and versatility for less than their portable brethren" (What Are p. 3). Many desktop computers are used at home and at work. Various types of software have been designed to meet individual needs of the computer user. Home computers or PCs can be used for various purposes such as education, work at home, personal communication through e-mail, gaining knowledge about different topics, finding recipes, and even playing games.

The second classification of computers is portable computers. This classification includes laptops and palmtops. The personal digital assistant or PDA was designed to help people stay organized. This was expanded upon and now PDAs offer a variety of services. PDAs are "easy to use and capable of sharing information with your PC. It's supposed to be an extension of the PC, not a replacement" with many different types of services (How PDAs Work p. 1). Many PDAs are even capable of connecting to the Internet and acting as global positioning devices. Other portable computers are also available. One of these is the palmtop. "A pocket computer has to have a small, light batteries that last a long time so that the whole computer is light and small enough to be carried around in someone's pocket" (Types of Computers p. 2). Palmtop computers do not have keyboards. They are often designed for the user to use special pens or touch-sensitive screens. "Palmtops are tightly integrated computers that often use flash memory instead of a hard drive" (What are the Different p. 5). Most palmtops are the size of a paperback book or smaller. Usually the palmtop computer is designed for specific purposes such as games or personal memory devices. Another portable computer besides the palmtop is the laptop, which is smaller than the desktop. More and more people are using laptops instead of desktops. These are portable computers that are similar to desktops with many of the same functions. Laptops "integrate the display, keyboard, a pointing device or trackball, processor, memory, and hard drive all in a battery-operated package slightly larger than an average hardcover book" (What are the Different p. 4).
Laptops offer the convenience of use in different situations such as on an airplane, in front of a television set, or in a motel room. "Modern laptops have floppy disks, CD-ROM drives and CD re-writers, and even DVD drives" (Types of Computer p. 4). Laptops often have a mouse and fully functioning keyboards. Laptops are often used by business people who rely on computers to keep them in touch with their companies and clients. "The main advantage of a laptop is that the person using it can have all the programs and data from their desktop computer on a portable computer" (Types of Computer p. 6). The fact that it is a portable computer designed to be used anywhere makes the laptop a favorite of many people.

The next classification of computers is the office computers, such as workstations or supercomputers. What is the difference between a workstation and a desktop computer? A workstation is "a desktop computer that has a more powerful processor, additional memory and enhanced capabilities for performing a special group of task, such as 3D Graphics or game development" (What are the Different p. 6). Another computer under this classification is the server or network. A server is "a computer that has been optimized to provide services to other computers over a network. Servers usually have powerful processors, lots of memory and large hard drives" (What are the Differences p. 7). Networks combine several computers in an office or building space. Also under the classification of business computers is the supercomputer. This type of computer is more expensive than any of the others, often costing millions of dollars. "Although some supercomputers are single computer systems, most are comprised of multiple high performance computers working in parallel as a single system" (What are the differences 10). Supercomputers are usually used to predict weather, handle bank accounts, or manage insurance details.

Three classifications of computers are home computers, portable computers, and business computers. The desktop computer is usually found in homes or small offices. Portable computers are becoming more accepted, with people buying more laptops and PDAs. Business computers are usually known for network servers and supercomputers. Computers are used by children, teenagers, and adults, with many children and teenagers having desktop computers in their rooms and many teenagers and adults finding PDAs necessary as a part of their lives.

Works Cited

"How PDAs Work." 1 May 2005.
"Types of Computer." 1 May 2005.
"What Are the Different Types of Computers?" 1 May 2005.
计算机
2014-35/1057/en_head.json.gz/4513
My Only, a non-linear new pixelated horror

4:16 PM

Ah yes, we have already heard of Lord Gavin Games, but for those who can't remember, he's the one who made the great Kilobyte a while back. Yesterday, his latest game, named My Only, was finally released, still following the pixelated horror path he has chosen and, obviously, still completely free of charge. And although the general formula is still the same, it's undeniably clear that My Only represents his personal maturation in terms of development skills and storytelling. So, despite it having a few downsides that I'll make sure to include at some point, here are my first impressions.

As I previously stated, it's a brand new 3D pixelated horror developed in Game Maker, but unlike other games that would fit in this sub-genre of horror games, it isn't linear, and it offers a wide range of choices that will ultimately shape the way you reach the end. The place in which the game takes place looked like a park to me, but then some clues seem to suggest that we're in a graveyard. It rains, and the opaque colors that surround you are particularly soothing, just like the background music, which by the way is composed by Vonyco: great stuff. But that's just the beginning, because as you move forward you get to deal with the darker side of the game; whether you choose to go into the depths of the dark cellar or through the maze of trees, it doesn't matter. At some point, the time for jumpscares will come.

The story is quite intricate. You seem to have lost a loved one, but then something tells me that you didn't simply lose her: are you somehow involved in the whole thing? I guess that the only answer lies at the end of the game. The moment when you'll open the gate will perhaps unveil what's behind this mysterious tale, but since I'm still struggling to finish it, I can't really tell.

The game comes with a main mechanic very well known to us: the concept of gathering pages scattered around the environment was about to put me off, but fortunately, this time the 12 pages are actually needed to add some more elements to the story that, otherwise, would look just like a real mess. On the other hand, I found My Only a little bit hard to get into; obtaining a key triggers a series of (damn scary) unidentified events, and while I know that they are related to the storyline, they just seem a bit too random to be fully appreciated.

If the question were: Did you like the game? I would honestly answer: Not as much as I was hoping, keeping in mind that I'm nowhere near finishing it. But the amazingly spot-on atmosphere, combined with the aforementioned strange events, often leading to a death, did scare me so bad! Which is what the game is all about, eventually.

My Only can be downloaded absolutely for free from GameJolt. Don't underestimate it, and be ready to face the ghost that's patiently waiting inside.
计算机
2014-35/1057/en_head.json.gz/4669
GDC 2011: David Cage Encourages Developers To "Forget Video Game Rules!"

Posted March 3, 2011 - By Dennis Scimeca (Guest Writer)

The Game Developers Conference is, in large part, about game designers getting a chance to speak to the next generation of game developers about how to design and execute video games. I wish someone had told David Cage this before signing him up for a panel at GDC 2011. I spoke to other journalists after attending this panel to see whether Cage was spouting off the same old lines he always does, or whether this was something new. One person I spoke with said that it sounded like David Cage, only much more direct. To wit: Cage doesn't want the next generation of game developers to bother making games. Not really. Want to know more? Read on.

Cage began the panel with what was ostensibly meant to establish his street cred as a game designer: an 89% score for Heavy Rain on Metacritic, the commercial success of the title with 2 million copies sold worldwide (which, to his credit, he noted wasn't actually a very big number compared to AAA titles), industry accolades, and the claim that 75% of Heavy Rain players actually finished the game versus an industry average of 20%-25% for other titles. While I wasn't pleased that his panel began as a virtual press kit for Heavy Rain, I'd have preferred it if Cage had stuck to that vein of discussion rather than move on to launching an assault on the entire video game industry that didn't even make sense much of the time.

To wit: Cage argues that most video games are designed for teenagers, because they are based on violence and physical action, i.e. shooting or platforming. He felt that this makes video games meaningless and emotionally limiting. On the first point, I believe the average age of the gamer nowadays is around 35 years old. On the second point, we can identify plenty of games that involve "typical" mechanics by Cage's definition which most certainly don't fit his description. Half-Life 2, Enslaved: Odyssey to the West, and Mass Effect 2 immediately come to mind.

Glimmers of valid industry criticism broke through Cage's panel now and again. To a point, video games have been based on the same paradigms for 30 years. Technology might actually have advanced far quicker than game design principles. These points become frustrating when Cage then responds to his concerns by advocating a complete abandonment of video games as we know them. And I quote from some of the PowerPoint slides that accompanied his lecture:

"Game mechanics are evil!"

"Forget Video Game Rules. Mechanics, levels, boss, ramping, points, inventory, ammo, platforms, missions, game over, [and] cut scenes are things from the past."
It doesn't work, because he or she lacks the specialized skills required. When Cage says that we shouldn't tell stories from cut scenes I can just roll my eyes and think about the tons of cut scenes in Heavy Rain. But when he wound up turning the latter part of his panel into a lecture that sounded more like one of my screenwriting classes in film school versus a discussion of game design, I just wanted to stand up and say "I know those who can't do, teach, but the story in Heavy Rain wasn't very good, Mr. Cage, so why are you presuming to lecture this audience about how to tell stories?"

Towards the end of his lecture, Cage mused aloud "Is Heavy Rain a video game? I don't know, and I don't care." And that's precisely the problem. Heavy Rain was an adventure game at best, but if we're just going to call it "interactive drama" or whatever genre Cage would like to coin for his work, he may as well just say he's creating digital Choose Your Own Adventure books. I wouldn't call those games, either.
计算机
2014-35/1057/en_head.json.gz/4716
Steam for Linux officially launched

Special offers for the official Linux launch

With beta testing now officially completed, Valve Software has released the Linux client for its Steam game delivery platform. The software is available to download from the Ubuntu Software Centre and Valve has provided a downloadable deb package. The company has only recently relicensed the client with provisions that allow it to be included in Linux distributions. Valve recommends Ubuntu, but support for other distributions has been further improved during public beta testing, which started in December.

Valve has Half-Life, Counter-Strike 1.6, Counter-Strike: Source and the free-to-play Team Fortress 2 available on Steam for Linux. For a limited period, Linux gamers playing TF2 will receive the Linux mascot Tux as in-game free content. To celebrate the launch, Valve is offering between 50% and 75% discount on Linux titles bought in Steam until 21 February.

[Image: Steam is now available from the Ubuntu Software Centre]

Users who purchase a game on Steam can play it on all available platforms ("buy-once, play-anywhere"). Around 60 games are currently available for Linux; these are marked with a penguin and are listed on a dedicated Linux page. In many cases, however, the system requirements for specific games are missing. The spectrum of the available games …
Pan-Am Dynamic List (PDL) Project Home Page

Welcome to the home page for Pan-Am Internet Services' Dynamic List, a list of home dial-up, home broadband and similar networks. Internet providers and e-mail network administrators use this list of networks to prevent e-mail abuse, more commonly known as spam, originating from people who try to abuse e-mail without using their own Internet provider's mail system in the process.

Update 08 MAR 2007: The Pan-Am Dynamic List project has outlived its usefulness and has been terminated. Some of you may have noticed that the listings were removed yesterday; this is an attempt to make sure your e-mail filters were cleared of PDL data automatically, but we will continue to maintain a "blank" database until the end of 2007 so you have a chance to remove the PDL from your mail systems. If you have backups of this data, please do not use it for junk e-mail filtering, as it will eventually fall out of date unmaintained.

In the PDL's place, please consider the Spamhaus Policy Block List (PBL) project. Its goals and charter are very similar to the PDL's: Spamhaus PBL. Its usage is quite different from the PDL's, however, as it is maintained like a normal DNS-based blocking list. You can substitute pbl.spamhaus.org for pdl.invalid transparently if you were maintaining a local DNS zone built from the PDL.
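For readers who have never worked with a DNS-based blocking list like the PBL, the sketch below shows the conventional lookup a mail filter performs: the connecting client's IPv4 octets are reversed, the list's zone name is appended, and a successful A-record lookup means the address is listed, while an NXDOMAIN answer means it is not. This is a hedged illustration of the general DNSBL convention, not code from the PDL or from Spamhaus; the helper name isListed, the use of POSIX getaddrinfo, and the simplified return-code handling are all choices made for this example.

```cpp
// Minimal sketch of a DNSBL lookup against a zone such as pbl.spamhaus.org.
// Convention: for IPv4 address a.b.c.d, query the name d.c.b.a.<zone>;
// if the name resolves, the address is listed; NXDOMAIN means it is not.
// Real filters also inspect the 127.0.0.x answer to distinguish listing types.
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <cstdio>
#include <string>

bool isListed(const std::string& reversedOctets, const std::string& zone) {
    std::string query = reversedOctets + "." + zone;  // e.g. "2.0.0.127.pbl.spamhaus.org"

    addrinfo hints{};
    hints.ai_family = AF_INET;       // DNSBL answers are IPv4 A records
    addrinfo* result = nullptr;

    int rc = getaddrinfo(query.c_str(), nullptr, &hints, &result);
    if (rc == 0) {                   // the name resolved: the address is on the list
        freeaddrinfo(result);
        return true;
    }
    return false;                    // EAI_NONAME / NXDOMAIN: not listed
}

int main() {
    // 127.0.0.2 (octets already reversed here) is the address DNS-based lists
    // conventionally keep listed for testing purposes.
    std::printf("127.0.0.2 is %s\n",
                isListed("2.0.0.127", "pbl.spamhaus.org") ? "listed" : "not listed");
    return 0;
}
```

A mail server would run a check like this for each connecting client and reject or flag mail from listed addresses. The PDL, by contrast, was distributed as data from which administrators built their own local zone, which is the difference in usage the update above alludes to.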
Gone but not forgotten: Looking Back on the Studio Closures of the Great Recession

Posted by: Jeremy M. Zoss

It wasn't that long ago that the video game industry was considered recession-proof. Game sales were on the rise year after year, so many thought that gaming would weather the financial storm without serious casualties. Of course, that turned out not to be the case. Over the last few years, countless studios have felt the pain of our troubled times. Well-known developers like Factor 5, 3D Realms, and Microsoft's ACES Team shut down. Smaller studios like Underground Development, Luxoflux, Paradigm Entertainment and more were closed by the publishers that owned them. Many more studios suffered massive reductions in staff. Yet out of these unfortunate circumstances come new opportunities; several new development houses have been born out of the ashes of the old. GameZone spoke with several former staffers from acclaimed developers Ensemble Studios and Pandemic Studios about their memories of their time at the companies, what their losses mean to the industry, and what the future holds for those who were affected.

Of all the studios that shut down over the last few years, the closure of Ensemble Studios was amongst the least expected. The critically acclaimed Age of Empires and Halo Wars developer had a great track record of quality games that sold well, reviewed strongly and won awards. None of that was enough to prevent its closure; former Ensemble luminary Bruce Shelley admits the company was perhaps too specialized, too expensive and had too many costly, unproduced projects. Fortunately, out of the demise of Ensemble were born several new studios, including Robot Entertainment, Bonfire Studios, Windstorm Studios and NewToy.

"It was really an amazing experience," says David Rippy, the ex-Ensemble employee who now serves as President of Bonfire Studios. "I had the pleasure of working at Ensemble from day one and watched it grow from a few guys experimenting with a WinG tank demo into a really well-respected game company. Hardly anyone ever left Ensemble, so it truly felt like family. Tony Goodman (our studio head) created an environment and culture where people actually enjoyed going to work every day and even hung out on the weekends. We had a movie theater, arcade games, pool table, gourmet food … you name it! We certainly worked hard and crunched around major milestones, but we did it because we loved the games we were making. I think most former ES-ers will remember it as a really cool place to work, a great group of people who were completely committed to the company and their craft, and hopefully some of the most rewarding years of their life."

"Without question, the people are what I miss the most," Rippy continues. "Both of my brothers (Chris and Stephen Rippy) worked at Ensemble, and I worked with many of the original ES guys in other industries before Ensemble was formed. There were lifelong friendships forged over the years and even two marriages. The roots for me (and I know for others) were very deep at ES. On a happy note, I do get to see about 30 former ES employees at Bonfire every day, and most everyone else from Ensemble has stayed in the Dallas area."

Rob Fermier, another former Ensemble employee who now works as a Lead Programmer at Robot Entertainment, agrees that Ensemble was made great by the people who worked there.

"Ensemble was rare in that most of the people working there had been working together for many years, with a great deal of continuity," said Fermier. "Being able to establish such deep working relationships with people was incredibly valuable, and we had strong bonds to each other and to the studio. I'll most miss that sense of team that we had – a well established development process, a deep understanding in our area of expertise, and a strong sense of studio identity. Such things take years to build, and once gone are lost forever."

As with any team that spends years working together, both Rippy and Fermier have fond memories of some of the good times at Ensemble. "As hard as we worked, we really knew how to let loose at our release parties and industry conferences," says Rippy. "It's largely a blur at this point, but I have great memories of a Van Halen tribute band playing at one release party, being handed the first copies of Age of Empires before it hit the store shelves, winning several 'Game of the Year' awards, bumping into fans that tell us that Ensemble's games were their favorite of all time or inspired their kids to do better in school, releasing the first successful RTS game on a console, and finally, the company pulling together to make Halo Wars a great game even though we knew we were being let go at the end of the project."

For his part, Fermier fondly recounts "When we made some drastic changes in our internal game lineup to enable us to work on our 'dream games' - a dramatic move most companies wouldn't even consider, pulling all-nighters to get game features in on Age of Mythology because I was so excited about the gameplay, and going to E3 and GDC as a group and celebrating the commercial and critical success of our games."

Another well-known developer that recently closed its doors was Pandemic Studios, the respected action developer that first merged with BioWare and was, in turn, purchased by Electronic Arts. Not long ago, a group of former Pandemic employees formed Downsized Games, a small team currently working on a new iPhone action game. Several members of the Downsized crew were happy to share their thoughts on Pandemic, and like the Ensemble veterans, they have many fond memories of their experiences.

"The thing I will miss the most about Pandemic is the studio culture they promoted there," says Manny Vega of Downsized. "They really went out of their way to make the studio feel comfortable so that people working there would be able to work and play. I made some great friends and worked with some very talented people and I miss going to the office, which I can't say about my previous jobs. You could feel that culture slowly slipping away as bigger and bigger fish bought us and started replacing people, but we held on for as long as we could and did our best to ignore it."

"I really enjoyed working with the many talented people I met at Pandemic over the course of seven years," adds Downsized's Andrew Mournian. "I will miss working on games in the Mercenaries series and the unique challenges they presented. I still think that series holds so much potential for an awesome game. Now that it's over, I think most Pandemic folk are just trying to find new homes at studios scattered all over. As for Downsized games, I'm finding working on the small screen is a lot of fun; it feels like going back to development on the PS2 or something."

Downsized's Ariel Tal echoes the sentiment. "Pandemic had amazing staff and a drive to create fun games. I will miss the passionate discussions we had, ranging from shader tech to Perforce checkins. Pandemic had an enormous amount of talent and we broke new ground in tech and ideas. I'm sorry we didn't get to see some of that talent come to fruition when we closed down. But slowly we are moving on to bigger – well, sometimes downsized – opportunities."

Unfortunately, loads of talent and a positive work environment aren't enough to maintain a business, and Manny Vega has plenty to say about what caused the downfall of Pandemic. "One of the main flaws of the company was that it got too big too fast," he says. "The transition between Mercenaries and Mercenaries 2 on next-gen consoles, coupled with being courted and bought out by two companies in under three years - I think it was too much to bear. Many people seem to be under the impression that the people working at Pandemic were to blame for the downfall of the company, but the truth is that milestones, direction and focus were changed regularly and it made completing any project difficult. This is a business first and foremost, but it's also one that is run by creative, passionate people, and many times the two do not mix. It's something the entire industry is starting to see right now: creativity and the bottom line are not one and the same. I don't blame anyone in particular for Pandemic's closure; it was an intense situation that had only two possible outcomes: stunning success or complete closure. Middle of the road would have been unacceptable, and I can confidently say that everyone employed at Pandemic gave their all, and left it all on the table."

Of course, it wasn't just Ensemble and Pandemic that shut down recently. There are plenty of other studios that closed down and sent their former employees scrambling for new jobs. While it's always sad to hear about a studio closing, the former Pandemic and Ensemble developers we spoke to seem to be optimistic about the future. After all, the new companies that have arisen in the place of the old bring with them new opportunities and challenges.

"Downsized is about making small games that make you forget how 'hardcore' a gamer you are and let you enjoy the blissful ignorance of having fun," says Vega. "Everyone always says they want to make their own game and that they have the best ideas, but few get the chance to make it happen."

Bonfire President David Rippy is also upbeat on the multiple companies that sprang up from the remnants of Ensemble. "The worst thing that could have happened would have been for all the talent at Ensemble to just dissipate," he says. "Competent teams that have worked together and shipped games for over a decade are few and far between. My hope is that the companies that formed as a result of Ensemble Studios closing will go on to be big successes of their own. That would be great for us as individuals, and great for the health of the industry as a whole."
What Matters in IT: the Power to Change

By Ravi Koka, Polaris Financial Technology

How an emerging architectural approach can turn a 'commodity' into a strategic weapon.

This article is the first in a series of three on ACM — architecture for continuous migration. [Click here to read part 2: What Matters in IT: Solving Legacy with ACM]

Today, nearly a decade after Nicholas Carr's famous article and book, the question that really matters is not "Does IT Matter?" For CEOs and CIOs it is "How can we spend our IT budget in ways that matter?" As Carr predicted, most firms have moved beyond the age of spending extravagantly. But it is more important than ever to think and spend strategically. Cost-cutting alone is insufficient, because while the times are tight, they also keep changing. Thus the great need is for strategic investments that can do double duty: reduce IT costs, while also enhancing the firm's ability to adapt and compete in an environment where business needs, platforms and technologies all change constantly.

What many people do not fully grasp is that a focus on cost-reduction alone actually imposes multiple costs. The costs include actual dollar costs for system maintenance and upgrades. But they also include the opportunity cost of not being able to adapt quickly and benefit fully from new business models, as well as the cost of losing business to, or even being driven out of business by, competitors who are more nimble.

Simply maintaining and updating legacy systems can easily consume most of a company's IT operating budget. The estimated spend per year on IT globally is around $2 trillion. A study by A.T. Kearney of 200 businesses in the U.S., Canada and Europe concluded that only 20% of this IT investment goes towards innovation. The rest is pure overhead — that is, until the firm wants to do something significantly new. Then the legacy problem becomes not just overhead, but an obstacle.

Established companies have several generations of applications, each designed with technologies that were both state-of-the-art for their time and appropriate to the business needs of the time. The conundrum for most companies and CIOs is how to be more agile without disrupting the current operations, which are legacy based. A "big bang" replacement is risky and expensive — the few firms that have tried it have not had much success — and given that business and technical requirements will keep changing, it would be prohibitive to do big bangs repeatedly. Few recommend such a solution today. Companies would be better served by accepting change as a permanent factor and starting to build in agility gradually and iteratively.

Architecture for Continuous Migration

There is an approach that does all of these things remarkably well. For want of a better name, the approach can be called ACM, or architecture for continuous migration. ACM is not proprietary and requires no products or services from any particular vendor. It's fair to ask that if ACM is non-proprietary, why aren't more firms adopting it? For one thing, it is still early in the game in terms of use of the approach in the field. Numerous studies of innovation have found that fundamental innovations typically take at least five to 10 years, and up to 20 years, to reach mainstream adoption. The paper that articulated the principles, Khosla and Pal's "Real Time Enterprises: A Continuous Migration Approach," was published in 2002.
ACM requires a solution architecture and design that separates the various layers in application software – user interface, business logic and data access. When architecture for continuous migration is implemented optimally, all five of these elements are present:

1) Federation as opposed to migration or integration: in other words, a federated SOA approach.
2) An extensible information model, meaning an open-architecture, industry-standard schema for the object and data model.
3) An externalized rule framework. A rules framework is a set of externalized rules that are not embedded in code, and are easily modified by business users.
4) A layered architecture, with loose coupling of applications. Loose coupling of the user-interface, business-logic and data-access layers allows changes to be made to one layer without disrupting the others (a short code sketch of this layering appears below).
5) Component-based development: using best-of-breed, open-architecture, pre-built business components that one can buy and assemble.

Companies have focused on ways to reduce or eliminate various basic costs of running the enterprise. But this approach has its limits. Frugality and agility are distinct and potentially antagonistic objectives. Furthermore, this type of cost-cutting is not the only way to save. ACM is a rest-of-the-way approach that can build on more typical kinds of savings, in the process of making the firm more agile and competitive.

Business and Technical Requirements Are Constantly Changing

The reality for most firms is a constant re-building in the face of constant change. Every firm has legacy systems, including firms launched last week. The firms with the oldest systems are always at a disadvantage in terms of the legacy complexity they must deal with, since they tend to have a mix of architectures and systems conditioned by whatever was state-of-the-art when they first built their IT enterprises, plus a sampling of everything since. That includes, in many cases, packaged ERP solutions that were installed in hopes of fixing the problem "once and for all" — which doesn't happen, because the ERP package itself "fixes" in place a set of systems which later become obstacles to further change. The same holds true for middleware installed at various times in attempts to integrate everything.

Every firm, however, can strive to grow more agile at less cost. Consider an IT "investment pyramid" built up as follows. All firms start at the bottom with the infrastructure mix they happen to have. As noted above, all are not on equal footing in this regard, but everybody starts with what they have. In the next step up — which most have taken, or are taking — the goal is to cut the cost of running the enterprise as it is currently configured. That's great, but it only takes you part of the way. There is plenty of upside left, but to get at it, one must cross a divide. That requires a shift of mindset from a focus on solely reducing costs to a focus on agility, with the knowledge that one will have to keep changing to stay competitive. Therefore, from the middle of the pyramid (which is also "the middle of the pack" in terms of competitiveness), a logical next step up would be to take some of the savings from the basic cost-cutting and start investing in architecture that can support continual change. Further savings will accrue. The final step is strong change management and skills training, to go along with the increased pace and degree of innovation that the firm will now be capable of.
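To make the fourth element above concrete, here is a deliberately small sketch of a layered architecture with loose coupling. It is illustrative only: the class names (PolicyRecord, PolicyRepository, QuoteService) are invented for this example, the language is C++ even though ACM itself is platform-neutral, and no vendor product is implied. The point is simply that the business layer is written against an abstract data-access interface, so the data layer can be migrated, say from a legacy store to a new service, without touching the layers above it.

```cpp
// Illustrative layering sketch: the business layer depends only on an
// abstract data-access interface, so the data layer can be swapped out
// (legacy database today, new service tomorrow) without touching callers.
#include <iostream>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Data-access layer: the only surface the upper layers are allowed to see.
struct PolicyRecord {
    std::string id;
    double premium;
};

class PolicyRepository {                       // abstraction, not an implementation
public:
    virtual ~PolicyRepository() = default;
    virtual std::vector<PolicyRecord> findByCustomer(const std::string& customerId) = 0;
};

// One implementation wraps the legacy data store...
class LegacyPolicyRepository : public PolicyRepository {
public:
    std::vector<PolicyRecord> findByCustomer(const std::string& customerId) override {
        // a real implementation would query the legacy system here
        return { {"LEGACY-" + customerId, 120.0} };
    }
};

// ...another could wrap a new federated service; neither change touches callers.
class ServicePolicyRepository : public PolicyRepository {
public:
    std::vector<PolicyRecord> findByCustomer(const std::string& customerId) override {
        // a real implementation would call a web service here
        return { {"SVC-" + customerId, 115.0} };
    }
};

// Business-logic layer: written once against the abstraction.
class QuoteService {
public:
    explicit QuoteService(std::shared_ptr<PolicyRepository> repo) : repo_(std::move(repo)) {}

    double totalPremium(const std::string& customerId) {
        double total = 0.0;
        for (const auto& p : repo_->findByCustomer(customerId))
            total += p.premium;
        return total;
    }

private:
    std::shared_ptr<PolicyRepository> repo_;
};

int main() {
    // Swapping the data layer is a one-line change at composition time.
    QuoteService service(std::make_shared<LegacyPolicyRepository>());
    std::cout << service.totalPremium("C42") << "\n";
    return 0;
}
```

The same separation applies between the user interface and the business logic, and it is what makes the gradual, piece-by-piece migration described in this article practical: each layer can be replaced on its own schedule.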
With almost any technology change, including ACM, the human factors have to be addressed to get the full benefits. Users have to be shown new system features, learn how to take advantage of them, and be helped with the transition from old ways of doing things.

About the Author: Ravi Koka is CTO, Insurance & Portals, of Polaris Financial Technology Limited, a provider of enterprise software for the banking and insurance industry. Prior to joining Polaris, Koka founded SEEC Inc. and successfully completed the company's IPO on Nasdaq in 1997. Koka started his career with System Development Corporation (originally a division of RAND) and was an adjunct associate professor at CMU. Polaris Software Labs acquired SEEC in 2008.
Last Updated: July 12, 1999 EST
Contact me at: [email protected]

Greetings. You have entered a site in development. This site is dedicated to assisting you in completing Lucas Arts' new game, Jedi Knight: Dark Forces II™. My goal is to help you progress through the game when you are struggling, maximize your force powers as quickly as possible, and generally help you have as good a gaming experience as possible. The focus of this site has been expanded now that all walkthroughs and secrets are done. Sections discussing both general aspects of the game and multiplayer aspects are being added. The navigation side bar has been added to the left, but some links don't function yet.

The site will feature three key sections:
- Discussion of General Jedi Knight Features
- Assistance for Single Player Missions
- Analysis and Discussion of Multiplayer Games

This site will no longer be updated. Too much time has passed, and my interest in finishing the Mysteries of the Sith walkthroughs has diminished. After over 200 hours spent doing this site and playing the game, I am tired of it. But I don't regret it. I imagine many have already come to the conclusion that this site wasn't going to be finished, and I'm sorry, but such is life. Thanks for visiting, and I hope you find the information contained within still useful.

Overall, I find the site was a success considering its limited focus on the title. It didn't give daily news, it didn't offer new modes of play, or anything like that. It taught how to play the game, and I'd like to think it did that well. Maybe the site was a little bandwidth/graphic intensive, but it seems to have been a success.

© 1997-1999. Jedi Knight and Star Wars are registered trademarks of the Lucas Arts Entertainment Company. All images within the walkthroughs and secrets discussed on this page are copyrighted property of Lucas Arts Entertainment Company and are used here without permission to assist in the understanding of the issues discussed by the author of this site. The JediKnight.net name and logos are the copyright of JediKnight.net and have been used with permission. This guide is not associated with Lucas Arts in any way. The main background image was taken from Dr. Ozone's site, a bona fide Adobe Photoshop wizard. All images and information on this site are the copyright of Scott St. Jean unless noted otherwise. If you wish to copy any material found on this site, please contact me. All graphics on site were made with Adobe Photoshop 4.0.
Seven More Questions for SAP’s Co-CEO Bill McDermott January 14, 2013 at 6:32 am PT The last time we heard from SAP co-CEO Bill McDermott, he talked a great deal about a then-upcoming product strategy called HANA. The idea was to move all of SAP’s existing business applications into a high-performance appliance, where the database runs in memory, and is more responsive to requests. In the 15 months since that conversation, SAP has been on the move. HANA is not only done, but all of SAP’s primary applications are running on it. SAP has pivoted from running all of its applications in an old-school on-premise fashion to offering them both in the cloud and on premises, or on a mixed hybrid-cloud basis. It also made a significant acquisition of SuccessFactors, the cloud-based human capital management (HCM) company. SuccessFactors is now a significant business unit within SAP, and includes all of its previous HCM software assets, and it competes with that market’s fast-moving cloud player, Workday. Last week, SAP hosted a global launch event to announce that the three-year effort to convert its entire suite of business applications to the cloud — and to the HANA architecture — was complete. It also provided me an opportunity to catch up with McDermott in New York. Here’s a sample of our conversation: AllThingsD: Bill, the last time we talked about HANA, you hadn’t quite moved all your primary applications over to it. It’s not exactly a huge piece of your business yet, but let’s start there. How is HANA coming along? McDermott: We have 1,000 customers on it now, so it’s growing really fast. The last update we gave, we indicated that we think it could be a half-billion U.S.-dollar business, which would make it the fastest-growing software product in the history of the world. So it’s big. The last time we talked, you hadn’t quite moved all your applications over to HANA. The big one that was missing was the Enterprise Resource Planning piece. Can I assume that part of the news today is about that process being completed? That’s exactly it. The big news today is about the whole SAP suite being moved over to HANA. All the things that the business suite does — how you manage your supply chain and manfacture your products and get them to market, how you manage your people, how you manage your customer relationships, everything around you in that whole end-to-end value chain — runs in what we call the business suite. And we go to market with that suite in 24 industries, small, medium and large, all over the world. Now that whole suite runs on HANA. For the benefit of people who struggle with the idea of what the software actually does, can you give me a good example of who uses it, and how? We work with this company HSE24, it’s like a QVC in Europe. They are selling product on television, there’s a meter at the lower right-hand corner of the screen telling you how many of that item are left. That’s run on SAP software. When you call in to the call center, they already know from the sensors on the social networks, via HANA, what you’re likely to want. And then they can also do the upsell and the cross-sell. One of the customers that is going to be featured today is John Deere. We’ll talk about how they can, based on usage history and patterns, provide preventative maintenance on the things that will need it the most. There’s obviously more to it than simply running existing processes faster and cheaper and more efficiently, right? 
The wild part about all this is sort of like this: No one could have predicted that Disney would become the Disney we know today when Walt drew a picture of a mouse. What you have is the limitless potential. CEOs have the ability to rethink business models, based on having the speed and the insight and the simplicity to truly change how they run their companies and transform industries. You and I fly too much, and sometimes flights get canceled. It happens. If I have to get out of Moscow and get back to New York, the airline can charge me more. If my original flight is canceled, and I’m on the line with three other people, you can get more money out of me. Dynamic real-time pricing can transform the airline industry. Is this all the result of intelligence you’ve brought from the applications themselves, that are getting a new benefit from being run in-memory on HANA? There are two ways to look at it. The in-memory architecture makes it fast, and simplifies it. The application makes it so smart. You’re combining transactions and analytics. And you’re also doing things we like to call “extreme applications.” You may be a big consumer products company, and you have trade promotions that go to different stores in different geographies. If you ask them how it’s going at a particular store in Brazil, they will have a hard time answering unless they’re using HANA, because it captures all the transaction data. They know exactly who’s buying what, and using which promotion or deal. Talk to me about the competitive landscape. Oracle CEO Larry Ellison loves to lob verbal grenades at you from time to time. Care to lob one back? In the old days, the answer would have been yes. What’s happened is that we take it a compliment when people try to spread fear, uncertainty and doubt. It’s a sign they’re worried. But they don’t have to be, because we’re open, and our most important mission is to make customers happy and fulfill their ambitions. We’re fully cooperative with Oracle, with IBM and with Microsoft. So, anything that a customer chooses to do with one of them, they can continue to do it, and we are highly supportive of that. Obviously, SAP’s applications can now run optionally in the cloud or in a mixed environment. But the pure-play cloud companies like Salesforce.com and Workday are certainly showing some strength. What sort of competitive threat are you seeing from them? I think SAP has responded in the cloud. SAP Cloud will take care of your customers on-premise or on-demand. We announced SAP Customer 360, and it runs on HANA. So it’s real-time, it’s predictive, and it’s in memory. If you want to buy it on a public cloud on a subscription basis like Salesforce, we now have it. Once people realize it’s running on HANA, we’re going to have an advantage. Salesforce has done a good job of building a large cloud company, but they have done it on an old architecture. You can’t do real-time analytics on the Saleforce.com platform. That’s a big Achilles’ heel. On talent, Workday is a good company, they built a good HCM solution. We bought SuccessFactors, and then we took all the assets of SAP’s existing HCM application and put them under SuccessFactors. So now, as it relates to people, wait until you see, in June, the list of companies who are running SuccessFactors. Workday had a great opportunity to go in where there was no competition, and we didn’t have a response. We had HCM, but it was all on-premise. The market wanted talent in the cloud. 
Now they are going against us, and there's a lot of competition.
PS2 REVIEW: ARMORED CORE 3

The more some things change, the more they stay the same. That hackneyed statement is so true I once tried to rewrite it so it would mean something completely different, but it ended up being more like the original than the original version. Go figure. Armored Core 3 has been changed, and it's closer to what I'm sure most players envisioned the first and second versions of the game to be like.

The Armored Core series has a following made up of those who braved the learning curve and stuck it out. The games are not easy to control, and there really is no reason why they should be so difficult. The digital control scheme does not feel natural and requires a lot of practice. The Cores are cumbersome at the best of times and can be out-maneuvered by enemies that possess the airborne grace of a hummingbird. AC3 has made a few changes which will bring back some of the Mech audience while preserving the integrity of the game for the legion of true believers.

The future is in the hands of the mercenaries known as Raven, a group of guns-for-hire that battle it out encased in robotic armored vehicles called Cores. The game offers you a third-person, 3D perspective of the action. The control system now includes the use of the left analog stick for control over the Y-axis and X-axis. You still have to use the shoulder buttons for pitch and strafing, but it's a move in the right direction, making the game a bit more user friendly.

A few new features include less restriction on weight, which enables you to add more firepower without sacrificing too much maneuverability, and a Drop feature which allows you to jettison empty weapons to gain back speed and flexibility. You can also store and operate up to three Cores and use them to perform specific missions. A Wingman is available to accompany, assist, and cover you on dangerous missions. Finally, an Exceed Orbit feature lets you assign weapons and other accessories to circle in orbit around your Core, thus allowing you access to more hardware without the added weight. It's like eating burgers that float around your stomach rather than make it bigger.

The game is more relaxed during the first few levels than the previous games. This helps to smooth out the learning curve. Also, the hummingbird precision of the airborne vehicles has been tamed for a flight pattern more in keeping with the physics of the craft. To assist you in the air and on the ground, there are more boost powers in this version of the game.

The Cores are the best looking of the series. They gleam playfully in the light, revealing the hard, cold metallic armor beneath the reflection of the sun. The action takes place on Earth, as opposed to Mars, and you can expect some really nice art for the background environments, which range from swampy hollows to futuristic cities.

I have just installed a Pro Logic II system in my living room, and this game takes full advantage of it. PL II divides the rear speakers into stereo, and the game really engulfs you in the atmosphere with sounds zipping around in all directions. The bass is incredible; it's so crisp and loud that if I lived in an apartment I surely would have been evicted two days ago.

Armored Core 3 may not be the easiest game to learn, but taking the challenge will pay great dividends; it may even spark a desire to play the series.

System: PS2
Dev: From Software
Pub: Agetec
Released: Sept 2002
Review by Fenix
OUYA Android game console now up for pre-order on Amazon

OUYA, the Android-based home game console that took Kickstarter by storm, is now available for pre-order on Amazon for those who missed out on the campaign. The cost is $99 for the unit, which includes the OUYA console and one controller.

The draw of OUYA is that anyone can develop and publish games for the console, and there's no huge financial barrier to entry for devs. This could mean that there will be just a bunch of random stuff, but it also means that you'll have more developers working on quality games--and for the first time on a home console, you'll likely see games as inexpensive as the ones you play on your iOS and other Android devices. OUYA is powered by a quad-core NVIDIA Tegra 3 processor and 1 GB RAM, with 8 GB of storage and 1080p output. Pre-order it now for $99 and it'll deliver in June, and don't forget to grab an extra controller.

Read More | OUYA pre-order

Ouya Android-based indie game console takes Kickstarter by storm

Are you bored and tired of the big players in the video game space failing to innovate in truly meaningful ways? Then you'll wanna meet Ouya, the Android-powered game console that will cost just $99 with a controller and connects to your television set just like your Wii U, Xbox 360, and PS3 do. The difference? Anyone can develop games for the Ouya console, and there's no huge financial barrier to entry. That means more quality indie games, likely much cheaper than you'd find on other home game consoles.

The product is designed by Yves Behar and team, the same folks who dreamed up the designs for the One Laptop Per Child OLPC computer and the Jawbone Jambox. On the inside it's powered by Android 4.0 Ice Cream Sandwich with a quad-core Tegra 3 processor, 1 GB RAM, and 8 GB of built-in storage. It also packs 1080p output over HDMI, Wi-Fi, and Bluetooth connectivity. Interested? You can head over to the Ouya Kickstarter page to pre-order one now. This could turn out to be a very big deal. Check out a video explaining the project after the break.

Read More | Ouya
Isolation — a societal issue in the computer age

You can't beat a system you can't understand

By Sam Bari

The computer age, coupled with a worldwide failing economy, has fostered many changes in the way we live as a society. The bottom line is that we are becoming less societal every day. Companies are downsizing by outsourcing work. More people are working at home because they can. Many businesses prefer a home worker to a person needing a desk and office space, purely for economic reasons. Any work performed on a computer can be monitored through login and logout times, tracked changes in word processing, or proof of work in the form of finished assignments. I have clients in other parts of the country that I have never met and rarely call, yet we have been doing business for nearly a decade by e-mail.

Without a doubt, the computer age has made humans less social. One-on-one interaction to establish a relationship is an exception in today's business arena. It was not that long ago when a one-on-one initial meeting was mandatory to give the involved parties a level of comfort by establishing good will. Water cooler conversations, coffee breaks, two-hour lunches and happy hour are all activities of a bygone era. We have entered an age where most corporate employees don't know one another other than through messaging online. Even personal relationships are often initiated on the Internet. Computer dating and matchmaking is a billion-dollar industry, more so now with the advent of computer video cameras and databases that verify identity and credibility.

The last bastion of societal interaction was the bar and restaurant. Everybody went to some version of "Cheers," where a bartender, waitress or at least some other patrons knew their names. Even that is possibly going in the history books with the growing popularity of computerized restaurants, where interaction with restaurant personnel is no longer required.

Inamo, a computerized Asian fusion restaurant in London, caught the attention of CBS and was featured on the Early Show a few months ago. The restaurant cuts out the middleman with a system that directly connects customers to the kitchen. Overhead projectors, touchpad tabletops and a computerized ordering system completely changed the dining experience, said CBS News correspondent Richard Roth. Ordering was done from the table's touchpad top and sent directly to the kitchen. Apparently, the restaurant works like an iPhone, with a digital dining menu that includes computer games and a map of the subway system. The wait staff brings out the food, helps with the technology and delivers the check. The restaurant was allegedly designed for diners that don't want to socialize.

Noel Hunwick, creator and co-owner of Inamo, told CBS that the experience is about customer control. "We wanted to put the customer in a position where they could order when they want, get their bill when they want, and at the same time really customize their entire environment," he said. I don't know why, but I suspect that the intent was to eliminate the surly customer, not the waiter that showed up at work hung over or in a bad mood. Anyway, the computerized atmosphere even extends to a digital tablecloth that works as a touch screen computer. The menu is displayed in plain view in front of each place setting, and diners can touch the screen to indicate their selection. Conversing with a waiter is unnecessary. The idea spread like the proverbial wildfire.

Now, computerized restaurants can be found in most metropolitan areas, where the highest technology is readily available. The ultimate in minimalism is supposedly being introduced to the public later this year. An eastern seaboard restaurant chain will allow customers to make reservations at home online, order meals from an online menu and pay by credit card in advance of going to the restaurant. The only conversation required is for customers to tell the hostess their names when they arrive.

If trends continue, social events will cease to exist. We already live like a society of cave dwellers. We stay in our individual caves and don't come out except to replenish supplies. It won't be long before people will go to social gatherings and not know what to do. They'll stand around and ignore each other because the reason for being there is obsolete. We have invented ourselves into a life of solitary confinement. Have the machines already taken over? I fear the worst if any more social interaction is eliminated from this system that we can't understand.
OptimalJ proves its case

Faster, better coding

IT-Analysis. On Monday (July 21st) Compuware announced version 3.0 of OptimalJ. This has some important new features, writes Phil Howard of Bloor Research. However, perhaps even more interesting is the simultaneous release of independent performance analyses that the company has commissioned into the performance benefits of OptimalJ compared to Integrated Development Environments (IDEs). These analyses indicate pretty conclusively that OptimalJ offers substantial advantages when compared to the more prosaic coding approaches.

Actually, Compuware is keen to point out that it is not really OptimalJ that has been compared to IDEs but the use of a model-driven architecture (MDA) in general. However, there is no getting away from the fact that it is Compuware's implementation of MDA (which is an OMG standard) that has proved to be so much more efficient than conventional coding.

Compuware has commissioned three different studies, of which two have reported. The first results are from The Middleware Group, which is very well respected within the Java community. The Middleware Group assigned two groups of similarly experienced developers to build the same J2EE-based application, one using OptimalJ and one using a popular IDE (though it is important to note that in the latest release you can use JBuilder, WebSphere Studio or Sun ONE Studio in conjunction with OptimalJ, as well as the built-in NetBeans).

The bald figures were that the IDE group took 507 hours to build the application, whereas the OptimalJ group took just 330 hours. Moreover, the quality of the code generated by OptimalJ was higher. Of course this doesn't count the training period for the developers using OptimalJ (who had not previously used it). On the other hand, it doesn't take account of the fact that those developers would subsequently be more proficient.

The second reference test was done by EDS, which undertook two investigations. In the first case, it looked at the number of lines of code that needed to be manually written.
Creating a Windows Forms Application By Using the .NET Framework (C++)

Updated: January 2010

In .NET development, a Windows application that has a graphical user interface (GUI) is typically a Windows Forms application. Development of a Windows Forms project by using Visual C++ generally resembles development by using any other .NET language, for example, Visual Basic or C#. Windows Forms applications in Visual C++ use the .NET Framework classes and other .NET features together with the new Visual C++ syntax. For more information, see Language Features for Targeting the CLR.

This document shows how to create a Windows Forms application by using several standard controls from the Toolbox. In the finished application, a user can select a date, and a text label shows that date.

Prerequisites

You must understand the fundamentals of the C++ language. For a video version of this topic, see Video How to: Creating a Windows Forms Application By Using the .NET Framework (C++).

To create a Windows Forms project

1. On the File menu, click New, and then click Project.
2. In the Project Types pane, click Visual C++ and then click CLR. In the Templates pane, click Windows Forms Application.
3. Type a name for the project, for example, winformsapp. Specify the directory where you want to save the project.

Form1 of the project opens in the Windows Forms Designer.
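The walkthrough would normally continue by dragging a DateTimePicker and a Label from the Toolbox onto Form1 and handling the picker's ValueChanged event so the label shows the selected date. As a rough, hand-written approximation of where that ends up, the sketch below builds the same form in code. It is illustrative only: the real template and designer generate this differently (a Form1.h with an InitializeComponent method, a separate entry-point file, and assembly references added by the project wizard), and the control names used here are invented for the example.

```cpp
// Form1.h -- a simplified, hand-written stand-in for the designer-generated form.
#pragma once

namespace winformsapp {

    using namespace System;
    using namespace System::Drawing;
    using namespace System::Windows::Forms;

    public ref class Form1 : public Form
    {
    public:
        Form1()
        {
            // The designer normally emits equivalent layout code in InitializeComponent().
            picker = gcnew DateTimePicker();
            picker->Location = Point(20, 20);
            picker->ValueChanged += gcnew EventHandler(this, &Form1::OnDateChanged);

            dateLabel = gcnew Label();
            dateLabel->Location = Point(20, 60);
            dateLabel->AutoSize = true;
            dateLabel->Text = "Pick a date above.";

            Text = "winformsapp";
            Controls->Add(picker);
            Controls->Add(dateLabel);
        }

    private:
        DateTimePicker^ picker;
        Label^ dateLabel;

        // Show the selected date in the label whenever the picker changes.
        void OnDateChanged(Object^ sender, EventArgs^ e)
        {
            dateLabel->Text = picker->Value.ToLongDateString();
        }
    };
}

// winformsapp.cpp -- application entry point, as the CLR template typically creates it.
#include "Form1.h"

using namespace winformsapp;

[STAThreadAttribute]
int main(array<System::String^>^ args)
{
    Application::EnableVisualStyles();
    Application::SetCompatibleTextRenderingDefault(false);
    Application::Run(gcnew Form1());
    return 0;
}
```

Built with the /clr compiler option, which the Windows Forms Application template configures by default, the program shows the long date string in the label each time a new date is selected.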
On standards and standards bodies

Sep 03, 2008 By David Lane

What does it mean to be open? My copy of Oxford defines open as: unconcealed circumstances or condition. Way back in the day when the GNU operating system was getting going, they coined the mantra: Free software is a matter of liberty, not price. To understand the concept, you should think of free as in free speech, not as in free beer.

Last month, I talked about transparency and how important it was in software and systems. Just as important are standards, and, more importantly, following those standards. Today, in Computerworld, a different issue has been raised: the value of standards.

Way back, last year, there was a ratification of a standard by the International Standards Organization (ISO), the same group of people that brought you the stupid label guy (ISO9000), IS-IS routing (does anyone really use it?) and, of course, the OSI stack (Please Do Not Throw Sausage Pizza Away). The standard that was ratified was the Open XML standard. Now, I am not enough of a geek to be able to accurately relate the arguments for the Microsoft (ratified) version and the non-Microsoft (not ratified) version that came to pass. I won't lob too many stones at Redmond (that bastion of standardization), but I will highlight one point. There are some countries who are less than happy with the ISO and, in fact, are so dissatisfied that they are questioning not only the Open XML standard, but the value of any of the ISO standards at a national level.

My father used to work for the telephone company, back before Judge Green broke up AT&T.
Over 23% of Steam Customers Running Windows 7, DirectX 11

Jansen Ng (Blog) - January 21, 2010 12:00 PM

Game developers should target DirectX 11

Steam is a digital gaming delivery platform developed by Valve, makers of the Half-Life series of games. The system was developed not only to reduce losses caused by piracy, but also to reduce the costs of shipping physical media and to reduce reliance on retailers. Gamers download the Steam platform from Valve's website and are able to purchase not only the latest games for download, but also a growing list of classic games as well.

The Steam platform collects data about the type of computer hardware customers are using, which is then used to make programming decisions. For example, over 80% of Steam gamers are using multi-core systems, with almost 24% using quad-cores. This means that there is a large user base who would be able to take advantage of multi-threaded gaming. Since most games take 2-3 years to develop, spotting a trend early can set developers on the right path, rather than having to patch it later.

One of the biggest questions surrounding the Windows 7 launch was whether gamers would adopt the new operating system or stick with the tried and true Windows XP. The latest Steam hardware survey for December shows that almost a quarter of Steam's user base has already adopted Microsoft's latest OS, and has been abandoning 32-bit Windows XP for 64-bit versions of Windows 7.

The success of DirectX 11 has been tied to the Windows 7 OS, but DX11 can also be installed on computers running Windows Vista. Over 50% of Steam users are capable of supporting DX11 on the software side through Windows 7 or Vista, but unfortunately Steam isn't keeping track of DX11 video cards yet. Over two million DX11 GPUs have been sold by ATI so far, and game developers have been taking note. There are currently over three dozen DX11 titles that are in development or have been announced.
'Ruthless' cost cutting coming to Navy IT

Thursday - 6/9/2011, 7:40am EDT

By Jared Serbu, WFED

The Department of the Navy's chief information officer said Navy and Marine Corps IT managers should expect to see "ruthless" internal cost cutting this year in preparation for significant budget cuts.

"We don't know how much that is yet, but it's going to be a big number," Terry Halvorsen said at the DON CIO's May 12 conference in Virginia Beach, Va. "There's no way it can't be a big number." Halvorsen's office posted an audio recording of the session on its website this week.

To achieve the cost reductions, he said, he's gotten the word from as high as the Secretary of the Navy that if the department is going to take risks with its IT systems, they shouldn't be the ones that directly provide warfighting capability. Instead, he said, the Navy and Marine Corps need to accept risk in their business systems as they look for ways to do things better and cheaper.

"And I fully understand sometimes it's difficult to separate business IT from the rest of IT and from warfighting IT," he said. "But I think we're going to have to make some attempts to do that. We're going to take risk. The one thing I don't want to do is to take risk when we don't have to in any area that affects, call it the tip of the spear, the edge of the battle, the thing that we are in business to do. It is to kill the enemies of the country. That is the business of this corporation if you wanted to put it in business terms."

He said the DON will take a ruthless approach to finding savings in the short term rather than relying on promises of future cost reductions.

"We spend X amount of dollars today. We are going to spend X minus amount of dollars next year," Halvorsen said. "Savings are what we take away. Does that mean we won't look at cost avoidance plans? No, we'll certainly do that. But when we do that, DON's going to be very ruthless in saying, 'OK, we gave you two dollars. You said if we gave you two dollars, in the next year, you would save us four. Not cost avoid, not maybe. You said you'd save us four. We're taking the four.' We are going to be ruthless, because if we don't do it, I guarantee it's going to be done for us."

The first job is getting a handle on how much the Navy and Marines actually spend on IT, something he said his office has been working closely on with the department's financial managers. But Halvorsen's office has identified cost centers that are prime candidates for efficiency gains. They are the usual suspects in government IT: underutilized data centers, expensive customized software, exploding bandwidth demands, inefficient software licensing practices and huge numbers of duplicative applications on the department's networks.

"We run at least seven—and maybe nine, depending on how you define them—records management and tasking systems," Halvorsen said. "We are going to one. It makes no sense. I get that they may be meeting somebody's requirement. The question is, is the requirement for maybe a small number of people worth the additional money that we're paying across the board? Not just in the cost of buying that new system, but the cost of sustaining it, operating it and securing it. I'm going to tell you the answer on that one is no. Records management is important, but if I miss something on records management, do you think anything happens to a Marine in the field? I don't either."

Halvorsen said the Navy and Marine Corps will set specific targets for removing applications from their inventory. As of now, they estimate there are close to 2,000 on the department's three main networks.

"We are going to not just put some controls on applications," he said. "We are going to put a money target that says X percent or X dollars worth of applications come off the system in fiscal year 2012. We're going to call it application rationalization, and it will be maybe one of the more unpopular things that's going to happen, but it's going to happen."

The prime targets will be applications or systems that overlap with one another.

"We run multiple systems that basically do the same thing because someone said they can't change their process," he said. "Well, we're going to do the math. And if the math says this 100,000 people are costing us an additional 25 percent against the 1.2 million people we serve, that system is gone. It's a math drill, and we take the money. You are going to see a lot of that this year."
Cyber Cynic Linux on the cloud: IBM, Novell & Red Hat By Steven J. Vaughan-Nichols March 16, 2010 1:30 PM EDT Today, Mar. 16, has been filled with Linux and cloud news — which is great, I guess, if you're ready to trust your data to the cloud. In case you don't follow Linux as closely as I do, here's the short version: Red Hat and Novell have joined up with IBM to provide a new open cloud environment that goes by the unwieldy name Smart Business Development and Test on the IBM Cloud. Besides running Linux, this new cloud service comes ready for work with more software partners than you can shake a stick at. The bottom line is that I don't care what capability you want from your server farm; chances are you'll find it ready to go on IBM's new Linux-powered cloud from either IBM, who is offering its full Lotus and WebSphere lines, or from one of its ISV (independent software vendor) partners. These services are scheduled to be made available in the second quarter of 2010 in the United States and Canada, with a global roll-out by year's end. IBM claims, and I see no reason to doubt them, that its cloud customers can cut IT labor costs by 50% and reduce software defects by 30% by moving development to the cloud. In particular, by moving internal development to the cloud, companies can save money and time otherwise spent on internal development and test environments. Specifically, IBM maintains that internal development and testing setups can eat up as much as 50% of a company's IT infrastructure while remaining idle 90% of the time. As proof of the Linux-powered cloud's advantages, IBM points to eBay's online payments division, PayPal, where developers are creating and testing payments applications for smartphones in IBM's cloud. In the above statement, Osama Bedier, PayPal's VP of product development, said, "We want to provide a very simple way to make payments available on all platforms, including mobile applications," and IBM's cloud delivers the goods. At the same time, Red Hat also announced that the Symbian Foundation, the non-profit devoted to fostering the recently open sourced Symbian operating system community, has adopted Red Hat Enterprise Linux for its private, cloud-based developer Website and server. I find that doubly ironic, since Symbian, an embedded operating system most commonly used in mobile phones, resisted going open source for ages; even now their main development site is apparently not going to be open to the public. Do you get the impression that I do that Symbian is still trying to gets it mind around the idea of open source? This is all good news for Linux vendors. It's just more proof that Linux is a strong mainstream server operating system. At the end of the day, I'm still left thinking that the cloud is just the latest version of that ancient idea that corporate computing is best done at a distance in some remote data center outside the control of a company. Whether you call it mainframe time-sharing, network computing, or client-server computing, it's always the same idea. It's a powerful idea, but today, in my home office alone, I have terabytes of storage, gigabit Ethernet, and one PC with an Intel 3.4GHz Nehalem 920 CPU, which does about 57 GFLOPs (Giga-FLoating point Operations Per Second). That makes my rather ordinary new desktop PC about as fast as the fastest early 90s supercomputers. If I have that kind of computing power in my house, why should a business with far more resources trust its data and programs to a cloud? This is an ancient argument. 
I can recall when people were debating whether PCs were just toys and whether all 'real' business computing should be kept on the mainframes and mid-range computers. Since all of us have PCs on our desks today, we know how that argument worked out.

I'm not saying that mainframes, distributed computing, and clouds, or whatever they'll call it next year, don't have a place in corporate computing. I am saying, though, that with inexpensive x86-based servers and PCs growing ever more powerful, I still don't see a compelling reason for most businesses to move their data-processing power from in-house server rooms and data centers to anyone's cloud.

What do you think?
For The Love Of Open Mapping Data
Submitted by Rianne Schestowitz on Sunday 10th of August 2014 08:02:17 PM
Filed under Interviews

It's been exactly ten years since the launch of OpenStreetMap, the largest crowd-sourced mapping project on the Internet. The project was founded by Steve Coast when he was still a student. It took a few years for the idea of OpenStreetMap to catch on, but today it's among the most heavily used sources of mapping data, and the project is still going strong, with new and improved data added to it every day by volunteers as well as businesses that see the value in an open project like this. To celebrate the project's birthday, I sat down with Coast, who now works at Telenav, to talk about OpenStreetMap's earliest days and its future. Here is a (lightly edited) transcript of the interview.

Oracle Embargoes FLOSS (Java)…
Submitted by Roy Schestowitz on Saturday 9th of August 2014 11:49:16 AM
Filed under OSS

So, Oracle is pushing the limits but apparently is doing so legally. Whether FLOSS can legally be embargoed by government is beyond me. After all, the source is out there and can't be put back in the bottle. Further, if every country in the world had a random set of embargoes against every other country in the world, FLOSS could not be international at all. That would be a crime against humanity. If Java, why not Linux itself? If such embargoes apply, Russia, Iran, Cuba, etc. could just fork everything and go it alone. They certainly have the population to support a thriving FLOSS community behind their own walls.

On Navigating Laws and Licenses with Open Source Projects
Submitted by Rianne Schestowitz on Friday 8th of August 2014 06:06:34 PM
Filed under OSS

A few years ago, Red Hat CEO Jim Whitehurst made the prediction that open source software would soon become nearly pervasive in organizations of all sizes. That has essentially come true, and many businesses now use open source components without even knowing that they are doing so. For these and other reasons, it is more important than ever to know your way around the world of laws and licenses that pertain to open source software. Leaders of new projects need to know how to navigate the complex world of licensing and the law, as do IT administrators. Here is our latest collection of resources to help you navigate the arena of law and licenses.

Does having open source experience on your resume really matter?
Submitted by Roy Schestowitz on Friday 8th of August 2014 11:27:50 AM
Filed under OSS

"Code is the next resume." These words from Jim Zemlin, executive director at The Linux Foundation, say a great deal about how our technology industry, and the many businesses that depend on it, are transforming. The unprecedented success of the open source development methodology in recent years raises some fundamental questions about the way businesses are designed, the structure of teams, and the nature of work itself.

From bench scientist to open science software developer
Submitted by Rianne Schestowitz on Friday 8th of August 2014 06:40:55 AM
Filed under OSS

Almost immediately after moving to the US, I was flown back to the UK for a meeting about tools for computational chemistry at the Daresbury Laboratory, and it was there that I met a wider community of scientists interested in approaches to working with computational chemistry codes. During my postdoctoral work, I had the opportunity to continue some of the open source work I had done, as well as work on some new software for data acquisition and some simulation code looking at the roles of defects in electronic transport. I enjoyed my postdoctoral work, but in many ways it solidified in my mind that I needed to find a career where I could work with scientists to enable their research, and I became more passionate about open access, open source, open data, and open standards. Above all, I wanted to be a part of the solution, to help scientific research use software to enable reproducibility, and to get back to showing all of the working.

What Immigration did with just $1m and open source software
Submitted by Roy Schestowitz on Friday 8th of August 2014 06:07:35 AM
Filed under Development

The Department of Immigration has shown what a cash-strapped government agency can do with just $1 million, some open source software, and a bit of free thinking. Speaking at the Technology in Government forum in Canberra yesterday, the Department's chief risk officer, Gavin McCairns, explained how his team rolled an application based on the 'R' language into production to filter through the millions of incoming visitors to Australia every year.

Salil Deshpande: Software Engineer. Venture Capitalist. Open Source Investor.
Submitted by Roy Schestowitz on Friday 8th of August 2014 05:57:33 AM
Filed under Interviews

Midas List VC Salil Deshpande talked to TechRepublic about why he's betting on open source software and what he thinks about the future of IT.

GSA's open source first approach gives more software options, better savings

The General Services Administration last week announced a new policy requiring that open source software be given priority consideration for all new IT projects developed by the agency. And while some may question whether open source software will be as effective as its conventional, proprietary counterpart, Sonny Hashmi, GSA's chief information officer, is confident this new IT model will put the agency in the best position to procure and develop software in the most cost-effective manner.

Scale like Twitter with Apache Mesos
Submitted by Rianne Schestowitz on Thursday 7th of August 2014 09:53:26 PM
Filed under Interviews

Twitter has shifted its way of thinking about how to launch a new service thanks to the Apache Mesos project, an open source technology that brings together multiple servers into a shared pool of resources. It's an operating system for the data center. "When is the last time you've seen the fail whale on Twitter?" said Chris Aniszczyk, Head of Open Source at Twitter.

Open Prosthetics Founder: Challenges Ahead for Open Source Medical Devices
Submitted by Roy Schestowitz on Thursday 7th of August 2014 09:30:01 PM
Filed under OSS

Before he lost his arm serving as a Marine in Iraq in 2005, Jonathan Kuniholm was pursuing a PhD in biomedical engineering. Now, as founder and president of the Open Prosthetics Project, Kuniholm is working to make advanced, inexpensive prosthetics available to amputees around the globe through the creation and sharing of open source hardware designs.
IANA (Internet Assigned Numbers Authority)
Part of the IT standards and organizations glossary

IANA (Internet Assigned Numbers Authority) is the organization under the Internet Architecture Board (IAB) of the Internet Society that, under a contract from the U.S. government, has overseen the allocation of Internet Protocol addresses to Internet service providers (ISPs). IANA has also had responsibility for the registry of "unique parameters and protocol values" for Internet operation. These include port numbers, character sets, and MIME media types.

Partly because the Internet is now a global network, the U.S. government has withdrawn its oversight of the Internet, previously contracted out to IANA, and lent its support to a newly formed organization with global, non-government representation, the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN has now assumed responsibility for the tasks formerly performed by IANA.

This was last updated in September 2005.

Continue Reading About IANA (Internet Assigned Numbers Authority):
You can find out more about IANA at the IANA Web site.
Here is an official version of the U.S. government white paper on the Management of Internet Names and Addresses that recommended the formation of ICANN.
Here is the Internet Society's resource site on the "White Paper".
The ICANN Web site includes its meeting schedule and minutes of past meetings.
Concern over Icann's request for more cash
ICANN critics consider taking admin work in-house
Who exactly gives out IP addresses to companies?
Who owns and administers the internet's addresses?
Icann refines its role in internet governance
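A quick way to see two of these registries in everyday use is through a language's standard library, which ships lookup tables that follow the IANA assignments. The short sketch below uses Python's standard library purely as an illustration; the exact output depends on the local system's copies of those tables.

import socket
import mimetypes

# Well-known port numbers come from the IANA service name and port number
# registry; getservbyname() consults the local copy of that table.
print(socket.getservbyname("http", "tcp"))    # 80
print(socket.getservbyname("smtp", "tcp"))    # 25

# MIME media types are another IANA registry; mimetypes maps file
# extensions to registered media type names.
print(mimetypes.guess_type("report.pdf")[0])  # application/pdf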
A Very Special Episode of Sm.net: dios Goes to a Make-out Party - by diospadre

I doubt that more than a handful of organizations could say this, but I'm confident that Starmen.net could stand toe to toe with TNT when they say "We know drama." Unlike TNT's 5-year-old movies and William H. Macy inspirational vehicles, though, our brand of drama seems to be an element that nobody wants around. Everyone knows what I'm talking about: such events as having EB.net ripped out from under us, the Great Deletion of December 2000, and the revolving door of webmasters. And of course, most recently, the loss of Xodnizel and the mood of the forums over the past few days.

Despite any bad stuff that happens, however, SM.net continues sailing on, continuously (though sometimes slowly) improving and growing. But how can this be possible? With a name change, a number of deletions, the loss of key site members, and hatred being thrown around like water from a bucket brigade, surely the site should have crumbled and its members disbanded. While I'm sure there are a great number of psychological and sociological reasons why we've all stuck together and continued to improve the site, they don't matter. What matters is that not only have we lived through every so-called disaster that's been thrown our way, the site has benefited from them all.

Take the domain name scandal and the Great Deletions, for example. People who were around at the time can tell you that in the last months that the site was known as Earthbound.net, it was a hotbed of immaturity. Few people got along, scandals were rife in parts of the site as insignificant as Interactive Fiction, and updates had slowed down majorly. When the change came, however, everyone on the site came together to encourage the staff to rebuild or to hate the guy that "stole" the old domain name. Infighting stopped, and everyone really started to get along. The site deletions also led to better hosting plans, layouts, and infrastructure changes that helped the webmasters grow and learn from the site. This led to the ever-improving functionality of the site and forums, as well as the insanity wrought by reidman and Mars in that week of December.

With these examples from the past, there is no reason to think that Xod leaving or the state of the forums today in any way spells doom for us all. As a collective, the visitors of SM.net have proven to be intelligent and adaptable. Though Xod has left, reidman quickly found a team of four people who are working on the new forums and layout. I have no doubt that while they may be struggling at the moment, they will quickly learn how to do what needs to be done, and in the end we will have four people who are highly knowledgeable about the workings of the site, and together they may be an even better force than Xod was.

As for the forums, I remain convinced that they've hit a minor bump in the road. At points, my treatment of two people was likely a little overboard, but even this ended up benefiting us all. The two most controversial topics served to show exactly how far is too far, and it is unlikely that the line will be crossed again soon. In addition, they've resulted in some of the most inspiring, most hilarious, and craziest topics that anyone has ever participated in. On a personal note, the "bad" topics have pointed out to me that I need to curtail my posting so people won't turn into "dios clones". This is not only because too many of me is a bad thing (we don't need new people being ganged up on by hordes of me), but also to protect my style.

Now, some people may disagree with me. They'll say that everyone needs to constantly be nice to everyone so there will never be any drama. As explained above, I think the only way to improve ourselves is to keep the drama coming, but to make sure that instead of forecasting the end of the site and wishing for the good old days, we all make like Kirsten Dunst and say "bring it on!"
An Engine For Assassination: IO's Tech Director Speaks
by Christian Nutt
[Business/Marketing, Design, Art, Interview]

At some point, a triple-A studio must decide: when is it the right time to build a new engine? While the Square Enix Group studios are free to pursue their own paths when it comes to technology, Denmark's IO Interactive has always rolled its own. For 2012's Hitman: Absolution -- and what comes beyond -- the studio decided to embark on an entirely new engine, Glacier 2.

In this in-depth interview, tech director Martin Amor speaks to Gamasutra about why the studio came to that decision, and just what factors into it -- from satisfying the design team, to creating a more cinematic experience through advanced AI.

The architecture of an engine that has to support multiple titles is critical, and so is the decision-making process behind how its features are devised and implemented. Here, Amor walks Gamasutra through these and more, explaining what boundaries the team wanted to push with Hitman, and how these decisions impact the future of the technology and the studio.

Hitman: Absolution is the debut of your new technology?

Martin Amor: Exactly, yeah. We've been working on this technology for a while now and are very, very, very excited about it. Basically, we were looking at the ambitions that we had for Hitman: Absolution and we thought, "Okay, we need to come up with something completely groundbreaking in order to fulfill these ambitions." And so we began to work on Glacier 2. Glacier 1 is our earlier engine, which we used for the previous games, and Glacier 2 is a completely new engine, made from the ground up.

We're pretty far into this generation. How did you feel about getting a new engine ready at this juncture? This game launches over six years into the Xbox 360's lifespan.

MA: When we made the engine, that has been a big part of many of the decisions that we've made. So we made our engine in a way that we felt accommodates how we believe the future is going to be, and so we think we're in pretty good shape for future platforms. Also, on the current generation of platforms, I still think we haven't fully explored everything that we have. There are still things that we can do. We are hand-coding a lot of assembly on the PS3 to make sure that everything can get as much out of the hardware as possible. But it's kind of like a balance of effort and time, versus the cost, versus how much we actually get out of it. But there's more that can be done in this area.

I've talked about AI periodically with different people. I get the sense that AI hasn't progressed as rapidly in this generation as we might have anticipated at the outset.

MA: AI is quite complex, and I think what a lot of games are challenged by right now is not so much the AI itself, but having the characters behave realistically. So you can put in a lot of very complex behavior. What we do in Hitman is also to have a lot of coordination between the characters, which is very important so they can connect the dots, right? So if one of the cops hears a gunshot, another cop sees a gun lying on the ground, and a third cop sees the player run away, they are able to kind of connect the dots between these events, and from that figure out what happened in this situation, and react to it in a meaningful way. But coding the core of the AI is not the most challenging part. It's actually to have it perform realistically.
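The "connect the dots" coordination Amor describes, where several characters each contribute a partial observation and the group draws one conclusion, is often modeled with some form of shared knowledge store, sometimes called a blackboard. The sketch below is only an illustration of that general idea in Python; the class, clue names, and threshold are assumptions made for the example, not Glacier 2's actual code.

from collections import defaultdict

# Each guard posts what it personally observed to a shared "blackboard".
# A simple rule then combines independent clues into a group-level conclusion.
SUSPICION = {"heard_gunshot": 2, "saw_dropped_gun": 2, "saw_person_fleeing": 1}

class Blackboard:
    def __init__(self):
        # clue -> set of guards who reported it
        self.clues = defaultdict(set)

    def report(self, guard, clue):
        self.clues[clue].add(guard)

    def threat_level(self):
        # Count each distinct clue once, no matter how many guards saw it.
        return sum(SUSPICION.get(clue, 0) for clue in self.clues)

board = Blackboard()
board.report("cop_1", "heard_gunshot")
board.report("cop_2", "saw_dropped_gun")
board.report("cop_3", "saw_person_fleeing")

if board.threat_level() >= 4:
    print("The guards conclude a shooting took place and begin searching.")

In a real engine, shared state like this would also have to drive animation, dialogue, and music selection, which is where the "perform realistically" work Amor describes comes in.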
So to coordinate the animations with the dialogue, and have them move around, and avoid other characters, and basically seem like real characters -- instead of uncanny valley sort of stuff. And I think a lot of developers were kind of surprised by how difficult this actually is.

It seems like it's a particularly interdependent part of the development process. It seems like with AI, art and sound design all really sort of gel together with tech, in that specific node of character behavior.

MA: AI is kind of like at the end of the pipeline, right? Everything that all of the other developers are doing is feeding into this. The AI programmers have to work with the characters, and the animations, and the audio -- and music, not least, which is also a very important part of our Glacier 2 engine. We have a very dynamic music system. So basically, the music system feeds into the AI, and understands what is going on, and plays music according to that -- and sound effects, and stingers, and everything like that.

You're totally right -- the most difficult place to be, many times, is AI programming. It's this synthesis of everything that the game has.

MA: It's also very fun. I think some of my AI programmers like to think that they are playing on the side of the computer, and against the player, and I think that they're having fun with that. But at the same time they have to very much focus on creating a good experience for the player.

Comments

Pieterjan Spoelders: Good read! Really looking forward to the next Hitman game :)

Jewel Jubic: The director said some really good things. The characters and the animation are the main features, but the thing I like best is the music system of the Glacier 2 engine. They really have got a dynamic system that makes the game more exciting to play.

Ramon Carroll (10 Dec 2011 at 1:45 pm PST): I'm a HUGE fan of this series. It rates among my top five game series of all time, along with Final Fantasy and Dungeons and Dragons (tabletop). The AI always needed some work, though, because the NPCs didn't always react too believably. My biggest beef was with the hive-mind that the NPCs seemed to possess. Getting detected one time sometimes had the potential to completely compromise your mission. It looks like Absolution is going to give you the opportunity to correct your mistakes on the spot by improvising. This is what a real assassin would do, not press start and select "restart mission". Hopefully they can nail this one. I'm really looking forward to the release. I may even pre-order it! Keep up the hard work guys!