Columns: id (string, length 30-34), text (string, length 0-75.5k), industry_type (string, 1 class)
2015-48/1916/en_head.json.gz/6679
The Fedora Project is an openly developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for the Btrfs file system, the Indic typing booster, a redesigned SELinux troubleshooter, better power management, the LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." Manufacturer website. 1 DVD for installation on an x86 platform.
Computer
2015-48/1916/en_head.json.gz/10693
Cisco Next Generation Encryption and Postquantum Cryptography Marty Loy | October 19, 2015 at 12:16 pm PST Cisco developed Next Generation Encryption (NGE) in 2011. NGE was created to define a widely accepted and consistent set of cryptographic algorithms that provide strong security and good performance for our customers. These are the best standards that can be implemented today to meet the security and scalability requirements for network security in the years to come, or to interoperate with the cryptography that will be deployed in that time frame. Most importantly, all of the NGE algorithms, parameters, and key sizes are widely believed to be secure. No attacks against these algorithms have been demonstrated. Recently there has been attention on quantum computers (QC) and their potential impact on current cryptography standards. Quantum computers and quantum algorithms are an area of active research and growing interest. Even though practical quantum computers have not yet been demonstrated, if they became a reality they would pose a threat to crypto standards for PKI (RSA, ECDSA), key exchange (DH, ECDH) and encryption (AES-128). These standards are also used in Cisco NGE. An algorithm that would remain secure even after a quantum computer is built is said to have postquantum security or to be quantum-computer resistant (QCR). AES-256, SHA-384 and SHA-512 are believed to be postquantum secure. Tags: cryptography, encryption, Next Generation Encryption, postquantum cryptography

POODLE and The Curse of Backwards Compatibility Talos Group | October 15, 2014 at 8:24 am PST This post was written by Martin Lee. Old protocol versions are a fact of life. When a new, improved protocol is released, products still need to support the old version for backwards compatibility. If previous versions contain weaknesses in security, yet their continued support is mandated, then security can become a major issue when a potential weakness is discovered to be a genuine vulnerability and an exploit is released. The Transport Layer Security (TLS) protocol defines how systems can exchange data securely. The current version, 1.2, dates from August 2008; however, the protocol's origins lie in the Secure Sockets Layer (SSL) standard first published in February 1995. As weaknesses in the cryptography and flaws in the protocol design were discovered, new versions of the protocol were released. In order to maintain interoperability, the most recent TLS standard requires that systems support previous versions down to SSL 3.0. The discovery of a cryptographic weakness in SSL 3.0 and the publication of an attack that can exploit it provide attackers with a means to attack TLS implementations by intercepting communications that use the old SSL 3.0 protocol. The vulnerability, assigned the Common Vulnerabilities and Exposures ID CVE-2014-3566 and referred to as POODLE, allows an attacker to modify the padding bytes that are inserted into SSL packets to ensure that they are of the correct length, and to replay modified packets to a system in order to identify the bytes within a message, one by one. This allows an attacker to discover the values of cookies used to authenticate HTTPS-secured web sessions. However, the vulnerability potentially affects any application that secures traffic using TLS, not only HTTPS traffic. Read More » Tags: cryptography, CVE-2014-3566, POODLE, SSL, Talos, TLS
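The practical mitigation for POODLE was to stop negotiating SSL 3.0 altogether. As a rough illustration (not taken from the Cisco post), a client built on Python's standard ssl module (3.7 or later) can pin the minimum protocol version so that a downgrade to SSL 3.0 or early TLS is simply refused; the host name below is only a placeholder:

```python
import socket
import ssl

# Build a client-side TLS context and refuse anything older than TLS 1.2.
# With this setting, a POODLE-style fallback to SSL 3.0 cannot be negotiated.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```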
A Collection of Cryptographic Vulnerabilities Martin Lee | June 6, 2014 at 9:50 am PST The rustic origins of the English language are evident in the words left to us by our agricultural ancestors. Many words developed to distinguish groups of different animals, presumably to indicate their relative importance. A 'flock' of sheep was more valuable than a single sheep; a 'pack' of wolves posed more danger than a single wolf. With respect to security vulnerabilities, we have yet to develop such collective nouns to indicate what is important, and to indicate that which poses danger. The world of Transport Layer Security has been rattled once again with the identification of a "swarm" of vulnerabilities in OpenSSL and GnuTLS. A total of seven new vulnerabilities, ranging from a potential man-in-the-middle attack that allows an attacker to eavesdrop on an encrypted conversation, to vulnerabilities that could be used to remotely execute code on a client, have been identified in the popular open source libraries. Tags: cryptography, CVE-2014-0195, CVE-2014-0198, CVE-2014-0221, CVE-2014-0224, CVE-2014-3466, CVE-2014-3470, CVE-2014-5298, TRAC

In Search of The First Transaction Michael Enescu | March 28, 2014 at 4:29 pm PST At the height of an eventful week – Cloud and IoT developments, Open Source Think Tank, Linux Foundation Summit – I learned about the fate of my fellow alumnus, an upperclassman as it were, the brilliant open source developer and crypto genius known for the first transaction on Bitcoin. Hal Finney is a Caltech graduate who went on to become one of the most dedicated, altruistic and strong contributors to open source cryptography. We are a small school, so one would think it's easy to keep in touch; we try but do poorly. We are mostly a very friendly and open bunch, but it is easy to lose ourselves in the deep work at hand and sometimes miss what's hiding in plain sight. He was among the first to work with Phil Zimmermann on PGP, created the first reusable proof-of-work (POW) system years before Bitcoin, had just the right amount of disdain for noobs in my opinion, and, years later, was one of the first open source developers to work with Satoshi Nakamoto on Bitcoin; in fact, he received the first Bitcoin transaction ever. There is a great story about Hal in Forbes this week, "My hunt for Bitcoin's creator led to a paralyzed crypto genius". Thank you, Hal Finney, for going through with it, and Andy Greenberg for writing it. Sometimes it is very painful, even shocking, to see how things turn out. I think this is one of those moments when we realize how much this is going to mean to all of us: the brilliant minds of programmers like Hal Finney, who never sought the limelight but did so much for us without asking for anything in return, leave behind long-lasting contributions to privacy and security in our society. He is, in fact, a co-creator of the Bitcoin project. Do you realize that every bitminer successfully providing the required POW should, in fact, reach the very same conclusion at the end of every new transaction... forever? You had better accurately represent who was the very first. What a legacy to remember! I often go to Santa Barbara to see a very, very close and dear person there, my daughter. But now, there is another reason to stop by and pay tribute to one of the finest there. We will all be in search of the first transaction, eventually. Tags: BitCoin, bitminer, Caltech, crypto, cryptography, digital currency, digital wallet, Hal Finney, open source, PGP, Phil Zimmermann, POW, privacy, proof of work, reusable POW, Satoshi, Satoshi Nakamoto, security
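The reusable proof-of-work Finney built, like the hashcash scheme Bitcoin later adopted for mining, comes down to searching for a nonce whose hash clears a difficulty target, something expensive to find but cheap for anyone else to verify. A minimal illustrative sketch of that idea (not Finney's RPOW code; the difficulty here is chosen arbitrarily so it finishes quickly):

```python
import hashlib

def find_pow(data: bytes, difficulty_bits: int = 20) -> int:
    """Search for a nonce so that SHA-256(data || nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found: costly to discover, trivial to check
        nonce += 1

def check_pow(data: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = find_pow(b"example block header")
print(nonce, check_pow(b"example block header", nonce))
```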
Trust but Verify and Verify and Verify Again Martin Lee | February 25, 2014 at 3:41 am PST Two recent disclosures show that often the weaknesses in cryptography lie not in the algorithms themselves, but in the implementation of these algorithms in functional computer instructions. Mathematics is beautiful. Or at least mathematics triggers the same parts of our brain that respond to beauty in art and music [1]. Cryptography is a particularly beautiful implementation of mathematics, a way of ensuring that information is encoded so that it can only be read by the genuine intended recipient. Cryptographically signed certificates ensure that you are certain of the identity of the person or organisation with which you are communicating, and cryptographic algorithms ensure that any information you transfer cannot be read by a third party. Although the science of cryptography is solid, in the real world nothing is so easy. Tags: cryptography, TRAC
Computer
2015-48/1916/en_head.json.gz/10811
Democratic Underground Privacy Policy Democratic Underground, LLC ("Democratic Underground" or "the Site") has created this privacy statement in order to demonstrate its firm commitment to privacy. In adopting this privacy policy, the intent is to balance the Site's legitimate business interests in collecting and using personally identifiable information and a user's reasonable expectations of privacy. The following discloses the information gathering and dissemination practices for the Democratic Underground Site. Information Collection & Use Information Collected from All Users Democratic Underground collects certain information from users that is not personally identifiable. This information includes, but is not limited to, the user's Internet service provider (ISP), Internet protocol (IP) address, Internet browser type and version and operating system (OS). Other information collected includes the date and time of a user's exit and entry on the Site, the names and addresses of referral sites, the specific pages a user chooses to visit on the Site and certain search terms that a user may have employed to find the Site. Use of Information Collected from All Users Information collected from "All Users" is used for purposes of internal reviews, including, but not limited to, traffic audits, tailoring of pages to a particular user's technology environment, analyzing traffic and search trends, generating aggregate demographic information and general Site administration. This information may be shared in an aggregate form with advertisers and other third parties with a legitimate interest in the data. This aggregated data is anonymous and does not allow third parties to identify Democratic Underground users. Information Collected from Registered Users "Registered Users" are users that choose to participate more fully on the Democratic Underground Site. This higher level of participation includes, but is not limited to, a user's decision to register for the online forums, make a purchase or donation, or otherwise share more detailed personal or payment information with Democratic Underground. Democratic Underground registration and payment forms may require users to provide certain types of personally identifiable information. This information may include, but is not limited to, a user's name, e-mail address, postal address, country of residence, telephone number, year of birth, gender and similar identifying information. This information may also include payment information, such as a user's credit card number or other payment account numbers. Democratic Underground may also record a user's IP address along with that user's personally identifiable information. Use of Information Collected from Registered Users Information collected from "Registered Users" is used for purposes of communicating with registered users, including, but not limited to, sending a user occasional e-mail messages from Democratic Underground, corresponding with users via email in response to user inquiries, providing services a user may request and otherwise managing a user's account. Users' contact information may be shared with other organizations and companies, pre-screened by Democratic Underground, who may want to contact Site users.
Users may opt out of future mailings from Democratic Underground by employing methods provided in the below section entitled "Opt-Out and Modification of Provided Information." Payment information is used for purposes of billing and order fulfillment. This includes, but is not limited to, sending a user's billing information to a credit card processor or providing a user's postal address to a shipping company. IP addresses and e-mail addresses may be employed by the Site for identity, safety and security purposes. Such uses include, but are not limited to, identifying specific users in the discussion forums or preventing access to the Site based on a user's IP or e-mail address. Democratic Underground reserves the right to retain records of IP or e-mail addresses obtained from a Registered User for purposes of User identification and enforcement of Site policies and procedures. Democratic Underground authorizes pre-screened volunteer moderators to access limited information about Registered Users. Information accessible by these volunteers generally includes e-mail address, history of activity on the Site, and a user's deleted posts, warnings, suspensions, and related discipline information. Volunteers also have access to any personal information which a Registered User may voluntarily disclose, including a user's real name and location. Volunteer moderators do not have access to a user's donation information or IP address. Democratic Underground employs reasonable security methods to secure users' privacy, including, but not limited to, third-party encryption of payment forms, password protected file systems and similar means of protection, designed to protect against the loss, misuse and alteration of information under Democratic Underground's control. While Democratic Underground takes commercially reasonable security precautions, the Site is not responsible for data breaches of third parties, such as payment processors, web hosts or advertising providers. Voluntary Public Disclosure The Democratic Underground Site may include features, such as discussion forums, blog comment forms and related tools, that allow users to publicly share information about themselves. Any information a user discloses in these areas is publicly available on the Internet. Such information may be read, archived, collected, or used by other Internet users, including automated services such as search engines. Democratic Underground is not responsible for personally identifiable information you choose to publicly disclose, and urges users to exercise caution when disclosing information that may be personal in nature. Third Party Websites This Democratic Underground Privacy Policy applies only to information collected by the Democratic Underground Site. The Democratic Underground Site may contain links to other third party web sites that are not owned or controlled by Democratic Underground, including, but not limited to, third party advertisers. These sites control their own privacy practices and are not bound by this Democratic Underground Privacy Policy. Democratic Underground is not responsible for the information or services provided on third party sites, nor is it responsible for the privacy practices or content of their sites. Google, as a third party vendor, uses cookies to serve ads on the Democratic Underground site. Google's use of the DART cookie enables it to serve ads to users based on their visit to this site and other sites on the Internet.
Users may opt out of the use of the DART cookie by visiting the Google ad and content network privacy policy. We allow third-party companies to serve ads and/or collect certain anonymous information when you visit our web site. These companies may use non-personally identifiable information (e.g., click stream information, browser type, time and date, subject of advertisements clicked or scrolled over) during your visits to this and other Web sites in order to provide advertisements about goods and services likely to be of greater interest to you. These companies typically use a cookie or third party web beacon to collect this information. To learn more about this behavioral advertising practice or to opt-out of this type of advertising, click here A cookie is a small text file that is stored on a user's computer for record-keeping purposes. Democratic Underground may use both “session ID cookies” and “persistent cookies.” Democratic Underground uses cookies to deliver content specific to your interests, provide convenience features and for other purposes. Disclosure by Legal Mandate Democratic Underground reserves the right to disclose your personally identifiable information as required by legal mandate, including, but not limited to, compliance with a judicial proceeding, court order, or legal process lawfully served on Democratic Underground. Children Under Thirteen Years of Age Pursuant to the Children's Online Privacy Protection Act of 1998 (COPPA), Democratic Underground generally does not knowingly collect personally identifiable information from children under thirteen (13) years of age. Democratic Underground only collects information from children under thirteen years of age after receiving verified parental consent prior to collection. Upon notice that Democratic Underground has inadvertently collected personally identifiable information from a child under thirteen, Democratic Underground will take all reasonable steps to remove such information from its records as quickly as reasonably possible. Transition of Ownership Were Democratic Underground to engage in a business transition, such as a merger, acquisition or sale, your personally identifiable information may be among the assets transferred. Users will be notified of any ownership transition via prominent notice on the Site at least thirty (30) days before any such change in ownership or control of your personal information is undertaken. Opt-Out and Modification of Provided Information Democratic Underground users may request that their personal information be deleted from Site records by sending an email to [email protected]. The Site will remove personally identifiable information from its records for any user that requests removal, provided that the user’s account remains inactive for seven days. Any user whose information is removed from Site records shall thereafter be bound by rules for “All Users,” as provided above, except that information in the public domain, such as a user’s forum posts while a Registered User, will not be deleted from the Site’s records. Democratic Underground reserves the right to modify this Privacy Policy at any time by posting the changes on the Democratic Underground Site. This Democratic Underground Privacy Policy was last updated on May 11, 2010. If you have any questions or suggestions regarding our privacy policy, please contact Democratic Underground at: Democratic Underground, LLC Kensington, MD 20895-0339 [email protected]
Computer
2015-48/1916/en_head.json.gz/11177
Risk Management Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most. The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk. Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission. The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs. Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials. The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments.
These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.
Spotlight on Risk Management
The Monitor, June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods
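The driver-based analysis described above (establish objectives, identify drivers, evaluate each driver, then roll the evaluations up into a view of overall mission risk) can be pictured with a small, purely illustrative sketch. The driver names, the three-level rating scale and the scoring rule below are invented for the example and are not part of the SEI's published method:

```python
# Invented scale: how likely each driver is judged to be in its "success state".
RATING_TO_RISK = {"almost certain": 0.1, "equally likely": 0.5, "unlikely": 0.9}

# Hypothetical drivers; a real MRD assessment draws its driver set from the method itself.
drivers = {
    "Objectives are realistic and well understood": "almost certain",
    "Planned schedule is achievable": "equally likely",
    "Security requirements are being satisfied": "unlikely",
}

def mission_risk_profile(driver_ratings):
    """Translate per-driver ratings into 0..1 risk scores (higher = more mission risk)."""
    return {name: RATING_TO_RISK[rating] for name, rating in driver_ratings.items()}

profile = mission_risk_profile(drivers)
for name, risk in sorted(profile.items(), key=lambda item: item[1], reverse=True):
    print(f"{risk:.1f}  {name}")
```

Decision makers would then judge overall mission risk from the whole profile, paying most attention to the highest-risk drivers.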
Computer
2015-48/1916/en_head.json.gz/13018
Release date: May 2006. By Andrew Hudson and Paul Hudson. Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security. Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux. Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious Linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry standard in the PHP community. Manufacturer website.
Computer
2015-48/1916/en_head.json.gz/14546
Following established deployment, Viglen win HPC contract at University of East Anglia
Viglen has been selected as sole supplier of Managed Services for Personal Computers and Notebooks by the University of East Anglia. The three-year contract is estimated to be worth up to £3 million. Following their recent successful deployment of an HPC cluster for the University, Viglen won a rigorous tendering process to supply up to 1000 desktops and notebooks in each year of the contract starting on 1st January 2011. The contract will run until 31st December 2013 with a possible extension to the end of 2014. The University wanted to minimise total cost of ownership whilst providing a high level of service to end users. Viglen showed strong evidence of a stable product line and cost effective upgrade and service management options. In addition, Viglen were able to satisfy the University's requirement for sustainable solutions with energy saving configurations and disposal of redundant systems and packaging in line with WEEE regulations. Viglen's eco-friendly blue boxes will be used in the deployment of PCs to reduce waste. Viglen were invited to bid, along with other suppliers, via the National Desktop and Notebook Agreement (NDNA). Recently awarded the number one spot in Lot 3, One-Stop Shop (Desktops and Notebooks), of the NDNA, Viglen were judged to offer best overall value and highest standards of service. This prestigious position is further recognition of Viglen's pedigree in the IT service providers market and allows NDNA members to choose whether to purchase desktops and notebooks from Viglen with or without additional tendering. In July 2010 Viglen were selected to partner with the University of East Anglia in the provision of a High Performance Computing Cluster Facility. The two phases of the two-year contract are worth a total of approximately £750,000 and were awarded under the National Server and Storage Agreement (NSSA). "Viglen are very happy to be engaging with the University of East Anglia on another project so soon after our recent HPC partnership. We are excited to be involved with such a prestigious institution and look forward to the continuation of our flourishing relationship." Bordan Tkachuk, CEO, Viglen
About The University of East Anglia
The University of East Anglia (UEA) is one of the top research institutions in the UK and internationally recognised for excellence in teaching. Ranked eighth best for science among UK universities, it has a new medical school and is a leading member of the Norwich Research Park, one of the largest groupings of biotechnologists in the world. It has over 3,500 employees and 15,000 students.
Computer
2015-48/1917/en_head.json.gz/288
Transcript of Richard Stallman at the 5th international GPLv3 conference; 21st November 2006 See our GPLv3 project page for information on how to participate. And you may be interested in our list of transcripts on GPLv3 and free software licences. The following is a transcript of Richard Stallman's presentation made at the fifth international GPLv3 conference, organised by FSIJ and AIST in Tokyo, Japan. From the same event, there is also a transcript of Ciarán O'Riordan's talk. Transcription of this presentation was undertaken by Ciarán O'Riordan. Please support work such as this by donating to FSFE, joining the Fellowship of FSFE, and by encouraging others to do so. The speech was made in English. See also: http://gplv3.fsij.org/#Resources - all recordings from the event The recording of Richard Stallman's talk
Presentation sections
Why does the licence need updating?
About versions 1 and 2
Internationalisation
Licence compatibility
Preventing tivoisation
Tivoisation and Treacherous Computing
General comments on Treacherous Computing
Software patents
The Novell and Microsoft example
Internet distribution instead of mail order
Licence termination
Narrow patent retaliation
Undermining the DMCA and EUCD
Q1: What are the differences between open source and Free Software?
Q2: How is copyright law affected by Disney and Creative Commons?
Q3: Should Free Software include copyright notices?
Q4: What about patents owned by children?
Q5: Can you further explain the cure clause idea?
Q6: What if someone sets up companies in a cycle?
Q7: "Use freely" has another meaning regarding patents...
Q8: What other movements is the Free Software movement similar to?
Q9: What about open source licences?
Q10: Will there be Free Software licences that are GPLv3-incompatible?
Q11: Will FSF make further variants of the GPL, like the LGPL is?
Q12: What's happening with the GFDL and GSFDL?
Q13: Does using a GPL'd font constitute linking?
The presentation
(go to menu) [Section: Why does the licence need updating?] Welcome to our event. Since you've already heard from Niibe all the basic things that I would usually talk about in my speeches, I'm going to start right in on GPL version three. GPL version two was developed in 1991. The community was very different then. It was much smaller. There were probably hundreds of Free Software packages instead of tens of thousands. And there was no free operating system. As a result, the amount of pressure that people who were effectively our adversaries and wanted to cheat were placing on us and placing on our licences was much less. Since that time, Free Software has become far more popular, with tens of millions of users. There are two basically free operating systems: GNU/Linux and BSD. Unfortunately, nearly all the versions that people use include non-Free Software, but basically they are free systems. And there are now many companies that are looking for loopholes, trying to defeat the goal of the GNU GPL, which is to ensure all users' freedom. The reason I wrote the GNU GPL was to make sure that when I release a program as Free Software, all of you get the four freedoms.
So the point is, I won't be satisfied if only the users who get the program from me have freedom. I want to make sure that no matter how the program reaches you, whether it has been changed or not, all of you get freedom. The basic idea of the GNU GPL is to establish the four freedoms as inalienable rights, that is, rights that nobody can lose, except through wrongdoing. You can't sell them. We're not going to have any selling yourself into slavery in our community of freedom.
(go to menu) [Section: About versions 1 and 2] Back in 1991, we had seen two ways of trying to make software non-free. One was to release only a binary and not let users have the source code. And the other was to place restrictive licence conditions on it. These had been seen in the 1980s, so even the earliest GNU licences were designed to prevent that kind of abuse. They required distribution of source code and they said you can't add any other licence terms. You must pass on the program, including any changes of yours, under the exact same licence under which you got it. [Time: 237 secs] Around 1990, I found out about the danger of software patents. So in GPL version 2, we developed the section that we called "liberty or death for the program", although informally, because in GPL version 2 the sections don't have titles. This said that if you agree to any sort of patent licence that would limit the rights that your users would get, then you couldn't distribute the program at all. Now, what's the logic here? The idea is that patent holders would try to corrupt individual distributors of Free Software, trying to get them to sign specific deals to pay for permission to do so, and therefore we faced the danger that patent holders would divide our community and that this would make our community weak. In a country that is stupid enough to allow software patents, which I'm sad to say includes your country [Japan] and includes my country [USA], there's nothing we can do to prevent the danger that patent holders will use their patents to destroy Free Software, to drive it underground. But, there's an even worse thing they might be able to do, and that is make the software effectively non-free. If they could create a situation where individual users or individual distributors pay for permission, the software is effectively non-free. The decision I made was that we would try to prevent that danger. That danger is worse for two reasons. First, because a proprietary program which takes away a user's freedom is worse than no program at all. And second because that offers the patent holder a way to make money and would be more tempting than merely to cause destruction. So, Section 7 of GPL version 2 was designed to prevent that. And that was the main change in GPL version 2. However, today we've seen several more kinds of threats, as well as other issues that call for changes. The basic idea of GPL version 3 is unchanged: to protect the four freedoms for all users, but the details have to adapt to today's circumstances. This means that the changes in GPL version 3 do not have any common theme. They're all addressed to details, to specifics. Some very important, some secondary, but every change is in some specific detail because there's no change in the spirit. So let me go through the most important of these changes.
(go to menu) [Section: Internationalisation] One of them is better internationalisation.
I developed the earlier versions of the GPL working with a lawyer, but this lawyer was not an expert on the laws of other countries. We simply based our inputs on knowing that copyright law is mostly similar around the world. However, now we've made a large effort to consult lawyers from various different countries, to make sure that we will get similar results in all countries. To make this happen, we have eliminated certain words, such as "distribute", from GPL version 3. It turns out that various countries have different definitions for the word "distribute". So we have tried to avoid that word. We coined a couple of new terms, in order to express ourselves better. For instance, there's the term "propagate", which loosely means copying, but we've given it a precise definition that is meant to buffer it against variations in copyright law between countries. Another term we coined is "convey", which loosely means distributing copies, but again, we've defined it in a way that buffers it against international variations. So the bulk of the GPL gives conditions for propagating and conveying the program.
(go to menu) [Section: Licence compatibility] Another area of change has to do with compatibility with other licences. Back in the early 90s, there were only a few different Free Software licences, and the ones people generally used were either the GNU GPL, or simple permissive licences like the X11 licence and the original BSD licence. The X11 licence was compatible with the GPL. You could merge code under GPL version 1 or 2 with code under the X11 licence. The original BSD licence is incompatible because of the obnoxious advertising clause, but in the 90s we convinced the University of California to relicense all of BSD under the revised BSD licence, which gets rid of the advertising clause, and that is compatible with GPL version 2. By the way, you should never use the term "BSD-style licensing" because of the ambiguity. The difference between these two licences is quite important. One is compatible with the GPL and the other is not. It's very important to call people's attention to the difference between those two licences. However, starting in 1999 I believe, with Mozilla, many other Free Software licences have been developed, most of which are not compatible with the GPL. GPL version 3 is designed to be compatible with two important licences: the Apache licence and the Eclipse licence. It will be possible to merge code under those licences into GPL3 covered software once the GPL version 3 is really out. [Time: 752 secs] And while we were at it, we decided to formalise and clearly explain what it means to give additional permission as a special exception. That's a practice that we have been doing for many years. The simple library that comes with GCC that does very low level tasks, supporting certain language constructs, has a special exception on it saying basically that you can link it into almost anything. But there was some confusion about what it means to have such an exception so we decided to spell it out, to make it clear that when you give additional permission, people can remove that additional permission, because really what you have done is you have made two separate statements: (A) you can distribute this under the GPL, and (B) I also give you permission for this and that. It follows that anyone who is redistributing that software or distributing modified versions can pass it along under the GPL or he can reconfirm the other permission, or he can do both.
So, if the other permission just says you can do one little extra thing, it makes no sense by itself. That would be useless. So basically, you've got to keep the GPL, but you either keep the added provision or not. So in GPL version 3 this is spelled out. We also explain that there are a few kinds of additional requirements that can appear on code that gets included or merged into the GPL covered program. Now, some of these are not new. There are essentially trivial requirements in the X11 licence and the revised BSD licence, and because they're trivial, our interpretation is that there is no conflict with the GPL, but we decided to make that completely explicit. But in addition, there are some substantive requirements that are not in the GPL that we will now allow to be added. This is how we achieve compatibility with the Apache licence and the Eclipse licence. After all, the reason they are incompatible with GPL version 2 is that they have requirements that are not in GPL version 2. Those requirements are not part of GPL version 3 either, but GPL version 3 explicitly says that you are allowed to add those kinds of requirements. That is how GPLv3 will be compatible with those licences, because it specifically permits a limited set of additional requirements which include the requirements in those licences. [Time: 986 secs] While we were doing this we decided to try to put an end to a misuse of the GPL. You may occasionally see a program which says "This program is released under the GNU GPL but you're not allowed to use it commercially", or some other attempt to add another requirement. That's actually self-contradictory and its meaning is ambiguous, so nobody can be sure what will happen if a judge looks at that. After all, GPL version 2 says you can release a modified version under GPL version 2. So if you take this program with its inconsistent licence and you release a modified version, what licence are you supposed to use? You could argue for two different possibilities. We can't stop people releasing their software under licences that are more restrictive than the GPL, and we can't stop them from releasing non-Free Software, but we can try to prevent them from doing so in a misleading and self-contradictory way. After all, when the program says GPL version 2 but you can't use it commercially, that's not really released under GPL version 2, and it's not Free Software, and if you tried to combine that with code that really is released under GPL version 2, you would be violating GPL2. Because this inconsistent licence starts out by saying "GPL version 2", people are very likely to be misled. They may think it's available under GPL version 2, and they may think they're allowed to combine these modules. We want to get rid of this confusing practice. And therefore we've stated that if you see a program that states GPL version 3 as its licence, but has additional requirements not explicitly permitted in section 7, then you're entitled to remove them. We hope that this will convince the people that want to use more restrictive licences that they should do it in an unambiguous way. That is, they should take the text, edit it, and make their own licence, which might be free or might not, depending on the details, but at least it won't be the GNU GPL, so people won't get confused. [Time: 1169 secs]
(go to menu) [Section: Preventing tivoisation] Another major change is a response to a new method of trying to deprive the users of freedom. In broad terms we refer to this as tivoisation.
It's the practice of designing hardware so that a modified version cannot function properly. Now, I do not mean by this the fact that when you modify it you might break it. Of course that's true. But of course you also might modify it carefully and avoid making a mistake and then you have not broken the program, you would expect it to function. But tivoised machines will not allow any modified version to function correctly even if you have done your modification properly. For instance, the Tivo itself is the prototype of tivoisation. The Tivo contains a small GNU/Linux operating system, thus, several programs under the GNU GPL. And, as far as I know, the Tivo company does obey GPL version 2. They provide the users with source code and the users can then modify it and compile it and then install it in the Tivo. That's where the trouble begins because the Tivo will not run modified versions, the Tivo contains hardware designed to detect that the software has been changed and shuts down. So, regardless of the details of your modification, your modified version will not run in your Tivo. (go to menu) [Section: Tivoisation and Treacherous Computing] This is the basic method of tivoisation but there are more subtle methods which involve Treacherous Computing. Treacherous Computing is the term we use to describe a practice of designing people's computers so that the users can't control them. In fact, the perpetrators of this scheme don't want you to have real computers. What is a computer, after all? A computer is a universal programmable machine, one that can be programmed to carry out any computation, but those machines are designed so that there are computations you can't make them do. They're designed not to be real computers. Specifically, they are designed so that data or websites can be set up to communicate only with particular software and set up to make it impossible for any other program to communicate with that data or those websites. [Time: 1403 secs] One of the ways this works is through remote attestation. The idea is that a website will be able to check what software is running on your computer, and if you
Computer
2015-48/1917/en_head.json.gz/573
Cambridge Systems and Networking - Microsoft Research Cambridge Systems and Networking The systems and networking group is composed of approximately 20 researchers and post-docs. The group has existed since the lab opened, and over the last decade and a half we have covered a broad range of topics including systems, operating systems, networking, distributed systems, file and storage systems, cloud and data centre computing, social networking, security, network management, computer architecture, programming languages, and databases. We are a group that designs and builds systems that address significant real-world problems and demonstrate novel underlying principles. Many projects that researchers from the group have been involved with have had significant impact in the academic community and resulted in papers that have been widely cited. We are also very proud that many of our projects have also had internal Microsoft impact or have been licensed. A recent example of our impact on Microsoft is the Storage QoS feature in the new Windows Server Technical Preview. This feature enables data center hosters to control the bandwidth of traffic from VMs to storage on a per-class basis and is one of the outputs of our research on Predictable Data Centers. You can find more about this work in a recent blog we wrote. We also transferred our Control Flow Guard compiler security improvements, and today all of Windows builds with this modified compiler, adding safety checks to all indirect function calls. A rebuild of Windows 8.1 was released on Windows Update, and the compiler is available for anyone to use with Visual Studio 2015 (preview). Read the compiler team's blog post to learn more. Other public examples of our impact on Microsoft include the System Centre Capacity Planner and the Windows Network Map. We are always looking for talented people to join our team, please see our careers section for opportunities. If you are interested in projects underway please look at the group members' personal pages. The group has a long tradition of research in most aspects of computer communications and networks in general. The interests and contributions of the group are quite diverse with projects over the previous years in a number of areas such as peer-to-peer networks, resource allocation, congestion control and transport protocols, performance modelling, epidemics, network coding, routing, mobility models, social networks, enterprise and home network management, etc. Currently, there is a strong focus on data centre networks (private and cloud) where traditional assumptions that have underpinned the design of the Internet are challenged. The systems and networking team have a history of doing research in file and storage systems. We engage in storage research at all levels of the storage stack, from user-experience driven file system designs to data centre scale scalable, predictable and efficient storage systems. Recently we have been working on a software-defined storage (SDS) architecture that opens up the storage stack and makes it more controllable and programmable. We have also released the Microsoft Research Storage Toolkit to allow others to experiment with the architecture. We also build cold-storage systems such as Pelican, optimised for capacity and low cost. Pelican spins down disks to limit peak power draw thus allowing the disks to be more densely packed, and so reducing costs. Our system research spans hardware, programming languages, compilers, and applications. 
Our mission is to advance the state of the art in these areas to build cloud, mobile, and desktop platforms that are secure, high-performance, and cost- and energy-efficient. Our current focus is on providing memory safety, strong security, and high performance all at the same time. This requires us to define security properties at the programming language level and then enforce them using the compiler, runtime, OS and hardware. We are also investigating the impact of new hardware technologies such as on-chip customization and distributed integrated fabrics. This will enable building "rack-scale computers" with terabytes of memory, 100s of cores and low-latency communication between them. In the rack-scale computing project, we are looking at how to leverage these new technologies and at the implications for the software stack at large (OS, networking, and applications). Distributed Systems Our research group focuses on a vision of what our data center systems will look like years ahead, and reasons about the fundamental changes, both practical and theoretical, to distributed algorithms, distributed architectures, and networked hardware needed to realize such a vision. Core to our vision are components and mechanisms like systems for distributed coordination (a la ZooKeeper), replicated storage (such as BookKeeper for logging), and new forms of communicating distributed processes and servers, for example, with RDMA in the FARM project. Innovating the distributed systems that form the foundation of our services and that boost the productivity of our developers is an integral part of this group's mission. We also conduct research in the area of algorithms and systems for processing massive amounts of data. Our work aims at pushing the boundary of computer science in the area of algorithms and systems for large-scale computations. To find out more, please visit our project page. Our mission is to invent new wireless architectures and technologies that will support ever-growing traffic demands from mobile devices. We are particularly interested in the new spectrum access models (such as white-spaces) that will yield more flexible and efficient use of spectrum. Our research focus is on wireless network protocols, underlying signal processing algorithms and the systems that will run them. We are equally engaged in advancing state-of-the-art algorithmic research and building network prototypes and tools to prove our concepts. To find out more, please visit our project page. Empirical Software Engineering We develop empirically driven strategies, techniques and tools to optimize software development. We base our analysis on development process data—changes and tests, bug reports and patches, organizational structure and team management. Using this historical data allows us to characterize and model existing development processes with respect to efficiency and effectiveness and to simulate the impact of optimization strategies on the overall development process and quality, speed and cost goals. Analysing development process data has already proved useful to software development organizations, as they seek to manage scope, quality, cost and time in software development projects. To find out more, please visit this project page. Recent selected publications Intern Projects Internal website: Systems and Networking sharepoint. The Cambridge Systems & Networking group is always looking for interns, post-docs, software engineers and researchers.
For more information, visit Microsoft Research Careers. Prospective interns may wish to identify people or projects they are interested in, and informally email the relevant staff directly.
Computer
2015-48/1917/en_head.json.gz/607
End to End Report Creation and Management in SQL Server Reporting Services 2008 With Reporting Services 2008, Microsoft takes a step forward in presenting SQL Server as an enterprise data platform. With innovations in data regions, vast improvements in visualisation, and a new Report Designer, Microsoft SQL Server 2008 Reporting Services provides a tool that can be used by all members of the organization. This session will begin with installation issues. You will then walk through the authoring, management and delivery of reports, focusing on the new features of Reporting Services 2008 and creating a report in the new Report Designer, and raising awareness of report management options and the mechanisms used to deliver reports. Presented by Chris Testa-O'Neill. SQLBits IV. WMV Video. Chris Testa-O'Neill is the founder and Principal Consultant at Claribi. He is an experienced professional with over 14 years' experience of architecting, designing and implementing Microsoft SQL Server data and business intelligence projects at an enterprise scale. He has significant experience of leading and mentoring both business and technical project stakeholders in maximising investment in SQL Server and, more recently, in Azure solutions. A regular and respected speaker on the international SQL Server conference circuit, and an organiser of national SQL Server conferences and events, Chris has been recognised as a Microsoft Most Valuable Professional (MVP) by Microsoft, and has been a Microsoft Certified Trainer (MCT) for the last 14 years, having both authored and delivered Microsoft Official Courses.
Computer
2015-48/1917/en_head.json.gz/792
Original Link: http://www.anandtech.com/show/6421/inside-the-titan-supercomputer-299k-amd-x86-cores-and-186k-nvidia-gpu-cores
Inside the Titan Supercomputer: 299K AMD x86 Cores and 18.6K NVIDIA GPUs by Anand Lal Shimpi on October 31, 2012 1:28 AM EST
Earlier this month I drove out to Oak Ridge, Tennessee to pay a visit to the Oak Ridge National Laboratory (ORNL). I'd never been to a national lab before, but my ORNL visit was for a very specific purpose: to witness the final installation of the Titan supercomputer. ORNL is a US Department of Energy laboratory that's managed by UT-Battelle. Oak Ridge has a core competency in computational science, making it not only unique among all DoE labs but also making it perfect for a big supercomputer. Titan is the latest supercomputer to be deployed at Oak Ridge, although it's technically a significant upgrade rather than a brand new installation. Jaguar, the supercomputer being upgraded, featured 18,688 compute nodes - each with a 12-core AMD Opteron CPU. Titan takes the Jaguar base, maintaining the same number of compute nodes, but moves to 16-core Opteron CPUs paired with an NVIDIA Kepler K20X GPU per node. The result is 18,688 CPUs and 18,688 GPUs, all networked together to make a supercomputer that should be capable of landing at or near the top of the TOP500 list. We won't know Titan's final position on the list until the SC12 conference in the middle of November (position is determined by the system's performance in Linpack), but the recipe for performance is all there. At this point, its position on the TOP500 is dependent on software tuning and how reliable the newly deployed system has been.
Rows upon rows of cabinets make up the Titan supercomputer
Over the course of a day in Oak Ridge I got a look at everything from how Titan was built to the types of applications that are run on the supercomputer. Having seen a lot of impressive technology demonstrations over the years, I have to say that my experience at Oak Ridge with Titan is probably one of the best. Normally I cover compute as it applies to making things look cooler or faster on consumer devices. I may even dabble in talking about how better computers enable more efficient datacenters (though that's more Johan's beat). But it's very rare that I get to look at the application of computing to better understanding life, the world and universe around us. It's meaningful, impactful compute.
Gallery: Oak Ridge National Laboratory Tour - Titan Supercomputer
In the 15+ years I've been writing about technology, I've never actually covered a supercomputer. I'd never actually seen one until my ORNL visit. I have to say, the first time you see a supercomputer it's a bit anticlimactic. If you've ever toured a modern datacenter, it doesn't look all that different.
A portion of Titan
More Titan, the metal pipes carry coolant
Titan in particular is built from 200 custom 19-inch cabinets. These cabinets may look like standard 19-inch x 42RU datacenter racks, but what's inside is quite custom. All of the cabinets that make up Titan require a room that's about the size of a basketball court. The hardware comes from Cray. The Titan installation uses Cray's new XK7 cabinets, but it's up to the customer to connect together however many they want. ORNL is actually no different than any other compute consumer: its supercomputers are upgraded on a regular basis to keep them from being obsolete.
The pressure to stay up to date is even greater for supercomputers: after a period of time it actually costs more to run an older supercomputer than it would to upgrade the machine. Like modern datacenters, supercomputers are entirely power limited. Titan in particular will consume around 9 megawatts of power when fully loaded.
The upgrade cycle for a modern supercomputer is around 4 years. Titan's predecessor, Jaguar, was first installed back in 2005 but regularly upgraded over the years. Whenever these supercomputers are upgraded, old hardware is traded back in to Cray and a credit is issued. Although Titan reuses much of the same cabinetry and interconnects as Jaguar, the name change felt appropriate given the significant departure in architecture.
The Titan supercomputer makes use of both CPUs and GPUs for compute. Whereas the latest version of Jaguar featured 18,688 12-core AMD Opteron processors, Titan keeps the total number of compute nodes the same (18,688) but moves to 16-core AMD Opteron 6274 CPUs. What makes the Titan move so significant however is that each 16-core Opteron is paired with an NVIDIA K20X (Kepler GK110) GPU.
Photo: A Titan compute board: 4 AMD Opteron (16-core) CPUs + 4 NVIDIA Tesla K20X GPUs.
The transistor count alone is staggering. Each 16-core Opteron is made up of two 8-core die on a single chip, totaling 2.4B transistors built using GlobalFoundries' 32nm process. Just in CPU transistors alone, that works out to be 44.85 trillion transistors for Titan. Now let's talk GPUs. NVIDIA's K20X is the server/HPC version of GK110, a part that never had a need to go to battle in the consumer space. The K20X features 2688 CUDA cores, totaling 7.1 billion transistors per GPU built using TSMC's 28nm process. With a 1:1 ratio of CPUs and GPUs, Titan adds another 132.68 trillion transistors to the bucket, bringing the total transistor count up to over 177 trillion transistors for a single supercomputer. I often use Moore's Law to give me a rough idea of when desktop compute performance will make its way into notebooks and then tablets and smartphones. With Titan, I can't even begin to connect the dots. There's just a ton of computing horsepower available in this installation.
Transistor counts are impressive enough, but when you do the math on the number of cores it's even more insane. Titan has a total of 299,008 AMD Opteron cores. ORNL doesn't break down the number of GPU cores, but if I did the math correctly we're talking about over 50 million FP32 CUDA cores. The total computational power of Titan is expected to be north of 20 petaflops. Each compute node (CPU + GPU) features 32GB of DDR3 memory for the CPU and a dedicated 6GB of GDDR5 (ECC enabled) for the K20X GPU. Do the math and that works out to be 710TB of memory.
Photo: Titan's storage array.
System storage is equally impressive: there's a total of 10 petabytes of storage in Titan. The underlying storage hardware isn't all that interesting: ORNL uses 10,000 standard 1TB 7200 RPM 2.5" hard drives. The IO subsystem is capable of pushing around 240GB/s of data. ORNL is considering including some elements of solid state storage in future upgrades to Titan, but for its present needs there is no more cost effective solution for IO than a bunch of hard drives. The next round of upgrades will take Titan to around 20 - 30PB of storage, at peak transfer speeds of 1TB/s.
Most workloads on Titan will be run remotely, so network connectivity is just as important as compute. There are dozens of 10GbE links inbound to the machine. Titan is also linked to the DoE's Energy Sciences Network (ESNET) 100Gbps backbone.
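For readers who want to sanity-check the figures above, the per-node numbers already quoted in this article multiply out as follows (a worked version of the same arithmetic, nothing new added):

\begin{align*}
\text{CPU transistors} &: 18{,}688 \times 2.4\times10^{9} \approx 4.485\times10^{13} \quad (\text{44.85 trillion})\\
\text{GPU transistors} &: 18{,}688 \times 7.1\times10^{9} \approx 1.327\times10^{14} \quad (\text{132.68 trillion})\\
\text{Total transistors} &\approx 1.775\times10^{14} \quad (\text{over 177 trillion})\\
\text{CUDA cores} &: 18{,}688 \times 2{,}688 = 50{,}233{,}344 \quad (\text{over 50 million})\\
\text{Memory} &: 18{,}688 \times (32 + 6)\,\text{GB} = 710{,}144\,\text{GB} \approx 710\,\text{TB}
\end{align*}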
Physical Architecture
The physical architecture of Titan is just as interesting as the high level core and transistor counts. I mentioned earlier that Titan is built from 200 cabinets. Inside each cabinet are Cray XK7 boards, each of which has four AMD G34 sockets and four PCIe slots. These aren't standard desktop PCIe slots, but rather much smaller SXM slots. The K20s NVIDIA sells to Cray come on little SXM cards without frivolous features like display outputs. The SXM form factor is similar to the MXM form factor used in some notebooks.
Gallery: Oak Ridge National Laboratory Tour - Titan Installation
There's no way around it. ORNL techs had to install 18,688 CPUs and GPUs over the past few weeks in order to get Titan up and running. Around 10 of the formerly-Jaguar cabinets had these new XK boards but were using Fermi GPUs. I got to witness one of the older boards get upgraded to K20. The process isn't all that different from what you'd see in a desktop: remove screws, remove old card, install new card, replace screws. The form factor and scale of installation are obviously very different, but the basic premise remains.
As with all computer components, there's no guarantee that every single chip and card is going to work. When you're dealing with over 18,000 computers as a part of a single entity, there are bound to be failures. All of the compute nodes go through testing, and faulty hardware is swapped out, before the upgrade is technically complete.
OS & Software
Titan runs the Cray Linux Environment, which is based on SUSE 11. The OS has to be hardened and modified for operation on such a large scale. In order to prevent serialization caused by interrupts, Cray takes some of the cores and uses them to run all of the OS tasks so that applications running elsewhere aren't interrupted by the OS. Jobs are batch scheduled on Titan using Moab and Torque.
AMD CPUs and NVIDIA GPUs
If you're curious about why Titan uses Opterons, the explanation is actually pretty simple. Titan is a large installation of Cray XK7 cabinets, so CPU support is actually defined by Cray. Back in 2005 when Jaguar made its debut, AMD's Opterons were superior to the Intel Xeon alternative. The evolution of Cray's XT/XK lines simply stemmed from that point, with Opteron being the supported CPU of choice.
The GPU decision was just as simple. NVIDIA has been focusing on non-gaming compute applications for its GPUs for years now. The decision to partner with NVIDIA on the Titan project was made around 3 years ago. At the time, AMD didn't have a competitive GPU compute roadmap. If you remember back to our first Fermi architecture article from 2009, I wrote the following:
"By adding support for ECC, enabling C++ and easier Visual Studio integration, NVIDIA believes that Fermi will open its Tesla business up to a group of clients that would previously not so much as speak to NVIDIA. ECC is the killer feature there."
At the time I didn't know it, but ORNL was one of those clients. With almost 19,000 GPUs, errors are bound to happen. Having ECC support was a must have for GPU enabled Jaguar and Titan compute nodes. The ORNL folks tell me that CUDA was also a big selling point for NVIDIA. Finally, some of the new features specific to K20/GK110 (e.g. Hyper-Q and GPU Direct) made Kepler the right point to go all-in with GPU compute.
Power Delivery & Cooling
Titan's cabinets require 480V input to reduce overall cable thickness compared to standard 208V cabling. Total power consumption for Titan should be around 9 megawatts under full load and around 7 megawatts during typical use. The building that Titan is housed in has over 25 megawatts of power delivered to it.
In the event of a power failure there's no cost effective way to keep the compute portion of Titan up and running (remember, 9 megawatts), but you still want IO and networking operational. Flywheel-based UPSes kick in in the event of a power interruption. They can power Titan's network and IO for long enough to give diesel generators time to come on line.
The cabinets themselves are air cooled, however the air itself is chilled using liquid cooling before entering the cabinet. ORNL has over 6600 tons of cooling capacity just to keep the recirculated air going into these cabinets cool.
Applying for Time on Titan
The point of building supercomputers like Titan is to give scientists and researchers access to hardware they wouldn't otherwise have. In order to actually book time on Titan, you have to apply for it through a proposal process. There's an annual call for proposals, based on which time on Titan will be allocated. The machine is available to anyone who wants to use it, although the problem you're trying to solve needs to be approved by Oak Ridge.
If you want to get time on Titan you write a proposal through a program called INCITE. In the proposal you ask to use either Titan or the supercomputer at Argonne National Lab (or both). You also outline the problem you're trying to solve and why it's important. Researchers have to describe their process and algorithms as well as their readiness to use such a monster machine. Any program will run on a simple computer, but the requirements for needing a supercomputer with hundreds of thousands of cores are very strict. As a part of the proposal process you'll have to show that you've already run your code on machines that are smaller, but similar in nature (e.g. 1/3 the scale of Titan). Your proposal is then reviewed twice: once for computational readiness (can it run on Titan) and once for scientific peer review.
The review boards rank all of the proposals received, and based on those rankings time is awarded on the supercomputers. The number of requests outweighs the available compute time by around 3x. The proposal process is thus highly competitive. The call for proposals goes out once a year in April, with proposals due in by the end of June. Time on the supercomputers is awarded at the end of October, with the accounts going live on the first of January. Proposals can be for 1 - 3 years, although the multiyear proposals need to be renewed each year (proving the time has been useful, sharing results, etc.).
Programs that run on Titan are typically required to run on at least 1/5 of the machine. There are smaller supercomputers available that can be used for less demanding tasks. Given how competitive the proposal process is, ORNL wants to ensure that those using Titan actually have a need for it. Once time is booked, jobs are scheduled in batch and researchers get their results whenever their turn comes up. The end user costs for using Titan depend on what you're going to do with the data. If you're a research scientist and will publish your findings, the time is awarded free of charge.
All ORNL asks is that you provide quarterly updates and that you credit the lab and the Department of Energy for providing the resource. If, on the other hand, you're a private company wanting to do proprietary work, you have to pay for your time on the machine. On Jaguar the rate was $0.05 per core hour, although with Titan ORNL will be moving to a node-hour billing rate since the addition of GPUs throws a wrench in the whole core-hour billing increment.
Supercomputing Applications
In the gaming space we use additional compute to model more accurate physics and graphics. In supercomputing, the situation isn't very different. Many of ORNL's supercomputing projects model the physically untestable (either for scale or safety reasons). Instead of getting greater accuracy for the impact of an explosion on an enemy, the types of workloads run at ORNL use advances in compute to better model the atmosphere, a nuclear reactor or a decaying star.
I never really had a good idea of specifically what sort of research was done on supercomputers. Luckily I had the opportunity to sit down with Dr. Bronson Messer, an astrophysicist looking forward to spending some time on Titan. Dr. Messer's work focuses specifically on stellar decay, or what happens immediately following a supernova. His work is particularly important as many of the elements we take for granted weren't present in the early universe. Understanding supernova explosions gives us unique insight into where we came from.
For Dr. Messer's studies, there's a lot of CUDA Fortran that's used, although the total amount of code that runs on GPUs is pretty small. There may be 20K - 1M lines of code, but in that complex codebase you're only looking at tens of lines of CUDA code for GPU acceleration. There are huge speedups from porting those small segments of code to run on GPUs (much of that code is small because it's contained within a loop that gets pushed out in parallel to GPUs vs. executing serially). Dr. Messer tells me that the actual process of porting his code to CUDA isn't all that difficult (after all, there aren't that many lines to worry about), but it's changing all of the data around to make the code more GPU friendly that is time intensive. It's also easy to screw up. Interestingly enough, in making his code more GPU friendly a lot of the changes actually improved CPU performance as well, thanks to improved cache locality. Dr. Messer saw a 2x improvement in his CPU code simply by making data structures more GPU friendly.
Many of the applications that will run on Titan are similar in nature to Dr. Messer's work. At ORNL what the researchers really care about are covers of Nature and Science. There are researchers focused on how different types of fuels combust at a molecular level. I met another group of folks focused on extracting more efficiency out of nuclear reactors. These are all extremely complex problems that can't easily be experimented on (e.g. hey, let's just try not replacing uranium rods for a little while longer and see what happens to our nuclear reactor). Scientists at ORNL and around the world working on Titan are fundamentally looking to model reality, as accurately as possible, so that they can experiment on it. If you think about simulating every quark, atom and molecule in whatever system you're trying to model (e.g. fuel in a combustion engine), there's a ton of data that you have to keep track of.
You have to look at how each one of these elementary constituents impacts one another when exposed to whatever is happening in the system at the time. It's these large scale problems that are fundamentally driving supercomputer performance forward, and there's simply no letting up. Even at two orders of magnitude better performance than what Titan can deliver with ~300K CPU cores and 50M+ GPU cores, there's not enough compute power to simulate most of the applications that run on Titan in their entirety.
Researchers are still limited by the systems they run on and thus have to limit the scope of their simulations. Maybe they only look at one slice of a star, or one slice of the Earth's atmosphere, and work on simulating that fraction of the whole. Go too narrow and you'll lose important understanding of the system as a whole. Go too broad and you'll lose fidelity that helps give you accurate results. Given infinite time you'll be able to run anything regardless of hardware, but for researchers (who happen to be human) time isn't infinite. Having faster hardware can help shorten run times to more manageable amounts. For example, reducing a 6 month runtime (which isn't unheard of for many of these projects) to something that can execute to completion in a single month can have a dramatic impact on productivity. Dr. Messer put it best when he told me that keeping human beings engaged for a month is a much different proposition than keeping human beings engaged for half a year.
There are other types of applications that will run on Titan without the need for enormous runtimes; instead they need lots of repetitions. Hurricane simulation is one of those types of problems. ORNL was in between generations of supercomputers at one point and donated some compute time to the National Tornado Center in Oklahoma during that transition. During the time they had access to the ORNL supercomputer, their forecasts improved tremendously.
ORNL also has a neat visualization room where you can plot, in 3D, the output from work you've run on Titan. The problem with running workloads on a supercomputer is that the output can be terabytes of data, which tends to be difficult to analyze in a spreadsheet. Through 3D visualization you're able to get a better idea of general trends. It's similar to the motivation behind us making lots of bar charts in our reviews vs. just publishing a giant spreadsheet, but on a much, much, much larger scale. One visualization ORNL showed was of data from a Titan run simulating a pressurized water nuclear reactor.
At a high level, the Titan supercomputer delivers an order of magnitude increase in performance over the outgoing Jaguar system at roughly the same energy price. Using over 200,000 AMD Opteron cores, Jaguar could deliver roughly 2.3 petaflops of performance at around 7MW of power consumption. Titan approaches 300,000 AMD Opteron cores but adds nearly 19,000 NVIDIA K20 GPUs, delivering over 20 petaflops of performance at "only" 9MW. The question remains: how can it be done again? In 4 years, Titan will be obsolete and another set of upgrades will have to happen to increase performance in the same power envelope. By 2016 ORNL hopes to be able to build a supercomputer capable of 10x the performance of Titan but within a similar power envelope. The trick is, you don't get the efficiency jump that comes from adopting GPUs for compute a second time.
ORNL will have to rely on process node shrinks and improvements in architectural efficiency, on both CPU and GPU fronts, to deliver the next 10x performance increase. Over the next few years we'll see more integration between the CPU and GPU with an on-die communication fabric. The march towards integration will help improve usable performance in supercomputers just as it will in client machines. Increasing performance by 10x in 4 years doesn't seem so far fetched, but breaking the 1 Exaflop barrier by 2020 - 2022 will require something much more exotic. One possibility is to move from big beefy x86 CPU cores to billions of simpler cores. Given ORNL's close relationship with NVIDIA, it's likely that the smartphone core approach is being advocated internally. Everyone involved has differing definitions of what is a simple core (by 2020 Haswell will look pretty darn simple), but it's clear that whatever comes after Titan's replacement won't just look like a bigger, faster Titan. There will have to be more fundamental shifts in order to increase performance by 2 orders of magnitude over the next decade. Luckily there are many research projects that have yet to come to fruition. Die stacking and silicon photonics both come to mind, even though we'll need more than just that. It's incredible to think that the most recent increase in supercomputer performance has its roots in PC gaming. These multi-billion transistor GPUs first came about to improve performance and visual fidelity in 3D games. The first consumer GPUs were built to better simulate reality so we could have more realistic games. It's not too surprising then to think that in the research space the same demands apply, although in pursuit of a different goal: to create realistic models of the world and universe around us. It's honestly one of the best uses of compute that I've ever seen.
Cloud-based Developer Tools Usher in Development-as-a-Service
Cloud-based DaaS is a reality for organic developer collaboration thanks to advancements in browser capacity and the emergence of HTML5.
by Herman Mehling
Cloud9 and the DaaS IDE
An offshoot of Ajax.org, Cloud9 provides a cloud-based commercial IDE that allows Web and mobile developers to work together in remote teams anywhere, anytime. The platform's NodeJS framework supports HTML5, Python, Ruby and PHP.
Cloud9 enables developers to start projects behind a single URL, share their code, and collaborate with co-developers anywhere in the world without having to install anything on the client. More than 30,000 developers around the world are using Cloud9 to build and collaborate on software projects.
"The platform runs in the browser and lives in the cloud, allowing development teams to run, debug and deploy applications from anywhere, anytime," said Daniels.
The DaaS tool also offers syntax support for popular programming languages, as well as the ability to:
- simultaneously collaborate on code and projects
- run real-time code analysis
- debug and test applications
It also includes GitHub, Bitbucket and Joyent integration. Cloud9 offers a free version and a premium offering that costs $15 per month.
Addressing the genesis of the platform, Daniels said the company saw the need for a cloud-based IDE for which Web development and JavaScript were the core focuses. "We wanted to create an alternative to Eclipse variants and other Java or C++ IDEs, where extending and customizing applications is done in either Java or C++, and is generally very difficult to use," said Daniels. "We figured that if you develop Web applications to run online, why shouldn't your application development be online too?" Daniels explained.
Recently, Cloud9 IDE raised $5.5 million in Series A funding from Accel Partners and product development software company Atlassian Software. Daniels said Cloud9 will use the funds to add support for multiple languages, platforms and cloud/mobile SDKs.
Herman Mehling has written about IT for 25 years. He has written hundreds of articles for leading computer publications and websites.
Creators of Bulletstorm and Gears of War: Judgment open new studio called The Astronauts
First, they were People Can Fly. Now they're aiming higher than the sky. Now they are The Astronauts. Poland's favorite shooter developers have started their very own studio, and they'll be making games using the engine built by their former employers.
It's hard not to be exhausted by the preponderance of guns in video games. There are other things to do in the digital world that are just as entertaining as shooting people, monsters, and aliens. When the games about shooting things are made by People Can Fly, though, it's hard to complain. The studio's signature work on games like Painkiller and Bulletstorm singled them out as makers of balletic chaos, games of cartoon bloodshed as concerned with momentum as with gore. Excitement abounded when word came out that the Polish studio would take on Epic's Gears of War series with next year's Gears of War: Judgment.
Concerns were raised one month after that game's announcement though. Shortly after E3 2012, Epic announced that it would take full ownership of People Can Fly, but the studio's co-founder Adrian Chmielarz, artist Andrzej Poznanski, and artist Michal Kosieradzki left the studio at the same time. It turns out that all three creators, key minds behind Bulletstorm, Painkiller, and even Gears of War: Judgment, went off to found their very own independent studio. Enter The Astronauts.
The studio announced itself to the world on Thursday, opening a funny website discussing the philosophy of the studio and the founders' history together at People Can Fly. It seems there are no hard feelings over the split with Epic, since the studio has signed a "long-term" license to make games using the Unreal Engine. "We thought about making our own engine for our projects," reads The Astronauts' website. "That lasted about ten seconds, nine of which were filled with laughter."
It makes sense that The Astronauts would gravitate towards a development platform they have history with rather than devoting the massive resources necessary to develop their own technology, but the question remains: What will they make with it?
Unknown. The team is targeting its first release for next year, so it's safe to assume that The Astronauts aren't making a massive retail console game. They do say that their first game will be built on Unreal Engine 3, not the graphics- and physics-intensive Unreal Engine 4, so a downloadable or mobile title also seems like a good bet.
5 Core Elements Of Interactive Storytelling
by Thomas Grip on 08/19/13 12:59:00 pm
The following blog post, unless otherwise noted, was written by a member of Gamasutra's community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
Originally posted at Frictional Game's blog.
Introduction
Over the past few years I have had a growing feeling that videogame storytelling is not what it could be. And the core issue is not in the writing, themes, characters or anything like that; instead, the main problem is with the overall delivery. There is always something that hinders me from truly feeling like I am playing a story. After pondering this on and off for quite some time I have come up with a list of five elements that I think are crucial to get the best kind of interactive narrative.
The following is my personal view on the subject, and is much more of a manifesto than an attempt at a rigorous scientific theory. That said, I do not think these are just some flimsy rules or the summary of a niche aesthetic. I truly believe that this is the best foundational framework to progress videogame storytelling and a summary of what most people would like out of an interactive narrative.
Also, it's important to note that all of the elements below are needed. Drop one and the narrative experience will suffer.
With that out of the way, here goes:
1) Focus on Storytelling
This is a really simple point: the game must be, from the ground up, designed to tell a story. It must not be a game about puzzles, stacking gems or shooting moving targets. The game can contain all of these features, but they cannot be the core focus of the experience. The reason for the game to exist must be the wish to immerse the player inside a narrative; no other feature must take precedence over this.
The reason for this is pretty self-evident. A game that intends to deliver the best possible storytelling must of course focus on this. Several of the problems outlined below directly stem from this element not being taken seriously enough.
A key aspect of this element is that the story must be somewhat tangible. It must contain characters and settings that can be identified with and there must be some sort of drama. The game's narrative cannot be extremely abstract, too simplistic or lack any interesting, story-related happenings.
2) Most of the time is spent playing
Videogames are an interactive medium and therefore the bulk of the experience must involve some form of interaction. The core of the game should not be about reading or watching cutscenes; it should be about playing. This does not mean that there needs to be continual interaction; there is still room for downtime and it might even be crucial to not be playing constantly.
The above sounds pretty basic, almost a fundamental part of game design, but it is not that obvious. A common "wisdom" in game design is that choice is king, which Sid Meier's quote "a game is a series of interesting choices" neatly encapsulates. However, I do not think this holds true at all for interactive storytelling. If choices were all that mattered, choose-your-own-adventure books should be the ultimate interactive fiction, but they are not. Most celebrated and narrative-focused videogames do not even have any story-related choices at all (The Last of Us is a recent example).
Given this, is interaction really that important? It sure is, but not for making choices. My view is that the main point of interaction in storytelling is to create a sense of presence, the feeling of being inside the game's world. In order to achieve this, there needs to be a steady flow of active play. If the player remains inactive for longer periods, they will distance themselves from the experience. This is especially true during sections when players feel they ought to be in control. The game must always strive to maintain and strengthen the experience of "being there".
3) Interactions must make narrative sense
In order to claim that the player is immersed in a narrative, their actions must be somehow connected to the important happenings. The gameplay must not be of irrelevant, or even marginal, value to the story. There are two major reasons for this.
First, players must feel as though they are an active part of the story and not just an observer. If none of the important story moments include agency from the player, they become passive participants. If the gameplay is all about matching gems then it does not matter if players spend 99% of their time interacting; they are not part of any important happenings and their actions are thus irrelevant. Gameplay must be foundational to the narrative, not just a side activity while waiting for the next cutscene.
Second, players must be able to understand their role from their actions. If the player is supposed to be a detective, then this must be evident from the gameplay. A game that requires cutscenes or similar to explain the player's part has failed to tell its story properly.
4) No repetitive actions
The core engagement of many games comes from mastering a system. The longer time players spend with the game, the better they become at it. In order for this process to work, the player's actions must be repeated over and over. But repetition is not something we want in a well formed story. Instead we want activities to only last as long as the pacing requires. The players are not playing to become good at some mechanics; they are playing to be part of an engrossing story. When an activity has played out its role, a game that wants to do proper storytelling must move on.
Another problem with repetition is that it breaks down the player's imagination. Other media rely on the audience's mind to fill out the blanks for a lot of the story's occurrences. Movies and novels are vague enough to support these kinds of personal interpretations. But if the same actions are repeated over and over, the room for imagination becomes a lot slimmer. Players lose much of the ability to fill gaps and instead get a mechanical view of the narrative.
This does not mean that the core mechanics must constantly change; it just means that there must be variation in how they are used. Both Limbo and Braid are great examples of this. The basic gameplay can be learned in a minute, but the games still provide constant variation throughout the experience.
5) No major progression blocks
In order to keep players inside a narrative, their focus must constantly be on the story happenings. This does not rule out challenges, but it needs to be made sure that an obstacle never consumes all focus. It must be remembered that the players are playing in order to experience a story. If they get stuck at some point, focus fades away from the story and is instead put on simply progressing.
In turn, this leads to the unraveling of the game's underlying mechanics and to players trying to optimize systems. Both of these are problems that can seriously degrade the narrative experience.
There are three common culprits for this: complex or obscure puzzles, mastery-demanding sections and maze-like environments. All of these are common in games and make it really easy for players to get stuck, either by not being sure what to do next, or by not having the skills required to continue. Puzzles, mazes and skill-based challenges are not banned, but it is imperative to make sure that they do not hamper the experience. If some section is pulling players away from the story, it needs to go.
Games that do this
These five elements all sound pretty obvious. When writing the above I often felt I was pointing out things that were already widespread knowledge. But despite this, very few games incorporate all of the above. This is quite astonishing when you think about it. The elements by themselves are quite common, but the combination of all is incredibly rare.
The best case for games of pure storytelling seems to be visual novels. But these all fail at element 2; they simply are not very interactive in nature and the player is mostly just a reader. They often also fail at element 3 as they do not give the player many actions related to the story (most are simply played out in a passive manner).
Action games like The Last of Us and BioShock Infinite all fail on elements 4 and 5 (repetition and progression blocks). For larger portions of the game they often do not meet the requirements of element 3 (story-related actions) either. It is also frequently the case that much of the story content is delivered in long cutscenes, which means that some do not even manage to fulfill element 2 (that most of the game is played). RPGs do not fare much better as they often contain very repetitive elements. They often also have way too much downtime because of lengthy cutscenes and dialogue.
Games like Heavy Rain and The Walking Dead come close to feeling like an interactive narrative, but fall flat at element 2. These games are basically just films with interactions slapped on to them. While interaction plays an integral part in the experience, it cannot be said to be a driving force. Also, apart from a few instances the gameplay is all about reacting; it does not have the sort of deliberate planning that other games do. This removes a lot of the engagement that otherwise comes naturally from videogames.
So what games do fulfill all of these elements? As the requirements of each element are not super specific, fulfillment depends on how one chooses to evaluate. The one that I find comes closest is Thirty Flights of Loving, but it is slightly problematic because the narrative is so strange and fragmentary. Still, it is by far the game that comes closest to incorporating all elements. Another close one is To The Moon, but it relies way too much on dialog and cutscenes to meet the requirements. Gone Home is also pretty close to fulfilling the elements. However, your actions have little relevance to the core narrative and much of the game is spent reading rather than playing.
Whether one chooses to see these games as fulfilling the requirements or not, I think they show the path forward. If we want to improve interactive storytelling, these are the sort of places to draw inspiration from. Also, I think it is quite telling that all of these games have gotten both critical and (as far as I know) commercial success.
There is clearly a demand and appreciation for this sort of experience.
Final Thoughts
It should be obvious, but I might as well say it: these elements say nothing of the quality of a game. A game that meets none of the requirements can still be excellent, but it cannot claim to have fully playable, interactive storytelling as its main concern. Likewise, a game that fulfills all can still be crap. These elements just outline the foundation of a certain kind of experience. An experience that I think is almost non-existent in videogames today.
I hope that these five simple rules will be helpful for people to evaluate and structure their projects. The sort of videogames that can come out of this thinking is an open question as there is very little done so far. But the games that are close to having all these elements hint at a very wide range of experiences indeed. I have no doubts that this path will be very fruitful to explore.
Notes
Another important aspect of interaction that I left out is the ability to plan. I mention it a bit when discussing The Walking Dead and Heavy Rain, but it is worth digging into a little bit deeper. What we want from good gameplay interaction is not just that the player presses a lot of buttons. We want these actions to have some meaning for the future state of the game. When making an input, players should be simulating in their minds how they see it turning out. Even if it just happens on a very short time span (e.g. "need to turn now to get a shot at the incoming asteroid"), it makes all the difference, as now the player has adapted the input in a way that never happens in a purely reactionary game.
The question of what is deemed repetitive is quite interesting to discuss. For instance, a game like Dear Esther only has the player walking or looking, which does not offer much variety. But since the scenery is constantly changing, few would call the game repetitive. Some games can also offer a really complex and varied range of actions, but if the player is tasked to perform these constantly in similar situations, it quickly gets repetitive. I think it is fair to say that repetition is mostly an asset problem. Making a non-repetitive game using limited asset counts is probably not possible. This also means that a proper storytelling game is bound to be asset heavy.
Here are some other games that I feel are close to fulfilling all elements: The Path, Journey, Everyday the Same Dream, Dinner Date, Imortall and Kentucky Route Zero. Whether they succeed or not is a bit up to interpretation, as all are a bit borderline. Still, all of these are well worth one's attention. This also concludes the list of all games I can think of that have, or at least are close to having, all five of these elements.
Links:
http://frictionalgames.blogspot.se/2012/08/the-self-presence-and-storytelling.html
Here is some more information on how repetition and challenge destroy the imaginative parts of games and make them seem more mechanical.
http://blog.ihobo.com/2013/08/the-interactivity-of-non-interactive-media.html
This is a nice overview on how many storytelling games give the player no meaningful choices at all.
http://frictionalgames.blogspot.se/2013/07/thoughts-on-last-of-us.html
The Last of Us is the big storytelling game of 2013.
Here is a collection of thoughts on what can be learned from it.
http://en.wikipedia.org/wiki/Visual_novel
Visual Novels are not to be confused with Interactive Fiction, which is another name for text adventure games.
Thirty Flights of Loving
This game is played from start to finish and has a very interesting usage of scenes and cuts.
To The Moon
This is basically an RPG but with all of the fighting taken out. It is interesting how much emotion can be gotten from simple pixel graphics.
Gone Home
This game is actually a bit similar to To The Moon in that it takes an established genre and cuts away anything not to do with telling a story. A narrative emerges by simply exploring an environment.
Head of Audio
What did you do before joining Jagex?
I have been in the games industry for nearly 20 years as an Audio Manager at places such as EA, Realtime Worlds and Rage Software. Most recently I worked on the Harry Potter franchise and APB (All Points Bulletin). I think the highlight of my career so far has been working with the Philharmonia Orchestra at Abbey Road Studios.
How did you first hear about Jagex and the job opening?
I've known about Jagex and obviously Runescape for years, a real British success story. The job was advertised in the gaming press, I spoke to a few colleagues, and knew this was an exciting opportunity that I had to be part of.
What's your current role at Jagex and how long have you been with the company?
I have been at Jagex for around 8 months as Head of Audio. My responsibility is for music, sound effects and voice overs across the company, defining the vision and raising the quality bar in every piece of content we release. I manage and schedule a team of extremely talented musicians, sound designers and coders who work across all of our projects, including marketing trailers.
What comprises a typical working day for you?
It's difficult to define a typical day in the audio department. Whether we are working on new projects or updating old content and technology, each day can bring its own challenges and surprises. Recently I have been working on voiceovers for Runescape, which is bringing an extra layer of immersion to the game. My daily focus is on improving quality, scheduling and reviewing the team's work, and making sure we hit all our release dates. I also try to keep my hand in at writing music and designing sound, but there usually aren't enough hours in the day.
What do you like best about working for Jagex?
I love the autonomy we are given and the opportunity that every individual has to make a difference. Everybody's contribution is valued and helps to shape the future of our games. I was attracted to this role because I feel like I can make a real difference to the way audio is perceived at Jagex, both through the creativity of my team and better production values.
What is your favourite perk or benefit at Jagex?
Team jollies are a great idea and help bring the team closer together. For our next jolly we are planning to go sky diving, which we are all looking forward to... I think. My second favourite perk has to be monkey nuts in the fruit boxes!
What has been your favourite memory to date of working at Jagex?
The Jagex Christmas Masquerade Ball was an amazing night for staff and their partners. It was an opportunity to dress up and celebrate everybody's hard work from the last year, and we certainly did that, a little too much in a few cases!
What makes Jagex different?
There is a definite thirst for success here, but combined with a family approach and very little bureaucracy. Informal but professional. Jagex encourages a fair balance of work and play, which is really refreshing compared to the usual 'crunch' I have experienced elsewhere.
Finally, what would be your one piece of advice for someone interested in your role?
For anyone interested in becoming part of the audio team, you need to make your application stand out from the crowd. Be original and imaginative, concise but creative with your showreel, and give it that wow factor! Today's audio is far more complex than it has ever been, so also try to include examples of the technical side of sound design and implementation.
Most of all, if you have fun creating sound then it's likely we will enjoy listening to it; after all, audio isn't really work, is it... but don't tell anyone that.
CLTV44: Schema Design with Document-Oriented Databases
This recording from "NoSQL Live Boston" covers the Schema Design and Modeling panel. Unfortunately the sound has its problems and there are some gaps in it. I hope it's useful nevertheless! (Download MP3)
Shownotes
Moderated by Durran Jordan
Durran is one of Hashrocket's hardest-working consultants and primary author of the up-and-coming open source MongoDB mapping framework, Mongoid. He's contributed to the MongoDB Ruby driver, MongoMapper, and provided MongoDB support to various other open source Ruby frameworks. An expert Java developer for close to 10 years, including several years' tenure at world-renowned ThoughtWorks, Durran made the leap over from the dark side upon joining Hashrocket in early 2008. He hasn't looked back.
Eliot Horowitz
Eliot Horowitz is CTO of 10gen, the company that sponsors the open source MongoDB project. Eliot is one of the core MongoDB kernel committers. Eliot is also the co-founder and chief scientist of ShopWiki. In January 2005, he began developing the crawling and data extraction algorithm that is the core of ShopWiki's innovative technology. Eliot has quickly become one of Silicon Alley's up-and-coming entrepreneurs, having been selected as one of BusinessWeek's "Top 25 Entrepreneurs Under Age 25" in 2006. Prior to ShopWiki, Eliot was a software developer in the R&D group at DoubleClick. Eliot received a B.S. in Computer Science from Brown University.
Bryan Fink
Bryan Fink is an Engineering Manager at Basho Technologies. Basho is the developer of Riak, a Dynamo-inspired, highly-available, elastically-scalable datastore. During his time at Basho, Bryan has touched nearly every corner of Riak, and was a lead developer on two applications built on top of Riak. Bryan has written several blog articles about how to develop apps on top of Riak, and he can regularly be found answering questions on the riak-users mailing list. Before working at Basho, Bryan worked for companies in the financial analysis and electronics testing industries. Bryan graduated with a B.S. in Computer Science and Engineering from MIT in 2004.
Paul J. Davis graduated in 2005 from the University of Iowa with a BSE in Electrical Engineering. Following graduation he spent two years as a Research Assistant in the Large Scale Digital Cell Analysis System (LSDCAS) lab of Prof. Michael Mackey. He currently works as a bioinformatician in the parasitology division at New England Biolabs. He was led to non-relational systems by the idea that biology has no schema. He's been a committer to the Apache CouchDB project for the last year and was an enthusiastic early adopter the year before that. Beyond CouchDB, Pa
Splinter Cell on the PS2
I'm a huge fan of Splinter Cell on the XBox. When I heard it was coming out for the PS2, I grabbed a copy to see how they compared. I have to honestly say that in the past I'd always thought the graphics on the PS2 and XBox were pretty much the same - that they both could achieve the same level of crispness. But if what Splinter Cell on the PS2 looks like is REALLY the best the PS2 can do, compared to the exact same game on the XBox, I'm afraid the PS2 needs a facelift. I was really disappointed with the lower quality and jaggedness. Splinter Cell on the XBox was amazingly smooth, and we played the game many, many times because it was so enjoyable. While playing the PS2 version might have been better graphic-wise than say a Bond game, it wasn't up to the XBox version's high quality. The PS2 version promoted its "great new levels" and adjusted gameplay. So we started comparing. We found actually that many levels were missing many, many pieces. In the XBox a bad guy might have been standing against a balcony looking out over the city. In the PS2, he was standing against a flat wall! There were bad guys missing. There were puzzles missing. It felt "dumbed down" in many ways. Again, having enjoyed it so much on the XBox with its complexity, intricate graphics and challenging puzzles, it was extremely disappointing to keep hitting, again and again, a spot where a scene had been 'kiddiefied'. They were stupid little changes, too. For example, in the XBox version, you're walking through a house and find a bathroom, with lots of detail. Up on the wall is a medicine chest, just like you'd expect. In the PS2 version, the bathroom is missing and you just find a health kit lying in the middle of the hallway. It took away a lot of the realism. If you don't have an XBox, Splinter Cell is still a fun game and can give you enjoyment for many hours. I suppose if they tried to port Halo to the PS2 you might have the exact same problem - that a game that people LOVE on the XBox because of its giant levels, intricate graphics and complex gameplay would be reduced to tiny levels, jagged graphics and simple puzzles. Yes, it could still be fun. But it's sort of like listening to your favorite Rock Anthem through tinny speakers at a quiet volume, instead of blasting the sound through your high quality stereo. For that same "quality level" experience on the PS2, I'd really stick to games where the PS2's abilities shine - which seem to be the Kingdom Hearts and Final Fantasy genre of games.
This content was written by Lisa Shea.
Delivering compressed multimedia
Amir Majidimehr, Director, Windows Media Consumer Group, and Sean Alexander, Technical Product Manager, Windows Media, Microsoft Corp., Redmond, Wash.
11/24/1999 01:31 PM EST
Given the long history of audio and video data transmission over various networking topologies, it would seem that these two technologies were made to go together. Indeed, given ultimate freedom to choose (bandwidth constraints aside), consumers would prefer to have all of their audio/video content delivered to them on the wire. Why buy shiny metal disks (CDs and DVDs) and spin them on mechanical devices resembling turntables from 50 years ago, still suffering from skips and scratches, when you could just have the same content transferred to the consumer directly?
But anyone who has actually tried to build such systems, especially when they involve wide-area networks such as the Internet, knows acutely that delivering on that scenario is a very difficult challenge. Networks fast enough and with the needed quality to deliver uncompressed broadcast-quality video together with CD-quality audio are simply too expensive to deliver en masse. After all, we are dealing with data rates approaching 200 Mbits/second, which is beyond the capabilities of most LANs, let alone the general Internet.
With the advances in audio and video compression and broadband data transmission, there is little reason to store and transmit the source material in its entirety. But a quick look at the compression rates necessary for mainstream Internet delivery shows what at first seems to be a hopeless situation. Delivering a video signal to the typical "28k-modem" user means dealing with about 22 kbits/s of total data for both audio and video (the rest of the bandwidth from 22k to 28k is usually reserved to accommodate network overhead and the need for extra headroom when recovering from congestion). Allocating 5 kbits to audio, for example, leaves a paltry 17 kbits for video. This requires an amazingly high compression ratio of nearly 8,700:1. Of course, the video image can be shrunk (subsampled) and frame rates can be modified to reduce this ratio, but the magnitude of the problem remains nevertheless. Even though audio is thought to be an easier problem to solve, it too is subjected to high compression ratios of 63:1 at 22 kbits/s. This may not seem like a high ratio, but the ear is a far less forgiving instrument than the eye, making it impossible to pass off such audio as "CD quality."
Even after you get past the pure compression issues, there are additional challenges when the compressed data is transmitted over the general Internet. With no built-in Quality of Service today, there is no assurance that the connected modem rate of 22 kbits/s will remain constant during, say, a five-minute music video, let alone a full-length movie. Bandwidth drops can result in annoying interruptions of audio and video. While prebuffering can help, it increases latency of the delivered content, which is undesirable.
Although the problems are daunting, there are solutions. One is to use high-performance compression algorithms to increase the quality given a specific bit rate; a second is to build an intelligent full-duplex system to deal with network throughput fluctuations; a third is to avoid real-time transmission and deliver the data as a file to the client to be played back later.
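As a rough cross-check of those ratios, here is the arithmetic with assumed source rates of roughly 148 Mbits/s for uncompressed broadcast-quality video and about 1.4 Mbits/s for CD audio; those two source figures are my assumptions (consistent with the "approaching 200 Mbits/second" total above), not numbers stated in the article:

\begin{align*}
\text{Video: } & \frac{148{,}000\ \text{kbits/s}}{17\ \text{kbits/s}} \approx 8{,}700 : 1\\
\text{Audio: } & \frac{1{,}400\ \text{kbits/s}}{22\ \text{kbits/s}} \approx 63 : 1
\end{align*}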
To get the best quality, it is imperative to use a compression system designed to deliver good quality at the required bit rates. For example, MPEG-2 is an excellent compression scheme, which can deliver good quality video and audio. But it only does so at very high bit rates, evidenced by the 10-Mbit/s transfer rate of DVD and 19-Mbit/s rate of HDTV. Try to use this at the "broadband" speed of 300 kbits/s over a digital subscriber line (DSL) and you will get extremely low quality. The same can be said of audio-compression algorithms such as MPEG-1 Layer 3 (MP3) which was designed for good performance at rates exceeding 128 kbits/s. One would be lucky to simply get "AM radio quality" at POTS modem rates, which dashes any hope of having users abandon their radios for the Internet. Fortunately, the need to deliver excellent quality at low bit rates has resulted in a number of new and innovative algorithms, some of which are developed by standards groups while others use advanced proprietary techniques. On the video front, MPEG-4 is leading the way by producing excellent quality at astonishingly low bit rates. For example, the enhanced implementation of MPEG-4 in Microsoft Windows Media is able to reproduce as many as 10 to 12 frames/s using 160 x 120 resolution at 17 kbits/s. The same technology can easily deliver near-VHS quality of 320 x 240 at 30 frames/s at just 300 kbits/s, encroaching on the domain of MPEG-1, which generally requires bit rates of 1.1 Mbits/s and higher. With the rapidly growing base of broadband connections at those rates, one can start looking at entertainment applications that were once the domain of leased-line, cable or satellite-based systems. On the audio side, standards-compliant compression systems that perform well at low bit rates simply don't exist. While work is being done in the MPEG-4 committee and elsewhere, no commercially standards-compliant audio codec exists that can produce high-fidelity music at modem rates. Once you have high-quality compressed content, the challenge becomes managing the transmission link. With an average of seven routers between the source and the destination on the Internet, it is simply not realistic to assume fixed, guaranteed bandwidth even on a "digital" DSL or cable modem connection. Needless to say, the situation is even worse over an analog modem. And as mentioned before, buffering the data before playback, while useful, cannot be done aggressively as it increases the initial latency. The solution then is to use a dedicated "streaming" server. Using an end-to-end control-feedback system, the client and the server can produce the optimal experience for the user given the network bandwidth at the moment. For example, the Microsoft Windows Media Player can instruct the server to reduce its video transmission rate to deal with bandwidth drop. So, going from 300 kbits/s to 100 kbits/s, for example, will result in dropping the frame rate from 30 to 15, but without any interruption of the stream. Extreme bandwidth drops result in the system's pausing the video but keeping the audio transmission. This is important as the audio usually carries far more information than the video and users tend to consider audio breaks a far more serious degradation of service than video ones. 
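To make the control-feedback idea concrete, here is a minimal client-side sketch in TypeScript. It is purely illustrative: the thresholds are assumptions chosen to match the 300-to-100 kbits/s example above, and this is not the actual Windows Media protocol or API.

// Illustrative only: a toy rate-adaptation policy, not a real streaming stack.
interface StreamPlan {
  videoEnabled: boolean;  // on extreme drops: pause video but keep audio flowing
  videoFrameRate: number; // frames per second to request from the server
}

// Thresholds are assumed for the example; a real system would negotiate them.
function planForBandwidth(measuredKbps: number): StreamPlan {
  if (measuredKbps >= 300) return { videoEnabled: true, videoFrameRate: 30 };
  if (measuredKbps >= 100) return { videoEnabled: true, videoFrameRate: 15 };
  if (measuredKbps >= 40)  return { videoEnabled: true, videoFrameRate: 5 };
  // Below that, audio alone is worth more to the viewer than broken video.
  return { videoEnabled: false, videoFrameRate: 0 };
}

// A drop from 300 to 100 kbits/s halves the frame rate without interrupting playback.
console.log(planForBandwidth(300)); // { videoEnabled: true, videoFrameRate: 30 }
console.log(planForBandwidth(100)); // { videoEnabled: true, videoFrameRate: 15 }
console.log(planForBandwidth(20));  // { videoEnabled: false, videoFrameRate: 0 }

The point of the sketch is simply that the client reports what it measures and the server degrades gracefully along a pre-agreed ladder, rather than stalling the whole stream.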
When the efficiency of the video and audio compression and the sophisticated network management features of the streaming servers are combined, a remarkably capable system for delivering entertainment-quality content to users is produced for the cost of a dedicated network system. Use of traditional computer systems for the client and server further reduces the cost structure of these systems, bringing the capability to most content and service providers.
Given audio's lower bandwidth requirement and smaller clip size (four-minute song vs. two-hour movie), it is reasonable to also look at transferring files to the user's PC or music devices to be played back instead of streaming. Indeed, people tend to want to listen to their music more than once, making it a natural for some form of caching or local storage.
Compression engines
Fortunately, the excellent audio-compression engines necessary for low-bit-rate transmission can also be used in this application. The result is reduced file size, which reduces the download time and increases storage capacity on the user's PC or playback device. For example, a four-minute song encoded at 64 kbits/s produces a file that is 1.9 Mbytes vs. 42 Mbytes for the uncompressed one on the CD. To put this in context, downloading the 64k version will take less than about 11 minutes vs. more than four hours for the uncompressed one. Yet to most consumers the quality will be very close to the original. This is made possible by encoding only the psycho-acoustical properties that the average person is able to register and discern in the original uncompressed file while maintaining high fidelity.
Once you allow content to be downloaded to the user's system, you get into many complex matters such as copyright management. Fortunately, systems such as Microsoft's Windows Digital Rights Management let content owners optionally encrypt and "lock" downloaded content (audio, video or both) to the user's PC so that additional copies cannot be made without authorization. Though such systems are nothing new in the consumer electronics world (witness the encrypted nature of DVD or Macrovision copy protection on VHS tapes), they are a relatively new development in the PC multimedia arena. Although some consumers may balk at any idea of copy protection, it is a necessary component of any modern digital music and video distribution to protect the authors. Copy-protection technology also allows new business models on the Web, such as the Windows Media Pay-Per-View solutions system, in which users pay for access to video streams such as live concerts or sporting events.
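The file-size and download-time figures quoted above also check out, assuming the same roughly 22 kbits/s of usable modem throughput discussed earlier and standard CD audio at about 1,411 kbits/s (both assumptions carried over from earlier context, not restated in this paragraph):

\begin{align*}
\text{Compressed song} &: 64\ \text{kbits/s} \times 240\ \text{s} = 15{,}360\ \text{kbits} \approx 1.9\ \text{Mbytes}\\
\text{Uncompressed song} &: 1{,}411\ \text{kbits/s} \times 240\ \text{s} \approx 338{,}700\ \text{kbits} \approx 42\ \text{Mbytes}\\
\text{Download at 22 kbits/s} &: 15{,}360 / 22 \approx 700\ \text{s} \approx 11.6\ \text{min} \quad\text{vs.}\quad 338{,}700 / 22 \approx 4.3\ \text{hours}
\end{align*}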
3 Web Design Building Blocks Every Entrepreneur Needs to Know
Tom Cochran

You're a small-business owner and you probably need a digital presence. Typically that means you need a website, though increasingly, some businesses can function digitally with just a Facebook page. The biggest problem you're going to face is that the terminology used is essentially a foreign language. To make matters worse, there are some digital strategists and web developers out there -- the bad ones, whom you want to stay away from -- who assume you're a Luddite when you look bewildered by things like responsive design, media queries, CMS or HTML5. Don't worry about it and don't be scared off by this. It's not your job to understand the technologies behind the Web. I'll walk you through some of the basics of websites today to make you a little more conversant and hopefully able to filter out the charlatans trying to win your business.

1. What is responsive web design and why do you need it?

The explosive adoption of the iPhone, iPad and other smart phones and tablets has changed the way digital content is consumed. If you're going to succeed in your business, you need to provide customers with a quality, frictionless experience. That means you need to build one website that is usable across a multitude of devices. Responsive web design (RWD) is the strategy being used today. This essentially involves developing a flexible website that will "respond" and adjust itself to the various screen dimensions of devices. If you're looking at a responsive website on an iPad and rotate it, the website will readjust and reconfigure itself automatically to the screen shape. This doesn't mean the website will look or behave in the same manner on a laptop versus a tablet, but the user experience will be optimized for each. RWD maintains aesthetics while optimizing usability.

2. What is HTML5 and CSS3 and why do you need them?

All you really need to know is that these two languages are the basic building blocks of websites. HTML5 is like the foundation and framing used to build a house. It provides web browsers with instructions on how to structure and display content. CSS3, on the other hand, is like the details that make a house unique: paint, landscaping, furniture and art. CSS3 takes care of the overall aesthetics of your website. HTML5 and CSS3 are complementary technologies that aren't terribly useful when not used together. Both are required components of responsive web design. For example, CSS3 provides different layout and aesthetic instructions based on the device accessing a website, so that a website may show or hide certain things if the user is accessing it on an iPhone instead of a laptop.

3. What is a CMS and why do I need it?

You need a content management system (CMS) to simplify the task of managing a website. In the old days, website content was managed by creating an HTML file on your computer and uploading it to a web server, where it was accessible to the world. The problem was that sites started to get very large, with thousands of pages. When you wanted to update something minor like the copyright date in the footer, you needed to update every file on the site. Enter the CMS. You shouldn't have to know how to code to manage a website. A CMS separates the content from the presentation, giving you the flexibility to update your website.
Some examples include WordPress and Drupal -- both open source and free to install -- or Expression Engine, which is affordable at $299. One of our Atlantic Media sites, Quartz, runs on WordPress, and WhiteHouse.gov, which I also worked on, is powered by Drupal. You don't have to be a mechanic to own and operate a car. Likewise, you don't need to be a developer to understand the basic components of a website. Understanding common terminology like responsive design, knowing the complementary building blocks of HTML5 and CSS3 and recognizing the importance of a CMS will demystify the world of web development. Bring yourself up to a conversant level so you can make educated business decisions about your digital presence. Then let the pros take over from there.
Microsoft SP3 for Windows XP Imminent [UPDATED]
by John Martellaro, 3:50 PM EDT, March 24th, 2008

Microsoft has posted an overview of what to expect in Windows XP Service Pack 3 (SP3). The last scheduled update of the Windows XP system includes only a small number of new functionalities. The update can be applied directly to SP2 or SP1 systems.

"Windows XP Service Pack 3 (SP3) includes all previously released updates for the operating system. This update also includes a small number of new functionalities, which do not significantly change customers' experience with the operating system," according to Microsoft.

The update can be applied to: Windows XP, Windows XP Home Edition, Windows XP Home Edition N, Windows XP Media Center Edition, Windows XP Professional Edition, Windows XP Professional N, Windows XP Service Pack 1, Windows XP Service Pack 2, Windows XP Starter Edition and Windows XP Tablet PC Edition.

The overview page has a link to a PDF file which explains what's in the Service Pack, the new and enhanced functionality, and how to deploy it. Some beta testers have reported modest (10 percent) speed gains.

When the Service Pack is released, Mac users who have been running Windows XP in virtualization will find the single service pack a handy way to bring their guest Windows OS completely up to date. Microsoft told TMO that they've only promised SP3 for the "first half" of 2008, but many industry observers believe it will be well before June 30.

The public preview of XP SP3, labelled RC2, is available for download and user feedback. Microsoft has also released Vista SP1 for download.

[Mar 25: This article was updated with additional information from Microsoft for clarity.]
GEOS: The Graphical Environment Operating System
posted by Kroc on Thu 24th Aug 2006 20:26 UTC

GEOS managed to offer nearly all the functionality of the original Mac on a 1 MHz computer with 64 Kilobytes of RAM. It wasn't an OS written to run on a generic x86 chip on a moving hardware platform. It was written using immense knowledge of the hardware and the tricks one could use to maximise speed. Note: After a small break, here is another one of the articles for the Alternative OS contest.

1. An Introduction to this article

As we take time to look at the grand variety of operating systems available, it shows us that there is no one right way to 'do it'. With hardware already a commodity, the way we interact with our computers is taken as a standard, and a given best-practice of design. The joy of alternative operating systems is the variety of Computer-Human interface models available. Even now, the modern operating system is designed from the perspective of the engineer. Whilst actual human guinea-pig testing is done on new interfaces, it still does not make up the bulk of the design process. User involvement in design is almost an after-thought. What we've come to accept as the standard way of interacting with a computer was cemented in the early days by the extremely knowledgeable and technical system engineers of the day, through a process of creating:

- what they felt was right
- what the limited hardware was capable of

So, for my article, I have decided to focus on an Operating System born in the early days of consumer-available 'WIMP' interfaces, on extremely restrictive hardware. It is my belief that 'the restraint of hardware is the true muse of the software engineer'. Good software does not come from being given unlimited resources; just take a look at the hardware requirements for modern PC games, for graphics that were reproducible (until recently) on a 300 MHz, 4 MB VRAM Playstation 2.

2. A Quick History of GEOS

The history surrounding GEOS and its implementation within hardware restraints unimaginable nowadays makes for the most interesting parts of the OS, rather than just the GUI itself. Below is a brief history of the Operating System, up to its heyday; where we'll then get into usage, screenshots and technical details :) This history has been carefully gathered and researched through actual GEOS manuals, cited sources and websites.

When you think of the history of our modern day operating systems, they are either the works of individuals and volunteers based on technical ability and software beliefs, or the work of large corporations employing many programmers. Rarely is the history of an OS based in the vibrant gaming era of the 1980s. The Graphical Environment Operating System was released in 1986, created by Berkeley Softworks: a small company started up by serial entrepreneur Brian Dougherty. GEOS is a classic Mac-like GUI running on Commodore 64 / 128 hardware, then later the Apple II, and PC. Around 1980, Brian turned down a job at IBM to go join the games manufacturer Mattel, then maker of the Intellivision gaming system. Brian helped write games for the system for about a year, before leaving with other engineers to form Imagic, a very successful games company that rivalled Activision, before being wounded in the games industry crash of 1983. Whilst Imagic went under in 1986, Brian did not.
Dougherty formed Berkeley Softworks (later Geoworks), who, in collaboration with a firm that made batteries, worked on a product for the airlines named "Sky Tray". The concept was a computer built into the backs of the seats, and Brian and his team would develop the OS for it. GEOS was coded by Dougherty's elite team of programmers, who had cut their teeth on the very restricted Atari 2600 and Intellivision games consoles of the time (usually 4 KB RAM). However, after the OS had been written, airline deregulation mandated that all in-flight extras were to be trimmed down to save weight and fuel, culling the Sky Tray project. With all that time put into an OS, Dougherty looked at the compatible (6502 microprocessor-based) Commodore 64. A few changes were needed and the OS sprang to life on the affordable home computer, complementing the powerful graphics capabilities of the machine with a GUI.

Even though Berkeley Softworks started out small, with only two salespeople, the new software proved very popular because of the low price for the necessary hardware (and of course the capability of the OS). This was due in part to the aggressive pricing of the Commodore 64 as a games machine and home computer (with rebates, the C64 was going for as little as $100 at the time). This was in comparison to a typical PC for $2000 (which required MS-DOS, and another $99 for Windows 1.0) or the venerable Mac 512K Enhanced, also $2000. In 1986, Commodore Business Machines announced the C-Model revision of the Commodore 64 in a new Amiga-like case (dropping the 'breadbox' look), and bundled GEOS with it in the US. At its peak, GEOS was the second most widely used GUI, next to Mac OS, and the third most popular operating system (by units shipped) next to MS-DOS and Mac OS.
Altiora Publications

Developing Software Requirements Specifications: A Guide for Project Staff

Getting software requirements right can be difficult. Requirements collection is crucial to the development of successful information systems. To achieve a high level of software quality, it is essential that the Software Requirements Specification be developed in a systematic and comprehensive way. If this is done, the system will meet the user's needs, and will lead to user satisfaction. If it is not done, the software is likely not to meet the user's requirements, even if the software conforms with the specification and has few defects.

Developing Software Requirements Specifications: A Guide for Project Staff is an easy-to-use, step-by-step guide to developing high-quality, effective Requirements Lists (RL), Statements of User Requirements (SUR) and Software Requirements Specifications (SRS). It prescribes both the format and content of these important documents. Developing Software Requirements Specifications: A Guide for Project Staff is basically a 'plain English' version of IEEE Std 830, Guide to Software Requirements Specifications, and IEEE Std P1233-1992, Guide for Developing System Requirements Specifications, but with added features to enable project staff with average literacy skills to effectively develop an RL, SUR and SRS. Documents prepared in compliance with this how-to guide will therefore also comply with IEEE Std 830. The SRS has business and technical considerations added which the customer may or may not be able to provide in the original Requirements List. The SRS provides all relevant detail about the proposed system to enable a development team to commence the design/development phases.

BENEFITS TO YOU

With this comprehensive guide to the preparation of the requirements capture and specification documents, you will learn:

- how the Requirements List (RL) is produced by the customer to effectively describe the features and capabilities required by a proposed system.
- how the Statement of User Requirements document is developed, based on the Requirements List.
- how content is developed.
- how the Requirements List and Statement of User Requirements should be formatted.
- how the Review/Approval process is performed.
- how to achieve consistency across Requirements Lists which are derived from different sources. The resulting consistency allows for a more objective assessment of the requirements.
- how to produce a Statement of User Requirements (SUR) which can be effectively used as the basis for the development of a detailed user requirement specification.
- how project staff are given the means to identify and follow the processes involved with the Requirements Capture Process.
- how the guide describes the format and contents of the Requirements List.

The SRS itself serves several purposes:

- Feedback to the customer - the SRS allows the customer to verify that the analyst has understood the problem to be solved and the required behaviour of the software. As such the SRS must be presented in terms that can be understood by the customer. This most commonly means that it must be written in natural language. Should natural language prove inadequate to unambiguously describe complex requirements, modelling tools such as data flow diagrams, structured English, state transition diagrams and decision tables may be used, providing they can be understood by the customer.
- Problem decomposition - the physical act of writing the requirements down crystallises ideas, organises the information, surfaces and resolves conflicts and assists in the orderly decomposition of the larger problem into its component parts.
- Input to design - the SRS is the primary reference for the development of the design. As such, it must contain an accurate and detailed description of system behaviour from which a system architect can devise a design solution.
- A basis for product validation - the SRS is the primary source from which the developer produces a strategy for testing the end product. All requirements must therefore be verifiable. That is, the user must be able to devise a test to verify that the end product satisfies the requirement.

AVOIDING THE TRAPS

Why do projects fail? Much work has been done over the past 30 years on why and how a large proportion of systems fail to achieve their purpose. They may be abandoned before the project is finished, or the system is developed but the customer does not use it because it does not meet their requirements. The system may conform to its original SRS, and still be a failure if it does not do what the customer needs it to do.

User-Developer gap. The most common cause of not getting the requirements right is the existence of a cultural gap between supplier and customer. These differences result in poor or inhibited communication between the stakeholders in the requirements gathering process, leading to an incomplete or poorly defined statement of user requirements. Systems subsequently developed using such an SRS are unlikely to meet the user's needs and will in all likelihood be abandoned or need to be substantially reworked.

Closing the gap. The need for IS developers and users to collaborate has long been recognised by both the practitioner and academic worlds. A wide range of what might be called integrative processes have been developed to promote a collaborative approach to requirements analysis. These integrative processes include the ETHICS model for participative systems development, Joint Application Development and the use of particular people as integrators, such as the 'hybrid manager'. Where used, integrative processes are successful in improving the level of collaboration and effective communication between suppliers and customers.

Too hard? No! But despite their effectiveness in solving a widely recognised, highly expensive problem, in reality integrative processes are not generally used. In practice, they are seen as expensive, time-consuming and a threat to established ways of developing software. Tight project budgets and schedules put most integrative processes into the 'nice to have in an ideal world' category.

Technical writer as facilitator. The author has developed a proven integrative process that can substantially improve the chances of achieving successful project outcomes. A technical writer, who is already a member of the supplier development team, should take responsibility for the writing of the SRS. This is likely to be welcomed by the software engineers who would otherwise have to do it themselves. Technical writers are usually able to understand both the technical point of view of the supplier and the non-technical view of the customer/user, and as such can bridge the cultural divide. This suits them to act as a facilitator of communication between supplier and customer.
If an appropriate template is used to develop the SRS, such as this one, it will be possible for a complete, correct and verifiable SRS to be developed, and for this most dangerous of pitfalls for any development project to be avoided.

CONTENTS

Click on the links below to view the complete Table of Contents and sample chapter. This gives you an indication of the scope and level of detail of this document, plus representative content of the actual document. Download Table of Contents (pdf) | Download a sample of the book (pdf)

INCLUDED IN PACKAGE

The book is supported by an MS Word SRS template that you can use immediately by saving the template as your working document, then using the established structure of the template and the explanatory text in each section to fill in the required information. The explanatory text can then be deleted, leaving you with a document that should impress your manager for its professional appearance and comprehensive content.

WHO SHOULD BUY IT

Developing Software Requirements Specifications: A Guide for Project Staff is aimed at project staff from any industry sector: project managers, team leaders, and any professional seeking to give themselves the means to develop a comprehensive, well-organised and well-presented Statement of User Requirements and/or Software Requirements Specification.

BUY IT

Ordering your PDF copy of Developing Software Requirements Specifications: A Guide for Project Staff for only US$19.95 direct from the author is easy. The printed book would sell for two or three times this price on the big online bookstores. It represents good value for money and a sound investment for any project manager, team leader or person wanting to develop their requirements gathering, analysis and management capabilities.

Transaction Record. Your credit card transaction will be processed using the latest secure processes by CCNow (one of the WWW's oldest and most respected credit card processors). Your credit card transaction statement will show CCNow.

Delivery Information. You will receive a download link by email soon after you place the order. The download contains the book in PDF.

Money back guarantee. Your purchase comes with a money back guarantee if you are not completely satisfied.

ABOUT THE AUTHOR

David Tuffley is a Senior Consultant with the Software Quality Institute and a Lecturer in the School of ICT at Griffith University in Australia. He has published extensively in the academic literature on the topic of this book, and also extensively in the commercial world of practical how-to guides for project managers and staff. Long-established on-line bookseller: David has a proven track record in the production of practical, user-friendly guides for project managers since the early 1990s. He has been selling these guides to satisfied customers via the internet since the mid-1990s, making him one of the longest-established on-line booksellers on the WWW.

(c) Copyright, 2009. Altiora Publications, Redland Bay, Australia. All rights reserved.
GPL 3 likely to appear in early 2007
Submitted by srlinuxx on Thursday 4th of August 2005 02:11:04 PM

The next version of the GPL (General Public License), GPL 3, is likely to appear in early 2007, according to a board member of the Free Software Foundation (FSF) who is working on drafting the future release. The GPL is the most popular license for free software and was created by Richard Stallman in 1989 for the GNU free software operating system project. Version 2 of the GPL appeared in 1991.

"Version 2 has now been running for [nearly] 15 years without substantial modification," said Eben Moglen, a member of the board of the Free Software Foundation and a professor of law and legal history at Columbia University Law School. "It [GPL 2] has successfully been used to go from a world in which free software was a very marginal community to one in which everyone, everywhere is aware of it."

Moglen, Stallman and other members of the FSF are working on drafting GPL 3. Moglen, the chair of the Software Freedom Law Center, is due to give a talk at the LinuxWorld show next week in San Francisco on drafting the new version.

"We need to globalize GPL," Moglen said. "GPL 2 has elegantly worked outside of the U.S. in Europe and elsewhere, but it needs to become a bit more legally cosmopolitan" so that the license is more accessible to lawyers around the world, he added. "The GPL depended heavily on the Berne Convention, but it's still speaking language very reminiscent of U.S. copyright law," Moglen said. "The GPL needs to recognize global copyright more explicitly. It sounds strange to lawyers in some countries." The FSF also needs to clarify some language in the license that some English-speaking lawyers have had trouble with, he added.

GPL 3 will also need to reflect changes in technology, most notably the emergence of Web services, according to Moglen. The GPL grants users freedom to copy, modify and share software, but the FSF needs to determine the situation when what's being redistributed is not a copy of the software itself but a service based on that software.

Moglen has already received a flood of suggestions about GPL 3, he said. He expects to receive more than 150,000 comments on the draft license, with as many as 8,000 organizations wanting their views to be heard. "They think of GPL in terms of their own experience as developers, businesspeople and users," he said. "We want to capture that and the full reach of the community, running all the way from IBM and HP to the Linux user group of Nairobi." The discussion of GPL 3 by groups around the world will reveal how "genuinely multicultural" the Free Software Foundation is, he added.

Over the next several months, Moglen, Stallman and other FSF members will come up with a first draft for GPL 3, he said. Moglen also plans to announce the formation of a number of advisory committees in relation to GPL 3. "We'll release the first discussion draft very late this year or very early next year," Moglen said. "We'll provide an extensive rationale as to why we made the choices we made and, in a limited way, why we didn't include some other suggestions." There will then be about a year of what Moglen called "intense moderated dialogue" about the draft. "I hope and believe we'll release GPL 3 in early 2007," he said.

By China Martens
Human Head Studios

Human Head Studios Inc. is a privately owned independent game development studio based in Madison, Wisconsin. Founded in 1997, Human Head began as a single-team development studio dedicated to creating the highest quality video and computer games. Since that time, the company has expanded to more than 35 veteran game developers. Human Head is a full-service developer providing game development for Windows/PC, Xbox, PS2 and other next-generation gaming consoles. We cover all the bases, providing technical, artistic and gameplay design, 3D modeling and animation, concept and production artwork. We also have our own internal sound development studio. In 2002 we began a separate Adventure Games Division for the production of tabletop role-playing, board, strategy and non-collectible card games. This division is wholly separate from our video games division, with its own staff and production. You can find out more about our Adventure Games Division here. The products from our Adventure Games Division are published by Green Ronin Publishing.
Thousands of Web Sites Hit With New Twist on Old SQL Injection Hack
April 1, 2011 at 1:10 pm PT

A relatively simple hack has been used to compromise at least 500,000 Web sites–and perhaps as many as 1.5 million–in such a way that visitors are tricked into downloading fake PC security software. Dubbed LizaMoon after the Web site to which some users are redirected, the attack was first documented by the security research firm Websense. The hack seeks to trick Web users into believing that their computer has been compromised by viruses and prompts them to download fake security software that itself causes further problems. Among the sites serving up the links to the fake software sites are some belonging to Apple and used on its iTunes store, though Apple is said to have cleaned up the affected code on its site.

Websense says that so far it appears that sites using Microsoft SQL Server 2003 and 2005 are at risk, though as yet SQL Server 2008 doesn't appear to be affected. No word yet from Microsoft about any of this, though I've asked them for a comment.

Update at 4:25 pm PDT: I just got this statement from Microsoft: "Microsoft is aware of reports of an ongoing SQL injection attack. Our investigation has determined these sites were exploited using a vulnerability in certain third-party content management systems. This is not a Microsoft vulnerability." I did not, however, get a hint as to the identity of the "third-party content management system."

SQL injection attacks take place when malicious code–essentially commands telling a Web server to do things it's not supposed to do–is inserted into routine queries of a Web site's database. A basic way to carry out these attacks is to add extra commands into the URL bar of a browser when visiting a vulnerable Web site. It's not entirely clear exactly how this series of attacks has been carried out.

I talked with Josh Shaul, CTO of Application Security, Inc., a database security vendor that specializes in researching attacks on databases. "It's a very new take on a very old type of attack," Shaul said. "SQL injection has been the primary way that databases have been attacked for years. What's different here is that people are putting the code that runs their Web sites in the database itself. And that's what's so troubling. Effectively you've exposed your code to an attacker so they can go modify it."

Attackers found hundreds of thousands of sites that use a single user account to query their databases for all visitors, Shaul said. "The databases are clearly configured in an insecure way," he said. "That's what it all comes down to. Why is it that the log-in to use the database has the right to modify the code for the Web site itself? That makes no sense at all." In this case, the attackers took advantage of the weakness to insert a script that creates a pop-up that sends a site's visitors to another site that looks like a legitimate place to download new Microsoft security software. That makes the attack on the Web sites themselves just a means to an end–the end being tricking innocent Web users into clicking on a series of links and paying to download fake security software.

Websense produced a video demonstrating what happens. The short lesson is this: If you see a pop-up that tells you you've got a virus or that your computer is compromised by a bunch of security issues, don't click any of the links in it; it's probably not legit.
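The mechanics Shaul describes -- extra commands smuggled into a routine database query -- are easiest to see next to the standard defence, parameterized queries. The sketch below is illustrative only and uses Python's built-in sqlite3 module purely for convenience; that choice is an assumption, since the affected sites actually ran Microsoft SQL Server behind third-party content management systems.

```python
# Illustrative only: the vulnerable string-building pattern behind SQL
# injection, next to a parameterized query. Uses the stdlib sqlite3 module;
# the real LizaMoon targets ran Microsoft SQL Server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (title TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('home', 'welcome')")

# Attacker-controlled input, e.g. taken straight from a URL parameter.
user_input = "home'; UPDATE pages SET body = '<script src=...>'; --"

# Vulnerable pattern: the input is pasted into the SQL text, so any quotes
# and semicolons it contains become part of the command the server runs.
unsafe_sql = f"SELECT body FROM pages WHERE title = '{user_input}'"
print("query the server would see:", unsafe_sql)

# Safer pattern: the driver sends the input as data, never as SQL syntax.
row = conn.execute(
    "SELECT body FROM pages WHERE title = ?", (user_input,)
).fetchone()
print("parameterized lookup result:", row)  # None -- no page by that name
```

Parameterization alone would not fix the configuration problem Shaul highlights -- a database account that is allowed to rewrite the site's own code -- but it removes the injection vector itself.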
World War III: Black Gold (c) JoWood Productions
Windows, Pentium II-266, 64MB RAM, 800MB HDD, 4X CD-ROM
Wednesday, January 2nd, 2002 at 04:38 PM
By: Westlake

World War III: Black Gold review
Game Over Online - http://www.game-over.com

In order to review World War III: Black Gold, I have to describe some history relating to the game, so bear with me. In June of 2000, TopWare Interactive released Earth 2150, a 3D real-time strategy game (RTS). It was based in the future (surprise), but that gave TopWare the opportunity to play around with the weapons, units and factions included in the game, and the result was something that looked good and could even be called innovative. Then, nine months after the release of Earth 2150, TopWare released The Moon Project. It was a sequel / expansion pack to Earth 2150, but it featured an awful campaign and few improvements over the original game, and it was disappointing to say the least.

Soon after the release of The Moon Project, TopWare went bankrupt, and that looked like the end of the line. But, lo and behold, the people of TopWare got back together as Reality Pump Studios, and in October of this year they released World War III: Black Gold. Now, if you've been paying attention to the dates, you might be worried because in an age when games take years to develop, Reality Pump managed to re-form itself and create a game, all in seven months. And your worries would be justified because Black Gold is basically Earth 2150 Lite (or Earth 2150 For Dummies). Black Gold uses the same engine as Earth 2150, but either because Reality Pump decided the game was too complicated, or because they needed to change the setting to modern times, just about everything that made Earth 2150 unique and interesting has been removed or streamlined in Black Gold, and as a result Black Gold is a fairly ho-hum playing experience.

World War III: Black Gold takes place in the near future, when the world realizes it's about to run out of oil. The UN decides to confiscate all oil fields, and pretty soon the three factions included in the game -- the United States, Russia, and Iraq -- are scuffling for position. That's an interesting premise, and it sounds like something Tom Clancy might write about. I mean, what would happen if the United States or Russia invaded the Middle East? But Reality Pump doesn't do a lot with the possibilities, and in fact most of the missions in the game deal with Iraq's terrorist intentions rather than oil, and Russia never gets involved in the Middle East at all. So the story is a little disappointing. (And a warning here: each faction gets two campaigns, and in both Iraqi campaigns, Iraq detonates a nuclear warhead against the US. You'd think Reality Pump would have done something about that for the US release, but they didn't.)

The three factions are also a little disappointing. For some reason Reality Pump only included jeeps, trucks, tanks, and helicopters in the game -- possibly because those units most resemble the units from Earth 2150 -- and so not only are the factions more limited than they should be, there isn't a whole lot of variety. The United States can call in airstrikes against targets (so planes sort of make an appearance), and their units are more powerful in head-to-head battles. The Russians can use chemical weapons to kill the crews of units and buildings, so they can more easily take them over (although all three factions can use assault vehicles to "convert" buildings).
And the Iraqis have access to suicide bombers (including a Ford pickup) that can destroy units and structures in a single hit. But otherwise the factions play about the same, and there doesn’t seem to be much difference between, for example, the US Abrams tank, the Russian T-80 tank, and the Iraqi T-72 tank. They just feel like tanks.The gameplay also has some problems. Vehicles can’t drive in reverse, so if your units get too close to the enemy, their only method of retreat is to continue driving forward so they can turn, probably getting a few of them killed in the process. Line of sight also seems messed up, as if the game engine is checking from the bottom of a unit to the bottom of its potential targets, and so slight ridges in the landscape can prevent units from seeing each other, even when they should. And the pacing is pretty slow. Even at the fastest speed setting, it takes each oil shaft 15 seconds to generate $250, but units, buildings, and upgrades all cost thousands of dollars, so just creating a base and an attack force takes a while. And then once you do have units, Reality Pump modeled the terrain so units move more slowly on, say, rocky ground than on roads, and while that makes sense for jeeps and trucks, it doesn’t make as much sense for tanks. Plus, helicopters are treated as hovercraft, staying a fixed distance above the ground, and so the terrain affects them as well since they have to slow down greatly when going over hills. And so unit movement tends to be slow, since maps feature lots of hills and not a lot of roads.But the gameplay isn’t awful. Black Gold uses a slimmed down version of the Earth 2150 engine, and some of the good things from Earth 2150 are back in Black Gold (then again, some aren’t). Your units require ammunition, so you can’t just attack the enemy; you have to make sure you have supply lines open as well. There are lots of units and upgrades to research, so you have to decide whether to spend money on oil shafts (to earn money faster) or units (to become more powerful) or upgrades (to become more powerful in a different way). Plus you have to decide on the types of units to build (air versus land, light versus heavy), and, because of the slow movements of units, you have to be extra careful where you keep your units so they can respond to attacks. So there is a lot of strategy involved in the game, and, as long as you don’t mind the problems listed in the previous paragraph, Black Gold can even be fun to play.Meanwhile, the graphics for Black Gold are great. Black Gold uses a 3D engine, and the units, buildings, and terrain all look pretty good, even when you zoom in the camera. Plus, there is real-time lighting with day and night cycles and vehicle headlights, which adds to the game’s realism, and things like fires and explosions -- even nuclear explosions -- look excellent. About the only downside to the graphics is that, because there are so few unit types, they tend to look like each other, and it’s tough to tell whether a jeep, for example, is carrying a machine gun or a stinger rocket. In the single player game you can pause all you want to figure these things out, but in multiplayer it’s more troublesome.The sound is less good than the graphics, but it still does the job. Basically it’s about what you’d expect from an RTS: the voice acting ranges from good to atrocious, the sound effects are strong, and, while the background music is better than normal, there isn’t enough of it. 
The only real unusual thing about the sound is that for some reason Reality Pump included explosions and gunfire either as part of the background music or as random background noise (I couldn't tell which). Well, that's just a bad idea since those types of noises should be used exclusively as cues about what's going on in the game, and it's annoying how they're used now.

Lastly, Black Gold's manual is simply terrible. Some units and structures (like the Communications Center) aren't described. Some interface options (like the command queue) aren't described. And the things that are described aren't described well. Consider this sentence: "In WWIII result conditional increases in resource production." Huh? And I don't even want to get into the acronyms on the units, like "FCR" and "TOW" and "FIM," that aren't mentioned anywhere, or the joy in trying to remember the difference between the MI-6, the MI-26, and the MI-28 helicopters. Reality Pump could have made the game friendlier to play, in the manual and elsewhere, but they didn't do it, and you might have a hard time learning what's going on if you haven't played Earth 2150 or The Moon Project.

So, there you go. World War III: Black Gold has some good points and some bad points, and I think it qualifies for the term "mediocre." Maybe I'm biased by having played Earth 2150 and comparing Black Gold negatively to it, but then if anybody hasn't played Earth 2150, I'd heartily recommend they try that before Black Gold. Otherwise, Black Gold tries to be a realistic war game in the RTS genre, and it's better than Rival Interactive's Real War, so people might enjoy it for that reason alone.

Written By: Westlake

Ratings:
[28/40] Gameplay
[14/15] Graphics
[11/15] Sound
[08/10] Interface
[07/10] Multiplayer
[05/05] Technical
[01/05] Documentation
Assassin's Creed 3: Liberation 'as large and deep' as the console entries

Despite being released on the PlayStation Vita, Assassin's Creed 3: Liberation is every bit as "large" as its console counterparts in the AC series. Ubisoft has released a new developer documentary for Assassin's Creed 3: Liberation, their upcoming title which brings the "key pillars" of the franchise to the PlayStation Vita. Although still titled Assassin's Creed 3, this version introduces a new female assassin — a first for the series — who goes by the name Aveline de Grandpre. "She's sort of drawn to question her values and assumptions," explains scriptwriter Jill Murray, "and she's pulled in different directions by both the Assassins and the Templars."

In addition to discussing Liberation's 18th century time period and Louisiana location, the developers also highlight the key features for Assassin's Creed 3 on the PlayStation Vita. "In terms of the scale of Liberation, we wanted to replicate the console experience on a handheld," adds lead scriptwriter Corey May. "So you will find that it's as large and deep as any of the other console entries in the franchise."

"We tend to focus on what we consider pivotal moments in human history and the time of the American Revolution is obviously a very critical one, not just for the colonies and the United States itself, but actually for the entire world," he concludes.

As sort of an addition to the upcoming console version of Assassin's Creed 3, Aveline will (at some point) meet up with new series protagonist Connor, but Ubisoft didn't reveal how the two will interact. Assassin's Creed 3: Liberation is set to release on October 30 for the PlayStation Vita.
Public Release Dataset

The public use version of the baseline Hurricane Katrina Community Advisory Group Study dataset is archived by the Resource Center for Minority Data at the Inter-university Consortium for Political and Social Research (ICPSR). The codebook, questionnaire, and user agreement to access the data can be downloaded on the ICPSR website at http://www.icpsr.umich.edu/cocoon/MDRC/STUDY/22325.xml

Instructions for downloading files can be found on the ICPSR website. Technical support for accessing the data can be obtained by contacting the ICPSR staff at [email protected].

Restricted Data Use Agreement

Due to the sensitive nature of the data, users will need to complete and submit to ICPSR an application form, a data use agreement, and a data protection plan before they are allowed to use these data. These materials and more information on access restrictions are available on the ICPSR website at http://www.icpsr.umich.edu/cgi-bin/bob/archive2?study=22325&path=MDRC&docsonly=yes.

Helpline and FAQ

Questions should be sent directly to the ICPSR helpline at [email protected]. In an effort to help support public users, we also established a FAQ section on our website: http://hurricanekatrina.med.harvard.edu/faq.php.

This study is supported by NIH Research Grants R01 MH070884-01A2 and R01 MH081832 from the US Department of Health and Human Services, National Institutes of Health (NIH), the Office of the Assistant Secretary of Planning and Evaluation, the Federal Emergency Management Agency, and the Administration for Children and Families. All content © 2005 Harvard Medical School
Ticket to freedom with content contributions

My photos, such as this one, are being organized and in the hopper pending a full release into the public domain, or under a similar license such as the one being discussed now. We need to be able to chunk blocks of knowledge freely. We need to extend conversations and global understandings. We need to have the rights and liberties to have the elements come together. And then the experts are going to be the ones with the best insights and glue.

The Wikipedia universe and free content efforts are getting a facelift, again, with a new, needed, trusted free and open content license. It looks very good.

The free culture movement is growing. Hackers have created a completely free operating system called GNU/Linux that can be used and shared by anyone for any purpose. A community of volunteers has built the largest encyclopedia in history, Wikipedia, which is used by more people every day than CNN.com or AOL.com. Thousands of individuals have chosen to upload photos to Flickr.com under free licenses. But - just a minute. What exactly is a "free license"?

In the free software world, the two primary definitions - the Free Software Definition and the Open Source Definition - are both fairly clear about what uses must be allowed. Free software can be freely copied, modified, modified and copied, sold, taken apart and put back together. However, no similar standard exists in the sphere of free content and free expressions.

We believe that the highest standard of freedom should be sought for as many works as possible. And we seek to define this standard of freedom clearly. We call this definition the "Free Content and Expression Definition", and we call works which are covered by this definition "free content" or "free expressions". Neither these names nor the text of the definition itself are final yet. In the spirit of free and open collaboration, we invite your feedback and changes. The definition is published in a wiki. You can find it at: http://freedomdefined.org/ or http://freecontentdefinition.org/ Please use the URL http://freedomdefined.org/static/ (including the trailing slash) when submitting this link to high-traffic websites.

There is a stable and an unstable version of the definition. The stable version is protected, while the unstable one may be edited by anyone. Be bold and make changes to the unstable version, or make suggestions on the discussion page. Over time, we hope to reach a consensus. Four moderators will be assisting this process:

- Erik Möller - co-initiator of the definition. Free software developer, author and long time Wikimedian, where he initiated two projects: Wikinews and the Wikimedia Commons.
- Benjamin Mako Hill - co-initiator of the definition. Debian hacker and author of the Debian GNU/Linux 3.1 Bible, board member of Software in the Public Interest, Software Freedom International, and the Ubuntu Foundation.
- Mia Garlick. General Counsel at Creative Commons, and an expert on IP law. Creative Commons is, of course, the project which offers many easy-to-use licenses to authors and artists, some of which are free content licenses and some of which are not.
- Angela Beesley. One of the two elected trustees of the Wikimedia Foundation. Co-founder and Vice President of Wikia, Inc.

None of the moderators is acting here in an official capacity related to their affiliations. Please treat their comments as personal opinion unless otherwise noted.
The Creative Commons project has welcomed the effort to clearly classify existing groups of licenses, and will work to supplement this definition with one which covers a larger class of licenses and works.

In addition to changes to the definition itself, we invite you to submit logos that can be attached to works or licenses which are free under this definition: http://freedomdefined.org/Logo_contest

One note on the choice of name. Not all people will be happy to label their works "content", as it is also a term that is heavily used in commerce. This is why the initiators of the definition compromised on the name "Free Content and Expression Definition" for the definition itself. We are suggesting "Free Expression" as an alternative term that may lend itself particularly to usage in the context of artistic works. However, we remain open on discussing the issue of naming, and invite your feedback in this regard.

We encourage you to join the open editing phase, to take part in the logo contest, or to provide feedback. We aim to release a 1.0 version of this definition fairly soon. Please forward this announcement to other relevant message boards and mailing lists.

Thanks for your time,
Erik Möller and Benjamin Mako Hill

Years ago, I fell in love with the DSL, the Design Science License. It was a copyleft type of license that has since had its plug pulled. See the digital dust at DSL.CLOH.Org. Then came the Creative Commons licenses. I've been tending to just put my stuff into the public domain. Perhaps this effort will bring new energy and clarity -- as well as hope.
Everything You Needed to Know About the Internet in May 1994
A snapshot of a revolution, just before it really took off.
By Harry McCracken (@harrymccracken) | Sept. 29, 2013

Back in 1994, the Internet was the next big thing in technology — hot enough that TIME did a cover story on it, but so unfamiliar that we had to begin by explaining what it was ("the world's largest computer network and the nearest thing to a working prototype of the information superhighway"). And in May of that year, computer-book publisher Ziff-Davis Press released Mark Butler's How to Use the Internet. I don't remember whether I saw the tome at the time, but I picked up a copy for a buck at a flea market this weekend and have been transfixed by it.

Among the things the book covers:

E-mail: "Never forget that electronic mail is like a postcard. Many people can read it easily without your ever knowing it. In other words, do not say anything in an e-mail message which you would not say in public."

Finding people to communicate with: "… telephone a good friend who has electronic mail and exchange e-mail addresses with him or her."

Using UNIX: "UNIX was developed before the use of Windows or pointing and clicking with a mouse … although there are lots of commands that you can use in UNIX, you actually need to know only a few to be able to arrange your storage space and use the Internet."

Word processing: "Initially, you may make mistakes because you think you are in Command mode when you're really in Insert mode, or vice versa."

Joining mailing lists: "Although it is polite to say 'please' and 'thank you' to a human, do not include these words in the messages you send to a listserv. They may confuse the machine."

Newsgroups: "Remember, a news reader is a program that enables you to read your news."

Online etiquette: "Flaming is generally frowned upon because it generates lots of articles that very few people want to read and wastes Usenet resources."

"Surfing" the Internet: "Surfing the Internet is a lot like channel surfing on your cable television. You have no idea what is on or even what you want to watch."

Searching the Internet: "If a particular search yields a null result set, check carefully for typing errors in your search text. The computer will not correct your spelling, and transposed letters can be difficult to spot."

Hey, wait a minute — does How to Use the Internet cover Tim Berners-Lee's invention, the Web, which had been around for almost three years by the time it was published? Yup, it does, but the 146-page book doesn't get around to the World Wide Web — which it never simply calls "the Web" — until page 118, and then devotes only four pages to it, positioning it as an alternative to a then popular service called Gopher:

What Is the World Wide Web? Menus are not the only way to browse the Internet. The World Wide Web offers a competing approach. The World Wide Web doesn't require you to learn a lot of commands. You simply read the text provided and select the items you wish to jump to for viewing. You can follow many different "trails" of information in this way, much as you might skip from one word to the next while browsing through a thesaurus. The ease of use makes the World Wide Web a favorite means of window-shopping for neat resources on the Internet.

Version 1.0 of the first real graphical browser, Marc Andreessen and Eric Bina's Mosaic, appeared in November 1993.
How to Use the Internet mentions it only in passing, describing it as "a multimedia program based on the World Wide Web; it allows you to hear sounds and see pictures in addition to text." It devotes far more space to Lynx, a text-only browser that you navigated from the keyboard rather than with a mouse. By the time I first tried the Web in October 1994 or thereabouts, Mosaic was a phenomenon and Lynx was already archaic. Still, by the standards of early 1994, when the book was published, the text-centric Web was already a hit. As it warns:

More and more people are using the Internet, and WWW is a very popular service. For this reason, you may have to wait a long time to receive a document, or, in some cases, you may not even be able to make a connection.

The book's original owner, whoever he or she may have been, was keenly interested in this whole Internet thing. When I opened it, I found a clipping on universities and other local institutions that offered lessons in going online. And a Post-it note, which reminds me of the instructions on using Windows 95 that I found in a different old computer book that I bought last year, was affixed to the inside front cover.

In the spring of 1994, How to Use the Internet was probably pretty successful at helping people figure out a newfangled and arcane means of communications. Things progressed so rapidly that it was soon obsolete. But in 2013, it's useful once again as a reminder of how much the Web has changed the world, and how recently it came to be.

GeekMom: Brings back a lot of memories... like my first IBM Clone

DanB21: I actually tech reviewed this beauty way back when, as a wee lad. adding to ehurtley's note, I remember the author (great/smart feller) demo'ing Mosaic for me at his house while the book was being written. He totally understood the big wow of it, and what it was going to change. It just wasn't practical to recommend it yet for consumers because it was too slow on whatever ridiculous baud we had for home modems back then...

RichaGupta: something about internet, we don't know.. http://t.co/AlGUsHsv6T

markburgess: For an even further step back, check out Harley Hahn's "The internet Complete Reference". The book copyright date inside is 1994, but my copy is signed by Harley on October 23, 1993... there are 15 pages on the Web and he references a list of browsers at CERN... where the Web part of the Internet was born in April 1993.

cronocloudauron: Lots of books out there like this in the mid 90's. As was said, they had to cover shell accounts because PPP wasn't common until later. Depending on where you were, you might not have had a local access number till the late 90's. AOL didn't have a local number until after the local cable company began offering broadband.

AskMisterBunny: James Gleick and some people ran a service called Pipeline in NYC in the early 90s that used some weird SLIP emulation called PinkSlip. It was very buggy but the only game in midtown if you wanted the net at home.

ehurtley: Why didn't they cover Mosaic? Because in May 1994, dial-up PPP or SLIP was still *VERY* uncommon. It was far more common to have dial-up access to a UNIX shell account (which is why UNIX shell access is covered in a book about the Internet.) It wasn't until late 1994 that Portland, OR, a fairly tech-savvy city, got its first commercial dial-up SLIP/PPP provider. (How do I know?
Because a friend and I are the ones who convinced a dial-up UNIX shell access company to offer SLIP.) Yes, many people had direct connections before then, either through work or college. But this book wasn't aimed at them - it was aimed at home users.

I wrote a book for Random House in 1996 called "The Book Lover's Guide to the Internet." I spent the first half of the book explaining how the net worked and how to access it through AOL, CompuServe, Genie, Prodigy, et al. I think I still have a press account on AOL, for what that's worth. Somewhere I even have a pc with Mosaic on it. I did an author appearance at a B&N in NYC in '97 that was covered by C-SPAN. First question from the audience was "Isn't it true that the government is watching everything you do online?" I think I answered, "Yeah, probably."
计算机
2015-48/1917/en_head.json.gz/7400
Google's WebM (VP8) allegedly infringes the rights of at least 12 patent holders Google's attempts to promote "royalty-free" open source technologies just can't succeed in a world in which software is patentable -- a circumstance that Google increasingly realizes and complains about. No one can safely claim anymore at this stage that Android is a "free" mobile operating system without making a fool of himself, given that approximately 50 patent infringement lawsuits surround Android, an initial determination by an ITC judge just found Android to infringe two Apple patents (with many more still being asserted in other lawsuits), and ever more Android device makers recognize a need to take royalty-bearing licenses from Microsoft and other patent holders. Now Google's WebM codec project is apparently bound for a similar free-in-name-only fate as Android.As a result, WebM seems unfit for adoption as part of a W3C standard, given the W3C's strict policy that its standards must be either patent-free or at least royalty-free.In February I reported on MPEG LA's call for submissions of patents deemed essential to the VP8 video codec, a key element of Google's WebM initiative. I had already expressed doubts about Google's claims of WebM/VP8 being unencumbered by third-party patents shortly after WebM was announced more than a year ago. The commercial issue here is that Google's claims of WebM being "royalty-free" would be reduced to absurdity the moment that any patent holder rightfully starts to collect royalties on it.I just became aware of a new streamingmedia.com interview with MPEG LA. MPEG LA serves as a one-stop shop for licenses to AVC/H.264 and other multimedia codecs; streamingmedia.com is the website of Streaming Media magazine. In that interview, MPEG LA stated affirmatively that there have been submissions relating to the February call, and disclosed, at a high level, a preliminary result of the vetting process that commenced subsequently to the submissions period:Thus far, 12 parties have been found to have patents essential to the VP8 standard.12 parties -- that's really a high number, and it could even increase in the future.For now, MPEG LA doesn't want to name those companies. Chances are that there is an overlap between those 12 companies and the ones that contributed to MPEG LA's AVC/H.264 pool. I sent MPEG LA an email to inquire about this, but the only answer I received was that "confidentiality precludes [MPEG LA] from disclosing the identity of the owners".Whatever the names of those companies may be, it's obvious that they wouldn't have submitted patents to MPEG LA if they weren't interested -- at least in principle and always subject to agreement on the particular terms -- in collecting royalties on WebM. While the Moving Picture Expert Group (MPEG) is a standardization body that also has plans for a (truly) royalty-free codec, MPEG LA is independent from MPEG and in the licensing business. Even MPEG LA offers freebies. For example, it doesn't charge for the use of AVC/H.264 for free Internet video. But that's fundamentally different from declaring a codec royalty-free without any field-of-use restrictions.The WebM Community Cross-License intiative can't solve WebM's patent problemI'm sure that none of those 12 companies is a member of the Google-led WebM Community Cross-License initiative. The companies behind the WebM CCL are Google partners who have committed not to assert their patents (should they have any that read on WebM) against that codec. 
The significance of that initiative was overestimated by some people. It's just a non-aggression pact. Those companies didn't commit to launch retaliatory strikes against patent holders who may bring assertions against WebM. Also, there's a notable absence: Motorola is a top three Android device maker and should be an obvious partner for Google but apparently reserves the right to sue WebM adopters. A M
计算机
2015-48/1917/en_head.json.gz/7876
Midway Fights Next-Gen Snags Category: Console News (next-gen.biz) By: Starlight Tags: ps3,xbox 360,pc,gaming,news,industry October 31, 2007 // 12:25 am - As Midway aims to return to profitability, snags with Unreal Engine 3, PlayStation 3 and stiff competition haven't been making things easy. Midway hasn't made an annual profit since 1999, but analysts have noted that the Chicago-based game publisher does have potential to make money, with games like Stranglehold, Unreal Tournament 3 and Blacksite: Area 51 (and of course, iterations of the undying Mortal Kombat franchise). But there have been hurdles in front of that comeback, some of which are related to delays of PS3 versions of next-gen games and some that are related to Midway's company-wide adoption of a modified version of Epic's Unreal Engine 3, not to mention stiff competition in the action genre. Most recently, there's the Blacksite: Area 51 delay. While the release dates of the Xbox 360 and PC versions of the alien first-person shooter have slipped by just a week to November 12, the PS3 version of the Unreal Engine 3 powered title has seemingly run into some more serious issues. While not offering an explanation for the delay, a Midway spokesman confirmed to Next-Gen that the title would see a release after the other versions of the game, at some point "during the holiday season." Whether the delay is specifically related to the UE3 engine or not is currently unclear, but what is evident is that this delay represents the latest thorn in the side of Midway's next-gen strategy. Having invested heavily in technology and staff to boost its next-gen drive, all three PS3 versions of its 2007 AAA multi-platform titles have been subject to delays. Speaking exclusively to Next-Gen earlier this month, Steve Allison, Midway's chief marketing officer, noted that "Blacksite is completely on track for its targeted ship date [of November 5]," something that with hindsight sounded like positive thinking. He also expressed disappointment about the numerous delays to the PS3 version of the publisher's debut next-gen title, Stranglehold, which finally released Tuesday. "In a perfect world I would love to have the different versions shipping same day and we will solve that problem in my understanding here in the future. It affects us but it doesn't destroy us," he said. Allison went on to say that despite Epic's best efforts to guarantee a 2007 release for Unreal Tournament 3 on PlayStation 3, the highly anticipated shooter was unlikely to hit consoles until 2008, representing a blow to, amongst other parties, Sony, who had billed the title as a 2007 PlayStation 3 exclusive. "You talk to Epic they'll tell you they're still working for this year and they really are working hard, but in all likelihood, because of the way titles get shipped in Europe, it will not be able to come out this year on consoles, and you don't want to ship them split in the territories if you can avoid it. So its PC this year, that's all go... The PS3 version is essentially done, it's just got lots of cleanup console stuff to do that's going to take them a few weeks." However, Allison is by no means downbeat about the future despite early teething problems following a heavy investment in next-gen that's financed new technology, staff and a robust marketing campaign for Stranglehold. "I think we're super enthusiastic about our titles and some of the games which have been hinted at but not shown yet.
The technology platform is tough because when you tell the whole company you're going to work on one code base core, and then you're going to sort of alter it from game genre to genre, that's a ton of work on the front end, so to some degrees that work is inefficient fiscally. "But for the future we need to be efficient, so the thesis is that once those sort of genre specific versions of Unreal Engine 3 are completed and have shipped a game, the subsequent games that have similar features or the sequels to those games will be much, much cheaper to make with no drop in quality, and we still believe in that. We see now with Stranglehold being completed to ship and that code being packaged off to our other titles there's been rapid, rapid progress on the games for '08, '09 and even '010, more so than we expected, which makes us believe that our thesis is correct. It's just been tough to make that investment. On the front end of it it's very, very painful, but we still believe it's going to be completely worth it." Despite the release of Halo 3 a month ago, and the impending release of big hitters like Call of Duty 4, Assassin's Creed and Mass Effect, Allison feels this holiday will be business as usual at Midway. "This is just one of those Christmas' that happens every couple of years. A few years ago it was Halo 2 and Grand Theft Auto 3 and Metal Gear I think. They come every two to three years. Next Christmas everyone will say it's less competitive but you know, there'll probably be two times as many titles coming out so it'll be just as competitive, just in a different way."
计算机
2015-48/1917/en_head.json.gz/8261
Friden Flexowriter

Friden Flexowriter used as a console typewriter for the LGP-30 computer on display at the Computer History Museum.

Model 1 SPD (Systems Programatic Double-case) equipped for edge-punched cards; most Flexowriters had paper-tape readers and punches.

The Friden Flexowriter was a teleprinter, a heavy duty electric typewriter capable of being driven not only by a human typing, but also automatically by several methods, including direct attachment to a computer and by use of paper tape. Elements of the design date to the 1920s, and variants of the machine were produced until the early 1970s; the machines found a variety of uses during the evolution of office equipment in the 20th century, including being among the first electric typewriters, computer input and output devices, forerunners of modern word processing, and also having roles in the machine tool and printing industries.

Origins and early history

The Flexowriter can trace its roots to some of the earliest electric typewriters. In 1925, the Remington Typewriter Company wanted to expand their offerings to include electric typewriters. Having little expertise or manufacturing ability with electrical appliances, they partnered with Northeast Electric Company of Rochester and made a production run of 2500 electric typewriters. When the time came to make more units, Remington was suffering a management vacuum and could not complete contract negotiations, so Northeast began work on their own electric typewriter. In 1929, they started selling the Electromatic. In 1931, Northeast was bought by Delco. Delco had no interest in a typewriter product line, so they spun the product off as a separate company called Electromatic. Around this time, Electromatic built a prototype automatic typewriter. This device used a wide roll of paper, similar to a player piano roll. For each key on the typewriter, there was a column on the roll of paper. If the key was to be pressed, then a hole was punched in the column for that key. In 1932, a code for the paper tape used to drive Linotype and other typesetting machines was standardized. This allowed use of a tape only five to seven holes wide to drive automatic typewriters, teleprinters and similar equipment. In 1933, IBM wanted to enter the electric typewriter market, and purchased the Electromatic Corporation, renaming the typewriter the IBM Model 01. Versions capable of taking advantage of the paper tape standards were produced, essentially completing the basis of the Flexowriter design. By the late 1930s, IBM had a nearly complete monopoly on unit record equipment and related punched card machinery, and antitrust issues became a concern as product lines expanded into paper tape and automatic typewriters. As a result, IBM sold the product line and factory to the Commercial Controls Corporation (CCC) of Rochester, New York, which also absorbed the National Postal Meter Corporation.
CCC was formed by several former IBM employees.

World War II

Around the time of World War II, CCC developed a proportional spacing model of the Flexowriter known as The Presidential (or sometimes the President). The model name was derived from the fact that these units were used to generate the White House letters informing families of the deaths of service personnel in the war. CCC also manufactured other complex mechanical devices for the war effort, including M1 carbines. In 1944, the pioneering Harvard Mark I computer was constructed, using an Electromatic for output.

Postwar

After the war, especially in the early 1950s as the computer industry started in earnest, the number of applications for Flexowriters exploded, covering territory in commercial printing, machine tools, computers, and many forms of office automation. This versatility was helped by Friden's willingness to engineer and build many different configurations. In the late 1950s, CCC was purchased by Friden, a maker of electromechanical calculators, and it
计算机
2015-48/1917/en_head.json.gz/8363
Learn about Modeling Vacuum Systems in COMSOL
Alexandra Foley | November 7, 2013

Until recently, simulation had not been widely used by vacuum system designers because of an absence of commercial simulation tools. Last October, my colleague James Ransley held a webinar about how to model vacuum systems using COMSOL Multiphysics. The webinar was a great success, and it inspired us to produce a dedicated product for modeling vacuum applications: the Molecular Flow Module (new with version 4.3b). This year, on November 21st, James will be giving a webinar explaining the new features for modeling vacuum systems that have been added to the product as part of this module. The webinar will demonstrate the wide range of practical problems that can now be addressed in the field of vacuum technology using COMSOL and will conclude with a demonstration of the simulation of an ion-implant vacuum system.

Modeling an Ion-Implant Vacuum System

An ion implanter is often used in the semiconductor industry for the implantation of dopants into wafers during manufacturing. This process works by accelerating ions toward a solid wafer using an electric field. When the ions are at a sufficiently high energy, they travel some distance into the wafer before coming to rest. When the wafer is subsequently annealed, the ions dope the material, changing its properties and enabling a wide range of semiconductor devices to be fabricated. When designing such a system, it is important that only ions of the correct charge state-to-mass ratio are accelerated toward the wafer. This is achieved through the use of a separation magnet that bends the beam of ions, redirecting their path so that only ions with a desired charge state reach the wafer. The Molecular Flow in an Ion-Implant Vacuum System model, available in the Model Gallery, demonstrates how this device can be designed using the Molecular Flow Module.

In addition to controlling which ions reach the wafer, it is also necessary to control which portions of the wafer are implanted. This can be accomplished by covering regions of the wafer with an organic photoresist that shields these areas from the ions, creating a desired pattern on the wafer's surface. However, when the photoresist is struck by the ions, it causes the photoresist to emit gas molecules known as outgassing molecules, which can interact with the ion beam in an undesired manner. When the fast-moving ions in the beam strike the molecules, they can produce unwanted secondary ion species, which can in turn be implanted into the wafer, adding undesirable impurities into the material. Therefore, it is essential that the number density of the outgassing molecules from the wafer is low within the beam line. In the ion-implant vacuum system model, the number density of outgassing H2 molecules along the beam path is used to evaluate the quality of the system design. The model shows how tilting the wafer with respect to the beam path can improve the quality of the implant.

Geometry of the ion-implant vacuum system design. The red circle in the ion-implant cavity represents the wafer and the particle trajectory is also shown.

In the model, the wafer is positioned within the main chamber of the vacuum system, and ions are accelerated toward the wafer down the ion-beam path, which is indicated in the figure. It is assumed that 30 sccm of H2 outgasses from the wafer when the beam strikes it. The following results are obtained: The number density as a function of position along the beam line.
The average number density along the beam line as a function of the wafer normal angle to the incoming beam.

The graphs above show that the number density of the outgassing species increases along the beam path toward the wafer. When the wafer is parallel to the beam, more flux enters the beam line because the line of sight is then parallel to the wafer normal. By altering the angle of the wafer relative to the ion beam, the average number density of the system and the average flux in the corrector can be decreased by 10% over the entire length of the beam.

Upcoming Webinar: Vacuum System Simulations

Interested in learning more about the capabilities of COMSOL Multiphysics for modeling vacuum systems? Attend the upcoming webinar on “Vacuum Systems Simulations” on November 21st. James Ransley, Product Manager at COMSOL, Inc., will be explaining how the Molecular Flow Module can be used for the accurate simulation of vacuum systems, including the modeling of gas flows in molecular flow, transitional flow, and slip flow regimes. Can’t make the live event on November 21st? Check back here or on the registration page to watch the archived webinar.

Model Download

Download the Molecular Flow in an Ion-Implant Vacuum System model that will be used in a live demonstration during the webinar.
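For readers who prefer to analyze the results outside the COMSOL GUI, the snippet below is a minimal post-processing sketch of the quantity plotted above: the average number density along the beam line for each wafer tilt angle. It assumes you have exported one CSV file per angle with two columns (position along the beam line and number density); the file names, units, and column layout are assumptions for illustration only, not part of the Molecular Flow Module itself.

```python
# Hypothetical post-processing sketch: average the exported number-density
# data along the beam line for each wafer tilt angle. Assumes each CSV has
# two columns (position along the beam line in m, number density in 1/m^3);
# file names and units are illustrative placeholders.
import csv
import glob

def average_density(csv_path):
    """Trapezoidal line average of number density over the beam-line length."""
    positions, densities = [], []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            try:
                x, n = float(row[0]), float(row[1])
            except (ValueError, IndexError):
                continue  # skip header or comment lines
            positions.append(x)
            densities.append(n)
    if len(positions) < 2:
        raise ValueError("not enough data points in " + csv_path)
    # Trapezoidal rule divided by the total length gives the line average.
    total = 0.0
    for i in range(1, len(positions)):
        dx = positions[i] - positions[i - 1]
        total += 0.5 * (densities[i] + densities[i - 1]) * dx
    return total / (positions[-1] - positions[0])

# One exported file per wafer angle, e.g. density_angle_00.csv ... density_angle_90.csv
for path in sorted(glob.glob("density_angle_*.csv")):
    print(path, "average number density: %.3e 1/m^3" % average_density(path))
```

Plotting these averages against the tilt angle reproduces the trend discussed above, where tilting the wafer lowers the average number density seen by the beam.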
计算机
2015-48/1917/en_head.json.gz/8462
Microsoft Office Web Applications Take Aim at Google Docs Jul 13, 2009 1:51 PM EST By Brian Heater The big news coming out of Redmond this morning is the launch of Microsoft Office 2010 as a technical preview. Microsoft's office suite, of course, is still the default for most users. And while the announcement is the direct shot at Google that many analysts anticipated in light of that company's recent announcement of the Chrome OS, the introduction of Office 2010 does include some features targeted at an existing Google property: Docs and Spreadsheets. Later this summer, Microsoft will be rolling out a technical preview of Office Web Applications, an online counterpart to its traditionally desktop-based office suite. The online suite will be available in three configurations. Businesses will be able to run it via SharePoint Server as part of the Office 2010 license or via Microsoft Online Services for a subscription. These two options offer company-wide collaboration on documents saved to a central server. The third option is for consumers via Windows Live. This version utilizes Skydrive for online hosting, and is really the most directly targeted at Google Docs--and, like Google Docs, is available for free for consumer use. According to a Microsoft spokesperson, the online version is quite feature-rich--even more so than its Google counterpart. Asked whether he thought the availability of a free online version of Office might cannibalize sales of its desktop counterpart, the spokesman said "possibly," but added that the desktop version of the app is still more robust and therefore more suited to larger word processing tasks, like, say, term papers. The Ribbon, for example, will only offer two tabs in its online version, versus the five offered in its desktop iteration. Microsoft's chief concern with the creation of Office Web Applications is maintaining a unified experience cross-platform, from the desktop to the Web and even to mobile devices. Users are essentially trained in the layout on one platform and ideally won't lose anything in terms of feature placement when they switch to another. No word on when a final version of the suite will be available to the public.
计算机
2015-48/1917/en_head.json.gz/8476
Scratch! peripheral officially unveiled, with first image

The DJ peripheral for this fall's Scratch! title has finally been unveiled, with … While there was an older image of what the peripheral for Scratch! might look like, the company has set the record straight with an official look at the hardware that will ship with the game. Called the Scratch Deck, this is how you'll be living your DJ fantasies. Everyone should take a nice look and compose their thoughts. I'll give you a moment. Done? Okay. There is only so much you can tell from an image, but unlike the DJ Hero peripheral, the buttons occupy a large area between the "record" and the cross-fade controls. It's a nice-looking piece of equipment, and we've already been promised a hands-on with both the game and the hardware at E3, so I'll be able to give you a lot more information next week. Here's what the marketing has to say about the peripheral. SCRATCH DECK combines two essential elements of the DJ and hip-hop experience—a free-spinning, touch sensitive turntable with a crossfader and 5 Akai Pro MPC-style drum pads. The turntable allows players to add their own style and manipulate the songs in real time, while the MPC-style drum pads give players the opportunity to perform and customize tracks by triggering samples using the very same pads that are the cornerstone of professional hip-hop beat production. Samples can either be pre-loaded into the game using 60 unique battle records that will ship with the software, or players can record and upload their own samples using a compatible USB microphone. Pretty cool stuff. It's going to be a nice change to get away from all those plastic guitars and drums and learn another fake instrument, and I mean that sincerely. What do you think of the peripheral? What's your interest level in this game? Sound off in the comments.
计算机
2015-48/1917/en_head.json.gz/8485
About Acorn Computers and ARM Processors Last update by Admin on 2010-04-28

There are more ARM processors in the world than of any other type of processor (including Intel-compatible ones). Your cell phone most probably has an ARM CPU. Did you know that “ARM” originally stood for “Acorn RISC machine” and that the processors were designed for Acorn's desktop operating system? For the latest RISC OS news, try the Icon Bar, one of the Acorn-related newsgroups (for example comp.sys.acorn.advocacy), or the websites of RISC OS Ltd and Advanced RISC Machines (ARM). Disclaimer: All of this must be regarded as my personal view. Other people may have different thoughts about Acorn and their products! If you think that some of the information on this page is wrong, I'd be glad if you told me. This information is not complete - the newer history of RISC OS is missing, as I no longer use RISC OS today.

Acorn? Never heard about them... and why should I bother?

I am perfectly aware that your computer and your OS are far better - but if you learn a little about Acorn, you might find that even though the system really can be considered exotic, it is well-planned with consistent design and some clever details which were not present in any other OS at the time it was released. The Acorn platform is probably one of the smallest computer platforms, consisting of some estimated 500,000 machines (excluding older 8-bit computers). A majority of them were sold in Great Britain, as Acorn Group were situated at Cambridge, and many were bought by British schools. However, there are also a lot of private users of Acorn computers, mostly - in order of importance - in Great Britain & (Northern) Ireland, Germany, France, Australia & New Zealand, the Netherlands and Italy. There are practically no Acorns to be found in the USA.

Acorn founders Hermann Hauser and Chris Curry

Being a small company in a market of industry giants who have much greater resources (for development, cheap production and for marketing), Acorn were always exposed to strong competition. The company was founded in 1978 by Hermann Hauser and Chris Curry. In the 1980s, practically all British schools were equipped with their "BBC" computers (and Acorn machines were also quite popular as home computers), but when Wintel PCs began to gain importance, more and more schools switched over to that platform. Still, the educational sector remained an important market for a long time. See the articles on stairwaytohell.com and Robert McMordie's page for more information on the early days of Acorn. The company went through many a restructuring. Most importantly, separate companies were founded for supporting the UK education market (Xemplar was owned in part by Acorn, in part by Apple), developing RISC OS (Acorn RISC Technologies; ART, although this didn't exist as a separate company for very long), working on the 32-bit processor architecture (Advanced RISC Machines; ARM) and on Acorn's NetComputer models (Acorn Network Computing; ANC, again, not for a long time). As time went by, Acorn were able to sign contracts with various major companies, with a positive overall effect on their share price, e.g. Apple (UK education market), Digital Semiconductors (StrongARM processor) and Oracle (NetComputer). Since about 1997, the company's focus slowly changed.
The desktop market of RISC OS machines was large enough to sustain it, and a new desktop computer was being designed, but other markets looked more promising in the long run. Building on the experiences made when designing the Acorn NetComputer, Acorn concentrated on making their technology available for licensing to third parties, for things like interactive/digital TV and Multimedia Point of Sale Terminals. Stuart Halliday's news posting took the Acorn community by complete surprise (local copy, Google Groups) Finally, on "Black Thursday" 17th September 1998, in a completely unexpected move Acorn announced that all work on desktop computers had ceased, which included the Phoebe workstation which had been scheduled for November, and that development would focus completely on the digital TV market from now on. In an attempt to get rid of the "educational" image, even the company name was changed to Element 14 early in 1999. (Element 14 is silicon.) Later, the company was bought by Pace Computers Ltd. Pace were only interested in Acorn's digital TV expertise - no further development for desktop systems was expected from their side. Later, development of RISC OS was taken over by RISCOS Ltd. For a very long time, Acorn had remained the only European company designing and manufacturing complete desktop computer systems, which at the time were considered a true alternative to the more popular systems by many (e.g. the Times and me - I think a few others too). Hmmm... I bet because the computers were produced in such low numbers, they were quite expensive! That depends on your point of view. Looking only at the numbers you are right: Compared to Intel-based computers, you payed more for the same amount of processing power. Additionally, you did not get the same support because of a much looser net of dealers. However, all this is outweighed by the good architecture design and the great OS - you don't save money, but you save yourself a lot of hassle and annoyance. Acorn produced computers for a long time. Their first "BBC" models were based on the 8-bit MOS 6502 processor (very similar to the MOS 6510 found in the Commodore 64), but later models use 32 bit ARM processors. Archimedes series Acorn A3010 Acorn Archimedes computers were the first of Acorn's computers to use a 32-bit architecture. The later models featured graphics resolutions of up to 800 × 600 with up to 256 colours, 8-bit logarithmic stereo sound with eight channels, ARM3 processors (up to 25 MHz, I think) and up to 4 MB of RAM (16 MB with one model). Many of them could be found for a long time in British schools. The last OS version supporting them was RISC OS 3.11. The A3010 was the first Acorn computer I owned. It ran at 12 MHz using an ARM250 and, unlike its brother A3020, didn't come with a built-in harddisc. Among others, there were also the A4000 and A5000, which looked more like "ordinary" PCs, i.e. with a separate keyboard, and the A4 (a laptop). PocketBook series PocketBook II The PocketBook notebooks, of which the PocketBook II is an example, are special in that they do not use ARM processors. Instead, they are 100% compatible with Psion notebooks, e.g. the Psion 3c. However, note that the later Psion 5 series of palmtops uses an ARM7100 processor. RiscPC series (and A7000) The first generation of RiscPCs was launched back in 1994. Afterwards, there were numerous, but not really significant improvements to the design. 
The RiscPC has two standard SIMM slots and one non-standard slot for 1 or 2 MB of VRAM (can also run without VRAM), an IDE bus, a VIDC20 chip combining video and sound output (up to 135 MHz pixel rate, 24bpp colours and CD quality sound) and an IOMD chip that provides high-speed buffered serial/parallel input and output as well as memory-mapping. The machine can be expanded almost infinitely: If you need to fit more than the two expansion cards the base model can hold, you can add a second, third and fourth slice, each of which can contain two more expansion cards and provides another 5¼" and another 3½" bay. (See the pictures below.) In practice, however, few people used more than two slices, which means the backplanes with more than 4 slots are extremely hard to come by - if at all. For the early RiscPC models, you usually also had to upgrade to a more powerful PSU when adding the second slice. Probably the most remarkable feature of the RiscPC is its two processor slots: When upgrading your processor, you just need to replace a small processor card instead of the whole motherboard as with previous Acorn machines. This way, processor upgrades were quite cheap. (StrongARM upgrade: fivefold performance for £99!) The second slot may be used for another processor. Theoretically, things like DSPs and MPEG decoders could be connected, but the only available cards for the second slot are x86 Intel processors. Running Windows (but not OS/2) and Acorn's own RISC OS on the same machine at the same time, the processors share all the computer's resources, like memory, discs and I/O ports. (By the way: It was really nice to see Windows run inside one of the windows of the RISC OS desktop, at a time when virtual machines on personal computers were unheard of.) Acorn also introduced the A7000 (followed later by the A7000+), a cut-down version of the RiscPC with only one processor card slot, only one SIMM slot, VRAM soldered to the motherboard and a lower price. There are some improvements over the RiscPCs, most notably the integrated Floating Point Accelerator and support for EDO RAM. The downside is, in my opinion, the dull design of the case. The first RiscPCs were sold with RISC OS 3.5, the last Acorn-supported versions are 4.0x. NetComputer Acorn CoNCord network computer The concept of the NetComputer was to offer low-cost PCs to people who had not previously owned a computer. NCs were envisioned as a kind of thin client which relied on Internet connectivity for much of its functionality. Some models were expected to connect to a TV to avoid the cost for a monitor. Acorn built the reference NC model for Oracle, one of the companies driving the NC initiative. After Acorn introduced their NC, numerous other companies also designed their own NCs, but, remarkably, a fair number of these used ARM instead of Intel processors. The Acorn NC models, just like the NCs of other companies, were not particularly successful. In the mid-1990s, the concept may just have been ahead of its time. These days, netbooks (small, low-cost notebooks) have filled the niche that NCs were targeting. The CoNCord shown on this picture is the fastest NC and also the one with the most unusual design. (Rumour has it that this CoNCord was only a mock-up, but other models were real.) Prototype machines which were never produced Acorn developed machines that would only have been produced if someone had ordered large quantities of a model - this did not happen.
They also designed a high-end desktop machine, then cancelled the whole project... Stork design study The Stork sub-notebook comes either with a monochrome LCD screen (like on this picture) or with a TFT screen, but can also connect to standard monitors. A docking station allows you to use it conveniently on your desktop. The Stork contains a harddisc, but floppy and CD ROM drives must be connected externally. Some nice details are its built-in trackball, the Freeze Mode which preserves memory contents for as long as five days with full batteries (this was before APM or APCI...) and the support for PCMCIA expansion cards. The computer weighs only 1.8 kg. This notebook almost made it to production. Allegedly, an American company had already ordered a large number of machines, but withdrew later on. NewsPad design study The NewsPad is the result of Acorn's taking part in the European Union OMI-NewsPAD project (OMI = Open Microprocessor Initiative). Basically, the NewsPad machines are designed to replace ordinary newspapers, but of course they can do a lot more than that. The specification is quite similar to that of the Stork (harddisc, docking station, Freeze Mode etc.), except the NewsPad has a touch-sensitive screen, no keyboard (you can connect one to the docking station) and support for a bi-directional infrared link and for video/sound digitizing. The NewsPad weighs 2 kg. One cannot help comparing this to much later devices like the Apple iPad. Again, it seems that the idea was ahead of its time. The workstation The RiscPC design has a few shortcomings: It doesn't support newer technologies like EDO RAM and E-IDE, and the internal IDE bus only allows you to connect two devices. Additionally, the whole architecture was designed for the ARM610 processor running at 40 MHz, so when 202 MHz StrongARMs became available, the low bus speed suddenly represented a bottleneck which reduced the speed of the processor significantly. These problems were addressed by Acorn in the design of a new desktop computer, Phoebe, which had the following features: One processor soldered to the motherboard, with the option to add another one on a daughter board. Higher bus speed of 66 MHz (128 MHz in PC terminology). The RiscPC bus is so slow at 16 MHz (32 MHz in PC terminology) that a 202 MHz StrongARM using the new bus will be nearly twice as fast straight away. Support for SDRAM, E-IDE, 230kbps serial connections, about 200 MHz video bandwidth, MIDI in/out and sound sampling, PCI, IRDA, and Wintel compatibility through a special PC card Very nice case (in my opinion), designed by the same company that also designed the Zip drive for Iomega. The colour of the case caused a quite heated debate on the Acorn newsgroups... As mentioned before, Acorn decided to abandon the whole project only two months before its completion, at a time when prototypes were already up and running, although not at full speed. Even worse, they decided to discontinue all support for the RISC OS desktop market. Subsequently, several companies producing software and hardware for Acorns set up RISC OS Ltd, a company whose goal it was to license RISC OS from Acorn/Element 14 and to continue with its development. With only so few machines made, surely there is very little software around! Not really! Because RISC OS had been around for quite a few years (since 1988), there were many programs for it. 
It is true that on the PC you could usually choose between 20 programs doing the same thing whereas there were only two or three for RISC OS, but the quality of the programs was generally higher. Looking at the commercial market, there were numerous companies (mostly in the UK) that developed for RISC OS. The software prices were about the same as on the PC market, and due to the much lower numbers of copies sold, the support was often excellent, with programmers available for contact over the Internet. Practically all major software (e.g. painting packages, word processors, spreadsheets) contained import/export filters to allow data exchange with PC programs. After Acorn abandoned the desktop market in late 1998, development of new commercial software mostly stopped. The games scene was not particularly lively, though some of the most popular PC games tended to find their way onto Acorn screens two to three years after their initial release on the Windows/Intel platform (See Acorn Arcade for much more information on games!). The demo scene was also rather small, but there really were a few good coders out there! An argument in favour of RISC OS is the large amount of Freeware and Shareware that is available for it. Apart from ports of Freeware programs written in C (like PGP, PovRay, TeX, GNU C compiler, InfoZip and RasMol) there are excellent free text editors, Internet applications (browser, newsreader etc.) and painting and drawing programs, to mention only a few. There are also good programs by Acorn: An image conversion program, a drawing program, a complete Internet stack, a video player and more. All in all, I was content with the available software. There was one caveat though: Unless you lived in the UK, you really needed Internet access if you always wanted to be up to date. (There were magazines and PD libraries, but most of them were in the UK.) Oh, such a small company will never have the resources to develop a decent OS for their machines! You just might be surprised if you gave it a try. At the time it was introduced, RISC OS was very competitive compared to its rival operating systems on other platforms. If you happen to stumble across an Acorn, just try it out! Actually, there are several operating systems for Acorn computers: RISC OS is the one designed by Acorn for their own computers. Additionally, OSs have been ported to the Acorn platform: ARMLinux, RiscBSD, and RISC iX by Acorn themselves. RISC OS Here is a quick overview of Acorn's own operating system. Many "features" may hardly seem worth mentioning from today's point of view, but remember that RISC OS 3 was released in 1991, one year before Windows 3.11, and even its predecessor RISC OS 2 from 1989 had many of these features! Single-user, co-operatively multitasking, but not multithreading (you can multithread within one task with the help of an extension module). Provides a desktop with window environment. An icon bar shows icons for filing systems and programs. The Task Manager module lists all tasks together with the memory they take up and allows you to alter the amount of memory for applications that let you. The OS is not loaded from disc, but comes in 4 MB of ROM. This saves you a lot of RAM, makes the machine more invulnerable against viruses, allows for machines without harddiscs (predestined for networks) and makes booting very fast - the minimum is about 3 seconds with a StrongARM processor!
Replacing old parts of the OS without copying all of it to RAM is also possible: The ROM is subdivided into 4k pages, each of which can be replaced by a page in RAM. Alternatively, you can also replace one of the over 100 modules making up the OS. RISC OS is only available on ROMs containing the British version. However, you can download the German version from Acorn's web site. This German RISC OS replaces all text inside RISC OS, but not the code - it only needs 350 kBytes of RAM to 'patch' 4 MBytes of ROM. As far as I know, RISC OS has only been translated into one other language apart from German, namely Welsh. Consistent look and feel across all applications. In part, this is due to the OS providing many routines to easily implement it this way, and in part to Acorn's efforts at setting up very useful rules about how a program should behave. (E.g. what names the mouse buttons have and what effect they should usually have - the whole Style Guide is 130 pages long.) The result is that for any new programs you get, you hardly need to peek inside the manual - it's all self-explanatory. Modules extending the OS (e.g. internet stack) can be loaded or removed any time, not just during booting. RISC OS has always supported what Bill Gates had the nerve to call "Plug & Play" - unlike with PCs of that era, it is never necessary to configure interrupts etc. before an extension card works. All filing systems (CD, HD, Floppy, ROM, RAM and soft-loaded ones) also install icons on the icon bar, which allow you to access them quickly. The RISC OS equivalent to Microsoft's Explorer is simply called the Filer. It opens a new window for each directory. Copying/moving files is achieved by dragging them from one directory to another. Similarly, to save a file from an application, you just have to drag the file icon to a directory window. Drag & Drop will also work between applications, e.g. you can write some text in a text editor and then directly 'save' it to a word processor window without saving to file or to some clipboard. Nice window design. The excellent Outline font manager anti-aliases fonts in real-time as it draws them to the screen. You can choose any font for your system font, and in contrast to older Windows versions, those fonts are very readable even on a low-resolution screen... A special mode even allows anti-aliasing to work with multicoloured backgrounds: Printing has been implemented in such a way that you do not need a new printer driver for each program you buy. Instead, programs print by making calls to the OS which will turn the graphics primitives either into PostScript or into bitmaps and send them to the parallel port, to a file or over a network. You only need one printer driver to allow all programs to print. You can change the screen resolution and number of colours at any time. A very fast BASIC interpreter (BBC BASIC) is supplied as part of RISC OS. Using it, you can create programs running in the desktop - you need not buy any expensive development software. The interpreter also contains an excellent ARM assembler. ("Yuk, BASIC?!" - Well, this flavour is fun to program!) There is another remarkable detail that I would like to mention: As of version 3.5, RISC OS has been supplied with an anti-virus program (also in ROM) which prevents viruses from spreading - thus, older viruses do not spread at all any longer on new computers! (By the way: This anti-virus program is possible because there exist relatively few viruses for Acorn computers - about 150.) 
On the other hand, there are also some flaws in RISC OS, most notably: No support for multithreading, although additional, free software allows this. Virtual memory is supported, but only with a commercial product from a third party, not as part of the OS. Memory protection is almost non-existent, programs can (and sometimes do) take down the whole machine. This is especially true for "modules", which are considered OS extensions and run in a privileged processor mode. In general, no major development of the OS has taken place for years. (Acorn did invest a lot of resources to make it use the capabilities of the RiscPC machines and then again to make it work with StrongARMs, but not much has changed for users.) It is sad that this mature operating system (Acorn claim that they have over 500 man years experience in developing for the ARM processor) is only known to relatively few people... Here are some screenshots of my StrongARM RiscPC running the RISC OS Wimp at 800 × 600 with 15bpp colour. By the way: The window border design can also be changed to whatever you like, except the defaults are nicer than most of the replacements I have seen. The RISC OS desktop, with a Filer window in the top left and the TaskManager window in the top right corner of the screen, and the free newsreader Messenger as well as a browser running. ARMLinux is also installed on the machine; the icons of its boot loader and the harddisc partition are on the pinboard. The DJ400 icon on the icon bar is the printer manager. The text editor Zap in HTML and C++ mode. The Internet stack is running and Acornet, a collection of free Internet programs, is loaded. The icon bar at the bottom is just filled with program symbols - if there are too many to fit, it starts to scroll sideways. As you can see from the Filer windows, under RISC OS dots are used to separate directory names, and if there are extensions, they are introduced with a slash - this can be a bit confusing if you are used to DOS/Unix filename conventions. The desktop with the commercial DTP program Impression Style in memory. Note that the FontManager enables all programs to use the high quality anti-aliased fonts - once you are used to this, working under Windows or X will inevitably make you think that something is missing... Additionally, a desktop solitaire game is loaded (ummmm).
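To make the filename convention mentioned above concrete, here is a small illustrative Python sketch of the mapping between Unix-style and RISC OS-style paths. It is a toy conversion under simplifying assumptions (relative paths only, no filing-system or disc prefixes such as ADFS::HardDisc4.$), not a faithful implementation of RISC OS path handling.

```python
# Illustrative sketch of the naming convention described above: RISC OS uses
# '.' as the directory separator and '/' where Unix filenames use '.', so the
# Unix path "docs/report.txt" roughly corresponds to RISC OS "docs.report/txt".
# Relative paths only; filing-system prefixes are deliberately ignored here.

def unix_to_riscos(path: str) -> str:
    parts = path.strip("/").split("/")
    # Swap the roles of '.' and '/' inside each path component.
    converted = [part.replace(".", "/") for part in parts]
    return ".".join(converted)

def riscos_to_unix(path: str) -> str:
    parts = path.split(".")
    converted = [part.replace("/", ".") for part in parts]
    return "/".join(converted)

if __name__ == "__main__":
    print(unix_to_riscos("docs/report.txt"))   # -> docs.report/txt
    print(riscos_to_unix("docs.report/txt"))   # -> docs/report.txt
```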
计算机
2015-48/1917/en_head.json.gz/8537
Ask Slashdot: Job Search Or More Education? from the can't-it-be-both dept. Matt Steelblade writes "I've been in love with computers since my early teens. I took out books from the library and just started messing around until I had learned QBasic, then Visual Basic 5, and how to take apart a computer. Fast forward 10 years. I'm a very recent college graduate with a BA in philosophy (because of seminary, which I recently left). I want to get into IT work, but am not sure where to start. I have about four years experience working at a grade/high school (about 350 computers) in which I did a lot of desktop maintenance and some work on their AD and website. At college (Loyola University Chicago) I tried to get my hands on whatever computer courses I could. I ended up taking a python course, a C# course, and data structures (with python). I received either perfect scores or higher in these courses. I feel comfortable in what I know about computers, and know all too well what I don't. I think my greatest strength is in troubleshooting. With that being said, do I need more schooling? If so, should I try for an associate degree (I have easy access to a Gateway technical college) or should I go for an undergraduate degree (I think my best bet there would be UW-Madison)? If not, should I try to get certified with CompTIA, or someone else? Or, would the best bet be to try to find a job or an internship?" WotC Releases Old Dungeons & Dragons Catalog As PDFs from the going-for-the-saving-throw dept. jjohn writes "Wizards of the Coasts, holders of the TSR catalog, have released rulebooks and modules for most editions of Dungeons and Dragons through a partnership with DriveThruRPG.com. The web site, dndclassics.com, may be a little overloaded right now. Most module PDFs are $4.99 USD." The article points out that these are all fresh scans of the old books. It's also worth noting that the decision to make these PDFs available reverses WotC's 2009 decision to stop all PDF sales because of piracy fears. The only reference to this in the article is a quote from the D&D publishing and licensing director: "We don't want them to go to torrent sites. Why not give them a legal route?" Ask Slashdot: Best Webcam To Augment Impaired Vision? from the tickle-me-elmo-via-usb dept. mynamestolen writes "In order to read paper-based books many visually impaired people want to attach a webcam to a computer and attach the computer to a TV. Some Electronic Magnifiers are purpose-built to provide a similar solution. Different organisations around the world (such as in the UK) have help pages. But I have not been able to find a guide to set up my own system. So I'm asking Slashdot readers how to go about it. What is the best camera to use if I want to hold the camera in my hand and point it at book or magazine? What parameters should I adjust, either in the software or on the camera? Depth of view, refresh rates, contrast, color balance and resolution might be key problems. My system is Linux and getting drivers for a good camera might also be a problem." Three Low-Tech Hacks for Phones and Tablets from the your-super-thin-phone-doubles-as-a-terrible-bookmark dept. Bennett Haselton writes "Here are three hacks that I adopted in the last few weeks, each of which solved a minor problem that I had lived with for so long that I no longer thought of it as a problem — until a solution came along, which was like a small weight off my shoulders. 
None of these hacks will help impress anyone with your technical prowess; I'm just putting them here because they made my life easier." Read on for the rest of Bennett's thoughts. Book Review: A Gift of Fire from the read-all-about-it dept. benrothke writes "In the 4th edition of A Gift of Fire: Social, Legal, and Ethical Issues for Computing Technology, author Sara Baase takes a broad look at the social, legal and ethical issues around technology and their implications. Baase notes that her primary goal in writing the book is for computer professionals to understand the implications of what they create and how it fits into society. The book is an interesting analysis of a broad set of topics. Combined with Baase's superb writing skills, the book is both an excellent reference and a fascinating read." Read below for the rest of Ben's review. O'Reilly Giving Away Open Government As Aaron Swartz Tribute jones_supa writes "The classic hacker book publisher O'Reilly is releasing their book Open Government for free as a tribute to Aaron Swartz. The book asks the question, in a world where web services can make real-time data accessible to anyone, how can the government leverage this openness to improve its operations and increase citizen participation and awareness? Through a collection of essays and case studies, leading visionaries and practitioners both inside and outside of government share their ideas on how to achieve and direct this emerging world of online collaboration, transparency, and participation. The files are posted on the O'Reilly Media GitHub account as PDF, Mobi, and EPUB files." Facebook Banter More Memorable Than Lines From Recent Books from the it's-complicated dept. sciencehabit writes "Scientists have found that, when it comes to mental recall, people are far more likely to remember the text of idle chitchat on social media platforms like Facebook than the carefully crafted sentences of books. The team gathered 200 Facebook posts from the accounts of undergraduate research assistants, such as 'Bc sometimes it makes me wonder' and 'The library is a place to study, not to talk on your phone.' They also randomly selected 200 sentences from recently published books, gathered from free text on Amazon.com. Sentences included, 'Underneath the mass of facial hair beamed a large smile,' and 'Even honor had its limits.' Facebook posts were one-and-a-half times as memorable as the book sentences (abstract). The researchers speculate that effortless chatter is better than well-crafted sentences at tapping into our minds' basic language capacities — because human brains evolved to prioritize and remember unfiltered information from social interaction." Turkey's Science Research Council Stops Publication of Evolution Books from the jesus-rode-tyrannosaurs dept. An anonymous reader writes "The Scientific and Technical Research Council of Turkey (TÜBITAK) has put a stop to the publication and sale of all books in its archives that support the theory of evolution, daily Radikal has reported. The books have long been listed as “out of stock” on TÜBITAK's website, but their further publication is now slated to be stopped permanently. Titles by Richard Dawkins, Alan Moorehead, Stephen Jay Gould, Richard Lewontin and James Watson are all included in the list of books that will no longer be available to Turkish readers.
In early 2009, a huge uproar occurred when the cover story of a publication by TÜBITAK was pulled, reportedly because it focused on Darwin’s theory of evolution." Public Library Exclusively For Digital Media Proposed from the don't-copy-that-book dept. CowboyRobot writes "In San Antonio, a judge and a precinct commissioner are proposing (PDF) a plan to create a library called BiblioTech that offers electronic media exclusively, offering patrons only e-readers and digital materials. 'BiblioTech intends to start with 100 e-readers that can be loaned out, 50 pre-loaded e-readers for children, 50 computer stations, 25 laptops and 25 tablets, with additional accommodations planned for the visually impaired.' But the economics have yet to be ironed out. 'A typical library branch might circulate 10,000 titles a month... To do that electronically would be cost-prohibitive — most libraries can't afford to supply that many patrons with e-reading devices at one time. And expecting library visitors to bring their own devices may be expecting too much.'" Book Review: Super Scratch Programming Adventure! MassDosage writes "I first heard about the Scratch programming language a few years ago and the idea of a simple language designed to teach kids to program in a fun, new way has always appealed to me. For those of you who don't know, Scratch was developed by the wonderfully named "Lifelong Kindergarten Group" at the MIT Media Lab. It's a programming language that allows programs to be built by dragging, dropping, configuring and combining various blocks that represent common coding concepts such as if/else statements and while loops. Scratch also provides tools for doing simple animation, playing audio and controlling sprites. The idea behind it is to make programming simple, fun and accessible to first time programmers so they can understand the key concepts without first needing to learn complex syntax which can come later when they move on from Scratch to other languages. It has been very successful and there are literally millions of Scratch programs freely available from the Scratch website and many others." Read below for the rest of Mass Dosage's review. Book Review: The Nature of Code eldavojohn writes "I kickstarted a project undertaken by Daniel Shiffman to write a book on what (at the time) seemed to be a very large knowledge space. What resulted is a good book (amazing by CC-BY-NC standards) available in both PDF and HTML versions. In addition to the book he maintains the source code for creating the book and of course the book examples. The Nature of Code starts off swimmingly but remains front heavy with a mere thirty five pages devoted to the final chapter on neural networks. This is an excellent book for Java and Processing developers that want to break into simulation and modeling of well, anything. It probably isn't a must-have title for very seasoned developers (unless you've never done simulation and modeling) but at zero cost why not?" Read below for the rest of eldavojohn's review. FBI Publishes Top Email Terms Used By Corporate Fraudsters from the security-unclassified-uscode-smuggle-espionage dept. Qedward writes "Software developed by the FBI and Ernst & Young has revealed the most common words used in email conversations among employees engaged in corporate fraud. 
The software, which was developed using the knowledge gained from real life corporate fraud investigations, pinpoints and tracks common fraud phrases like 'cover up,' write off,' 'failed investment,' 'off the books,' 'nobody will find out' and 'grey area'. Expressions such as 'special fees' and 'friendly payments' are most common in bribery cases, while fears of getting caught are shown in phrases such as 'no inspection' and 'do not volunteer information.'" Death of Printed Books May Have Been Exaggerated on Sunday January 06, 2013 @02:19AM from the i-think-it's-mark-twain's-fault dept. New submitter razor88x writes "Although just 16% of Americans have purchased an e-book to date, the growth rate in sales of digital books is already dropping sharply. At the same time, sales of dedicated e-readers actually shrank in 2012, as people bought tablets instead. Meanwhile, printed books continue to be preferred over e-books by a wide majority of U.S. book readers. In his blog post Will Gutenberg Laugh Last?, writer Nicholas Carr draws on these statistics and others to argue that, contrary to predictions, printed books may continue to be the book's dominant form. 'We may be discovering,' he writes, 'that e-books are well suited to some types of books (like genre fiction) but not well suited to other types (like nonfiction and literary fiction) and are well suited to certain reading situations (plane trips) but less well suited to others (lying on the couch at home). The e-book may turn out to be more a complement to the printed book, as audiobooks have long been, rather than an outright substitute.'" The Copyright Battle Over Custom-Built Batmobiles from the lost-their-wheels dept. Hugh Pickens writes writes "Eriq Gardner writes that Warner Brothers is suing California resident Mark Towle, a specialist in customizing replicas of automobiles featured in films and TV shows, for selling replicas of automobiles from the 1960s ABC series Batman by arguing that copyright protection extends to the overall look and feel of the Batmobile. The case hinges on what exactly is a Batmobile — an automobile or a piece of intellectual property? Warner attorney J. Andrew Coombs argues in legal papers that the Batmobile incorporates trademarks with distinctive secondary meaning and that by selling an unauthorized replica, Towle is likely to confuse consumers about whether the cars are DC products are not. Towle's attorney Larry Zerner, argues that automobiles aren't copyrightable. 'It is black letter law that useful articles, such as automobiles, do not qualify as "sculptural works" and are thus not eligible for copyright protection,' writes Zerner adding that a decision to affirm copyright elements of automotive design features could be exploited by automobile manufacturers. 'The implications of a ruling upholding this standard are easy to imagine. Ford, Toyota, Ferrari and Honda would start publishing comic books, so that they could protect what, up until now, was unprotectable.'" A Wish List For Tablets In 2013 timothy writes "For the last few years, I've been using Android tablets for various of the reasons that most casual tablet owners do: as a handy playback device for movies and music, a surprisingly decent interface for reading books, a good-enough camera for many purposes, and a communications terminal for instant messaging and video chat. 
I started out with a Motorola Xoom, which I still use around the house or as a music player in the car, but only started actually carrying a tablet very often when I got a Nexus 7. And while I have some high praise for the Nexus 7, its limitations are frustrating, too. I'll be more excited about a tablet when I can find one with (simultaneously) more of the features I want in one. So here's my wish list (not exhaustive) for the ideal tablet of the future, consisting only of features that are either currently available in some relevant form (such as in existing tablets), or should be in the foreseeable near future; I'll be on the lookout at CES for whatever choices come closest to this dream." Read below to see what's on Timothy's wish list.
计算机
2015-48/1917/en_head.json.gz/8988
Understanding the Impact of Your Workload on Your Cloud Infrastructure
Deploying dynamic and scalable websites
By Christopher Aedo

Enterprises are quickly realizing that their future success is dependent on their ability to adapt their business to the Cloud. That realization, however, comes with more questions and concerns about executing an effective cloud-based strategy. The explosion of the OpenStack community has made it possible for hosting providers and businesses to create or utilize Amazon-like public and private clouds, but it's clear that the Cloud is not a one-size-fits-all solution. One prime factor that dictates the success of a cloud computing strategy is the particular workload an enterprise is tackling. From DevOps, to rapidly deploying dynamic and scalable websites, enterprises' workload needs should dictate their cloud architecture. The specific workloads have an impact on many elements of the cloud, particularly the architecture of the infrastructure. It becomes clear how integral infrastructure architecture is to meeting workload requirements as we examine specific workload use cases.

The first element to consider in the architecture of cloud infrastructure is computing power. The number and speed of compute nodes within a cloud configuration will dictate how quickly processes can be executed. This comes into play prominently when assessing a workload, as the computing power required to develop a web app pales in comparison to the compute power required to execute Big Data analysis. Large-scale data analysis projects require powerful compute capabilities. While this kind of project is completely within the purview of well-constructed cloud architectures, that architecture must be designed as such.

The next integral ingredient to a cloud's architecture is the storage architecture. There are several different types of storage that vary in availability, resiliency and transactional performance. Amazon's Simple Storage Service (S3) provides a multi-tenant object storage environment, while block storage, like Amazon EBS, provides a persistent storage target. Typically an enterprise architecture would require a multi-level SAN architecture that provided enough IOPS (input/output operations per second) for the storage of the VMs as well as the transactional block storage. As flash storage has matured, it has become possible to collapse the typical storage architecture, running virtual machine operating systems and persistent transactional data on the same tier.

Another variable that's worth pointing out is that of data access speeds. While one might have a large element of storage space, the ability to quickly access the data stored within is a factor in developing infrastructure for particular workloads.

The last vector for consideration is that of density. In many datacenters, space is readily available. However, that may not always be the case. Compact and energy-efficient datacenter hardware systems take up less space in a datacenter, thereby saving space and presumably cost. However, dense hardware tends to be more expensive - making the proposition contingent upon cost per square foot versus the cost of denser hardware. One must also consider the power density per square foot as this varies widely depending on the data center. 
This kind of determination must be made based on ad hoc criteria and circumstances. Less dense solutions tend to also be less power-efficient, bringing an additional cost point of analysis into the picture.

Dissecting DevOps
DevOps is a term that has gained quite a bit of notoriety in recent years, as enterprises acknowledge the interdependence of IT operations and software development teams. DevOps aficionados are looking to cloud technology as a means to more closely align the two groups' respective goals, which tend to be fundamentally at odds. The DevOps operation means creating a cloud environment that allows developers to quickly self-service launch the necessary build and test virtual machines required to create the artifacts used in a continuous delivery pipeline. This kind of pipeline requires that the main code base (often referred to as the trunk or mainline) be constantly in the "green" state and execute with no fatal errors. One of the fundamental keys to creating that pipeline is rapidly rebuilding and unit testing any changed code. Some development shops rebuild on every code check-in by every developer, while others take a less extreme approach and build every ten minutes or on the hour. The success that can be achieved by a continuous build environment is largely dependent on a fast, well-orchestrated infrastructure. In this use-case, an IT manager will seek out a cloud architecture that launches and kills virtual machines (VMs) quickly, and includes highly accessible storage. Depending on the specifics, this could result in a cloud that combines a large amount of IOPS to quickly launch the VMs and perform the workload.

Deploying Dynamic and Scalable Websites
There's arguably no greater beneficiary of cloud computing than a company that repeatedly launches similar websites. Let's take a media company as an example that delivers entertainment content across its platform. Critical to this company's success is delivering existing and new content through rapidly changing websites. Powering this are innovative applications that provide interactive experiences that engage and create a loyal user base. For this particular type of workload, developers require automated provisioning and flexible storage and compute options, as different launches require different demands, such as a UGC contest demanding greater storage and an MMORPG video game that requires a compute-intensive environment. These requirements often vary in their scope but are consistent in their frequency, so it is vital to eliminate the need for repetitive, time-consuming tasks such as installing and configuring commonly used website software like databases and web servers. Well-made templates can be re-used and when consistency is maintained automatically, system administrators can focus on higher-value tasks rather than performing repairs. Where other workloads may have a narrow scope, elasticity and flexibility within compute, storage and data access elements is required to effectively and efficiently deploy dynamic and scalable websites.

Approaching High Performance Computing Animation
One interesting HPC application of cloud technologies is that of animation rendering. Over the years the animation industry has used various computer hardware and software technologies to automate the steps in the production process. 
Because many of these steps require high-performance computing systems with significant CPU and IOPS capabilities, animation shops have often relied on purpose-built hardware and software systems for their peak capacity. With the advent of server virtualization, high-speed solid state drives (SSDs) and standards-based cloud platforms, animators are taking a closer look at the benefits of cloud technology. In order for these workloads to be efficient and effective in the cloud, high power computing must be coupled with high IOPS, as virtual machines must be launched and deprovisioned as short-lived but CPU-intensive tasks.

Designing an infrastructure around a particular workload is a process that requires comprehensive understanding of the basic functions of the workload in question, and while optimizing an infrastructure for a particular workload can present some front-end hurdles, the efficiency and potential cost savings in the long run are significant, as managers can focus resources on a particularly impactful element of their architecture.

Published December 7, 2012.

About the author: Christopher Aedo is senior director of technical operations at Morphlabs, where he oversees the technology and operations side. He found his niche early in his career while helping a global accounting firm move their information systems from an IBM mainframe to a distributed network of Novell and SCO Unix servers. He is currently focused on making it easy for technology groups to move their infrastructure and applications from bare-metal or virtualized servers into public and private clouds.
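The workload-to-architecture matching the article argues for can be made concrete with a toy sketch. Everything below is invented for illustration: the flavor names, the numbers and the scoring rule do not correspond to any real provider's catalog or API; they simply show how stated compute, IOPS and density needs might drive an infrastructure choice.

# Toy illustration of "let the workload dictate the architecture":
# score hypothetical instance flavors against a workload profile.
FLAVORS = {
    "compute-heavy":  {"vcpus": 32, "iops": 5_000,  "gb_per_u": 64},
    "iops-heavy":     {"vcpus": 8,  "iops": 60_000, "gb_per_u": 128},
    "dense-balanced": {"vcpus": 16, "iops": 20_000, "gb_per_u": 512},
}

def pick_flavor(workload):
    """Pick the flavor with the smallest total shortfall against the workload's stated needs."""
    def fit(flavor):
        spec = FLAVORS[flavor]
        # Penalize every dimension where the flavor falls short of the requirement.
        shortfalls = [max(0.0, 1.0 - spec[k] / workload[k]) for k in workload]
        return sum(shortfalls)
    return min(FLAVORS, key=fit)

big_data_analysis = {"vcpus": 32, "iops": 4_000}                   # CPU-bound batch analytics
ci_build_farm     = {"vcpus": 8,  "iops": 40_000}                  # short-lived, IO-hungry build VMs
media_web_tier    = {"vcpus": 8,  "iops": 5_000, "gb_per_u": 256}  # scale-out web tier with heavy storage

for name, profile in [("big data", big_data_analysis),
                      ("CI builds", ci_build_farm),
                      ("media site", media_web_tier)]:
    print(name, "->", pick_flavor(profile))
# big data -> compute-heavy, CI builds -> iops-heavy, media site -> dense-balanced

In practice this kind of mapping usually lives inside capacity-planning spreadsheets or scheduler placement policies rather than a standalone script, but the decision logic is the same: characterize the workload first, then size the compute, storage and density of the cloud around it.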
计算机
2015-48/1917/en_head.json.gz/9504
The blind leading the blind in Massachusetts
Short URL: http://fsmsh.com/1410 | Tue, 2005-11-01 23:11 -- David Sugar

For the moment, I will ignore the false statement of some that specifying ODF requires one to run OpenOffice. In fact, there are many products which already support it, including KOffice and AbiWord. Anyone that wishes to can produce OpenDocument compatible software, including proprietary software vendors, such as Corel, who have chosen to do so. Microsoft alone insists not that it is unable to do this, but rather that it is unwilling, and it alone demands the state choose its products and its document format instead. In doing so, it is requesting that the state join with it in an illegal business practice.

The principal complaint that some have used is trying to claim that ODF cannot meet state mandates for blind accessibility. As it happens, I am interested in this very issue, under the sometimes active GNU Alexandria project. It is true: work has been slow, but the basic idea was to write a server that could access government documents and web sites in their current form, and then provide a voice-rendered representation of said document or web site to a user, either locally through a soundcard, or over the public telephone network as part of an automated government service. GNU Alexandria is licensed under the GNU GPL. Microsoft claims that its patent license trumps the rights of others to access even their own documents and data, and while it offers its patent-encumbered XML schema under a royalty-free license, it requires others to both engage in giving their rights to access their own data away as part of it, and specifically denies the right to sublicense their patent. This would legally require the state to exclude using GNU GPL licensed software that may access state generated documents, at least if they are produced and distributed in the Microsoft schema. Clearly the result would be that fewer products and services, including those to enable blind accessibility, will be available to the state if it were to choose Microsoft's XML rather than an open standard like ODF.

However, accessibility is just a Trojan horse. There is a deeper and even more disturbing issue in this as well if Microsoft's patent-encumbered XML schema became a mandated state standard. From the point of view of a software vendor, adopting Microsoft's XML schema would mean that Microsoft, and not the state, would determine under what terms vendors can offer goods and services or even engage in business with the state. This would be something like Ford saying to Massachusetts that it can also purchase Hondas, or cars from other companies, but only under terms and conditions that are set by and from companies that are approved by Ford. While governments can specify terms and conditions of sale on their own in a non-discriminatory fashion, to adopt Microsoft's XML schema would require the state to discriminate against some vendors, and to do so at the request of another. This is in fact illegal. It most probably violates state law, and certainly constitutes a scheme to engage in illegal restraint of trade as per U.S. Code Title 15. Not only would Microsoft's scheme make the state of Massachusetts itself a party to criminal behavior, but it would also potentially leave the state liable to any civil actions that may result. 
Clearly, by mandating ODF as a government standard, the state of Massachusetts would enable all vendors who may wish to offer interoperable goods and services (including Microsoft). And it would assure that the population, the very citizens of Massachusetts whom the state serves, have a legal and uncontested right to access state documents in any manner they choose without restraint, including blind users by whatever software they may happen to use. To do anything less would represent a failure of the state to serve even itself, or its duty to meet the needs of all its citizens on a fair and equal basis.

Category: Opinions

David Sugar is an active maintainer for a number of packages that are part of the GNU project, including GNU Bayonne. He has served as the voluntary chairman of the FSF's DotGNU steering committee, as a founder and CTO for Open Source Telecomm Corporation, and currently owns and operates Tycho Softworks.
计算机
2015-48/1917/en_head.json.gz/9551
A system and method for transmitting data, using a source synchronous clocking scheme, over a communication (or data) link. A source synchronous driver (SSD) receives a micropacket of parallel data and serializes this data for transfer over the communication link. The serial data is transferred onto the communication link at a rate four times as fast as the parallel data is received by the SSD. A pair of source synchronous clocks is also transmitted across the communication link along with the serial data. The two clocks are the true complement of one another. A source synchronous receiver (SSR) receives the serial data and latches it into a first set of registers using the source synchronous clocks. The serial data is then latched into a second set of registers in parallel. The second set of registers is referred to as "ping-pong" registers. The ping-pong registers store the deserialized data. In parallel, a handshake signal, which is synchronized to the clock on the receiving end of the communication link, indicates that there is a stream of n contiguous data words being received by the SSR. The ping-pong registers guarantee that the deserialized data is available (valid) for two clock cycles. This provides a sufficient window to account for the synchronizer uncertainty on the handshake signal, while introducing minimum latency.
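The buffering idea in this abstract is easier to follow with a small behavioral model. The Python sketch below is an assumption-laden simplification of 4:1 deserialization into two alternating "ping-pong" registers, so that each deserialized word stays available while the other register fills; it illustrates the concept only and is not the patented circuit.

# Behavioral sketch: serial bits arrive 4x faster than the parallel clock,
# every 4 bits form a word, and words land in alternating ping-pong registers.
class PingPongDeserializer:
    def __init__(self):
        self.regs = [None, None]   # the two "ping-pong" registers
        self.select = 0            # which register is written next
        self.shift = []            # serial bits latched by the source-synchronous clocks

    def receive_bit(self, bit):
        """Latch one serial bit; every 4 bits form one parallel word."""
        self.shift.append(bit)
        if len(self.shift) == 4:
            self.regs[self.select] = tuple(self.shift)  # load the deserialized word
            self.select ^= 1                            # alternate registers
            self.shift = []

    def read(self, which):
        """Core logic reads whichever register the handshake indicates is valid."""
        return self.regs[which]

des = PingPongDeserializer()
for bit in [1, 0, 1, 1, 0, 0, 1, 0]:     # two 4-bit words sent serially
    des.receive_bit(bit)
print(des.read(0), des.read(1))           # (1, 0, 1, 1) (0, 0, 1, 0)

Because writes alternate between the two registers, each word remains readable for two parallel-clock cycles, which is the window the abstract relies on to absorb synchronizer uncertainty on the handshake signal.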
计算机
2015-48/1917/en_head.json.gz/9579
The evolving open source movement
Bevil Wooding | Published: Thursday, October 4, 2012 | Technology Matters

It may not always be obvious, but the open source movement has been on a steady upward march. Globally, open source applications are becoming a major factor in all industries from governments, healthcare and education to gaming and disaster relief. Today, open source presents significant opportunity for the development of local solutions, innovation and industry. The term open source generally refers to any material, such as software programs or other digital content, that is made freely available to the general public for use or modification from its original design. For example, anyone can use, modify or adapt the source code for the popular Android operating system to create their own products or services. The open source movement is based on a very different philosophical approach compared to traditional intellectual property creation models, where copyrights and patents prevent others from appropriating ideas without cost or penalty. Open source licenses specifically grant royalty-free, perpetual and non-exclusive usage rights to the general public.

Knowledge: The principal thing
While the open source vs proprietary IP debate will continue, there is no question that open source provides tremendous opportunity for organisations as well as end-users. At the most basic level, open source provides users the world over with significant building blocks for constructing complex information structures and services. The open source benefit is not simply an issue of costs, but of knowledge. The explicit exploration, modification and adoption of open source software can have very real and tangible benefits for developing capacity in the local technology sector. The principles of openness and collaboration that lie at the heart of the open source movement are of major relevance within a developing society context. Open source software projects have, to an overwhelming degree, been the result of collaborative inputs of thousands of contributors including, for example, programmers, designers and writers from across the planet. This approach allows nations which may only have a fraction of the resources of large developed and emerging economies to tap into a set of skills and experiences that go way beyond their local context and local pockets. For instance, open source applications such as Ubuntu, Ushahidi and OpenOffice have been successfully used to advance productivity in the public sector, in schools and even in the private sector. The movement provides a doorway for the development and customisation of local solutions for local needs. Properly leveraged, open source approaches can be an important driver of innovation.

Beyond software: Open education
Recently, the open source movement has been evolving from its software-focused roots to broader open data and open content initiatives. Already, massive open online courses (MOOCs) are becoming the most popular form of online education. The emerging leaders of this movement, including Web sites such as Udacity and Coursera, offer free courses in subjects such as computer science and statistics, taught by accredited lecturers. Another new paradigm is beginning to emerge: open textbooks. This new development is threatening to disrupt a US$4.5 billion industry that has so far avoided the media upheavals experienced in music, movies and trade publications. Open-source textbooks are free for students to use and for professors to modify. 
More companies are moving to develop them, and more classrooms are adopting them. The underlying objective, politically and socially, is to lighten the burden for students who have been hit with tuition increases and rising textbook costs. This movement aligns well with the rise in free online courses, and it is poised to revolutionise the way we view, and pay for, education. Businesses, educators, government institutions and innovators should be looking at the open source movement with fresh eyes. Seen in the right context, with the right support and incentives, the open source movement can offer a new world of possibility for building the knowledge economy.

Ten open source software projects you should know about
1. Android OS: Android is a Linux-based operating system for mobile devices such as smartphones and tablet computers. It is developed by the Open Handset Alliance, led by Google.
2. GnuCash: GnuCash is a free accounting software system designed for personal and small business use. It allows you to track bank accounts, stocks, income and expenses, in addition to double-entry accounting.
3. Google Chrome OS: Google Chrome OS is a Linux-based operating system designed by Google to work exclusively with web applications.
4. Magento: Magento Community Edition is the world's fastest growing e-commerce platform. The Enterprise Edition, for which there is an associated cost, offers features like multi-store capability, store credits and gift cards, out-of-the-box.
5. MySQL: MySQL is the world's most used open source relational database management system that runs as a server providing multi-user access to a number of databases.
6. Open Office: A productivity software suite for creating text documents, spreadsheets, presentations and databases.
7. PDFCreator: PDFCreator is a credible rival to Adobe Acrobat letting you create PDFs from practically any application.
8. Ubuntu: Ubuntu is a free operating system for Linux that's quick and easy to use. Recent figures suggest that around 50 per cent of Linux users have Ubuntu installed. With its focus on usability, Ubuntu comes with OpenOffice, Firefox, Empathy, Pidgin, GIMP and other tools pre-installed.
9. Udacity: Udacity, with a stated goal of democratising education, offers a range of certification options that are recognized by major technology companies.
10. Ushahidi: An open source project which allows users to crowdsource crisis information to be sent via mobile devices.

Bevil Wooding is the Founder and Executive Director of BrightPath Foundation, an education-focused not-for-profit delivering values-based technology training programs including digital publishing and eBook creation workshops. He is also Chief Knowledge Officer of Congress WBN. Follow on Twitter: @bevilwooding and Facebook: facebook.com/bevilwooding 
计算机
2015-48/1917/en_head.json.gz/9790
Distance Between ZIP Codes Lookup
Straight Line Distance Between ZIP Codes: enter two 5-digit ZIP Codes (First ZIP Code and Second ZIP Code) and the lookup displays the straight-line distance between them in miles.
Listware for Excel & Online verifies, corrects & enhances names, addresses, phones & emails. Up to 1,000 free Credits every month.
Melissa Data is the leading international provider of data quality and address verification software. Melissa Data's affordable, easy-to-use solutions for data cleaning and deduplication can be used separately or together to provide full global data quality. Melissa Data specializes in Microsoft .NET and SQL Server Integration Services, IBM, Java and Oracle environments with APIs, Web services and enterprise plugins that empower batch and real-time applications. Melissa Data also provides a full line of direct marketing solutions including mailing software, list hygiene and data append services, and a comprehensive line of mailing lists and sales leads.
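The page does not say how the lookup computes its result, but a "straight line" distance between two ZIP codes is typically the great-circle distance between the ZIP centroids. The sketch below assumes a small hypothetical centroid table (a real implementation would use a full ZIP/latitude/longitude database) and applies the haversine formula.

from math import radians, sin, cos, asin, sqrt

# Hypothetical ZIP-centroid table; coordinates are approximate and for illustration only.
ZIP_CENTROIDS = {
    "90210": (34.0901, -118.4065),   # Beverly Hills, CA
    "10001": (40.7506, -73.9972),    # New York, NY
}

EARTH_RADIUS_MILES = 3958.8

def straight_line_miles(zip_a, zip_b):
    """Great-circle ('straight line') distance in miles between two ZIP centroids."""
    lat1, lon1 = ZIP_CENTROIDS[zip_a]
    lat2, lon2 = ZIP_CENTROIDS[zip_b]
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    # Haversine formula
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

print(round(straight_line_miles("90210", "10001")))  # roughly 2,450 miles

Note that this is the distance "as the crow flies"; driving distance between the same two ZIP codes would be noticeably longer.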
计算机
2015-48/1917/en_head.json.gz/9953
Get a degree and become a master hacker (PC Advisor)
An online training company is offering a master's degree program in security science.
US IT trainer offers master's degree for hackers
Matt Hines

EC-Council University, a New Mexico-based distance learning company, has launched the program to help qualified workers advance their training and move to the next level of the security profession. Founded in 2006, the school is a spin-off of the International Council of Electronic Commerce Consultants, an online trainer that claims to have certified more than 40,000 IT professionals already, including 12,000-plus security specialists. With the growing need for highly skilled security experts among businesses, EC-Council founders say there's a scarcity of people who have all the know-how necessary to make the leap to CTO-level work. "Traditionally, a lot of white hat hackers have been people with computer science backgrounds who taught themselves about hacking, but we're trying to change the surface of the industry because we can't leave this field of study up to chance," said Jay Bavisi, president of EC-Council. "In researching the issue, we found that people had widely different standards of knowledge and varying levels of skill when applying for these types of high-level IT security jobs," he said. "We think that we can set a standard by which people worldwide can say, 'This is what you need to know to be considered a true ethical hacker.'" Through the program, Bavisi said, the school is giving a handful of candidates - all of whom have amassed significant amounts of IT security training and real-world experience before qualifying for the degree - the chance to boost their overall understanding of many different types of security issues while improving their future job prospects. The school is officially accredited by the New Mexico Department of Education, and it claims that the "ethical hacker master's" program attracted more than 150 candidates for its initial class, only six of whom were accepted. All of the applicants who were accepted - and are currently undergoing training - had at least one other master's degree or an "enormous" amount of real-world experience, he said. The school has previously offered professional certification in security fields including computer forensics, ethical hacking, and penetration testing, but under the master's program - which is expected to take anywhere from one to three years to complete and cost more than $21,000 in tuition - students will be forced to immerse themselves in nearly every area of IT systems defense and policy enforcement. Upon demonstrating that they have accrued a degree in computer science or commensurate real-world experience, students who qualify for the program are required to take courses in ethical hacking and countermeasures, computer forensics, and network intrusion detection. Candidates must then complete six electives to qualify for the degree, along with a master's thesis, with the option to choose courses from a list that includes secure network management, security analysis and vulnerability assessment, cyberlaw, principles of e-business security, disaster recovery, project management, penetration testing, secure programming, and wireless networking. 
"Most chief security officers in the industry today joined at the low end and came up through the ranks, but we felt there was a need for a specific training regimen that went far beyond what was out there for systems administration-level professionals," Bavisi said. "Today's CTOs need to understand a wider range of attacks than ever before and how to help their organisations respond in a forward-thinking way in an organisational environment that is increasingly strained in terms of budget and the acquisition of new technologies." Bavisi said that it took roughly two years to put the program together and gain accreditation from the appropriate educational bodies. One of the six current students, Dock Marshall Clavon, who currently works as a project management infrastructure analyst at oil exploration industry giant Chevron Global Upstream, said that he is taking the program to position himself for a management job in IT security down the road. Having already completed a master's in business administration with a focus on IT security, along with a master's degree in project management, he said that there should be significant opportunities in the near future for those who aggressively expand their skills. "A lot of the senior people who do this type of work are from the baby boomer generation, and they're going to start retiring soon, which should lead to a hiring rush for those who are qualified over the next three to five years," Clavon said. "And a lot of people in IT security aren't interested in managing people, which actually might be the hardest part of this type of work." While Clavon said that he isn't looking to swap jobs today, he believes that completing the course could get him "fast-tracked" by his current employer or by other firms looking for management-level security expertise. "In this field, there will always be work for the locksmiths, and as technologies move further into the electronic world, there will be job security for the people who have the right sets of skills," he said. "The tests in this program are hard, and the classes are allowing me to go deeper into this area of concentration, not just in terms of technology, but in terms of what it will take to lead others in a business environment." Tags: Security, Humax HD-FOX T2 review
计算机
2015-48/1917/en_head.json.gz/10053
Liz Maas
Final Fantasy XIV Free Trial Extended... Again
Eorzea hopes that you still have some patience left.
11.17.10 - 1:25 AM

Over the last few weeks, Square Enix has done a lot of apologizing for Final Fantasy XIV's less-than-stellar worldwide launch, and promising huge updates. Last month, they extended users' free trial periods by 30 days - and are now doing so again. This time, as long as you have an account and buy or have bought a character before November 19th (that's this Friday), after that date you'll get another 30 days automatically added to your trial period. You'll get the extra time even if you benefitted from last month's extension, but not if you've already cancelled your account.

As promised, Square Enix plans version updates both later this month (the 25th to be exact) and in December, as well as periodically in 2011. One of the most sought-after changes that will be addressed is the currently-unpopular user interface. Loading times will also be taken care of, notorious monsters will be added, you'll finally get an item search feature, and improvements will be made to the tutorials as well. But that's just scratching the surface. The complete list of planned changes is insanely long, so you can check it out for yourself.

Meanwhile, Weekly Famitsu is reporting that the MMORPG has been 'completely ported' to the PlayStation 3, so it looks like it's on track for the scheduled March release date.
计算机
2015-48/1917/en_head.json.gz/10411
A plague on my house . . . or maybe not
Larry

Armonk. Cupertino. Redmond. Santa Cruz. Of the four aforementioned places, three have iconic status in the history of the personal computer, and the fourth hopefully can reverse its dubious place in the historic footnotes that have yet to be written. Armonk is where IBM makes its home. Cupertino and Apple are joined forever in an infinite loop at 1 Infinite Loop. Redmond . . . well, the Death Star has to reside somewhere, and the suburb just east of Seattle just happens to be where Microsoft settled in. Then there's Santa Cruz, which is the "SC" in the original "SCO," which at its founding in 1979 was the Santa Cruz Operation. I live in Santa Cruz — in the mountains of Santa Cruz County, not near the surfing mecca on the shores of Monterey Bay (hence, I don't pepper the ends of my sentences with duuuuuuuude) — and through SCO's many metamorphoses, the company no longer has its headquarters in Santa Cruz (to be fair, there's an SCO office in Scotts Valley, a suburb here which would be more at home in Orange County than Santa Cruz, but I digress). That's a good thing, too, because like Berchtesgaden in Germany trying to clean its sullied past as Hitler's playground, Santa Cruz also has some image problems in GNU/Linux circles thanks to SCO. This occurred to me during an on-line conversation with someone overseas that went like this:

J: Where do you live?
Me: Santa Cruz, California.
J: Santa Cruz? As in SCO?
Me: Um, yeah. But I didn't live here when SCO was around.

Why did I feel the need to defend Santa Cruz? I don't know. We have some pretty good software and hardware companies here — Borland started out here, and Seagate still makes its home in Santa Cruz County, as does Allume, which was once called Aladdin Systems and is still based in Watsonville. A plethora of independent developers — like Entrance's Tod Landis — write programs on "this side of the hill," while the Silicon Valley teems with activity on the other side of the Santa Cruz Mountains. Open Source and Free Software Reporter, my magazine, is based here, too. SCO is now based in Utah, which begs the question why they haven't changed their name to UO, for Utah Operation (and keep those cards and letters — I've read the history and know why).

(Larry Cafiero, editor/publisher of Open Source Reporter, is an associate member of the Free Software Foundation.)

Categories: Allume, Borland, Entrance, IBM, SCO, Seagate
计算机
2015-48/1917/en_head.json.gz/10559
4D Rulers Developers Site Launched - April 25th, 2007
The new developers site is now online! The AMP2 Engine site has been merged into the developers page. Four new products have also been added. The Normal Mapper tool for 3D Studio Max, as well as two new art libraries, are now online and available for license. Additional information can be found inside the site for these products. The biggest addition is the availability of pre-orders for the upcoming 4D Automated Update, which will allow games and other software the capability of automated software updating. More details can be found on the information page, located in the Developer Kits section of the site. The full press release regarding the site launch can be read here. That does it for now as far as the site launch is concerned, but more is on the way, so check back soon. © 2007 - 4D Rulers Software Inc.
计算机
2015-48/1917/en_head.json.gz/10610
Delay confirmed; IE8 will ship in 2009
As suspected, IE8 will not be shipping by the end of the year. The IE8 team is …

Microsoft officially set the deadline for the final version of Internet Explorer for the end of 2008, and never gave details beyond that. We noted in September that, according to the IE8 Beta 2 support page, prerelease versions of IE8 would no longer be supported come November 1, 2008. Earlier this week I noticed that that date had changed to December 31, 2008 and speculated that the final release of IE8 wouldn't be on time. My suspicions have been confirmed: IE8 has been delayed. The IE blog has given insight on the team's plans, but still hasn't given a concrete date:

We will release one more public update of IE8 in the first quarter of 2009, and then follow that up with the final release. Our next public release of IE (typically called a "release candidate") indicates the end of the beta period. We want the technical community of people and organizations interested in web browsers to take this update as a strong signal that IE8 is effectively complete and done.

Microsoft's goal is to use the IE8 release candidate as a final test case: the company will be asking the public to test their sites and services with the build. Only feedback on critical issues will be considered before the final version is released. We'll keep you posted for when the RC build arrives. I'm perfectly fine with more testing and bug fixing being done; does a delay bother you?

Further reading: IEBlog: IE8: What's After Beta 2
计算机
2015-48/1917/en_head.json.gz/12136
CAP 4 - Final Product
Discussion in 'CAP Process Archive' started by bugmaniacbob, Nov 16, 2012.

bugmaniacbob:
Memento mori: Wyverii, you are a veritable goddess among mortals

Whelp, two months have passed, it's the end of the only chance I'll get to lead a proper CAP, and wouldn't you know it, it's also my 2000th post! Took quite a few update threads to push myself up to that number in time… so this may be a little long, as I think it's right and proper to make the most of a double-opportunity to soliloquise. But I'll do my best to keep it as short as I possibly can… Anyway, I suppose I should make a start on this, shouldn't I? Yes, where indeed to begin… I suppose perhaps, what I'd like to share with you before anything else, are a few memories from my very earliest interactions with the CAP Project – and more particularly, the message attached. I've often been guilty in the past of poking fun at newer members and their inability to read the rules, but I've never quite forgotten, myself, of how unbelievably awful I used to be. Briefly, of course, but on the other hand, there's a pretty important take-home message hidden in all this. I've seen people diving straight into topics, being told they're idiots, and that classic, age-old Smogon mantra, "lurk more". And every time I see somebody new post something like that, there's always, somewhere deep down, the realisation that here is someone who tried to read the rules, and yes, who made a hash of it, but there is always the potential for moving on up. Somehow I've managed to be one of only fifteen Topic Leaders in the five years this Project has been going, despite spending most of my first CAPs being the sort of occasional-poster who only occasionally gives an asinine, objectively wrong opinion as justification for clicky voting. And there are plenty of people here like that (not naming names, obviously), so I suppose the message is: stick around for a few years, and anything can happen.

My very first recollection of the Create-A-Pokemon Project is from long before I joined Smogon, trawling through the old threads and trying to piece together all the bits and pieces of the first 3 CAPs – this was, then, some time in early 2008, before Fidgit was even thought of. Possibly Pyroak wasn't even finished, because somehow I never managed to find more than a few scraps of process threads – I can't understand why I never found Sunday's old Strategy Pokedex thread. But there was one very notable prevailing thought running through my head while I was doing this (aside from the obvious "would it kill them to organise these properly"), which was, oddly enough, "where are the rest of them". This isn't a particularly intuitive thing to think, so I'll explain further. The CAPs, as they stood, were frankly remarkable – they had their sprites, artwork, and movepools down to a tee, and the whole forum exuded such an air of professionalism (actually, to be honest that's a bit too kind. It was more like "being utterly unapproachable", but professionalism isn't necessarily untrue either) that I couldn't believe that the community could have developed its Pokemon-crafting skills so quickly and cleanly in a mere three attempts. Of course, there is an element of naivety here as well. 
Naturally I would expect, as many would expect on arriving at something called the "Create-A-Pokemon Project", that there would be a lot more Pokemon-creating going on (not being entirely familiar with the democratic process of the CAP Project at that time, or indeed the fact that it had only even existed for a few months). Looking back on it, possibly Smogon's overall demeanour gave the rudimentary Project a look by association that it didn't necessarily deserve, but the fact remains that I, young and impressionable, was indeed impressed. Briefly. Then I moved on to more important things, like deciding how much Lego I needed to build whatever model of the week it was. Now, we must jump forwards a little to September of 2008. I, like the many other uneducated imbeciles of the time, decided that it would be a worthwhile use of my valuable time and resources to join Smogon and complain angrily about the banning of Garchomp to Ubers in the Diamond/Pearl generation – for those of you who need a little history backstory, I'm probably not the best person to give it to you, admittedly, but I can cut a very long story short. Smogon wasn't, as far as I am aware, even close to how big it is now until the release of Diamond and Pearl, and the sudden ability for millions of people such as myself to play and battle online without using IRC scripts or Netbattle to play what was still ostensibly a children's game over the internet against a group of die-hard Pokemon fans who formed what seemed to be an impenetrable clique. So why did simulator battling suddenly become popular? Well, there was the fact that the bloody DS Wi-fi never worked… and that even when it did it would perpetually freeze or crash or burn when you were in the middle of a battle… and that I have no idea why any introverted nerd would ever want to attend an event dedicated to something they do when they're bored. But mostly the discovery of convenience – being able to quickly and easily create a team of whatever the hell you wanted from scratch, battle with it in seconds, and be able to lift the easiest and best sets direct from an analysis, for every single Pokemon in the game – without any kind of research – was a revelation. Rather predictably, however, I mistimed, and equally predictably, Garchomp was voted Über – the first case of its kind. Oh, I should probably also have mentioned the Suspect Testing. In the past, I guess whoever had the "big stick" at Smogon decided what the tiers were, and everyone went along with it. Then we get Suspect Testing, the first serious attempt to introduce democracy to the process. Which would be great if I agreed at all with any decision the system has ever made ever… or with the system itself. But that's quite beside the point. Garchomp's Über now, so what do we do? Easy… vent frustration by trying to logically argue the case that maybe just maybe you are being a teensy bit over-zealous in banning something that is very much borderline. And of course, when that fails, get angry and start employing rhetoric. Weirdly, looking back on it, I never got an infraction for anything I posted during that time period… I guess pretty much everyone was a troll back then, so I kind of blended in with the crowd. For reference, there's no way Garchomp was ever broken in DP OU and it was a reactionary movement both in testing the arm of the Suspect concept and against the shift away from the defensive metagame that everybody remembered from RSE. Old opinions die hard, I guess. 
So eventually this gets boring, and I start looking around for something else to ruin. Well, no, it didn't actually go like that, but… eventually I arrive back at CAP, and find that in my absence, Fidgit has been and gone, and Stratagem is nearly over. So, with little to no knowledge of how the process works or even what the point of the whole endeavour is, I immediately leap in to grace the CAP community with the illustrious benefits of my infinite knowledge, wisdom, and dare I say total obliviousness to how pretentious I sound when I try to exemplify the above traits alongside an opinion.

CAP 5 said:
Protolith sounds cool, but more like a geological sample than a pokemon. Strategem... for me, the fact it is an English word has an averse effect; I mean, it doesn't really suit a rock. Therefore, I conclude that I shall vote Protolith.

Adorable. I couldn't even spell "adverse" correctly. This was, as you may recall or recognise, CAP 5's final Name Poll, with tennisace at the helm, or as he was known then, tennisace0227 (three cheers for good old random numbers), with Protolith facing off against Stratagem. This was just after the CAP 5 Art Submissions debacle (of which I was blissfully unaware at the time, and only came across while reading some earlier process threads later on). With my support, Protolith didn't have a hope in hell of winning, obviously. But at the very least, I'd made my mark on CAP, and that counted for something, right? Well… eh. Unfortunately for the whole community, my quite obvious unparalleled genius and wit failed to be recognised, and as such, the whole of CAP was deprived of the benefits of my glorious intellect for pretty much the vast majority of the next few CAPs. Just think what could have been, had my talents been recognised then, and I been given the power that I so obviously deserved!

bugmaniacbob said:
I don't know if I'm allowed to post concept submissions, but I read the rules four times over and I'm sure it says I can post vague ideas... Anyway, this is a watered down concept of a pokemon I envisualised once, of course no more details as such. Concept: 'Defensive Dragon' Description: A Dragon with defensive capabilities that can viably defend itself against other Dragons.
Still, that's getting a little bit ahead of time. There were, after all, a few things that appeared in the month between these two events. This mainly refers to a curious little oddity in CAP's history known only as "EVO". Now, I should probably point out at this very early juncture that I had pretty much nothing to do with EVO in any even vaguely important respect – not even putting a word or two into those famously chaotic discussions. I merely jumped in once, said the word "Pinsir" a lot, and disappeared again. Now, for those of you who don't know what I'm on about, EVO was a side-project that ran simultaneously with CAP 5 and was designed to exploit an existing niche using an existing but lacklustre Pokemon as a base. And rather predictably, it degenerated rather quickly into flavour (back then an even bigger taboo) and Farfetch'd. Why exactly am I bringing this up? Well, if you aren't one of the people who, unlike myself, choose to spend what little free time they have delving through forum archives as opposed to, I don't know, say, spending time with friends or family, I feel that it's necessary to give you a little required reading – specifically, the two posts on this page by DougJustDoug and X-Act, two of the people whose contributions to CAP honestly can't be stressed enough, but here's not the place to review their accomplishments – a couple of posts that do leave a rather marked impression, not purely as an example of how the CAP Project was in the past, but its fidelity to its core values or, more accurately, mix in a number of universally held points alongside the context of the time. The trivialities that we so often get ourselves hung up over, those who care only for a few parts of the process, not the whole, and the problems with poll-jumping and flavour. On a more positive note, those who step up to lead the discussions and try to make them work, the belief in allowing people to organise themselves and direct by example, the fierce defence of the process against those who don't quite see the point of it. As X-Act noted, "there's no way to prove that your opinion is right here", and as such all people, badgeholders and non-badgeholders alike start out on an equal footing from that first sentence. There are also bits of it that we can keep in mind when looking back on CAP 4, and looking ahead to the future – particularly the bit about the workhorses of the project. I've mentioned Doug and X-Act – I could just as easily mention tennisace, Umbreon Dan, Rising_Dusk, Fuzznip, or any number of others who have made all this possible through blood, toil, tears and sweat. And to a large extent, they're all gone or less active now. New workers are needed, and there's never been a bigger need for them. And there's a good reason I'm saying this – because I physically can't do it. But I'll get back to that later. So, you may be wondering, why bring up the concept submission? It was dire, yes, but not really more so than any other concept submission around that time, really. After all, they never really had that much structure. Well, after the above had resolved itself, I came back to CAP, and was surprised to see that the EVO Project had disappeared (by this time, CAP 5 had also finished a while ago). Still, never be discouraged, unless your death is imminent. I decided to actually try to find out how the CAP process worked. 
Now, we didn't have the good old CAP site back in those days (or if we did, it certainly wasn't advertised well enough), so I read the somewhat vague sticky threads religiously. All right, I said to myself. Apparently, we have to post a competitive concept for the Pokemon. Ok, I've written my concept, and it conforms to all the specifications. Ok, so do I just post it in the main forum? This guide is rather vague. Surely if anybody could just waltz in and post a concept, they'd have way more than five CAPs by now? Oh well, nothing else for it but trial and error. And yes, rather than bothering to rummage further through CAP process threads to find a suitable Concept Submissions thread to rummage through, I was one of those imbeciles you mock, who posted his concept in its own thread before a CAP had even begun. And in fact, the thread was never deleted, so it's right there for all you lovely people to gawk over and what not. In any case, I fled chastened, with a jolly old infraction in hand, and as such kind of missed the bus on the beginning of CAP 6 (which started about a week later). So, yes, while I'd love to say that lurking gave me the opportunity to study the CAP Project and associated process in action, in detail, in real-time, to be perfectly honest I was hiding under a rock for most of it, and missed nearly all of the important bits of the CAP – Concept, Typing, Ability, and Stats – returning right in the middle of Art Submissions. So then, why not try my hand at art submissions? After all, it ought to be pretty hard to screw up posting some artwork, right? …and there we have infraction #2. Wasn't that fun, children? I suppose I should probably note that the above was based on the mantis shrimp and well… yes, I did think it was pretty horrible at the time, but I thought posting something was better than posting nothing. Ah well. Back to the drawing board. Quite literally, in this case. Still, never mind. If nothing else, I could always think of an extraordinarily pretentious set of names for Name Submissions. I think I would name it... Carmolée or Epolace. Both names are amalgamations of Carapace, Mollusc and Epée, representing the parts of the pokémon; Carapace for the shell, Mollusc for the squid, and Epée for the sword.Click to expand... I think that if you look into these, my slate for Aurumoth's name polls starts to make a lot more sense… So, anyway, enough of that. CAP 6 is over, and CAP 7 is about to begin! Three cheers and hearty bellows, lashings of ginger pop all round, et ceteri. So, let's recap how far I've come on my CAP odyssey, after participating in two (kind of) CAP Projects – I have successfully demonstrated that I can't follow simple instructions or research process properly, can't read rules, and have terrible taste in names (and pretty much any kind of flavour really). All in all, you would be well within your rights to predict that I am the sort of rotten, miserable little imbecile that always has to ruin the fun for everyone else. After all, after two CAPs of nothing but utter bilge, surely he won't have changed at all? Name: Dragon's Bane Description: A pokemon built with emphasis on countering, not only Dragons themselves, but also the hideously overpowered Dragon moves that make them so deadly (you know the ones...). 
The plethora of moves available to Dragons to stop Steel-types from ruining their fun has led many to believe that this is an impossible endeavour without the hypothetical counter having ridiculous base stats, so I'd be interested to see how the community approaches this sort of project.Click to expand... And looking at the concept I submitted… you'd be totally right, I guess. Old habits die hard, and all that… some day, Dragon's Bane, you will be victorious in a CAP concept poll… just like Jack of All Trades, Winter Wonderland, and all those other ones on the "if we ever run out of ideas" list. But yeah, looking at that, you'd be well within your rights to ask yourself, "What is this guy thinking?" (This is of course assuming that you actually remember the idiot who decided it would be a bright idea to post a concept in its own thread under the expectation that it would lead to a CAP despite all the evidence to the contrary…). After all, you'd think that after two CAPs, I'd have an idea that wasn't exactly the same as the disastrous concept from months before. Honestly. And yet… Sorry if I've bored you, I promise we're getting to the take-home message soon enough. But here's the interesting part of CAP 7. You can work out for yourselves exactly by how much I'd got better through the two CAPs I'd taken part in, just from that little snippet of conversation – and probably also from my vehement support of the Bug-type in the Typing Discussion. But at the same time, some things do change. This CAP was the first time I had submitted my own stat spread – and somehow, I managed to come third. True, I was miles behind first and second, but for someone who's pretty much a "new guy" even at that point, it's pretty much functionally equivalent to winning regardless. And there's a weird message in that, which is that really, it doesn't matter what people think of you as a person, so long as your submission is good – which is something I really like about the way CAP works. I could submit a terrible, horrible concept for one poll, and get nowhere, and then I can submit a stat spread not long afterwards, and actually not do that badly. Well… it's a bit out of character for me to be so sunny and dreamy, isn't it? I suppose my cynical side wants to point out that first and second place in that poll both belonged to long-time, respected CAP contributors, who were far and away ahead of everyone else, and that, yes, we have a very real problem even now with voting based on the person who submitted it, even though it isn't talked about as much any longer… but hey, getting the encouragement and support of the TL and ATL was really satisfying enough. Oh my, such pathetic sentiment. Shall we move on to the moral message? I suppose we had better… As I said at the beginning – stick around for a bit, and you get recognised. It's really rather surprising how many people I know and can place into specific voter demographics (if you will) based on my seeing them in the previous CAP, or even a long time before. It's rather nice, in a way, that we as a community can recognise each other in this way – though I won't pretend that my view of everybody is always positive. But I'm always prepared to judge by post, and not by poster – as others have apparently been willing to do for me in the past. And now, four years after making my first post, I'm a Topic Leader. It's a bizarre feeling to be trusted by a community to such an extent after such a farcical entrance, but there it is. 
I wasn't around since the beginning, like Deck and so many others, and I certainly didn't come into the community in a blaze of glory, like Rising_Dusk. I haven't even really changed much in those four years, either – I'm not much more emotionally mature, or even intelligent, than I was back then, but it's a community that takes time to get used to. Once you're here, it's rather remarkable, in a way. I'm still filling the role of bitter cynic – a role it was rather painful to drop for the entirety of CAP 4 – but for now, I'll just say, pretty much everyone in CAP is capable of the same, even if you didn't have the most encouraging start. And one more thing – this is a two-way street. I encourage moderators to not look down on those they don't like as posters, or ever harbour any doubt that they could at some point be star contributors (yes I know sometimes it's hard but let's just speak of this in hypothetical terms please). Above all, don't be afraid to give second chances, even to those who have screwed up massively. I needed third, fourth, and fifth chances before I got the hang of this. And no, please don't think I'm lecturing you in any way. There's a marvellous culture in CAP that should be preserved, and I implore you to do so in your own way. But this is something that everybody, from mods to casual voters, should be bearing in mind. So, then, we've had our little trainwreck of an analogy up above. Doubtless most of you could find infinitely better examples, but hey, I'm me and my self is quite happy where it is. The rest isn't particularly important… tried to make Cyclohm a Bug-type too, got another Art thread infraction, and elsewhere, I got myself a nice collection of badges. As it turns out, my cunning plan to create a username with "Bug Maniac" in it, coupled with writing immensely long analyses of nothing but Bug-types, meant I got a Pre-Contributor pretty early on in my C&C career. Huzzah for ladybirds. I believe one person called me "the best gimmick user ever", which is a double-edged compliment if ever I've heard one. Rather like being called the least terrible option for Topic Leader. But who am I to judge? Anyway, we're straying slightly from the point here. We've been through my base history, and now we move on to the main event. Fasten those safety belts. More than two months ago, I posted a thread under the approval of Birkal for us to start CAP 4 immediately – and by an almost unanimous vote, the motion passed, and our grand journey commenced. It had been quite a while since Mollux ended, and very few of the open PRC topics were even close to being completed. I myself viewed it as something of an exercise in futility to try to attempt to get them moving, especially when school appeared to have started for most people, and even during the summer, there had been few posts on any subject, despite the best efforts of myself and a good number of others to get them rolling. Many of us held the view that the solution to the problem was to pull CAP back on its feet – we had just lost a number of our important, old contributors, such as Rising_Dusk and zarator, and our active numbers were pretty thin on the ground. As such, possibly a new CAP would bring in the activity and members we needed to get CAP back into the shape it needed to be in for progress to start. While I didn't mention it at the time (or perhaps I did, but unintentionally) I had personal reasons for wanting the start of the Project as soon as possible. 
I was less than a month away from moving on up in the world, and didn't really have any idea how much of a toll university life would take on my time. As such, I pretty much went into this viewing it as my last chance, really, to take on a Topic Leader role, and much more importantly, to do it well. Even before the Topic Leader vote commenced, I had only a few weeks left of unlimited leisure time – time I found myself quite honestly wanting to put to this use. To be perfectly honest, it is unlikely that I would never again have been able to apply for Topic Leader, but at that point in time I was feeling far more confident in my abilities, and far more uncertain about the future, than I do now. I'm certain you remember the results of that poll quite well (well you would, wouldn't you, since I'm here now). Though for myself, one with a piquant dislike of the dramatic (and yes, that's an oxymoron, don't bother asking), the poll was something of a nightmare. Running into the poll full of hope; resignation at the realisation that pretty much nobody on the PRC was voting for me, and that jas had the competition pretty much sewn up; mild surprise when my vote count started to climb up; trepidation as capefeather made a last-minute rally to draw level; breaking hope again as four last-minute votes pulled me clear of jas; and finally, blinding rage at Birkal's obvious intense pleasure at withholding the results from us over IRC to ramp up the tension (seriously that really wasn't kind). Well, blinding rage is a bit strong, since I tend not to ever get angry any longer, if I can help it. Let's say, "Mildly annoyed". And right at the end, the sudden realisation that people were expecting me to give a 10,000-word victory speech. Not really having expected to win, I hadn't thought of preparing such a thing beforehand. I had won despite not receiving the support of pretty much any moderator or member of the PRC – pretty much the only demographic that I thought I would be receiving votes from – and apparently, I'm a lot more liked by the community at large than I had known, which I must say is very gratifying. But not to worry – fortunately, fluff and flavour come naturally, and I quickly constructed one of the rather more long-winded and flowery ways of promising to not be long-winded and flowery when writing posts, as jas61292 was to later put it (I think. Correct me if I'm wrong). And then, I make four additional promises, and I'm pretty sure I haven't kept any of them. Well, all right, they were qualified with a "maybe", and none of them were deliberate, necessarily (see there I go doing it again). But anyway, let us move on. Suddenly, Concept Submissions were upon us, and our great CAP journey had begun – while I was still trying to get used to the idea that, in fact, I had all the power of the CAP Project at my disposal, for a mere two months. That and I could now see every deleted post in the forum, which made the whole layout a lot less pleasant. Rather tingly. This was, of course, something of a revelation, and I launched myself into the task with all the vim that I could muster. Probably. I remember being more tired than anything else at the time, but whatever makes the story more interesting. Anyway… I went into my first post with three key thoughts in mind. 
Firstly, to remember my "election pledge", if you will (kind of bizarre, looking back on it), to force the community to think about decisions that they might otherwise avoid, in terms of deliberately steering the CAP in a difficult, but not divisive, direction (look at me, being all rhetorical) – and to try to push the boundaries of what we were considering acceptable, and to challenge ourselves rather more than we had been doing – to create a Pokemon, rather than to build a concept, if you will. Or rather, to be engineers rather than architects. Or something like that.

Secondly, to try to avoid taking the whole thing too seriously – now, you may say at this point "but you never did do that, did you?" Well, yes, I grant that I brought out the iron hand rather more often than I would have liked to. But that comes later. My initial desire, for what it's worth, was to have a CAP Project where I was more approachable as a Topic Leader than perhaps others were before, where I was inviting people to question things that I thought, ask questions of me, discuss what we were trying to achieve. Lord only knows if I succeeded, because I don't have the faintest clue. If I did, great, if I didn't, then you should know that, yes, I was trying my absolute utmost to answer everybody frankly and invite counterpoints while at the same time keeping the discussion on the straight and narrow and not branching off into too many avenues.

Finally, and you may say that this was remiss of me, but I went into battle with a plan in my head. I'd be lying if I said I went into CAP 4 with an open mind, because I had a very clear idea at the start of what I wanted to get out of the project, but then, that's just how I operate. I can't really work in any other way. Whatever the concept turned out to be, I wanted to see it look totally different and more than anything else to exemplify things that we usually shied away from on CAP – big discrepancies between statistics, crippling flaws offset by big advantages, that sort of thing. The sort of thing that we never really do on CAP – or at least, any longer. We ran out of kitchen sinks a while ago.

So, you can see why "Living on the Edge" immediately appealed to me. It was challenging, it was big, and it had the potential to do everything I asked of it. Perfect Nemesis was another that did the same – it specified a crippling weakness that required offsetting. Now, I should probably point out here, in case anybody gets the wrong idea, that I did not waddle straight into the Concept Submissions thinking "we must do things this way". If I had thought that way, and no other way, I would have slated Zystral's concept, "Breaking Point", without question. As it stood, I could see how it would end up down the line – shouting, bad tempers, my having to impose a definition on people who disagreed with it, and above all, not necessarily telling us any more than we knew already, for lack of an end point beyond "big huge ugly evil thing".

I went through every single concept that was submitted (well, all those that weren't leapt on by the moderators for whatever rule infringement it might have been), analysed them, thought about them, spent entire days agonising over them, and posted what I thought. Despite what I had said, I was almost disappointed. There wasn't really anything grand or imposing or fantastically new and imaginative – there was more potential than you could shake a stick at, but alas, it was all mostly coming from concepts submitted in ages past, or slightly remade.
I don't think that there was any concept that really enraptured me as such, though there were plenty that many people liked that I felt I had to be rid of. The only important one of these was Breaking Point, and it wasn't a decision I made lightly, if you'll excuse the cliché. I was hoping I wouldn't have to instigate the wrath of the community so early on, since, well, I knew that given the way I thought about things, I was bound to start encountering quite a bit of resistance before long, when I started not slating things I didn't like. Possibly this is what caused me to slate Weak Armour. But we can get to that later. After an agonising time of whittling them down, I had a shortlist of fifteen, seven of which I actually liked (rather than their just being "good concepts"). And I spent at least a day thinking carefully about what to put on the slate, when Birkal kindly reminded me that I only had a maximum slate of seven options anyway. Joy unbounded – I could make a full slate of things I actually liked! And rather predictably, Birkal's and capefeather's ended up in the final poll together, the two that I had wanted since the very beginning. Joy unbounded, my dears. Joy unbounded. Now, some of you have occasionally intimated that I am something of a Machiavellian figure, and I can assure you, that though I may lurk in the shadows saying nothing and act incredibly bizarrely at times and eat rats with my custard creams, I am no malevolent puppetmaster. The fact that I got my ladybird for writing a small number of analyses about Bug-types with a name like bugmaniacbob is pure coincidence. The fact that I managed to both indirectly instigate and become Topic Leader of CAP 4 is pure coincidence. And of course, the fact that CAP 4 turned out to be a Bug-type is pure… OK, I admit, I did really want this to be a Bug-type. But only once the two final concepts were chosen – and at that point, I began to think to myself. This isn't exactly strange – I did want to get a handle on what I eventually expected to produce before we got to the moment of having-to-post-thoughts. For Perfect Nemesis? We wanted a Pokemon with a unique type combination, which resisted a combination of moves that no other Pokemon resisted, and then give our CAP those moves only – so then, what better than Toxicroak, with its Poison/Fighting typing and Dry Skin? More to the point, what could it beat? Why, a Water/Bug CAP 4, with Grass moves for Jellicent and Rock moves for Dragonite, of course! For Risky Business? Well, we could cross that bridge when we came to it. Fortunately for the community, I think, the one I had invested a lot less time into thinking about – and the one I had supported throughout – turned out to be the victor by a narrow margin. Now, I will admit that very possibly I had isolated Perfect Nemesis and Risky Business as being those most likely to produce a Bug-type quite early on in the Concept Submissions. But as you can see from the slate, I didn't labour particularly long on the point. Once we'd got them, I started to get more attached to the idea. But more on that later. By this point, we'd arrived at the jolly old Concept Assessment, possibly my least favourite part of the process. And no, I don't know why that is. Possibly because it's always a tad directionless despite our best efforts – or our best acronyms. Or maybe it's because I was dreading having to impose a definition of Risk on people when I knew that the large majority of people would probably ignore it and go with their own regardless. 
This is perhaps something to be thankful for – it would be boring if we all agreed about everything. Oh wait, there was a fourth thing as well – I needed a Topic Leader footnote gimmick. And as ever, I resolved to put more effort into it than anybody had done before. At least it wasn't particularly hard to think of something to put – droning on about arthropods is quick, simple, thoroughly interesting and looks like it took more effort than it did. Even though in some cases, as it turns out, the opposite was the case. It took way more effort than it looked like it did. So, I thought about the problem, formulated some questions concerning how I'd go about thinking about Risk and Reward, in the overarching sense, and had a shot at answering them myself. By the time I was finished, I had a good framework, a solid idea of where to take the concept, and most importantly of all, a big fat 2000-word OP. And then my computer crashed. So I shrugged and typed it all up again. One of the great things about C&C on Smogon is that eventually you cease to get annoyed when you lose about a day's worth of work in an instant through no fault of your own – it happens frequently enough. It's probably done more for my anger management than all my logic and common sense combined (ok not really but it would be so nice if it had done). Fortunately, my computer is now on life support and has only gone blue-screen on me about three times since I bought a load of external devices to stop it dying. Up until somewhere around the Name Polls, it was killing itself every time I watched so much as a youtube video. Anyway, Concept Assessment. Rather predictably, everyone answered different questions at once and nobody seemed able to agree on one anything. Which gave me a goodish number of opinions and viewpoints to sift through, but was rather a headache from an administrative perspective. But right up until Arghonaut cropped up for no adequately explained reason, the level of thought being put into posts was staggering. This is one of those parts of the process where you do really appreciate having that many opinions and so many different takes on a very diverse idea, all in the same place. In fact, it was pretty much then that I realised that I couldn't possibly respond to absolutely everything, and as such, I just hoped that people would take the initiative to see that their line of enquiry was either being followed or put aside. In any case, I found the lines of thought that I agreed most strongly with, and commented on those I did not agree with, and then produced more questions following on from there. In my mind, I was slowly building up a tree of how we would proceed under every conceivable eventuality. Or something like that. And by the end of it all, I had a very good idea indeed of what I wanted to achieve from the CAP, if not exactly how we would go about obtaining it. If that "Best Discussion" thread from PRC ever materialises, I'd certainly not be sorry to see this one winning – though there was far less debate than in other threads, it was structured, everybody seemed to have their own thoughts, and it was extremely useful for me personally. 
Ultimately a lot of my thoughts leaving the thread were expressed by the midnight IRC chat log I posted there, but here's the bit I decided to push for throughout all further discussions:

<Pwnemon> if we can fuse intensive team support risk with prediction once in risk
<Pwnemon> i will consider this project a 10/10

So, the great and mighty CAP train chugged ever onwards, and we moved on to typing, adamant in our decision to create an offensive/supporting Pokemon that was a big investment to pick and a big risk to use – well, I was, anyway. Not sure how many people actually got that from the Concept Assessment thread, but to me, it was all perfectly clear, and pretty much everything had gone just as I had planned up until that point.

Anyway, as I entered typing discussions I summoned all the different thoughts about ideal typings that I had dreamt up over the course of Concept Assessment and flung them all airily into one big vague "here's what I'm looking for". In hindsight, possibly the whole system of giving my opinion and inviting people to find things that correspond with it was flawed, but it seemed to work out all right in this case. Somewhat annoyingly (in one sense), my two favourite typings came out almost immediately – Bug/Psychic and Bug/Dragon. Rather like the Concept Submissions, I felt certain that these two were the winners, or rather, the best of the best. I had some very particular requirements, not least of which was something akin to a unique typing – I had a feeling that the way the CAP would be played would require some form of niche to make it work, and as such, the investigation of the way that the typing allowed comparisons to other, similar Pokemon was to be encouraged.

On the other hand, there were a lot of typings that were "all right", but didn't get slated for whatever reason. Fire/Electric? Rotom-H exists, and is a very risky Pokemon, due to reliance on Overheat for Fire-type STAB and other such things. Fire/Psychic? Victini exists, and has STAB V-create. So what is there out there? Bug/Psychic pretty much ticked all the boxes, and Bug/Dragon was in a class of its own as far as the "inferior powerhouse" idea was concerned. On the one hand Volcarona, on the other hand Kyurem. But I honestly wanted to find some other typings of a similar standard. I really did. Electric/Psychic was definitely something I wouldn't have been sorry to see on CAP 4, but it's a show of my frustration at my inability to find a similarly good typing that I ended up slating Grass/Flying, a typing which I saw had merit but didn't truly believe could give us the best possible CAP 4, solely for lack of any better options.

Fortunately, it didn't come down to that. Bug/Psychic and Bug/Dragon had their standoff, and ultimately, the more recognisably risky typing was the one that won out. Threats Discussion came afterward, and was mostly a consensus – plenty of checks, but no true counters. This was a large part of my master plan to create a versatile yet risky attacker, who could also support its team, without the opponent's knowledge that it could do so. Obviously, the details of that Pokemon were yet to be hammered out.

Thus, we came on to the moderate trainwrecks that were the Ability Discussions. Now, I wasn't too fond of how they had gone just after they were finished, but on reconsideration, I'd say that, like pretty much the entirety of CAP 4, I'm very happy with how they resulted indeed.
I went into the Primary Ability Discussion wanting some sort of "triality" (yeah I made the word up, but somebody must have used it beforehand, right?) between three different abilities, each competitively viable. This is largely why I went with abilities first – though I didn't say so at the time – because an overly impressive stat spread could quite easily have affected people's perceptions of how good certain ability combinations were. If I were to go through the whole thing again, I would probably have combined the three into one single "suggest a combination of three abilities" discussion, though I'm not even sure if such a thing would have been, or even is now, within the Topic Leader's power to decide. In any case, the results weren't exactly that bad, though I didn't feel that way at the time.

Now, we had four excellent ability suggestions in No Guard, Illusion, Simple, and Moxie, all of which affected the CAP in different ways and doubtless would shape the CAP from the outset. And then we had Weak Armour, that horrible little niche ability that I thought few cared about. It was so very obvious to me – as someone who values obtuse or counter-intuitive ways of solving problems, as well as the simplest ones, Weak Armour seemed to fall right in the middle, and to me, was symptomatic of our falling into the trap of "taking the easy way out", in much the same way as custom abilities are seen. In fact, I'm willing to bet that Weak Armour's introduction led to the veritable plethora of custom abilities being proposed in the Secondary Ability Discussion. As it stood, though, I felt bad for having slated it. It felt like I had caved in to popular demand as opposed to standing up for the direction I felt I should have been going in – and I'm sure some of you will remember that my conversations on IRC at this time reflected this greatly. Yes, I knew Weak Armour would win if I slated it, and I had done so anyway, which while I suppose creditable in the sense that I had made a decision to give the community that decision, was nevertheless vexing. Indeed, possibly Weak Armour was a blessing in disguise, as well as a curse. Had a more defining ability been chosen, it may well have pressured voters to vote against further abilities, though I am of the opinion that Weak Armour ought by rights to be a pretty defining ability – and a large number of people, judging by the Secondary Ability Discussion, felt the same way. In any case, it was the first vote that had gone against my ideal, and as such, at the time I was attempting to find a way to pick up the pieces again and get back on track.

So, on came the Secondary Abilities. Now, Primary Abilities had had its fair share of horrifically bad abilities and people not listening when I noted that their preferred abilities went against my direction for the CAP, such as the unfortunate cases of Hustle, Flare Boost, and Motor Drive, amongst others. But this all paled in comparison to the ruckus caused by my refusal to even consider Analytic, and it seemed to take three huge posts expounding upon the same points before people actually began to address my arguments – or simply stop posting. Well, all right, Flare Boost came close to causing that much tension, but Analytic nearly had me at the end of my tether – I was so tired that I couldn't stand to write any more to defend a viewpoint that nobody was analysing and it didn't honestly matter if anybody questioned.
But I stuck with it, I did it, and I'm glad I did, because some people seemed to get the message by the end. In any case, my slate rather reflected the way I was thinking at the time, in that I was only slating those options that I myself very much wanted. Possibly my reaction to Analytic was a rebound to what happened with Weak Armour. In any case, all I slated were abilities that I was sure would pair up well with Weak Armour, while at the same time giving a very real possibility of a third ability, and thus "triality". In the end, Illusion came out victorious in a first-round supermajority, which, while I had not supported it myself, was actually a rather comforting occurrence, for the very simple reason that, while there were many who supported it, there were far more out there who actively hated it.

And so, we came on to the Tertiary Ability thread. Now, as I said before, I wanted a third ability, and it was firmly within my grasp, so by all rights I should have stamped my foot angrily and said "right, this is what we're going to do". But, I guess I didn't. Instead I gave a rather bland OP, posted some pictures of bunny rabbits and rather adorable spiders, and left people to get on with it. Now, this was remiss of me, as ever, and rather predictably, the "No Competitive Ability" crowd leapt out in full force. As with Hustle, Flare Boost, and a lot of other things that I had expressly forbidden beforehand. I'm not going to lie; it was very tiresome, and quite annoying. So, I prepared for one last push for No Guard by slating it alongside No Competitive Ability – in short, I was prepared to stake my all. That's an overly dramatic way of putting it, but by that point I had almost ceased to care what happened in that stage. I had tried to make it light and fluffy, but it was a rather difficult thing to do. Fortunately, I had faith in No Guard – I do have a very great love of bizarre ways of doing things, which has manifested itself in multiple places in this CAP – and more importantly, I had faith in the effect of the poll as I had set it up. There were those who wanted a third ability at any cost, those who simply had a vendetta against Illusion, and those who were probably trying to troll the concept – all of whom combined to bring victory. Some may say that's a rather immoral way of approaching the final poll – but really, that wasn't the thought process that was going through my head. I had considered that slating No Guard alongside something like Mummy would be far more likely to bring the desired result, but in the end, I was purely putting the options to the CAP community, and we got what we voted for.

I'm fairly certain that everybody except for me came out unhappy with the abilities. People who liked Weak Armour hated No Guard. People who liked No Guard hated Weak Armour. And pretty much everyone hated Illusion. I seemed to be the only one who actually liked how the three abilities interacted and could possibly play together – so much so that I actually began to grow rather fond of Weak Armour, which had once been the representation of the worst mistake a Topic Leader could make. Possibly I was only happy because I had come out of a difficult situation with my vision for CAP 4 intact. Anyway, it mattered little. We were on to the stats, and on the one hand, we had three hugely good abilities, while on the other, we had a relatively poor typing.
Now, I think that this is an appropriate time to bring up yet another of Doug's quotes, and by jingo, if the guy isn't psychic:

DougJustDoug said:
"I still support Risky Business too, because I think the self-balancing nature of the concept would be fun to wrestle with as a group. The "let's make this amazing" crowd will have a voice, and the "let's nerf this" crowd will have a voice too. In the end, we'll have to do both to succeed. I don't know if any CAP project has ever given such complete legitimacy to BOTH factions in the same project. That game of tug-of-war will be epic. I suspect I would switch sides frequently!"

A pretty perfect prediction of exactly how the entirety of CAP 4 turned out. Well, all right, it wasn't entirely Nostradamus but it serves to illustrate the outline of how the stats discussion went, or at least how I recall it going. An initial, very conservative set of stat limits rather snowballed when it became rather clear that a lot of people wanted the CAP to excel or be terrible in rather different areas, and as such, it was a bit of a nightmare to look at as a Topic Leader. Some people wanted low Speed and others wanted high Speed, which skewed the attacking BSRs to no end. Now, you may well say, why couldn't I just have put very lax limits in place, and left it there, as arguably I eventually did. Well, it seemed rather unbearable that I should take absolutely no lead in telling people what I wanted from the CAP, which was a problem when I could see the merits of either path and had indeed incorporated both approaches into my master plan as different forks. Maybe I was still feeling the after-effects of the Abilities, and was unwilling to give the same leeway twice. But regardless, eventually I did do what I felt would generate the best possible submissions and be fairest to the community – this not being the last time I would attempt to take both sides as far as the war between "nerf it" and "make it amazing" crowds were concerned. Perhaps more so than I ought to have done – I still slightly regret setting the BSR limit slightly too low, though admittedly it ended up being inconsequential.

Yes, as far as this CAP goes, I will admit, I did have a "wish list" – that is to say, things that I wanted this CAP to be. This manifested itself in different ways at different stages (and I won't pretend it didn't), though it never really took over my competitive concerns. What I really wanted was to make a CAP that was unique, that said something about myself, and even more selfishly, was a symbolic statement. In terms of uniqueness, I wanted something that pushed all the boundaries of what was considered acceptable or exceptional. In terms of personification, I wanted something that reflected who I was as a person and the effort I had put into leading it – something that if not entirely the same as my vision, was at least something I could be proud of, and look back on without regrets – and something that I could see representing myself. Mostly this only manifested itself in terms of "grand, imposing, powerful artwork" alongside "extraordinarily pretentious name" and "dex entry that isn'
It was announced that the official name of the SQL Server "Denali" product is SQL Server 2012, and that it will be released in the first half of 2012. There will be a Release Candidate (RC0) that should be provided by the end of the year, which will be feature complete. Then it will be about fixing bugs prior to the Release to Manufacturing (RTM). Check out the new SQL Server 2012 Developer Training Kit. The new Twitter hashtag is #SQL2012.

"Project Crescent" has been officially named "Power View" (note the space in the name). PowerPivot provides the self-service data modeling capabilities to integrate your data to create a BI Semantic Model (BISM), and Power View provides the highly interactive data exploration tool. One of the new features that has been added since CTP3 is the ability to have multiple views of the data within an existing rdlx file (similar to the briefing book concept that ProClarity has, where you would create and save multiple analytical views). Also mentioned is that an "Export to PowerPoint" feature will make it into RTM. There will also be a Power View on Windows Mobile. It is going to ship in the latter half of 2012, months after SQL Server 2012 itself. The new Twitter hashtag is #PowerView.

"Juneau" is going to be released as SQL Server Data Tools. The new Twitter hashtag is #SQLDataTools. It was also hinted during the keynote that more BI tools will be introduced to SQL Azure in 2012.

"Big Data" was also a much-talked-about item, and Microsoft will be supporting Hadoop as a part of the data platform. This means that you'll be able to run Hadoop on Microsoft Windows as well as on the Azure platform. As of last week you can download the Apache Hadoop connector for SQL Server and the PDW platform so that you can connect SQL Server to Hadoop. Future releases include the Hadoop-based distribution, as well as an ODBC driver and add-in for Excel and Office to make it easier for people to get Hadoop data into the Office platforms (so you will be able to get data from Hadoop directly into PowerPivot and SSAS Tabular without having to stage it in a relational database). Microsoft will have a CTP version of their Hadoop platform available on SQL Azure before the end of 2011, an Apache Hadoop-based distribution for Windows Server and Windows Azure (by end of year), and an Apache Hive ODBC driver and add-in for Excel (November release).

Microsoft also announced "Data Explorer", which allows users to do self-service BI without realizing that they are doing self-service BI. This allows users to easily see and read the data, most importantly taking the data and turning it into information that they can use to drive the company quickly and easily. Data Explorer will plug into Microsoft's Windows Azure Marketplace and allow developers to create richer data sets that can be published and made available for free or for pay. This is a web-based data integration tool for working with data from a number of sources such as SQL Azure, Excel, and Access, and it also generates recommendations of data from the Azure DataMarket that you might be interested in. It allows you to mash up data from various different sources and then publish the result as an OData feed. Data Explorer connects to a SQL Azure database and discovers the data values. If a second data source is added, such as an Excel spreadsheet, the "Mashup" option is enabled. Mashups allow the user to overlay the data from both sources. This functions as a lookup. Next, the Azure Data Marketplace contains data recommended to the user.

This data can also be added to the Mashup. Applying OData interfaces, Mashups allow users to "join" data from disparate sources – in the cloud. Microsoft is investing heavily in the cloud, and until now the message has been that everything will be in the cloud within the next 2-5 years. Now they've backed off quite a bit and are saying that the cloud is simply going to enhance your current skills and give you another avenue when it comes to deciding where your data is going to go. You can view the PASS Summit Keynote presentations here.
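Since a Data Explorer mashup is ultimately exposed as an OData feed, it can be read by any HTTP client, not just Excel or PowerPivot. As a rough, hedged sketch only, here is what pulling a few rows from such a feed could look like in plain Java; the feed URL and entity set name below are invented placeholders rather than a real published mashup, and a production consumer would more likely use a dedicated OData client library and proper authentication.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ODataFeedExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL standing in for a mashup published as an OData feed.
        // $top and $format are standard OData query options.
        String feedUrl = "https://example.org/odata/SalesMashup?$top=10&$format=json";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(feedUrl))
                .header("Accept", "application/json")
                .GET()
                .build();

        // An OData feed is just HTTP plus a well-defined payload, which is why
        // Excel, PowerPivot, and custom code can all read the same published mashup.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("HTTP status: " + response.statusCode());
        System.out.println(response.body()); // JSON describing the rows of the mashup
    }
}
```

The same feed URL is what you would hand to PowerPivot's data feed import, which is the point of publishing the mashup in that format: one source, many consumers.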
Microsoft: Windows 7 RC free for everyone

Try before you buy, from 5th May

Update: Check out our in-depth look at How to get your copy of the Windows 7 RC today.

Microsoft has shown its confidence in Windows 7 by announcing that the Release Candidate (RC) will be made available to the general public for a year from 5 May.

The Windows 7 beta has been a huge success for Microsoft, which has been buoyed by the positive feedback to one of its most critical ever releases. Now, people will be able to effectively trial the new OS for a year before deciding if they want to buy the retail version, with Microsoft confirming that the RC will be readily and freely available to all until June 2010.

A big deal for Microsoft

"It's a big deal for us," Microsoft's Windows OEM Product Manager Laurence Painell told TechRadar. "Obviously, we are releasing what we feel could be the final version - what we will put out to manufacturers and even wider availability when we release the product to consumers."

"The release candidate is available for everyone. From 30 April it will be available to our IT professionals through MSDN and TechNet, we let them get it in advance.

"Then it will go up on windows.com/download for everybody. There is no limit to the availability and it will be available on 5 May. It will run until 5 June 2010."

Beyond the commercial launch

The lengthy RC availability means that people will be able to try out a full version of Windows 7 well beyond its production version release date, but Microsoft's launch has not been delayed, insists Painell. "Our official line is that [the production version] will be available no later than January 2010 and we will stick to that, but people will still be able to use the release candidate for nothing until June 2010."

Microsoft's confidence in Windows 7 is such that the prospect of people trying for nothing and, potentially, deciding against the OS, does not faze the company in the slightest. "There has been a great deal of feedback and a huge amount of it positive through the beta program," adds Painell. "Obviously the beta program was the widest that we've ever run and the overwhelming response has been positive.

"We're obviously very excited internally about the quality of the product and that's been one of the overwhelming things internally."

Driver support lesson learned

One of the major failings of Windows 7's predecessor Windows Vista was a failure to support thousands of third-party devices when it arrived back in January 2007. Painell pointed out that Windows 7 should not suffer from the same kind of problems, with the 'eco system' of third-party manufacturers and developers all deeply involved in making sure that consumers can quickly get their attached devices up and running with the correct drivers.

"From an ecosystem perspective, which is obviously imperative, we've had about 32,000 participants from 10,000 different partners – which has been split 50/50 between hardware and software vendors.

"That's obviously a big part of what we need to do to make sure we have a successful launch. We're making sure that the companies that are providing software and hardware that supports Windows are ready for it as well.

"I think that 2.8 million devices have been reported as compatible during the beta program but 75 per cent of those are available in the box for the RC and 90 per cent are available either in the box, from a Microsoft update or through links through to different partner vendor websites."

You can download Windows 7 Release Candidate from 5 May from http://www.microsoft.com/downloads/. Normal provisos are in place about needing a clean install and to make sure all data is backed up on the PC you are putting the OS on to.
Data access object

This article is about the software design pattern. For the Microsoft library, see Jet Data Access Objects.

In computer software, a data access object (DAO) is an object that provides an abstract interface to some type of database or other persistence mechanism. By mapping application calls to the persistence layer, the DAO provides some specific data operations without exposing details of the database. This isolation supports the Single responsibility principle. It separates what data accesses the application needs, in terms of domain-specific objects and data types (the public interface of the DAO), from how these needs can be satisfied with a specific DBMS, database schema, etc. (the implementation of the DAO). Although this design pattern is applicable to most programming languages, most types of software with persistence needs, and most types of databases, it is traditionally associated with Java EE applications and with relational databases (accessed via the JDBC API), because of its origin in Sun Microsystems' best practice guidelines[1] "Core J2EE Patterns" for that platform.

Advantages

The advantage of using data access objects is the relatively simple and rigorous separation between two important parts of an application that can but should not know anything of each other, and which can be expected to evolve frequently and independently. Changing business logic can rely on the same DAO interface, while changes to persistence logic do not affect DAO clients as long as the interface remains correctly implemented. All details of storage are hidden from the rest of the application (see Information hiding). Thus, possible changes to the persistence mechanism can be implemented by just modifying one DAO implementation while the rest of the application isn't affected. DAOs act as an intermediary between the application and the database. They move data back and forth between objects and database records. Unit testing the code is facilitated by substituting the DAO with a test double in the test, thereby making the tests non-dependent on the persistence layer.

In the specific context of the Java programming language, Data Access Objects as a design concept can be implemented in a number of ways. This can range from a fairly simple interface that separates the data access parts from the application logic, to frameworks and commercial products. DAO coding paradigms can require some skill. Use of Java persistence technologies and JDO ensures to some extent that the design pattern is implemented. Technologies like Enterprise JavaBeans come built into application servers and can be used in applications that use a Java EE application server. Commercial products like TopLink are available based on object-relational mapping (ORM). Popular open source ORM products include Doctrine, Hibernate, iBATIS and Apache OpenJPA.

Disadvantages

Potential disadvantages of using DAO include
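To make the pattern concrete, here is a minimal sketch of the kind of DAO described above: a plain domain object, a DAO interface that exposes only domain-level operations, and one implementation backed by plain JDBC. The Customer class, the customers table, and the column names are invented for illustration and are not tied to any particular framework; the point is that business code depends only on the interface, so the JDBC class could be swapped for a JPA/Hibernate-based one, or for an in-memory test double in unit tests, without touching the callers.

```java
import java.sql.*;
import java.util.*;

// Plain domain object with no persistence logic of its own.
class Customer {
    private final int id;
    private final String name;

    Customer(int id, String name) {
        this.id = id;
        this.name = name;
    }

    int getId() { return id; }
    String getName() { return name; }
}

// The DAO interface: callers see only domain-level operations,
// never connections, SQL statements, or result sets.
interface CustomerDao {
    Optional<Customer> findById(int id);
    List<Customer> findAll();
    void insert(Customer customer);
    void delete(int id);
}

// One possible implementation, backed by plain JDBC.
// An ORM-based class could replace it behind the same interface.
class JdbcCustomerDao implements CustomerDao {
    private final Connection connection; // assumed to be supplied and managed by the caller

    JdbcCustomerDao(Connection connection) {
        this.connection = connection;
    }

    @Override
    public Optional<Customer> findById(int id) {
        String sql = "SELECT id, name FROM customers WHERE id = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next()
                        ? Optional.of(new Customer(rs.getInt("id"), rs.getString("name")))
                        : Optional.empty();
            }
        } catch (SQLException e) {
            throw new RuntimeException("findById failed", e);
        }
    }

    @Override
    public List<Customer> findAll() {
        List<Customer> result = new ArrayList<>();
        try (Statement st = connection.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM customers")) {
            while (rs.next()) {
                result.add(new Customer(rs.getInt("id"), rs.getString("name")));
            }
        } catch (SQLException e) {
            throw new RuntimeException("findAll failed", e);
        }
        return result;
    }

    @Override
    public void insert(Customer customer) {
        String sql = "INSERT INTO customers (id, name) VALUES (?, ?)";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setInt(1, customer.getId());
            ps.setString(2, customer.getName());
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException("insert failed", e);
        }
    }

    @Override
    public void delete(int id) {
        try (PreparedStatement ps =
                     connection.prepareStatement("DELETE FROM customers WHERE id = ?")) {
            ps.setInt(1, id);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException("delete failed", e);
        }
    }
}
```

Because the application is written against CustomerDao rather than JdbcCustomerDao, the unit-testing advantage mentioned above follows directly: a test can hand the business logic a trivial map-backed implementation of the same interface and never open a database connection.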
Ted Schadler
Vice President, Principal Analyst serving Application Development & Delivery Professionals

Ted serves Application Development & Delivery Professionals. He has 27 years of experience in the technology industry, focusing on the effects of disruptive technologies on people and on businesses. His current research agenda analyzes the expanding role of content and content delivery in a mobile-first, digital-always world, including the effects on web content management and digital experience delivery platforms.

Ted is the coauthor of The Mobile Mind Shift: Engineer Your Business to Win in the Mobile Moment (Groundswell Press, June 2014). Your customers now turn to their smartphones for everything. What's tomorrow's weather? Is the flight on time? Where's the nearest store, and is this product cheaper there? Whatever the question, the answer is on the phone. This Pavlovian response is the mobile mind shift — the expectation that I can get what I want, anytime, in my immediate context. Your new battleground for customers is this mobile moment — the instant in which your customer is seeking an answer. If you're there for them, they'll love you; if you're not, you'll lose their business. Both entrepreneurial companies like Dropbox and huge corporations like Nestlé are winning in that mobile moment. Are you?

Ted is also the coauthor of Empowered: Unleash Your Employees, Energize Your Customers, and Transform Your Business (Harvard Business Review Press, September 2010). Social, mobile, video, and cloud Internet services give consumers and business customers more information power than ever before. To win customer trust, companies must empower their employees to directly engage with customers using these same technologies.

Previous Work Experience

Previously, Ted analyzed the consumerization of IT and its impact on a mobile-first workforce, the future of file services in a mobile-first, cloud-enabled world, mobile collaboration tools, workforce technology adoption and use, and the rise of cognitive computing. In 2009, Ted launched Forrester's Workforce Technology Assessment, the industry's first benchmark survey of workforce technology adoption. This quantitative approach helps professionals and the teams they work with have a fact-based conversation about employees' technology adoption. Prior to joining Forrester in April 1997, Ted was a cofounder of Phios, an MIT spinoff. Before that, Ted worked for eight years as CTO and director of engineering for a software company serving the healthcare industry. Early in his career, Ted was a singer and bass player for Crash Davenport, a successful Maryland-based rock-and-roll band.

Education

Ted has a master's degree in management from the MIT Sloan School of Management. He also holds an M.S. in computer science from the University of Maryland and a B.A. with honors in physics from Swarthmore College.

Research Coverage

Adobe Systems, Apple, Cisco Systems, Citrix Systems, Collaboration Platforms, Dell, Enterprise Collaboration, Google, Hewlett-Packard (HP), IBM, Information Management
Our Authors

Do you want to write for Packt? The Packt Author Website is your resource for discovering what it is like to write for Packt, learning about the writing opportunities currently available, and getting in touch with a Packt editor.

Aurobindo Sarkar
Aurobindo Sarkar is actively working with several start-ups in the role of CTO/technical director. With a career spanning more than 22 years, h...

Sekhar Reddy
Sekhar Reddy is a technology generalist. He has deep expertise in Windows, Unix, Linux OS, and programming languages, such as Java, C#, and Py...

Ahmed Aboulnaga
Ahmed Aboulnaga is a Technical Director at Raastech, a complete lifecycle systems integrator headquartered at Virginia, USA. His professional f...

Harold Dost
Harold Dost III is a Principal Consultant at Raastech who has experience in architecting and implementing solutions that leverage Oracle Fusion...

Arun Pareek
Arun Pareek is an IASA-certified software architect and has been actively working as an SOA and BPM practitioner. Over the past 8 years, he has...

Jos Dirksen
Jos Dirksen has worked as a software developer and architect for more than a decade. He has a lot of experience in a large variety of technolog...

Justin Bozonier
Justin Bozonier is a data scientist living in Chicago. He is currently a Senior Data Scientist at GrubHub. He has led the development of their ...

Andrey Volkov
Andrey Volkov pursued his education in information systems in the banking sector. He started his career as a financial analyst in a commercial ...

Achim Vannahme
Achim Vannahme works as a senior software developer at a mobile messaging operator, where he focuses on software quality and test automation. H...

Salahaldin Juba
Salahaldin Juba has over 10 years of experience in industry and academia, with a focus on database development for large-scale and enterprise a...

Kassandra Perch
Kassandra Perch is an open web developer and supporter. She began as a frontend developer and moved to server-side with the advent of Node.js a...

Saurabh Chhajed
Saurabh Chhajed is a technologist with vast professional experience in building Enterprise applications that span across product and service in...
The War Z's Steam debacle shows "released" isn't "done" these days

Developers need to be clear in their communication with gamers.

The initial Steam description of The War Z contained a number of outright falsehoods. (Image credit: GameSpy / Steam)

Informed consumers routinely go into game purchases armed with dozens of previews, reviews, and pages of forum chatter shaping their decision. For many gamers, though, the decision of whether to buy or not is made solely on the basis of the back-of-the-box ad copy or its modern-day PC equivalent—the Steam description page. So when that page starts making promises the game itself can't keep, those buyers are going to be justifiably angry.

Such was the case this week with Hammerpoint Interactive's The War Z (not to be confused with ArmA II mod Day Z), which hit Steam on Monday and quickly became the top-grossing game on the service. That success was thanks in part to an impressive list of features listed on the Steam page, including persistent worlds of up to 400 square kilometers, private servers, "dozens of available skills," and "up to 100 players per game server." Too bad none of those things were actually in the game that thousands of people spent $15 or more to buy. These and other issues with the initial Steam release have led to widespread player outrage on forums like Reddit, NeoGAF, and Steam itself. The complaints have gotten so bad that Steam has "temporar[ily] removed the sale offering of the title until we have time to work with the developer and have confidence in a new build."

"There's no such thing as 'Release'"

In an interview with GameSpy, Hammerpoint Executive Producer Sergey Titov offered a limited apology to players angry about the Steam listing, which he says included both current features and some planned for future updates (the Steam page was updated a day or so after launch to clarify this distinction). He also suggested the vast majority of players were satisfied with the game and that only a few misinterpreted what was meant by the Steam description.

Titov defended the initial Steam listing as technically accurate. While the game allowed only 50 players per server at first, for instance, Titov noted private servers are able to host the promised 100 (those servers were later opened up to the public). And while the Steam listing implies multiple, huge worlds of up to 400 square kilometers, Titov said that the single, initial map does indeed fall in the low end of the promised "100 to 400 sq. km" range (though there's some reason to doubt that estimate as well). A couple of forum threads on the official War Z forums offered more apologies alternating with brittle defensiveness.

A screenshot from The War Z's Steam release clearly shows parts of the game still labeled as "alpha functionality." (Image credit: GameSpy)

In any case, Titov's main defense was the relativistic claim that an online game like this is never really "finished" in the way that a retail game of the past might have been. "My point is—online games are [a] living breathing GAME SERVICE," he told GameSpy. "This is not a boxed product that you buy one time. It's [an] evolving product that will have more and more features and content coming. This is what The War Z is."

After offering The War Z as an alpha release for pre-orderers in October (and as a closed beta earlier this month), the version that hit Steam on Monday is what Hammerpoint considers a "Foundation release." The developer said it's ready for sale.

But that semantic distinction still isn't noted on the Steam page, and it doesn't mean the game is complete. "There's no such thing as 'Release' for an online game," said Titov. "As far as I'm concerned The War Z is in stage when we're ready to stop calling it Beta."

This isn't a sufficient defense for lying to (or at least misleading) players about your game's current feature list, of course. But statements like these reflect a recent reality that should be familiar to most gamers: the game you buy on launch day is rarely the final version of the game. Even AAA titles are often faced with massive patches that fix issues found between the time the game was "released" and the day it was finally "completed" (see Assassin's Creed III for just one recent example). Aside from fixing glitches, post-release patching might turn the game you bought into a different game entirely through gameplay re-balancing and tweaking.

By and large, gamers are by now used to this "release first, patch later" world. But the scale of the difference between what is promised and what is initially delivered seems to be increasing. Social and mobile game developers now routinely discuss releasing games when they have a "minimum viable product," meaning a barely playable game that will be updated constantly as it attracts early adopters, often using live player data to guide the continuing design process. Massive success stories like Minecraft have made millions selling what were clearly labeled as "alpha" and "beta" versions of the game with vague promises about when the "final" release would hit. Kickstarter lets people essentially purchase pre-orders of games that often exist only as vaguely described concepts, going well beyond the more limited retail pre-orders for nearly complete physical games of the past.

The difference between "finished" and "complete"

Many players felt misled after buying Cortex Command when it was still "unfinished."

The line between a game that is still being developed and one that is ready to be sold and played by the buying public is fuzzier than ever. And this isn't the first time that fuzzy line has led to controversy on Steam. In September, Cortex Command hit the service and immediately faced loud complaints from players upset that the $20 game they had purchased was still unfinished. While the developer's own sales page tells potential buyers in bold letters that the game is a "work in progress," the Steam description meekly notes near the bottom that the game is "still being improved" and is "not in a completely polished state yet."

In light of the controversy, Cortex Command's developers issued a lengthy FAQ that gets into some pretty minute semantic territory about the game's development status. "To me, a 'finished' game is totally done and won't really be touched again by its developers, ever (save for ports, etc). 'Complete' means it is fully playable..." the FAQ reads in part. "On one hand, calling a piece of software '1.0' strongly implies completeness. On the other hand, to me it's also still only the very first revision that is fully usable," it says later.

This is the world we live in now, where developers have to make a distinction between "playable" and "complete." Making that distinction requires a new, heightened level of communication between developers and players about the precise, current state of the game being sold, a standard The War Z definitely failed to achieve.

For its part, Valve apologized for letting The War Z onto its service before fully vetting it. "From time to time a mistake can be made and one was made by prematurely issuing a copy of War Z for sale via Steam," a spokesman told Ars. "Those who purchase the game and wish to continue playing it via Steam may do so. Those who purchased the title via Steam and are unhappy with what they received may seek a refund by creating a ticket at our support site here."

That's all well and good for this situation, but it seems clear that Valve needs to update its guidelines for how "finished" a game needs to be before it can hit Steam. It should also provide rules to developers for how to describe unfinished games on their Steam pages. This is especially true as Steam opens its service up to approved Greenlight games from developers that often don't have the same proven track record or internal quality standards as major developers (some Steam users are already complaining about games being greenlit before they're sufficiently done). Perhaps an update to the Steam refund policy—offering players their money back within a short time after the first time the game is played—would alleviate some of these issues (Valve currently makes it nearly impossible to get a refund on most purchases made through Steam). Regardless of the precise fix, Valve needs to address these issues in order to maintain its rock-solid integrity as the most trustworthy and reliable downloadable game delivery service on the Internet. This isn't the last time an issue like this is going to come up. Valve should be more prepared for it next time.

Reader comment:

Smokezilla (Smack-Fu Master, in training): I purchased "Ravaged" recently. Upon purchase, there was a disclaimer on the opening splash screen that instructed me to go into my game directory and change the name of one of the files (which would have to be done EVERY SINGLE TIME I attempted to log-in and play it). Fortunately, there was a patch recently released which fixed this issue, but there was absolutely NO mention of this when I purchased it on the Steam page. Based on this experience and the unsettling info in this article, you can bet it'll be a cold day in Hell before I go anywhere near any of their "Greenlight" games until they can prove their quality control issues are revised. Don't mind me. I'm going back to playing my World of Warcraft. At least it is finis.......... Oh... Wait. Nevermind.
计算机
2015-48/1917/en_head.json.gz/12881
Last year, Hewlett Packard Company announced it will be separating into two industry-leading public companies as of November 1st, 2015. HP Inc. will be the leading personal systems and printing company. Hewlett Packard Enterprise will define the next generation of infrastructure, software and services.

Public Sector eCommerce is undergoing changes in preparation and support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal systems, Printers and Services, and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from. Note: Each product catalog has separate shopping cart and checkout processes.

Personal Computers and Printers: Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies.

Server, Storage, Networking and Services: Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
计算机
2015-48/1917/en_head.json.gz/12999
An estimated 70 percent of MailChimp subscribers also use Google Analytics. Instead of having to embed new code in their site to track visitor activity, as is the case with many of the large ESPs, all users can take advantage of the new functionality at no cost, and without having to make any code changes to their site.

Because Google Analytics does not provide an API, many users struggle with determining which data corresponds with which individual marketing event. Google Analytics Integration takes the guesswork out by singling out and displaying only the data relevant to the specific email campaign, giving marketers immediate insight into accurate and detailed results of their email efforts. Google Analytics Integration also calculates an eCommerce campaign's ROI by deducting the cost of the campaign from the overall revenue it generated, providing an automatic view of the campaign's success. The new feature is also useful for publishers looking to see the amount of page views generated by a particular campaign.

"We believe that this type of detailed reporting should be part and parcel of an email marketing solution, regardless of the size of the campaign," said Ben Chestnut, CEO of MailChimp. "The Google Analytics Integration feature automates information such as ROI and generated revenue, which would ordinarily have to be calculated manually or via a third party solution, making it readily available on each user's account. We're taking care of the heavy lifting when it comes to aggregating performance metrics so that our clients can focus simply on their success, which should be all they have to think about."

"Tracking and ROI were the features that attracted us to MailChimp in the first place," noted David McCarty, Marketing Director for American Precious Metals Exchange. "We have been actively deploying email campaigns for years, but our previous provider gave us no way of knowing how email results mapped back to our overall business goals. We could see aggregate results, but the Google Analytics Integration gives us very simple, intuitive metrics that just contribute to MailChimp's ease of use. We were able to pinpoint revenue generated from our very first email campaign with MailChimp, and we were thrilled to see over $157,000 in sales. It just keeps getting better, too."

About MailChimp

MailChimp supports more than 3 million subscribers worldwide, sending 3 billion emails per month. The MailChimp platform improves the user experience by providing seamless, yet powerful email marketing and publishing features that are easy and affordable enough for a small business to get started, but powerful enough for a large company looking for an enterprise level solution. MailChimp's platform provides an open API used by more than 250,000 subscribers. MailChimp integrates with many third party applications including Facebook, Twitter, Eventbrite, SurveyGizmo, Salesforce, WordPress, Magento, Joomla, Drupal and Google Analytics. And best of all, prices start at free.
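The ROI arithmetic described in the release is simple subtraction and division. Purely as an illustration (this is not MailChimp's code or API; the function name and the campaign cost below are made up), it amounts to:

```python
# Illustration only: the "revenue minus campaign cost" ROI calculation described above.
def campaign_roi(revenue, campaign_cost):
    net = revenue - campaign_cost                                   # net return of the campaign
    roi_pct = 100.0 * net / campaign_cost if campaign_cost else float("inf")
    return net, roi_pct

# Hypothetical numbers: $157,000 in tracked sales against an assumed $250 send cost.
net, roi = campaign_roi(revenue=157000, campaign_cost=250)
print("Net return: ${:,.0f}  ROI: {:.0f}%".format(net, roi))
```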
计算机
2015-48/1917/en_head.json.gz/13326
Accounting Software
NetSuite, “The Next Big Thing”, Ends Salesforce Sticker Shock
Marketing Firms | Oct 11th 2006

NetSuite, selected as one of five "the Next Big Thing" companies at Enterprise 2006 (http://www.sandhill.com/conferences/enter2006.php), has struck a blow to end sticker shock by offering a life-time fixed subscription price for salesforce.com users migrating to NetSuite CRM+. Such innovative strategies have helped make NetSuite one of the world's largest on-demand software suites providers and the fastest growing software company in Silicon Valley.

"We are honored to be selected as a 'Next Big Thing' company," said Zach Nelson, chief executive officer (CEO) of NetSuite, who was recently named one of the customer relationship management (CRM) industry's Influential Leaders for 2006 (http://www.destinationcrm.com/articles/default.asp?ArticleID=6334). "This recognition is a testament to what NetSuite has achieved. As a Software as a Service company, NetSuite has impacted the software industry by delivering the only on-demand suites to small and midsize businesses. While Web 2.0 is hot, I think the next big thing will be SaaS 3.0 led by NetSuite."

CRM Magazine's Influential Leader award identifies industry luminaries who have made the CRM industry what it is today and who are shaping it into what it will be tomorrow. According to CRM Magazine, "NetSuite continues to innovate and does so with flair." One example of that flair for innovation is the offer to salesforce.com users, shocked by renewal prices significantly higher than the original subscription price, to lock in a subscription price on their most recent invoice with NetSuite CRM+. The life-time fixed price offer applies to those migrating to NetSuite CRM+ from salesforce.com Professional and Enterprise edition before December 31, 2006.

NetSuite had the top score in both the "Sales Management" and "Breadth of Offerings" categories, according to Forrester Research's Hosted Sales Force Automation TechRankings™. NetSuite also received high scores in some of the most important functional areas of SFA, including forecasting, opportunity management, activity management, dashboards, document management and pricing and products.

"We switched to NetSuite because salesforce.com didn't have the extra feature NetSuite has, including order management," said Fabrice Cancre, chief operating officer (COO), Olympus NDT, a manufacturer and distributor of testing equipment, headquartered in Waltham, Mass. "We have gradually increased NetSuite usage to our 100-plus member distributed sales team. We can create quotations and sales orders, and measure the forecast by product – something we couldn't do with salesforce.com. We're continually deploying more NetSuite features and are planning to use even more of the advanced CRM+ features NetSuite is offering, as well as extending the suite to our international offices."

In addition to free services to transfer 100MB of salesforce.com data, salesforce.com switchers will also receive sales force automation (SFA) and customer relationship management (CRM) functionality not found in salesforce.com, including abilities to:

- create estimates or quotes
- generate sales orders
- manage multiple quotes or forecasts
- automate support for cross-selling and up-selling
- manage incentive management (commissions) within the system without using a third product
- pre-configured dashboards for business intelligence, and more.
"I have tracked NetSuite for several years and have seen the growth and industry leadership of the company," said M.R. Rangaswami, co-founder of the Sand Hill Group and a driving force behind Enterprise 2006. "Based on what I have seen, I bet on NetSuite to be the Next Big Thing company in the on-demand software industry."
计算机
2015-48/1917/en_head.json.gz/13606
Bethesda promises it's still working on Skyrim: Dawnguard for PS3

The Elder Scrolls V: Skyrim players on PlayStation 3 received good news on Monday when Bethesda announced that its latest expansion, Dragonborn, would arrive on PC and Sony's console in early 2013. It will be the first Skyrim DLC to hit the PlayStation 3. While Bethesda's released multiple expansions for Xbox 360 at this point, including Dawnguard and Hearthfire, technical problems have prevented this content from releasing on PlayStation 3. Bethesda says that Dragonborn won't be the end of Skyrim DLC on Sony's console.

"We're hard at work to make [Dragonborn] available early next year on PS3 and PC," reads a new update at Bethesda's official blog. "On PS3 in particular, we turned our attention to Dragonborn, as we thought it was the best content to release first, and we didn't want folks to wait longer. Once it's ready to go for everyone, we'll continue our previous work on Hearthfire and Dawnguard for PS3. Each one takes a lot of time and attention to work well in all circumstances and all combinations of DLC. Once we have a better idea of release timing and pricing, we'll let you know."

It's those expansions' compatibility with all variations of the game on Sony's platform that has apparently been causing problems. Following the release of Hearthfire at the beginning of October, Bethesda's Pete Hines said that the studio was working diligently on the content but couldn't get it to work consistently. "Performance isn't good enough in all cases," said Hines. "For most folks, it'd be fine. For some, it wouldn't be."

The standalone version of Skyrim released on PlayStation 3 in 2011 had a host of technical problems as well, with many players reporting crippling slow performances caused by the way the game saved information on Sony's machine. To date, not all of the content introduced through basic patches on PC and Xbox 360 are implemented in the PlayStation 3 version of the game. Sony itself has worked alongside Bethesda to try and remedy the problem. "The PS3 is a powerful system, and we're working hard to deliver the content you guys want," read a statement released in August. "Dawnguard is obviously not the only DLC we've been working on either, so this issue of adding content gets even more complicated. This is not a problem we're positive we can solve, but we are working together with Sony to try to bring you this content."
计算机
2015-48/1917/en_head.json.gz/13785
Qt 5.1 - more than just a minor update

Just under six months after the release of Qt 5.0, a new version of the C++ user interface development framework has now been released. Qt 5.1 is not just a minor update focusing on improvements and performance, as originally intended; it also offers various new features, the most important ones being iOS and Android support, although these aspects continue to be classified as technology previews. According to the developers, however, the implementations can already be used in various scenarios.

Qt 5.1 has been released together with version 2.7.2 of the Qt Creator development environment. The components are available via a new online installer that will assist with automatic updating in the future. The developers have also released a new version of the add-in that provides integration with Microsoft's IDE Visual Studio. For the first time, Qt now works with Visual Studio 2012, and Windows users can choose ANGLE (Almost Native Graphics Layer Engine) or OpenGL.

With this release, the Qt developers have introduced a new module for managing serial ports (Qt SerialPort). Familiar from Qt 4.x, the Qt Sensors module for accessing sensor hardware has now become a Qt component again. It currently supports Android, BlackBerry, iOS and Mer/SailfishOS. The new Qt Quick Controls module offers a collection of reusable UI components for desktop applications. Qt Quick Layouts provides extra help with managing scalable user interfaces.

The previously mentioned support of iOS and Android is almost the same. Compared to Qt 5.1 on other platforms though, missing components include Qt SerialPort, Qt WebKit and parts of Qt Multimedia. Qt Quick 2 is also missing from the Qt implementation for Apple's mobile operating system, because its V8 JavaScript engine can't be used under iOS. The developers plan to provide full Qt Quick support for the operating system with Qt 5.2, which they have already scheduled for release in late 2013. The open source, LGPLv2/GPLv3-licensed Qt is available to download.
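The article describes the new Qt SerialPort module in terms of the C++ stack it ships with. Purely for illustration, and to keep this document's examples in one language, here is a rough sketch of the same module driven through the PyQt5 bindings; treating the use of PyQt5 and the port name as assumptions of the sketch, not something the article discusses:

```python
# Sketch only: Qt SerialPort exercised via PyQt5 rather than the C++ API the article covers.
# "/dev/ttyUSB0" is a placeholder for whatever serial device is actually attached.
from PyQt5.QtCore import QIODevice
from PyQt5.QtSerialPort import QSerialPort, QSerialPortInfo

# Enumerate whatever serial ports the operating system reports.
for info in QSerialPortInfo.availablePorts():
    print(info.portName(), info.description())

port = QSerialPort("/dev/ttyUSB0")
port.setBaudRate(QSerialPort.Baud9600)
if port.open(QIODevice.ReadWrite):
    port.write(b"AT\r\n")           # send a command
    port.waitForReadyRead(1000)     # wait up to one second for a reply
    print(port.readAll().data())
    port.close()
```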
计算机
2015-48/1918/en_head.json.gz/41
Compromising Twitter’s OAuth security system

Twitter recently transitioned to OAuth, but the social network's … - Sep 2, 2010 3:25 pm UTC

Authorization issues

So far, this article has largely focused on the technical deficiencies of Twitter's own OAuth implementation. For the rest of the article, we will be looking primarily at broader OAuth issues that also affect many other implementations. We will still be discussing OAuth in the context of Twitter, but it's important to keep in mind that the following issues are widespread and aren't necessarily specific to Twitter's implementation.

The manner in which OAuth relies on page redirection to facilitate the authentication process poses some unusual challenges that are difficult to address. One issue that has been raised is that the user remains logged in on Twitter (and might not even realize it) when he or she goes through the legitimate redirect-based authorization process that is initiated by a third-party website. This could be a problem if the user is using a computer at a public location, such as a school computer lab.

Say, for example, that a user logs into an online game and then authorizes that game to access their Twitter account. When they are done playing, they will likely log out of the game so that the next person to use the computer won't mess with their game account, but they might not realize that they need to log out of Twitter too due to their use of the OAuth authentication process that they performed during the session. It's not clear exactly what the right behavior should be, but it's arguable that Twitter should log the user out after handing authentication to the third-party service in cases where the user wasn't already logged into Twitter before initiating the authentication request.

Another somewhat related issue is that the "Deny" button on the authorization page is really just a cancel button. If you are prompted to authorize an application that you have already authorized and you click the Deny button, Twitter will not revoke the application's existing authorization to access your account. Again, this is a situation where it's not really obvious what behavior the user should expect. The word "Deny" has a very specific meaning that is somewhat misleading in the way that it is used on the page. I think that implementors should either change the denial button to use the word "Cancel" or make it revoke existing access in cases where it exists. Perhaps the user should be prompted. As previously stated, this is not a Twitter-specific issue—the same problem exists on Google's authorization page, too.

Phishing risks

Twitter doesn't have any kind of vetting process or validation procedure to ensure that consumer key registrants are who they claim to be. For example, there is absolutely nothing to stop me from registering a Twitter OAuth application key claiming that my company is Apple and my product is Mac OS X. When a malicious person registers a key that pretends to be a legitimate product, the company that makes that product has to go through a lengthy arbitration process with Twitter's administrators and demonstrate that they own the trademark in order to get the falsely registered key invalidated. This problem is not unique to Twitter, but Twitter exacerbates the risk of phishing by failing to use appropriate language on its authentication page.
When Twitter presents users with the option of granting access to an application, it warns the user to only allow the authorization to proceed if they trust the party requesting access, but they don't warn the user that the initiator of the request and recipient of account access could, in fact, be somebody other than the entity stated on the authorization page. Arguably, Twitter will be able to partially mitigate the risks of such attacks by finding and invalidating fraudulently registered keys. One of the advantages of OAuth is that the malicious application will have its access to user accounts revoked when the key is invalidated. A more serious problem is when phishing attacks are perpetrated with a compromised key that came from a legitimate third-party application.

OAuth supports a callback parameter that allows the party initiating the authorization request to specify where the user's access token (the token used to access a user's account) should be sent when the authorization process is completed. A malicious individual with a compromised consumer key could request authorization in a manner that appears to be on behalf of a legitimate application, but could have the key sent to their own server so that they can control the user's account. The user would see a normal Twitter authorization page on the official Twitter website with the name of a legitimate and safe application, but they would unknowingly be granting access to the malicious third-party that initiated the authorization request. This is especially dangerous because all of the things that users have been trained to look for to spot phishing—like the URL and the SSL certificate—will appear exactly as they should, giving the user a false sense of security.

Twitter has taken some reasonable steps to limit the risk of such an attack. Specifically, Twitter has blocked keys that are registered for the desktop from using the callback parameter. Any consumer key that is registered on Twitter for a desktop application key will only be able to use the so-called out-of-band (OOB) authorization method, which doesn't rely on redirection. This is one of the few things about Twitter's approach to OAuth that actually makes good sense. Unfortunately, it doesn't protect against such a phishing attack in circumstances where a key that has redirection enabled is compromised.

The consumer secret key for a Web application is stored on the servers of the company that operates the application, so it is unlikely to be compromised. The problem is that there are a lot of mobile applications that rely on the redirection method and configure their Twitter consumer keys to function in Web mode. This is because a very common practice for mobile applications is to use the redirection authorization method in conjunction with a custom URL handler that is registered with the platform. The URL handler trick makes it possible for the Twitter website to hand the user's access token directly back to the application when authentication is complete. In cases where that approach is used, the application's key necessarily has to be configured as a Web key, even though it is used in a desktop application. If that key is compromised, it is susceptible to the previously described phishing attack. (It's also worth noting that there is a risk of some malicious application overriding the URL handler settings to make itself the recipient of the access token.)
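To make the moving parts concrete, here is a rough sketch of the OAuth 1.0a dance the article is describing, written with the present-day requests_oauthlib library (which the article itself does not mention). The consumer key and secret are placeholders, and the endpoint URLs are Twitter's documented OAuth 1.0a endpoints. The oauth_callback value is the parameter at issue: "oob" selects the out-of-band PIN flow that Twitter forces on desktop keys, while a URL here is exactly the redirect target that a stolen consumer key would let an attacker point wherever they like.

```python
# Sketch of a three-legged OAuth 1.0a flow against Twitter, using requests_oauthlib.
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "xxxxxxxx"       # placeholder for a registered application's key
CONSUMER_SECRET = "xxxxxxxx"    # placeholder for its consumer secret

REQUEST_TOKEN_URL = "https://api.twitter.com/oauth/request_token"
AUTHORIZE_URL = "https://api.twitter.com/oauth/authorize"
ACCESS_TOKEN_URL = "https://api.twitter.com/oauth/access_token"

# The client chooses where the grant is delivered via oauth_callback.
# "oob" means out-of-band (the user is shown a PIN); a URL here is the
# callback override the article warns about when a consumer key leaks.
oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                      callback_uri="oob")

request_token = oauth.fetch_request_token(REQUEST_TOKEN_URL)
print("Send the user to:", oauth.authorization_url(AUTHORIZE_URL))

pin = input("PIN shown by Twitter after the user approves: ")
tokens = oauth.fetch_access_token(ACCESS_TOKEN_URL, verifier=pin)
print("Access token:", tokens["oauth_token"])
```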
Ideally, OAuth implementors should require application developers to supply the callback address when they configure their key and should not allow that setting to be overridden by the client application in a request parameter. Twitter has a field in the key configuration that allows the developers to specify a default, but they still allow client applications to use the dangerous callback override parameter.

Security is hard, let's go shopping!

Individual implementations aside, the general concept behind OAuth's redirection-based authorization process materially increases the risk of phishing. The people behind the standard are fully aware of that fact, but they don't believe that the issue should necessarily be addressed by the standard itself. They have argued for quite some time that end users should simply be more careful and implementors should come up with best practices on their own. This is because the purpose of the OAuth standard was to mitigate the password antipattern, not to holistically solve every security problem.

"OAuth cannot help careless users, and phishing is all about not paying attention to what you do. There has been some interesting discussion about phishing on the OAuth group and the bottom line is, it is far beyond the scope of the protocol," OAuth contributor Eran Hammer-Lahav wrote in 2007.

Unfortunately, there are advocates of OAuth who are less honest than Hammer-Lahav about the standard's scope and limitations. Some proponents of the standard misrepresent its maturity and suitability for adoption while downplaying its weaknesses and risks. When people try to raise concerns about the problems, OAuth advocates tend to argue that developers who don't like OAuth are simply lazy or don't care about security. Some of the people behind the OAuth standard try really hard to convince end users that they should expect OAuth support everywhere, even in contexts where it doesn't really work or make sense. Their attitude is that developers should man up and learn to live in the brave new OAuth-enhanced world where solving the password antipattern takes priority over every other security issue.

To be clear, I don't think that OAuth is a failure or a dead end. I just don't think that it should be treated as an authentication panacea to the detriment of other important security considerations. What it comes down to is that OAuth 1.0a is a horrible solution to a very difficult problem. It works acceptably well for server-to-server authentication, but there are far too many unresolved issues in the current specification for it to be used as-is on a widespread basis for desktop applications. It's simply not mature enough yet.

Even in the context of server-to-server authentication, OAuth should be viewed as a necessary evil rather than a good idea. It should be approached with extreme trepidation and the high level of caution that is warranted by such a convoluted and incomplete standard. Careless adoption can lead to serious problems, like the issues caused by Twitter's extremely poor implementation.

As I have written in the past, I think that OAuth 2.0—the next version of the standard—will address many of the problems and will make it safer and more suitable for adoption. The current IETF version of the 2.0 draft still requires a lot of work, however. It still doesn't really provide guidance on how to handle consumer secret keys for desktop applications, for example.
In light of the heavy involvement in the draft process by Facebook's David Recordon, I'm really hopeful that the official standard will adopt Facebook's sane and reasonable approach to that problem.

Although I think that OAuth is salvageable and may eventually live up to the hype, my opinion of Twitter is less positive. The service seriously botched its OAuth implementation and demonstrated, yet again, that it lacks the engineering competence that is needed to reliably operate its service. Twitter should review the OAuth standard and take a close look at how Google and Facebook are using OAuth for guidance about the proper approach.
计算机
2015-48/1918/en_head.json.gz/374
Red Hat Fedora 5 Unleashed
Andrew Hudson, Paul Hudson
stock: back order | release date: May 2006

Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security.

Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, Broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Linux Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux.

Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry-standard in the PHP community.
计算机
2015-48/1918/en_head.json.gz/375
计算机
2015-48/3652/en_head.json.gz/14270
THE OPENSTACK FOUNDATION COMMUNITY CODE OF CONDUCT

This Community Code of Conduct covers our behavior as members of the OpenStack Community, in any forum, mailing list, wiki, web site, IRC channel, public meeting or private correspondence. OpenStack members and governance bodies are ultimately accountable to the OpenStack Board of Directors.

Be considerate. Our work will be used by other people, and we in turn will depend on the work of others. Any decision we take will affect users and colleagues, and we should take those consequences into account when making decisions. OpenStack has a global base of users and of contributors. Even if it's not obvious at the time, our contributions to OpenStack will impact the work of others. For example, changes to code, infrastructure, policy, documentation, and translations during a release may negatively impact others' work.

Be respectful. The OpenStack community and its members treat one another with respect. Everyone can make a valuable contribution to OpenStack. We may not always agree, but disagreement is no excuse for poor behavior and poor manners. We might all experience some frustration now and then, but we cannot allow that frustration to turn into a personal attack. It's important to remember that a community where people feel uncomfortable or threatened is not a productive one. We expect members of the OpenStack community to be respectful when dealing with other contributors as well as with people outside the OpenStack project and with users of OpenStack.

Be collaborative. Collaboration is central to OpenStack and to the larger free software community. This collaboration involves individuals working with others in teams within OpenStack, teams working with each other within OpenStack, and individuals and teams within OpenStack working with other projects outside. This collaboration reduces redundancy, and improves the quality of our work. Internally and externally, we should always be open to collaboration. Wherever possible, we should work closely with upstream and downstream projects and others in the free software community to coordinate our technical, advocacy, documentation, and other work. Our work should be done transparently and we should involve as many interested parties as early as possible. If we decide to take a different approach than others, we will let them know early, document our work and inform others regularly of our progress.

When we disagree, we consult others. Disagreements, both social and technical, happen all the time and the OpenStack community is no exception. It is important that we resolve disagreements and differing views constructively and with the help of the community and community processes. We have the Technical Board, the User Committee, and a series of other governance bodies which help to decide the right course for OpenStack. There are also Project Core Teams and Project Technical Leads, who may be able to help us figure out the best direction for OpenStack. When our goals differ dramatically, we encourage the creation of alternative implementations, so that the community can test new ideas and contribute to the discussion.

When we are unsure, we ask for help. Nobody knows everything, and nobody is expected to be perfect in the OpenStack community. Asking questions avoids many problems down the road, and so questions are encouraged. Those who are asked questions should be responsive and helpful. However, when asking a question, care must be taken to do so in an appropriate forum.

Step down considerately. Members of every project come and go, and OpenStack is no different. When somebody leaves or disengages from the project, in whole or in part, we ask that they do so in a way that minimizes disruption to the project. This means they should tell people they are leaving and take the proper steps to ensure that others can pick up where they left off.

Respect the election process. Members should not attempt to manipulate election results. Open debate is welcome, but vote trading, ballot stuffing and other forms of abuse are not acceptable.

We pride ourselves on building a productive, happy and agile community that can welcome new ideas in a complex field, and foster collaboration between groups with very different needs, interests and goals.

Mailing lists and web forums

Mailing lists and web forums are an important part of the OpenStack community platform. This code of conduct applies to your behavior in those forums too. Please follow these guidelines in addition to the general code of conduct:

- Please use a valid email address to which direct responses can be made.
- Please avoid flamewars, trolling, personal attacks, and repetitive arguments.

If a Community Member wishes to file a complaint against behavior that is not compliant with the Community Code of Conduct, he or she should contact the Executive Director ([email protected]).
计算机
2015-48/3652/en_head.json.gz/14279
Apple To Mandate Sandboxing by March 2012
Linked by Thom Holwerda on Thu 3rd Nov 2011 22:54 UTC

And so the iOS-ification of Mac OS X continues. Apple has just announced that all applications submitted to the Mac App Store have to use sandboxing by March 2012. While this has obvious security advantages, the concerns are numerous - especially since Apple's current sandboxing implementation and associated rules makes a whole lot of applications impossible.

RE[6]: Comment by frderi, posted by frderi on Sun 6th Nov 2011 19:37 UTC in reply to "RE[5]: Comment by frderi"
Its an interesting train of thought, but I still think there would be a lot of human design based decisions to be made for the different devices, and I don't know if the net gain of letting the computer do this would be greater than just redesigning the UI yourself, especially on iOS devices, where its trivial to set up an UI. And when you do too little, people just say "meh" and move along I guess that defining reasonable goals for a product must be one of the hardest tasks of engineering ! It has to have the functionality to support the use cases for the device. Everything else is just clutter. After defining the goals of your app, you need to design the practical implementation of the functionality. As a user, I really appreciate it when a lot of thought has gone into this process. Some UI's which are basically displays of underlying functionality. These tend to be very tedious and time consuming to work with. There are others which actually take the effort to make the translation between a simple user interaction and the underlying technology. A lot of thought can go into the process of trying to come to grips with how these interactions should present itself to the user, and in some cases, it takes an order of a magnitude more effort than it takes to actually write the code behind it. Well, they do have windows, in the sense of a private display which the application may put its UI into without other software interfering. It just happens that these windows are not resizable, full screen, and as a consequence are hard to close and can only be switched using the operating system's task switcher. Which makes multi-windows interfaces impractical. But those ought to disappear anyway You're looking at it from a developer perspective, I'm looking at it from a user perspective. As a user I don't care if there's a windowing technology behind it or not. I don't see it, I don't use it, so it doesn't exist. Desktop computers have windowing functionality (The classic Mac OS even had way too many of it) There are more differences than that. Some popups, like authorizations, are modal, some others, like notifications, are non-modal. They way they display these things is different as well. But these are just individual elements, and in the grand scheme of things, trivialities. Although they do not have a mouse, they still have pointer-based UIs. Only this time, the pointer is a huge greasy finger instead of being a pixel-precise mouse, so hovering actions must not be a vital part of the UI, and controls must be made bigger to be usable. Since controls are bigger and screens are smaller, less controls can be displayed at once, and some controls must either go of be only accessible through scrolling. But this does not have to be fully done by hand, UI toolkits could do a part of the job if the widget set was designed with cross-device portability in mind... Try to think a little bit further than the practicalities of the UI elements and think about the overall user experience instead of the engineering challenges. Good tablet apps are layed out differently than good desktop apps. This is not a coincidence. Some of those differences are based on the different platform characteristics, as you mentioned. But other reasons have to do with the fact that the use cases for these apps differ greatly. I'm convinced that when you are designing UI's, you have to start from the user experience and define these use cases properly to be able to come to an application design thats truly empowering your users.
计算机
2015-48/3652/en_head.json.gz/14660
What is the Difference Between Scalar and Superscalar Processors?
Dulce Corazon; edited by W. Everett

There are different types of central processing units (CPUs) available for computers. These types of CPUs do not really differ in terms of processing hardware and architecture. Most of them perform the basic tasks of a CPU such as reading and writing data, basic arithmetic, and address jumping. They can, however, differ in terms of bus sizes and processor architecture. Several types of computer processor hardware are available, two of which are the scalar and superscalar processors.

A processor that executes scalar data is called a scalar processor. Using fixed point operands, integer instructions are executed by scalar processors even in their simplest state. More powerful scalar processors usually execute both floating point and integer operations. Recently produced scalar processors contain both a floating point unit and an integer unit, all on the same CPU chip. Most of these modern scalar processors use instructions of the 32-bit kind.

The superscalar processor, on the other hand, executes multiple instructions at a time because it has multiple pipelines. This CPU structure implements instruction-level parallelism, which is a form of parallelism in computer hardware, within a single computer processor. This means it can allow fast CPU throughput that is not even remotely possible in other processors that do not implement instruction-level parallelism. Instead of executing one instruction at a time, a superscalar processor uses its redundant functional units in the execution of multiple instructions. These functional units are not separate CPU cores, but a single CPU's extension resources such as multipliers, bit shifters and arithmetic logic units (ALUs).

Differences between scalar and superscalar processors generally boil down to quantity and speed. A scalar processor, considered to be the simplest of all processors, works on one or two computer data items at a given time. The superscalar processor works on multiple instructions and several groups of multiple data items at a time. Scalar and superscalar processors both function the same way in terms of how they manipulate data, but their difference lies in how many manipulations and data items they can work on in a given time. Superscalar processors can handle multiple instructions and data items, while the scalar processor simply cannot, therefore making the former a more powerful processor than the latter.

Scalar and superscalar processors both have some similarities with vector processors. Like a scalar processor, a vector processor also executes a single instruction at a time, but instead of just manipulating one data item, its single instruction can access multiple data items. Similar to the superscalar processor, a vector processor has several redundant functional units that let it manipulate multiple data items, but it can only work on a single instruction at a time. In essence, a superscalar processor is a combination of a scalar processor and a vector processor.
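The gain from issuing independent instructions side by side can be illustrated with a deliberately toy model. The sketch below is not how any real processor's issue logic works (there are no pipeline stages, no register renaming, no out-of-order execution); it only counts cycles for an imaginary in-order machine that may issue up to a fixed number of instructions per cycle, stalling when an operand depends on an instruction issued in the same cycle.

```python
# Toy model of in-order issue, for illustration only.
# Each instruction is (destination_register, source_registers).
PROGRAM = [
    ("r1", ()),            # r1 = load A
    ("r2", ()),            # r2 = load B
    ("r3", ("r1", "r2")),  # r3 = r1 + r2   (depends on r1 and r2)
    ("r4", ()),            # r4 = load C    (independent)
    ("r5", ("r3", "r4")),  # r5 = r3 * r4   (depends on r3 and r4)
]

def cycles(program, issue_width):
    """Cycles needed by an in-order machine that issues up to `issue_width`
    instructions per cycle, never issuing an instruction in the same cycle
    as one that produces its source operand."""
    count = 0
    i = 0
    while i < len(program):
        written_this_cycle = set()
        issued = 0
        while i < len(program) and issued < issue_width:
            dest, srcs = program[i]
            if any(s in written_this_cycle for s in srcs):
                break  # depends on a result produced this very cycle
            written_this_cycle.add(dest)
            issued += 1
            i += 1
        count += 1
    return count

print("scalar (1-wide):     ", cycles(PROGRAM, 1), "cycles")   # 5
print("superscalar (2-wide):", cycles(PROGRAM, 2), "cycles")   # 3
```

Even in this crude model, the two-wide machine finishes the same five instructions in fewer cycles only because some of them are independent; the dependency chain r1/r2 -> r3 -> r5 is what keeps it from being twice as fast.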
计算机
2015-48/3653/en_head.json.gz/267
Google App Engine: Write Your Own Google Apps

Google's applications could be useful and interesting, but they are just a small fraction of all the applications you may need. That's probably the reason why Google decided to open its infrastructure to third-party applications and released Google App Engine.

Google App Engine gives you access to the same building blocks that Google uses for its own applications, making it easier to build an application that runs reliably, even under heavy load and with large amounts of data. The development environment includes the following features:

* Dynamic webserving, with full support of common web technologies
* Persistent storage (powered by Bigtable and GFS with queries, sorting, and transactions)
* Automatic scaling and load balancing
* Google APIs for authenticating users and sending email
* Fully featured local development environment

For now, there are a lot of limitations: only the first 10,000 users who register at http://appengine.google.com/ will be able to test the new service, you need to write your applications in Python (more languages will come) and the quotas are enough only for small to medium projects. "During this preview period, applications are limited to 500MB of storage, 200M megacycles of CPU per day, and 10GB bandwidth per day. We expect most applications will be able to serve around 5 million pageviews per month. In the future, these limited quotas will remain free, and developers will be able to purchase additional resources as needed." The limitations are reasonable if you think this is only a preview release and Google wants to get feedback from developers before the official launch.

The applications can be run locally using a SDK provided by Google or uploaded to a subdomain of appspot.com or to your own site. There's already a gallery of applications that includes a chat room for teams, a movie quote site, a Python shell and more.

Google previously released Mashup Editor, "an AJAX development framework and a set of tools that enable developers to quickly and easily create simple web applications and mashups", but the new App Engine lets you build more complex applications. Kevin Gibbs explained more about Google's intentions at Google App Engine Campfire One:

Google App Engine provides an infrastructure for running web apps. By that, I mean that we're focused, specifically on web applications: making them easy to run, easy to deploy, and easy to scale. App Engine is different than a lot of other things out there: App Engine is not a grid computing solution -- we don't run arbitrary compute jobs. We also don't give you a raw virtual machine. Instead, we provide a way for you to package up your code, specify how you want it to run in response to requests, and then we run and serve it for you. You don't reserve resources, or machines, or RAM or a number of CPUs, or anything like that. It's a fluid system, that runs your code in response to load and demand. (...) App Engine is a complete system. We provide ways to run your code, serve your static content, a database, request and application logs, methods to push new releases of your code, and more. Ultimately, we are trying to provide a simpler alternative to the traditional LAMP stack. (...) Finally, the other key part of App Engine is that we're providing you access to Google's infrastructure.
The APIs and systems we are providing to you are built off of the same distributed, scalable infrastructure we use to power Google's other applications, like Google Accounts, GFS, and Bigtable. We're giving you access to those powerful building blocks, and giving you the ability to write real code and real apps that make use of them.

Usually, if you lower the entry barriers for a system, people will use it more often and the probability of building something great increases. Google wants to reduce the complexities of creating web applications and give developers the opportunity to spend more time writing code and less time building the infrastructure and scaling the application. The same way Amazon Web Services reduced the costs of running a start-up, Google App Engine could accelerate innovation by letting developers focus on what's important.

Google App Engine - http://appengine.google.com
Documentation - http://code.google.com/appengine
Featured applications - http://appgallery.appspot.com
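To give a concrete sense of the Python-only preview runtime described above, here is a minimal request handler in the style of the original SDK's webapp framework and users API. The module paths and call names are given from memory of that early SDK rather than quoted from the article, so treat the details as a sketch.

```python
# Minimal App Engine handler in the style of the original Python SDK:
# a page that greets a signed-in Google Account or redirects to the login page.
from google.appengine.api import users
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app


class MainPage(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()  # Google Accounts integration comes with the platform
        if user:
            self.response.out.write("Hello, %s" % user.nickname())
        else:
            self.response.redirect(users.create_login_url(self.request.uri))


application = webapp.WSGIApplication([("/", MainPage)], debug=True)


def main():
    run_wsgi_app(application)


if __name__ == "__main__":
    main()
```

The same script runs unchanged in the local SDK and on appspot.com, which is the point the article makes about deployment: the developer never provisions machines, only uploads code.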
计算机
2015-48/3653/en_head.json.gz/777
Bulletin of The American Society for Information Science
December / January 2000
Annual Meeting Coverage, Track 3: Information Retrieval
by Matthew Koll

The Information Retrieval track at the 1999 ASIS Annual Meeting was informative and lively. It started with my overview of the field, intended to provide a backdrop against which attendees might view the papers and sessions to follow. In this report I'll try to recapture a sense of that backdrop and touch on some of the highlights of the sessions.

Information Retrieval Backdrop

Information retrieval is the science and practice of trying to show people the document they would want to see next, if they had total knowledge and hindsight. The field used to be divided between the information retrieval research community and the business world. In recent years we have seen a growing split (along with increased communication) among researchers who focus on users versus those who focus on retrieval systems. An even bigger schism has developed within the search industry, between the traditional, professional information services and the consumer-oriented search services. Yes, the Web has changed everything.

One way to grasp the wide scope of our field and some of the changes is to dig into the "needle in a haystack" metaphor. Searching is like finding a needle in a haystack, but not all searches are the same. "Finding a needle in a haystack" can mean:

- a known needle in a known haystack;
- a known needle in an unknown haystack;
- an unknown needle in an unknown haystack;
- any needle in a haystack;
- the sharpest needle in a haystack;
- most of the sharpest needles in a haystack;
- all the needles in a haystack;
- affirmation of no needles in the haystack;
- things like needles in any haystack;
- let me know whenever a new needle shows up;
- where are the haystacks?;
- and needles, haystacks ... whatever.

The point is that people come to search systems with a variety of needs. Systems do pretty well finding a specific document in a specific collection. But often, users don't find what they want because they're looking in the wrong place. Also, users sometimes want to know that they have found all the relevant documents (high recall) or be confident that they have not missed any important documents. This task is difficult and tends to be neglected, in large part because people don't know what they're missing until and unless they find it. The "needles, haystacks ... whatever" line started off as a light-hearted poke at Gen-X searchers, but with the massive growth in consumer online searching, this now represents a legitimate viewpoint. Casual searchers don't have time for a lot of interaction and aren't going to give the system a lot of words to work with; they want some good information back fast, and if they don't get it they're going to take their business elsewhere.

Despite recent progress, as seen in TREC results and in commercial systems, providing good search results continues to be a difficult problem. The main reasons are:

- language is inherently imprecise;
- when users do use logic, they misuse and overuse it;
- users provide very few explicit clues to what they want;
- there is limited opportunity for interaction;
- users want to find what they're looking for and get on with their lives;
- people don't know where to look;
- many retrieval methods do not scale, especially to the very large collections now emerging;
- the limits of aboutness: knowing the topic of a document is not sufficient to predict its relevance;
- and "I'll know it when I see it."
It's hard to describe what you don't know. To a large degree, the papers and panel sessions addressed these issues in an engaging and constructive way. Here are some key questions I had coming into the conference and my answers as of today.

Q: As the Web gets bigger, and queries don't get longer fast enough, won't precision be terrible?
A: No, precision is the top priority of the research and commercial communities.

Q: Is recall dead?
A: No, but it is in need of attention.

Q: Will commercial imperatives kill off information science entirely?
A: No. Library and information science professionals have never been in higher demand.

Information Retrieval at ASIS '99: Themes and Observations

Classification of Tracks. The first observation I'd make about information retrieval at ASIS '99 was that it wasn't limited to the information retrieval track. Sessions dealing with searching, navigating, agents and visualization spread out across various tracks. Perhaps for future conferences we'd be better off not classifying sessions at all, but just letting the attendees do full text searching of the program.

The User, Time and the Search Process. There was an increased emphasis on the process of searching. This is manifest in papers such as those by Choo, Detlor and Turnbull, which examined the modes and stages users pass through in searching, and the Kantor, Boros, Melamed and Menkov paper describing Information Quests. Papers like these mark more than just a return of attention to the role of the user in the search process, but also reveal the impact of recent advances in technology in tracking what people actually do when searching. Kantor's Ant World project at Rutgers, which captures, organizes and finds other people's relevant information quests, is a fascinating way of moving search from a private activity to more of a community activity, where people can improve the effectiveness of their searches by learning from other people with similar questions. Similar, but slightly different, Lankes' work with the National Digital Reference System and AskA project are designed to help searchers find people who actually know the answers to their questions or who can guide them in their quest.

Larger Task Context. The Watson search agent, described by Budzik and Hammond, takes this renewed focus on the user even further. Watson tries to understand the context in which a user's need for information is arising and to anticipate or to at least augment search requests by utilizing knowledge of the user's larger task, of which searching is just a part. Budzik reported on an experiment in which Watson outperformed human experts.

Relevance. Projects that involve the user more deeply are a hopeful sign for search systems. Developers are striving to overcome the problem that the relevance decision (the process by which a user decides whether a document meets his needs or not) is driven by much more than what the document is about. Getting beyond aboutness is essential. Toward that end, Schamber and Bateman described an ongoing project to determine the factors that influence users' relevance decisions. They've identified factors such as novelty, availability and source characteristics. Maybe we can train search assistants to start keying into these variables as well as topicality.

Off-the-Page Indicators. Another trend in our field is the growing use of "off-the-page" indicators of relevance.
These indicators include, for example, relevance ratings provided by other people (known as "collaborative filtering" to some, or as a variant of the time-honored "relevance feedback" to others); popularity of items, as indicated by analyzing user clicks; and analyzing the references made to a document by other documents, that is, the hypertext links and citations to a document, as well as the text surrounding those references in the citing documents. Bradshaw and Hammond described the Rosetta system, which makes innovative use of the context of citing documents to describe the cited document. Their goal is worth repeating: "to provide precise results for simple queries." Given that most queries are very simple (still averaging just 2-4 words), this is an important goal. Several people expressed a concern that reliance on the judgments and links of others could lead to a loss of serendipity and individuality in search results.

Integration. Though not in the Information Retrieval track, Lawrence provided an overview of a search system developed at NEC that is notable for its inclusion of a wide array of search and relevance ranking techniques, including citation mining. The NEC system is an example of a trend toward the integration of multiple methods. Several authors discussed integrated combinations of searching and browsing. Another interesting integration involves combining catalogs (collections of items carefully selected, described and classified into a taxonomy) with large full-text collections such as the whole Web. I'll have to indulge in a mention of the new AOL search product here, which features just that kind of integration. A user's search runs against not just the human-created descriptions of documents and the taxonomy into which they were classified (based on the Netscape Open Directory), but also the full-text of those documents, as well as the collected aggregated searches conducted by other users. For display, users can easily navigate among the documents themselves, their descriptions and the taxonomy.

Multimedia. Several speakers described their efforts at multimedia search. This includes both describing and retrieving relevant graphic, audio and video materials, and using visual and audio tools in the user interaction. On the search side, one of the most enlightening sessions was the panel on Information Retrieval from Speech. Siegler addressed how speech recognition for the purpose of preparing for searchable access is different from speech recognition for the purpose of preparing a transcript. He noted that instead of only retaining the most likely interpretation of a speech fragment, his system kept several of the less likely but possible interpretations. By keeping these additional words and phrases as part of the document description, search results were improved substantially.
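Since precision and recall come up repeatedly in this report, a minimal sketch of the two measures may be useful before moving on to the interface sessions; the document identifiers and relevance judgments below are invented purely for illustration and are not drawn from any of the papers discussed.

```python
# Textbook precision/recall over a retrieved set and a set of relevant documents.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0  # how much of what was shown was wanted
    recall = len(hits) / len(relevant) if relevant else 0.0       # how much of what was wanted was found
    return precision, recall

retrieved = ["d1", "d2", "d3", "d4"]   # what the system showed the user
relevant  = ["d2", "d4", "d7", "d9"]   # what the user actually wanted
p, r = precision_recall(retrieved, relevant)
print("precision %.2f  recall %.2f" % (p, r))   # 0.50, 0.50
```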
Hjerppe reminded us of some of the fundamental paradoxes underlying information retrieval, of which the most profound is probably that the user must describe that which he does not know in order to find it. Another significant point that arose during this and other sessions was the importance of getting new systems out into the real world for testing. This is a shift toward a product-design-with-rapid-iterations philosophy, as opposed to the other extreme of relatively big science projects.

More. Of course, the underlying theme of the information retrieval track was "more": more content, more searches, more kinds of things to be found, and much more attention to our field.

Matthew Koll is an AOL Fellow at America Online. He can be reached by mail at America Online, CC2, 44900 Prentice Dr., Dulles, VA 21066. He can be reached by phone at 703/265-1766 or by e-mail at [email protected]

© 2000, American Society for Information Science
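As an aside on the "off-the-page indicators" idea above: the basic move of blending topical matching with signals such as citations and click popularity can be sketched in a few lines of code. The sketch below is purely illustrative and is not drawn from Rosetta, the NEC system, or any other system described in this report; the sample documents, the weights, and the square-root damping of the popularity signals are all assumptions chosen for the example.

```python
# Illustrative toy ranker: mixes "aboutness" (query-term overlap) with two
# "off-the-page" signals -- inbound links/citations and past user clicks.
# All documents, field names, and weights below are invented for the sketch.
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str
    inbound_links: int  # citations or hyperlinks pointing at this document
    clicks: int         # how often searchers picked it from past result lists

def score(doc: Doc, query: str, w_text: float = 1.0,
          w_links: float = 0.5, w_clicks: float = 0.25) -> float:
    query_terms = set(query.lower().split())
    doc_terms = set(doc.text.lower().split())
    aboutness = len(query_terms & doc_terms) / max(len(query_terms), 1)
    # Square roots keep a heavily linked or clicked page from swamping topicality.
    return (w_text * aboutness
            + w_links * doc.inbound_links ** 0.5
            + w_clicks * doc.clicks ** 0.5)

docs = [
    Doc("a", "shared information quests in a community of searchers", 9, 4),
    Doc("b", "digital reference experts answer questions", 1, 25),
    Doc("c", "visualizing large collections in a planetarium", 0, 0),
]
query = "shared information quests"
for d in sorted(docs, key=lambda d: score(d, query), reverse=True):
    print(d.doc_id, round(score(d, query), 2))
```

The point of the toy is only the shape of the calculation: one term for what the document says about itself, and separate, damped terms for what other people's links and clicks say about it.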
Unlicensed Surreal Estate Broker
Last Updated 1/25/2004 by dickdiamond.com

Eye Opening

I'm learning Flash. Don't ask me why. While on the one hand I would seem to be utterly unqualified to use a tool that is essentially a lever for artistic expression, I also tend to think in terms of images, movement and storyboards. As I begin working with the tutorial book Build Your First Website with Flash MX (ISBN 1904344127, but not available in the United States, so don't bother), I encounter immediately, on page eight, what had been my central fear going into this: that I might have to draw to be able to use the program. In fact, the book begins with exercises in how to draw familiar objects with the tools provided in Flash. The first task: draw a leaf.

Here's the thing: I can't draw. At all. I can sketch a relatively good perspectivized plan for a house or a room or a piece of furniture. I can even do a projection or an exploded diagram with back-of-a-napkin proficiency. But I've just never been able to draw anything organic. I remember in grade school art class being shown that the human figure or face can be thought of as an assemblage of blobs: circles, ovals, squares, triangles. But to tell you the truth, I never figured out what to do with that information.

Fast forward 20 years and I'm being asked to draw leaves. Fortunately the book tells me exactly how to do this, line by line. Now the goal was to show me how to use the tools, but in fact what they accomplished, something 16 years of school plus 20 years of living had failed to do, was to show me how you draw something like a leaf.

I had always assumed that artists, good ones anyway, were like laser printers, or at least inkjet plotters: that they held a perfect image of something in their heads and rendered it progressively through some instrument onto some medium. Of course I understood there were learned techniques and even tricks like the Happy Little Trees guy on PBS used to use. I supposed that artists eventually learned all of these, so that the edge of a knife drawn through some green paint overlaying brown paint automatically became a good-enough tree. What I had never realized, until yesterday, was that much of art is how you look at something in the first place.

To take one example, look at a maple leaf. For me, a maple leaf looks like the thing on the Canadian flag. But of course that's an iconic, stylized, monochrome silhouette of a maple leaf. I'm talking about a real maple leaf. You could simply attempt to draw this freehand (and I probably would have), basically tracing it without the aid of tracing paper. But since we're talking about using a program like Flash, which has finite tools, it helps to think about a process. Using things like straight and curved lines, freehand drawing, and simple, repeatable rules, how could you describe a maple leaf?

Well, let's see, it's symmetrical, but not perfectly so. It seems to have five sub-leaves of similar (possibly fractal) construction, the two nearest the stem slightly smaller. Now let's get a little more detailed. How could you describe the process of drawing one of these sub-leaves? Well, they appear to be an initially inwardly or outwardly curving line followed by a reverse curving line to a point, then another inwardly or outwardly curving line back in toward the center, but shorter. You seem to repeat this pattern until you reach a leaf tip and then work your way back down in reverse order. The pattern is repeated fractally to achieve the desired size and scale.
Then there appears to be a skeletal structure in the middle of the leaf, which is simply a network of lines radiating to the tips, the thickness of each line roughly proportional to the length of the point. The stem is simply a slightly bent line. The overall color and shading of the leaf seems to be a relatively uniform mottled brown with some randomly spaced darker spots.

Now I'm not saying you could draw the leaf given just my text description. But given a mental picture or a real picture of a leaf, you could use this procedure to draw a non-photorealistic representation of the image in any paint program, or on paper. The thing is, I had never looked at a leaf like this before. I had never asked myself what, visually, makes up a leaf. To me a leaf was an atomic, indivisible part of a larger entity like a shrub or tree. In a game of Pictionary, to render the concept of a leaf, many of us would probably resort to drawing a stick-figure tree with a blob at the end of a branch. But to me, that's all a leaf was, ever: a blob at the end of a branch. No wonder I couldn't draw one!

So the lesson here is, first, observe. Then think about your tools, your capabilities. Maybe you can't draw a rooster or a face freehand, but you can probably draw the lines and ovals that make one up. It would seem that any image can be simplified to fit your tools and then recomplicated to the point required to fit the level of expression you desire. For Flash, we really are dealing with a mostly iconic medium, so very simplified renditions will do. Yet by paying attention to construction, by getting the broad strokes right, I think it's possible for even an amateur to avoid having his drawings look like the crayon renderings of a 5-year-old, or worse, a clueless adult. At least I hope so.
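For what it's worth, the leaf recipe above translates fairly directly into code. The sketch below is only a rough illustration, not anything from the book or from Flash itself: it writes a stylized five-lobed outline, veins and a stem to an SVG file, and every coordinate, angle and color in it is an assumption made up for the example.

```python
# Rough translation of the leaf "recipe": five lobes drawn as out-and-back
# curves to a tip, veins radiating from the base to each tip, plus a stem.
# All numbers and colors are invented; the result is stylized, not botanical.
import math

CENTER = (150.0, 170.0)                     # leaf base in SVG pixel coordinates
LOBE_ANGLES = [-150, -115, -90, -65, -30]   # degrees; -90 points straight up
LOBE_LENGTHS = [70, 95, 110, 95, 70]        # lobes nearest the stem are smaller

def polar(angle_deg, radius):
    a = math.radians(angle_deg)
    return (CENTER[0] + radius * math.cos(a), CENTER[1] + radius * math.sin(a))

def leaf_svg():
    cx, cy = CENTER
    path = [f"M {cx:.1f} {cy:.1f}"]
    for angle, length in zip(LOBE_ANGLES, LOBE_LENGTHS):
        tip = polar(angle, length)
        out_ctrl = polar(angle - 12, length * 0.6)   # bow outward on the way up
        back_ctrl = polar(angle + 12, length * 0.6)  # bow the other way coming back
        path.append(f"Q {out_ctrl[0]:.1f} {out_ctrl[1]:.1f} {tip[0]:.1f} {tip[1]:.1f}")
        path.append(f"Q {back_ctrl[0]:.1f} {back_ctrl[1]:.1f} {cx:.1f} {cy:.1f}")
    veins = "".join(
        f'<line x1="{cx:.1f}" y1="{cy:.1f}" x2="{polar(a, l * 0.9)[0]:.1f}" '
        f'y2="{polar(a, l * 0.9)[1]:.1f}" stroke="#5a3d1e" stroke-width="1.5"/>'
        for a, l in zip(LOBE_ANGLES, LOBE_LENGTHS)
    )
    stem = (f'<line x1="{cx:.1f}" y1="{cy:.1f}" x2="{cx + 8:.1f}" y2="{cy + 50:.1f}" '
            'stroke="#5a3d1e" stroke-width="3"/>')
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="300" height="260">'
            f'<path d="{" ".join(path)}" fill="#a9742f" stroke="#5a3d1e"/>'
            f'{veins}{stem}</svg>')

with open("leaf.svg", "w") as f:
    f.write(leaf_svg())
```

Open leaf.svg in a browser and you get exactly the kind of "good enough" leaf the exercise is after; changing the angle and length lists changes the species.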
Akaneiro: Demon Hunters Kickstarter interview
Written by Charles Husemann on 1/14/2013 for PC

The folks at Spicy Horse recently launched a Kickstarter campaign for their game Akaneiro: Demon Hunters, a free to play action RPG. Curious about what they were trying to get out of the Kickstarter (other than money), I reached out to the company and got the following response from American McGee, the CEO and Founder of Spicy Horse Games.

When you started in the industry did you ever think you would have access to something like Kickstarter? What do you think the biggest impact this has on the industry as a whole?

When I started in the industry there wasn't even a "WWW" on the Internet! We had to search for stuff "online" using something called Archie. And when it snowed we wrapped barbed-wire around our bare feet for traction. But we did have something called "Shareware," which allowed users to interact more directly with developers and their games. At that time, developers could distribute independently and earn a living on shareware related sales, which they handled directly. It was a kinder, happier time. Except for the barbwire feet; that pretty much sucked.

Kickstarter feels to me like a much more democratic method of raising awareness and funding for game projects. There's not the "Top-10" funnel you have on places like the mobile app stores, and obviously no publisher control over what goes up or gets attention. It's also allowed us a much closer form of communication with our audience. If the trend continues (and I hope it does) I think it means more people can see their ideas brought to life without having to rely on bankers and investors. That's great because in reality, bankers and investors don't get behind speculative ventures; they want to put their money into things that are already making money. That's great if you've already managed to build something and want to make it grow faster, but not so great if you're just starting or launching something.

Why did you choose to launch a Kickstarter for Akaneiro: Demon Hunters? Was this always planned?

It wasn't always planned, but we've been watching and discussing Kickstarter for a while. For us the question was not when or if, but "what." Having paid attention to other campaigns that have succeeded or failed, we felt it best to focus on something people couldn't dismiss as vaporware, something solid and playable. Akaneiro made sense because, while development had come to its natural end, we still had a long list of features and ideas we felt would make it even better. Turning to the audience for backing on those things took the place of what we might normally have done: gone to our publisher for more funding. Being without a publisher is usually a blessing, but when it comes to the financial limitations forced upon us by being a small indie, not having a publisher can be a real pain.

Now that we've actually launched the campaign, I realize we were myopic in our understanding of the value it could bring. We're seeing support from our audience, which is wonderful, but also benefiting from all the marketing exposure the campaign is generating. That exposure has translated into interest from publishers, financiers and a long list of potential partners. It's got me thinking that this is the way ALL of our games should be launched!

You launched a Kickstarter to complete the game and "realize the most complete version of the game" - could you talk about exactly what this means in terms of actual features?
What happens if you don't get the full amount of the Kickstarter?

It's pretty simple. We finished the game per the schedule, design and budget that we originally set out with. During development we came up with a ton of additional ideas for making the game even better. We've put most of those up on the Kickstarter page, and we're asking the audience to decide whether or not they'd like to see those things in the game. If the campaign doesn't fund, then those things could still appear in the game, but the timing will be different. With funding we can keep the entire development team focused on Akaneiro until those items are finished. Without it, we need to move some of the team onto new projects, which means work on Akaneiro will progress more slowly. This isn't so much about "fund this or you'll never see these features." It's more about, "fund this and we can make these features happen much faster!"

Could you talk about how you determined the amount of the Kickstarter request? Did you consider doing some kind of Founders program like MechWarrior Online did?

It's based on the amount of time and number of developers we're going to need in order to support implementation of those features, and I added to that a reasonable budget for additional marketing and community support. As with most development studios, about 90% of our burn-rate comes from paying salaries, from people. So this is simply the combined cost of all the people required to implement and support what's listed in the campaign. As I mentioned previously, our planning (which has existed for 18+ months) calls for those people to move onto a new project when they wrap major development on Akaneiro. Developing and releasing new content is critical, at least until we stumble on a life-sustaining hit. As for a Founders program… we didn't really think about other methods of fund raising.

Given the sheer number of gaming Kickstarters that are out there, are you at all concerned about gamers being either mentally or financially burned out by them?

My biggest concern was that we present a high quality offering that would rise above the noise. We studied other campaigns that worked well and tried to emulate their best features. It's a beauty pageant, but there's also a huge amount of live interaction that must be maintained throughout. What I'm realizing now is that a campaign is only as effective as you drive it to be; it requires constant attention and feeding.

How did you come up with the various levels of the program? Did being a free to play game change how you determined your levels?

We covered a wall in ideas and threw ninja stars at it! Seriously, we based it on what we saw was successful with other campaigns. And we're continuing to update on a daily basis, mainly in reaction to backer feedback. It's been a very organic process.

We'd like to thank American for taking the time to answer our questions as well as Shannon for coordinating the interview.

* The product in this article was sent to us by the developer/company for review.

Hi, my name is Charles Husemann and I've been gaming for longer than I care to admit. For me it's always been about competing and burning off stress. It started off simply enough with Choplifter and Lode Runner on the Apple //e, then it was the curse of Tank and Yars Revenge on the 2600. The addiction subsided somewhat until I went to college, where dramatic decreases in my GPA could be traced to the release of X:Com and Doom. I was a Microsoft Xbox MVP from 2009 to 2014.
THE LEGAL AGREEMENT ("AGREEMENT") SET OUT BELOW GOVERNS YOUR USE OF THE GAME CENTER SERVICE. IT IS IMPORTANT THAT YOU READ AND UNDERSTAND THE FOLLOWING TERMS. BY CLICKING "AGREE," YOU ARE AGREEING THAT THESE TERMS WILL APPLY IF YOU CHOOSE TO ACCESS OR USE THE SERVICE. IF YOU ARE UNDER THE AGE OF MAJORITY, YOU SHOULD REVIEW THIS AGREEMENT WITH YOUR PARENT OR GUARDIAN TO MAKE SURE THAT YOU AND YOUR PARENT OR GUARDIAN UNDERSTAND IT.

Apple Inc. is the provider of the Game Center service (the "Service"), which permits you to engage in game related activities, including, but not limited to, participation in leader boards, multi-player games, and tracking achievements. The Service may not be available in all areas. Use of the Service requires compatible devices, Internet access, and certain software (fees may apply); may require periodic updates; and may be affected by the performance of these factors.

To use the Service, you cannot be a person barred from receiving the Service under the laws of the United States or other applicable jurisdictions, including the country in which you reside or from where you use the Service. By accepting this Agreement, you represent that you understand and agree to the foregoing.

As a registered user of the Service, you may establish an account ("Account") in accordance with the Usage Rules, below. Don't reveal your Account information to anyone else. You are solely responsible for maintaining the confidentiality and security of your Account and for all activities that occur on or through yo
Embedded Battle Royale

Faster chips and the need for connectivity help Linux and Windows challenge traditional embedded operating systems

By Brian Santo

Around 90 percent of the microprocessors sold commercially end up in embedded systems, often not even recognizable as computers. Cell phones, factory controls, microwave ovens, network switches, automobiles, printers, MP3 players, and singing greeting cards fall into this category. As such systems connect to the Internet and the trend toward ubiquitous computing accelerates, the market for the operating systems that run embedded processors is on the verge of exploding.

[Chart: Top 10 Vendors of Embedded Software Tools & Real-Time Operating Systems]

Anticipating this boom, Microsoft Corp., Redmond, Wash., the 800-pound gorilla of the software world, entered the embedded operating systems (OSs) market about five years ago with WinCE. Less than two years ago, another contender stepped into the ring--Linux, an OS championed by the open-source software community. Microsoft and the Linux crowd are just beginning to wrest market share from companies such as Wind River Systems, OSE Systems, QNX Software Systems, and Green Hills Software; all cater to embedded system developers with specialized real-time OSs (RTOSs) and have long dominated the market [see table].

The competition affects more than the number of OSs sold. Whereas software companies charge a per-copy royalty for their OSs, plus fees for associated products and services, Linux is royalty free. No two Linux vendors have the same strategy for making money, but basically they all offer software for little or nothing, while also charging for associated goods and services such as development tools (compiler, debugger, simulator, and so on) and support. And whereas commercial OS companies bar all access to their source code, Linux vendors give programmers access to the source code, and let them add or subtract anything to or from it. An OS that is royalty free and open source is a shot fired across the bow of every company that sells embedded OSs.

A spreading market

Many embedded systems employ an RTOS, distinguished for its ability to respond correctly to a stimulus within a set period of time, usually a few microseconds. For a number of reasons, real-time operation is beyond the capabilities of OSs designed for computers, such as Microsoft Windows and NT, MacOS, and Unix [see "Defining Real-Time Operating Systems"]. Computer OSs such as Unix or Windows occupy between hundreds of kilobytes and hundreds of megabytes of memory, so large that they must reside on capacious hard drives. An RTOS may require as little as 10KB of memory and almost never more than 100KB. Any embedded OS must reside in non-volatile read-only memory (ROM). Depending on the system, it's the designer's choice to run the OS either from ROM or, if the OS must run faster, volatile random access memory (RAM). Whether ROM or RAM, all IC memory is relatively expensive. Since embedded systems tend to be highly cost-sensitive, memory costs, and therefore system costs, can be reduced when the OS is reduced to its absolute minimum size.

Now a confluence of factors, including denser IC memories and more powerful microprocessors, has expanded the market for embedded operating systems beyond traditional RTOSs. As IC memories get denser, embedded system developers are finding that they can use scaled down versions of standard OSs that allow them to add more features to their systems.
There are miniature versions of Linux or Windows that can fit in a few megabytes of memory. Using a larger OS and more IC memory than an RTOS would have required is more expensive, but system developers hope to compensate by cutting system and application development time.

Embedded microprocessors have also gotten more powerful, migrating from 8- to 16- to 32-bit devices. "We saw the fat part of the market in 32-bit applications," said Phil Shigo, lead product manager for Microsoft's embedded appliance platform group. "We can create a 32-bit-based system with memory and network interfaces [on a single chip] and drive it down to costs that would've been associated with 16-bit systems in the past." Rice cookers, industrial robots, and other less demanding systems can make do with an 8-bit processor, but consumer electronics and communications equipment--the "fat part of the market"--benefit from 32-bit power and flexibility. Processors with 32-bit architectures can directly address a much larger memory space and so can handle much bigger data sets. Moreover, they will almost always have many more registers available for processing data. That means more data can be kept in the microprocessor itself, resulting in less frequent memory accesses than 8- or 16-bit devices, which saves clock cycles.

Still another factor is contributing to the rise of Windows and Linux in the embedded world. Many newer products that use embedded microprocessors, such as networked game consoles, set-top boxes, and cell phones, have either relaxed timing requirements or have no real-time requirements at all, so don't need an RTOS.

Enter Linux

After Microsoft began to target the embedded market, the Linux community saw that many of the same trends that make an embedded version of Windows viable also suit Linux. Linux, first released in 1991, is inherently modular. The kernel (the fundamental elements of an OS, such as memory management and file management functions) is approximately one megabyte in size. It can be easily extracted and then supplemented with modules appropriate for embedded systems. Microsoft and Linux's proponents--Lineo, MontaVista Software, and Red Hat, among others--have sculpted pint-sized versions of their respective OSs to run on a host of embedded devices that don't require real-time performance. All are excited by a thriving embedded OS market (counting in development tools and services as well). According to Venture Development Corp., of Natick, Mass., the US $1.11 billion market of the year 2000 should more than double to $2.62 billion by 2005, not including the value of OSs that embedded systems developers write themselves. The market researcher estimates that OSs and OS services such as support and maintenance represented 56 percent or so of the 2000 total, or about $626.8 million.

Embedded Linux OSs made but a small splash in their first year of commercial availability. Worldwide shipments of such software, development tools, and related services in 2000 took in an estimated $28.2 million, or about 2.5 percent of the market. But Linux's slice of the embedded pie will grow quickly: Venture Development estimates that by 2005, shipments will soar to $306.6 million, a compound annual growth rate of 61.2 percent that will give Linux 11.7 percent of the projected 2005 market.
Notable wins over the last 18 months include TiVo personal video recorders, Sharp's Zaurus personal digital assistant (PDA), Motorola's DCT-5000 set-top box, and Ericsson Business Innovation's BLIP (Bluetooth Local Infotainment Point). This last is a communications hub based on the Bluetooth short-range wireless standard that facilitates interconnectivity between Bluetooth-enabled devices, such as PDAs and mobile phones, and the Web.

Behind the growth of high-octane embedded OSs is the push to connect more and more devices to the Internet. "Devices that can communicate with other devices are becoming dominant in the embedded market," said a recent Venture Development report. The company identifies telecom/datacom equipment and consumer electronics as the two largest growth categories in embedded systems. "Consumer electronics is a major category where developers are looking to use Linux," said Stephen Balacco, a Venture Development analyst. The area is high volume and cost-sensitive, so "if you can reduce licensing costs by reducing run-time royalties, you will be more competitive."

A huge coup for Linux would be to make it into cell phones--and at least one large Linux vendor is trying to do just that. Red Hat Inc., Research Triangle Park, N.C., which provides Linux-based operating systems for servers, computers, and embedded systems, has partnered with 3G LAB, Cambridge, UK, developer of mobile communication system software for 2 1/2- and third-generation mobile wireless handsets. The new cell phone OS will be based on Red Hat's open-source embedded real-time eCos (for embedded Configurable operating system).

Manufacturers in Asia are trying to beat Red Hat to the punch. Some are now targeting the European market with low-end mobile phones that incorporate Linux, according to Julian Harris, 3G LAB marketing manager. They chose Linux, he explained, because it enables them to develop phones and get them to market quickly. "I wouldn't expect to see high-end, feature-rich models for some time, but do expect to see some competent, less feature-rich models soon," he said.

Reacting to Linux

The entry of Linux and WinCE has roiled embedded-OS waters. Wind River Systems Inc.'s fortunes are emblematic. Its stock has withered from $50.62, the 52-week high as of late July, to the $13-$16 range at press time. In part that's due to the economy. The Alameda, Calif., company's stock price tracks with the downtrodden technology sector at large, but observers say Wind River's troubles also reflect investor concern about the threats from Microsoft and especially Linux.

In reaction to the open-source onslaught, Wind River purchased Berkeley Software Design Inc. (BSDI), whose BSD Unix is often used in networking equipment. For Wind River, this is a new market in which both Linux and Microsoft's NT OS are strong. With the purchase, the company also gains stewardship of FreeBSD, a free version of BSD Unix, and thereby a position in the open-source market.

Other RTOS companies are trying to parry the Linux challenge as well. In Canada, QNX Software Systems Ltd., Kanata, Ont., has moved to address the royalty-free issue by offering free downloads of its QNX 6.1 OS for evaluation purposes. Increasing time-to-market pressures dictate that products start being developed as soon as possible. A developer may have made a determination to use a traditional RTOS, but does not want to wait to initiate product development while it evaluates the RTOSs available or negotiates the best price with various vendors.
So some system designers start developing their products around Linux because it is freely and easily available and can be used immediately; but once the project is under way, these developers switch to a commercial OS. QNX is betting that some developers will start development projects with QNX 6.1 (just as some now start with Linux) because it is easily available, but that in the end, they will stick with QNX. Green Hills Software Inc., Santa Barbara, Calif., is also shifting to a business model in which it offers its OSs on a royalty-free basis. The company charges for services and tools for its Integrity and ThreadX RTOSs, which are aimed at applications with the strictest real-time and OS size requirements. Linux is never appropriate for embedded applications, according to John Carbone, the company's marketing vice president. It's "a square peg for a round hole," he said. At the same time, the company recognizes that some people want only Linux and it has modified its Multi 2000 tool set to support Linux development. Mentor Graphics' Embedded Software Division, based in San Jose, Calif., is trying to fight on two fronts. The company recently acquired the VRTX RTOS, and has allied itself with OSE Software, San Jose, Calif., to take on Wind River. "OSE has a very robust real-time OS," said Jerry Krasner, an analyst with Electronic Market Forecasters, Waltham, Mass. "It's got the security features, it has features for network interconnectivity. But it lacks [development] tools. Mentor Graphics has VRTX, which isn't growing, but it does have the tools. Mentor and OSE are a threat to Wind River with a robust telecom-oriented RTOS with good tools." Mentor Graphics is also pursuing the consumer business with an interesting pitch. It expects that companies in that field will be moving toward a system-on-chip approach, where functions once implemented in discrete parts are brought on board the microprocessor. And when entire systems are implemented on a single chip, including an OS, performance can be increased, and costs reduced. As Mentor's primary business is design-automation tools, the company hopes its unique combination of hardware design expertise and OS experience will appeal to developers of embedded systems-on-a-chip. Back in the state of Washington, Microsoft has not been idle. It aims to blanket the entire electronics market with a family of OSs that span the spectrum from the smallest embedded applications to large computing systems. To that end, the company stripped Windows down to essentials and rebuilt it as Windows CE (WinCE), which it introduced in late 1996. This OS has a relatively compact kernel and accommodates additional features for embedded applications such as add-on modules or components. Microsoft recently rewrote the WinCE kernel to make it capable of real-time operation. That was version 3.0, released in the latter half of 2000. Microsoft has also released an embedded version of Windows NT and is developing a super-small OS, called Stinger, suited to smart cards. Scheduled for next year is a new version of WinCE that is now in beta test. Code-named Talisker, it is being optimized for smart phones, PDAs, and cable set-top boxes. But even mighty Microsoft has been compelled to counter Linux. It has offered to share access to its source code with its customers, a first for the company, which has jealously guarded the source code of its PC OSs. It has also reduced its licensing fees and royalties, according to Venture Development's Balacco. 
Selecting an embedded OS

OS vendors of all stripes agree: the technical differences among RTOSs, WinCE, and embedded Linux are minimal. By now, the choice of OS is so much a business decision, it is often made at the corporate level, according to Jim Ready, CEO of Linux vendor MontaVista Software Inc., Sunnyvale, Calif. "Sometimes it comes down to which side of the iron curtain you want to be on: open source or proprietary," Ready said. "If either can meet your requirements, it can be a top-level decision to go open or closed. It's getting to be a matter of predisposition."

The assumption that the OS is important enough to drive a system developer's choice of hardware and development tools is sound--sometimes. But when any given developer sits down to create a new system, the choice of OS will be just as likely a secondary or even tertiary consideration behind the processor and the development tools. A case can be made that hardware is slightly more often the chief concern. The developer will first seek a processor that can support the functions wanted in her product. Hardware is the most expensive category in any product's bill of materials, so this is the first and best place to contain costs. Once a company decides upon a processor, its OS options tend to narrow. Every OS has to be modified to support any given processor, and with so many processors available, no OS vendor has the resources to support them all. So every OS is associated with a finite set of chips.

Similarly, each suite of development tools is associated with a specific OS or set of OSs. Linux is tightly coupled with GNU tools, which also happen to support a number of commercial RTOSs. (GNU stands for GNU's Not Unix, a project of Boston's Free Software Foundation.) Wind River's RTOS, VxWorks, is closely linked with the company's Tornado tool suite, and WinCE is tied to Microsoft's Platform Builder tools [see table]. "Tools are an important concern--if my application will work as well with this OS as with that OS, then I look at tools," said Alex Doumani, vice president of engineering at a leading vendor of development tools, Applied Microsystems Corp., Redmond, Wash.

Another point: is the pool of developers large enough to write applications a company can't develop on its own? For WinCE and Linux, the answer is a loud "yes." Microsoft regularly boasts of the millions of developers worldwide familiar with its development tools, while estimates of the developers familiar with GNU tools range from the high hundreds of thousands to low millions. Programmers may begin using one or both of these tool sets in college. Wind River's Tornado tools are favored by embedded system developers, who, however, aren't nearly as numerous as the people familiar with the Linux or WinCE toolsets.

Whom do you trust?

If Linux isn't technologically superior, and the popularity of its tools is only mildly compelling, why is its potential impact on the embedded OS market so huge? Simply stated, many view it as the check and the balance to Microsoft. Microsoft has a history of taking open standards, engineering proprietary improvements, attempting to establish its re-engineered version as the standard, then charging handsomely for it. It also has a history of wresting control of markets from partners, marginalizing rivals, and driving others out of business. Whether its activities are in fact legal or ethical, some people believe they are neither.

"There is a real, deep distrust and animosity against Microsoft," said Arthur Orduna, a former Wind River executive, now with Canal+ Technologies, Paris, an interactive TV company. The embedded community is an old one and has the hacker mentality that comes from the advanced computer sciences. Its members pride themselves on elegance and compactness of code, and their prime example of what not to do is Microsoft. These developers will not use a Microsoft OS unless absolutely necessary. And in the embedded market, rarely is any one OS an absolute necessity.

Linux marketers play on the anti-Microsoft bias, pointing out how often computers that run Windows and Windows NT crash. It's not entirely fair to tar WinCE with that same brush, but in a competitive market, it's common for rivals to try to spread fear, uncertainty, and doubt (FUD) about each other. MontaVista's Ready brought up another issue: "The scourge of worms and viruses when the world is based on one platform with known weaknesses." Microsoft has over 90 percent of the desktop OS market. The size and uniformity of that market make it vulnerable to computer viruses and worms, which attack weaknesses in OSs. Now imagine that one embedded OS comes to dominate the embedded systems market. Then a virus could affect heating systems, kitchen appliances, motor vehicles, everything that runs that OS and is linked to the Internet. That scenario is highly speculative, but it is an argument for ensuring true competition in the embedded OS market.

Of course, Microsoft and the commercial RTOS companies are not above doing some FUD-mongering of their own. Microsoft's Shigo referred to two chinks in Linux's armor: "Linux in and of itself is not a real-time OS. You need to license real-time extensions. Is real-time important to you as a developer? Real-time systems typically require long-term support and maintenance. You have to ask, 'Where's my Linux vendor going to be five years from now?'"

Detractors of Linux also fan concern about the requirements imposed by the GNU General Public License (GPL). The concern is that if a developer using embedded Linux creates a differentiating feature by writing an extension to Linux, that extension cannot be proprietary and must become the property of GNU, and therefore part of open-source Linux.

Rich Larson, senior vice president of sales and marketing at Lineo Inc., Lindon, Utah, a vendor of Linux-based operating systems, tools, and services, said that's simply false. "You can create your own [intellectual property], retain the rights, and we can ensure there are no violations," he said. The GPL does preside over the licensing of proprietary additions to Linux. The issue of violations arises when a developer uses a block of code licensed under the GPL, makes a few million copies of a product, and then finds out that someone else owns that block of code and wants 50 cents a copy for it. Lineo said it would indemnify its customers against that by means of its new GPL scanning tool, which scans the Linux source code used by Lineo customers in their products. If the scanner finds any code that must be licensed for a fee, Lineo will write new code to replace it.

Prognostications

So who will come out on top in this OS battle royale?
Industry analyst Krasner is among those who are leaning toward Linux, in part on the merits, but also because he's skeptical of Microsoft's prospects, at least with WinCE. "Less than 5 percent of embedded system developers are willing to use CE. Many feel that Microsoft has over-promised and under-delivered again," he said. "Plus it continues to charge royalties." MontaVista's Ready is not prepared to discount Microsoft. He noted that companies with broad product portfolios employ many OSs in different products, and they would like to standardize. "Customers do complain that there is a Tower of Babel of OSs, and they'd like to get down to one or two," he said. "Eventually there will be Linux and Microsoft and that's it. Both have the benefit of being standards. Linux has the charm of not being from one vendor." Green Hills' Carbone is less sanguine about Linux's chances. "There will be a squeeze," he said. "We're O.K., Wind River will be, too, but the Linux vendors aren't making a lot of money, and I don't know how long they all can survive in that mode." Whoever remains, Venture Development predicts that embedded Linux is going to have its effects. The research firm says it expects that growth from run-time royalties (charged on a per device basis) will slow as a result of market forces from both the open-source movement and embedded OS vendors offering royalty-free solutions. BRIAN SANTO is a former editor of Spectrum. Now, in between PTA meetings in Portland, Ore., watering the garden, and making dinner, he plays traditional Zimbabwean music with Dziva, an eight-piece marimba ensemble. The competition among vendors of operating systems for embedded microprocessors will not conclude any time soon. New developments occur weekly, if not daily. Fortunately, the trade press is invaluable for keeping up on the subject; and while subscriptions may be hard to come by unless you're in the business, they all maintain publicly accessible Web sites. Notable resources on embedded design include Electronic Engineering Times (http://www.eet.com) and EDN (http://www.e-insite.net/ednmag/). ZDnet is a good source on software in general and operating systems (OSs) in particular (http://www.zdnet.com/zdnn/software/). Slashdot (http://slashdot.org) provides insider views on OSs and embedded system development that are informative and entertaining, although often esoteric. Peter Wayner's Free For All: How Linux and the Free Software Movement Undercut the High Tech Titans (ISBN: 066620503) provides a look into the current OS wars and goes into detail on the stakes involved. The book was published in 2000 by Harperbusiness. 
On the Editor's Desk

Essen, day 1 - Tuesday, October 7, 2008

I'll post a few times, tonight. First, check out my story on ONVIF's announcement and presence here. Inevitably, the PSIA put out a press release today as well. I respect everybody involved in both organizations, and I've written a number of times about the value of standards (though I'm not sure I've completely decided on a personal position on implementation and the finer points), and the technology is over my head, so this is what I'll say right now about these two standards efforts: They are a contrast in styles. On the one hand is the PSIA, which has got a little bit of a burr in its saddle and wants to move, move, move. On the other is the ONVIF, which is deliberate and dots all of its Ts (PSIA's pages of copy dedicated to membership discussion? 2. ONVIF's? 25). Both are populated by a lot of people who make it easy to agree with them.

Update from Amsterdam - Sunday, October 5, 2008

On my way to the Essen show, I noticed this store front display at Versace, in Amsterdam: Gold-plated cameras all staring down the newest high-heeled boot? Who knew surveillance systems could be so hip? Now, if I could just discover who OEM'd those enclosures for Versace...

That editorial I wrote - Friday, October 3, 2008

Okay, so the feedback is starting to come in regarding my editorial in the October paper. In it, I sort of tepidly endorse Barack Obama. You can read the editorial for my reasoning, so I won't go into it here, but the central gist of it is that while his tax policies are likely to be worse for business owners than John McCain's, I think Obama's long-run vision for energy is a game-changer. I think energy is the single most important issue of our time, and I find McCain/Palin to be covering their eyes and pretending the problem isn't there. They'd be the sort of captains who advocate more bailing (or maybe drilling) as everyone else is jumping on the life rafts. If you disagree with me, I'm okay with that. And we've gotten both positive and negative feedback that's created some cool dialogues. What I won't tolerate, however, are the cowards who've called our offices, refusing to identify themselves, and yelled into the phone that they're canceling their subscription, blah, blah, and then hung up. What purpose does that serve? I'm sorry if you've come to expect so little from your industry publications that an editorial made you angry and you didn't know what to do about it, so you lashed out in the only pathetic power grab you could think of. But, you know what, I disagree with editorials, on issues big and small, in all kinds of papers I respect (the Wall Street Journal and New York Times, among them), and I rarely spite myself by denying myself their content in the future. I love to hear people disagreeing with me. Arguing is one of my favorite past-times. And maybe I do stir the pot on purpose sometimes. But I won't engage with people throwing around ad hominem attacks and setting up strawmen to knock down. So, fire away, but keep your discourse civil and intelligent. And please acknowledge that people can hold opinions opposite to yours without being "ignorant" or "biased." Because that's what opinions are: biases.

Off to Essen

Hey all, I'll be in transit to Essen for the next couple of days, so I won't be around to moderate comments much. Still, if you comment, I'll get to them as fast as I can.
If you'll be at the show, drop me a line via Twitter and we'll hook up.

The Switzerland of standards - Thursday, October 2, 2008

The folks at Milestone today pointed me to a blog entry by John Blem, their CTO, regarding this whole standards issue that's been getting a lot of attention (from me, anyway), with the PSIA, SIA, and ONVIF (Sony-Axis-Bosch) initiatives. I'm going to ignore for a moment that he's linking to John Honovich's standards post and not one of mine (look how big a person I am (actually, you all know how petty I am. Who am I kidding?)) to get things started, and point out a few interesting things Blem has to say about the whole situation.

Why do we care what Milestone thinks? Well, without throwing my weight behind anyone (and I'm pretty skinny, anyway), I have to say that I hear more often about Milestone's "openness" than anyone else's. That's just a fact.

Almost daily, I get questions with regard to standards being set on the camera or hardware side. Specifically, asking me why Milestone as an open platform company is not leading the charge for one of these standardizations. My answer is always the same: As an open platform software provider, we will adopt any standards emerging, but obviously we do not want to take sides when we plan to support everything. It is more important for us to follow all these standards instead of creating them.

But, jeez, doesn't that cost Milestone an awful lot of money, having to constantly adapt to all of the new ways of sending information about? Wouldn't the company be well-served if there was one universal way of communicating? And I guess there's another implicit argument here as well: That there is a need to take sides. Theoretically, there could be one universal standards body that everyone got behind and there wouldn't be a need to take sides. And, also theoretically, if Milestone was active in that one universal body, wouldn't that help provide it validation? Couldn't taking sides also end the sides-taking? I'm not sure about the answers to those questions.

Then John goes into a well-reasoned discussion of who benefits from standards and why. I agree with about 99 percent of it so I won't reproduce it here. Just go read it. Done with that? Okay, back to the blogging:

On the analytics side, you see standards being driven as well. I cannot be sure of the motivation, but the stance that Milestone takes on this is that you cannot standardize something that has not been invented yet. What I mean by this is that the sheer speed of innovation on that particular side is moving so rapidly that it is impossible to standardize everything at this point. Eventually, I think we will see a polarized market in both the analytic and camera side where we have value-driven products versus price-driven products. This will ultimately lead to a subsequent shakeout in the market.

I agree with a lot of this, but I think this is where the standards talk often gets confusing for integrators and end users and I think there's a point to be made here. Sure, analytics are still very young and I agree that you can't standardize before the largest part of the innovation happens, but I don't think using standards (here being equated with price-driven) and having differentiating features (value-driven) are diametrically opposed.
I've had it explained to me a couple of times that, for example, you can use standard H.264 encoding that could be played back on any QuickTime viewer, but that doesn't limit you from having all kinds of cool features that appear in your playback and not in other people's playbacks. So, you're using a standard way of communicating, but you have better stuff to say than other people. We're standardized on the English language, in general, but some people are better talkers/writers than others, right? I don't think that's as bad an analogy as it might initially seem.

One could wonder, however, why companies claiming to seek a global standard do not join an already established standards committee instead of launching a competing one. To me, it seems contradictory to have several standards driven at the same time when the overall message is that there should be a common standard. Maybe it is more important to be in the driver seat instead of trying to get as many companies as possible represented under one common standard committee?

Well, I think maybe John has Bingo here, but it's also still very early in this process. It's not impossible that these competing (and only we observers say they're competing - it's not necessarily true they're working at cross purposes) entities will eventually come together to work out the best standard for the industry as a whole. That's kind of where I have my hopes pinned. I'll be seeing ONVIF's news at Essen next week, so stay posted.

Time to go faux? by: Tess Nacelewicz - Wednesday, October 1, 2008

Picking up on my blog of a few days ago about homeowners putting security systems signs in front of their house when they don't have a security system, here's another story along the same lines. It's from the Minneapolis-St. Paul StarTribune and it says that you can get Brink's and ADT signs on eBay. It also gives information and assessments of other faux security measures such as fake cameras and an interesting product that simulates the light of a TV in a room. (Did you know that many burglars are afraid of TVs?) I couldn't find any ADT or Brink's signs for sale on eBay, but I did find one for an APX sign. Here's a posting for "Security home camera warning signs 4 ADT'L stickers" - but that's not ADT, that's an abbreviation for "additional." Here's my advice. If you want to go faux, you should do it right. And I've got good news for you. You can purchase the very same fake security stickers we had at the Entwistle house when I was growing up. Click here to "buy it now!" There are 20 available and they're only $2 a piece. Attractive too!

WSJ hearts Vumii - Tuesday, September 30, 2008

Night-vision manufacturer Vumii picked up a nice accolade today when the Wall Street Journal tabbed the company for its 2008 Technology Innovation Awards. Basically, the WSJ employs a panel of judges, mostly entrepreneur and inventor types, to evaluate new technology entries in a bunch of different categories, and picks a "winner" in each of those categories, physical security being one. Of all the new stuff out there in physical security - easy-to-configure analytics, browser-based control of alarm systems, video fire detection, wide-reaching PSIM software - Vumii took the prize.
Of course, there's no way to know who entered or even knew about the contest (and I generally don't go in for these kinds of awards), but the WSJ certainly has no stake in the physical security market, you know the award wasn't paid for, and it's interesting insight into what non-industry observers think is valuable in our market. Here's what they had to say:

Vumii Inc. was selected in this category for developing a night-vision camera technology that uses a near-infrared laser to illuminate an area. Most long-distance night-vision cameras "see" in the dark by capturing thermal infrared rays. But these cameras can't read writing or recognize faces, and they can't see through glass. Atlanta-based Vumii's Discoverii technology gets its illumination from an invisible laser beam that produces a high-resolution image that can be captured by standard video equipment. Introduced in 2006, the equipment is being used to monitor a nuclear power plant in Japan and a water system in Pennsylvania, among other uses.

You'll remember I was pretty geeked about this technology back at ISC West. Still, I don't actually think the Discoverii part of Vumii is the coolest. The software the company offers, Sensorii, which offers a panoramic view of the scene you're watching and places what you're looking at, or allows you to create automated night-vision video tours, is what really makes the technology somewhat practical. Good for WSJ for making an interesting choice. If anyone would have picked something else, I'd like to hear what you would have picked. Obviously, another night vision company, NoblePeak, has been winning lots of show awards within the industry. Wonder if they entered this contest.

Apx employees help hurricane victims by: Tess Nacelewicz - Monday, September 29, 2008

Tired of all the depressing economic news? Here's some feel-good news about an alarm company volunteering to help out victims of Hurricane Ike in Galveston, Texas. Apx COO Alex Dunn said: "Our employees expressed a desire to help the hurricane victims, and so we set up a plan to make it happen. While many are concentrating on the financial headlines that are dominating the news, there are people affected by Hurricane Ike who don't have the most basic things: power, phone, or a safe place to live. We came to discover that making a small difference is how you make a big difference. We hope that other people will remember the victims as well."

Security and the election

I'm going to try to look at the presidential election fairly often over the next month for clues about how each candidate will perform for the security industry. There is, on one hand, the simple fact of how they'll perform for the economy in general, and small businesses especially, since the vast majority of security companies are simply small businesses trying to get by in what are increasingly uncertain economic times. But what of the candidates' views on actually keeping people safe? Sure, from terrorism and the like, but also from crime in general. I think this article from the Arizona Republic raises some interesting points about how security has been pushed to the side as the economy dominates presidential discussion. The candidates hardly discussed national and domestic security in Friday's debate. Why?
Recent polls suggest that voters have relegated terrorism to a secondary concern, though it remains a major unresolved issue for the next president. Congressional and non-partisan reports lay out a list of 9/11 Commission mandates that remain unfinished, ranging from tighter transit security to better efforts to interdict weapons of mass destruction. The two candidates have staked out similar positions on bolstering border security, hunting Osama Bin Laden and closing Guantanamo Bay prison. But in the dozen times the two senators cast votes together on homeland-security bills, they agreed only twice. So how are voters supposed to figure out where they really differ?

Well, you can try the candidates' web sites. For McCain, go here, here, here, and here. I'm not 100 percent sure what the difference between "National Security" and "Homeland Security" is, but maybe you can figure it out. For Obama, go here, here, and here. It looks like "Defense" is for fighting overseas and "Homeland Security" is more defending the borders, but there's some bleed. Also, Iraq is separated out for Obama. But if you read all of that, you'll see scant mention of the private security industry.

I think this is a well-made point: Domestically, "we are obsessing about securing the border, but there are lots of other things out there to be concerned about: protecting the food supply, water supply, nuclear plants, natural-gas supplies and so on," said Courtney Banks, chief executive officer of National Security Analysis Worldwide.

Is anyone reaching out to the private security industry? The NBFAA, especially, has a presence on Capitol Hill, but despite their lobbying efforts, there's never much of a mention at all of the private security industry in the public discourse. Everyone's just talking about military and government efforts, but there's no way publicly funded efforts can keep everything safe. It's up to private water companies to protect their water supplies, up to private food manufacturers to make sure their products aren't tainted, up to private natural-gas facilities to make sure their plants aren't attacked and destroyed. CFATS and other government regulations dictate how some of these places must secure themselves, but they are largely unfunded mandates and it's up to the private security industry to figure out how to solve the problems as efficiently as possible. Has anyone suggested tax breaks for private businesses who invest in security? Has any candidate suggested a nationwide private information gathering service, a linking of IP-based surveillance systems? I haven't heard it if they have. Please send anything you see along and I'll take a look and make it widely available.
计算机
2015-48/3653/en_head.json.gz/7036
Machine Learning and Intelligence in Our Midst - Microsoft Research. Date recorded: 6 March 2012. The creation of intelligent computing systems that perceive, learn, and reason has been a long-standing and visionary goal in computer science. Over the last 20 years, technical and infrastructural developments have come together to create a nurturing environment for developing and fielding applications of machine learning and reasoning, and for harnessing machine intelligence to provide value to businesses and to people in the course of their daily lives. Key advances include jumps in the availability of rich streams of data, precipitous drops in the cost of storing and retrieving large amounts of data, increases in computing power and memory, and jumps in the prowess of methods for performing machine learning and reasoning. The combination of these advances has created an inflection point in our ability to harness data to generate insights and to guide decision-making. This talk will present recent efforts on learning and inference, highlighting key ideas in the context of applications, including advances in transportation and health care, and the development of new types of applications and services. Opportunities for creating systems with new kinds of competencies by weaving together multiple data sources and models will also be discussed.
计算机
2015-48/3653/en_head.json.gz/7712
1/9/2013, 09:34 PM. Larry Seltzer, Commentary. Should Microsoft Switch Internet Explorer to WebKit? No. Internet Explorer has made great strides in recent years and is now an excellent, very fast browser. Yet it still gets second-rate treatment from developers for whom IE has the taint of "uncool," and it has a small presence on mobile devices. Perhaps the best thing would be for Microsoft to throw in the towel on their Trident browser layout engine and adopt WebKit, the emerging de facto standard. There are also plenty of reasons not to switch browser layout engines. Just as it has been since Windows 95 was ascendant and dinosaurs roamed the earth, Internet Explorer is the dominant web browser in desktop computer use. But even though there seem to be hundreds of millions of users running it on desktops and notebooks, Internet Explorer gets no respect from the Web developer community, and it often gets second-rate support among desktop browsers. On mobile devices IE is no doubt growing as a share of the total, but still a very small player. The intelligent mobile Web developer focuses on getting his or her web site to look good in the dominant mobile browsers — Safari, the (pre-4.0) Android Browser, and Google Chrome — all of which are based on the WebKit layout engine. This means that Windows Phone and Windows 8 users often run into web site problems in IE 10. Windows 8 users can at least install a different browser, but Windows RT and Windows Phone users have only Internet Explorer. As Microsoft MVP Bill Reiss argues, this is bad for Microsoft's users. He thinks it's time for Microsoft to throw in the towel on their own layout engine, known as Trident and implemented on desktop Windows in the MSHTML.DLL program file, and switch to WebKit. This is really a fascinating proposal. There are plenty of very good reasons to do it. There are also plenty of reasons not to. On the whole, I have to decide against the move, but it's not an easy decision. Caving in to the WebKit juggernaut would reduce a lot of friction that makes life difficult for Windows developers and users. It might even inspire many developers who now shun Windows 8 and Windows Phone to support those platforms, since it would be much less work to do so. And for all the progress that Microsoft has made with IE, there are some areas where it really lags, with HTML5 compliance at the top of the list. Tests just now at html5test.com give me these results (all out of a total of 500 points; higher is better):
Internet Explorer 9 (Windows 7): 138
Internet Explorer 10 (Windows 8 and Windows Phone): 320
Safari 6.0.2 (OS X 10.7): 368
Firefox 18 (OS X 10.7): 389
Chrome 23.0.1271.97 m (Windows 8): 448
So IE10 is a huge improvement on IE9, but it's still clearly at the rear of the pack, and Chrome makes it look really bad. So why not make the switch? I'm a security guy, and security problems often are the first thing to come to mind for me. Most people still don't appreciate it, but IE is probably the most secure browser available, and has been for some time. If you follow vulnerability reports you'll see that WebKit has a high volume of them, and they are fixed on very different schedules in the various WebKit products. Microsoft can fix the much smaller number of security problems in IE on their schedule. By joining in with the WebKit consortium, Microsoft loses some control over the schedule for such fixes. Microsoft would also lose control over feature decisions, some of which involve security.
Consider WebGL, an open standard for high-speed graphics in browsers, supported by all the major browsers except Internet Explorer. Microsoft has decided that WebGL is inherently unsecurable and it won't be in any of their browsers. If they move to WebKit, they don't get to make decisions like this. Reiss doesn't say whether he's speaking only about mobile browsers or also about the desktop, but it's a point worth exploring. Many, many corporate developers write web code with Internet Explorer as their development target. Messing things up for them would be a bad thing. But Microsoft can't decide to make the changes only for mobile, because it's central to Microsoft's marketing that the tablet market is really just part of the PC market. There could be a middle ground, I suppose. Microsoft could provide two browsers, or allow the user (or maybe even the web site) to switch engines. But it's just not something they would do. It's too complicated, and they still get all the downsides of WebKit. Finally, as Reiss himself points out, it's often not a good thing to have one dominant standard. He cites Daniel Glazman, the co-chairman of the W3C's CSS standards working group, who is concerned about the tendency of so many mobile developers to target WebKit rather than standards. WebKit has many features that go beyond standards, and many sites rely on them. If the WebKit-only phenomenon is unstoppable, then the only practical way to deal with it may be to cry "Uncle!" and switch to WebKit. I don't think things have gotten that bad. Microsoft needs to keep developing features in IE/Trident to keep up with WebKit, and then, if Microsoft can produce the market share to justify it, developers will support IE. Probably. It's not a clear decision. What do you think? Please argue in the comment section below. Comment from jeffweinberg: I Disagree. As a professional web developer for the past 10 years, I can't tell you how many hundreds, perhaps thousands, of hours I have wasted on IE bugs. If you multiply that across the entire industry, how many billions of dollars in productivity are wasted on dealing with IE bugs? It's time to get on board with WebKit.
计算机
2015-48/3653/en_head.json.gz/8388
Blizzard Updater: Where it fails. Millions of people have had to settle for it, unless they decide to get their mods from third-party sites. The Blizzard Updater for World of Warcraft is essentially a BitTorrent clone, made to deliver only their content. However, they destroyed the concept of the program by releasing their patches as a background download. This hurts all of their players rather than helping them to play faster. The concept works for a few people, yes: only those with 56kb/s connections. Come patch day, for those without the patch 100% downloaded, the people with the patch 100% downloaded will be playing the game with their updater turned off, therefore defeating the concept of using BitTorrent as their delivery system. If everyone already has the client, and has the program turned off, there is nobody sending the client! Fix your mistake, Blizzard! Make them wait for patch day! Categorized in Gaming Jun Horde Versus Horde I have been thinking lately… In every great battle in the history of the world, there have been traitors and spies. In World of Warcraft, there are neither. What if I wanted to be a Horde, and fight on the Alliance side? What if I wanted to be the Benedict Arnold of World of Warcraft? The Horde and Alliance are stuck on one path in World of Warcraft (referred to from here on as WoW), which is to be that one faction. It’s not good versus evil, but it still changes how you have to play the game: the quests you can do, the cities you can enter without being killed, and the people that you can communicate with. If I were an Alliance, I could not speak to a Horde because the game changes what we say, as if someone who spoke German were talking to someone who spoke Chinese and knew not a word of German. This causes frustration and miscommunication within the game. There is no way to tell the other faction that you wish not to fight them. If I were a Horde, I should be able to build up my honor and respect towards the Alliance, and be allowed to roam their cities and outposts. I should be able to build up skills in the languages of the Alliance, and communicate with those players of the Alliance. Most of all, though, with the newly gained communication and honor for the Alliance, I should be able to fight alongside them in Battlegrounds and Instances, and vice versa. However, there should also be a penalty for gaining honor and respect and then going to war with that faction, like treason, and they should not be allowed to regain honor within the game for the faction that they were at war with if they gained previous honor with them. For example, I am a Horde once again. I have built up my honor with the Alliance, and am respected and allowed to walk their cities. If I attack an Alliance character, or a character that is Honored with the Alliance, I should lose my honor or be punished for doing so. I am curious to see how many people agree with me, because I feel that this truly is a good idea. Please comment with your thoughts, ideas or additions to the topic. Categorized in Uncategorized Oct The Future of Gaming It seems that every so often the genre of games favored by the mass public changes. In 1992, Doom changed the way most people looked at games, quite literally too. The first person shooter was brought to the eye of the gamer, and it looked great. It actually put you in the game; you were the one shooting the demons. Now, let's fast forward to today. The first person shooter is no longer the ultimate answer to gaming.
Today, if you want to play a first person shooter online, you constantly have to look for hackers and cheaters. The wall hack is one huge problem. It allows you to see, in wireframe, through walls. Then you have an auto-aim hack. This always points to the opposing player’s head; no matter how far away, every shot is a head shot. Multiplayer games seem like they are at an end. Until now. The age of the Massively Multiplayer Online Role Playing Game (MMORPG for short) is among us. No longer is hacking possible. All character or player data is stored on a remote server somewhere on the other side of the world. So, in order to hack a character or items, instead of simply changing a few hex codes around (yes Blizzard, I am talking about your Diablo series mistake…), you would have to hack the official server to change it. This is the future. All games should, and will, be like this. While F.E.A.R. is an incredible game, there is still that one flaw in multiplayer games without characters hosted on servers. While the hacks aren’t always there right at release, they will be out within that first month. Soon, games like Planet Side will be everywhere. An MMORPG in a first person shell. Now, can you picture shooting out demons in Doom 3 with 2 million other players at the same time? I know I can. A few examples of this new technology are Hellgate: London and Twilight War; both are the new breed of games called XORPGs, or Extreme Online Role Playing Games. You are in a role playing universe, but you still have the first person shooter elements in the game. Hellgate: London is the sequel to Blizzard’s ever popular Diablo 1 and 2, but made under the new studio of ex-Blizzard employees, Flagship Studios. So basically think a Diablo MMORPG, in first person, with BFGs and swords. Twilight War takes the full-on First Person Shooter genre into the MMORPG world. For example, you get the gun of all guns, the shotgun. You can shoot, jump (yeah Arena Net, that was your Guild Wars mistake…), swim (again Arena Net, I’m talking to you…), and still blow someone to bits. So this is the future, and from what I’ve seen, I like it. But for now, I’ll stick with F.E.A.R., until 2007 that is, when Twilight War hits the streets. Oh, wait, did I mention Twilight War uses the Valve Source engine? Enjoy. Categorized in Gaming Oct Ubuntu ShipIt: Not really free? Well, about two months ago I learned about Ubuntu ShipIt from Digg. Well, today I received my CDs. All 30 of them, actually. I was so happy, to be honest, I almost died. Well, I took my CDs out and put them away, but not before giving a few away. But eventually, about an hour ago, I decided to read the package in case I missed something. I did. A little letter to me on a sticker on the package. Here’s what it said: Canonical Ltd. is a global organisation headquartered in the Isle of Man committed to the development, distribution and promotion of open source software products, and to providing tools and support to the open source community. One of Canonical’s products is the Ubuntu operating system. Ubuntu is developed as free and open source software and can be used, modified, and redistributed without permission and completely free of charge. As part of its promotion of Ubuntu, Canonical Ltd. sends CDs completely free of charge through the mail to users who request them. The software on the CDs can also be downloaded at no cost through our website. The Ubuntu CDs in all shipments are distributed completely free of charge.
For shipping purposes, we declare a 0.26 EUR value for each CD. If you have any questions, please don’t hesitate to contact me personally. Benjamin Mako Hill, Community Development Coordinator. Now, can someone put this in layman's terms for me? I read that thinking he was saying these were DISTRIBUTED free of charge? Then in the last paragraph he says he wants money for the shipping? Also, the website said they were free. No shipping. Now, it says they want $7 from me? I don’t understand these people. And to make this worse, I have a little sticker on here, and the only part I understand is the part that says “€5.00” (5.00 EUR); the rest is in German or some other language. So, before you rush out and order your “free” CDs, make sure you have some money for shipping. Now, since it said feel free to contact him… The response (comment): The CDs *are* free. Totally free. We don’t charge anything. We do not receive any money from CDs we’ve shipped to people. By default, the CDs are shipped without any documentation as to the price of the CD. In some places, mostly in the developing world, customs officials stop the CDs and ask to see some sort of documentation on their *value* (not cost) because they usually charge some portion of the value in tax — regardless of how much was paid for them. It turns out that they simply won’t believe us if we say the CDs have no value or just a few cents. As a result, we declare a value of 0.26 for customs officials who ask because that seems to be the lowest number that works most of the time. If you live in a country where customs stops your package and if they do not believe the “these CDs were sent completely free” line, you may be charged taxes based on the value of those CDs. We do not charge you for the CDs or the taxes. Your country’s government does. In reality, this only happens less than 1% of the time and almost never within the developed world. It is also very rare in orders of less than, say, 100. So if you are sick of Windows, give ShipIt a try. It takes about a month for you to get them, but if you have a slow connection (or a lot of friends), it's worth the wait. Thank you Benjamin Mako Hill for the response, and I look forward to receiving the next version of Ubuntu. Categorized in Linux Oct F.E.A.R. Released I pronounce today: The Day of F.E.A.R. Mark your calendars; today is the start of the official annual holiday “The Day of F.E.A.R.” So every year on this day, October 18th, 2005, skip school, take a personal day, or whatever you have to do to get a day alone with F.E.A.R. For those of you who have been living under a rock lately, F.E.A.R. (First Encounter Assault Recon) is a new action FPS that messes with your mind instead of your senses. While Doom and Half-Life and the other FPSs out on the market challenged you to be afraid of monsters and aliens, F.E.A.R. is a psychological thriller that, instead of making you scream, plays on what everyone is afraid of: the unknown. Every time you see something move, you’re constantly asking yourself, “What was that?” Or, “Was that real?” Or you’re simply flipping out in your head trying to figure it out. The secret to the game is mixing the girl from The Ring, slow-mo Matrix-style fighting, and some melee fighting techniques. One main feature to mention would be that the in-game cinematics add that slow-mo style, so if you try to run, you find yourself moving nowhere fast. For more info on F.E.A.R.,
pick up this month's DVD Edition of PC Gamer; you can’t miss it, it has a picture of the girl from The Ring. The DVD Edition packs the F.E.A.R. single-player demo and a free one-month trial of Conquer, a new MMORPG. So go out and buy F.E.A.R. today. First Post. And so I got my Golden Ticket. I have to thank Eric for it. I don’t know what exactly this is going to be for yet, but I’m trying to decide if I want this to be a project or a blog. But whatever I choose, it is definitely worth checking my email non-stop. Refresh. Refresh. Refresh. Seven hours later… There it is. Thank you Eric, and you won’t regret sending the invitation my way. Categorized in Uncategorized
计算机
2015-48/3653/en_head.json.gz/8688
Third Software Arts location The first award that we got for VisiCalc was from Adam Osborne. Adam was an important visionary, commentator, and entrepreneur in the early personal computer days. He founded a book publishing company (later sold to McGraw Hill) which published computer books. Some of those books were about accounting and included the source code of programs such as General Ledger. That "open source" helped start the use of microcomputers (especially CP/M machines) in small business. He also was an industry pundit and gave a yearly "White Elephant" award for the most important chips introduced in the previous year and to the people that changed the industry for the good. In March of 1980 he gave Bob and me the 1979 award for VisiCalc at the West Coast Computer Faire. There's more information about Adam and a copy of a recording of him giving us the award on my "Adam Osborne Recording" page. To get a feeling for the thinking of an industry visionary at the time, it's worth listening to. At that trade show I also met Dave Winer, later of Userland, and saw his early outliner as he demonstrated to Ted Nelson, of hypertext (and Dream Machines) fame. The award consisted of a circuit board with the winning chips, some engraved words, and a tiny ivory white elephant (he lived much of his life in India). Here is a picture of the award (there's a bigger picture on the recording page) and Bob and me on the cover of the Boston Computer Society's magazine of July-August 1980 holding it: "White Elephant Award" from Adam Osborne, Dan and Bob on the cover of Boston Computer Society publication with it We ported VisiCalc to many different computers. Here is a picture of VisiCalc running on a variety of them: Clockwise from upper left: Apple III, TRS-80 model 3, Apple II, IBM PC, TRS-80 model 2, Commodore PET CBM-80, HP 125, Atari 800 Here are some of the packages it came in: VisiCalc packaging for a variety of computers including from Radio Shack, Apple, and Hewlett Packard A copy of the IBM PC VisiCalc is available for you to try on this web site. For more information about the announcement of the IBM PC, see these pages on this web site: "Thoughts on the 20th Anniversary of the IBM PC" (which includes pictures of my notes in my notebook from those days) and "IBM PC Announcement 1981" (which consists of a transcript of a videotape taken of the Software Arts staff meeting about the announcement on August 12, 1981, including reading the press material and brochures from IBM). The IBM PC version became the most popular. The shipped version supported up to 512K of memory (the maximum we could test it on at the time). We hired many programmers, managers, testers, and others. Software Arts computer room with Prime timesharing minicomputers used to do development, and some of the programmers, testers, and managers working on VisiCalc (probably taken in late 1981 or 1982) In January of 1982, publicity started to pick up. Bob and I appeared on the cover of Inc. Magazine in an article written by Stewart Alsop. Stewart was new to computers (this was his first exposure) but he went on to be editor of Infoworld, start his own publication and the Agenda conference, and later become a venture capitalist. In addition to the article about Software Arts, there was another one about "The Birth of a New Industry" which included Bill Gates, Mitch Kapor, Gary Kildall, Dan Fylstra, Tony Gold, and others, written by Steve Ditlea and Joanne Tangorra. 
Part of it reads: "All five of their companies -- whose combined revenues just missed $50 million in 1981..." Here are some pictures: Bob and Dan on the cover of Inc. Magazine January 1982, and an excerpt from the article by Stewart Alsop about Software Arts Article about the new personal computer software business in Inc. Magazine in January 1982: "The Birth of an Industry: Working in their attics, basements, and garages, seven entrepreneurs tacked together a totally new industry."
计算机
2015-48/3653/en_head.json.gz/14039
Distinguished Visiting Chair | Liu, Jane Win Shih. Research Descriptions. My research focus has been on theories, algorithms, architectures and tools for building real-time and embedded systems from components and validating their timing performance efficiently and reliably. The past two decades have ushered in tremendous advances in technologies needed to ensure predictable timing behavior and enable rigorous validation of real-time systems built from commodity hardware and software components. My students and I have contributed our fair share of techniques for these purposes. Our results are used extensively in PERTS (Prototyping Environment for Real-Time Systems), a system of schedulers and tools which we built in the mid-90s. PERTS puts important scheduling, resource management, and validation theorems and algorithms in a form ready for use by developers to validate, simulate and evaluate design alternatives of systems with critical timing requirements. PERTS was distributed to numerous universities and research laboratories worldwide and has been enhanced and commercialized. My students and I have also developed the underlying principle of an open architecture for real-time applications. A common assumption underlying existing real-time techniques and standards is that the system is closed. To determine whether an application can meet its timing requirements, one must analyze detailed timing attributes and resource usages of all applications that share the platform. The need for detailed information prohibits independent development of components and invariably limits the configurability of real-time systems. Our open real-time system principle, convincingly demonstrated by Windows and Linux prototypes, makes it possible to tune and validate in an open environment the timing behavior of a real-time component independent of other components in the system, and enables independently developed real-time and non-real-time applications to run together. My recent research focuses on technologies for building personal and home automation and assistive devices and services. Some of them are primarily devices of convenience designed to enhance the quality of life and self-reliance of their users, including elderly individuals as well as people who are chronically ill or functionally limited. Other devices can also serve as point-of-care and automation tools for use at home and in care-providing institutions. Examples include smart medication dispensers and administration tools, autonomous home appliances and robotic helpers. These devices are human-centric, meaning that they are used at their users’ discretion, often for the purpose of complementing and compensating for users’ skills and weaknesses. Such a device must be affordable and easy to use. It should be easily configured to work with a variety of sensors and rely on different support infrastructures. It should be customizable according to its user’s preferences and able to adapt to changes in the user’s needs, mindset and skills. A major thrust of our research has been directed towards system architecture, components, platforms and tools for building such devices and services at low cost, including the development of an embedded workflow framework and a simulation environment. Recent results of this work and links to open source software projects can be found at the SISARL homepage, http://sisarl.org.
计算机
2015-48/3654/en_head.json.gz/523
Awk is a pattern-scanning language designed by Al Aho, Brian Kernighan and Peter Weinberger at Bell Labs in the 1970s to provide string munging, file conversion and data laundry services in the data center. It features strong support for regular expressions, provides string-indexed associative arrays, and operates on a pattern-action paradigm: it scans an input file looking for lines that match a particular pattern and then performs a specified action on each matching line. The original version of the language (“old awk”, still available on some Unix systems as oawk) had only the pattern-action control structure; the updated version of awk (“new awk”, available as either nawk or simply awk, still maintained by Kernighan) added user-defined functions, dynamic regular expressions, and a somewhat larger library; and the GNU version of awk (known as gawk) adds the ability to read or write to URLs as if they were files. The Awk Programming Language, by the original authors of the language, is definitive (and is the best “language book” since K&R C). Arnold Robbins, the developer of gawk, has written Effective AWK Programming, which is also excellent. Comp.lang.awk is a low-volume Usenet group. Ronald Loui wrote a surprising paean to awk.
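To make the pattern-action paradigm concrete, here is a minimal awk sketch written for this description (it is not taken from the books cited above, and the field layout and file names are assumptions for the example):

# For every line whose first field matches /^error/, print the second field
# and tally it in a string-indexed associative array; the END rule runs once,
# after the last input line, and walks the array to report the totals.
/^error/ { print $2; count[$2]++ }
END      { for (kind in count) print kind, count[kind] }

Saved as, say, report.awk (a hypothetical name), it would be run as: awk -f report.awk logfile. Every input line is tested against the pattern, only matching lines trigger the first action, and the associative array needs no declaration or sizing, which is much of awk's appeal for quick data laundry jobs.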
计算机
2015-48/3654/en_head.json.gz/965
Posted Jan 27, 2004 Microsoft launches SQL Server Reporting Services By DatabaseJournal.com Staff [From NetworkWorldFusion] Microsoft has added reporting capabilities to its SQL Server 2000 database, rounding out its business intelligence platform with a feature long sought by some of its customers. SQL Server 2000 Reporting Services allows users to program their databases to generate reports, such as a breakdown of sales by region, and then helps manage and distribute those reports. It can pull data from multiple sources including databases from Microsoft, Oracle and IBM, as well as line-of-business applications from SAP AG and others, said Thomas Rizzo, director of Microsoft's SQL Server management team. The article continues at http://www.nwfusion.com/news/2004/0127microlaunc.html
计算机
2015-48/3654/en_head.json.gz/1099
Microsoft Office 2010 Video Tour. So Microsoft Office 2010 has officially been unveiled, and there’s a lot of excitement surrounding the announcement, mostly due to the fact that a large portion of Office 2010 will be available as a free web application online. Microsoft has put together a bunch of videos showing off the new features found in the Office web applications, as well as Word 2010, PowerPoint 2010, Outlook 2010, and all the rest. We’ve put the video that focuses on the web apps up top, and you can watch the rest after the break as well. Microsoft introduces Office 2010 with web apps. Today, Microsoft has introduced Office 2010 at their Worldwide Partner Conference. As rumored over the past few weeks, Office 2010 will bring with it the first free cloud-based Microsoft Office product. This will be Microsoft’s answer to products like Google Docs, Zoho Docs, and other free online office suites. According to the company, Office 2010 web apps will work with Internet Explorer, Firefox, and Safari. You can take a look at the Microsoft Office 2010 technical preview page now, which will soon be open to a limited set of beta testers. Here’s what we know about Microsoft Office 2010: As we said, Office 2010 features the introduction of web apps that are completely free to use. The online version of Office 2010 will include Microsoft Word, PowerPoint, Excel, and OneNote. Now, while these are all free, Microsoft does not see them as a replacement for the full desktop office suite. These apps do not include all the bells and whistles that you’ll find in the desktop versions, but they do put Microsoft on the map as far as free online office suites are concerned.
计算机
2015-48/3654/en_head.json.gz/1361
Mike Burridge. Computer upgrades, repairs, web hosting, design and programming. Computer Terms. A. Access time The time it takes for a device to access data. The access time, quoted in milliseconds (ms) for hard disks and nanoseconds (ns) for memory, is usually an average as it can vary greatly. Together with the transfer rate, it is used to gauge the performance of hard disks and other devices. The lower the number, the better the performance. Applications An application, or package, is one or more programs used for a particular task. For example, word processing, invoicing or spreadsheeting. ASCII (American Standard Code for Information Interchange) Usually a synonym for plain text without any formatting (like italics, bold or hidden text). Since computers naturally use binary rather than Roman characters, text has to be converted into binary in order for the processor to understand it; ASCII assigns binary values to Roman characters. RTF, a Microsoft standard, adds extra formatting features to plain ASCII. B. Backwards compatible Compatibility of hardware or software to older versions of the product or standard. Baud rate The number of electronic signals that can be sent along a communications channel every second. In common usage, it is often confused with bits per second. These days modem speeds are normally measured in bits per second. (See V and Bit.) BC Card Formerly PCMCIA. A standard to allow PCs, particularly notebooks, to be expanded using credit card-sized cards. BIOS (Basic Input/Output System) Software routines that let your computer address other devices like the keyboard, monitor and disk drives. Bit Binary digit, the basic binary unit for storing data. It can either be 0 or 1. A kilobit (Kbit) is 2^10 (1,024) bits, and a megabit is 2^20 bits, which is just over a million bits. These units are often used for data transmission. For data storage, megabytes are more generally used. A megabyte (Mb) is 1,024 kilobytes (Kb) and a Kb is 1,024 bytes. A gigabyte (Gb) is 1,024Mb. A byte (binary digit eight) is composed of eight bits. Bug (See Crash) Boot Short for bootstrap. Refers to the process when a computer loads its operating system into memory. Reboot means to restart your computer after a crash, either with a warm reboot (where you press Ctrl Alt Del) or a cold reboot, where you switch the computer off and back on again. Bus A "data highway", which transports data from the processor to whatever component it wants to talk to. There are many different kinds of bus, including ISA, EISA, MCA and local bus (PCI and VL-bus). C. Cache (See Memory) COAST Cache On A Stick. CD-ROM A CD-ROM is the same as a normal audio CD, except it can store data as well as sounds. A CD-ROM player can be attached to your computer to read information from the CD-ROM into the computer's memory in the same way that a domestic CD player reads information from the CD into your hi-fi.
The advantage of distributing information on CD-ROM rather than other media is that each one can hold up to 680Mbof data: equivalent to about 485 high-density 3.5in floppy disks. CISC (See RISC) CPU Central Processing Unit. Normally refers to the main processor or chip inside a PC, (See Processor.) Crash Common term, for when your computer freezes, Can be caused by a power surge, a bug (which is a fault in software or a GPF. (General Protection Fault) D A B C D E F G H I J K L M N O P Q R S T U V W X Y Z DRAM (See Memory) DOS (Disk Operating System) Once the standard operating system for PCs, it is now being replaced by Windows 95 and Windows NT. DPI (Dots Per Inch)Common measure of the resolution on a printer, a scanner or a display. Drive controller card An expansion card that interprets commands between the processor and the disk drives. Drivers Pieces of software that, "drive" a peripheral, They interpret between the computer and a device such as a CD-ROM. If you have a SCSI CD-ROM drive connected, you will be able to use it on a PC or a Mac just by loading up the relevant driver on each machine. E A B C D E F G H I J K L M N O P Q R S T U V W X Y Z EIDE (See IDE) EISA (Extended Industry Standard Architecture) A bus standard designed to compete with MCA. Now being replaced by PCI.Electronic mail (E-mail, email) Still the biggest single use of the Internet, When you sign up with an ISP you are given an email address. Usually you can incorporate your name, or part of it, into your email address to make it easy to remember. Expansion card Circuit boards, which fit inside PCs to provide extra functionality. For example, one might be an internal modem, providing the same functions as an external version (which is more common) but sitting inside the PC, Expansion cards are designed to be fitted and removed by people with little knowledge of PCs. F A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Floppy disk drive Practically all PCs come with a floppy disk drive: 3.5in HD thigh densityi 1.44Mb floppy disks are now the standard. They come in hard plastic cases and have replaced the older, literally floppy, 5.25in disks. Fonts A font is an alphabet designed in a particular style. Fonts apply both to screen and printed letters, Truetype and Type 1 fonts are stored as shape descriptions, scalable to any size, Format To wipe a floppy or hard disk in order to prepare it to accept data. G A B C D E F G H I J K L M N O P Q R S T U V W X Y Z GPF General protection fault. Graphics card An expansion card that interprets commands from the processor to the monitor. If you want a better, higher resolution picture or more than your existing set-up, you'll need to change your graphics card and/or your monitor.GUI (Graphical User Interface- See Windows) H A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Hard disk Sometimes called a fixed disk, hard disks are hermetically sealed rigid disks able to store data and programs. Disk capacities increase all the time. Hardware All electronic components of a computer system, including peripherals, circuit boards and input/output devices.HTML (Hypertext mark-up language) The standard language used in the creation of web pages, which can be read by web browsers. I A B C D E F G H I J K L M N O P Q R S T U V W X Y Z IBM-compatible Originally meant any PC compatible with DOS. Now tends to mean any PC with an Intel or compatible processor capable of running DOS or Windows. 
IDE (Integrated Drive Electronics) A control system designed to allow computer and device to communicate, Once the standard for PC hard disks, now being replaced by EIDE (enhanced IDE~ which offers improved performance and extra features. Internet Millions of computers interconnected in a global network. ISP (Internet Service Provider) ISPs provide access to the internet. You use your modem to dial the ISP's modem. The ISP has a high-bandwidth permanent connection to the Internet. IRDA (Infra-Red Data Association) The standard for exchanging data using infra-red, typically from PDAs or notebooks to a PC or printer.ISA (Industry Standard Architecture) This was the original bus architecture on 286 PCs. Also known as the AT bus The 286 was known as the AT) it remains in use today. Slow by modern standards, but so widely accepted that expansion cards are still made for it, (See EISA, PCI.) ISDN (Integrated Services Digital Network) Offers significant advantages over analogue telephone lines, It can handle multiple transfers on a single connection and is faster, In the UK, however costs of installation and rental remain high. J A B C D E F G H I J K L M N O P Q R S T U V W X Y Z JPEG (See MPEG) K A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Kbit (kilobit), Kb (See Bit) L A B C D E F G H I J K L M N O P Q R S T U V W X Y Z LAN (Local Area Network) (See Network) Local Bus PCI (Peripheral Component interconnect), developed by Inter, is now the standard far local bus architecture, It is faster than the older VL-Bus (Video Electronic Standards Association local bus) it replaces. M A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Macintosh (Mac) A personal computer made by Apple and which is incompatible with PCs. Developed as a rival standard, its operating system looks like Windows but pre-dates it. Maths co-processor A specialised chip that handles mathematical calculations (floating point operations) for the processor, Modern processors such as the Pentium have a co-processor built into them.Mbit (megabit) (See Bit) Mb (megabyte) (See Bit) MPEG (Moving Picture Expert Group) A standard for compressing video, available in several flavours: MPEG 1, MPEG 2, MPEG 4. JPEG (Joint Photographic Expert Group) is a standard for still image compression. N A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Network A network is a group of computers linked together with cable. The most common form of network is a LAN (Local Area Network), where electronic mail and other files can be exchanged between users without swapping floppy disks. Printers and other resources can be shared; All the PCs on a LAN are connected to one server, which is a powerful PC with a large hard disk that can be shared by everyone. O A B C D E F G H I J K L M N O P Q R S T U V W X Y Z OS (Operating System) The operating system communicates with the hardware and provides services and utilities to applications while they run, such as saving and retrieving files. P A B C D E F G H I J K L M N O P Q R S T U V W X Y Z PDA (Personal digital Assistant) Small electronic organisers. The Psion 3a is a typical example. PCI (See Local bus) Package (See Applications) Parallel Ports Used by your PC to communicate with the outside world, usually via a printer. Information can travel in parallel along a series of lines, making it faster than serial ports which can only handle one piece of information at a time. Pentium Fast 32-bit processor with a built-in cache. 
Now the standard on PCs, it is been replaced by the Pentium MMX chip which has extra instructions and a 32Kb cache. The Pentium Pre is a higher-end workstation CPU with 256Kb cache meant for full 32-bit operating systems like Windows NT. Pixel Picture element. The smallest addressable dot displayed on a monitor. PCMCIA A standard to allow PCs, particularly notebooks, to be expanded using credit card-sized cards. Power PC This family of RISC chips is the result of a collaboration between IBM, Apple and Motorola, It is now used in all Apple Macintosh computers and many IBM workstations. Processor Chip which does most of a computer's work. Programs (See Applications) Public Domain Software that is absolutely free. The author usually retains the copyright but you can make as many copies as you want. Q A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Qwerty The name of a standard English-language keyboard, derived from the first six letters in the top row. French equivalent is AZERTY. R A B C D E F G H I J K L M N O P Q R S T U V W X Y Z RAM (Random access Memory) (See Memory) RISC Reduced Instruction Set Computing (See Boot) ROM (Read Only Memory) See Memory) RTF (Rich Text Format) (See BSCII) S A B C D E F G H I J K L M N O P Q R S T U V W X Y Z SCSI Small Computer System Interface is a bus that comes as standard in a Macintosh and is beginning to rival EIDE on PCs. Serial port Serial ports (Com1 and com2) are used by your PC to communicate with the outside world. Mostly used by modems and similar devices which communicate quite slowly. Faster communications are achieved through the parallel port. Shareware A method of distributing software. It is freely available, but not free of charge. You are honour-bound to pay a small fee to the software's developer if you continue to use the program after a set period. SIMM (Single Inline Memory Module) The standard modules for memory expansion on PCs. Older 30-pin SIMMs have now been replaced by the 72-pin variety available in capacities up to 16Mb. T A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Tape streamer Magnetic tape recorder for backing up data from a hard disk. U A B C D E F G H I J K L M N O P Q R S T U V W X Y Z UART (Universal Asynchronous Receiver Transmitter) Pronounced "you-art", this is a chip that allows your PC to cope with high-speed communications. V.34 plus, V.34, V.32bis A series of CCITT standards which define modem operations and error correction. There are more than 20, but the key ones are: .V.32bis, the standard for 14.4Kbps (kilobits per secondj modems. V,34, the standard for 28.8Kbps modems (see Baud). V.34 Plus, the new standard for speeds up to 33.6Kbps. V A B C D E F G H I J K L M N O P Q R S T U V W X Y Z VESA (See Local Bus) VGA Video Graphics Array is the name given to a popular display. VGA graphics have 640 pixels horirontally and 480 vertically, and can display 16 colours. SuperVGA (SVGA) graphics can display 800 x 600 or 1,024 x 768 in as many colours as the memory in your graphics card will allow: up to -16.4 million, or true colour. VL-Bus (See Local Bus) VRAM (See Memory) W A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Windows a GUI (Graphical User interface) developed by Microsoft. Windows is intended to make programs easier to use by giving them a standard, mouse-driven interface. � Windows 3,11 16-bit operating system. Windows NT Robust, fully 32-bit operating system from Microsoft. The latest, version 4.0, features a Windows 95 type interface. 
Windows 95 Major improvement to Windows 3.11, with a redesigned interface. Less prone to crashes and easier to use, but requires more memory. Windows 98 Major improvement to Windows 95, with a interface that can be set to work like the internet. Less prone to crashes and easier to use, but requires more memory. Winsock Short for "sockets for Windows". The Winsock.dll is an extension for Windows which is necessary for connecting to TCP/IP networks. WWW World Wide Web Service on the internet using special software called web browsers (Netscape and Internet Explorer are two best-known browsers) to give access to pages of information with text, pictures and multimedia. WYSIWYG "What You See Is What You Get": what you see on the screen is exactly what you will get when you print out your work. X A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Nothing for X yet unless you know better? - email: [email protected] Y A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Nothing for Y yet unless you know better? - email: [email protected] Z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z ZIF (Zero Insertion Force) Sockets used for CPU's. Lifting a handle enables you to remove the processor ZIP The common standard for compressing files so that they take up less space, Zipped files have the extension .zip and are compressed and decompressed using shareware utilities such as Winzip and PKZip.
计算机
2015-48/3654/en_head.json.gz/1522
Page-View Syndrome January 1, 2008 10 Comments Let's hope the Web doesn't turn into an endless popularity contest judged only by page hits. An Australian writer recently claimed that he was fired from an online publication for not getting enough hits on his articles. As soon as I heard his lament I thought of Max Headroom, one of the great cult films of the late 1980s. It depicted a culture in which TV shows had real-time ratings and could be canceled in the middle of an episode if the meter fell too far. It also depicted a society going to pot while media moguls were all living the high life. The key element in this scenario was the deterioration of a society in an era in which ratings meant everything and a sense of corporate responsibility was virtually nonexistent. It was all about the market, the free market. Let the chips fall where they may. Some aspects of the Max Headroom dystopia are actually emerging today. The "get hits or die" thinking is just one example, and it reminds me of some ludicrous comments that Rick Cotton, NBC Universal's general counsel, made recently. He said, "Society wastes entirely too much money policing crimes like burglary, fraud, and bank-robbing when it should be doing something about piracy instead." Displaying little understanding of community or common sense about the future of civilization, Cotton went on to say, "Our law enforcement resources are seriously misaligned. If you add up all the various kinds of property crimes in this country—everything from theft, to fraud, to burglary and bank robbing, all of it—it costs the country $16 billion a year. But intellectual property crime runs to hundreds of billions a year." This sort of crazy thinking, if ever implemented, would result in Max Headroom–style chaos, and that's where we're headed, apparently, since nobody seems to disagree with these kinds of pronouncements. But that's just half the problem—the half that values corporate interests over public safety. The other half of the problem is the necessity to troll for the most Web-page hits. I'm complaining because we need to consider other factors, such as the public interest. Not everything has to be popular. Certain boring facts need to be reported, for example, and long investigations need to be disclosed so they are on the record. Imagine a world in which every song ever written had to be an instant hit, otherwise the band would be dissolved after one clinker. This is not only ludicrous, it's also not in anyone's interest. Now, if the Australian writer cited above was fired for crummy, inaccurate, or useless writing, then the hits on his online articles would probably also be down, but the reasons for terminating his employment would be based on other criteria. There needs to be some meta-measure beyond mere popularity to make determinations about information that affects the common good. Make no mistake about it: This is an issue of civic concern. Writers serve the public at large, period. Popularity seldom serves the public interest but instead serves corporations and celebrities as a conduit for cash flow. You will find plenty of publications that bank on nothing more than ephemeral themes such as celebrity news, without having any concern whatsoever for the greater community. The kicker to this tale of woe is that a Harvard professor comes out and says that all content eventually will be driven only by page views. So much for the public good. I think he's wrong and is missing the significant fact that the Web is a great equalizer. 
While the big publishing companies are worried about page views and popularity, the alternative venues are growing and have the capability to produce the same huge numbers that any other site can produce. The owners and editors of publications around the world should trust their own judgment regarding content quality. After all, the assignments are theirs, and they are the ones who choose what to publish. If you are going to fire someone over poor online results, then you should fire the whole team involved in that effort—not just the writer. Geez. I've also seen the opposite effect, in which someone consistently writes popular material but it is perceived by the company as too mainstream and thus not desirable by its standards. The worst-case scenario for many readers is that, by some circumstance, their favorite writer is fired for some weird reason. Let's just hope the reason is valid and is not influenced by a cynical view of the world and an endless popularity contest judged only by page hits. Otherwise, all I'm going to be writing about is Britney Spears and how she grew her hair so long so fast. You noticed that, right? I mean really! More John C. Dvorak: • It's Not Just Dell, Fraudsters Are Everywhere• The Failure of the Surveillance State• The Surface Book and Microsoft's Marketing Folly• Microsoft's OneDrive Bait-and-Switch• 'PC Does What' Campaign Isn't About PCs• more Go off-topic with John C. Dvorak. Microsoft's Continued Vista Backpedaling RIAA Goes After "Personal Use" Doctrine
计算机
2015-48/3654/en_head.json.gz/1728
Entrepreneurial resources & interviews presented by Comcast Business. Updates Via RSS Feed Rod Canion - Compaq Computer Corp. Listen Now This text will be replaced Extras: Download Transcript Russ Capper Episode: 472 June 21, 2014 Share: Summary: Russ is at the MIT Enterprise Forum to interview the founder of the most successful startup EVER. Rod Canion’s Compaq Computer Corp. grossed more in its first year than eBay, Microsoft, Google and Facebook COMBINED. His goal was to create a portable computer and, once he resolved all the issues of being a startup with an unheard-of product, well, he never looked back. Of course, that led to more compatibility issues with Microsoft and IBM—it’s a fascinating story. One of the greatest entrepreneurs ever is, today, one of the greatest mentors ever; in this interview, Canion relives the early days with a live audience of college students. Video and Full Interview Text Russ: Welcome to a special edition of the Business Maker Show, brought to you by Comcast Business built for business. Special because we're in front of a live audience at the Hilton hotel at the University of Houston, and it is another cool MIT Enterprise Forum event also brought to you by Comcast Business, and it is my great pleasure to have as my guest here the cofounder and CEO of the most successful startup in business history. Producing more first year revenue than EBay, Microsoft, Google, and Facebook combined. Please join me in welcoming Rod Canion. Okay lots of discussion about that startup; $111 million, fastest company to get to a billion dollars in sales. How in the world did you do that? Rod: Oh it was easy. Russ: Okay next question. Rod: As I was writing the book that just came out last October, and putting together some of the details of it, it's the first time I really realized why we went after a $100 million dollars in our first; when you really think about it that was a stupid thing to do. I mean a startup that just finished developing a product, has no manufacturing, why in the world would you try to ramp up your manufacturing that fast? Well there's an answer, and it actually kind of makes sense. We put this idea together, which was to build a portable computer. What we were looking for was something that wasn't being done at that time in computers, and there was a whole lot of computer companies so there wasn't much that wasn't being done, but the idea was a portable computer, rugged, nicely styled, that would work in an office, and meet the needs of the market at the time. The one big problem we had was how are we going to possibly get software, and that's when the idea finally gelled in my head on January the 8th. One of the things that's burned into my brain, you know that I'll always remember was the chill running down my spine, what if we can make our portable IBM/PC software. Then it would always have the most important, the newest software available for it. It was a simple idea. Could it be done? So we charged off down the path. We raised some money, hired a team, built prototypes, continued on down the path, and then we got ready to go sell the product. You know you can build the greatest mousetrap every, but if you don't have a way to sell it it was all for not. Well fortunately there were these computer stores that were becoming the main ways that the computers were sold at the time, and IBM had entered the market so IBM and Apple were in all of these stores, and they were doing quite well. 
So it was natural that we would go to the computer store and show them our product. So let me give you an example of how that worked. I just happened to have one of these, a portable. Trust me, it's a portable. Okay maybe it's transportable, okay this has been called a lot of names, one of which is it looks like a portable sewing machine, but we designed this the feet in the bottom so that we can set it on the table, and I would go into a dealer, and I would set it down on his table, and I would lean it over like that, and then the keyboard stamps on the front. It has feet on it like that. I would unstamp the keyboard, and I'd say we have a computer. Thank you. That's what I thought to, but the dealers didn't really think much of it. They said, well it's kind of nice. So we'd go on and tell them how great it was. It was rugged, and you know how many megahertz it had, and how many kilobytes of RAM, and they would be okay. Then we'd get to the end, and we would say okay now this really does run all the IBM software so you can go pick any program you want off the shelf, take it out of the shrink wrap so this hadn't been staged, and plug it in. So they would go skeptically pick the one they though probably wouldn't run, take it out of the box, come plug it in, and when it came up and ran just like it did on the IBM PC their eyes got really big. I mean their eyes got big, and you could see the wheels turning, and then the next thing out of their mouth was how soon can I get 20 of these? How soon can I get 25 of these? The number varied, but the reaction was the same. That was a nice thing to hear, because it meant they liked our product, and of course it was like our baby, but more importantly it turned out that they were hearing something from them. They weren't really saying exactly what it was, and that is there is a pent up demand for a product like this. So we've been thinking of it as a cool portable computer that had access to IBM software. What they saw it as was a portable version of the IBM PC, and people had been asking for a portable version of the IBM PC. So sure enough, there was this pent up demand that hey we discovered it, but nobody else knew about it. So now we had this opportunity, if we could capitalize on it, to really go capture a big piece of the market very quickly. After going to a lot of dealers, and getting the same reaction we went back to Houston, back to our office, and did the calculations, and we were blown away. When you just simply multiply it out, you know 5 units per month per dealer times 2,000 dealers, you know we can do a $100 million dollars this year. Wow, are we going to do that. Well we were pretty conservative, so our first reaction was nah. We can rent faster than what we got in the plan, but let's don't take any real risk. Then as we began to do in those days, we began to really think more about it. Okay so what's the repercussion of that, and it wasn't quite as simple to foresee, because here's an opportunity we can actually go get in, we believe, almost if not all of these IBM dealers, and they will try to sell our product, but if we can't meet their demand, if we can't supply the computers what are they going to do? Well we're going to create more demand, and then they're going to sell somebody else's because sure enough there's going to be a lot of people that follow us. So if we want to capture the demand, and then hold onto the dealers that we get in, we're going to have to ramp up really fast. So we really thought it through. 
We decided on the fastest ramp we thought we could manage. We had to change our plans completely. We had to go out and raise $20 million more dollars. We had already raised $10 million dollars in two different crutches, but in February of 1983 we raised an additional $20 million dollars to fund the ramp that year. The other thing I guess is worth pointing out is we were afraid to actually tell the investors we were trying for $100 million. We thought they would laugh. So we said $80, we said we think we can do $80 if we're really lucky, and maybe it'll be less or more, but we're going to go for $80, and the good news is we missed the forecast, but we hit $111 million dollars instead. So anyway, that's how we hit $100 million dollars is the opportunity was really there, it was really clear, and we decided to go for it. Russ: Sounds pretty easy actually. Rod: It was. Russ: No I mean when you put all that in perspective, and look at the number of people you hired, the number of assembly lines you had to build, and then you had to build computers, you had to ship them, and you had to have people accept them, and pay for them, I mean that's just an extraordinary execution. Was it stressful there? Was everybody worked to death? Rod: Everybody on the team worked very hard, but we found out something about people in that first couple of years and that is if you create an environment where people are working together, and not sort of fighting each other, they're not trying to look better than the other guy. If they understand where you're trying to lead the company, what the game plan is, what our model is, you know we emphasize quality. We're not going to ship anything that
TELEMATICS APPLICATIONS Programme (WAI) TIDE Proposal Daniel Dardailler - W3C Administrative forms, Part A of Proposal REMOVED --- OBSOLETE Part B. PROPOSAL DESCRIPTION This proposal, called "Web Accessibility Initiative" (WAI), is a support action whose goal is to make the Internet, aka the Web, more accessible to all users with disabilities. It is led by the World Wide Web Consortium (W3C), the international vendor-neutral organization which fosters the evolution of the major Web protocol and format specifications, and whose goal is to realize the full potential of the Web (a long description is provided later on in the proposal). W3C is currently starting a new major activity in this area, funded on its own, that includes a strong technological group working on the accessibility of the core Web formats such as HTML, HTTP, and CSS (see Annex A for information and references on these acronyms) and also incorporates work on a set of guidelines accompanying the technologies and the tools that use them (Web browsing tools, HTML authoring tools, etc). This TIDE proposal complements this technical work and includes three workpackages and a cross-workpackage activity: an education/dissemination/awareness workpackage whose goal is to promote the realization of accessible content; a certification/rating workpackage that will use the PICS technology to create a classification system assessing the level of accessibility of Web pages; a standardization workpackage to ensure that the Web-related access technologies move forward in the official standards bodies such as ISO or EEC; and the creation of an online user forum to be used across workpackages, where the disability community will be involved in the elaboration of the materials issued in the above workpackages. The partners are: W3C (hosted in France, but really a European and worldwide consortium), as coordinator, mostly responsible for the education/awareness and certification work, and overall management; ICS/FORTH (Greece) and CNR (Italy), to work on technology usage guidelines and their evolution in the standards bodies; BrailleNet (France), EBU (Europe) and RNIB (GB) as associate contractors with W3C, representing the user community, participating in the elaboration of the online user forum and the education materials, and providing input on the technology requirements. Working on evolving the Web technologies in the most interoperable and accessible way has always been and still is W3C's mission. Solving the technical issues is necessary but is not sufficient if we want to really succeed in making the Web and the Internet accessible universally. We have to address the content providers, the people that create and distribute the information, and in order to do that, we need to raise awareness and educate them in as many ways as we can (including a rating campaign). This European TIDE proposal, if funded, will focus on the European Web content providers and market. As global as the Internet and the Web are, there is still a clear need for "local" actions when content providers are the target. A similar fund-raising activity for education and dissemination is being pursued by W3C for the Americas and the Pacific Rim. We think all these actions are required for the Web as a whole to become more accessible. The rest of this proposal text follows the themes and outlines provided by the Telematics booklet on proposal form, Part B (long description).
Chapter 1 is about User needs and Application area, and Chapter 2 describes each workpackage and its deliverables in detail. Then comes the description of the consortium and the relevance and references chapters. The annex gives information about the Web technologies on which this work is based (the Web itself, HTML, HTTP, etc). 1. User needs and application area The emergence of the World Wide Web has made it possible for individuals with appropriate computer and telecommunications equipment to interact as never before. The Web is the stepping stone, the infrastructure, which will pave the way for next-generation interfaces. The current situation in the area of Web usability for people with disabilities is not very good and is getting worse every day as more and more providers of information rush into the Web business without any awareness of the new limitations and frontiers they may create. No single disability population is unaffected. For example: People who are deaf cannot hear multimedia or audio events that do not contain captioning or audio descriptions. People who are blind struggle with the Web's inherent graphical interface, its graphics-based content, and any Web protocol or application that cannot easily be rendered or accessed using audio, braille, large text or synthetic voice. People who are physically challenged have difficulty using certain hardware devices or web controls, including Web kiosks and WebTV. People who are cognitively and visually impaired have difficulties interpreting most web pages because they have not been designed with this population in mind. Worldwide, there are more than 750 million people with disabilities. A significant percentage of that population is affected by the emergence of the Web, directly or indirectly. For those without disabilities, the Web is a new technology that can help unify geographically dispersed groups. But these barriers put the Web in danger of disenfranchising people with disabilities in this emerging infrastructure. The users in our project are the Web users with a disability, such as visually or hearing impaired people. The need of these users is to access information online on the Internet just as everyone else does. Impact on users. The Web is rapidly becoming the interface of choice to get access to information worldwide. There are millions of pages of data available today on the Internet, and widespread adoption of electronic commerce is the next step. One important thing to mention is that the Internet and the Web are becoming more and more critical as a social resource: job postings or university course descriptions are good examples of things that some organizations are starting to distribute only via the Web. Over the next few years, accessing the web to do shopping or get the weather forecast is going to be as natural as doing the shopping in the supermarket or watching/hearing TV/radio. The impact of this project on users with disabilities is to give them the same access to information as users without a disability. In addition, if we succeed in making web accessibility the norm rather than the exception, this will benefit not only the disability community but the entire population. For instance, people wanting to browse the web through a telephone or in a car, with no screen feedback, are in a sense temporarily blind, and the development of voice-based interfaces will benefit them as well.
Another example is web users with a very slow link to the Internet (likely for economic reasons), for whom heavy graphical images are too expensive: widespread adoption of descriptive text added to images would allow them to get access to the same pages with little or no loss of information. Application and context. One important aspect of this project is that of education and awareness. We do not seek only to enhance the formats used on the Web (HTML, CSS, etc) but to go after the content providers, either directly or indirectly (through the tool and service providers they use), so that the design of accessible web pages becomes the default case and the format extensions are put to use. Therefore, we can say that the intended size of the application population and area is that of the Web. In addition to the education and dissemination actions targeted at the largest content providers in Europe, we will also develop specific application-domain sites to illustrate good design. Our market is the online market, aka the Internet or the Web, and it is still in rapid expansion. We think that with a very focused action over 18 months, we can succeed in making the Information Society accessible for the years to come. Knowledge of Sector and technologies to be used Protocols and Data formats. In terms of data formats, the state of the art is HTML 3.2 (HyperText Markup Language) and CSS1 (Cascading Style Sheets), which are both controlled by W3C and which are evolving in parallel to this project and in close contact with the developers (in fact, the technical manager for the evolution of these formats regarding accessibility at W3C is going to be active in this TIDE proposal as well). The clear message that we want to convey is the following: content on the web must be separated into the structure and the text on one side (what is a TITLE, a bullet LIST) and the presentation made out of it on the other (rendered on a graphical screen, a dumb terminal, using a voice synthesizer, or a Braille device). Not only is this good for accessibility, it is also good for the management of information itself: by virtue of this content/style separation, one can evolve the two sides separately: change the text without touching the style (the colors used, the fonts, etc) and, more importantly, change the style without changing the content, or share one style across multiple documents. There are very good economic reasons for separating the style out of the web content, and HTML, as an SGML application, is perfectly suited to achieve this goal. In other words, we really want to convince information providers that if they just do their job well, then in addition it will be accessible to all. Of course, information on the Web is not just text and HTML: there are images, video, audio clips, or programs (like Java applets being downloaded by users). The strategy here is called alternative description: for instance, for images there is an HTML attribute that allows content providers to describe the image in words, which can in turn be spoken or rendered on a one-line telephone screen. One goal of the W3C is to make sure all formats used on the Web, and that includes multimedia formats such as video and audio, allow room for accessibility hooks and alternative delivery. For instance, the OBJECT tag being added to HTML should allow for descriptive text to be used as a replacement for any given data format being presented to the user. This, and much more, will have to be taught to the people creating content.
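To make these two points concrete (structure/style separation and alternative descriptions), here is a small illustrative fragment, not part of the formal proposal, in which the presentation lives in a separate CSS1 style sheet and the image carries a textual alternative; the file names and wording are invented for the example:

   <HTML>
   <HEAD>
   <TITLE>Quarterly results</TITLE>
   <!-- all fonts and colors come from the shared style sheet -->
   <LINK REL="stylesheet" TYPE="text/css" HREF="house-style.css">
   </HEAD>
   <BODY>
   <H1>Quarterly results</H1>
   <P>Sales grew in all three regions.</P>
   <!-- the ALT text is what a voice or Braille browser will render -->
   <IMG SRC="sales.gif" ALT="Bar chart: sales per region, all three regions growing">
   </BODY>
   </HTML>

A graphical browser applies house-style.css, while a voice synthesizer or Braille device renders the TITLE, the heading, the paragraph and the ALT text in sequence. The content is authored once, and the same page serves both audiences.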
Tools. In terms of browsing and authoring tools, there are products available from several software companies on the market, and one of the first actions this project will conduct is a study of the existing base. The goal here is to educate tool providers regarding the style guide that the users are expecting. In terms of certification and rating, it is a very novel area where a couple of HTML validators exist but where most of the work (especially regarding the rating system and labelling) is going to be innovative. Links to other projects. Web-related projects are common nowadays in Europe and worldwide, and we expect that the result of our action will impact them in a way that makes their sites more accessible. One of the things we will promote and suggest as an awareness action during the project is that a certain level of accessibility be made a requirement for EC projects generating web content. This will act as a good incentive for widespread adoption of accessible design on the web. 2. Work content The project is divided into a series of 3 work-packages and a cross-workpackage activity (not including management, which is another cross-workpackage activity, of course): education/awareness campaign, rating and certification system. The following sections detail each workpackage and the management task as well. Work package 1 description Title: Education/Awareness Campaign Total man power (MMs) Start End W3C: 12 BN: 2 RNIB: 1.5 EBU: 1 Month 12 Objectives and background The goal of this work-package is to promote the realization of accessible content throughout the Web. This needs to be done using education (teach the content providers how to create accessible content), dissemination of information (guidelines helping the authoring phases) and awareness (constantly remind new players of the issues involved). In order to reach our goal, we need to target different audiences. The content providers are of course our first target, and "in fine" our only target, since they will eventually decide what to put on the pages. But for doing so, they use, listen to, and are influenced by, several other actors: the authoring tool software vendors. More and more often, Web content is authored using specialized WYSIWYG tools and no longer a textual editor "showing the tags". By making sure the providers of these tools take accessibility into account, we improve the chances that the users of these tools will create accessible content. the web site designers. The people "owning" the content are the content providers in the larger sense, but it often happens that the people actually producing the content, i.e. implementing web sites, are service companies that can play a big role in advocating accessibility. the web-design educators. When a given company, usually a big one, wants to create a web space, and it's often for an Intranet, they are most of the time using the services of an educational company that teaches the employees how to take best advantage of authoring tools. We need to make these education/training services aware of the accessibility aspects. the press, and in effect the user base, can greatly influence content providers through their reviews of web sites. It's important that accessibility becomes a regular criterion of choice for such reviews. Of course, one other actor is W3C itself, and having us run this program is a very important factor.
The production of new HTML and CSS specifications, together with guidelines, that comply with the requirements of the disability community - which will happen through our regular technical activity in 97 - will play a great role in moving forward in this awareness action. In order to reach all these communities, we have to target our effort along a series of events: presentations/talks in major Web related conferences organizations of free seminars at these conferences or isolated direct contact and awareness action with major European web site providers addition of accessibility "modules" in the curriculum of the major authoring tools educational process. direct contact and lobby with the major authoring tool providers. submission of papers in specialized and regular press. A last educational aspect needs also to be explored: the education of the disability community itself regarding their rights with respect to accessing the information like everybody else. This is particularly true and important in the Intranet context, where companies are already subject to existing legislation regarding access (see the US ADA or the UK DDA). Our educational action at that level will go through the disability user organization such as the EBU and also through reference materials put on our W3C Web site. Breakdown and Phases Task 1: Authoring of the educational and awareness materials for both presentation/seminar and accessibility modules to be used in the action. First we need to gather the information available in the W3C working group on Accessibility and we also need to survey which authoring tools educational curriculum we need to target. Task 2: Active participation in conferences, organization of events, lobby of tool and content providers. Once the material is ready, we will use it in dissemination actions of various kind. The phasing will be such that there will be some overlap between the two phases in the sense that we will participate in conferences and submit papers as soon as we have a presentation ready, and not wait until the final materials is ready. Workpackage deliverables Month 6: Presentation/Seminar material ready Presence in two to four International conferences Market study reporting on the target authoring tool choices and the companies providing the authoring educational curriculum. Market study reporting on the target content provider in Europe. Month 12: Accessibility modules for Major authoring ready Submission of Web Accessibility paper to Technical journal and specialized press (both Internet and Disability press). Guidelines of accessible authoring tools ready Free seminars given at at least two conferences by professional education contractors Report on active lobby to major content providers in Europe Month 18: Accessibility modules for major authoring tools integrated and operational. Report on improvement in authoring tools and content on the web. Partners and roles W3C is the principal actor in this workpackage. W3C will contract any product-specific (e.g. Microsoft FrontPage, Netscape Gold) authoring tool education process to a professional company and concentrate on the generic (non product specific) messages. Both BrailleNet and RNIB/EBU are assisting W3C in the elaboration of the educational/awareness materials, and also by facilitating the presentation of the materials at workshops and conferences. Making the Web accessible requires attention on the part of the designer to the needs of a community that is all-too-often ignored. 
The key to success here is a combination of languages, protocols and tools that make it easy to do the right thing (and W3C is doing that as part of its regular activity). Education is the glue that will reinforce the importance of using the tools routinely. Title: Rating/Certification system W3C: 9 BN: 2 RNIB: 1 This work package deals with a novel idea which will use the results of the latest developments in the area of Web information access: the Platform for Internet Content Selection (PICS). Roughly, it is about creating a new descriptive rating vocabulary to assess the level of accessibility of Web pages and putting it to work with users in a small pilot phase involving a community of people with disabilities. PICS is an infrastructure for associating labels (meta-data) with Internet content. It was originally designed to help parents and teachers control what children access on the Internet, but it also facilitates other uses for labels, including code signing, privacy, or intellectual property rights management. We want to create another use for PICS: level of accessibility of Web content. PICS is both the name of the "system" and the name of the cross-industry working group hosted by W3C which designs and evolves the specifications. In order to advance its goals, PICS has devised a set of standards that facilitate the following: Self-rating: enable content providers to voluntarily label the content they create and distribute. Third-party rating: enable multiple, independent labelling services to associate additional labels with content created and distributed by others. Services may devise their own labelling systems, and the same content may receive different labels from different services. Ease-of-use: enable non-technical users to use ratings and labels from a diversity of sources, doing filtering or searching, without specific training. PICS is called a Platform because it is made of several complementary components. There are two formal PICS specification documents (see annex for details) which define: A syntax for describing a rating service, that is, a new vocabulary describing a new domain, so that computer programs can present the service and its labels to users. A syntax for labels, so that computer programs can process them. A label describes either a single document or a group of documents (e.g., a site). An embedding of labels (actually, lists of labels) into the Web transmission format and the HTML document format. An extension of the HTTP protocol, so clients can request that labels be transmitted with a document. A query syntax for an on-line database of labels (a label bureau). About labels. PICS labels describe content on one or more dimensions. It is the selection software, not the labels themselves, that determines how the labels are used: as a search tool ("find me all the sites talking about fishes that have an accessibility label of more than 3 on the print-impaired scale") or as a filtering tool ("no need to show me the sites which have a rating of 0 or 1 in the Financial Times Commercial Trust scale"). Each rating service can choose its own labelling vocabulary. For example, a given system might include a "coolness" dimension and a subject classification dimension. Information publishers can self-label, just as manufacturers of children's toys currently label products with text such as, "Fun for ages 5 and up."
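To make the self-labelling idea concrete, the sketch below shows roughly what a label from a hypothetical accessibility rating service could look like, following the general shape of the PICS-1.1 label syntax; the service URL, the category names (blind, deaf, motor) and the values are purely illustrative here, since the real vocabulary is precisely what Task 1 of this work package will define:

   (PICS-1.1 "http://example.org/wai-access/v1.0"
    labels on "1997.06.01T00:00-0000"
      for "http://www.example.com/welcome.html"
      ratings (blind 3 deaf 4 motor 2))

Such a label can either be distributed by a label bureau on request or embedded by the author directly in the page (the PICS specification defines a META mechanism for this purpose), and any PICS-compliant selection or search software can then use it without knowing anything specific about our rating service.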
Provided that publishers agree on a common labelling vocabulary, self-labelling is a simple mechanism well-matched to the distributed nature and high volume of information creation on the Internet. When publishers are unwilling to participate, or can't be trusted to participate honestly, independent organizations can provide third-party labels. For example, the Simon Wiesenthal Center, which is concerned about Nazi propaganda and other hate speech, could label materials that are historically inaccurate or promote hate. Third-party labelling systems can also express features that are of concern to a limited audience. For example, a teacher might label a set of astronomical photographs and block access to everything else for the duration of a science lesson. In our domain, we expect that the disability user community will want to operate such a "label bureau". Prior to PICS there was no standard format for labels, so companies that wished to provide access control had to both develop the software and provide the labels. PICS provides a common format for labels, so that any PICS-compliant selection software can process any PICS-compliant label. This separation allows both markets to flourish: companies that prefer to remain value-neutral can offer selection software without providing any labels; values-oriented organizations, without writing software, can create rating services that provide labels. PICS is now implemented in popular browsers such as Microsoft Internet Explorer or IBM Web Server and internally to W3C, we have or will have by the time this action is implemented a tool for creating a label database in the domain of Accessibility. Task 1: Our first task in this work-package is to devise a new rating system - a new vocabulary - that will allow people to rate the level of accessibility of Web pages by users having a given disability (multiple scale available). Users involvement is of course critical as this stage since the criteria used should reflect the needs of the people with Disabilities. Task 2: The second task, once the rating system is created, will be to actually create labels and store them online for use by other people. We expect to use our W3C Jigsaw HTTP server to implement the label database repository Ideally, the same community creating labels should be the one using them, and we want to involve the online users as much as we can in this task. Month 6: PICS compliant rating system to assess the level of accessibility of Web pages along a multiple disabilities scale Month 12: Integration and demonstration of said classification system using regular products such as Windows or Macintosh. Pilot Label databases available openly for real use on the Internet Month 18: Operational label bureau where authenticated users can also add new labels. W3C is the technical lead with PICS and label server management expertise. BrailleNet and RNIB/EBU act as expert for the Visual impaired community, relating information to their end-users for creating the labels. The Online forum created for the project will help gather requirements for the other disabilities not covered by the two associated contractors, such as hearing or cognitive impairment. Title: Standardisation FORTH: 9 CNR: 4.5 To identify and assess the international state of the art with regards to current (on-going) and future standardisation activities related (explicitly or implicitly) to the accessibility of Web-based interactive applications and services by disabled and elderly people. 
Collect and consolidate the existing wisdom in the area of accessibility of Web-based interactive applications and services in order to define the scope of the required standardisation efforts. Identify requirements for accessibility and develop recommendations and criteria facilitating universal access to Web-based interactive applications and services. Disseminate the results to the relevant national, European and International standardisation bodies and fora, with the view to influence their current and future activities Work-package breakdown Task 1: User requirements and data collection Task 2: Consolidation and recommendations Task 3: Dissemination Work-package phases The work-package phases are depicted in the following diagram which provides a summary of activities and respective outcomes. Work package background During the past two decades, concern over the impact of computer-based equipment on health and safety has led to the establishment of new national, European and international ergonomic standards, directives and legislation. Many of those have already had a profound impact on the relevant industries, while for others the impact is due in the years to come. With reference to the ISO series of standards (i.e. ISO 9241, ISO 9000, etc), there have been several underlying assumptions which influenced and shaped the work carried out. First of all, for the past 10 years, standardisation work in the context of ISO, but also the European Directive 90/270/EEC on the Minimum Safety and Health Requirements for work with the Display Screen Equipment have been primarily characterised by the emphasis on average able user. Additionally, these standards emphasise primarily an ergonomic perspective whose scope has covered users in work situations. Moreover, the standards have been developed through a sequential process, whereby research results are consolidated and then standardised. This means that a first draft of a standard would evolve from a review and consolidation of existing results. Finally, regarding the spectrum of technologies considered, these were bound by the above commitments. In other words, the technologies which were relevant were those employed by an average able user in work situations; alpha-numeric terminals (during the early period) and then graphical user interfaces. More recently, and with the emergence of multimedia, a new work item was introduced, by ISO TC 159 / SC 4 / WG 5. With recent advances in Information Technology and Telecommunications, new trends have evolved which require that some of the assumptions in traditional standardisation work are revisited. These trends are driven by the following two important factors. Change in context: Shift from business to private and residential demand for services It is likely that, whereas in the past the business demand prevailed any standardisation activities in software ergonomics, the new paradigm shift (see also next section) has created business opportunities in sectors that were not covered by the traditional focus of the ISO work (i.e. education, banking, shopping, news groups, entertainment, etc). What is important to mention is that the potential size of market for these new applications is substantial, thus the compelling need for new standardisation activities which will adequately cover these domains. 
Shift in computer usage: From calculation-intensive & scientific use to work group- and communication-centred computing This shift is summarised in the diagram of Figure 1, which shows the intended purpose, and primary use of computer equipment, in the past, as well as the tentative forecast, given current and emerging trends, for the future. Clearly, the existing work in the area of software ergonomic standardisation does not suffice to provide a foundation for the new state of affairs likely to emerge in the future. Finally, standardisation is needed in order to facilitate and promote non-discrimination and universal access; quality of interaction in the emerging interaction-intensive Information society and compliance with legislation and policy recommendations as expressed by various international fora and technical committees, including the United Nations General Assembly, the Americans with Disabilities Act (ADA) in U.S.A, The Telecommunications Act of 1996 in U.S.A, the Technical Committee USACM of ACM, The Telecommunications Policy Roundtable in the U.S.A, as well as several European Commission programmes (TIDE, COST 219 etc). Work-package Deliverables D.1 Report on data collection methods and data analysis D.2 Draft report on standardisation guidelines for the accessibility of Web-based applications and services by people with disabilities. Detailed Task description T.1: User requirements and data collection Responsible FORTH: 2 CNR: 2 Partners and role FORTH FORTH will undertake a thorough review of past and on-going European collaborative RTD projects within the framework of TIDE, COST 219 and ESPRIT programmes; additionally it will review the international state of the art as related to standardisation (i.e. work in ISO, ETSI HF 2, CEN/CENELEC, work by ANSI/HFES in U.S.A, the Scandinavian guidelines for accessibility, etc). Additionally, FORTH will sub-contract to the Greek National Confederation of People with Disabilities (ESAEA), the equivalent of one man month in resources for the collection of data pertaining to end user interaction requirements CNR CNR will bring into the project past experience in various RTD projects, as well as relevant materials related to terminal adaptations Technical approach In order to determine precisely what could be the scope of any future standardisation activities regarding accessibility of Web-based interactive applications and services, a thorough investigation will be undertaken covering the broad international state of the art. In this context, the following documents will be studied: ISO 9241 (all parts) User-centred design (ISO/CD 13407) Multimedia (ISO 14915) Recent events in Europe (i.e. European Directive 90/270/EEC) Relevant work by HFES/ANSI 200 (primarily Section 5 on Accessibility) Work in the 1990s of ETSI HF 2 (former HF 4) CEN CENELEC TC 224 WG 6 Additionally, the existing wisdom on accessible Web-design will be consolidated in order to define the scope of the recommendations to be derived in the following task. To this end, we will investigate recent work in the area of TIDE, namely previous TIDE projects, as well as work carried out at an international level by various organisations and institutions. Task milestones M.1.1 Report on data collection methods and analysis (Contributing to D.1) T.2 Consolidation and recommendations CNR: 1.5 To consolidate existing material and derive criteria, recommendations and guidelines for the accessibility of Web-based interactive applications and services by people with disabilities. 
FORTH FORTH will primarily deal with the derivation of requirements, design criteria, guidelines, and recommendations for unified interface design facilitating accessibility and high quality of interaction with Web-based applications and services. CNR CNR will contribute to the task of ICS-FORTH and additionally it will provide recommendations for terminal adaptations. This task will be concerned with the identification of unified interaction requirements in Web-based applications and services. Based on such requirements, we will then derive recommendations and guidelines towards unified interaction in the Web; facilitation of accessible and high-quality interfaces for users with different requirements, abilities and preferences, including disabled and elderly people (i.e. following the concept of design for all). To derive the unified interaction requirements, the partners will review technical progress in previous RTD projects as well as recent steps by companies such as Microsoft and Sun towards accessible interface design. The outcomes of this task will take the form of technical reports reviewed by an international panel of experts which is to be decided upon at the start of the activity in collaboration with the TIDE Office of the European Commission. Finally, it is important to mention that the guidelines and recommendations to be compiled will be ranked according to the ground upon which they are formulated. Thus, recommendations, guidelines and requirements will be ranked into empirically-based (i.e. there is sufficient empirical evidence in support of the guideline / criterion / requirement), experience-based (i.e. the guideline / criterion / recommendation is based on existing wisdom or best design practice), and intuition-based (i.e. proposed but not yet fully verified). Such rankings will be established through the collaboration of partners with the international expert panel. M.2.1 Draft technical report on standardisation guidelines for accessibility of Web-based applications by disabled and elderly people (Contributing to D.2). Month 18 M.2.2 Final technical report on standardisation guidelines for accessibility of Web-based applications by disabled and elderly people (Contributing to D.3). T.3 Dissemination To develop an overall dissemination strategy and to undertake the necessary steps to ensure the widest possible diffusion of the project's results Partners and role FORTH FORTH will be responsible for defining and carrying out the dissemination strategy. CNR CNR will undertake the maintenance of the Web server. Technical approach To facilitate the widest possible dissemination of the project's results, the partners will undertake: The development of a Web server so that some of the results of the project are publicly available; additionally a Web-based service will be established facilitating the exchange of expert opinion amongst the partners and the international panel of experts. One workshop will be scheduled to enable experts to review, comment on and finalise the recommendations for accessible Web design. Contribution to various national / European / international standardisation bodies including ISO TC 159 / SC 4 / WG 5, ETSI HF 2, COST 219 activities, CENELEC TC 224 WG 6. Participation in international events (meetings, workshops, seminars, etc) in order to promote and raise awareness of the results of the project.
M.3.1 : Draft report on the dissemination of results M.3.2 : Final report on the dissemination of results Cross-workpackage activities 4 User Forum RNIB: 1 EBU:1.5 Not a workpackage by itself, this activity will focus on the creation and the maintenance of an online user forum to be used by the project workpackages to gather user needs and requirements. Both BrailleNet, RNIB and EBU will participate in the elaboration of this forum, which will take place using a regular electronic mailing list and a set of web pages. The responsibilities of the user organizations in this activity is to make sure the end-users are represented and actively participate in all the phase of the projects. W3C will also participate in managing this forum and keeping consistent and synchronize it with its existing set of forum. 5 Project Management The objective of the management workpackage is to ensure that the workplan, targets, milestones and deliverables are met within the agreed time and cost schedules. The co-ordinating partner who will provide the project management will also provide the technical management of the project. No distinction is made in this package. Consequently, in addition to overall management tools, two other issues are addressed : the definition of common methodologies across the project, and quality control & assurance. Most of the management will be done using day-to-day electronic means between the partners, using a mailing list set up by the co-ordinating contractor. In addition, conference calls and face-to-face meeting will be scheduled on a regular basis to ensure the proper advance of the work. W3C has a lot of experience in managing such multi-national projects since all the work done at W3C is in fact done in partnership with given subset of W3C members. By making visible the results regularly in our user forum and W3C technical forum, we should be able to assess our progress and adjust our methodology and goals in the most effective way. We will also create and manage a web site making available our education program, any materials such as rating system description file, demonstrator code, and all interim reports for the project. Project Steering Committee A Project Steering Committee will be set-up that consists of the two main contractor managers, together with one representative of each associated contractor and a Quality Panel representative. It is responsible for the overall strategy. It also has specific responsibility for ensuring that recommendations of the Quality Panel are adhered to by the Workpackage managers doing the technical and awareness developments and dissemination. Meetings will review progress, accept and sign off deliverables, reports and demonstrators, and identify and carry out any replanning of the project. These meetings will normally be a minimum of one-day duration. At technical meetings, each package that is ongoing will present its findings to date together with plans for future work. The aim of these technical meetings will be to bring the project together at regular intervals to allow partners to benefit from the progress being made in different areas of the project. Deliverables Month 3: Project Reference Guide Month 6,12,15: Interim Reports, Meetings Month 18: Final report 3. The Consortium This proposal is made in partnership by six non-for-profit organizations. The roles and responsibilities of the participants are as follow: W3C: coordinator, overall management and industrial/technical expertise. 
ICS/FORTH, CNR: standardization aspect and work on guidelines. RNIB/EBU/BrailleNet: representing the user base World Wide Web Consortium [W3C] Backgrounder W3C�s mission: Realizing the Full Potential of the Web The W3C was founded to develop common protocols to enhance the interoperability and lead the evolution of the World Wide Web. Uniquely Positioned to Lead the Evolution of the World Wide Web Leading the World Wide Web's evolution means staying ahead of a significant wave of applications, services, and social changes. For W3C to effectively lead such dramatic growth -- at a time when a "Web Year" has shortened to a mere three months -- it must demonstrate exceptional agility, focus and diplomacy. To this end, the Consortium fulfills a unique combination of roles traditionally ascribed to quite different organizations. Like its partner standards body, the Internet Engineering Task Force [IETF], W3C is committed to developing open, technically sound specifications backed by running sample code. Like other information technology consortia, W3C represents the power and authority of hundreds of developers, researchers, and users. Hosted by research organizations, the Consortium is able to leverage the most recent advances in information technology. Host Institutions The W3C was formally launched in October 1994 at the Massachusetts Institute of Technology's Laboratory for Computer Science [MIT LCS]. Moving beyond the Americas, the Consortium established a European presence in partnership with France's National Institute for Research in Computer Science and Control [INRIA] in April 1995. As the Web's influence continued to broaden internationally, the resulting growth in W3C Membership created the need for an Asian host. In August 1996, Keio University in Japan became the Consortium's third host institution. Members The Consortium's real strength lies in the broad technical expertise of its Membership. W3C currently has more than 165 commercial and academic Members worldwide, including hardware and software vendors, telecommunications companies, content providers, corporate users, and government and academic entities. W3C provides a vendor-neutral forum for its Members to address Web-related issues. Working together with its staff and the global Web community, the Consortium aims to produce free, interoperable specifications and sample code. Funding from Membership dues, public research funds, and external contracts underwrite these efforts. The Consortium's Advisory Committee [AC] is comprised of one official representative from each Member organization who serves as the primary liaison between the organization and W3C. The Advisory Committee's role is to offer advice on the overall progress and direction of the Consortium. Staff W3C is led by Director Tim Berners-Lee, creator of the World Wide Web; and Chairman Jean-François Abramatic. With more than 30 years' combined expertise in a wide array of computer-related fields, including real-time communications, graphics, and text and image processing. Berners-Lee and Abramatic are well prepared to lead the Consortium's efforts in spearheading the global evolution of the Web. The Consortium's technical staff includes full- and part-time employees, visiting engineers from Member organizations, consultants, and students from more than 13 countries worldwide. W3C staff works with the Advisory Committee, the press, and the broader Web community to promote W3C's agenda. 
Recommendation Process Specifications developed within the Consortium must be formally approved by the Membership. Consensus is reached after a specification has proceeded through the review stages of Working Draft, Proposed Recommendation, and Recommendation. As new issues arise from Members, resources are reallocated to new areas to ensure that W3C remains focused on topics most critical to the Web's interoperability and growth. Domains Leading the evolution of technology as dramatically in flux as the World Wide Web is a challenging task indeed. W3C is a unique organization, well adapted to today's fast-paced environment. Its mission is to realize the full potential of the Web: as an elegant machine-to-machine system, as a compelling human-to-human interface, and as an efficient human-human communications medium. In order to achieve these goals, W3C's Team of experts works with its Members to advance the state of the art in each of the three Domains: User Interface, Technology & Society, and Architecture. Each Domain is responsible for investigating and leading development in several Activity Areas which are critical to the Web's global evolution and interoperability. W3C web site is http://www.w3.org See annex for Member list. European Blind Union [EBU] Backgrounder EBU is a non-governmental and non-profit making European organisation, founded in 1984. It is the principal organisation representing the interests of blind and partially sighted people in Europe with membership made up or organisations of and for visually impaired (VI) people in 43 European countries. EBU has formal consultative status as the co-ordinating NGO for the visual impairment sector on the European Disability Forum in Brussels. Royal National Institute for the Blind [RNIB] Backgrounder RNIB is the largest organisation in the UK looking after the needs of visually impaired people, with over 60 services. Current reappraisal of its work has led to services being increasingly considered in terms of supplying the needs of visually-impaired people at every stage of their lives and in various aspects. The organisation employs around 2500 people based throughout the UK, of whom 7% are visually-impaired. RNIB has already been involved as a partner in the CAPS (136/218) and Harmony (1226) projects. This work will be greatly enhanced by the recent approval of the TIDE ARTNet (3006) project which will build an international digital network for assistive and rehabilitation technology. Apart from CAPS, Harmony and ARTNet, RNIB has also been involved with a number of other TIDE and Telematics projects: ASHORED (101), AUDETEL (169/212), GUIB (103/215), CORE(126/213), ACCESS(1001), SATURN(1040), MOBIC(1148) and OPEN(1182). These have shown the technical knowledge which can be accessed by the organisation and have developed an understanding of how to assess user needs and wants." RNIB web site is http://www.rnib.org.uk BrailleNet Backgrounder Braillent is a french consortium whose mission is to to promote the Internet for social, professional, and school integration of visually impaired people. Improve Internet access for visually impaired people Development of pilot web site, containing specific services Explore tele-working and education thru Internet Disseminate result of work to end-users. 
The BrailleNet consortium regroups: INSERM (French National Institure on Medical Research) EUROBRAILLE (first maker of Braille terminals) AFEI (specialized in the formation of visually impaired people) CNEFEI (specialized in the formation of teachers) ANPEA (National Association of Parents of Visually Impaired Children) FAF (Federation of Blind and Visually Impaires in France) BrailleNet web site is http://www.ccr.jussieu.fr/braillenet/consbrn.html National Research Council (CNR) Backgrounder The National Research Council (CNR, Italy) is a government research organisation (staff of about 7000), which is involved in activities addressing most disciplinary sectors (physics, chemistry, medicine, agriculture, etc), in cooperation with universities and industry (one of its tasks being the transfer of innovations to production and services). CNR will participate in this project proposal with two Institutes: IROE (Firenze) and CNUCE (Pisa). IROE, with a staff of about 100 (half of whom are researchers) has a broad range of activities in pure physics (solid state, cosmology, optics) and applied physics (electromagnetic wave propagation, communications, integrated optics, optical fibre, remote sensing, etc). The Department on Information Theory and Processing is involved in research on the theory and applications of signal and image processing and information technology. It has a extensive experience in accessibility and usability. CNUCE, with a staff of 107, conducts research on Methods and Models for the Design and Analysis of Systems, Multimedia Technology, Geographical Information Systems, Mechanics of Materials, and Flight Dynamics of Spacecrafts. In relation to the project proposal, CNUCE is conducting research in 3D virtual environments and modelling, knowledge integration, agent architectures and user modelling. CNR Web site is at http://www.cnr.it/ Foundation for Research and Technology - Hellas (FORTH) Backgrounder Foundation for Research and Technology - Hellas (FORTH, Greece), is a centre for research and development monitored by the Ministry of Industry, Energy and Technology (General Secretariat of Research and Technology) of the Greek Government. The Institute of Computer Science, one of the seven institutes of FORTH, conducts applied research, develops applications and products, and provides services. Current R&D activities focus on information systems, software engineering, parallel architectures and distributed systems, computer vision and robotics, digital communications, network management, machine learning, decision support systems, formal methods in concurrent systems, computer architectures and VLSI design, computer aided design, medical information systems, human-computer interaction, and rehabilitation tele-informatics. ICS-FORTH has a long research and development tradition in the design and development of user interfaces that are accessible and usable by a wide range of people, including disabled and elderly people. It has recently proposed the concept, and provided the technical framework for the development of unified user interfaces, that are adaptable to the abilities, requirements and preferences of the end user groups. ICS/FORTH web site is at http://www.ics.forth.gr/ Effort Per Workpackage/Activity Education/Awareness Rating/Certification RNIB 4. European dimension and benefits The Web is global by nature and the players in the field of accessibility comes not only from all across Europe but worldwide. 
Through W3C, we expect to leverage that worldwide expertise and cooperate closely with non-European players. From W3C's point of view, this proposal comes as a complement to a wider-scale initiative gathering experts worldwide in the field of Web Accessibility. But of course, there are individual persons and organizations behind any web pages, whether authored by hand or automatically generated, and these human beings live in a given nation, not in a virtual world. With this TIDE proposal, we want to focus on Europe's Information Society. In terms of economic impact, it is clear that giving access to the web to an entire section of the population (people with disabilities) will help the development of the information society just by bringing in more users. In terms of social policies, this is basic non-discrimination, which some countries have already made into legislation, and which is providing additional motivation to build accessibility into the Web's infrastructure. These legal standards and requirements (current and proposed) already exist in the US and other national laws. There is work in Europe to extend the national laws into a pan-European framework that would, presumably, also be considered for adoption worldwide. Part of our education/awareness effort will aim at raising the visibility of this European legislation to the disabled users of the technologies (by hosting a web site with reference information). 5. References and related projects There are several projects, European and worldwide, that already have expertise in the field of Web access for people with disabilities (TEDIS, ACTS Avanti, University Leuven, Industrial - COST 219, Trace, CAST, DOIT, ICADD, etc). The partners in this proposal have very good links to these past or current efforts, and one of our first activities will be to gather as much input as possible for the education aspect and to create a technical forum where existing teams can participate in the elaboration of the specific awareness planning and materials. As mentioned in the introduction, W3C is starting a separate major new technical activity in this area, and this is obviously a project with which coordination is going to be critical. Through W3C's own forum, we expect to gather input from its European and worldwide industrial membership, as well as from the US organizations that are active in this area. ANNEX A: Web technologies This annex is meant to give the reader a quick yet informative overview of the Web technologies and protocols referred to in the proposal. The World Wide Web (known as "WWW", "Web") is the universe of network-accessible information, the embodiment of human knowledge. Started as one application on the Internet (which existed years before), it now defines the Internet. The World Wide Web began as a networked information project at CERN, where Tim Berners-Lee, now Director of the World Wide Web Consortium [W3C], developed a vision of the project. The Web has a body of software, and a set of protocols and conventions. Through the use of hypertext and multimedia techniques, the web is easy for anyone to roam, browse, and contribute to. An early talk about the Web gives some more background on how the Web was originally conceived. W3 Concepts The world-wide web is conceived as a seamless world in which ALL information, from any source, can be accessed in a consistent and simple way.
Universal Readership Before W3, typically to find some information on the Internet, one had to have one of a number of different terminals connected to a number of different computers, and one had to learn a number of different programs to access that data. The W3 principle of universal readership is that once information is available, it should be accessible from any type of computer, in any country, and an (authorized) person should only have to use one simple program to access it. This is now the case. In practice the web hangs on a number of essential concepts. Though not the most important, the most famous is that of hypertext. Hypertext Hypertext is text with links. Hypertext is not a new idea: in fact, when you read a book there are links between references (see section X), footnotes, and between the table of contents or index and the text. If you include bibliographies which refer to other books and papers, text is in fact already full of references. With hypertext, the computer makes following such references as easy as turning the page. This means that the reader can escape from the sequential organization of the pages to pursue a thread of his or her own. This makes hypertext an incredibly powerful tool for learning. Hypertext authors design their material to make it open to active exploration, and in doing so communicate their information and ideas more effectively. W3 uses hypertext as the method of presentation, although as we shall see, this does not necessarily require that authors write hypertext. In W3, links can lead from all or part of a document to all or part of another document. Documents need not be text: they can be graphics, movies and sound, so the term "hypermedia", meaning "multimedia hypertext", applies equally well to W3. Whilst hypertext is a powerful tool for finding information, it cannot cope with large amorphous masses of data. For these cases, computer-generated indexes allow the user to pick out interesting items from textual input. There are therefore two operations a reader can use: the hypertext jump and the text search. Indexes appear within the web just like other documents, but a search panel (or FIND command) accompanies them which allows the input of text. Behind each index is some search engine: many different search engines with different capabilities exist on different servers. However, they are all used in exactly the same simple way: you type in some text, and you get back a hypertext answer which points you to things which were found by the search. Client-Server Model To allow the web to scale, it was designed without any centralized facility. Anyone can publish information, and anyone (authorized) can read it. There is no central control. To publish data you run a server, and to read data you run a client. All the clients and all the servers are connected to each other by the Internet. The W3 protocols and other standard protocols allow all clients to communicate with all servers. Format negotiation Since computers were invented, there have been a great variety of different codes for representing information. It has never been possible to pick one as the "best" code, as each has its advantages and its advocates. Our experience is that any attempt to enforce a particular representation such as PostScript, TeX, or SGML leads to immediate war. A feature of HTTP is that the client sends a list of the representations it understands along with its request, and the server can then ensure that it replies in a suitable way.
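As a sketch of how this negotiation looks on the wire (the URL and header values below are only an example, not a prescribed profile), a client that prefers HTML but can also accept plain text announces this in its request, and the server answers with the best representation it can produce:

   GET /reports/1997/q1 HTTP/1.0
   Accept: text/html, text/plain; q=0.5

   HTTP/1.0 200 OK
   Content-Type: text/html

   <HTML> ... the HTML version of the report ... </HTML>

The same request coming from a text-only or speech-based client would simply list different types in its Accept header, and the document could then be served, for instance, as text/plain, without the information provider having to publish two separate sets of pages by hand.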
We needed this feature to cope with the existing mass of graphics formats, for example (GIF, TIFF and JPEG, to name but a few). If we cannot cope with the existing formats, how can we hope to evolve to take advantage of all the exciting new formats yet to be invented? Format negotiation allows the web to distance itself from the technical and political battles of the data formats. A spin-off of this involves high-level formats for specific data. In certain fields, special data formats have been designed for handling, for example, DNA codes, the spectra of stars, classical Greek, or the design of bridges. Those working in the field have software allowing them not only to view this data, but to manipulate it, analyse it, and modify it. When the server and the client both understand such a high-level format, then they can take advantage of it, and the data is transferred in that way. At the same time, other people (for example high school students) without the special software can still view the data, if the server can convert it into an inferior but still useful form. We keep the W3 goal of "universal readership" without compromising total functionality at the high level.
W3 Protocols
The specifications for the following protocols and formats are publicly available at the W3C web site at http://www.w3.org. The W3 project has defined a number of common practices which allow all the clients and servers to communicate.
URL (Universal Resource Locator)
When you are reading a document, behind every link there is the network-wide address of the document to which it refers (e.g. http://www.inria.fr). The design of these addresses (URLs) is as fundamental to W3 as hypertext itself. The addresses allow any object anywhere on the internet to be described, even though these objects are accessed using a variety of different protocols. This flexibility allows the web to envelop all the existing data in FTP archives, news articles, and WAIS and Gopher servers.
HTTP (Hypertext Transfer Protocol)
The web uses a number of protocols, then, but it also has its own Hypertext Transfer Protocol (HTTP). This protocol includes a number of facilities which we needed: it is fast, stateless and extensible. It also allows the web to surmount the problems of different data types using negotiation of the data representation, as already described. The other protocols which W3 clients can speak include FTP, WAIS, Gopher, and NNTP, the network news protocol.
HTML (Hypertext Markup Language)
Although W3 uses many different formats, HTML is the one basic format which every W3 client understands. It is a simple SGML document type allowing structured text with links. The fact that HTML is valid SGML opens the door to interchange with other systems, but SGML was not chosen for any particular technical merit. HTML describes the logical structure of the document instead of its formatting. This allows it to be displayed optimally on different platforms using different fonts and conventions. HTML 3.2 is the current specification recommended by W3C.
CSS (Cascading Style Sheets)
Style sheets describe how documents are presented on screens, in print, or perhaps how they are pronounced. Style sheets are soon coming to a browser near you. By attaching style sheets to structured documents on the Web (e.g. HTML), authors and readers can influence the presentation of documents without sacrificing device-independence or adding new HTML tags.
Cascading Style Sheets (CSS) is a style sheet mechanism that has been specifically developed to meet the needs of Web designers and users. A CSS style sheet can set fonts, colors, white space and other presentational aspects of a document. CSS 1 are now supported in recent versions of Microsoft Explorer and Netscape. ANNEX B: W3c Members EUROPE: total 62 (Full 27, Affiliate 35) Aérospatiale Alcatel Alsthom Recherche AGF Group Alfa-Omega Foundation Architecture Projects Management Ltd. A Belgacom F British Telecommunications Laboratories F Bull S.A. Canal+ F Cap Gemini Innovation CEA (Commissariat à l'Energie Atomique) CNRS/UREC (Centre National de la Recherche Scientifique) A CWI (Centre for Mathematics and Computer Science) A CCLRC (Rutherford Appleton Laboratory) Computer Answer Line CNR (Consiglio Nazionale delle Richerche) Cosmosbay Dassault Aviation F Deutsche Telekom F EEIG/ERCIM (European Research Consortium for Informatics and Mathematics) A ENEL F ENSIMAG A EDF (Electricité de France) F Ericsson Telecom F Etnoteam A FORTH (Foundation for Research and Technology - Hellas) France Telecom F GC Tech S.A. Gemplus F GMD Institute Grenoble Network Initiative A GRIF, S.A. Groupe ESC Grenoble A Iberdrola F ILOG, S.A. A Infopartners S.A. A INRETS A Institut Franco-Russe A.M. Liapunov d'informatique et de mathematiques appliques Joint Information Systems Committee (JISC) Matra Hachette F Michelin F MTA SZTAKI A NHS (The National Health Service) Nokia F O2 Technology A Orstom PIPEX Public IP Exchange Ltd A Reed-Elsevier F Sema Group F SICS (Swedish Institute of Computer Science) A Siemens Nixdorf F SISU (Swedish Institute for Systems Development) A Sligos F STET F SURFnet bv A Thomson-CSF F UKERNA (United Kingdom Research and Education Networking Association) A VTT Information Technology AMERICA: total 84 (Full 29, Affiliate 55) Alis Technologies, Inc A America Online F American International Group Data Center, Inc. (AIG) A American Internet Corporation A Apple Computer, Inc. F AT&T F Bellcore F Bitstream, Inc. A Compuserve CyberCash A Cygnus Support A Data Research Associates, Inc. A Defense Information Systems Agency (DISA) Delphi Internet F Digital Equipment Corporation F Digital Style Corporation Eastman Kodak Company F Electronic Book Technologies A Enterprise Integration Technology FTP Software, Inc. F First Floor, Inc. A First Virtual Holding A Folio Corporation A Fulcrum Technologies, Inc. General Magic, Inc. A Geoworks A Harlequin Incorporated A Hewlett Packard F Hummingbird Communications Ltd. A IBM F Incontext Systems Intel Corp. Intermind Corporation Internet Profiles Corporation Intraspect Sofware K2Net Lexmark International Inc. F Los Alamos National Laboratory A Lotus Development Corporation F Mainspring Communications Metrowerks Corporation A MCI Telecommunications F MITRE Corporation A Microsoft Corporation F NCSA / Univ. of Illinois A Netscape Communications Corp. F NeXT Software Inc. A Novell, Inc. F NYNEX Science & Technology Object Management Group Open Market A Open Software Foundation Research Institute A Oracle Corporation F O'Reilly & Associates, Inc. A PointCast Incorporated A Pretty Good Privacy Process Software Corp. A Prodigy Services Company F Progressive Networks Raptor Systems Inc Rice University for National HCPP Software Exchange Security Dynamics Technologies, Inc Silicon Graphics, Inc. F Softquad A Software 2000 F Spyglass Inc. A Syracuse University A Tandem Computers Inc. 
F Teknema Corporation A Telequip Corporation A Terisa Systems TIAA-CREF A TriTeal Corporation U.S. Web Corporation A Verity Inc. A Wolfram Research, Inc. A Wollongong Group A Xionics Document Technologies ASIA-PACIFIC: total 17 (Full 9, Affiliate 8) Canon, Inc Fujitsu Ltd. F Hitachi, Ltd. ITRI (Industrial Technology Research Institute) A Justsystem Corporation A Kumamoto Institute of Computer Software, Inc. A Mitsubishi Electric Corporation F NEC Corporation F NTT Data Communications F Nippon Telegraph and Telephone Corporation (NTT) F Omron Corporation F Pacifitech Corporation A The Royal Hong Kong Jockey Club The Royal Melbourne Institute of Technology A Sony Corporation WWW.Consult Pty, Ltd WWW - KR A
SharePoint Advancing the enterprise social roadmap by SharePoint Team, on June 25, 2013February 17, 2015 | 2 Comments | 0 Today’s post comes from Jared Spataro, Senior Director, Microsoft Office Division. Jared leads the SharePoint business, and he works closely with Adam Pisoni and David Sacks on Yammer integration. To celebrate the one-year anniversary of the Yammer acquisition, I wanted to take a moment to reflect on where we’ve come from and talk about where we’re going. My last post focused on product integration, but this time I want to zoom out and look at the big picture. It has been a busy year, and it’s exciting to see how our vision of “connected experiences” is taking shape. Yammer momentum First off, it’s worth noting that Yammer has continued to grow rapidly over the last 12 months–and that’s not something you see every day. Big acquisitions generally slow things down, but in this case we’ve actually seen the opposite. David Sacks provided his perspective in a post on the Microsoft blog, but a few of the high-level numbers bear repeating: over the last year, registered users have increased 55% to almost 8 million, user activity has roughly doubled, and paid networks are up over 200%. All in all, those are pretty impressive stats, and I’m proud of the team and the way the things have gone post-acquisition. Second, we’ve continued to innovate, testing and iterating our way to product enhancements that are helping people get more done. Over the last year we’ve shipped new features in the standalone service once a week, including: Message translation. Real-time message translation based on Microsoft Translator. We support translation to 23 languages and can detect and translate from 37 languages. Inbox. A consolidated view of Yammer messages across conversations you’re following and threads that are most important to you. File collaboration. Enhancements to the file directory for easy access to recent, followed, and group files- including support for multi-file drag and drop. Mobile app enhancements. Continual improvements for our mobile apps for iPad, iPhone, Android, and Windows Phone. Enterprise graph. A dynamically generated map of employees, content and business data based on the Open Graph standard. Using Open Graph, customers can push messages from line of business systems to the Yammer ticker. Platform enhancements. Embeddable feeds, likes, and follow buttons for integrating Yammer with line of business systems. In addition to innovation in the standalone product, we’ve also been hard at work on product integration. In my last roadmap update, I highlighted our work with Dynamics CRM and described three phases of broad Office integration: “basic integration, deeper connections, and connected experiences.” Earlier this month, we delivered the first component of “basic integration” by shipping an Office 365 update that lets customers make Yammer the default social network. This summer, we’ll ship a Yammer app in the SharePoint store and publish guidance for integrating Yammer with an on-prem SharePoint 2013 deployment, and this fall we’ll release Office 365 single sign-on, profile picture synchronization, and user experience enhancements. Finally, even though we’re proud of what we’ve accomplished over the last twelve months, we recognize that we’re really just getting started. 
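To give one concrete (and purely illustrative) picture of the line-of-business integration the Enterprise graph and Platform enhancements items describe, a back-office system could push an event into a Yammer group over the REST API roughly as sketched below. The endpoint, parameters, and token handling are assumptions based on the publicly documented Yammer v1 API of the period, not details taken from this post.

```python
import requests

ACCESS_TOKEN = "your-oauth2-token"  # obtained out of band via OAuth 2 (assumption)

# Hypothetical integration: announce a business event to a Yammer group.
response = requests.post(
    "https://www.yammer.com/api/v1/messages.json",  # assumed v1 REST endpoint
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={"body": "Invoice 1042 was approved", "group_id": 12345},
)
response.raise_for_status()
print("Posted, HTTP status", response.status_code)
```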
“Connected experiences” is our shorthand for saying that social should be an integrated part of the way everyone works together, and over the next year we’ll be introducing innovations designed to make Yammer a mainstream communication tool. Because of the way we develop Yammer, even we don’t know exactly what that will look like. But what we can tell you is that we have an initial set of features we’re working on today, and we’ll test and iterate our way to enhancements that will make working with others easier than ever before. This approach to product roadmap is fairly new for enterprise software, but we’re convinced it’s the only way to lead out in space that is as dynamic and fast-paced as enterprise social. To give you a sense for where we’re headed, here are a few of the projects currently under development over the next 6-8 months: SharePoint search integration. We’re enabling SharePoint search to search Yammer conversations and setting the stage for deeper, more powerful apps that combine social and search. Yammer groups in SharePoint sites. The Yammer app in the SharePoint store will allow you to manually replace a SharePoint site feed with a Yammer group feed, but we recognize that many customers will want to do this programmatically. We’re working on settings that will make Yammer feeds the default for all SharePoint sites. (See below for a mock-up of a Yammer group feed surfaced as an out-of-the-box component of a SharePoint team site.) Yammer messaging enhancements. We’re redesigning the Yammer user experience to make it easier to use as a primary communication tool. We’ll also be improving directed messaging and adding the ability to message multiple groups at once. Email interoperability. We’re making it easier than ever to use Yammer and email together. You’ll be able to follow an entire thread via email, respond to Yammer messages from email, and participate in conversations across Yammer and email. External communication. Yammer works great inside an organization, but today you have to create an external network to collaborate with people outside your domain. We’re improving the messaging infrastructure so that you can easily include external parties in Yammer conversations. Mobile apps. We’ll continue to invest in our iPad, iPhone, Android, Windows Phone 8, and Windows 8 apps as primary access points. The mobile apps are already a great way to use Yammer on the go, and we’ll continue to improve the user experience as we add new features to the service. Localization. We’re localizing the Yammer interface into new languages to meet growing demand across the world. It will take some time, and we’ll learn a lot as we go, but every new feature will help define the future–one iteration at a time. When I take a moment to look at how much has happened over the last year, I’m really proud of the team and all they’ve accomplished. An acquisition can be a big distraction for both sides, but the teams in San Francisco and Redmond have come together and delivered. And as you can see from the list of projects in flight, we’re definitely not resting on our laurels. We’re determined to lead the way forward with rapid innovation, quick-turn iterations, and connected experiences that combine the best of Yammer with the familiar tools of Office. It’s an exciting time, and we hope you’ll join us in our journey. –Jared Spataro P.S. As you may have seen, we’ll be hosting the next SharePoint Conference March 3rd through the 6th in Las Vegas. 
I'm really looking forward to getting the community back together again and hope that you'll join us there for more details on how we're delivering on our vision of transforming the way people work together. Look forward to seeing you there!
Comments
amagnotta: Will the Office 365 release this fall integrate with SharePoint Online? I only see SharePoint 2013 on-prem mentioned. If not, are there plans in the Road Map to integration with SharePoint Online at some point? Thanks.
CorpSec: How does Yammer relate to Lync? It seems to me there's a lot of overlap between the 2 collaboration tools. Will this evolve over time?
Wanted: 500,000 Men to Feed Computers (January 1965)
Posted in: Computers, How to, Origins
You don't have to be a college man to get a good job in computer programming – today even high-school grads are stepping into excellent jobs with big futures
By Stanley L. Englebardt
IF YOU know how to "talk to computers," chances are you've got it made. If you don't, you may be missing out on a great job opportunity. People who talk to computers are called programmers. They instruct data-processing machines on how to perform specific jobs. Today there are about 40,000 of these specialists at work. In six years, experts say, 500,000 more will be needed. Many will require a bachelor's, master's, or even doctor's degree. But close to 50 percent will move into this new profession with only high-school diplomas. Here's why there's such a tremendous demand for programmers. Computers are really very stupid multimillion-dollar collections of wires and transistors. Plug one in and it does nothing. Yell at it, curse, kick it – and still it remains mute. The reason: no instructions. But once people write instructions, the computer becomes a marvelous tool. It can tell the exact moment at which an astronaut should fire his retrorockets, or identify an obscure disease and prescribe a course of treatment. It can keep watch over huge inventories and write reorders when the stock gets low. Computers can prepare your paycheck, update accounts-receivable files – even print out past-due notices when you're late in paying bills. Thousands of new computers are installed each year to do these jobs. Each one must be programed before it can start processing. This means anywhere from 1 to 100 people sitting down to figure out every possible step in a particular operation. These steps are translated into machine language, punched into cards, and fed into the computer. There they are stored for use during the solution of a problem. Do you have what it takes to be a programmer? Education is important, but most important is a quick mind, with the ability to see details. You must be able to stay with a problem until it's solved. Where do you look for a programing job? In many cases it will look for you, especially if you work for a company with a computer setup. Experience has shown that good programmers are usually found within a company, and most firms will give their employees first crack at the job. Supposing the job doesn't come looking for you? You're still in good shape. Demand for programmers far exceeds the supply. As an IBM executive told me: "At the current rate of new installations, it is virtually impossible for a qualified programmer – no matter how he learns the profession – to be out of work." How can you learn programing? If you are a high-school or lower-class college student, your best bet is to complete as many mathematics courses as possible. Some secondary schools offer programming classes and data machine operation. More than 100 colleges and universities offer full-fledged programing courses. Many lead to a degree in mathematics or physical science. And while a piece of parchment isn't essential, it does increase your value in the job market. If you are out of school there are several courses of action:
• Night or technical school.
• Adult-education programs.
• Home-study courses.
Technical schools advertise in most big-city newspapers.
Caution: Some are more concerned with collecting tuition than providing instruction. Contact the local office of a national data-processing firm and find out which schools they recommend. Among the major computer manufacturers are IBM, Remington Rand Univac, RCA, National Cash Register, Burroughs, Control Data, and Honeywell. Some offer their own courses. They will also supply you with literature about programing. Home-study courses are really helpful. A recent experiment by a large industrial firm showed that programmers who had successfully completed a home-study course did as well – if not better – than those who attended classes. Possibly the most successful of these courses was designed jointly by Pennsylvania State University and IBM. What's the job like? The first step in programing is to reduce the problem to its simplest terms. Let's take the task of multiplying two numbers together, adding a third to the product, and displaying the sum. In other words: the equation d=ab+c, which might be part of a bank program for calculating compound interest. To get the machine to perform this routine, the programmer will first have to write instructions in machine language that will cause a, b, and c to be read in from an input device. Usually this device will be either a magnetic-tape unit or a punched-card reader. Thus, the programmer will tell the computer which input device to go to, how to identify the input numbers, how to bring them into the main processing unit, and where to store them once they are in. Next, number a must be transferred from the memory unit to a section called the multiplier register. Then b must be moved in and multiplied by a. After that c must be moved in and added to the product to form d. Finally, d must be moved to an output device – such as a high-speed line printer – where it is written out in a previously specified format. Each of these steps will have to be analyzed, written out in flow-chart form, corrected, polished into block-diagram form and finally translated into computer codes. What can you earn as a programmer? The average for a beginning or lowest-level programmer is about $110 a week – although some firms pay as high as $170. A senior programmer commands an average $150; a lead programmer about $160; and a supervisor about $190. There is also excellent advancement. It is not unusual to hear of a manager of data processing being made a vice-president of his company. And while managing a large-scale computer installation involves considerably more than just instruction writing, each of these men either started or served a term as a programmer.
The computer programmer's daily work
From his boss, a systems analyst, the programmer receives his assignment to work out a program for one section of a job planned on a flow chart.
First step: Draw a block diagram showing the basic data-handling and logic operations the computer must perform. Standard symbols are used in the diagram.
Consulting a special dictionary, the programmer spells out instructions in a "programing language" – a code describing standard sequences of machine operations.
Coded instructions, now punched into cards, are fed into a computer, which automatically translates steps into precise instructions it can follow.
The program is tested to see if it processes the data correctly. The programmer can check its progress by reading numbers on the computer's control console.
The programmer finally studies results printed out by the computer.
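For readers used to present-day tools, the d = ab + c routine walked through above can be written in a few lines of Python. This is purely an illustration of the same steps (read the three inputs from an input device, multiply, add, write the result in a specified format); it is not the machine-language coding a 1965 programmer would actually have produced.

```python
def compound_step(a, b, c):
    # The routine described above: multiply two numbers, add a third.
    return a * b + c

# "Input device": a plain text file holding the three numbers a, b and c.
with open("input.txt") as reader:
    a, b, c = (float(value) for value in reader.read().split())

# "Output device": print d in a previously specified format.
d = compound_step(a, b, c)
print(f"d = {d:.2f}")
```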
The job of a programmer was demonstrated in these photos by IBM systems engineer R. Reuel Stanley.
Could you answer these test questions?
Sample these questions given to applicants for computer-programmer training by Honeywell Inc. for a hint as to whether you have the aptitude to be an electronic brain feeder.
An electronic-parts distributor has some transformers in one of his stockrooms. They all look alike but he knows that a mistake has been made and that there are two types of transformers (types A & B) in the room, and that there are four of each type for a total of eight. He receives a rush order from a customer for either two type-A transformers or two type-B transformers. The customer has the equipment to tell the difference between the transformers, but the parts distributor does not. Since the transformers are very expensive to ship, the distributor ships the minimum number necessary. How many does he ship?
If the statement, "There are more dogs in the U.S. than there are hairs on any one dog in the U.S.," is true, then is the statement, "There are at least two dogs in the U.S. with exactly the same number of hairs," true or false? And why?
If a brick balances evenly with a three-quarter-pound weight plus three quarters of a brick, what is the weight of the whole brick?
A light flashes once every five minutes; another light flashes once every 14 minutes. If they both flash together at 1:00 p.m., what time will they next flash together?
Alice is as old as Betty and Christine together. Last year Betty was twice as old as Christine. Two years hence Alice will be twice as old as Christine. Their ages?
A man and his wife live on the fifth floor of an apartment building and have no phone. Frequently, when he comes home from work at night, his wife asks him to run an errand before dinner, but of course not the same errand every night. So, in order to save himself a trip up the stairs every evening, she puts a light in each of the four windows that can be viewed from the street. What is the most number of errand messages his wife can choose from, at any one time?
phi number says (January 12, 2006, 7:25 pm): I like programming, so is a bright future to me or people like me
Paul says (February 2, 2009, 6:26 am): "People who talk to computers are called programers" – oh no they're not!
Ruthie says (July 28, 2011, 5:50 pm): I think more people should recognize Stanley Englebardt as a true visionary in the field of computers. He also took time to write about children with disabilities (as published in Reader's Digest). What a neat guy.
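Returning to the sample aptitude questions above: a few of them can be checked mechanically. The snippet below works through the brick, flashing-lights, and ages puzzles; the printed answers are this writer's working, not part of the original article.

```python
from math import lcm  # math.lcm requires Python 3.9+

# Brick: b = 3/4 pound + (3/4)b, so (1/4)b = 3/4 and b = 3 pounds.
brick = (3 / 4) / (1 - 3 / 4)
print("The brick weighs", brick, "pounds")

# Lights: they coincide every lcm(5, 14) = 70 minutes, i.e. at 2:10 p.m.
print("The lights next flash together", lcm(5, 14), "minutes after 1:00 p.m.")

# Ages: A = B + C, B - 1 = 2(C - 1), and A + 2 = 2(C + 2).
for c in range(1, 50):
    b = 2 * c - 1   # from "last year Betty was twice as old as Christine"
    a = b + c       # from "Alice is as old as Betty and Christine together"
    if a + 2 == 2 * (c + 2):
        print(f"Alice is {a}, Betty is {b}, Christine is {c}")
```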
From Dumpster Diving to Running a $50 Million Company
Ryan Allis talks about how he went from sleeping in his office and dumpster diving to running a $50 million online-marketing-software business
By Ryan Allis, Nov. 18, 2011
In the summer of 2003, Ryan Allis dropped out of the University of North Carolina to launch iContact, an online-marketing-software company, with co-founder Aaron Houghton. Now 27, he runs a 280-employee company that serves 1 million customers, brought in $50 million in revenue in 2010 and has offices in Raleigh, Miami and London. Here's his story:
On Dropping Out of College
I figured I had little to lose and could always go back to school. I'd caught the entrepreneurial bug at an early age, setting up a computer consulting business when I was 11 years old. For $5 an hour, I taught local senior citizens how to use the Internet to keep in touch with friends and family. By 14, I had moved on to website design, and at 16 I founded a Web marketing consultancy.
Bootstrapping It
For the first three years following the launch of iContact, Aaron and I worked without a salary to keep costs low. I lived in the office for the first few months. I was 18 years old, sleeping on a futon, cooking on a George Foreman grill and showering at a friend's house every few days. I once jumped into a dumpster to recover the proof-of-purchase tag from a chair box to claim the $50 rebate. We worked until we fell asleep, and we slept until we woke up. Time was amorphous. Our fundraising strategy was simply to acquire customers and reinvest the revenue. We started by giving iContact away to a local sandwich shop called Jimmy John's. There was a fishbowl on the counter for customers to drop in business cards. Once a week we'd collect the fishbowl and type the contacts into Jimmy John's newsletter database. When they began seeing an increase in returned visits from customers receiving the newsletter, Jimmy John's became a paying customer. (See how the CEO of College Hunks Hauling Junk made it big.)
Learning to Grow
By September 2003, iContact had 10 customers, and we even hired our first employee to help with customer service and marketing. By the end of 2003, we had a grand total of $12,000 in sales and $17,000 in expenses. We had spent a year of our lives building iContact, and all we had to show for it was negative $5,000 in earnings. Our sole server crashed that Christmas, bringing the website down and the product offline for a week. We lost a third of our customers. But we persisted. iContact had a basic business model. We charged between $10 and $699 per month to manage a client's contact database, send out e-mails to customers and potential customers and track the results. The average customer paid $50 per month and remained a customer for three to four years. Over time, we found that, on average, it cost us $500 to acquire a customer who would pay $2,000 in revenue over time. It was clear, in other words, that if we could invest more in marketing, we could grow the company profitably — but we needed outside funding to make that happen quickly, before other companies filled the niche and the opportunity disappeared. In 2006, we hired a CFO to help us raise a $500,000 seed round of funding, which we invested primarily in pay-per-click ads on Google while keeping a close eye on return-on-investment metrics.
We saw that for every $1 we spent in Google ads, we earned $5 over time — so naturally we kept scaling the advertising model and doubled in size from 12 employees and $1.3 million in sales in 2005 to 30 employees and $2.9 million in sales in 2006.
Building a Team
With the scalability of our model demonstrated, iContact closed an additional $5.3 million in funding in 2007. We invested in building out iContact's senior management team, hiring experienced heads of technology, sales, support and marketing. I was now a 23-year-old managing a 100-employee company and overwhelmed with my newfound responsibilities. Fortunately, I was surrounded with a strong executive team to help grow the business. Over the next two years, iContact revenues grew 271%, ending 2009 with $26 million in revenue and 500,000 users. The execution was hard work, but the formula was simple. Build a product customers love, and market it aggressively across search engines, websites, partners and the radio within very clear mathematical guidelines.
Discovering the Bigger Picture
Our success turned bittersweet when iContact co-founder Aaron Houghton was diagnosed with thyroid cancer in late 2009. He underwent surgery and, happily, was declared cancer-free a year later. But the episode made us reflect deeply on our current situation and our goals. We wanted to build the company in a way that represented our belief that businesses earn the best financial returns when, in addition to driving returns for shareholders, they work to make a long-term difference in the world for customers, employees and community. We launched a program called the 4-1s, under which we would donate 1% of the company's payroll, equity, product and employee time to local and global communities. iContact hired a corporate social responsibility manager and became a B Corp, a designation from a nonprofit organization called B Lab certifying iContact as a socially responsible company. We firmly believe that to attract the best people you must create a fun environment and have a deep sense of collective mission. In 2010 we closed on a $40 million round of funding from JMI Equity, which we hope to use to grow to $500 million in annual revenue in the years ahead. Our customers send 1.5 billion e-mails every month to their subscribers. This quarter we'll be launching a social-media marketing product to propel iContact into new opportunities. We've come a long way from the days of sleeping in the office and eating Ramen noodles.
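As a footnote to the unit economics in the story, the figures Allis quotes ($50 per month, three to four years of retention, $500 to acquire a customer) can be turned into lifetime-value arithmetic directly; the calculation below is simply their product and is not taken from the article itself.

```python
# Figures quoted in the story.
monthly_revenue = 50        # average customer pays $50 per month
retention_years = (3, 4)    # customers stay three to four years
acquisition_cost = 500      # cost to acquire one customer

for years in retention_years:
    lifetime_value = monthly_revenue * 12 * years
    print(f"{years} years: lifetime value ${lifetime_value}, "
          f"LTV/CAC = {lifetime_value / acquisition_cost:.1f}")
```

At three to four years of retention this lands at roughly $1,800 to $2,400 per customer, which is consistent with the "$500 to acquire a customer who would pay $2,000" and the "$1 in, $5 out" figures quoted above.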
Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most. The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk. Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission. The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
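As a toy illustration of the driver-based analysis just described (establish objectives, identify a small set of drivers, then judge each driver's effect on the outcome), a simple tally might look like the sketch below. The driver names, ratings, and scoring rule are illustrative assumptions only; they are not the actual MRD instrument or its question set.

```python
# Illustrative driver ratings: True means the driver is judged to be working
# in the mission's favor, False means it is working against the mission.
drivers = {
    "objectives are realistic and well understood": True,
    "the plan is sufficient to achieve the objectives": False,
    "staff have the needed skills and experience": True,
    "external dependencies are being managed": False,
}

favorable = sum(drivers.values())
ratio = favorable / len(drivers)
verdict = "low" if ratio > 0.75 else "moderate" if ratio > 0.5 else "elevated"
print(f"{favorable}/{len(drivers)} drivers favorable; mission risk appears {verdict}")
```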
These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.
Spotlight on Risk Management
The Monitor, June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods
Posted The 439 organizations SOPA opponents should worry about [updated] By UPDATE: To avoid any further confusion, we have removed all names that appear exclusively on the Global Intellectual Property Center letter, which was sent to Congress a month before a draft of SOPA was submitted to the House of Representatives, and includes no mention of any specific legislation. Our list now only includes the names of companies that are on the official House Judiciary Committee list of SOPA supporters. ORIGINAL TEXT With the next House Judiciary Committee markup hearing delayed until “early next year,” opponents of the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA) are looking for more ways to fight back against these contentious pieces of legislation, while they still have a chance. The anti-SOPA crowd includes everyone from Internet giants like Google, Facebook, eBay, Wikipedia, Mozilla and Yahoo! to The New York Times, the Stanford Law Review and even the very people who created the Internet in the first place. Experts say SOPA and PIPA would not only stifle free speech by allowing widespread censorship (in the name of copyright protection), but could castrate innovation, destroy the open Internet, and disrupt the very foundation upon which the Internet was built, the domain name system. (See a comprehensive list of SOPA opponents here.) From the pro-SOPA side, we’ve heard much from organizations like the Motion Picture Association of America (MPAA), the Recording Industry Association of America (RIAA) and the US Chamber of Commerce — not to mention politicians, like Rep. Lamar Smith (R-TX), the chief sponsor of SOPA, and the chairman of the Judiciary Committee. But far too little light has shone on the hundreds of organizations that continue to support SOPA, despite the staggering dangers it holds for the Internet as we know it. Below is a list of 439 corporations, unions, law firms and other groups that have explicitly expressed their support of SOPA, or similar anti-piracy legislation. This list is derived from two sources: the official list (pdf) of SOPA supporters from the Judiciary Committee’s website, and a letter (pdf) addressed to Congress from the Global Intellectual Property Center, which is an affiliate of the US Chamber of Commerce. Some have begun organizing a mass boycott of these organizations, and a list (Google Doc) of contact information for each is currently in the works. We’re not advocating any specific action, but we do think it is important for people to understand who is for and against SOPA and PIPA; if passed, either of these bills will fundamentally change the online world, whether you think that’s a good idea or not. Update: We have learned that Petzl America has explicitly stated that while it supports greater protections of intellectual property, and signed the Global Intellectual Property Center letter to Congress (along with 401 other companies on this list), it “does not support SOPA or the Protect IP Act,” or any other legislation that will “harm the freedom of the Internet.” We have taken their name off the list. Update 2: All of the law firms that originally appeared on the Judiciary Committee’s list of SOPA supporters have been removed. We have removed them from the list below. 
They include: BakerHostetler LLPCowan, DeBaets, Abrahams & Sheppard LLPCowan, Liebowitz & Latman, P.C.Davis Wright Tremaine LLPIrell & Manella LLPJenner & Block LLPKelley Drye & Warren LLPKendall Brill & Klieger LLPKinsella Weitzman Iser Kump & Aldisert LLPLathrop & Gage LLPLoeb & Loeb LLPMitchell Silberberge & Knupp LLPMorrison & Foerster LLPPatterson Belknap Webb & Tyler LLPPhillips Nizer LLPProskauer Rose LLPRobins, Kaplan, Miller & Ciresi LLPShearman & Sterling LLPSimpson Thacher & Bartlett LLPSkadden, Arps, Slate Meagher & Flom LLPWhite & Case LLP It has not yet been officially explained why these firms have been pulled off the list. At least one firm, however, Davis Write Tremaine LLP, has said that it does not support SOPA, but two of its lawyers privately support the bill. Update 4: The Redwing Shoe Company has emailed, informing us that it “does not support SOPA as it is currently drafted.” Redwing is one of the 402 companies that signed the GIPC letter mentioned above, which does not specifically name any legislation. We have removed their name from the list below. Update 5: Gibson Guitars has also clarified that it does not support SOPA: “Hey guys – Gibson does NOT support this legislation. Gibson’s CEO has demanded that Gibson be removed from the list of company’s supporting SOPA. Don’t believe everything you read on the Internet!” Update 6: Nintendo and Sony Electronics have both been removed from the Judiciary Committee’s official list of SOPA supporters. Correction: It is currently unclear whether Nintendo, Sony Electronics and EA support or oppose SOPA. Sony Music Entertainment, Sony/ATV Music Publishing and Sony Music Nashville all remain on the list. GoDaddy, after failing to convince users of its anti-SOPA stance the first time around, has also come out explicitly against SOPA. GoDaddy has been removed from the list. Update 7: Leatherman Tool Group has emailed us to clarify that they do not support, and have never supported, SOPA. “We did not sign ANYTHING to support or endorse SOPA,” writes Leatherman’s PR department. “Leatherman has never been contacted or consulted about/during the creation of SOPA. Leatherman’s name has been erroneously added to a ‘list’ of SOPA supporters.” Leatherman’s name was added to this list below due to its signage of the GIPC letter. Its name does not appear on the official list of SOPA supporters from the House of Representatives. We have removed Leatherman from the list below. Update 8: Taylor Guitars — which signed the GIPC letter but does not appear on the official House list — has emailed us to clarify that it does not support SOPA. Taylor Guitars has been removed from our list below. Here is the company’s full statement on SOPA: In August 2011, Taylor Guitars, its trade organization, NAMM, and other music industry manufacturers offered a signature of support on a U.S. Chamber of Commerce letter sent to Congressional members to encourage the introduction of anti-piracy and counterfeiting legislation. As the letter was not bill-specific, we felt the spirit of its intent was in accordance with our efforts to confront ongoing piracy and copyright infringement issues that we, like many others in the industry, continue to battle. However, our desire to stop piracy and counterfeiting has been misrepresented as support for the Stop Online Piracy Act (H.R. 3261). Clearly stated, we do not support SOPA and its intent to restrict the Internet. 
The values of freedom, creativity and innovation are at the core of our business, and SOPA is not in accordance with those values. List update: To avoid any further confusion, we’ve updated the format of this list to make these companies’ SOPA stances more clear. All companies whose names appear in bold are on the Judiciary Committee’s official list of SOPA supporters; non-bolded companies appear only on the GIPC letter, which does not name any specific legislation. 60 Plus AssociationABCAlliance for Safe Online Pharmacies (ASOP)American Bankers Association (ABA)American Federation of MusiciansAmerican Federation of Television and Radio Artists (AFTRA)American Society of Composers, Authors and Publishers (ASCAP)Americans for Tax ReformArtists and Allied Crafts of the United StatesAssociation of American Publishers (AAP)Association of State Criminal Investigative AgenciesAssociation of Talent Agents (ATA)BMG ChrysalisBroadcast Music, Inc. (BMI)Bulding and Construction Trades DepartmentCapitol Records NashvilleCBS CorporationCengage LearningChristian Music Trade AssociationChurch Music Publishers’ AssociationCoalition Against Online Video Piracy (CAOVP)Comcast CorporationConcerned Women for America (CWA)Congressional Fire Services InstituteCopyhypeCopyright AllianceCoty Inc.Council of Better Business Bureaus (CBBB)Council of State GovernmentsCountry Music AssociationCountry Music TelevisionCreative AmericaDeluxe Entertainment Services GroupDirectors Guild of America (DGA)Disney Publishing Worldwide, Inc.ElsevierEMI Christian Music GroupEMI Music PublishingEntertainment Software Association (ESA)ESPNEstée Lauder CompaniesFraternal Order of Police (FOP)Gospel Music AssociationGraphic Artists GuildHachett Book GroupHarperCollins PublishersHyperionIndependent Film & Television Alliance (IFTA)International Alliance of Theatrical and Stage Employees (IATSE)International AntiCounterfeiting Coalition (IACC)International Brotherhood of Electrical Workers (IBEW)International Brotherhood of Teamsters (IBT)International Trademark Association (INTA)International Union of Police Associations L’OréalLost Highway RecordsMacmillanMajor County SheriffsMajor League Baseball Majority City ChiefsMarvel Entertainment, LLCMasterCard WorldwideMCA RecordsMcGraw-Hill EducationMercury NashvilleMinor League Baseball (MiLB)Minority Media & Telecom Council (MMTC)Motion Picture Association of America, Inc. 
(MPAA)Moving Picture TechniciansMPA – The Association of Magazine MediaNational Association of ManufacturersNational Association of Prosecutor CoordinatorsNational Association of State Chief Information OfficersNational Cable & Telecommunications Association (NCTA)National Center for Victims of CrimeNational Crime Justice AssociationNational District Attorneys AssociationNational Domestic Preparedness CoalitionNational Football League (NFL)National Governors Association, Economic Development and Commerce CommitteeNational League of CitiesNational Narcotics Offers’ Associations’ CoalitionNational Sheriffs’ Association (NSA)National Songwriters AssociationNational Troopers CoalitionNBCUniversalNews CorporationPearson EducationPenguine Group (USA) Inc.Pfizer Inc.Pharmaceutical Research and Manufacturers of America (PhRMA)Provident Music GroupRandom HouseRaulet Property PartnersRepublic NashvilleRevlonScholastic, Inc.Screen Actors Guild (SAG)Showdog Universal MusicSony/ATV Music PublishingSony Music EntertainmentSony Music NashvilleState International Development Organization (SIDO)The National Association of Theater Owners (NATO)The Perseus Books GroupsThe United States Conference of MayorsTiffany & Co.Time Warner Inc.True Religion Brand JeansUltimate Fighting Championship (UFC)UMG Publishing Group NashvilleUnited States Chamber of CommerceUnited States Olympic CommitteeUnited States Tennis AssociationUniversal MusicUniversal Music Publishing GroupViacomVisa, Inc.W.W. Norton & CompanyWallace Bajjali Development Partners, LPWarner Music GroupWarner Music NashvilleWolters Kluewer HealthWord Entertainment [Image via Elnur/Shutterstock]
MIDIMAN Digipatch 12X6 Digital Audio Patchbay
Published in SOS, December 1997
Reviews: Patchbay
Midiman are a company devoted to producing cost-effective problem solvers, and with the Digipatch they're aiming to solve the problem of affordable digital patching. PAUL WHITE finds out how well they've succeeded.
The increasing amount of digital equipment finding its way into our studios brings its own special patching problems. Digital mixers and soundcards may include any combination of ADAT and S/PDIF optical interfaces or electrical AES/EBU and S/PDIF sockets, all of which need to be connected to something. Then there are DAT machines, ADATs, DA88s and various hard disk recording systems to consider, not to mention outboard D/A and A/D converters. As with analogue audio devices, you don't necessarily want all of these connected in the same way at all times. The most elegant solution is to use some form of digital patching system, but traditionally these have been both expensive and complicated. California-based Midiman have come up with a simple but effective low-cost solution to digital patching with their Digipatch 12-in, 6-out programmable digital patchbay. This 1U rackmounting device, powered by a wall-wart power supply, has 12 digital inputs, each of which may be routed to a choice of six digital outputs. Only one input can be routed to an output at any one time, and no digital signal processing takes place at all, which means that the data structure of the incoming signal is passed through without change, regardless of format, flag settings, sample rate or whatever else. A complete patching setup may be stored as a Program for later recall and, true to the MIDI part of their name, Midiman have given the unit MIDI In and Out sockets for remote Program changing, SysEx dumping and loading of patch data, or control via the supplied utility software (for both Mac and PC). The 12 inputs are arranged as six S/PDIF phono sockets and six optical connectors; the latter can be used either for S/PDIF signals or ADAT signals. There's no AES/EBU option and no TDIF capability -- the system is limited to serial digital formats that can be sent down either a standard optical cable or a phono socket. The six outputs are arranged as pairs of connectors so that you get the same signal on both a phono connector and an optical output at the same time. In situations where you need to change optical S/PDIF to electrical S/PDIF or vice-versa, this is obviously very useful, but because the unit doesn't monitor or interfere with the digital signals in any way, it's entirely up to the user to make sensible routing decisions. If you route an ADAT signal to one of the outputs and then try to read it from the phono socket using a DAT machine's S/PDIF input, for example, it's clearly not going to work. The Digipatch has eight buttons on its front panel, not counting the power LED, plus a 4-digit LED display showing the Program Num
HTML5 Jumps Off The Drawing Board
Mar 28, 2008 (08:03 PM EDT)
Read the original article at http://www.informationweek.com/news/showArticle.jhtml?articleID=206906010
After several years spent trying to persuade Web site developers and browser vendors to move to XML-based documents, the World Wide Web Consortium has resumed development of HTML, announcing in mid-January the first public working draft of the HTML5 specification. The consortium, known as W3C, hasn't given up on XHTML 2.0, which strives for elegance and insists on correctness. But those developing HTML5 take a more pragmatic approach: Consider the problems plaguing Web developers today and try to make their lives easier--without rebuilding the core of the protocol. HTML5 detractors say the spec is not a step forward; they prefer the more elegant design of XHTML2, which is still under development. At some point, they argue, Web designers must be held to a stricter standard when developing sites. Yet the reality is that wide browser support is crucial for any Web standard to be useful, and XHTML2 is a more significant change for browser developers than HTML5. And with no support for XHTML promised by Microsoft, elegance is proving a difficult sell.
HTML5 should make life much easier for developers with ease of use and better backward compatibility, interoperability, and scripting. Not enough? How about local storage, less discrepancy across browser platforms, and better recovery when browsers run into bad markup.
THE PLAYERS
The Web Hypertext Application Technology Working Group, created by reps from Apple, Opera, and Mozilla, and the World Wide Web Consortium both contribute to HTML5. Google's Ian Hickson is the document editor, and all major browser makers, as well as many Web vendors, are represented on working groups.
THE PROSPECTS
Backing from all major browser vendors means HTML5 will, eventually, become the standard Web developers will write to. Browser vendors are adding support for certain portions of the spec now.
For Web developers, this day has been a long time coming. HTML 4.01 was introduced in December 1999. The W3C released XHTML 1.0 as a successor to HTML 4.01, and followed with its latest standard, XHTML 1.1, way back in 2001. The intent of the W3C was to continue down the XHTML path with a release of XHTML 2.0, but the spec wasn't moving in the direction that several major browser vendors expected. As a result, Apple, the Mozilla Foundation, and Opera formed the Web Hypertext Application Technology Working Group (WhatWG) in April 2004 to work on Web Applications 1.0, citing concerns regarding the W3C's progress with XHTML. Web Applications 1.0 was eventually renamed HTML5, and in April 2007 the WhatWG approached the W3C and offered its work as a basis for a new HTML standard. The W3C agreed. There are significant changes within HTML5, including updates to ease interactive Web development. New elements include header, footer, section, article, nav, and dialogue capabilities to divide sections of a page more clearly, while advanced features include a "canvas" with a corresponding 2-D drawing API that allows for dynamic graphics and animation on the fly. HTML5 also eliminates some elements, such as frames and framesets, that have caused more usability problems than they were worth, although browsers are still required to support them.
FEATURE PRESENTATION While most developers have adopted Cascading Style Sheets, or CSS, as a better way of handling presentation of Web documents, HTML5 codifies this by eliminating most presentational attributes. To earn its Web Applications 1.0 moniker, HTML5 also adds APIs that include direct provisions for audio and video content; client-side persistent storage with both key/value and SQL database support; offline-application, editing, drag-and-drop, and network APIs; and cross-document messaging. While much of this is possible today through browser plug-ins, standardizing on these features and building them into browsers will make it much easier for developers to add advanced functionality that will work across platforms. In contrast to XHTML2, supporting existing content is a key HTML5 design principle. Other design goals center on compatibility, utility, interoperability, and universal access. Compatibility means not only that existing Web pages should still render properly, but that new functionality introduced with HTML5 should degrade gracefully when an older browser is used. Another key conviction: Browser implementers should do their best to render pages that may have incorrect markup, and do so in a consistent manner. In stark contrast, XML is supposed to "error out" when a fault is reached, so a single mistake on a developer's part may create an unreadable Web page. Considering the number of pages that don't properly validate, that's a real burden to put on Web developers. "The HTML5 specification is a good step because it's a fairly realistic one," says Charles McCathieNevile, chief standards officer for Opera Software. "It doesn't aim to change the world in a radical way." Internet Explorer doesn't support XHTML, and at press time Microsoft hadn't released plans to support it in future versions, instead saying it's concentrating on fixing more pressing issues, such as CSS and rendering errors in versions through IE7 and IE8 betas. Fortunately, HTML5 makes concessions for phased adoption. The W3C predicts that the full HTML5 recommendation will be ratified in the third quarter of 2010. You won't have to wait that long to take advantage, however; among the four most popular browsers--Internet Explorer, Firefox, Safari, and Opera--bits of support for HTML5 already are available. For example, all but Internet Explorer have implemented the Canvas element, and Opera includes Web Forms. THE ACID TEST At the same time, however, developers of the two most popular browsers, Internet Explorer and Firefox, are still struggling to come closer to full compliance with existing standards. The Acid2 test, developed by the Web Standards Project in 2005, was created to cajole browser developers into complying with current CSS specs, for example. On Dec. 19, Microsoft said its latest IE8 beta passed the Acid2 test, and on Dec. 7, changes to Firefox's Gecko layout engine that make this version pass as well were submitted. Both IE8 and Firefox 3 are expected to pass the Acid2 test. That's encouraging, but the bar keeps being raised. The Web Standards Project released Acid3 on March 3. Acid3 rates browsers' capabilities with ECMAScript (JavaScript) and the Document Object Model (DOM), which are important for Web-based applications. As of March 26, WebKit (Safari's rendering engine) achieved 100 on Acid3 for a public build, and Opera reports scoring 100 on an internal build. 
While Acid tests don't confirm that browsers are fully standards compliant, they were created to verify the features that Web developers consider most important.
Mike Lee is an independent IT consultant and software developer and an InformationWeek contributor.
Asus P9X79 Deluxe: Starting Point for LGA 2011 Platform (Page 11)
01/03/2012 10:44 AM | Mainboards | by Doors4ever
ASUSTeK mainboards are the leading brand today, which is why this particular model seems to be an ideal choice for opening a series of articles dedicated to the new platform. We will dwell on absolutely everything about it: package, accessories, technical specifications, EFI BIOS functionality, new programs and utilities, overclocking potential, performance and power consumption.
Lately, when we started studying a new platform or began a series of reviews of mainboards based on a new chipset, we tried to gather several different mainboards in the first review, mostly from the top of the line-up. Of course, it is clear why we did it. Flagship mainboards accumulate all the goodness and feature the most extensive functionality. At the same time, comparing several different mainboards allows us to single out leaders and determine what the advantages of each specific model are. This time we didn't have this opportunity, because Asus P9X79 Deluxe is a starting point as well, and will become a reference point for our further comparative articles. However, even in this difficult situation we can easily form our verdict about this product. And don't be discouraged by the several issues pointed out throughout the review. There are no ideal mainboards out there, but Asus P9X79 Deluxe was easy and pleasant to work with and our experience with it was highly positive. The manufacturer provided this mainboard with sufficient accessories, came up with a pretty good layout, and equipped it with all the necessary controllers. The BIOS has everything necessary for successful overclocking and system fine-tuning for optimal performance and power consumption. There is a variety of proprietary technologies, programs and utilities that will make your everyday life much easier. The comparison against the competitors' solutions is yet to come, but even at this time we can tell that Asus P9X79 Deluxe will be among the best boards out there and most likely will even become the absolute winner.
Without denying what has been said in the introduction to this article, I have to admit that as I was getting more acquainted with the new platform my opinion about it genuinely improved. Of course, when you see that the new flagship processor is only 11% faster than the old one, it may not have that big of an effect on you. But do all of you have a thousand-dollar Intel Core i7-990X Extreme Edition CPU? Most likely there is one of the previous six-core processors in your system, such as the Intel Core i7-970, for example. The new Core i7-3960X will be about 25% faster, and that is a pretty significant difference already. If you overclock your six-core LGA 1366 processor, it will outperform the new LGA 2011 at its nominal frequencies, but overclocking will help the LGA 2011 platform regain its leadership. Therefore, purchasing a new Sandy Bridge-E processor may be a good and justified choice even if you currently have a six-core Gulftown. However, most LGA 1366 systems were built around the quad-core Bloomfield processors, and replacing this platform with the new LGA 2011 will almost double the speed. It is up to you to decide whether an upgrade is necessary or not, but one thing is definitely indisputable: LGA 2011 processors currently have no competitors in the desktop segment.
Comic 792: Blog Post Reuse

[Alt: It'll be hilarious the first few times this happens.]

If you're new around here, you may not yet be acquainted with a little concept we like to talk about, which, while it goes by various names, is generally called "Randall Munroe's Illustrated Picto-Blog." It's not a blog that exists; rather, it's a blog that we speak about theoretically. We want it to exist. Ideally, it would replace xkcd as Randall Munroe's major creative output to the world (using the word "creative" in only its most literal sense). It would sometimes be funny, sometimes not. The goal would not be to be funny; the goal would be to be interesting. If an idea or story were interesting as well as funny, so much the better. If an idea benefited from a small drawing, or even a large one, that's fine too. But if it was not funny and entirely text based - which is to say, if it were like the worst of the xkcds now - that would also be ok. Most important, it would not have a regular update schedule, allowing Mr. Munroe to post only when he felt an idea was worth sharing.

In any case, I've been over this all many times before. And what I've said before applies very much to this strip as well. It's an interesting idea (I don't think it's as common or widespread as some people have suggested; I'd be curious if they had some links) but the way Mr. Hat plays it is almost deliberately unfunny. Seriously, look again at the end of Mr. Hat's conversation: He's saying, "I had a cool idea but I have nothing I can do with it. So I don't know what to do."

It's true that that isn't really the punchline, but I guess I was still hoping for more from Mr. Hat. Even if he isn't as good as he used to be, "just sighing and giving up" hardly seems to be in character (though he's still more in character than Mr. Beret).

As to the actual punchline ("google is also not evil"), I first found it extremely similar to this classic Onion article, then thought all about the various evil things Google has already done (think: China, the Verizon Net Neutrality deal, that crazy flying colored balls homepage last week). I know that a few years ago Google was the darling of the computer nerd world, but can't we agree that they've made some decisions - inevitable, some could say, given their rate of growth - that show they aren't the perfect angels we once thought? To portray them as they are portrayed in the final two panels of this comic strikes me as hugely naive.

Oh, and count me in the camp that says that the "March 1997" reference is either trolling, noodle-incidenting, or both.

picto-blog
Assassin's Creed II (PC)
by Ubisoft

Description: Assassin's Creed II on the PC allows players to see if they have the skills to be a true assassin, with new freedom controls that let you roam freely and all-new, exciting missions.

A continuation of Assassin's Creed, the story picks up where that game left off, with Desmond Miles escaping from the Templars with the help of Lucy and joining her group of resistance fighters. Desmond takes another look back at his ancestors' past roles as assassins, this time taking on the role of Ezio in 15th-century Firenze. There are a lot of links to the previous game, so it helps if you have played it, but you'll get by without it anyway.

Assassin's Creed was thought by some to be rather repetitive, but that's not the case here. The missions come in many varieties. The side missions can become a bit tedious, but their only purpose is to earn some extra money, and if you don't want it, don't do them. Ezio is a champion freerunner and a dab hand at stabbing, making for acrobatics and creatively gory murder methods. Even the great Leonardo da Vinci lends a hand.

Ezio's capers make money, which can buy weapons, ammunition and other sundry equipment. He can also use it to pimp his home city of Monteriggioni, and in return he gets a more beautiful city and a cut of the profits. There's also lots of collecting to do: weapons, paintings, armor, and the like.

All this is set in beautifully built cities which are worlds in themselves and open to adventure. Run over rooftops, push down some baddies, or quietly saunter around town; please yourself.

The music adds to the ambience, there is a running commentary from passers-by when you're going nuts doing acrobatics, and you can always listen in on covert conversations.

The camera can be difficult at times, with nonsensical angles, and sometimes the images seem to have a mind of their own; during races this can be a problem. For the most part, though, the visuals are up to the job.

Then there's the DRM problem: any trouble with your internet connection or the game's servers and you are doomed. Most of us will be OK in this respect, but we could do without the problem, and it will cost sales.

The game itself is very playable and visually attractive. High marks all round.

Review by gamerjay, Dec 18, 2010
Software Takes Command: An Interview with Lev Manovich
By Michael Connor

Lev Manovich is a leading theorist of cultural objects produced with digital technology, perhaps best known for The Language of New Media (MIT Press, 2001). I interviewed him about his most recent book, Software Takes Command (Bloomsbury Academic, July 2014).

Photograph published in Alan Kay and Adele Goldberg, "Personal Dynamic Media," with the caption, "Kids learning to use the interim Dynabook."

MICHAEL CONNOR: I want to start with the question of methodology. How does one study software? In other words, what is the object of study—do you focus more on the interface, or the underlying code, or some combination of the two?

LEV MANOVICH: The goal of my book is to understand media software—its genealogy (where does it come from), its anatomy (the key features shared by all media viewing and editing software), and its effects in the world (pragmatics). Specifically, I am concerned with two kinds of effects: 1) how media design software shapes the media being created, making some design choices seem natural and easy to execute, while hiding other design possibilities; 2) how media viewing / managing / remixing software shapes our experience of media and the actions we perform on it.

I devote significant space to the analysis of After Effects, Photoshop and Google Earth—these are my primary case studies.

Photoshop Toolbox from version 0.63 (1988) to 7.0 (2002).

I also want to understand what media is today conceptually, after its "softwarization." Do the concepts of media developed to account for industrial-era technologies, from photography to video, still apply to media that is designed and experienced with software? Do they need to be updated, or completely replaced by new, more appropriate concepts? For example: do we still have different media, or did they merge into a single new meta-medium? Are there some structural features which motion graphics, graphic designs, web sites, product designs, buildings, and video games all share, since they are all designed with software? In short: does "media" still exist?

For me, "software studies" is about asking such broad questions, as opposed to only focusing on code or interface. Our world, media, economy, and social relations all run on software. So any investigation of code, software architectures, or interfaces is only valuable if it helps us to understand how these technologies are reshaping societies and individuals, and our imaginations.

MC: In order to ask these questions, your book begins by delving into some early ideas from the 1960s and 1970s that had a profound influence on later developers. In looking at these historical precedents, to what extent were you able to engage with the original software or documentation thereof? And to what extent were you relying on written texts by these early figures?

Photograph published in Kay and Goldberg with the caption, "The interim Dynabook system consists of processor, disk drive, display, keyboard, and pointing devices."

LM: In my book I only discuss the ideas of a few of the most important people, and for this, I could find enough sources. I focused on the theoretical ideas from the 1960s and 1970s which led to the development of the modern media authoring environment, and the common features of their interfaces. My primary documents were published articles by J. C. R. Licklider, Ivan Sutherland, Ted Nelson, Douglas Engelbart, Alan Kay, and their collaborators, and also a few surviving film clips—Sutherland demonstrating Sketchpad (the first interactive drawing system seen by the public), a tour of the Xerox Alto, etc. I also consulted manuals for a few early systems which are available online.

While I was doing this research, I was shocked to realize how little visual documentation of the key systems and software (Sketchpad, Xerox PARC's Alto, the first paint programs from the late 1960s and 1970s) exists. We have original articles published about these systems with small black-and-white illustrations, and just a few low-resolution film clips. And nothing else. None of the historically important systems exist in emulation, so you can't get a feeling of what it was like to use them.

This situation is quite different with other media technologies. You can go to a film museum and experience a real Panorama from the early 1840s, a camera obscura, or another pre-cinematic technology. Painters today use the same "new media" as the Impressionists in the 1870s—paints in tubes. With computer systems, most of the ideas behind contemporary media software come directly from the 1960s and 1970s—but the original systems are not accessible. Given the number of artists and programmers working today in "software art" and "creative coding," it should be possible to create emulations of at least a few of the most fundamental early systems. It's good to take care of your parents!

MC: One of the key early examples in your book is Alan Kay's concept of the "Dynabook," which posited the computer as "personal dynamic media" which could be used by all. These ideas were spelled out in his writing, and brought to some fruition in the Xerox Alto computer. I'd like to ask you about the documentation of these systems that does survive. What importance can we attach to these images of users, interfaces and the cultural objects produced with these systems?

Top and center: Images published in Kay and Goldberg with the captions, "An electronic circuit layout system programmed by a 15-year-old student" and "Data for this score was captured on a musical keyboard. A program then converts the data to standard musical notation." Bottom: The Alto screen showing windows with graphics drawn using commands in the Smalltalk programming language.

LM: The most informative set of images of Alan Kay's "Dynabook" (Xerox Alto) appears in the article he wrote with his collaborator Adele Goldberg in 1977. In my book I analyze this article in detail, interpreting it as "media theory" (as opposed to just documentation of the system). Kay said that reading McLuhan convinced him that the computer can be a medium for personal expression. The article presents a theoretical development of this idea and reports on its practical implementation (Xerox Alto).

Alan Turing theoretically defined a computer as a machine that can simulate a very large class of other machines, and it is this simulation ability that is largely responsible for the proliferation of computers in modern society. But it was only Kay and his generation that extended the idea of simulation to media—thus turning the Universal Turing Machine into a Universal Media Machine, so to speak. Accordingly, Kay and Goldberg write in the article: "In a very real sense, simulation is the central notion of the Dynabook." However, as I suggest in the book, simulating existing media became a chance to extend them and add new functions.
Kay and Goldberg themselves are clear about this—here is, for example, what they say about an electronic book: "It need not be treated as a simulated paper book since this is a new medium with new properties. A dynamic search may be made for a particular context. The non-sequential nature of the file medium and the use of dynamic manipulation allow a story to have many accessible points of view." The many images of media software developed both by the Xerox team and by other Alto users which appear in the article illustrate these ideas. Kay and Goldberg strategically give us examples of how their "interim 'Dynabook'" can allow users to paint, draw, animate, compose music, and compose text. This made the Alto the first Universal Media Machine—the first computer offering the ability to compose and create cultural experiences and artifacts for all the senses.

MC: I'm a bit surprised to hear you say the words "just documentation"! In the case of Kay, his theoretical argument was perhaps more important than any single prototype. But, in general, one of the things I find compelling about your approach is your analysis of specific elements of interfaces and computer operations. So when you use the example of Ivan Sutherland's Sketchpad, wasn't it the documentation (the demo for a television show produced by MIT in 1964) that allowed you to make the argument that even this early software wasn't merely a simulation of drawing, but a partial reinvention of it?

Frames from the Sketchpad demo video illustrating the program's use of constraints. Left column: a user selects parts of a drawing. Right column: Sketchpad automatically adjusts the drawing. (The captured frames were edited in Photoshop to show the Sketchpad screen more clearly.)

LM: The reason I said "just documentation" is that normally people don't think about Sutherland, Engelbart or Kay as "media theorists," and I think it's more common to read their work as technical reports.

On to Sutherland. Sutherland describes the new features of his system in his Ph.D. thesis and the published article, so in principle you can just read them and get these ideas. But at the same time, the short film clip which demonstrates Sketchpad is invaluable—it helps you to better understand how these new features (such as "constraint satisfaction") actually worked, and also to "experience" them emotionally. Since I saw the film clip years before I looked at Sutherland's Ph.D. thesis (now available online), I can't really say what was more important. Maybe it was not even the original film clip, but its use in one of Alan Kay's lectures. In the lecture Alan Kay shows the clip and explains how important these new features were.

MC: The Sketchpad demo does have a visceral impact. You began this interview by asking, "does media still exist?" Along these lines, the Sutherland clip raises the question of whether drawing, for one, still exists. The implications of this seem pretty enormous. Now that you have established the principle that all media are contingent on the software that produces them, do we need to begin analyzing all media (film, drawing or photography) from the point of view of software studies? Where might that lead?

LM: The answer I arrive at, after 200 pages, to the question "does media still exist?" is relevant to all media which is designed or accessed with software tools.
What we identify by conceptual inertia as "properties" of different mediums are actually the properties of media software—their interfaces, the tools, and the techniques they make possible for navigating, creating, editing, and sharing media documents. For example, the ability to automatically switch between different views of a document in Acrobat Reader or Microsoft Word is not a property of "text documents," but a result of software techniques whose heritage can be traced to Engelbart's "view control." Similarly, "zoom" or "pan" is not exclusive to digital images or texts or 3D scenes—it's a property of all modern media software.

Along with these and a number of other "media-independent" techniques (such as "search") which are built into all media software, there are also "media-specific" techniques which can only be used with particular data types. For example, we can extrude a 2-D shape to make a 3D model, but we can't extrude a text. Or, we can change contrast and saturation on a photo, but these operations do not make sense in relation to 3D models, texts, or sound. So when we think of photography, film or any other medium, we can think of it as a combination of "media-independent" techniques which it shares with all other mediums, and also techniques which are specific to it.

MC: I'd proposed the title "Don't Study Media, Study Software" for this article. But it sounds like you are taking a more balanced view?

LM: Your title makes me nervous, because some people are likely to misinterpret it. I prefer to study software such as Twitter, Facebook, Instagram, Photoshop, After Effects, game engines, etc., and use this understanding in interpreting the content created with this software—tweets, messages, social media photos, professional designs, video games, etc. For example, just this morning I was looking at a presentation by one of Twitter's engineers about the service, and learned that sometimes the responses to tweets can arrive before the tweet itself. This is important to know if we are to analyze the content of Twitter communication between people, for example.

Today, all cultural forms which require a user to click even once on their device to access and/or participate run on software. We can't ignore technology any longer. In short: "software takes command."

Comment from Andrés Ramírez Gaviria (2 years, 4 months ago): It seems that after several decades and some false starts by other companies, Adobe is bringing back the stylus, which will never again look so futuristic as a design tool as it must have in its previous and more interesting incarnation as a light-pen in 1963 with the TX-2.
http://money.cnn.com/2013/07/08/technology/adobe-stylus.pr.fortune/index.html
What game design has taught me about running a startup
by Carla Engelbrecht Fisher on 10/26/12 05:12:00 pm

It's been more than 2.5 years since I accidentally founded No Crusts Interactive (I accepted a 30-day freelance gig that lasted 18 months), and I'm still finding my way around the school of entrepreneurship, where theories abound on strategic growth, value propositions, and the beloved pivot. (Oh, the pivot!) Some days it feels like I'm speaking an entirely different language. But in the past few weeks I had a most wondrous epiphany about the business of making games.

Every game is a startup. And every game design document is a business plan.

Suddenly, the world of minimally viable product and customer acquisition made sense. I realized that it wasn't that I was speaking a different language, but more like I was hanging out with some Brits. We were basically speaking the same language with a few key differences. You say knackered, I say tired. You say minimally viable product, I say prototype.

In this world of app-driven development, more and more companies are hanging their entire existence on a single game. Their business plan and the game are one and the same. If the game succeeds, the business does as well, and the best practices of growing a startup align with those of making a great game. By way of example, here are four principles.

1. Business plans and game design documents are living, breathing documents. These documents throw a fork in the ground, but they are not the absolute truth of the project. It used to be that the business plan was the ultimate 5-year plan and you were not to meander from its sage wisdom, no matter what. We know now that's no longer the best practice, unless you've perfected fortune-telling, in which case, please give me a call. Otherwise, it's adapt or die for the rest of us.

2. What you thought you were going to make is not what you'll end up making. The Lean Startup methodology, which guides entrepreneurs through a series of experiments in the process of starting a business, refers to the shifts in a product's definition as a pivot. We tend to call it an iteration. In both cases, it's a universally accepted truth that great ideas are not great on the first or even fifth try. It takes a repeated cycle of time, testing, and lots of prototypes to get to the true innovation (or hits).

3. Putting the product in front of customers is a requirement, not a nice-to-have. Steve Blank, author of The Startup Owner's Manual, calls it getting out of the office to talk to customers. We call it formative testing of prototypes. In both cases, it means putting things in front of the target audience long before the official launch. It also means listening to that audience and addressing their feedback.

4. It's OK to show the audience an unpolished, raw product. This is the minimally viable product in startup world, or in other words, the smallest kernel of the product that needs to be built in order to test the idea. It's a prototype and it shouldn't be something that you've spent years slaving over and perfecting. It's the rough cut. Sometimes it's sloppy and sometimes it's a taped-together approximation of the idea. But the goal is to do it quickly and cheaply, yet well enough to get the appropriate confirmation that you're on the right path.

Is this blue sky thinking? Probably. Is this process right for every project or every client? Definitely not. Every project is idiosyncratic, which means not all of the above conditions are met.
But in an ideal situation, we're happiest when we can create an initial game design document, then kick the prototyping and testing cycle into gear until we arrive at a product that makes us happy.

The greatest epiphany of all? It turns out that all of this methodology boils down to the scientific method. Dust off your hypothesis hat, because making great games (and companies) is all hypothesis > testing > refine hypothesis > testing > refine hypothesis > testing…

Have a smashing week. I'm off to CineKid in Amsterdam. Give a yell if you want to meet up or just say hello: [email protected] or @noCrusts on Twitter.

Read more: http://kidscreen.com/2012/10/22/what-game-design-has-taught-me-about-running-a-startup/
Microsoft to introduce major changes to Windows Server
Posted on 6-Jul-2012 04:00 | Computing

Microsoft today announced changes to its Windows Server product line ahead of the launch of Windows Server 2012 later this year. Windows Server 2012 has been simplified compared with previous versions, cutting back from eight editions to four.

Key changes across Windows Server Datacenter and Windows Server Standard mean that both editions now have feature parity, differing only in virtualisation rights: Windows Server Datacenter (the full-specification version) will provide customers with unlimited virtualisation, whilst Windows Server Standard will allow customers two virtual instances with each license.

"This is an exciting time in the server space, with Windows Server 2012 being the first cloud operating system, meaning customers can use the cloud to optimise their businesses. It will be an easy-to-manage, multiple server platform that houses the power of many servers in one," says Bradley Borrows, Business Group Lead, Server and Tools Marketing for Microsoft New Zealand.

"It is great news for small-to-medium businesses, as they will have access to the same functionality as large corporate customers, with the same premium features, including business continuity, data security and compliance, automation and increased storage capacity, previously only found with the Datacenter edition."

For larger customers, the Datacenter edition with unlimited virtualisation rights will provide scalability and predictability along with lower costs.

"Customers will be able to choose the edition that is right for them by taking into consideration the size of their company and their virtualisation needs," says Borrows. "There will be additional value for small business customers, including high availability features previously only offered in the premium editions and an additional virtual instance. No matter what the scenario, whether the business is large or small, there will be more value provided with Windows Server 2012, giving businesses another affordable choice in the virtualised server space."

A number of New Zealand customers are participating in the early adoption programme, allowing them to trial Windows Server 2012 before the official launch later in the year.

Pricing and licensing changes have not been announced at this stage, but licensing will be simpler: there will be a consistent processor-based licensing model between the two editions, making it easier for customers to purchase and manage their licenses.

More information: http://www.microsoft.com/windowsserver2012...
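To make the virtualisation-rights difference concrete, here is a minimal sketch of how the stated rule works out: Standard covers two virtual instances per license, while a Datacenter license on a host covers an unlimited number. The function name is illustrative (not from Microsoft), and actual cost comparisons depend on the pricing that had not yet been announced.

```python
import math

def standard_licenses_needed(vm_count: int) -> int:
    """Windows Server 2012 Standard: two virtual instances per license,
    per the announcement above. Datacenter: unlimited VMs per licensed host."""
    return math.ceil(vm_count / 2)

if __name__ == "__main__":
    for vms in (2, 6, 20):
        licenses = standard_licenses_needed(vms)
        print(f"{vms} VMs -> {licenses} Standard license(s), or 1 Datacenter license")
```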
Bug repellent for supercomputers proves effective
DOE/Lawrence Livermore National Laboratory

Researchers have used the Stack Trace Analysis Tool (STAT), a highly scalable, lightweight tool, to debug a program running more than one million MPI processes on the IBM Blue Gene/Q-based Sequoia supercomputer.

Lawrence Livermore National Laboratory (LLNL) researchers have used the Stack Trace Analysis Tool (STAT), a highly scalable, lightweight tool, to debug a program running more than one million MPI processes on the IBM Blue Gene/Q (BGQ)-based Sequoia supercomputer. The debugging tool is a significant milestone in LLNL's multi-year collaboration with the University of Wisconsin, Madison (UW) and the University of New Mexico (UNM) to ensure supercomputers run more efficiently.

Playing a significant role in scaling up the Sequoia supercomputer, STAT, a 2011 R&D 100 Award winner, has helped both early access users and system integrators quickly isolate a wide range of errors, including particularly perplexing issues that only manifested at extremely large scales of up to 1,179,648 compute cores. During the Sequoia scale-up, bugs in applications as well as defects in system software and hardware have manifested themselves as failures in applications. It is important to quickly diagnose errors so they can be reported to experts who can analyze them in detail and ultimately solve the problem.

"STAT has been indispensable in this capacity, helping the multi-disciplined integration team keep pace with the aggressive system scale-up schedule," said LLNL computer scientist Greg Lee.

"While testing a subsystem of Blue Gene/Q, my test program consistently failed only when scaled to 1,179,648 MPI processes. Although the test program was simple, the sheer scale at which this program ran made debugging efforts highly challenging. But when I applied STAT, it quickly revealed that one particular rank process was consistently stuck in a system call," said Dong Ahn, a computer scientist in Livermore Computing.

Based on this finding, a system expert took a close look at the compute core on which this rank process was running and discovered a hardware defect. "Replacing the component suddenly got the entire Sequoia system back to life," Ahn said. "Putting this exercise into perspective, this error was due to a defect in a tiny hardware unit, the decrementor, of a single hardware thread out of a total of 4.7 million hardware threads. I felt it was like finding a needle in a haystack over a coffee break."

Sequoia delivers 20 petaflops of peak power and was ranked No. 1 on the June TOP500 list this year. It is currently ranked No. 2, behind Oak Ridge National Laboratory's Titan. LLNL plans to use Sequoia's impressive computational capability to advance understanding of fundamental physics and engineering questions that arise in the National Nuclear Security Administration's (NNSA) program to ensure the safety, security and effectiveness of the United States' nuclear deterrent without testing. Sequoia also will support NNSA/DOE programs at LLNL that focus on nonproliferation, counterterrorism, energy, security, health and climate change.

As LLNL takes delivery of the Sequoia system and works to move it into production, computer scientists will migrate applications that have been running on earlier systems to this newer architecture. This is a period of intense activity for LLNL's application teams as they gain experience with the new hardware and software environment.
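The stuck-rank anecdote above captures the core idea behind STAT: sample a stack trace from every MPI rank, merge ranks whose traces match, and flag the outliers for closer inspection. The snippet below is only a toy sketch of that grouping idea, written with mpi4py; it is not how STAT itself works (STAT attaches to a running, unmodified application and merges traces scalably rather than gathering everything at a single rank), and the helper and file names are illustrative.

```python
# Toy sketch of the "merge equivalent stack traces, flag the stragglers" idea.
# Run with e.g.: mpirun -n 64 python stack_group_sketch.py
import traceback
from collections import defaultdict

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def current_stack_signature() -> str:
    # Coarse signature of where this rank currently is: one file:line:function
    # entry per frame, joined into a single string we can compare across ranks.
    frames = traceback.extract_stack()
    return " <- ".join(f"{f.filename}:{f.lineno}:{f.name}" for f in frames)

# In a real hang, traces would be sampled while the application is wedged;
# here each rank simply reports the signature at this call site.
signature = current_stack_signature()
all_signatures = comm.gather(signature, root=0)

if rank == 0:
    groups = defaultdict(list)
    for r, sig in enumerate(all_signatures):
        groups[sig].append(r)
    # Most ranks usually share one signature; a tiny group (often a single
    # rank, like the one stuck in a system call above) is the place to look.
    for sig, ranks in sorted(groups.items(), key=lambda kv: len(kv[1])):
        print(f"{len(ranks)} rank(s), e.g. rank {ranks[0]}:\n  {sig}")
```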
"Having a highly effective debugging tool that scales to the full system is vital to the installation and acceptance process for Sequoia. It is critical that our development teams have a comprehensive parallel debugging tool set as they iron out the inevitable issues that come up with running on a new system like Sequoia," said Kim Cupps, leader of the Livermore Computing Division at LLNL. STAT is particularly important for LLNL because supercomputer simulations are essential in virtually every mission area of the Laboratory. The tool also has been used at other sites and proved to be effective on a wide range of supercomputer platforms, including Linux clusters and Cray systems. The team is actively pursuing further optimization of STAT technologies and is exploring commercialization strategies. More information about STAT, including a link to the source code, is available on the Web. The above post is reprinted from materials provided by DOE/Lawrence Livermore National Laboratory. Note: Materials may be edited for content and length. DOE/Lawrence Livermore National Laboratory. "Bug repellent for supercomputers proves effective." ScienceDaily. ScienceDaily, 14 November 2012. <www.sciencedaily.com/releases/2012/11/121114134713.htm>. DOE/Lawrence Livermore National Laboratory. (2012, November 14). Bug repellent for supercomputers proves effective. ScienceDaily. Retrieved November 30, 2015 from www.sciencedaily.com/releases/2012/11/121114134713.htm DOE/Lawrence Livermore National Laboratory. "Bug repellent for supercomputers proves effective." ScienceDaily. www.sciencedaily.com/releases/2012/11/121114134713.htm (accessed November 30, 2015). Blue Gene Computational genomics Knot theory Programming Model for Supercomputers of the Future June 25, 2013 — The demand for even faster, more effective, and also energy-saving computer clusters is growing in every sector. The new asynchronous programming model GPI from Fraunhofer ITWM might become a key ... read more New Simulation Speed Record on Sequoia Supercomputer Apr. 30, 2013 — Computer scientists have set a high performance computing speed record that opens the way to the scientific exploration of complex planetary-scale systems. Scientists have announced a record-breaking ... read more Record Simulations Conducted on Lawrence Livermore Supercomputer Mar. 19, 2013 — Researchers have performed record simulations using all 1,572,864 cores of Sequoia, the largest supercomputer in the world. Sequoia, based on IBM BlueGene/Q architecture, is the first machine to ... read more 3-D Motion of Cold Virus Offers Hope for Improved Drugs Using Australia's Fastest Supercomputer July 17, 2012 — Researchers are now simulating in 3-D, the motion of the complete human rhinovirus, the most frequent cause of the common cold, on Australia's fastest supercomputer, paving the way for new drug ... read more Strange & Offbeat
Concealed the Conclusion

Mirrinus (2009-02-07 22:24):
Well, since no one else was using the "new thread discussion" feature...

We all know about the wonderful main series of Touhou games, but has anyone else here tried playing Concealed the Conclusion? It's pretty much a complete Touhou game that plays very similarly to Imperishable Night, except it's an unofficial work made using Danmakufu. I recently got it working on my computer, and I'm really enjoying it. Nearly every major character from games 5-9 (including Shinki and Mima!) shows up as a boss at some point, thanks to a full four different scenarios to choose from, with three stages being completely different depending on the scenario. CtC also has original spellcards for everyone, of course, many of which are quite creative, yet still in-character for the most part (Yuyuko and Yukari combined spellcard? Sweet!).

I only have three gripes with the game. First, there's the Hakurei system, which is similar to the Time system from Imperishable Night, except it is far harsher on you if you fail to gather enough, making the final boss essentially unwinnable at times. Second, each boss's Last Spell is way harder to unlock; to get them to use it, you essentially have to have a perfect run of the entire stage, which is just not happening in any mode beyond Normal for someone of my skill level. Third, the storyline is really sad and tragic for a Touhou game, but thankfully it isn't canon at all.

The game also boasts the return of the popular spellcard practice option last seen in Imperishable Night, complete with cool new Last Words for everyone. Here are a few of my favorites:
Eirin's Maze-o-Doom: http://www.youtube.com/watch?v=49PYrKiGB1s
Yuka shows how Master Spark is really done: http://www.youtube.com/watch?v=xRYGeWJPRz0
Suika wants to play "Simon Says": http://www.youtube.com/watch?v=PLhZP63XJDE
The Yakumo family together: http://www.youtube.com/watch?v=3CZU3PBjbUw
Youmu tries her hand at DDR: http://www.youtube.com/watch?v=A7WnMc9rvwc
Reimu is just plain HAX: http://www.youtube.com/watch?v=4Y7tFLuliFc

Sadly, there are very few videos of Concealed the Conclusion on YouTube; besides the collection of Last Words, the only ones I can find are for the Extra Stage, the Phantasm Stage, and maybe two or three other random stages at varying difficulties. There are so many more stages in the game, thanks to the whole scenario system, plus the final boss fight is just plain epic, as it's more like a nostalgia trip down Touhou history. I'm still exploring what the game has to offer, such as trying to unlock every shot type for each scenario (which I'm still not too sure how to do). I also would like to find out how to unlock all those Last Words, as it seems no one knows how that works. Perhaps someone here could help me out?

USB500:
I've heard of CTC, and I'd like to give it a try, considering that it's done entirely in Danmakufu. I know nothing of it since I never had the chance to try it, though the soundtrack collection is sweet.

Exilon:
I've played it :P And for me (whose skill level is next to zero), even Easy is quite hard, mainly because many of the spellcards are either too easy or too hard, since they're not very different from the later difficulties. And there are spellcards that are easier on Normal than on Easy, namely one of Yukari's spellcards. Anyway, I haven't been able to clear it yet.
By the end of the fifth stage I'm out of lives (and therefore Hakurei), because of the sudden difficulty jump and Youmu doing a number on me.

cicido:
MarisaC is the most fun to play with. You literally SPAM the whole screen with Christmas trees.

Mirrinus:
Since YouTube seems to lack videos of most of the CtC stages, I'm trying to upload a few of my own. I want to include the Last Spells, but unfortunately, those only seem to be used against you if you capture the entire stage perfectly without bombs or deaths. Sadly, I can only seem to accomplish this on Normal mode... I need more practice. I have Aya's stage and Yuka's stage loaded so far.
http://www.youtube.com/watch?v=bBxX0Su5FZc
http://www.youtube.com/watch?v=o2MdGN0jcPY

Another poster:
I never played that game, but it seems to be fun. Is that Touhou series a fan-made game?

Mirrinus:
Yeah, it's an unofficial fan-made Touhou game modeled after the main series. It's more like a homage to the series, what with the high number of returning characters and the whole nostalgia final stage.

Edit: Loaded Remilia's stage and Tewi's stage.
http://www.youtube.com/watch?v=-_2-MBB-cTI
http://www.youtube.com/watch?v=JR959P2U8MA

Also, what are the exact requirements for a boss to use a Last Spell on you? I thought it was having to capture the entire stage, but I'm starting to think I'm wrong.

PhoenixG:
Well, my guess is you also need enough Hakurei to unlock it. I've played it, and it's quite fun. IMO it's a lot easier than the real Touhou games.

MaxMaximilianMaximus:
From what I see, for a boss to unleash a Last Spell on you:
- You need enough Hakurei points
- The gauge displayed near the Marisa sprite must be full by the time you defeat the boss (in other words, the Hakurei point counter must be displayed in green)
Simply put, you need to stay alive. (This makes mid-bosses' Last Word spell cards quite hard to capture.)