id (string, length 30-34) | text (string, length 0-75.5k) | industry_type (string, 1 class; the value 计算机, "computer", appears at the end of records)
---|---|---|
2014-23/2664/en_head.json.gz/33230 | Search | RSS | Updates | E-Filing | Initiatives | Consumers | Find People
Communications History
The Internet: A Short History of Getting Connected
Common Standards
In 1969, when the ARPANET eventually connected computers at Stanford, UCLA, UC-Santa Barbara, and the University of Utah, it was a significant step toward realizing the vision of the computer as an extender of human capabilities. But, four connected computers did not constitute a "galactic" network. How ARPANET created the foundation upon which today's true "galactic" network, the Internet, is built is a story about using common standards and protocols to implement vision.
One historian of the Internet says, "In the beginning was - chaos." And so it often is when people are trying something so new that many can't even find words to describe it. But, while chaos can bring great energy and excitement, differing techniques, media, and protocols have to give way to common approaches if a build-up of chaotic energy is to result in something other than an explosion.
Excerpts from Request for Comments (RFC) 1000 (August 1987) give a peek into how the original ARPANET team harnessed the energy of their new creation. These insights also show that, from its very beginning, today's Internet was conceived and established as a peer-to-peer network:
"At this point we knew only that the network was coming, but the precise details weren't known. That first meeting was seminal. We had lots of questions....No one had any answers, we did come to one conclusion: We ought to meet again. The first few meetings were quite tenuous. We had no official charter. Most of us were graduate students and we expected that a professional crew would show up eventually to take over the problems we were dealing with....later...it became clear to us that we had better start writing down our discussions....I remember having great fear that we would offend whomever the official protocol designers were...(so we labeled our decisions) "Request for Comments" or RFC's.
"Over the spring and summer of 1969 we grappled with the detailed problems of protocol design. Although we had a vision of the vast potential for intercomputer communication, designing usable protocols was another matter.... It was clear we needed to support remote login for interactive use -- later known as Telnet -- and we needed to move files from machine to machine. We also knew that we needed a more fundamental point of view for building a larger array of protocols. With the pressure to get something working and the general confusion as to how to achieve the high generality we all aspired to, we punted and defined the first set of protocols to include only Telnet and FTP functions. In December 1969, we met with Larry Roberts in Utah, and suffered our first direct experience with "redirection". Larry made it abundantly clear that our first step was not big enough, and we went back to the drawing board. Over the next few months we designed a symmetric host-host protocol, and we defined an abstract implementation of the protocol known as the Network Control Program. ("NCP" later came to be used as the name for the protocol, but it originally meant the program within the operating system that managed connections. The protocol itself was known blandly only as the host-host protocol.) Along with the basic host-host protocol, we also envisioned a hierarchy of protocols, with Telnet, FTP and some splinter protocols as the first examples.
"The initial experiment had been declared an immediate success and the network continued to grow. More and more people started coming to meetings, and the Network Working Group began to take shape. Working Group meetings started to have 50 and 100 people in attendance instead of the half dozen we had had in 1968 and early 1969....In October 1971 we all convened at MIT for a major protocol "fly-off." Where will it end? The network has exceeded all estimates of its growth. It has been transformed, extended, cloned, renamed and reimplemented. But the RFCs march on."
Indeed they do. Today there are nearly 4,000 RFCs, and they are just one of several mechanisms used to propose and decide on standards for the Internet - a network of networks that learned from the ARPANET but had to be created and developed on its own terms. Because of the increasing complexity the Internet's TCP/IP protocols represented when compared to ARPANET's NCP protocol - simply put, the difference between creating one national network versus linking multiple, world-wide networks - several additional methods and organizations were established in the 1980s and 1990s to deal with protocols and standards. First among these was the 1986 establishment of the Internet Engineering Task Force (IETF). The IETF took over responsibility for short-to-medium term Internet engineering issues, which had previously been handled by the Internet Activities Board. The Internet Society (ISOC), begun in 1992, provides an organizational home for the IETF and the Internet Architecture Board (IAB) (previously IAB stood for the Internet Activities Board).
Another organization, ICANN (the Internet Corporation for Assigned Names and Numbers), established in 1998, is a public-private partnership that is "responsible for managing and coordinating the Domain Name System (DNS) to ensure that every address is unique and that all users of the Internet can find all valid addresses. It does this by overseeing the distribution of unique IP addresses and domain names. It also ensures that each domain name maps to the correct IP address." These and other organizations employ a variety of working groups, task forces, and committees to work through a multi-stage process of suggesting, reviewing, accepting, and issuing standards for the Internet. When a specification reaches the point that it "is characterized by a high degree of technical maturity and by a generally held belief that the specified protocol or service provides significant benefit to the Internet community," it is released as an Internet Standard. Today there are 63 Internet Standards.
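The DNS mapping described above is easy to observe directly. The short Python sketch below is purely illustrative (it is not part of the original article): it asks the local resolver which IP addresses a given domain name currently maps to, using example.com simply because that name is reserved for documentation.

```python
import socket

def lookup(hostname):
    """Return the set of IP addresses a domain name currently maps to."""
    # getaddrinfo consults DNS (via the local resolver configuration) and
    # returns one tuple per address family / socket type combination.
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # Prints whatever addresses the name resolves to at the moment it is run.
    print(lookup("example.com"))
```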
By making common standards a routine practice from the beginning, ARPANET began pouring a strong foundation. In fact, ARPANET was so dedicated to common standards that RFC 1 was issued on April 7, 1969, six months before the first network connection was made. By 1982, when ARPANET transitioned to the use of the TCP/IP inter-networking protocols, the foundational footings had fully settled and the way was open for broader public involvement. In 1987, the National Science Foundation (NSF) took over the funding and responsibility for the civilian nodes of the ARPANET. In addition, NSF had built their own T1 backbone for the purpose of hooking the Nation's five supercomputers together. While the slower ARPANET nodes continued in operation, the faster T1 backbone of the NSFnet, increasingly called the Internet, began to get lots of attention from private enterprises. Officially sanctioned civilian demands upon the NSFnet/Internet included:
MCI Mail and CompuServe, first formally sanctioned commercial email carriers connected to the Internet, 1988 and 1989; and
The World Comes On Line, first public dial-up Internet Service Provider, 1989.
By the time the ARPANET was formally decommissioned in 1990, the NSFnet/Internet was poised for explosive growth. When the NSF lifted all restrictions on commercial use of its network backbone in 1991, today's Internet was begun.
| 计算机 |
2014-23/2664/en_head.json.gz/33357 | Icons of Progress
The Era of Open Innovation
No product, idea, or achievement is possible without our most critical asset—the collective thought capital of hundreds of thousands of IBMers. The expertise, technical skill, willingness to take risk and overall dedication of IBM employees have led to countless transformative innovations through the years. Meet team members who contributed to this Icon of Progress.
Irving Wladawsky-Berger Formerly an IBM vice president for technology strategy, Dr. Irving Wladawsky-Berger guided the IBM team into bringing Linux into the business. He began his IBM career in 1970 at the company's Thomas J. Watson Research Center. In 1985, he headed IBM’s initiatives in supercomputing and parallel computing. Wladawsky-Berger became chairman emeritus of the IBM Academy of Technology, and was co-chair of President Bill Clinton's Information Technology Advisory Committee. A founding member of the Computer Sciences and Telecommunications Board of the National Research Council, he is a visiting professor of engineering systems at the Massachusetts Institute of Technology and an adjunct professor in the Innovation and Entrepreneurship Group at the Imperial College (London) Business School. Originally from Cuba, Wladawsky-Berger earned an MS and a PhD in physics from at the University of Chicago in Illinois.
Samuel J. Palmisano Sam Palmisano, formerly vice president of the IBM Server Group, is chairman of the board, president and chief executive officer of IBM. He led the effort to adopt Linux on IBM servers and other hardware. Palmisano began his career with IBM in 1973 in Baltimore, Maryland. Since then, he has held a series of leadership positions at IBM, including senior vice president for the Enterprise Systems and Personal Systems groups. Palmisano played a key role in creating and leading IBM Global Services, rising to senior vice president, and building the largest and most diversified information technology services organization in the industry. He also served as senior managing director of operations for IBM Japan. He became president and chief operating officer in 2000, and was appointed to chief executive officer in 2002, and chairman in 2003. Palmisano is a graduate of The Johns Hopkins University.
Daniel D. Frye Born in Portland, Oregon, Dr. Daniel Frye is a founding board member of the Linux Foundation. Frye graduated from the University of Idaho with a degree in physics and received both his master of arts in physics and his PhD in theoretical atomic physics from Johns Hopkins University. Frye started at IBM with a postdoctoral position in parallel programming and later rose through the development ranks and coauthored original IBM corporate strategies for both Linux and open-source software. Frye founded, and still leads, the IBM Linux development team and the IBM Linux Technology Center (LTC). He is currently the vice president of IBM’s open systems and solutions development. Frye has served in a variety of industry Linux and open-source groups. In 2010, he was named to the University of Idaho Alumni Hall of Fame for contributions to open-source software.
Linus Torvalds On September 17, 1991, Linux version 0.01 is released.
Linus Torvalds is the creator and patent holder of Linux. He was born in Helsinki, Finland, and attended the University of Helsinki between 1988 and 1996. He earned an MSc in computer science with the thesis “Linux: A Portable Operating System.” He has worked on Linux ever since. In early 1997, Torvalds accepted a position with Transmeta and worked there through June 2003. Torvalds then joined the Open Source Development Labs, which later merged with the Free Standards Group to become the Linux Foundation, where he was named the foundation’s first Fellow. Torvalds currently resides in Portland, Oregon, where he and his wife, Tove, have three daughters. | 计算机 |
2014-23/2664/en_head.json.gz/35232 | xorg/ Other/ Press/ XorgOIN
9 AUGUST 2012: The X.Org Foundation has joined the Open Invention Network (OIN) patent non-aggression community in order to better protect the future of the X Window System. OIN has granted the Foundation a license to use all of the patents they control or which are covered by agreements with other OIN community members and licensees, in exchange for a pledge from the Foundation to license back any patents which the Foundation may come into possession of. (Currently the Foundation owns no patents, but if we ever do, they will be covered by this agreement.) This will help protect the Foundation's resources and donations made to it against patent claims, allowing the Foundation to devote those resources and donations to the improvement of the window system itself. The OIN definition of the "Linux System" for which patent claims are covered by the agreement has long included X Window System software packages, and thus the X.Org Foundation receives coverage for many of 1the software packages we release. Due to the reciprocal nature of the OIN agreement, the Foundation can only enter to this agreement on behalf of the Foundation, as we cannot pledge anyone else's patents to the OIN community. Any vendors, distributors, or contributing organizations which wish to obtain similar protection should contact OIN directly about joining the OIN non-aggression community on their own, as described on http://www.openinventionnetwork.com/pat_license.php. In order to more broadly protect the open source desktop software systems, the X.Org Foundation is also asking our members and contributors to participate in the defensive patent programs sponsored by OIN, the Software Freedom Law Center, and the Linux Foundation, under the Linux Defenders umbrella, including Defensive Publication to establish prior art citations for new techniques, and helping the X.Org Foundation locate and publish prior art references from our archives of computer graphics and window system research and development stretching back over 25 years, through the original X Consortium back to the project's founding at MIT. Those who wish to help can learn about Defensive Publications at http://www.defensivepublications.org/ and can contact the X.Org Foundation Board at [email protected] to be included in discussions about how to best utilize the X development archives. The X.Org Foundation is a non-profit public charity dedicated to supporting and defending the ongoing development of open source graphics and window system software, centered around the X Window System, one of the oldest and most successful open source software projects in existence. The X Window System software has become the standard graphical interface across Linux & Unix workstations and servers, and has been included in a growing number of mobile phones and tablets in recent years. More information about the X.Org Foundation and X Window System may be found at http://www.x.org/. Open Invention Network® is an intellectual property company that was formed to promote Linux by using patents to create a collaborative environment. It promotes a positive, fertile ecosystem for Linux, which in turns drives innovation and choice in the global marketplace. This helps ensure the continuation of innovation that has benefited software vendors, customers, emerging markets and investors. More information about OIN is available at http://www.openinventionnetwork.com/. 
Conceptualized by Open Invention Network and co-sponsored with the Software Freedom Law Center and The Linux Foundation, Linux Defenders is a first-of- its-kind program which combines free online intellectual property (IP) publication with defensive patent tools to provide the Linux and open source community an effective vehicle to reduce future patent concerns. Linux Defenders serves as a portal for the Linux and broader open source community and seamlessly links to the Peer to Patent and Post-Issue Peer to Patent platforms that New York Law School manages. The Linux Defenders web site is located at http://linuxdefenders.org/. Links: | 计算机 |
2014-23/2664/en_head.json.gz/36669 | Bloor Security
What did IPv6 Day prove?
By Fran Howarth on August 23, 2011 4:47 PM
IPv6 Day came and went without much fanfare. That is because, according to participants, it worked. True, there were a few problems encountered, but no more than expected, and that was one of the main points of the exercise anyway. According to Cisco, the event proved that careful and gradual adoption will be easier than believed. And Arbor Networks reported that the test was enough to tell us that we can handle the transition to IPv6.

So what happens next? One of the benefits seen from the day is that it has persuaded hardware and software vendors to add support for IPv6 into their products, which has been one of the biggest sticking points to date. There are still further challenges to be overcome, including details of running dual stack IPv4 with IPv6 and new security challenges that are unique to IPv6. But now is the time for all organisations to at least be planning for their own transition.

IPv6 will allow continued growth of the internet, which has become essential for commerce, communication and social interaction. According to Verisign, internal drivers for adoption are for organisations to be as technologically current and future-proofed as possible, whilst external drivers include the need to keep up with the increasing number of devices requiring IP addresses, ranging from mobile and streaming technologies, to smart meters, cars, TVs, game consoles and medical devices, plus a surge in new users from emerging markets who all need IP addresses.

Another push for IPv6 take-up is that governments worldwide are increasingly looking to promote adoption of IPv6. In Europe, national governments are undertaking their own initiatives, as well as efforts being made at an EU level. The US government is going even further as it believes that IPv6 technologies will allow it to pursue policy goals in areas such as healthcare, education and energy. In September 2010, the federal government mandated that all agencies must upgrade external-facing systems to IPv6 by end-2012 and internal applications that communicate with the internet by 2014.

The transition to IPv6 will not happen overnight, but there is finally a great deal happening to spur adoption. There are workarounds that have been put in place to extend the life of IPv4, but for organisations these are just that - temporary workarounds, not a long-term solution. According to Alan Way of Spirent: "The organisation that sticks doggedly to its old IPv4 inheritance won't be cut off from the outside world, it will simply suffer increasingly degraded performance as more and more communications move to IPv6. For financial services and such high speed transactions this would be disastrous. For other businesses, it could still erode their competitive edge."
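To make the dual-stack point above concrete, the sketch below shows one common way a service can accept both IPv6 and IPv4 clients on a single listening socket. It is a minimal illustration using Python's standard socket module and an arbitrary port number; it is not taken from the article or from any of the vendors quoted, and platform support for the IPV6_V6ONLY option varies.

```python
import socket

def open_dual_stack_listener(port=8080):
    """Listen for both IPv6 and (via mapped addresses) IPv4 clients on one socket."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Clearing IPV6_V6ONLY asks the OS to accept IPv4 connections as
    # IPv4-mapped IPv6 addresses (::ffff:a.b.c.d) on the same socket.
    # Not every platform permits this, which is part of why dual-stack
    # deployments need testing rather than assumptions.
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", port))
    sock.listen(5)
    return sock

if __name__ == "__main__":
    listener = open_dual_stack_listener()
    print("Listening on", listener.getsockname())
```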
Tags: internet infrastructure, IPv6, networks, security
2014-23/2665/en_head.json.gz/5248 | Knowledge Center HomeCloud HostingCMS Comparison: Drupal, Joomla and Wordpress Feedback
CMS Comparison: Drupal, Joomla and Wordpress
Last updated on April 4, 2013
Authored by: Rackspace Support
If creating a website for your business is on the horizon, you may be wondering which content management system (CMS) is the best choice for you. Here’s a look at three of the most widely-used ones. All three are open-source software, each developed and maintained by a community of thousands. Not only are all three free to download and use, but the open-source format means that the platform is continuously being improved to support new Internet technologies. With all of these systems, basic functions can be enhanced ad infinitum with an ever-expanding array of add-ons, contributed from their respective communities.
There’s no one-size-fits-all solution here; it depends on your goals, technical expertise, budget and what you need your site to do. For a simple blog or brochure-type site, Wordpress could be the best choice (while very friendly for non-developers, it’s a flexible platform also capable of very complex sites). For a complex, highly customized site requiring scalability and complex content organization, Drupal might be the best choice. For something in between that has an easier learning curve, Joomla may be the answer.
When you have questions or need help, will you be able to find it easily? With all of these systems, the answer is yes. Each has passionate, dedicated developer and user communities, making it easy to find free support directly through their websites or through other online forums or even books. In addition, paid support is readily available from third-party sources, such as consultants, developers and designers. Each of these systems shows long-term sustainability and longevity; support for them will continue to be readily available for the foreseeable future. The more time and effort you are willing and able to invest into learning a system, the more it will be able to do for you. With both Wordpress and Joomla, you can order a wide range of services and options off the menu to suit your needs; with Drupal, you’ll be in the kitchen cooking up what you want for yourself, with all of the privileges of customization that entails.
See the comparison chart below for more insight into the differences in these top content management systems. Still not sure? Download each of the free platforms and do a trial run to help you decide.
Drupal: www.drupal.org
Joomla: www.joomla.org
WordPress: www.wordpress.org
Drupal is a powerful, developer-friendly tool for building complex sites. Like most powerful tools, it requires some expertise and experience to operate.
Joomla offers middle ground between the developer-oriented, extensive capabilities of Drupal and user-friendly but more complex site development options than Wordpress offers.
Wordpress began as an innovative, easy-to-use blogging platform. With an ever-increasing repertoire of themes, plugins and widgets, this CMS is widely used for other website formats also.
Community Portal: Fast Company, Team Sugar
Social Networking: MTV Networks Quizilla
Education: Harvard University
Restaurant: IHOP
Social Networking: PlayStation Blog
News Publishing: CNN Political Ticker
Education/Research: NASA Ames Research Center
News Publishing: The New York Observer
Drupal Installation Forum
Joomla Installation Forum
Wordpress Installation Forum
Drupal requires the most technical expertise of the three CMSs. However, it also is capable of producing the most advanced sites. With each release, it is becoming easier to use. If you’re unable to commit to learning the software or can’t hire someone who knows it, it may not be the best choice.
Less complex than Drupal, more complex than Wordpress. Relatively uncomplicated installation and setup. With a relatively small investment of effort into understanding Joomla’s structure and terminology, you have the ability to create fairly complex sites.
Technical experience is not necessary; it’s intuitive and easy to get a simple site set up quickly. It’s easy to paste text from a Microsoft Word document into a Wordpress site, but not into Joomla and Drupal sites.
Known for its powerful taxonomy and ability to tag, categorize and organize complex content.
Designed to perform as a community platform, with strong social networking features.
Ease of use is a key benefit for experts and novices alike. It’s powerful enough for web developers or designers to efficiently build sites for clients; then, with minimal instruction, clients can take over the site management. Known for an extensive selection of themes. Very user-friendly with great support and tutorials, making it great for non-technical users to quickly deploy fairly simple sites.
Caching Plug-ins
Pressflow: This is a downloadable version of Drupal that comes bundled with popular enhancements in key areas, including performance and scalability.
JotCache offers page caching in the Joomla 1.5 search framework, resulting in fast page downloads. Also provides control over what content is cached and what is not. In addition, page caching is supported by the System Cache Plugin that comes with Joomla.
WP-SuperCache: The Super Cache plugin optimizes performance by generating static html files from database-driven content for faster load times (a conceptual sketch of this caching pattern follows this chart).
Best Use Cases
For complex, advanced and versatile sites; for sites that require complex data organization; for community platform sites with multiple users; for online stores
Joomla allows you to build a site with more content and structure flexibility than Wordpress offers, but still with fairly easy, intuitive usage. Supports E-commerce, social networking and more.
Ideal for fairly simple web sites, such as everyday blogging and news sites; and anyone looking for an easy-to-manage site. Add-ons make it easy to expand the functionality of the site.
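All three caching plug-ins listed in the chart above rely on the same basic idea: serve a pre-rendered copy of a page when a fresh one exists, and fall back to the database-driven rendering path only when it does not. The Python sketch below illustrates that pattern in a generic way; it is not code from Pressflow, JotCache, or WP-SuperCache, and the file layout and timeout value are arbitrary choices made for the example.

```python
import os
import time

CACHE_DIR = "cache"          # hypothetical location for pre-rendered pages
CACHE_TTL_SECONDS = 300      # arbitrary freshness window for the example

def render_from_database(path):
    """Stand-in for the expensive CMS rendering path (queries, templates, plugins)."""
    return f"<html><body>Generated page for {path} at {time.ctime()}</body></html>"

def serve(path):
    """Serve a cached copy when it is fresh; otherwise render and cache it."""
    name = path.strip("/").replace("/", "_") or "index"
    cache_file = os.path.join(CACHE_DIR, name + ".html")
    if os.path.exists(cache_file) and time.time() - os.path.getmtime(cache_file) < CACHE_TTL_SECONDS:
        with open(cache_file, encoding="utf-8") as f:
            return f.read()                      # fast path: static file, no database work
    html = render_from_database(path)            # slow path: full CMS pipeline
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(cache_file, "w", encoding="utf-8") as f:
        f.write(html)
    return html

if __name__ == "__main__":
    print(serve("/blog/hello-world"))   # first call renders; repeat calls hit the cache
```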
2014-23/2665/en_head.json.gz/6391 | Sat Mar 01 2008
FYI: Funding Opportunity: Automating Deep Language Understanding
Editor for this issue: Ann Sawyer <sawyerlinguistlist.org>
Directory 1. Ann Sawyer, Funding Opportunity: Automating Deep Language Understanding
Message 1: Funding Opportunity: Automating Deep Language Understanding
Date: 01-Mar-2008
From: Ann Sawyer <sawyerlinguistlist.org>
Subject: Funding Opportunity: Automating Deep Language Understanding E-mail this message to a friend Readers, please note the time-sensitivity of the following announcement: IARPA (the [US] Intelligence Advanced Research Projects Activity) is seeking proposals for the initial phase of a new program dedicated to automating deep language understanding through the discovery of human-language indicators of social meaning. IARPA is the advanced research organization established by the Office of the Director of National Intelligence (ODNI) in October 2007. IARPA's principal mission is to impact fundamentally and positively the quality of the future operational processes of the Intelligence Community. The preceding and following paragraphs are extracted from the full solicitation, available at: http://www.nbc.gov/acquisition/fort_h/scil/BAA-08-SCIL.pdf Researchers who are interested in submitting a proposal to this solicitation are urged to read it right away, as there are many details and the deadline for applications is March 22, 2008. The Socio-Cultural Content in Language (SCIL) Program intends to explore and develop innovative designs, algorithms, methods, techniques and technologies to extend language understanding into the socio-cultural arena. The program will, in the end, develop automated resources that provide users with a broadened understanding of the contextual and social value of the information with which they work. Human language use reflects social and cultural norms, contexts and expectations. Social variables (such as religion, status, gender, education) and contextual features (such as formality, participant beliefs, social situation) can influence the form and features of language. Because language use responds to such social and cultural influences, then correlating social goals with language forms and content should provide a rich and expanded understanding of the attributes, roles and nature of the associations and intentions of the users of the language. Current human language technologies show little ability to "understand" or capture the social dimensions of language. Today, information analysts gather facts, generally without the context in which these facts occur. Yet, human language does more than serve as a means of transferring factual information. Referential meaning (i.e., conveying information about the real world) is only one aspect of language use. Language can also convey feelings and other unstated meaning; elicit behaviors from others; and build and maintain relationships.... Understanding the global community of today requires access to the varying worldviews of the players on the world stage. Many dimensions of these worldviews are reflected in language. Strides have been made in addressing the handling and processing of human language data, in areas such as information retrieval and extraction, machine translation, categorization, and speech and hard-copy processing. Although challenges remain in these areas, researchers in human language technology are positioned to extend their capabilities to a new arena. That new arena is the discovery and representation of social and cultural insights from human language use. The goal of the SCIL Program is to develop a methodology for identifying language indicators (i.e., their form, meaning and strength) of the social characteristics and objectives of members of a social group. 
The relationship between language indicators and social objectives will be culture- and language-specific but the aim is to generalize across languages and cultures. People tend to want to accomplish similar social goals; it is how they do this that differs. The social sciences have developed theories of behavior that are relevant to this effort. These theories and systems can serve as the framework for understanding social principles as well as for generalizing across cultures. (As an example, Brown and Levinson in the 1980's proposed a theory of politeness that abstracted away from language forms and culture-specific strategies and provided a generalized view of politeness that (presumably) can apply across languages or cultures.) The goal of the Program, then, is to develop a methodology for addressing similar social goals in different languages and cultures. Although using one language as a baseline is permitted, proposers should keep in mind that the goal is to be able to apply insights on linguistic indicators of a social function to a new language and culture. The SCIL Program is envisioned as a five-year effort that will be initiated at the beginning of the second half of FY2008. Phase 1 of the Program will consist of a base period of 14-months with two possible option years. The final deliverable for the base period will be made at the 12-months mark. Work may continue in the following two months but, based on the work accomplished in the first 12 months, the Government will determine whether to exercise the first option year. Year 1 of the Program will focus on development of a proof-of-concept that automates techniques and resources that link linguistic features with social goals and extended meaning. Based on the results of the prior period, option years may be exercised to expand the work. Proposals for an additional phase 2 of 2 years will be solicited under this BAA at the end of the third year. The primary focus of the Program is on human language. The aim is to associate linguistic cues and features with particular social goals and constructs of a social group (e.g., leadership, coercion, politeness). Because much social research on social norms and rules exists, it is not the intent of the program to develop new social theories. The research is focused on the automation of the association of linguistic features with social generalizations. Traditional approaches to social network analysis are not of interest, but social groups and the behaviors of their members, as conducted through or supported by language, are. Enhancement to information extraction technologies is also not of value to the Program, although such techniques can be used if it is demonstrated that the correlation between social goals and linguistic cues can be met. There are three dimensions to this effort: the social features and activities of the group and its members; the linguistic features that serve as evidence of social goals; and the social science theories that help to define the social features. It is the correlation of these three dimensions that is important to the Program, showing how language serves as evidence of social functions. Because of the expected diversity in the problems that will be addressed, the Program will not supply data to the participants. Data collection will be the responsibility of the proposer. 
The proposer must make clear what data will be used, what the features of the data are (i.e., language, source, participants, size, etc.), how the data are relevant to the topic of interest and how the data sets are sufficiently large and rich to enable the identification of correlations between the specific social problem being addressed and the language of the data. The amount of data should support the research question and the development of a convincing proof of concept. There is particular interest in the proposed use of blogs, emails, conversations, text messaging and chat. It is not expected that newswire will provide a rich source of information because it generally reports on interactions versus documenting them. Data from languages other than English and cross-cultural data are of special interest and will be considered positively. The goal of the Program is to provide analysts with language indicators of social phenomena, and the strength of those indicators, in one or a large group of documents or interactions. It is envisioned that the individual efforts in the Program will result, in the end, in an integrated resource that provides insights into multiple social and cultural dimensions of a dataset. It is the responsibility of the proposer to specify how the insights gathered will be represented and automated. SCIL is open to all research and development organizations, including Academic and eligible non-profit and not-for-profit institutions; Large and small businesses; Collaborative ventures from mixed sources; and Federally Funded Research and Development Centers (FFRDCs) and Laboratories. All international organizations will be required to team in a subcontract role with a U.S.-based organization. Proposers are invited to submit proposals for a base period of 14-months with two possible option years, indicating how the anticipated work of the base year would be extended and enhanced in the option year(s). The Government anticipates funding approximately 6-10 proposals for the first year at varying levels of effort. The base period is expected to fall within the $300,000 to $500,000 range. This funding range is an approximation. Cost proposals should reflect the realistic cost of the proposed work. Option years will be in the same funding range. The initial set of proposals is due on March 22, 2008 NLT 3:00 p.m. (MST) to the Department of the Interior/National Business Center address. Proposals must be submitted in accordance with the requirements and procedures identified in the BAA and this PIP. To be considered, full, complete proposals (in original, one copy, and electronic media) must be received. For overnight package delivery, proposals should be addressed to the following address: Dept of the Interior National Business Center, Acquisition Services Directorate Sierra Vista Branch Augur & Adair Streets (Bldg. 22208, 2nd Floor) Dept of the Interior Fort Huachuca, AZ 85613 Linguistic Field(s): General Linguistics Read more issues|LINGUIST home page|Top of issue | 计算机 |
2014-23/2665/en_head.json.gz/8236 | Do you like your games "easy"?
Thread: Do you like your games "easy"?
Do you like your games easy or easier than say, PS2 and PS1 games in the past? Do you like where video games are going in terms of difficulty?
I don't, I truly don't. I agree with the thought that video games these days hold your hand throughout the entire game. It doesn't have to be as hard as say, Ikaruga for the Gamecube, but you don't have to make it super easy either. A lot of people won't even try Demon Souls or Dark Souls because it's too hard, but that's how hard some SNES games were back in the day. I prefer the SNES version of Chrono Trigger because it's just more challenging.
What I do like is that games like Uncharted 3 give you a choice, such as Crushing and Hard.
| 计算机 |
2014-23/2665/en_head.json.gz/9125 | As this Privacy Policy changes in significant ways, we will take steps to inform you of the changes. Minor changes to this Privacy Policy may occur that will not significantly affect the ways in which we each use your personally identifiable information. In these instances, we may not inform you of such minor changes. When this Privacy Policy changes in a way that significantly affects the way we handle personal information, we will not apply the new Policy to information we have previously collected from you without giving you | 计算机 |
2014-23/2665/en_head.json.gz/11204 | Original URL: http://www.psxextreme.com/psp-reviews/31.html
Star Soldier
Control: 8
Star Soldier is a vertically scrolling shooter (a.k.a. "shmup") that's currently only available in Japan. Hopefully, a North American publisher will take a chance and bring this game to the west, because it's a decent shmup that fans of the genre would appreciate having the opportunity to play.
There aren't any wacky or innovative gimmicks here. Your job in each of the 10 different levels is to shoot and dodge your way through wave after wave of ships and bullets until you reach, and eventually destroy, the boss of the level. The screen scrolls vertically (top to bottom), but enemy ships come from all directions. In fact, this is one of the few shmups I've ever played where you have to keep an eye out for ships swooping in from the side of the screen on kamikaze runs. There are three different ships to choose from. Each has a primary weapon, which can be upgraded three times by collecting power up boxes, and a rechargeable sub-weapon, which, in addition to doling out major damage, can also erase bullets. A handy, 3-hit shield also kicks in once you fully upgrade your primary gun.
Fans of the shmup genre will find that Star Soldier harkens back to the no nonsense days when shooters were set in outer space and unconcerned with complicated weapon setups or scoring formulas. All but one of the game's 10 levels is set in outer space (above a moon, in a space station, in an asteroid field, etc.). Generic enemies appear and fly-by according to preset patterns, and you get points for blowing them up. The twin weapon setup is just right for this game, because it keeps you jockeying between regular bullets and charge blasts, and is just "underpowered" enough to ensure that you need to keep your bullet and ship dodging skills sharp. A popular term for hectic shmups is "bullet hell." I'd describe Star Soldier as "enemy hell," since enemies are constantly swooping into view from the top, bottom, and sides of the screen at all times... not to mention the gigantic boss ships that have multiple attacks and combat forms.
Certainly, Star Soldier would be better if the hero ships had more weapon choices, or if the game included screen-filling super bombs. Even so, the three ship-two weapon setup and multi-form bosses provide enough variety to keep repeated plays spicy. There are also a number of play modes to dig into and jump between. Initial play modes include a normal arcade mode, a 2-minute score attack mode, and a 5-minute score attack mode. The score attack levels are different than those used in the main game, which is nice because you can experience two "bonus" levels in addition to the 10 "regular" ones. The game keeps track of high scores in these three modes and automatically records replay runs in the score attack modes. Not bad for a game that only eats roughly 300KB on a memory stick. After you beat the game a few times, you'll eventually unlock a boss rush mode, a sound test menu, and a stage select option (woo!). Last, but not least, this is also one of those rare PSP games that supports game sharing. If you know someone else with a PSP, you can send them any of the levels in the game and they'll be able to play it until they turn their system off. Sure, a legitimate 2 player wifi mode would've been preferred, but game sharing is nevertheless a welcome bonus.
Truth be told, the PSP version of Star Soldier is a "port" of the PS2 version that was released a couple years ago (also only in Japan). The PS2 game can be imported for approximately $30, whereas the PSP game will run you at least $50 right now. The PSP version is well worth the extra dough. Along with the fact that you can run it on your PSP without having to buy a boot disc or a mod chip, the PSP version of Star Soldier is simply superior in every way to its PS2 counterpart. There are 3 different ships to pick from (instead of just 1), the music has been reworked so it doesn't sound like a broken 16-bit synthesizer, and the graphics employ a super-widescreen aspect ratio that displays more of the playing field than the PS2 version does in its "rotate" mode.
The only drawback to the super-wide display is that you have to hold the system vertically while playing. The d-pad sits below, with the screen above, and the buttons up top. You'd assume that holding the system like that would be uncomfortable, but it isn't. I put in a marathon 3-hour session the day I received the game and the only pain I felt afterward was a very minor twinge in my left wrist. That's how I feel after holding the system normally for 3-hours too. Reaching the buttons and tapping them is no problem, and the design of the PSP makes it very difficult to obscure the screen with your right hand. It almost seems like Sony took vertical use into account when designing the system.
Any misgivings you might have over holding the unit sideways will drop away once you see what the super-wide display brings to the table. There's a ton of room to maneuver, even when the screen seems packed with bullets, and the gigantic bosses fit completely within the screen boundaries. In the PS2 game, sometimes as much as half the boss would be situated off screen so that the player's ship would still have enough room to move. Thanks to the super-wide aspect ratio, you can admire the enemy's gigantic contraptions while you're blowing them to bits. From a purely technical standpoint, the graphics don't even come close to what the PSP is capable of. Even though many of the ships and environments were put together from texture-mapped polygons, the game looks and feels primarily 2D. The only objects that actually look 3D are the bosses and explosions. On the upside, the game absolutely never slows down. The frame rate is a smooth 60FPS at all times.
By the same token, the audio won't win any end of the year awards, but it does get the job done. There are plenty of the requisite laser and explosion sound effects, and speech clips are used to count off the continue countdown and score attack timers. The soundtrack is heavily tinged with 80's guitar riffs and fast-paced action beats. Most importantly, the developers employed a higher-quality synthesizer and some real instrument samples for the PSP game, so the various guitars, drums, and so forth don't sound like computer-generated crap like they did in the PS2 game.
If you're a fan of the shmup genre, definitely take the risk on Star Soldier. It'll cost you roughly $50 to import it, which isn't much more than a domestic game costs. The Japanese disc plays just fine in North American PSP units, the game's menus are in English, and there's no story text whatsoever--so you don't have to worry about lockouts or language barriers. Besides, what else are you playing on the PSP right now?! | 计算机 |
2014-23/2665/en_head.json.gz/11805 | HOME | SITE MAP | SEARCH You are here: Home > Newsroom > Press
IEEE and The Open Group Okay "Linux Manual Pages Project" to Incorporate Material from the POSIX® Standard
PISCATAWAY, NJ AND SAN FRANCISCO, CA, 21 January 2004 – The IEEE and The Open Group have granted permission to the Linux Manual Pages Project to incorporate material from the joint IEEE 1003.1™ POSIX® standard and The Open Group Base Specifications Issue 6.
This step will allow developers using the Linux manual pages to gain a better understanding of how to write portable programs utilizing IEEE Std 1003.1, “Standard for Information Technology-- Portable Operating System Interface (POSIX)”. The POSIX standard, which also forms the core volumes of Version 3 of The Open Group’s Single UNIX® Specification, defines a set of fundamental services needed for the construction of portable application programs. IEEE and The Open Group have granted permissions for reuse of material in the Linux ‘man pages’ project (see: ftp://ftp.win.tue.nl/pub/linux-local/manpages) covering over 1400 interfaces from the standard including the headers, system interfaces and utilities.
“ We could not quote the POSIX standard verbatim until now because of copyright restrictions,” said Professor Andries Brouwer, who oversees the Linux Manual Pages Project and is based at the National Research Institute for Mathematics and Computer Science in the Netherlands. “As a result, inaccuracies crept into pages written by volunteers, because they had to interpret the standard in the text they wrote.
“ The approval we’ve received from the IEEE and The Open Group to reuse POSIX documentation in the Linux man pages will make the exact standard available to our volunteers. Needless to say, we’re very grateful to both organizations for the permission to do so.”
Andrew Josey, Director of Certification at The Open Group and Chair of the Austin Group, said: “We’re taking active steps to increase the adoption of POSIX within the software community. Making POSIX more available to Linux developers is one such step. Another was the recent decision to make the POSIX standard freely available on the Internet."
About the Linux Manual Pages Project
The "man pages" project provides the Linux system with manual pages so programmers have the documentation they need to write portable code. This documentation describes what the various standards say about a function or program and any special properties or deviations from the standards in the Linux libraries and kernels. This project was started by Rik Faith and has been maintained by Andries Brouwer for the last nine years. The “man pages” distribution can be found at the ftp site of the Technische Universiteit Eindhoven at ftp://ftp.win.tue.nl/pub/linux-local/manpages.
The Open Group is a vendor-neutral and technology-neutral consortium, whose vision of Boundaryless Information Flow™ will enable access to integrated information within and between enterprises based on open standards and global interoperability. The Open Group works with customers, suppliers, consortia and other standard bodies. Its role is to capture, understand and address current and emerging requirements, establish policies and share best practices; to facilitate interoperability, develop consensus, and evolve and integrate specifications and open source technologies; to offer a comprehensive set of services to enhance the operational efficiency of consortia; and to operate the industry’s premier certification service, including UNIX certification.
About the IEEE Standards Association
The IEEE Standards Association, a globally recognized standards-setting body, develops consensus standards through an open process that brings diverse parts of an industry together. These standards set specifications and procedures to ensure that products and services are fit for their purpose and perform as intended. The IEEE-SA has a portfolio of more than 870 completed standards and more than 400 standards in development. Over 15,000 IEEE members worldwide belong to IEEE-SA and voluntarily participate in standards activities. For further information on IEEE-SA see: http://www.standards.ieee.org/.
About the IEEE
The IEEE has more than 380,000 members in approximately 150 countries. Through its members, the organization is a leading authority on areas ranging from aerospace, computers and telecommunications to biomedicine, electric power and consumer electronics. The IEEE produces nearly 30 percent of the world's literature in electrical and electronics engineering and in computer science. This nonprofit organization also sponsors or cosponsors more than 300 technical conferences each year. Additional information about the IEEE can be found at http://www.ieee.org.
The Open Group is a trademark of The Open Group.
UNIX is a registered trademark of The Open Group in the US and other countries. POSIX is a registered trademark of the IEEE Inc. Linux is a registered trademark of Linus Torvalds.
Eva Kostelkova The Open Group
Karen McCabe IEEE Senior Marketing Manager
+1 732 562-3824
2014-23/2665/en_head.json.gz/13487 | Features Game Design Challenge: The Grid
Game Design Challenge: The Grid [05.02.12] - GameCareerGuide.com staff
GameCareerGuide.com's Game Design Challenge is an exercise in becoming a game developer, asking you to look at games in a new way -- from the perspective of a game creator, producer, marketer, businessperson, and so forth. Every other Wednesday we'll present you with a challenge about developing video games. You'll have two weeks to brainstorm a brilliant solution (see below for how to submit your answers). After the two week submission period elapses, the best answers and the names of those who submitted them will be posted.

The Challenge
Design a game that works within the confines of a 32x32 grid

Assignment Details
As video game technology and hardware has advanced, developers have been able to make their games far more visually complex, with new rendering techniques, more detailed 3D models, and higher display resolutions. While this increased visual fidelity is all well and good, a real designer doesn't need to rely on stunning visuals to make a good game. For this latest Game Design Challenge, let's put that philosophy to the test. Rather than working with a high-definition display, with thousands of pixels to play with -- what if you had to make a game that worked within the confines of a 32x32 grid?

That's a very small canvas to work with when it comes to making a video game, but there are plenty of examples that already work within a similar framework. Look at Checkers, Chess, or even Connect Four -- they might not be video games per se, but they all demonstrate the types of experiences designers can make within a limited, grid-based play space. Over in the digital realm, adventure game veteran Brian Moriarty (Beyond Zork, Loom) has implemented a similar idea in his game design course at Worcester Polytechnic Institute. He even created his own grid-based game design engine, dubbed Perlenspiel. If you're looking for inspiration on how to make a game that works within a tiny grid, feel free to read more about Moriarty's engine on his website or on GameCareerGuide's sister site, Gamasutra.

[Image: Functional Colors, an example of a 32x32 game designed using Moriarty's Perlenspiel engine]

When it comes to designing your 32x32 game, feel free to incorporate outside instructions or prompts to teach players how to play. If you'd like to put on-screen text below your grid, feel free, just make sure the game itself only uses colored squares on the 32x32 matrix. Good luck, and we can't wait to see what you come up with!

To Submit
Work on your ideas, figure out your strategy for coming up with a solution, and ask questions on the forum. When your submission is complete, send it to [email protected] with the subject line "Design Challenge: (title)." Please type your answer directly in the email body. Submissions should be no more than 500 words and may contain up to three images. Be sure to include your full name and school affiliation or job title.

Entries must be submitted by Wednesday, May 16
Results will be posted Tuesday, May 22

Disclaimer: GameCareerGuide.com is not responsible for similarities between the content submitted to the Game Design Challenge and any existing or future products or intellectual property.
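As a starting point for the kind of entry the challenge describes, the sketch below implements a tiny "light chase" toy on a 32x32 matrix of colored squares using only the Python standard library (tkinter). It is offered purely as an illustration of working within the grid constraint; it is not written against Moriarty's Perlenspiel engine, and the rule it implements (click the lit square before it moves on) is an invented example.

```python
import random
import tkinter as tk

GRID, CELL = 32, 16          # 32x32 board, 16-pixel squares (display size is arbitrary)

class LightChase:
    """Click the lit square to score a point; the light then jumps elsewhere."""
    def __init__(self, root):
        self.canvas = tk.Canvas(root, width=GRID * CELL, height=GRID * CELL, bg="black")
        self.canvas.pack()
        self.score = 0
        self.target = (random.randrange(GRID), random.randrange(GRID))
        self.canvas.bind("<Button-1>", self.on_click)
        self.draw()

    def draw(self):
        self.canvas.delete("all")
        tx, ty = self.target
        self.canvas.create_rectangle(tx * CELL, ty * CELL, (tx + 1) * CELL, (ty + 1) * CELL,
                                     fill="yellow", outline="")

    def on_click(self, event):
        cell = (event.x // CELL, event.y // CELL)      # map pixel coordinates to grid coordinates
        if cell == self.target:
            self.score += 1
            print("Score:", self.score)
        self.target = (random.randrange(GRID), random.randrange(GRID))
        self.draw()

if __name__ == "__main__":
    root = tk.Tk()
    root.title("32x32 light chase")
    LightChase(root)
    root.mainloop()
```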
2014-23/2665/en_head.json.gz/14233 | More info: http://www.questionpro.com/edu
Analysis of online buying behaviour of Internet users
Project Abstract
The concept of e-shopping has indeed matured since the fire and ice years of 1998-2000. It now is no longer a euphoric technology; it forms the bedrock of most businesses. Banking, travel, entertainment, shopping and e-mailing now all form part of a new mode in which the society interacts.Internet retailing had been a hot topic for many years since the emergence of Internet, but the dotcom bust of year 2000 raised questions about whether this was a sustainable business. Experts were quick to write-off the virtual business model and claimed that the use of Internet would be limited to information exchange.The truth in 2004 is far from those predictions made four years back. Markets in western countries of America and Europe have warmed up to online shopping in a big way and now online transactions form a significant part of the total trade in these countries. Several factors have contributed to this phenomenon. Greater Internet penetration, fall in prices of hardware, fall in the price of Internet communication, development of better and more reliable technologies, and increased awareness among user are few of the prominent factors leading the change.During the last year, the number of people and hosts connected to the Net increased. In India too, Internet penetration became more widespread with bandwidth becoming readily available, Internet tariffs coming down and computer hardware becoming cheaper. The Indian Internet and E-commerce market however is nowhere close to its expected potential. E-mail applications still constitute the bulk of Net traffic in the country. Some of the various ways in which online marketing is done in India are company websites, shopping portals, online auction sites, e-choupal, etc. The acid test for Internet marketing in India lies in its effective exploitation of rural markets of India.The new businesses provide all kinds of goods and service at the doorstep of the e-customer at the click of a mouse button. The business model built for excellent service quality is based on five fundamentals: low price, big range of choices, availability, convenience, and comprehensive information about products. The most commonly found ingredient in commercially successful websites, apart from original ideas, is careful analysis of how people use the site. Hence, for the success of such a business model, it is critical for an organization to understand its customers preferences and behavior.This paper attempts to track the preferences of the new age, Internet savvy, Indian consumer and to determine the trends in on-line shopping in India.An empirical study was undertaken to capture a snap shot of on-line shopping habits among Indian consumers. The paper provides details of the research methodology including questionnaire design, pre-testing of the same, target sample, data collection and data analysis. More than 5000 netizens were requested to take part in an on-line survey during June 2004 and the findings from this would of great interest to marketers of consumer products.
Surveys released for this project:
Internet buying
QuestionPro is FREE for Academic Research
This Project Sponsored by: QuestionPro - Web Survey Software
See Research Sponsorship for more information.
2014-23/2665/en_head.json.gz/15561 | Home > Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
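To make the driver idea concrete, here is a minimal sketch of a driver-based roll-up. It is not the SEI's MRD instrument: the driver names, weights, and the 0-to-1 rating scale below are invented purely for illustration.

```python
# Toy illustration of driver-based mission risk analysis.
# NOT the SEI's MRD instrument: driver names, weights, and the rating scale are invented.

DRIVERS = {
    # driver: (weight, rating) -- rating is the analyst's judgment of how strongly
    # the driver currently supports the mission objectives
    # (1.0 = strongly supports, 0.0 = strongly impedes)
    "Objectives are realistic and clearly stated":      (0.25, 0.8),
    "Plans and schedules are sufficient":               (0.20, 0.5),
    "Staff have the needed skills":                     (0.20, 0.7),
    "External dependencies are being managed":          (0.15, 0.4),
    "Security and assurance requirements are addressed": (0.20, 0.6),
}

def mission_risk_gauge(drivers):
    """Weighted roll-up of driver ratings into a single 0-1 confidence figure."""
    total_weight = sum(w for w, _ in drivers.values())
    confidence = sum(w * r for w, r in drivers.values()) / total_weight
    return confidence, 1.0 - confidence  # confidence in success, residual mission risk

confidence, risk = mission_risk_gauge(DRIVERS)
print(f"Confidence in meeting objectives: {confidence:.2f}")
print(f"Residual mission risk:            {risk:.2f}")
for name, (w, r) in sorted(DRIVERS.items(), key=lambda kv: kv[1][1]):
    if r < 0.5:
        print(f"  Weak driver needing attention: {name} (rating {r:.1f})")
```

In practice the MRD defines its own driver set and evaluation questions; the point of the sketch is only that a small set of systemic drivers can be rated and aggregated into a single gauge of risk to the mission.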
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.
Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods
How to Use the Map
Locating the Site
Map 1: In the flood's wake. (National Park Service)
1) When the South Fork Dam (elevation 1,650 feet) was breached, the lake waters followed their natural course downhill along the river, growing stronger and more destructive as the flood waters picked up and carried along everything in their path. The first town struck was South Fork, two miles downstream. The flood claimed its first four victims and 20 to 30 homes were destroyed.
2) When the wave reached the two-mile long oxbow in the river it split. Part of the wave left the river channel here, crossed the oxbow, and hit the 75-foot-high stone viaduct. Because the water was choked with debris by this time, it was temporarily dammed at the arch. The greater part of the flood followed the oxbow, and crashed into the viaduct six to seven minutes later. For a brief moment, the wreckage at the viaduct created a second dam for Lake Conemaugh. When the viaduct collapsed, it did so with even greater violence than the South Fork Dam.
3) A mile below the viaduct the sawmill town of Mineral Point was struck by the renewed force of the wave. Thirty families lived on the village's main street, but when the flood had passed, only bare rock remained. Sixteen people died.
4) The wave headed toward East Conemaugh. A witness said the water by now was almost obscured by the debris, resembling "a huge hill rolling over and over," tossing up logs high above its surface. Before the flood hit East Conemaugh, train engineer John Hess tried to warn the residents by tying his train whistle down and racing toward town ahead of the wave. His warning saved many, but 50 people died, including about 25 passengers on trains that had been stranded in the town by earlier flooding caused by the rain.
5) As the river straightened out between East Conemaugh and Woodvale, the flood gathered speed and power. Woodvale had no warning. Part of a mill was all that was left standing after the flood struck. Of 1,100 residents, 314 died. When the Gautier Wire Works were hit, boilers exploded creating what flood survivors in Johnstown called the black "death mist."
6) The flood hit Johnstown (elevation 1,174 feet) with full force, bearing the remains of the Conemaugh Valley. The time was 4:07 p.m., 57 minutes after the dam had broken. Again, the wave split, sparing some buildings in the center of town. Still in use is the stone bridge, where the mass of debris, animals, and humans piled up and caught fire, taking 80 lives. By now, the torrent had spent its force and the wave continued to break up and lose speed as it continued its downward course. It caused no more damage.
Questions for Map 1
1. Find Johnstown on a map of Pennsylvania and describe its location within the state.
2. Trace the path of the flood wave. How many towns were affected by the flood?
3. What happened at the oxbow? Why do you think this occurred?
4. What is the elevation change between the dam site and Johnstown? How would this have affected the flood wave?
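(For quick reference when reviewing answers to question 4, using only the two elevations already given in the map description above; the interpretation of how this drop affected the flood wave is left for discussion.)

```latex
\text{elevation change} = 1650~\text{ft} - 1174~\text{ft} = 476~\text{ft}
```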
* The map on this screen has a resolution of 72 dots per inch (dpi), and therefore will print poorly. You can obtain a high quality version of Map 1, but be aware that the file may take as much as 30 seconds to load with a 28.8K modem.
Cloudius Systems, HSA Foundation and Valve Join Linux Foundation
By Linux_Foundation - December 4, 2013 - 4:19am
Gaming, Cloud and Parallel Computing Industries Invest in Linux and Collaborative Development
SAN FRANCISCO, December 4, 2013 -- The Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux, today announced that Cloudius Systems, HSA (Heterogeneous System Architecture) Foundation and Valve are joining the organization.
The newest Linux Foundation members represent both nascent open source endeavors as well as established industry leaders. Companies from diverse markets, such as gaming, cloud computing and virtualization are seeing the value of Linux and collaborative development to put them out in front of competitors. For example, Valve recently announced its plans to expand its Steam platform, with 65 million active accounts and games from hundreds of developers, into the living room with the Steam Machines project. An innovative living room device, the Steam Machines will be powered by a Linux-based operating system dubbed the Steam OS. At the same time, emerging cloud, virtualization, big data and device-driven computing companies are taking advantage of open development and collaboration to ignite innovation. More information about today’s newest Linux Foundation members:
Cloudius Systems is a startup company led by the KVM hypervisor originators, developing a new open source operating system to handle virtualized cloud workloads. Launched at CloudOpen North America recently, OSv is designed and optimized to run on top of the KVM, Xen and ESX hypervisors to simplify and accelerate cloud computing. This eliminates the redundancies, overhead and complexity typical with many of today’s cloud application deployments.
“The Linux Foundation is one of the most influential advocates for open cloud technologies, projects and companies. Its inclusiveness and broad reach is incredibly unique in the industry,” said Dor Laor, chief executive officer, Cloudius Systems. “As Linux Foundation members, we’ll be able to take advantage of both Linux’s large presence plus the Foundation's open cloud network and events.” HSA Foundation, a nonprofit backed by founding members AMD, ARM, Imagination Technologies, Qualcomm, Samsung Electronics and many others, is dedicated to developing open-standard architecture specifications to advance heterogeneous parallel computing.
“Joining The Linux Foundation is a natural fit for the HSA Foundation as we strongly believe in supporting an open ecosystem, and the HSA Foundation is focused on bringing richer heterogeneous computing into mobile, cloud-based and big data computing where Linux dominate,” said Gregory Stoner, vice president and managing director of the HSA Foundation.
Valve is well-known for its award-winning games and Steam, a leading software distribution platform with more than 65 million active accounts. The company recently announced the SteamOS, a Linux-based operating system that will power its Steam Machine living room devices.
“Joining the Linux Foundation is one of many ways Valve is investing in the advancement of Linux gaming. Through these efforts, we hope to contribute tools for developers building new experiences on Linux, compel hardware manufacturers to prioritize support for Linux, and ultimately deliver an elegant and open platform for Linux users,” said Mike Sartain of Valve.
“Our membership continues to grow as both new and mature entities embrace community development and open technologies,” said Mike Woster, chief operating officer, The Linux Foundation. “Our new members believe Linux is a strategic investment that allows their markets to evolve as quickly as possible to achieve long-term viability and competitiveness.”
About The Linux Foundation
The Linux Foundation is a nonprofit consortium dedicated to fostering the growth of Linux and collaborative software development. Founded in 2000, the organization sponsors the work of Linux creator Linus Torvalds and promotes, protects and advances the Linux operating system and collaborative software development by marshaling the resources of its members and the open source community. The Linux Foundation provides a neutral forum for collaboration and education by hosting Collaborative Projects, Linux conferences, including LinuxCon, and generating original research and content that advances the understanding of Linux and collaborative software development. More information can be found at www.linuxfoundation.org.
Linux Foundation Appoints New Fellow
By Linux_Foundation - December 12, 2010 - 10:15pm
OpenEmbedded Core Developer and Yocto Project Maintainer Richard Purdie Joins Linux Foundation
SAN FRANCISCO, December 13, 2010 – The Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux, today announced that OpenEmbedded core developer and Yocto Project maintainer Richard Purdie has been appointed to the position of Linux Foundation Fellow.
The rise of Linux in mobile and embedded computing is placing new demands on Linux software development while opening new opportunities for tools and infrastructure to ease that development. The Linux Foundation’s Fellowship Fund provides financial support for resources that can accelerate development efforts and spur the adoption of Linux and open source software. As a Linux Foundation Fellow, Purdie will work full-time on the Yocto Project, OpenEmbedded, the Poky Project and other embedded Linux development initiatives.
The Yocto Project [1] was announced in October and provides high-quality open source infrastructure and tools to help developers create custom Linux software for any hardware architecture. It is intended to provide a helpful starting point for developers and speed time to market for vendors by establishing shared build infrastructure and tools. The Yocto Project is based on OpenEmbedded, an open source project and build framework for embedded Linux that provides coding assistance, guides and FAQs. Besides Richard’s work on Yocto and OpenEmbedded, he was also the founder of the Poky Build System.
“We are happy to be able to add someone of Richard’s caliber. He has already made extremely important contributions to the advancement of embedded Linux and his depth of expertise in this area will accelerate technical progress in the year ahead,” said Jim Zemlin, executive director of the Linux Foundation.
“The Linux Foundation provides a neutral forum in which the highest priority work on Linux can be done,” said Richard Purdie, Linux Foundation Fellow. “I’m looking forward to dedicating my time to helping provide developers with tools and infrastructure to ease the development of embedded Linux and collaborating with the community to make Linux even better.”
Purdie was most recently a Core Developer at OpenEmbedded, where he was also lead maintainer of bitbake. He has also been an embedded Linux architect in Intel’s Open Source Technology Center. From 2005 to 2008, he was a Software Engineer at OpenedHand, where he worked with a variety of other open source projects such as Clutter, X server, Zaurus and Oprofile. He has also made numerous contributions to the Linux kernel, including as maintainer of the backlight and LED subsystems. Purdie received his MSci in Physics from University of Durham in 2003.
Current Linux Foundation Fellows include John Hawley, Till Kamppeter, Janina Sajka and Linus Torvalds. Previous Fellows include Steve Hemminger, Andrew Morton, Andrew Tridgell and Ted Ts’o. For more information on Linux Foundation Fellows, please click here [2].
About The Linux Foundation
The Linux Foundation [3] is a nonprofit consortium dedicated to fostering the growth of Linux. Founded in 2007, the organization sponsors the work of Linux creator Linus Torvalds and promotes, protects and advances the Linux operating system by marshaling the resources of its members and the open source development community. The Linux Foundation provides a neutral forum for collaboration and education by hosting technical events [4], including LinuxCon [5], and generating original Linux research [6] and content that advances the understanding of the Linux platform. Its web properties, including Linux.com [7], reach approximately two million people per month. The organization also provides extensive Linux training [8] opportunities that feature the Linux kernel community’s leading experts as instructors. Follow The Linux Foundation on Twitter [9]. ###
Trademarks: The Linux Foundation, Linux Standard Base. Linux is a trademark of Linus Torvalds.
Source URL: http://www.linuxfoundation.org/news-media/announcements/2010/12/linux-foundation-appoints-new-fellow
Links:
[1] http://www.yoctoproject.org
[2] https://www.linuxfoundation.org/programs/developer/fellowship
Why Caldera Released Unix: A Brief History
by Ian F. Darwin
Our strangest dreams sometimes take on a reality of their own. In January, Caldera, the latest owners of the "official" Unix source code, decided to release some of the older versions (up to "V7" and "32V") under an open source license. While not as significant as it would have been, say, ten years ago, it is nice that everyone now has access to the code that first made Unix popular, and that led to the development of the 4BSD system that underlies FreeBSD, NetBSD, OpenBSD, and Apple's Darwin (which in turn underlies Mac OS X). Since I was active in the computer field through almost all the years of Unix's development, I'd like to comment briefly on the Caldera announcement in its full context.
"Free Unix source code" was a strange dream for many of us in the late 1970s and early 1980s, and even the subject of an April Fools joke in there someplace on USENET. But then there was Minix, and it seemed less like a strange dream. Around the same time, John Gilmore was working on a project he called "Radio Free Berkeley," to replace all the encumbered source code in BSD Unix so that it could be free. And many of us worked on small pieces of it; this is why and when I wrote the file command that is on your Linux or BSD system.
While this was happening, BSD was encountering major success in powering the growing Internet (small by today's standards, but nontrivial). There were many, many university and research VAXen running 4BSD, the first mainstream Unix release to ship with a TCP/IP implementation (around 1983). DEC's (since swallowed by Compaq) ULTRIX, Sun's SunOS 3.5 and 4, and Unixes from a variety of smaller, long-dot-gone companies powered the Internet. And they were BSD Unix.
Then came the 4.4-Lite release from the University of California at Berkeley, which was at first believed to be unencumbered. Some very clever people began marketing an operating system derived from it, called BSDI, but they made a couple of minor mistakes: One, they used the term Unix in their marketing, bringing them to the attention of the AT&T lawyers; and two, there were still a few lines of AT&T code in what they were shipping.
The result changed the free software world forever, and led directly to the rise of Linux. AT&T's lawyers sued not only the upstart BSDI, but also the University of California. This lawsuit prevented any new BSD releases for a long time and eventually led the University to decide to get out of the BSD business altogether. And, after several years of bickering, AT&T abruptly settled their lawsuit, abandoning attempts to stop "free Unix" and even allowing the few remaining bitsies to be used in free Unixes. And so unto this day, some files in the free BSD Unix's /sys/kern directory contain this copyright alongside their BSD license:
 * (c) Unix System Laboratories, Inc.
 * All or some portions of this file are derived from material licensed
 * to the University of California by American Telephone and Telegraph
 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
 * the permission of Unix System Laboratories, Inc.
Unix System Laboratories is one of many names that AT&T's Unix Support Group took on over the years; the Unix trademark was assigned to different corporate bodies within AT&T so frequently that one wag apparently changed the troff footnote macro from "Unix is a registered trademark of AT&T" to something like "Unix is a registered footnote of Western Electric, no, AT&T, no, Unix Support Group, no, Unix System Laboratories, heck, I give up."
During this long hiatus, when what was by then FreeBSD could have been dominating the free software world, Linux came into the vacuum. We all knew we needed a free Unix clone with source code and, since BSD wasn't available, we took Linux. It wasn't really Unix, and it had "this funny GPL thing" attached to it, but it was close enough.
And because of that head start, Linux has overwhelmingly attracted the media's, and many hackers', attention, making it harder for the BSD systems (FreeBSD, NetBSD, and OpenBSD) to grow as popular as Linux. It's not that Linux is better, or worse, but that it got the popularity first. History shows that first, even if worst, tends to gain power and hold on to it. This is true for Microsoft's dominance of the commercial and home desktop; it's true for Unix/Linux's dominance as the engine for Internet servers; and it's true for Linux's dominance of the freeware OS niche. Of course, one exception is Netscape, which has seen | 计算机 |
Online Poker | PokerStars
PokerStars Online Poker Software Terms of Service
This end user license agreement (the "Agreement") should be read by you (the "User" or "you") in its entirety prior to your use of PokerStars' service or products. Please note that the Agreement constitutes a legally binding agreement between you and Rational Poker School Limited (referred to herein as "PokerStars", "us" or "we") which owns and operates the Internet site found at www.pokerstars.net (the "Site"). By entering into this Agreement, you acknowledge that PokerStars is part of a group of companies. As such, where used and the context allows, the term “Group” means PokerStars together with its subsidiaries and any holding company of PokerStars and any subsidiary of such holding company and any associated company with PokerStars including, but not limited to associated companies providing services under the trade mark “Full Tilt Poker”. In addition to the terms and conditions of this Agreement, please review our Privacy Policy, Cookie Policy and the Poker Rules, as well as the other rules, policies and terms and conditions relating to the games and promotions available on the Site as posted on the Site from time to time, which are incorporated herein by reference, together with such other policies of which you may be notified of by us from time to time.
By clicking the "I Agree" button as part of the software installation process and using the Software (as defined below), you consent to the terms and conditions set forth in this Agreement, the Privacy Policy, Cookie Policy and the Poker Rules as each may be updated or modified from time to time in accordance with the provisions below and therein. For the purposes of this Agreement, the definition of "Software" will include both the PokerStars poker software downloadable to your personal desktop or laptop computer (“PC”) from www.pokerstars.net (the “PC Software”) and the PokerStars mobile software application (the "Mobile Software") downloadable to a mobile device (including, without limitation, a cellular phone, PDA, tablet, or any other type of device now existing or hereafter devised) (each, a “Device”), as well as all ancillary software to the poker software (whether web-based software or client/server software).
When using the Service to play the Games (as defined below), you will have the option to purchase virtual chips (“Virtual Chips”) pursuant to Clause 1A below in addition to your use of free of charge chips (“Free Chips”). The operation and provision of the Virtual Chips for use in the Games is provided by Rational Social Projects Limited (“Rational Social”) a company within the same Group as PokerStars and which has contracted with PokerStars for this purpose. For the purposes of this Agreement, the terms “Virtual Chips” and “Free Chips” together shall be referred to as “Chips”). 1. GRANT OF LICENSE/INTELLECTUAL PROPERTY 1.1. Subject to the terms and conditions contained herein PokerStars grants the User a non-exclusive, personal, non-transferable right to install and use the Software on your PC or Device, as the case may be, in order to access the PokerStars servers and play the "play for free"/"play money" poker games (the "Games") available (the Software and Games together being the "Service").
1.2. The Software is licensed to you by PokerStars for your private personal use. Please note that the Software is not for use by (i) individuals under 18 years of age, (ii) individuals under the legal age of majority in their jurisdiction and (iii) individuals connecting to the Site from jurisdictions from which it is illegal to do so. PokerStars is not able to verify the legality of the Service in each jurisdiction and it is the User's responsibility to ensure that their use of the Service is lawful.
1.3. We reserve the right at any time to request from you evidence of age in order to ensure that minors are not using the Service. We further reserve the right to suspend or cancel your account and exclude you, temporarily or permanently, from using the Service if satisfactory proof of age is not provided or if we suspect that you are underage.
1.4. PokerStars, its Group companies and its licensors are the sole holders of all rights in the Software and the Software's code, structure and organisation are protected by copyright, trade secrets, intellectual property and other rights. You may not within the limits prescribed by applicable laws:
(a) copy, distribute, publish, reverse engineer, decompile, disassemble, modify, or translate the Software or make any attempt to access the source code to create derivative works of the source code of the Software, or otherwise;
(b) sell, assign, sublicense, transfer, distribute or lease the Software; (c) make the Software available to any third party through a computer network or otherwise; (d) export the Software to any country (whether by physical or electronic means); or (e) use the Software in a manner prohibited by applicable laws or regulations. (each of the above is an "Unauthorised Use").
PokerStars, its Group companies and its licensors reserve any and all rights implied or otherwise, which are not expressly granted to the User hereunder and retain all rights, title and interest in and to the Software.
You agree that you will be solely liable for any damage, costs or expenses arising out of or in connection with the commission by you of any Unauthorised Use. You shall notify PokerStars immediately upon becoming aware of the commission by any person of any Unauthorised Use and shall provide PokerStars with reasonable assistance with any investigations it conducts in light of the information provided by you in this respect.
1.5. The terms "PokerStars", the domain names “pokerstars.net” and “pokerstarsmobile.net” and any other trade marks, service marks, signs, trade names and/or domain names used by PokerStars on the Site and/or the Software from time to time (the "Trade Marks"), are the trade marks, service marks, signs, trade names and/or domain names of PokerStars and/or its Group companies and/or its licensors, and these entities reserve all rights to such Trade Marks. In addition, all other content on the Site, including, but not limited to, the Software, images, pictures, graphics, photographs, animations, videos, music, audio and text (the "Site Content") belongs to PokerStars and its Group companies and/or its licensors and is protected by copyright and/or other intellectual property or other rights. You hereby acknowledge that by using the Service and the Site you obtain no rights in the Site Content and/or the Trade Marks, or any part thereof. Under no circumstances may you use the Site Content and/or the Trade Marks without PokerStars' prior written consent.
Additionally, you agree not to do anything that will harm or potentially harm the rights, including the intellectual property rights, held by PokerStars, its Group companies and/or its licensors in the Software, the Trade Marks or the Site Content nor will you do anything that damages the image or reputation of PokerStars, its Group companies, employees, directors, officers and consultants.
1.6 You warrant that any names or images used by you in connection with the Site or Service (for example, your user name and avatar) shall not infringe the intellectual property, privacy or other rights of any third party. You hereby grant PokerStars and its Group a worldwide, irrevocable, transferable, royalty free, sublicensable licence to use such names and images for any purpose connected with the Site or Service, subject to the terms of our Privacy Policy.
PURCHASES OF VIRTUAL CHIPS
PokerStars has entered into an agreement with Rational Social to provide and operate the Virtual Chips for use in the Service. When purchasing Virtual Chips for use in the Service you are contracting directly with Rational Social.
1A.1. While using the Service, you may “earn”, “buy” or “purchase” Virtual Chips for use in the Service. You hereby acknowledge that these “real world” terms are only used figuratively, and you agree that you have no right or title in Virtual Chips appearing or originating in any Game, whether “earned” in a Game or “purchased” from Rational Social, or any other attributes associated within an | 计算机 |
ENIAC: The Army-Sponsored Revolution
William T. Moye
ARL Historian
Fifty years ago, the U.S. Army unveiled the Electronic Numerical
Integrator and Computer (ENIAC), the world's first operational, general purpose, electronic digital computer, developed at the Moore School of Electrical Engineering, University of Pennsylvania. Of the scientific developments spurred by World War II, ENIAC ranks as one of the most influential and pervasive.
The origins of BRL lie in World War I, when pioneering work was
done in the Office of the Chief of Ordnance, and especially the Ballistics Branch created within the Office in 1918. In 1938, the activity, known as the Research Division at Aberdeen Proving Ground (APG), Maryland, was renamed the Ballistic Research Laboratory. In 1985, BRL became part of LABCOM. In the transition to ARL, BRL formed the core of the Weapons Technology Directorate, with computer technology elements migrating to the Advanced Computational and Information Sciences Directorate (now Advanced Simulation and High-Performance Computing Directorate, ASHPC), and vulnerability analysis moving into the Survivability/Lethality Analysis Directorate (SLAD).
The need to speed the calculation and improve the accuracy of the
firing and bombing tables constantly pushed the ballisticians at Aberdeen.
As early as 1932, personnel in the Ballistic Section had investigated the possible use of a Bush differential analyzer. Finally, arrangements were made for construction, and a machine was installed in 1935 as a Depression-era "relief" project. Shortly thereafter, lab leadership became interested in the possibility of using electrical calculating machines, and members of the staff visited International Business Machines in 1938. Shortage of funds and other difficulties delayed acquisition until 1941, when a tabulator and a multiplier were delivered.
With the outbreak of the war, work began to pile up, and in June
1942, the Ordnance Department contracted with Moore School to operate its somewhat faster Bush differential analyzer exclusively for the Army. Captain Paul N. Gillon, then in charge of ballistic computations at BRL, requested that Lieutenant Herman H. Goldstine be assigned to duty at the Moore School as supervisor of the computational and training activities. This put Goldstine, a Ph.D. mathematician, and the BRL annex of firing table personnel in the middle of a very talented group of scientists and engineers, among them Dr. John W. Mauchly, a physicist, and J. Presper Eckert, Jr., an engineer.
Despite operating the computing branch with analyzer at APG and
the sister branch and analyzer at Moore School, BRL could not keep up with new demands for tables, coming in at the rate of about six a day. Goldstine and the others searched for ways to improve the process. Mauchly had come to Penn shortly after his 1941 visit with John Vincent Atanasoff at Iowa State College to discuss the latter's work on an electronic computer. In the fall of 1942, Mauchly wrote a
memorandum, sketching his concept of an electronic computer, developed in consultation with Eckert. Ensuing discussions impressed Goldstine that higher speeds could be achieved than with mechanical devices.
About this time, Captain Gillon had been assigned to the Office
of the Chief of Ordnance as deputy chief of the Service Branch of the Technical Division, with responsibility for the research activities of the Department. Early in 1943, Goldstine and Professor John Grist Brainerd, Moore School's director of war research, took to Gillon an outline of the technical concepts underlying the design of an electronic computer. Mauchly, Eckert, Brainerd, Dr. Harold Pender (Dean of Moore School), and other members of the staff worked rapidly to develop a proposal presented to Colonel Leslie E. Simon, BRL Director, in April and immediately submitted to the Chief of Ordnance. A contract was signed in June.
The so-called "Project PX" was placed under the supervision of
Brainerd, with Eckert as chief engineer and Mauchly as principal consultant. Goldstine was the resident supervisor for the Ordnance Department and contributed greatly to the mathematical side, as well. Three other principal designers worked closely on the project: Arthur W. Burks, Thomas Kite Sharpless, and Robert F. Shaw. Gillon provided crucial support at Department level.
The original agreement committed $61,700 in Ordnance funds. Supplements extended the work, increased the amount to a total of $486,804.22, and assigned technical supervision to BRL. Construction began in June 1944, with final assembly in the fall of 1945, and the formal dedication in February 1946.
The only mechanical elements in the final system were actually
external to the calculator itself. These were an IBM card reader for input, a card punch for output, and the 1,500 associated relays. By today's standards, ENIAC was a monster with its 18,000 vacuum tubes, but ENIAC was the prototype from which most other modern computers evolved. Its impact on the generation of firing tables was obvious. A skilled person with a desk calculator could compute a 60-second trajectory in about 20 hours; the Bush differential analyzer produced the same result in 15 minutes; but the ENIAC required only 30 seconds, less than the flight time.
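To give a feel for what a single firing-table entry involved, the sketch below integrates a toy point-mass trajectory in small time steps and then restates the article's timing comparison as speedups. The drag model, muzzle velocity, and launch angle are invented for illustration; real table computations used measured drag functions and far more careful numerical methods.

```python
# Rough sketch of the kind of calculation behind one firing-table entry:
# stepping a shell's motion forward in small time increments.
# All constants below are invented for illustration.
import math

g = 9.81      # gravity, m/s^2
k = 0.00005   # toy quadratic-drag constant, 1/m
dt = 0.1      # time step, s

def trajectory(v0=500.0, angle_deg=45.0):
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax = -k * v * vx          # drag opposes the velocity vector
        ay = -g - k * v * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t

range_m, flight_time = trajectory()
print(f"Range ~{range_m:,.0f} m, flight time ~{flight_time:.0f} s")

# The article's timing comparison, expressed as speedups over a human computer:
human_s, analyzer_s, eniac_s = 20 * 3600, 15 * 60, 30
print(f"Differential analyzer speedup: ~{human_s // analyzer_s}x")
print(f"ENIAC speedup:                 ~{human_s // eniac_s}x")
```

A full table required hundreds of such trajectories across different charges, elevations, and atmospheric conditions, which is why the backlog grew so quickly.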
During World War II, a "computer" was a person who calculated
artillery firing tables using a desk calculator. Six women "computers" were assigned to serve as ENIAC's original programming group. Although most were college graduates, the "girls" were told that only "men" could get professional ratings. Finally, in November 1946, many of the women received professional ratings.
ENIAC's first application was to solve an important problem for
the Manhattan Project. Involved were Nicholas Metropolis and Stanley Frankel from the Los Alamos National Laboratory, who worked with Eckert, Mauchly, and the women programmers. Captain (Dr.) Goldstine and his wife, Adele, taught Metropolis and Frankel how to program the machine, and the "girls" would come in and set the switches according
to the prepared program. In fact, the scheduled movement of ENIAC to APG was delayed so that the "test" could be completed before the machine was moved.
Late in 1946, ENIAC was dismantled, arriving in Aberdeen in January 1947. It was operational again in August 1947 and represented "the largest collection of interconnected electronic circuitry then in existence."
ENIAC as built was never copied, and its influence on the logic
and circuitry of succeeding machines was not great. However, its development and the interactions among people associated with it critically impacted future generations of computers. Indeed, two activities generated by the BRL/Moore School programs, a paper and a series of lectures, profoundly influenced the direction of computer development for the next several years.
During the design and construction phases on the ENIAC, it had
been necessary to freeze its engineering designs early on in order to develop the operational computer so urgently needed. At the same time, as construction proceeded and the staff could operate prototypes, it became obvious that it was both possible and desirable to design a computer that would be smaller and yet would have greater flexibility and better mathematical performance.
By late 1943 or early 1944, members of the team had begun to
develop concepts to solve one of ENIAC's major shortcomings -- the lack of an internally stored program capability. That is, as originally designed, the program was set up manually by setting switches and cable connections. But in July 1944, the team agreed that, as work on ENIAC permitted, they would pursue development of a stored-program computer.
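The difference the team was chasing can be illustrated with a toy fetch-and-execute loop in which the "program" is just data held in the same memory as the operands, so reprogramming means rewriting memory words instead of re-plugging cables and resetting switches. The instruction names below are invented; this is not EDVAC's actual order code.

```python
# Toy illustration of the stored-program idea: instructions live in the same
# memory as data, so a new program is just new memory contents.
# NOT EDVAC's actual instruction set -- a minimal sketch only.

memory = [
    ("LOAD", 7),    # 0: acc <- mem[7]
    ("ADD", 8),     # 1: acc <- acc + mem[8]
    ("STORE", 9),   # 2: mem[9] <- acc
    ("PRINT", 9),   # 3: print mem[9]
    ("HALT", 0),    # 4
    0, 0,           # 5-6: unused
    20, 22, 0,      # 7-9: data words
]

acc, pc = 0, 0
while True:
    op, addr = memory[pc]                    # fetch
    pc += 1
    if op == "LOAD":    acc = memory[addr]   # execute
    elif op == "ADD":   acc += memory[addr]
    elif op == "STORE": memory[addr] = acc
    elif op == "PRINT": print(memory[addr])  # -> 42
    elif op == "HALT":  break
```

On ENIAC as originally built, changing this five-step "program" would have meant physically rewiring the machine; in a stored-program design it is only a change to the memory contents.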
At this point, in August 1944, one of the most important and
innovative (and influential) scientists of the 20th century joined the story. Dr. John L. von Neumann of the Institute of Advanced Studies (IAS) at Princeton was a member of BRL's Scientific Advisory Board. During the first week of August, Goldstine met von Neumann on the platform at the Aberdeen train station and told him about the ENIAC project. A few days later, Goldstine took von Neumann to see the machine. From this time on, von Neumann became a frequent visitor to the Moore School, eagerly joining discussions about the new and improved machine that would store its "instructions" in an internal memory system. In fact, von Neumann participated in the board meeting at Aberdeen on August 29 that recommended funding the Electronic Discrete Variable Computer (EDVAC).
In October 1944, the Ordnance Department approved $105,600 in
funds for developing the new machine. In June 1945, von Neumann produced "First Draft of a Report on the EDVAC," a seminal document in computer history and a controversial one. It was intended as a first draft for circulation among the team; however, it was widely circulated, and other members of the team were annoyed to find little or no
mention of their own contributions. This, combined with patent rights disputes, led to several confrontations and the later breakup of the team.
The second of the great influences was a series of 48 lectures
given at the Moore School in July and August 1946, entitled "Theory and Techniques for the Design of Electronic Digital Computers." Eckert and Mauchly were both principal lecturers, even though they had left Moore School to form their own company. Other principals included Burks, Sharpless, and Chuan Chu. Officially, 28 people from both sides of the Atlantic attended, but many more attended at least one lecture.
Although most "students" expected the sessions to focus on ENIAC,
many lecturers discussed designs and concepts for the new, improved machine, EDVAC. Together, von Neumann's paper and the Moore School lectures circulated enough information about EDVAC that its design became the basis for several machines. The most important of these were two British machines: the EDSAC (Electronic Delay Storage Automatic Computer), built by Maurice V. Wilkes at the Mathematical Laboratory at Cambridge University and completed in 1949, and the Mark I, developed by F. C. Williams (later joined by Alan M. Turing) at the University of Manchester and completed in 1951 in cooperation with Ferranti, Ltd.; and one U.S. machine, the Standards Eastern Automatic Computer (SEAC), developed at the National Bureau of Standards and completed in 1950.
Meanwhile, despite the breakup on the team, BRL still had a contract with the Moore School for construction of EDVAC. It was decided that Moore School would design and build a preliminary model, while IAS would undertake a program to develop a large-scale comprehensive computer. Basic construction of EDVAC was performed at Moore School, and beginning in August 1949, it was moved to its permanent home at APG.
Although EDVAC was reported as basically complete, it did not run
its first application program until two years later, in October 1951. As one observer put it, "Of course, the EDVAC was always threatening to work." As constructed, EDVAC differed from the early von Neumann designs and suffered frequent redesigns and modifications. In fact, at BRL, even after it achieved reasonably routine operational status, it was largely overshadowed by the lab's new machine, the Ordnance Variable Automatic Computer (ORDVAC), installed in 1952. Interestingly, ORDVAC's basic logic was developed by von Neumann's group at IAS.
Meanwhile, in 1948 after reassembly at APG, ENIAC was converted into an internally stored-fixed program computer through the use of a converter code. In ensuing years, other improvements were made. An independent motor-electricity generator set was installed to provide steady, reliable power, along with a high-speed electronic shifter, and a 100-word static magnetic-core memory developed by Burroughs.
During the period 1948-1955, when it was retired, ENIAC was
operated successfully for a total of 80,223 hours of operation. In addition to ballistics, fields of application included weather prediction, atomic energy calculations, cosmic ray studies, thermal ignition, random-number studies, wind tunnel design, and other scientific uses.
Significantly, the Army also made ENIAC available to universities free of
charge, and a number of problems were run under this arrangement, including studies of compressible laminar boundary layer flow (Cambridge, 1946), zero-pressure properties of diatomic gases (Penn, 1946), and reflection and refraction of plane shock waves (IAS, 1947).
The formal dedication and dinner were held on February 15, 1946
in Houston Hall on the Penn Campus. The Penn president presided, and the president of the National Academy of Sciences was the featured speaker. Major General Gladeon M. Barnes, Chief of Research and Development in the Office of the Chief of Ordnance, pressed the button that turned on ENIAC. To commemorate this event, on February 14, 1996, Penn, the Association for Computing Machinery (ACM), the City of Philadelphia, and others are sponsoring a "reactivation" ceremony and celebratory dinner. As part of the ACM convention, ARL will sponsor a session on Sunday, 18 February, to present the story of Army/BRL achievement. One of the speakers will be
Dr. Herman H. Goldstine.
Considerable information, including pictures, is available on the
World Wide Web. Visit the ARL homepage. For more information on the image archives, visit
http://ftp.arl.army.mil/~mike/comphist/
and select the
"Photographs of Historic Computers"
Special thanks are due the ARL Technical Library at APG for access to historical materials in their "vault." | 计算机 |
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15."
1 DVD for installation on an x86_64 platform.
Fanatic Attack
Fanatic Attack is about entrancement, entertainment, and an enhancement of curiosity.
No, this is not a day when you free yourself of all your software addictions. Rather, Software Freedom Day is an annual grass roots effort to educate the public on the virtues of free and open source software. The 2008 event takes place on September 20 and will be celebrated in 65 countries across the globe. So exactly what is this open source movement and why are people celebrating it? Moreover, why should you care?
Open source software is available for free, to everyone, and unlike, for example, Windows or Mac operating systems, it is non-proprietary - meaning it is available for others to share, build upon, change, and redistribute either in its modified or unmodified form. The source code is transparent and allows rights to
Book Excerpt: Game Design Complete: Advergaming and Sponsorships
Book Excerpt: Game Design Complete: Advergaming and Sponsorships [04.06.06] - Patrick O'Luanaigh
Advertising in Games
Let's take a look at a few real-life examples of how in-game advertising has been implemented very well and some in which I think it has been implemented badly.
Good Examples of Advertising in Games
I chose these games because the advertising is fairly subtle and, when they were released, the games broke new ground for using advertising. One important lesson to be learned here is that you can make your in-game advertising more palatable and effective if you find new and clever ways to incorporate it.
Wipeout 2097
[Figure 11.1: Wipeout 2097]
The Wipeout series has always been at the cutting edge, with many new ideas. In-game advertising was one of them. Wipeout 2097 (see Figure 11.1) advertised companies like Red Bull on in-game billboards and was one of the first games to do so. What's more, because the game is about a futuristic racing championship, the advertisements seemed in keeping with the game, so they never felt out of place. Along with the great licensed music tracks, they made the game feel very cool.
Crazy Taxi
Crazy Taxi (see Figure 11.2) was one of the very first games to really go to town with in-game advertising. It featured a whole load of national chain stores, such as Tower Records, KFC, Fila, and Pizza Hut. The advertising made the city feel realistic and definitely enhanced the gameplay. Again, the advertising was totally in keeping with the game, and because there was such a variety, it never really felt like you were watching an advertisement.
[Figure 11.2: Crazy Taxi]
Next: Bad Examples of Advertising
Vizionstudios
My Current Mindset Mixed with a sprinkling of Rants and Doodles.
Seriously—Everyone Needs To Play Portal
Back when I was in High School, a little game came out: Wolfenstein 3D
This little game changed the face of Electronic Gaming. It wasn't the first first-person-shooter, but every first person shooter since has pretty much been the same (Except for some six degrees of freedom games: Descent, Forsaken). The next big one after that was Doom, and I haven't really played a First-person-shooter (FPS) since then. I played Halo and Halo 2 a few times, but frankly they seemed about the same to me.
When I first started playing Wolfenstein 3D and Doom, it gave me headaches, but the play style was so new I just played through it and eventually got used to it and didn't have the problem any more. Then I stopped playing because every new FPS game was just more of the same, and now if I play FPS games the headaches and nausea manifest themselves almost immediately.
Some time ago, a game called Half Life came out. I don't know much about Half Life. What I understand, is that it introduced some puzzle elements to the FPS. Then Half Life 2 Came out, and my understanding is that it had really great modification tools available, and people were building completely new games based on Half Life 2. In fact, I think I knew people that bought it so they could play the Modified versions rather than the actual published game.
Then this thing called The Orange Box came out. It was a compilation of games all based on the same engine. (Half Life 2: Episode One and Two, Team Fortress 2 and Portal) I think that Most People Bought it for the Cartoony Team Fortress 2, and were surprised at the awesomeness of the new puzzle game Portal. I don't play as many video games as I would like to, but I try to keep up with the news. Portal was all over the place for a while. Since it was first person I read about it (Since I knew I would most likely never play it, and I wanted to understand the jokes).
In the game you wake up in a cell and shortly thereafter a portal opens to let you out, then you are challenged with a puzzle to exit the area. For the most part there is nothing that directly threatens your life and you are just challenged with passing obstacles and opening doors to get to the next challenge. During your travels from level to level, a computer voice speaks to you and you feel as though you are being watched by some Big Brother Entity. Frankly, I felt like a rat in a maze—there are frosted glass areas that are high up and give you the impression that you are being watched.
The Puzzles all involve the use of portals that are linked (Walking through the Orange Portal, you exit at the location of the Blue portal and vice versa). For the first couple levels, the portals open and shut for you. Then you are able to obtain a portal generator that generates the blue portal, while the Orange portal is stable and cannot be manipulated. Then later your Aperture Science Handheld Portal Device is upgraded to allow you to create both the Blue and Orange Portals, this is where the game gets interesting. As the computer continued to announce things, I started to get the impression that this was an experiment gone horribly horribly wrong. The computer voice would short/cut out right as it was about to give vital information. It started lying about its monitoring of the tests, and eventually was offering me cake to just give up.
This game was a short, but brilliant ride. It consists of nineteen levels. Eighteen of them, I would consider to be tutorials that teach you all the concepts that you need to know in order to get through the last level. It's a good story. I'm sure I'm missing out on a lot because I get the impression that it takes place in the Half Life Universe, which I know nothing about. Still, I think it stands on its own quite well.
What prompted me to actually play this game?
I certainly wasn't going to buy it, given my history of physical illness when playing first person games. There is a service called Steam from which you can purchase and download full retail versions of games. May 12–24 they had Portal as a FREE download. (Apologies, I meant to get this written before the deal was over—now it's $20, which is a fair deal if you ask me.)
My daughter loves playing this. She makes no attempt to do what the game asks you to, but she gets a kick out of putting a portal in the floor and the ceiling and falling infinitely; she also likes making portals in corners so she can see herself.
I'm excited that there is a sequel planned, and I'm planning to get it. In fact, I've wanted to play new content enough that I've downloaded some fan built levels, but they aren't as fun as the real deal, if anyone knows of any pro quality MODs for portal, let me know.
I know I'm late to the party on this one—the game came out in 2007—but for anyone that missed this, you really need to try it out.
Richard McLean
Gaming With Kids,
Dicecreator's Dice
Anyone that follows my ramblings should know I have a wee obsession with Dice.
Which leads me to looking for unique dice once in a while. A while ago, I stumbled across someone that manufactures unique and custom dice—and sells them on e-Bay, through his store: Unconventional Dice.
Shortly thereafter, I learned about his blog: Dicecreator's Blog, and I've been following it ever since. I have planned to purchase some of his carbon fiber and unusually numbered dice when I get the chance.
Then there was a contest in which I participated—and won. So here we are. I won the opportunity to review some of the dice created by the Dicecreator. FULL DISCLOSURE: I was sent a pair of dice free, for review purposes. Let's start with a picture of the dice.
We have here a Halo Die and a Steam-punk Die.
The Symbols are inlaid brass, and they look very nice. I thought I read somewhere care instructions for the dice so that the patina they develop is nice and even, but I can't seem to locate that. With brass in there you would wonder about the fairness. I know that is a real concern of the Dicecreator and he has made a commitment to never sell an unfair dice.
I'm no statistician, but they do seem to roll just like every other d6 I own—they roll all over the place: high, low, and medium. They roll low when I want them to roll high and high when I want them to roll low.
Now here's the funny thing, I just described the way they roll as fair, just like all the other dice I own—and now I'm going to describe the way they roll as just like the loaded dice that I own. What could I possibly mean by that? It's because of the brass. The added weight makes them skittle across the table low, i.e. they don't bounce as high as standard dice, but they do roll as far.
It was kind of weird to see at first, and I couldn't quite figure it out until I watched it a few times.
Let's talk quality. These dice have been in my pocket since I received them. Here is a picture of the contents of said pocket.
You may not think a pocket is a rough place to travel, but look at the phone and the pen. All that damage happened in my pocket. The dice are durable enough to take the abuse. In fact, I found myself rather clumsy the week I received them and dropped them from about four feet on multiple occasions.
I must say, I really enjoy these dice. They look great, they're easy to read, they're fun to use at the game table, they're durable, high quality.
I give them A+, 4 Stars, Two Thumbs Up, and every other Appropriate Superlative Available.
Dice Related,
Someone At The Library Loves Me
My Wife asked me to pick some things up for her at the Library recently. It had probably been over a year since I was last in that building—I love reading, but I just have been involved in so many other things, that I hadn't taken the time to go to the Library; and my Wife usually goes with the kids when I'm at work. So I perused the DVDs briefly. It's hard to get a quick look at all the DVDs because of the way they have them shelved. I only saw a fraction of what was available, and I didn't see anything that jumped out at me as a must watch now title. So I decided to check out the Instructional Art books, see if there was anything worth taking a look into, or something that might Jog my creativity. I found that they had completely rearranged the library and what was once a Non-fiction, Dewey Decimal section, was now young adult fiction. I couldn't remember the number of the section the art books were in, which means a trip to the computer. While looking that up at the computer system, I got curious as to whether they had any Role Playing Game books in the system. I made a note of the section they were in and headed over to find them. This is when I discovered that someone at the Library loves me. I spotted 3 full shelves—top to bottom—of Graphic Novels & Comic Books. Last time I had gotten comic books at the library, there was only about half of a single shelf; and most of that was newspaper stuff (Garfield, The Far Side, Garfield, Calvin and Hobbes, Garfield, Dilbert, Garfield...Garfield). One full shelve and a third of the second shelve was Manga, which I'm not really in to. The rest however, was Superhero and Independent stuff—Titles that I've wanted to read for years. Things I haven't read because I lack the funds to justify purchasing such things. I don't know who is doing the purchasing for the library, but they seem to know their stuff. There was a small, independently published, obscure, title on the shelf that I've wanted to read ever since I heard about it—Pinocchio: Vampire Slayer. Think about it...it's a Brilliant idea, and well executed, I only had one complaint about it, and that was the use of the slang "Cool" it felt out of place.
A Short Game This Week
I moved the Deadlands: Reloaded game to Friday this week, people had fewer conflicts—all week I've been working hard at getting my Resume and Portfolio in order, and I hadn't really had a chance to prepare for the game properly.
Friday before the game I was trying to read through the material for the game and my 1.5 year old came to sit on the couch next to me, and threw up. So, I cleaned that up. Just as I finished taking care of that, my four year old wet her pants.
When people showed up, I still had not finished prepping the game. The couple that came have a child who has problems with seizures, so we warned them about the state of heath in our home (their child was with them) and they made a quick trip to the grandparents. The other person that had planned on coming sent a message that she wasn't feeling well. I felt like calling the game, but everyone wanted to play.
So we started late, and I was underprepared.
When we got to the end of the only encounter I had planned, I let the small group know that we had reached the end of what I prepared, and called the game an hour earlier than I generally plan.
I generally like to run games completely analog. Part of what I like about Role Playing Games is that they are low tech. I often use the computer to prepare the game, but I like the table to be free of Electronics. (The exception being that I do want to get an e-reader.) Since I wasn't completely ready for the game, I used a laptop as GM Screen and Books.
Running a game out of PDF versions of the books seems a little slow. I had managed to get all the stat blocks for the evening copied into Word, and I found that to be very convenient.
Halfway into the encounter my wife was mocking me for closing the notebook computer's screen every time I wanted to look at the battle mat, and she suggested that I use the webcam. Frankly, that was brilliant!!!
You can see from this screenshot (click for full size) that I had Deadlands and The Flood open in PDF, as well as a Word file for my Encounters. If you look closely you'll see that all the figures we were using were Zombies; I need to get some more appropriate figures for this game. I have the Deadlands board game; the figures in that are appropriate for the player characters, but I need to look at all my little toys and see what I have for the Monsters. The Encounter from last night's game was our player characters versus tunnel critters, mostly represented by Zombies on the map.
We didn't get a lot of story covered, so I would have awarded 1 XP. However, I was impressed that nobody took the "bait" by acting impulsively or attacking inappropriately during the things leading up to the final encounter of the evening, so 2 XP for the night.
Let the Job Search Begin
So, officially my last day was a week ago, though I left Thursday and did not return. I've spent the week updating my resume and pulling together all the graphic design I did in the past two years to update my portfolio.
I'm tempted to post my resume here, but I try to keep information about where I live vague on this site. Though I think that may not have been the case when I first started the blog, so it might be a moot point; I haven't gone back and looked. Monday they told us to apply for unemployment now, and by "now" they meant before June first. Today I tried twice to do so, and the website crashed on me. The second time I decided to call as instructed, because I had gotten much further into the process and didn't want to type all that stuff in again. I forgot that it was Friday, and our local government doesn't do Fridays. Perhaps I'll try again later today... or wait until Monday.
That is all for now.
Mini Rant
Ummm—Tell Me Again Why We're Going In the Dark Creepy Hole?
I am not happy with the way last night's session of Deadlands went. It was my fault. I felt as directionless as my players were probably feeling. I had decided that they would probably end up going back to Denver. Brandon decided to Retcon the Head Chopping action from the last game, and I was perfectly happy to allow that. What I wanted to do was feel out what the players wanted to do when they got to Denver. I had 3-4 things that I could take the players through after feeling out what they wanted to do in Denver, and they most likely would have ended up headed to Mexico.
Things were very directionless, however, so I had to make a decision. I felt like I was forcing things, and I messed up part of the description, which I'm going to have to fix.
I felt a little flustered every time Brandon would "loot" the scene. It was pulling me out of the narrative, but it wasn't out of character for him to be doing that. I also think I need to read up on how spell-casting works; I trust Brandon, but I would feel more comfortable if I understood the mechanics of it better.
We started the Flood last night, and I found it a little tricky to get the characters to go where they needed to go. I guess if they had decided to not go down into the scary hole, they would have met some sharply dressed businessmen back in Denver.
I had also planned to run an interlude session on the Train, but I forgot to print out the interludes document. I wanted to run the interlude because part of the issues I have as Marshal (Game Master) seems to be rooted in the fact that the posse doesn't feel compelled to be together. So they decided to role-play it out sans interlude card drawing. I think that went ok, but I feel it cemented the fact that there's not a lot of cohesion in the group.
I guess I just didn't feel connected to the game last night. I was pretty upset about it; it was stressing me out, and I even snapped at Justin when he questioned part of the narrative.
The only part that seemed to go okay was the combat. I threw a lot more villains at them than the scenario called for, and it seemed to work out okay. I was rolling lousy, but I was using Fate chips, and so was everyone else, so that went well. There wasn't a lot of description, though (another failure on my part), so it was the ROLL play portion of the evening.
I feel that overall I did a lousy job last night. I hope I can run a better game in two weeks.
Free Comic Book Day 2010
The trick is to take your family along with you. The limit was 10 per person, so I took 4 persons (Including Myself). There weren't even 40 comics that I wanted. I think we ended up with three copies of the Toy Story Comic, which is good; one for each kid and an extra for when one inevitably gets destroyed.
They didn't seem to be handing these out to everyone, but I got one: a War Machine Hero Clix. Probably because Iron Man 2 released yesterday.
Yes, that's a picture of me in a Batman T-Shirt with Guy Gardner. COME ON!! I was at Free Comic Book Day; you have to be nerdy and pose with Guy Gardner!! At least I wasn't dressed up like Rorschach (yes, there was a customer there dressed as Rorschach). Funny thing: Green Lantern has always been my favorite, but back when I was actually collecting comics they weren't doing much with Green Lantern. Now it's Green Lantern this and Green Lantern that, and I just peripherally experience it because I don't have disposable income to spend on comics these days.
Then there were these; I was pretty Excited to see them. I know, you're saying to yourself: but you already have a set of those! Well, these are the Mini Versions. See...
Emily was making fun of me for being so excited—then she spotted the mini version of her Toxic Orange Set and insisted that I pick them up before they were all gone.
I also got a standard size set of Pink/Purple Sparkly Dice for my four year old Daughter.
It was a good Free Comic Book Day. Now it's time to prepare for my Deadlands Game Tonight.
Over the course of this OS comparison, I fully expect Windows 7 and Mac OS X Snow Leopard to be pretty comparable, from a functionality perspective. Sure, there will be solid wins here and there for each system. But what I didn't expect was for either system to just out-and-out dominate the other, and laughably so, in any given category.
Well, prepare for a chuckle ... if you're a Mac user, that is. Windows users, by contrast, have a lot of work ahead of them, and a lot of complaining to do. The issue is that Windows 7 pricing is as convoluted, ridiculous, and hard to understand as it was with Vista, and I'm reasonably sure no one would ever hold up that product's pricing and licensing as a model of clarity. If you're looking for a real-world beat-down of Windows at the hands of the Mac, there is no better example than pricing.
It wasn't always this way. In fact, Apple regularly charges its Mac OS X-using customers an exorbitant $129 per release, and it did so over the past four versions. (10.1 was the sole free update.) If you include the original version of Mac OS X in the equation, your typical faithful Mac user could have easily spent about $650 on OS upgrades since 2001, per Mac. | 计算机 |
Chris Sherman
ONLINE, January 2000
Copyright © 2000 Information Today, Inc.
There are a number of hybrid reference services on the Web that are something of a cross between a traditional database service and a Web search engine. They often feature proprietary content, and may also include links to other Web resources. All are either free or very inexpensive when compared to services like Dialog and LEXIS-NEXIS.
For this article, I looked at three prominent Web-based online reference resources: Ask Jeeves, Electric Library, and Information Please. I had chosen Answers.com to review as well, but its recent acquisition by Net Shepherd and subsequent product overhaul made a side-by-side comparison with the other resources difficult. See the sidebar on page 55 for a brief discussion of the new service. To test the services, I posed three queries to each that could be answered with unambiguous, factual responses. The first was an "easy" question, which all services should have been able to answer. The results for this question allowed me to compare and contrast the depth of the results provided by each service. The second and third questions were deliberately "harder," ones that I didn't expect all of the services to be able to answer. The point here was to see what alternatives were offered if no results were found. Since Ask Jeeves and the Electric Library encourage the user to "ask questions," I searched using both keyword queries and simple, natural language sentences to test the capabilities of the language parsing system of each service.
The test questions were:
QUESTION 1: Who is Ehud Barak? (The Prime Minister of Israel in July 1999)
QUESTION 2: When was the city of Beijing founded?
(Peking founded around 1122 B.C.; renamed Beijing in 1949, according to the New York Public Library Desk Reference)
QUESTION 3: How many chromosomes do humans have?
(46, in 23 pairs)
http://www.ask.com
Ask Jeeves is an Internet search engine that takes a non-traditional approach to cataloging Web resources. Unlike the other services reviewed in this article, Ask Jeeves does not compile or aggregate proprietary content that directly answers questions. Instead, the service has built a knowledgebase of about 7 million questions with pointers to millions of resources on the Web that offer answers. In her excellent profile of Ask Jeeves ("Hi AJeevers," DATABASE, June/July 1999), Reva Basch writes: "Strictly speaking, those 7 million questions actually consist of several thousand question-and-answer templates, any of which might have 50 to 5,000 'smart list' items associated with it."
Jeeves employs a simple query form with no refinement or limiting options, encouraging the user to "just type a question and click 'Ask!'" Jeeves responds by presenting the closest matching questions in its knowledgebase. When a question has several possible interpretations, drop-down boxes allow the user to fine-tune the question to more closely match the desired result. Clicking the "Ask!" button next to any of these "results" questions calls up the Internet document Jeeves editors have selected as the most relevant for answering that specific question.
Jeeves also functions as a metasearch engine, querying AltaVista, Excite, Infoseek, WebCrawler, and Yahoo!. This two-pronged approach provides useful alternatives if an answer isn't found in Jeeves' database.
The question-and-answer templates are built by a staff of editors, working in teams focusing on specific content areas. Editors for business and finance, health, and law categories are recruited for expertise in their respective fields. Editors for other categories are expected to be generalists, with strong Internet awareness and research skills.
The editorial team strives to improve the knowledgebase in a number of ways. User interaction is monitored to track which questions seem to provide more relevant results. Editors constantly seek out newer or more relevant Web sites as candidates to replace existing answers. And the team responds to timely events such as breaking news or newly-released books or movies by creating new question-and-answer templates.
Results were identical for keyword and natural language queries for the Jeeves knowledgebase, but quite different for the metasearch results from search engines. In all cases, natural language queries produced better results than keyword queries, a somewhat counterintuitive result. When Jeeves could not find the answer to a question, it presented a creditable list of alternatives.
QUESTION 1: Who is Ehud Barak? Jeeves found no results to this query, instead offering a link suggesting "I think you may have misspelled something." Clicking this link brought up a new question, "Did you mean: Who is..." with two drop-down menus proposing 16 alternate words each, the top two being "Hued Bark." In a like vein, none of the others even closely matched the query. The metasearch results for the query resulted in numerous results from AltaVista, Excite, and WebCrawler that did answer the question, however.
This question provided an excellent example of Jeeves' question-and-answer templates in action. Jeeves proposed answers for five questions, including a city guide, restaurant finder, night life directory, and map of Beijing. It also proposed "extensive historical, economic, and political information about the country China," which linked to the Library of Congress "China: A Country Study" page. Unfortunately, none of these resources provided the answer.
The metasearch produced mixed results. Several documents purported to have an answer, but all were found on personal Web pages, so the results could not be considered to be authoritative.
This question also showcased Jeeves' question-and-answer templates. Jeeves' first suggested question, "Where can I find a concise encyclopedia article on chromosomes," linked to an Encyclopedia.com article from Infonautics (which unfortunately didn't answer the question). The remaining suggested questions all pointed to much more general resources on genetics, and as such were not useful.
Metasearch results were mixed. All of the services queried by Jeeves provided a mix of authoritative and personal pages, so the answer was ultimately found. Curiously, it was easier to find the estimated number of genes in the human DNA sequence in these results (80,000 to 100,000) than the well-documented number of human chromosomes that can be found in any basic biology textbook.
The Electric Library
http://www.elibrary.com
The Electric Library is one of several research-oriented Web sites maintained by Infonautics Corporation (the others are Company Sleuth, Job Sleuth, Encyclopedia. com, and Researchpaper. com). Unlike the other services reviewed in this article, The Electric Library is a fee-based service. The service is often compared to Northern Light, though the Electric Library uses a subscription model with no transactional fees. It is licensed to more than 15,000 schools and libraries, and has more than 80,000 individual subscribers. Subscriptions are available on a monthly ($9.95) or annual ($59.95) basis.
The Electric Library Personal Edition is also unique in that its database contains only copyrighted content. Licensed content includes material from 400 publishers, with well over 1,000 titles, according to Bill Burger, Vice President, Content and Media Services of Infonautics. Segregated into six categories, the Electric Library contains over 5.5 million newspaper articles, nearly 750,000 magazine articles, 450,000 book chapters, 1,500 maps, 145,000 television and radio transcripts, and 115,000 photos and images. Fully 95% of the content in Electric Library isn't available on the Web, at least for free, says Burger.
"We update the database every day," says Burger, "it's constantly refreshed." Content providers regularly send new information--sometimes in real time, in the case of content provided by wire services.
Lists of the sources of the content are easily available. Hyperlinks on the search form display sources arranged alphabetically. Each media type's source list also provides both text and image links for the other media types. This transparency provides a high degree of confidence in the reliability and validity of the materials provided by The Electric Library.
The Electric Library's search form is simple yet elegant. The search form allows you to enter a question in natural language. You can limit your search to a specific type of media by checking or unchecking boxes next to the six media types. Searches can be further limited to specialized content categories through the use of a drop-down menu selector.
Clicking "search options" provides additional refinement tools. You may select natural language or Boolean search, publication date range, or limit your search by bibliographic information, such as author, title, or publication. Search results are presented in groups of 30. Descriptions include an icon indicating media type, the title of the document or image, and a relevancy score. The source and author of the document, publication date, and size are also provided. And, as a nice touch for children using The Electric Library for homework, a reading level is indicated.
You can also choose the "refine search" option, which displays a search form with the additional search options noted earlier, and two other controls. The first is the "search power setting," which controls language expansion by the natural language parser. "High" (the default) performs a great deal of language expansion, while "Low" searches only for the exact words you enter in the search text box. The second control allows you to change the number of results you see displayed, up to 150 in increments of 30.
Clicking on the title of a document displays the full-text of the selected document. Keywords are underlined and boldfaced in the document, and there's an option to go to the "Best Part" of the document, which is useful for finding the core idea in longer documents.
An interesting feature unique to the Electric Library is called "Recurring Themes." Recurring themes include people, places, and subjects extracted from the documents in your result set. When a person, place, or subject theme occurs in significant numbers, it becomes a "Major Theme," with related "Other Themes." Themes are clickable links that will organize search results by that theme.
The Electric Library returned different results for keyword and natural language queries. Overall, keyword queries provided more relevant results for these test questions than natural language queries. Also, results varied significantly when search refinement tools were used. As a rule of thumb, a searcher should definitely use these refinement tools when searching the Electric Library for best results. QUESTION 1: Who is Ehud Barak? More than 30 results were found. For this query, the results page displayed a tip: "All documents have a score of 100. For better results, try to provide more search criteria." Results included a variety of newspaper articles, including many from the Jerusalem Post. Also included were magazine profiles from Time and Newsweek International, and a National Public Radio interview with Prime Minister Barak. All 30 results were relevant.
The Electric Library fared poorly on this question. Neither natural language nor keyword queries returned relevant documents. Only when content was restricted to "books and reports," search was limited by the specialized content "history," and the Power Setting was set to "low" did results return an appropriate answer. This was from the Columbia Encyclopedia, the same resource used by Information Please.
The natural language query returned no relevant results in the top 30, whereas the keyword query provided the answer in the first document of the results list. Information Please
http://www.infoplease.com
Information Please is the online service owned by Information Please LLC, an almanac and reference database publisher that's been in business for more than 50 years. Its most famous product is the Information Please Almanac, first published in 1947 as an outgrowth of the popular Information Please quiz show which ran on NBC from 1938 to 1952.
The quiz show evolved from a "stump the expert" format into a forum for curious people to find answers to difficult and often obscure questions. This led to a tradition within the company of providing accurate information explained in a clear, easy to understand format. Information Please online combines the contents of an encyclopedia, a dictionary, and several almanacs replete with statistics, facts, and historical records. The information in its database is continuously updated and refined by an internal staff of editors and researchers. Editors come from a broad variety of backgrounds, including major publishers and academic institutions, according to Elizabeth Buckley Kubik, Vice President and General Manager of Information Please LLC.
Data in the Information Please database is maintained in SGML format, and the search and retrieval software is proprietary. This allows the system to be quite linguistically rich. Natural language and keyword queries generally provide similar or identical results.
Information Please looks and feels more like a portal than any of the other services reviewed here. It combines a search form with a directory-style collection of topical links. This makes browsing for content quite easy. Information Please's Kubik recommends the service as a good starting place for students, or searchers looking for specific factual information. The search form appears at the top of every screen. The only search limiting or refinement capability is provided by a drop-down box that lets you select a specific almanac, biographies, a dictionary, or encyclopedia.
Search results display document titles, the source, section, and category where the document resides, and a brief description of the document. Information Please has a nifty function that lets you highlight any word or phrase on a page and click a "Hot Words" button to perform a search on the highlighted area. Test Results
Information Please successfully answered all three questions. Results were identical for keyword and natural language queries.
QUESTION 1: Who is Ehud Barak? Fifteen results were found. Eight of the top ten results were encyclopedia or dictionary entries, all dealing with the Biblical characters Ehud and Barak. The correct answer was found in the fifth result, an almanac entry on Israel. However, no additional information other than Barak's official title was offered.
More than 100 results were found. Top ten results included entries from the dictionary, encyclopedia, almanac, and a "spotlight article" on Tiananmen Square. An answer similar to the New York Public Library Desk Reference was found in the third result, an encyclopedia article on Beijing.
More than 100 results were found. Top ten results included dictionary, encyclopedia, and several almanac entries. The second result, an encyclopedia article on chromosomes, contained the correct answer.
Each of the three services reviewed here has strengths and weaknesses, and aren't directly comparable to one another. The choice of which to use should be driven by user need. With its strong natural language parser and question-and-answer template structure, Ask Jeeves is useful for complex questions, and is a good choice for searchers that lack Boolean or other searching skills.
Electric Library is an excellent choice for a serious researcher in need of timely content from a wide array of otherwise unavailable sources. And Information Please is an excellent tool for students and other researchers, as an authoritative source of facts and pointers for further investigation.
Answers.com Acquired by Net Shepherd Inc.
As part of this review of Internet reference resources, I also tested Answers.com. At the time of writing (mid July 1999), Answers.com was organized in a question/answer format. According to the "About Us" section of the Answers. com Web site, "Humans answer your questions. Our database has been built by people like yourselves asking our human researchers questions." As such, it was not an exhaustive reference resource, but rather one that was built in somewhat of an ad hoc fashion in response to user demand. The service fared poorly when compared to the others reviewed. The layout of the site was somewhat awkward, and Answers.com's search engine didn't work well. In general, Answers.com seemed more like an interesting collection of fun facts and trivia than a useful research tool. However, just as I was completing this article, I learned that Answers.com had been acquired by Net Shepherd Inc. Their plans for the service are both intriguing and exciting. "Improvements in the home site of Answers.com will be extensive," says Bill Fogg, CEO of Answers.com. For starters, rather than relying on a "small staff and a big bunch of encyclopedias," answers will be provided by a vetted network of "e-Explorers," according to Peter Hunt, Net Shepherd's Vice President of Corporate Affairs. E-Explorers are equipped with proprietary resource discovery tools, and are organized into an online community called The Internet Explorers Society. Society members cooperate to review, classify, and rate Internet content. Membership is by invitation, and open only to experienced instructors or librarians, who must also complete a training course. Members are compensated using a "Points of Discovery" system that translates directly into cash rewards. Net Shepherd is using e-Explorers to work on a variety of projects other than Answers.com, including visiting Web sites to extract business intelligence, categorize content, and so on. E-Explorers use a customized browser called a "Member's Journal" that includes all of the tools and functions needed for each project. "Included in the Member's Journal are communication windows that we can use to 'push' messaging and content to selected members," says Ron Warris, Net Shepherd's Founder & Vice President Technology. "Our intent is to use these 'push windows' to broadcast questions that have been asked on the Answers.com site to members who are currently online and participating in other projects. If a member sees a question that they believe they know the answer to or are willing to do a bit of research for, they will be able to click on the question and will be immediately presented with a response form that they can use to submit an answer," says Warris. If a question has been broadcast for a preset amount of time and no one has 'claimed' it, it is forwarded to a community discussion area for debate among Internet Explorer Society members. If debate doesn't answer the question, it is forwarded to a select group of members who research it and post an answer. Net Shepherd is also creating Neural Network technology to help automate the process of identifying potential "domain experts" inside the e-Explorer community. As the community grows and the system learns more about each member's knowledge and skills, questions can be more precisely pushed to members with the highest probability of being qualified to answer. Quality control is a paramount goal of the new service. 
Net Shepherd is developing an integrated Quality Management System (patent pending) that consists of both computer operated and human review processes, according to Warris. With the potent combination of real-time help from certified experts, and a quality control system that assures consistent, authoritative responses to questions, the redeployed Answers. com seems certain to become a useful part of any serious Web searcher's toolkit. --Chris Sherman
Chris Sherman ([email protected] or [email protected]) is the About.com Guide to Web Search, http://websearch.about.com. He holds an MA from Stanford University in Interactive Educational Technology, and has worked in the Multimedia/Internet industry for two decades, currently as President of Searchwise.net, a Web consulting firm. Comments? Email letters to the Editor at [email protected].
September 27, 2011 – LONDON, ENGLAND and SAN FRANCISCO, USA – LiMo Foundation™ and The Linux Foundation today announced a new open source project, Tizen [1]™, to develop a Linux-based device software platform. Hosted at The Linux Foundation, Tizen is a standards-based, cross-architecture software platform, which supports multiple device categories including smartphones, tablets, smart TVs, netbooks and in-vehicle infotainment systems. The initial release of Tizen is targeted for Q1 2012, enabling first devices to come to market in mid-2012.
Tizen combines the best open source technologies from LiMo and The Linux Foundation and adds a robust and flexible standards-based HTML5 and WAC web development environment within which device-independent applications can be produced efficiently for unconstrained cross-platform deployment. This approach leverages the robustness and flexibility of HTML5 which is rapidly emerging as a preferred application environment for mobile applications and the broad carrier support of the Wholesale Applications Community (WAC). Tizen additionally carries a state-of-the-art reference user interface enabling the creation of highly attractive and innovative user experience that can be further customized by operators and manufacturers.
“LiMo Foundation views Tizen as a well-timed step change which unites major mobile Linux proponents within a renewed ecosystem with an open web vision of application development which will help device vendors to innovate through software and liberalize access to consumers for developers and service providers,” said Morgan Gillis, Executive Director of LiMo Foundation. “LiMo will maintain its focus on providing the industry with a broadly backed vendor- and service-neutral ecosystem grounded in the spirit of open and unconstrained opportunity that is embodied by Linux.”
The mobile industry continues to embrace Linux and open source technologies as key factors in lowering device realization cost, increasing flexibility and improving time to market and it is expected that Tizen will further enhance these effects due to its cross-category reach and strong focus on open standards.
“The Linux Foundation is pleased to host the Tizen platform,” said Jim Zemlin, Executive Director of The Linux Foundation. “Open source platforms such as Tizen are good for Linux as they further its adoption across device categories. We look forward to collaborating with the LiMo Foundation and its members on this project.”
To participate in the project, please go to Tizen.org [1].
About LiMo Foundation
LiMo Foundation™ is a dedicated consortium of mobile industry leaders working together within an open and transparent governance model—with shared leadership and shared decision making—to deliver an open and globally consistent handset software platform based upon mobile Linux for use by the whole mobile industry. The Board of LiMo Foundation comprises ACCESS, Panasonic Mobile Communications, NEC CASIO Mobile Communications, NTT DOCOMO, Samsung, SK Telecom, Telefónica and Vodafone. A full description of LiMo Foundation can be found at www.limofoundation.org [2].
About The Linux Foundation
The Linux Foundation [3] is a nonprofit consortium dedicated to fostering the growth of Linux. Founded in 2000, the organization sponsors the work of Linux creator Linus Torvalds and promotes, protects and advances the Linux operating system by marshaling the resources of its members and the open source development community. The Linux Foundation provides a neutral forum for collaboration and education by hosting Linux conferences [4], including LinuxCon [5], and generating original Linux research [6]and content that advances the understanding of the Linux platform. Its web properties, including Linux.com [7], reach approximately two million people per month. The organization also provides extensive Linux training [8] opportunities that feature the Linux kernel community’s leading experts as instructors. Follow The Linux Foundation on Twitter. [9]
The Linux Foundation and Tizen are trademarks of The Linux Foundation. Linux is a trademark of Linus Torvalds. LiMo is a trademark of the LiMo Foundation.
Source URL: http://www.linuxfoundation.org/news-media/announcements/2011/09/limo-foundation-and-linux-foundation-announce-new-open-source-softw
Links:[1] https://www.tizen.org
[2] http://www.limofoundation.org
Mac Topics
All ArticlesApplication Development
Mail Handling on OS X
Control Your Mac from Afar
by Harold Martin
There are many different ways to control your Mac -- even when you're not sitting at it. You might think that this level of flexibility would require special software. But no! If you're running Mac OS X, you'll be able to accomplish everything in this article without buying a single piece of software. As a bonus, you can also perform most of these tricks on the Mac you're sitting at right now. (Even though that wouldn't actually be remote controlling, now would it?) The Tools We'll Use
ssh is the Secure SHell. It and its accompanying programs allow you to securely log in and copy files to other computers running an ssh server. The target Mac must have OS X's built-in ssh server enabled. You can do this by checking "Remote Login" in the "Services" tab in the "Sharing" System Preference pane. ssh is the "remote" part in "remote control:" we use ssh to log in to the Mac that we want to control, and then use the other technologies we'll talk about below to do the actual controlling. The computer you'll use to control the target Mac should have an ssh client. Most types of UNIX (including OS X) come with an ssh client, and clients are available on just about every other platform, including Windows and OS 9. If you're interested in learning more about ssh, I recommend SSH, The Secure Shell: The Definitive Guide. Built-In Commands
On top of the normal UNIX commands we'll use, Mac OS X has a couple of extra ones that will be particularly helpful to us: open is a command that can open a file, directory, application, or URL from the command line, just as if it was double-clicked in the Finder.
screencapture is a program that (surprise, surprise) takes a screen shot of what is on the computer's screen.
Session by Gordon Meyer: Living in the Digital Hub: Your House and Mac OS X
Upgrade your digital life to include your house and living environment--controlling lights, temperature, music, and more--all from your Mac. We'll start with the basics of controlling lighting and other physical objects, then graduate to true automation which turns your home into a living entity that responds to, and anticipates, your needs. | 计算机 |
BY Christine Chan on Thu April 12th, 2012
Bubble in Paradise
clickgamer.com
Bubble in Paradise™ ($0.99) by Clickgamer.com is a must-have for the word game fan.
I love to write, so with that comes a love for words. However, I honestly am not that great with games like Scrabble (so as you can imagine, I’m pretty bad at Words with Friends), but I seem to do better with the Boggle-like games, such as Scramble with Friends. Of course, the aforementioned games are best played with others, so what if you feel like playing a word game by yourself? Fortunately, Bubble in Paradise fills that void.
The first thing that I noticed with Bubble in Paradise were the graphics. The visuals are absolutely fantastic, and literally “pop” out at you from the screen. I applaud the developers for making a word game look this good. Each stage has its own unique background, so it feels like an entirely new experience as you make your way through the different stages. The music is catchy too, though I never found it annoying. In fact, when you are about to hit a critical point in the game (about to be game over), the music, as well as the on-screen visuals, will indicate that you are about to get game over. It’s a nice touch, actually, but I still panicked each time, furiously trying to make words on the screen.
The gameplay is rather simple: bubbles with letters will float onto the screen from the bottom, and you must spell words with them. Simply tap on the letters in order, and then submit the word by tapping on the last selected letter (so double-tap the last one). If you have trouble with figuring out what words can be submitted (aka real words), a green check mark will appear near the top of the screen when a word is actually valid. If that check mark isn’t there, then you can’t submit the word (no matter how much you want it to be a real word). As time goes by in the game, bubbles will start inflating; new bubbles won’t be able to get in when the screen is full, so make sure to pop bubbles before they get too big.
As you play, there will be some special power-ups or power-downs that may appear. Power-ups include bubbles that will turn surrounding bubbles into lead balls so they fall off the screen, blow up adjacent bubbles, freeze the game briefly, and more. Power-downs will have bubbles that will inflate surrounding bubbles, increase the rate at which new bubbles come in, and others. Be careful popping those! Sometimes the wrong move will end up in game over, as I have discovered many times.
Additionally, sometimes a lightbulb will appear. If you collect these you will be provided hints when you use them. This is extremely useful when you just can’t seem to find words to spell anymore. However, they are pretty sparse, so you will have to use them only when necessary.
Stars are another item that you will definitely want to keep an eye out for. These will show up very rarely (compared to the other power-ups), but it’s important that you collect them. Why? The only way to unlock new stages is to have a certain amount of stars. There are a total of five stages (with four of them having to be unlocked), so you will every star you can get. Each stage will have a different setting and the pace is greatly increased as you advance (so the game is never boring).
Bubble in Paradise includes four game modes (some have to be unlocked): Normal, Endless ?, Blitz, and Battle.
The Normal mode is basic – just create words until you can’t anymore. Once the screen starts to fill up with letters, you’ll get to see how much time you have (a red bar at the top) remaining before it’s game over.
Endless will have you spelling words endlessly – you can’t get game over. Because of that, I found this mode to be a bit boring, since there isn’t much incentive to make you keep spelling words (unless that’s just something you really like doing).
Blitz is a timed mode, where you must get as many points as you can in that limited time. Once time runs out, it’s game over. Battle allows you to play against others, either locally or via Game Center. The goal is to spell as much as you can and beat your opponent’s score.
If you’re the competitive type (like me), you can compare your score with friends on Game Center. There are also 30 achievements to obtain, so there is a lot of reason to keep playing (or at least until you get them all).
Bubble in Paradise was actually released last December, but disappeared from the App Store shortly after. It only just came back in the App Store at the end of last month, which came as a surprise to me (I never figured out why the game was removed for a while in the first place). However, I’m glad it’s back, and hopefully that means that there will be more people playing this awesome little word game (and more to compete against).
If you’re a fan of word games in general, then you definitely have to give Bubble in Paradise a try. It’s incredibly fun and fast-paced, addictive, and just one of the best word games available. Make sure to check it out in the App Store for $0.99 – it’s a universal app for your iPhone and iPad.
Electrical Engineering, University College London, U.K. I was
previously with the Department of Computer Science, University of
Porto, Portugal, rising through the ranks from Assistant to Associate
Professor, where I also led the Information Theory and Communications
Research Group at Instituto de Telecomunicações – Porto.
I received the Licenciatura
degree in Electrical Engineering from the Faculty of Engineering of the University of Porto, Portugal in 1998 and
the Ph.D. degree in Electronic and Electrical Engineering from University College London, UK in 2002. I have carried out postdoctoral
research work both at Cambridge
University, UK, as well as Princeton University, USA, in the period 2003 to 2007. I have also held
visiting research appointments at Princeton University, USA., Duke University,
USA Cambridge University, UK and University College London, UK in the period 2007 to 2013.
My research interests are in the general areas of information theory,
communications theory and signal processing. I have over 100 publications
in international journals and conference proceedings in the areas.
I was the recipient of the IEEE Communications and Information Theory Societies Joint Paper Award in
2011 for the work on Wireless Information-Theoretic Security
(with M. | 计算机 |
Michael Dell to Give Keynote Speech at SC08
Michael Dell, Dell, Inc.
Higher Performance: Supercomputing in the Connected Era
Abstract: 2008 marks the 20th anniversary of SC bringing together the world�s leading high-performance computing researchers, scientists and engineers. From the environment to health and energy, these leaders have helped address many of the world�s most pressing challenges. The next era of HPC will be enabled by super-scalable, increasingly simple technologies that will make possible even greater collaboration, productivity and scientific breakthroughs. Biography: Michael Dell, chairman of the board of directors and chief executive officer of Dell Inc., the company he founded in 1984 with $1,000 and an unprecedented idea�to build relationships directly with customers�will give the keynote address at SC08 in Austin, Texas. In 1992, Mr. Dell became the youngest CEO ever to earn a ranking on the Fortune 500 list. Mr. Dell is the author of �Direct From Dell: Strategies that Revolutionized an Industry,� his story of the rise of the company and the strategies he has refined that apply to all businesses.
In 1998, Mr. Dell formed MSD Capital, and in 1999, he and his wife formed the Michael & Susan Dell Foundation, to manage the investments and philanthropic efforts, respectively, of the Dell family.
Born in February 1965, Mr. Dell serves on the Foundation Board of the World Economic Forum and the Executive Committee of the International Business Council, and is a member of the U.S. Business Council. Mr. Dell also serves on the U.S. President's Council of Advisors on Science and Technology, the Technology CEO Council and the governing board of the Indian School of Business in Hyderabad, India.
Kenneth H. Buetow, National Cancer Institute
Developing an Interoperable IT Framework to Enable Personalized Medicine
Abstract: 21st Century biomedical research is driven by massive amounts of data; automated technologies generate hundreds of gigabytes of DNA sequence information, terabytes of high resolution medical images, and massive arrays of gene expression information on thousands of genes tested in hundreds of independent experiments. Clinical research data is no different. Each clinical trial may potentially generate hundreds of data points of thousands of patients over the course of the trial.
This influx of data has enabled a new understanding of disease on its fundamental, molecular basis. Many diseases are now understood as complex interactions between an individual�s genes, environment and lifestyle. To harness this new understanding, research and clinical care capabilities (traditionally undertaken as isolated functions) must be bridged to seamlessly integrate laboratory data, biospecimens, medical images and other clinical data. This collaboration between researchers and clinicians will create a continuum between the bench and the bedside�speeding the delivery of new diagnostics and therapies, tailored to specific patients, ultimately improving clinical outcomes.
To realize the promises of this new paradigm of personalized medicine, healthcare and drug discovery organizations must evolve their core processes and IT capabilities to enable broader interoperability among data resources, tools and infrastructure�both within and across institutions. Answers to these challenges are enabled by the cancer Biomedical Informatics Grid� (caBIG�) initiative, overseen by the National Cancer Institute Center for Biomedical Informatics and Information Technology (NCI-CBIIT). caBIG� is a collection of interoperable software tools, standards, databases, and grid-enabled computing infrastructure founded on four central principles:
Open access; anyone, with appropriate permission, may access the caBIG® tools and data
Open development; the entire research community participates in the development, testing, and validation of the tools
Open source; all the tools are available for use and modification
Federation; resources can be controlled locally, or integrated across multiple sites
caBIG� is designed to connect researchers, clinicians, and patients across the continuum of biomedical research�allowing seamless data flow between electronic health records and data sources including genomic, proteomic, imaging, biospecimen, pathology and clinical information, facilitating collaboration across the entire biomedical enterprise.
caBIG� technologies are widely applicable beyond cancer and may be freely adopted, adapted or integrated with other standards-based tools and systems. Guidelines, tools and support infrastructure are in place to facilitate broad integration of caBIG� tools, which are currently being deployed at more than 60 academic medical centers around the United States and are being integrated in the Nationwide Health Information Network as well. For more information on caBIG�, visit http://cabig.cancer.gov/.
Biography: In his role as the Associate Director for Bioinformatics and Information Technology at the National Cancer Institute (NCI), Dr. Buetow is best known for initiating the cancer Biomedical Informatics Grid� (caBIG�) and currently oversees its activities. caBIG� was conceived as the �World Wide Web� of cancer research, providing data standards, interoperable tools and grid-enabled computing infrastructure to address the needs of all constituencies in the cancer community. Guided by the principles of open source licensing, open software development, open access to the products of that development and federated data storage and integration, caBIG� facilitates data and knowledge exchange and simplifies collaboration between biomedical researchers and clinicians, leading to better patient outcomes and the realization of personalized medicine in cancer care and beyond.
As Director of the NCI Center for Biomedical Informatics and Information Technology (NCICBIIT), Buetow works to advance the center�s goal of maximizing interoperability and integration of NCI research. The center participates in the evaluation and prioritization of the NCI�s bioinformatics research portfolio; facilitates and conducts research required to address the CBIIT�s mission; serves as the locus for strategic planning to address the NCI�s expanding research initiative�s informatics needs; establishes information technology standards (both within and outside of NCI); and communicates, coordinates or establishes information exchange standards.
Buetow also serves as the Chief of the Laboratory of Population Genetics (LPG), which focuses on developing, extending and applying human genetic analysis methods and resources to better understand the genetics of complex phenotypes, specifically human cancer. He also spearheaded the efforts of the Genetic Annotation Initiative (GAI) to identify variant forms of the cancer genes detected through the NCI Cancer Genome Anatomy Project (CGAP). His laboratory combines computational tools with laboratory research to understand how genetic variations make individuals more susceptible to liver, lung, prostate, breast and ovarian cancer.
Buetow received a B.A. in biology from Indiana University in 1980 and a Ph.D. in human genetics from the University of Pittsburgh in 1985. From 1986 to 1998, he was at the Fox Chase Cancer Center in Philadelphia, where he worked with the Cooperative Human Linkage Center (CHLC) to produce a comprehensive collection of human genetic maps. Buetow has been in his role at NCI since 2000. He has published more than 160 scientific papers on a wide variety of topics in journals such as PNAS, Science, Cell, and Cancer Research.
His honors and awards include The Editor�s Choice Award from Bio-IT World (2008), The Federal 100 Award (2005), The NIH Award of Merit (2004), the NCI Director�s Gold Star Award (2004), The Partnership in Technology Award (1996), and the Computerworld Smithsonian Award for Information Technology (1995).
David Patterson, University of California Berkeley
Parallel Computing Landscape: A View from Berkeley
Abstract: In December 2006 we published a broad survey of the issues for the whole field concerning the multi-core/many-core sea change (see view.eecs.berkeley.edu). We view the ultimate goal as being able to productively create efficient, correct and portable software that smoothly scales when the number of cores per chip doubles biennially. This talk covers the specific research agenda that a large group of us at Berkeley are going to follow (see parlab.eecs.berkeley.edu) as part of a center funded for five years by Intel and Microsoft.
To take a fresh approach to the longstanding parallel computing problem, our research agenda will be driven by compelling applications developed by domain experts in personal health, image retrieval, music, speech understanding and browsers. The development of parallel software is divided into two layers: an efficiency layer that aims at low overhead for 10 percent of the best programmers, and a productivity layer for the rest of the programming community�including domain experts�that reuses the parallel software developed at the efficiency layer. Key to this approach is a layer of libraries and programming frameworks centered around the 13 design patterns that we identified in the Berkeley View report. We rely on autotuning to map the software efficiently to a particular parallel computer. The role of the operating systems and the architecture in this project is to support software and applications in achieving the ultimate goal. Examples include primitives like thin hypervisors and libraries for the operating system and hardware support for partitioning and fast barrier synchronization. We will prototype the hardware of the future using field programmable gate arrays (FPGAs) on a common hardware platform being developed by a consortium of universities and companies (see http://ramp.eecs.berkeley.edu/).
Biography: David Patterson was the first in his family to graduate from college and he enjoyed it so much that he didn�t stop until he received a Ph.D. from UCLA in 1976. He then moved north to UC Berkeley. He spent 1979 at DEC working on the VAX minicomputer, which inspired him and his colleagues to later develop the Reduced Instruction Set Computer (RISC). In 1984, Sun Microsystems recruited him to start the SPARC architecture. In 1987, Patterson and colleagues tried building dependable storage systems from the new PC disks. This led to the popular Redundant Array of Inexpensive Disks (RAID). He spent 1989 working on the CM-5 supercomputer. Patterson and colleagues later tried building a supercomputer using standard desktop computers and switches. The resulting Network of Workstations (NOW) project led to cluster technology used by many Internet services. He is currently director of both the Reliable Adaptive Distributed Systems Lab and the Parallel Computing Lab at UC Berkeley. In the past, he served as chair of Berkeley�s Computer Science Division, chair of the Computing Research Association, and president of the ACM.
All this has resulted in 200 papers, five books, and about 30 honors, some shared with friends, including election to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He was named Fellow of the Computer History Museum and both AAAS organizations. Awards were also received from the ACM, where as a fellow, he received the SIGARCH Eckert-Mauchly Award, the SIGMOD Test of Time Award, the Distinguished Service Award, and the Karlstrom Outstanding Educator Award. Patterson is also a fellow at the IEEE, where he received the Johnson Information Storage Award, the Undergraduate Teaching Award and the Mulligan Education Medal. Finally, he shared the IEEE the von Neumann Medal and the NEC C&C Prize with John Hennessy of Stanford University.
Jeffrey Wadsworth, Battelle Memorial Institute
High Performance Computing and the Energy Challenge: Issues and Opportunities
Abstract: Energy issues are central to the most important strategic challenges facing the United States and the world. The energy problem can be broadly defined as providing enough energy to support higher standards of living for a growing fraction of the world�s increasing population without creating intractable conflict over resources or causing irreparable harm to our environment. It is increasingly clear that even large-scale deployment of the best, currently available energy technologies will not be adequate to successfully tackle this problem. Substantial advances in the state of the art in energy generation, distribution and end use are needed. It is also clear that a significant and sustained effort in basic and applied research and development (R&D) will be required to deliver these advances and ensure a desirable energy future. It is in this context that high performance computing takes on a significance that is co-equal with theory and experiment. The U.S. Department of Energy (DOE) and its national laboratories have been world leaders in the use of advanced high-performance computing to address critical problems in science and energy. As computing nears the petascale, a capability that until recently was beyond imagination, it is now poised to address these critical problems. Battelle Memorial Institute manages or co-manages six DOE national laboratories that together house some of the most powerful computers in the world. These capabilities have enabled remarkable scientific progress in the last decade. The world-leading petascale computers that are now being deployed will make it possible to solve R&D problems of importance to a secure energy future and contribute to the long-term interests of the United States.
Biography: Jeffrey Wadsworth is the senior executive responsible for Battelle's laboratory management business. Battelle currently manages or co-manages six U.S. Department of Energy (DOE) national laboratories: Brookhaven, Idaho, Lawrence Livermore, the National Renewable Energy Laboratory, Oak Ridge and Pacific Northwest. A Battelle subsidiary, Battelle National Biodefense Institute, manages the National Biodefense Analysis and Countermeasures Center for the U.S. Department of Homeland Security. The laboratories have combined research revenues of $3.2 billion and employ 16,000 staff.
Wadsworth joined Battelle in August 2002 and was a member of the White House Transition Planning Office for the Department of Homeland Security before being named director of Oak Ridge National Laboratory (ORNL) and president and CEO of UT Battelle, LLC, which manages the laboratory for DOE, in August 2003. As ORNL's director through June 2007, he was responsible for managing DOE's largest multi-purpose science and energy laboratory, with 4,100 staff and an annual budget of $1 billion. Under his leadership the laboratory commissioned the $1.4 billion Spallation Neutron Source, launched DOE's first nanoscience research center, developed the world's most powerful unclassified computer system, expanded its work in national security, and initiated an interdisciplinary bioenergy program.
Before joining Battelle, Wadsworth was Deputy Director for Science and Technology at Lawrence Livermore National Laboratory, where he oversaw science and technology across all programs and disciplines. His responsibilities included programmatic and discretionary funding, technology transfer, and workforce competencies. He joined the laboratory in 1992 and was Associate Director for Chemistry and Materials Science before becoming Deputy Director in 1995. From 1980 to 1992, Wadsworth worked for Lockheed Missiles and Space Company at the Palo Alto Research Laboratory, where as manager of the Metallurgy Department, he was responsible for direction of research activities and acquisition of research funds.
Wadsworth attended the University of Sheffield in England, graduating with a bachelor's and a Ph.D. in 1972 and 1975, respectively. He was awarded a D. Met. for published work in 1990 and an honorary D. Eng. in 2004. He joined Stanford University in 1976 and conducted research on the development of steels, superplasticity, materials behavior and Damascus steels. He lectured at Stanford after joining Lockheed, and remained a consulting professor until 2004. He is a Distinguished Research Professor in the department of materials science and engineering at the University of Tennessee.
He has authored and co-authored more than 280 papers in the open scientific literature on a wide range of materials science and metallurgical topics; one book, Superplasticity in Metals and Ceramics (Cambridge, 1997); and four U.S. patents. He has presented or co-authored 300 talks at conferences, scientific venues, and other public events, and has twice been selected as a NATO Lecturer. His work has been recognized with many awards, including Sheffield University's Metallurgica Aparecida Prize for Steel Research and Brunton Medal for Metallurgical Research. He was elected a Fellow of ASM International in 1987, of The Minerals, Metals & Materials Society in 2000, and of the American Association for the Advancement of Science (AAAS) in 2003. He is a member of the Materials Research Society (MRS) and the American Ceramic Society (ACeRS). In January 2005 he was elected to membership in the National Academy of Engineering "for research on high temperature materials, superplasticity, and ancient steels and for leadership in national defense and science programs."
Mary Wheeler, University of Texas at Austin
Computational Frameworks for Subsurface Energy and Environmental Modeling and Simulation
Abstract: Over the past 60 years, modeling and simulation have been essential to the success of the petroleum industry. This fact dates back to 1948 when von Neumann was a consultant for Humble Research in Houston, Texas. Exploration and production in the deep Gulf of Mexico and the North Slope of Alaska and the design and construction of the Alyeska pipeline could not have been achieved without modeling of coupled nonlinear partial differential equations.
Today, energy-related industries are facing new challenges: unprecedented demand for energy as well as growing environmental concerns over global warming and greenhouse gases. Resolving complex scientific issues in addressing next generation energies requires multidisciplinary teams of geologists, biologists, chemical, mechanical and petroleum engineers, mathematicians and computational scientists working closely together. Simulation needs include: 1) the development of novel multiscale (molecular to field scale) and multiphysics discretizations for estimating physical characteristics and statistics of stochastic systems; 2) modeling of multiscale stochastic problems for quantifying uncertainty due to heterogeneity and small-scale uncertainty in subdomain system parameters; 3) verification and validation of models through experimentation and simulation; 4) robust optimization and optimal control for monitoring and controlling large systems; and 5) petascale computing on heterogeneous platforms that includes interactive visualization and seamless data management.
In order to address these challenges, a robust reservoir simulator comprised of coupled programs that together account for multicomponent, multiscale, multiphase (full compositional) flows and transport through porous media and through wells and that incorporate uncertainty and include robust solvers is required. The coupled programs must be able to treat different physical processes occurring simultaneously in different parts of the domain and, for computational accuracy and efficiency, should also accommodate multiple numerical schemes. In addition, these problem-solving environments or frameworks must have parameter estimation and optimal control capabilities. We present a "wish list" for simulator capabilities as well as describe the methodology and parallel algorithms employed in the IPARS software being developed at the University of Texas at Austin.
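As a point of reference (a standard textbook form, not a formula quoted from the abstract), the simplest single-phase version of the porous-media flow models referred to above couples Darcy's law with conservation of mass:

\[
\mathbf{u} \;=\; -\frac{k}{\mu}\,\nabla p,
\qquad
\frac{\partial (\phi \rho)}{\partial t} \;+\; \nabla\!\cdot\!\left(\rho\,\mathbf{u}\right) \;=\; q,
\]

where $p$ is pressure, $\mathbf{u}$ the Darcy velocity, $k$ permeability, $\mu$ viscosity, $\phi$ porosity, $\rho$ density, and $q$ a source/sink (well) term. The multiphase, compositional systems described in the abstract stack one such coupled pair per phase and component, with nonlinear closure relations, which is what makes them demanding at field scale.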
Biography: Mary Fanett Wheeler is a world-renowned expert in massively parallel processing. She has been a part of the faculty at The University of Texas at Austin since 1995 and holds the Ernest and Virginia Cockrell Chair in the departments of Aerospace Engineering and Engineering Mechanics and Petroleum and Geosystems Engineering. She is also director of the Center for Subsurface Modeling at the Institute for Computational Engineering and Sciences (ICES).
Wheeler's research group employs computer simulations to model the behavior of fluids in geological formations. Her particular research interests include numerical solution of partial differential systems with application to the modeling of subsurface flows and parallel computation. Applications of her research include multiphase flow and geomechanics in reservoir engineering, contaminant transport in groundwater, sequestration of carbon in geological formations, and angiogenesis in biomedical engineering. Wheeler has published more than 200 technical papers and edited seven books. She is currently an editor of nine technical journals.
Wheeler is a member of the Society for Industrial and Applied Mathematics, the Society of Petroleum Engineers, the Association for Women in Mathematics, the Mathematical Association of America and the American Geophysical Union. She is a fellow of the International Association for Computational Mechanics and is a certified professional engineer in the state of Texas.
Wheeler has served on numerous committees for the National Science Foundation and the Department of Energy. For the past seven years she has been the university lead in the Department of Defense User Productivity Enhancement and Technology Transfer Program (PET) in environmental quality modeling. Wheeler is on the Board of Governors for Argonne National Laboratory and on the Advisory Committee for Pacific Northwest National Laboratory. In 1998, Wheeler was elected to the National Academy of Engineering. In 2006, she received an honorary doctorate from Technische Universiteit Eindhoven in the Netherlands and in 2008 an honorary doctorate from the Colorado School of Mines.
Wheeler received her B.S., B.A., and M.A. degrees from the University of Texas at Austin and her Ph.D. in mathematics from Rice University.
Nothing is as simple as we hope it will be
Items of interest in computer and network security, privacy, voting, public policy, etc., plus a few that just tickled my fancy or provoked my outrage.
Big Ball of Mud
[No, I don't mean downtown New Orleans.] Two contrasting styles of software architecture have been characterized as "diamond" and "big ball of mud." While the former is prettier and more robust, the latter is a lot easier to create and to add to. Here's a website that explores the metaphor in considerable detail, with lots of stimulating ideas.
Labels: Assorted
posted by Jim Horning at 4:28 PM
Creationism, Bush and Corporate Responsibility
A post on Dan Gillmor's Bayosphere blog bemoans business leaders' silence in the face of (one of) Bush's attacks on science....

I asked Benhamou, one of Silicon Valley's more distinguished people, whether it was the duty of executives to speak out when the president of the United States suggests that science classes be required to teach "intelligent design"--basically creationism in new clothing--as an equally valid alternative to evolution. They absolutely should speak out, he said. It's a fact, he observed, that today's knowledge-based companies need people "whose minds are trained on knowledge and scientific fact, and not mixed up with this creationism bullshit."

I then asked if he could name anyone in a prominent corporate position who'd actually spoken out in this way. He could not, he said with what sounded like regret: "It's hard to be caught on TV saying these things, but it's particularly important now. I feel quite worried that we're passive about it."

Corporate America's leaders are willing to speak out on purely selfish matters. They'll call for lower taxes, for curbs on shareholder lawsuits, for all kinds of things that might be good for business interests in a specific way. They'll even call for better education in a general sense. And some of them push for higher standards in schools. But when it comes to even discussing the willingness of George Bush, his administration and his fundamentalist followers to turn public education into religious indoctrination--and to mock the foundations of scholarship by promoting faith as a legitimate scientific alternative to the scientific method--they fall silent.

Silence in the face of this challenge to basic education is damaging America. It gives yet another advantage, at least in the long term, to nations that teach children to think logically...
Labels: Policy
Australian International University website
For all academics, former academics, students, and former students, this website repays close study. Truly, no university has previously managed to achieve all of these goals. Thanks to Brian Randell for bringing it to my attention.
Rapacitas Bona Est
Updated March 20, 2008 to add: This website appears to have changed hands since I made this post. Use the Wayback Machine to see previous versions.
Labels: Amusing, Satiric
posted by Jim Horning at 10:56 PM
Blackberry Women and Technology awards
A BBC story reports that Research in Motion is doing something to help remedy a situation that more people have talked about than have done anything about.

Top women in the field of technology are to be recognised in the first Blackberry Women and Technology awards. The awards have been set up by Research in Motion, the company behind the Blackberry mobile device, and Aurora, a women's business networking group. Prizes will be given to women who have been leading lights in academia, journalism, public and private sectors, as well as the top female mentors. The awards will raise their profile in what has been a male-dominated world. "What we want is to recognise the progress and achievement women have made not only in the technology industry, but also in using technology," Charmaine Eggberry, from RIM, told the BBC News website.

Bravo!
Labels: Policy
posted by Jim Horning at 11:25 AM
Icons of Progress
On May 11, 1997, an IBM computer called IBM® Deep Blue® beat the world chess champion after a six-game match: two wins for IBM, one for the champion and three draws. The match lasted several days and received massive media coverage around the world. It was the classic plot line of man vs. machine. Behind the contest, however, was important computer science, pushing forward the ability of computers to handle the kinds of complex calculations needed to help discover new medical drugs; do the broad financial modeling needed to identify trends and do risk analysis; handle large database searches; and perform massive calculations needed in many fields of science.
Since the emergence of artificial intelligence and the first computers in the late 1940s, computer scientists compared the performance of these “giant brains” with human minds, and gravitated to chess as a way of testing the calculating abilities of computers. The game is a collection of challenging problems for minds and machines, but has simple rules, and so is perfect for such experiments.
Over the years, many computers took on many chess masters, and the computers lost. IBM computer scientists had been interested in chess computing since the early 1950s. In 1985, a graduate student at Carnegie Mellon University, Feng-hsiung Hsu, began working on his dissertation project: a chess playing machine he called ChipTest. A classmate of his, Murray Campbell, worked on the project, too, and in 1989, both were hired to work at IBM Research. There, they continued their work with the help of other computer scientists, including Joe Hoane, Jerry Brody and C. J. Tan. The team named the project Deep Blue. The human chess champion won in 1996 against an earlier version of Deep Blue; the 1997 match was billed as a “rematch.”
The champion and computer met at the Equitable Center in New York, with cameras running, press in attendance and millions watching the outcome. The odds of Deep Blue winning were not certain, but the science was solid. The IBMers knew their machine could explore up to 200 million possible chess positions per second. The chess grandmaster won the first game, Deep Blue took the next one, and the two players drew the three following games. Game 6 ended the match with a crushing defeat of the champion by Deep Blue.
The match’s outcome made headlines worldwide, and helped a broad audience better understand high-powered computing. The 1997 match took place not on a standard stage, but rather in a small television studio. The audience watched the match on television screens in a basement theater in the building, several floors below where the match was actually held. The theater seated about 500 people, and was sold out for each of the six games. The media attention given to Deep Blue resulted in more than three billion impressions around the world.
Deep Blue had an impact on computing in many different industries. It was programmed to solve the complex, strategic game of chess, so it enabled researchers to explore and understand the limits of massively parallel processing. This research gave developers insight into ways they could design a computer to tackle complex problems in other fields, using deep knowledge to analyze a higher number of possible solutions. The architecture used in Deep Blue was applied to financial modeling, including marketplace trends and risk analysis; data mining—uncovering hidden relationships and patterns in large databases; and molecular dynamics, a valuable tool for helping to discover and develop new drugs.
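Deep Blue's own evaluation and search code is not reproduced here (much of it ran on custom chess chips), but the core game-tree idea it scaled up, minimax search with alpha-beta pruning, can be sketched generically. The Python below is only an illustration: the evaluate, moves and apply_move callbacks are hypothetical placeholders that a real engine would supply, along with move ordering, transposition tables and quiescence search.

```python
# Generic minimax search with alpha-beta pruning (an illustrative sketch,
# not Deep Blue's implementation). The caller supplies the game-specific parts.

def alphabeta(state, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    """Return the best score reachable from `state` when searched `depth` plies deep."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)              # static evaluation at the search horizon
    if maximizing:
        best = float("-inf")
        for m in legal:
            score = alphabeta(apply_move(state, m), depth - 1,
                              alpha, beta, False, evaluate, moves, apply_move)
            best = max(best, score)
            alpha = max(alpha, best)
            if alpha >= beta:               # the opponent would never allow this line,
                break                       # so the remaining siblings can be pruned
        return best
    else:
        best = float("inf")
        for m in legal:
            score = alphabeta(apply_move(state, m), depth - 1,
                              alpha, beta, True, evaluate, moves, apply_move)
            best = min(best, score)
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best
```

Pruning is what lets a fixed time budget go further: branches that a rational opponent would never permit are cut off early, which is one reason a machine examining on the order of 200 million positions per second can still look many moves ahead.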
Ultimately, Deep Blue was retired to the Smithsonian Museum in Washington, DC, but IBM went on to build new kinds of massively parallel computers such as IBM Blue Gene®.
The Deep Blue project inspired a more recent grand challenge at IBM: building a computer that could beat the champions at a more complicated game, Jeopardy!.
Over three nights in February 2011, this machine—named Watson—took on two of the all-time most successful human players of the game and beat them in front of millions of television viewers. The technology in Watson was a substantial step forward from Deep Blue and earlier machines because it had software that could process and reason about natural language, then rely on the massive supply of information poured into it in the months before the competition. Watson demonstrated that a whole new generation of human-machine interactions will be possible.
We are engineers and artists exploring crossroads of technology and humanity.
Left Brain + Right Brain = Total Package.
We build software and brands.
Here, logic is not the enemy of intuition and colors are just as revered as numbers.
An Ambidextrous Mind
is the best way to describe us
We are fortunate to have a core team whose collective mind is a happy mix of creativity and reason. To us, writing a brilliant piece of code is just as thrilling as writing an award-winning TV spot. And that makes all the difference in the work we do and the applications we create.
We are not just back-end developers or front-end designers. Our unique mix of left-brain and right-brain makes things come together for our clients in a holistic way. We are innovators, we are creators, we are communicators. We think, we build, and we create value. To top it all, we know how to sell. And we do it with style, strategy and great ideas.
We provide high-end software application design and development services to highly selective world-class organizations and we develop innovative software products. We also provide brand strategy, communication and design services.
In a nutshell, we provide business solutions that make your business work well and look good too.
If that sounds like what you are looking for, you’ve come to the right place.
Chintan Patel
Co-founder and Tech Lead
Dr. Chintan Patel is the Co-Founder and Technology Lead at Applied Informatics. He brings more than 15 years of experience as a Software Engineer and Biomedical Informatician. His areas of expertise include Semantic Databases, Biomedical Ontologies, Machine Learning and Natural Language Processing. His work has been published in dozens of leading conferences and academic journals. Previously, he worked at IBM's T.J. Watson Research Center, conducting research on ontology-reasoning and information extraction. He holds a Ph.D. in Biomedical Informatics (with honors) from Columbia University.
Sharib Khan
Co-founder and Product Lead
Sharib Khan, aka "idea khan," leads product development and strategic initiatives at Applied Informatics. He has been responsible for the conception of Applied Informatics' flagship product, TrialX.com, a consumer-centric clinical trials patient recruitment platform. The success of TrialX and its enterprise version, iConnect, is evident in their use at some of the world's most renowned academic medical centers and patient advocacy groups. A decision engine built over TrialX, Ask Dory, won the NCI's Best Cancer Consumer Application Development Award in 2011 for its simplicity and novelty.
Prior to TrialX, Sharib co-founded Future Today Inc., a leading online and connected TV media company. Today, Future Today and its media properties reach more than 3 million unique visitors every month. After completing his Master's in Biomedical Informatics at Columbia University, Sharib worked as a Biomedical Informatics Researcher on several federally funded research projects, which included the development of a patient-centric Electronic Health Record, a health promotion portal, and the adoption of EHRs among clinics in New York City as part of the NYC Department of Health's Primary Care Information Project. After completing medical training at the University of Medical Sciences, Delhi University, Sharib traded his stethoscope for computers, code and technology, a passion that goes all the way back to his high school days in the quaint hills of Northern India where he got fascinated with the power of information systems to transform work and life.
Kartik Vishwanath
Senior Software Architect
Kartik has been building distributed systems in the healthcare domain for more than 5 years, specializing in building out indexes, services, and infrastructures for making EHRs and population health systems more searchable. His interests also include building infrastructures to do machine learning against clinical data to build custom search data sets. Kartik graduated with a Masters in Computer Science from UMKC in 2008. He has a number of publications including in the Journal of the American Medical Informatics Association (JAMIA) and presentations in conferences such as AMIA, AAAI, and SAC. He also won AMIA’s Distinguished Paper Award in 2005.
Aziz Rawat
Creative Director, Design Lead
Aziz grew up on the Shivaliks of northern India. Nick-named "General" because of his childhood fascination with the army, he is now a creative technologist constantly engaged in a battle of ideas.
Having worked in such differing cultural markets as the U.S. and India, Aziz brings to his work a cross-continental perspective that is rare and refreshing, and which has earned him recognition from international award shows to local baseball fans. Aziz has helped launch global brands like Kraft, Dial and Boots as well as local ones such as the Chicago White Sox and The Art Institute of Chicago. Other clients on his roster include Disney, Starwood Hotels & Resorts, Jim Beam, Wrigley and Bayer in the U.S. and NIIT, ABN Amro Bank, Dabur and Park Hotels in India.
An alumnus of the Faculty of Fine Art, Baroda (India) and the VCU Brandcenter (Richmond, Virginia), Aziz has worked at some of the planet's top creative shops including Energy BBDO, DDB and Contract Advertising.
Aziz is proud of the diversity and breadth of the work he has helped create over the years, which he believes is a result of a happy confluence of his pragmatic mother, his perfectionist father and the paradoxical nature of all his experiences living in India, on both coasts of the United States and in the Midwest. Now living with his wife and daughter in Chicago, Aziz loves to laugh, enjoys art, philosophy and politics, and feeds on all sorts of new inventions and ideas that the web has to offer.
Julie Dulude
Creative Director, Copy Lead
Julie is one of the world's most skeptical consumers. Ironically, it's her cynicism about all things branded that makes her good at what she does. Part copywriter, part strategist, Julie searches far and wide for emotional truths that will resonate with the intended audience. Julie's work spans a diverse range of accounts including Dove, Kotex, CleanWell, Honest Foods, BlackBerry and American Eagle Outfitters, with special expertise in the wellness and natural foods markets. Julie has worked in all mediums from interactive to broadcast, but she brings to Applied Informatics far more than a traditional advertising background.
Some of Julie's more memorable projects have been inventing new product ideas for Adidas and redesigning the consumer shopping experience for Westfield malls. She has named new companies and products, and helped launch multiple brands from scratch, in each case working closely with the client to create a unique and ownable brand voice. A graduate of Wesleyan University and VCU Brandcenter, Julie is a recovering Wanderjahr and has lived and traveled to such far-flung places as the Marshall Islands, Ecuador, India, South Africa and Costa Rica. Julie's other passion is cooking, and she enjoys inventing new recipes and testing them out on willing guinea pigs.
What is ED 2.0?
ED 2.0 is the use of web 2.0 applications in education. To make this initial definition useful it is necessary to say something about the meaning of "Web 2.0".

Web 2.0

The term "Web 2.0" was coined by Darcy DiNucci in 1999. She noted that "The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens." She added that this interactivity will migrate from the computer screen to mobile devices, especially cell phones.

The term "Web 2.0" gained traction in 2004 when Tim O'Reilly organized a now-famous conference to consider the status and future developments in the new interactive or participatory applications on the Internet. Since that time the term has come into common usage. Web 2.0 has also been called the read/write web, the participatory web, the user-generated content web, and other terms.

At this point it may be easiest to define Web 2.0 in terms of familiar examples. The advantage of this sort of definition is that almost everyone interested in learning about Web 2.0 is completely familiar with most of these examples. The downside is that the definition will tend to limit imagination by making users of the term think about possibilities by reference to those examples and not the underlying concept or technical capabilities. Web 2.0, thought of in terms of the most familiar examples, is the Web of wikis, with Wikipedia being the most familiar, blogs with comments, eBay and Amazon as platforms for businesses, YouTube and Flickr, social networks such as Facebook, LinkedIn and Myspace, microblogging networks such as Twitter, etc. What is common to all of these Web applications is that users not only download static content they seek, but also upload content: documents, videos and photos, entire business enterprises, and in the process form links, user groups, and networks. Web 2.0 depends on certain features of software which enable this sort of interactivity. However it is unhelpful to think about Web 2.0 in terms of technical features. It is the social uses, not the technical capabilities, that make Web 2.0 different, and interesting.

Network Effects

O'Reilly has emphasized 'network effects' as the key driver of Web 2.0. What are 'network effects'? We can think of many social situations as networks, with each unit as a node. There are in any network positive or negative effects to each node from such features of the network as size, or speed, etc. Let's first consider a negative network effect of size. Thinking of the road network of LA as an example, and the roadway between Santa Monica and City Hall as a piece of the network, imagine the effect of the number of drivers (the number of nodes) on each driver. If there are no other drivers, then the network affords the one driver on the road between Santa Monica and City Hall a frictionless connection. Up to a certain limit, each additional driver does not affect the value of the network. But past a certain size (which is in fact passed every day) each additional driver decreases the value of the road network, adding friction (traffic) and making it more difficult for each driver to get where they want to go. The more additional drivers, the more traffic, the slower the drive, the less value the roadway.
Now for an opposite example, think of the FAX network. The first FAX machine user gets no value at all, because he or she cannot use the machine; there is no one to send a FAX to because no one else has a FAX machine. Up to a certain limit the situation does not improve very much, because the users can only use the machine to FAX documents to a small handful of others. But past a certain limit the FAX machine becomes useful, as many businesses have FAX. Then it becomes expected for businesses to have FAX machines, and the FAX becomes a highly valuable, then necessary tool. The more users in the FAX network, the more value the network has for every FAX user (node).

Some simple arithmetic shows that positive network effects grow rapidly. If there are only two nodes A and B in a network the number of connections between them = 1. If we add another node C, the number of connections = 3 (A-B, B-C, A-C). If we add one more, D, the number of connections now = 6. Add one more, E, and the connections = 10. The point: the number of connections grows much more rapidly than the number of nodes.

Back to ED 2.0

The task of imagining and building ED 2.0 consists of constructing a platform upon which can be established an indefinite number of teaching-learning networks. In a comprehensive learning materials network, anyone -- teacher, scientist or researcher, homeschool parent, commercial provider, etc. -- can post useful materials. Imagine for example a "wiki-riculum" where every conceivable topic might be developed, amplified, tested, assessed, for use by anyone interested in teaching about or learning about that topic. I take the notion of "every conceivable topic" seriously, because the voluntary contributions of users, motivated by the urge for creativity, by sheer vanity, or by social virtue (the desire to contribute to the good) will extend the range of topics far beyond what any administrative organization can create and establish. Few schools can field a course in Vietnamese literature, for example, because the costs of developing the course and making it available as a for-credit option are too great. But the Vietnamese community, including the many teachers from Vietnam (located in the US, France, Viet Nam and elsewhere) can readily contribute such a course through the gifts economy. As a result so many students of Vietnamese origin can have access to an on-line course in their national literature, perhaps on a very low cost-no-cost basis.

Imagine endless edblogs developing support materials for every topic in every subject matter, with comments and useful links to additional materials, resources, problem sets, assessment tools. Imagine a Facebook-like application for teachers, breaking through the barriers of teacher isolation, providing support networks, assessments of tools, links to worked out lesson plans and curriculum units, etc.

These are among the ideas that we will be exploring in this blog. Please add comments, or communicate directly with me to add your voice to this exploration of ED 2.0.
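The arithmetic above has a simple closed form. With n nodes, the number of possible pairwise connections is

\[
\binom{n}{2} \;=\; \frac{n(n-1)}{2},
\]

which gives 1, 3, 6 and 10 connections for n = 2, 3, 4 and 5, the figures in the example, and grows roughly with the square of the number of nodes; this is the relationship popularized as Metcalfe's law.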
Leonard Waks
Comments
dp (October 4, 2009 at 5:45 PM): love the phrase "wiki-riculum" - fab
Kevin Kim (October 4, 2009 at 10:38 PM): I'm very interested in applying this web 2.0 tech/concept to education as it's my org's main business (provincial gov of education). Thanks for your insight, and I'll certainly visit again. :)
Release date: Nov. 2, 2010
The Fedora Project is a Red Hat sponsored and community-supported open source project. It is also a proving ground for new technology that may eventually make its way into Red Hat products. It is not a supported product of Red Hat, Inc.
The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from free software. Development will be done in a public forum. The Red Hat engineering team will continue to participate in the building of Fedora and will invite and encourage more outside participation than was possible in Red Hat Linux. By using this more open process, The Fedora Linux project hopes to provide an operating system that uses free software development practices and is more appealing to the open source community. Fedora 14, code name 'Laughlin', is now available for download. What's new? Load and save images faster with libjpeg-turbo; Spice (Simple Protocol for Independent Computing Environments) with an enhanced remote desktop experience; support for D, a systems programming language combining the power and high performance of C and C++ with the programmer productivity of modern languages such as Ruby and Python; GNUstep, a GUI framework based on the Objective-C programming language; easy migration of Xen virtual machines to KVM virtual machines with virt-v2v... (from the manufacturer's website)
1 DVD for installation on the x86_64 platform.
Video Game Review: Super Mario Bros. 3 (Super Mario Advance 4: Super Mario Bros. 3 [Game Boy Advance])
Super Mario Bros. 3 is often considered one of the greatest video games of all time, frequently being referenced both in and out of the Super Mario Bros. franchise. Its vast number of unique power-ups and advanced physics engine have kept it rather close to the hearts of gaming enthusiasts everywhere, and its contributions to the platforming genre were just as substantial as, if not more substantial than, the contributions of its predecessors. One can say with almost absolute confidence that the modern gaming landscape would be completely different had this game never come to fruition.
This game has been remade at least twice that I know of: once for Super Mario All-Stars and again as the version which I will be reviewing today, which was released as the larger portion of Super Mario Advance 4. Now, I'm not sure of the specific details of these remakes, like if they simply added pretty graphics on top of the old physics and what-not, but I do know... they're all damn fun, and it doesn't really matter.
Super Mario Bros. 3 features graphics which are clearly inspired by their Super Mario All-Stars counterparts, but aren't quite the same. There are a few small line art differences and a lot of differences in terms of color tones. In general everything is lighter than it was on the SNES, and there are times that I feel like things are too light. Usually, though, the colors and graphics aren't anything troublesome, and often look rather nice and pretty if a little stunted by their dated animations. You see, while everything looks nice, it also feels a little... I hate to say lifeless, because when everything is in motion it definitely isn't. But I will say that if you take the time to look at the enemies and stages, you can feel how mechanical it all is. With a few exceptions such as the brilliant glimmer of breakable blocks and the cute little dance the Pile Driver Micro-Goombas do before they leap into the air.
The music and sound is passable, but I found the sound effects to be far more memorable than the music. I can still hear the Micro-Goombas pounding the earth, Mario bouncing on a music note block, and the bwing-bwing of a swirling raccoon tail. Even the bwoodf of magical smoke as Mario transforms from tanooki to statue and back again still sounds clearly in my ears. The music is a lot less memorable, with most of the songs I do remember either being renditions of tracks from the original Super Mario Bros. (from which we have a very cool version of the underground theme) or having been revitalized in recent times. There are a couple of exceptions though, that I have to mention. One of these is the Coin Heaven theme utilized in this game, as it has a very memorable and upbeat, if cheesy, melody. The athletic theme is also fairly memorable, and not terribly stale. It does its duty well and reminds players to stay on their toes.
But I would be remiss if I did not mention the most memorable track from this game, and perhaps one of the most memorable tracks from any video game: the Doom Ship theme. This song has followed me through my life ever since I first heard it being processed through the NES' very limited chipset. And for good reason. The song that plays as Mario boards and infiltrates the Koopalings mighty Doom Ships sets the mood in all the right ways. It's heavy, deep, and gives you a sense of foreboding that builds as you delve deeper into the stage and the action really revs up. Combined with the classic bwow of the cannonballs and bullet bills, the Doom Ship song will invoke all kinds of emotion from players. If there's any single job of music from a video game, that would be it. And this track does it in spades.
The physics for the game are pretty solid, although Mario seems to be a little finicky and enjoys dashing off the edge of platforms at the tap of a button. Unlike the Super Mario Land titles for the original Game Boy system, Mario won't lose his momentum in mid-jump, and that is a very good thing because there are some simply nerve-racking obstacles in this game. In fact, the later levels can be extremely challenging. Unlike similar games, however, these levels are never unfair and when you die it's your fault. And that's the beauty part of it. Some of these stages might be simply unforgiving, but they aren't impossible, they aren't cheating, and you aren't dying because the game decided you simply shouldn't win this time. You die, your fault. And that goes a long way toward preventing the kind of frustration that leads to anger, because when it's your fault for the loss you just try again and try to get better.
That said, I simply cannot fathom playing the original NES game without a save feature. I'm going to eventually, as I own it and it's my mission to experience everything I own to the fullest that I can. But man, this is a long and hard game. It took me about a week and a half of on-and-off play to see this through to the end. I don't know if I'm prepared to handle this all in just one sitting... with gruelling consequences for a game over.
Speaking of game overs, that's baby stuff in this game. Literally all a game over does is cause a screen to appear asking if you want to continue, and then send you a little further back on the map. You don't even have to replay any levels like you would in the more modern New Super Mario Bros. games. While I, as a player with a time constraint, found that to be nice and all, and probably good for my blood pressure, I don't really feel that it's a good design decision. It makes game overs no more damaging than a lost life, and in that case, why have them at all? If your game overs don't come with consequences, might as well give the player infinite lives. Just how I feel about all that.
As in most Super Mario games, you're supposed to save only after completing a fortress stage. However, there is also a Quick Save feature which lets you save anywhere on the map at any time. The downside to this is that if you resume from a Quick Save and forget to save again before shutting down, you're going to lose everything since the last fortress. So be careful about how you use the save and when you use it, especially with the fleeting battery life of the GBA systems.
The plot isn't really anything remarkable, although something interesting I noticed as I finished it off the other day (and this is a pretty big spoiler, so if you care about that... y'know, stop reading): Princess Peach isn't kidnapped until near the end of the game. Throughout the game the Princess sends you letters with helpful hints and items, but I always assumed that these letters were sent from somewhere within Bowser's Keep. In retrospect that doesn't make sense, but hey, the Super Mario universe has all kinds of magic. However, at the end of World 7 (which is the most difficult world in the game, in my opinion) you get a letter from Bowser who mentions having kidnapped the Princess while Mario was out rescuing the other kingdoms from the Koopalings. That caught me a little by surprise, because I'd always assumed Mario was out to rescue Peach the whole time. Pretty cool way to twist things a bit, Nintendo, even if the end-game is ultimately the same.
As I mentioned, Mario actually leaves Mushroom Kingdom to rescue a number of other kingdoms (the Grass, Desert, Water, Giant, Sky, Ice, and Pipe Kingdoms, respectively, and to ultimately attack the Dark Kingdom where Bowser lives). Each land has its own gimmick with enemies and obstacles which fit the setting quite nicely. The ice land has lower traction, the giant land has enormous enemies, sky land has cloud platforms and flying beetles, and the desert kingdom has an evil miniature sun which has made it its personal mission to fry Mario into a spicy Italian crisp. All of these worlds are awesome, and each one has something to offer which players are going to remember.
What will players remember about Pipe Land, though? How infuriated it made them, most likely. Pipe Land is home to a number of mazes and the majority of its stages are the most difficult in the game. Even the final land is a cakewalk compared to Pipe Land. I take this as confirmation that Ludwig von Koopa, the Koopaling who orchestrated an attack on Pipe Land, is actually the most evil of Bowser's children. And also the most tactically competent.
While Dark Land isn't nearly as difficult as some of the other lands in the game, it is one of the most memorable. Each stage brings you closer to Bowser's castle, and you get a very strong sense of invading an enemy kingdom. It starts off with Mario defeating Bowser's infantry and naval units, which are a pair of very awesome stages where the plumber single-handedly takes out tanks and warships in an ocean of blood (yes, blood). After this the world becomes a little more regular, which we can assume to be Mario invading the civilian areas of the Dark Land. Ultimately Mario storms Bowser's castle to engage in a fairly difficult end-game brawl to set the peachy princess free.
What this game lacks in music, it more than makes up for in level design and simple variety. There are a few dozen enemy types, all with unique behavior which lends a sort of puzzle element to an otherwise puzzle-less (in the traditional sense) game. This game introduced Boos, Thwomps, and Chain Chomps to the franchise, as well as one of my personal favorite enemies: Hot Feet. Hot Feet are haunted candle flames which can freely leave their stick to pursue Mario. Like Boo they stop when spotted, but they travel exclusively along the ground. Love them.
But of course the most renowned aspect of Super Mario Bros. 3 is its large number of power-ups. Mario has a personal arsenal in this game, able to assume a number of animal forms, the typical fire flower item, starmen, and even a hammer-throwing suit which appears stolen directly from the Hammer Bros. themselves. This is all easily managed by the introduction of an inventory system, one which, in my opinion, is the best inventory system the franchise has seen thus far.
This GBA port of the game features several notable enhancements over the original. For starters, clearing the game unlocks a World Select menu, which lets you jump around to any world you want. After clearing every level, you can also freely replay and exit levels at your whim. This is a nice touch, but I wish it was something which came earlier in the game. Also there are two methods of play: Mario Mode, and Mario & Luigi Mode. While Mario Mode lets you play through the game as the iconic hero, Mario & Luigi Mode is a throwback to the back-and-forth two-player mode of the NES era. While I'm sure that was the intention behind its inclusion, the mode has another function: allowing you to play as Luigi.
In most instances this is nothing special, as Luigi is often a simple palette swap of Mario. However in this game they went the extra mile to make Luigi play just a tad differently. He jumps higher, falls slower, and slides farther than his little big brother. Luigi also comes equipped with his somewhat-trademarked panic-jump from Super Mario Bros. 2. At first I really hated that this bizarre jump was finding its way into every game where Luigi was used, but I admit that it's beginning to grow on me.
Unfortunately, as I do not have multiple copies of the game or multiple Game Boys, I could not play any of the multiplayer games. I am also unable to assess the e-Reader exclusive levels and power-ups, which is a real bummer as those were the primary reason I bought this version of the game. Apparently they're now ridiculously priced collectibles. Meh.
Really, I have no complaints with Super Mario Bros. 3. It's a great game, and I recommend anybody with any fondness for platformers to give it a go. Yet despite all the praise I give it, and all the praise I still want to give it, there was always something a little... bland about it. Not to say that it was a boring game, because it really isn't by any means, but it always felt like something was missing. I'm not quite sure what that something was, but it was a weird and ever-present feeling that made the game a little less enjoyable than it otherwise could have been. Because of that I'll have to give the game a nine out of ten.
But hey, it's still almost perfect.
*Quick apology: forgive if my writing seems a little off or stilted in this review. I woke up with not quite enough hours of sleep and I don't believe my melatonin pills had worn off when I awoke. As a result I started falling asleep about halfway through the post. Kind of hampering some of the other things I wanted to talk about today. Oh well. I hope it's not actually as bad as I think that it is, because there have been times where I've written while tired only to find out that it was totally coherent the whole time. With the exception of a few extra letters here and there. In any case, I apologize if this post was not quite up to par with my usual.
Nathan DiYorio
Cloud9 IDE Builds Online Development Environment with Red Hat OpenShift
Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced that Cloud9 IDE has built its online development environment with Red Hat’s OpenShift Online hosted Platform-as-a-Service (PaaS) solution. By integrating OpenShift Online into its original online development environment, Cloud9 IDE is able to deliver more flexibility, security and ease of use to developers.
Cloud9 IDE is an online development environment for Javascript and Node.js applications as well as HTML, CSS, PHP, Java, Ruby and 23 other languages. It is designed for developers looking for a modern and secure IDE that strives to work efficiently by having their code accessible online from anywhere via Cloud9 IDE workspaces. Launched in March 2011, Cloud9 IDE quickly took steps to expand the functionality it initially offered developers with its online environment, hoping to offer a user workflow with the ability to develop and deploy runtime code from the same online platform on which it had been developed. Cloud9 IDE hoped to give developers a more robust offering, combining editing tools and the runtime environment into a single platform, all within the cloud.
Looking to collaborate with another leading vendor to enhance its platform, Cloud9 IDE selected Red Hat for its OpenShift Online offering. One of the many reasons it selected OpenShift Online was for the inherent security provided through the PaaS solution’s inclusion of Red Hat Enterprise Linux. Today, by leveraging OpenShift Online containers, Cloud9 IDE workspaces provide developers with a single online place to store, run and configure their files, all while offering a full terminal and full shell access into the container. While inside a workspace, a user can access Cloud9 IDE and complete the same functions that can be done on a desktop or laptop machine. In addition to security, the switch to OpenShift Online provides a cost-efficiency benefit as it offers a multi-tenant model where multiple workspaces run on the same virtual machine.
The combination of Cloud9 IDE and OpenShift Online also allows users to run executables, including binary models, Node.js support, Ruby, PHP and C++ applications. This flexibility has allowed many to benefit from using Cloud9 IDE in their infrastructure, including startups, independent software vendors and freelance developers. Working with a variety of users has allowed Cloud9 IDE to explore the next phase of offerings within a private cloud.
Supporting Quotes
Ruben Daniels, CEO and founder, Cloud9 IDE
“Once we really understood what OpenShift was capable of from a technical point of view, we realized we could integrate it to support a better workflow for our users to easily build apps. The combination of Red Hat’s leadership in PaaS and our capabilities at Cloud9 IDE offer developers an easy, flexible and efficient way to build and deploy applications. Our platform wouldn’t be where it is today without OpenShift Online.”
Ashesh Badani, general manager, Cloud Business Unit and OpenShift, Red Hat
“We are excited to collaborate with Cloud9 IDE so more users and developers can experience the full range of programming languages and runtime ecosystems OpenShift has to offer, along with the inherent flexibility and ease of use of the platform. We look forward to continuing to enable more developers to easily utilize PaaS platforms like OpenShift Online for their applications.”
For more information
Read the full Cloud9 and OpenShift case study
Visit Cloud9 IDE’s website
Learn more about and download Red Hat OpenShift Online
About Red Hat, Inc.
Red Hat is the world's leading provider of open source software solutions, using a community-powered approach to reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As the connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT. Learn more at http://www.redhat.com.
Certain statements contained in this press release may constitute "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to delays or reductions in information technology spending; the effects of industry consolidation; the ability of the Company to compete effectively; uncertainty and adverse results in litigation and related settlements; the integration of acquisitions and the ability to market successfully acquired technologies and products; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; the ability to deliver and stimulate demand for new products and technological innovations on a timely basis; risks related to data and information security vulnerabilities; ineffective management of, and control over, the Company's growth and international operations; fluctuations in exchange rates; and changes in and a dependence on key personnel, as well as other factors contained in our most recent Quarterly Report on Form 10-Q (copies of which may be accessed through the Securities and Exchange Commission's website at http://www.sec.gov), including those found therein under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations". In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic and political conditions, governmental and public policy changes and the impact of natural disasters such as earthquakes and floods. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of this press release.
Red Hat, the Shadowman logo and JBoss are registered trademarks of Red Hat, Inc. in the U.S. and other countries. Linux is a registered trademark of Linus Torvalds.
Helvetica®
The Helvetica® typeface is one of the most famous and popular in the world. It’s been used for every typographic project imaginable, not just because it is on virtually every computer. Helvetica is ubiquitous because it works so well. The design embodies the concept that a typeface should absolutely support the reading process – that clear communication is the primary goal of typography.
Helvetica History
Helvetica didn't start out with that name. The story of Helvetica began in the fall of 1956 in the small Swiss town of Münchenstein. This is where Eduard Hoffmann, managing director of the Haas Type Foundry, commissioned Max Miedinger to draw a typeface that would unseat a popular family offered by one of his company's competitors.
Miedinger, who was an artist and graphic designer before training as a typesetter, came up with a design based on Hoffmann's instructions, and by the summer of 1957, produced a new sans serif typeface which was given the name "Neue Haas Grotesk." Simply translated, this meant "New Haas Sans Serif."
The Stempel type foundry, the parent company of Haas, decided to offer the design to its customers in Germany, where Stempel was based. The company, however, felt it would be too difficult to market a new face under another foundry’s name and looked for one that would embody the spirit and heritage of the face. The two companies settled on “Helvetica,” which was a close approximation of “Helvetia,” the Latin name for Switzerland. (“Helvetia” was not chosen because a Swiss sewing machine company and an insurance firm had already taken the name.)
Over the years, the Helvetica family was expanded to encompass an extensive selection of weights and proportions and has been adapted for every typesetting technology.
Helvetica usage
Helvetica is among the most widely used sans serif typefaces and has been a popular choice for corporate logos, including those for 3M, American Airlines, American Apparel, BMW, Jeep, JCPenney, Lufthansa, Microsoft, Mitsubishi Electric, Orange, Target, Toyota, Panasonic, Motorola, Kawasaki and Verizon Wireless. Apple has incorporated Helvetica in the iOS® platform and the iPod® device. Helvetica is widely used by the U.S. government, most notably on federal income tax forms, and NASA selected the type for the space shuttle orbiters.
An introduction to the World Wide Web and what it could mean to state and local government information gathering and distribution
by Mike Nevins
Sept 95. Vendors: Wired; Time; Fortune; Business Week; Prodigy; America Online; CompuServe; Microsoft; Silicon Graphics; State Technologies Inc. Jurisdictions: California Resources Agency; Kentucky; Hampton Roads County, Va. By Michael Nevins, State Technologies Inc.

A media blitz has taken the Internet from the cover of Wired magazine to the cover of Time, Fortune and Business Week. Everyone wants to get on the Internet whether they know why or not. It is no longer the deep, dark chasm into which only scientists, academics and computer programmers dare voyage.

Thanks to the World Wide Web (WWW) the Internet is now quite easy to access and use. A number of "point and click" web browsers provide an interface roughly similar to Windows or a Macintosh. Users can search documents linked by Hypertext Markup Language (HTML). HTML also allows pictures, movies and sound files to be embedded into documents to give the WWW the look and feel of online magazines or brochures. Online service providers like Prodigy, America Online, CompuServe and others have added millions of Internet users by embedding web browsers in their software. Millions of Windows 95 buyers will also have embedded Internet access.

But while web browsers opened a window into the world of the Internet and attracted millions of new users, the jury is still out over whether Marc Andreessen - the person most often credited with creating the original web browser Mosaic - has produced the greatest time-saving device for research or the biggest desktop distraction since solitaire. One word of caution - it can be incredibly addictive.

Content Means Everything

Corporations are falling all over themselves to establish web sites to showcase products and service offerings. They are developing online catalogs of information for potential buyers. The web is an ideal medium to distribute the information normally contained in glossy brochures.

There is arguably no industry more information-intensive than government, and the public is demanding access to data as never before. The web is becoming the communications medium of choice to publish the myriad of data collected by government agencies.

Kris Hagerman, a member of the WebForce team at Silicon Graphics, said at a recent conference on the Internet, "With all due respect to anyone with a real estate background, location means nothing, content means everything." While there are some location considerations, like having your web site listed by the variety of search indexes like Yahoo (http://www.yahoo.com) or the WWW Virtual Library (http://www.w3.org), what he said is generally accurate. Web sites are like magazines. The cover - or home page - must be appealing enough to generate interest, but if the content does not grab and keep that interest, the user is not likely to return to that site or subscribe.

CALIFORNIA RESOURCES AGENCY

One agency that has created a web site worth visiting and returning to is the California Resources Agency (http://resources.agency.ca.gov). They have developed an information system that facilitates access to the variety of electronic data describing California's natural resources. CERES, as it is called, uses technology to coordinate data and promote public involvement in decision-making.
It is not a new computer system but a means of integrating and distributing existing information pertaining to ecosystems, resources and management.

"CERES helps to meet the great public demand for information about California's rich and diverse natural heritage, faster and more efficiently than has been possible before," said Secretary for Resources Douglas P. Wheeler. "CERES assists land owners, resource managers and planners in progressing from traditional conservation strategies to a broader ecosystem approach to conservation."

If content is everything, these people have assembled a ton of it. It is particularly well-organized. A considerable frustration found in many new web sites is dead ends or links to no data. The information

Mike Nevins
System Shock
??/??/1994
System Shock review
Lewis Denby says: "It's true that the interface is clumsy, with far too much dragging and dropping going on. But away from the peripherals, here remains a game of survival horror resource management, careful RPG stat-planning, and basic but tactical first-person action. It weaves these threads together into something so wholly representative of developer Looking Glass' style that, even above Ultima Underworld and Thief, you'd point to System Shock as the prime example of what this wonderful studio created."
Not the most exciting thing image - Enhanced 4X Mod for Sins of a Solar Empire: Rebellion
Enhanced 4X Mod
Sins of a Solar Empire is often described as a 4XRTS game, or a game that tried to merge the action- and tactics-packed gameplay of a real-time strategy game with the deep, complex, empire-wide strategy of a 4X turn-based game. It's up for debate whether Sins succeeded in this goal, but I think all can agree that of the 4Xs of "explore, expand, exploit, and exterminate", Sins is much more focused on the exterminate than on the others. This mod seeks to give more depth to the other 3Xs of the game by adding new game elements or refining existing ones to reward players who give more strategic thought to the non-combat side of the game. Combat will still be the centerpiece of the game, but players will find the non-combat options available much more rewarding. Spying, exploring, sabotage, diplomacy, culture, and economic development have all been added or changed in innovative new ways. Regardless of what you think Sins is, this is the 4XRTS it should have been.
Not the most exciting thing
Shockmatter Aug 6 2012, 8:14pm says:
Looks pretty sick if I do say so myself.
BillChuffington Sep 29 2012, 8:17pm says:
DAT GLOSS
But here's a look at one of the new loading screens that will be released in the next version. This image was made by SivCorp, with other works by Axxo2 and from the Rebellion collector's edition.
Original images from (http://forums.sinsofasolarempire.com/423241/)
GoaFan77
OpenStreetMap: Gathering Data using GPS
OpenStreetMap — Save 50%
Be your own cartographer ($23.99, now $12.00)
by Jonathan Bennett | September 2010
OpenStreetMap is a diverse project with hundreds of thousands of people contributing data and making use of it in different ways. As a result, many of the resources that mappers have created and use are scattered around the Internet, but the project data and much of the documentation is hosted at openstreetmap.org, on servers operated by the OpenStreetMap Foundation.
As a crowdsourced project, OpenStreetMap is heavily reliant on having an active community participate in the project, and there are probably as many tools and websites aimed at allowing mappers to communicate and collaborate as there are for mapping and using the data. Mappers have created many different ways of sharing information, based on personal preference and the kind of information involved.
In this article by Jonathan Bennett, author of the book OpenStreetMap, we'll look at the tools and techniques used by the OpenStreetMap community to gather data using GPS, and upload it to the website, including:
What the Global Positioning System is, and how it works
How to set up your GPS receiver for surveying
How to get the best signal, and more accurate positioning
How to tell a good GPS trace from a bad one
Ways of ensuring your survey is comprehensive
Other ways of recording information while surveying
We'll also look at a couple of ways of gathering information without needing a GPS receiver.
Be your own cartographer
Collect data for the area you want to map with this OpenStreetMap book and eBook
Create your own custom maps to print or use online following our proven tutorials
Collaborate with other OpenStreetMap contributors to improve the map data
Learn how OpenStreetMap works and why it's different to other sources of geographical information with this professional guide
Read more about this book
(For more resources on OpenStreetMap, see here.)
OpenStreetMap is made possible by two technological advances: Relatively affordable, accurate GPS receivers, and broadband Internet access. Without either of these, the job of building an accurate map from scratch using crowdsourcing would be so difficult that it almost certainly wouldn't work.
Much of OpenStreetMap's data is based on traces gathered by volunteer mappers, either while they're going about their daily lives, or on special mapping journeys. This is the best way to collect the source data for a freely redistributable map, as each contributor is able to give their permission for their data to be used in this way.
The traces gathered by mappers are used to show where features are, but they're not usually turned directly into a map. Instead, they're used as a backdrop in an editing program, and the map data is drawn by hand on top of the traces. This means you don't have to worry about getting a perfect trace every time you go mapping, or about sticking exactly to paths or roads. Errors are canceled out over time by multiple traces of the same features.
OpenStreetMap uses sources of data other than mappers' GPS traces, but they each have their own problems: Out-of-copyright maps are out-of-date, and may be less accurate than modern surveying methods. Aerial imagery needs processing before you can trace it, and it doesn't tell you details such as street names. Eventually, someone has to visit locations in person to verify what exists in a particular place, what it's called, and other details that you can't discern from an aerial photograph.
If you already own a GPS and are comfortable using it to record traces, you can skip the first section of this article and go straight to Techniques. If you want very detailed information about surveying using GPS, you can read the American Society of Civil Engineers book on the subject, part of which is available on Google Books at http://bit.ly/gpssurveying. Some of the details are out-of-date, but the general principles still hold.
If you are already familiar with the general surveying techniques, and are comfortable producing information in GPX format, you can skip most of this article and head straight for the section Adding your traces to OpenStreetMap.
What is GPS?
GPS stands for Global Positioning System, and in most cases this refers to a system run by the US Department of Defense, properly called NAVSTAR. The generic term for such a system is a Global Navigation Satellite System (GNSS), of which NAVSTAR is currently the only fully operational system. Other equivalent systems are in development by the European Union (Galileo), Russian Federation (GLONASS), and the People's Republic of China (Compass). OpenStreetMap isn't tied to any one GNSS system, and will be able to make use of the others as they become available. The principles of operation of all these systems are essentially the same, so we'll describe how NAVSTAR works at present.
NAVSTAR consists of three elements: the space segment, the control segment, and the user segment.
The space segment is the constellation of satellites orbiting the Earth. The design of NAVSTAR is for 24 satellites, of which 21 are active and three are on standby. However, there are currently 31 satellites in use, as replacements have been launched without taking old satellites out of commission. Each satellite has a highly accurate atomic clock on board, and all clocks in all satellites are kept synchronized. Each satellite transmits a signal containing the time and its own position in the sky.
The control segment is a number of ground stations, including a master control station in Colorado Springs. These stations monitor the signal from the satellites and transmit any necessary corrections back to them. The corrections are necessary because the satellites themselves can stray from their predicted paths.
The user segment is your GPS receiver. This receives signals from multiple satellites, and uses the information they contain to calculate your position. Your receiver doesn't transmit any information, and the satellites don't know where you are. The receiver has its own clock, which needs to be synchronized with those in the space segment to perform its calculations. This isn't the case when you first turn it on, and is one of the reasons why it can take time to get a fix.
Your GPS receiver calculates your position by receiving messages from a number of satellites, and comparing the time included in each message to its own clock. This allows it to calculate your approximate distance from each satellite, and from that, your position on the Earth. If it uses three satellites, it can calculate your position in two dimensions, giving you your latitude (lat) and longitude (long). With signals from four satellites, it can give you a 3D fix, adding altitude to lat and long. The more satellites your receiver can "see", the more accurate the calculated position will be. Some receivers are able to use signals from up to 12 satellites at once, assuming the view of the satellites isn't blocked by buildings, trees, or people. You're obviously very unlikely to get a GPS fix indoors.
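If you want a feel for the arithmetic involved, the following rough Python sketch shows the basic idea: each "pseudo-range" is just the signal's travel time multiplied by the speed of light, and the receiver then looks for the point whose distances to the satellites best match those ranges. Real receivers do this far more carefully (the receiver's clock error is solved for as a fourth unknown, and satellite positions come from the broadcast data), so treat this as an illustration only; the numbers here are made up.

C = 299_792_458.0  # speed of light in metres per second

def pseudo_range(time_sent, time_received):
    """Approximate distance to one satellite from the signal's travel time."""
    return (time_received - time_sent) * C

# Made-up example: a signal that took 0.07 seconds to arrive
print(pseudo_range(0.00, 0.07))  # roughly 21,000 km, i.e. satellite-orbit scale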
Many GPS receivers can calculate the amount of error in your position due to the configuration of satellites you're using. Called the Dilution of Precision (DOP), the number produced gives you an idea of how good a fix you have given the satellites you can get a signal from, and where they are in the sky. The higher the DOP, the less accurate your calculated position is. The precision of a GPS fix improves with the distance between the satellites you're using. If they're close together, such as mostly directly overhead, the DOP will be high. Use signals from satellites spread evenly across the sky, and your position will be more accurate. Which satellites your receiver uses isn't something you can control, but more modern GPS chipsets will automatically try to use the best configuration of satellites available, rather than just those with the strongest signals. DOP only takes into account errors caused by satellite geometry, not other sources of error, so a low DOP isn't a guarantee of absolute accuracy.
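A common rule of thumb, not something stated in this article so take it as an approximation, is that the expected horizontal error is roughly the HDOP multiplied by the receiver's typical per-satellite range error. The 5-metre figure below is an assumed value for a consumer receiver, not a measured one.

def estimated_horizontal_error(hdop, range_error_m=5.0):
    """Rough horizontal error estimate; range_error_m is an assumed
    typical per-satellite range error for a consumer receiver."""
    return hdop * range_error_m

print(estimated_horizontal_error(1.2))  # about 6 m: good satellite geometry
print(estimated_horizontal_error(4.0))  # about 20 m: poor geometry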
The system includes the capability to introduce intentional errors into the signal, so that only limited accuracy positioning is available to non-military users. This capability, called Selective Availability (SA), was in use until 2000, when President Clinton ordered it to be disabled. Future NAVSTAR satellites will not have SA capabilities, so the disablement is effectively permanent. The error introduced by SA reduced the horizontal accuracy of a civilian receiver, typically to 10m, but the error could be as high as 100m. Had SA still been in place, it's unlikely that OpenStreetMap would have been as successful.
NAVSTAR uses a coordinate system known as WGS84, which defines a spheroid representing the Earth, and a fixed line of longitude or datum from which other longitudes are measured. This datum is very close to, but not exactly the same as the Prime Meridian at Greenwich in South East London. The equator of the spheroid is used as the datum for latitude. Other coordinate systems exist, and you should note that no printed maps use WGS84, but instead use a slightly different system that makes maps of a given area easier to use. Examples of other coordinate systems include the OSGB36 system used by British national grid references. When you create a map from raw geographic data, the latitudes and longitudes are converted to the x and y coordinates of a flat plane using an algorithm called a projection. You've probably heard of the Mercator projection, but there are many others, each of which is suitable for different areas and purposes.
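As a concrete illustration of what a projection does, here is a minimal sketch of the spherical Mercator projection (the one behind most web "slippy" maps, including the one on openstreetmap.org), which turns a WGS84 latitude and longitude into flat x/y coordinates in metres. It's only a sketch; real mapping libraries handle the edge cases and the many other projections for you.

import math

EARTH_RADIUS = 6378137.0  # metres, the sphere used by web maps

def mercator(lat_deg, lon_deg):
    """Project a WGS84 lat/long pair onto flat x/y coordinates in metres."""
    x = math.radians(lon_deg) * EARTH_RADIUS
    y = math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2)) * EARTH_RADIUS
    return x, y

# The Royal Observatory at Greenwich, roughly
print(mercator(51.4769, 0.0))  # x is 0 because Greenwich sits on the datum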
What's a GPS trace?
A GPS trace or tracklog is simply a record of position over time. It shows where you traveled while you were recording the trace. This information is gathered using a GPS receiver that calculates your position and stores it every so many seconds, depending on how you have configured your receiver.
If you record a trace while you're walking along a path, what you get is a trace that shows you where that path is in the world. Plot these points on a graph, and you have the start of a map. Walk along any adjoining paths and plot these on the same graph, and you have something you can use to navigate. If many people generate overlapping traces, eventually you have a fully mapped area. This is the general principle of crowdsourcing geographic data. You can see the result of many combined traces in the following image. This is the junction of the M4 and M25 motorways, to the west of London. The motorways themselves and the slip roads joining them are clearly visible.
Traces are used in OpenStreetMap to show where geographical features are, but usually only as a source for drawing over, not directly. They're also regarded as evidence that a mapper has actually visited the area in question, and not just copied the details from another copyrighted map. Most raw GPS traces aren't suitable to be made directly into maps, because they contain too many points for a given feature, will drift relative to a feature's true position, and you'll also take an occasional detour.
Although consumer-grade GPS receivers are less accurate than those used by professional surveyors, if enough traces of the same road or path are gathered, the average of these traces will be very close to the feature's true position. OpenStreetMap allows mappers to make corrections to the data over time as more accurate information becomes available.
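You can see this averaging effect with a few lines of code. The sketch below simply takes the mean of several independent fixes of the same point feature (for example, a postbox you've marked as a waypoint on different days); the random errors tend to cancel, so the mean lands closer to the true position than most individual fixes do. The coordinates are invented for the example.

def mean_position(fixes):
    """Average a list of (lat, lon) fixes of the same feature."""
    lats = [lat for lat, lon in fixes]
    lons = [lon for lat, lon in fixes]
    return sum(lats) / len(lats), sum(lons) / len(lons)

# Five noisy fixes of the same (made-up) postbox
fixes = [
    (51.50142, -0.14201),
    (51.50138, -0.14188),
    (51.50145, -0.14195),
    (51.50139, -0.14210),
    (51.50143, -0.14192),
]
print(mean_position(fixes))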
In addition to your movements, most GPS receivers allow you to record specific named points, often called waypoints. These are useful for recording the location of point features, such as post boxes, bus stops, and other amenities. We'll cover ways of using waypoints later in the article.
What equipment do I need?
To collect traces suitable for use in OpenStreetMap, you'll need some kind of GPS receiver that's capable of recording a log of locations over time, known as a track log, trace, or breadcrumb trail. This could be a hand-held GPS receiver, a bicycle-mounted unit, a combination of a GPS receiver and a smartphone, or in some cases a vehicle satellite navigation system. There are also some dedicated GPS logger units, which don't provide any navigation function, but merely record a track log for later processing. You'll also need some way of getting the recorded traces off your receiver and onto your PC. This could be a USB or serial cable, a removable memory card, or possibly a Bluetooth connection. There are reviews of GPS units by mappers in the OpenStreetMap wiki.
There are also GPS receivers designed specifically for surveying, which have very sensitive antennas and link directly into geographic information systems (GIS). These tend to be very expensive and less portable than consumer-grade receivers. However, they're capable of producing positioning information accurate to a few centimeters rather than meters.
You also need a computer connected to the Internet. A broadband connection is best, as once you start submitting data to OpenStreetMap, you will probably end up downloading lots of map tiles. It is possible to gather traces and create mapping data while disconnected from the Internet, but you will need to upload your data and see the results at some point. OpenStreetMap data itself is usually represented in Extensible Markup Language (XML) format, and can be compressed into small files. The computer itself can be almost any kind, as long as it has a web browser, and can run one of the editors, which Windows, Mac OS X, and Linux all can.
You'll probably need some other kit while mapping to record additional information about the features you're mapping. Along with recording the position of each feature you map, you'll need to note things such as street names, route numbers, types of shops, and any other information you think is relevant. While this information won't be included in the traces you upload on openstreetmap.org, you'll need it later on when you're editing the map. Remember that you can't look up any details you miss on another map without breaking copyright, so it's important to gather all the information you need to describe a feature yourself.
A paper notebook and pencil is the most obvious way of recording the extra information. They are inexpensive and simple to use, and have no batteries to run out. However, it's difficult to use on a bike, and impossible if you're driving, so using this approach can slow down mapping.
A voice recorder is more expensive, but easier to use while still moving. Record a waypoint on your GPS receiver, and then describe what that waypoint represents in a voice recording. If you have a digital voice recorder, you can download the notes onto your PC to make them easier to use, and JOSM—the Java desktop editing application—has a support for audio mapping built-in.
A digital camera is useful for capturing street names and other details, such as the layout of junctions. Some recent cameras have their own built-in GPS, and others can support an external receiver, and will add the latitude, longitude, and possibly altitude, often known as geotags, to your pictures automatically. For those that don't, you can still use the timestamp on the photo to match it to a location in your GPS traces. We'll cover this later in the article.
Some mappers have experimented with video recordings while mapping, but the results haven't been encouraging so far. Some of the problems with video mapping are:
It's difficult to read street signs on zoomed-out video images, and zooming in on signs is impractical.
If you're recording while driving or riding a bike, the camera can only point in one direction at once, while the details you want to record may be in a different direction.
It's difficult to index recordings when using consumer video cameras, so you need to play the recording back in real time to extract the information, a slow process.
Automatic processing of video recordings taken with multiple cameras would make the process easier, but this is currently beyond what volunteer mappers are able to afford.
Smartphones can combine several of these functions, and some include their own GPS receiver. For those that don't, or where the internal GPS isn't very good, you can use an external Bluetooth GPS module. Several applications have been developed that make the process of gathering traces and other information on a smartphone easier. Look on the Smartphones page on the OpenStreetMap wiki at http://wiki.openstreetmap.org/wiki/Smartphones.
Making your first trace
Before you set off on a long surveying trip, you should familiarize yourself with the methods involved in gathering data for OpenStreetMap. This includes the basic operation of your GPS receiver, and the accompanying note-taking.
Configuring your GPS receiver
The first thing to make sure of is that your GPS is using the WGS84 coordinate system. Many receivers also include a local coordinate system in their settings to make them easier to use with printed maps, so check which system your receiver is set to report. OpenStreetMap only uses WGS84, so if you record your traces in the wrong system, you could end up placing features tens or even hundreds of meters away from their true location.
Next, you should set the recording frequency as high as it will go. You need your GPS to record as much detail as possible, so setting it to record your location as often as possible will make your traces better. Some receivers can record a point once per second; if yours doesn't, it's not a problem, but use the highest setting (shortest interval) possible. Some receivers also have a "smart" mode that only records points where you've changed direction significantly, which is fine for navigation, but not for turning into a map. If your GPS has this, you'll need to disable it. One further setting on some GPSs is to only record a point every so many metres, irrespective of how much time has elapsed. Turning this on can be useful if you're on foot and taking it easy, but otherwise keep it turned off.
Another setting to check, particularly if you're using a vehicle satellite navigation system, is "snap to streets" or a similar name. When your receiver has this setting on, your position will always be shown as being on a street or a path in its database, even if your true position is some way off. This causes two problems for OpenStreetMap: if you travel down a road that isn't in your receiver's database, its position won't be recorded, and the data you do collect is effectively derived from the database, which not only breaks copyright, but also reproduces any errors in that database.
Next, you need to know how to start and stop recording. Some receivers can record constantly while they're turned on, but many will need you to start and stop the process. Smartphone-based recorder software will definitely require starting and stopping. If you're using a smartphone with an external Bluetooth GPS module, you may also need to pair the devices and configure the receiver in your software.
Once you're happy with your settings, you can have a trial run. Make a journey you have to make anyway, or take a short trip to the shops and back (or some other reasonably close landmark if you don't live near shops). It's important that you're familiar with your test area, as you'll use your local knowledge to see how accurate your results are.
Checking the quality of your traces
When you return, get the trace you've recorded off your receiver, and take a look at it on your PC using an OpenStreetMap editor or by uploading the trace. Now, look at the quality of the trace. Some things to look out for are, as follows:
Are lines you'd expect to be straight actually straight, or do they have curves or deviations in them? A good trace reflects the shape of the area you surveyed, even if the positioning isn't 100% accurate.
If you went a particular way twice during your trip, how well do the two parts of the trace correspond? Ideally, they should be parallel and within a few meters of each other (the short script after this list shows one way to check this).
When you change direction, does the trace reflect that change straight away, or does your recorded path continue in the same direction and gradually turn to your new heading?
If you've recorded any waypoints, how close are they to the trace? They should ideally be directly on top of the trace, but certainly no more than a few meters away.
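The second check, how far apart two passes along the same road ended up, is easy to do programmatically once you know the haversine formula for the distance between two latitude/longitude points. The following sketch isn't from the book; it's just one way to measure the gap between each point on your first pass and the nearest point on your second pass. The coordinates are placeholders.

import math

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def gap_to_other_pass(point, other_pass):
    """Distance from one trackpoint to the closest point of the other pass."""
    return min(haversine_m(point, q) for q in other_pass)

first_pass = [(51.5010, -0.1420), (51.5012, -0.1418)]   # placeholder points
second_pass = [(51.5010, -0.1421), (51.5013, -0.1419)]
print(max(gap_to_other_pass(p, second_pass) for p in first_pass))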
The previous image shows a low-quality GPS trace. If you look at the raw trace on the left, you can see a few straight lines and differences in traces of the same area. The right-hand side shows the trace with the actual map data for the area, showing how they differ.
In this image, we see a high-quality GPS trace. This trace was taken by walking along each side of the road where possible. Note that the traces are straight and parallel, reflecting the road layout. The quality of the traces makes correctly turning them into data much easier.
If you notice these problems in your test trace, you may need to alter where you keep your GPS while you're mapping. Sometimes, inaccuracy is a result of the make-up of the area you're trying to map, and nothing will change that, short of using a more sensitive GPS. For the situations where that's not the case, the following are some tips on improving accuracy.
Making your traces more accurate
You can dramatically improve the accuracy of your traces by putting your GPS where it can get a good signal. Remember that it needs to have a good signal all the time, so even if you seem to get a good signal while you're looking at your receiver, it could drop in strength when you put it away.
If you're walking, the best position is in the top pocket of a rucksack, or attached to the shoulder strap. Having your GPS in a pocket on your lower body will seriously reduce the accuracy of your traces, as your body will block at least half of the sky.
If you're cycling, a handlebar mount for your GPS will give it a good view of the sky, while still making it easy to add waypoints. A rucksack is another option.
In a vehicle, it's more difficult to place your GPS where it will be able to see most of the sky. External roof-mounted GPS antennas are available, but they're not cheap and involve drilling a hole in the roof of your car. The best location is as far forward on your dashboard as possible, but be aware some modern car windscreens contain metal, and may block GPS signals. In this case, you may be able to use the rear parcel shelf, or a side window providing you can secure your GPS.
Don't start moving until you have a good fix. Although most GPS receivers can get a fix while you're moving, it will take longer and may be less accurate. More recent receivers have a "warm start" feature where they can get a fix much faster by caching positioning data from satellites.
You also need to avoid bias in your traces. This can occur when you tend to use one side of a road more than the other, either because of the route you normally take, or because there is only a pavement on one side of the road. The result of this is that the traces you collect will be off-center of the road's true position by a few meters. This won't matter at first, and will be less of a problem in less densely-featured areas, but in high-density residential areas, this could end up distorting the map slightly.
Surveying techniques
You can gather traces while going about your normal business, or you can make trips specifically to do surveying for OpenStreetMap. The amount of detail you'll be able to capture on a normal journey will be far lower than during a survey, but there are still some techniques you can use to record as much detail about your surroundings as possible.
The first technique to consider is your mode of transport while mapping. For some types of features, there is only one choice: For a motorway you need to use a vehicle, and for narrow footpaths you'll need to walk.
For everything in between, you need to use some judgment. Many existing mappers have found that for suburban and residential areas, a bicycle is the most efficient way of mapping. It's faster than walking, and cheaper than a car. A bike is also easier to turn around when you reach a dead end, and you can dismount and walk along paths where cycling isn't allowed.
Making your survey comprehensive
To make sure you map an area completely, you need to be methodical about how you travel around the area. One simple rule that works well in suburban and residential areas is to "always turn left". This is a standard technique used by robots to find the layout of a maze (and thus escape from it), and it works just as well for mapping.
What "always turn left" means is that if you come across a left-hand turn you haven't previously been down, then take it. Unless the streets you're mapping have a grid pattern, you'll eventually come to a dead end. When you do, turn around and head back down the street, and start turning left again. This method isn't perfect, particularly when there are loops in the road network, so take notes of places where you pass turnings on the opposite side of the road and make sure you visit them later in your survey.
If you're mapping on a bike or in a car in a country where traffic drives on the right, then "always turn right" works better, but the choice is yours. For streets in a grid layout, a simple back-and-forth sweep of the grid should be fine.
When mapping trunk roads or motorways with grade-separated junctions, remember to map the on-and off-ramps whenever you can. While the roads themselves get mapped quite thoroughly by people on long-distance journeys, individual junctions can get missed out even if the roads they join to are mapped. If you make a regular journey along a road like this, why not try to map one junction per day, which shouldn't take much of your time, but will still increase the map coverage quite quickly.
What level of detail you map streets at is up to you, but some typical features you could gather include bridges, changes of speed limit, route reference numbers, and street names.
Along with the streets, there will be point features and areas to map. Point features include street furniture, such as postboxes, bus stops, and public telephones; while areas can be things, such as car parks, playgrounds, and pedestrianized areas. You can record the relative locations of features that you find in a notebook, like in the previous diagram. Your diagram doesn't need to be neat and precise, as your GPS receiver is recording the location of each feature you find. All your notes have to do is record the information, such as names, that don't automatically get stored in your GPS traces.
You can mark the location of point features using your GPS's waypoint feature. Simply stop right by the object you want to mark and press the waypoint button, or choose the option from the menu. You can either use the automatic name given to the waypoint and add extra notes to your notebook, or use a voice recorder. Alternatively, rename the point with a more descriptive name.
For areas, you have a choice of techniques. You can either walk around the perimeter of the area, and the trace will show a loop giving the feature's location. Alternatively, you can just mark the corners of the area as you pass them using waypoints, and note which waypoints are linked to make the area. The latter method is useful when the area is large and you'll be passing by its corners anyway as part of your survey. As with all other OpenStreetMap data, your first pass at mapping an area doesn't have to be perfect, so don't be afraid of drawing an approximation based on three corners of an area.
Photo mapping
You can use a digital camera to take pictures of road signs, street names, or junction layouts to speed up your mapping and help increase the accuracy of your mapping.
The first step to take is to synchronize the clock on your digital camera with the time on your GPS receiver. This is because we'll be using the timestamp information on each photo to match it up with its location, based on your GPS trace. Not all digital cameras allow you to set the time to the nearest second, but as long as the difference between the clocks isn't too great, this won't cause a problem. One way of coping with a difference between the clocks is to take a picture of your GPS receiver with the time showing. You can use this to work out the offset between your camera and the GPS clock.
Once you've done this, you can use your camera to record lots of details you'd otherwise have to note by hand, and because you can place photos by their datestamp, you don't even need to record waypoints on your GPS.
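Here's a rough sketch of how that matching works: take the photo's timestamp, apply the camera-to-GPS offset you worked out from the photo of your receiver's clock, and pick the trackpoint whose time is closest. It's a simplified stand-in for what editors such as JOSM can do for you, and the times and coordinates below are invented for illustration.

from datetime import datetime, timedelta

def locate_photo(photo_time, camera_offset, trackpoints):
    """trackpoints is a list of (datetime, lat, lon) tuples from your trace."""
    corrected = photo_time + camera_offset
    return min(trackpoints, key=lambda tp: abs(tp[0] - corrected))

track = [
    (datetime(2010, 9, 4, 10, 15, 0), 51.5010, -0.1420),  # placeholder points
    (datetime(2010, 9, 4, 10, 15, 5), 51.5011, -0.1419),
]
# In this made-up example the camera's clock ran 12 seconds behind the GPS clock
print(locate_photo(datetime(2010, 9, 4, 10, 14, 50), timedelta(seconds=12), track))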
Take pictures of more things than you think you need, and in more detail than you think. For instance, street name signs may also contain a part or all of the street's postal code, or the district it's in; take your picture from too far away, and you won't be able to read it. The previous photo captures two pieces of information: the name of the road, and the speed limit to the left of the sign. We can also see another road to the left, which gives us another clue which way the camera was facing when the picture was taken.
In some built-up areas it can be very difficult to get an accurate GPS fix, so taking photos can give you an idea of where any straight sections are and where the road bends, which may not be obvious from a poor GPS trace.
You can photograph road junctions to record their layout. It can be difficult and time consuming to record a trace of every possible path through a junction, so a series of pictures can help you map the junction accurately in less time.
Once you have your pictures and the accompanying GPS trace, you can load them into an OpenStreetMap editing application and use the information to draw the map.
OpenStreetMap doesn't have any image hosting facilities itself, so if you want to make your pictures available for other mappers to use, you'll need to use a separate photo-hosting site, such as Flickr. There is a sister project called Open StreetPhoto, but this is still under development and at present only provides detailed mapping for the Benelux countries.
Audio mapping
Like photo mapping, audio mapping can speed up the surveying process, but it has an additional advantage of being useful if you're just making a regular journey. The principle is that you describe features into a voice recorder. It's possible to do audio mapping with a tape-based voice recorder, but for best results, a digital recorder that timestamps the files it creates, is used. A smartphone may have a similar function built-in, or you may be able to add some software to it that will allow you to take voice notes. If you do use a tape recorder, remember to record a waypoint in your GPS, then start your voice note with the waypoint name.
As with photo mapping, you'll need to synchronize the clock on your recorder with that on your GPS. Some difference is OK, but if you're mapping high-speed roads, you need to keep it to a minimum; at 60mph or 100kph, you're travelling roughly 27 meters every second. At that rate, a 20 second difference between the clocks would put your mapping out by over 500 meters.
Once you've done that, just use your voice recorder to describe your surroundings. You could record street names by saying, "Turning left into High Street" just before a turn, or use the past tense just after the turn. If a street name has an unusual spelling, or could be spelled in several different ways, it's a good idea to spell out the name in your voice note. It's a good idea to note route names or numbers occasionally as well, even if you've already noted them, as it removes some uncertainty.
Getting your traces into the right format
Once you've completed your mapping, you need to get the information off your receiver and into the right format. The simplest ways of getting traces off your GPS are using a direct cable connection or a removable memory card. Some recent receivers can act as USB mass storage devices, allowing you to get your traces onto your PC with a simple file copy. For older units, you may have to use the software that came with your GPS.
OpenStreetMap only accepts traces in GPS Exchange format (GPX)—an XML vocabulary for traces and waypoints. You can find out more about the GPX vocabulary at http://www.topografix.com/gpx.asp. OpenStreetMap also only accepts GPX files with timestamps on each trackpoint in the trace. This is to prevent mappers from uploading traces that have been converted from an existing map database, which will usually be subject to copyright and will therefore, not be suitable for use in OpenStreetMap. This doesn't present a problem most of the time, as practically all receivers store the time in traces made in real time.
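To make the format less abstract, here's a minimal sketch in Python that writes a tiny GPX file containing a single track of timestamped trackpoints. The structure (gpx, then trk, trkseg, and trkpt elements, each trkpt carrying a time element) is the part that matters; the coordinates are invented, and a file without those time elements would not be accepted for upload.

points = [
    ("2010-09-04T10:15:00Z", 51.5010, -0.1420),  # placeholder trackpoints
    ("2010-09-04T10:15:05Z", 51.5011, -0.1419),
]

with open("example.gpx", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<gpx version="1.1" creator="example" '
            'xmlns="http://www.topografix.com/GPX/1/1">\n')
    f.write('  <trk><trkseg>\n')
    for time, lat, lon in points:
        f.write(f'    <trkpt lat="{lat}" lon="{lon}"><time>{time}</time></trkpt>\n')
    f.write('  </trkseg></trk>\n')
    f.write('</gpx>\n')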
If your receiver doesn't produce GPX files itself, you'll need to translate from its own format into GPX. Fortunately, there's an application that can convert most formats into GPX, called GPSBabel. If you can get GPX files from your receiver, you can skip this section.
GPSBabel is a free software package, and you can download it from http://www.gpsbabel.org/ for Windows, Mac OS X, and Linux. While GPSBabel itself is a command-line application, graphical interfaces are available for Windows, Mac, and Linux. The Windows version is shown in the following screenshot, which shows the command line it creates for the options you select in the interface.
We'll work through an example of converting a file in NMEA format to GPX using GPSBabel on Windows, but the procedure is the same for any file format. GPSBabel can also communicate directly with many models of GPS, and this can speed up the conversion process over downloading and converting separately.
GPSBabel is a powerful package with many options, and you won't need most of them. We're interested in converting tracks and waypoints into the GPX format. For an NMEA file, you'd use the following command line:
gpsbabel -w -t -i nmea -f <input filename> -o gpx -F <output filename>

If you're using the Windows command line, you'll need to use gpsbabel.exe as the program name.
The first two options tell GPSBabel to process waypoints and tracks; the default is to only process waypoints, which is of no use to OpenStreetMap. The third option is the input file format to use. You specify this with the -i flag, a format identifier, and the filename. The list of every identifier GPSBabel understands is available in the online documentation or by using the program's help option:
gpsbabel -h

You set the output format using the -o option in a similar manner, and in our case, this is always gpx. The input filename is specified in the -f option, and the output file in the -F option. The input can also be a device if you want to retrieve traces directly from your GPS. Check the GPSBabel documentation for the precise syntax needed for your device.
As already mentioned, OpenStreetMap only accepts traces with valid timestamps. If you want to conceal the time you made a journey, you can do so through a filter that will change the start time of your trace. As timestamp information is useful to other mappers and for future projects, if you're going to change your timestamps, you're asked to do so to an obviously fake time so that your trace can be filtered out from automatic processing.
Use the following option to adjust timestamps:
gpsbabel -w -t -i nmea -f <input filename> -x track,start=19700101000000 -o gpx -F <output filename>
This will set the start of your trace to January 1, 1970, well before OpenStreetMap was started, so it will be obvious that these are faked timestamps.
If you want to disguise the precise location your traces start or stop at, you can use a filter to discard any points within a given radius of a point. The following command line will filter out points within 500 meters of 10 Downing Street—the British Prime Minister's residence:
gpsbabel -w -t -i nmea -f <input filename> -x radius,exclude,distance=0.5K,lat=51.5034052,lon=-0.1274766 -o gpx -F <output filename>
You can use more complex filters to clean up your traces by taking actions such as discarding any points with a high DOP. You don't need to do this, as the crowdsourcing process eliminates such errors over time, but it can help when first mapping an area.
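If your receiver records a dilution-of-precision value for each point (many write an hdop element into the GPX), you can strip out the worst points yourself. This isn't a step from the book, just a sketch of one way to do it in Python; it assumes a GPX 1.1 file that actually contains hdop elements, and the 5.0 cut-off is an arbitrary choice. GPSBabel's own filters can do something similar, and its documentation covers the options.

import xml.etree.ElementTree as ET

GPX_NS = "http://www.topografix.com/GPX/1/1"  # assumes a GPX 1.1 file
NS = {"gpx": GPX_NS}

def drop_high_hdop(in_file, out_file, max_hdop=5.0):
    """Remove trackpoints whose hdop exceeds max_hdop (an arbitrary cut-off)."""
    tree = ET.parse(in_file)
    for seg in tree.getroot().iter("{" + GPX_NS + "}trkseg"):
        for pt in list(seg):
            hdop = pt.find("gpx:hdop", NS)
            if hdop is not None and float(hdop.text) > max_hdop:
                seg.remove(pt)
    tree.write(out_file, xml_declaration=True, encoding="UTF-8")

drop_high_hdop("example.gpx", "cleaned.gpx")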
Adding your traces to OpenStreetMap
Once you have your traces in GPX format, you can upload them to openstreetmap.org. Whether you need to upload your traces depends on which editing method you prefer. If you want to use the online editor, Potlatch, you need to upload your traces. If you use a desktop editor such as JOSM or Merkaartor, you don't need to upload your traces to use them, but it's still useful to OpenStreetMap as a project if you do.
It's hoped that in the future, the traces can be put to other uses, including automatically generating average speeds for roads, detecting one-way streets and the layout of junctions. This automatic processing isn't yet in place, but the more data it has to work with (once it's working), the better.
Some editors and applications support direct upload of traces, but the simplest way of adding your traces to OpenStreetMap is via the website. The upload form is at the top of the list of your traces. To find this, make sure you're logged into the site, and browse to the front page. Click on GPS Traces among the tabs at the top of the map, then click on See just your traces, or upload a trace.
You should now see the upload form, shown in the previous screenshot. The top field is the file to be imported. Either enter the path to your GPX file directly, or use the Browse button to find it on your hard drive. You can only upload one trace at a time using the web interface at present.
You need to add a short description for each file you upload. You can also add tags to each trace, so you can find particular sets of traces at a later date. There are no folders or other ways of organizing your traces, so adding tags at upload is a good idea. Enter a list of comma-separated tags in the form field. You can use as many tags as you find useful.
You can set the privacy level for a trace using the Visibility drop-down. All points in traces uploaded to OpenStreetMap are visible in some way, but you can choose whether other mappers can see the trace as a whole, and whether they can see who uploaded the trace. Some mappers feel that because many of their traces start or end at their home or workplace, other users shouldn't be able to see whose traces are whose, but there have been no reported incidents of a mapper's privacy being invaded through a trace they've uploaded. You can use GPSBabel to remove all points within a given radius of your start and end points, which will have a similar effect.
You have four privacy options:
Private: hides all details of your traces from other users, and only shares the individual points in each trace. Other users can't see which points come from which trace, or who uploaded them. The trace isn't shown in the public list of traces.
Public: is a historical option and isn't useful for newly uploaded traces. It shows the trace in the public list of traces, but still anonymizes the points when an area is downloaded. This was previously the only option other than Private.
Trackable: allows other users to see the timestamps for each point, but not who uploaded them. The traces don't appear in the public list of traces.
Identifiable: allows other users to see the entire trace, including who uploaded it. The trace is shown in the public list of traces, and the original file is available for download for everyone.
Which option you choose is entirely up to you, but your traces will be of more use to the OpenStreetMap project if you use one of the last two options.
After you've completed the upload form, you can press the Upload button to submit the trace. It will then go into a queue to be processed. Processing can be almost instant, or can take several minutes, depending on how many mappers are uploading traces. Once processing is complete, you'll get an e-mail to your registered address telling you whether the import was successful or not. Common reasons for imports to fail include a lack of timestamps, or GPX files that contain only waypoints, not tracks.
Refresh your traces list, and you should now see your newly imported trace, complete with a thumbnail image, and the description and tags you entered.
Clicking on a trace's filename or the more link will take you to a details page for that trace, where you can also edit the description, tags, and visibility of that trace. You can't edit a trace itself in OpenStreetMap, although JOSM does have support for GPX file editing. You won't normally need to make changes to GPS traces, but if you do, you'll need to edit a local copy of the file, delete the version on the site, and upload the new one.
The map link in a trace entry will take you to a map view centered around the trace's starting point. The edit link will take you to Potlatch—the online editor—with that trace loaded, so you can add the features it represents to the map.
Clicking on the uploader's name will take you to his/her user page. Click on any of the tags to see any traces you've tagged with that word. You'll also see a list of your tags in the sidebar on the left of the screen. Note that when you're looking at an individual trace page, the tag links in the page and in the sidebar will take you to a list of all traces on openstreetmap.org with that tag, not just yours.
Collecting information without a GPS
It's still possible to gather data for OpenStreetMap without a GPS receiver. Roads near your area may have been mapped by someone passing through the area, so will be missing any detailed information and points of interest. There are also areas where mappers have traced aerial images or out-of-copyright maps without surveying the area in person, so these areas will need details to be filled in.
You can identify areas that have been traced purely from aerial images using the NoName layer in the slippy map. This highlights any residential roads without a name, so any large blocks where all streets are highlighted are likely to have come from aerial images.
To go mapping without a GPS, you'll need a hard copy of the existing map of an area. You can print directly from the slippy map, but you'll get better results by using the Export facility to produce a PDF map of the area you're going to survey. You can then print this out with far greater control over layout. Note that you can't export an image of the NoName layer itself, but if you have a Garmin GPS receiver, you can download an add-on map version of NoName from Cloudmade (http://downloads.cloudmade.com).
Once you have your printed map, grab a pencil and head out. Mark the map with the locations of any points of interest, or any missing streets and paths. Don't worry about precise positioning, as it's more important to have data in an approximate location than have no data at all. You or another mapper can refine the positioning later. Use the techniques described earlier in the article to ensure you cover the whole of the area you've chosen.
There's a site aimed at making the process of mapping from an existing map more streamlined. Walking Papers (http://walking-papers.org/) is a site aimed at increasing participation in OpenStreetMap beyond dedicated mappers in possession of a GPS receiver. The site allows you to choose an area of the map and print it out, so that you can make notes on it. Walking Papers uses a different style of cartography than the slippy map, so there's more room around features to write and draw.
You then scan the printed map and upload it to the site. A 2D bar code on the map allows Walking Papers to match your annotated map to the area it was taken from, and use it as a background in your editor. You can then add data to the map by drawing over your scan and adding points of interest where you've marked them. Even if you don't have a scanner, you can use Walking Papers' more spacious maps as your template to draw on.
Have you finished?
Once you've surveyed an area, it's time to turn the information you've gathered into a map. However, don't be fooled into thinking that this means the end of surveying.
The most obvious reason for needing to re-survey an area is that there have been changes on the ground, such as road layout changes, buildings being demolished and built, or the opening and closure of shops or other amenities. One of OpenStreetMap's great strengths is the ability to update it and have the new data available to everyone immediately. You'll know of construction work taking place in your local area, so you can be ready to map the new features as soon as they're finished.
You may also stumble across new roads or road layouts while making an unrelated journey; another good reason to keep your GPS receiver recording, even if you're not planning to do any surveying.
Apart from changes in the real world, your skill as a surveyor will grow the more you do it, so revisiting an area will allow you to capture more detail with greater accuracy than you may have done the first time you visited. Remember, you can record any permanent geographical feature in OpenStreetMap, and ultimately it's hoped to include anything that can be mapped.
Surveying for OpenStreetMap isn't as difficult as you might first think, and it certainly doesn't need expensive, complex equipment. We've seen that you can do surveys with one or more of:
An inexpensive consumer-grade GPS receiver, or even no GPS at all
A notebook and a pencil
A digital camera
A voice recorder
We've also learned some basic surveying techniques, including:
Covering an area methodically
Recording as much detail as possible, even if it's not immediately obvious how it will be used
Using a combination of recording devices to speed up and improve the accuracy of mapping
Surveying an area multiple times to capture changes to features, or to increase the overall accuracy of the data.
Getting Started with OpenStreetMap [article]
Checking OpenStreetMap Data for Problems [article]
About the Author: Jonathan Bennett
Jonathan Bennett is a British journalist, writer, and developer. He has been involved in the OpenStreetMap project since 2006, and is a member of the OpenStreetMap Foundation. He has written for print and online technical publications including PC Magazine, ZDNet, CNET, and has appeared on television and radio as a technology commentator. He has an extensive collection of out-of-date printed maps.
Neverwinter Launches In Brazil
Perfect World Entertainment and UOL BoaCompra Partnership Brings Acclaimed Free-to-Play MMORPG to Brazil
REDWOOD CITY, Calif., Feb. 18, 2014 /PRNewswire/ -- Perfect World Entertainment, Inc., a leading publisher and operator of free-to-play massively multiplayer games, today announced a partnership with Latin American digital goods payment giant, UOL BoaCompra, to monetize and market the widely-acclaimed free-to-play action MMORPG Neverwinter in Brazil. Neverwinter is now fully localized and available to play in Portuguese.
The partnership grants UOL BoaCompra exclusive monetization rights for Neverwinter in Brazil and will ensure that local customer support, payment gateways, and marketing through Brazil's largest content portal, UOL, will be provided. The action MMORPG set in the classic Dungeons & Dragons Forgotten Realms universe from Wizards of the Coast allows players to play the game for free, with optional microtransactions to tailor the game in a way that suits them best. However, more than 70 percent of Brazilians don't have access to an international credit card to make purchases online with non-Brazilian companies. With this partnership, UOL BoaCompra's payment gateways will allow Neverwinter players in Brazil to be able to make purchases in-game using their preferred payment methods.
"Brazil is a key and emerging market in the MMORPG space," said Perfect World Entertainment General Manager of Game Publishing, Andrew Brown. "Since launching in North America and Europe last year, we've strived to push Neverwinter globally, and with today's announcement we're thrilled to bring Neverwinter to our new Brazilian players. Neverwinter is a vibrant and engaging MMO that continues to grow, and the recently released Shadowmantle update brings Brazilians the best and latest free content to-date. "
"Connecting gamers with the great D&D gaming experiences in their preferred language is always a goal for us, and with the expansion of Neverwinter into Brazil, Perfect World continues to deliver that," said Nathan Stewart, brand director for Wizards of the Coast. "MMOs account for nearly 75 percent of the Brazilian game market right now, and we've seen incredible demand from our player base for Neverwinter in Brazil," said UOL BoaCompra Director of Global Business Development, Julian Migura. "We're excited to bring Neverwinter's epic storylines, intense gameplay and role play to this massive market and offer players easier access to the extra features they demand using the local payment methods available to them. "
Neverwinter is an action MMORPG that features fast-paced combat and epic dungeons. Players explore the vast city of Neverwinter and its surrounding countryside, learning the vivid history of the iconic city and battling its many enemies. In Neverwinter, players can create their own quests and campaigns with the Foundry, a powerful content creation tool that allows players to seamlessly integrate their adventures into the live game world.
For more information and to begin playing now for free, please visit the official Neverwinter Brazilian website: http://www.jogueneverwinter.br.com
Check us out on our Brazilian social links: Facebook | Twitter | YouTube | Orkut
To download the trailer and screenshots, please visit: https://www.dropbox.com/sh/9a0nb7lh2yogmpg/FPuqW3gaNy
ABOUT UOL BOACOMPRA
Founded in 2004, UOL BoaCompra specializes in monetizing online games while offering and servicing more than 3,000 games across multiple platforms in LATAM, Portugal, Spain and Turkey. As part of Brazil's largest internet company and content portal, UOL BoaCompra's game portals aggregate more than 1.2 billion page views per month and has more than 120,000 POS across LATAM. Several of the largest gaming companies including Valve, EA, Riot Games, Bigpoint, SmileGate and Aeria Games partner with UOL BoaCompra to bring their games to emerging markets including Brazil and most recently, Turkey. For more information, please visit www.boacompra.com and keep up-to-date on the latest news at http://www.boacompra.eu/blog/.
ABOUT PERFECT WORLD ENTERTAINMENT INC.
Perfect World Entertainment is a leading North American online games publisher specializing in immersive free-to-play MMORPGs. Founded in 2008, Perfect World Entertainment has published a number of popular titles, including Blacklight Retribution, Forsaken World, Perfect World International and Star Trek Online. The company works closely with its American development teams and partners such as Cryptic Studios, developer of the highly acclaimed MMORPG Dungeons & Dragons Neverwinter, and Runic Games, developer of the hit Torchlight series, to provide unparalleled quality of service and game experiences to its players. A subsidiary of Perfect World Co., Ltd. (NASDAQ: PWRD), Perfect World Entertainment is headquartered in Silicon Valley, California. For more information, please visit: www.arcgames.com.
ABOUT CRYPTIC STUDIOS, INC.
Cryptic Studios, Inc. is a leading developer of online games committed to delivering the next level of gameplay. Cryptic Studios, Inc. develops AAA titles for PC and is rapidly diversifying its portfolio of games to expand beyond the traditional MMORPG genre. Its successfully launched titles include "Champions Online: Free for All," "Star Trek Online" and "Dungeons & Dragons: Neverwinter." Cryptic Studios, Inc., a subsidiary of Perfect World Co., Ltd. (NASDAQ: PWRD), is located in Los Gatos, CA.
ABOUT WIZARDS OF THE COAST
Wizards of the Coast LLC, a subsidiary of Hasbro, Inc. (NASDAQ: HAS), is the leader in entertaining the lifestyle gamer. Wizards' players and fans are members of a global community bound together by their love of both digital gaming and in-person play. The company brings to market a range of gaming experiences under powerful brand names such as MAGIC: THE GATHERING, DUNGEONS & DRAGONS, and KAIJUDO. Wizards is also a publisher of fantasy series fiction with numerous New York Times best-sellers. For more information about our world renowned brands, visit the Wizards of the Coast Web site at www.wizards.com.
Dungeons & Dragons, Neverwinter, their respective logos, Forgotten Realms, and Magic: The Gathering are trademarks of Wizards of the Coast LLC in the U.S.A. and other countries. All rights reserved. Kaijudo is a trademark of Wizards of the Coast/Shogakukan/Mitsui-Kids.
CONTACT:
Derick Thomas, Perfect World Entertainment, Inc., (650) [email protected]
Liz Goodno, UOL BoaCompra, [email protected]
SOURCE Perfect World Entertainment, Inc. RELATED LINKS
http://www.arcgames.com
Mark RosewaterMonday, January 18, 2010
Making Magic Archive Mark Rosewater Archive elcome to Worldwake Previews, Week 1. It's time to shake the present that is Worldwake and see what we can deduce. Quite a bit, I hope, as I helped wrap the thing. Mostly what I'm going to do today is walk you through one of Worldwake's major themes (including showing you a preview card) and then talk a bit about the challenges of designing a second set. Actually those are backwards. What I need to do is explain how we design a second set first because that will explain why the major theme we have is the major theme. So if you're at all interested in what Worldwake has to offer (and if you aren't, what are you, made of stone?), stick around.
They've Got the Whole Worldwake In Their Hands
Before I get to all of that though, I have to first introduce you to the design team that's responsible for the goodies I'm about to talk about.
Kenneth Nagle – During the Great Designer Search (click here if you have no idea what I'm talking about; the short version is that it was a reality show a la The Apprentice on magicthegathering.com where we gave a Magic player a six-month design internship—we actually ended up giving out more than one), the judges would get together after each challenge and talk about who got the boot. One of the judges wasn't all that impressed with Ken and every time would throw his name on the list for elimination. Each time, I would take his name off—as it was technically a design internship, I had the final say—because I felt Ken had a lot of potential.
In two weeks, you all will get to go to the Worldwake Prerelease and see that potential realized. Worldwake was Ken's first design lead, and he knocked it out of the park. I am very proud of him and of the Worldwake design team. The fun part now is watching Ken wait for the set's release. I have not seen anyone this excited for a set since I had my first design come out (Tempest, for those who don't know). Ken has had his fingers in every possible piece of Worldwake. Ken helped plan the spoiler releases. Ken helped choose the artwork that went on the poster. For all I know Ken helped the people at our printers pack up the sets and load them onto the boat for shipping.
It's awesome to see Ken this excited, and I can't wait until he gets to the best part, seeing the reaction from all of you. (Seriously, it's the best part of the job.)
Mark Rosewater – Zendikar was my baby, so it made a lot of sense for me to stick around and help out with the sole "lands matter" small set. (Remember, Rise of the Eldrazi is a large set that takes a big mechanical turn.) Also, it was Ken's first lead, and in design the first skydive is always tandem. It's comforting for a first-time designer to have someone who's been there before to bounce ideas off of and to let them know if they've missed something obvious. While this set was going on, I was busy leading my own set (the 2010 large fall set codenamed "Lights"—I have many awesome things to say about this set, but not today), so while I was there to help out, Ken had to do all the heavy lifting.
Matt Place – I like putting a developer on each of the design teams because they have a very useful perspective. It's easy sometimes for designers to miss the forest for the trees, and the developers tend to point out fundamental issues that can get lost in the excitement of trying new things. In particular, I love having Matt on a team because he is a big-picture guy like me and he tends to have a vantage point that I find very helpful when trying to get a sense of how everything is going to come together.
Mark Globus – Many people forget that the #1, #2, #3 and #4 finishers from the Great Designer Search all work at Wizards now. Mark was #4. (Alexis Janson – #1, Ken – #2, Graeme Hopkins – #3) He's now the Magic Producer in R&D, and he oversees all the many processes that Magic R&D have to navigate to make Magic. On the side, he does design. (And you don't get to #4 in The Great Designer Search without having some design chops.) I'm always happy to see Mark on one of my teams, and I know Ken felt the same way.
Kelly Digges – Most design teams these days are composed of five people. The fifth slot is always saved for someone who has never done Magic design before but that has expressed interest. As we have a small number of sets per year and a large list of people in Wizards who are interested, it takes some time to land a "fifth slot." Worldwake's fifth slot went to someone near and dear to my heart, my editor Kelly. (Okay, he's not just my editor, he's the editor for all of Daily MTG, but I only interact with him on my columns.) I was very impressed with Kelly on the team. He jumped right in and contributed from day one. At some point during the next month or so, Kelly will write something to tell you about the design team from his perspective. I'll just say that I don't think this is the last you'll see of Kelly Digges, designer.
Coming In Second
I've spent a number of articles talking about designing blocks, and usually I focus on the work to make the first, large set. While that is the brunt of the work, there still this little issue of one or two small sets that follow the big set. Those sets have to not only be designed, they have to fill a role in the larger overall block structure. What does this mean for a second set? Let me walk you through some of the major concerns.
#1) It Has to Feel Like It Follows the Big Set – Small sets don't start with a blank slate. Players come into them with expectation set by the first set of the block. The second set is already in a known world with known themes and known mechanics. There is expectation that this set will be "more of the first." As I often explain, a lot of design's role is meeting expectations (and helping set up expectations so that you can later meet them). If players expect something, you'd better have a good reason for not supplying what they expect. What this means is that the small set comes to the table with a lot of pre-existing demands which the design team must meet.
#2) It Has to Have Its Own Identity – One of the realities of making a game (as a business as opposed to a hobby) is that you have to encourage people to buy the game. When the small set comes out, we have to convince all of you that it's something you want to buy. To do this, we have to be able to talk about the set in a way that sets it apart from all other sets. Why should you buy this set? This means that each set has to have an identity that we can use to explain what the set is about.
#3) There Has to Be Something New – "More of the same" is not enough for a new set. Yes, it can be a significant portion of the design, but it needs to have something that wasn't available before. This new thing could be a new mechanic, a significant twist on an established mechanic, a new way to approach the block's theme, or anything else that takes the experience of the block and adds something new to the mix. #4) You Have to Find Space That Wasn't Already Explored in the Big Set – Not only do you need to have something new, you need to have new discoveries in the space that the large set explored. You have to allow the players the ability to discover things. Why is this so important? Because there's a lot of power in first exposure. The human brain loves discovering things. The reason we have curiosity is that the need to explore and discover is hard wired into our brains. Also, to reinforce this, our brain gives us bursts of pleasure when we learn of things for the first time. As the saying goes, "You always remember your first." This means that you have to have some firsts even in your second set. #5) Everything Has to Fit – This is a huge design challenge for the second set because some part of the large set's design is loved by someone, you want to continue it into the second set. You also want to add something new (see above). The problem is that the small set is significantly smaller than the large set, yet you kind of want more things in it than the large set. That causes some problems with design space. What all that means is this: the first set of the block has the major issue of figuring out what the block is going to be about and then delivering an experience that maximizes on the block's theme. The second set is about building onto what the first set has established but doing so in a way that gives the second set its own identity. Once upon a time, R&D accomplished this task set to set. The large set would be designed, and then when work on the second set was started, the team would ask itself "What hasn't been done yet?" With the shift of focus to block planning, the needs of the second set are now examined during the design of the first set. When my team was designing Zendikar, I had to keep asking the question, "What is Worldwake going to do?" Whatever the answer was, it meant that it was something Zendikar couldn't do.
Modern block planning means that we have to make sure upon the creation of the block that each set plays an important enough role in the overall plan. The second (and third) set(s) cannot be afterthoughts, but must be pieces of the block plan. The assembly of the block requires the creation of each set's identity. To make this happen, the discussion about what the second (and third set) will do is decided much earlier in the process.
I've Got the Whole Worldwake in My Hands
So during Zendikar block design, we had to answer each of the concerns above about Worldwake:
#1) It Has to Feel Like It Follows the Big Set – This was the easiest to address. Worldwake would continue the "lands matter" theme. It would have more landfall cards, more Allies, Vampires, and Kor, more Traps and quests. Everything that was beloved in Zendikar (by someone—as I always say, Magic is many things to many people) returns in Worldwake. #2) It Has to Have Its Own Identity – This is the first place where the block plan becomes important. In order for Worldwake to have its own identity, we had to save it something; something that wouldn't go into Zendikar. As the block's core theme was "lands matter," it was crucial that Worldwake's identity be land-centric. This begets the question: (man, I don't get much chance to use "beget" all that often): what piece of the "lands matter" pie do we save for Worldwake? It had to be something significant, but not so important that Zendikar couldn't function without it. How did we figure this out? We started in early Zendikar design by using every mechanic related to land that we could think up. In addition, we designed cards that made use of lands in unorthodox ways. We took all these cards and we played them. And played them. And played them. What we ended up with fell into three buckets: good, good but complicated, and bad. Good but complicated is an excellent resource for second sets. You see, some ideas are very simple and can be understood easily. Others are more disorienting until you come to grips with the environment you are playing in. You want the players to experience this subset of cards after they have a handle on what the environment is.
The biggest category that fell into the "good but complicated" bucket involved lands that sat on the battlefield and did something. Here's why: the last seventeen years of Magic have taught players to basically ignore lands on the battlefield. Yes, you look from time to time when you need to figure out whether a player can do something or to have more knowledge to guess what might come next, but all in all, players are taught that the lands on the battlefield don't need to occupy much mental space. From time to time, we've created powerful lands that do something on the battlefield besides produce mana, but those have been few and far in between, and seldom do multiple of these lands coexist in games.
This all became apparent during early Zendikar playtests when we had lots of lands that did things on the battlefield. It was obvious fairly quickly that it was significantly ratcheting up the complexity of on-board play. Players were continually forgetting to take into account the appropriate lands. R&D is used to a higher level of complexity than the average player—designing and developing requires we bring in things at a high level of complexity and bring them down—and even we were forgetting left and right. We decided that we should push the majority of these lands out of Zendikar (there are obviously some exceptions at higher rarities, such as the rare land cycle). Since the set only had one small expansion, these cards would go into Worldwake or nowhere.
The meatiest chunk of these cards were lands that turn into creatures—what R&D (and much of the world) calls "man-lands." These are lands that have the ability to be turned into creatures. Lands that turn into creatures have been popular historically, partly because they're pretty cool and partly because they've tended to be tournament-worthy cards. Once we realized that we wanted to push these off, it became pretty clear what Worldwake's land theme was going to be: When Lands Attack.
#3) There Has to Be Something New – This is the topic for next week's column. Suffice to say, the design team did try to add some new things to Worldwake.
#4) You Have to Find Space That Wasn't Already Explored in the Big Set – Lands that attack were a big part of doing this. Zendikar is about lands, but most of them function more like "spell" lands than "creature" lands. Adding lands that have functionality on the board has a profound effect on the game play.
#5) Everything Has to Fit – Ken solved this the way that lead designers of most small sets do: he just crammed in everything he could wherever he could all the while making sure to leave room for simpler and elegant cards. There are a lot of tricks to make this happen, and perhaps one day I'll dedicate a column to these tricks. (Let me know in the thread or my email if you'd like to see such a thing.)
Talk to the Land
Enough of my yapping, let's get to the part where I show you a cool new card. Before I do that, I thought I'd show off a few cards in this theme that have already been spoiled.
The first card was revealed several weeks ago, because it is the "Buy a Box" promo card, which you get if you, well, buy a box of Worldwake. I want to show it to you, as I want you to get a sense of what kind of lands will be attacking you.
Click here to see Celestial Colonnade.
So what's going on here? The Worldwake design team spent a lot of time trying to figure out the coolest cycle of animating lands. We tried all sorts of things. We had lands that animated by landfall. We had lands that came into play already animated. We had lands that turned into giant monsters. In the end, though, we liked the idea of having dual man-lands. Zendikar's fetch lands were enemy-color-pair lands, so it seemed like a good place to introduce allied-color duals to the block.
The other thing we tried hard to do was distance ourselves from another famous cycle of lands that turned into creatures.
In our minds, this cycle has become the default in players' perceptions, and while we wanted to capture the goodness they had, we wanted our man-lands to not feel like minor tweaks of them. This is why we chose to have these lands activate for a little more but turn into bigger creatures.
Next, we have a land from our "pool cards"—that is, a group of cards we gave to all the different magazines.
Click here to see Dread Statuary.
Since the duals were rare, we decided to toss Limited a bone and make an angry land that anyone who opened it could play.
Which brings us to my preview card of the day. It's another member of the cycle of dual lands. Click here to see Raging Ravine.
Let me quickly address the one rules question that might pop up when you read it: Does the +1/+1 counter fall off when the land is no longer a creature? No. Lands have no problem having +1/+1 counters on them. (I've been told some really like it.) It kindly sits there until the Raging Ravine animates again. As you can see by comparing Celestial Colonnade to Raging Ravine, we tried to make each member of this cycle different so that they would have a great variety in play. Different color combinations will use their new dual land in different ways.
The next card is another one from the Visual Spoiler.
Click here to see Vastwood Zendikon.
The Zendikons are a common cycle and that allow you to turn your lands into creatures. Part of making the theme work was trying to find different ways to animate lands. The Zendikons came about because we were trying to find cards that fit the theme other than just lands. We liked using these at common rather than traditional man-lands as the enchantment helps remind everyone that the land in question is not just a plain old land. Originally, the "return the land to the battlefield" rider wasn't there but the cards too often caused card disadvantage, making people hesitant to play them. These cards should definitely have an impact on Zendikar block Limited.
Land Now For Something Completely Different
That's all the time I have for today. Hopefully, I've whet your appetite enough to get you to come back next week when I explain how we took kicker to eleven as well as hit upon the areas we chose to innovate in.
Until then, may you imagine the fun of shouting "Land Ho!" before you attack. | 计算机 |
2014-23/2666/en_head.json.gz/6342 | Martin Kleppmann
Entrepreneurship, web technology and the user experience
The complexity of user experience
The problem of overly complex software is nothing new; it is almost as old as software itself. Over and over again, software systems become so complex that they become very difficult to maintain and very time-consuming and expensive to modify. Most developers hate working on such systems, yet nevertheless we keep creating new, overly complex systems all the time.
Much has been written about this, including classic papers by Fred Brooks (No Silver Bullet), and Ben Moseley and Peter Marks (Out of the Tar Pit). They are much more worth reading than this post, and it is presumptuous of me to think I could add anything significant to this debate. But I will try nevertheless.
Pretty much everyone agrees that if you have a choice between a simpler software design and a more complex design, all else being equal, that simpler is better. It is also widely thought to be worthwhile to deliberately invest in simplicity — for example, to spend effort refactoring existing code into a cleaner design — because the one-off cost of refactoring today is easily offset by the benefits of easier maintenance tomorrow. Also, much thought by many smart people has gone into finding ways of breaking down complex systems into manageable parts with manageable dependencies. I don’t wish to dispute any of that.
But there is a subtlety that I have been missing in discussions about software complexity, that I feel somewhat ambivalent about, and that I think is worth discussing. It concerns the points where external humans (people outside of the team maintaining the system) touch the system — as developers using an API exposed by the system, or as end users interacting with a user interface. I will concentrate mostly on user interfaces, but much of this discussion applies to APIs too.
Let me first give a few examples, and then try to extract a pattern from them. They are examples of situations where, if you want, you can go to substantial engineering effort in order to make a user interface a little bit nicer. (Each example based on a true story!)
You have an e-commerce site, and need to send out order confirmation emails that explain next steps to the customer. Those next steps differ depending on availability, the tax status of the product, the location of the customer, the type of account they have, and a myriad other parameters. You want the emails to only include the information that is applicable to this particular customer’s situation, and not burden them with edge cases that don’t apply to them. You also want the emails to read as coherent prose, not as a bunch of fragmented bullet points generated by if statements based on the order parameters. So you go and build a natural language grammar model for constructing emails based on sentence snippets (providing pluralisation, agreement, declension in languages that have it, etc), in such a way that for any one out of 100 million possible parameter combinations, the resulting email is grammatically correct and easy to understand.
You have a multi-step user flow that is used in various different contexts, but ultimately achieves the same thing in each context. (For example, Rapportive has several OAuth flows for connecting your account with various social networks, and there are several different buttons in different places that all lead into the same user flow.) The simple solution is to make the flow generic, and not care how the user got there. But if you want to make the user feel good, you need to imagine what state their mind was in when they entered the flow, and customise the images, text and structure of the flow in order to match their goal. This means you have to keep track of where the user came from, what they were trying to do, and thread that context through every step of the flow. This is not fundamentally hard, but it is fiddly, time-consuming and error-prone.
You have an application that requires some arcane configuration. You could take the stance that you will give the user a help page and they will have to figure it out from there. Or you could write a sophisticated auto-configuration tool that inspects the user’s environment, analyses thousands of possible software combinations and configurations (and updates this database as new versions of other products in the environment are released), and automatically chooses the correct settings — hopefully without having to ask the user for help. With auto-configuration, the users never even know that they were spared a confusing configuration dialog. But somehow, word gets around that the product “just works”.
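To make the first example above (the order confirmation emails) a little more concrete, here is a deliberately simplified sketch of the kind of sentence-snippet model it describes. This is not the actual system from that story; the order fields and wording are invented, and a real implementation would also have to handle agreement, declension and translation, which is where most of the engineering effort goes.

```python
# Toy sketch of composing a grammatical confirmation email from order
# parameters. Real systems need locale-aware pluralisation, agreement and
# declension; this only shows the basic "snippets plus rules" shape.

def pluralise(count, singular, plural):
    # Choose the correct noun form so "1 items" never appears.
    return f"{count} {singular if count == 1 else plural}"

def confirmation_email(name, item_count, in_stock, is_digital, country):
    sentences = [
        f"Dear {name},",
        f"Thank you for ordering {pluralise(item_count, 'item', 'items')}.",
    ]

    if is_digital:
        sentences.append("Your download links are attached to this email.")
    elif in_stock:
        sentences.append("Your order will be dispatched within two working days.")
    else:
        sentences.append("We will email you again as soon as your order is ready to ship.")

    if not is_digital and country != "GB":
        sentences.append("Orders shipped outside the UK may be subject to customs charges.")

    return "\n".join(sentences)

print(confirmation_email("Ada", 3, in_stock=False, is_digital=False, country="DE"))
```

Even in this toy form you can see where the effort goes: every additional order parameter multiplies the number of sentence combinations that all have to read well.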
What’s a user requirement?
We said above that simplicity is good. However, taking simplicity to an exaggerated extreme, you end up with software that does nothing. This implies that there are aspects of software complexity that are essential to the user’s problem that is being solved. (Note that I don’t mean complexity of the user interface, but complexity of the actual code that implements the solution to the user’s problem.)
Unfortunately, there is a lot of additional complexity introduced by stuff that is not directly visible or useful to users: stuff that is only required to “grease the wheels”, for example to make legacy components work or to improve performance. Moseley and Marks call this latter type accidental complexity, and argue that it should be removed or abstracted away as much as possible. (Other authors define essential and accidental complexity slightly differently, but the exact definition is not important for the purpose of this post.)
This suggests that it is important to understand what user problem is being solved, and that’s where things start getting tricky. When you say that something is essential because it fulfils a user requirement (as opposed to an implementation constraint or a performance optimisation), that presupposes a very utilitarian view of software. It assumes that the user is trying to get a job done, and that they are a rational actor. But what if, say, you are taking an emotional approach and optimising for user delight?
What if the user didn’t know they had a problem, but you solve it anyway? If you introduce complexity in the system for the sake of making things a little nicer for the user (but without providing new core functionality), is that complexity really essential? What if you add a little detail that is surprising but delightful?
You can try to reduce an emotional decision down to a rational one — for example, you can say that when a user plays a game, it is solving the user’s problem of boredom by providing distraction. Thus any feature which substantially contributes towards alleviating boredom may be considered essential. Such reductionism can sometimes provide useful angles of insight, but I think a lot would be lost by ignoring the emotional angle.
You can state categorically that “great user experience is an essential feature”. But what does that mean? By itself, that statement is so general that could be used to argue for anything or nothing. User experience is subjective. What’s preferable for one user may be an annoyance for another user, even if both users are in the application’s target segment. Sometimes it just comes down to taste or fashion. User experience tends to have an emotional angle that makes it hard to fit into a rational reasoning framework.
What I am trying to get at: there are things in software that introduce a lot of complexity (and that we should consequently be wary of), and that can’t be directly mapped to a bullet point on a list of user requirements, but that are nevertheless important and valuable. These things do not necessarily provide important functionality, but they contribute to how the user feels about the application. Their effect may be invisible or subconscious, but that doesn’t make them any less essential.
Data-driven vs. emotional design
Returning to the examples above: as an application developer, you can choose whether to take on substantial additional complexity in the software in order to simplify or improve the experience for the user. The increased software complexity actually reduces the complexity from the user’s point of view. These examples also illustrate how user experience concerns are not just a matter of graphic design, but can also have a big impact on how things are engineered.
The features described above arguably do not contribute to the utility of the software — in the e-commerce example, orders will be fulfilled whether or not the confirmation emails are grammatical. In that sense, the complexity is unnecessary. But I would argue that these kind of user experience improvements are just as important as the utility of the product, because they determine how users feel about it. And how they feel ultimately determines whether they come back, and thus the success or failure of the product.
One could even argue that the utility of a product is a subset of its user experience: if the software doesn’t do the job that it’s supposed to, then that’s one way of creating a pretty bad experience; however, there are also many other ways of creating a bad experience, while remaining fully functional from a utilitarian point of view.
The emotional side of user experience can be a difficult thing for organisations to grapple with, because it doesn’t easily map to metrics. You can measure things like how long a user stayed on your site, how many things they clicked on, conversion rates, funnels, repeat purchase rates, lifetime values… but those numbers tell you very little about how happy you made a user. So you can take a “data-driven” approach to design decisions and say that a feature is worthwhile if and only if it makes the metrics go up — but I fear that an important side of the story is missed if you go solely by the numbers.
This is as far as my thinking has got: believing that a great user experience is essential for many products; and recognising that building a great UX is hard, can require substantial additional complexity in engineering, and can be hard to justify in terms of logical arguments and metrics. Which leaves me with some unanswered questions:
Every budget is finite, so you have to prioritise things, and not everything will get done. When you consider building something that improves user experience without strictly adding utility, it has to be traded off against features that do add utility (is it better to shave a day off the delivery time than to have a nice confirmation email?), and the cost of the increased complexity (will that clever email generator be a nightmare to localise when we translate the site into other languages?). How do you decide about that kind of trade-offs?
User experience choices are often emotional and intuitive (no number of focus groups and usability tests can replace good taste). That doesn’t make them any more or less important than rational arguments, but combining emotional and rational arguments can be tricky. Emotionally-driven people tend to let emotional choices overrule rational arguments, and rationally-driven people vice versa. How do you find the healthy middle ground?
If you’re aiming for a minimum viable product in order to test out a market (as opposed to improving a mature product), does that change how you prioritise core utility relative to “icing on the cake”?
I suspect that the answers to the questions above are “it depends”. More precisely, “how one thing is valued relative to another is an aspect of your particular organisation’s culture, and there’s no one right answer”. That would imply that each of us should think about it; you should have your own personal answers for how you decide these things in your own projects, and be able to articulate them. But it’s difficult — I don’t think hard-and-fast rules have a chance of working here.
I’d love to hear your thoughts in the comments below. If you liked this post, you can subscribe to email notifications when I write something new :)
Enjoyed this? To get notified when I write something new,
follow me on Twitter,
subscribe to the RSS feed,
or type in your email address:
I won't give your address to anyone else, won't send you any spam, and you can unsubscribe at any time.
Hello! I'm Martin Kleppmann, entrepreneur and software craftsman.
I co-founded Rapportive
(acquired
by LinkedIn in 2012) and Go Test It (acquired by Red Gate Software in 2009).
I care about making stuff that people want, great people and culture, the web and
its future, marvellous user experiences, maintainable code and scalable architectures.
I'd love to hear from you, so please leave comments, or feel free to
contact me directly.
26 Mar 2014: Six things I wish we had known about scaling
23 Oct 2013: LinkedIn Intro: Doing the Impossible on iOS
12 Aug 2013: System operations over seven centuries
24 May 2013: Improving the security of your SSH private key files
05 Dec 2012: Schema evolution in Avro, Protocol Buffers and Thrift
Unless otherwise specified, all content on this site is licensed under a
Attribution 3.0 Unported License.
Theme borrowed from
Carrington,
ported to Jekyll by Martin Kleppmann. | 计算机 |
It's been a fun ride as a Q&A host but all good things must come to an end. However, I did have a lot of fun, and learned a lot on the way, about how EditPad should be everyone's best friend, why it's not good to stay up until 3 AM after eating chocolate cake, and that you shouldn't insult the Lunar games in front of Cortney. As for my worthiness to fill Goog's slimy shoes, it's up to the readers to decide. All I know is that even if it was for a short time, it's been a real honor to be considered a Q&A host(ess) of one of the foremost RPG sites on the Internet, a site that I have been going to religiously ever since I was a high school kid with a Super Nintendo and a free weekend trying to figure out how to defeat Masa and Mune in Chrono Trigger.Okay, one last crop of letters before we go.
L E T T E R S
Take my Revolution (and shove it?)With the expected raise in devopment costs, making realistic character designs will possibly be more costly than anime-style. How much of a factor do you think cost will play in making the decision of art style in the next gen? Also, do you think the Virtual console of the Revolution will make it worth buying? How do you think it will work? Good luck in the contest!
J_Sensei | 计算机 |
2014-23/2666/en_head.json.gz/6577 | Essential guide to API management and application integration
API development communities requires online and offline presence
Learn how API development communities are expanding beyond the company setting.
APIs 101
API trends in action
Essential Guide Section you're in:API management: What you need to know
Respect the servlet API for effective portlet development
API project planning: Steps to add value in data, app integration
How to build an application program interface
API best practices, problems and advice
API design requires key principles, artistic thinking
CLOUD API SECURITY RISKS: HOW TO ASSESS CLOUD SERVICE PROVIDER APIS
Don't be a SOA-sauraus
Third-party developers aren't just uber-smart college kids coding in their dorm rooms, hopped up on Red Bull and Doritos. These days, a third-party developer could be an up-and-coming programmer or even a large company, and companies releasing APIs need to develop a third-party developer community to ensure the API's success. Companies can build API development communities in person, online and by offering incentives, according to API management companies and veterans.
Building the developer community is critical, particularly for cloud applications, according to experts. Software as a Service (SaaS) by its nature needs to be accessible and provide benefits to users, whether or not the ideal solution comes from the API development company or a third party, said Kevin O'Brien, senior director of the AppConnect program at email marketing provider Constant Contact. "The benefits we've received [from the third-party developer community] have been immediate," he said. The first step, however, is to have a compelling, easy-to-use application that results in a natural fit for third-party developers to work with the API, according to John Thomas, director of product management at San Francisco-based database software provider Embarcadero. "When you build a framework that is well-designed and architected and is extensible, it means you use languages and programming patterns that make it straightforward to take the base work and [create] something additional," he said. For example, a framework may offer a lot of functions but not a specific one, like printing to a specific plotting printer. In that case, a third-party developer would see an opportunity to connect the framework to that type of printer, Thomas said.
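As a rough illustration of the kind of extensibility Thomas describes, here is a minimal, hypothetical sketch of a framework that exposes a registration point a third-party developer can use to add an output device the framework itself never shipped. The names are invented for illustration; this is not Embarcadero's actual API.

```python
# Hypothetical framework with a plug-in point for output devices.
# A third party can register support for a device the framework lacks,
# such as a specific plotting printer, without touching framework code.

class ReportFramework:
    def __init__(self):
        self._printers = {}

    def register_printer(self, name, handler):
        # handler is any callable that accepts the rendered report text.
        self._printers[name] = handler

    def print_report(self, name, report):
        if name not in self._printers:
            raise KeyError(f"No printer registered under '{name}'")
        self._printers[name](report)

# --- third-party extension, written against the public registration point ---
def plotter_driver(report):
    # A real driver would translate the report into plotter commands.
    print(f"[plotter] sending {len(report)} characters to the plotting printer")

framework = ReportFramework()
framework.register_printer("plotter", plotter_driver)
framework.print_report("plotter", "Quarterly sales report ...")
```

The point is not the few lines of registration code, but that the framework was designed with that seam in the first place.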
Get personal -- and offer prizes
Once the API is ready, one way to build a developer community in person is to host in-person contests like hackathons, said Alex Gaber, API evangelist at Washington, D.C.-based Layer 7 Technologies. Hackathons typically are weekend-long events dedicated to allowing third-party developers to create new ways of using a company's API. Often, companies will offer prizes based on specific challenges, he said.
Another personal method to foster a third-party developer community is to run online contests for third-party developers, Gaber said. "We see big companies throw their hat into the ring," he said, citing Samsung's first place prize offering of $100,000 to the developer that could build the best integration with SDK tablets, or Netflix's movie recommendations algorithm contest that offered a $1 million purse to the winner. "Netflix ended up with a bunch of different solutions that were built, functioned and worked," he said.
These contests can attract more than just individual developers, according to Gaber. For example, a 20-person developer shop could submit an entry, or a large company that wants a better relationship with the contest sponsor may also throw its hat into the ring with a solution built around that company's API, he said.
Dedicated portals provide a community feeling
Meanwhile, companies looking to foster community online need dedicated portals for their third-party API developers, according to experts. "With API providers, every company in the world is going to have an API portal. These portals are where you really go to get access to these APIs," Gaber said. Broadsoft has a typical portal since it launched its first API four years ago, complete with forums and documentation, according to Leslie Ferry, vice president of marketing at the Gaithersburg, Md.-based VoIP company. That community has grown to 5,000 members, and those members have come up with new uses for Broadsoft's API. For example, one company in New Zealand tied the API into their billing system and can send emails to remind customers that their accounts are past due, which not only is less invasive for the customer but also reduces the length of time the bill is outstanding by 50%. "Our community enables third-party developers and our own customers to create new processes," Ferry said.
Meanwhile, Constant Contact shares their insights from working with small businesses to third-party developers, alerting them to trends, according to Constant Contact's Kevin O'Brien. Utilizing newsletters, forums and webinars, Constant Contact provides developers with what they know to help the developers know where to start.
Offer incentives as well
To encourage third-party participation in the company's API development, offering incentives helps. Broadsoft offers an incubator program that provides seed funding for developers. By providing developers with money up front, Broadsoft is able to speed the developers' time to create a solution for a common request, according to Ferry.
Meanwhile, Constant Contact has a revenue-sharing program in place with its third-party developers, according to O'Brien. "We have a marketplace that we promote to customers," he said. This marketplace includes the solutions third-party developers build.
No matter what, though, the developer experience should be as smooth as possible. According to O'Brien, a lesson learned is that, because of the volume of applications that a third-part developer can integrate, the developer's experience with the company's API should be straightforward and uncomplicated. "If they're making choices [about] which API to build on, it needs to reflect the functionality and be simple for them to use," he said. | 计算机 |
2014-23/2666/en_head.json.gz/7241 | Dave Pasternack, and Bill Wise
Building a Better Mousetrap: The Search Engines and Your Desktop
Ralph Waldo Emerson famously said: "Build a better mousetrap and the world will beat a path to your door." But if Emerson lived today, and he had to find an e-mail about a lecture he was giving at Harvard or was searching for a quote he wrote about quotations but couldn't remember the name of the file, he might have modified his original complaint. "Build a better desktop search application and the world will download it in droves, severely draining your bandwidth."

The need for a better way to search your own computer grew out of technological advances in searching the Web. It became painfully clear that something was wrong when you could search billions of Web pages for a particular phrase in a matter of seconds, but it took forever to find a file on your PC -- even when you knew the name of the file. If all you knew was some text from the file, it was pretty much hopeless. If you were looking for a certain e-mail in Outlook or another e-mail program, it was often easier asking someone to resend it to you than searching for it with the program's search features. Luckily, the companies that made us aware of the problem entered to solve it.

The difference between the instantaneous searching of the Web and the ungainly, slow searching of our computers is the difference between a search engine and the find feature built into computer programs. Search engines are constantly running, crawling the Web and indexing it. When you conduct a search in Google or Yahoo, you are searching a well-organized index, not the Web. The result you get from an engine is an entry in the index that points to a Web site. When you search your PC, you are actually looking through files, one by one, trying to find the file you wanted. Because a PC only has a couple thousand files, this method is tolerable, albeit slow. By comparison, searching the 8 billion pages now indexed by Google with this method for, say, "DM News," would take years.

So the engines ported their technology to your desktop. While your computer is idle or has processing power to spare, the desktop search applications index all the files on your computer, organizing them to make searching local files as quick and painless as searching for sites online. All three major engines, and many other engines and companies, have introduced desktop search applications. Some integrate into your browser; some act as a standalone program. Some index e-mails while some index music and video. Which is best for the user? Which holds the most potential as an advertising tool?

Search leader Google was the first to release a desktop search product, which it recently took out of beta. Google Desktop boasts an easy installation and indexes your PC very quickly (note that your e-mail program must be open in order to index e-mails). Google Desktop excels in its ease-of-use and its simple, unobtrusive integration into your system. To use Google Desktop, simply go to Google.com and select Desktop from the above links. Even when you conduct regular Web searches, Google shows a couple results from your desktop. If you are offline, double-clicking on the Desktop Search taskbar icon brings up a local Google Desktop Search screen. Google Desktop Search indexes e-mails from Outlook, Outlook Express, Thunderbird and Netscape Mail; Web history from Firefox, Internet Explorer and Netscape; all Office file types, such as Word or Excel documents; chat transcripts from AIM; music and video files; PDFs; images; and other text documents.
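Under the hood, Google Desktop and its competitors all rest on the idea described above: scan the documents once, build an index, and answer searches from the index rather than by opening files one at a time. A toy sketch of that idea (illustrative only, and not any vendor's actual code) looks something like this:

```python
# Toy desktop indexer: build an inverted index once, then answer queries
# by looking words up in the index instead of scanning every file.
from collections import defaultdict

documents = {
    "budget.xls": "third quarter budget draft for the sales team",
    "lecture.eml": "notes for the harvard lecture on search engines",
    "quotes.doc": "a quote about quotations and better mousetraps",
}

index = defaultdict(set)
for filename, text in documents.items():   # the slow part, done once
    for word in text.lower().split():
        index[word].add(filename)

def search(query):
    # Return the files that contain every word in the query.
    results = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("harvard lecture"))   # {'lecture.eml'}
```

The expensive work happens once, at indexing time, which is exactly why these tools do it while your computer is idle.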
Google Desktop can search for text within e-mails, Web history, chats, Office and text documents. Searching e-mails is Google's strong point: The program shows a text-only version of the e-mail on the screen, or you can choose to open the e-mail in its native program. You can see threads of back-and-forth e-mails, as in Gmail, which helps nail down the correct e-mail quickly and simply. You can even respond to your e-mail straight from Google Desktop.

Google Desktop Search is unique in its indexing of Web history. Searching for that great Web site you found last week? That's where Google really comes in handy. Google also easily finds text within Office documents, though those can be opened only in their respective applications. Google has trouble finding documents by file names and doesn't index information about music or video files, like who the artist is, or text within a PDF file. But Google seems to be the fastest desktop search application to index new files, usually displaying up-to-the-second results.

Google's strength is its integration into your search bar. You don't need any new software or toolbars to benefit from Google Desktop Search.

Yahoo search is by far the most robust of the bunch. Yahoo indexes e-mails (Outlook and Outlook Express), nearly all file types, music, contacts and even e-mail attachments. Yahoo Desktop Search not only indexes file names, but also includes thorough information about those files. It can index text within PDF files, which no other desktop search application can do. Yahoo can find music files by artist, even if the artist name isn't included in the name of the file.

Yahoo Desktop Searches are performed from either the Yahoo Desktop program or from a toolbar installed in Outlook. Though this may be seen as an extra step over Google, the versatile and productive Yahoo application more than makes up for it. Within the program, you can preview any type of file, including PDFs, PowerPoints, Excel and other spreadsheets -- even music and video files. Where applicable, your keywords are highlighted to help find data within the document. From within the program, you can respond and work on e-mails, send files as an attachment, print files, open their parent directories and delete files. Unlike any other desktop search application, Yahoo searches on the fly, narrowing your results as your query gets longer and more complex. This addition lets you easily find exactly the file you need.

Yahoo's only drawbacks are that you can't easily pause indexing if you need to do something private on your PC, which you can do in Google and MSN, and its lack of Web history and support for Thunderbird. Otherwise, it stands head and shoulders above the rest.

MSN's entry into the desktop search market met with the same icy welcome as its new search engine. The desktop search program can be installed only with the Microsoft Toolbar Suite. If you don't want another toolbar, too bad. And the default for the toolbar is for it to be displayed everywhere: Internet Explorer, Windows Explorer and Outlook or Outlook Express. MSN also suffers by not offering any preview of files. The program's surviving strength is the speed at which it indexes and finds files. MSN Desktop also offers an option to pause indexing.

Ask Jeeves also introduced a desktop search application recently, but it is still in need of work. It indexes at a much slower rate and doesn't set up as easily as its competitors (it requires profiles to be manually added from Outlook instead of figuring that out itself).
Once running, it does a fair job of searching for textual documents, but stutters on more complex searches. Like Yahoo, it is not easy to pause.

Considering how new a field desktop search is for the major search engines, their products are surprisingly polished and effective. All applications need to index more types of files, especially from e-mail programs other than Outlook. While e-mail remains the main purpose of these programs, it is important to be able to respond to and work with e-mails from the desktop application, as Google and Yahoo allow. Desktop search needs to be easily integrated into our current search behavior. While Google's product is not the best, it is the only one that does its best to mimic online search behavior, and has thus become the most popular.

Soon, engines will open the desktop search to advertisers. Google Desktop, since it is set up like an Internet search page, will be the easiest to integrate with ads. The right side of the page, usually filled with ads, is blank in Desktop searches -- for now. Yahoo, MSN and Ask Jeeves all have some limited space in their programs for generic or keyword-based banner ads. All four programs also offer Web search as well. If you can't find it on your computer, you'll check the Web and the regular PPC advertising will be waiting for you.
Feel free to take a look around, but chances are you're looking for the Games page.
My name's Josh Forde (pronounced ford-ee, similar to the number forty-- the E is not silent as most people assume).
I'm a Wisconsin-based developer of independent games, specializing mainly in free games. At this point, I'm mostly known for my free browser-based MMORPG Seeds of Time Online, which introduced bizarre monster species like the Orbulite and the Bootworm.
I tend to create somewhat cartoony games, as evidenced in the Seeds of Time series. Most of my freeware games tend to fall into either the platformer or RPG genres; as such, you'll see that the Games page contains several free RPG's and platformers.
Newest release
My newest game is Super Blackout, a puzzle game for Android, released on August 26, 2013.
Where are all these games?
Check out the Games page to see a list of my best games, most of them being freeware PC games. You can also see smaller, less notable games on the Small Projects page.
While you're here...
Be sure also to check out my blog. Reading the thoughts of a game developer can sometimes be enjoyable, enlightening, or even both on a good day. Feel free also to subscribe to the RSS feed and leave a comment or two. :)
This page was last updated on Aug 27, 2013 | 计算机 |
Dreamforce '12
The annual Dreamforce conference is important in terms of announcements and developments, but what's really important is messaging. It's subtle and harder to spot, but the overall message Salesforce wants to get across is in there. This year, it's all about scale. It's clear that the company is making a sophisticated direct appeal to CEOs and other C-level officers at major corporations.
As you read this, Dreamforce 2012 is underway and Marc Benioff is giving his keynote presentation. I've been in San Francisco since Monday and have watched the city -- especially near the Moscone Center -- swell like a pumpkin on a vine in late summer. If Salesforce's 70,000 registered attendees doesn't impress you, consider that they have another 30,000 people attending a Web conference as well.
It's big alright. Lots of people wonder and ask for predictions about what the company will announce. And while features and functionality are important and make up a sizable part of the show, rather than discuss them -- you can find them in the press releases -- I'd rather concentrate on the messaging that the company is delivering in more subtle ways.
How Big Is Big?
It's clear to me that the company is making a sophisticated direct appeal to CEOs and other C-level officers at major corporations. Benioff will note that his company is on a US$3 billion run rate, which is impressive, but the meaning behind the number is important too.
Do you know what the 500th place in the 2012 Fortune 500 list represents in revenue terms? The company in that spot is Molina Healthcare with revenues of $4.769 billion.
This is important -- Salesforce is on track to crack the Fortune 500 list in the near future. That's significant, not for bragging rights, though they go with the territory, but because companies in the Fortune 500 buy and sell to each other.
These companies require business partners that have the ability to scale and to consistently deliver on their promises. If you're a $3 billion company, it takes a lot of deals with companies your size or smaller to drive growth, or you need a few very big and very demanding customers, and that's what's ahead.
Knowing Your Customers
So, I look for Salesforce's messaging to have a good dose of "we understand the complex needs of big companies that are embarking on a shift to become social enterprises." Those messages come through with words like compliance, access control, performance management, calibration and other things you hear in business school but not so much on the street. I think you'll hear them a lot in the Moscone environs.
We'll also see a decided effort to appeal to more SMB companies and not just because they make good customers. While some SMB companies are small and built to stay that way, many, if not most, are in transition from embryonic to big. You need only look at the successes of companies like Facebook, Twitter, YouTube, Google and Salesforce itself, to understand that the appeal that Salesforce can make to them will resonate the same as it does with the Fortune 500.
Companies that want to get to that pinnacle are now shopping for the foundational technologies that will get them there because if there's one thing they all want to avoid, it's swapping out the technology foundation that got them to the $100 million revenue level in order to get something that will help them get to $1 billion and beyond.
So, Salesforce's mission this week is equal parts about introducing new products and services AND convincing guest CEOs that its subscription enterprise strategy is a keeper. The company will be assisted by the presence and testimony of CEO customers Angela Ahrendts of Burberry, Jeff Immelt of GE and Sir Richard Branson of Virgin.
The messaging started this morning and will go until well after midnight for several days. It will be a great and exhausting show. | 计算机 |
Upgrading a system's BIOS requires direct access to a system's hardware. In the days of DOS and 16/32-bit Windows, this was easy: all someone had to do to flash the BIOS was boot to a command line and run the BIOS-flashing utility. Since Windows NT, Windows 2000 and now Windows XP don't allow direct access to hardware, this is no longer possible.
If you are dealing with a machine that has one of the aforementioned operating systems loaded onto it and need to flash the system BIOS or the BIOS of one of the controller cards in the system, there are a couple of ways to approach the problem.
Keep a copy of Windows 95/98/Me running on another machine as a disk-creation system. Win95/98/Me can be used to format a bootable disk onto which can be placed the BIOS-flashing software. Even if the system is a dual-boot with Win95/98/Me and another OS, it's still handy to have the ability to create DOS-level boot disks. Some organizations, however, may not allow this.
Download a bootable floppy image and use that. The site www.bootdisk.com contains repositories of various DOS-disk images that can be unpacked and created in 32-bit Windows. Many of these disks contain nothing more than the minimum of files needed to boot a system into DOS mode, with plenty of space left over for BIOS images.
Keep a spare manufacturer's copy of DOS handy. Obviously this only works if you have a manufacturer's copy of DOS. Make several backup copies of the disks if you do have a spare copy, since DOS itself is hard to come by.
Create a bootable DOS CD. This is a very slick and elegant solution -- if you have a CD-R/W drive and the right tools, you can create a bootable CD with as many of the needed utilities on it, plus access to the floppy drive. There are several ways to go about doing this, depending on the tools at hand -- some CD-R/W programs allow the creation of bootable CD-ROMs and some don't. This site -- http://www.nu2.nu/bootcd/ -- contains detailed information and tools for creating a bootable DOS CD.
Serdar Yegulalp is the editor of the Windows 2000 Power Users Newsletter.
This was first published in March 2002
Part of the Computing fundamentals glossary:
Haptics (pronounced HAP-tiks) is the science of applying touch (tactile) sensation and control to interaction with computer applications. (The word derives from the Greek haptein meaning "to fasten.") By using special input/output devices (joysticks, data gloves, or other devices), users can receive feedback from computer applications in the form of felt sensations in the hand or other parts of the body. In combination with a visual display, haptics technology can be used to train people for tasks requiring hand-eye coordination, such as surgery and space ship maneuvers. It can also be used for games in which you feel as well as see your interactions with images. For example, you might play tennis with another computer user somewhere else in the world. Both of you can see the moving ball and, using the haptic device, position and swing your tennis racket and feel the impact of the ball.
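As a rough sketch of how an application produces those felt sensations, the loop below computes a simple spring-like contact force whenever the user's probe penetrates a virtual surface and sends it to the device. The device API shown here is invented for illustration; real devices and their SDKs differ, and real haptic loops run at roughly 1,000 updates per second.

```python
# Illustrative 1-D haptic rendering loop: when the probe penetrates a
# virtual wall, push back with a spring force; otherwise output no force.
# "FakeDevice" stands in for a real device SDK so the sketch can run.

STIFFNESS = 800.0   # N/m, how "hard" the virtual wall feels
WALL_POS = 0.05     # metres, where the virtual surface sits

def contact_force(probe_pos):
    penetration = probe_pos - WALL_POS
    if penetration <= 0.0:
        return 0.0                     # not touching the surface
    return -STIFFNESS * penetration    # spring force pushing the hand back

def haptic_loop(device, cycles=520):
    for _ in range(cycles):            # real loops run at ~1 kHz, continuously
        pos = device.read_position()   # metres along one axis
        device.apply_force(contact_force(pos))

class FakeDevice:
    """Stand-in so the sketch runs without hardware."""
    def __init__(self):
        self.pos = 0.0
    def read_position(self):
        self.pos += 0.0001             # pretend the hand drifts toward the wall
        return self.pos
    def apply_force(self, force):
        if force:
            print(f"pos={self.pos:.4f} m  force={force:.2f} N")

haptic_loop(FakeDevice())
```

Coupling a loop like this with the on-screen graphics is what lets a player both see and feel the ball hit the racket.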
A number of universities are experimenting with haptics. The Immersion Corporation offers a joystick product that is used in laboratories and in arcade games. Haptics offers an additional dimension to a virtual reality or 3-D environment.
This was last updated in April 2005
Continue reading about haptics: The Haptics Photo Gallery from the Haptics Community Web Page offers a visual table of contents to a number of haptics products and experiments.
At Carnegie-Mellon University, one project is working on a WYSIWYF Display that coordinates visual and tactile feedback in a learning experience.
The Immersion Corporation's Impulse Engine 2000 is a sophisticated and rather expensive "joystick" that lets you feel impact, texture, structure, and other tactile sensations as you interact with computer applications.
Assassin's Creed Creative Director Joining THQ Montreal
Patrice Désilets, the former creative director for the Assassin's Creed franchise, has a new job. He's going to be one of the founding members of THQ's new Montreal studio.
"The best way we can deliver fresh, high-quality gaming experiences is by working with the best talent. THQ is delighted to have the opportunity to make a brilliant addition to our team next year with Patrice D�silets," said Danny Bilson, THQ's Executive Vice President of Core Games. "We expect calendar 2011 to be a watershed year for THQ, and adding developers like Patrice helps ensure our focus on new IP and great games charted by leading industry artists."
Désilets is expected to begin working at the studio next summer. No details were provided about his project(s). As Bilson said, though, he's going to be developing new intellectual properties for THQ.
Powering The Rewind...
Please Note: The purpose of this page is only to discuss our Content Management System (CMS) for those who are interested. There's nothing about 80s movies here.
In 1999, this site was called the 80s Movies Gateway, authored by hand using a combination of primitive web authoring tools and Notepad. When it got to about 45 movie pages, I decided to change the background color (and maybe a couple of other things) and spent a very frustrating weekend opening each page, changing the things, saving and then uploading. An all too common story, I'm sure.
I realised that if the site was ever going to grow (and I was going to stay sane), I needed some automated process that would build the pages automatically from some kind of template and save me the trouble. This was the height of the .com boom and competent CMS systems that would do this and more were being licensed for silly money.
Like most people, I'd only ever written that classic two line program in BASIC that printed something repeatedly, so I was hardly qualified to write a Content Management System.
Yet, this is what I decided to do... What an idiot! And so PageBuild was slowly and painfully born. It started as literally a naive design concept and flowchart scribbled on a sheet of paper...
The very first version was the result of seven months' solid labor on my part. Conceived and born in an environment that couldn't be further from the corporate world of steel and glass buildings housing dynamic teams of software developers... Like every clichéd home-brew development story you've ever read, I was holed up in a small, darkened room, surrounded by fast food cartons and 80s inspiration in the form of books, magazines and vinyl albums, my eyes fixed on the flickering PC screen. The final software ran to many thousands of lines of original code. The new site look was developed alongside the code, inspired by the '80s album designs for the music that pumped constantly into my unholy "design chamber".
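To give a flavour of the core idea (PageBuild itself was far larger and, as far as this page says, not written in Python), here is a toy sketch of template-driven page building: one shared template plus per-movie data, so a site-wide change like that background colour only ever touches the template. The field names and movie entries are invented for the example.

```python
# Toy illustration of the template-driven idea behind a tool like PageBuild.
# Every page is generated from one shared template plus per-movie data, so a
# site-wide change means regenerating pages, not hand-editing each one.
from string import Template

PAGE_TEMPLATE = Template("""<html>
<head><title>$title ($year)</title></head>
<body bgcolor="$bgcolor">
<h1>$title</h1>
<p>$summary</p>
</body>
</html>""")

movies = [
    {"slug": "backtothefuture", "title": "Back to the Future", "year": 1985,
     "summary": "Marty McFly travels back to 1955 in a DeLorean time machine."},
    {"slug": "thegoonies", "title": "The Goonies", "year": 1985,
     "summary": "A gang of kids hunt for One-Eyed Willy's treasure."},
]

def build_site(bgcolor="#000000"):
    # Changing the background colour now touches only this one call site.
    for movie in movies:
        html = PAGE_TEMPLATE.substitute(bgcolor=bgcolor, **movie)
        with open(f"{movie['slug']}.html", "w") as f:
            f.write(html)

if __name__ == "__main__":
    build_site()
```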
And so, to cut a long story short, PageBuild ran the site. And as I'm writing this, it's still doing it. Good old PageBuild. Every month it served up to a half million users. Constantly being developed in small steps... until, in late 2005, I started work on a brand new replacement, codenamed C21... Why? Because PageBuild was solid and reliable, but extremely restrictive, naive and not very well structured (after so many bolt-on revisions) or documented.
As I started to draw up my wish list, it became obvious that the future was in splitting up the running of the site into separate engines, each specialising in one area: movie data, locations, product management and user interaction (update submissions etc). It also became increasingly obvious that I had bitten off an enormous challenge!
I should mention at this point that there was a clear decision to be made regarding the use of industry-standard tools and software to build these new engines. The freely available MySQL and other dynamic site technologies presented a tempting series of shortcuts, but ultimately I decided that it was a good idea to stick with what I knew and write my own core technology. The reasons are many, but one big one is... security. This is probably a controversial point, but I always felt that these software platforms were inherently vulnerable because of the nature of their availability. Exploits are found and quickly propagated around the people who spend their time wrecking people's sites (for no better reason than that they feel like it), and you have big problems which you cannot solve yourself, or at all, until the developers fix the weakness. At any given time of day, I can check the logs of this site and someone will have been trying to compromise it in some way within the last minute. Such is life.
...Anyway, so C21 consists of several independent 'engines' to manage the different types of new information that will be present on the new site.
While the new system nods to the previous version of the site, everything is completely new and has been written from the ground up to give the best possible movies site. Virtually nothing whatsoever has been retained. And the scope of C21 has been daunting to say the least... Remember, we're not some corporation with teams of software developers - there's just me, the webmaster! Yet C21 requires technology that must face off against sites run by billion-dollar companies. It's the classic David and Goliath story... But it always has been...
PC Game Releases for Fall 2009
By Pain | Published: 29 October 2009 | Posted in: PC Games
November has always been the month when big games come up from behind at the end of the year to kick gamers in all the way through the Holiday season. Right after the anticipated worldwide retail release of Windows 7, the fall releases for the PC are lined up to test the new OS and its capabilities in handling the latest games. Windows 7 features DirectX 11 as its main advantage compared to its predecessors when it comes to games, and the big three are definitely right there to test it.
This year, the first to come up is BioWare's long-awaited RPG epic, Dragon Age: Origins. A staple of modern CRPGs, BioWare established itself with the Neverwinter Nights and Star Wars: Knights of the Old Republic franchises, while stumbling a bit with Jade Empire. When this game was first announced, they went with a completely original storyline and concept for a solid 3D action RPG that will leave a giant imprint upon RPG fans. The last singleplayer RPG for the PC to do this was The Witcher, which was released back in the fall of 2007.
A week after that, Call of Duty: Modern Warfare 2 comes out, almost two years after its highly acclaimed predecessor was released. Call of Duty 4: Modern Warfare took the FPS world by storm, receiving high praise worldwide for its new approach and action-packed gameplay. Coming up after a multitude of WWII games, it was a breath of fresh air for those who had grown tired of firing virtual Thompsons and MP44s. Now, with Modern Warfare 2, things have been ramped up to 11, as the already crazy mechanics of the previous game are set to become even crazier. Both singleplayer and multiplayer modes are given equal priority in bringing gamers closer to the hard action that the game depicts. Unfortunately, no Nazi Zombies in this one.
But there will be zombies for the next one. Then on the third week, another sequel will make an impact. Left 4 Dead 2 is the follow-up to the surprise hit multiplayer co-op game by Valve. Some say that it is way too soon for a sequel to be released, but this title has so many additions to the already awesome gameplay mechanics of the original game that perhaps it really had to be done. The teasers and gameplay trailers had fans squirming for zombie-killing action. The setting is now more like the American South. Perhaps developers are looking for a zombie version of Hurricane Katrina, which may sound like a nightmare for conservatives. However, the developers have given preview after incredible preview to justify its ever-growing reputation as the zombie game, period.
Publishers have marketed their respective games to the utmost limit, flooding the Internet with so much material that it is impossible not to feel excited. A lot of people would say that 2007 was the year of the PC's revival in gaming; so many good titles came out that year that the PC reemerged as the gaming system to have. This year has also been good for gamers, as ever-improving technology has paved the way for a consistent stream of strong titles on the PC, both singleplayer and online.
Windows 7 is about to get an early workout.
1 Comment
Gamer Syndrome
Posted October 29, 2009 at 3:45 PM: haha Windows 7… is anyone in their right mind going to buy Windows 7 after this whole Vista debacle? (that's unless the retailers are forcing you to get Windows 7, which looks like the case) I quote: “Shattered Horizon requires DirectX 10 on Windows Vista or Windows 7. There is no support for Windows XP or DirectX 9.” .. too bad for us poor bastards still running XP
ASCETIC : The Audient Void
A blog about the physical controller I have been making for Ableton and MaxforLive.
I'm going to start rebuilding the gloves in the next week or so, but at the moment I'm working on finishing up an EP and some tracks for compilation albums, so please excuse the lack of updates; there is lots more to come when I get finished with these projects!
Tom Whiston
Gestural Controller Software Design and Conclusion
Firstly, another proof-of-concept video from my supporting work for my Masters:
Construction 4 Edit from TheAudientVoid on Vimeo.
Another bit of proof of concept video from my supporting documentation for my Masters.
The drums at the start were fed into the markov chain drum recorder earlier. Basically this patch takes what you put into it, makes a markov grid and spits permutations of its input out according to whatever method you use to retrieve the data (in this case it uses the recorded midi sequence being played back to create note on/offs which send a bang to the pitch generation patch. These are quantised to 16th notes and output).
You can see how the gestural control works with the gloves pretty clearly at the start as the hand position is used to control a filter over a drum section.
Around the 3 minute mark i start playing some percussion live instead of using the markov chain recorded section.
And now the final sections of my dissertation. Please look at the annotated patch pictures that accompany the text, as they are meant to be seen in conjunction with this section. There are in fact many more annotated patches in the actual maxforlive device, but I will post about those another day in a more detailed breakdown of the software design.
Once the hardware has been designed and created, the software must be made to produce useful values for control of Ableton Live. As maxforlive offers an almost infinite possibility of functions, it is important to decide what you wish to do with the software before you start building it. “By itself, a computer is a tabula rasa, full of potential, but without specific inherent orientation. It is with such a machine that we seek to create instruments with which we can establish a profound musical rapport.” (Tanaka 2)
It is important that we create a system whereby the software plays to the strengths of the performer and hardware design, these elements must work in tandem to create an innovative and usable ‘playing’ experience. Firstly the Arduino data has to be understood by Max/Msp. I chose Firmata as it uses the robust OSC protocol to transmit data to max/msp and is also provided with pre made max/msp objects for receiving the data, this code proved to be very stable and fast at passing messages. Once this was uploaded to the board and xbees were configured correctly it becomes simple to receive values that can become usable in your software. As we are using a range of analogue sensors it is important to include a calibration stage in the software so that minimum and maximum values can be set, inputs can be smoothed and then also assigned to a function. To this function I used the “Sensor-Tamer” max patch as a basis for creating a calibration system for all the inputs. These are then scaled and sent to a max patch which allows us to choose an effect from the current Ableton live set.
Left Hand Maxuino Input and other modules, annotated
Right Hand Maxuino Input and other modules, annotated
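As a rough illustration of the calibrate-smooth-scale stage described above (handled in the actual project by the "Sensor-Tamer"-based Max patches), here is a minimal Python sketch. The smoothing factor and the 0-127 output range are assumptions for the example, not values taken from the device.

```python
# Minimal sketch (Python, not Max/MSP) of calibrating an analogue input:
# track the observed min/max, smooth the raw reading, scale to 0-127.
class CalibratedInput:
    def __init__(self, smoothing=0.2):
        self.lo = float("inf")    # lowest raw value seen so far
        self.hi = float("-inf")   # highest raw value seen so far
        self.smoothing = smoothing
        self.state = None         # smoothed value

    def update(self, raw):
        # Calibration: widen the observed range as new extremes arrive.
        self.lo = min(self.lo, raw)
        self.hi = max(self.hi, raw)
        # Smoothing: simple exponential filter to tame sensor jitter.
        if self.state is None:
            self.state = float(raw)
        else:
            self.state += self.smoothing * (raw - self.state)
        # Scaling: map the smoothed value onto 0-127 for MIDI-style control.
        span = self.hi - self.lo
        if span == 0:
            return 0
        return int(round((self.state - self.lo) / span * 127))

bend = CalibratedInput()
for reading in [312, 340, 355, 330, 610, 595, 480]:   # fake ADC values
    print(bend.update(reading))
```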
The analogue inputs can be made to produce midi messages as well as directly controlling parameters of effects from Live menus, the advantage of this is that you can then operate with two distinct modes, one for controlling fx parameters and other for passing midi messages to synths. Due to the fact that Ableton Live merges midi from all input channels to one channel of output you have to use the internal routing (S and R objects) functions of Max/Msp to send midi to a number of tracks. Obviously as this is a control system for live performance you need to have a way to control more than one synth/plugin and you want to be able to control various parameters for each synth. Creating small plugin objects for the channels you wish to control makes it easy to do this and as these simply pipe the midi from the input channel to the selected max receive object and because of this it is possible to assign the same physical controller to a different midi assignment on every channel. This again comes back to the watchword of customizability and allows the user to create dynamic performances where many elements can be changed without touching the computer. This also works neatly around the problem of only being able to send information to the active channel in a sequencer as your midi is routed ‘behind the scenes’ and effectively selects the channel you wish to use without any physical selection of the channel (i.e. no mouse click is required).
The footpedal which currently implements record and step through patch features
As the system is to be used to perform live there are a number of utility functions which also need to be created such as freezing and recording loops, stepping through channels, octaves and patches. These are best implemented away from the gloves themselves as the gloves are most intuitive to play when using both hands (8 notes over two hands), as you can only have a fixed number of switches that are easily playable it makes sense to assign these to notes (with sharps of notes being achieved through a foot pedal). Having switches used for playing on both hands also means that you can create polyphony by pressing down finger switches on both hands simultaneously. There is also the practical consideration that you do not want to have to stop playing a pattern to push a button to record a loop or to freeze things, by moving these functions to your feet you can continue playing whilst accessing control functions. For ease of use recording and freezing functions are assigned to all looping plugins from a single switch, as you are only sending midi data to one channel at a time there is no chance of creating a ‘false-positive’ and recording unwanted sounds in the wrong channel and having one switch to operate freeze or record greatly simplifies control for the end user.
I also decided to use a phone mounted on my arm running touchOSC to control some functions of Ableton live as it is useful in some cases to have visual feedback and again this allows the gloves to be freed up for musical functions. Some of these functions echo the footswitch controls to allow the performer to move away from the laptop and into the audience and as touchOSC has two-way midi control it updates the status of a switch or setting to correspond with the footswitch being pressed so there are no crossed signals. With touchOSC it is easy to design your own interface and to assign buttons to Ableton Live functions. As this essentially operates as a midi controller it is only necessary to put the software into midi learn mode, click the function you wish to assign and touch the button on the phone. This again allows for a high level of customizability for the end user and for interfaces to be made and set up according to the type of performance you wish to create. It is for example particularly suited to triggering sounds or prerecorded loops as many buttons are required for this (one button per clip) and this would not be sensibly achievable using the gloves. Although currently using a predesigned interface due to hardware constraints it is my aim to implement a touchOSC system that as well as providing controls for loops and other parameters provides a full set of feedback from the gloves and foot pedal and thus it will be possible to see what instrument, bank and so forth you have chosen in the software. This will become vital to the projects aim of being able to move completely away from the computer when performing.
At the time of writing this I did not have an Apple device to create a custom layout, so this HUD was used to show data from Max on the laptop.
Algorithmic Variation
“Each artwork becomes a sort of behavioral Tarot pack, presenting coordinates which can be endlessly reshuffled by the spectator, always to produce meaning”(Ascott 1966 3)
The Markov Chain Recorder/Player, Annotated
I decided that I wanted to be able to manipulate midi data within my performance to produce a number of variations to the input. These variations had to sound human and make intelligent choices from the data that was presented. To this end I have used Markov Chains to analyze midi data to create a system whereby a circular causal relationship between the user and the patch is developed. The patch takes midi input and then creates a probability table as to which note will be played next, after each note is generated it is fed back into the system and used to look up the next note from the probability grid. This means that whatever midi data is fed to the patch will be transformed in a way that preserves the most important intervals and melodic structures of your original data but allows for permutation, this in turn means that the performer must react to what the patch outputs and there is the possibility to input more data to change the markov chain that you are using and thus alter the performance further. In essence I wished to create a system of patches that function very much like an improvising live band, a certain set of melodic parameters are agreed upon, by midi input, and then used as a basis for improvisation. The data from these markov chains can be output in two ways, either the computer can be set to automate the output itself or you may use the gloves the push data from the markov chain into a synth, both of these methods yield different but equally valid musical results and allow the performer to create very different types of results. The idea of using markov chains to create predictable but mutating data has much in common with Cybernetic and Conversation theory where the interaction of two agents and the interpretation of these leads to the creating of a third which in turn influences the original agents. If we consider the original midi data in the patch to be the first agent and the person using the controller to be the second the interpretation of data from the computer influences the playing of the person using the controller and in turn this can be fed back into the computer to create another set of data which is again interpreted, permuted and responded to by the performer. This application of disturbing influences to the state of a variable in the environment can be related to Perceptual Control Theory.
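For readers who want the mechanism spelled out, here is a minimal Python sketch of the first-order Markov idea described above: the recorded notes build a transition table, and each generated note is fed back in as the state used to look up the next one. The real patch works on live MIDI and quantises its output to 16th notes; this sketch only shows the core table-and-feedback loop, with made-up input notes.

```python
# Sketch of a first-order Markov note generator: duplicates in the table
# encode probability, and each output note becomes the next lookup state.
import random
from collections import defaultdict

def build_transition_table(notes):
    table = defaultdict(list)
    for current, nxt in zip(notes, notes[1:]):
        table[current].append(nxt)   # repeated pairs weight the choice
    return table

def generate(table, seed, length=16):
    note = seed
    out = [note]
    for _ in range(length - 1):
        choices = table.get(note)
        if not choices:                   # dead end: fall back to the seed
            note = seed
        else:
            note = random.choice(choices) # sample the next note...
        out.append(note)                  # ...and feed it back as the state
    return out

recorded = [60, 62, 64, 62, 60, 67, 64, 62, 60]  # MIDI notes "played in"
table = build_transition_table(recorded)
print(generate(table, seed=60))
```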
“Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by telling the lower levels what to perceive: that is, they adjust the reference levels (goals) of the lower levels.” (Powers 1995) Despite this being in reference to control systems in the human mind it is easy to see how it is also applicable to computer control systems, the higher level systems that are accessible by the user tell the software what to perceive, this is done in two ways, firstly the input of midi data, this input allows the software to create a lower level abstraction, being the table of probability, which is then called upon to trigger notes.
The changes must be subtle and controlled enough that the performer is reacting to them and responding rather than fighting the computer to maintain control of the system. The process that is used to determine probability of notes is a closed system to the performer (all one needs to do is feed in a midi file) the performer has access to an open system which can be used to alter key characteristics of the processes after this, they also have access to play along with this process through a separate control system linked to an instrument, hence the feel of improvising along with a band is created. In Behaviourist Art and the Cybernetic Vision Roy Ascott states: “We can say that in the past the artist played to win, and so set the conditions that he always dominated the play”(Ascott 1966 2) but that the introduction of cybernetic theory has allowed us to move towards a model whereby “we are moving towards a situation in which the game is never won but remains perpetually in a state of play” (Ascott 1966 2) Although Ascott is concerned with the artist and audience interaction we can easily apply this to the artist/computer/audience interaction whereby the artist has a chance to respond to the computer input and the audience and to use this response to shape future outcomes from the computer, thus creating an ever changing cyclical system that rather than being dependant on the “total involvement of the spectator” is dependent on the total involvement of the performer.
Having worked on developing this system for two years there are still improvements to be made, although the idea to use conductive thread would have been very good from a design and comfort point of view, as it allowed components to be mounted on the glove without bulky additional wiring, the technology proved to be too unstable to withstand normal usage and when creating something for live performance it needs to be robust. It was the case with this design that something could be working in one session and not the next, and obviously if a mission critical thread was to come unraveled it had the potential to take power from the whole system rather than causing a single element not to work. Also the thread, being essentially an un-insulated wire, if not stitched carefully created the possibility of short circuits when the glove were bent in a particular way. In addition to this the switches, even when used with resistors (also made of thread) produced a voltage drop in the circuit that changed the values of the analogue sensors. Obviously this change in values will change what happens to a parameter that the sensor controls and therefore can produce very undesirable effects within the music you are making. Although the accelerometers produce usable results for creating gestural presets and manipulating parameters the method used to work out the position of the hands could be further improved by the use of gyroscopes instead. Gyroscopes are able to accurately tell the position of an object when it is not in motion where as accelerometers work best when in a state of constant motion. With a gyroscope we would be able to introduce an addition value into our gestural system, we would be able to tell the amount of rotation from the starting position, and this would allow us to use very complicated gestures to control parameters within Ableton.
The current ‘on the glove mounting’ of the components works but is in my opinion not robust enough to withstand repeated usage and so it will be important to build the gloves again using a more modular design. Currently the weak point is stress placed on soldered connections when the gloves twist or bend and even though using longer than necessary wiring helps to alleviate this it does not totally solve the problem, therefore it is necessary to create a more modular design which keeps all soldered components contained and does not subject them to any stress. The best way that this could be achieved would be to mount the Xbee, Arduino and power within a wearable box housing and have all soldered connections housed within it as well. To make sure there is no cable stress it is possible to mount screw down cable connectors in the box for two wire input sensors and three pin ribbon cable connectors for analogue sensors, in this way no stress is put on the internal circuitry and the cabling is easily replaceable as none of it is hard soldered. These cables would run between the box and a small circuit board mounted on the glove near the sensor where the other end would plug in. This also increases the durability of the project as it can be disassembled before transport and as such does not risk any cables getting caught or pulled and makes every component easily replaceable, without soldering, in event of a failure.
I would like to introduce a live ‘gesture recording’ system to the software so that it is possible to record a gesture during a live performance that can be assigned to a specific control, this would allow the user to define controls on the fly in response to what movements are appropriate at the time. However this will take considerable work to design and implement effectively as value changes must be recorded and assigned in a way that does not break the flow of the performance and although it is relatively simple to record a gesture from the gloves by measuring a change in values of certain sensors assigning these to a parameter introduces the need to use dropdown boxes within the software to choose a channel, effect and parameter and how to achieve this away from the computer is not immediately apparent. It may be possible to choose this using touchOSC when an editor becomes available for the android version of the software, but as yet this is not possible.
Further to this the touchOSC element of the controller must be improved with a custom interface which collects suitable controls on the same interface page and receives additional feedback from Ableton such as lists of parameters controlled by each sensor, the sensors current value and the names of clips which can be triggered. Using the Livecontrol API it should be possible to pass this information to a touch screen device but again without an editor being available for the Android version of touchOSC this is not yet possible. I have investigated other android based OSC software solutions such as OSCdroid and Kontrolleur but as yet these also do not allow for custom interfaces. OSCdroid however looks promising and having been in touch with the developer the next software revision will include a complex interface design tool that should allow for these features to be implemented. I will be working with the developer to see if suitable Ableton control and feedback can be achieved once this has been released.
In essence the ideas and implementations I have discussed mean that we can create an entire musical world for ourselves informed by both practical considerations and theoretical analysis of the environment in which we wish to perform. We can use technology to collect complex sets of data and map them to any software function we feel is appropriate, we can use generative computer processes to add a controlled level of deviation and permutation to our input data and we can use algorithms to create a situation whereby we must improvise and react to decisions made by the computer during the performance of a piece. We can have both total control of a musical structure and allow a situation whereby we must respond to changes being made without our explicit instruction. It is my hope that through this it is possible to create a huge number of different musical outcomes even if using similar musical data as input. The toolset that I have created hopefully allows the performer to shape their work to the demands of the immediate situation and to the audience they are playing to and opens up live computer composition in a way that allows for ‘happy mistakes’ and moments of inspiration.
As previously stated it is my hope that these new technologies can be used to start breaking down the performer and audience divide. It is possible to realize performances where the performer and audience can enter into a true feedback loop and can both influence the outcome of the work. In the future there is the potential to also use camera sensing and other technologies (when they are more fully matured and suitable for use in ‘less than ideal’ situations) to capture data from the crowd as well as the performer. The performer can remain in control of the overall structure but could conduct the audience in a truly interactive performance. This technology potentially allows us to reach much further from the stage than traditional instruments and to create immersive experiences for both performer and audience. It is this idea and level of connection and interactivity that should move electronic musicians away from traditional instrument or hardware modeling controllers and look for more exciting ways to use technology.
“All in all, it feels like being directly interfaced with sound. An appendage that is simply a voice that speaks a language you didn't know that you knew” Onyx Ashanti
Updates and more dissertation
Apologies for not updating this for a while; I've been moving house, to Berlin! I've also been finishing an EP to be released on Planet Terror.
So without further ado, the next section of my dissertation...........
Hardware Design
“Any sufficiently advanced technology is indistinguishable from magic.” - Arthur C Clarke
“One way one can attempt their adepthood in magic is to try weaving a spell without using any of the prescribed tools. Just quiet the mind and slip off into a space within your mind that belongs only to you. Cast your will forward into the universe and see if you get the desired results.” (Nicht 2001 47)
When designing my controller I looked at the idea of openhanded magic as a source of inspiration. Rather than being directly related to card tricks and illusion open handed magic is a form of magic in modern occult systems whereby the practitioner does not use tradition ritual props but uses the focus of their will in the moment to achieve the intended results. The performer must achieve some sense of gnosis and ‘at-one-ness’ for this to succeed and as we have previously explored dancing is one route to this state. As explained by Joshua Wetzel:
“Dancing This method could also be termed “exhaustion gnosis.” The magician engages in continuous movement until a trance-like state of gnosis occurs. Dance gnosis is particularly good for visions and divinatory sorts of workings, or at least that is the history of its use. However, it is apparent how it could be used in any type of magical activity. The effort to maintain continuous motion eventually forces the mind to a single point of concentration, the motions themselves become automatic and there is a feeling of disassociation from the mind. It is at this point that the magician performs rituals, fire sigils and various other magical acts. This is also a great form of “open handed magic.” You can do it in a club full of people, with dozens watching, and no one has a clue.” (Wetzel 2006 21)
As discussed earlier I feel that the dance floor has a strong ritual and tribal element associated with it and I believe that these ideas can be incorporated into the design and usage of an adaptive controller system. If the ultimate aim of the design is to interact with the audience and the “blurring of once clear demarcations between himself and the crowd, between herself and the rave” then it is possible to incorporate the ideas of ritual and ritual magick to inform the creation of your controller. Although the idea of creating something ‘magic’ is certainly in one sense that it should ‘wow’ the audience and create something novel and exciting to draw them into the performance I believe that for the performer/programmer the idea must become more abstracted. If we refer back to the earlier idea of having the space within a performance to have moments of inspiration and the room to experiment, take risks and also possibly fail and couple this with the intended purpose of the music we are focusing on, to make people dance, then surely the optimal state for creation of this is to be in the trance like state of the dancer. In the previous section I asked the question “Would they (the performer) not be more fully immersed in their own sonic landscapes if unshackled from the computer screen and became free to roam the space their sound occupies, interacting with the audience and using their whole body to feel their performance in the way the audience does?” and I believe the answer to this is to allow the performer a system of control that allows them to largely forget the mechanism of creation and to ‘feel’ what they are making by being in the same state as the dancer themselves. When looking at how to design my controller I have tried to think about this question throughout and use it as a reference when trying to ascertain the best way to incorporate a feature into the hardware and software design. The controller must be simple to use, requiring natural hand gestures, and notes must be easy to trigger and record so that the flow of the performer is not interrupted by the technology. It has taken a great amount of trial and error to reach a stage where this is possible and indeed the use and design of a controller to allow such interaction with audience and music is, by necessity, in a constant state of flux where new ideas can always be incorporated and refined to move towards the optimal playing experience. As I have previously stated this idea of a continually evolving and demand responsive controller system is the optimum state for these projects and although temporary goals can be established the performer/designer should always be looking for a way to improve and advance their work and as such it can never be described as truly ‘finished’.
It is relatively easy to build your own controller system and use it to interact with a computer and there a number of advantages in creating your own system over co-opting existing computer interface devices. With a basic knowledge of electronics it is possible to create anything from a simple input device to a whole new instrument. Using an interface such as the Arduino you can simply, and with minimal processor load, send analog and digital signals to your software and there are a huge number of sensors on the market that you cannot find in a pre made solution and making your own controller allows a novel approach to the capture of data. The traditional computer controller model of interface relies on pushing buttons to input data and thus even when using a modern controller such as the Wii-mote we are still tied into this idea of physical buttons as the main input device. Other devices such as the Kinect although allowing gestural input only work under specific lighting and placement conditions which would make it largely unsuitable for use in a live performance environment. If we build our own system it is possible for us to use a vast number of different devices such as bend and pressure sensors or accelerometers to receive input. This approach allows us to fully incorporate the idea of gestures to manipulate music as it does not rely on you tapping a key but rather invites you to use your whole body. As previously stated with the controller I wished to design I did not wish to copy or model traditional instruments but rather to create a unique interface with a distinct playing experience to take advantage of the many controls available to us to manipulate. To get the most from the custom controller experience we must develop our own language to interact with computers and the music being made.
In designing a physical controller it is important to think about what you intend to use it for and what controls you need. Do you just need switches or do you need analog control values that you can use to, for example, turn a knob or move a fader? Do you want your controller to play like a traditional instrument or to have a totally non-traditional input method? With my project it was important to have a number of analog controllers as well as digital switches and also some kind of control for moving through the live interface was required, this meant that I added a touchOSC component to my project for feedback and control of Ableton’s midi map triggered features, this allows you to trigger clips and manipulate controls all without having to look at the computer. In my project only the hands contain sensors and the feet perform basic functional software control, which are also replicated on the touch screen device, allowing the performer total freedom of movement. Being free from the computer allows the performer to more fully enter into the flow of the music and to, for example, dance whilst creating. In this aspect my controller is attempting to remove itself from a more traditional model of playing music where you would have to think about the placement of an instruments, your hands on the keys and so on. As my project is particularly focused on creating electronic ‘dance’ music, which has little link to traditional instruments, it seems counter productive to produce something which models itself upon a traditional instrument as in the setting of a live performance this would look misplaced.
Rather than create a system where the user has to hold a controller my system is built entirely into a set of gloves and as such one simply has to move their hand to affect change in the music. The hardware has gone through a number of revisions to find the best setup to compliment my workflow. Initially I used available ready made sensors to create my gloves, and whilst these made for a relatively simple construction they presented a serious set of problems regarding connections to the gloves, keeping the sensors in place and not putting stress on the weak points of their construction. Many commercially available sensors are designed to be used in a static setup where once mounted they are not moved, however when making something such as a pair of gloves it must be recognized that there will be a large amount of movement and that actions as simple as putting on or removing the gloves may produce unwanted stress on connections that may break or impair the functionality of the system.
Over the development time of my project technology has become available that allows you to make bend sensors, pressure sensors and switches out of conductive material. This creates a distinct advantage over traditional sensors as they are more durable, easier to wear and very simple to fix and replace. Conductive thread has, in theory, made it possible to create a controller with less physical wiring, the wires can be sown into controller, are flexible and do not restrict movement and are more comfortable for the user. I initially remade my project using this technology, however this technology also has drawbacks that only become apparent after a period of usage and have meant that it was unsuitable for this project. A prototype version of the gloves were made using conductive thread rather than wiring and although this initially worked it was found that stretching and compressing the thread in a vertical direction lead to it unraveling. As the wire functions in the same way as a multicore wire when the thread is not tightly wound together you get a loss of signal, initially I sought to counter this problem by covering the conductive thread in latex but as this seeped between the strands of the thread this also lead to a loss of signal. This conductive thread technology is certainly useful in some situations however when used on a pair of gloves the amount of stretching required to get them on and off means that the thread breaks very quickly. However it is still used in the project to connect between circuit boards and the conductive fabric fingertips of the gloves and between circuit boards and the analog sensors in places where there is not a great amount of stress placed on them.
The analogue sensors are also made from conductive material and this has the advantage of making the sensors easily replaceable if broken and easy to fine-tune the output values and sensitivity. The bend sensors on the fingers are made using conductive thread, conductive fabric, velostat and neoprene. By sewing conductive thread into two pieces of material and sandwiching layers of velostat between them you can easily create a sensor which is simple to adjust the sensitivity of as this is determined by the number of layers of velostat between the conductive thread, a sensor made this way also has the advantage that it can easily be mounted on the gloves via stitching. These sensors also can be made to look almost any way you desire, in the case of my project simple black circles, and as such they are in keeping with the idea of open handed magic where the actual method is partially obscured from the audience but easy to use and understand for the performer. The switches in the gloves are also made in a way that removes the need for any wiring, electronics or unwieldy physical switches. Using conductive thread it is possible to create a switch that can be closed by applying a voltage across it and this greatly simplifies the construction of the gloves as only one positive terminal is needed, in this case placed on the thumb, thus the switches are constructed by wiring a ground and input wire to each finger and are closed by touching the finger and thumb together. This natural gesture requires no learning on the part of the user and we can, for example, use each switch to trigger a drum hit or play a key on a synthesizer as well as performing more command based functions if required. I have taken the approach of making the switches on both hands produce midi notes (one for each whole tone in an octave with an extra c of the octave above on the last finger of the right hand and a foot pedal to sharpen/flatten the notes) as this yields the most natural playing experience, but it is possible to program these switches to provide other controls is required.
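A small sketch of how the finger switches might map to notes, as described above. The choice of octave (starting at middle C, MIDI note 60) and the reading of "one for each whole tone" as a C-to-C run are my assumptions, not values taken from the gloves.

```python
# Eight finger switches (closed by touching finger to thumb) mapped to a
# C-to-C run, with the foot pedal sharpening the held notes by a semitone.
SCALE_NOTES = [60, 62, 64, 65, 67, 69, 71, 72]   # C D E F G A B C' (assumed)

def notes_from_switches(switch_states, pedal_sharp=False):
    """switch_states: eight booleans, left-hand fingers then right-hand."""
    offset = 1 if pedal_sharp else 0
    return [note + offset
            for note, closed in zip(SCALE_NOTES, switch_states)
            if closed]

# Left index and right little finger touched to the thumbs, pedal down:
print(notes_from_switches([True, False, False, False,
                           False, False, False, True], pedal_sharp=True))
# -> [61, 73]
```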
My controllers use accelerometers in each hand to work out the position of your hands, this allows us to seamlessly change between parameters being controlled. For example if your right hand is held at a 45 degree angle the accelerometer can function to control a cut off filter within your music software, however if you tilt the right hand further to 90 degrees the functionality of the left hand can change and could instead be used to control the volume of a part or the length of a sample. As we can produce accurate results with these sensors we are able to build a huge amount of multifunctionality into a very simple control system. Positioning of the hands is very easy for the performer to feel without the need for constant visual re-assurance and this contributes to the ease of use of the system.
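The tilt-dependent behaviour described here can be sketched as a simple lookup: the right hand's angle selects which parameter the left hand currently drives. The thresholds and parameter names below are illustrative assumptions only, not the mappings used in the actual device.

```python
# Sketch of tilt-based mode switching: the right hand chooses the target
# parameter, the left hand supplies the value for it.
def select_parameter(right_tilt_degrees):
    if right_tilt_degrees < 45:
        return "filter_cutoff"
    elif right_tilt_degrees < 90:
        return "sample_length"
    return "part_volume"

def control_value(left_tilt_degrees):
    # Map the left hand's 0-90 degree tilt onto a 0.0-1.0 parameter value.
    clamped = max(0.0, min(90.0, left_tilt_degrees))
    return clamped / 90.0

right, left = 50.0, 30.0
print(select_parameter(right), round(control_value(left), 2))
# -> sample_length 0.33
```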
I have also incorporated multi colored LED’s into the hand for visual feedback, by using three color LED’s we have a huge variety of potential colors to choose from which can indicate function, and therefore we also cut down on the amount of wiring needed to manage this and space used on the glove. There are three of these LED’s mounted on the gloves, two represent feedback from the notes played and change color corresponding to the instrument chosen and the third is used as a metronome so it is easy to record sections in time with the computers tempo setting and gives the performer visual feedback for their timing.
By using Xbee radios in conjunction with the Arduino and sensors we are able to unwire ourselves from the computer completely. This of course simplifies the use of the controllers as it does not matter where the performer is in relation to the computer and for my project this is vitally important to my core idea of ‘open handed magic’ and audience interaction. The most obvious disadvantage of using wireless communication is increased complexity of setup. To get the xbees to talk to one another over a meshed wireless network is not a simple task and Arduino code that works when the unit is plugged in via USB does not necessarily work when passed over a serial radio connection. For example the Arduino2Max code, available online, is a simple piece of programming that allows the Arduino to pass results from each of its inputs to max/msp. However this does not work when Xbees are introduced as the data being reported by the serial.print function floods the buffers of the xbees and means that data is only reported once every ten seconds or so. Obviously as we are aiming for a system with as low latency as possible this situation is unacceptable and another means of passing the data must be sought. In the case of my project this meant the Firmata system which can be uploaded to the Arduino and which communicates data to the computer by the use of the OSC protocol. Although the code for this system is much more complex than Arduino2Max the results that it produces are far more accurate and do not result in any appreciable latency. However to get this to work in the way I required demands a greater level of coding knowledge for both the Arduino and Max/MSP and messages are passed to and from the serial port using more complicated OSC messages and must, for some functions, be translated into a format that max understands to create usable data. Using series 2 Xbees also creates an additional problem in that they are designed for more complex tasks than serial cable replacement, as such part of their standard behavior is to continually seek nearby nodes that they can connect and pass information to. Through extensive testing and research I have found that if this mode was utilized the stream of data from the gloves to the computer and visa-versa was often delayed by seconds at a time, as the xbees seem to prioritize data integrity over timing. However it is possible to quickly bypass this by setting the xbees to only look for a specifically addressed endpoint and this seemed to solve inconsistent timing issues. There is a distinct advantage to using the Firmata/OSC based communication and that is that if there is a dropout from the controller the flow of data will resume when the connection is restored. I.e. if the battery runs out and wireless communication is lost when a new battery is used the wireless communication is resumed and the data will also resume being seen in max/msp. This does not occur with more simple codes and therefore using this more complex system provides a level of redundancy to our hardware that allows us to continue performing without the need to reboot the computer or software.
When powering an Arduino over USB you do not need an additional power source as the USB bus can provide what is needed to run your sensors, however when using wireless you must include and external power source, this must be powerful enough to provide the correct voltage for the Arduino, wireless module and sensors and must have a long enough battery life to not run out mid performance. This obviously increases the size and weight of the controller and if you are using conductive thread it is important that the power source is placed in close proximity to the most high voltage mission critical elements of the project. This is because conductive thread has a resistance of 10 ohms per foot (i.e. one foot of conductive thread is equal to a 10 Ohm resistor) and therefore you lose power from your source the more thread is placed between it and your components. However if traditional wiring is used this becomes less of an issue. Li-Po batteries were chosen for this project due to their high power output and quick recharge time, one must be aware though that they must not be discharged under 3 volts and that if the packaging is damaged the batteries are liable to expand and potentially become unstable, therefore care must be taken to ensure that they are looked after properly when used. These batteries clearly offer the most potential for a system like this however as they allow somewhere in the range of 1000 − 3000 mAh to be output, this is more than enough to power the lilypad, xbee, sensors and lights for a long duration. Originally I had looked at using AAA batteries and although these powered the system on they ran down very quickly and with some sensors produced a voltage drop that would reset the Arduino and cause unreliable operation.
NEC C&C Foundation
C&C Prize
2012 Recipients of C&C Prize
Prof. Hisashi Kobayashi
The Sherman Fairchild University Professor Emeritus of Electrical Engineering and Computer Science, Princeton University; Senior Distinguished Researcher, the National Institute of Information and Communications Technology (NICT)
For pioneering and leading contributions both to the invention of high-density and highly reliable data recording technology and to the creation and development of a performance-evaluation methodology for computer and communication systems
Partial response, maximum likelihood (PRML) is a signal-processing and decoding method applicable to hard disks, optical disks as well as data communication systems. Both PRML and performance-analysis technologies for computer and communication systems have been important in the advancement of information technologies. Prof. Kobayashi made significant contributions to the invention and development of the fundamental principles behind them by creating innovative methodologies utilizing advanced communication theory and mathematics. Prof. Kobayashi received his Ph.D. from Princeton University in 1967 and then went to work as a researcher at IBM's Thomas J. Watson Research Center, where he worked on high-speed data transmission and high-density, high-reliability magnetic recording. In a band-limited channel, the interference between signals of neighboring digital codes (called inter-symbol interference) becomes very pronounced as the data transmission speed increases. In those days, the development of an automatic equalizer to reduce inter-symbol interference was a major issue for high-speed data transmission. Dr. Kobayashi proposed a novel method to recover data with drastically reduced error rates, even in the presence of inter-symbol interference. Dr. Kobayashi and Dr. Donald Tang pointed out in 1968 that high-density magnetic recording is mathematically equivalent to high-speed data transmission. Therefore, they proposed applying partial response channel coding, which improves bandwidth utilization by allowing inter-symbol interference. In addition, Dr. Kobayashi found that the maximum likelihood method devised by Prof. Andrew Viterbi as a method for decoding convolutional codes was applicable to decoding partial response signals. As a result, in 1970, Prof. Kobayashi verified by analysis and computer simulation that a combination of the above two methods enabled a significant improvement in the density and reliability of magnetic recording. Prototyping and experimentations of his invented schemes were carried out by a research group of IBM Zurich Research Lab. The resultant method for achieving high density and high reliability in digital recording devices through the application of advanced communication theories came to be known as PRML. It has been adopted in almost all magnetic as well as optical recording storage and memory since IBM released the 5.25-inch hard-disk drives (HDD) in 1990. For this contribution, Prof. Kobayashi received, together with Dr. Francois Dolivo and Dr. Evangelos Eleftheriou of IBM Zurich Lab., the 2005 Technology Award from the Eduard Rhein Foundation of Germany. In 1971 Dr. Kobayashi was appointed Manager of the then newly established System Measurement and Modeling Group at the Thomas J. Watson Research Center. He played a leading role in the research of analytical methods for computer performance evaluation and prediction. In the early 1970s, researchers paid a great deal of attention to application of Markovian queuing network models for the performance evaluation of computer systems. Dr. Kobayashi applied a diffusion process approximation to a non-Markovian queuing network model as an analytic technique to evaluate a multi-programming system with virtual memory. Moreover, he pointed out that the diffusion approximation method was very useful for the analysis of multiple-access communication systems as well. A computationally efficient algorithm for the normalization constant is important for performance analysis that adopts a queuing network model. Dr. 
In 1971 Dr. Kobayashi was appointed Manager of the then newly established System Measurement and Modeling Group at the Thomas J. Watson Research Center, where he played a leading role in research on analytical methods for computer performance evaluation and prediction. In the early 1970s, researchers paid a great deal of attention to the application of Markovian queuing network models for the performance evaluation of computer systems. Dr. Kobayashi applied a diffusion process approximation to a non-Markovian queuing network model as an analytic technique to evaluate a multi-programming system with virtual memory. Moreover, he pointed out that the diffusion approximation method is also very useful for the analysis of multiple-access communication systems.

A computationally efficient algorithm for the normalization constant is important for performance analysis that adopts a queuing network model. Dr. Kobayashi developed practical computational algorithms based on convolution algorithms and the Polya theory of enumeration, and made theoretical contributions to the first software packages for performance analysis, QNET4 and RESQ, which were developed by Dr. Martin Reiser and other members of Dr. Kobayashi's group. These results influenced the computer performance community worldwide; since the mid-1970s, performance analysis techniques using queuing network models have largely been built on the methods pioneered by Dr. Kobayashi and his associates.
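As an illustration of a convolution-style computation of the normalization constant mentioned above, the following sketch handles a single-class, product-form closed network with fixed-rate service centres. The relative utilisations and population are made-up numbers, and the formulation follows the standard textbook single-class recursion rather than any specific QNET4 or RESQ code.

```python
def normalization_constants(rel_utils, N):
    # Convolution algorithm for the normalization constants G(0..N) of a
    # closed, single-class, product-form queueing network whose service
    # centres are load-independent.  rel_utils[m] is centre m's relative
    # utilisation (visit ratio times mean service time); N is the number
    # of customers circulating in the network.
    G = [1.0] + [0.0] * N              # network with no centres folded in yet
    for rho in rel_utils:
        for n in range(1, N + 1):      # fold one more centre into G
            G[n] += rho * G[n - 1]     # g(n,m) = g(n,m-1) + rho_m * g(n-1,m)
    return G

# Example with assumed numbers: three service centres, five customers.
G = normalization_constants([0.4, 0.6, 0.9], 5)
throughput = G[4] / G[5]               # X(N) = G(N-1)/G(N) at the reference station
print("G:", G)
print("reference throughput:", round(throughput, 4),
      "utilisation of centre 3:", round(0.9 * throughput, 4))
```

Once G(0) through G(N) are available, throughputs, utilisations, and mean queue lengths all follow from simple ratios, which is what made this family of algorithms practical for the early performance-analysis packages.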
For these contributions, he received an IBM Outstanding Award in 1975, was elected a Fellow of the IEEE in 1977, and received a Humboldt Prize in 1979. In 1978 he published an authoritative textbook in this field, "Modeling and Analysis: An Introduction to System Performance Evaluation Methodology" (Addison-Wesley). He also served as the founding Editor-in-Chief of the international journal "Performance Evaluation" (North-Holland/Elsevier) from 1980 to 1986. He has thus made significant contributions both to the establishment of this field of research and to the dissemination of its knowledge.

He served as Dean of the School of Engineering and Applied Science at Princeton University from 1986 to 1991 and has served as an advisor to a number of universities and research organizations around the world. He has also played a pioneering and leadership role as a Japanese researcher in this age of globalization. His research accomplishments in high-density recording technology for storage devices and in performance evaluation methodology for computer systems were major factors in the advancement of research in information and communications technology. Thus, the NEC C&C Foundation highly praises Prof. Kobayashi for his worthy contributions to the advancement of research and development.
2014-23/2666/en_head.json.gz/12994 | With XP End of Life, Microsoft Asks Holdouts: How Badly Do You Want XP?
47 comment(s) - last by LRonaldHubbs.. on Apr 11 at 12:16 PM
If you're willing to pay more and more, Microsoft might extend support a year or two
Ding, dong, Windows XP is dead. After nearly a decade and a half kicking around as the world's most used operating system, Microsoft announced today that it was finally pulling the plug on support as promised.
But that ultimatum was softened by Microsoft's concession that for a few select enterprise and government users worldwide, it would continue to support the dying platform, IF they paid massive fees.
Some have already committed to that offer. The UK's government offered up £5.5M ($9.1M USD) in British taxpayer money (on top of existing bulk enterprise licensing fees) to provide one additional year of support on Windows XP. It is estimated that the deal will cover a couple million machines at UK government agencies. The UK government claims £20M ($33.2M USD) in taxpayer money will be saved by a more gradual transition away from the aging platform.
The Netherlands government entered into a similar multimillion dollar contract to cover 34,000 to 40,000 aging Windows XP machines. These deals will cover the costs of providing additional security updates to Windows XP, Office 2003, and Exchange 2003.
Some aren't quite ready to "Turn Off" their beloved Windows XP.
Regardless of how taxpayers and tech observers feel about these deals, one thing's for sure -- they're a win-win for Microsoft. If users upgrade to Windows 7 or Windows 8, Microsoft will score licensing fees. If they don't upgrade, they will have to pay a quickly escalating ladder of premium fees, boosting Microsoft's profits.
Estimates of Windows XP's market share vary wildly and are skewed by certain segments that have seen higher upgrade rates -- or alternatively slower progression.
For example, recent desktop PC statistics suggest Windows XP is barely behind Windows 7, with over 40 percent of desktop PCs running it. Likewise, an estimated 95 percent of ATMs are thought to be running XP.
Kurtis Johnson, an "ATM expert" at U.S. ATM-maker Triton tells CNN Money:
This isn't a Y2K thing, where we're expecting the financial system to shut down. But it's fairly serious.
He argues that the high rate of ATM holdouts may leave customers vulnerable if hackers use malware to attack the machines. The financial service sector has been slower than most to upgrade, not necessarily because it dislikes Windows 7 or Windows 8. Quite to the contrary, many have expressed enthusiasm for these platforms. However, they simply were unable to justify the costs, as it can cost between $1,000-3,500 USD to upgrade an ATM given the necessary modifications to the hardware and software and the expert support needed.
ATM makers have until 2016, or in some cases 2019, to get their machines off Windows XP.
[Image Source: funnpoint]
The "problem" of Windows XP on ATMs may be somewhat overstated, as most run Windows XP Embedded, a product which Microsoft plans to provide ongoing support for until 2016. Additionally, some SKUs of Windows XP Embedded will receive support all the way until 2019, as they were released later. Hence in the banking sector Microsoft understands the difficulties upgrading and won't be pulling the plug too soon, although there may be some odd exceptions.
Globally it is estimated that 25-30 percent of PCs (including laptops) are running Windows XP.
The shuttering of support is most dangerous for individual consumers and small businesses clinging to Windows XP. Despite media coverage, a significant percentage of both groups don't even realize they're running a dead platform, or the danger they may be subjecting themselves and their businesses to by not upgrading. To that end, Microsoft has released a tool to let customers know if they're running Windows XP, in case going into the Control Panel proves too technically challenging. It has also offered up to $100 USD in discounts to customers trading in Windows XP PCs.
Sources: Microsoft [TechNet], The Guardian, Webwereld [translated via Google], MSDN Comments Threshold -1
RE: fixes
marvdmartian
The problem with W7 is, I've heard that MS doesn't plan on supporting it for very long (no doubt, just another "reason" they're giving consumers, to push them toward W8). So W7 may not be much of a solution at all, either.

The problem is, many of these companies and governments are still running software that was designed for XP, and, for whatever reason, won't run on Vista or 7 (that has to be some seriously messed up software, IMHO!). Since they don't have a replacement for those programs (or don't want to replace them with something new, further incurred cost), they've stuck with XP.

As far as consumers go, I have my main computer as a W7 set up, and I'm really not interested in learning what's basically a new operating system, in Windows 8. That's why, when I recently bought a new tablet to travel with, I went with Android OS. Chances are, when W7 is no longer supported, I'll make the jump to Linux, as I believe it will be easier for me to transition to.

And I really won't be surprised if Microsoft loses some business customers that way too. Great job they've done! Parent
chripuck
They don't support old versions indefinitely due to deprecated code. Many of the fixes for XP are for vulnerabilities that don't exist in Windows 7/8 due to major kernel revisions since then. Sure, some overlap, but many do not. That means to patch these versions of the OS you need to keep a running stable of employees skilled with that version of the OS, even though the OS is rapidly falling out of use. They aren't a charity and code doesn't magically fix itself. Parent
PsychoPif
quote: I'm really not interested in learning what's basically a new operating system, in Windows 8. That's why, when I recently bought a new tablet to travel with, I went with Android OS. That's some pretty good logic... Parent
Since I had previously used a Kindle Fire HD, which runs Amazon's slightly crippled version of Android, it really wasn't much of learning curve to continue with a full blown version.Learning Windows 8 (or 8.1) would have been much steeper of a learning curve.Sorry if I didn't make that clear enough before. Parent
ilt24
quote: The problem with W7 is, I've heard that MS doesn't plan on supporting it for very long

1/14/2020 is the current end of support date for Windows 7
1/10/2023 is the current end of support date for Windows 8

From the "Microsoft Support Lifecycle Policy FAQ" page:

quote: Business and Developer products
Microsoft will offer a minimum of 10 years of support for Business and Developer products. Mainstream Support for Business and Developer products will be provided for 5 years or for 2 years after the successor product (N+1) is released, whichever is longer. Microsoft will also provide Extended Support for the 5 years following Mainstream support or for 2 years after the second successor product (N+2) is released, whichever is longer. Finally, most Business and Developer products will receive at least 10 years of online self-help support.

Consumer and Multimedia products
Microsoft will offer Mainstream Support for either a minimum of 5 years from the date of a product’s general availability, or for 2 years after the successor product (N+1) is released, whichever is longer. Extended Support is not offered for Consumer and Multimedia products. Products that release new versions annually, such as Microsoft Money, Microsoft Encarta, Microsoft Picture It!, and Microsoft Streets & Trips, will receive a minimum of 3 years of Mainstream Support from the product's date of availability. Most products will also receive at least 8 years of online self-help support. Microsoft Xbox games are currently not included in the Support Lifecycle policy.

support.microsoft.com/gp/lifepolicy Parent
GodMadeDirt
Always chuckle at the "moving to Linux" crowd. This never happens. Parent
LMAO, I love hearing these doomsday predictions about Linux. No, Linux isn't big on the desktop. But it's conquered just about every other market soundly. Android phones all run Linux. All of Facebook runs on Linux. All of Google runs on Linux. All of Netflix runs on Linux. All of Amazon runs on Linux. You know... services that most people use every day of the week. Lol. Everyone has moved to Linux already... only you haven't realized it yet. Parent
OoklaTheMok
Citing that businesses use Linux based OSes to run their services is not the same thing because it has zero impact on what the end user does on a day to day basis. The services could be running on Windows or OSX and the user wouldn't know the difference. Parent
Agreed, that it is a different market with the consumer. The fact is that Apache, (along with PHP and MySQL) runs 60% of the world's websites, which you might be oblivious to if you've only used IIS/ASP.Net at work as many have, and which give you a distorted view of Microsoft's power. It is much easier to work with for the Unix crowd because Apache is built in (which does actually include those that use OSX as most non MS web-designers/developers seem to)I think *nix based devices have only been very successful in the consumer space when they have abstracted all the complexities of the OS behind a shiny exterior and where they don't need to maintain compatibility with the desktop equivalents. E.g. even iOS is a completely different app model to OSX and Android is likewise different to Chrome OS. MS attempted this with Windows RT. Very few people will be looking in the filesystems of these devices, but if they do, it is nice that they will see a standard *nix layout of folders rather than some proprietary MS arrangement.Coming to Unix having worked with MS systems throughout my working life, I found it utterly terrifying, but really it is something we shouldn't put out heads in the sand about, and I'm glad I took the time to learn about it and conquer my fear. I now use OSX as I find it a nice environment for doing most regular tasks, but powerful enough under the hood if you need to use Terminal, PHP and MySQL and the like. I still use Windows only when I need Visual Studio, Project or Visio in a VM, it's not a case of all or nothing. Parent
Report: Windows XP Still Running on Over 25 Percent of PCs April 1, 2014, 2:08 PM
Microsoft Wants Quick Death for Office 2003, Leads Users to Office 365
Microsoft Will Give You $100 to Get Rid of Your Windows XP PC
Microsoft Launches Site to Tell Clueless Customers if They're Running XP | 计算机 |
2014-23/2666/en_head.json.gz/13201 | Blizzard's Titan project 'dead in the middle of development'
Blizzard still isn't really talking about its top secret MMO project, currently code named Titan, but that hasn't stopped a few details from slipping out.
After jokingly acting oblivious to the project, Rob Pardo revealed that Blizzard is "definitely dead in the middle of development". For many, that could mean numerous things, but Pardo confirmed that there are now over 100 people working on the game.
"Titan's still moving along," Pardo said during an interview with Curse. "I don't want to get anyones hopes up that it's around the corner or anything."
"Its a very big project. It's got a long way to go," he admitted. "I don't know yet when we're really going to start releasing more information. We're definitely dead in the middle of development at this point. I think we're over 100 people now on the team working on it."
When asked about Titan's rumored six year development, Pardo responded, "I guess it depends on how you look at such things."
"When we first start a team, we start it really, really small," he clarified. "Like we might start with just a couple people and we just talk about the concepts; we draw some concept art."
He did set the record straight, however. "It definitely has not be in core development for that long. I'd say core development maybe more closer to four years. Even that was with a smallish team."
With World of Warcraft seemingly losing steam, Blizzard might want to speed up the production of Titan, especially after analysts described the sales of Mists of Pandaria as "disappointing".
Tags: Blizzard, Titan | 计算机 |
2014-23/2666/en_head.json.gz/13863 | Definition of:page ranking
page ranking
The priority given to the placement of a link on the results page of a Web search. For example, Google's PageRank system, named after co-founder Larry Page, classifies a site based on the number of links that point to it from other sites (the "backlinks"). The concept is that if very prominent sites link to a site, the site has greater value. The more popular the backlink sites themselves are, the higher the ranking as well.
In addition to Google's PageRank, the content and organization of the pages on the site are criteria used in the page ranking algorithm. Google claims its algorithm uses more than 200 factors to determine the value of a page. The amount of buzz on social media also plays an important part.
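As a rough illustration of the backlink idea, a toy power-iteration version of PageRank can be written in a few lines. This is a simplified sketch only, not Google's actual algorithm or its 200-plus ranking factors; the damping factor, iteration count, and the four-page link graph below are arbitrary assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    # Toy PageRank by power iteration.  `links` maps each page to the pages
    # it links to; a page's score is split evenly across its outbound links,
    # and the damping factor models a surfer who sometimes jumps at random.
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page in pages:
            targets = links.get(page, [])
            share = rank[page] / (len(targets) or len(pages))
            for t in (targets or pages):   # dangling pages share rank with everyone
                new_rank[t] += damping * share
        rank = new_rank
    return rank

# Hypothetical four-page web: the page that popular pages point at ranks highest.
demo = pagerank({"a": ["c"], "b": ["c"], "c": ["a"], "d": ["c", "a"]})
print(sorted(demo.items(), key=lambda kv: -kv[1]))
```

In this small example, page "c" ends up on top because every other page links to it, which is exactly the "votes from popular backlinks" intuition described above.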
The Algorithm Changes
Because countless Web sites are laden with unnecessary content in order to boost their ranking, the search engine formulas are often changed to prevent deceptive content from achieving permanent high ranks. See link juice, search engine optimization, Google bomb and social bookmarking site.
Page Ranking Technology
Although the Web page ranked number 3 may have much more useful information than the one ranked number 1, search engine software cannot really tell which is the superior Web site from a quality perspective. It can only know which ones are popular, and link swaps (you link to me - I link to you) are created to do nothing more than make pages popular. | 计算机 |
2014-23/2666/en_head.json.gz/14109 | (not yet rated) Rate this
Regrets of the Last Decade
By Steve Jones, 2013/12/16
Microsoft was never an exciting company to me as a kid. I grew up excited by Commodore with their GEOS, early Atari computers and consoles, and the original Macintosh computer. I was fascinated by Sun and Silicon Graphics workstations, running X Windows across multiple machines. The NeXT workstations made early Windows machines seem primitive. When I first worked with SQL Server, it felt like a half-finished product that wasn't stable on OS/2. Microsoft felt like a pedestrian company that was good for my career, and was trying to grow, but they weren't exciting.
Some of that changed in the late 90s as Microsoft embraced the Internet and helped it grow with the ActiveX system. For better or worse, ActiveX seemed to bring life to web sites in a way that Java and other techniques couldn't. We could argue about the damage that systems like ActiveX and Visual Basic 6 did in terms of application performance and maintainability, but without a doubt these tools caused an explosion of development and no end of work for those of us in technology. Developers, developers, developers was exciting and was a great mantra. I believed that Microsoft had really hit upon the key when I heard that. Woo developers, help them build applications, and let the excitement grow. I watched SQL Server mature with it's expansion into BI areas. I saw Microsoft create the XBOX platform, and build a much better Office Suite than I'd had with Lotus, AmiPro, and others. Microsoft had a time during my career when I thought they were really going to dominate the world of computing.
However, I wasn't thrilled when Steve Ballmer took over. He wasn't a geek, and wasn't one of us. Slowly I felt that the company lost its way. Mr. Ballmer talks about some of his regrets in a piece this week, one of which was Longhorn. That project dragged on for years and became the debacle that was Windows Vista. During this same time, SQL Server dragged, taking 5 years to get SQL Server 2005 released. Office seemed to stagnate, offering little reason to upgrade, other than because others were upgrading. The company felt lost, and paled in comparison to the excitement generated by Google and then Apple during the last 12 years.
As I read the quotes and thoughts in the piece, I have to admit that Microsoft really has quietly advanced. They have had successes and lots of growth, even if the stock price hasn't skyrocketed. They've invested in platforms, research, and technology, internally and through acquisitions, that may help the company maintain its position as a technology leader for some time. For every misstep like aQuantive, Danger and Ray Ozzie, there are advances like Azure and Dynamics. Windows has continued to grow inside enterprises and I rarely see the complaints over scale and capabilities that I remember from early in my career.
Were the Steve Ballmer years a success or failure for Microsoft? I think I'd call it a maintenance time. Like the manager he is, I think Mr. Ballmer managed the company, without ruining it, but without creating much excitement either.
2014-23/2666/en_head.json.gz/14152 | Apr 27, 2006 (03:04 PM EDT)
MySQL Launches Community Development Site
MySQL AB on Thursday unveiled a community site for users and developers to discuss, collaborate on and share code and applications for the company's namesake open-source database.
MySQL Forge gives members the opportunity to share articles and tutorials through the site's wiki, to post and maintain projects in its directory, and to offer sample code snippets in its repository. "The foundation of the open source development model is the rapid creation and sharing of different solutions within an open, collaborative environment," Kaj Arno, vice president of community relations for the Swedish firm, said in a statement. "Built for the community and grown by the community, we anticipate that MySQL Forge will become a significant resource for MySQL-related development, providing value to developers and building a stronger network between developers and users."
In addition, the company unveiled support within MySQL for Ubuntu, a version of the Linux operating system. At the MySQL Users Conference in Santa Clara, Calif., officials outlined the new partnership and technology collaboration between the Ubuntu project, sponsored by Canonical Ltd., and MySQL AB.
The MySQL database is often used in open source versions of business intelligence applications. | 计算机 |
2014-23/2666/en_head.json.gz/14501 | Home » MIDRANGE-L » July 2008
Re: Modernization and multi-member files
Subject: Re: Modernization and multi-member files
From: Tommy.Holden@xxxxxxxxxxxxxxxxxxxxx
you pay the attendance fees, air fare and expenses....i'll come and talk to the "experts" about DB2/400...anyday
Tommy Holden
"Dave Odom" <Dave.Odom@xxxxxxxxxxxx>
<midrange-l@xxxxxxxxxxxx>
Great detailed technical write up. Some things:
"Although some DB2 "purists" do not like to admit it, the IBM System/38 relational database that was built into CPF was shipped at least two years before DB2 became available for mainframes (in June of 1983). (IBM's SQL/DS was announced in 1981 and was not delivered until 1982.) The IBM System/38 is widely accepted as the world's first commercially available relational database product. (The IBM System/38 was announced in 1978 and delivered in 1980.) These facts are well documented in the literature."
"What you say is true, about who claimed to be first and who claimed to be a relational database, BUT, it was NOT true the S/38 was accepted in the market place, even inside IBM, as a true RDBMs and the first commercial implementation. In IBM presentations, created by IBM and given by me in the late '80s/early '90s, yes by the mainframe side, SQL/DS was said to be the first commercially available RDBMS. Yes, I spouted the dogma as well. So, often what you believe to be the gospel is whatever you've been taught unless you see all the environments and can think for yourself.
Yet you hold up "DB2 on the mainframe" as if it was the "definitive" version DB2 -- yet on the mainframe, you must code JCL to create a "stored procedure" and when the stored procedure gets invoked, it submits a batch job -- how quaint is that?"
As far as most of the world is concerned, DB2 on the mainframe is just that, the world standard for world-class RDBMS. I wouldn't use wiki anything for a proof as its opinion based by its users.
"which are carved out of (guess what?) FLAT FILES!" The same as Physical "flat files" in DB2/400; can't get away from the basic file structure for any platform but the implementation and what can access... now there's the important point.
(That's the part that is the security concern -- if you could directly access those FLAT FILES, you would be bypassing all of the built-in security and integrity etc. of the database. But that's NOTHING LIKE how DB2/400 works -- at all.) Oh??? Seems to me, in the i, if you have security access to the file you can get in and look at the data from all kinds of back doors; some without SQL. Whereas, certainly on the mainframe, IF you have security access to the base file at all, go ahead and look at the file; it looks like junk. You can't read it, not EVEN WITH AN ISAM UTILITY. Why? Because you're not going through the engine and using SQL. Not so on the i.
"You mention "single level storage" in some of your posts. All DB2/400 tables and views (physical and logical files) are implemented as various MI objects (data spaces, data space indexes, cursors, etc.) that reside in the single level storage -- so the pages of these objects can reside anywhere on any DASD volume within the containing ASP. There is no need to worry about or allocate 'table space" or worry about what DASD volumes the table space must reside on -- OS/400 or i5/OS takes care of all of that automatically, as part of single-level storage. (NOTE: This is a FAR CRY from a database built on top of "flat files.)"
Well, you are built on flat files even though you like to call them "objects". BUT, I will acknowledge that, to some extent, single level storage is very nice and beneficial. However, not when it comes to repairing a portion of a database; you got to go through the whole database restore process unless something has changed I don't know about. Not so in DB2 mainframe, you can restore parts as it is NOT single level storage. This also promotes you being able to backup or restore some tables and tablespaces without bringing down the whole database. Now, if I remember correctly you can do something like this in DB2/400 if your database is segmented by ASP. If you do that, then you've done the same sort of thing mainframe DB2 has done. But, that sounds OK to me; makes them very similar in that one respect.
"The folks in Rochester realized when they created CPF and its built-in database, that most of their existing customers were coming from a System/3, System/32 or System/34 background, and they were used to "flat files" and using languages like COBOL or RPG. So, they did something very clever -- they provided a way to access the "database" via "native" I/O statements (READ, SETLL, READE, etc.) which made the transition much easier."
Makes good business sense at the time and would now if most of your market share are small companies. But, if the i5 wants to play in the big league (where I think it can play to a large extent), it needs to grow up and offer another more sophisticated alternative like mainframe DB2. Ahhhhh, now we may be getting at the real reason why the i5 doesn't implement DB2 like the real DB2s and why the i5 is not considered by large shops... IBM doesn't want Rochester and Santa Teresa/Toronto to really compete. Just a thought.
"This in no way undermines the integrity of the database, because, at the MI level, ALL access, whether through SQL, or through "native" I/O statements, must go through the exact same database access routines in the operating system, and those are using the built-in MI objects (data spaces, data space indexes, journals, journal receivers, and cursors, etc.) with special MI instructions that work with those object types."
Yes, but still seen as a relational-like DBMS, not relational where SQL is the standard. And, promotes use of mixed data access types for programs and not a standard programming database access methodology to which all programmers must adhere. Standards and consistency are important on the mainframe side; perhaps not in i shops. "Some vendors and customers created database tables (files) in CPF or OS/400 that are not truly "normalized" -- but anyone can create some un-normalized tables in DB2 on the mainframe, or in DB2 on any other platform, or in MS SQL Server, or Sybase, or Oracle, or any database system you care to name. It's your choice.."
While technically possible, my arguments are based on how things are actually done with the two DB2s. In most mainframe DB2 shops I've been in that is RARELY done. Why, because you usually have a good DBA, procedures, methodologies and discipline that prevent such a mess going into production, UNLESS, it is a data warehouse implementation and then there is another set of rules and procedures for good design. This is very important in the mainframe world and many ORACLE shops doing development because its too costly to the company to do otherwise. Perhaps not so in i shops. "I also know of many vendor applications packages on other platforms that go to great lengths to "simulate" ISAM access by using stored procedures and triggers, etc. -- if they had the kind of direct native I/O access that we have on the System i, they would not need to resort to those "tricks."
Yeah, but I'm discussing the real DB2 RDBMS and to some extent ORACLE.
"It is simply not the case that we are all just a bunch of OS/400 or i5/OS "bigots"" Didn't say all.
" -- our version of DB2 is far more advanced than any other." Go say that at IDUG or a major mainframe DB2 or ORACLE convention and see what happens. Even present a paper on DB2/400 to try and convince them to move to the i because you're "superior". I invite anyone to do that. I'll pay to see that. If you get many that buy an i that was not already intending to do so, I'll buy you several dinners and openly get on here and say I was always wrong and you all were always right.
2014-23/2666/en_head.json.gz/15145 | Notice No. 120 June 27, 1997
IMPORTANT NOTICE TO
PRESIDENTS OF UNIVERSITIES
AND COLLEGES AND HEADS OF
OTHER NATIONAL SCIENCE FOUNDATION
GRANTEE ORGANIZATIONS
Subject: Year 2000 Computer Problem
As part of the National Science Foundation's (NSF) activities related to potential problems associated with Year 2000, NSF wishes to remind its awardees of their responsibilities under NSF
grants and cooperative agreements.
Recipients of NSF grants and cooperative agreements generally have full responsibility for the scientific, administrative, and financial aspects of the activity being supported.
This responsibility extends to anticipating and reacting to events such as the Year 2000 and taking all steps necessary to mitigate potential problems that might be caused by the Year 2000.
Many computer systems may experience operational difficulties because they are unable to handle the change from the year 1999 to the year 2000. Others may fail because they do not properly
consider 2000 a leap year. For computer systems that use two digits to represent the year, calculations, comparisons, and data sorting may be adversely affected. This would include computer
systems ranging from the desktop to the largest mainframe.
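To illustrate the two failure modes described above, the following is a short, hypothetical sketch of the arithmetic involved (it is not an NSF-supplied tool):

```python
def elapsed_years_two_digit(start_yy, end_yy):
    # Classic two-digit-year defect: a span that crosses the century
    # boundary comes out negative, e.g. from '99' to '00'.
    return end_yy - start_yy           # 0 - 99 == -99 instead of +1

def is_leap_buggy(year):
    # Common leap-year defect: applies the "century years are not leap
    # years" rule without the divisible-by-400 exception, so it wrongly
    # treats 2000 as a non-leap year.
    return year % 4 == 0 and year % 100 != 0

def is_leap_gregorian(year):
    # Correct Gregorian rule: 2000 is in fact a leap year.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(elapsed_years_two_digit(99, 0))                # -99
print(is_leap_buggy(2000), is_leap_gregorian(2000))  # False True
```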
Awardees should also be aware that the Year 2000 may affect electronic devices utilizing embedded microchips that perform date-based calculations. Biomedical devices and other laboratory
equipment may depend upon embedded date functions. If the chip receives what it perceives to be an invalid date, it may fail, impacting important experiments. False date comparisons may
invalidate test results, leading to false conclusions.
NSF awardees should take
appropriate actions to ensure that the NSF activity being supported is not adversely affected by the Year 2000 problem. Potentially affected items include: computer systems, databases, and
equipment. If an application deals with future dates, that application must be Year 2000 compliant before the first use of dates beyond December 31, 1999. The National Science Foundation should
be notified if an awardee concludes that the Year 2000 will have a significant impact on its ability to carry out an NSF funded activity.
More detailed information concerning
Year 2000 activities, plans, and issues can be found on the General Services Administration's web site at http://www.itpolicy.gsa.gov under the Year
2000 Information Directory.
Neal Lane | 计算机 |
2014-23/2666/en_head.json.gz/15806 | › Email › Email Marketing Optimization
On the Road Again...With E-Mail
Al DiGuido | June 28, 2007 | Comments
Even when your audience takes a break from the everyday, you can reach them with e-mail.
It's June and most of us will take time off this summer to relax. During that time, wouldn't it be great if we could park all our responsibilities and challenges for a time? In this connected world, though, this is virtually impossible. Thankfully, e-mail shows its great power and utility when we're remote.
Once again, I'm traveling the way I love: atop my Harley-Davidson with three friends. As long-time readers may know, I'm one of the few bikers who travels the mountains and valleys of this country with a laptop in my tour pack. This year's trip has taken me through one of the most beautiful places in the country: Washington State. Images of Mt. Rainier and Mt. St. Helens have been breathtakingly inspiring, and the people and routes along the way have been great.To those who think e-mail is a dying communications medium, I say, "Bunk." E-mail is a lifeline for all of us on this trip. I've been the designated communicator, using e-mail to communicate on behalf of my fellow riders to their families and friends. I communicate with business associates, friends, and colleagues via this incredible platform. It's interesting to see just how important e-mail can be to those without access to other communication tools.As we travel from place to place, we get a steady stream of e-mail confirmations from various hotels we booked. We're able to e-mail the hotels to receive ride route instructions, saving us hours of wondering where an obscure inn or motel might be. Even in the most remote locales, most of our accommodations have wired rooms. At the base camp in Mt. Rainier, the Alexander Country Inn (just 12 rooms) has wireless access.We all have ongoing business dealings and need to stay somewhat connected to moves being made some 3,000 miles away. All the necessary information was exchanged via e-mail, ported to our smartphones. We get preprogrammed weather alerts along the ride to ensure we avoid inclement weather as much as possible. Such information delivered so effectively is a lifesaver.We continue to conduct commerce remotely. Shipping confirmations from the local general store at Mt. St. Helens were sent via e-mail. It's pretty wild when a grandmotherly lady asks for our e-mail address.My fellow riders remind me there are still many who are confused about e-mail basics. They want to know more about how to add companies to address books, how they can set up alerts and bulk mail folders. It just emphasizes that as marketers and providers, we must never forget there's an ongoing need for education and communication about the basics, so we can bring others into the fold and build loyal and recurring relationships with incremental customers.While on vacation, I'm not viewing Web sites or reading blogs. I am using e-mail as it's supposed to be used -- as the mainstay of communication between myself and the rest of the world. Each time I use it, my appreciation of the power it has to connect me to so many great people, services, and products increases. Like the summits and peaks I'm seeing, I have great respect for e-mail's power and majesty when used for good.As marketers, you don't need to worry that I'm not looking at your message while on the road. You should worry about whether it's relevant to me. When time's precious, the need for information essential, and connections slow, spam of any type becomes more than a nuisance. Irrelevant message clogging the inbox are more frustrating on the road.I'm back on my motorcycle today, traveling the last leg of a 1,500 mile trek and finishing up in Seattle. E-mail's been my partner throughout the trip and has made the ride much less stressful for us all.Until next time,Al D.Want more e-mail marketing information? ClickZ E-Mail Reference is an archive of all our e-mail columns, organized by topic.
Long recognized as one of the direct response industry's premier innovators and a pioneer in e-mail communications, Al DiGuido brings over 20 years of marketing, sales, management, and operations expertise to his role as CEO of full-service digital marketing company Zeta Interactive. Formerly Epsilon Interactive's CEO, DiGuido also served as CEO of Bigfoot Interactive, CEO of Expression Engines, EVP at Ziff Davis, and publisher of Computer Shopper, where he launched ComputerShopper.com, a groundbreaking direct-to-consumer e-commerce engine. Prior to Ziff Davis, he was VP/advertising director for Sports Inc. DiGuido also serves on the Direct Marketing Association's Ethics Policy Committee.
2014-23/2666/en_head.json.gz/16201 | Home News Heritage Main Menu
Maurice Wilkes, father of British computing, dies
Tuesday, 30 November 2010
Sir Maurice Wilkes, widely regarded as the Father of British Computing, passed away on 30 November 2010, aged 97. He was best known as the designer and creator of EDSAC (Electronic Delay Storage Automatic Calculator), which became the first practical stored-program computer when it ran its initial calculation on May 6, 1949, in Cambridge, England.
EDSAC, however, wasn't his only innovation. In 1951 he set to work on developing the concept of microprogramming. This was derived from the realisation that the Central Processing Unit of a computer could be controlled by a miniature, highly specialised computer program in high-speed ROM - the so called "microcode" way of building a machine. The results of his work meant that CPU development was greatly simplified.
Maurice Wilkes was awarded the Turing Award in 1967, the Faraday Medal from the Institution of Electrical Engineers in London in 1981 and the Kyoto Prize for Advanced Technology in 1992. He was knighted in 2000.
2014-23/2666/en_head.json.gz/16499 | Hello guest register or sign in or with: Tutorials - Resident Evil Game
Capcom | Released Mar 21, 1996
In this game you play as: Chris Redfield or Jill Valentine.
The player's character is a member of a special law enforcement task force who is trapped in a mansion populated by dangerous mutated creatures. The objective of the game is to uncover the mystery of the mansion and ultimately escape alive. The game's graphics consist of 3D polygonal characters and objects superimposed over pre-rendered backdrops with pre-determined camera angles. The player controls the character by pushing the d-pad or analog stick left or right to rotate the character, and moves the character forward or backwards by pushing the d-pad up or down.
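As a rough sketch of how that "tank"-style control scheme behaves (a hypothetical illustration, not Capcom's code), turning only changes the character's heading, while up and down move along whatever direction the character currently faces:

```python
import math

def tank_control_step(x, y, heading, turn_input, move_input,
                      turn_rate=0.08, speed=2.0):
    # turn_input and move_input are -1, 0 or +1 (left/right, back/forward).
    # Left/right only rotates the character; forward/back moves along the
    # current heading, which is why the scheme feels like steering a tank.
    heading += turn_input * turn_rate
    x += move_input * speed * math.cos(heading)
    y += move_input * speed * math.sin(heading)
    return x, y, heading

# Walk forward, turn right for two frames, then keep walking.
state = (0.0, 0.0, 0.0)
for turn, move in [(0, 1), (1, 0), (1, 0), (0, 1), (0, 1)]:
    state = tank_control_step(*state, turn, move)
print(tuple(round(v, 2) for v in state))
```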
If you would like to share tutorials with the community, sign up and you can.
Windows, Wii, DS, GCN, PS1 | 计算机 |
2014-23/2666/en_head.json.gz/16934 | Blizzard Discusses Diablo 3 Console Port (Update: Director Jay Wilson Weighs In)
By Chris Faylor, Oct 10, 2008 2:16pm PDT
Though Diablo 3 is currently only confirmed for the PC, Blizzard president Mike Morhaime has once again suggested that the long-awaited action-RPG may end up on consoles as well.
"Every game we have the discussion about which platforms make the most sense," Morhaime stated in an interview at BlizzCon. "As Diablo 3 takes shape, I think we'll do an evaluation."
"I think there is a pretty good argument to be made that that type of game might work very well on consoles," he added. "There might be some technical limitations though that we might need to get past."
Diablo 3 director Jay Wilson also chimed in on the oft-speculated topic, noting that the game's control scheme would work rather well on consoles. "If we did it, we would want to do a really high quality version--we wouldn't just want to do a port," Wilson said. "We would never make that decision if we thought we had to compromise the overall quality...we could probably do it at any time, we could release the game and then decide we wanted to do a 360 version or a PS3 version."
"We haven't really decided to take the [console] plunge," he continued "We've really come to the conclusion that it's probably the best fit because the control scheme is actually not that incompatible. So if we were to make that decision, Diablo would be the natural choice."
Comment from DM7, Oct 10, 2008 2:51pm PDT: I hope not.
2014-23/2666/en_head.json.gz/17173 | Information Management Initiative Institution:
Australian Government Information Management Office, Department of Finance and Administration
Various activities have been undertaken to facilitate access to cost-effective infrastructure for government agencies. They include:
• Fedlink: a virtual private network for electronic communication between government agencies. It can operate securely across all infrastructures, including the Internet, to transmit a variety of data types;
• Open Source Content Management System: a content management system as implemented in the Australian Government Information Management Office and made available to government agencies in an easily installed package;
• Whole of Government Volume Sourcing Arrangements: arrangements for volume software supply to the Government of Australia. These arrangements have been established;
• Whole of Government Telecommunication Head Agreement: provides agencies with access to services of 23 providers;
• Australian Government Authentication Framework: a whole-of-government approach to authentication for business dealings online with government;
• Gatekeeper Policy and Administration: a framework for implementation of public key infrastructure in government;
• SourceIT web site: a resource for agency chief information officers and staff with sourcing information and tools;
• Australian Government Service Delivery Principles: principles developed as the first component of the Access and Distribution Strategy of the Government of Australia;
• Govdex: used to develop and test infrastructure that government agencies can use to align standards, promote interoperability and facilitate federated services. The Govdex infrastructure is based on Web Services registry technology and a collaborative governance framework; and
• ReuseIT: catalogue information components and patterns developed by agencies and that can be used across a range of technical environments. ReuseIT will be published on Govdex and help efforts to rationalize duplication in the design of e-government solutions.
Citizens have access to up-to-date information. Public services are deployed more quickly and more effectively. The public sector is more reliable and efficient and it meets citizens’ needs. Citizens have therefore developed greater confidence in the public sector. The use of open source technology has enabled the Government to link its agencies together and thus provide an integrated network.
http://www.agimo.gov.au/ | 计算机 |
2014-23/2666/en_head.json.gz/17750 | Adonthell is a role playing game in development that aims to combine the best features of the Final Fantasy and Ultima series with an epic plot. It is set in a detailed virtual world. With the current engine, a small demo game (Waste's Edge) is available.
Amanda is a popular network backup and archiving
software that protects multiple machines running
various versions of Linux, Unix, and Microsoft
Windows operating systems. It supports tapes,
disks, optical media, and changers.
BSD Original, Archiving, backup
Amaya is a complete Web browsing and authoring environment, and comes equipped with a WYSIWYG style interface. It lets users both browse and author valid Web pages, with standards including (X)HTML, native MathML, and SVG documents. It also includes a collaborative annotation application (RDF).
BSD Original, Internet, Web, Browsers
Angband is a single-player rogue-like dungeon exploration game that runs on a wide variety of computer systems.
GPL, Games/Entertainment, Role-Playing
Arianne is a multiplayer online engine to develop
turn based and real time games. It provides a
simple way of creating games using Python for a
game's description. Marauroa, its server, uses
Java, MySQL, UDP, and Python for hosting dozens of
players.
RCDevs TiQR
A two-factor authentication server.
Outline A note-taking app. | 计算机 |
2014-23/2666/en_head.json.gz/17751 | Oracle -- Patches 42 security holes -- in Java
Fudzilla ^
| Wednesday, 17 April 2013 09:33
| Nick Farrell
Posted on 04/17/2013 8:21:22 AM PDT by Ernest_at_the_Beach
Patches 42 security holes
Oracle has released a major security update for the version of Java programming language that runs inside Web browsers.
The patch fixes 42 vulnerabilities within Java, including "the vast majority" of those that have been rated as the most critical. Oracle Executive Vice President Hasan Rizvisaid that a series of big security flaws in the Java plug-in for browsers have been uncovered in the past year by researchers and hackers, and some have been used by criminal groups. One hacking campaign infected computers using Microsoft Windows and Apple software inside hundreds of companies.Earlier this year the US Department of Homeland Security recommended that computer users disable Java in the browser. But many large companies use internal software that relies on Java and have been pressing Oracle to make the language safer.Perhaps the most significant change will be that, in the default setting, sites will not be able to force Java applets to run in the browser unless they have been digitally signed.Not all known problems are being fixed with the current patch, but there are no unpatched problems that are being actively exploited, Rizvi said.
TOPICS: Computers/Internet
KEYWORDS: computers; computersecurity; internet; java; malware; oracle; tech
To: ShadowAce
It is About time.
May take some time for "fixed" stuff to show in updated code. Microsoft Windows users may be the last to see any fixes. Mozilla has been pumping out updates pretty frequently.
Larry Ellison could not be reached for comment.
by nhwingut
(This tagline is for lease)
To: nhwingut
Oracle -- Patches 42 security holes -- in Java Dashes -- we're not -- sure what -- they're for.
by SoothingDave
Does this mean I’m going to have to do 42 effing updates to the each of the 5 computers that just I use at work and home? Same ole story then...
But many large companies use internal software that relies on Java and have been pressing Oracle to make the language safer.Sounds to me like they have bigger problems than java in the browser if they're worried about being hacked.
by VeniVidiVici
(Obama's vision - No Job is a Good Job)
Java - the gift that keeps on giving. I read this story out loud to a collective groan from my long-suffering IT colleagues. We just finished the month’s round of patches. Gotta do it, though.
by Billthedrill
Since the warning I have had Java turned off - except for one work app.
I have a dedicated browser that only goes to a dedicated site that is behind a firewall. This app is mission critical to my job and there is no doing without it. But on the other hand I’m probably never going to use Java out “in the wild” on the interwebs ever again. Fixes or no fixes.
To: BreezyDog
No. It means that you if so choose when you next upgrade Java you’ll get fixes for 42 issues - according to this.
To: rdb3; Calvinist_Dark_Lord; Salo; JosephW; Only1choice____Freedom; amigatec; stylin_geek; ...
by ShadowAce
(Linux -- The Ultimate Windows Service Pack)
To: SoothingDave
Well, it’s entirely, possible, that they, ran out, of commas.
Once again proving that running code in a sandbox doesn’t help if the sandbox is poorly written or designed.
(Socialism is slavery)
To: 2 Kool 2 Be 4-Gotten
by Signalman
The post doesn’t say what the new version number is for the JRE
Fudzilla is not big on details.
Shatner has a new gig writing copy.
Google turned this up from February: Updated Release of the February 2013 Oracle Java SE Critical Patch Update
********************************************************************************************************************************
The Register has info: Oracle slaps critical patch on insecure Java
************************************************************
Tries to educate users about potential dangers of in-browser Java apps. By Jack Clark in San Francisco. Posted in Security, 17th April 2013 00:17 GMT
Oracle has issued a critical update patch for Java as the database giant works to shore up confidence in the widely used code.The security update fixes 42 security flaws, 19 of which merit a 10 (most severe) rating acording to the CVVS metric the company uses to evaluate the software. Along with this, Oracle has also sought to give users more information about the Java apps that want to execute code within the browser.
The patch comes at a time when many security pros are questioning the value of Java, with many seeing its presence in user's browsers as a liability rather than a benefit.
Of the 42 security flaws patched by Oracle in April, 39 of them "may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password," Oracle wrote in the patch notes.The most severe vulnerabilities exploit problems in the 2D, Deployment, Hotspot, Install, JAXP, JavaFX, RMI, Libraries and Beans sub-components of the Java runtime environment.The majority of these exploits apply to client Java deployments, and can only be exploited through untrusted Java Web Start applications, and untrusted applets.The vulnerabilities affect JDK and JRE 5.0, 6 and 7, along with JavaFX 2.2.7. "Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply CPU fixes as soon as possible," the company said.Alongside the patch fixes, Oracle is also rolling out an update (Java 7 Update 21) that lets the plugin more clearly telegraph to users when it could potentially be dangerous to let Java code be executed in their browsers (not all the time? Ed).Low-risk apps will cause a simple message to be displayed, while high-risk apps will be indicated by either an exclamation mark within a yellow triangle (applications with untrusted or expired certificates), or a yellow shield (applications with unsigned and/or invalid certificates)This patch follows a rather insecure three months for Java: In January, Oracle admitted that Java's security was less than perfect, saying at the time that its grand plan for Java security was to fix it and communicate its security efforts more widely.In February, a zero day flaw in Java was exploited to let unscrupulous types gnaw at the innards of major companies like Apple, Facebook, and Microsoft. In March, Oracle was forced to issue another emergency patch to deal with another zero day.We can only wonder what May could bring... ®
I removed Java from all the family PC’s. Haven’t had any complaints.
I don't see an issue date on this ,...mentioned in the article from Register just above but...here is a direct link: Java SE Development Kit 7, Update 21 (JDK 7u21)
JUST ANOTHER VULNERABILITY ANNOUNCEMENT
To: martin_fierro; ShadowAce
There is this: Java SE 7u21
******************************
Oracle does not show a date on these things....
To: Billthedrill
Many thanks. Crikey! I’m at u17. BTT.
Yeah—that’s the version I just updated to
Wonderful..... ANOTHER Java update......
/puke
Thanks for the link ErnestATB, I just freaking upgraded my java in my work VM. Time to do it again. | 计算机 |
2014-23/2666/en_head.json.gz/18876 | Merge/Purge of E-Mail Addresses
Rodney Joffee, Whitehat.com
Companies that want to send marketing messages to rented e-mail addresses place rental orders with managers and owners of many different lists. Recipients of marketing messages who get more than one copy of the same message tend to be irritated at the duplicate e-mails and are generally less responsive to the marketing messages themselves. Because the e-mail marketing industry is unsophisticated compared with the traditional postal list world, e-mail list owners are hesitant to provide the actual lists to mailers or their service bureaus. As a result, there is no simple method of ensuring that lists rented from multiple sources do not contain duplicates. There is a methodology that incorporates patented cryptographic technology as well as publicly available algorithms and allows list owners to retain possession of their lists while still enabling the elimination of duplicates.

Over the past 40 years, the traditional direct mail and direct marketing world has developed techniques that allow list owners to ship the actual names and addresses of their customers to mailers for various purposes without fear of the names and addresses being misused. The purposes include:

• Application of statistical models to identify the best prospects.
• Deduplication among the multiple sources of lists.
• Suppression of prospects on rental files who are already customers of the mailer.
• Overlaying demographic and psychographic data to increase response rates.
• Segmentation for testing purposes.
• Control over the logistics of the overall mailings.
• Achieving media and postage cost efficiencies.
• Various database functions.

The list owners' concerns include:

• Use of lists for offers not approved by owners.
• Mailing of lists on unapproved dates.
• Use of lists in excess of the usage contracted (multiple mailings).
• Passing lists on to other parties who use the lists without permission.

The universal technique in traditional direct marketing is the use of decoy, or seed, names. In this process, the list owner, or its computer service bureau, inserts a unique set of records into each list order that is shipped out. While these records look like and have the same format as the legitimate records on the file, an identifier unique to each list rental order is inserted somewhere within each decoy record. This allows the seed names to pass the normal processing steps of the mailer without giving any indication that they are anything other than real customer names and addresses. The list owner keeps the nature of the seed names secret.

Because the decoy record is created by the list owner or its service bureau, the detail in that record exists nowhere outside the control of the list owner. Therefore, all mail received by these decoys has to originate from specifically approved list rental transactions. When these decoy records receive mail, the coded information allows the list owner to identify the original list rental file that was sent out and to monitor the usage. The technique has proved sufficiently secure so that over the years, after a few attempts by dishonest mailers were thwarted and prosecuted successfully, misappropriation of a list has become rare. Despite the hundreds of thousands of list orders that are fulfilled each year, fewer than five cases of misappropriation have been reported publicly over the past two years. In most of the reported cases, the misappropriation has proved to be the result of human error, not malice.
In the traditional postal world, identifying duplicates is no longer a problem.

The E-Mail World

The use of e-mail addresses for marketing is a relatively new phenomenon. True direct marketing to e-mail addresses was first recorded in 1994, and in most cases, the marketers were Internet companies, not traditional merchandisers. They had no understanding or experience in the traditional postal world and created a new set of experiences to draw on. In addition, many of the e-mail marketers began their careers in the shadier segment of marketing, where vendors existed in cyberspace, and there was no physical framework to allow for normal validation of a vendor's genuineness. Many of the early e-mail marketing campaigns were dishonest offers mailed by unscrupulous marketers. As a result, the e-mail industry adopted a standard practice whereby the list owners themselves retained possession of their customer e-mail addresses, and marketers who rented the lists had to rely on their message being sent out by the list owners themselves.

This led to three major problems:

• If a mailer rented names from more than one list owner, there was a good chance that some people would receive duplicate e-mails from each list owner.
• A mailer could not ensure that his existing customers would not receive solicitations, with obvious negative feedback from his existing customers.
• There was no consistency in the formatting of the messages and, as a result, no easy way to compare the results of mailing to two different lists accurately.

In the halcyon days of e-mail marketing, mailers lived with these limitations because results were generally good and recipients were unsophisticated. But as Internet use has become more widespread, this has changed. Response rates are being carefully measured for the first time, and mailers are realizing that duplicate messages are expensive. The e-mail world is not yet sufficiently populated with sophisticated traditional mailers, and so list owners still refuse to release e-mail lists to mailers. It is likely that the first step in the evolution of the e-mail world will be to ship files to trusted third-party service bureaus. In the absence of that, a method described as "merge/purge by proxy" meets the requirements and overcomes the objections of both mailers and list owners.

Merge/Purge by Proxy

In the computing world, a technique known as "hashing" has been developed. Hashing can be used to, among other things, obtain a single integer value that can represent, usually uniquely, a string of data. Simply put, an arithmetic calculation is applied to a sentence, word or set of characters, and the result is a single number. Given the integer values for two strings, one can quickly determine whether the strings are the same. In other words, we can convert English sentences into numeric values, and by comparing the values, we can easily tell whether the sentences are different.

As a simple example, let's look at these two sentences:

The elephant is blue.
The monkey laughs.

If we were to apply a simple formula to these two sentences, wherein we apply a numeric value to each letter, starting with a=1, b=2, c=3... z=26, and add the numbers, we would end up with two integers.

The elephant is blue = 20+8+5+5+12+5+16+8+1+14+20+9+19+2+12+21+5 = 182.
The monkey laughs = 20+8+5+13+15+14+11+5+25+12+1+21+7+8+19 = 184.

To see whether the two pieces of data are the same, instead of having to compare every character, we look at the resultant integers.
Obviously, 182 does not equal 184, so the data must be different.

Let's transfer this knowledge to the e-mail world. If a list owner applied the simple, public formula we have above (a=1, b=2, etc.) to each e-mail address on the list segment it supplied to a mailer, and the mailer applied the same formula to his list of e-mail addresses, any e-mail addresses in the list owner's file that did not match the mailer's customer file would, by definition, be unique, and safe to mail.

Of course, there is a practical flaw in this: a simple formula like this applied to a big list would create many records on the mailer's own list with the same numbers. For example, consider the following two sentences:

The elephant is neat = 20+8+5+5+12+5+16+8+1+14+20+9+19+14+5+1+20 = 182
The elephant is blue = 20+8+5+5+12+5+16+8+1+14+20+9+19+2+12+21+5 = 182

In this example, if someone gave you the number 182, you would not be able to tell which of the two sentences they meant. There are thousands of sentences that could generate the same number: 182. We do know that if the numbers were different, the sentences would have to be different. But we have no idea what the original sentence was simply by being in possession of the number.

While we would prefer unique numbers to provide the most accurate comparison of e-mail lists, the e-mail application also requires that one cannot recover an e-mail address given its corresponding integer. If the translation produces unique integers, this raises the concern of whether a reverse translation can be constructed (i.e. whether one can reverse-engineer the e-mail address from the integer). In other words, we want to make sure that if someone knows the number, they can't translate it back to the original e-mail address.

As we noted above, the second two sentences have the same number but are different. So we have to look for an algorithm, or formula, that will ensure that each unique e-mail address produces a unique resultant. In mathematics and in cryptography, several algorithms will allow this to happen. The difficult part is to create a unique answer that cannot be reversed. In other words, we ensure that it is impossible to take the resultant numbers and re-create the original address. One way to do this is to ensure that when the hash is calculated, an insignificant part of the intermediate data is discarded.

A solution to this challenge can be found easily in the mathematical world. Probably the most appropriate algorithm is known as MD5, or Message Digest Version 5. An alternative appropriate algorithm is SHA-1, or Secure Hash Algorithm. By using one of these algorithms, "merge/purge by proxy" (M/PBP) will allow list owners to protect the confidentiality of their names while allowing mailers to deduplicate their mailings.

To illustrate the method of deduplicating e-mail lists using merge/purge by proxy, these steps are taken in a hypothetical situation: Mailer A orders 10,000 e-mail addresses from each of 10 list owners, Owner 1 through Owner 10 inclusive. Together with the list orders, Mailer A provides the specifics of the algorithm employed. Each list owner applies the algorithm to his e-mail list and creates a list of 10,000 hash values. These hashed files are sent to a neutral third-party service bureau. The mailer also applies the algorithm to the e-mail addresses of his house file. He then forwards this file of hashes to the independent third-party service bureau.
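As a rough illustration of what each party would run before anything is shipped, the sketch below hashes a list of addresses with Python's hashlib. MD5 and SHA-1 are the algorithms the article names; a present-day implementation would more likely use a salted SHA-256, but the mechanics are the same. The function names are our own.

```python
import hashlib

def hash_address(email: str) -> str:
    """One-way hash of a single e-mail address (MD5, as suggested in the article)."""
    return hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()

def hash_file(addresses: list[str]) -> list[str]:
    """Produce the list of hash values that is shipped to the neutral service
    bureau in place of the addresses themselves."""
    return [hash_address(a) for a in addresses]

print(hash_file(["jane@example.com", "Jane@Example.com "]))  # identical digests
```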
The service bureau then goes through these steps (a sketch of the bureau-side processing appears at the end of this article):

• The bureau examines each record from the house file and deletes any matching hashes it identifies on each of the 10 rental files. At this point, the service bureau has ensured that the mailer will not mail to an existing customer.
• The service bureau then merges the 10 files and identifies all records that occur on more than one of the 10 lists. Using a fair allocation system, the service bureau suppresses all but one occurrence of any given hash. The bureau now can ensure that no duplicates remain in any of the rental lists, and no more than one e-mail message will be sent to any one address.
• The bureau splits the unique file back into its 10 component lists and returns those files to their list owners.

The list owners now have a file of hashes known to represent e-mail addresses that do not appear on either the mailer's customer file or on any of the other nine post-deduplication files. The list owners then match the hashes to their master files and extract the corresponding e-mail addresses. These are the addresses that are to be mailed for the mailer.

Practical Issues

To ensure that each list owner is fairly compensated for records that also appear on other lists, the first duplicate between list 1 and list 2 is allocated to list 1, the second is allocated to list 2, and so on. The same applies to triplicates and beyond. It is conceivable that an address may occur on all 10 lists, so tables are kept to ensure that allocations are fair.

Care should be taken to ensure that addresses are in a canonical, or basic, form so that various permitted derivations of a unique address are not identified as unique addresses themselves. This includes ensuring that addresses in the form name+[some random value]@domain are edited back to name@domain, and dealing with upper- and lower-case characters.

Until the e-mail list world becomes comfortable with the systems used by the traditional list world to maintain the integrity of lists in the face of theft and misuse, "merge/purge by proxy" provides the only acceptable solution.
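To tie the steps together, here is a minimal sketch of the bureau-side processing and of the canonicalization mentioned under Practical Issues. Every function name is our own, the interleaved walk is only a crude stand-in for the fair-allocation tables the article describes, and nothing here represents any vendor's actual implementation.

```python
import hashlib
from itertools import zip_longest

def canonicalize(email: str) -> str:
    """Reduce an address to its basic form: lower-case, strip '+tag' extensions."""
    local, _, domain = email.strip().lower().partition("@")
    return f"{local.split('+', 1)[0]}@{domain}"

def hash_address(email: str) -> str:
    """Hash the canonical form so permitted variants of one address collapse together."""
    return hashlib.md5(canonicalize(email).encode("utf-8")).hexdigest()

def bureau_dedup(house_hashes: set[str], rental_files: list[list[str]]) -> list[list[str]]:
    """Suppress hashes already on the mailer's house file, then keep each remaining
    hash on only one rental list. Walking the lists in step is a simple stand-in
    for the fair-allocation tables described above."""
    seen = set(house_hashes)                # existing customers are suppressed
    cleaned = [[] for _ in rental_files]
    for row in zip_longest(*rental_files):  # one hash from each list per pass
        for i, hash_value in enumerate(row):
            if hash_value is None or hash_value in seen:
                continue
            seen.add(hash_value)
            cleaned[i].append(hash_value)
    return cleaned
```

The cleaned hash lists go back to their owners, who look each surviving digest up in their own hash-to-address tables and mail only the corresponding addresses.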
Major spam botnets yet to recover after host shut-down
Posted on 20 November 2008.
One week after the world's most significant breakthrough in the fight against spam, spam levels have yet to return to their previous levels, according to security experts from the Marshal8e6 TRACE Team. However, the experts caution that spam is likely to climb back to its previous high levels eventually.
On November 11, the volume of spam around the world fell by as much as 70 percent due to the shutdown of a major spam hosting network, McColo.
McColo was shut down by its Internet Service Provider after an investigative journalist made inquiries about the Web hosting company's illicit activities. McColo was hosting the command and control infrastructure for three of the world's most prolific spam botnets: Srizbi, Mega-D and Rustock. When McColo was shut down, the spammers were disconnected from the networks of spam-sending bot computers under their control.
Throughout 2008, the TRACE team has published reports showing that just a handful of major spamming botnets are responsible for as much as 90 percent of spam. The TRACE Team has been campaigning within the IT security community for a coordinated effort against the top spamming botnets. Marshal8e6 says that the command and control servers play a critical part in managing the hundreds of thousands of infected bot computers, also referred to as 'zombies'.
Marshal8e6 says the command and control servers for the Srizbi, Mega-D and Rustock botnets were affected by the McColo shut down. According to Marshal8e6's statistics, just prior to McColo's shut down, these three botnets were ranked first, second and fifth respectively as the world's most prolific sources of spam, together responsible for nearly 70 percent of spam.
2014-23/2666/en_head.json.gz/19788 | Search United StatesUnited KingdomNetherlands
SolutionsSupportResourcesPartnersCompany
Main menuSolutions - RES Suite 2014 - IT Store - Workspace Virtualization - IT AutomationSupport - Solution Assurance - Training -- Online Learning -- Workshops -- Training Classes -- Certification - Knowledge Base - Support CommunityResources - Case Studies - Press Releases - Events - Videos - Webinars - White Papers - RES in the NewsPartners - Find a PartnerCompany - Who We Are - Leadership - Board of Directors - Market Trends - Careers
Legal Statements
End User License Agreement: END USER LICENSE AND SOLUTION ASSURANCE AGREEMENT RES SOFTWARE (“EULA”)
USER NOTICE: BY INSTALLING THIS SOFTWARE YOU AS LICENSEE ACKNOWLEDGE THAT YOU HAVE READ AND UNDERSTOOD THIS EULA AND AGREE TO THE CONDITIONS AND PROVISIONS HEREIN AND THAT YOU ARE DULY AUTHORIZED TO EXECUTE THIS EULA. YOU SHALL INFORM ALL USERS OF THE SOFTWARE OF THE TERMS AND CONDITIONS OF THIS EULA. YOU ACCEPT THAT THIS EULA IS THE FULL AND EXCLUSIVE EXPRESSION OF THE AGREEMENT BETWEEN YOU AND RES SOFTWARE AND THAT IT TAKES PRECEDENCE OVER ALL PREVIOUS PROPOSALS OR VERBAL OR WRITTEN AGREEMENTS AND OTHER POSSIBLE COMMUNICATIONS REGARDING THE SUBJECT OF THIS EULA. IF YOU DO NOT ACCEPT THE TERMS OF THIS AGREEMENT, YOU MAY NOT INSTALL THIS SOFTWARE. LICENSOR. This license is granted to licensee (End-User) by Real Enterprise Solutions Nederland B.V., a company organized under the laws of the Netherlands. If End-User is located in the United States or Canada, this license is granted to End-User by Real Enterprise Synergy Inc, a company organized under the laws of Delaware. In this EULA, the term RES Software refers to Real Enterprise Solutions Nederland B.V. or Real Enterprise Synergy, Inc. LICENSE. The software provided herewith, and, as long as End-User has a right to Solution Assurance, any Product Releases or Service Releases related thereto, including the end user manuals and documentation (the "Software") are licensed to End-User by RES Software and are provided for use solely under the terms of this EULA. RES Software reserves all rights not expressly granted under this EULA. RES Software hereby grants to End-User a perpetual (except as otherwise provided herein) non-exclusive, non-transferable license, to install, use, perform and display the rightfully obtained version of the Software, solely in object code format for End User’s own internal business use and without the right to sub license. The Software may only be used for the purpose for which it is designed as described in the documentation and on the RES portal. The documentation is licensed solely for the purposes of supporting End-User’s use of the Software as permitted in this section. The Software may only be used on the site and within the infrastructure environment it was first installed. Depending on the edition that End-User obtained a license for, End-User is allowed to use all or limited functionality of the Software. The number of licenses required by End-User depends on either the number and type of devices to be used, the number of concurrent users, the number of named users, or the specific allocated tasks to be performed by the Software, as further specified on the RES Software website and pricelist. The use of the Software is limited to the number of licenses that End-User actually paid for or otherwise rightfully acquired. If End-User obtains subscription licenses the term for use is not perpetual, but limited to the specific subscription period agreed to. RESTRICTIONS. 
End-User is not permitted to: (i) reverse engineer, disassemble or decompile the Software or any portion thereof or otherwise attempt to derive or determine the source code or the logic therein, except to the extent and for the express purposes authorized by applicable law, and only if RES Software is not willing or able to provide the relevant information to End-User; (ii) remove or evade any technical protection (iii) use plug-ins or extensions not distributed by RES Software which enable modification of the Software; (iv) modify or change or make new installation programs for the Software; (v) use the Software for on behalf of third parties or sub-license, rent, sell, lease, distribute or otherwise transfer the Software and (vi) use the Software in or in association with safety critical applications such as, without limitation, medical systems, transport management systems, vehicle and power generation applications including but not limited to power applications. EVALUATION SOFTWARE AND EXPRESS EDITION. If available, End-User may download certain evaluation editions ("Evaluation Software") and/or express editions ("Express Edition") of the Software from www.ressoftware.com free of charge. End-User has the right to use the Evaluation Software for evaluation purposes only. The Evaluation Software license expires on the expiry date. The Express Edition provides limited functionality of the RES Software product. AUDIT. On RES Software’s request, and at RES Software’s expense, RES Software may conduct an audit of End-User’s use of the Software. Any such audit shall be conducted during regular business hours at End-User’s facilities and shall not unreasonably interfere with End-User’s business activities. If an audit reveals that End-User has underpaid in relation to the actual use of the Software, in addition to other remedies, End-User shall be invoiced for such underpaid fees. If the underpaid fees exceed five percent (5%) of the license fees paid, then End User shall also pay RES Software’s reasonable costs for conducting the audit. OWNERSHIP. The Software is the intellectual property of RES Software and/or its licensors and contains material that is protected by intellectual property rights and legislation of various countries worldwide. This EULA does not grant to End-User any ownership interest in the Software. End-User shall not remove any proprietary notice of RES Software from any copy of the Software. Third party materials and/or software presented or accessed using the Software ("Third Party Materials") are owned by the respective third parties and may be protected by intellectual property rights and the use of such Third Party Materials may be subject to the terms of use of such third parties. The End-User is solely responsible to obtain a valid license for the use of Third Party Materials. SOLUTION ASSURANCE. With the exception of Evaluation Software, and Express Edition, End-User is obligated to buy a subscription to maintenance and support ("Solution Assurance") for a period of minimum 1 (one) year starting at the date the End-User receives the license key. End-Users subscription to Solution Assurance will automatically renew for additional one (1) year periods, unless either party gives the other party written notice of its intent not to renew at least thirty (30) days prior to the expiration of the then current term of the subscription. 
The fees due for Solution Assurance shall amount to a maximum of 20%, for Premium Solution Assurance 25%, of the then-current list-price of the Software. RES Software reserves the right to adjust its prices for Solution Assurance. If End-User does not accept an adjustment of the prices, End-User shall be entitled to terminate its subscription to Solution Assurance services within thirty (30) days of receipt of the written notification of the price adjustment. End-User shall pay invoices for Solution Assurance within the payment term stated on the invoice. If no payment term is specified, a payment term of thirty (30) days shall apply. In order to benefit from Solution Assurance, End User must have a valid license for the latest version of the Software. If End-User has elected to terminate its subscription to Solution Assurance and, at a later date, wishes to reinstate Solution Assurance, RES Software is entitled to charge a reinstatement fee. CONTENT OF SOLUTION ASSURANCE. Solution Assurance consists of: (1) right to download and use Service Releases to the Software. Service Releases will be provided with a minimum of two per year. A Service Release consists of a number of bundled fixes to Defects. A Service Release does not necessarily offer new functionality. For the purpose of this EULA a Defect means a reproducible instance of adverse and incorrect operation of the Software that impacts End-User’s ability to use functionality as described in the documentation to the Software. Minor discrepancies that do not impair the normal use of the Software shall not constitute a Defect under this EULA; (2) right to download and use Product Releases to the Software. Product Releases will be provided with a minimum of one every three years. A Product Release contains new functionality and features; (3) Access to RES Software Support by Internet, e-mail and phone (during specified office hours). RES Software Support will assist in locating and solving problems and Defects in the Software. (4) Access to the RES Portal, including the RES Software Knowledge Base. Further details on Solutions Assurance are described in the Solution Assurance document available through the RES Software website.
Premium Solution Assurance additionally provides: (1) Extended access to RES Software Support by Internet, e-mail and phone (24 x 7 x 365); (2) right to use Escrow services under the conditions defined by RES Software. Further details on Premium Solutions Assurance are described in the Premium Solution Assurance document available through the RES Software website.
EXECUTION OF SOLUTION ASSURANCE SERVICES. RES Software provides Solution Assurance on a commercially reasonable efforts basis in a way it considers appropriate. RES Software is not obliged to follow the directions of the End-User. End–User shall first analyze any problems with the Software internally and consult the RES Software Knowledge Base before contacting RES Software support. End-Users shall appoint a qualified contact person for contact with RES Software support. End-User shall provide all relevant materials to RES Software when contacting RES Software support. RES Software is entitled to examine and test materials delivered by End-User. RES Software is under no obligation to use those materials. End-User guarantees that RES Software is entitled to use the materials and, after approval by End-User, is allowed access to its systems, to provide Solution Assurance. RES Software will provide full Solution Assurance on the current Product Release until a new Product Release is available. Solution Assurance on the previous Product Release will be limited to making available existing and new fixes on request by End-User(s) for at least 1 year after general availability of the latest Product Release. Furthermore, for all older versions access to RES Software support and the RES Software portal and knowledge base which contains previously developed solutions will remain available. RES Software cannot provide optimal Solutions Assurance to End-User if End-User does not use the latest Product Release or Service Release. RES Software reserves the right to terminate the End-User’s subscription to Solution Assurance with prior written notice and/or to amend the financial or other conditions of this EULA in case of excessive use of the Solution Assurance services by the End-User, or if End-User does not install the latest Product Release or Service Release. EXCLUSIONS. Solution Assurance services do not cover resolution of Defects which result from (i) third party software or hardware (ii) any modifications to the Software carried out by a party other than RES Software (iii) use of the Software by End-User which is not in accordance with the documentation. RES Software will only support the Software on platforms for which all components are supported by their respective vendors, under standard conditions, as of the date the support request is made by the End-User to RES Software. Solution Assurance does not cover source code supplied by RES Software as part of either a consulting engagement or as a demo, sample or contribution. USE OF RES SOFTWARE MATERIALS. All materials, including, but not limited to the RES Software portal and knowledge base, demo’s, samples or contributions provided by RES Software (the "Materials") by whatever means is either owned by or licensed to RES Software. End-User may only use those Materials as part of the Solution Assurance and as long as he is entitled to Solution Assurance. In no event shall the End-User publish, retransmit, redistribute or otherwise reproduce any Materials in any format to anyone or use any Materials in any connection with any business or commercial enterprise, without the express written consent of RES Software. End-User will destroy all Materials not needed for the solution of a Defect once the technical problem is solved.
TERM AND TERMINATION. The EULA takes effect when End-User installs or uses the Software or at the date End-User receives the relevant license keys, whichever is sooner ("the Effective Date"). RES Software reserves the right to terminate this EULA upon 30 days notice in the event of: (1) a change of control of the End-User; (2) if End-User breaches any provision of this EULA and, upon receiving written notice of such breach, fails to remedy such breach within 30 (thirty) days following receipt of the notice; or (3) if a petition for End-User’s bankruptcy is filed or End-User has been declared bankrupt. Subscription to Solution Assurance shall automatically terminate on termination of the End-User license. Upon termination End-User shall promptly cease to use the Software and return or destroy, at End-User’s expense and at RES Software’s option, all Software and any copies thereof and confirm this in writing to RES Software. The provisions regarding Audit, Intellectual Property, Limitation of Liability, and Miscellaneous shall survive the expiration or termination of this EULA. LIMITED WARRANTY. RES Software warrants that the Software shall be free from material defects in materials and workmanship, and shall conform in all material aspects to the specifications as described in the documentation for a period of ninety (90) days from the Effective Date, provided the Software has been stored and used in accordance with ordinary industry practices and conditions. RES Software does not warrant that the functionality of the Software will meet End-User’s requirements or is fit for any particular purpose, or that the operation of the Software will be uninterrupted, error free, virus free or that Defects in the Software will be corrected. It is the responsibility of End-User to isolate the Software, to use anti-virus software, to make relevant back-ups and to take other steps to ensure that the Software does not damage End-User’s information or system. In the event that the Software does not comply with the warranty set out in this section and RES Software is notified of such non-conformity within the warranty period, RES Software, at its choice, will replace such non-conforming Software at no additional charge or will refund the total amount paid for the non-conforming Software. The limited warranty as set forth in this section shall also apply to any Product Releases and Service Releases or any software that repairs or replaces the non-conforming Software. RES Software grants no other warranty, either specific or implied, including without limitation, warranties of merchantability or suitability for a particular purpose. The Evaluation Software and the Express Edition are provided "as is" without warranty of any kind, whether express, implied, statutory, or otherwise. RES Software is not liable for any damages resulting from the use (or attempted use) of the Evaluation Software and the Express Edition at any time. LIMITATION OF LIABILITY. RES Software shall in no event be liable to End-User or any third party for any indirect, incidental or consequential damages (including, without limitation, indirect, special, punitive, or exemplary damages for loss of business, loss of profits, business interruption, loss of data, or loss of business information) arising out of this EULA or connected in any way with use of or inability to use the Software or the provision of Solution Assurance, or for any claim by any other party, even if RES Software has been advised of the possibility of such damages. 
RES Software’s total liability to End-User for all damages, losses, and causes of action (whether in contract, tort (including negligence), or otherwise) shall not exceed € 10.000,-- (TEN THOUSAND EUROS) AND WITH RESPECT TO THE EVALUATION SOFTWARE AND EXPRESS EDITION, SHALL NOT IN ANY EVENT EXCEED € 500,-- (FIVE HUNDRED EUROS). IF END-USER IS LOCATED IN THE UNITED STATES OF AMERICA OR CANADA, THE FOREMENTIONED MAXIMUM AMOUNTS WILL BE $ 10,000 (TEN THOUSAND DOLLARS) AND WITH RESPECT TO THE EVALUATION SOFTWARE AND EXPRESS EDITION $ 500,-- (FIVE HUNDRED DOLLARS). RES Software liability will only arise if End-User informs RES Software in writing of any default and the damages resulting there from as soon as possible and gives RES Software a reasonable time to remedy a failure to perform. Any notice of default must specify the failure in as much detail as possible, so that RES Software will be able to act adequately. FORCE MAJEURE. RES Software shall not be responsible for failures of its obligations under this EULA to the extent that such failure is due to causes beyond RES Software’s control, including, without limitation, natural disaster, war, strikes, fire, floods, explosions, acts of any government or agency thereof, failures of suppliers, disruption in electricity supply or non-availability of telecommunication services. If RES Software is prevented by force majeure from fulfilling its obligations under this EULA for more than ninety (90) days, RES Software and End-User are entitled to terminate the EULA in writing. INDEMNIFICATION. RES Software shall indemnify, hold harmless and defend End-User against any action brought against End-User to the extent that such action is based on a claim that any Software, when used in accordance with this EULA, infringes a copyright of a third party. RES Software shall pay all costs, settlements and damages finally awarded, provided that End-User promptly notifies RES Software in writing of any claim, gives RES Software sole control of the defense and settlement thereof, and provides all reasonable assistance in connection therewith.
If the Software is finally adjudged to so infringe, or in RES Software’s opinion is likely to become the subject of an infringement claim, RES Software shall, at its sole discretion, either: procure for End-User the right to continue to use the Software, modify or replace the Software to make it non-infringing, or upon return of the Software, refund the price paid by End-User for the Software, minus a reasonable usage fee. RES Software shall have no liability regarding any claim arising out of or caused by: End-User’s use of other than the latest, unaltered release of the Software unless the infringing portion is also in the then current, unaltered release; any modification or derivation of the Software not created or publicly released by RES Software. The aforementioned states the entire liability of RES Software and the exclusive remedy for End-User relating to any actual or claimed infringement of any intellectual property right. End User shall indemnify, defend and hold harmless RES Software and its directors, officers, agents, employees, subsidiaries and affiliates from and against any claim, action, proceeding, liability, loss, damage, cost, or expense (including, without limitation, attorneys’ fees), arising out of or in connection with the use of the Software that is not in strict accordance with this EULA by End-User, its employees, subcontractors, or others. RES Software shall provide reasonable cooperation and assistance to End User in defending the claim. COMPLIANCE WITH LAWS. End User must comply with all domestic and international (export) laws and regulations to the Software and with any end-user, end-use and destination restrictions issued by governments. End-User must at its own expense obtain and arrange for the maintenance of any government approval and comply with all applicable laws and regulations necessary for End-User’s performance of the EULA. End-User acknowledges that it is responsible for obtaining any licenses to export, re-export or import the Software as may be required. End-User will defend, indemnify, and hold harmless RES Software from and against all fines, penalties, liabilities, damages, costs and expenses incurred by RES Software as a result of any violation of export (control) laws or regulations by End-User or any of its agents or employees. PERMANENT EFFECT. RES Software reserves the right to modify this EULA for any new Product Release or Service Release. By installing and continuing to use the new Product Release or Service Release of the Software over a period of thirty (30) days, End-User accepts the new or revised version of this EULA. NOTICES. Any notices permitted or required under this Agreement shall be in writing, and shall be deemed given when delivered (i) in person, (ii) by overnight courier, upon written confirmation of receipt, (iii) by certified or registered mail, with proof of delivery, (iv) by facsimile transmission with confirmation of receipt, or (v) by email, with confirmation of receipt (except for routine business communications issued by RES Software, which shall not require confirmation from End-User). Notices shall be sent to the address, facsimile number or email address set forth below, or at such other address, facsimile number or email address as provided to the other party in writing. Notices for RES Software shall be sent to: Het Zuiderkruis 33, 5215 MV ‘s-Hertogenbosch, The Netherlands. Fax for legal notices: +31 (0)73 622 8811. Email for legal notices: [email protected]. APPLICABLE LAW. 
Except for End-Users residing in the United States, this EULA shall be governed, construed and enforced in accordance with the laws of the Netherlands, without giving effect to its conflict of law principles. Any legal action will be brought exclusively before the relevant court in Amsterdam, the Netherlands. Proceedings will take place in Dutch. For End-Users residing in the US only: This EULA shall be governed, construed and enforced in accordance with the laws of the State of Delaware, without giving effect to its conflict of law principles. Any dispute regarding this EULA shall be subject to the exclusive jurisdiction of the state and federal courts of Philadelphia. MISCELLANEOUS. End-User may not assign or transfer its rights or obligations arising under this EULA to any third party, including any group of companies, parent companies, subsidiaries and affiliated companies of End-User without the written consent by RES Software , and any such attempted assignment or transfer shall be void and without effect.
The failure of any party to enforce a provision of this EULA shall not constitute a waiver of such provision or any other provision or of the right of such party thereafter to enforce any provision of this EULA. RES Software reserves the right to use End-Users name or trademark, trade name or logo in external communications, presentations and marketing materials, and on its website and to describe the solution provided to End-User in these external communications. DATA PROTECTION. For “Personal Data”, reference is made to the Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 (“the Directive”). Where a party receives any Personal Data from the other party, it shall ensure that it fully complies with the provisions of the Directive and only deals with the Personal Data to fulfill its obligations under this EULA. The parties agree to process Personal Data in accordance with the “mandatory data protection principles”, which broadly reflect the data protection rules set out in the Directive. The principles include: using Personal Data only for a purpose which is clearly specified in the contract itself; security and confidentiality of the data; the right to see a copy of Personal Data about oneself and to have inaccurate information corrected or deleted; and, restriction on the onward transfer of Personal Data to other third countries without a further contract being put in place. Any rights not expressly granted herein are reserved by RES SOFTWARE. Copyright © on software and all Materials 1998-2014 Real Enterprise Solutions Development B.V., P.O. Box 33, 5201 AA `s-Hertogenbosch, The Netherlands. RES and the RES Software Logo are either registered trademarks or service marks of Real Enterprise Solutions Nederland B.V. in Europe, the United States and other countries. RES Automation Manager, RES Workspace Manager, RES Suite, RES Virtual Desktop Extender, RES IT Store and RES VDX are trade names of Real Enterprise Solutions Nederland B.V. in Europe, the United States and other countries. All other product and company names mentioned may be trademarks and/or service marks of their respective owners. Real Enterprise Solutions Development B.V., The Netherlands has the following patents: U.S. Pat. "US 7,433,962", "US 7,565,652", "US 7,725,527", "US 8,683,018", other patents pending or granted.
Version 20140601
Terms of use: Welcome to the web site of RES GROUP, which consists of RES Software, headquartered in the Netherlands (P.O. Box 33, 5201 AA 's-Hertogenbosch, the Netherlands. Phone: +31 (0)73 622 8800) and RES companies in the United States, United Kingdom and other countries (hereafter: "Web Site"). Please read the following terms concerning your use of the Web Site. By accessing, using or downloading any Materials from the Web Site, you agree to follow and be bound by these terms (the 'Terms'). If you do not agree with these Terms, please do not use this Web Site.
General Use Provisions
All materials provided on this Web Site, including but not limited to information, documents, databases, products, logos, design, graphics, sounds, images, software, and services ('Materials'), are provided either by RES Software or one of the companies of the RES GROUP or by third party manufacturers, authors, developers and vendors ('Third Party Providers') and are the intellectual property of RES Software and/or Third Party Providers.
Subject to legal exceptions the Materials and Web Site may not be duplicated (framing also included) copied, reproduced, distributed, re-published, downloaded, shown, sent in whatever form whatsoever, or made available to third parties or made public, without prior express written permission from RES Software. Unauthorised use of Materials and Web Site represents a violation of intellectual property rights of RES Software or Third Party Providers and may result in severe civil and criminal penalties. Violators will be prosecuted to the maximum extent possible.
Except where expressly provided otherwise by RES Software, nothing on this Web Site shall be construed to confer any license offer for license or sale under any of RES Software or any Third Party Provider's intellectual property rights. You acknowledge sole responsibility for obtaining any such licenses. Contact RES Software if you have any questions about obtaining such licenses. RES Software does not provide, sell, license, or lease any of the Materials other than those specifically identified as being provided by RES GROUP.
RES Software hereby grants you permission to display, copy, distribute and download RES Software texts, logos, graphics, sounds and images on this Web Site provided that: (1) both a copyright notice of RES Software and this permission notice appear in these Materials; (2) the use of such Materials is solely for personal, non-commercial and informational use and will not be copied or posted on any networked computer, broadcast in any media, or used for commercial gain; and (3) these Materials are not modified in any way. This permission terminates automatically without notice if you breach any of these terms or conditions. Upon termination, you will immediately destroy any downloaded or printed Materials.
The use of the Client Portal is submitted to the RES Software General Terms & Conditions.
Any software that may be made available for download from this Web Site ('Software') is the intellectual property of RES Software or Third Party Providers. Use of the Software is governed by the terms of the End User License Agreement that accompanies or is included with the Software ('EULA'). An end user agrees to the EULA terms by installing, copying, and/or using the Software. The Software is made available for downloading solely for use by end users according to the EULA. Without limiting the foregoing, the copying or reproduction of the Software to any other server or location for further reproduction or redistribution is expressly prohibited.
Circumventing, hacking or other violation of the Web Site, RES GROUP security, dial-up or subscription systems is expressly prohibited. RES GROUP reserves the right to exclude certain IP-addresses if violation has been committed from them, or attempts with this aim have been undertaken, or illegitimate use of its systems has taken place. To this end RES GROUP can monitor the access to its websites. This does not impede RES GROUP's remaining rights to prosecution.
RES GROUP and the Third Party Providers may make improvements and changes in the Materials and Web Site at any time without prior notice. RES GROUP has the right to cancel the provision of Materials or this Web Site at any time without prior notice.
Links to third party web sites
The Web Site may contain links to web sites controlled by parties other than the RES GROUP. However, this does not mean that the RES GROUP embraces the contents of those websites. The RES GROUP is not responsible for, does not endorse, nor accepts any responsibility for the contents or the use of those web sites. RES GROUP provides those links only for your convenience.
Please consult the user conditions, privacy declaration and other legal notices of those third party web sites before you use them. It is your responsibility to arrange for the necessary precautions to ensure that what you apply for your personal use is virus free and free of other issues which are, or could be detrimental.
Submissions
Subject to personal details all comment, feedback, information or material in the broadest sense of the word sent to RES Software ('Submissions') are considered as non-confidential and RES GROUP property.
At no charge the RES GROUP will be free to apply the Submissions worldwide to its own perception on an indefinite basis and for any purpose. The RES GROUP obtains the turnover of the Submissions. However, you are and remain responsible for your Submissions, including legality, reliability, appropriateness, originality and respect of intellectual property rights of third parties.
Disclaimer
The RES GROUP exercises the greatest possible care on the reliability and topicality of the data on the Web Site. Inadequacies and incompletion can however occur. Materials and the Web Site are provided "AS IS". Materials provided by Third Party Providers have not been independently reviewed, tested, certified, or authenticated in whole or in part by RES GROUP and as such RES GROUP makes no warranty with respect to its contents.
THE SOFTWARE IS WARRANTED, IF AT ALL, ONLY ACCORDING TO THE TERMS OF THE EULA. EXCEPT AS MAY BE EXPRESSLY WARRANTED IN THE EULA AND THESE TERMS, THE RES GROUP HEREBY DISCLAIMS ALL EXPRESS OR IMPLIED REPRESENTATIONS, WARRANTIES, GUARANTIES, AND CONDITIONS, ALSO WITH REGARD TO THE SOFTWARE, INCLUDING BUT NOT LIMITED TO ANY IMPLIED REPRESENTATIONS, WARRANTIES OR CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. THE RES GROUP MAKES NO REPRESENTATIONS, WARRANTIES, GUARANTIES, OR CONDITIONS AS TO THE QUALITY, SUITABILITY, TRUTH, ACCURACY, OR COMPLETENESS OF ANY OF THE MATERIALS AND THE WEB SITE.
Limitation of Liability
RES GROUP SHALL NOT BE LIABLE FOR DAMAGES SUFFERED AS A RESULT OF USING, MODIFYING, CONTRIBUTING, COPYING, DISTRIBUTING, DISSEMINATION OR DOWNLOADING THE MATERIALS AND THE WEB SITE OR THE IMPOSSIBILITY TO DO SO, OR AS A RESULT OF INADEQUACIES OR INCOMPLETION IN THE MATERIALS AND THE WEB SITE.
IN NO EVENT SHALL RES GROUP BE LIABLE FOR ANY DAMAGE (INCLUDING BUT NOT LIMITED TO LOSS OF BUSINESS, REVENUE, PROFITS, USE, DATA OR OTHER ECONOMIC ADVANTAGE) HOWEVER IT ARISES, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF MATERIALS OR THE WEB SITE, EVEN IF THE RES GROUP HAS BEEN PREVIOUSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You are yourself responsible for the protection and backup of your (computer) data and/or objects which are used with regard to The Web Site. For this reason you will not issue a claim against RES Software or one of the companies of the RES GROUP or Third Party Providers for the loss of data, incorrect output, work delay (s) or loss of turnover and/or profit as a result of the use of Materials and Web Sites.
Indemnification
In exchange for the acceptance of the advantages which are provided to you by the Web Site, you agree to protect the RES GROUP, its Executive Boards, employees, representatives and partners and indemnify them from judicial and extrajudicial measures, sentences and such, including - and without restriction - reasonable cost for legal assistance (lawyers, jurists and bailiffs) and accountants, who have been appointed by third parties as a result of your use of the Materials and the Web Site, your violation of the Terms or your violation of any legal regulation whatsoever or rights of third parties.
Local Law; Export Control
RES GROUP controls and operates the Materials and Web Site from its headquarters in 's-Hertogenbosch, The Netherlands, and makes no representation that these are appropriate or available for use in other locations. If you use this Web Site from other locations, you are responsible for compliance with applicable local laws including but not limited to the export and import regulations of other countries. Unless otherwise explicitly stated, all marketing or promotional materials found on this Web Site are solely directed to individuals, companies or other entities located in the European Union and the United States.
U.S.-residents acknowledge and agree that Materials are subject to the U.S. Export Administration Laws and Regulations. Diversion of such Materials contrary to U.S. law is prohibited. You agree that none of the Materials, nor any direct product there from, is being or will be acquired for, shipped, transferred, or re-exported, directly or indirectly, to proscribed or embargoed countries or their nationals, nor be used for nuclear activities, chemical biological weapons, or missile projects unless authorized by the U.S. Government. Proscribed countries are set forth in the U.S. Export Administration Regulations. Countries subject to U.S. embargo are: Cuba, Iran, Iraq, Libya, North Korea, Syria, and Sudan. This list is subject to change without further notice from RES GROUP, and you must comply with the list as it exists in fact. You certify that you are not on the U.S. Department of Commerce's Denied Persons List or affiliated lists or on the U.S. Department of Treasury's Specially Designated Nationals List. You agree to comply strictly with all U.S. export laws and assume sole responsibility for obtaining licenses to export or re-export as may be required.
Miscellaneous
The RES GROUP does not warrant that e-mails or other electronic messages transmitted to it are swiftly received and processed, and accepts no liability for consequences of lack of or late receipt or processing of them. RES GROUP has the right to revise these Terms at any time without notice by updating this posting. These Terms are governed by Dutch law. No choice of law rules of any jurisdiction will apply.
Privacy Statement: This is the Privacy Statement of Real Enterprise Synergy, Inc. (“RES Software”), whose principal place of business is at 150 North Radnor Chester Road, Suite D100, Radnor, PA 19087, the United States. More information about RES Software can be found at http://www.ressoftware.com. This Privacy Statement covers RES Software’s treatment of Personal Information that we gather when you are on our website. Please read the following to learn more about what information we may collect from visitors and users of our website, how we use and protect this information and what choices you have on how that information is used. Protecting your privacy is important to us. RES Software certifies that it adheres to the Safe Harbor Framework concerning the receipt of personal data from the European Union and Switzerland to the United States of America. Accordingly, we follow the Safe Harbor Privacy Principles published by the U.S. Department of Commerce with respect to all such data. If there is any conflict between the policies in this Privacy Statement and the Safe Harbor Privacy Principles, the latter shall govern. RES Software also transfers personal information to other members of the RES Software group of companies in the European Economic Area and Australia.
RES Software’s adherence to these Principles may be limited in certain circumstances, in particular:
to the extent necessary to meet national security, public interest, or law enforcement requirements,
to the extent expressly permitted by any applicable law, rule or regulation, where there is a conflicting or overriding legal obligation.
To learn more about the Safe Harbor Framework please visit www.export.gov/safeharbor.
We self-certify compliance with https://safeharbor.export.gov/list.aspx.
DETAILS COLLECTED
If you want to contact RES Software or take part in a forum or survey via our website, we may ask you to provide name, address and phone number. We may also ask you to provide additional information e.g. your e-mail address if you want to obtain additional services or information or to resolve complaints or concerns.
These details are used in order to:
carry out services of RES Software and its administrative processing;
reply to your inquiries, provide you with requested products and services, set up your Customer’s account, i.e. via RES Software Customer Portal, and contact you regarding new products and services;
inform you of products and services offered by RES Software and other organizations selected on the basis of your personal preferences;
communicate with you about your account or transactions with us and send you information about features on our website or changes to our policies;
optimize or improve our products, services and operations.
Additionally, RES Software may ask you details about your work and position. We use these details to obtain a clearer picture of our customers and to develop our services and website in line with your personal preferences. RES Software may also record all information (e.g. the IP address used), with a view to collating user statistics, and for the protection of own website. Finally, we collect information that you voluntarily provide to us through responses to surveys, search functions, questionnaires, feedback forms and the like.
By submitting personal information, you consent to the use of that information as set out in this Privacy Statement.
SENSITIVE INFORMATION
We do not collect, use or disclose sensitive information (e.g. information about religious or political ideologies, racial or ethnic backgrounds).
YOUR CONTROLS AND CHOICES
RES Software provides you the ability to exercise certain controls and choices regarding our collection, use and sharing of your information. You can always contact us in order to:
correct, update or remove the personal information that you have provided to us;
change your choices for subscriptions, newsletters and alerts; and
change your preferences with respect to marketing contacts by e-mailing us at [email protected]. Such request will be processed within ten (10) days.
You may exercise your controls and choices, or request access to your personal information, by contacting us at [email protected]. Please be aware that, if you do not allow us to collect personal information from you, we may not be able to deliver certain products and services to you, and some of our services may not be able to take account of your interests and preferences. If you have questions regarding the specific personal information about you that we process or retain, please contact [email protected].
SAFE HARBOR PRINCIPLES
RES Software is committed to following the Privacy Principles for all Personal Information within the scope of the Safe Harbor Frameworks.
NOTICE
RES Software may collect and retain your Personal Information, or transfer your Personal Information within RES Software group of companies in the EU and the U.S. Such Personal Information will only be collected, saved and/or transferred for the purposes defined above. Where RES Software obtains personal information from individuals in the EU and/or Switzerland, we will inform them in clear and conspicuous language of the purposes for which it collects and uses their personal information and the choices and means that RES Software offers individuals for limiting the use and disclosure of their personal information.
CHOICE
RES Software will offer individuals the opportunity to choose (opt-out) whether their personal information is to be disclosed to a third party (unless that disclosure is allowed or required by contract or law), or to be used for a purpose that is incompatible with the purpose for which that information was originally collected or subsequently authorized by the individual. We support and subscribe to these protections, but in any event RES Software will not disclose or transfer your Personal Information to any other party for any purpose other than that for which it was collected, except when:
we have your express consent to share the information for a specified purpose;
we need to respond to subpoenas, court orders or other legal process, to establish or exercise our legal rights or defend against legal claims;
we need to protect the personal safety of the users of our websites or defend the rights or property of RES Software; and we believe it is necessary to share information in order to investigate, prevent, or take action regarding illegal activities, suspected fraud, situations involving potential threats to the physical safety of any person, violations of our terms of use, or as otherwise required by law.
ONWARD TRANSFERS
RES Software will ensure that any third party to which Personal Information may be disclosed will safeguard Personal Information consistently with this Privacy Statement. Furthermore, such third party:
has need of the Personal Information for the purpose for which that information was collected, and has privacy policies that are consistent with ours or agrees to abide by our policies with respect to Personal Information.
DATA SECURITY
The security and confidentiality of your information is extremely important to RES Software and therefore we have implemented technical, administrative and physical security measures that are designed to protect your personal information from loss, unauthorized access, improper use, alteration and unlawful or accidental destruction. Please be aware though that, despite our best efforts, no security measures are perfect or impenetrable.
Also, we limit access to personal information about you to employees who reasonably need to come into contact with that information to provide products or services to you or in order to do their jobs. Our employees who have been granted access to your personal information are made aware of their responsibilities to protect the confidentiality, integrity, and availability of that information and have been provided training and instruction on how to do so.
RES Software will only process Personal Information in a way that is compatible with and relevant for the purpose for which it was collected or authorized by the individual. To the extent necessary for those purposes, we will take reasonable steps to ensure that Personal Information is accurate, complete, current and reliable for its intended use.
Upon request, RES Software will allow an individual access to their Personal Information and allow the individual to correct, amend or delete inaccurate information, except where the burden or expense of providing access would be disproportionate to the risks to the privacy of the individual in the case in question or where the rights of persons other than the individual would be violated.
RES Software uses a self-assessment approach to assure compliance with this Privacy Statement and periodically verifies that the policy is accurate, comprehensive for the information intended to be covered, prominently displayed, completely implemented and accessible and in conformity with the Safe Harbor Privacy Principles. Any employee that RES Software determines is in violation of this Privacy Statement will be subject to disciplinary action.
DISPUTE RESOLUTION
Any questions or concerns regarding the use or disclosure of Personal Information should be directed to the following e-mail address: [email protected]. RES Software will investigate and attempt to resolve complaints and disputes regarding use and disclosure of Personal Information in accordance with the Safe Harbor Privacy Principles. We strongly encourage interested persons to raise any concerns using the contact information provided and we will investigate and attempt to resolve any complaints and disputes regarding use and disclosure of Personal Information in accordance with the Safe Harbor Privacy Principles.
RES Software has agreed to cooperate and comply with the EU Data Protection Authorities (DPAs) with respect to all types of data received from the EU and with the Federal Data Protection and Information Commissioner with respect to all types of data received from Switzerland. Hence, we have agreed to participate in the dispute resolution procedures of the panel established by the DPAs and the Commissioner to resolve disputes pursuant to the Safe Harbor Privacy Principles. If you do not receive timely acknowledgment of your complaint, or if your complaint is not satisfactorily addressed by RES Software, you may contact either the panel of the EU DPA via [email protected], or individual DPAs (http://ec.europa.eu/justice/data-protection/bodies/authorities/eu/index_en.htm), or the Commissioner via http://www.edoeb.admin.ch/kontakt/index.html?lang=en
When you visit our website, our servers automatically record standard information that your browser sends whenever you visit a website. These server logs may include information such as the page(s) you visited, your Internet Protocol address, cookie information, browser type, browser language, and the date and time of your request. We use this traffic data to help diagnose problems with our servers, analyze trends and administer our website.
Generally, we automatically collect usage information, such as the numbers and frequency of visitors to our website and its components, number of links clicked within the website, etc. This data is only used in the aggregate. This type of collective data enables us to determine how users interact with the website, so we can identify areas for improvement.
We use third party tool cookies to make our website easier for you to use, for example, by remembering that you are logged in. Cookies are alphanumeric identifiers that we transfer to your computer's hard drive through your web browser to enable our systems to recognize your browser and tell us how and when pages in our website are visited and by how many people. The third party tool cookies are listed below:
Third party: Google Inc. (Google Analytics)
Purpose: Web Analytics / Metrics
Cookie name: _ga
Google Analytics is a web analysis service that is offered by Google Inc. (“Google“). Google Analytics uses cookies to analyze the usage of the website by users (i.e., the pages you go to most often), and any error messages you receive to understand how the site is being used and identify problem areas to improve user experience. RES Software uses analytics.js JavaScript library on its website that uses a single first-party cookie containing an anonymous identifier which is used to distinguish users. You can find out more about Google’s privacy policy concerning its analytics service at Google privacy policy. Google also offers a tool that allows you to opt-out of being tracked by Google Analytics across all websites and can be found here. Marketo
Visitor / Marketing Profile
_mkto_trk
_mkt_library _mkt_whitepapers
Our website uses an automations system provided by Marketo Inc. Marketo uses cookies to recognize you as a unique user when you return to the site, and to track various data related to your website usage in order to provide custom content or services related to your specific interests. Marketo allows the scoring of users based on their activity (including repeat visits, number of pages visited, and requesting information) to determine how sales- ready users are. The cookies placed by the Marketo server are readable only by Marketo. All information gathered on users is held in secured global cloud servers.
You can find more information on Marketo here.
The cookies do not collect personal information, and we do not combine information collected through cookies with other personal information unless you have also completed one of the forms on our website requesting a service, such as to register to attend a webinar, view a recorded webinar, download a white paper, obtain a free trial or obtain a product demonstration. That information is cached so you do not have to complete the form each time. The "help" portion of the toolbar on the majority of browsers will direct you on how to prevent your browser from accepting new cookies, how to command the browser to tell you when you receive a new cookie, or how to fully disable cookies. You can set your browser to refuse all cookies, but some of our website features and services may not function properly if your cookies are disabled. We recommend that you leave the cookies activated, however, because cookies enable you to take advantage of all of our website features.
During the further growth and development of RES Software group, one or more companies or activities of RES Software may be transferred to a third party. In this case, your personal details might also be transferred.
PASSING ON INFORMATION

We operate globally and may transfer your personal information within the individual companies of RES Software or third parties in locations around the world for the purposes described in this Privacy Statement. Wherever your personal information is transferred, stored or processed by us, we will take reasonable steps to safeguard the privacy of your personal information.

CHANGES

The Privacy Statement can be amended from time to time consistent with requirements of the Safe Harbor Framework to accommodate new technologies, industry practices, regulatory requirements and/or for other purposes. Any such changes will be published on this page.
If you continue to use our website after any such amendments, this signifies your acceptance thereof.
QUESTIONS AND SUGGESTIONS

Should you have any questions and/or suggestions regarding this Privacy Statement or RES Software Privacy Policy, you can send an e-mail to RES Software customer service via [email protected].
Copyright © 1998-2014 RES Software. All rights reserved.
Effective date: May 29, 2014.
Copyright Statement: Copyright 2013 RES Software. RES and the RES Software Logo are either registered trademarks or service marks of Real Enterprise Solutions Nederland B.V. in Europe, the United States and other countries. RES Automation Manager, RES Workspace Manager, Workspace Virtualization Suite, Virtual Desktop Extender, RES IT Store and RES VDX are trade names of Real Enterprise Solutions Nederland B.V. in Europe, the United States and other countries. All other product and company names mentioned may be trademarks and/or service marks of their respective owners. Real Enterprise Solutions Development BV, The Netherlands has the following patents: U.S. Pat. "US 7,433,962", "US 7,565,652", "US 7,725,527", other patents pending or granted.
Energy-Efficiency Work Reaps Rewards
By Rob Knies | August 10, 2009 9:00 AM PT

These days, more than ever, it’s important for computing to be energy-efficient. Particularly in data centers, energy requirements represent a significant portion of operational costs, and power and cooling needs help dictate where data centers can be located, how close to capacity they can operate, and how robust they are to failure.

In part, however, this is true because computers are precision machines. They’re hard-wired that way. Ask a computer what the average daily temperature is in Seattle on May 16, and you might get an answer such as 57.942368 degrees. While most of us would be hard-pressed to discern the atmospheric difference between 57.942368 degrees and 58, computers are able to provide that precision—even when such precision is all but useless.

Surely, there must be a way to save the energy utilized to offer such superfluous exactitude. And if you could do so, given the thousands upon thousands of machines housed in a typical data center, surely significant savings would result.

Trishul Chilimbi certainly believes so. Chilimbi, senior researcher and manager of the Runtime Analysis and Design team within Microsoft Research Redmond’s Research in Software Engineering organization, is leading an effort to underscore the importance of energy-efficient computing, as evidenced in the paper Green: A System for Supporting Energy-Conscious Programming Using Principled Approximation.

Trishul Chilimbi

“The Green project,” Chilimbi explains, “is looking at how we can save energy efficiency purely looking at software. The high-level goal is to give programmers high-level abstractions that they can use to express domain knowledge to the underlying system so it can take advantage.”

In other words, round that temperature off to the nearest degree, please.

“Especially in the cloud-computing space and data centers,” Chilimbi says, “there are service-level agreements that you’re not always going to get a very precise answer. And even where they’re not in place, programmers know that there is an asymmetric importance of results in certain domains.

“Let’s take search for an example. My query results on the first few pages are very important. The ones on the 100th page might not be as important. We’d like to devote more resources to focusing on the first several pages and not as much to those much lower down the rank system.”

So, with a bit of programmer-defined approximation, computers can take much less time to provide results that, while not as precise as possible, deliver valuable information for real-world problems in a fraction of the time and the resource requirements necessitated by precision measurements.

“Computer programming languages have been built on a mathematical foundation to provide very precise answers,” Chilimbi says, “and in domains where you don’t need that precision, you end up overcomputing and throwing away results. If programmers specify what their requirements are, we’ll avoid this and do exactly what you need to meet those requirements, but no more.”

The energy savings, of course, depend upon the level of approximation the programmer specifies.

“Say 99.999 percent of all results should be similar as if you didn’t have this Green system in place,” Chilimbi says. “That’s a very high threshold.
And with that, we see savings of about 20 percent, which translates directly into energy, because if you do less work, you’re consuming fewer resources and less energy.“We found that if you’re willing to tolerate even lower levels of accuracy—say I have a speech-recognition program, and I can tolerate a loss of accuracy of 5 percent—maybe I can get it back with smart algorithms and understanding use context and get a factor of 2X improvement.”Such results might seem astounding, but not when you stop to think about it.“It’s not surprising there’s this diminishing margin of returns,” Chilimbi says. “As you go for the last improvement, you’re using more and more resources, and as you’re willing to scale back, you get fairly large savings. It can be very significant, and it’s up to the programmer to intelligently specify what’s realistic for the domain.“In many domains, such as graphics in a video game, anything beyond 60 frames per second the user can’t distinguish. You have potential for a graceful degradation, and it’s fairly significant.”Unnecessary PrecisionBut, he stresses, the savings depend on the domain.“You don’t want to do this for banking,” Chilimbi smiles. “There are certain domains in which you need absolute precision. But I think there are more domains for which you don’t need this precision. A lot of them are concerned with who the consumers of the information are. If they are human, they’re limited by audio and visual sensory perception. There is leeway to play around with that.”You’ve heard the question: If a tree falls in a forest and nobody’s around to hear it, does it make a sound? Now, try this one: If a human isn’t able to perceive the difference between a 10-millisecond response time and one that takes 5 milliseconds, why not save the energy the extra speed demands?Graphics is one such domain. Speech recognition is another. “There is a certain amount of fidelity we can deal with,” Chilimbi says. “If my battery is going to die, and I really want to see the end of this movie, I might be willing to compromise image quality for a little bit. Search is another classic example.”Lest you get the idea that the Green project is applied research just to squeeze more capability out of data centers, think again. As Chilimbi notes, there are legitimate, intriguing, basic research questions being refined as part of this effort. One is to rethink programming languages from the perspective of a requirement of approximate versus absolute precision. How do you provide guarantees to support your approximation techniques? And how can programmers communicate their high-level knowledge about programs and application domains to underlying hardware?The answer to the latter question might require designing hardware and software in tandem.“One idea might be to use lower-energy components,” Chilimbi says, “but to get the performance back, you could have software communicate more of its intent, so hardware doesn’t have to infer it. That’s another high-level goal: How can we co-design software and hardware so that the whole system is energy-efficient, rather than trying to deal with these pieces in isolation?”Quality of ServiceProviding quality-of-service guarantees is equally challenging.“Say I have a function in a program,” Chilimbi explains, “and there is a quality-of-service contract this function enforces. We have modular programs that have a notion of abstraction—I don’t need to know how this contract is implemented, just that this is the contract this module enforces. 
You could have a similar kind of quality-of-service contract in modules. Then the hard part becomes: How do I compose these quality-of-service agreements and construct an overall quality of service for the program?“That’s a challenge, because you can have non-linear effects of combining these things. What kind of guarantees can you give? Can you give static guarantees, or are they only runtime guarantees?”So, if you have approximated certain procedures sufficiently that they provide good-enough results while using fewer resources, and you analyze the level of precision necessary for a specific task, you’re home-free, right?Not so fast. What, now, can you do with the model you have just constructed?“You could say, ‘Well, I’m done if I have the model,’ “ Chilimbi says. “I’m going to use the problem, and I’m going to assume that everything follows this model, and if that’s true, everything will be good because the model has been designed so that it is only approximate when it guarantees that it can meet whatever quality of service is required.”The problem is, the real world isn’t always as accepting of lab-derived models as researchers would like.“Unfortunately,” Chilimbi states, “in the real world, you might get unanticipated scenarios or inputs. You have to prepare for all possibilities. That’s why the static model is a good starting point, but it’s not sufficient, especially if you have a scenario very different from the ones you’re seen. To handle that, you need a third part that says, ‘Every once in a while, I’m going to check the scenarios I’ve seen, the usage I’ve seen, to ensure that I’m still meeting the quality-of-service requirement.This diagram outlines how the Green project is designed to work.“The way we check that is, on the same scenario, we execute both the precise version and the approximate version our model suggests and measure the quality-of-service error and see if it’s still above the threshold. If everything is fine, we’re OK. But if the scenario is very different from what we anticipated, then we might recalibrate the model to cope with this. You need this third part to guarantee that no matter what the situation is, you will meet quality of service. That is crucial, because you want to have this guarantee for your users, and just modeling limited inputs doesn’t allow you to give that guarantee.”Chilimbi says he always has been interested in computer performance and optimization, but in recent years, he has seen his interests shift a little bit.“As we’ve been moving to cloud computing with data centers and software as a service,” he says, “energy becomes, from a business perspective, very important, both on the data-center side and on the client side. On the data-center side, it’s about monthly operating costs and power bills. On the client side, it’s about battery life.”And then there’s a new focus on the environment.“I’m still interested in performance, but it’s not just performance at any cost—it’s performance per joule,” he says. “Energy efficiency is a natural extension. Data centers have interesting new workloads, I want to do research there, and it’s not just pure performance. It’s performance per joule and energy efficiency.”In fact, the Green project is part of a bigger effort—one that includes a broad swath of personnel across Microsoft—to examine what would happen if the entire software and hardware stack were rebuilt with energy efficiency in mind.“What if we could start from scratch?” Chilimbi ponders. 
“How could we rethink all parts of it—the programming language, the runtime system, the operating system, the microarchitectural design—so that everything is up for grabs?“Are we going to design a microprocessor from scratch? Maybe not. But what we can do is see what current microprocessors are good at and what they’re bad at and add things like hardware accelerators to make them better. Rather than starting with something that’s very high-performance, expensive, and inefficient, you could start with something that’s very efficient but not very high-performance and see what parts of that inefficient piece are crucial for data-center workloads that would deliver high performance but still be energy-efficient?”Of course, it helps when considering such scenarios to have Microsoft-scale resources behind you.“Being at Microsoft,” Chilimbi says, “I feel I have an advantage, because I have access to data centers and data-center workloads that a lot of people don’t. By looking at that, you get access to interesting real problems.”One of the real problems Chilimbi and colleagues have been examining has been search. “The search domain was what we first focused on,” Chilimbi says. “It’s our largest property, we care about energy efficiency, let’s see what we can do.”Search ExpenseThey identified the expensive functions in search and what they did, specifically the portions that identify all documents that match a query and then rank these documents.“What we found,” Chilimbi reports, “is that, many times, you return a certain set of results, but to return those, you look at many more documents than needed. If you had an oracle, you would look at just a certain set of documents and return those. But you don’t have that. You need to look at more and rank them.“So we said, ‘We can design an algorithm that can look at fewer documents but give the same results by deciding to stop early.’ When we started with that, we said, ‘Hey, wait a second! This is generalizable. We can this for other programs, as well.’ That’s where the whole framework and abstraction came about. But it was really targeted at search initially. The search people told us, ‘We have so many machines that if you improve it by this percent, it goes into millions of dollars really fast. That seemed motivating, so we decided to focus on search and how we could search fewer documents but return pretty much the same results. And then we can use those savings to improve the quality of ranking the documents that really matter.”For Microsoft’s search efforts, it wasn’t so much a way to save money as it was a way to be efficient and use the savings to improve other parts of the search experience. And the technology is being used in Bing search today.Chilimbi and his colleagues—including, at various points, Woongki Baek, a Stanford University Ph.D. candidate; Kushagra Vaid, from the Microsoft Global Foundation Services group; Utkarsh Jain, from Bing; and Benjamin Lee from Microsoft Research Redmond’s Computer Architecture Group—aren’t done yet. They hope to find a way to automate more of the approximation process, to learn how to compose quality-of-service metrics, and to identify the information software can communicate to hardware to make the two work together more efficiently.“It’s a different way of looking at computing,” Chilimbi says. “We’ve been programmed to think of computers as these mathematical units that compute functions, whereas real-world problems are often much more fuzzy and approximate. 
We’ve forced these real-world problems into mathematical-function form.”

“We want to see how far we can go with the results we have to date with more approximate models of computing. In the long run, this might make perfect sense. Green has been the first few steps, but the results are pretty good, so they justify continuing along this very interesting, novel path. Approximate computing is very interesting, especially if we can bound and guarantee a specified level of approximation, and I think we should continue investigating it. There’s a lot of opportunity there.”

Related Links: Trishul Chilimbi, Runtime Analysis and Design, Microsoft Research Redmond, Research in Software Engineering, Green: A System for Supporting Energy-Conscious Programming Using Principled Approximation, Bing, Woongki Baek, Microsoft Global Foundation Services, Benjamin Lee, Computer Architecture Group
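To make the early-stopping idea from the search discussion above a little more concrete, here is a small, hypothetical sketch. It is not the algorithm Chilimbi's team built for Bing; the document structure, the scoring function, and the stability test are all illustrative assumptions. The point is simply that ranking can stop once additional work no longer changes the top results a user will actually see.

// Hypothetical "early termination" in ranking: stop scoring candidates once
// extra work stops changing the top-k results. Assumes the candidate list
// arrives roughly ordered by a cheap prior score, so later documents are
// less and less likely to displace the current leaders.
function rankWithEarlyStop(candidates, scoreFn, k, stableRounds) {
  var scored = [];
  var unchanged = 0;
  var prevTopIds = '';

  for (var i = 0; i < candidates.length; i++) {
    var doc = candidates[i];
    scored.push({ id: doc.id, score: scoreFn(doc) });
    scored.sort(function (a, b) { return b.score - a.score; });

    var topIds = scored.slice(0, k).map(function (d) { return d.id; }).join(',');
    unchanged = (topIds === prevTopIds) ? unchanged + 1 : 0;
    prevTopIds = topIds;

    // If the top-k set has not changed for a while, assume further scoring
    // will not alter what the user sees, and stop early.
    if (unchanged >= stableRounds) break;
  }
  return scored.slice(0, k);
}

In a real system, the stopping rule would be tied to the quality-of-service threshold the programmer specifies, which is exactly the kind of contract the article describes.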
IBM X-Force® Report Reveals Unprecedented State of Web Insecurity
Report finds more than 500 percent increase in malicious Web links and increased sophistication in vulnerability exploitation
ARMONK, N.Y. - 26 Aug 2009:
IBM (NYSE: IBM ) today released results from its X-Force 2009 Mid-Year Trend and Risk Report. The report's findings show an unprecedented state of Web insecurity as Web client, server, and content threats converge to create an untenable risk landscape.
According to the report, there has been a 508 percent increase in the number of new malicious Web links discovered in the first half of 2009. This problem is no longer limited to malicious domains or untrusted Web sites. The X-Force report notes an increase in the presence of malicious content on trusted sites, including popular search engines, blogs, bulletin boards, personal Web sites, online magazines and mainstream news sites. The ability to gain access and manipulate data remains the primary consequence of vulnerability exploitations.
The X-Force report also reveals that the level of veiled Web exploits, especially PDF files, is at an all-time high, pointing to increased sophistication of attackers. PDF vulnerabilities disclosed in the first half of 2009 surpassed disclosures from all of 2008. From Q1 to Q2 alone, the amount of suspicious, obfuscated or concealed content monitored by the IBM ISS Managed Security Services team nearly doubled.
"The trends highlighted by the report seem to indicate that the Internet has finally taken on the characteristics of the Wild West where no one is to be trusted," said X-Force Director Kris Lamb. "There is no such thing as safe browsing today and it is no longer the case that only the red light district sites are responsible for malware. We've reached a tipping point where every Web site should be viewed as suspicious and every user is at risk. The threat convergence of the Web ecosystem is creating a perfect storm of criminal activity."
Web security is no longer just a browser or client-side issue; criminals are leveraging insecure Web applications to target the users of legitimate Web sites. The X-Force report found a significant rise in Web application attacks with the intent to steal and manipulate data and take command and control of infected computers. For example, SQL injection attacks - attacks where criminals inject malicious code into legitimate Web sites, usually for the purpose of infecting visitors - rose 50 percent from Q4 2008 to Q1 2009 and then nearly doubled from Q1 to Q2.
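As a purely illustrative aside (not part of the IBM report), the class of flaw behind SQL injection can be sketched in a few lines. The db object and its query method here are assumptions standing in for whatever database layer an application uses; the point is that concatenating user input into a statement lets that input rewrite the query, while a parameterized query keeps it as plain data.

// Vulnerable: user input is concatenated directly into the SQL statement,
// so crafted input can change the statement itself (e.g. append an INSERT
// that plants malicious content on the page).
function findUserVulnerable(db, name) {
  return db.query("SELECT * FROM users WHERE name = '" + name + "'");
}

// Safer: a parameterized query treats the input purely as data.
function findUserParameterized(db, name) {
  return db.query("SELECT * FROM users WHERE name = ?", [name]);
}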
"Two of the major themes for the first half of 2009 are the increase in sites hosting malware and the doubling of obfuscated Web attacks," Lamb said. "The trends seem to reveal a fundamental security weakness in the Web ecosystem where interoperability between browsers, plugins, content and server applications dramatically increase the complexity and risk. Criminals are taking advantage of the fact that there is no such thing as a safe browsing environment and are leveraging insecure Web applications to target legitimate Web site users."
The 2009 Midyear X-Force report also finds that:
Vulnerabilities have reached a plateau. There were 3,240 new vulnerabilities discovered in the first half of 2009, an eight percent decrease over the first half of 2008. The rate of vulnerability disclosures in the past few years appears to have reached a high plateau. In 2007, the vulnerability count dropped for the first time, but then in 2008 there was a new record high. The annual disclosure rate appears to be fluctuating between six and seven thousand new disclosures each year.
PDF vulnerabilities have increased. Portable Document Format (PDF) vulnerabilities disclosed in the first half of 2009 already surpassed disclosures from all of 2008.
Trojans account for more than half of all new malware. Continuing the recent trend, in the first half of 2009, Trojans comprised 55 percent of all new malware, a nine percent increase over the first half of 2008. Information-stealing Trojans are the most prevalent malware category.
Phishing has decreased dramatically. Analysts believe that banking Trojans are taking the place of phishing attacks geared toward financial targets. In the first half of 2009, 66 percent of phishing was targeted at the financial industry, down from 90 percent in 2008. Online payment targets make up 31 percent of the share.
URL spam is still number one, but image-based spam is making a comeback. After nearing extinction in 2008, image-based spam made a comeback in the first half of 2009, yet it still makes up less than 10 percent of all spam.
Nearly half of all vulnerabilities remain unpatched. Similar to the end of 2008, nearly half (49 percent) of all vulnerabilities disclosed in the first half of 2009 had no vendor-supplied patch at the end of the period.
The X-Force research team has been cataloguing, analyzing and researching vulnerability disclosures since 1997. With more than 43,000 security vulnerabilities catalogued, it has the largest vulnerability database in the world. This unique database helps X-Force researchers to understand the dynamics that make up vulnerability discovery and disclosure.
IBM is one of the world's leading providers of risk and security solutions. Clients around the world partner with IBM to help reduce the complexities of security and strategically manage risk. IBM's experience and range of risk and security solutions -- from dedicated research, software, hardware, services and global Business Partner value -- are unsurpassed, helping clients secure business operations and implement company-wide, integrated risk management programs.
For more security trends and predictions from IBM, including graphical representations of security statistics, download the 2009 IBM X-Force Mid-Year Trend and Risk Report today.
For more information about IBM, visit www.ibm.com. Contact(s) information
Jennifer Knecht
917-472-3607 [email protected]
Office & Outlook Integration for Alfresco
Solution Name: Office & Outlook Integration for Alfresco
Category: Business Applications
Language(s): English
Works with Alfresco: 4.1, 4.0, 3.4

With Micro Strategies' Office and Outlook integration modules for Alfresco, users are able to stay inside their everyday applications (Microsoft Office and Outlook) and take full advantage of a robust Enterprise Content Management platform. They have the ability to search across their repository using pre-defined document properties. Users can search for document folders based on folder properties (deal name, project name, matter name, loan number, etc.) and then select appropriate documents within a folder to edit. The ability to access previous versions of documents to do comparisons and the ability to create folders means the user does not need to leave Office to accomplish tasks. Additional capabilities include selecting and executing workflows and validating document properties from an external source (CRM, ERP, etc.).
The system recognizes and notifies users (color coded e-mails and attachments) if they're trying to save duplicate emails in the repository, regardless of who originally saved them. If an e-mail or attachment appears in red text it has not been saved. If it appears in green text, the e-mail or attachment has already been saved in the repository. Users can easily assign document properties to the e-mail and attachments individually or tag a number of e-mails with the same properties (Deal, Project, Matter, etc.). Rules can be setup to store the e-mails and attachments in specific Sites or folders based on document properties or the users can select the folder for saving the e-mails and attachments. The user can also indicate if they would like to save an attachment as the latest version of a document already stored in the repository.
The e-mail is stored in its native format, so users will have the ability to open and forward the e-mail to other users. When the attachments are individually saved, they are also linked back to the e-mail. This allows users to know the originating e-mail for the document.
When creating an e-mail users can include documents from the Alfresco repository. They can access the repository in the same manner they open documents from the MS-Office integration. This allows users to create an e-mail and then search for the appropriate document to be attached to the outgoing e-mail.
Avoid storing duplicate e-mails
Save and Assign metadata to e-mails and attachments from MS-Outlook
Attach documents from Alfresco to outgoing e-mails
Save multiple e-mails simultaneously
Requirements: MS-Outlook 2007, 2010
Licensing Model: Licensed by Alfresco Server
Version History: Version 3.2
Support Option: Supported by Solution Provider
Support Contact(s): Adam Storch, [email protected]
Landing Page URL
http://www.microstrat.com/Pages/Alfresco_Outlook_Integration.aspx
Micro Strategies
Established in 1983, and an Alfresco partner since 2007, Micro Strategies’ Enterprise Content Management practice has focused on all facets of ECM solutions - Document Management, Records Management...
Image Adjusts Re-Orders Policy On Saga Comic To Allow For Continuation Of Re-Orders On #s 7-8
I wasn't sent the letter that Robot 6 was sent from Image Comics Publisher Eric Stephenson adjusting a decision made earlier this week to back away from automatically reprinting heavy-selling comics later in the run like Saga, so I'm happy to send you over there to read it. I'm not sure I'd describe it as changing course, or at least that's not how I see it. It reads to me like they're backing away from the immediacy of the move, and apologizing for the frustration that came through the initial announcement -- a brusque tone some thought needlessly antagonistic -- but it also reads to me that they're keeping an eye on making some sort of move to reduce their risks here. What I'm guessing we'll see is some sort of modified version of that initial announcement: it may be left to the creator/creators, it may be restricted to certain titles, it may be a move of diminished severity. Saga really is a key book here, because it seems like orders should have stabilized by now and that's a big enough book people will pay attention.
Having said all that, I should probably point out that this is one of those things that doesn't seem like a big deal to me, and certainly doesn't fit our usual framework of something being "right" and something else being "wrong." It seems like Image is more or less just sorting out different strategies for how they want certain of their big serial books to be presented. I think as long as they're announcing in advance this is how something will work -- and that's something they're more than doing now, with the adjustment -- any kind of non-exploitative business decision is fine.
There's a good supplementary article here, where Rich Johnston rounded up statements from a bunch of Image creators. Milton Griepp's article here focuses on the issue in question being a natural candidate for extensive reprinting + the way Image is going to offer a heightened discount to the way they'll handle a reprint on that one, turning a negative into a potential positive. posted 5:00 pm PST
Microsoft pushes switchover deal for CRM Online
Salesforce.com and Oracle CRM on Demand users get six months free to switch
Chris Kanaracus (IDG News Service) on 04 November, 2009 08:49
Microsoft is trying to steal away Salesforce.com and Oracle CRM on Demand customers with a new offer that will provide them with six months' access to its own CRM Online application at no charge if they sign a 12-month contract.Microsoft charges US$44 per month per user for CRM Online Professional edition. That compares to $65 per month per user for Salesforce.com Professional. Oracle CRM on Demand pricing starts at $70 per month per user.Meanwhile, Microsoft's application is comparable from a feature standpoint and "already about 35 percent cheaper" than the competition, said Brad Wilson, general manager of Dynamics CRM. The six-month offer is valid through the end of this year. Microsoft will consider expanding access to customers of other CRM products once it sees how well the program is received, Wilson said. Six months is about how long it takes a customer to know for sure whether an application is right for their business, said Ray Wang, partner with the analyst firm Altimeter Group.But potential hurdles lie in the way of a smooth transition over to CRM Online, he added. For one thing, a customer and Oracle or Salesforce.com may have a year-to-year deal, which might still be in effect when the six-month trial period expires, Wang said. While contract terms may allow the customer to cancel, they may not get a refund on the year's remaining fees, according to Wang. "Hopefully you'd be [signed up] month-to-month. It's good to check and see where you are in that process."Overall, however, "users win" in price wars like this, Wang said. Microsoft on Monday also announced price cuts for its Business Productivity Online Suite. Other SaaS (software as a service) vendors, such as NetSuite, have made a steady stream of financial enticements in recent months too, as sales slowed during the global recession.Salesforce.com has also quietly lowered monthly per-user fees for its two lowest-end editions, Contact Manager and Group Edition, to $5 and $25 respectively, down from $9 and $35. Meanwhile, Microsoft is announcing the CRM switch-over deal in conjunction with an update to CRM Online, Wilson said. It is also planning to roll out the software worldwide in the second half of 2010, he said. The service is now available in North America.In the new release, Microsoft made signing up for CRM Online "super-simple," he said. No credit card information is required to sign up, although users need to provide an e-mail address. They can then start a free trial with either Microsoft's Outlook client or a browser-based interface, Wilson said. Thirty-day trials include sample data so users can begin experimenting with the system. A series of help tools provide information on setup and maintenance.Microsoft has also developed an improved data import wizard. In addition, mobile access is available at no additional charge for any phone with a HTML 4.0-compliant Web browser."We specifically tried to engineer [the application] to make it really easy for people who don't have CRM systems," Wilson said.
Tags: SaaS, Salesforce.com, Microsoft, Dynamics CRM, crm
Chris Kanaracus | 计算机 |
Limited gaming/development experience for cross-platform HTML5 games.
by Przemyslaw Szczepaniak on 03/18/14 10:35:00 am 7 comments
A comment I received recently on my post titled "Where and how can we play HTML5 games?" has inspired me to think about the limitations which developers can run across during the development of HTML5 cross-platform games. The experience created for these games can depend on how developers use the code and how they create game mechanisms. In the end, imagination and overcoming a few technical issues are the only limitations, and those have to be kept in mind when developing a game. Having fun with friends who use different devices can be very attractive. Thanks to the cross-platform feature, gameplay can happen on multiple platforms and operating systems, but before we start to create a cross-platform game, we need to think about the limitations that we will have on each of the devices we are targeting.
The first issue we meet is input, or steering.
Before developing a cross-platform HTML5 game, we need to think about the gamer's experience and how they will operate the game. For PCs or consoles it is pretty simple. Most games developed for those platforms use a keyboard and mouse or possibly a special input device such as a gamepad, joystick or steering wheel that can also be customized. But for smartphones, tablets or devices running similar OSes, such as Smart TVs, it is much more difficult. Touch, tap, and slide gestures limit how the user can steer in a game, and we cannot customize them. The TV controller is also limited in its function. This is a very obvious issue, but it may be crucial to gameplay and to the game types available. Think about it seriously before creating a prototype, and then test it so you don't run into future issues in the project's development.
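As a rough illustration of how one code base can absorb some of these input differences, here is a minimal sketch that maps mouse and touch events onto a single press/move/release interface. The game object and its onPress/onMove/onRelease methods are illustrative assumptions; only the DOM event APIs are standard.

// Minimal input abstraction: route mouse and touch input to one handler set.
function bindPointerInput(canvas, game) {
  function toGameCoords(clientX, clientY) {
    var rect = canvas.getBoundingClientRect();
    return { x: clientX - rect.left, y: clientY - rect.top };
  }

  canvas.addEventListener('mousedown', function (e) {
    game.onPress(toGameCoords(e.clientX, e.clientY));
  });
  canvas.addEventListener('mousemove', function (e) {
    game.onMove(toGameCoords(e.clientX, e.clientY));
  });
  canvas.addEventListener('mouseup', function () {
    game.onRelease();
  });

  canvas.addEventListener('touchstart', function (e) {
    e.preventDefault(); // avoid synthetic mouse events and page scrolling
    var t = e.changedTouches[0];
    game.onPress(toGameCoords(t.clientX, t.clientY));
  });
  canvas.addEventListener('touchmove', function (e) {
    e.preventDefault();
    var t = e.changedTouches[0];
    game.onMove(toGameCoords(t.clientX, t.clientY));
  });
  canvas.addEventListener('touchend', function () {
    game.onRelease();
  });
}

A gamepad or a TV remote would still need its own mapping onto the same three callbacks, which is exactly the per-platform design work described above.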
Testing the games may be difficult.
Creating the game code for many platforms in a multiplayer game can be difficult, especially since developers have to spend extra time testing and fixing issues for the various devices. It is fairly obvious that the slogan "One code, multiple devices" doesn't really work as easily as it sounds. It may cause extra frustration. I'd like to quote one of Gamasutra's articles: "While HTML5 might be designed to run on a wide range of devices, there's still no reliable way to maintain performance across varying hardware specifications". In this article, EA creative director Richard Hilleman said that "On my own computer, which runs on an i7, I couldn't get more than a few frames per second [from our demo]". Further in the article he explained that "high performance JavaScript is obtuse at best", so it's hard to predict how an app will run on a given hardware specification. A developer has to solve a lot of problems during game testing. The variety of devices in your gaming studio has to be really big, and it's not limited only to devices that have the latest system updates or simply the newest smartphones or tablets. Developers also have to think about older devices, the ones owned by gamers who don't update their hardware or systems very often. Because of those issues the game may simply not work on a large share of devices. The same issue occurs with older versions of browsers on multiple devices on various platforms.
The speed of the game on various devices requires code optimization.
This may be very difficult, and it is strongly connected with the game testing process because a game can work great on a PC, but the performance on a smartphone can be poor because of the lack of CPU power or memory (or vice versa if a game was designed for a smartphone in the early stages.) It is important to design the game while keeping the technology limitations in mind for various devices. For example, you might create a great game based on WebGL, but it simply will not work on a tablet, or worse will not work on a smartphone. This may sound obvious, but I believe you get the idea.
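One common way to cope with this spread in device power is to measure real frame times at runtime and scale the work done per frame (particle counts, effect quality, update detail) to fit. The following is only a hedged sketch, not something the article prescribes; game.setDetailLevel(), game.update() and game.render() are assumed hooks, not standard APIs.

// Adaptive quality sketch: adjust a "detail level" based on measured frame times.
var detailLevel = 2;          // 0 = lowest, 2 = highest (illustrative scale)
var frameTimes = [];
var lastTime = performance.now();

function gameLoop(now) {
  frameTimes.push(now - lastTime);
  lastTime = now;

  if (frameTimes.length >= 60) {   // re-evaluate roughly once a second at 60 fps
    var avg = frameTimes.reduce(function (a, b) { return a + b; }, 0) / frameTimes.length;
    if (avg > 22 && detailLevel > 0) detailLevel--;        // slower than ~45 fps: cut detail
    else if (avg < 15 && detailLevel < 2) detailLevel++;   // well under the ~16.7 ms budget: raise it
    game.setDetailLevel(detailLevel);
    frameTimes.length = 0;
  }

  game.update();
  game.render();
  requestAnimationFrame(gameLoop);
}
requestAnimationFrame(gameLoop);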
Issues with audio implementation for mobile devices.
Issues may occur with game lag or even the browser crashing while playing. Develop-online quoted Chris Herbert, technical lead of Remode Studios, who says, "... audio is considerably behind the rest of the platform's features, with problems such as lag and file support", although he believes there could be vast improvements in this area in the future. There are clearly areas of HTML5 that need some work, audio being an area that's lagging behind quite dramatically, particularly in mobile browsers (...) "The problems mainly relate to how sound data is swapped out of sound channels, many browsers are experiencing a lag, or in some cases jumps or pops (...) Also, the support for audio file formats is inconsistent, with some browsers favouring the ogg vorbis format whilst others – such as Safari – favour the MP3 format." But further in the article, Sandy Duncan, CEO of YoYo Games, confirms that this issue is being worked on and just needs time to be fixed. Hopefully that fix will come soon, since this is one of the core elements of having fun. It hasn't been the biggest issue for mobile, since most HTML5 mobile games don't have sound, but it is not acceptable to players who play those games on PC. It would be great to have the same sound experience on all platforms while playing a game.
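Until format support converges, a common workaround for the inconsistency mentioned above (shown here only as a small sketch, and not a fix for the latency problems) is to ship each sound in both Ogg Vorbis and MP3 and let the browser pick whichever it reports it can play. The file paths are illustrative.

// Pick a supported audio format at load time.
function createSound(basePath) {
  var audio = new Audio();
  // canPlayType returns "", "maybe" or "probably" depending on support.
  if (audio.canPlayType('audio/ogg; codecs="vorbis"')) {
    audio.src = basePath + '.ogg';
  } else if (audio.canPlayType('audio/mpeg')) {
    audio.src = basePath + '.mp3';
  }
  return audio;
}

var jumpSound = createSound('sounds/jump'); // assumes sounds/jump.ogg and sounds/jump.mp3 both exist
// jumpSound.play() can then be called from the game code.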
Big screen vs small screen experience
The difference in screen resolution can be an issue. If, for example, developers make strategy or RPG games with a vast map that players have to move their units around on, there can be a problem. Players with lower-resolution screens (smartphones) don't have a chance to react as easily if they need to scroll the whole map to defend their city or send out troops. We may find similar problems with differences in game design for the different screen sizes. On the other hand, simple games made for mobile devices may not look as nice on high-resolution TV or PC screens, so this would also require a solution. One option is for developers to build only the types of games that will not cause a problem on any platform.
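For the resolution side of this (it does not solve the design problem of small physical screens), a common baseline technique is to size the canvas to the viewport and account for high-density displays, so art stays sharp on TVs and PCs without changing the game's logical coordinates. This is a sketch under the assumption that the game draws to a 2D canvas with the element id 'game'.

// Resize the drawing buffer to match the element size and the device pixel ratio.
function fitCanvas(canvas) {
  var dpr = window.devicePixelRatio || 1;
  var width = canvas.clientWidth;    // CSS size, e.g. set to 100% of the viewport
  var height = canvas.clientHeight;

  canvas.width = Math.round(width * dpr);    // actual pixel buffer
  canvas.height = Math.round(height * dpr);

  var ctx = canvas.getContext('2d');
  ctx.setTransform(dpr, 0, 0, dpr, 0, 0);    // game code keeps working in CSS pixels
  return ctx;
}

window.addEventListener('resize', function () {
  fitCanvas(document.getElementById('game')); // 'game' is an assumed element id
});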
Multi platform interface
At the start of development of an HTML5 game, developers have to design different types of game content on the screen (especially different user interfaces) for players on different platforms. This can become quite complicated, because every platform uses a different type of screen setup. Yes, it is very similar to the resolution issue, and it certainly requires planning of the screen layout. This is where UX and ease of use come in, and it is crucial to plan the interface around the design limitations of the various devices.
Those issues are not reasons to give up on HTML5 cross-platform game development. They are just some of the obstacles that every developer meets during game creation. We have already seen many past issues with game performance and speed get solved. Nothing is impossible as long as developers and the companies that create browsers, operating systems, and coding languages find ways to solve these issues. Do you agree? Have you found other issues in HTML5 coding, and solutions you would like to share? I'd like to hear them from your point of view.
Lennard Feddersen
I've just released a playable version of Real Estate Empire 3 which was my first experience with coding an HTML/Javascript/PHP/CSS type game. My results will not be applicable to everyone as REE3 is a turn based game where user events typically trigger a page load. That said, what I have found is:
1. I decided to target a 1024x768 display. Future devices would be a simple port rather than trying to support all devices up front with one code base. This means that I work out of the box on desktop and larger tablets. At this point I'm really glad I made that choice. Now that I'm up and running I don't know how I would shoe horn this design into a smart phone anyhow and trying to figure out scaling for all kinds of devices up front was just wasting time. Going forward I may decide to work with smart phones but, as I mentioned earlier, I will fork the code base and do a quick port rather than try and support all devices at once.
2. Keep it simple. My client server model is that the client provides input to the server where all decisions are made - I think you would have to work pretty hard to cheat at the game right now since the client only provides input to the server logic. I keep a global block of variables that I pass down to the client using PHP's json_encode to write a simple javascript array. A big array of global variables is pretty old school and, for this simple type of game, incredibly productive. The client pages can reference the variable block but never make changes to it. The server reads and writes the variable block to a user unique simple, flat file.
3. I tried the Aptana editor for awhile - the color coding of my javascript was pretty helpful but I eventually went back to my old sidekick of VC++ 6.0 and it's half baked BRIEF editing capability. I use Firebug to debug my javascript and tend to test with Chrome and Firefox open on my second monitor in two windows. I've just started using Komodo this week for local PHP debugging and it just works. I downloaded it, pointed it at PHP.exe and was debugging effectively within minutes - I expect when the 21 day trial is over I'll be a year by year subscription customer.
4. I've not decided how to make money with the program yet. Since it loads a lot of pages over a single play through I'm going to see how beta testing goes over the next month or two and see if I can just keep it free. TBD.
You can play Real Estate Empire 3 at the Rusty Axe Games website - I'd be happy to hear from you (I've got a contact button on the website) about your experiences if you do!
Vladislav Zorov
Hi, thank you for the great article!
About the first issue, I guess that's only relevant if you're trying to make a traditional PC game run on a mobile device. The game I'm currently making, touches and swipes are not only not limiting, but in fact essential to the game experience - sure, it's playable with a mouse (it's how I debug locally), but it's not nearly as "physical" without the touchscreen. Hitting the ball with your finger is so much more satisfying than the abstraction provided by keyboards, mice or gamepads (or TV remotes).
About the different screen sizes, I'm trying to make everything scalable, so only SVG files and procedural graphics need apply - this takes care of the different resolutions, however nothing can be done about the different physical screen sizes. You just need space to play, and you can't just make everything smaller, because game elements are directly related to the size of the spot a finger (or two) make on the display. My solution? Make sure it's clearly mentioned that the game is ONLY for tablets, and, even though a smartphone may run it, it won't be playable.
About audio, I'm not there yet, but the Web Audio API (http://caniuse.com/#feat=audio-api) looks like it might be the solution to real-time game audio issues. Sadly it's only supported on iOS's stock browser, which might mean the game will only be playable on iOS tablets and nothing else (although it should also work on Chrome for Android).
In conclusion, I'm actually making a game for the newest-generation iOS tablets, but I just don't want to use Objective C and a Macintosh computer. So, no cross-platform issues here - if anything else happens to run it now or in the future, great, but supporting every possible device and type of device under the sun (especially from a single code base) is not a requirement. Of course, standards are being followed religiously, so as to maximize the chances of that happening :)
Vladislav, have you found any browsers that don't support SVG? I've been thinking about some of my graphics and converting them to SVG. There's an online tool that does a credible job on my houses.
Lennard, I'm still using debug graphics, but I was kind of assuming it will work, since SVG has been pretty widely adopted in browsers for a while now - http://caniuse.com/#search=svg
Using SVG in canvas requires the same machinery as displaying .svg files on the web, so if you can view .svg files, you can use .svg sprites - it's just a `new Image()` with a specified width and height (that you set dynamically at runtime), see http://getcontext.net/read/svg-images-on-a-html5-canvas
Ryan Christensen
Snapsvg is great for this: http://snapsvg.io/ It is Adobe-supported, and its creator originally created Raphael back in the day, one of the first SVG toolkits. Mobile and desktop support for SVG is solid for the first time ever: http://caniuse.com/svg.
Bart Stewart
To a couple of the points raised (graphics and performance), an article a year or two ago at Gamasutra noted that a big source of performance hits is letting the platform do graphics scaling.
Substantial speed improvement came from referencing graphics that are sized exactly for each target platform. I don't know if that's still a factor for current implementations of HTML5, but it was a memorable comment at the time.
Also, nice to see someone else who appreciates BRIEF. :)
What I'm doing is allocating a new invisible canvas for every asset as part of the loading code, to which I render the SVG (or oversized PNG) sprites with the appropriate size for the screen. Then each frame is just a canvas-to-canvas copy without scaling, which seems to be very fast (and, of course, the scene canvas is sized 1:1 with device pixels).
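A minimal sketch of the pre-rendering approach described in that comment (asset names, sizes and paths are illustrative, and the SVG is assumed to declare its own dimensions or viewBox so it scales cleanly): draw each SVG once into an offscreen canvas at its final on-screen size, then blit that canvas every frame.

// Pre-render an SVG (or large PNG) once at the exact size it will be drawn at.
function prerenderAsset(url, width, height, onReady) {
  var offscreen = document.createElement('canvas');
  offscreen.width = width;
  offscreen.height = height;

  var img = new Image();
  img.onload = function () {
    offscreen.getContext('2d').drawImage(img, 0, 0, width, height);
    onReady(offscreen);
  };
  img.src = url; // e.g. 'assets/ball.svg' -- illustrative path
}

// Per frame, only unscaled canvas-to-canvas copies remain, for example:
// sceneCtx.drawImage(prerenderedBall, x, y);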
Y2K bug
Name: Leonardo
Is the Y2K bug going to shut down all the computers?
Leonardo,
Computers won't actually shut down with the Y2K bug. Almost all computers
will continue to work just fine. What will happen, however, is that some
computers will not recognize what year it is. Just like we write the date
as 8/31/99, some computers store it that way. In the 1800's, they wrote the
date the same way, so really, when you see a date as 8/31/99, you just
assume that it is 1999.
Computers do the same thing, but when the date becomes 1/1/00, they will
think it is the year 1900, when it really is the year 2000.
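Here is a tiny, made-up example of the kind of calculation that goes wrong
(the numbers and variable names are only for illustration):

// A program that stores only two-digit years:
var accountOpenedYear = 99;   // meaning 1999
var currentYear = 0;          // meaning 2000, but treated as if it were 1900

var yearsOpen = currentYear - accountOpenedYear;
// yearsOpen is -99 instead of 1, so anything computed from it
// (interest owed, an age, "days overdue") is also wrong.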
For most computers, this won't be a problem. The computers in toaster,
cars, radios, etc, don't care what year it is so it won't matter.
What people are worried about are the computers that do use the current date
for calculations like computers that control banks, power stations, water
plants, and the internet. Since these computers rely on dates to determine
how much interest to charge, when to send bills, and things like that, if
they get the date wrong in the year 2000, it will cause a lot of problems.
No one really knows how many problems will be caused, but in general, it is
almost certain there will be some problems.
Some people believe it will be the end of civilization as we know it.
Others think there won't be any noticeable problems at all.
I think there will be some problems, but I also think most of the problems
will be simple surprises that are quickly worked out.
For example, if a city loses power, they will just set the clocks back for a
week or so until the problem is fixed.
Businesses, like banks, which are affected will do their best to fix the
problem as soon as possible, or they will lose a lot of customers, and they
don't want that to happen.
The reason I feel the problems will be relatively small, is because it will
cost businesses, cities, and states a lot of money to not have tested their
computers before the year 2000, and they don't want to lose the money. So
most are doing everything they can to make sure they don't lose money when
the year 2000 comes.
Eric Tolman
No. In fact, it won't shut down any of them. Life would be simpler
if it did. The computer would just stop working on January 1, 2000,
and then people would wonder why, and come to fix it.
The problem is that the computer DOESN'T shut down, it just gives
wrong answers when you ask it questions. But you (the human) may not
know they are wrong answers, because you have come to depend on the
computer to give you right answers. Then you might be in trouble!
Fortunately, the only computers that will do this are old computers.
Unfortunately, these computers can control some interesting things,
like whether the electric company thinks you've paid your bill on time
or not. Fortunately, none of these computers controls really
dangerous things like whether nuclear missiles should be launched.
Dr. Grayce
NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators, sponsored and operated by Argonne National Laboratory's Educational Programs, Andrew Skipor, Ph.D., Head of Educational Programs.
FTC to Congress: Spyware purveyors need to do hard time
The Federal Trade Commission told Congress yesterday that it needs more …
The commissioners from the Federal Trade Commission (FTC) all trekked over to Congress yesterday for an appearance before the Senate Commerce Committee. During the hearing, the commissioners explained what their agency has been up to over the last year and asked for a bigger budget to continue fighting the good fight in the future. And the agency does good work: it continues to pursue spammers, spyware purveyors, and pretexters. Unfortunately, it doesn't pursue many of them.

Testimony from FTC Chairwoman Deborah Platt Majoras revealed that in the last two years, the Commission has taken action against 11 spyware operators. Think about that number for a moment, and then think about the sheer amount of spyware in the wild. Despite the plague of software that continues to annoy grandparents, uncles, parents, and the occasional geek, the FTC has gone after an average of 5.5 spyware operators a year. Fortunately, it has had some notable successes, most recently a $1.5 million fine against Direct Revenue that will hopefully strike a bit of fear into other US-based adware companies. But there's no denying that progress has been slow.

In questions after the testimony, Arkansas Democrat Mark Pryor told the commissioners that spyware was "a real source of frustration for my family, my constituents, my office." He also questioned whether the FTC's remedies were sufficient, given that the fines paid by companies often seem to be only a small portion of their total revenues. The FTC response illustrated how complicated it can be to pursue these companies. "It's hard to determine what the injury is to each consumer," said Commissioner Jon Leibowitz, who also pointed out how difficult it was to decide how much revenue a company earned from impermissible conduct, and how much from legal conduct.

Majoras also made a pitch for "civil penalty authority," which the FTC currently lacks. The Commission can currently squeeze cash out of businesses by forcing them to pay restitution, but this is often impractical because "consumers suffer injury that is either noneconomic in nature or difficult to quantify," she told senators. The FTC's other remedy is "disgorgement," which requires firms to write a check directly to the US Treasury. Disgorgement is different from a fine, though, because it can only be assessed where firms have profited from unlawful actions. If a firm violated every data protection statute on the books but made no money from doing so, it could not be forced to disgorge revenue, even if thousands of consumers suffered identity theft as a result of its actions. Civil penalty authority would give the agency much more latitude to lay down the smack on offenders. Bills that would give the FTC this authority have already been introduced into Congress but have yet to be passed.

Commissioner William Kovacic also told Congress that the FTC was now partnering with other US agencies and foreign governments in order to put spyware and malware authors in jail, rather than attempt to fine them. "Until we have success as a law enforcement community in placing them in prison," he said. "I don't think we'll ultimately have the deterrent influence we need." When it comes to spam, progress is similarly slow. The FTC has brought only 89 legal actions against spammers in the last decade, and only eight were filed in 2006. Meanwhile, spam is projected to surge past human-generated messages this year.
In the FTC's defense, these cases are difficult to prove and prosecute, and spam often originates beyond US borders (and other agencies, like the Department of Justice, handle the criminal cases). Even given all the caveats, though, eight cases do not inspire confidence that the FTC will be the agency that can slow the rising tide of junk e-mail.
Nate Anderson / Nate is the deputy editor at Ars Technica, where he oversees long-form feature content and writes about technology law and policy. He is the author of The Internet Police: How Crime Went Online, and the Cops Followed. | 计算机 |
Jim Clark and Marc Andreessen developed the idea for Mosaic Communications Corporation in early 1994. They founded the company in April, and have since built a team of more than 50 employees -- half of whom are engineers. The company, which is privately held, is based in Mountain View, California -- in the heart of Silicon Valley.
Jim Clark is chairman and chief executive officer of Mosaic Communications Corporation. Prior to founding the company, Clark was chairman of Silicon Graphics, Inc., a computer systems company he founded in 1982 that now has annual revenues of $1.5 billion and is among the Fortune 500's fastest growing companies. Prior to founding Silicon Graphics, Clark was an associate professor at Stanford University, where he and a team of graduate students developed the initial technology on which Silicon Graphics was built. Clark resigned as chairman of Silicon Graphics in February 1994 to undertake a new venture with the young programming team that created the widely-used Mosaic graphical user interface. Clark holds a Ph.D. in Computer Science from the University of Utah.
Marc Andreessen is vice president of technology for Mosaic Communications. Andreessen developed the idea for the Mosaic graphical user interface in the fall of 1992 while he was an undergraduate student at the University of Illinois and a staff member at the university's National Center for Supercomputing Applications in Champaign, Illinois. He created the friendly, easy-to-use navigational tool for the Internet with a team of students and staff at NCSA in early 1993.
In his role at Mosaic Communications, Marc sets and oversees the technical direction of the company. He received a Bachelor of Science degree in Computer Science from the University of Illinois in 1993.
[email protected]
Copyright © 1994 Mosaic Communications Corporation. | 计算机 |
2014-23/3419/en_head.json.gz/34065 | The Journey Down: Chapter Two - Screens, Info
A continuation of the game in which art, shapes and patterns are inspired by central African carvings, and characters are based on genuine masks from countries such as Tanzania, Kenya and Nigeria
Skygoblin
Platform: Mac, Windows, iPad, iPhone, iPod Touch, Other
Buy The Journey Down: Chapter One
The Journey Down: Chapter Two - Screens, Info - July 27, 2014
Chapter Two of The Journey Down, currently in beta phase, is scheduled to launch this summer for iPad, iPhone, iPod Touch, Mac, PC and Linux.
According to indie developer Skygoblin: TJD2 has taken us a long time to build. Why? Because we have truly challenged ourselves with it. We didn't want to just re-do what we did with the first game. We wanted to take the critique we got from the first game and battle it, head on. This has taken quite a while, but it has also made TJD2 a better game in every way possible.
Few specific details have been disclosed about Chapter Two, so stay tuned. Meanwhile, have a look at some screenshots. | 计算机 |
2014-23/3419/en_head.json.gz/35250 | Dungeon Siege II: Broken World (c) 2K Games
Windows XP, 1.8GHz Processor, 512MB RAM, 1.4GB HDD, ATI Radeon 7500 or Nvidia Geforce 5750+ Video Card, 4x CD-ROM
Monday, October 30th, 2006 at 12:26 PM
By: Phil Soletsky
Dungeon Siege II: Broken World review
I'm probably not the right person to be reviewing this, I think to myself as I have to spend over two hours just looking for my Dungeon Siege 2 disks so I can install the Broken World expansion pack over it. When was the last time I even looked at the disks? I can't remember. I can't even remember that much about Dungeon Siege the first, and I freaking reviewed that title. I do remember generally liking Dungeon Siege 1. I also remember that whenever a character died every single stinking item they were carrying fell on the ground so that they were resurrected naked, and if two characters died next to one another you could spend 10 minutes getting all their items sorted out. “Hello, who had the +2 cloak of indifference? Anyone? Anyone? Bueller?” Still, I was very excited when DS2 came out, ran right out and bought a copy, installed it, and played it for like one hour. Maybe it struck me as very similar with just newer graphics, or maybe I was simply making room for something else I wanted to play more (or making room for a title that I was reviewing). I did have sort of a small hard drive back then. Anyway, with new games always coming out and new titles to review, I stack up far more games than I ever get a chance to really play. I guess what I'm trying to say, in my own phenomenally wordy style, is that I don't know why I didn't play DS2 much, but I didn't, and that makes me a little clueless about the expansion pack.

For example, I don't know why the World is Broken. The manual has some story about assembling the parts of a magic shield to take on some guy with a magic sword (probably the plotline of DS2, I'm guessing), and that said shield meeting said sword was akin to crossing the streams in Ghostbusters. So the World is Broken, only it doesn't look terribly broken. I don't see many cities, perhaps it is only the cities that are broken, in which case it should have been entitled Broken Cities. I can feel myself spending an inordinate amount of time harping on the title, and that never bodes well for a game review - let me back up a little and start again.

BW requires you to begin with a high level character - level 39 minimum. There is a tool to bring your characters over from DS2 (I think they had to have beaten that game in order to import, but I don't know for sure). I had no such characters having not played DS2, and so had to go with one of the stock level 39 characters that the game offered to me - one of each character class available. These are roughly generic characters and they come with some collection of standard factory options; armor, weapons, sun roof, CD player, skills, potions, etc. In comparison to the characters I had in DS1, which by the end of the game strongly resembled shambling medieval garage sales, these are pretty stripped characters. But you know what? Picking up even a stripped level 39 character and just playing it is hard. I'm unfamiliar with the spells in the books; I don't really know the good equipment or understand the character's skills. Perhaps if I could find my DS2 manual (fat chance, and the BW manual is a little on the thin side) it would reduce some of my confusion, but even beyond that it's also tough to feel at all a part of a character that is so advanced, despite the fact that you are given some points to spread around to try and make the character more 'your own.'
It's kind of like buying a high-level EverQuest character on Ebay - you just don't care about them (inasmuch as you can care about a clump of pixels, but I'm sure gamers know what I mean). Right off, in the first town I'm in, I can recruit a number of characters to join my party, so now I'm trying to run four high-level characters that I'm not all that familiar with. For the most part I control only one character, the others following behind like puppies and set to perform default actions, like attacking or casting spells. Monsters attack in clumps using swarming attacks, so the strategy is to inch down the trails and retreat frequently trying to peel off just a few monsters at a time because 12-15 monsters attacking in a clump can kill my whole party with astonishing ease. When a character dies, and trust me several will with great frequency, they can be resurrected by anyone with the appropriate spell or scroll. They come back from the dead fully clothed. When the whole party dies, and that has happened to me a few times also, everyone gets resurrected (after a peculiar go-into-the-light starfield animation) at the last teleport gate. They come back naked, but with the contents of their backpacks intact. That's a little strange. Later when that character just gets near the body of their previous incarnation, all the items are drawn back onto their bodies - armor, rings, weapons, all of it. That's a little strange too, but it's far more convenient than playing 52-pieces-of-equipment-pickup of DS1.

For those who don't know about it, BW (and DS2) is a third person isometric RPG, heavy on the clickfest. It's boatloads like the recently reviewed Titan Quest with the exception that Titan Quest is a single character affair (except in multiplayer) whereas in DS2 you're responsible for a party of up to, I think, six characters. Titan Quest also had a brand spanking new graphics engine, whereas BW is working off an engine a little more than a year old. The old engine is no slouch - it looks pretty good and runs well even on my older machine - but if you just came off TQ as I did, BW looks a lot less snazzy. I suppose I should be fair and point out that DS2 came out before TQ, so really it is TQ that is a lot like DS2. The map in BW is almost completely linear, and pretty short (took me about 8 hours), and has you following someone who apparently tricked you into breaking the world. It's not a whole lot of a plot and it doesn't feel like much of an adventure at all. The variety of monsters that I ran across was likewise skimpy. There is a selection of side quests that you can take to expand the storytelling a little, but my primary impression of the expansion pack is "short."

Broken World is not a great expansion pack - it's puny, has a thin, meaningless plot, and generally feels like it was thrown together in a great rush. It does make me want to go back and play DS2 which, at least from the summary in the BW manual, feels more extensive and complete. BW could probably, in fact, serve as a sales pitch for DS2, provided you didn't already need DS2 to play it, which you do. I know that it has made me want to go back and play DS2. For those of you who were addicted to the DS2 game mechanics, and perhaps even for those of you who liked Titan Quest, there's some, but not much, more of the same here.
Written By: Phil Soletsky
2014-23/3419/en_head.json.gz/35317 | Close Resources
Welcome to Computer Resources
All network and computer users are expected to abide by the rules and policies outlined below. Violations of any of these policies can lead to suspension or revocation of computing/network privileges, or other disciplinary action.
For clarification or questions about any policies or procedures, please contact helpdesk or the Director of Computer Resources.
Academic Use
Respect for others and Shared Resources
Advance Notice / Rush Orders
Your business / Our business
When you are given access to GSD computer facilities and to the campus-wide network, you assume responsibility for their appropriate use. The school expects students and other community members to be careful, honest, responsible, and civil in the use of computers and networks. Those who use wide-area networks (such as the Internet) to communicate with individuals or to connect to computers at other institutions are expected to abide by the rules for the remote systems and networks, as well as those for Harvard's systems.
In addition to being a violation of College rules, certain computer misconduct is prohibited under Massachusetts General Laws, c.266 subsection 33 (a) and 12 (f) and is, therefore, subject to criminal penalties. Such misconduct includes:
knowingly gaining unauthorized access to a computer system or database
falsely obtaining electronic services or data without payment of required charges
destroying electronically processed, stored, or in-transit data
Users are expected to consult a Computer Resources Group staff member before any activity that would appear to threaten the security or performance of the school's computers and networks. Failure to do so may result in disciplinary action.
Your password should be considered valuable and private; it should not be loaned or shared.
Your GSD network account name and password give access to public computers and network resources such as printing, web access, and email, and may also be used to incur charges for color printing or the use of other resources.
There is never a valid reason for anyone to ask for or know your password; any member of the GSD community, including cross-registered students, visiting faculty and temporary staff, can get an account and password for free.
Students may be held responsible for misuse which occurs by allowing a third party access to their own computer or account.
A malicious individual with access to your password can send e-mail appearing to be from you, remove and modify files in your home directory, and incur charges that will be billed to you.
If you ever believe your password has been misused or misappropriated, contact Help Desk at once.
As a matter of policy, what you do on your own time on your own computer is your own business, and no records are kept centrally which can be used to recover those actions. (Other places or agencies, such as web sites or corporations, may keep records which can be traced to you.) The Computer Resources Group will never open any individual's email or otherwise invade the privacy of an individual computer, without involvement of senior staff (dean of students, executive dean, etc.) and reasonable notice if it is feasible.
In the case of staff or student assistants at the GSD, who use a school-owned computer to store files in their course of work and receive e-mail pertaining to their job, those files and e-mails may be opened or forwarded at the request of the staff supervisor or a responsible officer of the university. Staff are also bound by the provisions of the University policies on Information Security and privacy, as documented at harvie.harvard.edu/...Staff_Personnel_Manual/Section2/Privacy.

Information stored on a computer system or sent electronically over a network is the property of the individual who created it. Examination, collection, or dissemination of that information without authorization from the owner is a violation of the owner's rights to control his or her own property. Exception: Systems Administrators may gain access to users' data or programs when it is necessary to maintain or prevent damage to systems or to ensure compliance with other University rules.
Computer systems and networks provide mechanisms for the protection of private information from examination. These mechanisms are necessarily imperfect and any attempt to circumvent them or to gain unauthorized access to private information (including both stored computer files and messages transmitted over a network) will be treated as a violation of privacy and will be cause for disciplinary action. In general, information that the owner would reasonably regard as private must be treated as private by other users. Examples include the contents of electronic mail boxes, the private file storage areas of individual users, and information stored in other areas that are not public. That measures have not been taken to protect such information does not make it permissible for others to inspect it. On shared and networked computer systems certain information about users and their activities is visible to others. Users are cautioned that certain accounting and directory information (for example, user names and electronic mail addresses), certain records of file names and executed commands, and information stored in public areas, are not private. Nonetheless, such unsecured information about other users must not be manipulated in ways that they might reasonably find intrusive; for example, eavesdropping by computer and systematic monitoring of the behavior of others. Actions such as these are likely to be considered invasions of privacy and would be cause for disciplinary action.
The GSD computer network and information systems may contain confidential or personally-identifiable information. Because of this, all provisions of the Harvard Enterprise Information Security Policy (HEISP), as enumerated at http://www.security.harvard.edu/, apply to the entire GSD community -- staff, faculty and students. Any violations of the provisions of those policies may lead to disciplinary proceedings or removal from the network.
All staff should be aware of the provisions at the GSD for complying with the University's HEISP, as documented in the 'InfoSec' section of this manual.
You are the primary person responsible for your own success in computing at the GSD, (as elsewhere, and in other aspects of life)! Maintaining effective anti-virus protection, systematic back-ups of important work, and physical protection and security of your computer equipment are your responsibility. The CRG staff and resources may be allocated from time to time to help, but are never to blame and may not always be available.
Harvard neither sanctions nor censors individual expression of opinion on its systems. The same standards of behavior are expected in the use of electronic mail as in the use of telephones, and written, and oral communication. Therefore electronic mail, like telephone messages, must be neither obscene nor harassing. Similarly, messages must not misrepresent the identity of the sender and should not be sent as chain letters, or broadcast indiscriminately to large numbers of individuals. This prohibition includes unauthorized mass electronic mailings. For example, e-mail on a given topic that is sent to large numbers of recipients should in general be directed only to those who have indicated a willingness to receive such e-mail.
Only uses which are in support of your educational courses and research -- no commercial or 'non-GSD' activities -- are allowed. Personal or extra-curricular work may be allowed when it does not interfere with higher priority academic use.
Computer and network facilities are provided to students primarily for their educational use. These facilities have tangible value. Consequently, attempts to circumvent accounting systems or to use the computer accounts of others will be treated as forms of attempted theft.
Students may not attempt to damage or to degrade the performance of Harvard's computers and networks and should not disrupt the work of other users.
Students may not attempt to circumvent security systems or to exploit or probe any Harvard network or system for security holes, nor may students attempt any such activity against other systems accessed through Harvard's facilities. Running programs designed to breach system security is prohibited and unlawful.
If you download or distribute movies, music, software, or other copyrighted materials from the Internet without the owner's permission -- whether you do it from your desktop, dorm-room, or wirelessly, etc -- you may be breaking the law. You are culpable even if the source of the material is a Web site that appears to be offering a legal and inexpensive service. You bear the risk since the Harvard network is provided to support academic activity, not the transmission of illegally acquired movies or music. Penalties are severe; you may lose your network privileges, you may be disciplined by Harvard, and you may be criminally liable.
The University, the GSD, and you are legally bound by the terms of the Digital Millennium Copyright Act (DMCA), and all GSD community members must abide by the University's rules and policies, published at http://www.dmca.harvard.edu/.
All users of the GSD's network must respect the copyrights of works that are accessible through that network. Under federal copyright law, no copyrighted work may be copied, published, disseminated, displayed, performed, or played without permission of the copyright holder. The GSD may terminate the network access of users who are found to have repeatedly infringed the copyrights of others. Questions about copying or other use of copyrighted works may be referred to any academic officer.
Software: Most GSD-provided software is 'site-licensed' for concurrent use on the GSD network only, for academic/educational purposes only. You may not re-distribute the software outside the GSD, nor attempt to circumvent the license-protection mechanism (Keyserver, Flex-LM, others.) Use of 'pirate', or 'cracked' versions of software on the GSD network is strictly forbidden, and may result in penalties.
If you have any question about the ownership of material, printed, images, sound, or other, you should make an effort to determine the ownership. Consult with help in the Library and online if you are uncertain. Signed artwork and music by published artists should be presumed to be protected by copyright.
Given the school's requirement for each student to provide their own suitable laptop, the few public computers at the GSD are provided primarily as a teaching resource (in Room 516 Computer Lab), or attached to peripherals such as scanners (basement library clusters and elsewhere), and secondarily as a backup shared resource for individual users. Except for reservations by classes and other groups for use of the computer classroom, Room 516, no reservations are accepted for public computers, and it is an abuse for individuals to tie up one or more public computers when they are not physically present, such as with long-running rendering jobs; and it is officially acceptable for physically-present students, wishing to use such a public computer, to restart the computer, interrupting the long-running job, in order to use the computer in question. Note that several alternative solutions including the GSD's Render Farm are available for less-intrusive approaches to rendering and other long-running jobs; ask at helpdesk if you have questions.
Leave it cleaner than you found it.
Wear earphones.
Use the sign-up sheet.
Return it after use.
Don't mess with the configurations.
Plan ahead.
Be a good citizen.
We provide consultation, technical support and troubleshooting with respect to computing, IT and presentation technology, but not 'on-call personnel'.
We expect advance notice, knowledge of posted procedures and technologies (from the on-line manual at http://www.gsd.harvard.edu/manual), and in-kind staffing support (warm bodies.) Events outside of normal 9-5 weekday hours need extra advance notice, and can't always be provided for with normal systems -- extra provisions, including hired help, may be required.
Presentation Services include helping to specify and prepare presentation equipment (above or beyond 'Powerpoint' projection or simple video playing, both of which are provided in most GSD spaces as part of normal operations) in Piper Auditorium or other Gund Hall/Sumner Rd. rooms. Events outside of these locations require extra notice and preparation and may not be supported.
Laptop computers, digital projectors, digital cameras and audio/video recording equipment are available for check-out from the Loeb Library circulation desk. Some additional computing or presentation equipment may be available from CRG or PS for extended loan. Other equipment may have to be rented from outside suppliers; CRG or PS may be able to recommend vendors or coordinate rentals and installation.
For school-wide events including public lectures, event staffing is provided through Presentation Services. For all other events, the event-sponsoring units, such as courses, departments, individuals, etc. are responsible for providing bodies to sign up, collect, transport, operate and return borrowed equipment. Help is available to train those bodies in advance in basic procedures. CRG staff and PS staff may be available by prior reservation, and may be able to help locate and retain paid outside help.
Clarifications to common misconceptions:
All requests for assistance should be made directly to Helpdesk (6-3810, or email [email protected]) or to Presentation Services (6-0335) and should not be considered confirmed until you receive confirmation.
Other than for public lectures, there are no GSD 'projectionists' available from CRG or PS. Courses and events are expected to provide their own staffing for projection.
Evening and weekend events require advance notice and preparation, and may require hiring (and paying for) outside help.
Video-taping of events requires extra advance notice and may require hiring outside help at the customer's expense, especially for events outside of normal weekday hours.
Video-conference equipment may be available, but is sufficiently exotic that it may require dedicated personnel for operation, and may be presently unavailable except by advance arrangement.
Computer Projectors, Plasma Screens and associated computers are available by sign-up reservation at Helpdesk. Users are expected to sign up for, collect, operate and return (in a timely fashion) the projectors and plasma screens and to familiarize themselves with their operation prior to 'live' use.
No software or other functionality on borrowed computers should be assumed without testing; all equipment and peripherals should be verified for proper installation and operation prior to 'live' use.
Special requests made after 4 PM may not be accommodated until the following day.
All special requests and 'extra notice' need to be cleared with the Director of Computer Resources (5-2682 or email: [email protected]).
An extra charge is assessed for rush orders, of which only a limited number may be requested. Don't abuse the privilege.
"We don't do data."
Computer Resources staff know a lot about the workings of computing machines and networks; but not necessarily about how you are going to use them. We may be able to learn, and help you figure out, but we are not usually able to help with your own 'business practices', or getting or interpreting the data for the particular problem you need to solve. (An exception is for Geographic Data, for use in a GIS; the GSD GIS Specialist may be able to help in this case!) In some cases, you may need to read the manual for the software you are trying to use, or seek training or consultation somewhere outside the GSD.
(some CRG policies are modeled after the following sources:)
Graduate School of Arts & Sciences Rules & Regulations
Faculty of Arts & Sciences Computer-User Rules and Policies
Harvard Enterprise Information Security Policy (HEISP) at http://www.security.harvard.edu/
Provost's Policies on Copyright at http://www.dmca.harvard.edu/
2014-23/3419/en_head.json.gz/35954 | Adobe readying Apple lawsuit?
By Duncan Geere 14 April 2010
Sources are suggesting that the ongoing degradation of the relationship between Apple and Adobe, once the best of friends, is about to get a whole lot uglier, as the latter prepares a lawsuit against the former.
Following Apple's move to change its iPhone SDK to ban the use of cross-platform compilers, which essentially banned Flash but also targeted Silverlight, C# and .NET, it seems that the relationship has now soured to such an extent that legal action is being planned.
For now, all Adobe will say is: "We are aware of the new SDK language and are looking into it. We continue to develop our Packager for iPhone OS technology, which we plan to debut in Flash CS5", but if you work in the legal department at Apple then don't be surprised if a big fat envelope arrives in the next couple of weeks.
We'll bring you more on the precise accusations being levelled as soon as we get it.
Tags: Software, Apple, Adobe, Lawsuits, Flash
2014-23/3419/en_head.json.gz/36139 | Microsoft`s Push for the AdWords Market
Posted on April 3, 2006 by Jennifer Sullivan Cassidy
We’ve known it for a while: Microsoft’s agreement with Overture expires in June, 2006. This paves the way for MSN to take a large slice of the pay per click pie with its own advertising program: adCenter.With two of the three search engine giants, Google and Yahoo, already having their massive advertising arenas, it only makes sense for Microsoft to enter the race. What took them so long to start? Were they just dragging their feet, or were they simply being patient and waiting for the opportunity to make a huge splash?
Part of the reason it took so long for MSN to jump on the ad bandwagon is the agreement Microsoft has with Overture, the first successful pay per click ad program, which is owned by Yahoo! Search Marketing. Those who know the Redmond, Washington-based company understand that this could not make MSN happy for too long; however, MSN chose to utilize some form of advertising while it spent its energy on revamping its search engine. MSN contracts with Yahoo until the end of June, and will be replacing the contracted program with their own, which has a unique approach that will give both Yahoo and Google a run for their money. The $15 billion U.S. Internet advertising market is a compelling reason to enter the game, even if MSN comes in last place.
But MSN wants to do more than simply serve ads; they want to be better than what’s already in place. MSN’s adCenter is unique for its use of customer profiling, taking advantage of the data MSN gathers from its more than 9 million subscribers. MSN AdCenter, which debuted in Singapore at the end of September 2005, allows advertisers to launch highly targeted online keyword search-based campaigns, with the ability to include or exclude target customers based on geographic location, gender and age and to run ads only during certain times and days. For MSN, advertising is not just about exposure, it’s about exposure to the right audience.
“With the competing products you buy a word. On ours you go into detailed level and see who is searching for words,” said Eric Hadley, senior director of advertising and marketing for MSN. “You can plan an (ad) buy based on the people and say, ‘I’m willing to pay this much for this demographic, and I don’t want these people in the mix.’”
The adCenter pilot program initially was scheduled to begin in the United States on March 16, 2006, but MSN’s adCenter Beta was active well before that date. Testing the waters seems like a good idea for Microsoft, who invested millions to launch the program in the United States. In the pilot program, you can even import ads from Yahoo and Google’s programs to make the switch-over more attractive to those who are already advertising with MSN’s competitors, allowing you to avoid manual upload of each of your keywords or ad campaigns.
One of the major complaints advertisers have with pay-per-click programs is the inability to choose to whom they are advertising, resulting in unqualified traffic. This will make all the difference in being able to make the sale or not. MSN's adCenter information page states, "For instance, if you sell running shoes, you want an audience who is interested in running shoes to see your ad and take action, which may result in the audience clicking on your search ad, visiting your web site, and buying a pair of running shoes. Conversely, you would not want someone who is interested in buying horseshoes to see your ad and take a similar course of action because that would likely not result in a sale, but the click on your ad by this person would still cost you."
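To make the include/exclude targeting described above concrete, here is a minimal sketch (in Python) of the kind of eligibility check an advertiser-side tool might run before an impression is counted. It is purely illustrative: the campaign fields, names and values below are assumptions invented for the example, not MSN's actual adCenter data model or API.

    from datetime import datetime

    # Hypothetical campaign: who the advertiser is willing to pay for, and when the ad runs.
    # Every field name and value here is an assumption for illustration, not adCenter's schema.
    campaign = {
        "keyword": "running shoes",
        "include": {"genders": {"female", "male"}, "age_range": (18, 45),
                    "locations": {"Seattle", "Portland"}},
        "exclude_locations": {"Anchorage"},
        "active_hours": range(7, 22),        # show the ad between 07:00 and 21:59 only
        "active_weekdays": {0, 1, 2, 3, 4},  # Monday through Friday
    }

    def ad_is_eligible(searcher: dict, query: str, when: datetime) -> bool:
        """Return True only if this searcher, query and time all match the campaign."""
        if query != campaign["keyword"]:
            return False                      # the horseshoe shopper never triggers the ad
        if searcher["location"] in campaign["exclude_locations"]:
            return False
        low, high = campaign["include"]["age_range"]
        if not (low <= searcher["age"] <= high):
            return False
        if searcher["gender"] not in campaign["include"]["genders"]:
            return False
        if searcher["location"] not in campaign["include"]["locations"]:
            return False
        return when.hour in campaign["active_hours"] and when.weekday() in campaign["active_weekdays"]

    # A 30-year-old runner in Seattle searching at noon on a Tuesday sees the ad; the same
    # search at 3 a.m., or from an excluded location, never costs the advertiser a click.
    print(ad_is_eligible({"age": 30, "gender": "female", "location": "Seattle"},
                         "running shoes", datetime(2006, 4, 4, 12, 0)))   # True

The point of the toy example is simply that every failed check is a click the advertiser never has to buy, which is what the "exposure to the right audience" pitch amounts to.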
Currently, registration for the adCenter pilot program is by invitation only. You will apply for the program, and if you meet with MSN's criteria, you will be extended an invitation. I finally received mine, but it took nearly 4 weeks to receive. Further, with the invitation, I received a similar sign-up offer, but for the MSN pilot QuickLaunch Marketing Analyst, which I can only assume is like Google Analytics. I say "assume" because I am still on the waiting list for signing up for Google Analytics. I can't report on personal experiences with adCenter Beta just yet, however a few clients who signed up for the adCenter pilot program have already seen great returns.
A representative from AskJeeves finds the concept of MSN's customer profiling slightly disturbing because they feel this violates users' privacy, and so do many others. But MSN claims that the information they get from registered users is not personally identifiable, and cannot be traced back to any one particular individual. I would imagine that this would be no different than the information gathered by Google with its toolbar, which technically is a version of spyware.
However, some people who believe this is just another form of spyware are not happy with the idea of customer profiling at all. And while MSN doesn’t intend for the information to be used in any way to harm individuals, there are plenty of folks out there that don’t have the same ethics. People have trouble differentiating between helpful and harmful spyware, and would just rather not deal with it at all.
For advertisers, however, the idea is worth gold. Advertisers are truly tired of spending money on completely unqualified clicks on their ads, and being able to reach their target audience is, after all, what successful marketing is all about. And, if advertisers are happy, then publishers are happy.
MSN Search rolled out a new user interface in February, a Search and Win promotion in March, which offers prizes to those who use the MSN search engine, and plans to revamp its entire search presence on the web within the next few weeks. Some critics of the MSN pilot program feel that Microsoft is a day late and a dollar short; Microsoft disagrees and simply states that it was waiting patiently to see where modern search would lead before jumping in with both feet. Joanne Bradford, top salesperson at Microsoft says, “I thank Yahoo and Google for proving that a software company can be a media company and a media company can be a software company.”
MSN certainly has their work cut out for them, however. In 2005, Google took in almost $6 billion in revenues from their Internet advertising, four times that of MSN. Yahoo collected $4.6 billion, more than three times that of Microsoft at $1.4 billion. Analysts say that they can easily expect MSN’s revenue to double within three to five years; however, experts also say that MSN’s revenue cannot keep climbing unless they close the gap in the irrelevant and what many consider “spammy” results from the search engine. And no gimmick in the world will make people overlook the poor search engine results, no matter how much they are offered.
Even with a successful launch of MSN’s adCenter, advertisers will only reach a fraction of surfers compared to Google’s and Yahoo’s. According to the Nielsen Net Ratings, Google finished 2005 with a whopping 48% of all searchers using it as their search engine, with Yahoo coming in second at 22%, which is a considerable decrease since August, 2005. MSN barely came in third place with only 11% of the searchers to boast about. Clearly, MSN search has been on the steady decline, and some analysts feel that they were asleep at the wheel for perhaps too long.
MSN recently dumped the Inktomi search engine in favor of its own algorithmic search engine, RankNet. RankNet is a “learning” engine, collecting information as searchers use the MSN search to better its results every time a search is made, then contouring future results based on the data. However, many people still seem unhappy with the search results provided by MSN search, indicating a lack of ability to weed out spam and duplicate content, while providing less relevant results than before.
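For readers curious what a "learning" ranking engine means in practice: the RankNet research Microsoft has published trains a scoring model on pairs of results, nudging the model so that the result searchers preferred ends up scoring higher than the one they passed over. The sketch below (Python) is a bare-bones illustration of that pairwise idea with a tiny linear scorer; it is not MSN's production system, and the features and click data are invented for the example.

    import math

    def score(weights, features):
        """Linear relevance score for one result; real systems use far richer models."""
        return sum(w * f for w, f in zip(weights, features))

    def train_pairwise(pairs, n_features, lr=0.1, epochs=200):
        """Each pair is (features of the preferred result, features of the other): learn to rank the first higher."""
        w = [0.0] * n_features
        for _ in range(epochs):
            for better, worse in pairs:
                diff = score(w, better) - score(w, worse)
                p = 1.0 / (1.0 + math.exp(-diff))   # model's probability that 'better' outranks 'worse'
                grad = p - 1.0                      # gradient of the pairwise log loss
                for i in range(n_features):
                    w[i] -= lr * grad * (better[i] - worse[i])
        return w

    # Invented click data: (clicked result, skipped result); features = [title match, body match, freshness]
    pairs = [([1.0, 0.8, 0.3], [0.2, 0.9, 0.1]),
             ([0.9, 0.4, 0.7], [0.3, 0.5, 0.2])]
    print(train_pairwise(pairs, n_features=3))  # learned weights now rank the clicked results higher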
Still, MSN’s marketing techniques are impressive, and Forrester Research projections place MSN’s Internet advertising revenues at $26 billion by the year 2009. Other experts strongly disagree, especially in light of the current failures by the search engine to provide relevant results. Microsoft has not unveiled its specific plans to revamp the search engine, whether they are algorithmic, filters, or other changes; they have only indicated that a major change is in the works.
One of the driving forces behind the new adCenter is Microsoft’s Windows Live. Executives at Microsoft feel that Windows Live is their ace in the hole, and where adCenter is concerned they believe that the unified Windows Live services will allow Microsoft to get a deeper understanding of the people using its online services.
Ad placement using MSN's adCenter keyword bids depends on a type of quality score like Google's, taking into account cost per click, click through rates, and relevancy of landing pages. Where MSN differs from Google with respect to landing pages is that Google allows non-relevant keywords to prompt ads depending solely upon bid price, whereas with MSN's ads, if the landing page is devoid of the keyword you've bid on, your ad won't be shown.
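As a rough, hypothetical illustration of the difference just described, the toy ranking pass below (Python) scores ads with a simplified bid-times-click-through-rate stand-in for a quality score, then adds the landing-page keyword gate attributed to adCenter. Neither engine publishes its exact ranking math, so the formula, numbers and field names are assumptions made up for the example.

    # Toy data: three hypothetical advertisers bidding on the same keyword.
    ads = [
        {"advertiser": "A", "bid": 1.20, "ctr": 0.04, "landing_page": "buy discount running shoes online"},
        {"advertiser": "B", "bid": 2.50, "ctr": 0.01, "landing_page": "great deals on horseshoes"},
        {"advertiser": "C", "bid": 0.90, "ctr": 0.06, "landing_page": "running shoes for marathon training"},
    ]

    def rank_ads(query, ads, require_keyword_on_page):
        """Rank ads by a crude bid x click-through-rate score, optionally gating on landing-page relevance."""
        ranked = []
        for ad in ads:
            # The adCenter-style gate: drop the ad if the landing page never mentions the keyword.
            if require_keyword_on_page and query not in ad["landing_page"]:
                continue
            score = ad["bid"] * ad["ctr"]
            ranked.append((round(score, 4), ad["advertiser"]))
        return sorted(ranked, reverse=True)

    query = "running shoes"
    print(rank_ads(query, ads, require_keyword_on_page=False))  # the horseshoe ad still competes in the auction
    print(rank_ads(query, ads, require_keyword_on_page=True))   # it is excluded outright, whatever it bids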
Currently, MSN is only serving about a fourth of its ads through its adCenter, while the other 75% are served from Overture or other sources. The program is still only a pilot program, so statistics are hard to gather during this time. However, advertisers have reported seeing a higher rate of return on their ads at a much lower cost. It will be interesting to see where the revenues will increase once the program is in full force by July. Yahoo says it has already made its adjustments in the financial shift it anticipates from losing the profits from the MSN ad revenues.
MSN fully expects to take a large piece of the pie in July when the adCenter fully rolls out, especially backed by its aggressive marketing strategies. While some experts feel that the pie isn’t necessarily going to get bigger with the launch, they also anticipate that it will only divide Google’s and Yahoo’s pieces. Yahoo’s piece of the pie will certainly get smaller, since many of the ads served currently through adCenter are Overture driven, and once Overture’s agreement with MSN ends in June, MSN is free to keep the whole piece of the pie instead of splitting the proceeds with Yahoo. MSN also will strive to expand the big pie of Internet searchers into a bigger one, and some analysts support this ideology; especially in light of the expanding online searches which are being integrated into virtually everything: toolbars, PDA’s, wireless phones, and desktop utilities to name a few.
It is still too soon to predict where adCenter will end up once the full program is launched, and analysts keep their opposite points of view intact. I have a “wait and see” policy, and for now, will refrain from making predictions regarding the world of search. In a way, search engine technology is like the shifting sands on a beach, because the Internet seems to be ever changing and fluid. Ultimately, in the end, searchers want relevant results, whether in an ad format, or on a search results page. With all of the changes in the works from Microsoft, MSN search certainly hopes to deliver soon.
For information about Jennifer Sullivan Cassidy's professional search engine optimization services, please visit her site at First Class SEO.
This entry was posted in Search Engine News and tagged Google Analytics, MSN, United States, Windows Live.
2014-23/3419/en_head.json.gz/36315 | Time for Europe to stamp out software pirates, BSA says
Flog 'em. And if that don't work, make 'em walk the plank
The Business Software Alliance (BSA) today called on Tony Blair to get tougher with software pirates. The group sent letters to the European Commission, the European Parliament and heads of member states – including our very own prime minister – to spur them into action to combat counterfeiting. The letters asked for a five-point plan to be put in place at EU level, via the BSA's Green Paper on Combating Counterfeiting and Piracy in the Single Market. The BSA said tougher penalties needed to be adopted. "To combat piracy effectively, it simply must be made more risky, embarrassing and costly to pirate than to obey the law," it stated. It also asked for the creation of an EU agency to handle piracy and a crack down on factories where CDs were copied – including the implementation of plant licensing and mandatory identification (SID) codes. Would-be candidates to the EU should be vetted for piracy issues, it said. And the state sector should set an example to the private sector and establish policies to stop the use of illegal computer software within its institutions. The BSA said it was waiting for a response. Well, at least it should keep spin doctor Alistair Campbell busy for a day or two.® | 计算机 |
2014-23/3419/en_head.json.gz/36725 | The 10 best browser-based MMORPGs out there
by Thomas Touche, MMO Attack, Posted Feb 25th 2013 3:30PM It's popular for people to play down on many no download MMORPGs. But the fact of the matter is, with the technological advancement of today's computers and browsers, we are able to play some pretty awesome MMORPG games right in our browser. That's right, you don't have to download anything to play an awesome, advanced MMORPG game. Take it from us, some of the hottest games today are no download MMO games and that's what this list here will show you.
The games on this best no download MMORPG list were graded based on many elements that we find in 'normal' games. So sit back and play one of the games right in your browser! And remember kids, Internet Explorer is not a real browser. Drop that thing now and get Chrome or Firefox!
10. Marvel: Avengers Alliance
Marvel: Avengers Alliance is a turn-based social network game developed by Playdom in 2012. It is based on characters and storylines published by Marvel Comics, and written by Alex Irvine. The game is available as an Adobe Flash application via the social-networking website Facebook, and via Playdom's official website. It officially launched on Facebook on March 1, 2012. It was initially released as promotion for the 2012 Marvel Studios crossover film The Avengers. It was nominated for Best Social Game at the Video Game Awards 2012.
Play Marvel: Avengers Alliance Now
9. Wartune
Wartune is a 2D browser-based RPG that puts you in the shoes of a mighty hero bent on protecting, and caring for their city. Among its most notable features, you'll find dungeons, city building, crafting/farming, and PvP in the form of competition with other surrounding player cities. Whether you're looking for RPG or strategy, you'll find it in the world of Wartune.
Play Wartune Now
8. The Settlers Online
In The Settlers Online (known also as Castle Empire), you are the king of a fledgling empire, seeking to become mighty and great. Develop your land to produce the raw materials needed for construction, exploration, and combat. Join with your friends to create a thriving land, and go on exciting adventures together. Repel invasions with your forces, and grow a kingdom into an empire.
Play The Settlers Online Now
7. OverKings
Overkings is a free-to-play MMORPG that takes you to a world full of danger and mystery, one that has been destroyed by a great event. You must create a hero and venture across the lands, laying waste to the evil that has arisen out of this terrible event.
2014-23/3419/en_head.json.gz/38894 | / Office of the Executive Vice President
/ Finance & Information Technology
/ Global Technology
/ Tom Delaney
Vice President for Global Technology & Chief Global Technology Officer
[email protected]
Thomas Delaney is Vice President of Global Technology and Chief Global Technology Officer at New York University. Tom is responsible for global technology strategy across NYU’s functional areas and coordinates the university’s IT operations across the international sites. Tom brings a background in both academic and corporate IT to the role. He previously served as Associate Dean of Technology and Chief Information Officer at the New York University School of Law. Under his leadership, NYU Law was named the top technology law school in the United States by preLaw magazine. Prior to working in academia, he founded an international software company in the document assembly space, consulted globally on engineering document management, and served as Chief Information Officer for a consortium of companies in the retail and distribution sectors.
Tom serves on a number of international corporate advisory councils and is a frequent speaker on innovation, technology and global higher education. He holds several U.S. patents for telecommunications products, and was named a Premier IT 100 leader by Computerworld. Tom has an MS and a BS in electrical engineering from Rensselaer Polytechnic Institute and is a New York State Licensed Professional Engineer.
Thomas A. Delaney
Ben Maddox
Deputy Chief Global Technology Officer & Associate VP
[email protected]
Heather Stewart
Associate Vice President for Global Technology
[email protected]
Associate Vice President in Global Technology Services
[email protected]
Director, IT Program Management Office
[email protected]
Finance & Information Technology
Martin S. Dorph
Global Technology | 计算机 |
2014-23/3419/en_head.json.gz/39279 | Welcome to www.Takamine.com. Any person accessing this World Wide Web Site (the "Web Site") agrees to the following:
All textual, graphical and other content appearing on this Web Site (www.Takamine.com) is the property of KMC MUSIC, INC., or its affiliates (collectively, "KMC") or its licensors. All right, title and interest in and to the site and its materials, including but not limited to, all patent rights, copyrights, trade secrets, trademarks, site marks and other inherent proprietary rights, are retained by KMC or its licensors. Except as expressly authorized by KMC herein, you agree not to make, copy, display, modify, rent, lease, license, loan, sell, distribute or create derivative works of this Web Site or its materials in whole or in part. Any modification of the Web Site or its materials for any purpose is in violation of these terms.
You may view, copy, print and use content contained on this Web Site (including recorded material) solely for your own personal use and provided that: (1) the content available from this Web Site is used for informational and non-commercial purposes only; (2) no text, graphics or other content available from this Web Site is modified or framed in any way; and (3) no graphics available from this Web Site are used, copied or distributed separate from accompanying text. The use of any such content for commercial purposes is expressly prohibited. Nothing contained herein shall be construed as conferring by implication, estoppel or otherwise any license or other grant of right to use any patent, copyright, trademark, service mark or other intellectual property of KMC or any third party, except as expressly provided herein.
Reference to any product, recording, event, process, publication, service, or offering of any third party by artist name, trade name, trademark, company name or otherwise does not necessarily constitute or imply the endorsement or recommendation of such by KMC. Any views expressed by third parties on this Web Site (including recorded interviews) are solely the views of such third party and KMC assumes no responsibility for the accuracy or veracity of any statement made by such third party.
KMC Music Trademarks: ADAMAS and the unique bridge, headstock, fingerboard inlay, and soundboard configuration designs of ADAMAS guitars, GENZ BENZ ENCLOSURES, GIBRALTAR, HAMER and the unique headstock design of HAMER guitars, KMCONLINE, LATIN PERCUSSION LP, LP and the 1 and 2 circle designs, MBT, MUSICORP, OVATION and the unique bridge and bowl-shaped designs of OVATION guitars, ROUNDBACK, TAKAMINE, TOCA, are a few of the trademarks and service marks of KMC that may appear in this Web Site, many of which are registered in the United States and other countries. This is not a comprehensive list of all trademarks of KMC. The KMC trademarks may not be displayed or otherwise used in any manner without the prior written consent of KMC. All other names and marks mentioned in this Web Site are the trade names, trademarks or service marks of their respective owners.
LINKS: THIS WEB SITE MAY CONTAIN LINKS TO OR BE ACCESSED THROUGH LINKS ON WORLD WIDE WEB SITES OF KMC DEALERS OR DISTRIBUTORS. KMC DEALERS AND DISTRIBUTORS ARE INDEPENDENT CONTRACTORS AND ARE NOT AGENTS OF KMC. KMC DOES NOT HAVE RESPONSIBILITY FOR THE CONTENT, AVAILABILITY, OPERATION OR PERFORMANCE OF WEB SITES OF KMC DEALERS OR DISTRIBUTORS, OR ANY OTHER SITES, TO WHICH THIS WEB SITE MAY BE LINKED OR FROM WHICH THIS WEB SITE MAY BE ACCESSED. YOUR USE OF SUCH SITES OR RESOURCES SHALL BE SUBJECT TO THE TERMS AND CONDITIONS SET FORTH BY THEM.
Networked KMC Music Sites: This Web Site also serves as an entry into several Web Sites operated by KMC subsidiaries and operating divisions. Please note that these sites may adopt terms of use particular to the subsidiary or operating division. While these terms of use apply to this KMC Web Site as a whole, if a KMC subsidiary or division has terms in addition to those described here, then those terms will also apply. In addition, sales made in connection with a KMC subsidiary or division site that offers products or services for sale will be subject to that subsidiary's or division's terms of sale as a condition of completion of the transaction. Those terms of sale will either be posted on the subsidiary or division Web Site or described in a separate agreement. You | 计算机 |
2014-23/3419/en_head.json.gz/39566 | Is it that big ?? About 2 hours, 1 minute ago
By Rich Edmonds, Sunday, Feb 5, 2012 at 8:53 pm EST

This week we've been joined by keyboardP, the developer behind Air Pick Voice ("epic voice"), who agreed to a Windows Phone developer interview. Should you be interested to learn more about his project and experience developing for Microsoft's mobile platform, head on past the break for the full interview.
Tell us about yourselves and how you got into software development.
I’m a self-taught developer and have been programming since the age of ten. I would stay after school in the computer rooms so that I could mess around with QBasic, which was the IDE the school had at the time. I started off writing simple text adventure games in QBasic before moving on to other languages such as Java, C++ and C#. I pursued my interest in computers and read Computer Science and Business at university and interned at Microsoft, which was a great experience.
Games were the main reason I got into programming, as every time I played games, half of my concentration was on playing and the other half was on trying to figure out how developers created certain aspects. With the evolution of games and technology, that curiosity still lingers within my mind as I'm playing any game. It's also part of my inspiration to create applications and games that cause other people to ask "how was that created?".
What do you think of Microsoft's platform (from a user perspective) and how do you compare it to competitors?
Like a lot of things in life, different platforms are suited to different users. The openness of Android, for example, is suited to a certain demographic, but not necessarily the best choice for all consumers who actually have an Android. Likewise, those who want the ability to hack their phone however they like may not be as happy with their iOS or Windows Phone device (in their current states) as they would be with an Android device. In my opinion, having had an Android device and a Windows Phone device, I’m very happy with the Window Phone from a user perspective. Everything just works out the box and there are no custom ROMs required if you want to speed up your device.
You’re also guaranteed updates regardless of your carrier which, from a user perspective, is something I’d like to have as standard as opposed to manually downloading and installing custom ROMs on to my device. It seems that Microsoft are taking the middle ground between iPhone and Android devices. The former has a limited number of devices with a consistent interface whereas the latter has a wide range of devices with the OEMs being able to customise it to their requirements. Windows Phone is the middle ground where there are a range of devices, so users have hardware options, but the software experience remains consistent. Despite the smoothness of the devices, the hardware specs are what consumers compare and, in my opinion, this is where Windows Phone has to improve if they want to start battling on the marketing front.
What's the number one feature you love the most in Mango, and what are you looking forward to in the next update?
Mango adds a lot to the Windows Phone platform, so it's difficult to choose just one feature. However, I do like the 'Local Scout' feature which immediately shows nearby restaurants, bars, and things to do. I've used it more times than I thought I actually would, so it was something I underestimated in terms of usefulness to me. As more of a concept than a single solid feature, I'm looking forward to the integration between Windows 8, Windows Phone and Xbox. It simply opens up a huge range of possibilities and, as a developer, my mind is constantly coming up with ideas of what this combination can produce.
What path(s) led you to develop for Windows Phone?
Being a C# developer, it was certainly an advantage to not have to learn a new development environment or language to get started. I was familiar with Visual Studio and since Windows Phone development uses Visual Studio, there was a very low barrier to entry in terms of skill set. However, I believe it’s important to be a ‘programming language polyglot’ as a developer and having to learn a new language shouldn’t be the only reason to not work on other platforms. When I first saw Windows Phone announced at a developer conference, I immediately felt that it had a huge potential.
With Microsoft falling behind in the mobile space, I felt that they would put a lot of resources into this project to try and become a competing force against the iPhone and Android platforms. I feel that there is a popular belief that the iPhone market and the Android market are a surefire way to make millions, simply because of the success stories you hear. The reality is, a vast majority of apps don't make it and you need something unique and a bit of luck to be able to make it in such a crowded marketplace. As a one-man team, I felt that a great place to make a name would be in an emerging piece of technology which has huge potential.
What's your take on the Windows Phone development process?
I absolutely love it. The fact that an existing skill set can be utilised from the offset is something that is often underrated in terms of importance. However, more than that, Visual Studio is a great IDE and Expression Blend is a brilliant UX development tool. I’ve been developing for Windows Phone from the beta of the original SDK and it’s great to see it constantly improving. I think what helps a lot are the official samples you can download for pretty much any major feature. Having the samples really helps explain how to use the various APIs and tools.
Have you developed for other platforms and if so how does the development process compare?
Windows Phone development experience is second to none. I’ve developed for iOS, Android, and Windows Phone and I can say without a shadow of a doubt, the development experience on Windows Phone is by far the easiest and smoothest. The fact that a rough prototype of Angry Birds can be created within a few hours, with barely a line of code, speaks volumes in my opinion. As a one-man team, I feel that prototyping should be a fast process and with the combination of Visual Studio and Expression Blend, the entire process is very efficient and effective.
It’s easy to forget that Microsoft puts a lot of store in developers and has done since its inception. Over the years, they’ve had the opportunity to listen to developer feedback and improve the development experience. The current state of Visual Studio is not something that sprung up overnight and comparing it with my experience of the Eclipse IDE and XCode, you can really see the difference. Even if I’m developing an iOS app, I’d prototype it using the Windows Phone kit simply because I find it more efficient to do that and then port it, rather than prototype with XCode and Objective-C.
The only area which Windows Phone falls behind on, in my experience, is with its API. iOS has had the time to expand the API and Android’s API is great for being able to do pretty much anything. Windows Phone has to catch up on providing more APIs to developers as it’s currently more limited than the other two platforms.
Air Pick Voice is an 'epic' concept, how did the idea come to be and what issues did you run into throughout development?
Thanks! My apps tend to be born out of personal experience. I feel that if I tackle a problem that I personally face, I have a better chance of solving it in the best possible way. I also have the tendency to solve problems in a unique way which doesn’t conform to expected solutions. I feel this not only spawns further ideas, but can often result in a better solution. It also makes things more challenging, which is something I embrace as a developer.
The first time the idea for APV came about was when I was developing on a machine that didn’t have my music. Whenever I wanted to listen to a particular song, I’d have to flick to the other machine and select the song, which often broke my concentration. Additionally, I listen to music when I get ready in the morning and so when I was making breakfast, the randomised playlist sometimes played a song I didn’t want to listen to. At that point I knew there must be a better way to solve this than having to go back to my machine or scroll through thousands of songs on my phone.
Speech recognition wasn’t something I had played around with before, so the whole development process has been a learning experience. There are things I would and would not do if I were to recreate this project from scratch. Besides a few hiccups, I think the biggest issues were speed and memory. There is only so much control I have over the boot-up time, and it was taking around seven seconds for 3500 songs. Using some tricks and relatively complex plumbing, I managed to cut the boot-up time to three seconds on my machine. I also managed to shave off over sixty percent in memory usage. I put a lot of effort into these two areas because this is a service I expect users to be running for an extended period of time, hence the memory concern, and because users may have quite a large music collection, hence the focus on speed.
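The interview doesn't reveal how APV wires up its recognition internally, so purely as an illustrative sketch of the general approach on the desktop .NET stack: constraining the recognizer to a closed grammar built from the library's song titles is the usual way to make "pick a song by voice" reliable. The class name, the "play" trigger word and the playback hook below are hypothetical, not APV's actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Speech.Recognition; // desktop .NET speech API (System.Speech.dll)

class SongPicker
{
    private readonly SpeechRecognitionEngine _recognizer = new SpeechRecognitionEngine();

    public void Start(IEnumerable<string> songTitles)
    {
        // Constrain recognition to the user's library: a closed grammar of
        // "play <title>" phrases is far more reliable than free dictation.
        var titles = new Choices();
        foreach (var title in songTitles)
        {
            titles.Add(title);
        }

        var phrase = new GrammarBuilder("play");
        phrase.Append(titles);

        _recognizer.LoadGrammar(new Grammar(phrase));
        _recognizer.SetInputToDefaultAudioDevice();
        _recognizer.SpeechRecognized += OnSpeechRecognized;
        _recognizer.RecognizeAsync(RecognizeMode.Multiple);
    }

    private void OnSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        // e.Result.Text is the full phrase, e.g. "play Blue Monday".
        var title = e.Result.Text.Substring("play ".Length);
        Console.WriteLine("Requested song: " + title);
        // Hypothetical hook: hand the title to whatever controls playback.
    }
}
```

Building a grammar from thousands of titles is also exactly the kind of start-up cost described above, which is why caching or lazily constructing the grammar is a common optimisation; whether APV does something along those lines is not stated.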
The project as a whole sports some highly sought-after features (especially the custom ringtone creation), so why pack so much into a single package?
When I was creating the various aspects of the app, I knew straight away that I could sell them as individual apps and possibly earn a higher revenue. However, the first and foremost aspect for me is to be proud of my apps and to ensure that anyone who actually uses my apps is getting the best possible experience. I’ve been using APV in its current state for a couple of weeks and I think when users do too, they’ll see that all the features work well together.
There’s a fluid interaction from one task to another and the custom ringtone feature sits very nicely in this process. If I’m listening to a song and I suddenly hear a particular bit I want as a ringtone, I can immediately do that without having to exit the app. That experience is something I consider important and hopefully my users will appreciate that when they use it. I’m sure other apps of a similar nature will hit the Marketplace, but my priorities lie with the actual users of my apps even if that results in fewer sales.
What can we expect from APV in the future once version 1 is out in the wild?
Firstly, I’d love to receive feedback and build on that. I try and make it easy to contact me and do my best to reply to people who message me on Twitter or email me. Being a one-man team means that I can listen to my users directly and that’s an advantage I’d hate to see go to waste. In fact, I’ve already received feedback regarding the name Air Pick Voice or APV. There have been mixed responses regarding the name and so I’m accepting new name suggestions until Tuesday 7th (more information at www.keyboardp.me).
Besides feedback, there are some very cool plans and I think they adhere to my philosophy of trying to do things differently. Version 1 sets out to facilitate your listening experience from an interaction point of view, but future plans, which have been with me from the beginning, attempt to improve it from a psychological point of view. That’s all I’m saying for now...
Are you looking forward to the upcoming Windows 8 as a chance to expand onto the big screen, as well as mobile, with higher levels of integration being made available?
Absolutely. I feel that Microsoft are carefully coordinating certain aspects of Windows Phone to coincide with Windows 8. Not only does this mean that even more things are going to be possible, but that the entire development process is going to be made even easier. As someone who adores technology and its potential, Windows 8 and the integration it brings is something I’m very excited about.
What other Windows Phone projects are you working on?
I was working on a game (which, incidentally, was the app being developed on the machine that didn’t have my music). During development of the game, I created the prototype of APV (which was known as ‘PhoneZune’) to solve the music issue, and the response to the video I uploaded was simply immense. A lot of people wanted APV so I put the game on the back burner and made APV my main project over the last couple of months. I have a couple of other apps that I have prototyped and I believe that they’re all as unique as APV as well as changing the way people perform certain tasks. However, APV is my current focus at the moment so you’ll have to wait for a bit until I announce the other apps.
What advice would you give to other aspiring developers?
Don’t give up. As cliché as that sounds, I know of many developers who jumped into the deep end, tried to code an ambitious project as their first attempt and were permanently put off programming. It’s important to start off simple and regardless of how pointless some of the more basic tutorials seem, there’s always a reason they’ve become standard. There’s nothing wrong with ambition, but it needs to be coupled with ability in order for anything to be realised.
I think it’s also quite easy to be put off programming when you see people on forums answering complex questions off the top of their heads. It’s important to remember that these people also started off without any programming knowledge and I think a common trait amongst the best programmers is that they’ve always stuck at it.
Thank you for your time. Any closing words about WP7's future?
I believe that Windows Phone is, and always has been, in this for the long term. I think it would be naïve to have expected Windows Phone to take a huge chunk of the market share in a year or so. However, the platform is solid, the development experience is second to none in my opinion, and the final hurdle is to get this message across to the consumers. Having a great product isn’t enough in this industry and sometimes you have to compete on numbers that don’t necessarily have tangible effects.
A lot of non-technical consumers will look at the specs of a device and if it has higher numbers, they’d assume it’s faster than a device that has lower specs. I think Windows Phone needs to, and will, start competing on the specification front even if it’s just for the marketing aspect. I also believe Microsoft will start to target the lower end of the market where Android is dominating. I feel that the important question there is if people are buying Android devices because of the Android brand or because of the price. Competing on positive branding is much more difficult than competing on pricing in this case. It’s going to be an uphill battle, but the fact that Windows Phone is a solid device is a great start to the climb.
Thank you for the interview, it’s been an absolute pleasure!
You can follow keyboardP on Twitter, view his previous videos, and check out his development blog to keep up-to-date.
Off subject, but I've been watching the SB and not 1 WP ad. Did I miss one?
Rude! ... Much love to keyP.!
keyboardP says:
Thanks! :) The SB would've been a great time to show something so I guess we'll have to wait for another major event. (Olympics 2012?)
ccrraaiigg007 says:
I'm afraid there wasn't one.
Good game, but I was really hoping to see some WP action.
Well, at least there were no iPhone ads and only one Android ad (the Galaxy Note).
Actually, the entire pre-game show was sponsored by the Droid Razr Maxx. The host desk was branded with a big Razr Maxx logo, and they did show that "why would we limit the iPhone" commercial once. To be honest, I don't think MS knows how to do a SB production-quality ad just yet. It's easy: just add stars, a dance number, fast cars, explosions, pretty girls, CG! They are clueless.
This was the best opportunity all year!
How about Music Genie for a name?
I like the name, but the main domain names are taken which is something I'd like to have (.com/.net at least).
Fantastic article I loved it!
Looking forward to playing with Epic! erhmm Air Pick Voice!
Glad you liked the interview! lol, I'm going to put up a new name tomorrow on my blog and see what people think of it. APV only seems to work with certain accents.
Great work from WPCentral and Keyboardp. Now for a name tip:
• MuzikVoice
• Vocoaul (VOcal COntrolled AUdio Language), Vobaco (Vocal BAnd COntroller)
• Mubaco (MUsic BAnd COntroller) or Mucoba.
• Sric (Synthesis Recorder Interface Control)
Just throwing out some stuff. It would be a good idea to give it a name that helps with product recognition. Looking forward to using this; hopefully it's not too hard to set up and get going.
Thanks for the suggestions. A couple of them already exist (MuzikVoice and Mubaco), but I'll take the others into consideration. At the moment, I'm quite liking "TellDJ" as it's easy to pronounce and spell, shows that it's related to music, and easily explains what the goal of the app is. What do you think?
I wonder if he could have playback on the phone (and have the entire app on his phone).
This isn't possible just yet, but I have thought about it. There is one way of doing it that I know of, but it's not very scalable at the moment (so if a lot of users used it, it would be quite slow). However, this is something I would like to implement in a future update :)
Hello and welcome to my web-page! I am a lead engineer with the R&D group at Industrial Light & Magic, working on graphics and vision algorithms for visual effects and computer games. My current research focus is facial animation and performance capture. Before joining ILM, I was working as a Research Scientist in the Algorithms group at Epson Palo Alto Lab. I graduated with a Ph.D in Robotics from the School of Computer Science at Carnegie Mellon University. My areas of interest span computer vision, computer
graphics and robotics. While at Carnegie Mellon, I was a part of the CMU Graphics group, and advised by Prof.
Steven Seitz, Prof.
Pradeep Khosla and Prof. Jessica Hodgins. I also spent a few great summers in Seattle working at University of
Washington's graphics lab GRAIL. hiking at McConnell's Mill trail, Pittsburgh | 计算机 |
For example, an NGO representative from Turkey spoke about the ban on headscarves imposed by several countries, particularly in government buildings and schools. She said that banning headscarves actually isolates Muslim women and makes it even harder for them to participate in politics and public life. NGOs from Tajikistan voiced their strong support for the network of Women’s Resource Centers, which has been organized under OSCE auspices. The centers provide services such as legal assistance, education, literacy classes, and protection from domestic violence. Unfortunately, however, they are short of funding. NGO representatives also described many obstacles that women face in Tajikistan’s traditionally male-oriented society. For example, few women voted in the February 2010 parliamentary elections because their husbands or fathers voted for them. Women were included on party candidate lists, but only at the bottom of the list. They urged that civil servants, teachers, health workers, and police be trained on legislation relating to equality of opportunity for women as means of improving implementation of existing laws. An NGO representative from Kyrgyzstan spoke about increasing problems related to polygamy and bride kidnappings. Only a first wife has any legal standing, leaving additional wives – and their children - without social or legal protection, including in the case of divorce. The meeting was well-attended by NGOs and by government representatives from capitals. However, with the exception of the United States, there were few participants from participating States’ delegations in Vienna. This is an unfortunate trend at recent SHDMs. Delegation participation is important to ensure follow-up through the Vienna decision-making process, and the SHDMs were located in Vienna as a way to strengthen this connection. Education of Persons belonging to National Minorities: Integration and Equality By Janice Helwig, Policy Advisor The OSCE held its second SHDM of 2010 on July 22-23 in Vienna, Austria, focused on the "Education of Persons belonging to National Minorities: Integration and Equality." Charles P. Rose, General Counsel for the U.S. Department of Education, participated as an expert member of the U.S. delegation. The meeting was opened by speeches from the OSCE High Commissioner on National Minorities Knut Vollebaek and Dr. Alan Phillips, former President of the Council of Europe Advisory Committee on the Framework Convention for the Protection of National Minorities. Three sessions discussed facilitating integrated education in schools, access to higher education, and adult education. Most participants stressed the importance of minority access to strong primary and secondary education as the best means to improve access to higher education. The lightly attended meeting focused largely on Roma education. OSCE Contact Point for Roma and Sinti Issues Andrzej Mirga stressed the importance of early education in order to lower the dropout rate and raise the number of Roma children continuing on to higher education. Unfortunately, Roma children in several OSCE States are still segregated into separate classes or schools - often those meant instead for special needs children - and so are denied a quality education. Governments need to prioritize early education as a strong foundation. Too often, programs are donor-funded and NGO run, rather than being a systematic part of government policy. 
While states may think such programs are expensive in the short term, in the long run they save money and provide for greater economic opportunities for Roma. The meeting heard presentations from several participating States of what they consider their "best practices" concerning minority education. Among others, Azerbaijan, Belarus, Georgia, Greece, and Armenia gave glowing reports of their minority language education programs. Most participating States who spoke strongly supported the work of the OSCE High Commissioner on National Minorities on minority education, and called for more regional seminars on the subject. Unfortunately, some of the presentations illustrated misunderstandings and prejudices rather than best practices. For example, Italy referred to its "Roma problem" and sweepingly declared that Roma "must be convinced to enroll in school." Moreover, the government was working on guidelines to deal with "this type of foreign student," implying that all Roma are not Italian citizens. Several Roma NGO representatives complained bitterly after the session about the Italian statement. Romani NGOs also discussed the need to remove systemic obstacles in the school systems which impede Romani access to education and to incorporate more Romani language programs. The Council of Europe representative raised concern over the high rate of illiteracy among Romani women, and advocated a study to determine adult education needs. Other NGOs talked about problems with minority education in several participating States. For example, Russia was criticized for doing little to provide Romani children or immigrants from Central Asia and the Caucasus support in schools; what little has been provided has been funded by foreign donors. Charles Rose discussed the U.S. Administration's work to increase the number of minority college graduates. Outreach programs, restructured student loans, and enforcement of civil rights law have been raising the number of graduates. As was the case of the first SHDM, with the exception of the United States, there were few participants from participating States’ permanent OSCE missions in Vienna. This is an unfortunate trend at recent SHDMs. Delegation participation is important to ensure follow-up through the Vienna decision-making process, and the SHDMs were located in Vienna as a way to strengthen this connection. OSCE Maintains Religious Freedom Focus By Mischa Thompson, PhD, Policy Advisor Building on the July 9-10, 2009, SHDM on Freedom of Religion or Belief, on December 9-10, 2010, the OSCE held a SHDM on Freedom of Religion or Belief at the OSCE Headquarters in Vienna, Austria. Despite concerns about participation following the December 1-2 OSCE Summit in Astana, Kazakhstan, the meeting was well attended. Representatives of more than forty-two participating States and Mediterranean Partners and one hundred civil society members participated. The 2010 meeting was divided into three sessions focused on 1) Emerging Issues and Challenges, 2) Religious Education, and 3) Religious Symbols and Expressions. Speakers included ODIHR Director Janez Lenarcic, Ambassador-at-large from the Ministry of Foreign Affairs of the Republic of Kazakhstan, Madina Jarbussynova, United Nations Special Rapporteur on Freedom of Religion or Belief, Heiner Bielefeldt, and Apostolic Nuncio Archbishop Silvano Tomasi of the Holy See. 
Issues raised throughout the meeting echoed concerns raised during at the OSCE Review Conference in September-October 2010 regarding the participating States’ failure to implement OSCE religious freedom commitments. Topics included the: treatment of “nontraditional religions,” introduction of laws restricting the practice of Islam, protection of religious instruction in schools, failure to balance religious freedom protections with other human rights, and attempts to substitute a focus on “tolerance” for the protection of religious freedoms. Notable responses to some of these issues included remarks from Archbishop Silvano Tomasi that parents had the right to choose an education for their children in line with their beliefs. His remarks addressed specific concerns raised by the Church of Scientology, Raelian Movement, Jehovah Witnesses, Catholic organizations, and others, that participating States were preventing religious education and in some cases, even attempting to remove children from parents attempting to raise their children according to a specific belief system. Additionally, some speakers argued that religious groups should be consulted in the development of any teaching materials about specific religions in public school systems. In response to concerns raised by participants that free speech protections and other human rights often seemed to outweigh the right to religious freedom especially amidst criticisms of specific religions, UN Special Rapporteur Bielefeldt warned against playing equality, free speech, religious freedom, and other human rights against one another given that all rights were integral to and could not exist without the other. Addressing ongoing discussion within the OSCE as to whether religious freedom should best be addressed as a human rights or tolerance issue, OSCE Director Lenarcic stated that, “though promoting tolerance is a worthwhile undertaking, it cannot substitute for ensuring freedom of religion of belief. An environment in which religious or belief communities are encouraged to respect each other but in which, for example, all religions are prevented from engaging in teaching, or establishing places of worship, would amount to a violation of freedom of religion or belief.” Statements by the United States made during the meeting also addressed many of these issues, including the use of religion laws in some participating States to restrict religious practice through onerous registrations requirements, censorship of religious literature, placing limitations on places of worship, and designating peaceful religious groups as ‘terrorist’ organizations. Additionally, the United States spoke out against the introduction of laws and other attempts to dictate Muslim women’s dress and other policies targeting the practice of Islam in the OSCE region. Notably, the United States was one of few participating States to call for increased action against anti-Semitic acts such as recent attacks on Synagogues and Jewish gravesites in the OSCE region. (The U.S. statements from the 2010 Review Conference and High-Level Conference can be found on the website of the U.S. Mission to the OSCE.) In addition to the formal meeting, four side events and a pre-SHDM Seminar for civil society were held. 
The side events were: “Pluralism, Relativism and the Rule of Law,” “Broken Promises – Freedom of religion or belief in Kazakhstan,” “First Release and Presentation of a Five-Year Report on Intolerance and Discrimination Against Christians in Europe” and “The Spanish school subject ‘Education for Citizenship:’ an assault on freedom of education, conscience and religion.” The side event on Kazakhstan convened by the Norwegian Helsinki Committee featured speakers from Forum 18 and Kazakhstan, including a representative from the CiO. Kazakh speakers acknowledged that more needed to be done to fulfill OSCE religious freedom commitments and that it had been a missed opportunity for Kazakhstan not to do more during its OSCE Chairmanship. In particular, speakers noted that religious freedom rights went beyond simply ‘tolerance,’ and raised ongoing concerns with registration, censorship, and visa requirements for ‘nontraditional’ religious groups. (The full report can be found on the website of the Norwegian Helsinki Committee.) A Seminar on Freedom of Religion and Belief for civil society members also took place on December 7-8 prior to the SHDM. The purpose of the Seminar was to assist in developing the capacity of civil society to recognize and address violations of the right to freedom of religion and belief and included an overview of international norms and standards on freedom of religion or belief and non-discrimination. | 计算机 |
by: Sam
Spelunky, the masterful roguelike from indie developer Derek Yu, recently came to PlayStation 3 and PlayStation Vita. As the only way to play it on the go, the Vita version of Spelunky was instantly praised as the best way to play the game, aside from one little problem: the game had framerate issues in a few of the special levels. Yesterday developers Derek Yu and Blitworks issued an update that fixed the slowdown issues as well as a trophy bug.
If that wasn't enough, they have also added the Daily Challenge feature, previously only available on the PC version. This mode randomly creates a new level every day that all players can challenge once. If you die you have to wait till the next day for another shot, but that also means it's an all-new level. This takes some of the random luck out of the game and lets you compete against a leaderboard of players who all faced the same challenges that you did.
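The mechanic is simple to picture: derive one seed per calendar day and feed it into the normal level generator, so every player rolls the same levels. A minimal sketch of that idea (an illustration only, not the developers' actual implementation) might look like this:

```csharp
using System;

static class DailyChallenge
{
    // Derive one shared seed per calendar day (UTC) so every player who
    // generates today's challenge gets the same level layout.
    public static int SeedForToday()
    {
        DateTime today = DateTime.UtcNow.Date;
        return today.Year * 10000 + today.Month * 100 + today.Day; // e.g. 20140115
    }

    public static Random LevelGenerator()
    {
        // Feeding the shared seed into the level generator's RNG is what makes
        // the "same levels for everyone" property fall out of an ordinary
        // procedural generator. (Sketch only - not Spelunky's actual code.)
        return new Random(SeedForToday());
    }
}
```

The one-attempt rule and the leaderboard then only need the service to record a single score per player per date.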
Spelunky is one of my favorite games of last year, and probably my favorite Vita game at the moment. This update fixes all of the issues and brings it in line with the PC version, so now there's no excuse to not go get it right now!
Daily Challenges Now Available in Spelunky on PS3 and PS Vita
Hey, everyone! It’s Derek once again. First off, thank you for your tremendous support of Spelunky on PS3 and PS Vita. You’ve made it clear that it was the right choice bringing the game to PlayStation! We’re really happy about how the game has been received (generally with much glee, followed by screams of anguish, followed by more glee).
But I’m really here to talk about our first update, which is out now. Of course, every game launches with some number of bugs, and Blitworks has been working hard to fix as many of those as possible. The slowdown in certain areas of the game was probably the biggest issue, and that’s been addressed. They also fixed a bug that was preventing certain Trophies from being earned.
Also, if you’ve been watching streams of people playing Spelunky on PC, you may have noticed that that version of the game has a unique feature called the Daily Challenge, where each day a set of levels is generated that has the same seed for everyone playing it. If you get a high score on the Daily Challenge, you can really attribute it to your skill and knowledge of the game, rather than the luck of the draw (i.e. serious bragging rights).
On top of that, you only get one chance to play the Daily — if you die, you gotta wait until the next day to try again! So yeah, somehow we managed to crank up the tension of playing Spelunky even higher… and PC players seem to really like it!
When Blit heard about this new mode, they couldn’t wait to implement it on PSN. That’s one of the great things about working with them — they love their work and relish the technical challenges of doing an awesome port. It’s right in line with the spirit of Spelunky and the spirit of the Daily, which, as you may have guessed, you can now play on PSN as part of this update.
So enjoy the Daily Challenge. Enjoy the bug fixes and optimizations. Enjoy life. (Enjoy death, too.) Just enjoy.
(And special thanks to Doug Wilson and Zach Gage. They gave us the idea for the Daily Challenge in the first place and have been great friends and supporters for a long time. You guys are the best.) | 计算机 |
Friday, August 15 2008 @ 02:09 PM EDT
I know it will not surprise you to hear that ISO/IEC have rejected the four appeals against OOXML. Here's their press release. Now what? Andy Updegrove:
Under the ISO rules of process, this now paves the way for the as-adopted version of OOXML, now called IS0/IEC DIS 29500, Information technology - Office Open XML, to proceed to publication. That version is substantially different than the current implementation of OOXML in Office 2007, and its text has still not been publicly released by ISO/IEC. According to a joint press release, "this is expected to take place within the next few weeks on completion of final processing of the document." Intriguingly, the press release goes on to say, "and subject to no further appeals against the decision.
That should be hilarious, when they publish it and anyone tries to actually use it. Anyone? Bueller? Keep in mind that Microsoft's Office 2007 does not implement OOXML. Infoworld May 21, 2008: On Wednesday, Microsoft said it will not have support for the current ISO specific for OOXML until it releases the next version of Office, code-named Office 14. The company has not said when that software will be available. No one does. How could they? Why would they? What really happens next: the complaints lodged with the EU Commission. ISO/IEC decided to go down with the ship.Some countries involved in the OOXML process filed complaints, but earlier, the Commission had already announced that it would investigate whether "Office Open XML as implemented in Office is sufficiently interoperable with competitors' products". Here's a paper, Lost in Translation, that indicates that it is not. The authors make an understandable mistake I hope the EU Commission does not, thinking that Office 2007 implements OOXML, the ISO standard. It does not. OOXML as a standard has not been published yet. I wrote to Mr. Shah, and that will be corrected in the next revision. But take a look at how bad the situation currently is.
Here's the ISO/IEC press release:
ISO and IEC members give go ahead on ISO/IEC DIS 29500
The two ISO and IEC technical boards have given the go-ahead to publish ISO/IEC DIS 29500, Information technology � Office Open XML formats, as an ISO/IEC International Standard after appeals by four national standards bodies against the approval of the document failed to garner sufficient support.
None of the appeals from Brazil, India, South Africa and Venezuela received the support for further processing of two-thirds of the members of the ISO Technical Management Board and IEC Standardization Management Board, as required by ISO/IEC rules governing the work of their joint technical committee ISO/IEC JTC 1, Information technology.
According to the ISO/IEC rules, DIS 29500 can now proceed to publication as an ISO/IEC International Standard. This is expected to take place within the next few weeks on completion of final processing of the document, and subject to no further appeals against the decision.
The adoption process of Office Open XML (OOXML) as an ISO/IEC Standard has generated significant debate related to both technical and procedural issues which have been addressed according to ISO and IEC procedures. Experiences from the ISO/IEC 29500 process will also provide important input to ISO and IEC and their respective national bodies and national committees in their efforts to continually improve standards development policies and procedures.
About ISO
ISO is a global network of national standards institutes from 157 countries. It has a current portfolio of more than 17 000 standards for business, government and society. ISO's standards make up a complete offering for all three dimensions of sustainable development � economic, environmental and social. ISO standards provide solutions and achieve benefits for almost all sectors of activity, including agriculture, construction, mechanical engineering, manufacturing, distribution, transport, medical devices, information and communication technologies, the environment, energy, quality management, conformity assessment and services.
About IEC
The IEC is the world's leading organization that prepares and publishes International Standards for all electrical, electronic and related technologies � collectively known as "electrotechnology". IEC Standards cover a vast range of technologies from power generation, transmission and distribution to home appliances and office equipment, semiconductors, fibre optics, batteries, solar energy, nanotechnology and marine energy to mention just a few. Wherever you find electricity and electronics, you find the IEC supporting safety and performance, the environment, electrical energy efficiency and renewable energies. The IEC also manages conformity assessment schemes that certify whether equipment, systems or components conform to its International Standards. Groklaw © Copyright 2003-2013 Pamela Jones. | 计算机 |
Ryan Henson Creighton is a veteran game developer, and the founder of Untold Entertainment Inc. (http://www.untoldentertainment.com) where he creatively consults on games and applications. Untold Entertainment creates fantastically fun interactive experiences for players of all ages. Prior to founding Untold, Ryan worked as the Senior Game Developer at Canadian media conglomerate Corus Entertainment, where he created over fifty advergames and original properties for the YTV, Treehouse TV, and W networks. Ryan is the co-creator of Sissy's Magical Ponycorn Adventure, the game he authored with his then five-year-old daughter Cassandra. Ryan is the Vice President of the IGDA Toronto Chapter. He is also the author of the book that you are currently reading.
When Ryan is not developing games, he's goofing off with his two little girls and his funloving wife in downtown Toronto.
Ryan Henson Creighton has worked on the following Packt books:
Unity 4.x Game Development by Example: Beginner's Guide | 计算机 |
HexaLines is normally a $1.49 game but for the next few days, it is completely free due to participating in the myAppFree promotion. That means there is no reason not to grab this game right now, as once you lock in that license it is free forever!
New Call of Duty: Advanced Warfare trailer gives us more Kevin Spacey GamesXbox By John Callaham, Tuesday, Jul 29, 2014 at 2:51 pm EDT Activision has just released the latest trailer showing off more of the single player campaign from the upcoming near future first person shooter Call of Duty: Advanced Warfare, showing us more of Jonathon Irons, the CEO of the fictional private military company Atlas that's portrayed, in both voice and likeness form, by actor Kevin Spacey.
Reach for the Sky review – A fun and free ballooning game for Windows Phone Games By Paul Acevedo, Tuesday, Jul 29, 2014 at 1:04 pm EDT Last year Windows Phone Central's "Gorgeous" George Ponder reviewed an indie game about hot air ballooning called Reach for the Sky. It came from Copenhagen-based indie developer Aemto and utilized a charging pixel art style. Sadly, that game is no longer available on the Windows Phone Store (per the developer's decision). From the ashes of the first Reach for the Sky comes a new Reach for the Sky sharing the same title.
Although the name has not changed, the new game is essentially a sequel. Reach for the Sky (2014) is still all about climbing the screen in a hot air balloon, but the visuals and gameplay have changed quite a bit. It still features a lyrical theme song from these guys that sounds catchy at first but becomes maddening before too long. It's completely free (with unobtrusive ads), just a 4 MB download, compatible with phones with 512 MB of RAM, and only takes up 4 MB of storage. Just be sure you switch the controls to "Tap" before playing…
Kitty in the Box slides into Windows Phone immediately after iOS launch Games By Mark Guim, Monday, Jul 28, 2014 at 11:45 am EDT Ready for another challenging game that is hard to put away? If you said yes, you should check out Kitty in the Box. Originally released for iOS earlier this month, it has also just been released for Windows Phone. It looks really cute and simple, but it's not as easy as you think.
We've installed Kitty in the Box on our Nokia Lumia 930. Check out the gameplay video after break. More →
Two free games each for Xbox One and Xbox 360 in August via Games With Gold GamesXbox By Joseph Keller, Monday, Jul 28, 2014 at 11:22 am EDT Microsoft has announced the Games with Gold for August for the Xbox One and Xbox 360. Xbox Live Gold subscribers will have access to Crimson Dragon and Strike Suit Zero: Director's Cut on the Xbox One, while Motocross Madness and Dishonored will be available to gamers on the Xbox 360.
Duck Destroy, a Windows Phone game where no duck is safe Games By George Ponder, Monday, Jul 28, 2014 at 9:09 am EDT Duck Destroy is a fun, somewhat challenging arcade game for your Windows Phone that calls upon you to blast ducks out of the sky as they fly across the gaming screen.
You play the role of a fox who has been tasked with fetching dinner. Armed with everything from a slingshot to dynamite to an assault rifle, you help guide the fox through fifty levels of play that span a wide range of environments.
Available for low-memory devices, Duck Destroy is an entertaining addition to your Windows Phone gaming library.
The Walking Dead Season 3 confirmed by Telltale Games Games By Rich Edmonds, Monday, Jul 28, 2014 at 7:32 am EDT Telltale Games has announced the studio will be developing Season 3 for its Walking Dead series of games. Unfortunately, while details are non-existent as to what will be contained in the game, as well as a release date, it's pleasing for fans to know that Season 3 will be coming to supporting platforms.
Educational endless runner, Get Water arrives on Windows Phone Games By Chris Parsons, Sunday, Jul 27, 2014 at 5:51 pm EDT After having spent some time on iOS and Android, the developers behind Get Water, Decode Global Studio, have finally made the game available on Windows Phone. If you're not familiar with the game, Get Water is the educational sidescrolling endless runner that aims to bring attention to the water scarcity in India and South Asia, and the effects it has on girls' education.
Royal Revolt 2 review – Nearly the best raiding game on Windows Phone and Windows 8 GamesWindows 8 Apps+Games By Paul Acevedo, Saturday, Jul 26, 2014 at 9:26 am EDT The original Royal Revolt (developed by Flare Games in Germany) was one of the first truly high quality games for Windows 8 and RT (it also appeared on Windows Phone). The game play combined smooth touch-screen combat with mild strategy elements, making for an addictive reverse tower defense game. Other than a steep and unfair difficulty curve and some iffy English translation, the first Royal Revolt was just about perfect.
Royal Revolt 2 for Windows Phone 8 and Windows 8 and RT drastically changes gears to a completely player-versus-player raiding focus, much like Cloud Raiders and Clash of Clans. Luckily, the addictive core game play survives almost completely intact. With an endless array of opponents to attack and defenses to upgrade, this sequel has become a mainstay on my daily playlist. But as good as it is, Royal Revolt 2 still has some room for improvement…
Best Rated Windows Phone Games Games By George Ponder, Saturday, Jul 26, 2014 at 9:14 am EDT Windows Phone Central Game Roundup: The Best Rated Games
Game of Legions, a match-three Windows Phone game with punch Games By George Ponder, Saturday, Jul 26, 2014 at 8:01 am EDT Game of Legions is a match-three styled Windows Phone game that has a fantasy combat twist. You create the matches to attack your opponent and advance through the gaming levels.
While the graphics are a little on the dark side, game play is challenging and it's kinda fun seeing your matches hurl spears, swords, axes and other items at your enemy.
Available for low-memory devices, Game of Legions is a nice break from the typical match-three games you see in the Windows Phone Store.
Test your piloting skills in Indian Air Force's official combat simulation game Games By Harish Jonnalagadda, Saturday, Jul 26, 2014 at 4:30 am EDT The Indian Air Force has released an official combat simulation game called Guardians that gives players an insight into the life of fighter jet pilot. The game features a training mode in which players get acquainted with the controls of a Sukhoi SU 30, a twin-engine two-seater fighter jet commonly used by the Indian Air Force during combat missions.
Play as Bollywood star Salman Khan in the official Kick game Games By Abhishek Baxi, Friday, Jul 25, 2014 at 2:17 pm EDT Kick (2014) is a Bollywood action thriller film, directed and produced by Sajid Nadiadwala and stars Salman Khan in the lead role. As part of the promotions in the run up to the movie release, the producers have partnered with Indiagames to release the official game for the movie, KICK-TheOfficialGame.
In the game, you play as Salman Khan's character in the movie – a good-natured thief who steals from the rich and gives to the poor. As you go along, you need to steal from five different locations and get on a high-speed bike chase against the cops. You need to steal with your mind coming in like a storm, and escape with valor. More →
Cloud Raiders updated with new level cap and more Games By John Callaham, Friday, Jul 25, 2014 at 8:36 am EDT Cloud Raiders, the highly popular fantasy action-strategy game from developer Game Insight, got a new update today for the Windows Phone version that adds a couple of small but still cool features, including a new level cap for the game,
Mad Transporter, putting the pedal to the metal with this Windows Phone/Windows game Games By George Ponder, Friday, Jul 25, 2014 at 7:39 am EDT Mad Transporter is a Windows Phone game that will test your skills at traveling down some rather treacherous roadways without losing your load.
The game, as you might guess from the title, has you operating a cargo transport truck with the goal of reaching the finish line as fast as possible without a) running out of gas and b) having all (or most) of your cargo intact. Graphics are nice and game play challenging enough to keep things mildly interesting.
Available for low-memory devices, Mad Transporter may not rate as a "go to" game; it's not a bad choice to have to fall back on from time to time. Plus, being a universal app, you can always take things to the larger screens of Windows 8.
Get addicted to Free the Network, launches on iOS, Android, and Windows Phone Games By Mark Guim, Thursday, Jul 24, 2014 at 4:33 pm EDT If you can't get enough of endless runners, here's another one worth checking out. Free the Network is very difficult, but you will keep coming back for more. Want to know what else is cool about this game? Pixel Blimp (pixelblimp.co.uk), the developers, has released the game simultaneously on Android, iOS, and Windows Phone!
We've installed it on our Nokia Lumia 930. Head past the break to watch our gameplay video. More →
Gameloft's Modern Combat 5 takes the shot on Windows and Windows Phone GamesWindows 8 Apps+Games By Rich Edmonds, Thursday, Jul 24, 2014 at 10:20 am EDT Gameloft has released Modern Combat 5 today on numerous platforms, including both Windows and Windows Phone. We covered the smartphone launch earlier this morning, but now those with Windows-based tablets and other hardware can enjoy the action.
Modern Combat 5 review - raising the bar for mobile action Games By Simon Sage, Thursday, Jul 24, 2014 at 8:23 am EDT Modern Combat 5 is the latest in a long series of high-quality first person shooter games from Gameloft. The whole Modern Combat series shamelessly riffs on popular shooter franchises like Call of Duty, and has been doing so well before official mobile counterparts for those games were made.
By and large, Modern Combat 5 is lock in step with Modern Combat 4. It boasts bar-setting graphics, rich multiplayer, familiar experience and achievement progression, a dazzling array of weapon customization, and top-notch voice acting. That's great and all, but there is one distinct difference in Modern Combat 5 which could easily stand as the sole selling point: there are zero in-app purchases.
Modern Combat 5 now available on Windows Phone [Updated] Games By Paul Acevedo, Thursday, Jul 24, 2014 at 4:51 am EDT Earlier this week we announced that the long-awaited Modern Combat 5: Blackout would be launching on mobile Windows platforms on the same day-and-date as the Android and iOS versions. In the past, Gameloft titles typically arrived on Windows Phone months or even years later than other platforms (just look at Real Soccer!). This year, long-awaited titles like Heroes of Order & Chaos and Order & Chaos Duels finally came over, and new titles will release at the same time as they do on competing platforms. Progress!
Modern Combat 5 is now available on Windows Phone as promised (the Windows 8 version should arrive later today). The Windows Phone version requires at least 1 GB of RAM. Top-end 3D graphics need plenty of memory in order to function, which shouldn't surprise anybody. None of the versions support Xbox Live, something we all should have gotten over a year ago. The game rings up at $6.99 – a fair price for a full console-quality campaign and extensive online multiplayer. Oh, and it supports Moga controllers!
Unicorn Rush, a fun endless runner on horseback for Windows Phone Games By George Ponder, Wednesday, Jul 23, 2014 at 8:29 am EDT Unicorn Rush is a fun endless runner styled game for Windows Phone that puts you on horseback in an effort to save the fantasy Kingdom of Grant.
The back-story has a riot occurring at the Kingdom's border and as soldiers are dispatched to deal with the riot, the evil Duke Hogan has snuck into the Kingdom's Capital and captured the King and Princess. You play the role of one of the King's Knights who will have to rush back to the Capital and defeat the evil Duke. The ride back to the capital isn't a gingerly stroll in the park with a wide range of obstacles, traps and monsters to overcome.
Available for low-memory devices Unicorn Rush makes a nice impression. Graphics are nice, game play challenging and if you can overlook that the game's Knight looks more like a farm hand, it's a Windows Phone game worth trying.
The St. John’s Bible
IllustrationPeople Who DoProduct DesignType & Typography
This may be a bit of an old link, but it’s new for me, I think. The St. John’s Bible is a project by Donald Jackson (and team) and Minnesota’s Saint John’s Benedictine Abbey & University to produce a hand-written and illuminated bible to, as they put it, celebrate the new millennium. It’s both a massive project and a massive book - over 1000 pages with spreads 80cm wide by 60cm high, produced over 10 years at a cost of four million dollars (though its value may be denominated in other ways). The origin of the work is interesting in that it comes from the classic desire to complete a magnum opus:
For many years Donald Jackson, Senior Illuminator to Her Majesty’s Crown Office, had dreamed of creating a modern, illuminated Bible to celebrate the new millennium. Finally, in November 1995, he presented the idea to Saint John’s Benedictine Abbey & University in Minnesota.¶ Work started in 2000 and is scheduled for completion in 2007, at a total cost of over £2 million. It is taking place in a scriptorium in Monmouth, Wales, under the artistic direction of Donald Jackson and his team of scribes and illuminators.The Victoria and Albert Museum
Some sample pages from the bible. The images on the left are from the St John’s Bible website, the ones on the right are from the Victoria and Albert Museum.
Jackson has brought together an incredible range of styles for the bible, from rich, lush, g | 计算机 |