High Level Logic (HLL) Open Source over leJOS
Post your NXJ projects, project ideas, etc here!
rogerfgay New User
Posts: 9 Joined: Thu Oct 21, 2010 11:52 am High Level Logic (HLL) Open Source over leJOS
Postby rogerfgay » Thu Oct 21, 2010 11:55 am High Level Logic (HLL) Open Source Project
Blog entry: Lego Mindstorms NXT Robots (leJOS)
(Although I still get a USB related error.)
Correlation does not prove causality.
Postby rogerfgay » Sat Oct 23, 2010 11:35 am The prototype for HLL was finally built in 2007 as part of a larger robotics project. It didn't get much attention, because Microsoft released its Robotics Studio just as the project started and the main technical team decided to focus attention on that. Nonetheless, a small experimental prototype was built in Java SE with a browser-based GUI and simple robotics demo, fleshing out design details. That code was cleaned up and used to begin the open-source project.
High Level Logic (HLL) is kind of what its title sounds like it should be about - high level logic. The open-source project is still in a relatively early stage, but HLL has a long history. Its first version (as a concept and idea) came in the mid to late 1980s, back when "rule-based expert systems" were being commercialized as artificial intelligence. Rule-based expert systems technology ran into serious limitations. One of the particular problems was that applications were limited to very narrowly focused domains ... like you couldn't run a farm with one, but you could design one to tell you if your pigs have hoof and mouth disease. The first concept of HLL was to provide a higher level logical framework for using multiple expert systems, exchanging information between them, etc.
There are three layers to the standard HLL machine: an executive, manager, and expert layer. The executive is in charge of communication with the outside world, including other HLL installations anywhere in the world - for example, one for your robot command center and another above the robot's control system - giving both your command center and the robot their own high level logical intelligence. It decides whether to process command input and, if so, passes the task on to the management layer.
Manager(s) follow plans to implement tasks (like a project manager), so a command assigned to a manager can be something much more complex than turn left, etc. A manager uses a plan constructed for implementing a task, finds the resources needed, and sends individual tasks to the expert layer. The expert layer invokes "experts" - i.e. any specialized code needed to implement the individual tasks. These can include both direct "low level" tasks (relative to High Level Logic) like "turn left", or they can themselves be something like a rule-based expert system or any other program you've written of whatever complexity and purpose. An "expert" is a program with "specialized" functionality ("expertise" in HLL parlance). Back then, this was all in the world of advanced research. Fast forward to the 21st century, and there's just a huge amount of existing technology that makes it easier to build a much more powerful version. The first version being built is ready for application development, but there's more work needed to make it nicer. For example, "plans" are currently implemented as (socket) server protocols, just like any protocol would be written. Good progress is being made on XML processing, however, which will allow plans to be constructed in XML files. Then I'll even get a simple-rule processor constructed for HLL (I hope), or something like JBoss Rules can be used (very powerful, written in Java, easy to integrate into Java programs).
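To make the layering a little more concrete, here is a rough sketch of the executive/manager/expert idea in plain Java SE. The class and method names are purely illustrative - they are not taken from the actual HLL code base - and a "plan" is reduced to nothing more than an ordered list of expert-level tasks:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative only: a simplified sketch of the three-layer idea, not the real HLL API.
    interface Expert {
        void execute(String task); // one piece of specialized functionality
    }

    class ExpertLayer {
        private final Map<String, Expert> experts = new HashMap<>();
        void register(String task, Expert expert) { experts.put(task, expert); }
        void perform(String task) {
            Expert e = experts.get(task);
            if (e != null) e.execute(task);
            else System.out.println("No expert registered for: " + task);
        }
    }

    class Manager {
        // A "plan" here is just an ordered list of expert-level tasks for a high level command.
        private final Map<String, List<String>> plans;
        private final ExpertLayer experts;
        Manager(Map<String, List<String>> plans, ExpertLayer experts) {
            this.plans = plans;
            this.experts = experts;
        }
        void handle(String command) {
            for (String task : plans.getOrDefault(command, List.of())) {
                experts.perform(task);
            }
        }
    }

    class Executive {
        private final Manager manager;
        Executive(Manager manager) { this.manager = manager; }
        // Decide whether to accept outside input, then delegate to the management layer.
        void receive(String command) {
            if (command != null && !command.isEmpty()) manager.handle(command);
        }
    }

    public class LayerDemo {
        public static void main(String[] args) {
            ExpertLayer experts = new ExpertLayer();
            experts.register("turn_left", task -> System.out.println("expert: turning left"));
            experts.register("move_forward", task -> System.out.println("expert: moving forward"));
            Manager manager = new Manager(
                    Map.of("fetch_ball", List.of("turn_left", "move_forward")), experts);
            new Executive(manager).receive("fetch_ball");
        }
    }

The point of the sketch is only the direction of delegation: a high level command enters at the top, and only the bottom layer knows anything about the specific low level actions.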
The current version is written entirely in Java SE, intentionally. It's not that big yet, and has an API document, but so far only a little bit of tutorial (sufficient if you're a serious Java programmer - I think - and also get the concept ...)
There's also a description of a Java EE version that will, among other things, have a different way of communicating between layers and other HLL installations. For now, it's simple sockets. It's the way socket communication is used that makes the system "loosely coupled", flexible (reusable for a wide range of applications), and capable of using distributed components easily (it's built in - just use the sendCommand() method). It's the loosely coupled approach plus the fact that messages are received by a system capable of dealing with specialized "high level" tasks that makes this an agent system.
Postby rogerfgay » Sat Oct 23, 2010 12:31 pm Thought on an interesting application.
Let's say you set up an environment for one or more LEGO robots. It could be for robots that can move things from one place to another - for example - let's say that. (And this might correspond to some robot competitions as well.)
Provide a map of the environment with all its objects and important characteristics (like, this is where the balls are stored, etc.) It doesn't need to be a three dimensional bit-map or anything like that; just a set of objects, locations, characteristics.
Provide specialized programs for moving from place to place and performing tasks - like pick up a ball, drop ball, etc. Use HLL plans to describe the sequence needed to accomplish high level tasks like go to place1 and get a ball, then go to place2 and drop the ball, etc.
You could also couple this with a strategic plan for getting as many balls as possible or whatever .... then let it go autonomous.
Postby rogerfgay » Mon Nov 01, 2010 7:33 pm The concept of High Level Logic software has not been easy for me to describe in a few short sentences, nor even in a technical brief. I'm trying a storytelling approach. There are now two short first draft "chapters" on the history of the idea that also introduce the software design - in their own way.
This is a different approach to explaining software. Please feel free to comment.
Chapter 1 starts here: http://highlevellogic.blogspot.com/2010 ... ter-1.html
Postby rogerfgay » Sat Nov 13, 2010 3:23 am Getting closer and closer to version 1.0 every day. Scenarios like this won't be difficult at all. Easy stuff.
http://www.physorg.com/news/2010-11-eth ... video.html
Postby rogerfgay » Sat Nov 13, 2010 1:00 pm When Will We Have Artificial Intelligence? http://highlevellogic.blogspot.com/2010 ... icial.html
Postby rogerfgay » Sun Nov 14, 2010 10:47 am I've been thinking about how easy it will be to set up complex robotics scenarios with HLL. My focus currently is on bringing HLL to version 1.0. Those of you who've read the blog know that - you can build applications with HLL now - but you'd find yourself writing enough Java that - except for the ability to easily communicate with other HLL units anywhere - and a great start on a browser-based GUI for robots (pushes to SVG dynamic graphics that can show your robots position) .... you might wonder why you're doing it. OK - if you want to control your robot from anywhere through a browser - but maybe you don't need that right now.
Anyway - just wanted to be sure I'm not giving a false impression before I go on - actually have some good thoughts in mind. Version 1.0 isn't that far away - but it's not there yet.
I was thinking about how easy it's going to be - in combination with LeJos, to set up complex scenarios. With HLL, you can do the building from the other direction - i.e. top-down - which should be implied by the title of the software "high level logic." So, what I have for a demo, is a robot with a High Level Logic (HLL) controller on a computer, and the browser-based GUI Command Center (loosely) connected to its own HLL. A command is issued from the GUI, which is processed by the Command Center HLL and then sent to the robot's HLL. The robot's HLL sends messages via a simple "expert" to the robot itself. I built this demo while working on the HLL prototype. video: http://highlevellogic.blogspot.com/2010 ... video.html
It uses a simple robot simulation that sends new position information back. But now that I have a Lego robot built, I want to change the demo so that the robot's HLL uses an "expert" that makes the connection to the LEGO robot using LeJos.
Overly complicated? No, not if you consider what can be done with this set up.
On the robot side, there would be sufficient software to take care of some basic commands like move forward by x meters or continue x operation until. The HLL part would contain the higher level plans. I'm thinking along the lines of setting up a scaled logistics operation of some complexity. The HLL will carry the plan for the robot operating in this environment - carrying out a complex set of tasks.
But wait - there's more. Let's say that we need to issue a command to the robot that the robot does not know how to carry out. In the real world, this can happen when you have a new robot out of the box, that hasn't been programmed for the activities of the site, or you have a new activity and haven't told all your robots how to do it yet. You'd issue the command through the Command Center, as usual, and the Command Center's HLL would pass the message to the robot's HLL as usual. But the robot's HLL would check to see if it can carry out the command and respond with a refusal on the grounds that it doesn't know how to do what it's being asked to do.
The Command Center's HLL responds by delivering the needed resources to the robot's HLL. For a new robot out of the box, so to speak (i.e. only primitive commands supported by its HLL - we assume all robots have HLL right out of the box), the robot will first need Java and HLL components needed to execute whatever will be within its domain (versus a sequence of commands issued by the Command Center HLL), and then it may also need a map (for example) of its new working environment.
There's the sketch - and I'll want to have all of this well supported with version 1.0 of HLL. This is in fact, a primary reference application for HLL.
Since I'm new to LeJos and must spend the bulk of my time working on HLL, I'm wondering if there's anybody out there who might find this interesting enough to consider helping out with LeJos expertise (and perhaps even some coding).
Postby rogerfgay » Sun Oct 23, 2011 10:06 pm Websocket Server Demonstration
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15."
1 DVD for installation on an x86_64 platform
Flash and Animation Site
The Flash ActionScript Variable
The variable is the workhorse of any programming language. In Flash, it is one way that ActionScript can talk to the Flash program. Because ActionScript is based on ECMAScript, the international standardized programming language for scripting, it makes sense that ActionScript will work in the same way as other scripting languages. So what is a variable and how does it work in ActionScript? You can think of a variable as a container or storage box for information. When you place information inside a variable, this is called assigning a value to the variable. The value, or information, will stay there until you change the value in some way by writing more ActionScript.

How do you create a variable in ActionScript? The easiest way is to use the variable in your program. Flash will recognize that it has never encountered this variable before and will make it "official". Although this method works and is perfectly acceptable, most programmers find it helpful to maintain more control. One way is by defining a variable using the var keyword. For example, let's create a variable that we will call "container" and assign to it an initial value of 20. Here is the code. Let's take a look at it and then break it down.

var container;
container = 20;

Notice that both lines of the code above end with a semicolon. The semicolon ends a line of program code much like a period ends a sentence. If you do not use the period at the end of your sentences, no one will be able to tell when one sentence ends and the next one begins. It's the same with programming. Don't forget those semicolons.

Let's take a look at the first line of code. It begins with the var keyword, which is followed by the name we want to give to our variable. It is telling Flash that we are creating a new variable and we are naming it "container". The second line of code assigns an initial value to the variable. This code might look familiar to those of you who have studied algebra. In algebra, x = 1 means that "x is equal to 1". The equal sign means "equal to" in algebra. However, in ActionScript, the equal sign means "assign the value". Therefore, the second line is telling Flash to assign the value of 20 to the variable named "container".

Tip: The two lines of code above can be condensed into one line.

var container = 20;
This content was written by Diane Cipollo. If you wish to use this content in any manner, you need written permission.
MIPS Technologies submits code for Tamarin open source project
Julien Happich, 10/30/2009 09:00 AM EDT
CAMBRIDGE, UK - MIPS Technologies Inc. has released a MIPS-optimized version of the ActionScript virtual machine (VM) that is used in web-connected technologies such as Adobe Flash Player.
The ActionScript VM is accessible via the Tamarin open source project, and is a key component in optimizing Adobe Flash Player for running on the MIPS architecture. The optimizations accelerate ActionScript 3 performance on a validation suite of benchmarks by nearly 2.5x relative to the non-optimized VM.
In real terms, according to the company, MIPS' optimized VM executes twice as fast on a MIPS32 74K CPU core relative to the optimized VM for ARM running on an ARM Cortex A8 CPU.
Adobe Flash technology for mobile phones, consumer electronics, and Internet-connected digital home devices already runs on a number of leading SoC platforms based on the MIPS architecture.
It enables the delivery of high-definition content and rich applications to Internet-connected TVs and TV-connected consumer electronic devices in the digital living room. The Adobe Flash platform for the digital home will build on these capabilities with support for custom filters and effects, native 3D transformation and animation, advanced audio processing, and graphics hardware acceleration.
"Adobe Flash technology is key for the Internet-connected multimedia experience in the digital home," said Art Swift, vice president of marketing, MIPS Technologies. "MIPS is committed to optimizing key elements of Adobe Flash Player, starting with the Tamarin project, an open source version of the ActionScript virtual machine used in Flash Player."
What the IP address meltdown means for you
The end of days for IPv4 may have come. This will soon be a big concern for end users, so it's time to prepare your business
Keir Thomas (PC World (US online)) on 02 December, 2010 06:06
The world is running out of IPv4 Internet addresses, without which the Internet can't function in its existing form.
This has been known for some time, of course, but the situation has become a little more urgent with the news that in October and November, nearly all of the remaining blocks of addresses were assigned to various Regional Internet Registries (RIR) around the world.
The allocations brings the total number of available blocks to an almost depleted level, and potentially triggers an "end days" agreement in which most of the remaining blocks are automatically assigned to the five RIRs.
In other words, there's nothing left. Almost all possible IPv4 Internet addresses have been assigned -- all 4,294,967,296 of them.
Although of concern on a global scale, the IPv4 depletion is less of an immediate concern on the ground in homes and businesses. The addresses assigned to the RIRs are handed onto Internet Service Providers and organizations within each of the countries the RIRs cover. As such, there's no immediate crisis until the RIRs themselves have assigned all their addresses.
However, if the number of Internet devices keeps growing (and it's extremely certain it will, with the boom in smartphones and tablet devices) then we're almost certainly going to see this within a year or two.
The solution is to switch to IPv6, which has been widely heralded for about 10 years and brings with it about 340 trillion addresses -- arguably enough to last the world for a century or two. The trouble is that organizations are extremely hesitant to do so. The number of Websites offering IPv6 entrances barely breaks into the two-digit range.
Additionally, you might have noticed that your ISP has yet to send you any correspondence about the need to migrate to IPv6. I recently switched to a new service provider and received a cutting-edge new router, for example, but there's no sign of any IPv6 functionality -- either on the LAN-side on the hub or gateway component, or the WAN-side, or on the router or DSL connection.
Most major operating systems are entirely compatible with IPv6 and have been for some time, although without widespread deployment it's not yet possible to see how effective such technology is. It's not cynical to expect a bug or two.
The rather strange desire to avoid switching to IPv6 has even lead to reports that some Internet Service Providers are enacting Network Address Translation (NAT) at their data centres. In other words, their customers at homes and businesses are being given addresses that are routable only on the ISP's network, and not on the wider Internet.
To simplify even further, this means that while such customers will be able to browse the Web and grab e-mail just like everybody else, they'll be unable to use file sharing services, or some services such as video conferencing -- effectively, they'll be denied any service that involves one computer directly talking to another across the Internet.
It's a little like being only half a person on the Internet. Some commentators have suggested that it turns a user into nothing more than a content consumer, who can be fed data by their ISPs, but who doesn't have the freedom to go out and fetch what they want, or experience new services that require a genuine, routable IP address.
Of course, not all IPv4 addresses that have been assigned are in use. In fact, the ratio of assigned to in-use addresses is probably lower than many might think. I suspect many organizations are holding onto IP addresses they've been assigned by their ISP or RIR, but have no intention of using. It simply makes business sense to do so in order to prepare for possible future developments. This has certainly been the case at businesses I've worked at in the past.
Some kind of amnesty whereby organizations surrender unused addresses is a possibility, but it's extremely unlikely to become a reality. If nothing else, the biggest question would be who would organize and administer such a scheme, and what financial benefits they would receive (and who would pay).
Any measures such as this can only be temporary because, as must be obvious, IPv6 is coming whether we like it or not. It's simply the most sensible and correct solution. If you haven't already, get in touch with your Internet Service Provider and ask when they're planning a switch to IPv6, and what the implications will be for you. Will you need a new service contract, for example? New hardware?
Additionally, it might be wise to start experimenting with IPv6 addressing within your organization; there are many books and guides out there explaining how, and it's surprisingly easy to do.
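One low-risk place to start is simply confirming that your own application code is address-family agnostic. The short Java sketch below is illustrative only (the host name is just a placeholder; substitute one from your own network): it prints every address a name resolves to and labels it IPv4 or IPv6, and on a properly configured dual-stack network you should see both families.

    import java.net.Inet6Address;
    import java.net.InetAddress;

    // Minimal check, not from the original article: list the IPv4 and IPv6
    // addresses that a host name resolves to.
    public class AddressCheck {
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "www.example.com"; // placeholder host
            for (InetAddress address : InetAddress.getAllByName(host)) {
                String family = (address instanceof Inet6Address) ? "IPv6" : "IPv4";
                System.out.println(family + "  " + address.getHostAddress());
            }
        }
    }

If only IPv4 addresses ever come back, that tells you something about your resolver and network long before any application-level work is needed.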
Keir Thomas has been writing about computing since the last century, and more recently has written several best-selling books. You can learn more about him at http://keirthomas.com and his Twitter feed is @keirthomas.
James Serra's Blog
James is currently a Senior Business Intelligence Architect/Developer and has over 20 years of IT experience. James started his career as a software developer, then became a DBA 12 years ago, and for the last five years he has been working extensively with Business Intelligence using the SQL Server BI stack (SSAS, SSRS, and SSIS). James has been at times a permanent employee, consultant, contractor, and owner of his own business. All these experiences along with continuous learning has helped James to develop many successful data warehouse and BI projects. James has earned the MCITP Business Developer 2008, MCITP Database Administrator 2008, and MCITP Database Developer 2008, and has a Bachelor of Science degree in Computer Engineering. His blog is at .
Master Data Management (MDM) Hub Architecture
Posted on 8 January 2013
The Master Data Management (MDM) hub is a database with the software to manage the master data that is stored in the database and keep it synchronized with the transactional systems that use the master data. There are three basic styles of architecture used for Master Data Management hubs: the registry, the repository, and the hybrid approach.
Repository (also called Enterprise or Centralized or Transactional) – The complete collection of master data for an enterprise is stored in a single database, including all the attributes required by all the applications that use the master data. The applications that consume, create, or maintain master data are all modified to use the master data in the hub, instead of the master data previously maintained in the application database, making the master data hub the system of entry (SOE) as well as the system of record (SOR). So if you have a CRM application, it would be modified to use the customer table in the master data hub instead of its own customer table (either by accessing the data directly in the hub or by the data in the hub being transferred to the source). Some of the benefits:
There are no issues with keeping multiple versions of the same customer record in multiple applications synchronized, because all the applications use the same record
There is less chance of duplicate records because there is only one set of data, so duplicates are relatively easy to detect
But there are some issues to consider:
It’s not always easy or even possible to change your existing applications to use the new master data (i.e. you are using an off-the-shelf product that does not have a feature to use or import data from another source)
Coming up with a data model that includes all the necessary data, without it being so large that it’s impossible to use (i.e. you have multiple applications that require different address formats)
What to do with data elements that are not used by all applications (i.e. a customer added by an order-entry application would likely have significantly fewer attributes than a customer added by the CRM application)
It can be extremely expensive and take a long time to implement, because it requires changes to the applications that maintain and consume the master data
You will need to transform and load all the current databases into the hub, removing duplicates in the process.
You will need to figure out how to handle history, since you are changing your databases to use a new key for all your master data, so you have to deal with many years of history that was created using different keys for the master data.
Registry (also called Federated) – The opposite of the repository approach, as each source system remains in control of its own data and remains the system of entry, so none of the master data records are stored in the MDM hub. All source system data records are mapped in the master data registry, making the master data registry the system of record (a virtual master data system). Data maps show the relationship between the keys of the different source systems (i.e. one row in a table for each master data entity and columns for the keys of the application systems). For example, if there are records for a particular customer in the CRM, Order Entry, and Customer Service databases, the MDM hub would contain a mapping of the keys for these three records to a common key. Benefit: Because each application maintains its own data, the changes to application code to implement this model are usually minimal, and current application users generally do not need to be aware of the MDM system. Downside: Every query against MDM data is a distributed query across all the entries for the desired data in all the application databases. Plus, adding an application to the MDM hub means adding columns to the key-matching table, which is not a big issue, but it may also mean changing queries to include the new source of information. Finally, while it helps you find duplicates, it does not help you in cleaning them up (i.e. if a person has many records with different phone numbers, there is not a way to determine the one to use)
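As a rough illustration of the registry's key-matching idea (system names and keys below are hypothetical, not from any particular product), the hub stores nothing but a mapping from a master identifier to each source system's native key, which is exactly why every read of the full record turns into a distributed query:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of a registry-style MDM key map.
    public class CustomerKeyRegistry {
        // masterId -> (sourceSystem -> sourceKey)
        private final Map<String, Map<String, String>> keyMap = new HashMap<>();

        public void register(String masterId, String sourceSystem, String sourceKey) {
            keyMap.computeIfAbsent(masterId, k -> new HashMap<>()).put(sourceSystem, sourceKey);
        }

        // The registry holds no customer attributes; callers use these keys to
        // query each application database directly for the actual records.
        public Map<String, String> lookup(String masterId) {
            return keyMap.getOrDefault(masterId, Map.of());
        }

        public static void main(String[] args) {
            CustomerKeyRegistry registry = new CustomerKeyRegistry();
            registry.register("CUST-0001", "CRM", "crm-8841");
            registry.register("CUST-0001", "OrderEntry", "OE-102-77");
            registry.register("CUST-0001", "CustomerService", "CS-55310");
            System.out.println(registry.lookup("CUST-0001"));
        }
    }

The sketch also makes the registry's limitation visible: it can tell you that three records describe the same customer, but it holds nothing that would let you decide which of their conflicting attribute values is correct.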
Hybrid - Includes features of both the repository and registry models: Leaves the master data records in the application databases and maintains keys in the MDM hub, as the registry model does. But it also replicates the most important attributes for each master entity in the MDM hub, so that a significant number of MDM queries can be satisfied directly from the hub database, and only queries that reference less-common attributes have to reference the application database. Issue: Must deal with synchronization, update conflicts and replication-latency issues.
There are other variations to the above three basic styles: A "Data aggregation implementation" that involves the creation of a new system that is neither the system of entry nor the system of record but a downstream system used to aggregate the attributes of multiple systems and pass them to lower level subscribing systems; a "System-of-record-only implementation" or "Hub based implementation" where the master data hub is the system of record, but the system of entry remains the source systems, and new records are transferred to the master data hub and any discrepancies in the data defer to the master data hub. Data flow is bidirectional as new records in the master data hub are pushed into the source systems. And optionally new records could be added into the master data hub, making it very similar to the Repository method (with the only difference being the master data hub is not the only system of entry).
One bus or two?
Making sense of the SOA story
Philip Howard
Comment This question came up during Progress's recent EMEA (Europe, Middle East, Africa) user conference: at one point, a vendor representative showed a slide showing Sonic as an ESB (enterprise service bus) and DataXtend as a comparable bus operating at the data level. From subsequent discussions it emerged that whether these should be regarded as one bus or two has been the subject of much internal debate.
This isn't the first time that this discussion has come up. Towards the end of last year I was commissioned by IBM to write a white paper on information as a service and in this paper I posited the use of a second bus (which I suggested should be called an Information Service Bus or ISB) for data level integration. IBM wasn't too sure about this and we eventually compromised by saying that the ISB is logically distinct from the ESB, but not physically distinct.
Progress has reached the same position. It is much easier to visually explain the concept of information services working alongside application (web) services to provide a complete SOA (service-oriented architecture) environment if you use two buses rather than one. At one level (and working downwards) you have web services, legacy applications and so forth connected through the ESB to more generalised web services while those web services connect via the ISB to data services, which themselves are used to extract information from relevant data sources (data or content).
Now, both buses use common communications channels, which is the argument in favour of having a single bus. However, the sort of adapters you use to connect to data sources are very different from those used in a typical ESB environment. Further, some implementations may be data-centric while others will be application-centric and, moreover, you can implement one without the other. In particular, an ISB effectively subsumes the role of data integration and, potentially, master data management, which you might easily want to implement separately from either SOA or an ESB.
So, I firmly believe that, at least from a logical perspective, it makes more sense to think of a complete SOA environment as consisting of twin buses as opposed to one. However, that's not quite the end of the story.
If you think about it, you have services to services integration, data to data integration and services to data integration (or vice versa) and each of these has its own characteristics, so you might actually think of three buses. However, three buses in a single diagram might be considered overkill though you could depict an ‘S’ shaped bus if you wanted to, implying three uses of a single bus. Of course, you could use a sideways ‘U’ for the same purpose with a dual bus structure but again I think these approaches are overly complex—the whole point about SOA is simplification—if you can't depict it in a simple fashion you are defeating the object of the exercise.
Of course, we all know that the problem with buses is that you wait for ages for one and then several come along at once. In this particular case I think that two buses is just right: one is too few and three is too many.
Copyright © 2006, IT-Analysis.com
Diary of A Singaporean Mind
AIM-PAP Saga : The Explanation that explains nothing!
The main issue with the sale of town council management software to a PAP owned company with a clause that gives the PAP company the right to terminate use of the software if an opposition takes over the town council is that of conflict of interest. It is equivalent to the Democratic Party selling the US Social Security computer software to a company it owns then giving the company the right to stop the Republicans from using it when it recaptures the presidency in elections. The AIM deal gives enormous power to the PAP to stop the opposition from running the town councils effectively when they are chosen by the constituents to be representatives in parliament. Therein lies the conflict of interest. The rights of the people are not protected and they are put at risk.
"Dr Teo has now confirmed that this third party, AIM, is “fully-owned” by the PAP. In other words, the PAP-managed Town Councils saw it fit to sell away their ownership of the systems, developed with public funds, to a political party, which presumably could act in its own interests when exercising its rights to terminate the contracts". - Sylvia Lim, WP Statement
Professor Teo Ho Pin's 26-paragraph explanation does not address this conflict of interest issue, which was the main point brought up by Sylvia Lim. I really suggest you read his statement in full to understand what he is saying [Why the PAP town council entered into transaction with AIM]. I challenge you to find the logic in his statements that can explain that no conflict of interest has occurred.
In skirting the central issue, Teo Ho Pin's 26 paragraphs detailing PAP's Town Councils rationale and timeline for the sale really gives us an insight on how a PAP elite thinks and make decisions. I would like to share my thoughts as I run through his 26 paragraphs. As usual, here is a summary:
In 2003, PAP town councils wanted to "harmonise" their computer systems and jointly called an open tender for a computer system based on a common platform. It is strange that these systems were stove-piped and harmonisation was done only in hindsight. The head of PAP town councils didn't have the foresight to achieve cost savings and cost sharing by getting the town councils to work together? NCS was chosen to provide this system from Aug 1, 2003 to Oct 31, 2010, with an option to extend the contract for one year.
Teo does not tell us how much NCS is paid to develop the system. This piece of information is of public interest. We would like to know how much money is spent on such a system when it is sold for $140K
In 2010, PAP town councils jointly appointed Deloitte and Touche Enterprise Risk Services (D&T) to advise on the review of the computer system.After a comprehensive review, D&T identified various deficiencies and gaps in the system which was becoming "obsolete and unmaintainable". After just 7 years of use, the system is "obsolete and unmaintainable". Let me ask you a question : How many new services did you receive from your town council in those 7 years? Town councils do the same thing year in-year out like landscaping, clearing rubbish, collecting conservancy etc but the NCS software becomes "unmaintainable". After a comprehensive review, D&T identified various deficiencies and gaps in the system. The main issue, however, was that the system was becoming obsolete and unmaintainable. It had been built in 2003, on Microsoft Windows XP and Oracle Financial 11 platforms. By 2010, Windows XP had been superseded by Windows Vista as well as Windows 7, and Oracle would soon phase out and discontinue support to its Financial 11 platform. - Teo H P.
Teo Ho Pin further elaborates on this, saying D&T's main finding is that the Windows XP and Oracle Financial 11 platforms would be superseded. I'm quite surprised that they had to pay expensive consultants to find this out - something you can learn by spending 10 minutes reading Microsoft's and Oracle's roadmaps online. The important thing to note is that when these products are superseded by new ones, e.g. Windows 7, these vendors will keep their products backward compatible so that custom software written for Windows XP and older versions of Oracle will still work on the new versions. Companies that do not want to waste money simply stick to Windows XP (I'm sure many of you work in companies that still use Windows XP) and older versions of Oracle so long as they get the job done and have no major issues. In moving custom software to a new version of the operating system and Oracle database, all that is needed for total assurance is some software porting and, if necessary, regression testing, which can be done by NCS.
After serious consideration, the PAP town councils decided to call for a tender under which only the intellectual property in the old software would be sold. The ownership of the physical computer systems remained with the PAP town councils. Teo H P says that shared ownership of IP (Intellectual Property) is "cumbersome". Why? The PAP Town Council has been sharing the IP for 7 years before they tried to sell it off. This is a strange recommendation by D&T. Post 1990s software is written for re-use. Rarely in the software industry do we see new versions of software written from scratch - it is a massive waste of money. For Town Council management software you can imagine that 80-90% of the original written code can be reused given they do the same thing year after year -day in day out . Selling the IP to a 3rd party to far below cost of development does not make sense because you sign away your right to use the old source codes. Now you pay the price of full development of new software.
According to Teo HP, D&T recommended that the "unmaintainable" software ownership be moved to a third party to own and "provide regular maintenance". Somehow AIM, a $2 company without a permanent office is able to maintain an "unmaintainable" software. It turned out AIM did this by signing up NCS to do the maintenance. The town councils engage NCS through AIM rather than directly and that is more efficient?
On June 30, 2010, PAP town councils advertised the tender in the Straits Times. Five companies collected the tender documents: CSC Technologies Services, Hutcabb Consulting, NCS, NEC Asia and Action Information Management (AIM). On July 20, 2010, AIM submitted a bid which was the only one received by the town councils. After assessing that AIM's proposal was in the PAP town councils' best interests, the tender was awarded to AIM. Teo H P explained that the Town Council saved $8000 in the leaseback without the considering the loss of intellectual property with the potential of reuse in a new development. Teo does not tell us the cost of developing the original software. He does not tell us how much it pays consultants for what looks like trivial findings and simple decisions. ....but I'm still glad he took the trouble to write his 26 paragraphs. It tells us the PAP Town Councils are not very good at saving costs for residents and residents have to pay more and more each year for the same services. Reading the 26 paragraphs tells us cost cutting and keeping expenses low for residents does not seem feature in their thought processes.
Posting Time
Lucky Tan
Too many holes in the statement. many questions not answered and many documents need to be made public. Hence a commission of Inquiry should be called.Another good rebuttal is here.http://www.scribd.com/doc/118691311/Town-Councils-AIM
theonion
LuckyIf you would refer to the dates stated in the tender 2010 and the study 2010, there is no conflict of interest as no opposition town councils were involved in the setup of the TC systems in the first place.Further, if any opposition were to take over, they will have to consider carefully, so to me the clauses serve the interest of any takeover on both sides. If the system were jointly developed with all town councils inclusive of Hougang there would be conflict of interest since the minority interest is not taken care of.To me the obscufation of the dates does not do justice to your blog entries but which basically panders to the us vs them and to me reminiscent of the Democrats vs Republicans or etc mentality.
convexset
In contrast to what your title suggests, you have successfully argued that the explanation explains a lot. Namely that the PAP are not very capable at supervising relatively simple (though possibly large) projects.This leads to the question: If you can't handle a simple project, how can you manage a complex economy? (And look at all the failures in long term planning and uncoordinated action.)
It's disgusting to see PAP playing politics at the expense of the people, it's even more disgusting to see main stream media reporting only the PAP's "explanations" without mentioning all the issues highlighted in online media!But, the most frustrating thing is we've morons lik | 计算机 |
It's Greek to Me: Behavioral Targeting the Ancient Way
Anna Papadopoulos | September 12, 2007
Marketers should avoid sending cryptic or overly specific messages to customers.
If Croesus goes to war, he will destroy a great empire. --The Oracle of Delphi

In the eighth century B.C.E., Delphi was the most important shrine in Greece. The ancients believed the location was at the world's center. People came from everywhere to have their questions about the future answered by the Pythia, the priestess who presided over the Oracle of Apollo. Her answers, usually inscrutable, could determine the course of everything, from harvest to war.
I had the pleasure of visiting Delphi this summer, and it got me thinking about advertising. With both advertising and the oracle, success lies in the message. The Pythia would provide cryptic messages from which seekers draw their own conclusions. If you asked, "Which path should I take?" the reply would be something like, "You will take the path that is open." In this manner, the Pythia was never wrong and seekers would feel the message was tailored specifically for them.A lot of advertising, particularly teaser advertising, comprises cryptic or ambiguous messaging. Even keystone campaigns like Nike's "Just Do It" and Apple's "Think Different" are open to interpretation. With the Web's advent, unbranded campaigns that drive people to a URL is common practice, relying on consumer curiosity and perhaps a clever tagline.What about behavioral targeting? Are we communicating with our customers like oracles? Does vague messaging play a role in this?First, no messaging can resonate with consumers if it isn't relevant. My husband, who claims he never clicks on a banner (yes, he's one of them), not only clicked on a banner recently but configured a car and set up a test drive right from an ad. The difference between that last time and the hundreds of times before? He was in-market for a car, and the message was relevant.Let's assume the message is relevant and the media placement appropriate. Now what?Mistakes with behavioral messaging typically lie in two extremes: the message is either too direct or too general. One that's too direct uses the consumer's name in the ad. That's like meeting a stranger who knows your name. Yes, you'll grab people's attention, but it may be unwanted attention. If people haven't directly introduced themselves (online, that means opted in to receive a message directly from the sender), then don't use their name or other personal information.In the same respect, creating copy that takes into account historical behavior must also be controlled. Consumers don't want to feel as if they're being monitored, although they want relevant messages and offers.What's a marketer to do? The situation is akin to addressing a sensitive subject with someone who hasn't explicitly talked to you about it, but who may benefit from your help nevertheless.In real-world interpersonal relationships, we provide opportunities for friends or acquaintances to engage us in a subject. In advertising, we should also create opportunities for target audiences to communicate with us, then provide relevant offers and messages based on the information they offer. If we ignore consumer feedback, which in the behavioral targeting world translates into actions consumers take or don't take, we should avoid entering the conversation.On the flip side, many behavioral ads are generic and repurposed from other placements. The ads usually run at a very high frequency and say the same thing repeatedly to the same person in various environments.It's like the spouse (not mine, of course) who won't drop a topic and persists in mentioning it regardless of where you are or how much time you have. As Albert Einstein said, "Insanity is doing the same thing over and over again and expecting different results."The oracle may have worked wonders for seekers in the ancient world, but information-age customers want dialogue. One-way communication makes people suspicious about the fine print. They want communications on their own terms, and they want to remain in control. Get too close too quickly, and you'll turn them off. 
Remain too distant and you risk alienation.Listen carefully. Consumers will provide you with clues to follow for success.
Based in New York, Anna Papadopoulos has held several digital media positions and has worked across many sectors including automotive, financial, pharmaceutical, and CPG.
An advocate for creative media thinking and an early digital pioneer, Anna has been a part of several industry firsts, including the first fully integrated campaign and podcast for Volvo and has been a ClickZ contributor since 2005. She began her career as a media negotiator for TBS Media Management, where she bought for media clients such as CVS and RadioShack. Anna earned her bachelor's degree in journalism from St. John's University in New York.
Follow her on Twitter @annapapadopoulo and on LinkedIn.
Anna's ideas and columns represent only her own opinion and not her company's.
First look: Ubuntu 9.04 stays the course
The new edition of the friendly Linux desktop OS is more maintenance release than upgrade
Neil McAllister (InfoWorld) on 29 April, 2009 07:59
Having rocketed to prominence as one of the most popular desktop Linux distributions in just a few years, Ubuntu has earned a reputation for stability and ease-of-use. The latest edition -- version 9.04, code-named "Jaunty Jackalope" -- continues that tradition and is mostly a maintenance release, but it brings a number of updates that should enhance its appeal.
The list of bundled applications is largely unchanged, but they're all new versions. Chief among these is the inclusion of OpenOffice.org 3.0, which should appease those who were disappointed that it didn't make the cut for the previous release. The new version of the free office suite maintains the same look and feel, and it still launches slowly, but it brings some new features, including improved compatibility with Microsoft Office 2007.
Founder Mark Shuttleworth has hinted that big changes to Ubuntu's look and feel are coming with the next release in October -- changes that might even include abandoning its traditional, but controversial, brown color scheme -- but the cosmetic updates in version 9.04 are minor. There are new boot and log-in screens, new desktop background images, and a few UI improvements that came free with the upgrade to Gnome 2.26, but nothing that should surprise anyone who has used an earlier version of Ubuntu.
Be notified
Perhaps the most significant UI addition, one unique to Ubuntu, is the new desktop notification mechanism. Application messages -- anything from audio volume changes to alerts from your IM client -- now appear in black pop-up boxes in the upper-right corner of the screen. The idea is to make these messages as unobtrusive as possible by avoiding distractions such as modal dialog boxes. Whether it succeeds will probably depend on the user. This system is new to Linux, but it resembles features available on Windows and Mac OS X. What might annoy some Linux users, however, is the fact that it's not configurable. There is no preference panel to change its behavior and no way to switch back to the old notification system. Even if you hate it, you're stuck with it.
This is typical of Ubuntu, which often sacrifices some configurability for the sake of ease-of-use. For example, while Ubuntu includes support for GUI bling by way of Compiz Fusion, some of the more talked-about effects -- including the famed "desktop cube" -- are disabled by default. To enable them, users have to install an unsupported software package that provides a new control panel.
Ubuntu 9.04 is guilty of worse sins, however. When I booted the installation CD, it cheerfully informed me that my computer had no operating systems installed on it and offered to partition the entire drive. In reality, the PC contained not just a previous version of Ubuntu, but Windows Vista and an abortive installation of Mac OS X as well. Lucky for me I know how to manage partitions by hand.
Slips and hitches
During installation, the system offered to migrate user information from the Windows drive that it failed to detect earlier, but upon logging in, no data seemed to have been transferred. Firefox showed only the default bookmark entries and nothing from either Internet Explorer or the Windows installation of Firefox. On the positive side, Ubuntu recognized my NTFS partitions after boot and made them available for mounting without a hitch.
Typical of Linux, hardware support remains a mixed bag, and the Ubuntu team can't take all of the blame. Ubuntu's default open source video driver wouldn't recognize a TV as a second monitor out of the box, but installing Nvidia's own, proprietary driver was trivial. I was less successful with a networked printer, however. The Add Printer wizard spotted it right away but couldn't find an appropriate driver, and while the manufacturer does offer drivers for Linux, the installation packages were not compatible with the 64-bit version of Ubuntu. These kinds of hardware issues remain among the thorniest problems desktop Linux users face.
These gripes aside, the latest version of Ubuntu maintains its reputation for quality while offering incremental updates to a variety of software packages. Ubuntu 9.04 is not an LTS (long-term support) release, so customers who need an OS that will be maintained through 2011 should stick with last year's 8.04 ("Hardy Heron") edition. For those who just want a stable, polished desktop OS that's packed with the latest open source software, however, Ubuntu 9.04 is a worthwhile download.
Bottom line: Ubuntu 9.04 Desktop Edition brings minor cosmetic and UI enhancements to the easy-to-use desktop distribution. Highlights include new versions of OpenOffice.org and Gnome, as well as a new desktop notification feature. On the downside, installation was marred by missteps, and hardware support remains mixed.
2015-40/2212/en_head.json.gz/5091 | Home W3c HTML5 Finally Released as W3C Recommendation
Subject: General Tech | November 1, 2014 - 03:56 AM | Scott Michaud
Tagged: w3c, javascript, html5, html, ecma, css
Recently, the W3C has officially recommended the whole HTML5 standard as a specification for browser vendors and other interested parties. It is final. It is complete. Future work will now be rolled into HTML 5.1, which is currently on "Last Call" and set for W3C Recommendation in 2016. HTML 5.2 will follow that standard with a first specification working draft in 2015.
Image Credit: Wikipedia
For a website to work, there are several specifications at play from many different sources. HTML basically defines most of the fundamental building blocks that get assembled into a website, as well as its structure. It is maintained by the W3C, which is an industry body with hundreds of members. CSS, a format to describe how elements (building blocks) are physically laid out on the page, is also maintained by the W3C. On the other hand, JavaScript controls the logic and programmability, and it is (mostly) standardized by Ecma International. Also, Khronos has been trying to get a few specifications into the Web ecosystem with WebGL and WebCL. This announcement, however, only defines HTML5.
Another body that you may hear about is the "WHATWG". WHAT, you say? Yes, the Web Hypertext Application Technology Working Group (WHATWG). This group was founded by people from within Apple, Mozilla, and Opera to propose their own standard, while the W3C was concerned with XHTML. Eventually, the W3C adopted much of the WHATWG's work. They are an open group without membership fees or meetings, and they still actively concern themselves with advancing the platform.
And there is still more to do. While the most visible change involves conforming to the standards and increasing the performance of each implementation as much as possible, the standard will continue evolving. This news sets a concrete baseline, allowing the implementations to experiment within its bounds -- and they now know exactly where they are.
Source: W3C
Sauce Labs: Integration into modern.IE
Subject: General Tech, Mobile | April 13, 2013 - 03:16 AM | Scott Michaud
Tagged: w3c, Sauce Labs, modern.IE, IE
The main benefit of open Web Standards is that it allows for a stable and secure platform for any developer to target just about any platform. Still, due to the laws of No Pain: No Gain, those developers need to consider how their application responds on just about every platform. Internet Explorer was once the outlier, and now they are one of the most prominent evangelists. It has been barely two months since we reported on the launch of modern.IE for Microsoft to integrate existing solutions into their product.
Enter Sauce Labs. The San Francisco-based company made a name for themselves by providing testing environments for developers on a spread of browsers across Android, iOS, Linux, MacOSX, Windows 7, Windows 8, and Windows XP. The company, along with competitor BrowserStack, got recent recognition from Adobe when the software company shut down their own also-competing product.
When we first covered modern.IE back in February (again, here), the initiative from Microsoft was created to help test web apps across multiple versions of Internet Explorer and check for typical incompatibilities. With the addition of Sauce Labs, Microsoft hopes to provide better testing infrastructure as well as automatic recommendations for common issues encountered when trying to develop for both "modern" and legacy versions of their web browser.
In my position, this perfectly highlights the problems with believing you are better than open architectures. At some point, your platform will no longer be able to compete on inertia. Society really does not want to rely on a single entity for anything. It is almost a guarantee that a standard, agreed-upon by several industry members, will end up succeeding in the end. Had Microsoft initially supported the W3C, they would not have experienced even a fraction of the troubles they currently face. They struggle in their attempts to comply with standards and, more importantly, push developers to optimize for their implementation.
There are very good reasons to explain why we do not use AOL keywords anymore. Hopefully the collective Microsoft keeps this grief in mind, particularly the Xbox and Windows RT teams and their divisions.
After the break: the press release.
Source: Sauce Labs
Microsoft Likes That Modern Will Not Get Them Sued: Compatibility Website "modern.IE" Launches
Subject: Editorial, General Tech | February 2, 2013 - 06:23 PM | Scott Michaud
Tagged: webkit, w3c, microsoft, internet explorer, html5
Microsoft has been doing their penance for the sins against web developers of the two decades past. The company does not want developers to target specific browsers and opt to include W3C implementations of features if they are available.
What an ironic turn of events.
Microsoft traditionally fought web standards, forcing developers to implement ActiveX and filters to access advanced features such as opacity. Web developers would program their websites multiple times to account for the... intricacies... of Internet Explorer when compared to virtually every other browser.
Now Google and Apple, rightfully or otherwise (respectively, trollolol), are heavily gaining in popularity. This increase in popularity leads to websites implementing features exclusively for WebKit-based browsers. Internet Explorer is no longer the browser that gets targeted for advanced effects. If there is Internet Explorer-specific code on a site, it is usually a workaround for earlier versions of the browser, and it only mucks up Microsoft's recent standards compliance by feeding the browser non-standard junk.
It has been an uphill battle for Microsoft to push users to upgrade their browsers and web developers to upgrade their sites. “modern.IE” is a service which checks for typical incompatibilities and allows for developers to test their site across multiple versions of IE.
Even still, several web technologies are absent from Internet Explorer because they have not been adopted by the W3C. WebGL and WebCL seek to make the web browser into a high-performance platform for applications. Microsoft has been vocal about not supporting these Khronos-backed technologies on the grounds of security. Instead of building out the web browser as a cross-platform application platform, Microsoft is pushing hard to keep its app marketplace from being ignored.
I am not sure what Microsoft should fear more: that its app marketplace will be smothered by its competitors, or that it will only manage to win the battle after the war has changed theaters. You know what they say: history repeats itself.
Source: Ars Technica
HTML5 Games: The Legacy of PC Gaming?
Subject: Editorial, General Tech, Mobile | December 30, 2012 - 04:48 PM | Scott Michaud
Tagged: webgl, w3c, html5
I use that title in quite a broad sense.
I ran across an article on The Verge which highlighted the work of a couple of programmers to port classic real-time strategy games to the web browser. Command & Conquer and Dune II, two classics of PC gaming, are now available online for anyone with a properly standards-compliant browser.
These games, along with the Sierra classics I wrote about last February, are not just a renaissance of classic PC games: they preserve them. It is up to the implementer to follow the standard, not the standards body to approve implementations. So long as someone still makes a browser which can access a standards-based game, the game can continue to be supported.
A sharp turn from what we are used to with console platforms, right?
I have been saying this for quite some time now: Blizzard and Valve tend to support their games much longer than console manufacturers support their whole platforms. You can still purchase the original StarCraft at retail, and they still manufacture it. The big fear over “modern Windows” is that backwards compatibility will end and all applications will need to be certified by the Windows Store.
When a game is programmed for the browser -- yes, even hosted offline on local storage -- those worries disappear. The exceptions are iOS and Windows RT, which only allow you to use Safari or Trident (IE10+) respectively, and that still leaves you solely at the platform owner's mercy to follow standards.
Still, as standards get closer to native applications in features and performance, we will have a venue for artists to create and preserve their work for later generations to experience. The current examples might be 2D and of the pre-Pentium era, but even now there are 3D shooters delivered as websites. There is even a ray-tracing application built on WebGL (although that technically relies on both the W3C and Khronos standards bodies) that runs on a decent computer with plain old Firefox or Google Chrome.
HTML5 Defined!
Subject: General Tech, Mobile | December 19, 2012 - 02:56 AM | Scott Michaud
Tagged: w3c, html5
Open Web Standards reached a new milestone on Monday when the W3C published its completed definitions for HTML5 and Canvas 2D. There is still a long and hard road until the specification becomes an official standard, although the organization is finally comfortable classifying the description as feature complete.
The “Web Platform” is a collection of standards which form an environment for applications to target the web browser. HTML basically forms the structure for content and provides guidelines for what that structure physically means. CSS, Javascript, Canvas 2D, WebGL, WebCL, and other standards then contribute to the form and function of the content.
HTML5 allows for much more media, interactivity, and device optimization than its 1999 predecessor. This standard, particularly once finalized and recommended by the W3C, can be part of the basis for fully featured programs which function as expected wherever the standard is implemented.
This is an important milestone, but by no means the final destination for the standard.
The biggest sticking point in the HTML5 specification is still over video tag behavior. The W3C pushes for standards it recommends to comply with its royalty-free patent policy. Implementation of video has been pretty heavily locked down by various industry bodies, most noticeably MPEG-LA, which is most concerning for open source implementations which might not be able to include H.264. There still does not appear to be a firm resolution with this recent draft.
Still, once the patent issues have been settled, video will not just be accessible in static ways. Tutorials exist to show you how to manipulate the direct image data resulting from the video to do post-processing effects and other calculations. It should be an interesting abstraction for those who wish to implement video assets in applications such as for a texture in a game.
HTML5 is expected to be fully baked sometime in mid-2014. It would be around that time that HTML5.1 would mature to the state HTML5 celebrates today.
By J. Mark Lytle
Linux users gain powerful local computer search tool
Google's popular search application that indexes data on a computer, rather than online, is now available for Linux machines after the company's latest beta release. The Linux version of Google Desktop joins a fully complete Windows program and a Mac version that is currently also in beta. It features all the indexing and searching features seen on other platforms but lacks some of the frills of the Windows application.
Local searches
The Windows Gadget and Sidebar features are missing from the new version for the open-source operating system. But users can still use a single hotkey to bring up a search box that allows them to search their computers for particular files, emails or programs. Google Desktop for Linux is available in several European languages as well as Japanese, Chinese and Korean. It runs on the most popular versions of Linux, including Ubuntu, Fedora and Debian, using either the GNOME or KDE interfaces. Mac users are already familiar with a similar technology, known as Spotlight, which is a native part of OS X.
Google released plug-ins for Internet Explorer 7+, Firefox 3.5+ and Chrome 4+ that disable Google Analytics tracking. Google Analytics is by far the most popular free service for getting statistics about the visitors of a site and it's used by a lot of sites, including this blog. Even if the service doesn't show personal information about the visitors and it only provides aggregated data, some people are concerned that Google can track the sites they visit using a seemingly innocuous Google Analytics script.

Google explains that Google Analytics uses first-party cookies to track visitor interactions, so the data can't be aggregated for all the domains. "The Google Analytics Terms of Service, which all analytics customers must adhere to, prohibits the tracking or collection of [personal] information using Google Analytics or associating personal information with web analytics information."

Those that are concerned about their privacy can install an add-on and permanently disable the script. After installing the add-on, you'll notice that the browser still sends a request for this file: http://www.google-analytics.com/ga.js when visiting a page that uses Google Analytics, but it no longer sends information to Google Analytics.

If a lot of users install the add-on, website owners will no longer have accurate stats, they'll no longer be able to find if their content is popular and what sections of their site still need some work. Even if Google didn't release opt-out add-ons, users could still block Google Analytics by adding an entry to the HOSTS file, but the add-ons make it easier to opt-out.

Google also added a feature for website owners: Google Analytics can now hide the last octet of the IP address before storing it. "Google Analytics uses the IP address of website visitors to provide general geographic reporting. Website owners can now choose to have Google Analytics store and use only a portion of this IP address for geographic reports. Keep in mind, that using this functionality will somewhat reduce the accuracy of geographic data in your Analytics reports."
JP Hong, [email protected] Abstract:
With network security measures becoming more and more complicated, it became essential to provide programmers with the capability to write applications that are generic with respect to security. The Common Authentication Technology (CAT) Working Group of Internet Engineering Task Force (IETF) acknowledged this need and developed the Generic Security Service Application Programming Interface (GSS-API). After ten years of experience with GSS-API, the Kitten (GSS-API Next Generation) Working Group was formed to continue this development. This paper looks at the current issues that Kitten faces in improving the GSS-API.
Keywords: GSS-API, Kitten Working Group, PRF, SPNEGO
Table of Contents
1. Introduction
2. Overview of GSS-API
3. Recent GSS-API Issues
3.1 Pseudo-Random Function API Extension
3.2 The Simple and Protected GSS-API Negotiation Mechanism
3.3 Desired Enhancements to GSS-API Version 3 Naming
3.3.1 Names in GSS-API
3.3.2 Limitations of the current Names
3.3.2.1 Kerberos Naming
3.3.2.2 X.509 Names
3.3.3 Possible Solutions
3.3.3.1 Composite Names
3.3.3.2 Mechanisms for Export Name
4. Summary
1. Introduction
The development of the Generic Security Service Application Programming Interface (GSS-API) was first launched in July 1991 by the Common Authentication Technology (CAT) Working Group. The first version was released in May 1993, and the current version of GSS-API is version 2, which was released in 1997. Within the scope of this paper, GSS-API will refer to version 2. Ever since, GSS-API has provided programmers with a single framework in which applications using various security technologies can be implemented. However, providing such a simple answer is never easy, and therefore GSS-API needs more work to support the most recent technologies. Currently, the dominant security mechanism used with GSS-API is Kerberos. Now, the Kitten Working Group is working to improve the current version of GSS-API, and also to produce a specification for the next generation of GSS-API so that it can support more mechanisms seamlessly.
2. Overview of GSS-API
GSS-API, on its own, does not provide security. It provides generic guidelines so that application developers do not need to tailor their security implementations to a particular platform, security mechanism, type of protection, or transport protocol. Security service vendors then provide implementations of their security in the form of libraries, so that even if the security implementations need to be changed, the higher-level applications need not be rewritten. This allows a program that takes advantage of GSS-API to be more portable as regards network security, and this portability is the most essential aspect of GSS-API. Figure 1 shows where the GSS-API layer lies.
Figure 1 – The GSS-API Layer
Two major services that the GSS-API provides are the following [Sun02]:
a. Creation of security contexts that can be passed between applications. A security context is the GSS-API term for a "state of trust" between applications. When two applications share a context, they have acquired information about each other and can then permit data transfer between them while the context is valid. This is shown in stage one of Figure 2.
b. Application of various types of protection, also known as security services, to the data being transferred. This is shown in stage two of Figure 2.
Figure 2 – GSS-API: An Overview
Stage one of Figure 2 depicts the initial context-establishment phase of GSS-API. The client sends the host tokens containing the client's credentials and the types of security mechanisms it supports. The server accepts the context if the information in the tokens matches the services it can provide. The functions gss_init_sec_context() and gss_accept_sec_context() are used in this process. Stage two of Figure 2 shows the actual data being transferred. The appropriate security mechanisms are applied to the data by calling the gss_wrap() function. Likewise, the wrapped data is changed back by the gss_unwrap() call, and a Message Integrity Code (MIC) is sent back for an integrity check. (A minimal call sketch of both stages appears at the end of this section.) GSS-API also provides services such as data conversion, error checking, delegation of user privileges, and identity comparison. On the other hand, in order to maximize its generic nature, GSS-API does not provide the following [Sun02]:
a. Security credentials for principals. This must be dealt with by the underlying security mechanisms.
b. Data transfer between applications. This is up to the applications.
c. Ability to distinguish between different types of transmitted data. GSS-API does not determine if a data packet is GSS-API related or not.
d. Status indications not related to GSS-API, such as remote errors.
e. Automatic protection of information sent between processes of a multi-process program.
f. Allocation of string buffers to be passed to GSS-API functions.
g. Deallocation of GSS-API data spaces. This must be done explicitly using functions such as gss_release_buffer() and gss_delete_name().
Currently, the GSS-API language bindings are available in C and Java [GSS1]. The Kitten Working Group is working to provide a C# API, but this effort has been postponed due to lack of participation.
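To make the two stages above concrete, here is a minimal sketch of the initiator side in C, assuming the standard C bindings of RFC 2744 as shipped by implementations such as MIT Kerberos. The helpers send_token_to_peer() and receive_token_from_peer() are hypothetical application transport code rather than GSS-API calls, and most error handling and buffer cleanup is omitted.

#include <gssapi/gssapi.h>
#include <string.h>

/* Hypothetical application transport helpers (not part of GSS-API). */
extern void send_token_to_peer(gss_buffer_t tok);
extern void receive_token_from_peer(gss_buffer_t tok);

int initiate_and_send(const char *service, const char *msg)
{
    OM_uint32 maj, min;
    gss_name_t target = GSS_C_NO_NAME;
    gss_ctx_id_t ctx = GSS_C_NO_CONTEXT;
    gss_buffer_desc name_buf, in_tok, out_tok, plain, wrapped;

    /* Stage one: import the target name and loop until the context is established. */
    name_buf.value = (void *)service;              /* e.g., "host@server.example.com" */
    name_buf.length = strlen(service);
    maj = gss_import_name(&min, &name_buf, GSS_C_NT_HOSTBASED_SERVICE, &target);
    if (GSS_ERROR(maj))
        return -1;

    in_tok.value = NULL;
    in_tok.length = 0;
    do {
        maj = gss_init_sec_context(&min, GSS_C_NO_CREDENTIAL, &ctx, target,
                                   GSS_C_NO_OID,   /* let the implementation pick the mechanism */
                                   GSS_C_MUTUAL_FLAG | GSS_C_INTEG_FLAG | GSS_C_CONF_FLAG,
                                   0, GSS_C_NO_CHANNEL_BINDINGS, &in_tok,
                                   NULL, &out_tok, NULL, NULL);
        if (out_tok.length > 0) {                  /* token consumed by the peer's gss_accept_sec_context() */
            send_token_to_peer(&out_tok);
            gss_release_buffer(&min, &out_tok);
        }
        if (GSS_ERROR(maj))
            return -1;
        if (maj == GSS_S_CONTINUE_NEEDED)
            receive_token_from_peer(&in_tok);      /* reply token produced by the acceptor */
    } while (maj == GSS_S_CONTINUE_NEEDED);

    /* Stage two: protect application data with the established context. */
    plain.value = (void *)msg;
    plain.length = strlen(msg);
    maj = gss_wrap(&min, ctx, 1 /* request confidentiality */, GSS_C_QOP_DEFAULT,
                   &plain, NULL, &wrapped);
    if (GSS_ERROR(maj))
        return -1;
    send_token_to_peer(&wrapped);                  /* the peer recovers the data with gss_unwrap() */
    gss_release_buffer(&min, &wrapped);
    gss_release_name(&min, &target);
    return 0;
}

The acceptor side mirrors the same loop with gss_accept_sec_context() and recovers the data with gss_unwrap(), optionally returning a MIC produced by gss_get_mic().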
3. Recent GSS-API Issues
Recent issues regarding GSS-API focus on keeping up with the latest technologies and addressing how improvements can be made to support more security mechanisms. This section looks into some of these issues.
3.1 Pseudo-Random Function API Extension
Some applications, due to their own reasons, are not able to take advantage of the per-message integrity check (MIC) and token-wrapping protection provided by GSS-API. They depend on pseudo-random functions (PRF) to produce the keys used by their cryptographic protocols. This brings up the need for GSS-API to be able to key these types of applications. However, the specification of GSS-API does not provide such a function, restricting such applications from making use of GSS-API. This need was acknowledged by the Kitten Working Group, and the PRF API Extension for GSS-API was defined [RFC4401]. The PRF API Extension defines a gss_pseudo_random() function that takes as input a context handle, a PRF key, a PRF input string, and the desired output length, and outputs the status and the PRF output string. The gss_pseudo_random() function must meet the following properties [RFC4401] (a call sketch follows the property list):
a. The output string must be a pseudo-random function of the input, keyed with the key material derived from the context. The chances of getting the same output given different input parameters should be exponentially small.
b. When applied to the same inputs by two parties using the same security context, both should obtain the same result, even when called multiple times.
c. Authentication must be established prior to PRF computation.
d. It must not be possible to access any raw keys of a security context through gss_pseudo_random().
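As an illustration, the following is a minimal sketch of deriving 32 bytes of keying material from an established context, assuming the C binding outlined in RFC 4401; the gss_pseudo_random() prototype and the GSS_C_PRF_KEY_FULL constant come from that extension, so their availability depends on the GSS-API implementation in use.

#include <gssapi/gssapi.h>
#include <string.h>

/* Derive key material from an already-established security context.
 * Returns 0 on success; the caller must eventually release prf_out. */
int derive_key_material(gss_ctx_id_t ctx, gss_buffer_t prf_out)
{
    OM_uint32 maj, min;
    gss_buffer_desc prf_in;

    /* Both peers must agree on the input string, e.g., a protocol label. */
    prf_in.value = (void *)"EXAMPLE-protocol key expansion";   /* illustrative label */
    prf_in.length = strlen((const char *)prf_in.value);

    maj = gss_pseudo_random(&min, ctx, GSS_C_PRF_KEY_FULL,
                            &prf_in, 32 /* desired output length in bytes */, prf_out);
    return GSS_ERROR(maj) ? -1 : 0;
}

Two peers calling this over the same context with the same inputs obtain identical output, which is exactly the property the extension guarantees.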
Pseudo-random functions are said to be capable of producing only limited amounts of cryptographically secure output; therefore, programmers must limit the use of PRF operations with the same inputs to the minimum possible [PRF1]. Also, there is a threat of a denial-of-service attack by tricking applications into accepting very long input strings and requesting very long output strings. Application developers should therefore place appropriate limits on the size of any input strings received from their peers without integrity protection [RFC4402].
As an extension, this provides an abstract API, and does not address the portability of applications using this extension. In other words, the biggest strength of GSS-API can be sacrificed. The Kitten Working Group is planning to look into this issue in the future development of GSS-API Version 3.
3.2 The Simple and Protected GSS-API Negotiation Mechanism
GSS-API provides a generic interface that can be layered on various security mechanisms. If two peers acquire GSS-API credentials for the same security mechanism, that security mechanism will be established between them. GSS-API does not, however, specify how the two peers can agree on a particular mechanism. The Simple and Protected GSS-API Negotiation (SPNEGO) mechanism specifies how this can be done. The steps of the negotiation are as follows [RFC4178]:
a. The context initiator proposes a list of security mechanisms, with the most preferred mechanism coming first.
b. The acceptor either chooses the initiator's preferred mechanism or chooses one from the list that it prefers. If an agreement cannot be made between the two parties, it rejects the list and informs the initiator of this.
The original version of the SPNEGO mechanism was established in 1998. The Kitten Working Group later revised the mechanism and published a new standard, which obsoletes the previous specification. This revision refines the MIC check process for mechanism lists and establishes compatibility with the Microsoft Windows operating systems, which implement SPNEGO.
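In practice, an initiator typically triggers this negotiation by requesting the SPNEGO pseudo-mechanism when creating the context. The fragment below is only a sketch: 1.3.6.1.5.5.2 is SPNEGO's assigned OID, but many implementations export their own constant for it, so the hand-built gss_OID_desc here is illustrative rather than portable.

#include <gssapi/gssapi.h>

/* DER-encoded arc of the SPNEGO OID 1.3.6.1.5.5.2. */
static gss_OID_desc spnego_mech_desc = { 6, (void *)"\x2b\x06\x01\x05\x05\x02" };
static gss_OID spnego_mech = &spnego_mech_desc;

/* One round of context initiation that delegates mechanism selection to SPNEGO. */
OM_uint32 init_with_spnego(gss_ctx_id_t *ctx, gss_name_t target,
                           gss_buffer_t in_tok, gss_buffer_t out_tok)
{
    OM_uint32 maj, min;
    maj = gss_init_sec_context(&min, GSS_C_NO_CREDENTIAL, ctx, target,
                               spnego_mech,        /* negotiate the real mechanism underneath */
                               GSS_C_MUTUAL_FLAG, 0,
                               GSS_C_NO_CHANNEL_BINDINGS,
                               in_tok, NULL, out_tok, NULL, NULL);
    return maj;
}

The underlying mechanism (for example, Kerberos) is then selected between the peers as described in steps (a) and (b) above.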
3.3 Desired Enhancements to GSS-API Version 3 Naming
The GSS-API provides a naming architecture that supports name-based authorization. However, with advances in security mechanisms and changes in the way applications that use these mechanisms are implemented, GSS-API's use of this model needs to be extended in the following versions. We will look at some of the reasons this change is necessary.
3.3.1 Names in GSS-API
In GSS-API, a name refers to a principal, which represents a person, a machine, or an application. These names are usually in a format such as joe@machine or joe@company. In GSS-API these strings are converted into a gss_name_t object by calling the gss_import_name() function. This is called an internal name. However, each gss_name_t structure can contain multiple versions of a single name, one for each mechanism supported by the GSS-API. Calling gss_canonicalize_name(mech_type) on a gss_name_t with multiple versions will result in a single version of gss_name_t for the specified mechanism. These are called Mechanism Names (MNs). This process is shown in Figure 3 below.
Figure 3 – Internal Names and Mechanism Names
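The flow in Figure 3 maps onto two calls in the C bindings. The sketch below assumes a Kerberos mechanism OID is available from the implementation; MIT and Heimdal export it as gss_mech_krb5 in gssapi_krb5.h, which is implementation-specific rather than part of the base standard.

#include <gssapi/gssapi.h>
#include <gssapi/gssapi_krb5.h>   /* gss_mech_krb5 on MIT/Heimdal (implementation-specific) */
#include <string.h>

/* Turn the printable string "joe@company" into a Kerberos Mechanism Name (MN). */
int make_krb5_mn(const char *printable, gss_name_t *mn)
{
    OM_uint32 maj, min;
    gss_buffer_desc buf;
    gss_name_t internal = GSS_C_NO_NAME;

    buf.value = (void *)printable;
    buf.length = strlen(printable);

    /* Internal name: may still hold per-mechanism variants of the same name. */
    maj = gss_import_name(&min, &buf, GSS_C_NT_USER_NAME, &internal);
    if (GSS_ERROR(maj))
        return -1;

    /* Mechanism Name: a single, mechanism-specific form of the name. */
    maj = gss_canonicalize_name(&min, internal, (gss_OID)gss_mech_krb5, mn);
    gss_release_name(&min, &internal);
    return GSS_ERROR(maj) ? -1 : 0;
}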
Once converted into MNs, these are stored in an Access Control List (ACL), which contains information on which principals have permission to use particular services. The next section looks into the limitations of this architecture and provides some possible methods for enhancement.
3.3.2 Limitations of Current Names
The GSS-API authenticates two named parties to each other. Once the names of the parties are initially imported into the GSS-API, converted into MNs, and stored in the ACL, then future authorization decisions can be made by simply comparing the name of the requesting party to the list of names stored in the ACL using the memcmp() function as shown in Figure 4 [RFC2743, Sun02].
Figure 4 – Comparing Names
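A sketch of that comparison, assuming the ACL stores the flat blobs previously produced by gss_export_name(); the acl_entries array and acl_count are hypothetical application state, not GSS-API objects.

#include <gssapi/gssapi.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical ACL: flat name blobs captured earlier via gss_export_name(). */
extern gss_buffer_desc acl_entries[];
extern size_t acl_count;

/* Returns 1 if the authenticated initiator's MN matches an ACL entry. */
int name_is_authorized(gss_name_t initiator_mn)
{
    OM_uint32 maj, min;
    gss_buffer_desc exported;
    size_t i;
    int match = 0;

    maj = gss_export_name(&min, initiator_mn, &exported);
    if (GSS_ERROR(maj))
        return 0;

    for (i = 0; i < acl_count; i++) {
        if (exported.length == acl_entries[i].length &&
            memcmp(exported.value, acl_entries[i].value, exported.length) == 0) {
            match = 1;   /* exact binary comparison, as described above */
            break;
        }
    }
    gss_release_buffer(&min, &exported);
    return match;
}

The brittleness discussed next stems from exactly this exact-match comparison: when the underlying principal name changes, every stored blob becomes stale.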
However, this can become problematic due to the fact that names can change over time. For instance, if a name contains organization information, such as a domain part indicating which department the principal belongs to, this will change when the party moves to a different department. Also, if individuals lawfully change their names, their GSS-API names will change too. The problem lies in the fact that updating ACLs to reflect these changes is difficult [RFC4768]. Another problematic scenario is when a user account on a machine is deleted. If a new user creates a new account using the same name, the new user will have the privileges intended for the old user. As GSS-API is used in more complex environments, there is a growing desire to use different methods for authorization, such as certificates, Kerberos authorization data, or other non-name-based authorization models. GSS-API's concept of names needs to be enhanced to support these various methods in a mechanism-independent manner. The following subsections look in more detail at naming issues arising from the development of new technologies.
3.3.2.1 Kerberos Naming
The Kerberos Referrals document proposes a new type of Kerberos name called an enterprise name [RFC4768]. This addresses the problem mentioned above, where individuals moving throughout an organization can cause complications. The idea of the enterprise name is to be an alias that users know for themselves and use to log in to their services. The Key Distribution Center (KDC) will translate this into a principal name and provide the user with tickets authorized to the principal. The enterprise name will track individuals moving throughout the organization. Although the principal name will change for these users, they will not have to change the way they log in, since the mapping is handled in the back end by the KDC. This enhancement in Kerberos, however, complicates the way GSS-API handles enterprise names. Future applications implementing Kerberos will ask users for their enterprise names, requiring GSS-API to handle this transaction. The problem arises from the fact that Kerberos is not planning to provide a mapping mechanism for translating the enterprise name into a principal name. Thus, any such method would be vendor-specific. It is not possible to implement gss_canonicalize_name for enterprise name types. It will be possible to use traditional principal names for GSS-API applications, but this will result in losing the benefits of enterprise names. Another issue arising from enterprise names is entering these names into the ACL, which would enhance the stability of the ACLs. It seems that this could be accomplished by including the enterprise name in the name exported by gss_export_name. However, this would result in the exported name changing every time the mapping changes, defeating the purpose of including it in the first place. Kerberos is also looking into mechanisms to include group membership information in Kerberos authorization data. Although it would be favorable to include group names in ACLs, GSS-API currently does not have a mechanism to support this.
3.3.2.2 X.509 Names
X.509 names present a more complicated problem due to certificates containing multiple options. In most cases, the subject name will be the appropriate name to export in a GSS-API mechanism. However, this field can be left empty in end-entity certificates [RFC3280, RFC4768]. This leaves the subjectAltName extension as the only portion left to identify the subject. Due to the fact that there may be multiple subjectAltName extensions in a certificate, GSS-API faces a problem similar to that of group membership in Kerberos. So far, there does not seem to be sufficient interoperability with GSS-API X.509 mechanisms. Requiring certificates using subject names would limit the mechanism to a subset of certificates. Even with the use of subject names, there is ambiguity in how to handle the sorting of name components in GSS-API.
3.3.3 Possible Solutions
The following subsections look at the possible solutions for the GSS-API Naming Architecture presented by the Kitten Working Group.
3.3.3.1 Composite Names
The first proposal to solve the problems mentioned above would be to extend the GSS-API name to include a set of name attributes. Examples of these attributes are Kerberos enterprise names, group memberships in an authorization infrastructure, Kerberos authorization data, and subjectAltName attributes in a certificate. For this extension to be applied, the following operations need to be added [RFC4768] (a hypothetical sketch of such an interface follows the list):
a. Add an attribute to a name.
b. Query attributes of a name.
c. Query values of an attribute.
d. Delete an attribute from a name.
e. Export a complete composite name and all its attributes for transport between processes.
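Since GSS-API version 2 defines no such interface, the following is purely a hypothetical sketch of what C-style prototypes for operations (a) through (e) could look like; none of these names or signatures are standardized by the documents cited in this paper.

#include <gssapi/gssapi.h>
#include <stddef.h>

/* Hypothetical prototypes only -- one per operation (a) through (e) above. */

/* (a) Attach an attribute (e.g., a group membership) to a name. */
OM_uint32 gssx_set_name_attribute(OM_uint32 *minor, gss_name_t name,
                                  gss_buffer_t attr, gss_buffer_t value);

/* (b) List the attributes currently carried by a name. */
OM_uint32 gssx_inquire_name_attributes(OM_uint32 *minor, gss_name_t name,
                                       gss_buffer_t attrs_out, size_t *attr_count);

/* (c) Fetch the value of one attribute. */
OM_uint32 gssx_get_name_attribute(OM_uint32 *minor, gss_name_t name,
                                  gss_buffer_t attr, gss_buffer_t value_out);

/* (d) Remove an attribute from a name. */
OM_uint32 gssx_delete_name_attribute(OM_uint32 *minor, gss_name_t name,
                                     gss_buffer_t attr);

/* (e) Export the composite name, attributes included, for transport. */
OM_uint32 gssx_export_name_composite(OM_uint32 *minor, gss_name_t name,
                                     gss_buffer_t exported_composite);

An ACL entry could then match on an exported composite blob or on individual attribute values, rather than solely on the exported principal name.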
Most of these attributes will be suitable for storage in a canonical form and binary comparison in the ACL. It is not specified at this point how to deal with the attributes that are not. Due to the various types of attributes, the operation of mapping attributes into names will require a mechanism-specific protocol for each mechanism. This solution does come with some security issues. Receiving attributes from different sources may be desirable, since the name attributes can carry their own authentication. However, the design of this solution will need to ensure that applications can assign appropriate trust to name components.
3.3.3.2 Mechanism for Export Names
Having a GSS-API mechanism for the sole purpose of providing a useful exportable name would be another solution. For instance, this would enable GSS-API to export a name as a local machine user and would work well for name information that can be looked up in directories. The advantage of this solution is that minimal change is needed to the current GSS-API semantics. However, it is less flexible than the previously stated solution, and it is not clear how to handle mechanisms that do not have a well-defined name to export, such as X.509.
4. Summary
As we reviewed in the previous sections, the recent work of the Kitten Working Group has been fixing problems in the current version of GSS-API and designing the future version. GSS-API has so far played an essential role by reducing to a minimum the burden on programmers of dealing directly with security implementations. However, as newer technologies require more flexibility from GSS-API, the Kitten Working Group will have to make the necessary adjustments in its development of the next-generation GSS-API. The continuing challenge for the Kitten Working Group is to provide an enhanced GSS-API that can take advantage of the latest security technologies while also keeping the interface as generic and simple as possible. These two objectives are somewhat contradictory: many security mechanisms are very different, and an attempt to provide a generic interface that supports all of them may not be feasible. Even in the example of the GSS-API Version 3 naming architecture, we can see this becoming a problem. The currently used naming model takes the names of context initiators and compares them to a set of names on an ACL. The architecture is simple and stable, but it does not provide the flexibility and features to support future deployments of GSS-API. The proposed changes for GSS-API Version 3 will increase the complexity of the GSS-API naming architecture. It will have more flexibility to support various security mechanisms, but this also means that there may be more areas vulnerable with regard to security, as stated previously. It remains to be seen how Kitten will be able to overcome this dilemma.
References
[RFC2743]
J. Linn “RFC 2743: Generic Security Service Application Program Interface Version 2, Update 1” IETF, Network Working Group, January 2000. http://tools.ietf.org/html/rfc2743
[RFC4178]
L. Zhu, P. Leach, K. Jaganathan, W. Ingersoll "The Simple and Protected Generic Security Service Application Program Interface (GSS-API) Negotiation Mechanism" IETF, Network Working Group, October 2005. http://tools.ietf.org/html/rfc4178
[RFC4401]
N. Williams "A Pseudo-Random Function (PRF) API Extension for the Generic Security Service Application Program Interface" IETF, Network Working Group, February 2006. http://tools.ietf.org/html/rfc4401
[RFC4402]
N. Williams "A Pseudo-Random Function (PRF) for the Kerberos V Generic Security Service Application Program Interface (GSS-API) Mechanism" IETF, Network Working Group, February 2006. http://tools.ietf.org/html/rfc4402
[RFC4768]
S. Hartman "Desired Enhancements to Generic Security Services Application Program Interface (GSS-API) Version 3 Naming" IETF, Network Working Group, December 2006. http://tools.ietf.org/html/rfc4768
[RFC3280]
R. Housley, W. Polk, W. Ford, D. Solo "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile" IETF, Network Working Group, April 2002. http://tools.ietf.org/html/rfc3280
[Sun02]
Sun Microsystems, Inc. "GSS-API Programming Guide" May 2002. http://dlc.sun.com/pdf/816-1331/816-1331.pdf
[GSS1]
“GSS-API-Wikipedia”, http://en.wikipedia.org/wiki/Generic_Security_Services_Application_Program_Interface
[SPN1]
“SPNEGO-Wikipedia”, http://en.wikipedia.org/wiki/SPNEGO
[PRF1]
“Pseudorandom function-Wikipedia”, http://en.wikipedia.org/wiki/Pseudorandom_function
“X.509-Wikipedia”, http://en.wikipedia.org/wiki/X.509
List of Acronyms
CAT
Common Authentication Technology
IETF
Internet Engineering Task Force
PRF
Pseudo Random Function
GSS-API
Generic Security Service Application Programming Interface
RPC
Remote Procedure Call
MIC
Message Integrity Code
MN
Mechanism Name
KDC
Key Distribution Center
SPNEGO
Simple and Protected GSS-API Negotiation
Last Modified: December 2007
This paper is available at: http://www.cse.wustl.edu/~jain/index.html | 计算机 |
A Community of Customer Experience Professionals.
Where to?Home
Welcome to Customer Service In The Cloud
An online community of Customer Experience Professionals. Sign-up here »
2012 Gartner Magic Quadrant for the Contact Center
4 May, 2012 Published by John M Perez in Contact Center
This article was reposted from Gartners Website.
Market Definition/Description
The customer service contact center refers to a logical set of technologies and processes that are engineered to support the customer, regardless of the channel. It is built in five logical groupings (the first of which is the focus of this Magic Quadrant).
CRM Business Applications for Customer Interactions
Customer service and support (CSS) problem management, trouble ticketing and case management
Knowledgebase solutions and advanced desktop search
Real-time analytics/decision support
CRM databases for account/contact/offer information
Desktop integration with telephony, cobrowsing, mobile and Web extension of the solution to online communities interested in peer-to-peer (P2P) collaboration management
Social media engagement and sentiment analysis
Real-time feedback and surveys
The ability to connect to remote sensors embedded in equipment such as consumer electronics
(For a breakdown on the weightings applied for the evaluation, see “Magic Quadrant Criteria for CRM Customer Service Contact Center, 2012″ and “Use Gartner’s Pace-Layered Application Strategy to Structure Customer Service Applications Based on Business Value.”)
Contact Infrastructures
Computer telephony integration (CTI)
Automatic call distribution
Email response
Workforce Optimization Tools
Quality management solutions
Performance management solutions
Offline and Real-Time Analytical Tools
The first layer, CRM business applications for customer interactions, handles a wide range of tasks, including case management. Other functions include advisory services, problem diagnostics and resolution, account management and returns management. Applications may also be industry-tuned for government, nonprofit agencies and higher education. They may include knowledge-enabled service resolution (such as advanced search tools), community management, offer management and service analytics dashboards. They are designed to enable employees or agents of a company to support clients directly, usually within a call or contact center, whether the product is a consumer good, a durable good or a business service, such as financial services, customer services (for example, retail banking, wealth management or insurance), hospitality, telecommunications, government, utilities or travel.
The agent needs to be able to support the customer, whether the customer is on a website or a mobile device, at a kiosk or in a vehicle. This means:
The agent sees what the customer sees.
The agent knows the path the customer has taken before the voice conversation takes place (i.e., he knows the communication context of the interaction).
The agent has the tools to solve the customer’s problem or address his or her issue from a remote location.
The customer service contact center needs to send out proactive, automated alerts. For example, when the status in a back-end system changes to one of which the customer needs to be aware (such as a bank balance, credit card fraud, flight delays, available upgrades, price range reached, a special offer on cars or insurance policy exceptions), an alert is sent to one or several devices until the customer responds that he or she has received the notification.
The application contains business rules for complex entities, such as contact, enterprise, subsidiary or partner, and the workflow processes to route a case, opportunity or order based on the rule set for the specific relationship. The application should be available as a subscription service in a cloud-architected model for all relevant industries. (Some industries, such as telecommunications and Federal Government agencies, may not be ready for this model, and on-premises software may be preferred.)
A case may be routed from one department to another, depending on type. The case can link to all interactions across channels, whether email, online, SMS or a phone call. An application supports multiple languages simultaneously. In some situations, real-time decision support is important. Multiple back-end systems synchronize using their own rules — for example, credit card fraud; telecommunications-specific functions, such as telecommunication billing, service and resource management; product life cycle management; digital content; and advertising bundling — and integrated order management.
Magic Quadrant
Figure 1. Magic Quadrant for CRM Customer Service Contact Centers
Source: Gartner (April 2012)
Vendor Strengths and Cautions
Amdocs is a profitable company with more than $3 billion in sales, selling a comprehensive set of software and services to communications service providers (telecommunications, media and satellite), with a portion of its business directed at customer service contact centers and the customer experience.
In the telecommunications industry, Amdocs has the advantage of a comprehensive set of products as part of the Customer Management Version 8.x release, ranging from an order management and billing platform, customer service functionality for the agent desktop, device management, a catalog and retail interaction manager, together with libraries of interaction flows.
Amdocs is a strong and profitable company, with customers and professional services resources in every major geographic location.
For prospects with deployed Amdocs assets, the company offers strong insight on the future of the telecommunications customer. It understands retail operations and the contact center in telecommunications and mobile, and it has good e-billing capabilities.
Amdocs’ consulting has strong experience in project management, best practices and key performance indicator mapping.
Amdocs has not kept pace with the needs of clients in the areas of cloud-based systems, social CRM, and real-time decision making. Amdocs has been weak in built-in and easy-to-use configuration tools, but has worked to improve this in v.8.1. Amdocs needs to do more to support the mobile customer beyond Device Care, which has not yet been widely deployed.
Customers not using Amdocs Smart Client Framework lack a robust development tool that can be used to build custom objects or new functionality and workflows.
Amdocs is not recommended for customer service contact center shortlists across industries other than telecommunications, although it may be appropriate for consideration on longer lists.
Despite articulating leading ideas for customer experience, we find that Amdocs is not best-of-breed in areas such as chat, knowledge management and virtual assistants.
Amdocs has limited traction in growing third-party external service providers (ESPs) for consulting and implementation services.
ASTUTE SOLUTIONS
Astute Solutions is a small (estimated revenue of less than $15 million in 2011) niche provider of cloud-based — software as a service (SaaS) — customer service functionality to the consumer market, primarily in the U.S. and the U.K.
Astute has continued to develop its products to meet changing customer demand, most recently with more-advanced cloud capabilities and improved knowledge management. It also has a clean and useful social CRM product, Social Relationship Management (SRM).
The vendor’s ePowerCenter version 8.x has been successfully deployed, primarily across the U.S. and the U.K., and has been extended to more than 25 countries, primarily in small and midsize customer service centers (15 to 100 agents).
The vendor has strong knowledge of and functionality for customer service processes in industries such as restaurants, hospitality, consumer goods and retail (nonbanking or other financial services).
Astute is a small company (fewer than 100 employees), and has yet to build a software partner ecosystem or a proven integrator/consultancy partner practice. However, steps are being taken to address this.
The product requires improvement to the configuration module and other components in multicountry/multiclient operations environments that require large data volumes. In the design phase, new customers need to think through their requirements, rather re-engineering the system once it has been developed.
The system is rarely deployed in large-scale, multichannel operations of large, distributed international service environments.
The Microsoft Dynamics CRM 2011 product is primarily used in nontraditional customer service contact center environments, where the real value may be in supporting a customer request for information, or the needs of students, citizens or government officials to interact with other people. There are many scenarios across industries (examples are government, healthcare, higher education, real estate and retailing) where the flexibility of the system to support a range of interactions makes it a good shortlist product. Microsoft has not been able to provide us with complex and scalable examples of the cloud-based version of the product for customer service contact centers.
When deployed by a skilled professional services team, the Microsoft Dynamics CRM product has powerful capabilities, including built in workflows, multichannel process integration, and blended sales, service and marketing.
The user interface has improved, as has integration with other Microsoft assets. The Microsoft Outlook look and feel, together with the integration with SharePoint and Microsoft Office, are commonly mentioned assets of the system for customers.
The latest release has improved business intelligence (BI) capabilities, visual guides and workflow support, together with improved dashboards.
Microsoft has a global reach of partners for professional services and complementary software.
Microsoft does not yet provide significant industry-specific templates for the customer service product line, relying instead on partners. These versions are not sanctioned by or supported by Microsoft, leaving the client with no direct Microsoft support. Due to the limited industry expertise, prospects that require complex customer service contact center processes, such as payer healthcare organizations, retail banking, telecommunications and utilities, should be cautious when considering Microsoft partners for their precise capabilities. Resources can be expensive and hard to find.
We do not consider the new Microsoft Dynamics CRM Online (SaaS) version of the product mature enough for complex contact center environments.
The product is not best-in-class in support of real-time decision making, knowledge management, proactive service, virtual assistants, mobile customer service or online Web communities.
References have not pointed to Microsoft Dynamics CRM as a platform on which to build a social CRM discipline around customer service.
NICE SYSTEMS
Nice Systems has a broad set of customer support functionalities, which it lists as next-best-action, cross-channel interaction, process automation and guidance, interaction analytics, and compliance and recording tools. It is a nontraditional provider of CSS in that it does not own the customer record; its products are more of a complementary offering, which often makes the purchase decision complicated for customer service managers. Nice offerings enable agents to act in real time, as the interaction takes place, which is the key event in customer service.
Nice’s Real-Time Impact (RTI) product helps with decisioning and is useful for customer service organizations tasked with upselling/cross-selling during inbound interactions.
The addition of real-time feedback and recent advances in support of the mobile customer, together with a partnership with Amdocs, are giving Nice greater appeal to prospects.
References speak to the ease with which information and data from multiple systems can be assembled, and workflows created to drive the customer dialogue.
The integration of RTI with the rest of the vendor’s assets for recording, agent training, governance, analytics and back-office workflow support creates good synergies for prospects owning other Nice products.
The company does not own the customer database, offer a full suite of CSS functionality or have a multitenanted SaaS technical architecture model.
Few complementary software companies or consulting organizations are advocating the Nice RTI system for customer service agent effectiveness.
There is a low level of client awareness of the capabilities of Nice in the customer service desktop space for retention, cross-sell/upsell or mobile.
Although Nice products are sold globally, the customer service RTI has less of a global presence.
ORACLE (RIGHTNOW TECHNOLOGIES)
RightNow Technologies was acquired by Oracle in early 2012, and its products are in the process of a migration to parts of the Oracle technology stack (see “Oracle to Acquire RightNow Technologies, Boost Cloud Portfolio”). This product is now called Oracle RightNow CX Cloud Service.
In addition to the customer service desktop application, Oracle RightNow CX Cloud Service, Oracle has a more significant installed base of Web customer service customers using capabilities such as knowledge management, chat and email. It continues to improve its capabilities and connect the two products. Therefore, consumer-oriented customer service contact centers that need searchable content, integrated chat and email, and solid scripting capabilities using a modern graphical user interface (GUI) will be attracted to the product.
The acquisition of the product by Oracle will lead to greater scalability as it transitions to Oracle technologies.
The system, delivered as a subscription service in a SaaS model, is straightforward to set up and configure, and doesn’t require heavy IT involvement.
Oracle RightNow CX Cloud Service has strong industry representation in high-tech, government agencies, retailers, education, travel, consumer electronics and branches of telecommunications, while not focusing deeply on industry-specific processes — for example, billing, price catalogs, order execution and underwriting.
As is true of any acquisition, it will take time to regain sales and marketing momentum outside the Oracle installed base, to train the Oracle sales force and partner network, and to integrate development efforts.
Gartner has not seen large deployment teams or configuration teams from the largest system integrators (SIs) and global consultancies, such as IBM, Accenture, Deloitte and Capgemini for the contact center desktop product.
RightNow built its products on the Microsoft .NET client, the open-source MySQL database and Red Hat Linux, as well as some other non-Oracle technologies. The migration of parts of the technology to the Oracle technology stack will take time.
Organizations leaning heavily in the direction of an all-Microsoft environment or a non-Oracle stack could face resistance from their IT organizations in regard to deepening the commitment to the Oracle RightNow CX Cloud Service product.
Oracle RightNow CX Cloud Service has begun to offer end users stronger platform as a service (PaaS) capabilities, where the business can build its own modules and business objects; the coming year will be pivotal for this initiative.
The product lacks an on-premises software model, and prospects will need to consider an on-premises Oracle product or another alternative.
ORACLE (SIEBEL)
The addition of RightNow Technologies to the Oracle CRM product line shifts the focus of the Oracle (Siebel) product. This could confuse prospects and the Oracle sales force. The Siebel product line still has strong near-term viability, even as it migrates to Fusion. It has broad functional coverage, a good partner ecosystem and areas of deep industry expertise. It remains a standard for large-scale call/contact centers looking for scalability and access to a global pool of third-party professional services, and with an inclination for the Oracle product line.
Siebel remains the only large-scale customer service contact center product deployed globally by large enterprises in 2012 across multiple business-to-business (B2B) and business-to-consumer (B2C) industries.
The Siebel role-based, on-premises platform is best in industries where fulfillment requirements are high, introducing change is difficult and custom workflows are expensive.
The product line has global software support and distribution, and a global presence of professional services for multiple industries.
Oracle continues to fund enhancements to the v.8.1.x and v.8.2.x product lines, as well as extending integration in areas such as knowledge management, marketing, real-time decisioning, workflow and policy administration. Upcoming agile development capabilities and a better user interface will energize the product.
Most of the large deployments observed in 2011 are upgrades from earlier versions. We have observed more companies migrating from Siebel CRM than moving to it in less-complex customer service contact centers. As Oracle Fusion is released, concerns about migration requirements weigh on decision makers.
Despite the system’s near 100% stability, users continue to experience performance degradation if the database isn’t carefully maintained.
Users with the High Interactivity Framework report glitches, and should be careful about tuning and integration.
For customers looking to deploy in a cloud-model, Oracle RightNow CX Cloud Service may be an alternative to on-premises Siebel deployment. The combination of Siebel’s migration to the Fusion application environment and the acquisition of RightNow Technologies has made it more complicated for the prospects with whom we have interacted to decide on the right product path.
The Siebel customer service product is not best-in-class for providing a P2P community software option (social CRM) tightly connected to the customer service process.
For users with a low level of complexity in the customer service area, and a small contact center (e.g., fewer than 250 seats), the Siebel product may be more complex than necessary. Prospects without an interest in a deeper commitment to the Oracle technology and application stack — for example, organizations moderately to heavily oriented toward Microsoft and .NET, or building their own open-source CRM applications — should look to other products first, but shouldn’t exclude Oracle.
Pegasystems grew approximately 25% in 2011, reaching more than $415 million in revenue, as it deepened its list of trained SI partners and its focus on customer-facing business processes. Continued competitive differentiation around case management, business process design and model-driven software aids industry acceptance.
Pegasystems has become more savvy about using tools such as Facebook and corporate websites to improve workflow processes for its clients.
The company has expanded the reach and depth of its professional service partner network, as well as improved slightly in the area of partnerships with complementary vendors.
The company delivers industry-specific best practices, specifically for insurance, healthcare and financial services, as well as prebuilt templates, which accelerate adoption.
Pegasystems’ PaaS is an Internet-based development area in which teams or partners can collaborate on building solutions and designing best practices; PaaS is increasingly gaining adoption.
Pegasystems offers a highly scalable solution (1,000 or more concurrent users in an integrated environment with 99.95% uptime) and provides good support.
Not all IT-driven organizations favor Pegasystems’ mashup and model-based approach, which is different from traditional software coding environments.
Whereas industries such as insurance, healthcare, and financial services are drawn to a product with a rule engine to drive consistency, the product may not be the ideal choice for a shortlist where processes are mostly unstructured.
Global organizations running a single instance of Pegasystems have found it complicated to synchronize CTI efforts across locations with disparate telephony environments.
With the exception of the real-time decisioning product, the vendor has limited multi-industry experience outside North America and the U.K.
The company has more work to do to demonstrate its vision for mobile device support and social CRM.
Salesforce.com’s Service Cloud has become the fastest growing product line at the company. Gartner estimates that it accounts for more than 30% of new subscription revenue. It is a clear leader in the market, although we continue to find large, complex, multinational customer service centers a challenge for this cloud offering. It is largely absent from public sector, complex B2B, health insurance, telecommunications and banking.
With revenue that could reach $2.75 billion in 2012, salesforce.com is the largest and fastest-growing software solution provider solely focused on customer engagement that can be verified (versus attributed revenue within a broader enterprise software sale).
For B2B customer service operations, especially those with an established salesforce.com presence in the sales department, Service Cloud is recognized as a de facto shortlist product by most North American and Western European organizations.
Key new customers — both B2B and B2C — have shown enough faith in the customer service contact center product to invest more than $10 million per year, and to retire homegrown systems and/or systems from competitors that were at an end-of-life stage, and consider the salesforce.com application platform a strategic asset.
The salesforce.com product for customer service has an excellent GUI, simple design tools, intuitive navigation and a good understanding of the importance of Web communities. There is good integration with back-end systems, such as Oracle ERP.
The added benefits of the customer portal, partner portal, social media monitoring and the Salesforce Ideas products draw customers to the Service Cloud.
Organizations running connected, multinational customer service contact centers have cited ongoing speed and performance issues.
The vendor is largely unproved in large, complex, retail, B2C contact centers — that is, large-scale, high-volume call centers where processes must be continually synchronized and monitored, such as retail banking, loan origination, insurance policy administration, bill processing and fraud management.
Very limited Asian, South American or Eastern European presence in larger-scale (more than 200 seats) customer service contact centers.
The customer service product lacks a real-time analytics capability for agent decision support, and requires a more unified BI layer to look at ad hoc data analysis. The product’s analytics capabilities and complex sentiment analysis functionality could use improvement. The company is in the process of addressing these areas.
Clients that are scaling the product for several-hundred-seat customer service centers with complex, industry-specific needs have voiced concern about the long-term total cost of ownership, as well as the value of some of the AppExchange products.
SAP references have already moved beyond SAP ERP and adopted SAP Web Channel, Marketing and Interaction Center. More work will be needed before references can demonstrate the benefit of an integrated all-SAP approach for CSS over best-of-breed choices outside SAP.
SAP’s marketing of an integrated business application suite that supports end-to-end customer processes is compelling to clients from an IT and line-of-business perspective, because it simplifies the application portfolio and promises better speed to solution delivery.
The SAP CRM 7.0 product has improved, and has good uptake with businesses in every major geographic location and in many industries. The strongest industry offerings are consumer electronics, utilities and B2B equipment support within the SAP installed base. This is due to the set of products for warranty, contract, entitlements and analytics that fit with the product.
SAP is a strong and profitable company, mitigating the financial risks of making a large and ongoing investment in the product.
The SAP CRM Interaction Center product set has a complement of customer service offerings, reducing the number of vendors required to build an end-to-end contact center solution.
Based on reference interactions, the cost to design, configure, test and deploy a midsize-to-large customer service center (for example, more than 500 users in a B2B model) is the highest of any package we have seen in the review period. To address this issue, SAP has brought to market a fixed-scope/fixed-price deployment option called SAP CRM Rapid Deployment Solution (RDS). The software cost is in line or less expensive than competing products.
We have not seen SAP as a significant factor in our clients’ decisions about the future of mobile computing for customer service and support, or for P2P support communities.
B2C contact centers that require the support of high volume, complex business processes (for example, financial services, retail banking, retail mobile operators and healthcare) are not a core strength of the SAP CRM 7.0 or Interaction Center.
Based on 2011 references, traction with non-SAP customers remains limited. Our recommendation is that non-SAP customers favor other products on their shortlists.
SAP’s multitenant SaaS delivery model, although improving, is not on a par with competitive offerings for customer service in the contact center.
There is a growing, but still limited, pool of trained ESPs with consulting practices that offer help with SAP technologies and customer process design and configuration.
SWORD CIBOODLE
Sword Ciboodle has experienced challenges expanding its market in the face of well-funded competitors. Despite good technology and a strong understanding of customer support processes, it will need a stronger partner network and more substantial support to grow the product.
Sword’s product is well-positioned for organizations that want to support a process-centric approach to customer service. Areas covered include complex case management with multichannel needs (telephone, email, chat and legacy systems).
The product can be configured for multiple user roles, which speeds the average handling time for a task, streamlines/shortens the training period and makes new processes easier to introduce.
The vendor has a responsive professional services team with a solid understanding of business processes, and has expanded to support growth in the U.S.
The underlying platform has good modeling capabilities and a strong set of customer service functionality. Ciboodle Crowd gives prospects a way to connect the customer service process to social media and collaboration.
Some clients have reported steep learning curves for application configuration, when working with Sword’s application development tools, which some consider nonintuitive.
Limited (but growing) sales and marketing resources have affected the awareness of the Ciboodle product in the market. Risk-averse buyers may mistake the lack of broad deployments and adoption as a sign of product weakness, rather than a lack of visibility.
The geographic scope of the product’s availability seems limited primarily to English-speaking geographies. Prospects should perform the standard due diligence before purchasing the product.
Sword is not known as a best-of-breed knowledge management tool or a social CRM platform to extend a business’s reach to the end customer. The product is not architected and deployed as a multitenant SaaS platform.
The product line would benefit from deeper strategic CRM application partnerships to round out its product offering.
Ciboodle showed solid growth in 2011, following a period of increased (albeit modest by industry standards) sales and marketing investment.
Vendors Added or Dropped
We review and adjust our inclusion criteria for Magic Quadrants and MarketScopes as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant or MarketScope may change over time. A vendor appearing in a Magic Quadrant or MarketScope one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. This may be a reflection of a change in the market and, therefore, changed evaluation criteria, or a change of focus by a vendor.
Oracle EBS: This product continues to be sold, although we have little exposure to the product or the active installed base.
Pitney Bowes: Pitney Bowes has a good set of products, primarily involving real-time decisioning, marketing and analytics. However, we have seen insufficient examples of the product as a core customer service contact center desktop.
Inclusion and Exclusion Criteria
The vendor must have 15 customer references for CSS functionality in the contact center, of which at least five are new customers in the past four quarters in at least two geographic regions (for example, the Asia/Pacific region, Latin America, South America, North America or Europe).
The product needs to have generated at least $7 million in software revenue for core CSS in the contact center (i.e., as the desktop of record) from new clients during the past four quarters. For 2012, this revenue should equal or exceed the revenue from the previous four quarters of business results.
The product should appear regularly on client shortlists, and the company needs to have built a practice with sufficient third-party consulting and integration firms to grow at a double-digit pace for five years. The technology needs to support an extension to cross-channel customer service (e.g., Web, kiosk, in-store or mobile), without the need to code in a new development environment. The company and/or its product should be a demonstrated trendsetter or market mover.
In the short term, the company must be financially viable. That is, it must have sufficient cash to continue operating at the current burn rate for 12 months.
ABILITY TO EXECUTE
Product/Service: Advances in software architectures — particularly in Web orientation, support of mobile devices, video and Web communities — are all Web 2.0 requirements that complicate the user’s choice. The vendor needs to have a scalable SaaS model or have the option of an on-demand delivery model for some part of its platform to be a Leader.
We weight the extent to which the company offers a componentized offering, as well as complete functionality across several service models.
We continue to see greater need for strong mashup capabilities that enable organizations to embed applications in the customer service representative’s desktop. We also see a strong demand for declarative systems that enable flexible logic flows. User organizations prefer to design their own business objects, workflows and business processes, without resorting to vendor support. We expect this demand for composite applications (through in-house development and application extension to the Internet/website) to accelerate.
We see a great need for advanced (real-time) decision support and complex knowledge solution capabilities, business rule engines and customer feedback management.
The CSS application should have out-of-the-box functionality, which means a strong set of industry- and process-specific business logic and data. Through process design or functionality breadth, the system must support end-to-end customer service processes (from customer need to resolution) for the chosen market. Published APIs are critical to connect (or expose) an application’s customer service functionality with another system or process. Vendors will be measured on the capabilities of their product releases to support customer service, and on the technical support of their multichannel and cross-channel environments.
The vendor must have a stable product development team for each product module it sells, or a demonstrably successful strategic partnership.
Overall Viability: We evaluate the vendor’s capability to ensure the continued vitality of a product, including a strong product development team to support current and future releases, as well as a clear road map regarding the direction the product will take until 2013. The vendor must have the cash on hand and consistent revenue growth during four quarters to fund current and future employee burn rates, and to generate profits. The vendor is also measured on its ability to generate business results in the CSS market. We examine the deployment partners, software partners and the consultancies that are trained and experienced with the product.
Sales Execution/Pricing: This involves the vendor’s ability to provide global sales and distribution coverage that aligns with marketing messages. The vendor must also have specific experience selling its CSS to the appropriate buying center. The strength of the management team and the partner strategy are key. We evaluate the ability to provide a revenue stream from CSS, and an observable deal flow from clients, vendors and ESPs.
Market Responsiveness and Track Record: We consider the vendor’s capability to perceive evolving customer requirements and articulate that insight back to the market, as well as create the products for readiness as demand comes online.
Marketing Execution: This refers to the vendor’s ability to consistently generate market demand and awareness of its CSS solution through marketing programs and media visibility. In an ideal world, marketing execution should be less critical than some other factors; however, the business reality is that marketing success can fuel growth and improvements.
Customer Experience: The vendor must produce a sufficient number (the recommended number is five) of quality clients and references, with varying levels of sophistication to prove the viability of its product in the marketplace. References are used as part of the evaluation criteria for the ability to execute and create a vision for how organizations can improve customer service. Included in this criterion are implementation and support.
The vendor must be able to provide internal professional services resources or must partner with SIs with vertical-industry expertise, CSS domain knowledge, global and localized country coverage, and a broad skill set (such as project management or system configuration) to support a complete project life cycle. The critical point on customer experience is to ascertain the degree of change management that accompanied the implementation. Often, the end user experiences discomfort from the change processes that were introduced with the new system, not from the new software. The vendor’s customer support organization must also provide satisfactory, prompt service to its customers in all regions of the world.
Operations: The vendor needs to offer consistent and comprehensible pricing models and structures, including for such contingencies as failure to perform as contracted and mergers and acquisitions. The vendor is measured on its flexibility to support multiple pricing scenarios, such as on-premises licensing, as well as application on-demand offerings, such as hosted and multitenant. The vendor must have sufficient professional services — in-house or through third-party business consultants and SIs — to meet evolving customer requirements (see Table 1).
Table 1. Ability to Execute Evaluation Criteria
Product/Service
Overall Viability (Business Unit, Financial, Strategy, Organization)
Sales Execution/Pricing
Market Responsiveness and Track Record
Marketing Execution
Customer Experience
Operations
COMPLETENESS OF VISION
Market Understanding: The market for customer service is highly diverse, because of the multichannel nature of customer interactions and the wide-ranging industry processes that need to be supported. To succeed, a vendor must demonstrate a strategic understanding of CSS opportunities that are unique to its target market. This may be new application functionality, evolving service models or in-line analytical capabilities for unique customer segments.
Market Strategy: The vendor can describe its go-to-market strategy as something other than “growing until we are acquired by a larger company.” Even with this as the endgame, it must be clear how prospects will be protected or benefit from such a strategy. We look for a well-articulated strategy for revenue growth and sustained profitability. Key elements of the strategy include a sales and distribution plan, internal investment priority and timing, and partner alliances.
Sales Strategy: This refers to the strength of the sales force and the channel, because these make the difference between floundering and steady/rapid growth. We are looking for highly trained sales leaders who can quickly differentiate the value proposition of products and services, as compared with the competition.
Offering (Product) Strategy: We look for a componentized offering and complete functionality across several service models. Specific vision criteria include:
Supporting a threaded service task across functional areas (including midoffice, back-office and partner), regardless of the channel.
Providing for the creation of content about the most likely customer intentions and how to address them, based on continuously variable business scenarios. Continuously variable means that, depending on the business context of the interaction, the steps and decisions in a service procedure may vary.
Having the ability to sell successful tools to support customer participation in the service process via Web communities.
Communicating openly with customers (and Gartner), a statement of direction for the next two product releases that keeps pace with or surpasses Gartner’s vision and our clients’ expectations of the CSS market.
Offering a sufficiently broad set of products to ensure the success of the product.
Providing a SaaS product; without this, a vendor cannot be considered as Visionary.
Business Model: To be a Leader through the first half of 2013, an on-premises application provider needs to have deployed a SaaS option that is appropriate for its customer base. Application modules should be tightly integrated, and have business process modeling capabilities and advanced workflows. The company should have a strategy to appeal to its key vertical industries — that is, it integrates with systems unique to an industry, delivers packaged functionality and workflows for an industry (such as those for the telecommunications, automotive and consumer goods industries), and delivers B2B and B2C interactions. Gartner should observe deployment partners, software partners and consultancies that are trained and experienced.
Vertical/Industry Strategy: Unless a product is deployed as a strong add-on to an existing technology stack, a deep understanding of one or more vertical industries will be crucial to offer differentiation.
Innovation: Innovative vendors incorporate concepts that extend to consumer technologies, virtual service agents and customer service functions embedded in virtual communities (such as Facebook and Get Satisfaction). The vendor needs to understand major technology/architecture shifts in the market and communicate a plan to use them, including potential migration issues for customers on current releases. For most vendors (any founded before 2000), the architecture is built to operate in a SaaS delivery model and on-premises. We examine how well the vendor articulates its vision to support service-oriented business applications.
The customer service application should provide a catalog of Web services that enable interoperability with disparate business applications, without requiring extensive point-to-point custom integration. It should have a smart client, and be decomposable as widgets or as part of a larger mashup. Applications must help optimize a predictive customer analytics system — directly or through tightly integrated partners. These predictive analytics alert management, agents or customers when service patterns are detected that might signal the need to adjust a business strategy or direction, or indicate that the likelihood of a particular business scenario has changed (for example, customers responding to a notice on defective parts, an accident or financial news). The vendor will be measured on the capability of its architecture to support global rollouts and localized international installations. The vendor must have the tools for IT and business users to extend and administer the CSS application. The customer is the final arbiter of whether a company is a Visionary.
Geographic Strategy: The vendor understands the needs of the three largest markets — the EU, North America and the Asia/Pacific region — and knows how to build a strategy to focus on aspects of the overall market (see Table 2).
Table 2. Completeness of Vision Evaluation Criteria
Market Understanding
Market Strategy
Sales Strategy
Offering (Product) Strategy
Business Model
Vertical/Industry Strategy
Innovation
Geographic Strategy
Quadrant Descriptions
Leaders demonstrate market-defining vision and the ability to execute against that vision through products, services, demonstrable sales figures, and solid new references for multiple geographies and vertical industries. Clients report that the vendors deliver a high level of value and return on their commitment. The development team has a clear vision of the implications of business rules, and the impact of social networking on customer service requirements. A characteristic of a leader is that clients look to the vendor for clues as to how to innovate in customer service in areas such as embedded sensors in equipment, mobile support and extension to social communities. The vendor does not necessarily drive a customer toward vendor lock-in, but rather provides openness to an ecosystem. When asked, clients reply that a Leader’s product has affected the organization’s competitive position in its markets and helped lower costs. Leaders can demonstrate $50 million in sales to new customers during the past year.
The vendors in the Challengers quadrant demonstrate a high volume of sales in their chosen markets (i.e., more than 30% of new business by percentage comes from more than one industry, and more than 50% of new sales come from sales into the broader installed customer base). They understand their clients’ evolving needs, yet may not lead customers into new functional areas with their strong vision and technology leadership. They often have a strong market presence in other application areas, but they have not demonstrated a clear understanding of the CSS market direction or are not well-positioned to capitalize on emerging trends. They may not have strong worldwide presence or deployment partners. Vendors in the Challengers quadrant can demonstrate $50 million in sales to customers during the past year.
Visionaries are ahead of potential competitors in delivering innovative products and delivery models. They anticipate emerging/changing customer service needs, and move into the new market space. They have a strong potential to influence the direction of the CSS market, but they are limited in execution or demonstrated track record. Typically, their products and market presence are not yet complete or established enough to challenge the leading vendors.
NICHE PLAYERS
Niche Players offer important products that are unique CSS functionality components or offerings for vertical segments. They may offer complete portfolios, but demonstrate weaknesses in one or more important areas. They could also be regional experts, with little ability to extend globally. They are usually focused on supporting large enterprises, rather than small and midsize businesses.
The established business applications for the CSS function are largely obsolete. They are simplistic and restricted by inflexible configuration rules and procedures that govern the input, retrieval, and flow of data and information. They support collaborative interactions poorly. Despite the high value of these systems, they have failed to evolve to incorporate new ideas, such as social experience design concepts, into customer interaction applications for customer service. Without collaboration capabilities baked into the software, interaction among employees and between employees and customers is limited, and best practices are hard to capture or suggest.
The major vendors developing customer management software fail to see sufficient economic value in rearchitecting their software for social experience. They are aware of the innovations brought on by communication software and social software, but believe that the social revolution in software will not adversely affect core systems. Instead, they are pursuing a tactic of acquisition and integration as an interim measure, until they sort out the importance of social CRM. Microsoft, Oracle and salesforce.com are making good progress, whereas other vendors covered in the Magic Quadrant lag behind in innovation in this area.
The philosophy among the largest software vendors serving the large and midsize enterprise could be described as follows: No major competitor will disrupt the sale of its enterprise business applications, because none has developed a social-centric system. Therefore, partnerships with or the acquisition of social media technologies will be sufficient. The result is that none of the major software platform providers has a commanding presence in social CRM software. The opportunities for a new and disruptive vendor to enter and impede the progress of the established vendors are great — similar to the market opportunity that salesforce.com has experienced in the sales force automation (SFA) area. SFA was considered a commoditized market, yet a more than $2 billion newcomer, salesforce.com, has emerged. So far, no business has entered with a strong, competitive social-centric product, thus limiting innovation.
Organizations are rarely able to migrate from an old system to a new one. More than 100 companies have demonstrated that it is possible to take an augmentation approach by which social CRM tools and internal social tools for collaboration and sharing are integrated into the CSS environment. Workflows and rules are written, often in the CRM system, and passed to the social system. This is not ideal — it’s a stopgap step that supports some experimentation and sets the stage for more-complex deployments, as experience is gathered.
An incremental approach to moving toward social-centric CSS systems enables the enterprise to gather facts, establish metrics and analyze the impact of creating greater collaborative capabilities. As more-complete social-centric CSS systems, which have a deeper mastery of real-time analytics, reach the marketing stage in 2014 and beyond, the business case for migrating to the new tools will be easier to demonstrate. There are industry-specific and geography-specific considerations that will cause businesses to accelerate investments in innovation in social-centric interfaces. The U.S. is at least two years ahead of Europe and other geographies in social media for business processes. High-tech, media and entertainment, retail, consumer goods, telecommunications providers and banking need to move forward during 2012 and 2013. Mining, chemicals, industrial machines, and oil and gas industries are under far less pressure to evolve.
The market for CSS applications for the contact center is fragmented, based on the complexity of the information required to support the customer and the complexity of the business rules or processes that form the steps in an interaction. In many parts of the world, such as India and China, Cloud-based customer service business applications are not yet the preferred model. There are many good vendors not found on the Magic Quadrant, including:
New vendors in the community space, such as FuzeDigital and ZenDesk
Open-source options, such as SugarCRM
CRMNext
Infor (Epiphany CRM)
Consona (formerly Onyx)
Neocase Software
Coheris
BPMonline
Vertical Solutions
Gartner has been surprised that some companies, such as Parature, have not scaled their products for customer service contact centers.
Gartner analysts are available for assistance with evaluations and comparisons of these companies and products, and others.
By 2013, many industries (for example, telecommunications, travel, financial services and high-tech consumer products) implicitly will include in their definitions of a CRM customer service contact center access to mobile users and community participation in knowledge creation. Throughout 2012 and 2013, agent real-time access to a view into the customer’s activity — including Facebook, on the organization’s website and beyond — will be attempted by 15% of customer service centers.
As a delivery model for customer service contact centers, SaaS is being accepted by many organizations. However, Gartner has observed resistance to SaaS in several areas, including:
Locations in which there is greater caution due to fears regarding data privacy, latency and application availability — for example, Central and Eastern Europe, many parts of Asia (such as India and China) and South America
National/federal governments and healthcare organizations in which regulations inhibit penetration
More-complex environments with high call volumes, high transaction volumes and real-time integration with legacy systems, which can slow performance
In our evaluations, we point out when we foresee a potential challenge for a product based on these limitations. Through 2H13, complete customer service solutions delivered in the SaaS model will be most prominent in the B2B, low-volume call/contact center.
As the market matures, the rating scales from one year to another can shift. The result is that a product that has not improved or declined could still show a shift in position on the Magic Quadrant that has resulted from a change in the weighting of a criterion between 2011 and 2012.
By 2014, as more applications are built in a cloud-based model, SaaS will emerge as a critical selection factor at all levels of the customer service contact center. By 2013, at least 75% of customer service centers will use some form of SaaS application as part of the contact center solution. This could be for knowledge management, desktop CRM functionality, feedback management or chat. Through 2013, fewer than 20% of organizations will select SaaS for complex business process support.
Version 3.1.0.201506080946
Comparison Process
Model Resolving
Differencing
Equivalences
The Comparison Model
Equivalence
Proxy Resolution
Equality Helper
Comparison Scope
Longest Common Subsequence
Default Behavior and Extensibility
Model Resolver Extension Point
Model Dependency Provider Extension Point
Overriding the Match engine
Changing how resources are matched
Defining custom identifiers
Ignoring identifiers
Refine the default Match result
Overriding the Diff engine
Changing the FeatureFilter
Changing the Diff Processor
Refine the default Diff result
Refine the default equivalences
Which references are followed during merging
Add your own filter
Add your own group
Customize display of differences inside an existing group
Add your own accessor factory
Using The Compare APIs
Compare two models
Loading your models
Creating the comparison scope
Configuring the comparison
Comparing from an Eclipse plugin
Query the differences
All differences
Differences related to element X
Filtering differences
Merge differences
Open a compare editor
The above figure represents the comparison process of EMF Compare. It can be roughly divided into 6 main phases.
From a given "starting point" (the file a user decided to compare), finding all other fragments required for the comparison of the whole logical model.
Iterating over the two (or three) loaded logical models in order to map elements together two-by-two (or three-by-three). For example, determine that class Class1 from the first model corresponds to class Class1' from the second model.
The matching phase told us which elements were matching together. The differencing phase will browse through these mappings and determine whether the two (or three) elements are equal or if they present differences (for example, the name of the class changed from Class1 to Class1').
The differencing phases detected a number of differences between the compared models. However, two distinct differences might actually represent the same change. This phase will browse through all differences and link them together when they can be seen as equivalent (for example, differences on opposite references).
For the purpose of merging differences, there might be dependencies between them. For example, the addition of a class C1 in package P1 depends on the addition of package P1 itself. During this phase, we'll browse through all detected differences and link them together when we determine that one cannot be merged without the other.
When we're comparing our file with one from a Version Control System (CVS, SVN, Git, Clearcase...), there might actually be conflicts between the changes we've made locally, and the changes that were made to the file on the remote repository. This phase will browse through all detected differences and detect these conflicts.
The Model resolving phase itself can be further decomposed in its own two distinct phases. More on the logical model and its resolution can be found on the dedicated page.
EMF Compare is built on top of the Eclipse platform. We depend on the Eclipse Modeling Framework (EMF), the Eclipse Compare framework and, finally, Eclipse Team, the framework upon which the repository providers (EGit, CVS, Subversive...) are built.
The EMF Compare extensions target specific extensions of the modeling framework: UML, the Graphical Modeling Framework (and its own extensions, papyrus, ecoretools, ...).
Whilst we are built atop bricks that are tightly coupled with the eclipse platform, it should be noted that the core of EMF Compare can be run in a standalone application with no runtime dependencies towards Eclipse; as can EMF itself.
EMF Compare uses a single model, whose root is a Comparison object, to represent all of the information regarding the comparison: matched objects, matched resources, detected differences, links between these references, etc. The root Comparison is created at the beginning of the Match process, and will undergo a set of successive refinements during the remainder of the Comparison: Diff, Equivalence, Dependencies... will all add their own information to the Comparison.
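As a concrete illustration, the sketch below shows one way such a Comparison is typically obtained through the EMF Compare 3.x builder API. It is only a minimal sketch: the file paths and the helper class name are placeholders, and it assumes a plain two-way comparison of two standalone XMI files.

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.EMFCompare;
import org.eclipse.emf.compare.scope.DefaultComparisonScope;
import org.eclipse.emf.compare.scope.IComparisonScope;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class CompareSketch {
    public static Comparison compare(String leftPath, String rightPath) {
        // Each side of the comparison is loaded into its own ResourceSet.
        ResourceSet leftSet = new ResourceSetImpl();
        ResourceSet rightSet = new ResourceSetImpl();
        leftSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
               .put("*", new XMIResourceFactoryImpl());
        rightSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("*", new XMIResourceFactoryImpl());
        leftSet.getResource(URI.createFileURI(leftPath), true);
        rightSet.getResource(URI.createFileURI(rightPath), true);

        // Two-way comparison: no common ancestor, so the "origin" is null.
        IComparisonScope scope = new DefaultComparisonScope(leftSet, rightSet, null);

        // The returned Comparison is the root described above: matches, diffs,
        // equivalences and conflicts will all be attached to it.
        return EMFCompare.builder().build().compare(scope);
    }
}
```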
Here is an overview of the EMF Compare metamodel:
So, how exactly is all of the information the Comparison model can hold represented, and how to make sense of it all?
A Match element is how we represent that the n compared versions have elements that are basically the same. For example, suppose we are comparing two different versions v1 and v2 of a given model, where v1 contains a library holding an element named Book with a title, and v2 contains the same library with that element renamed to Novel.
Comparing these two models, we'll have a Comparison model containing three matches:
library <-> library
Book <-> Novel
title <-> title
In other words, the comparison model contains an aggregate of the two or three compared models, in the form of Match elements linking the elements of all versions together. Differences will then be detected on these Matches and added under them, thus allowing us to know both:
what the difference is (for example, "attribute name has been changed from Book to Novel"), and
what the original elements were.
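The following minimal sketch (an illustrative helper, not part of EMF Compare itself) walks the Match tree of a Comparison and prints which elements were paired together. Note that getLeft(), getRight() and getOrigin() may return null when one side has no counterpart.

```java
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Match;

final class MatchWalker {
    // Recursively prints every Match, i.e. which left/right (and origin)
    // elements were paired together by the match phase.
    static void printMatches(Comparison comparison) {
        for (Match root : comparison.getMatches()) {
            printMatch(root, "");
        }
    }

    private static void printMatch(Match match, String indent) {
        System.out.println(indent + match.getLeft() + " <-> " + match.getRight()
                + (match.getOrigin() != null ? " (origin: " + match.getOrigin() + ")" : ""));
        for (Match sub : match.getSubmatches()) {
            printMatch(sub, indent + "  ");
        }
    }
}
```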
Diff elements are created during the differencing process in order to represent the actual modifications that can be detected within the source model(s). The Diff concept itself is only there as the super-class of the three main kinds of differences EMF Compare can detect in a model, namely ReferenceChange, AttributeChange and ResourceAttachmentChange. We'll go back to these three sub-classes in a short while.
Whatever their type, the differences share a number of common elements:
a parent match: differences are detected on a given Match. Having a difference basically means that one of the elements paired through this Match differs from its "reference" side (see source description below).
a source: differences are detected on one side of their match. The source really only holds meaning in three-way comparisons, where a difference can be detected in either right or left. All differences detected through two-way comparisons have their source in the left side. This is because we always compare according to a "reference" side. During two-way comparisons, the reference side is the right: differences will always be detected on the left side as compared with the right side. During three-way comparisons though, differences can be detected on either left or right side as compared with their common ancestor; but never as compared to themselves (in other words, this is roughly equivalent to two two-way comparisons, first the left as compared to the origin, then the right as compared to the origin).
a current state: all differences start off in their initial unresolved state. The user can then choose to:
merge the difference (towards either right or left, applying or reverting the difference in the process), in which case the difference becomes merged, or
discard it, thus marking the change as discarded. For example, if there is a conflicting edit of a textual attribute, the user can decide that neither right nor left are satisfying, and instead settle for a mix of the two.
a kind: this is used by the engine to describe the type of difference it detected. Differences can be of four general types:
Add: There are two distinct things that EMF Compare considers as an "addition". First, adding a new element within the values of a multi-valued feature is undeniably an addition. Second, any change in a containment reference, even if that reference is mono-valued, that represents a "new" element in the model is considered to be an addition. Note that this second case is an exception to the rule for change differences outlined below.
Delete: this is used as the counterpart of add differences, and it presents the same exception for mono-valued containment references.
Change: any modification to a mono-valued feature is considered as a change difference by the engine. Take note that containment references are an exception to this rule: no change will ever be detected on those.
Move: once again, two distinct things are represented as move differences in the comparison model. First, reordering the values of a multi-valued feature is considered as a series of MOVE: one difference for each moved value (EMF Compare computes the smallest number of differences needed between the two sides' values). Second, moving an object from one container to another (changing the containing feature of the EObject) will be detected as a move.
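All of the common attributes listed above are exposed as plain getters on the Diff interface. The sketch below (an illustrative helper class, assuming the accessors generated from the metamodel described in this section) simply dumps them for every difference of a comparison.

```java
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Diff;

final class DiffAttributes {
    // Dumps the attributes shared by every difference: kind, source,
    // state and the parent Match it was detected on.
    static void dump(Comparison comparison) {
        for (Diff diff : comparison.getDifferences()) {
            System.out.println(diff.getKind()        // ADD, DELETE, CHANGE or MOVE
                    + " from " + diff.getSource()    // LEFT or RIGHT
                    + ", state " + diff.getState()   // UNRESOLVED, MERGED or DISCARDED
                    + ", on " + diff.getMatch());
        }
    }
}
```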
In order to ensure that the model remains consistent through individual merge operations, we've also decided to link differences together through a number of associations and references. For example, there are times when one difference cannot be merged without first merging another, or some differences which are exactly equivalent to one another. In no specific order:
dependency: EMF Compare uses two opposite references in order to track dependencies between differences. Namely, requires and requiredBy represent the two ends of this association. If the user has added a package P1, then added a new Class C1 within this package, we will detect both differences. However the addition of C1 cannot be merged without first adding its container P1. In such a case, the addition of C1 requires the addition of P1, and the later is requiredBy the former.
refinement: this link is mainly used by extensions of EMF Compare in order to create high-level differences to hide the complexity of the comparison model. For example, this is used by the UML extension of EMF Compare to tell that the three differences "adding an association A1", "adding a property P1 in association A1" and "adding a property P2 in association A1" are actually one single high-level difference, "adding an association A1". This high-level difference is refinedBy the others, which all refine it.
equivalence: this association is used by the comparison engine in order to link together differences which are equivalent in terms of merging. For example, Ecore has a concept of eOpposite references. Updating one of the two sides of an eOpposite will automatically update the other. In such an event, EMF Compare will detect both sides as an individual difference. However, merging one of the two will trigger the update of the other side of the eOpposite as well. In such cases, the two differences are set to be equivalent to one another. Merging one difference part of an equivalence relationship will automatically mark all of the others as merged (see state above).
implication: implications are a special kind of "directed equivalence". A difference D1 that is linked as "implied by" another D2 means that merging D1 requires us to merge D2 instead. In other words, D2 will be automatically merged if we merge D1, but D1 will not be automatically merged if we merge D2. Implications are mostly used with UML models, where subsets and supersets may trigger such linked changes.
conflict: during three-way comparisons, we compare two versions of a given model with their common ancestor. We can thus detect changes that were made in either left or right side (see the description of source above). However, there are cases when changes in the left conflict with changes in the right. For example, a class named "Book" in the origin model can have been renamed to "Novel" in the left model whereas it has been renamed to "Essay" in the right model. In such a case, the two differences will be marked as being in conflict with one another.
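On the API side, these links are available directly from each Diff. The following minimal sketch (again an illustrative helper, assuming the generated accessors of the comparison metamodel) shows how they can be navigated.

```java
import org.eclipse.emf.compare.Diff;

final class DiffLinks {
    // Prints the links attached to a single difference.
    static void describe(Diff diff) {
        // Differences that must be merged before this one can be.
        System.out.println("requires:    " + diff.getRequires());
        // High-level differences that this one refines, if any.
        System.out.println("refines:     " + diff.getRefines());
        // Differences merged automatically along with this one.
        if (diff.getEquivalence() != null) {
            System.out.println("equivalent:  " + diff.getEquivalence().getDifferences());
        }
        // The conflict this difference belongs to (three-way comparisons only).
        if (diff.getConflict() != null) {
            System.out.println("in conflict: " + diff.getConflict().getDifferences());
        }
    }
}
```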
As mentioned above, there are only three kinds of differences that we will detect through EMF Compare, which will be sufficient for all use cases. ReferenceChange differences will be detected for every value of a reference for which we detect a change. Either the value was added, deleted, or moved (within the reference or between distinct references). AttributeChange differences are the same, but for attributes instead of references. Lastly, the ResourceAttachmentChange differences, though very much like the ReferenceChanges we create for containment references, are specifically aimed at describing changes within the roots of one of the compared resources.
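In code, telling these three cases apart is a matter of narrowing a Diff down to its concrete sub-class, as in the sketch below (the helper class name is illustrative).

```java
import org.eclipse.emf.compare.AttributeChange;
import org.eclipse.emf.compare.Diff;
import org.eclipse.emf.compare.ReferenceChange;
import org.eclipse.emf.compare.ResourceAttachmentChange;

final class DiffSubclasses {
    // Narrows a Diff down to one of the three concrete sub-classes.
    static void describe(Diff diff) {
        if (diff instanceof ReferenceChange) {
            ReferenceChange change = (ReferenceChange) diff;
            System.out.println("Reference " + change.getReference().getName()
                    + ", value " + change.getValue());
        } else if (diff instanceof AttributeChange) {
            AttributeChange change = (AttributeChange) diff;
            System.out.println("Attribute " + change.getAttribute().getName()
                    + ", value " + change.getValue());
        } else if (diff instanceof ResourceAttachmentChange) {
            // Change affecting the roots of one of the compared resources.
            System.out.println("Root change in "
                    + ((ResourceAttachmentChange) diff).getResourceURI());
        }
    }
}
```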
Conflict will only be detected during three-way comparisons. There can only be "conflicts" when we are comparing two different versions of the same model along with their common ancestor. In other words, we need to be able to compare two versions of a common element with a "reference" version of that element.
There are many different kinds of conflicts; to name a few:
changing an element on one side (in any way, for example, renaming it) whilst that element has been removed from the other side
changing the same attribute of an element on both sides, to different values (for example, renaming "Book" to "Novel" on the left while renaming "Book" to "Essay" on the right)
creating a new reference to an element on one side whilst it had been deleted from the other side
Conflicts can be of two kinds. We call PSEUDO conflict a conflict where the two sides of a comparison have changed as compared to their common ancestor, but where the two sides are actually now equal. In other words, the end result is that the left is now equal to the right, even though they are both different from their ancestor. This is the opposite of REAL conflict where the value on all three sides is different. In terms of merging, pseudo conflicts do not need any particular action, whilst real conflicts actually need resolution.
There can be more than two differences conflicting with each other. For example, the deletion of an element from one side will most likely conflict with a number of differences from the other side.
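The sketch below (an illustrative helper) lists the conflicts attached to a three-way Comparison and separates the REAL ones, which need user resolution, from the PSEUDO ones, which do not.

```java
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Conflict;
import org.eclipse.emf.compare.ConflictKind;

final class ConflictReport {
    // Lists the conflicts of a three-way comparison.
    static void dump(Comparison comparison) {
        for (Conflict conflict : comparison.getConflicts()) {
            String label = conflict.getKind() == ConflictKind.REAL ? "REAL" : "PSEUDO";
            System.out.println(label + " conflict between "
                    + conflict.getDifferences().size() + " differences: "
                    + conflict.getDifferences());
        }
    }
}
```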
EMF Compare uses Equivalence elements in order to link together a number of differences which can ultimately be considered to be the same. For example, ecore's eOpposite references will be maintained in sync with one another. As such, modifying one of the two references will automatically update the second one accordingly. The manual modification and the automatic update are two distinct modifications.
Contact Management with Microsoft Exchange and Outlook
By Brian Pituley
(reprinted by permission of the author)
Contact management and activity journaling have always been an important part of doing business and, in today's increasingly fast-paced world, they are becoming indispensable. Contact management has traditionally been done by individuals and most contact management software solutions are aimed at the single-user market. In today's business environment, however, team work and the requirement for continuous coverage are driving the need for work group contact management and journaling. Exchange 5.5 scenario
Contact management and activity journaling are a built in functions of Outlook and Exchange server but they are designed to work together for a single person. A single user's contacts are stored on the Exchange server in the user's mailbox. A single user's contacts can be shared but given the security model used by Outlook and Exchange, it is impractical to share with a large group. Contact information can easily be shared for a work group or a corporation through public folders. Activity journaling records are also stored in the user's mailbox and should, in theory, be sharable. The problem is that Outlook cannot be configured to write journal entries to any location other than the user's mailbox. What this means is that even if a given user were to share his journal entries, a second user would not be able to add entries to the first user's shared folder. The second user's journal entries would end up in their own mailbox rendering them unusable unless the second user also shared his journal information. The best possible case is that there would be multiple folders containing journal entries, each one separate from the others. Searching for entries pertaining to a single contact would be time-consuming at best.
It is possible to create solutions for group contact management and activity journaling under Exchange 5.5 but because of the nature of the Exchange database, it is cumbersome to implement and slow in operation. Custom Outlook forms, an Exchange public folder, and a database are all required to make it work. Custom forms allow the use of the familiar Outlook contacts user interface while enabling additional information fields and the addition of stationery-based letters and faxes through integration with Microsoft Office. The public folder is the repository for the shared contacts, and the database and the associated code ties the contacts and journal entries together. The processing for this type of solution is all done on the client PC so the performances of the system is highly dependent on the power of the desktop PC's and the network bandwidth that's available. Because of the reliance on the network, this type of solution does not scale well, and performs especially poorly over wide area networks. There are third-party products like ContEX from Impreza (http://www.imprezacomp.com/contex/index.htm) that perform the function reasonably well but, as stated earlier, they do not scale well. Exchange 2000 scenario
Creating this type of application with Exchange 2000 will be considerably easier due to changes in the structure of the Exchange information store database. The new store structure, termed the 'web store' by Microsoft, allows access to stored information in a variety of ways. Where the Exchange 5.5 database could only be accessed through MAPI calls, the web store allows access to all information through Win32, MS Office, and HTTP URLs as well as MAPI. Security permissions must still be granted to files and folders, but the multiple access methods allow for much greater simplicity when creating applications that will rely on the Exchange information store. For example, the ability to access individual pieces of information via URLs allows the development of web-based applications that don't have to rely on MAPI calls to retrieve information. In addition, Exchange 2000's full-text indexing capabilities will greatly simplify searching for information.
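To give a feel for what URL-based access means in practice, the minimal sketch below issues a plain HTTP GET against a web store item. The server name, folder path and item name are hypothetical placeholders, the snippet is written in Java purely for illustration, and it assumes anonymous access; a real deployment would require authentication and would depend on the URL layout of the actual store.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebStoreFetch {
    public static void main(String[] args) throws Exception {
        // Hypothetical web store URL: server, virtual directory and item
        // name are placeholders and depend entirely on the deployment.
        URL itemUrl = new URL("http://mailserver/public/Contacts/jsmith.eml");
        HttpURLConnection connection = (HttpURLConnection) itemUrl.openConnection();
        connection.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // raw item content returned by the store
            }
        }
    }
}
```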
Any companies that are considering implementing an Exchange and Outlook-based contact management system on Exchange 5.5 should wait until they have migrated to Exchange 2000. While the infrastructure work required to implement Exchange 2000 is considerable, the ease of application development under Exchange 2000 will more than make the investment worthwhile. Most companies running Exchange have an immense amount of valuable information stored in the Exchange databases, information that is largely inaccessible to anyone but the creators. Exchange 2000 opens the databases to applications like contact management and data mining.
Capturing the Next Generation
by Scott Juster
Tweet It's clear that streaming and capturing game footage will be easier with the PS4, but why is that a good thing? In a word: democratization. To hear Sony tell it, every piece of their upcoming PlayStation 4 is an industry-changing marvel. As John Teti aptly writes, their mantra is “More”: more processing power, more polygons, more texture, more social network hooks. It’s hard to separate substance from static in the middle of the hype storm but now that some time has passed, I’m more confident that the most important feature announced is linked to a single button labeled “share.” Assuming it’s implemented gracefully (which is a big assumption given Sony’s console software track record), the ability for players to stream and save gameplay footage will have a much larger effect than any amount of increased visual fidelity. Gameplay capture is by no means new, but neither is it particularly easy to accomplish by end users. Doing it on a PC is probably the simplest way, but you still need at least one extra piece of software running behind your game (and sometimes two if you want a separate, high-quality voice over track). Sometimes there are problems with full-screen mode. Tweaking bit rates can eat up an entire afternoon. Storage and hosting considerations add more complications. Trying to get something off a console means more investments in both the hardware and software sides. Such investments require enough time and money to scare away most players.
As it stands now, distributing gameplay footage is a bit like on-line multiplayer in the days before Xbox Live. Some games have it baked into their software, others require manual tweaking and a willingness to jump through more than a few hoops. It’s not enough to want to share, you have to be willing to engineer a solution as well. Through a combination of hardware, software, and cloud services, Sony is trying to cater to those that are interested in the end product rather than the process of getting it set up.
There will definitely be those that will scoff at the inevitable limitations of such a mechanism. Purists will probably be able to get sharper images, record longer sessions, and capture at higher frame rates with their own setups, just as some people still like to host their own multiplayer services and wire their house with ethernet to cut down on latency. For the vast majority of the people however, “good enough” will be better than “not at all.”
It’s clear that streaming and capturing game footage will be easier with the PS4, but why is that a good thing? In a word: democratization. It sounds a bit pretentious without any context, but the theory behind it is simple. If people can share and comment on games easily on their own terms, they have power over how their games are framed. Exaggerated publisher claims or deceptive advertisements will ring even more hollow in the face of people who can broadcast the finished product without a corporate filter. For independent developers, a larger streaming population means more games will have the opportunity to experience the kind of grass roots popularity games like Minecraft enjoyed. Evangelists will have an easier time spreading the word about an obscure game they played and audiences will have the opportunity to form organically.
As I’ve written before, democratized game capture has historical value as well. Properly executed, the PS4 will enable millions of new record keepers to create primary sources for posterity. We will have video of how games looked in their original form on their original hardware. This is particularly important as we make the transition to all digital games, as version updates and patches can literally overwrite a game’s past. Having wider access to recording technology will also help us capture the culture surrounding particular games. If Bungie and Activision are to be believed, Destiny will be a decade-long Odyssey in which players interact with one another across a vast galaxy. Universal access to recording technology means every player can become an anthropologist within their virtual community. With any luck, major game events (both scripted and player driven) will be recorded and used to study games and their players. Will 90% of the footage be mundane to the average viewer? Probably. Will seeing people interact with one another yield both wonderful and terrible examples of human nature? Most certainly. In order to get a full picture of our medium, we need as much data as we can get.
Sony is rolling out sharing features in the hopes of gaining control of the video game market, but these very features also give players significant power. If successful, the share button will give players unprecedented opportunity to shape a game’s public perception and historical memory. In a world in which technology is increasingly focused on user “ecosystems,” streaming and sharing pave the way for an extremely active, unpredictable user base. It’s hard to know what changes the PS4’s share button (and its inevitable Xbox counterpart) will bring forth. Whatever it is, you can be certain it will be more influential than fancy new shaders.
Scott Juster is a writer from the San Francisco Bay Area. He has an academic background in history and is interested in video game design and the medium's cultural significance. In addition to his work on PopMatters, he writes and creates podcasts about video games at http://www.experiencepoints.net/. | 计算机 |
Dependencies and Liaisons Participation
Decision Policy
Patent Policy About this Charter
Web Applications Working Group Charter
Note: this is the original charter for the WebApps WG. It is retained for historical purposes. The newest revision of the charter can be found at http://www.w3.org/2008/webapps/charter/.
The mission of the Web Applications (WebApps) Working Group, part of the Rich Web Client Activity, is to provide specifications that enable improved client-side application development on the Web, including specifications both for application programming interfaces (APIs) for client-side development and for markup vocabularies for describing and controlling client-side application behavior. Join the Web Applications Working Group.
Proceedings are Public
Art Barstow Charles McCathieNevile
Team Contacts (FTE %: 30)
Doug Schepers Michael(tm) Smith
Usual Meeting Schedule
Teleconferences: 1-2 per week (one general, one for discussing particular specs) Face-to-face: 3-4 per year
As Web browsers and the Web engine components that power them are becoming ubiquitous across a range of operating systems and devices, developers are increasingly using Web technologies to build applications and are relying on Web engines as application runtime environments. Examples of applications now commonly built using Web technologies include reservation systems, online shopping sites, auction sites, games, multimedia applications, calendars, maps, chat applications, clocks, interactive design applications, stock tickers, currency converters and data entry/display systems. The target environments for the Web Applications Working Group's deliverables include desktop and mobile browsers as well as non-browser environments that make use of Web technologies. The group seeks to promote universal access to Web applications across a wide range of devices and among a diversity of users, including users with particular accessibility needs. The APIs must provide generic and consistent interoperability and integration among all target formats, such HTML, XHTML, and SVG.
Additionally, the Web Applications Working Group has the goal to improve client-side application development through education, outreach, and interoperability testing. To reach this goal, the Web Applications Working Group will create Primer documents for relevant specifications, and promote creation of tutorials and other educational material in the larger community. The Web Applications Working Group is a merger of the members and deliverables from the Web API Working Group and the Web Application Formats (WAF) Working Group. The deliverables of both groups have had close interdependencies and goals, and a single group makes more efficient use of Team and Member resources. Scope
This charter builds on the charters of the Web API and WAF Working Groups by continuing work already in progress, and taking on new deliverables necessary for the evolving Web application market. The scope of the Web Applications Working Group covers the technologies related to developing client-side applications on the Web, including both markup vocabularies for describing and controlling client-side application behavior and programming interfaces for client-side development.
The markup vocabularies for describing and controlling client-side application behavior category covers areas such as: special-use packaging and deployment of applications
binding elements in applications to particular interactive behavior
The application programming interfaces (APIs) for client-side development category covers areas such as:
network requests
platform interaction
Additionally, server-side APIs for support of client-side functionality will be defined as needed.
Both the APIs and markup vocabularies defined in Web Applications Working Group specifications are expected to be applicable to, and designed for, use with an array of target formats — including HTML, XHTML 1.x and 2.x, SVG, DAISY, MathML, SMIL, and any other DOM-related technology. Although the primary focus will be handling of content deployed over the Web, the deliverables of the Web Applications Working Group should take into consideration uses of Web technologies for other purposes, such as the purpose of building user interfaces on devices; for example, user interfaces in multimedia devices such as digital cameras and in industrial information tools such RFID/barcode scanners and checkout machines.
The Web Applications Working Group should adopt, refine and when needed, extend, existing practices where possible. The Working Group should also take into account the fact that some deliverables will most likely be tied to widely deployed platforms. Therefore, it is feasible for the Working Group to deliver APIs optimized for particular languages, such as ECMAScript. Interfaces for other languages such as Java, Python, C# and Ruby, may be developed in cooperation with the organizations responsible for those languages.
Furthermore, the Web Applications Working Group deliverables must address issues of accessibility, internationalization, mobility, and security. Education, outreach, and testing also play an important role in improving the current state of Web applications. The Working Group should aim to provide the community with resources that meet the educational requirements stated in its group mission statement. Comprehensive test suites will be developed for each specification to ensure interoperability, and the group will assist in the production of interoperability reports. The group will also maintain errata as required for the continued relevance and usefulness of the specifications it produces. Finally, the WebApps Working Group will collaborate with the HTML and SVG Working Groups to form a joint Task Force to specify the Canvas Graphics API, should the need arise.
Success Criteria
In order to advance to Proposed Recommendation, each specification is expected to have two independent implementations of each feature defined in the specification.
Deliverables
Recommendation-Track Deliverables
The working group will deliver at least the following:
Access Control for Cross-site Requests (Access Control): a mechanism for selective and secure cross-domain scripting
Clipboard Operations for the Web 1.0: Copy, Paste, Drag and Drop (ClipOps): a detailed model for rich clipboard operations in User Agents, with consideration for different environments
Document Object Model (DOM), comprising DOM Level 3 Core 2nd Edition, DOM Level 3 Events, and DOM Level 4: a set of objects and interfaces for interfacing with a document's tree model
Element Traversal: a lightweight interface for navigating between document elements
File Upload: an API to extend the existing file upload capabilities of User Agents (may include more generic file I/O operations)
Web Interface Definition Language (WebIDL): language bindings and types for Web interface descriptions
Metadata Access and Extensible Information for Media (MAXIM): a generic API for accessing metadata embedded in or linked to media files
Network Communication API (Network API): a socket interface to enable "push" content update
Progress Events: event types used for monitoring the progress of uploads and downloads
Selectors API: an interface for matching and retrieving Element nodes in a DOM according to flexible criteria
Web Signing Profile: a mechanism for associating XML Signatures with Web content
Widgets: a specification covering various aspects of Web applications that can be installed to a local computer and still access the Web
Window Object: an object which provides many high-level pieces of functionality for Web scripting languages
XML Binding Language (XBL2): a language and set of APIs to allow for rich real-time transformations of documents
XMLHttpRequest Object (XHR Object), Level 1 and Level 2: an API for client-server data transfer, both to specify what is currently implemented and to extend its capabilities
For a detailed summary of the current list of deliverables, and an up-to-date timeline, see the WebApps WG Deliverables.
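As a concrete illustration of how several of these deliverables are exercised together from script, the short browser-side sketch below uses the Selectors API, the XMLHttpRequest object, and Progress Events in combination. It is an illustration only, not text from the charter; the endpoint path, CSS class names, and data attribute are placeholder assumptions.

```typescript
// Selectors API: retrieve Element nodes with a CSS selector instead of manual DOM walking.
const rows = document.querySelectorAll<HTMLElement>(".stock-ticker .row");

// XMLHttpRequest: asynchronous client-server data transfer.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/quotes.json"); // placeholder endpoint

// Progress Events: monitor the transfer while it is in flight.
xhr.onprogress = (e: ProgressEvent) => {
  if (e.lengthComputable) {
    console.log(`loaded ${e.loaded} of ${e.total} bytes`);
  }
};

// Update the page once the response arrives.
xhr.onload = () => {
  const quotes = JSON.parse(xhr.responseText) as Record<string, number>;
  rows.forEach((row) => {
    const symbol = row.dataset.symbol ?? "";
    row.textContent = `${symbol}: ${quotes[symbol] ?? "n/a"}`;
  });
};

xhr.send();
```

Each of the three capabilities used here corresponds to a separate Recommendation-track document in the list above, which is one reason the charter stresses consistent integration across DOM-based target formats.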
The market for applications of Web technologies continues to evolve quickly. Therefore, in addition to the specifications already in draft status, the Web Applications Working Group may take on additional specifications necessary to enable the creation of Web applications to meet the needs of the market as it evolves.
Additional WebApps WG specifications may arise initially from work begun in other Working Groups, such as the HTML Working Group or the SVG Working Group; they may also be identified by new submissions from Members, or by market research. For any additional specification to be considered for development within the WebApps WG, an associated requirements document that identifies demonstrable use cases must first be developed. Per section 6.2.3 of the W3C Process document, any substantive changes to the charter (e.g. additional Recommendation-track documents not included in this charter) will follow the Advisory Committee Review process. When suggesting new deliverables, the Working Group Chair will endeavor to secure adequate resources for the timely development of those deliverables. Specific deliverables that the WebApps WG may consider when resources become available include:
An API for cross-domain access, related to or complementary to Access Control
Communication APIs (such as Server Sent Events and a Connection interface)
Offline APIs and Structured Storage for enabling local access to Web application resources when not connected to a network
A distributed eventing mechanism to allow for transfer of events between networked devices
An advanced pointers interface, to expose distinct features of specific devices such as pen/tablet inputs and multi-touch trackpads.
The WebApps WG may also enter into joint Task Forces with other groups, to collaborate on specifications that cross group boundaries, such as the Canvas Graphics API.
Other Deliverables
Other non-normative documents may be created such as:
Test suites for each specification
Primers for each specification
Requirements document for new specifications
Non-normative schemas for language formats
Non-normative group notes
A comprehensive test suite for all features of a specification is necessary to ensure the specification's robustness, consistency, and implementability, and to promote interoperability between User Agents. Therefore, each specification must have a companion test suite, which should be completed by the end of the Last Call phase, and must be completed, with an implementation report, before transition from Candidate Recommendation to Proposed Recommendation. Additional tests may be added to the test suite at any stage of the Recommendation track, and the maintenance of an implementation report is encouraged.
Given sufficient resources, the Web Applications Working Group should review other working groups' deliverables that are identified as being relevant to the Web Applications Working Group's mission.
The actual production of some of the deliverables may follow a different timeline. The group will document any schedule changes on the group home page. Dates and milestones which have been met at the time of writing are in bold type. Although XBL2 became a Candidate Recommendation in March 2007, to address issues already identified, it will return to Working Draft in approximately 2009 and Last Call in 2010. Two of the group's members have expressed their intent to implement XBL2 but the earliest completion date for these implementations is not expected until 2011. The public is encouraged to contribute to the implementation work and/or the extensive test suite that will be required as such contributions could accelerate meeting some of the milestones.
Milestones
The charter's milestone table tracked First Public Working Draft (FPWD) and subsequent maturity dates for each specification; only the specification names and a single date are preserved here: Access Control spec (FPWD 2006-Q2), ClipOps spec, DOM 3 Core 2ed spec, DOM 3 Events spec, Element Traversal spec, File Upload spec, WebIDL spec, MAXIM spec, Network API spec, Progress Events spec, Selectors API spec, Web Signing Profile spec, Widgets spec, Widgets Requirements, Window Object spec, XBL2 spec, XBL2 Primer, and XHR Object spec.
Dependencies and Liaisons
The XmlHttpRequest Object specification currently has a dependency upon the HTML 5.0 specification. The Web Applications Working Group is not aware of any other Web Applications Working Group specifications that depend upon specifications developed by other groups, though there are some dependencies between current Web Applications Working Group specifications. However, the specifications of several other groups, such as HTML and SVG, depend upon particular Web Applications Working Group specifications, notably the DOM specifications. Therefore, additional dependencies will be avoided to prevent the disruption of dependent deliverables.
The Web Applications Working Group expects to maintain contacts with at least the following groups and Activities within W3C (in alphabetical order):
CDF (Compound Document Formats) Working Group
To provide suitable APIs for compound documents as identified by the CDF WG.
HTML (HyperText Markup Language) Working Group
To monitor the development of the HTML specification as it complements the WebApps WG's specifications, and to help ensure HTML requirements for the WebApps WG's deliverables are met (noting that the HTML Working Group charter states specifically that the HTML Working Group will produce a specification for a language evolved from HTML4 and for describing the semantics of both Web documents and Web applications). In addition, to form a joint task force with the HTML and SVG WG to specify the Canvas Graphics API, should the need arise.
Mobile Web Initiative
To help identify use cases and requirements for Web applications on mobile devices and to help ensure that the Web Applications Working Group's deliverables address those use cases and requirements.
Multimodal Interaction Working Group
To coordinate regarding features and models for voice, pen, and other alternative inputs.
Protocols and Formats Working Group
To ensure that WebApps WG deliverables support accessibility requirements.
SVG (Scalable Vector Graphics) Working Group
To monitor the development of the SVG specification as it complements the WebApps WG's specifications, and to help ensure SVG requirements for the WebApps WG's deliverables are met. In addition, to form a joint task force with the HTML and SVG WG to specify the Canvas Graphics API, should the need arise.
User Agent Accessibility Guidelines Working Group
To ensure that WebApps WG deliverables support accessibility requirements, particularly with regard to interoperability with assistive technologies, and inclusion in deliverables of guidance for implementing WebApps deliverables in ways that support accessibility requirements.
Web Security Context Working Group and XML Security Specifications Maintenance Working Group (or a successor)
To help ensure the WebApps WG's specifications, particularly those which are security-related, are consistent with Web and XML security specifications and best practices.
XHTML2 (XML HyperText Markup Language) Working Group
For coordination on event and DOM architectures.
SYMM (Synchronized Multimedia) Working Group and Web Audio and Video Working Group (TBD)
To integrate consistent APIs for all multimedia functionality.
Furthermore, the Web Applications Working Group expects to follow the following W3C Recommendations, Guidelines and Notes and, if necessary, to liaise with the communities behind the following documents:
Architecture of the World Wide Web, Volume I
Internationalization Technical Reports and Notes
External Groups
The following is a tentative list of external bodies the Working Group should collaborate with:
ECMA Technical Committee 39 (TC39)
This is the group responsible for ECMAScript standardization, and related ECMAScript features like E4X. As the Web Applications Working Group will be developing ECMAScript APIs, it should collaborate with TC39.
OMA can provide input to the requirements and technologies for this Working Group, as well as review and possibly endorse the deliverables.
Some of the outputs of this Working Group are technologies that could be endorsed by 3GPP.
JCP
The Java Community Process may develop APIs similar to those of the Web Applications Working Group.
To be successful, the Web Applications Working Group is expected to have 10 or more active participants for its duration, and to have the participation of the industry leaders in fields relevant to the specifications it produces. The Chairs and specification Editors are expected to contribute one to two days per week towards the Working Group. There is no minimum requirement for other Participants. The Web Applications Working Group will also allocate the necessary resources for building Test Suites for each specification. The Web Applications Working Group welcomes participation from non-Members. The group encourages questions and comments on its public mailing list, [email protected], which is publicly archived and for which there is no formal requirement for participation. The group also welcomes non-Members to contribute technical submissions for consideration, with the agreement from each participant to Royalty-Free licensing of those submissions under the W3C Patent Policy.
Most Web Application Working Group Teleconferences will focus on discussion of particular specifications, and will be conducted on an as-needed basis. At least one teleconference will be held per week, and a monthly coordination teleconference will be held, with attendance by the Chairs, W3C Team Contacts, and the Editors of each specification, as well as other interested group members, in order to assess progress and discuss any issues common among multiple specifications. Most of the technical work of the group will be done through discussions on the [email protected], the group's public mailing list. Editors within the group will use the W3C's public CVS repository to maintain Editor's Draft of specifications. The group's action and issue tracking data will also be public, as will the Member-approved minutes from all teleconferences. The group will use a Member-confidential mailing list for administrative purposes and, at the discretion of the Chairs and members of the group, for member-only discussions in special cases when a particular member requests such a discussion.
Information about the group (for example, details about deliverables, issues, actions, status, participants) will be available from the Web Applications Working Group home page.
Decision Policy
As explained in the W3C Process Document (section 3.3), this group will seek to make decisions when there is consensus and with due process. The expectation is that typically, an editor or other participant makes an initial proposal, which is then refined in discussion with members of the group and other reviewers, and consensus emerges with little formal voting being required. However, if a decision is necessary for timely progress, but consensus is not achieved after careful consideration of the range of views presented, the Chairs should put a question out for voting within the group (allowing for remote asynchronous participation -- using, for example, email and/or web-based survey techniques) and record a decision, along with any objections. The matter should then be considered resolved unless and until new information becomes available. This charter is written in accordance with Section 3.4, Votes of the W3C Process Document and includes no voting procedures beyond what the Process Document requires.
Patent Policy
This Working Group operates under the W3C Patent Policy (5 February 2004 Version). To promote the widest adoption of Web standards, W3C seeks to issue Recommendations that can be implemented, according to this policy, on a Royalty-Free basis. For more information about disclosure obligations for this group, please see the W3C Patent Policy Implementation.
About this Charter
This charter for the Web Applications Working Group has been created according to section 6.2 of the Process Document. In the event of a conflict between this document or the provisions of any charter and the W3C Process, the W3C Process shall take precedence.
Please also see the previous charters for the Web API and WAF Working Groups.
Doug Schepers, <[email protected]>, Team Contact
Michael(tm) Smith, <[email protected]>, Team Contact
Art Barstow, Nokia, Chair
Charles McCathieNevile, Opera, Chair
Copyright © 2008 W3C® (MIT, ERCIM, Keio), All Rights Reserved.
Researchers say that 99.7 percent of Android users are at risk from a …
Rice University professor Dan Wallach wrote a blog post in February that discussed the threat that network eavesdropping poses to Android users. Several applications, including the platform's native Google Calendar software, don't use SSL encryption to protect their network traffic. Wallach speculated that the calendar software could be susceptible to an impersonation attack.
Researchers at the University of Ulm followed up on Wallach's findings and devised a proof-of-concept attack to demonstrate the vulnerability. Several of Google's applications use the ClientLogin authentication system but fail to use SSL to encrypt their communication with Google's servers, making them susceptible to eavesdropping attacks.
ClientLogin is designed to allow applications to trade a user's credentials for an authentication token that identifies the user to the service. If the token is passed to the server in an unencrypted request, it could potentially be intercepted and used by the attacker.
The authentication tokens remain valid for two weeks, during which time the attacker has relatively broad access to the user's account in a specific Google service. The researchers found that Android's calendar sync, contact sync, and Picasa sync are all susceptible.
Although the bug has already been fixed (for calendar and contact sync, but not Picasa) in Android 2.3.4—the latest version of the operating system—the vast majority of mobile carriers and handset manufacturers haven't issued the update yet. According to Google's own statistics, this means that 99.7 percent of the Android user population is still susceptible to the vulnerability.
This reflects the need for better update practices among Android hardware vendors. During a keynote presentation at the recent Google I/O event, product manager Hugo Barra acknowledged the problematic nature of the Android update process and told developers that an effort to address the issue is in the works. At a press briefing following the keynote, Google's Andy Rubin offered some additional details.
There is no actual plan in place at this time, but a number of Google's largest handset and carrier partners have formed a working group to begin setting the guidelines for a new update initiative. The participants intend to guarantee the availability of regular updates for a period of 18 months on new handsets. They could also potentially define some boundaries to reduce the gap between when a new version of Android is released and when it is deployed over the air.
Although the initiative is still at a very early stage and the policies it formulates will be entirely voluntary, it already has preliminary buy-in from enough prominent Android stakeholders to make it credible. The leading Android handset manufacturers and all four of the major US carriers are currently involved.
If the group can build consensus around a reasonable set of update policies, it would be a big win for Android adopters. It would ensure that security issues like the ClientLogin bug can be remedied in a timely manner. Another positive side effect is that it would help diminish the uncertainty about product lifespan that frustrates many Android end users. The fact that so many Android users are still at risk from a vulnerability that has already been fixed is a telling sign of the need for faster updates.
Update: following widespread reports of this issue, Google has come up with a way to mitigate the issue on the server side. The fix will reportedly be rolling out soon. | 计算机 |
How does application portfolio management tie into ALM?
by Kevin Parker
Actually, application development veteran Kevin Parker says ALM is really a part of the APM process when you look at it from a distance.
What is Application Portfolio Management, and is it part of the Application Lifecycle Management process?
Just like the physical assets of IT (hardware, buildings, and infrastructure), the software we own and develop has intrinsic value to our organization. It is common today to see software research and development (R&D) in progress shown on the balance sheet alongside the R&D that goes into the products we make and sell. It is natural then that we manage software just as we would manage any other asset. We need to know when its value is declining, when it has become obsolete and what it costs each year to maintain and sustain.
After a confused and fragmented beginning, application portfolio management (APM) tools are now reaching a level of maturity where informed business decisions can be made about the disposition of the software assets in the organizational portfolio. Let’s begin, though, by making it clear that application lifecycle management (ALM) is a term that is sometimes used as well, even though we more usually apply this term to the business of developing and maintaining an application. Gartner is beginning to use the term ADLM, Application Development Lifecycle Management, in order to disambiguate the uses of ALM. I mention this because it is important understand the difference between the lifecycle of the application and the lifecycle of changes to the application.
APM is the business of deciding if an application should be created, how and when it should be improved and when it should be replaced: the birth, life and death, the lifecycle of the application. Those choices occur due to requests from the business which are, in turn, costed, evaluated and, sometimes, funded. The prioritization of these requests, the organizing of the requests into releases and the management of the activities are all part of APM. The main task of APM is to determine the value of the software assets. It does this by tracking the cost of the development and maintenance of the system over time, by estimating the value the system delivers to the business, and by determining the exposure the business has if the system fails. There are tools available that will measure the complexity and difficulty of enhancing a system, and this should be factored into the value. With these, and many other parameters, it is possible to assign a dollar amount to each system. As a result, it becomes possible to see the point at which the value of the existing system is less than the cost of replacing it with a new one. ADLM defines the processes and activities we go through to satisfy those requests and to create, modify or decommission an application. ADLM is about decomposing those requests into increasingly more detailed requirements, about the project planning and resource utilization, the business of creating code, testing, packaging and deploying it. It is about turning the requests into tasks and executing them: it is not about deciding if the task should be done. So ALM is really part of the APM process. If we compare them to the government, APM is the cabinet making policy and deciding priorities, and ALM is the agencies and departments delivering the will of government.
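To make the valuation idea concrete, here is a minimal sketch of the kind of arithmetic an APM tool might perform when it assigns a dollar figure to a system and flags the point at which replacement beats retention. The field names, weighting, and sample numbers are illustrative assumptions, not a standard formula or a description of any particular product.

```typescript
interface AppRecord {
  name: string;
  annualBusinessValue: number; // estimated value delivered per year
  annualRunCost: number;       // maintenance and sustainment per year
  failureExposure: number;     // expected annual loss if the system fails
  replacementCost: number;     // one-off cost of building a replacement
}

// Net value the organization keeps by running the system for another year.
function annualNetValue(app: AppRecord): number {
  return app.annualBusinessValue - app.annualRunCost - app.failureExposure;
}

// Flag systems whose net value over a planning horizon no longer covers a
// replacement: the point where the existing asset is worth less than replacing it.
function shouldConsiderReplacement(app: AppRecord, horizonYears = 3): boolean {
  return annualNetValue(app) * horizonYears < app.replacementCost;
}

const payroll: AppRecord = {
  name: "Legacy payroll",
  annualBusinessValue: 900_000,
  annualRunCost: 650_000,
  failureExposure: 150_000,
  replacementCost: 400_000,
};

console.log(shouldConsiderReplacement(payroll)); // true: 3 x 100,000 < 400,000
```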
This was first published in June 2013
2015-40/2212/en_head.json.gz/9991 | › Analytics › ROI Marketing
Google Website Optimizer, Part 2
Bryan Eisenberg | April 27, 2007
A chat with Google's Website Optimizer product manager. Last of a series.
Recently, Google announced the open beta for Google Website Optimizer, a free testing tool available to any AdWords marketer. I had the opportunity to discuss it with Tom Leung, business product manager for Google. Check out part one here.
Bryan Eisenberg: Now that people have a free tool, one of the questions they have is, "What resources will I need to actually implement tests?"
Tom Leung: Let me put it this way: if you have the ability to change an image on a particular page or change a headline, or you have the ability to add our scripts to your page, you can test. However, there are some people who may want extra help and that's why we work with certifying different partners like [your company] to help provide a consultation on what to test or how to evaluate the results. The tool's designed for do-it-yourself marketers. But for those who want extra help, that's available either through our free technical support or more advanced professional services, like our authorized consultants.
BE: One of the questions that keep popping up is if I start tagging the pages and I start making a couple of variations, how long do I have to wait until I get results that are meaningful?
TL: As with all complicated questions, it depends. It depends on a number of factors, and I'll call out some of them. One is the traffic that your page gets; another is how ambitious you were with your experimentations. So if you created dozens of different variations and then created, ultimately, hundreds of different potential versions of the page, it's going to take a lot longer than if you create two or three variations for your headline and just one or two for your hero image. The other thing it depends on is your conversion rate. So if you define conversion as maybe entering step one of a purchasing process, you'll probably get a reasonably high conversion rate. But if you define conversion as making it through [to] step five of a five-step sales funnel, then the conversion rate would be lower. So the short of it is that our expectation is most tests will take a few weeks. We've had some sites, some of our larger advertisers, run tests and get results within as little as a few hours, although we don't recommend they end the test at that point. You want to let it run at least a week so you can make sure you don't have any weird day-of-week effects and things like that.
BE: Yeah, I can't tell you how many times we've worked with clients in the past where they'll get what looks like an immediate boost, and all of a sudden it just fizzles out and actually becomes lower. You definitely need to give enough time to run its course.
TL: We don't really encourage that kind of hit-and-run testing. Our vision is you'll always be testing and continuously trying new things. It's kind of this marathon where you're just constantly getting more efficiency out of your page.
BE: What kind of things do you see happening over the course of the next six months to a year?
TL: I'd probably say the feedback we're getting from our beta customers is that they're extremely happy with the tool, and many have said that it's just perfect for them. I don't really know much about the details of where other tools are heading, but we see the category as growing significantly and we don't really feel like there's ever going to be just one tool that's perfect for everybody. But we definitely feel like this could be used by potentially millions of Web sites out there. In terms of where we see the tool going, we're making a lot of effort to improve and make even more sophisticated reports, to make the setup process even faster than it is today and addressing feedback we get from customers. We're very much interested in hearing what people want, and we're working hard to deliver to them.
So this is just a beginning, you know. This is version 1.0. There's a lot of great stuff on the way. I don't have anything to announce in regard to future features, but I can tell you we're burning the midnight oil over here.
BE: I'd have to agree. From our perspective, the tool is just as effective as just about everyone else's. There's just a handful of companies that can benefit from a more complex algorithm or whatever it is; the number of people who could benefit from anything extra is limited. This does what it's intended to do, which is test variations to find out which one converts best.
TL: That's right, and we're always open to ideas on how to make it better. Hopefully, we'll have another opportunity to talk with you down the road when we can announce other cool things you'll see with this tool.
BE: A lot of what we see in testing in general seems to happen more in e-commerce. Do you see more business-to-business Web sites using the Google optimizer to improve experience?
TL: You're absolutely right. The early adopters, even for our early beta, have been the e-commerce guys. I think for them there's a very clear impact on the bottom line. You know, going from a 3 to 4 percent conversion rate makes a big impact on their financial statements in the growth of their business. However, I'd argue every page on the Web should be optimized. Every page on the Web is trying to accomplish something. Why not try and experiment with alternative ways to display that page to accomplish whatever that something is? It doesn't necessarily have to be purchasing a widget from an e-commerce [site], it could be filling out a lead-generation form, or it could be watching a demo of a new brand building campaign, or it could be just staying on the Web site for a certain period of time. There are a lot of ways you can optimize a page. The straightforward e-commerce conversion is a great way for us to start, but we're already seeing a lot of beta users who aren't really selling anything but are using their Web site as a means to communicate with customers or to build awareness. Those are conversion goals, too.
BE: One last question. If you had the opportunity to talk to every single Webmaster out there who never tested before, what would you advise them to do with Google Website Optimizer in terms of first steps?
TL: I'd say pick a page that gets a significant amount of traffic and try to test a few different headlines and a few different main images, and maybe have three headlines and three images and make a conversion event something that's fairly achievable. Something like triggering the first step of a purchase process or going to the sign-up form, whatever it might be. I guarantee you, you'll be blown away by what you learned and you'll be a tester for life!
BE: From your mouth to the world. I hope lots of people hear it and take advantage of it. You guys are offering a phenomenal product, and I think we're going to see huge uptake. I think people are going to be excited, and we'll hear a lot of great stories about a lot of people making a lot more money than they are now.
TL: That's win-win for everybody.
BE: I want to thank you so much, Tom, for taking time out to tell us about Google Website Optimizer.
Meet Bryan at the ClickZ Specifics: Web Metrics seminar on May 2 at the Hilton New York in New York City.
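Tom's earlier "it depends" answer about test length can be turned into a rough back-of-the-envelope estimate. The sketch below uses a common rule of thumb for A/B sample sizes, roughly 16 x p(1-p)/d^2 visitors per variation for 95 percent confidence and 80 percent power; the traffic and conversion figures are made-up inputs, not numbers from Google or from this interview.

```typescript
// Rule-of-thumb sample size: n per variation ~ 16 * p * (1 - p) / d^2, where p is the
// baseline conversion rate and d is the absolute lift you want to be able to detect.
function visitorsPerVariation(baselineRate: number, detectableLift: number): number {
  return Math.ceil((16 * baselineRate * (1 - baselineRate)) / detectableLift ** 2);
}

function estimatedTestDays(
  dailyVisitors: number,
  variations: number,
  baselineRate: number,
  detectableLift: number,
): number {
  const perVariation = visitorsPerVariation(baselineRate, detectableLift);
  return Math.ceil((perVariation * variations) / dailyVisitors);
}

// Example: 2,000 visitors a day split across 4 page versions, a 3% baseline conversion
// rate, and a desire to detect an absolute improvement of one percentage point.
console.log(estimatedTestDays(2000, 4, 0.03, 0.01)); // roughly 10 days
```

Even when the arithmetic says a test could finish in a few days, the week-minimum advice above still applies, since day-of-week effects are not captured by a sample-size formula.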
Bryan Eisenberg is co-founder and chief marketing officer (CMO) of IdealSpot. He is co-author of the Wall Street Journal, Amazon, BusinessWeek, and New York Times best-selling books Call to Action, Waiting For Your Cat to Bark?, and Always Be Testing, and Buyer Legends. Bryan is a keynote speaker and has keynoted conferences globally such as Gultaggen, Shop.org, Direct Marketing Association, MarketingSherpa, Econsultancy, Webcom, the Canadian Marketing Association, and others for the past 10 years. Bryan was named a winner of the Marketing Edge's Rising Stars Awards, recognized by eConsultancy members as one of the top 10 User Experience Gurus, selected as one of the inaugural iMedia Top 25 Marketers, and has been recognized as most influential in PPC, Social Selling, OmniChannel Retail. Bryan serves as an advisory board member of several venture capital backed companies such as Sightly, UserTesting, Monetate, ChatID, Nomi, and BazaarVoice. He works with his co-author and brother Jeffrey Eisenberg. You can find them at BryanEisenberg.com.
Check out our review of the Ouya Android-based gaming console.
Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front.
“Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013.
While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be.
As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields.
Expect to hear more solid information on the Ouya and its games lineup as we inch closer to its nebulous 2013 release. Hopefully for the system's numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not.
Steve Horton —
There are many advantages to going with a single ecosystem for your desktop PC, laptop PC, mobile phone, and tablet. (Even game system and television, in some cases.) Devices within that ecosystem are designed to work well with each other. They sync easily so that preferences and media can be effortlessly copied or shared with multiple devices. Applications may be universal, meaning they require a single purchase to work on multiple devices at the same time. And the user interfaces are usually similar or identical across devices.
Though many users cross ecosystems and choose iOS, Android, and Windows devices based on need and not interoperability, we’re going to focus on what happens when a user decides to stay within a single ecosystem: what are the advantages and disadvantages, and what are the weak links are in that ecosystem?
Apple probably has the most tightly integrated ecosystem of any of the three. ITunes is just a better application on Apple products than on Microsoft’s products. And Apple has a television device, the AppleTV, that fits right into the ecosystem as well.
Benefits of the Apple ecosystem
In the Apple ecosystem, Apple devices back up to the same iCloud system that other Apple devices and PCs can use. You can stream music and video to other devices (like the Apple TV) using free AirPlay functionality. You can even mirror the device’s screen on another device. It makes a lot of sense to have multiple Apple devices in a home.
Operating system updates on iPhones and iPads are always free, updating an iPod Touch is usually free, and updating a Mac to the latest OS will run you $20. Note that there’s never a charge for incremental updates.
Once you purchase an app on any Apple device, you can sync it to any other Apple device free, or re-download it without restrictions. This is also the same for media purchased in iTunes.
Drawbacks of the Apple ecosystem
Old operating systems swiftly become unsupported by hardware and software, forcing you on an upgrade path that you may not necessarily wish to take. The flipside is also bad: newer Apple OSes can often not run legacy apps. Don’t even try to run an old PowerPC Mac app on a newer Intel Mac. It won’t work.
Apple even stops offering support for older versions of its operating systems after only a year or two, a far narrower window than, say, Windows. Also, app developers often discontinue support for earlier versions of iOS, or earlier device generations, forcing you to upgrade to continue using the app. For example, many new high-profile apps don’t work on the first iPad at all.
The other big weakness is that Apple’s ecosystem will run you much more money than any of the others. You’re paying for the brand, and also an expectation of quality Mobile phone
Millions of people consider the iPhone the best smartphone ever, and with good reason: it pioneered the touchscreen and spawned dozens of imitators. The newest iPhone 5 offers 4G connectivity and a larger screen, and the newest iPod Touch also offers a larger screen and a thin form factor.
The newest iPads are more expensive than competing Android tablets. They offer a similar experience as the iPhone and iPod Touch, but on a larger scale and with many tablet-specific applications. Apple also introduced the iPad mini this year for those who want an iPad that’s closer in size to an iPhone.
The Mac itself is somewhat of the weakest link in the Apple ecosystem. Someone used to the touchscreen iOS interface of the iPhone and iPad is going to be confused that Apple doesn’t even offer a touchscreen option for the Mac. And though several of the icons look the same, Apple’s OS X is very different than iOS. That said, iTunes works very well on the Mac, and it’s much less of a headache to sync an iPhone or iPad with a Mac than a PC. If you can justify the expense, adding a Mac to complete the Apple ecosystem makes a lot of sense. Plus, when you get a look at the slim, 5-millimeter-thick iMacs, it’s going to be hard to say no. Other devices
Like a gaming system, AppleTV is a device that connects to your TV via HDMI. But it’s not a gaming system; instead, it streams movies and TV shows to your TV from your iTunes library in the cloud. You can also stream media from an Apple device using AirPlay. This little $99 device fits right in to the Apple family and is really useful.
But what about Google…?
By Sam Venneri, December 16, 2008
The ability of devices to learn and adapt is in the not-too-distant future
Sam Venneri is senior vice president of Asynchrony Solutions, an innovative software technology firm focused on the agile delivery of systems integration, custom application development and secure collaboration solutions. He is the former CTO and CIO of NASA, where he focused on transforming advanced technology research into practical applications. It used to be the stuff of fantasy, what Hollywood scriptwriters and producers made their careers out of. Computers and robots, all gaining self-awareness, able to "learn" from and adapt to their environment. No longer dumb machines capable of merely following explicit orders, they gain intelligence and can actually think for themselves. The movies are replete with such images. HAL from 2001: A Space Odyssey. The machines from The Terminator. More recently, the human-looking beings from I, Robot. All of these machines became capable of making their own decisions without the input of their creators. Unfortunately, they all turned their newfound brainpower towards the purpose of destroying mankind. None succeeded, but the attempt was certainly frightening.
For many viewers of these cinema classics, one question arose in the back of their minds: Could this really happen? The answer: yes, but without the part about destroying mankind. In fact, the creation of intelligent robots and computers with the power to learn and adapt to a changing environment is closer than many people realize. Research into this fascinating discipline has been ongoing for decades, and the development of an actual prototype is very much in the offing. At the root of this once-unthinkable phenomenon is the dynamic transformation of the software industry. Software History
If one looks at the trends in software starting in the early 1950s, this industry was out in front of the hardware community in terms of sophisticated conceptual development processes and technology. The software community, in fact, started what has been defined as the modern systems engineering approach long before the hardware community adopted it. In the 1960s and 1970s, the software field progressed from basic language to FORTRAN. Then, in the 1980s and 1990s software, software became more of a driver of the devices of the day -- automobiles, dishwashers, microwaves all were equipped with microprocessor controls (the automobile is, essentially, a distributed computing environment). At this time, software became a critical issue in many organizations; the majority of team leaders and program managers at the aerospace companies, automakers, even NASA, had come from the hardware environment; software to these people was almost an afterthought. Consequently, in most industries, software projects were increasingly outsourced to specialists. However, software ultimately became a problem, not only because of the nature of code development and validation but due to the complexity of the programs necessary to drive the new, advanced devices. At NASA, we saw an abundance of errors in the lines of code we used. We were going from thousands to millions of lines of code. Plus, at the time, one individual would write single strings of code. Today, software in large, sophisticated systems isn't one continuous piece; it's written in sections by groups of programmers with interfaces defined to transfer all critical parameters from one section of software code to another. In the early 1990s, despite the introduction of tools like Unified Modeling Language (UML) and integrated verification and validation techniques, the errors in software programming were more frequent -- and more costly -- due to the complexity of the programming. A case in point was the space vehicles that NASA was producing. Basically, the functionality and controllability of virtually all of the systems in these vehicles was becoming software-driven. However, the ramifications of faulty software were dramatically illustrated by the demise of one of the NASA Mars Polar Lander due to software errors, an incident which made worldwide headlines.
The point was clear: whether it was control theory, pointing telescopes, controlling automotive drive processes such as energy management and braking, or spacecraft operational management, there are unanticipated consequences in complex software-driven systems for which there is no adequate testing method. Three-Sided Engineering
In the mid 1990s, NASA adopted a different approach. We looked at systems engineering as a sort of triangle: engineering on one leg and a combination of information technology and nanotechnology on a second leg. Then, we introduced biology on the third side to create synergy between them all. In many ways, it was the beginning of a new era: going from Newtonian mechanics to quantum physics to the principles of neurobiology. By starting to think in these terms, we began to view software not as deterministic code but rather as a flexible and "learnable" asset. If you look at how the mammalian brain processes data, it's rather slow, but it's massively parallel and it doesn't work on instruction-based rules. Plus, no two brains are alike, not even in identical twins. Ultimately, it is the environment, as well as the interaction in the environment, that determines how memory works, whether it's episodic or temporal memory.

Back at NASA, we held a workshop in the late 1990s with respected neurobiologists to help us understand the advances in neural science and the limitations of artificial intelligence. We also invited experts in the biomedical field to aid in the understanding of the human brain and how the neurobiological principles behind this amazing organ could be used to enable a revolutionary approach to embedded software systems. Our excitement grew as we began to imagine the possibilities. Take the example of an unmanned aerial vehicle (UAV). Instead of writing software code to control the various onboard processes, you have something you can train, something that you can port into another UAV once it becomes functional. Rather than being software-driven, you'd have a device that is controlled by an almost intelligent platform. It doesn't have actual emotion, but you can train it, for instance, to "feel" fear by teaching it to avoid certain hazards or dangerous conditions when those situations present themselves. This does not involve reaction to instinct, but it does constitute a first-level emotional response -- all without software programming or language.

At NASA, we conducted a number of experiments with robots. If one of the robots lost a sensor, it would ignore it and move to the remaining sensor sets that were still functional, whereas in a deterministic software system the device might go into an endless loop or simply shut down. An example of this is when a person loses one of his or her senses - e.g., sight - and the remaining senses compensate for this loss. This highlights the robust redundancy characteristics and the ability to integrate multiple sensors; it is similar to fuzzy logic but doesn't use rule-based or predetermined processes. The robot is utilizing environmental interaction with the ability to learn and anticipate and take actions on previously stored memories. It has what neurologists call "plasticity": the ability to form connections and reinforce connections based on previous training. The bottom line? The machine's performance is modeling that of the mammalian brain.

Actually, what we are talking about is not even software. It is, at its core, an entirely new engineering discipline, using neurobiological principles as its foundation. A number of academic institutions are advancing this science, including George Mason University, the University of California at Berkeley, Rutgers University, among others. There are also other schools starting to think in terms of formally integrating biology into computer science disciplines.
There are even people with PhDs in this field. And the National Science Foundation is exploring the idea of putting university activities together in this area. There is no doubt that a groundswell of support for this discipline is in its nascent stages. The truth is, programmers cannot keep writing millions of lines of code and expect reliability. The programs are getting too complex, which results in mistakes that simply cannot be caught. All of this represents a "change state" that started when the application of neural nets was instituted -- and it will continue unabated. In fact, it's already here in some forms, as evidenced by the fuzzy logic currently incorporated into the Japanese bullet trains. Further, the European and Asian markets are already well on their way to making substantial human and financial investment in this area -- even more so than the United States, where pockets of resistance still remain.
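The robot behavior and "plasticity" described above, where connections that are repeatedly active together are reinforced while unused ones fade, are often illustrated with a Hebbian-style update rule. The toy sketch below is only an illustration of that principle; it is not code from the NASA prototype, and the learning rate, sensor values, and failure step are arbitrary assumptions.

```typescript
// Toy synaptic plasticity (an Oja-style Hebbian update): a weight strengthens when its
// input and the unit's output are active together; the correction term keeps it bounded.
function plasticityStep(
  weights: number[],
  inputs: number[],
  output: number,
  learningRate = 0.05,
): number[] {
  return weights.map((w, i) => w + learningRate * output * (inputs[i] - output * w));
}

// The unit's output is the weighted sum of whichever sensors still report data, so a
// lost sensor simply stops contributing instead of halting the whole system.
function activate(weights: number[], sensors: Array<number | undefined>): number {
  return sensors.reduce<number>(
    (sum, s, i) => (s === undefined ? sum : sum + weights[i] * s),
    0,
  );
}

let weights = [0.2, 0.2, 0.2];
for (let step = 0; step < 200; step++) {
  const sensors = [1, 1, step < 100 ? 1 : undefined]; // the third sensor fails halfway
  const output = activate(weights, sensors);
  weights = plasticityStep(weights, sensors.map((s) => s ?? 0), output);
}
console.log(weights); // the failed sensor's weight decays; the surviving ones strengthen
```

Nothing here is "programmed" to handle the sensor failure; the adjustment falls out of the same update rule that did the learning, which is the property the article is pointing at.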
A Question of Ethics
It's important to note that the ethics of this discipline are not being ignored. The second we started talking about neurobiological principles at NASA, we brought in experts from various related fields to examine the moral concerns that could potentially be raised. Without question, when neurobiological topics are discussed, there is inevitable worry -- and understandably so -- from a segment of the public that wonders whether we should even be venturing into this realm. To highlight this concern, one should look no further than the case years ago when the U.S. Department of Agriculture was beginning to promote genetically engineered crops. This led to an outcry from the public that began to worry about "mutant tomatoes." And of course, cloning still remains an emotional, hot-button practice that promises significant medical breakthroughs but that raises legitimate ethical conundrums. Further, experimentation involving animals, even the lowest invertebrate life forms, stirs highly charged and visceral reactions. Witness the outcry when university researchers years ago made a computer of neuron cells from rats to control a Microsoft Flight Simulator. Consequently, nothing we had been doing at NASA involved any of these approaches; all work in the area of neurobiology centered on embedding neurobiological principles in electronics -- as opposed to the "wet" or molecular computing that has stirred so much controversy.

Despite the grim prospects for the software industry, shed no tears for its eventual demise. No other industry could provide a product with such a plethora of bugs, errors and malfunctions and still be considered a viable market. (Most people working in science use Linux systems because of their higher degree of reliability.)
Industry Applications
A handful of forward-thinking companies, including Asynchrony Solutions, have been investigating the ways that neurobiology can be applied to practical applications; the possibilities are virtually endless. Take healthcare, for example. Our engineers are researching different ways of displaying data, meaning that doctors will be able to have handheld nomadic mobile computing devices that allow them to get in contact with anyone anywhere -- much like an iPhone. In the defense industry, diverse information from many sources can be brought into the real-time battlefield environment in a multi-modal form that utilizes all the senses of a human operator. This ultimately allows commanders to make split-second tactical and strategic decisions. Ultimately, the adoption of neurobiology into engineering will help us to open up what a knowledge repository really is supposed to be -- including low-cost, wearable computing visualization capabilities. Our work in this area is still at the proof of principle level, but within a year or two, we're confident that some of these actual devices will be available for use.

In the end, when you start talking about intelligent, brain-based neurobiological principles, you open up a whole new venue in terms of what embedded computing hardware solutions become. You can really start to think about intelligent learning capabilities that go well beyond artificial intelligence and deterministic rule-based systems. This represents a major change in what software will become over the next decade. Remember: think computers and autonomous robots that have the ability to learn and adapt to their changing environment. This is not a movie -- this is the future.
by Shannon Appelcline
December 21, 2000 Well, it's the end of the year at least and the end of the century and the end of the millennium, if you count it as such. The scant vestiges of next week, jammed between Christmas and New Year's, are going to be busy enough that I've declared it column free. All will return to normal as 2001 dawns.
As you may have already seen, scattered about the site, we have lots of exciting plans for the new year. Here's the start:
March 1, 2001: Beta Release of "Galactic Emperor: Succession". The second Skotos game marks the beginning of our gaming community. It's a pretty wide departure from Castle Marrach. Galactic Emperor uses the same basic system, but is geared toward competition rather than cooperation. Each week, players vie to become the new Galactic Emperor.
April 1, 2001: Official Release of "Castle Marrach". We're working on the last few systems that we consider critical for release and are finishing up the geography of the initial Castle, and that should all be ready at the start of the second quarter, next year.
April 1, 2001: Skotos goes pay-for-play. We pulled our pay-for-play date back three months because we weren't ready to either beta "Galactic Emperor" or officially release "Marrach", but by next April (no fooling), we'll be set. When you officially sign up, you'll get a month's free play.
Late 2001: As we approach the end of 2001, there will be more cool stuff, including our third game and the release of Skotos Seven games. More dates as they become concrete, but in the meantime make sure you're looking at the rest of the articles to get insight into Arcana, Horizon Station, and Qi-gung, three of the next-generation Skotos games.
So that's where Skotos Tech is, at the cusp of the new year.
Heated Arguments
In the last weeks, the Skotos forums have seen something fairly new: heated arguments. The discourse has (uniquely) been of a very high quality, analytical and thoughtful. Nonetheless, it's been clear that there are some real emotions behind the words. I'm speaking of two different topics: the selection of mages and the selection of honored guests. Let me offer a quick review for those of you who haven't been following the controversies.
The Honored Guests: At the Winter Ball less than a dozen newly awakened guests were named Honored Guests and given access to the Inner Bailey. The controversy here regarded who was selected and who wasn't. Why were some guests who were totally unknown to the most recently awakened picked? Why were guests who have worked hard to better the Castle ignored?
The Mages: And at about the same time, the Castle's mage hopefuls, now six weeks into their studies with Serista, faced their first test and half of them failed. Players who'd put six weeks into this course were ... disgruntled.
I don't have any desire to dissect the actual specifics of these situations: what should and shouldn't have been done. The topic of mages has already been played out in the forums. Questions regarding honored guests tended to appear more in-game and in email, but there is a forum topic related to that question as well.
What I *do* want to do is examine the metaissues. What could we have done as StoryBuilders and as StoryTellers to ensure that players responded more positively to the situation, even if their characters failed to meet the goals they desired? What can future StoryBuilders do?
I think much of the problem has to do with the tone that we set for Castle Marrach in the first few weeks: what type of game we made Castle Marrach out to be. Let me explain that
WHAT IS THE INTERNET?
NOTE: This essay was originally written in late 1996, and by current standards is a little dated. However, its historical content remains correct, and it is retained here both as a basic lesson in early web history and as a glance back at how things were just as the web was on the cusp of becoming an integral part of daily life for society at large.Simply click on your browser's "BACK" button to return to the main Spider's Loom site.
You've heard of them by now: the Internet and the World Wide Web. They've been vilified and glorified and mythologized to the point where their lingo is part of everyday English... but what are they?
Chances are, you'll get a different answer from every person you ask even if you restrict your questioning to those people who work in, on, and around them every day. The following information is the Spider's Loom attempt at clearing your view of "the 'net" and "the web."
So what is this "World Wide Web" thing that everyone is talking about? In simple terms, the World Wide Web (or "WWW" or just "web") is a graphical interface to the resource- and information-rich confusion that is the Internet. Just as the icons and windows you use to interface with your Macintosh* or Microsoft Windows* computer let you accomplish your work (or play!) without having to handle all the details of the CPU's data processing, the icons and windows of a web browser let you "mine" the Internet for its rich variety of information and entertainment without having to learn the complex lingo of network protocols and operating systems.
And this Internet thing... what is it? A short definition would say that it's a global network formed by the interconnection of national/regional networks, each of which is in turn made up of smaller regional networks. While the term "information superhighway" has been used ad nauseam, it's still a good metaphor: the Internet is the "major highway" that a lot of smaller roads branch off from, with still-smaller residential roads branching off from them... except that it's data that travels on these "roads" instead of people.
Of course, none of this sprang into being overnight... let's take a look at a little modern history... Back to the top of this file
Before the Dawn of the Network Era
Although scientists in several other countries (most notably England) were involved in research of cybernetics and electronics, the United States became the first industrialized nation to actively build and make (relatively) wide use of electronic devices for the storage & processing of data: computers. Thanks to a great deal of government and military support (both money and manpower) and to the efforts made during World War II, the U.S. enjoys a comfortable lead over the rest of the world in the technology arena.
Then, in 1957, the Soviet Union put the world's first man-made object (Sputnik) into orbit. Part of the U.S. Government's response was to establish the Department of Defense (DoD) Advanced Research Projects Agency (ARPA) in order to re-establish the U.S. lead in military technology.
From the late 1950s and into the 1960s, the danger of having all military communications then dependent on a relatively small number of key sites knocked out in the event of a nuclear war between the US and USSR was of real concern. A number of researchers (most notably those at the RAND Corporation "think tank") developed and presented ideas on minimizing this danger. As early as 1962, RAND produced a paper entitled On Distributed Communications Networks which put forth the idea of a "packet-switching network" which had no single point of failure; if one link was knocked out, traffic was simply sent to the desired destination over a different route. A fleshed-out version of the plan was presented at the 1967 Association for Computing Machinery's Symposium on Operating Principles, and a formal presentation was finally made to ARPA in the spring of 1968.
The next year, DoD commissioned ARPAnet as a nationwide test bed of the packet-switching network concept. The first "node" (communications locus) of this new technology was established at UCLA, quickly followed by nodes at the Stanford Research Institute (SRI), UCSB, and the University of Utah. Later in 1969, Bolt Beranek and Newman, Inc. (BBN) developed the Interface Message Processor (IMP) that became standard equipment at each ARPAnet node and joined in.
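To make the packet-switching idea concrete, here is a minimal Python sketch (an illustration only, using an invented four-node mesh; it is not how ARPAnet routing actually worked) showing that when any single node is knocked out, a message can still reach its destination over an alternate route.

    from collections import deque

    # A toy mesh with redundant links: no single node sits on every path.
    links = {
        "A": {"B", "C"},
        "B": {"A", "D"},
        "C": {"A", "D"},
        "D": {"B", "C"},
    }

    def route(network, src, dst, down=frozenset()):
        """Breadth-first search for a working path from src to dst,
        skipping any nodes listed in `down` (simulating destroyed sites)."""
        frontier = deque([[src]])
        seen = {src}
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for nxt in network[path[-1]] - seen:
                if nxt not in down:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None  # destination unreachable

    print(route(links, "A", "D"))              # e.g. ['A', 'B', 'D']
    print(route(links, "A", "D", down={"B"}))  # rerouted around the failure: ['A', 'C', 'D']

The essential property is that every pair of nodes in the toy network is joined by more than one independent path, so no single failure can cut the rest of the network off.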
The Second Decade
By mid-1971, ARPAnet had grown from the original five nodes (and several more "hosts," or computers that were connected to the network but upon which the network did not depend for operations) to a grand total of 15 nodes (and 23 hosts) by adding nodes at CMU, CWRU, Harvard, Lincoln Lab, MIT, RAND, SDC, Stanford, UIUC, and NASA's Ames Research Center. By the next year, data transfers across the ARPAnet between as many as 40 computers were being demonstrated, the InterNetworking Working Group was created to establish standard communications protocols, and electronic mail was invented. A year later (1973), nodes in England and Norway became ARPAnet's first international links.
The first major commercial application of ARPAnet technology came in 1974 when BBN established Telenet, and large-scale global internetworking moved out of the military realm. The whole idea of wide-area networking caught on so well at the ARPAnet, Telenet, and other sites, that the semi-joking "Jargon File" was released in 1975, giving plain English translations for the growing lingo used by network engineers and aficionados.
ARPAnet and its siblings & offspring continued to grow quietly throughout the 1970s, with new protocols and networks being invented and installed almost every year. Arguably the two most famous, USENET and BITNET, marked the change of the decade.
USENET (commonly referred to as "Usenet News" or "news" or "newsfeed") was established between Duke University and the University of North Carolina in 1979 to act as a large-scale "bulletin board" on which people could exchange ideas and messages; although it requires the user to have certain software available, it becomes an immensely popular communications forum very quickly.
BITNET ("Because It's Time Network"), using a different technology, was established as a cooperative network at the City University of New York to give electronic mail capabilities to researchers who did not have access to the ARPAnet.
Other notable networking events in 1981 included the establishment of CSNET (Computer Science NETwork) in the U.S. specifically to provide networking services to university scientists with no access to ARPANET, and the deployment of Minitel (Teletel) across France by French Telecom.
The Beginning of Life As We Know It
The beginning of the 1980s marked a major explosion in wide-area networking, with new advances and networks coming in rapid succession:
Two new definitions are coined: "internet" becomes a connected set of networks, while "Internet" becomes the large (now almost global) collection of interconnected internets.
The TCP/IP protocol suite, still in use today, is established.
EUnet is established, providing ARPAnet-like connectivity across Western Europe.
The U.S. Department of Defense establishes the Defense Data Network due to concerns that there is now enough traffic on ARPAnet to interfere with military communications in time of emergency.
TCP/IP officially becomes the Internet protocol suite on January 1, 1983.
In the fall of 1992, I was taking university courses in Science Fiction literature and Science Fiction cinema. One of the courses required a short seminar, so for my topic I chose to combine the two. On October 19, I posted a message on Usenet asking for suggestions of films to add to my short list. There were 32 films on the list at that point.
On May 4, 1993, I had finally merged all the many suggestions from over 70 email replies generated by my original posting, and posted the updated list. It had already become large enough to split into SF, horror and fantasy sections, now totalling 271 films.
Sometime in late 1993 or early 1994 I got my first web site and posted the list there. At this time, it was still just three static pages.
November, 1997 saw a move from my former web host to my own server. This opened up many new options, and in January, 1998 I added simple scripts to search through each of the three genre-specific lists, generating a simple list of matches along with the basic details being kept at the time. Now, instead of maintaining HTML files, I maintained text files, and the HTML was generated automatically by the search scripts, greatly simplifying the process.
In June, 1998, the collection, which now included 512 films, was changed from flat text files to a simple Berkeley DB which could support the slightly more complex data relationships that were beginning to crop up. The three lists were merged, with a new indicator of which genre a movie belonged in. A new, unified search script was needed, and a second script was added to display details of a particular film. Maintenance was still done via text files which were then imported into the Berkeley DB format.
The site was named the Site of the Week for February 21, 2000 by Science Fiction Weekly.
In December, 2001 the site was promoted to a subdomain at http://fictionintofilm.trawna.com/ from its former subdirectory location at http://www.trawna.com/greg/movies/
By October, 2002, the web site was becoming too complex to be supported by the simple Berkeley DB files that were being searched. The switch was made to a MySQL database. Taking advantage of the flexibility provided by the new database structure, specific searches based on author and film release date were added. Even now, maintenance was done on text files (albeit files that were now easier to update), which were imported into the database. Having been busy with other priorities, the database had only grown marginally to 569 films by this date.
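(For illustration only: the site's real schema and queries are not documented in this history, but an author-plus-release-date search of the kind described might look roughly like the sketch below. It uses Python's built-in sqlite3 module for portability, although the production site used MySQL, and the table and column names here are invented.)

    import sqlite3

    # Hypothetical, simplified schema for a fiction-into-film catalogue.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE films (
                        title    TEXT,
                        author   TEXT,     -- author of the source fiction
                        released INTEGER,  -- film release year
                        genre    TEXT)""")
    conn.execute("INSERT INTO films VALUES ('The Tempest', 'William Shakespeare', 2010, 'fantasy')")

    # The kind of author + release-date search the new database structure made easy:
    rows = conn.execute(
        "SELECT title, released FROM films WHERE author LIKE ? AND released BETWEEN ? AND ?",
        ("%Shakespeare%", 2000, 2015),
    ).fetchall()
    print(rows)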
June, 2003 saw the introduction of a new server with greatly increased capabilities. This was necessitated by increased traffic; in particular some spiders would submit requests for every film's details in a very short time, and the old server (a 200MHz Pentium with 48Mb of RAM) would get bogged down by so many simultaneous requests. The database stood at 616 films at this time.
The first user survey began in July, 2003.
In August, 2003, I finally put together an administrative form that I could use to manipulate the database directly, simplifying the maintenance process tremendously.
March, 2005 saw the first of a new set of features aimed at developing a "community" feeling, by allowing users who have signed up to leave comments and ratings. Five new scripts were required just to handle user maintenance, a big expansion from the total of six which previously ran the whole site. The database numbered 837 films at this point.
April, 2005 was a busy month with improved formatting, substantial new data (including over 1500 new Amazon links and an increase in film count to 916), and the registration of a new, dedicated domain: http://fifdb.com/.
In December, 2008, the entire site was re-implemented in PHP, using the CakePHP framework and a modified version of the Wildflower CMS, thereby resolving some recent hosting issues and adding long-desired functionality.
A I’d like to, but most dictionaries just say, very cautiously and flatly, “origin unknown”, and I can’t do much to improve on that verdict.
The phrase is now a common American expression meaning that some mechanism is malfunctioning or broken: “The washing machine’s on the fritz again” (the British and Australian equivalent would be on the blink). However, when it first appeared — about 1902 — it meant that something was in a bad way or bad condition. Early recorded examples refer to the poor state of some domestic affairs, the lack of success of a stage show, and an injured leg — not a machine or device in sight.
Some people, especially the late John Ciardi, the American poet and writer on words, have suggested it might be an imitation of the pfzt noise that a faulty connection in an electrical machine might make, or the sound of a fuse blowing. This theory falls down because none of the early examples is connected with electrical devices, and the phrase pre-dates widespread use of electricity anyway.
Others feel it must be connected with Fritz, the nickname for a German soldier. It’s a seductive idea. There’s one problem, though — that nickname didn’t really start to appear until World War One, about 1914, long after the saying had been coined.
William and Mary Morris, in the Morris Dictionary of Word and Phrase Origins, suggest that it may nevertheless have come from someone called Fritz — in the comic strip called The Katzenjammer Kids. In this two youngsters called Hans and Fritz got up to some awful capers, fouling things up and definitely putting the plans of other members of the strip community on the Fritz. The strip appeared in newspapers from 1897 onwards, so the dates fit rather nicely. But there’s no evidence that confirms it so far as I know. There’s also the key question: why don’t we talk about being on the Hans?
As is so often, Mr Leneker, I’ve gone around the houses, considered this theory and that, but come to no very definite conclusion. But the truth is that nobody really knows, nor now is ever likely to.
Copyright © Michael Quinion, 1996–. All rights reserved. Page created 11 Aug. 2001.
Aaron Colter —
Check out our review of the Ouya Android-based gaming console.
Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front.
“Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013.
While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be.
As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields.
Expect to hear more solid information on the Ouya and its games line up as we inch closer to its nebulous 2013 release. Hopefully for the system’s numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not. | 计算机 |
INVENTION OF EMAIL
DEFINITION OF EMAIL
FALSE CLAIMS ABOUT EMAIL
HISTORY OF EMAIL
THE FIRST EMAIL SYSTEM
BEYOND EMAIL Celebrate the 30th Anniversary of Email - $100,000 Inner City Innovation Fund
The Inventor of Email is V.A. Shiva Ayyadurai - The Facts
In 1978, a 14-year-old named V.A. Shiva Ayyadurai developed a computer program, which replicated the features of the interoffice, inter-organizational paper mail system. He named his program “EMAIL”. Shiva filed an application for copyright in his program and in 1982 the United States Copyright Office issued a Certificate of Registration, No. TXu-111-775, to him on the program. First US Copyright for "EMAIL, Computer Program for Electronic Mail System" issued to V.A. Shiva Ayyadurai.
As required by the Regulations of the Copyright Office, he deposited portions of the original source code with the program. Prominent in the code is the name “EMAIL” that he gave to the program. He received a second Certificate of Registration, No. TXu-108-715, for the “EMAIL User’s Manual” he had prepared to accompany the program and that taught unsophisticated user’s how to use EMAIL’s features.
Recently however, a substantial controversy has arisen as to who invented email. This controversy has resulted in an unfortunate series of attacks on Shiva. Part of the problem is that different people use to the term to mean somewhat different things.
The Man Who Invented Email™
In the summer of 1978, Shiva had been recruited for programming assignments at the University of Medicine and Dentistry of New Jersey (UMDNJ) in Newark, New Jersey. One of his supervisors, Dr. Leslie P. Michelson, recognized his abilities and challenged him to translate the conventional paper-based interoffice and inter-organizational communication system (i.e., paper-based mail and memoranda) to an electronic communication system. Systems for communications among widely dispersed computers were in existence at the time, but they were primitive and their usage was largely confined to computer scientists and specialists.
TIME Article, "The Man who Invented Email", an interview with V.A. Shiva Ayyadurai.
Shiva envisioned something simpler, something that everyone, from secretary to CEO, could use to quickly and reliably send and receive digital messages. Shiva embraced the project and began by performing a thorough evaluation of UMDNJ's paper-based mail system, the same as that used in offices and organizations around the world. Article in The Verge reviewing the facts around V.A. Shiva Ayyadurai's invention of email.
He determined that the essential features of these systems included functions corresponding to “Inbox”, “Outbox”, “Drafts”, “Memo” (“To:”, “From:”, “Date:”, “Subject:”, “Body:”,
“Cc:”, “Bcc:”), “Attachments”, “Folders”, “Compose”, “Forward”, “Reply”, “Address Book”, “Groups”, “Return Receipt”, “Sorting”. These capabilities were all to be provided in a software program having a sufficiently simple interface that needed no expertise in computer systems to use efficiently to “Send” and “Receive” mail electronically. It is these features that make his program “email” and that distinguish “email” from prior electronic communications. Shiva went on to be recognized by the Westinghouse Science Talent Search Honors Group for his invention. The Massachusetts Institute of Technology highlighted his invention as one among four, in the incoming Freshman class of 1,040 students. His papers, documenting the invention of email were accepted by Smithsonian Institution. These are facts based on legal, governmental and institutional recognition and substantiation, and there is no disputing it.
Misconceptions About Email
Standard histories of the Internet, however, are full of claims that certain individuals (and teams) in the ARPAnet environment and other large companies in the 1970s and 1980s “invented email.” For example, the familiar “@” sign, early programs for sending and receiving messages, and technical specifications known as RFCs, are examples of such false claims to “email”. But as some claimants have admitted, even as late as December 1977, none of these innovations were intended to emulate the paper-based mail system - Inbox, Memo, Outbox, Folders, Address Book, etc. Sending text messages electronically could be said to date back to the Morse code telegraph of the mid 1800s; or the 1939 World's Fair where IBM sent a message of congratulations from San Francisco to New York on an IBM radio-type, calling it a “high-speed substitute for mail service in the world of tomorrow.” A copy of code sample in Wired magazine showing V.A. Shiva Ayyadurai's invention of email in 1978.
The original text message, electronic transfer of content or images, ARPANET messaging, and even the “@” sign were used in primitive electronic communication systems. While the technology pioneers who created these systems should be heralded for their efforts, and given credit for their specific accomplishments and contributions, these early computer programs were clearly not email.
The Unfortunate Reaction to the Invention of Email
V.A. Shiva Ayyadurai describes his path to The Smithsonian Museum in an interview with The Washington Post.
Based on false claims, over the past year (since the acceptance of Shiva's documents into the Smithsonian), industry insiders have chosen to launch an irrational denial of the invention. There is no direct dispute of the invention Copyright, but rather inaccurate claims, false statements, and personal attacks waged against Shiva. Attackers are attempting to discredit him, and his life's work. He has received threatening phone calls, unfair online comments, and his name and work has been maligned. It is but a sad commentary that a vocal minority have elected to hijack his accomplishment, apparently not satisfied with the recognition they have already received for their contributions to the field of text messaging. Following the Smithsonian news, they went into action. They began historical revisionism on their own “History of Electronic Mail” to hide the facts.
They enlisted “historians” who started discussions among themselves to redefine the term “email” so as to credit their own work done prior to 1978, as “email”. More blatantly, they registered the InternetHallofFame.Org web site, seven (7) days after the Smithsonian news and issued a new award to one of their own as “inventor of email”. Through the PR machine of BBN (a multi-billion dollar company), they were proclaimed as the “king of email”, and “godfather of email”. These actions were taken to protect their false branding and diminish the accolades and just recognition Shiva was beginning to receive. Shiva’s news likely threatens BBN’s entire brand, which has deliberately juxtaposed “innovation”, with the “@” logo, along with the face of their mascot, the self-proclaimed “inventor of email”. They have removed damaging references to eminent Internet pioneers of the time such as MA Padlipsky who exposed their lies, and showed that BBN’s mascot, was not the “inventor of email”.
Some industry insiders have even gone to the extent, in the midst of the overwhelming facts, to now attempt to confuse the public that "EMAIL" is not "email". It is a fact that the term "email", the juxtaposition of those five characters "e", "m", "a", "i" and "l", did not exist prior to 1978. The naming of the software program EMAIL in all capitals was because at UMDNJ, the names of software programs, subroutines and variables written in FORTRAN IV used the upper-case naming convention. Moreover, at that time, the use of upper case for the naming of programs, subroutine and variable names, was also a carry over from the days of writing software programs using punch cards. The fact is EMAIL is email, upper case, lower case, any case.
Sadly, some of these individuals have even gone further, deciding that false allegations are insufficient to make their case and have resorted to character assassination of the most debased nature including removal and destruction of facts on Wikipedia to discredit Shiva as an inventor of any kind. Threatening and racist emails telling him “to hang himself by his dhothi”, blogs referring to him as a “flagrant fraud”, and comments that EMAIL was “not an invention” are beyond disbelief, and reflect a parochial attitude that innovation can only take place in large universities, big companies, and the military. As MIT's Institute Professor Noam Chomsky reflected: “The efforts to belittle the innovation of a 14-year-old child should lead to reflection on the larger story of how power is gained, maintained, and expanded, and the need to encourage, not undermine, the capacities for creative inquiry that are widely shared and could flourish, if recognized and given the support they deserve.“
Of course a claim such as “I invented email” will leave anyone open to criticism and doubt, and as some suggest “hatred”. In this case, the victim has not made a “claim”, but rather been recognized by the government and top educational institutions in the world as an inventor. Regardless of the vitriol, animosity and bigotry by a vocal minority, a simple truth stands: email was invented by a 14-year-old working in Newark, NJ in 1978.
This is a fact. Innovation can occur, any place, any time, by anybody.
Beyond the invention of email in 1978, Dr. V.A. Shiva Ayyadurai, who holds four degrees from MIT, is a Fullbright Scholar, MIT-Lemelson Awards Finalist and Westinghouse Science Talent Search Honors Award recipient, has continued his work as both an inventor and systems scientist across a broad range of fields from media to medicine, art and technology. After inventing email, he went on to create Arts Online - the first internet portal for artists, in 1993. After winning a competition categorizing email for the White House, he started EchoMail, a company that provided Global 2000 companies an enterprise platform for email and social media management. His deep interest in medicine and biology led him to create CytoSolve in 2007, a company dedicated to revolutionizing drug development through in-silico modelling. His dedication as an educator at MIT and to the general public led him to create Systems Visualization, a new course he pioneered at MIT, which has now become one of the most popular institute-wide interdisciplinary courses. To educate medical doctors, healthcare and holistic practitioners on the bridge between eastern and western medicice, he created Systems Health™, a revolutionary education program that serves as a gateway for integrative medicine. Dr. VA Shiva, to share his love of systems, developed vashiva.com to serve as an educational source for the general public to learn how systems are the basis of understanding their bodies, their business, and the world around them.
To support innovation among youth, he started Innovation Corps, a project of his center - International Center for Integrative Systems. In his birthplace in India, he deployed Tamilnadu.com, a portal dedicated to organizing content featuring the arts and culture of the state of Tamil Nadu. More than anything else, as the Inventor of Email, his achievements provide an inspiring message to youth across the world - be it in inner cities or villages.
Example of Artifacts Submitted to Smithsonian (Feb 2012)
Learning Programming @ NYU, 1978
Professor Henry Mullish of the Courant Institute of Mathematical Science at New York University (NYU), a visionary, who recognized the importance of training America's future engineers and scientists in computer programming, organized a highly selective and intensive 8-week program. This program had 40 openings for students in the New York area. Shiva was one of the few who was fortunate to get accepted, after hearing about the program from Martin Feuerman, a colleague of Meenakshi Ayyadurai (Shiva's mom) at UMDNJ. The program offered both class room training by NYU graduate students and faculty, as well as a rigorous lab component for each of the classes in FORTRAN, PL/I, COBOL, SNOBOL, BASIC, Digital Circuit Design, ARTSPEAK. All of this was done in punch cards on old CDC main frames. Shiva graduated with Distinction and was one of the youngest of the entire group. He also won a special Computer Arts Award for artwork he developed using ARTSPEAK, one of the earliest computer graphic languages.
EMAIL was named in 1978 in FORTRAN IV
The FORTRAN IV compiler at the time being used at UMDNJ had a six-letter character limit on naming variables and subroutines. The operating system had an additional limit of five characters for the name of main programs. “EMAIL” was chosen as the name of the computer program for the system which would emulate the interoffice, inter-organizational mail system. This printout is just one example of the nearly 50,000 lines of code (there are other such examples on this website), that was submitted to the US Copyright Office and donated to the Smithsonian.
First Email System, 1980
This article appeared in the West Essex Tribune entitled “Livingston Student Designs Electronic Mail System” on October 30, 1980.His independent study teacher and coordinator at Livingston High School, Stella Oleksiak, was persistent with the Superintendent of Schools, the Principal and other teachers, who originally did not want to allow a student to travel back and forth to work in Newark, NJ, for a variety of reasons. Through her efforts and the support of Dr. Leslie P. Michelson, Shiva was allowed, starting in 1978, to do the independent study. This article was an important one, for it demonstrated to the local school board and others, that the concept of Independent Study could lead to fruitful results. Each day, Shiva traveled nearly 30 miles to UMDNJ.
Westinghouse Award Entry, 1981
This is the original of the Westinghouse Award Entry that Shiva submitted to the Westinghouse Science Talent Search Committee in 1981. The review of this document is what was used to determine the issuance of the Westinghouse Science Talent Honors Award.By reviewing this document, one can see some of the details of his thinking, design approach and thoughts on where email could go in the future. The document is typewritten. This was at a time when programs like MS Word, Powerpoint, Adobe, etc. did not exist.
Westinghouse Award Finalist, 1981
The Westinghouse Science Talent Search, now known as the Intel Science Talent Search to, has been referred to as the ‘Baby Nobels.’”In 1981, V.A. Shiva Ayyadurai was awarded an Honors Group award for “The Software Design, Development and Implementation of a High-Reliability Network-Wide Electronic Mail System.”
MIT Tech Talk, 1981
The incoming Class of 1985 to MIT entered in the Fall of 1981. The front-page article in Tech Talk, MIT's official newspaper highlighted the work of four incoming students, one being the invention of Email. Even while at MIT, Shiva continued, for a few more years to consult as a Research Fellow for UMDNJ to continue additional work on Email.
First U.S. Copyright for EMAIL, 1982
In 1982, the U.S. Copyright Office issued TXu-111-775, the first Copyright for Email, to Shiva Ayyadurai. At that time, the only protection available for software was through Copyright. The U.S. intellectual property laws, at that time, treated software similar to music, art or literary work. The original Copyright application was submitted in 1980.
COMAND, 1982
Dr. Leslie Michelson, Ph.D., a former physicist was Shiva's mentor, who provided this unique opportunity and access to infrastructure at UMDNJ as well as other colleagues twenty to fifty years older than him. Dr. Michelson recruited Shiva to be a Scholar in his Lab after hearing about his work and results at the NYU Summer program, offered by Henry Mullish. Initially, there was no pay offered, but free lunch at the UMDNJ cafeteria. Later on, he earned $1.25 per hour. The screen you see on the left was the size of the display that one had to work with.
EMAIL User's Manual Copyright, 1982
Every software system needs a User's Manual, so did the world's first email system. At that time, Shiva was everything on the project: software engineer, network manager, project manager, architect, quality assurance AND technical writer.The User's Manual for which he received Copyright TXu-108-715 was also tested. He wrote and updated multiple versions based on feedback from his user base of doctors. It had to be easy-to-read and accessible to all.
EMS Copyright, 1984
In 1984, the U.S. Copyright Office issued TXu-169-126, the first Copyright for EMS (EMAIL Management System), to (V.A.) Shiva Ayyadurai. The EMAIL copyright had been awarded to Shiva two years earlier. This copyright recognized his additional contribution for creating all the internal tools needed by system administrators to maintain EMAIL and messages long-term, e.g. archiving, password management, etc.Even in 1984, the only protection available for software was through Copyright. The U.S. intellectual property laws, at that time, treated software similar to music, art or literary work. The original Copyright application was submitted in 1980.
U.S. Patent: Relationship Management System and Method using Asynchronous Electronic Messaging, 2003
This patent was awarded in recognition for Shiva's contribution to create a holistic and integrated system for electronic management of on-line relationships through the use of ANY asynchronous electronic messages. This broad patent was issued after considerable deliberation by the USPTO.Shiva, at one point, was asked to appear in Washington, DC at the USPTO to explain aspects, given the broad protection for all asynchronous messaging he was seeking.
U.S Patent: System and Method for Content-Sensitive Automatic Reply Message Generation, 2004
In 2004, over 20 years after creating the world's first email system, Shiva was issued U.S. Patent #6,718,368 for inventing a method for automatically analyzing an email and formulating a response. This patent was issued to Shiva's company General Interactive, LLC, which developed the product EchoMail. EchoMail became the leading application for intelligent message analysis, sorting and routing used by such companies as Hilton, QVC, Citigroup and others. The patent enabled the automatic and adaptive retrieval of information from a database while enabling the transmission of reply messages based on content of a received message.
U.S. Patent: Filter for Modeling System and Method for Handling and Routing of Text-Based Asynchronous Communications, 2004
This patent was awarded to Shiva for developing a unique method to route incoming text-based asynchronous communications. The system applied pattern analysis methods of Feature Extraction, Clustering and Learning, deploying a hybrid and integrated systems model of nearly 19 different technologies in a unique frame work.
For the British video game development studio, see The Chinese Room.
Illustration of Searle's Chinese room
The Chinese room is a thought experiment presented by the philosopher John Searle to challenge the claim that it is possible for a computer running a program to have a "mind" and "consciousness"[a] in the same sense that people do, simply by virtue of running the right program. The experiment is intended to help refute a philosophical position that Searle named "strong AI":
"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[b]
To contest this view, Searle writes in his first description of the argument: "Suppose that I'm locked in a room and ... that I know no Chinese, either written or spoken". He further supposes that he has a set of rules in English that "enable me to correlate one set of formal symbols with another set of formal symbols", that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions – who do understand Chinese – are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in a written language, the computer executing the program would not understand the conversation either.
The experiment is the centerpiece of Searle's Chinese room argument which holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently it may make it behave. The argument is directed against the philosophical positions of functionalism and computationalism,[1] which hold that the mind may be viewed as an information processing system operating on formal symbols. Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[2] The argument applies only to digital computers and does not apply to machines in general.[3] This kind of argument against AI was described by John Haugeland as the "hollow shell" argument.[4]
Searle's argument first appeared in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since.[5]
1 Chinese room thought experiment
3 Philosophy
3.1 Strong AI
3.2 Strong AI as computationalism or functionalism
3.3 Strong AI vs. biological naturalism
3.4 Consciousness
4 Computer science
4.1 Strong AI vs. AI research
4.2 Turing test
4.3 Symbol processing
4.4 Chinese room and universal computation
5 Complete argument
6.1 Systems and virtual mind replies: finding the mind
6.2 Robot and semantics replies: finding the meaning
6.3 Brain simulation and connectionist replies: redesigning the room
6.4 Speed and complexity: appeals to intuition
6.5 Other minds and zombies: meaninglessness
Chinese room thought experiment[edit]
John Searle
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he is talking to another Chinese-speaking human being.
The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?[6][c] Searle calls the first position "strong AI" and the latter "weak AI".[d]
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese,"[9] he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.
Gottfried Leibniz made a similar argument in 1714 against mechanism (the position that the mind is a machine and nothing more). Leibniz used the thought experiment of expanding the brain until it was the size of a mill.[10] Leibniz found it difficult to imagine that a "mind" capable of "perception" could be constructed using only mechanical processes.[e] In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called the China brain, also the "Chinese Nation" or the "Chinese Gym".[11]
The Chinese Room Argument was introduced in Searle's 1980 paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences.[12] It eventually became the journal's "most influential target article",[5] generating an enormous number of commentaries and responses in the ensuing decades. Searle expanded on the Chinese Room argument's themes in his article entitled "Analytic Philosophy and Mental Phenomena,[13]" published in 1981.
David Cole writes that "the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear in the past 25 years".[14]
Most of the discussion consists of attempts to refute it. "The overwhelming majority," notes BBS editor Stevan Harnad,[f] "still think that the Chinese Room Argument is dead wrong."[15] The sheer volume of the literature that has grown up around it inspired Pat Hayes to quip that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false".[16]
Searle's paper has become "something of a classic in cognitive science," according to Harnad.[15] Varol Akman agrees, and has described his paper as "an exemplar of philosophical clarity and purity".[17]
Philosophy[edit]
Although the Chinese Room argument was originally presented in reaction to the statements of AI researchers, philosophers have come to view it as an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind,[g] and is related to such questions as the mind–body problem, the problem of other minds, the symbol-grounding problem, and the hard problem of consciousness.[a]
Strong AI[edit]
Searle identified a philosophical position he calls "strong AI":
The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[b]
The definition hinges on the distinction between simulating a mind and actually having a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."[7]
The position is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that "there are now in the world machines that think, that learn and create"[23][h] and claimed that they had "solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind."[24] John Haugeland wrote that "AI wants only the genuine article: machines with minds, in the full and literal sense. This is not science fiction, but real science, based on a theoretical conception as deep as it is daring: namely, we are, at root, computers ourselves."[25]
Searle also ascribes the following positions to advocates of strong AI:
AI systems can be used to explain the mind;[d]
The study of the brain is irrelevant to the study of the mind;[i] and
The Turing test is adequate for establishing the existence of mental states.[j]
Strong AI as computationalism or functionalism[edit]
In more recent presentations of the Chinese room argument, Searle has identified "strong AI" as "computer functionalism" (a term he attributes to Daniel Dennett).[1][30] Functionalism is a position in modern philosophy of mind that holds that we can define mental phenomena (such as beliefs, desires, and perceptions) by describing their functions in relation to each other and to the outside world. Because a computer program can accurately represent functional relationships as relationships between symbols, a computer can have mental phenomena if it runs the right program, according to functionalism.
Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike "strong AI") that is actually held by many thinkers, and hence one worth refuting."[31] Computationalism[k] is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.
Each of the following, according to Harnad, is a "tenet" of computationalism:[34]
Mental states are computational states (which is why computers can have mental states and help to explain the mind);
Computational states are implementation-independent — in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and that
Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.
Strong AI vs. biological naturalism[edit]
Searle holds a philosophical position he calls "biological naturalism": that consciousness[a] and understanding require specific biological machinery that is found in brains. He writes "brains cause minds"[3] and that "actual human mental phenomena [are] dependent on actual physical–chemical properties of actual human brains".[35] Searle argues that this machinery (known to neuroscience as the "neural correlates of consciousness") must have some (unspecified) "causal powers" that permit the human experience of consciousness.[36] Searle's faith in the existence of these powers has been criticized.[l]
Searle does not disagree with the notion that machines can have consciousness and understanding, because, as he writes, "we are precisely such machines".[3] Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using machinery that is non-computational. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding. However, without the specific machinery required, Searle does not believe that consciousness can occur.
Biological naturalism implies that one cannot determine if the experience of consciousness is occurring merely by examining how a system functions, because the specific machinery of the brain is essential. Thus, biological naturalism is directly opposed to both behaviorism and functionalism (including "computer functionalism" or "strong AI").[37] Biological naturalism is similar to identity theory (the position that mental states are "identical to" or "composed of" neurological events), however, Searle has specific technical objections to identity theory.[38][m] Searle's biological naturalism and strong AI are both opposed to Cartesian dualism,[37] the classical idea that the brain and mind are made of different "substances". Indeed, Searle accuses strong AI of dualism, writing that "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter."[26]
Consciousness[edit]
Searle's original presentation emphasized "understanding"—that is, mental states with what philosophers call "intentionality"—and did not directly address other closely related ideas such as "consciousness". However, in more recent presentations Searle has included consciousness as the real target of the argument.[1]
Computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases.[39]
— John R. Searle, Consciousness and Language, p. 16
David Chalmers writes "it is fairly clear that consciousness is at the root of the matter" of the Chinese room.[40]
Colin McGinn argues that that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency inhabits the room or some clever simulation.[41]
Searle argues that this is only true for an observer outside of the room. The whole point of the thought experiment is to put someone inside the room, where they can directly observe the operations of consciousness. Searle claims that from his vantage point within the room there is nothing he can see that could imaginably give rise to consciousness, other than himself, and clearly he does not have a mind that can speak Chinese.
Computer science[edit]
The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields.[2] However, several concepts developed by computer scientists are essential to understanding the argument, including symbol processing, Turing machines, Turing completeness, and the Turing test.
Strong AI vs. AI research[edit]
Searle's arguments are not usually considered an issue for AI research. Stuart Russell and Peter Norvig observe that most AI researchers "don't care about the strong AI hypothesis—as long as the program works, they don't care whether you call it a simulation of intelligence or real intelligence."[2] The primary mission of artificial intelligence research is only to create useful systems that act intelligently, and it does not matter if the intelligence is "merely" a simulation.
Searle does not disagree that AI research can create machines that are capable of highly intelligent behavior. The Chinese room argument leaves open the possibility that a digital machine could be built that acts more intelligent than a person, but does not have a mind or intentionality in the same way that brains do. Indeed, Searle writes that "the Chinese room argument ... assumes complete success on the part of artificial intelligence in simulating human cognition."[42]
Searle's "strong AI" should not be confused with "strong AI" as defined by Ray Kurzweil and other futurists,[43] who use the term to describe machine intelligence that rivals or exceeds human intelligence. Kurzweil is concerned primarily with the amount of intelligence displayed by the machine, whereas Searle's argument sets no limit on this. Searle argues that even a super-intelligent machine would not necessarily have a mind and consciousness.
Turing test
Main article: Turing test
The "standard interpretation" of the Turing Test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination. Image adapted from Saygin, 2000.[44]
The Chinese room implements a version of the Turing test.[45] Alan Turing introduced the test in 1950 to help answer the question "can machines think?" In the standard version, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.
Turing then considered each possible objection to the proposal "machines can think", and found that there are simple, obvious answers if the question is de-mystified in this way. He did not, however, intend for the test to measure for the presence of "consciousness" or "understanding". He did not believe this was relevant to the issues that he was addressing. He wrote:
I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[45]
To Searle, as a philosopher investigating the nature of mind and consciousness, these are the relevant mysteries. The Chinese room is designed to show that the Turing test is insufficient to detect the presence of consciousness, even if the room can behave or function as a conscious mind would.
Symbol processing
Main article: Physical symbol system
The Chinese room (like all modern computers) manipulates physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic.
Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax rules, without any knowledge of the symbols' semantics (that is, their meaning).
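A toy sketch (not drawn from Searle or the cited literature) can make the syntax/semantics distinction concrete. The rule entries below are invented placeholders, and any real conversational program would be vastly larger; the point is only that the "operator" applying the rules never needs to know what any symbol means.

```python
# Illustrative toy only: a rule table pairs incoming symbol strings with
# outgoing symbol strings. The operator applying it matches shapes, nothing more.
RULE_BOOK = {
    "你好吗": "我很好, 谢谢",      # hypothetical entries; treated as opaque squiggles
    "你叫什么名字": "我没有名字",
}

def operate(input_symbols: str) -> str:
    """Return the response dictated by the rule book, or a default squiggle."""
    return RULE_BOOK.get(input_symbols, "对不起")  # purely syntactic lookup, no understanding

if __name__ == "__main__":
    print(operate("你好吗"))  # produces a sensible-looking reply with zero semantics
```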
Newell and Simon had conjectured that a physical symbol system (such as a digital computer) had all the necessary machinery for "general intelligent action", or, as it is known today, artificial general intelligence. They framed this as a philosophical position, the physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."[46][47] The Chinese room argument does not refute this, because it is framed in terms of "intelligent action", i.e. the external behavior of the machine, rather than the presence or absence of understanding, consciousness and mind.
Chinese room and universal computation
See also: Turing completeness and Church-Turing thesis
The Chinese room has a design analogous to that of a modern computer. It has a Von Neumann architecture, which consists of a program (the book of instructions), some memory (the papers and file cabinets), a CPU which follows the instructions (the man), and a means to write symbols in memory (the pencil and eraser). A machine with this design is known in theoretical computer science as "Turing complete", because it has the necessary machinery to carry out any computation that a Turing machine can do, and therefore it is capable of doing a step-by-step simulation of any other digital machine, given enough memory and time. Alan Turing writes, "all digital computers are in a sense equivalent."[48] The widely accepted Church-Turing thesis holds that any function computable by an effective procedure is computable by a Turing machine. In other words, the Chinese room can do whatever any other digital computer can do (albeit much, much more slowly).
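As a rough illustration of the Von Neumann analogy (an informal sketch, not taken from the sources cited above), the room's book, papers, and operator map onto a minimal fetch-execute loop. The instruction set here is invented purely for the example.

```python
# A minimal fetch-execute loop: "book" = program, "papers" = memory,
# and the loop body = the man following one instruction at a time.
book = [                      # hypothetical instruction list
    ("write", "x", 2),        # write symbol 2 on the paper labelled x
    ("add",   "x", 3),        # add 3 to whatever is on paper x
    ("copy",  "x", "y"),      # copy paper x onto paper y
]
papers = {}                   # the filing cabinet

for op, a, b in book:         # the man works through the book step by step
    if op == "write":
        papers[a] = b
    elif op == "add":
        papers[a] = papers[a] + b
    elif op == "copy":
        papers[b] = papers[a]

print(papers)                 # {'x': 5, 'y': 5}
```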
Some replies to Searle begin by arguing that the room, as described, cannot have a Chinese-speaking mind. Arguments of this form, according to Stevan Harnad, are "no refutation (but rather an affirmation)".[49]
2015-40/2213/en_head.json.gz/6059 | Definition of Babylon%
Definition of Babylon 5
Epic United States science fiction TV series produced in 1994 and broadcast through 1998.
Babylon 5 Definition from Computer & Internet Dictionaries & Glossaries (Computer Games Dictionaries)
Babylon 5 - The place to be ...
Length: 8,592 m
Mass: 9.1 billion metric tons
Fighters: 48
Power source: 8 fusion reactors
Weapon systems: twin particle arrays, particle laser cannon, pulse cannons, plasma cannons
Defense systems: 8-12 m armored hull, Mk. II defense grid

Babylon 5 is the last of the Babylon stations. Earth decided to cut money for construction after the first four stations failed. The station is only half a mile shorter than Babylon 4 but has living space for only about 250,000 people. In contrast to Babylon 4, a single-rotation system was used this time to save money. In addition, the defense grid and fighter complement were not as large as those used on Babylon 4. The defense grid was later upgraded to provide enough firepower to destroy a battle cruiser without any assistance. The station is located in orbit around Epsilon 3, and a jumpgate has been installed in the same sector. The main purpose of Babylon 5 is to act as a trade and diplomacy outpost. The commander of the station has the status of an ambassador and is empowered to speak on behalf of his government, the way that a ship's captain exploring new worlds is empowered. Maintaining the station is very expensive because all goods have to be shipped in using transport vessels. Every transport that wants to use Babylon 5 as a docking station has to pay a certain amount of money, and all people entering the station have to pay a fee. No weapons are allowed on the station except for those of station security personnel. Babylon 5 is divided into several sectors, which allows easy navigation in all parts of the station.

Blue Sector: This sector houses the docking bays, medlabs, Babylon 5 Advisory Council chambers, and Earth Force personnel quarters. Smaller ships usually use the docking bays inside the station to trade their goods. The launch bays (named Cobra Bays) for the Starfuries are located along the outside of the blue sector. Other installations include Customs and Disembarkation.
Brown Sector: Located next to the blue sector, this sector is used for transient habitation and commerce. A special section is reserved for aliens that require special atmospheric conditions.
Green Sector: The green sector is reserved for diplomatic habitation. All alien ambassadors and their assistants live and work in this sector. The meeting and conference rooms can be used by all ambassadors upon request.
Red Sector: The biggest attraction in the red sector is the garden, which provides most of the food, water, and fresh air for the whole station. The garden is about 12 square miles in size and also has some areas for recreation, such as a baseball field. The second attraction, "The Zocalo", is a big shopping plaza with hotels, bars, shops, and more.
Gray Sector: The gray sector is the industrial center of Babylon 5. All of the larger facilities and construction areas are located in this sector. Only station personnel are allowed to enter this sector.
Yellow Sector: This sector is mainly located outside along the station. The front part houses the zero-G docking bay, which allows very big vessels to dock directly at Babylon 5. The middle part of this sector is used for storage and exchange of containers.
Downbelow: This slang term is used for all "unused" parts of the station where homeless people try to live.
Babylon 5 Definition from Encyclopedia Dictionaries & Glossaries (Wikipedia Dictionaries)
Wikipedia English - The Free Encyclopedia
Babylon 5 is an American space opera television series created by writer and producer J. Michael Straczynski, under the Babylonian Productions label, in association with Straczynski's Synthetic Worlds Ltd. and Warner Bros. Domestic Television. After the successful airing of a backdoor pilot movie, Warner Bros. commissioned the series as part of the second-year schedule of programs provided by its Prime Time Entertainment Network (PTEN). The pilot episode was broadcast on February 22, 1993 in the US. The first season premiered in the US on January 26, 1994, and the series ultimately ran for the intended five seasons. Describing it as having "always been conceived as, fundamentally, a five-year story, a novel for television," Straczynski wrote 92 of the 110 episodes, and served as executive producer, along with Douglas Netter.
© This article uses material from Wikipedia® and is licensed under the GNU Free Documentation License and under the Creative Commons Attribution-ShareAlike License
Jansen Ng (Blog) - April 2, 2009 9:36 AM
70 comment(s) - last by thReSh.. on Apr 6 at 11:43 PM
(Source: AnandTech)
Massive demand expected by AMD for its latest graphics card
ATI, the graphics division of AMD, has launched the Radeon HD 4890 video card exclusively with 1 GB of GDDR5 RAM.
The RV790 is not just an overclocked RV770. DailyTech noticed the RV790 chip was slightly larger than the RV770 when we acquired a reference board several weeks ago. It is a new respin of the original silicon, with retrained and rearchitectured power paths for greater power efficiency. ATI engineers also used decoupling capacitors in a decap ring to increase signal integrity.
All of this enables higher clock speeds. While the Radeon 4870 has a core clock of 750MHz, the Radeon 4890 runs its core clock at 850MHz. The standard GDDR5 runs at 3.9GHz effective, and provides 124.8 GB/s of bandwidth. Several ATI graphics board partners will be launching models with core clocks running at over 1GHz using improved cooling solutions.
Power consumption is also greatly reduced. The Radeon 4890 board consumes approximately 60W at near idle loads, such as when displaying 2D graphics or working on Word or Excel documents. This cuts powers consumption by a third from the Radeon 4870, which utilizes 90W at idle. On the flip side, maximum board power is now rated at 190W with the 4890, an increase of 30W. This is due to the higher clock speeds of both the GPU and the GDDR5 memory.
DailyTech has performed a few basic tests of the Radeon 4890, and our results show a 10%-25% performance improvement, depending on the game. The drivers in the box will work best for now, until the Catalyst 9.4 drivers are released later this month.
We have received word that there are over 50,000 Radeon HD 4890 cards already in the market. Several retailers have already sold cards to anxious fans ahead of today's launch date. The cards themselves should sell for no more than $250 at stock speeds, although we expect prices to drop slightly in a month's time. Some partners have also said that mail-in rebates for $20 will be available.
AMD recently tried to lower prices on its Radeon 4870 and 4850 cards, but its board partners believe that the performance of the Radeon 4870 is too good to lower prices further. The 1GB version of the Radeon 4870 is now selling for around the $180 mark, although it may be available for less with a mail-in rebate partly subsidized by AMD.
The primary competition for the Radeon HD 4890 will be NVIDIA's GeForce GTX 275, which is also priced at around $250. However, there will be no retail availability until April 14th, when it was originally supposed to be launched. Even so, our sources say that NVIDIA will not have the volumes that it needs to meet demand, which may end up raising prices.
Further out, ATI will be launching its first DirectX 11 parts in late summer. Some consumers may be tempted to wait for the next generation of cards with an all new architecture, but they will launch at higher prices. If you're looking for a high performance single chip video card, the Radeon 4890 may be your best bet for the next five months.
UPDATE: ATI has confirmed that the original information we received was incorrect, and the RV790 does indeed have 959 million transistors.
[Specification table not preserved in this copy: Radeon HD 4890 compared with the GeForce GTX 260 Core 216 and GTS 250, covering texture address/filtering units, memory clock (1986MHz GDDR3), and frame buffer (1GB/512MB).]
With the rapid growth of interest in the Internet, network security has become a major concern to companies throughout the world. The fact that the information and tools needed to penetrate the security of corporate networks are widely available has increased that concern. Because of this increased focus on network security, network administrators often spend more effort protecting their networks than on actual network setup and administration. Tools that probe for system vulnerabilities, such as the Security Administrator Tool for Analyzing Networks (SATAN), and some of the newly available scanning and intrusion detection packages and appliances, assist in these efforts, but these tools only point out areas of weakness and may not provide a means to protect networks from all possible attacks. Thus, as a network administrator, you must constantly try to keep abreast of the large number of security issues confronting you in today's world. This article describes many of the security issues that arise when connecting a private network to the Internet.
Internetworking Basics LAN Technologies WAN Technologies Internet Protocols Bridging and Switching Routing Network Management Voice/Data Integration Technologies Wireless Technologies Cable Access Technologies Dial-up Technology Security Technologies Quality of Service Networking Network Caching Technologies IBM Network Management Multiservice Access Technologies
1 Security Issues When Connecting to the Internet
1.1 Protecting Confidential Information
1.1.1 Network Packet Sniffers
1.1.2 IP Spoofing and Denial-of-Service Attacks
1.1.3 Password Attacks
1.1.4 Distribution of Sensitive Information
1.1.5 Man-in-the-Middle Attacks
2 Protecting Your Network: Maintaining Internal Network System Integrity =
2.1 Network Packet Sniffers
2.2 IP Spoofing
2.3 Password Attacks
2.4 Denial-of-Service Attacks
2.5 Application Layer Attacks
3 Trusted, Untrusted, and Unknown Networks
3.1 Trusted Networks
3.2 Untrusted Networks
3.3 Unknown Networks
4 Establishing a Security Perimeter
4.1 Perimeter Networks
4.1.1 Figure: Three Types of Perimeter Networks Exist: Outermost, Internal, and Innermost
4.1.2 Figure: The Diagram Is an Example of a Two-Perimeter Network Security Design
4.2 Developing Your Security Design
4.2.1 Know Your Enemy
4.2.2 Count the Cost
4.2.3 Identify Any Assumptions
4.2.4 Control Your Secrets
4.2.5 Human Factors
4.2.6 Know Your Weaknesses
4.2.7 Limit the Scope of Access
4.2.8 Understand Your Environment
4.2.9 Limit Your Trust
4.2.10 Remember Physical Security
4.2.11 Make Security Pervasive
6 Review Questions
7 For More Information
Security Issues When Connecting to the Internet When you connect your private network to the Internet, you are physically connecting your network to more than 50,000 unknown networks and all their users. Although such connections open the door to many useful applications and provide great opportunities for information sharing, most private networks contain some information that should not be shared with outside users on the Internet. In addition, not all Internet users are involved in lawful activities. These two statements foreshadow the key questions behind most security issues on the Internet:
How do you protect confidential information from those who do not explicitly need to access it? How do you protect your network and its resources from malicious users and accidents that originate outside your network? Protecting Confidential Information Confidential information can reside in two states on a network. It can reside on physical storage media, such as a hard drive or memory, or it can reside in transit across the physical network wire in the form of packets. These two information states present multiple opportunities for attacks from users on your internal network, as well as those users on the Internet. We are primarily concerned with the second state, which involves network security issues. The following are five common methods of attack that present opportunities to compromise the information on your network:
Network packet sniffers IP spoofing Password attacks Distribution of sensitive internal information to external sources Man-in-the-middle attacks When protecting your information from these attacks, your concern is to prevent the theft, destruction, corruption, and introduction of information that can cause irreparable damage to sensitive and confidential data. This section describes these common methods of attack and provides examples of how your information can be compromised.
Network Packet Sniffers Because networked computers communicate serially (one information piece is sent after another), large information pieces are broken into smaller pieces. (The information stream would be broken into smaller pieces even if networks communicated in parallel. The overriding reason for breaking streams into network packets is that computers have limited intermediate buffers.) These smaller pieces are called network packets. Several network applications distribute network packets in clear text-that is, the information sent across the network is not encrypted. (Encryption is the transformation, or scrambling, of a message into an unreadable format by using a mathematical algorithm.) Because the network packets are not encrypted, they can be processed and understood by any application that can pick them up off the network and process them. A network protocol specifies how packets are identified and labeled, which enables a computer to determine whether a packet is intended for it. Because the specifications for network protocols, such as TCP/IP, are widely published, a third party can easily interpret the network packets and develop a packet sniffer. (The real threat today results from the numerous freeware and shareware packet sniffers that are available, which do not require the user to understand anything about the underlying protocols.) A packet sniffer is a software application that uses a network adapter card in promiscuous mode (a mode in which the network adapter card sends all packets received on the physical network wire to an application for processing) to capture all network packets that are sent across a local-area network. Because several network applications distribute network packets in clear text, a packet sniffer can provide its user with meaningful and often sensitive information, such as user account names and passwords. If you use networked databases, a packet sniffer can provide an attacker with information that is queried from the database, as well as the user account names and passwords used to access the database. One serious problem with acquiring user account names and passwords is that users often reuse their login names and passwords across multiple applications.
In addition, many network administrators use packet sniffers to diagnose and fix network-related problems. Because in the course of their usual and necessary duties these network administrators (such as those in the Payroll Department) work during regular employee hours, they can potentially examine sensitive information distributed across the network.
Many users employ a single password for access to all accounts and applications. If an application is run in client/server mode and authentication information is sent across the network in clear text, this same authentication information likely can be used to gain access to other corporate resources. Because attackers know and use human characteristics (attack methods known collectively as social engineering attacks), such as using a single password for multiple accounts, they are often successful in gaining access to sensitive information.
IP Spoofing and Denial-of-Service Attacks An IP spoofing attack occurs when an attacker outside your network pretends to be a trusted computer. This is facilitated either by using an IP address that is within the range of IP addresses for your network, or by using an authorized external IP address that you trust and to which you want to provide access to specified resources on your network.
Normally, an IP spoofing attack is limited to the injection of data or commands into an existing stream of data passed between a client and server application or a peer-to-peer network connection. To enable bidirectional communication, the attacker must change all routing tables to point to the spoofed IP address.
However, if an attacker manages to change the routing tables to point to the spoofed IP address, he can receive all the network packets that are addressed to the spoofed address and can reply just as any trusted user can. Another approach that the attacker could take is to not worry about receiving any response from the targeted host. This is called a denial-of-service (DOS) attack. The denial of service occurs because the system receiving the requests becomes busy trying to establish a return communications path with the initiator (which may or may not be using a valid IP address). In more technical terms, the targeted host receives a TCP SYN and returns a SYN-ACK. It then remains in a wait state, anticipating the completion of the TCP handshake that never happens. Each wait state uses system resources until eventually, the host cannot respond to other legitimate requests.
Like packet sniffers, IP spoofing and DOS attacks are not restricted to people who are external to the network.
Password Attacks Password attacks can be implemented using several different methods, including brute-force attacks, Trojan horse programs (discussed later in the article), IP spoofing, and packet sniffers. Although packet sniffers and IP spoofing can yield user accounts and passwords, password attacks usually refer to repeated attempts to identify a user account and/or password; these repeated attempts are called brute-force attacks.
Often, a brute-force attack is performed using a dictionary program that runs across the network and attempts to log in to a shared resource, such as a server. When an attacker successfully gains access to a resource, that person has the same rights as the user whose account has been compromised to gain access to that resource. If this account has sufficient privileges, the attacker can create a back door for future access, without concern for any status and password changes to the compromised user account.
Distribution of Sensitive Information Controlling the distribution of sensitive information is at the core of a network security policy. Although such an attack may not seem obvious to you, the majority of computer break-ins that organizations suffer are at the hands of disgruntled present or former employees. At the core of these security breaches is the distribution of sensitive information to competitors or others that will use it to your disadvantage. An outside intruder can use password and IP spoofing attacks to copy information, and an internal user can easily place sensitive information on an external computer or share a drive on the network with other users.
For example, an internal user could place a file on an external FTP server without ever leaving his or her desk. The user could also e-mail an attachment that contains sensitive information to an external user.
Man-in-the-Middle Attacks A man-in-the-middle attack requires that the attacker have access to network packets that come across the networks. An example of such a configuration could be someone who is working for your Internet service provider (ISP), who can gain access to all network packets transferred between your network and any other network. Such attacks are often implemented using network packet sniffers and routing and transport protocols. The possible uses of such attacks are theft of information, hijacking of an ongoing session to gain access to your internal network resources, traffic analysis to derive information about your network and its users, denial of service, corruption of transmitted data, and introduction of new information into network sessions.
Protecting Your Network: Maintaining Internal Network System Integrity =
Although protecting your information may be your highest priority, protecting the integrity of your network is critical in your ability to protect the information it contains. A breach in the integrity of your network can be extremely costly in time and effort, and it can open multiple avenues for continued attacks. This section covers the five methods of attack that are commonly used to compromise the integrity of your network:
Network packet sniffers IP spoofing Password attacks Denial-of-service attacks
Application layer attacks When considering what to protect within your network, you are concerned with maintaining the integrity of the physical network, your network software, any other network resources, and your reputation. This integrity involves the verifiable identity of computers and users, proper operation of the services that your network provides, and optimal network performance; all these concerns are important in maintaining a productive network environment. This section provides some examples of the attacks described previously and explains how they can be used to compromise your network's integrity.
Network Packet Sniffers As mentioned earlier, network packet sniffers can yield critical system information, such as user account information and passwords. When an attacker obtains the correct account information, he or she has the run of your network. In a worst-case scenario, an attacker gains access to a system-level user account, which the attacker uses to create a new account that can be used at any time as a back door to get into your network and its resources. The attacker can modify system-critical files, such as the password for the system administrator account, the list of services and permissions on file servers, and the login details for other computers that contain confidential information.
Packet sniffers provide information about the topology of your network that many attackers find useful. This information, such as what computers run which services, how many computers are on your network, which computers have access to others, and so on, can be deduced from the information contained within the packets that are distributed across your network as part of necessary daily operations.
In addition, a network packet sniffer can be modified to interject new information or change existing information in a packet. By doing so, the attacker can cause network connections to shut down prematurely, as well as change critical information within the packet. Imagine what could happen if an attacker modified the information being transmitted to your accounting system. The effects of such attacks can be difficult to detect and very costly to correct.
IP Spoofing IP spoofing can yield access to user accounts and passwords, and it can also be used in other ways. For example, an attacker can emulate one of your internal users in ways that prove embarrassing for your organization; the attacker could send e-mail messages to business partners that appear to have originated from someone within your organization. Such attacks are easier when an attacker has a user account and password, but they are possible by combining simple spoofing attacks with knowledge of messaging protocols. For example, Telnetting directly to the SMTP port on a system allows the attacker to insert bogus sender information.
Password Attacks Just as with packet sniffers and IP spoofing attacks, a brute-force password attack can provide access to accounts that can be used to modify critical network files and services. An example that compromises your network's integrity is an attacker modifying the routing tables for your network. By doing so, the attacker ensures that all network packets are routed to him or her before they are transmitted to their final destination. In such a case, an attacker can monitor all network traffic, effectively becoming a man in the middle.
Denial-of-Service Attacks Denial-of-service attacks are different from most other attacks because they are not targeted at gaining access to your network or the information on your network. These attacks focus on making a service unavailable for normal use, which is typically accomplished by exhausting some resource limitation on the network or within an operating system or application. When involving specific network server applications, such as a Hypertext Transfer Protocol (HTTP) server or a File Transfer Protocol (FTP) server, these attacks can focus on acquiring and keeping open all the available connections supported by that server, effectively locking out valid users of the server or service. Denial-of-service attacks can also be implemented using common Internet protocols, such as TCP and Internet Control Message Protocol (ICMP). Most denial-of-service attacks exploit a weakness in the overall architecture of the system being attacked rather than a software bug or security hole. However, some attacks compromise the performance of your network by flooding the network with undesired and often useless network packets and by providing false information about the status of network resources.
Application Layer Attacks Application layer attacks can be implemented using several different methods. One of the most common methods is exploiting well-known weaknesses in software commonly found on servers, such as sendmail, PostScript, and FTP. By exploiting these weaknesses, attackers can gain access to a computer with the permissions of the account running the application, which is usually a privileged system-level account.
Trojan horse attacks are implemented using bogus programs that an attacker substitutes for common programs. These programs may provide all the functionality that the normal application or service provides, but they also include other features that are known to the attacker, such as monitoring login attempts to capture user account and password information. These programs can capture sensitive information and distribute it back to the attacker. They can also modify application functionality, such as applying a blind carbon copy to all e-mail messages so that the attacker can read all of your organization's e-mail.
One of the oldest forms of application layer attacks is a Trojan horse program that displays a screen, banner, or prompt that the user believes is the valid login sequence. The program then captures the information that the user types in and stores or e-mails it to the attacker. Next, the program either forwards the information to the normal login process (normally impossible on modern systems) or simply sends an expected error to the user (for example, Bad Username/Password Combination), exits, and starts the normal login sequence. The user, believing that he or she has incorrectly entered the password (a common mistake experienced by everyone), retypes the information and is allowed access.
One of the newest forms of application layer attacks exploits the openness of several new technologies: the HyperText Markup Language (HTML) specification, web browser functionality, and HTTP. These attacks, which include Java applets and ActiveX controls, involve passing harmful programs across the network and loading them through a user's browser.
Users of Active X controls may be lulled into a false sense of security by the Authenticode technology promoted by Microsoft. However, attackers have already discovered how to utilize properly signed and bug-free Active X controls to make them act as Trojan horses. This technique uses VBScript to direct the controls to perform their dirty work, such as overwriting files and executing other programs.
These new forms of attack are different in two respects:
They are initiated not by the attacker, but by the user, who selects the HTML page that contains the harmful applet or script stored using the <OBJECT>, <APPLET>, or <SCRIPT> tags. Their attacks are no longer restricted to certain hardware platforms and operating systems because of the portability of the programming languages involved.
Trusted, Untrusted, and Unknown Networks As a network manager creates a network security policy, each network that makes up the topology must be classified as one of three types of networks:
Trusted Untrusted Unknown Trusted Networks Trusted networks are the networks inside your network security perimeter. These networks are the ones that you are trying to protect. Often you or someone in your organization administers the computers that comprise these networks, and your organization controls their security measures. Usually, trusted networks are within the security perimeter.
When you set up the firewall server, you explicitly identify the type of networks that are attached to the firewall server through network adapter cards. After the initial configuration, the trusted networks include the firewall server and all networks behind it.
One exception to this general rule is the inclusion of virtual private networks (VPNs), which are trusted networks that transmit data across an untrusted network infrastructure. For the purposes of our discussion, the network packets that originate on a VPN are considered to originate from within your internal perimeter network. This origin is logical because of how VPNs are established. For communications that originate on a VPN, security mechanisms must exist by which the firewall server can authenticate the origin, data integrity, and other security principles contained within the network traffic according to the same security principles enforced on your trusted networks.
Untrusted Networks Untrusted networks are the networks that are known to be outside your security perimeter. They are untrusted because they are outside your control. You have no control over the administration or security policies for these sites. They are the private, shared networks from which you are trying to protect your network. However, you still need and want to communicate with these networks although they are untrusted.
When you set up the firewall server, you explicitly identify the untrusted networks from which that firewall can accept requests. Untrusted networks are outside the security perimeter and are external to the firewall server.
Unknown Networks Unknown networks are networks that are neither trusted nor untrusted. They are unknown quantities to the firewall because you cannot explicitly tell the firewall server that the network is a trusted or an untrusted network. Unknown networks exist outside your security perimeter. By default, all nontrusted networks are considered unknown networks, and the firewall applies the security policy that is applied to the Internet node in the user interface, which represents all unknown networks. However, you can identify unknown networks below the Internet node and apply more specialized policies to those untrusted networks.
Establishing a Security Perimeter When you define a network security policy, you must define procedures to safeguard your network and its contents and users against loss and damage. From this perspective, a network security policy plays a role in enforcing the overall security policy defined by an organization.
A critical part of an overall security solution is a network firewall, which monitors traffic crossing network perimeters and imposes restrictions according to security policy. Perimeter routers are found at any network boundary, such as between private networks, intranets, extranets, or the Internet. Firewalls most commonly separate internal (private) and external (public) networks. A network security policy focuses on controlling the network traffic and usage. It identifies a network's resources and threats, defines network use and responsibilities, and details action plans for when the security policy is violated. When you deploy a network security policy, you want it to be strategically enforced at defensible boundaries within your network. These strategic boundaries are called perimeter networks.
Perimeter Networks To establish your collection of perimeter networks, you must designate the networks of computers that you wish to protect and define the network security mechanisms that protect them. To have a successful network security perimeter, the firewall server must be the gateway for all communications between trusted networks and untrusted and unknown networks. Each network can contain multiple perimeter networks. When describing how perimeter networks are positioned relative to each other, three types of perimeter networks are present: the outermost perimeter, internal perimeters, and the innermost perimeter. Figure: Three Types of Perimeter Networks Exist: Outermost, Internal, and Innermost depicts the relationships among the various perimeters. Note that the multiple internal perimeters are relative to a particular asset, such as the internal perimeter that is just inside the firewall server. Figure: Three Types of Perimeter Networks Exist: Outermost, Internal, and Innermost
The outermost perimeter network identifies the separation point between the assets that you control and the assets that you do not control-usually, this point is the router that you use to separate your network from your ISP's network. Internal perimeter networks represent additional boundaries where you have other security mechanisms in place, such as intranet firewalls and filtering routers.
Figure: The Diagram Is an Example of a Two-Perimeter Network Security Design depicts two perimeter networks (an outermost perimeter network and an internal perimeter network) defined by the placement of the internal and external routers and the firewall server.
Figure: The Diagram Is an Example of a Two-Perimeter Network Security Design
Positioning your firewall between an internal and external router provides little additional protection from attacks on either side, but it greatly reduces the amount of traffic that the firewall server must evaluate, which can increase the firewall's performance. From the perspective of users on an external network, the firewall server represents all accessible computers on the trusted network. It defines the point of focus, or choke point, through which all communications between the two networks must pass.
The outermost perimeter network is the most insecure area of your network infrastructure. Normally, this area is reserved for routers, firewall servers, and public Internet servers, such as HTTP, FTP, and Gopher servers. This area of the network is the easiest area to gain access to and, therefore, is the most frequently attacked, usually in an attempt to gain access to the internal networks. Sensitive company information that is for internal use only should not be placed on the outermost perimeter network. Following this precaution helps avoid having your sensitive information stolen or damaged.
Developing Your Security Design The design of the perimeter network and security policies require the following subjects to be addressed.
Know Your Enemy Knowing your enemy means knowing attackers or intruders. Consider who might want to circumvent your security measures, and identify their motivations. Determine what they might want to do and the damage that they could cause to your network. Security measures can never make it impossible for a user to perform unauthorized tasks with a computer system; they can only make it harder. The goal is to make sure that the network security controls are beyond the attacker's ability or motivation. Count the Cost Security measures usually reduce convenience, especially for sophisticated users. Security can delay work and can create expensive administrative and educational overhead. Security can use significant computing resources and require dedicated hardware. When you design your security measures, understand their costs and weigh those costs against the potential benefits. To do that, you must understand the costs of the measures themselves and the costs and likelihood of security breaches. If you incur security costs out of proportion to the actual dangers, you have done yourself a disservice. Identify Any Assumptions Every security system has underlying assumptions. For example, you might assume that your network is not tapped, that attackers know less than you do, that they are using standard software, or that a locked room is safe. Be sure to examine and justify your assumptions. Any hidden assumption is a potential security hole. Control Your Secrets Most security is based on secrets. Passwords and encryption keys, for example, are secrets. Too often, though, the secrets are not all that secret. The most important part of keeping secrets is in knowing the areas that you need to protect. What knowledge would enable someone to circumvent your system? You should jealously guard that knowledge and assume that everything else is known to your adversaries. The more secrets you have, the harder it will be to keep them all. Security systems should be designed so that only a limited number of secrets need to be kept. Human Factors Many security procedures fail because their designers do not consider how users will react to them. For example, because they can be difficult to remember, automatically generated nonsense passwords often are written on the undersides of keyboards. For convenience, a secure door that leads to the system's only tape drive is sometimes propped open. For expediency, unauthorized modems are often connected to a network to avoid onerous dial-in security measures. If your security measures interfere with essential use of the system, those measures will be resisted and perhaps circumvented. To get compliance, you must make sure that users can get their work done, and you must sell your security measures to users. Users must understand and accept the need for security. Any user can compromise system security, at least to some degree. For instance, passwords can often be found simply by calling legitimate users on the telephone, claiming to be a system administrator, and asking for them. If your users understand security issues, and if they understand the reasons for your security measures, they are far less likely to make an intruder's life easier. At a minimum, users should be taught never to release passwords or other secrets over unsecured telephone lines (especially cellular telephones) or e-mail. Users should be wary of people who call them on the telephone and ask questions. 
Some companies have implemented formalized network security training so that employees are not allowed access to the Internet until they have completed a formal training program. Know Your Weaknesses Every security system has vulnerabilities. You should understand your system's weak points and know how they could be exploited. You should also know the areas that present the greatest danger and should prevent access to them immediately. Understanding the weak points is the first step toward turning them into secure areas. Limit the Scope of Access You should create appropriate barriers in your system so that if intruders access one part of the system, they do not automatically have access to the rest of the system. The security of a system is only as good as the weakest security level of any single host in the system. Understand Your Environment Understanding how your system normally functions, knowing what is expected and what is unexpected, and being familiar with how devices are usually used will help you detect security problems. Noticing unusual events can help you catch intruders before they can damage the system. Auditing tools can help you detect those unusual events. Limit Your Trust You should know exactly which software you rely on, and your security system should not have to rely on the assumption that all software is bug-free. Remember Physical Security Physical access to a computer (or a router) usually gives a sufficiently sophisticated user total control over that computer. Physical access to a network link usually allows a person to tap that link, jam it, or inject traffic into it. It makes no sense to install complicated software security measures when access to the hardware is not controlled. Make Security Pervasive Almost any change that you make in your system may have security effects. This is especially true when new services are created. Administrators, programmers, and users should consider the security implications of every change they make. Understanding the security implications of a change takes practice; it requires lateral thinking and a willingness to explore every way that a service could potentially be manipulated.
Summary After reading this article, you should be able to evaluate your own network and its usability requirements, and weigh these requirements against the risk of compromise from unknown users and networks.
When defining a security policy for your organization, it is important to strike a balance between keeping your network and resources immune from attack and making the system so difficult to negotiate for legitimate purposes that it hinders productivity.
You must walk a fine line between closing as many doors as possible without encouraging trusted users to try to circumvent the policy because it is too complex and time-consuming to use.
Allowing Internet access from an organization poses the most risk to that organization. This article has outlined the types of attacks that may be possible without a suitable level of protection. If a compromise occurs, tools and applications are available to help flag possible vulnerabilities before they occur-or to at least help the network administrator monitor the state of the network and its resources.
It is important to stress that attacks may not be restricted to outside, unknown parties, but may be initiated by internal users as well. Knowing how the components of your network function and interact is the first step to knowing how to protect them. Review Questions Q - Name three common network attacks used to undermine network security. A - Password attacks, IP spoofing, denial-of-service attacks, dictionary attacks, and man-in-the-middle attacks.
Q - What are the three main types of networks that must be considered when defining a security policy? A - Trusted, untrusted, unknown.
Q - List some of the areas of possible vulnerability in your own network.
A - Internet connection, modems on PCs.
Q - What tools and applications are available to help monitor and test for system and network vulnerabilities? A - Scanning tools (Nessus, nmap, Metasploit), packet sniffers, Netflow and intrusion detection devices.
Q - List five important considerations to address when defining a security policy. A - 1. Know your enemy
2. Count the cost
3. Identify any assumptions
4. Control your secrets
5. Human factors
6. Know your weaknesses
7. Limit the scope of access
8. Understand your environment
9. Limit your trust
10. Remember physical security
11. Make security pervasive
For More Information
Chapman and Zwicky. Building Internet Firewalls. Boston: O'Reilly and Associates, 1995.
Cheswick and Bellovin. Firewalls and Network Security. Boston: Addison-Wesley, 1998.
Cooper, Coggins, et al. Implementing Internet Security. Indianapolis: New Riders, 1997.
Retrieved from "http://docwiki.cisco.com/wiki/Security_Technologies"
stock:back order
release date:May 2006
Andrew Hudson, Paul Hudson
Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security.
Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, Broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Linux Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux.
Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry-standard in the PHP community. manufacturer website | 计算机 |
2015-40/2213/en_head.json.gz/9140 | The Elder Scrolls Online Impressions
Written by Travis Huinker on 2/7/2014 for
PC With the ever-increasing number of massively multiplayer online games moving to free-to-play, The Elder Scrolls Online stands against the crowd with a traditional subscription payment model. The question that remains is if the Elder Scrolls title can convince gamers to support what most consider a dying payment model. Fortunately, ZeniMax Online provided members of the press with an extended look at The Elder Scrolls Online beginning Friday, January 31. During the full weekend and partial week of gameplay with my Wood Elf Dragonknight character, I developed a love and hate relationship while exploring the vast lands of Tamriel. In this article, I'll discuss what I ultimately think worked and what didn't work quite as well during my adventures.
What WorkedMost importantly, The Elder Scrolls Online feels like, well, an Elder Scrolls game. Ranging from its presentation style to the narrative and its quests, the game incorporates enough elements from past games to be recognizable while also introducing new gameplay concepts. The fan service is ever-present from completing quests for the Fighters and Mages guilds to the instantly recognizable voice work of past actors from Skyrim and Oblivion. The quality of the game's narrative and its quests in particular were surprisingly well designed and unique from one another, most of which move away from the genre traditions of fetch-these-items or slay-those-monsters quest types. One quest in particular had me solving a murder mystery by collecting evidence and questioning other characters. Even though the traditional genre quests still exist in the game, the portions of the narrative that I've played thus far repeatedly introduced new locations and objectives that kept the gameplay continually interesting.
Also in the tradition of past Elder Scrolls games is the in-depth customization of player characters. All of the series' traditional races are included with returning customization options such as hair type and face tattoos, while also introducing a batch of new sliders ranging from full body tattoos to head adornments. The high level of customization should solve the common issue that plagues other MMOs in that most characters running around the world look awfully similar to one another. Another aspect of the game's character design that I found refreshing was that player characters blend together well with the non-playable characters that inhabitant Tamriel. Even higher-level characters with fancier armor never looked as if they seemed out of place standing among non-playable inhabitants. This in part can be contributed to the well-designed armor and clothing pieces that are especially lore friendly.
While most MMOs suffer from information overload in their various interface and menu elements, The Elder Scrolls Online fortunately takes many cues from Skyrim in employing an interface that only contains the necessary components. Nearly every element of the menu is easy to use straight from the beginning without much tutorial explanation. Skyrim veterans (PC players who use the SkyUI mod in particular) will feel right at home as everything is simple to use and scales well to specific resolutions. The simplified menu system also makes locating group members a quick and painless process as the world map always indicates their current location. Even with the game's fast travel system, I opted to travel by foot for most of my quests to discover new locations in the world. The map interface can also be switched between various zoom levels that range from the entire continent of Tamriel to the detailed area view of a particular region.
Combat in a MMOs can either be an entertaining gameplay feature or simply serve as another tedious function for progressing in the game. Fortunately, The Elder Scrolls Online incorporates an assortment of elements into the combat system from light and heavy weapon attacks to blocking and interrupting enemy attacks. I had to break away from the genre tradition of simply standing in one spot and repeatedly clicking the attack button as my character would quickly die during enemy encounters. The timing of both attacks and blocks are crucial in battles especially when various spells and special attacks are added into the mix. I particularly enjoyed the Dragonknight's special class attacks that included a fiery chain that could pull enemies toward my character and another that temporarily added dragonscale armor for increased defense. I'm looking particularly forward to experimenting with the game's character classes and their personalized attacks and spells.
What Didn't WorkWhile the press weekend for the beta contained a far fewer amount of players than there will be with the game's upcoming launch, it was obvious that many gameplay aspects still require extensive amounts of balance and difficulty revisions to ensure an enjoyable experience for both solo and group players. For my initial hours with the beta I felt confident in completing the various quests, but soon had to seek the help of groups to overcome a few tougher boss encounters. This issue was made worse with the odd balance of my character's leveling progression in relation to the amount of available quests for earning experience. I hit an experience road block on a few occasions in which I wasn't able to locate additional quests that were tailored to my character's current level. I'm hopeful that upon the game's release there will be a wider selection of quests as well as a better balanced progression of character levels.
Recent Elder Scrolls games have included the option to instantly switch between first- and third-person views, and each perspective has its own group of advocates. The addition of a first-person view to The Elder Scrolls Online was announced relatively late in the game's development, after many fans pushed for its inclusion. Unfortunately, the current first-person view is best described as clunky and impractical during actual gameplay. This really didn't come as a surprise, considering that the MMO genre has always preferred the third-person view due to its more dynamic and unpredictable gameplay elements. That's not to say the first-person view is completely useless, but it makes for a far more constricting view of both your character and the game world.
Some of the bugs I experienced on a few occasions ranged from falling through the ground to unresponsive combat and certain sound effects that would stop playing. I also experienced some odd timing of enemy mob respawns that often seemed inconsistent and unexpected when enemies would randomly appear on top of my character. Fortunately, the game is still in its beta stage and hopefully with more testing these issues will be solved before release.
Final Thoughts
Ultimately, I am optimistic about the April launch after my experiences with the game's current beta stage. While there is still an array of features that require additional polish or further revision, the majority of the content is on par with past Elder Scrolls games in terms of both gameplay and narrative. The Elder Scrolls name, after all, is simply a title and won't be able to solve gameplay issues on its own. One true indicator of any game, especially an MMO, is the urge to return to the game and progress just a little bit further. The Elder Scrolls Online was no exception, as I found myself continually wanting to further level my Wood Elf character as well as explore more of Tamriel, which above all else is the core selling point of the series. Check back next Friday, February 14 for part two of our Elder Scrolls Online coverage, which will cover the game's player-versus-player content.
The Elder Scrolls Online will be available on April 4 for Windows PC and Mac, and later in June for PlayStation 4 and Xbox One. * The product in this article was sent to us by the developer/company for review.
I've been writing for Gaming Nexus since 2011 and focus primarily on PC games and hardware. I'm a strong advocate of independent developers and am always seeking the next genre-breaking and unique game release. My favorite game genres are strategy, role-playing, and massively multiplayer online, or any games that feature open worlds and survival elements.
2015-40/2213/en_head.json.gz/10503 | Buy a Server or Move to the Cloud?
PATRICK TAMBURRINO
After a few rough years with the economy, and after squeezing every last drop of functionality out of their existing infrastructure, some companies are faced with the question of replacing that infrastructure with new equipment or moving their information to the cloud. While the option of putting data in the cloud is attractive especially to new companies that do not have established server infrastructure, the financial aspects surrounding a move to cloud computing for established companies may surprise you.

According to a recent article in Forbes, when established companies received proposals from leading cloud computing companies and peeled back the onion a bit, moving to cloud-based servers was far more expensive than simply investing in new server infrastructure.

In investigating the costs of moving to a cloud-based server, the “blended cost” of a server that can handle complex applications as well as file storage, and the related support, licensing and necessary infrastructure features can cost approximately $100 per month per user. If a company has 10 users, that cloud-based server cost is $12,000 per year. If you analyze your costs over a three-year period, that cost is $36,000.

By comparison, a good server for a user base of that size costs approximately $7000, including licensing. Implementation or migration costs vary, but a good rule of thumb is about $4000. Support costs for the server itself average about $3,000 to $4,000 per year. Over that same three-year period, the cost of a new server with implementation and support is approximately $23,000. Most servers will be usable for far longer than three years, especially if properly maintained.

Although this example paints a rather stark picture of its costs, a company should not avoid the exploration of cloud computing altogether. It is a trend that will most certainly continue, and as more companies invest in cloud-based computing resources, its costs will go down. If a company wants to take advantage of some of the benefits of cloud computing but it’s not ready to spend the sort of money it takes to plunge in altogether, it is possible to take advantage of some less-expensive cloud-based products. Consider integrating systems such as Google Apps or Dropbox for Teams. These services have a large enough user base that their costs are relatively low, and to use them, a mass transfer of information is not necessary. Equally important is that in most cases these systems can work harmoniously with a company’s existing systems.

While cloud computing services can offer some significant savings for new companies that have no existing server infrastructure, it often costs more money for established companies to make the move. When considering moving to the cloud, choose your options carefully.

Patrick Tamburrino is the president and technostrategist of tamburrino inc., an IT strategy, support and management company in Memphis. He can be reached at [email protected], 489-8408 or www.tamburrino.com.
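To recap the arithmetic in the column above in one quick script (a sketch only; the dollar figures are the rough estimates quoted in the column, not vendor pricing):

    # Three-year cost comparison using the column's rough estimates.
    YEARS = 3
    USERS = 10

    # Cloud: roughly $100 per user per month, all-in.
    cloud_total = 100 * USERS * 12 * YEARS        # $36,000

    # On-premises: ~$7,000 server + ~$4,000 implementation,
    # plus roughly $4,000 per year in support.
    server_total = 7000 + 4000 + 4000 * YEARS     # $23,000

    print(f"Cloud over {YEARS} years:  ${cloud_total:,}")
    print(f"Server over {YEARS} years: ${server_total:,}")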
2015-40/2213/en_head.json.gz/11270 | Open main menu Semantic Web A Wikibookian believes this page should be split into smaller pages with a narrower subtopic.
You can help by splitting this big page into smaller ones. Please make sure to follow the naming policy. Dividing books into smaller sections can provide more focus and allow each one to do one thing well, which benefits everyone.
You can ask for help in dividing this book in the assistance reading room.
Wikipedia has related information at Semantic web
The semantic web is an exciting new evolution of the World Wide Web (WWW) providing machine-readable and machine-comprehensible information far beyond current capabilities. In an age of information deluge, governments, individuals and businesses will come to rely more and more on automated services, which will improve in their capacity to assist humans by “understanding” more of the content on the web. This has potentially far-reaching consequences for all businesses today.
More information on the web needs to be structured in a form that machines can ‘understand’ and process rather than merely display. Such machine processing relies on the machine’s ability to solve complex problems by performing well-defined operations on well-defined data. Sir Tim Berners-Lee, inventor of the World Wide Web, coined the term “Semantic Web” to describe this approach. Berners-Lee, Hendler and Lassila provide the following definition:
The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation.
—Tim Berners-Lee, Ora Lassila, James Hendler, Scientific American May 2001
What Is The Semantic Web?
The Semantic Web is a mesh of information linked up in such a way as to be easily processable by machines, on a global scale. You can think of it as being an efficient way of representing data on the World Wide Web, or as a globally linked database.
The Semantic Web was thought up by Tim Berners-Lee, inventor of the WWW, URIs, HTTP, and HTML. There is a dedicated team of people at the World Wide Web Consortium (W3C) working to improve, extend and standardize the system, and many languages, publications, tools and so on have already been developed. However, Semantic Web technologies are still very much in their infancy, and although the future of the project in general appears to be bright, there seems to be little consensus about the likely direction and characteristics of the early Semantic Web.
What's the rationale for such a system? Data that is generally hidden away in HTML files is often useful in some contexts, but not in others. The problem with the majority of data on the Web that is in this form at the moment is that it is difficult to use on a large scale, because there is no global system for publishing data in such a way as it can be easily processed by anyone. For example, just think of information about local sports events, weather information, plane times, Major League Baseball statistics, and television guides... all of this information is presented by numerous sites, but all in HTML. The problem with that is that, in some contexts, it is difficult to use this data in the ways that one might want to.
So the Semantic Web can be seen as a huge engineering solution... but it is more than that. We will find that as it becomes easier to publish data in a repurposable form, so more people will want to publish data, and there will be a knock-on or domino effect. We may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web. But enough subjective reasoning... onto how this will be accomplished.
The Semantic Web is generally built on syntaxes which use URIs to represent data, usually in triples-based structures, i.e., many triples of URI data that can be held in databases, or interchanged on the World Wide Web using a set of particular syntaxes developed especially for the task. These syntaxes are called "Resource Description Framework" syntaxes.
URI - Uniform Resource Identifier
A URI is simply a Web identifier: like the strings starting with "http:" or "ftp:" that you often find on the World Wide Web. Anyone can create a URI, and the ownership of them is clearly delegated, so they form an ideal base technology with which to build a global Web on top of. In fact, the World Wide Web is such a thing: anything that has a URI is considered to be "on the Web".
The syntax of URIs is carefully governed by the IETF, who published RFC 2396 as the general URI specification. The W3C maintains a list of URI schemes.
RDF - Resource Description Framework
A triple can simply be described as three URIs. A language which utilises three URIs in such a way is called RDF: the W3C have developed an XML serialization of RDF, the "Syntax" in the RDF Model and Syntax recommendation. RDF XML is considered to be the standard interchange format for RDF on the Semantic Web, although it is not the only format. For example, Notation3 (which we shall be going through later on in this article) is an excellent plain text alternative serialization.
Once information is in RDF form, it becomes easy to process it, since RDF is a generic format, which already has many parsers. XML RDF is quite a verbose specification, and it can take some getting used to (for example, to learn XML RDF properly, you need to understand a little about XML and namespaces beforehand...), but let's take a quick look at an example of XML RDF right now:- | 计算机 |
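As a minimal illustration (the URIs, the document being described, and the use of the Dublin Core 'creator' property here are placeholder choices, not taken from any particular site), a single statement in RDF/XML might look like this:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://example.org/report">
        <dc:creator>Jane Doe</dc:creator>
      </rdf:Description>
    </rdf:RDF>

The same triple (subject, predicate, object) written in Notation3 is a single line:

    <http://example.org/report> <http://purl.org/dc/elements/1.1/creator> "Jane Doe" .

Either way, the information exchanged is just the one statement: the resource http://example.org/report has a creator whose value is "Jane Doe".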
2015-40/2213/en_head.json.gz/11875 | Last year, Hewlett Packard Company announced it will be separating into two industry-leading public companies as of November 1st, 2015. HP Inc. will be the leading personal systems and printing company. Hewlett Packard Enterprise will define the next generation of infrastructure, software and services.
Public Sector eCommerce is undergoing changes in preparation and support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal systems, Printers and Services and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from.
Note: Each product catalog has separate shopping cart and checkout processes.
Personal Computers and Printers
Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies.
Server, Storage, Networking and Services
Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
2015-40/2213/en_head.json.gz/11885 | Google Groups Drops Support for Pages and Files
Google is sending email notifications to all Google Groups owners to inform them about some unfortunate changes.

"Starting in January 2011, Groups will no longer allow the creation or editing of welcome messages, files and pages; the content will only be available for viewing and only existing files will be downloadable. If you would like to keep the content currently on the pages and files sections of your group, we highly encourage you to export and migrate it to another product. In February 2011, we will turn off these features, and you will no longer be able to access that content."

Google says that you can create pages using Google Sites and store files by attaching them to Google Sites pages. After creating a site, you can invite the members of your group: "Add the email address for the Google Group (for instance, [email protected]) with which you'd like to share the site, and select the level of access you'd like the members of the group to have."

It's difficult to understand why Google didn't automatically migrate the files and pages to Google Sites. Users could've kept using these features from the Google Groups interface, even if they were powered by Google Sites.

Google says that the features have been removed "to focus on improving the core functionality of Google Groups -- mailing lists and forum discussions". I don't remember seeing significant improvements since 2006, when Google Groups added the features that are now removed: custom welcome message, pages and files. Since then, Google abandoned almost all its groups and started to use the Help Forum platform. Ironically, even the Google Groups group has been shut down.
2015-40/2213/en_head.json.gz/12938 | Posted Home > Gaming > Diablo 3 beta testing on its way Diablo 3 beta testing on its way By
Last we heard, Blizzard was trying to find a console producer for Diablo 3, meaning when the thing was finally finished PC and console gamers alike would finally get their hands on the game. It was interesting news, but without a timetable or launch date of any sort, there hasn’t been too much to get excited about.
At least until today, when Blizzard’s CEO, Michael Morhaime, announced a beta version would be available sometime between July and September. That’s a pretty large window, but one fans are likely to rejoice at considering that Diablo 3 was first announced nearly three years ago.
“The game is looking great,” Morhaime said in Activision’s earnings conference call this morning according to Kotaku, “and we’re currently aiming at a third quarter launch for external beta testing.” Internal testing began last week.
Unfortunately, this doesn’t mean we’re any closer to a product launch, which Morhaime confirmed: “I want to be clear that we do not have an official release date or window yet.” All the more reason to register at Battle.net – invitations to join the beta testing group will be sent out to various lucky members.
Diablo 3 game director Jay Wilson told the New York Times last month “We’re crunching. This is when the magic happens.” The company has explained the delay numerous times, claiming it will avoid any deadlines that sacrifice the game’s quality.
Blizzard’s annual event, BlizzCon, is scheduled for October 22–23, which would be the perfect time to announce the release window for Diablo 3. But if Blizzard has taught us anything, it is that the company won't be rushed – just ask StarCraft fans.
2015-40/2213/en_head.json.gz/13397 | 1.1.2 History Lisp is a family of languages with a long history. Early key ideas in Lisp were developed by John McCarthy during the 1956 Dartmouth Summer Research Project on Artificial Intelligence. McCarthy's motivation was to develop an algebraic list processing language for artificial intelligence work. Implementation efforts for early dialects of Lisp were undertaken on the IBM 704, the IBM 7090, the Digital Equipment Corporation (DEC) PDP-1, the DEC PDP-6, and the PDP-10. The primary dialect of Lisp between 1960 and 1965 was Lisp 1.5. By the early 1970's there were two predominant dialects of Lisp, both arising from these early efforts: MacLisp and Interlisp. For further information about very early Lisp dialects, see The Anatomy of Lisp or Lisp 1.5 Programmer's Manual. MacLisp improved on the Lisp 1.5 notion of special variables and error handling. MacLisp also introduced the concept of functions that could take a variable number of arguments, macros, arrays, non-local dynamic exits, fast arithmetic, the first good Lisp compiler, and an emphasis on execution speed. By the end of the 1970's, MacLisp was in use at over 50 sites. For further information about Maclisp, see Maclisp Reference Manual, Revision 0 or The Revised Maclisp Manual. Interlisp introduced many ideas into Lisp programming environments and methodology. One of the Interlisp ideas that influenced Common Lisp was an iteration construct implemented by Warren Teitelman that inspired the loop macro used both on the Lisp Machines and in MacLisp, and now in Common Lisp. For further information about Interlisp, see Interlisp Reference Manual. Although the first implementations of Lisp were on the IBM 704 and the IBM 7090, later work focussed on the DEC PDP-6 and, later, PDP-10 computers, the latter being the mainstay of Lisp and artificial intelligence work at such places as Massachusetts Institute of Technology (MIT), Stanford University, and Carnegie Mellon University (CMU) from the mid-1960's through much of the 1970's. The PDP-10 computer and its predecessor the PDP-6 computer were, by design, especially well-suited to Lisp because they had 36-bit words and 18-bit addresses. This architecture allowed a cons cell to be stored in one word; single instructions could extract the car and cdr parts. The PDP-6 and PDP-10 had fast, powerful stack instructions that enabled fast function calling. But the limitations of the PDP-10 were evident by 1973: it supported a small number of researchers using Lisp, and the small, 18-bit address space (2^18 = 262,144 words) limited the size of a single program. One response to the address space problem was the Lisp Machine, a special-purpose computer designed to run Lisp programs. The other response was to use general-purpose computers with address spaces larger than 18 bits, such as the DEC VAX and the S-1 Mark IIA. For further information about S-1 Common Lisp, see ``S-1 Common Lisp Implementation.'' The Lisp machine concept was developed in the late 1960's. In the early 1970's, Peter Deutsch, working with Daniel Bobrow, implemented a Lisp on the Alto, a single-user minicomputer, using microcode to interpret a byte-code implementation language. Shortly thereafter, Richard Greenblatt began work on a different hardware and instruction set design at MIT. 
Although the Alto was not a total success as a Lisp machine, a dialect of Interlisp known as Interlisp-D became available on the D-series machines manufactured by Xerox---the Dorado, Dandelion, Dandetiger, and Dove (or Daybreak). An upward-compatible extension of MacLisp called Lisp Machine Lisp became available on the early MIT Lisp Machines. Commercial Lisp machines from Xerox, Lisp Machines (LMI), and Symbolics were on the market by 1981. For further information about Lisp Machine Lisp, see Lisp Machine Manual.

During the late 1970's, Lisp Machine Lisp began to expand towards a much fuller language. Sophisticated lambda lists, setf, multiple values, and structures like those in Common Lisp are the results of early experimentation with programming styles by the Lisp Machine group. Jonl White and others migrated these features to MacLisp. Around 1980, Scott Fahlman and others at CMU began work on a Lisp to run on the Scientific Personal Integrated Computing Environment (SPICE) workstation. One of the goals of the project was to design a simpler dialect than Lisp Machine Lisp.

The Macsyma group at MIT began a project during the late 1970's called the New Implementation of Lisp (NIL) for the VAX, which was headed by White. One of the stated goals of the NIL project was to fix many of the historic, but annoying, problems with Lisp while retaining significant compatibility with MacLisp. At about the same time, a research group at Stanford University and Lawrence Livermore National Laboratory headed by Richard P. Gabriel began the design of a Lisp to run on the S-1 Mark IIA supercomputer. S-1 Lisp, never completely functional, was the test bed for adapting advanced compiler techniques to Lisp implementation. Eventually the S-1 and NIL groups collaborated. For further information about the NIL project, see ``NIL---A Perspective.''

The first effort towards Lisp standardization was made in 1969, when Anthony Hearn and Martin Griss at the University of Utah defined Standard Lisp---a subset of Lisp 1.5 and other dialects---to transport REDUCE, a symbolic algebra system. During the 1970's, the Utah group implemented first a retargetable optimizing compiler for Standard Lisp, and then an extended implementation known as Portable Standard Lisp (PSL). By the mid 1980's, PSL ran on about a dozen kinds of computers. For further information about Standard Lisp, see ``Standard LISP Report.''

PSL and Franz Lisp---a MacLisp-like dialect for Unix machines---were the first examples of widely available Lisp dialects on multiple hardware platforms.

One of the most important developments in Lisp occurred during the second half of the 1970's: Scheme. Scheme, designed by Gerald J. Sussman and Guy L. Steele Jr., is a simple dialect of Lisp whose design brought to Lisp some of the ideas from programming language semantics developed in the 1960's. Sussman was one of the prime innovators behind many other advances in Lisp technology from the late 1960's through the 1970's. The major contributions of Scheme were lexical scoping, lexical closures, first-class continuations, and simplified syntax (no separation of value cells and function cells). Some of these contributions made a large impact on the design of Common Lisp. For further information about Scheme, see IEEE Standard for the Scheme Programming Language or ``Revised^3 Report on the Algorithmic Language Scheme.''

In the late 1970's object-oriented programming concepts started to make a strong impact on Lisp.
At MIT, certain ideas from Smalltalk made their way into several widely used programming systems. Flavors, an object-oriented programming system with multiple inheritance, was developed at MIT for the Lisp machine community by Howard Cannon and others. At Xerox, the experience with Smalltalk and Knowledge Representation Language (KRL) led to the development of Lisp Object Oriented Programming System (LOOPS) and later Common LOOPS. For further information on Smalltalk, see Smalltalk-80: The Language and its Implementation. For further information on Flavors, see Flavors: A Non-Hierarchical Approach to Object-Oriented Programming.

These systems influenced the design of the Common Lisp Object System (CLOS). CLOS was developed specifically for this standardization effort, and was separately written up in ``Common Lisp Object System Specification.'' However, minor details of its design have changed slightly since that publication, and that paper should not be taken as an authoritative reference to the semantics of the object system as described in this document.

In 1980 Symbolics and LMI were developing Lisp Machine Lisp; stock-hardware implementation groups were developing NIL, Franz Lisp, and PSL; Xerox was developing Interlisp; and the SPICE project at CMU was developing a MacLisp-like dialect of Lisp called SpiceLisp.

In April 1981, after a DARPA-sponsored meeting concerning the splintered Lisp community, Symbolics, the SPICE project, the NIL project, and the S-1 Lisp project joined together to define Common Lisp. Initially spearheaded by White and Gabriel, the driving force behind this grassroots effort was provided by Fahlman, Daniel Weinreb, David Moon, Steele, and Gabriel. Common Lisp was designed as a description of a family of languages. The primary influences on Common Lisp were Lisp Machine Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme. Common Lisp: The Language is a description of that design. Its semantics were intentionally underspecified in places where it was felt that a tight specification would overly constrain Common Lisp research and use.

In 1986 X3J13 was formed as a technical working group to produce a draft for an ANSI Common Lisp standard. Because of the acceptance of Common Lisp, the goals of this group differed from those of the original designers. These new goals included stricter standardization for portability, an object-oriented programming system, a condition system, iteration facilities, and a way to handle large character sets. To accommodate those goals, a new language specification, this document, was developed.

Copyright 1996-2005, LispWorks Ltd. All rights reserved.
2015-40/2213/en_head.json.gz/14052 | Call to Order: Mr. Lipski called the meeting to order at 7:30 PM.
Developing a 3, 5, and 7 year IT Plan for the Township: Mr. Lipski provided the members with copies of a Strategic Goals document prepared by SUNY Albany’s Information Technology department. He suggested that the Committee review this document and begin to draft a similar document for the Township as both a guide for the future and a tool to augment technology budget planning and possibly for seeking grants. Any consideration of the other projects the Committee had discussed, such as improving the Web site, developing a site for the Tax Collector, streaming video of Township meetings, enhanced use of technology during Township broadcasts, could be part of this long term plan. Ms. Brennan suggested that in considering a long term technology plan, each action item should have two goals, serving the Township staff and serving the residents. She noted as an example that the work done for the Park and Recreation Department helped the Township staff with an on-line presence and with record keeping, and also provided the community with on-line registration for programs. She suggested that before starting on a plan, the Committee members should meet with the various departments to learn what their technology uses are now and what they might need going forward.
Recommendations for Equipment Replacement Cycles: Mr. Herr said that within the Township’s staff there are different levels of sophistication with the use of technology. He briefly outlined the four areas using different technologies: Finance/ using an AMS financial package. This is an Alpha operating system provided and serviced by a local vendor
Administration/Park and Recreation/ using Microsoft office (ten employees have begun using open source software this week.) Police/ using Alert, which is a Pennsylvania based police application. Alert is used by about 1/3 of the State’s police departments.
Codes/ using a package for codes and permits.
Document management uses DocStar. Mr. Herr explained that ten employees have begun using open source software this week as a test. It is free, and is similar in appearance to Microsoft Office 2000. There are some difficulties to be worked out by the police secretary and the recreation programmer. The Township is hoping to expand the use of the open source software because of the costs associated with licensing and upgrading Office. Ms. Brennan asked about compatibility with the community.
Mr. Herr said that the software should be compatible with Microsoft. There would still be some Microsoft Office availability to Township employees, as certain functions of the various departments require the use of Excel, Publisher and PowerPoint. Mr. Herr reviewed the infrastructure, noting that the entire campus has fiber optic cable, with some additional copper wiring available. There are six servers, including the DocStar server. The DocStar server is the oldest, at six years old. It is beginning to show problems. The Police Department has its own server. The squad cars have cellular internet access using mobile data terminals. All have recently been upgraded using a grant from Homeland Security.
The Township Web site is hosted externally but managed internally. It is a Dreamweaver site, which is difficult to maintain. Internet access is through Comcast, with PA-Tech as a back-up. There are 29 channels for voice.
Video has a separate connection with Comcast. There is also a Verizon connection to provide public service feeds to Verizon customers.
PEG Central is now providing internet video, hosted off-site for a yearly fee.
Mr. Herr said that there is conventional daily back-up to tape. File servers are backed up daily. The back-ups are not encrypted.
There is no disaster recovery plan for technology. Mr. Herr said that some disaster recovery and protection have no costs associated; they are procedural.
Mr. Lipski suggested that this should be part of the IT Plan. The Committee should investigate each department’s tolerance for downtime.
Mr. Herr said that the current replacement plan calls for ten new workstations this year, with an average turnaround of every five years. The new stations are installed by Beth in the technology department.
The new building only has Wi-Fi in the main meeting room. It can only be accessed privately. The building is wired for wireless access. Mr. Herr suggested that there should be two zones for public and for private access.
Mr. Gallagher said that this should be part of the budget for the construction of the new building.
Replacement or Upgrade of Docstar: Mr. Herr explained that the existing Docstar server seems to be having problems. It should be replaced. Once it is replaced, Docstar would be upgraded. The upgrade would not have an additional cost; it is part of tech support already provided.
Mr. Gallagher said that the Supervisors are considering removing this upgrade from the budget. It had not been understood that the server is showing signs that it needs to be replaced. The Board had wrongly assumed that the new server was needed only in order to upgrade Docstar, not that the new server is needed because the existing one is having problems.
Mr. Herr said that he would estimate that a new server to replace the existing Docstar server would cost under $6000, including licensing and labor. This server could be part of a disaster recovery plan.
Mr. Gallagher noted that the Township has a disaster recovery plan which does not include technology. Fire Marshal Don Harris oversees the plan. Letters of re-appointment: The recording secretary reminded the members that their appointments are annual. If they wish to continue on the committee, letters of interest should be sent to the Township Manager at [email protected]. Appointments are made at the Board reorganization meeting on January 3, 2010.
Mr. Lipski asked the members to consider serving as chairman next year. Mr. Lipski noted that the next regular meeting date would be Tuesday, December 28, 2010. As this is a holiday week, he urged the members to let the recording secretary know as early as possible whether they would be available to attend.
Respectfully Submitted: Mary Donaldson, Recording Secretary | 计算机 |
2015-40/2214/en_head.json.gz/246 | IETF Starts Work On Next-Generation HTTP Standards
from the new-way-of-doing-things dept.
alphadogg writes "With an eye towards updating the Web to better accommodate complex and bandwidth-hungry applications, the Internet Engineering Task Force has started work on the next generation of HTTP, the underlying protocol for the Web. The HTTP Strict Transport Security (HSTS), is a security protocol designed to protect Internet users from hijacking. The HSTS is an opt-in security enhancement whereby web sites signal browsers to always communicate with it over a secure connection. If the user is using a browser that complies with HSTS policy, the browser will automatically switch to a secure version of the site, using 'https' without any intervention of the user. 'It's official: We're working on HTTP/2.0,' wrote IETF Hypertext Transfer Protocol working group chair Mark Nottingham, in a Twitter message late Tuesday."
hsts
Giving Your Computer Interface the Finger
from the let-your-fingers-do-the-computing dept.
moon_unit2 writes "Tech Review has a story about a startup that's developed software capable of tracking not just hand movements but precise finger gestures. The setup from 3Gear requires two depth-sensing cameras (aka Kinects) at the top corners of your display. Then simply give your computer a thumbs up — or whatever other gesture you might feel like — and it'll know what you're doing. The software is available for free while the product is in beta testing, if you want to give it a try."
Singer Reportedly Outbids NASA for Space Tourist's Seat
from the money-well-spent dept.
RocketAcademy writes "ABC News is reporting that Phantom of the Opera singer/actress Sarah Brightman outbid NASA for a seat on a Soyuz flight to the International Space Station. Brightman reportedly paid more than $51 million. If that story is true, there may be some interesting bidding wars in the future."
Bruce Perens: The Day I Blundered Into the Nuclear Facility
from the did-you-remember-to-lock-the-door? dept.
Bruce Perens writes "I found myself alone in a room, in front of a deep square or rectangular pool of impressively clear, still water. There was a pile of material at the bottom of the pool, and a blue glow of Cherenkov radiation in the water around it. To this day, I can't explain how an unsupervised kid could ever have gotten in there."
Starting Next Year, Brazil Wants To Track All Cars Electronically
from the we-know-you-weren't-stuck-in-traffic dept.
New submitter juliohm writes "As of January, Brazil intends to put into action a new system that will track vehicles of all kinds via radio frequency chips. It will take a few years to accomplish, but authorities will eventually require all vehicles to have an electronic chip installed, which will match every car to its rightful owner. The chip will send the car's identification to antennas on highways and streets, soon to be spread all over the country. Eventually, it will be illegal to own a car without one. Besides real time monitoring of traffic conditions, authorities will be able to integrate all kinds of services, such as traffic tickets, licensing and annual taxes, automatic toll charge, and much more. Benefits also include more security, since the system will make it harder for thieves to run far away with stolen vehicles, much less leave the country with one."
panopticon
Regulators Smash Global Phone Tech Support Scam Operation
from the grarrr-ftc-smash dept.
SternisheFan sends this excerpt from ZDNet:
"Regulators from five countries joined together in an operation to crack down on a series of companies orchestrating one of the most widespread Internet scams of the decade. The U.S. Federal Trade Commission (FTC) and other international regulatory authorities today said they shut down a global criminal network that bilked tens of thousands of consumers by pretending to be tech support providers. FTC Chairman Jon Leibowitz, speaking during a press conference with a Microsoft executive and regulators from Australia and Canada, said 14 companies and 17 individuals were targeted in the investigation. In the course of the crackdown, U.S. authorities already have frozen $188,000 in assets, but Leibowitz said that would increase over time thanks to international efforts."
The Sci-fi Films To Look Forward To In 2013
from the still-waiting-on-serenity-2 dept.
brumgrunt writes "Not every sci-fi film released in 2013 will be a sequel or franchise movie. Den Of Geek has highlighted the ten sci-fi movies that might just offer something a little different from the PG-13, family-centric norm."
The list includes Elysium, from the writer/director of District 9. It's "set in 2159, where Earth has become so hopelessly overcrowded that the richest members of society live on a luxurious orbiting space station." There's also After Earth, directed (but not written) by M. Night Shyamalan, which stars Will Smith and his son Jaden. They "crash land on Earth at some point in the future, by which time it's become a dangerous place devoid of human life." And, of course, there's Ender's Game.
Google Glass, Augmented Reality Spells Data Headaches
from the waiting-on-google-smell-o-vision dept.
Nerval's Lobster writes "Google seems determined to press forward with Google Glass technology, filing a patent for a Google Glass wristwatch. As pointed out by CNET, the timepiece includes a camera and a touch screen that, once flipped up, acts as a secondary display. In the patent, Google refers to the device as a 'smart-watch.' Whether or not a Google Glass wristwatch ever appears on the marketplace — just because a tech titan patents a particular invention doesn't mean it's bound for store shelves anytime soon — the appearance of augmented-reality accessories brings up a handful of interesting issues for everyone from app developers to those tasked with handling massive amounts of corporate data. For app developers, augmented-reality devices raise the prospect of broader ecosystems and spiraling complexity. It's one thing to build an app for smartphones and tablets — but what if that app also needs to handle streams of data ported from a pair of tricked-out sunglasses or a wristwatch, or send information in a concise and timely way to a tiny screen an inch in front of someone's left eye?"
HP Plans To Cut Product Lines; Company Turnaround In 2016
from the just-what-investors-like-to-hear dept.
dcblogs writes "Hewlett-Packard CEO Meg Whitman told financial analysts today that it will take until 2016 to turn the company around. Surprisingly, Whitman put some of the blame for the company's woes on its IT systems, which she said have hurt its internal operations. To fix its IT problems, Whitman said the company is adopting Salesforce and HR system Workday. The company also plans to cut product lines. It said it makes 2,100 different laser printers alone; it wants to reduce that by half. 'In every business we're going to benefit from focusing on a smaller number of offerings that we can invest in and really make matter,' said Whitman."
Kepler Sees Partial Exoplanetary Eclipse
from the peek-a-boo-from-light-years-away dept.
New submitter CelestialScience writes "The heavens have aligned in a way never seen before, with two exoplanets overlapping as they cross their star. Teruyuki Hirano of the University of Tokyo, Japan, and colleagues used data from the Kepler space telescope to probe KOI-94, a star seemingly orbited by four planets. It seems that one planet candidate, KOI-94.03, passed in front of the star and then the innermost candidate, KOI-94.01, passed between the two. The phenomenon is so new it doesn't yet have a name, though suggestions include 'planet-planet eclipse,' 'double transit,' 'syzygy' and 'exosyzygy.'"
MPAA Boss Admits SOPA and PIPA Are Dead, Not Coming Back
from the time-to-schedule-the-victory-lap dept.
concealment points out comments from MPAA CEO Chris Dodd, who has acknowledged that SOPA and PIPA were soundly — and perhaps permanently — defeated. Quoting Ars Technica:
"Dodd sounded chastened, with a tone that was a far cry from the rhetoric the MPAA was putting out in January. 'When SOPA-PIPA blew up, it was a transformative event,' said Dodd. 'There were eight million e-mails [to elected representatives] in two days.' That caused senators to run away from the legislation. 'People were dropping their names as co-sponsors within minutes, not hours,' he said. 'These bills are dead, they're not coming back,' said Dodd. 'And they shouldn't.' He said the MPAA isn't focused on getting similar legislation passed in the future, at the moment. 'I think we're better served by sitting down [with the tech sector and SOPA opponents] and seeing what we agree on.' Still, Dodd did say that some of the reaction to SOPA and PIPA was 'over the top' — specifically, the allegations of censorship, implied by the black bar over Google search logo or the complete shutdown of Wikipedia. 'DNS filtering goes on every day on the Internet,' said Dodd. 'Obviously it needs to be done very carefully. But five million pages were taken off Google last year [for IP violations]. To Google's great credit, it recently changed its algorithm to a point where, when there are enough complaints about a site, it moves that site down on their page — which I applaud.'"
Earthquakes Correlated With Texan Fracking Sites
from the all-your-fault dept.
eldavojohn writes "A recent peer reviewed paper and survey by Cliff Frohlich of the University of Texas' Institute for Geophysics reveals a correlation between an increase in earthquakes and the emergence of fracking sites in the Barnett Shale, Texas. To clarify, it is not the actual act of hydrofracking that induces earthquakes, but more likely the final process of injecting wastewater into the site, according to Oliver Boyd, a USGS seismologist. Boyd said, 'Most, if not all, geophysicists expect induced earthquakes to be more likely from wastewater injection rather than hydrofracking. This is because the wastewater injection tends to occur at greater depth, where earthquakes are more likely to nucleate. I also agree [with Frohlich] that induced earthquakes are likely to persist for some time (months to years) after wastewater injection has ceased.' Frohlich added, 'Faults are everywhere. A lot of them are stuck, but if you pump water in there, it reduces friction and the fault slips a little. I can't prove that that's what happened, but it's a plausible explanation.' In the U.S. alone this correlation has been noted several times."
Why Klout's Social Influence Scores Are Nonsense
from the another-system-to-game dept.
jfruh writes "Klout is a new social media service that attempts to quantify how much 'influence' you have, based on your social media profile. Their metrics are bizarre — privacy blogger Dan Tynan has been rated as highly influential on the topic of cigars, despite having only smoked one, decades ago. Nevertheless, Klout scores have real-world consequences, with people deemed influential getting discounts on concert tickets or free access to airport VIP lounges (in hopes that they'll tweet about it, presumably)."
Lenovo Building Manufacturing Plant in North Carolina
from the unknown-lamer-sent-to-the-factories dept.
An anonymous reader writes "One of the major themes of the ongoing presidential election in the United States has been the perceived need to bring product manufacturing back to the United States. A recent announcement from Lenovo is going to play to this point; the PC manufacturer said today that it's building a U.S. location in Whitsett, North Carolina. The new facility is small, with just over 100 people and is being built for a modest $2M, but Lenovo states that it's merely the beginning of a larger initiative."
It makes sense: their U.S. HQ is a stone's throw away in RTP.
ROSALIND: An Addictive Bioinformatics Learning Site
from the programming-is-fun-we-swear dept.
Shipud writes "Bioinformatics is the science that deals with methods for storing, retrieving, and analyzing molecular biology data. Byte Size Biology writes about ROSALIND, a cool concept in learning bioinformatics, similar to Project Euler. You are given problems of increasing difficulty to solve. Start with nucleotide counting (trivial) and end with genome assembly (putting it mildly, not so trivial). To solve a problem, you download a sample data set, write your code and debug it. Once you think you are ready, you have a time limit to solve and provide an answer for the actual problem dataset. If you mess up, there is a new, timed dataset to download. This thing is coder-addictive. Currently in Beta, but a lot of fun and seems stable."
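To give a sense of the entry-level problems, the nucleotide-counting task boils down to a few lines of code. A minimal sketch in Python (the function name and sample string are just illustrative; you solve offline in whatever language you like and submit only the answer):

    # Count how many times each nucleotide appears in a DNA string,
    # reported in the order A, C, G, T.
    def count_nucleotides(dna):
        return [dna.count(base) for base in "ACGT"]

    print(*count_nucleotides("AGCTTTTCATTCT"))  # prints: 2 3 1 7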
You can have my passwords ...
... right now. Are you ready to write them down?
... only if we're married or similarly situated
... if I trust you generally
... if you work for the U.S. border patrol
... if you send me a nice warrant first
... but only the duress passwords.
... from my cold, dead fingers
Sorry I can't help, but I just can't recall any ...
2015-40/2214/en_head.json.gz/1395 | I-On Interactive Opens New York Office
Eyeing business potential in the Northeast,i-on interactive Inc., Boca Raton, FL, has opened an office in New York. The shop hopes to develop new client relationships and to work more closely with freelance technical and creative talent in New York. It also seeks to license its i-on interactive site framework, a software platform for Web applications. "We work with retained independent contractors who are specialists in their fields," said Scott Brinker, chief technology officer at i-on. "This gives us more flexibility in accurately matching resources to projects than most shops." The firm has already worked with clients in the New York area, particularly in the fashion industry, such as John Barrett Salon. Other i-on clients include Citrix and skin care products cataloger Skin Store, for which it created skinstore.com. | 计算机 |
2015-40/2214/en_head.json.gz/1445 | Social-Media Maven Amy Jo Martin on Working Hard and Being a Renegade
Shira Lazar
Few people know the import of social media more than Amy Jo Martin.
After a former life as the director of digital media and research for the Phoenix Suns, Martin started up her own companyDigital Royalty, a firm that develops social-media campaigns for corporate and entertainment brands along with professional sports teams and athletes like Shaquille O'Neal. Her company also dishes out social-media education programs through Digital Royalty University, which offers to train individuals, small businesses and corporate brands.
We sat down with the social-media maven to chat about her new book, Renegades Write The Rules and hear her tips for breaking industry molds to connect and engage more effectively than ever before. Here is an edited version of that conversation:
Q: Who are these renegades you refer to in your book?
A: Renegades are curious by nature and they don't conform to the way things have always been done. They color outside the lines, but they don't cross the line. They ask forgiveness instead of permission, but they bring their results with them. Within large organizations, renegades are oftentimes disguised as "intrepreneurs," which are internal entrepreneurs.
Related: What You Can Learn from Celebrities About Social Media
Amy Jo Martin pictured with superstar client Shaquille O'Neal who happened to also write the forward for her new book, Renegades Write The Rules.
Q: What does it take to be a renegade in business today?
A: It's important to experiment and fail early so by the time everyone else catches up, you're already polishing up your knowledge. But keep in mind that whenever people do things that haven't been done before, adversity and healthy tension shadows them. At Digital Royalty, we call these 'innovation allergies.' Renegades learn to embrace the notion of getting comfortable with being uncomfortable.
Q: What's the most important thing you teach clients about social media?
A: That it's not media. Whoever coined the term 'social media' didn't do us any favors. It's not 'media,' it's simply communication. Think of it more like the telephone than the TV. The goal is to deliver value when, where and how your audience wants to receive it. That means identifying your value by listening to what your audience wants to hear. Social-communication channels are tools which allow us to humanize brands -- whether that's a large corporate brand, a small business or an individual brand. People connect with people, not logos.
Related: Thrillist's Ben Lerer on his Success as a Young Trep
Q: You've worked with some pretty big celebs and brands. What are some examples of how you've helped them build awareness, engagement and increase sales through social media?
A: One of my best examples actually started out as a mistake. President of the Ultimate Fighting Championship and Digital Royalty client Dana White accidentally tweeted out his phone number to his 1.5 million Twitter followers. His followers retweeted his number, and, within minutes, an estimated 9 million people had his number. He called me in a panic, asking if I could undo what he had just done. After I explained that there was nothing I could do, he said, 'Well, if I'm stupid enough to tweet my phone number to everyone, I'm going to take their calls.' For the next 45 minutes, Dana took as many calls as he could. He listened to fans' concerns and answered their questions. As a result, headlines were made, engagement levels spiked and the UFC fan base increased. Dana exposed the human behind the brand, and in doing so, created an authentic community of loyal followers that has made the UFC one of today's most successful sports organizations.
After listening to the response, Digital Royalty took the concept of creating a 'Fan Phone' to Dana. We encouraged fans to call Dana before events by tweeting the Fan Phone number to his millions of fans. Eventually, we were able to secure a marketing partner to sponsor this concept, which made this social-media mistake a revenue generating opportunity. The concept became scalable when we provided all the athletes with fan phones as well.
Related: Why Social Media is Nothing Without Creativity
Q: What tips do you have for young entrepreneurs?
A: Don't forget your personal brand and why you do what you do. People don't buy what you do. They buy why you do it.
Also, beware of SOS (shiny object syndrome). It's important to know the difference between a distraction and a real opportunity. Learn to motivate and inspire yourself, everyone else is busy. The business you say no to is just as important as the business you say yes to. Lastly, your hustle factor is often your differentiating factor. Work hard. | 计算机 |
2015-40/2214/en_head.json.gz/1996 | Internet Explorer 9 hits RTM build
News Reporter
@byron_hinson
It seems that Microsoft has signed off the RTM build of Internet Explorer 9 just a few days before the final version is released to the web on Monday. According to a number of Russian sites, the RTM build is 9.00.8112.16421.110308-0330 and was compiled on March 8.
Earlier in the week, Microsoft officially confirmed that Internet Explorer 9 will be launching at 9 PM PST on March 14 as was previously predicted by Neowin back in February.
The launch will take place as part of their "Beauty of the Web" event, which will be hosted by the Internet Explorer 9 team in Austin City Limits Live. The event will begin at 9 PM on Monday, March 14, 2011 at the Moody Theater. The event will also mark a year to the day that Microsoft launched the first Platform Preview of Internet Explorer 9 to developers and the public.
Microsoft also confirmed that the public would be able to download the final version of the popular browser at 9 PM PST. Users who have already downloaded the release candidate version of the browser will automatically be updated to the final version, most likely via Windows Update.
Image Source: Windows 8 Beta
2015-40/2214/en_head.json.gz/2037 | Open Source Community Embracing Novell's openSUSE project
Growing community of contributors, soaring numbers of new registered SUSE Linux installations and contributions demonstrate strong interest and support
BARCELONA (BrainShare® 2005) | September 12, 2005
openSUSE.org, an open source Linux* project sponsored by Novell and launched in early August, is off to a strong start, generating extensive interest and support from both the development community and end users. Within the first few weeks, registered installations of the SUSETM Linux distribution have soared to morethan 5,000 per day, with a copy downloaded every 18 seconds. Already, there are more than 4,500 registered members at openSUSE.org, representing a broad cross section of the technology community, including students, IT professionals from major corporations, open source developers and long-time Linux enthusiasts. As a result, the openSUSE project has developed significant momentum toward achieving its goals of increased global adoption of Linux and development of the world's most usable Linux.“Novell is absolutely right to focus on growing the user base for Linux beyond the established technical community, ” said Gary Barnett, research director at OVUM. “The investment that Novell is making in its work on usability is proof that it understands that there's still work to do in helping Linux cross the divide between the technically savvy and those users who just want to use software to make them more productive. This, combined with the impressive early participation numbers at openSUSE.org, demonstrates that Novell is committed to making Linux more accessible and relevant to non-technical users and technical users alike.”Through the openSUSE project, the community of Linux developers, designers, writers and users can leverage the existing SUSE Linux 9.3 distribution and participate in the creation of the next version of the distribution, SUSE Linux 10.0. To facilitate public review, the openSUSE project team has already made five beta or pre-release builds of SUSE Linux 10.0 available to the public in the last month. These builds have been downloaded and installed more than 12,000 times. Community members have already reported more than 500 bugs to the openSUSE project, which immediately helped to improve the total quality of SUSE Linux 10.0.The community is also directly contributing to the openSUSE.org wiki, creating more than 100 additional Web pages and almost doubling the site's content since its launch. Increasingly, questions are being asked and answered by this community, fostering a dialogue that will help improve the quality of both the project and the distribution.The openSUSE project is fast becoming an international effort. The project has already received numerous user requests to translate some or all of the online content into local languages. With the Novell-sponsored launch of a full-featured openSUSE site for China, openSUSE.org.cn, in mid August, the fast growing Chinese open source community is directly participating in the creation of a global Linux distribution for the first time.“The community's initial response to openSUSE.org has been tremendous, but this is only the beginning,” said David Patrick, vice president and general manager, Linux, Open Source Platforms and Services at Novell. “Our goal is to help users succeed with a stabilized Linux distribution that they can use for their everyday computing needs.”For more information on the openSUSE project, please visit: www.opensuse.orgAbout NovellNovell, Inc. (Nasdaq: NOVL) delivers software for the open enterprise. 
With more than 50,000 customers in 43 countries, Novell helps customers manage, simplify, secure and integrate their technology environments by leveraging best-of-breed, open standards-based software. With over 20 years of experience, Novell's 6,000 employees, 5,000 partners and support centers around the world help customers gain control over their IT operating environment while reducing cost. More information about Novell can be found at http://www.novell.com.Novell is a registered trademark; BrainShare is a registered servicemark; and SUSE is a trademark of Novell, Inc. in the United States and other countries. *Linux is a registered trademark of Linus Torvalds. All other third-party trademarks are the property of their respective owners.
Susan Morton
Novell
Telephone: (781) 464-8239
Email: [email protected]
14th January 1999 Archive
Corel ships Linux/ARM-based departmental server
The NetWinder strategy proceeds apace, and this one looks particularly tasty
Corel has announced the availability of its latest Linux machine, the NetWinder Group Server. The new StrongARM and Red Hat-based box is aimed at departmental workgroups and small businesses, and is priced between $979 and $1,839. The entry-level model is a diskless machine with 32 megabytes of RAM, and pricing is $1,339, $1,629 and $1,839 for 2, 4 and 6 gigabyte hard disks respectively. The machines come with a full suite of Internet/intranet services, including Web publishing, Common Gateway Interface, Perl scripting, HTML page authoring, email services, and public and private threaded discussion, allowing workgroup communication and collaboration. They also have document indexing, searching and management facilities, and support cross-platform file sharing and transfer between the NetWinder and NT, Windows 95 and Apple platforms. The machines run a Corel-modified version of Red Hat 5.1 on a 275MHz StrongARM SA-110 CPU. They include two Ethernet connections, Iomega Zip and Imation support, and 2 megabytes of accelerated video. Sound facilities and onboard video capture and playback are also included. ®
MS attorney caught plying witness with Linux numbers
Judge tells them to cut it out - Chinese walls, we've heard of them...
Microsoft's first witness for the defence collected a reprimand from the judge for talking out of class yesterday. Richard Schmalensee had suddenly come up with an estimate that there are something like 10 million Linux servers out there, having failed to mention this either in his written testimony or in earlier discussion. When puzzled DoJ attorney David Boies asked him where the number had come from, in that case, Schmalensee replied that he'd been passed it by Microsoft attorney David Heiner during a court recess. This is apparently how expensive academic economists conduct their research. But not, apparently, how Judge Thomas Penfield Jackson expects his courtroom to run. He issued an immediate reprimand instructing Microsoft attorneys not to discuss the case with witnesses. Schmalensee also ran into trouble over his definition of Microsoft's market, as it appears he has defined it one way in his evidence for one of the other cases, Microsoft versus Bristol. In the DoJ case it clearly suits Microsoft to try to establish the boundaries of its market as widely as possible, thus increasing the scope of possible competition and reducing the impression that Microsoft has a monopoly. In Bristol, Schmalensee argued that Microsoft's market could be narrowly defined. This difference in definition did however attract the judge's attention, and his sudden interest in other cases could be unhelpful for Microsoft. There's certainly some stuff in both the Caldera and Sun matters which could make unpleasant contributions to the DoJ trial. As regards the meat of the first cross examination sessions themselves, Schmalensee seems to have clutched onto Linux determinedly, but to have been eventually forced to concede that Microsoft had not had a serious OS competitor in 12 years. But, he said hopefully, in a year or two, Linux and/or BeOS really could be rivals. Honest. ® Complete Register trial coverage
Compaq goes back to school
Wants slice of the education market
Compaq has set its sights on the UK education market and plans to become a major player. The vendor said it would be working with 50 educational software vendors to grow its market share to 15 per cent by the end of 1999. It will offer products and services aimed at different types of educational establishment, and is working in line with the government’s objectives for the National Grid for Learning. Compaq has put together an educational team to work on the projects, three of whom are ex-teachers. David Heath, formerly of Apple Computer, heads a team of five people based at Compaq’s Glasgow call centre. This group will increase to 25, Compaq said. To encourage resellers and independent software vendors to take up the Compaq educational challenge, the manufacturer is launching a channel accreditation programme. It will be specifically aimed at channel partners providing curriculum-based software. ®
Record sales for AMD but analysts disappointed
Chip maker now claims presence in 16 per cent of all PCs
AMD failed to live up to analysts’ expectations, despite record sales growth during the fourth quarter of its financial year, ending 27 December. Sales were up 16 per cent on the previous quarter to $788.8 million, which was less than the $800 million Wall Street had anticipated. Q4 97 saw AMD record sales of $613.1 million - making Q4 98 an increase of 29 per cent year-on-year. Q4 98 profit stood at $22.3 million, while the same quarter last year saw AMD record a loss of $12.3 million. For the full year, AMD saw sales increase by eight per cent to $2.54 billion, but the year still ended in a net loss of $103.9 million. In 1997 sales were $2.3 billion, with a net loss of $21 million. AMD shipped more than 13.5 million processors from the AMD-K6 family, more than 8.5 million of which were K6-2 chips. The company claims that 16 per cent of all Windows-based PCs contain an AMD chip, with this proportion growing to 38 per cent of sub-$1,000 PCs. ®
‘Make people use Explorer’ – Gates email
The latest email deluge shows Bill leveraging the OS again. Tsk...
More smoking emails poured out of the DoJ late last night, reopening the debate over when Microsoft decided to integrate IE, and why. And one of the most damaging is from Bill Gates himself. As late as February 1997 Gates was writing to Jim Allchin and Paul Maritz saying: "It seems clear that it will be very hard to increase browser market share on the merits of IE alone. It will be more important to leverage the OS to make people use IE instead of Navigator." It’s difficult to figure out what Gates might have meant by this without arriving at damaging conclusions. Microsoft last summer began claiming that it had been its intention to integrate IE in the OS from 1994 onwards, and in early 1997 development of Windows 98 was well under way. Microsoft had announced an integrationist strategy shortly after the launch of Windows 95, the plan being to achieve integration with IE 4.0. At the time Gates was writing, Explorer 4.0 was due - it was originally planned for Q1 1997, but in fact slipped until October. So in February of that year Gates had made the statement of strategic direction, and if - as Microsoft says now - the integration plans had actually been signed, sealed and delivered for some years, what was he debating? And how could Microsoft "make people use IE instead of Navigator," if not on the merits of the product? The logical conclusion would seem to be that it could only do so by restricting Navigator’s distribution channels, and by making it harder for Navigator to work with Windows. Microsoft of course says it doesn’t do that, so we’re left still wondering what Gates is on about, aren’t we? Other emails suggest that the final decision to put IE and Windows 98 into the same product hadn’t yet been made in early 97. Writing in March of that year Kumar Mehta says: "If we take IE away from the OS most users [i.e. users of Navigator] will never switch to us." As Microsoft had already said it was going to put IE into the OS, this email and Gates’ make it pretty clear that there was a move within Microsoft to go with a less integrated strategy after all. Why would it be considering this? Pressure from OEMs might have been a factor, and note that other internal documentation that’s come up in the past reveals real fear on Microsoft’s part that there might be some form of Compaq- or Intel-led OEM revolt. So in 97, maybe some execs were worried about pushing too hard, too fast. ® Complete Register trial coverage
Apple chalks up $123 million Q1 profit
High holiday iMac sales fuel strong growth
Apple has posted its fifth consecutive profitable quarter, as promised by interim CEO Steve Jobs at last week's MacWorld Expo keynote. As anticipated (see Apple set to announce $1.7 billion Q1 revenue), the Mac maker recorded first quarter of fiscal 1999 revenues of $1.7 billion, up eight per cent on the same period a year ago. Profits were higher than expected, reaching $152 million -- 95c a share. For Q1 1998, Apple recorded a profit of just $47 million. However, the latest figure includes $29 million made by offloading some 2.9 million of the shares Apple holds in ARM. Take that out of the equation, and Apple made $123 million -- at 78c a share, that's rather closer to the 70c a share Wall Street was expecting the company to declare. Apple also reported that it sold 519,000 iMacs during the quarter, a significant proportion of the 800,000-odd it shipped since the consumer computer was launched on 15 August. Clearly the company's Christmas extended advertising campaign and $29.99/£29.99-a-month iMac hire purchase scheme have paid off. So too has Jobs' policy of keeping inventory to a bare minimum. The company reported the quarter saw inventory drop to $25 million, or two days' worth of kit. That's five days' fewer machines than Dell's previously industry-leading five days inventory, claimed Jobs. The iMac sales contributed heavily to a year-on-year growth in unit shipments of 49 per cent, which Apple claimed was three to four times the industry average. ®
NT fails US government crypto tests
Shortcomings in the CryptoAPIs, so brace yourself for Service Pack 5...
Major cryptographic shortcomings in NT 4.0 have forced Microsoft to engage in major surgery on the product, according to Web news service Network World. In a story earlier this week Network World revealed that NT 4.0 had failed US government cryptography tests. In order to be sold to the US and Canadian governments, products have to pass the Federal Information Processing Standard (FIPS) 140-1 certification test. NT failed, and the testing revealed problems in NT’s cryptographic processing. Microsoft is preparing a fix pack for release this quarter, but application of this will probably result in users being able to run IE 4.0, Outlook 98 and various other applications in FIPS mode. IE 5.0 will know how to deal with FIPS, but it seems to be a moving target for Microsoft. Humorously, Netscape Communicator has passed FIPS 140-1. According to Network World the problems are related to NT 4.0’s CryptoAPIs and were uncovered at government-certified testing lab CygnaCom. Service Pack 4, which was released relatively recently, was intended to be the last fixpack for NT 4.0, but if you’re in the US government, it looks like you’re going to have to deal with Service Pack 5 after all... ®
India issues red alert against US security software
US crypto export rules mean their software isn't safe, warns Defence organisation
The Indian government looks set to forbid Indian banks and financial institutions from using US-developed network security software if the US government does not ease the restrictions it applies to the export of encryption technologies. The announcement was made by India's Central Vigilance Commissioner (CVC), N Vittal, after the country's Defence Research and Development Organisation's (DRDO) centre for artificial intelligence issued a 'red alert' against all network security software developed in the US. The alert warned that, because of the limits the US government places on the size of data encryption keys in exported applications, US software was too easy to hack and could thus prove a security hazard. "To put it bluntly, only insecure software can be exported. When various multinational companies go around peddling 'secure communication software' products to gullible Indian customers, they conveniently neglect to mention this aspect of US export law," said the DRDO in a letter to the CVC, quoted in Indian newspaper The Economic Times. The DRDO's centre for artificial intelligence also warned of the possibility that imported software products could contain technological time bombs designed to "cause havoc to the network when an external command is issued by a hostile nation". Of course, quite how seriously the DRDO takes such a threat is hard to determine, since its red alert letter appears to be as much about promoting its own, indigenously developed encryption software, which is due to be made available for testing in three months' time. "The encryption part of the software is complete and only the communication protocols remain to be written," reported the DRDO. "Since the software has been written by ourselves, there is no upper limit on the security level provided by encryption in the software exported from the USA." Which is, of course, the fundamental flaw with US encryption policy, despite the Department of Commerce's recent relaxation of some of the rules contained in that policy. If users can't get the level of security they want from US software, they'll go elsewhere for it. And India is less likely to limit the export of its own encryption products to other, unsavoury regimes -- though there's no guarantee they wouldn't include their own 'time bombs'... In the meantime, the CVC is expected to wait until the DRDO's own software is ready before issuing an official warning against US security software to India's banks. ®
Many-coloured iMacs upset dealers
The Register saves the day
Recent reports in some weekly IT papers here in the UK (squeal if you know who you are) have pointed out the nightmare scenario facing Mac dealers when deciding which of the colourful new iMacs to hold in stock. The Register is pleased to announce that it has come to the rescue of Mac dealers everywhere. Mac retailers and peripherals vendors aren't looking forward to the introduction of the colourful range of iMacs, squealed the latest edition of The Register’s favourite channel weekly. It will cause inventory problems, Mac dealers whinged, if we get lumbered with the unpopular colours or simply sell out of the popular ones. Choosing between blueberry, strawberry, tangerine, grape and lime could indeed be baffling. And as for matching colours for peripherals, well that will be a nightmare, whined others. The consequences for office furniture salesmen are too grim to even talk about here. So we shan’t. However, The Register can reveal that far from causing problems, this could be the start of a liberating experience for Mac users all over the world. We spoke to image consultants House of Colour to get to the truth behind this colourful story. "Dealers should experiment a bit and combine colours to create a look," said colour expert Sarah Whittaker from House of Colour. "They should mix and match colours just as they would if they were creating their own outfits," she said. So, Mac users needn't be colour shy after all. Consultants at House of Colour recommended that lime and tangerine go well together, as do blueberry and grape, both of which would create some exciting combinations. So, in the never-ending search for the Holy Grail of the channel - i.e a real value-add - Mac dealers may have hit upon a whole new revenue stream. As well as the usual hardware, software and a bit of training, maybe they could now offer colour therapy as well. ®
VAT-busters target business on the Web
Illegal traders warned they cannot hide in Cyberspace
UK companies trading on the Web but failing to declare VAT are being targeted by Customs and Excise officers using the Internet to track them down. The initiative has proved so successful in East Anglia -- where it's been up and running for the last year -- that it's likely to adopted nation-wide by all 14 regional Customs divisions by the autumn. Six companies in the Anglia Region -- which includes Essex and Norfolk -- have already been caught out and are currently being investigated by officials for alleged fraud and failure to pay VAT. More still have been passed on to other Customs divisions throughout the country as officials in East Anglia have unearthed Web-based companies on the make. "The good news is that the companies we've found are now registered for VAT and paying tax," said Caroline Benbrook, district manager of the Anglia Region. "Most of them were ghost traders, and the Internet was the only place we would have found them. If we didn't look there, they'd still be trading illegally now," she said. A hotel and a company selling car parts were among the businesses unearthed by officials. Common dodges include companies trading online that are not registered for VAT, but still charging customers VAT and pocketing the difference. Or companies which are not charging VAT in order to undercut a competitor. Customs officials will decide within the next six months whether the scheme should be rolled-out throughout the country but Benbrook is in no doubt that it should. "If we want to stop fraud and tax evasion, we have to keep up with the forefront of technology," she said. ®
Marimba prepares for IPO
Once-feted push provider capitalises on new corporate application management focus
Marimba, the once much-hyped Java push technology developer, is set to announce an initial public offering, according to sources quoted in US finance paper The Red Herring. The company is believed to be near to completing its IPO prospectus, though the size of the offering has yet to be set. Morgan Stanley and Hambrecht & Quist are thought to be underwriting the stock issue. Formed in 1996 by a band of former Sun Java developers, Marimba quickly became Silicon Valley Flavour of the Month thanks to its clever Castanet push software and highly photogenic president and CEO Kim Polese. Castanet differed from other push applications, such as PointCast, by delivering to users' desktops not information per se, but the Java applets that presented that data. The company was very quick to see wider uses for the technology beyond pushing news and horoscopes across the Web -- even in the early days, Polese was suggesting that Castanet could be used to deliver Java applications and updates to those applications across corporate networks, what it calls "application distribution and management". When push failed to take off, Marimba changed its course to favour the corporate market, and has, over the last couple of years, avoided the limelight, done its damnedest to distance itself from all the early push technology hype and concentrated on building sales. Current customers include Compaq's Web subsidiary Alta Vista, Intuit, Ingram Micro, Seagate, Nortel, Bay Networks, Sun and the US DIY retail chain Home Depot. So far, Marimba has been funded to the tune of $18.5 million from venture capitalists, Wall Street investors and IT firms. ®
Analyst slams Softbank Yahoo! and ZD dealings
Claims our old friends at ZD work for a "debt-bloated carcass." Oh dear...
Internet stocks in general and those owned by Softbank in particular have come under savage fire from veteran US financial commentator Christopher Byron. Writing in his Back of the Envelope column in the NY Observer, Byron describes one such company, Ziff-Davis, as a "total basket case" and a "debt-bloated carcass." Byron is particularly interested in Yahoo! and in the proposed spin-off of ZD's Internet operations via a tracking stock IPO. He draws attention to the fact that Softbank Corporation of Japan owns stock in both ZD and Yahoo!, and suggests that "Ziff-Davis is dying so that Yahoo! might live." ZD IPOed last April, and Byron points out that the company took delivery of over $1.5 billion worth of debt as part of the process, while Softbank picked up a goodly wedge for the shares. "Here at Back of the Envelope," he says, "we took one look at the asset shuffle and predicted the company would soon be flat on its keester." And here it gets more interesting. In July Softbank spent $250 million on Yahoo! shares, and now has 30 per cent of the company. It also spent $400 million on shares in the E-Trade Group. E-Trade has since spent substantially on promotions via Yahoo!. Byron has also uncovered an SEC filing which says that in July-September Yahoo! advertising revenues from "Softbank and its related companies" grew from 4 per cent of net revenues to 8 per cent. Turning the screw further, Byron points out that Yahoo! beat analysts' earnings forecasts by 50 per cent, and that of the difference (5 cents a share), which totalled $5.2 million, $4.3 million was accounted for by that very revenue from Softbank. "Since the gross profit on ad revenues at Yahoo runs to about 90 percent and all the other business costs are pretty much fixed whether the ads come in or not, we may say with some confidence that, were it not for the Softbank revenues, Yahoo's actual third-quarter earnings would have been only 11 cents per share and not 15. The company would have beaten the Street's estimate by a mere penny per share. "That hyped-up trouncing of the Street's consensus forecast, announced on October 7, launched Yahoo's stock on its most explosive price surge ever, from $104 per share to more than $275 per share less than three months later. In fact, of course, nearly 100 percent of the run-up was fuelled by the most egregious sort of related-party transaction: Ad revenues supplied by a 30 percent shareholder of the company." And Ziff, "the debt-bloated carcass that provided the cash?" Byron says that in the company's latest quarterly filing it says that it expects, as of December 31, to be in violation of its loan covenants. Basically, it could end up defaulting. Byron points to the December 22 registration statement for the ZDNet tracking stock IPO, which says ZD intends to use the proceeds to pay off as much debt as possible. He's sceptical that this will work, to say the least: "This business - to be called ZDNet - looks exactly like the one it is being carved out of, only worse… What moron is going to pay anything for that!" Fortunately for Internet stocks everywhere, Byron reckons that the morons who've put their hands up for Web IPOs so far are quite likely to open their wallets yet again. ®
Oracle sets up fund to help Web developers of tomorrow
Not as philanthropic as it sounds though
Oracle is setting up a $100 million venture capital fund to boost small businesses developing Web-based applications. The only snag is that companies interested in getting their hands on some of the filthy lucre have to be doing their development work on Oracle 8i. The giant of the database world is planning to team up with established venture capital firms to spot the upcoming stars of tomorrow’s Internet industry. Companies working in ecommerce, content management and business intelligence will be likely candidates. Oracle CEO Larry Ellison said: "The next generation of business applications will be designed for and run on the Internet. Oracle plans to invest in and work closely with software companies that share this vision." Ellison obviously knows a good thing when he sees one, and Oracle will be offering these fledgling partners access to its own developers and technology, hoping that by getting a raft of growing companies to work with Oracle it will sew up the market in the future. ®
ATI reports record Q1 sales, profits
Rage Pro Turbo, Rage 128 lead sales to PC vendors
Graphics specialist ATI today reported record results for its first quarter of fiscal 1999. The company posted sales of $327.4 million, up 95 per cent on the same period last year, leading to profits of $52 million, a year-on-year increase of 112 per cent on the $24.5 million it made in Q1 1998. In fact, ATI's Q1 profits are closer to $50.1 million, thanks to costs deriving from the $70.9 million hit it's taking for the acquisition of PC-on-a-chip developer Chromatic Research. The company also warned it will be making further charges of $16 million for the next three quarters as a result of the purchase. That said, the outlook for ATI continues to appear very positive. ATI has always made most of its money from selling graphics acceleration chip-sets to PC manufacturers, rather than through the retail sale of graphics cards containing those chips. The massive growth in the 3D graphics add-in market over the last 18 months has encouraged more PC vendors to bundle sophisticated 3D technologies with their systems, and ATI has prospered accordingly, largely thanks to its Rage Pro and Rage Pro Turbo accelerators which brought the performance of the company's product line much closer to the leading retail add-ins. The Rage 128, which began shipping in volume at the beginning of the year and offers superior performance to 3Dfx's Voodoo 2, will extend that further. Apple is already shipping Rage 128 cards with every model in its professional Power Mac line. Rage Pro Turbo chip-sets drive the iMac and systems from Sun, Compaq, Dell, HP, NEC and Packard-Bell. ATI is also pushing its Rage Mobility notebook graphics acceleration range hard. Still, 3Dfx remains a potential trouble-maker, thanks to its dominance of the 3D market and its acquisition of board-maker STB. With 3Dfx set to release its next-generation Voodoo3 chip-set in the second quarter of 1999 and target it at the high-end games enthusiast market (see 3Dfx announces next-generation Voodoo), that leaves it plenty of scope to target Voodoo 2 and Voodoo Banshee-based boards at STB's PC vendor customers -- many of whom are now ATI customers. With Voodoo stable in its role as the graphics standard against which other chip-sets are measured, 3Dfx is in a good position to win back many of the customers STB lost to ATI. ®
Chips are up for Acer
Deal with SST to bring welcome cash boost
Acer Semiconductor has signed a five-year cross-technology deal with Silicon Storage Technology (SST). Acer will manufacture SST products and SST will license its patented SuperFlash technology to Acer. Manufacturing is not scheduled to begin until year 2000. SST will get royalty payments from Acer, who in return will be grateful for the revenue injection, the announcement coming only days after Acer said it was postponing its planned share issue because of "unfavourable economic conditions." Acer’s semiconductor business was highlighted as a major drain on Acer’s resources. Stan Shih, Acer chairman and CEO, said: "We expect to benefit from this strategic relationship both as a supplier of wafer to SST and in our joint efforts to develop products that are uniquely suited for ACER's advanced system products." ®
Norway hacks off Net community
How liberal?
A 13-year-old boy from Inner Mongolia received a cuff around the ear and was grounded for a week after Chinese authorities discovered he had hacked his way into a private information network. Under Inner Mongolian law, the boy is too young to be prosecuted. But if he does it again after his next birthday -- after which time police will legally be able to nail the little urchin -- he should move to Norway, where a court has ruled that attempting to hack into a computer is not a crime. The decision -- which has left some industry pundits speechless -- is believed to be the first of its kind and arguably sets a troublesome legal precedent. The court ruled that if computers are hooked up to the Internet, their owners must expect that others may try to break into their systems. While the court said trying to hack wasn't a crime, it did conclude that breaking and entering did constitute a felony. As reported last month by The Register (see Norway legalises hacking), and just picked up by AP, hackers in Norway are now free to search the Internet to identify areas where security is weak before passing on the information for others to use illegally. ®
Ascend shareholders attempt to nix Lucent deal
Lawsuits filed to block merger
Four separate lawsuits were filed today with the Delaware Chancery Court. Their goal: to block the proposed $20 billion merger between Lucent and Ascend (see Lucent and Ascend tie the knot at last). The suits were filed by Ascend shareholders and allege that the company's directors, many of them named in the suits as defendants, have failed in their responsibility as directors to maximise the value of the shareholders' stakes in the company. ®
Redundancy hits Elcom staff
Reseller warns UK IT sector not recession-proof
Elcom Group has slashed around four per cent of UK jobs following the proposed merger of its two ecommerce subsidiaries. The US-owned reseller and ecommerce software house cut 24 jobs before today officially uniting subsidiaries elcom.com and Elcom Systems. The company said the redundancies were designed to put Elcom on a stronger footing, but that the merger was not the only reason behind the job cuts. Elcom, based in Slough, claimed there were no further plans for job cuts but warned that the UK IT industry would not escape from the overall slowing down of the economy. According to Elcom chairman Jim Rousou, the cuts were a one-off action needed to trim costs. Rousou told The Register: "This is something that will happen to a lot of companies. It’s good management in the face of recession to make sure our cost base is in shape for 1999." He said the cuts were not solely connected to today’s merger which combined Elcom Systems, the company’s existing e-commerce subsidiary, with elcom.com, formed in December. Through today’s merger elcom.com will take on the infrastructure and technology development areas of Elcom Systems. The e-commerce automated procurement technology will also be owned by elcom.com. ®
German ISDN vendor eyes UK channel
Falling prices set to boost UK market
AVM, the ISDN adapter and application software maker, is looking to recruit a network of resellers and a distributor to take on the UK market. The German-based manufacturer is looking for one broadline distributor and 30 to 40 dealers to sell AVM kit and hopes to have its revamped UK channel in place by the end of 1999. AVM is in talks with a handful of distributors, but refused to reveal names. It currently sells its high-end products solely through niche distributor SAS, based in Hounslow. AVM said ISDN line prices were no longer significantly higher in the UK than in the rest of Europe, so planned to set up the dealer programme in the second half of 1999. Kai Allais, AVM international sales director, said the introduction of BT Highway, which converts existing lines into ISDN, has made the service more affordable in the UK. He said AVM’s decision to set up the channel programme was a direct result of BT reducing prices in this area. In the first half of 1998, AVM had 15 per cent of the UK ISDN market. ®
Microsoft appeals Java injunction
Claims judge made technical errors in granting it
Microsoft has filed an appeal against the injunction granted against it in Sun's favour in November. In its filing today the company says that Judge Ronald White misapplied the law in granting the injunction. In court Microsoft had been arguing that its licensing agreement with Sun gave it the right to modify Java. Sun argued that Microsoft was in breach of its licence, and the judge came down on Sun's side on 17th November. Microsoft's initial response was to start shipping Sun code as well as its own, and give users the choice of which one they used. But it wasn't as simple as that. Last month Microsoft asked for clarification of the terms of the injunction (Earlier story), and there was the small matter of passing Sun's Java compliance tests. If the route Microsoft was taking in December was clearly in compliance with the terms of the injunction then Microsoft was in the clear, but it seems certain that this approach didn't go far enough. If Microsoft hadn't appealed, it would therefore have needed to perform more seriously radical surgery to its Java strategy within the 90 days the injunction allowed, i.e. by mid-February. The company is currently appealing on the basis that Judge White was applying copyright law when he should have been applying contract law, and that Sun did not show Microsoft wilfully violated the agreement. ® Complete Register trial coverage
We Are_ Austin Tech
WE ARE AUSTIN TECH tells the stories behind the people who have made Austin what it is today and those who are creating its tomorrow.
Founder & Managing Partner, Source Spring @mellieprice
Hometown: Austin
#00329 Mellie Price, Source Spring
Featured on November 20, 2012
Mellie Price has 20 years experience as a successful entrepreneur and executive. Most recently she founded Source Spring where she serves as Managing Partner. Source Spring has invested in over 20 early-stage organizations and is actively involved in the Austin business and non-profit communities.
Prior to Source Spring, Mellie founded Front Gate Solutions and Front Gate Tickets in 2003. She bootstrapped the company from her living room and in 2011 the system powered over $70M in ticket sales in 800+ cities across North America. Front Gate Tickets was acquired in September 2012.
Ms. Price’s experience also includes several senior executive roles: Senior Vice President at Human Code (1997), a leading software and application developer that was funded by Austin Ventures and sold to Sapient Corporation (NASDAQ: SAPE) in 2000. At Sapient, a top-tier business and technology consultancy, Mellie was a Vice President in the Media, Entertainment, and Communications divisions.
Price began her career as a designer, programmer, and system administrator when she launched Monsterbit (1993), one of the nation's first commercial web development and web hosting companies (acquired by Human Code in 1997). She was also a co-founder of Symbiot Security (2001), leaders in risk metrics for adaptive network security.
Mellie is also a founding investor and lead mentor for Capital Factory, a seed-stage mentoring program and co-working facility in Austin, TX. She holds a Bachelor of Science from the University of Texas at Austin and serves on the Board of Directors for Animal Trustees of Austin.
[Photo by René Lego Photography]
WHAT IS AUSTIN TO YOU PERSONALLY?
I have been in Austin 23 years, and I have stayed here because it's a phenomenal place to live and to work and to be an entrepreneur. I started my first company just a few years after I moved here. At this point, I have an extraordinary group of friends, chosen family, colleagues, people that really support me in the endeavor of being an entrepreneur, and I can't imagine doing this anyplace else.
ANY ADVICE FOR NEW ENTREPRENEURS?
One person told me early on that the truth is always right there, you just have to listen to it. When it comes to being an entrepreneur and you're starting something, I really encourage people to look for the truth and listen to it. Listen to your customers. They're giving you very real feedback. Be willing to hear it. If they're giving you the feedback that what you have doesn't matter to them, then don't be afraid to stop. Your employees, your partners are giving you real feedback. Listen to it, take action on it, and be selfless in your willingness to hear what's right before you.
WHAT ARE YOU ENTREPRENEURIAL STRENGTHS?
I'm really authentic in my relationships with people, try to be at least, and consequently, I love building teams. I take a lot of pride in assembling a team that's the right capability, personality for whatever the project that we're working on is.
TELL US ABOUT SOME OF YOUR STARTUPS...
I started my first company in 1993, called Monsterbit. Monsterbit was one of the first web development companies in Austin and in the nation. And we built websites for people that needed a voice. We felt like the Internet was a great voice.
And so we focused on arts and entertainment organizations, musicians, etc., and consequently kind of became the go-to developer for web development in the entertainment space.
So Monsterbit, from 1993 to 1997, did some really fun things. We built a lot of the first websites, the first South by Southwest website, the first Capitol Records website. We did a lot of the first Internet broadcasts. We just really explored what the Internet could do for the entertainment space.
Front Gate Tickets is one of the largest privately held ticketing companies in the nation. We focus on primary-market ticketing, which means we sell tickets directly for the venues, the festivals, and the promoters that are our clients, as opposed to secondary market ticketing, which is the StubHub type of environment.
I had a client, a web development client, who came and said that they were having problems with their ticketing vendor and that they needed to find a new system. They hired me as a consultant to go look at the alternatives out there. When I realized they were all antiquated and very expensive, I went back and I said, "Hey, if you'll give me six months and be my first client, I'll give you a piece of the company and we'll start a new ticketing competitor to Ticketmaster."
So that was October of 2002. Jessie Jack, my business partner and I, went into hiding in my living room, spent 5 months building the first system, and in February 2003, we sold our first ticket.
It was later that year that we landed our largest client, who became a strategic partnership, with C3 Presents, and we started selling tickets for the Austin City Limits Festival, Lollapalooza.
People were really friendly towards us in the beginning. They understood that we loved music. We came from a music culture. So by virtue of the competition in the market, we had a sandbox to start in that we might not have had in other communities.
WHAT'S THE BEST THING ABOUT DOING BUSINESS IN AUSTIN?
Community to me, Austin as a community I think benefits from a healthy entrepreneurial base. It's a great place to come together, and you share war stories and lessons learned. You share access to people, places, ideas, and resources to help get things done.
In the last 10 years, I feel like it's matured its processes for dealing with the growth that we've dealt with, and I think it's matured its economy. And you know, I'm a proponent of growth. I think you have to grow to get to the next place.
I don't want to keep Austin the same, necessarily, but I do hope that it can find that sweet spot where the growth still reflects the core of its character, which is that it's a pretty wonderful, funky place.
Hamakor
Hamakor's logo
Hamakor – Israeli Society for Free and Open Source Software (Hebrew: המקור – עמותה ישראלית לתוכנה חופשית ולקוד־מקור פתוח) is an Israeli non-profit organization dedicated to the advancement of free and open source software in Israel. It is usually just referred to as Hamakor.
Hamakor was founded in January 2003. Its primary purpose is the somewhat axiomatic charter of giving an official face to the decentralized Open Source community where such a face is needed.
Several members of the open-source community, most notably Gilad Ben-Yossef and Doron Ofek, pioneered the idea of forming an official, legally recognized organization. They wanted to counter difficulties with the non-open-source-related bodies and to address the inherently decentralized way open source is developed and advocated. Two main bodies expected to deal with organizations in that respect: the media, which is accustomed to having someone to call to get a comment, and the Knesset, where standing up in front of legislators requires answering the implicit question "who am I and why should you listen to me?". It was felt that, as individuals, the community's ability to make a difference suffered in comparison with an organized body.
The idea matured for some time. The final push to form the actual organization came from a desire by Gilad Ben-Yossef, circa July 2002, to see Revolution OS in Israel. It turned out that the only way to see the movie was to rent a cinema hall and to pay the distributor. Gilad decided to open the event to the public and to charge a small fee for entrance to cover costs. The event, labeled "August Penguin", took place on the first Friday of August, 2002. The event proved a financial success, in that an extra 2000 NIS were recorded after expenses. Gilad pledged to use that money to start the official organization. This also affected Hamakor's charter, adding to it the capacity to act as a recognized money-handler on behalf of the community for organizing events or any other monetary activity.
Meaning of the name
The name for Hamakor was suggested by Ira Abramov.[1] It was put forward after a series of Linux-related names had been proposed, as a name that on one hand is not specifically related to Linux (as opposed to free software at large), and yet does have a Linux-specific reference hidden in it.
The original name as submitted to the registrar of non-profits was "Hamakor – The Israeli Society for Free and Open Source Software". The registrar stroke down the "the", claiming Hamakor has no right to claim exclusivity. | 计算机 |
The choice: New server or existing one?
Yaevindusk (Ul'dah, CA) | Posts: 1,563 | Member Uncommon | January 2013 in Final Fantasy XIV: A Realm Reborn

If you are in a position of wondering what type of server to pick when the game finally launches, then some information regarding such may prove useful. Now, I'm not trying to convince anyone to pick one over the other, but rather simply call attention to something that many may forget in their thoughts for this.

Firstly, I'll say in the most dramatic way possible, that this relaunch is potentially history in the making. We've had games launch and relaunch with some minor revisions or a cash shop added to them, but A Realm Reborn is something different entirely; one of the few, if not the only one, that has completely scrapped its old engine and nearly everything from the last game and relaunched as practically an entirely NEW game in the same game world (one that in itself was recrafted).

Many are of the persuasion that starting on a fresh server will allow for them to see how the community and economy grows, and endure the hardships and lack of resources that follow. That memory years from that point where you say, I was on this server at the start. Indeed, it is perhaps the only way some of us desire to play the game: at the start, and without any past history to speak of.

But there is another case here that wasn't exactly present in many other circumstances that games have once new servers are opened. The fact that those who join an existing server will not only be a part of a helpful community that stayed together through thick and thin in the past, but they will experience what may never again happen in the history of MMORPGs. They will be a part of a server that has been completely transferred from one world that no longer exists, to another that has changes to every single system that was once known. From the hundreds of thousands of character files transferred, to the altered currency, and differences in how to acquire items and resources (we may even see new players teaching 1.0 players about some systems that seem simple to fresh players, but complicated to old as the old system wasn't like such).

New servers open almost every week spread across the thousands of MMOs, but this is the first time that we will see first hand how a community adapts and comes together to reevaluate everything they once knew, from the simplest quest systems to the storylines and even the economy (and the former lack of an auction house). This isn't just a patch; this isn't just a number to signify extra content. This is a completely different game, yet in it exists a people who have to learn it all over again, who come from that world and who have to come together as a community to reevaluate every single item, class, profession, dungeon, map, attack and even the simple things such as traveling.

This is an opportunity to see how this all happens that will be missed if there is just a blank slate to start on -- something that can be seen, again, on a frequent basis throughout all MMORPG games (and even in the future if they open new servers again). It is a great and exciting time, an experiment and something that could say to publishers, if successful, that such a thing can be done and be profitable. As dramatic as it sounds, we could be seeing the entire genre go down one road, or maintain its path of "if it's unsuccessful the first time, make it be F2P with a cash shop after some changes without fixing the problems".

So I guess the choice is really this: Do you want to say you were there with the start of a server...
or do you want to say you experienced what has rarely happened and may never happen again in the history of MMORPGs? New players will shape this game no matter if it's a new server or not, as they'll participate and become a part of the reformation and adaptation of a community that is practically just as foreign to the game as they are. They will be the driving factor of their level brackets, and the foundation of the new economy that will arise from the complete changes the past game and economy had endured during the transition.

In my opinion, the "easy" road is joining an existing server as you will have auction houses filled with every at-level item you could want at low prices (or even make friends who will make items for you, which the community is known for). But it is also the most fascinating decision, and something that could change one's thoughts regarding the future of MMOs in general and how the communities form and adapt therein. When the release date is upon us, what are your plans (subject to those who think they are going to play)?

When faced with strife or discontent, the true nature of a man is brought forth. It is then when we see the character of the individual. It is then we are able to tell if he is mature enough to grin and bear it, or subject his fellow man to his complaints and woes.
Teenage hacker sentenced to six years without Internet or computers
Cosmo the God, a 15-year-old UG Nazi hacker, was sentenced Wednesday to six years without Internet or access to a computer.
The sentencing took place in Long Beach, California. Cosmo pleaded guilty to a number of felonies including credit card fraud, bomb threats, online impersonation, and identity theft.
UG Nazi, the group Cosmo runs, started out in opposition to SOPA. Together with the group, Cosmo managed to take down websites like NASDAQ, CIA.gov, and UFC.com, among others. He also created custom techniques that gave him access to Amazon and PayPal accounts.
According to Wired's Mat Honan, the terms of Cosmo's probation, which lasts until he is 21, will be extremely difficult for the young hacker:
“He cannot use the internet without prior consent from his parole officer. Nor will he be allowed to use the Internet in an unsupervised manner, or for any purposes other than education-related ones. He is required to hand over all of his account logins and passwords. He must disclose in writing any devices that he has access to that have the capability to connect to a network. He is prohibited from having contact with any members or associates of UG Nazi or Anonymous, along with a specified list of other individuals.”
Jay Leiderman, a Los Angeles attorney with experience representing individuals allegedly part of Anonymous, also thinks the punishment is very extreme:
“Ostensibly they could have locked him up for three years straight and then released him on juvenile parole. But to keep someone off the Internet for six years — that one term seems unduly harsh. You’re talking about a really bright, gifted kid in terms of all things Internet. And at some point after getting on the right path he could do some really good things. I feel that monitored Internet access for six years is a bit on the hefty side. It could sideline his whole life–his career path, his art, his skills. At some level it’s like taking away Mozart’s piano.”
There's no doubt that for Cosmo, a kid who spends most of his days on the Internet, this sentence seems incredibly harsh. Since he's so gifted with hacking and computers, it would be a shame for him to lose his prowess over the next six years without a chance to redeem himself. It wouldn't be surprising, though, if he found a way to sneak online during his probation. However, that kind of action wouldn't exactly be advisable. It's clear the FBI is taking his offenses very seriously, and a violation of probation would only fan the flames.
Do you think the sentencing was harsh or appropriate punishment for Cosmo's misdeeds?
The AMD Radeon HD 7990 6GB dual-GPU behemoth is finally here. We've been talking about the AMD Radeon HD 7990 for years, so to finally see AMD release a card is exhilarating and saddening at the same time. This card is the culmination of years of work and likely marks the pinnacle of the AMD Radeon HD 7000 series of GPUs. Then again it's also likely the fastest AMD Radeon HD 7000 series card to ever be released by AMD. The AMD Radeon HD 7990 was designed from inception to be a juggernaut and to challenge any and all discrete graphics cards on the market. This video card has 4096 stream processors, 6GB of GDDR5 memory, 8.6 billion transistors, 576.0GB/s of memory bandwidth and 8.2 TFLOPS of compute power. This card doesn't command respect, it earns it! AMD also says that it is the world's fastest graphics card, so you shouldn't be shocked to learn that it features a nail-biting, second mortgage inducing price tag of $999.
The AMD Radeon HD 7990 goes by the codename Malta and features a pair of Tahiti XT2 GPU cores that operate at 1000MHz and 6GB of GDDR5 memory running at 6000MHz. These cores are not brand new, but they are the best GPU AMD has to offer and you are getting two of them on one board. This GPU is used on the AMD Radeon HD 7970 GHz Edition card, so you get AMD's 28nm Graphics Core Next (GCN) architecture with all the bells and whistles. AMD had to reduce the core clock by 50MHz versus the single GPU card to keep the power draw and temperatures down, but this is just a minor drop. AMD was able to leave the memory at 6000MHz (effective) though, which is why the card has an insane 576GB/s of memory bandwidth.
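If you want to sanity-check those headline numbers, the back-of-the-envelope math is straightforward. This assumes the usual Tahiti figures of a 384-bit memory interface and 2048 stream processors per GPU, which are not listed above but match AMD's published specifications:

6.0Gbps effective memory speed x 384-bit bus / 8 = 288GB/s per GPU, x 2 GPUs = 576GB/s of total memory bandwidth
2048 stream processors x 2 operations per clock x 1.0GHz = roughly 4.1 TFLOPS per GPU, x 2 GPUs = 8.2 TFLOPS of single-precision compute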
In order to keep the temperature and noise at bay, AMD developed a radical GPU cooler that features three cooling fans, and each GPU gets a massive heatsink with four U-shaped copper heatpipes! The AMD Radeon HD 7990 video cards are part of the Never Settle Reloaded game bundle promotion, so you get a ton of games with this card. The exact number is eight: BioShock Infinite, Tomb Raider, Crysis 3, Far Cry 3, Far Cry 3 Blood Dragon, Hitman Absolution, Sleeping Dogs and Deus Ex Human Revolution. These games will be included directly in the HD 7990 product box in any region where this bundle is available. AMD informed us that the full retail price (not including sales) on these game titles would equate to $334.94! Having eight good game titles coming with the card certainly helps take the bite out of the $999 video card.
The Radeon HD 7990 video card is 12 inches in length and is fairly hefty at 2 pounds and 11 ounces in weight. It looks pretty tough with the three cooling fans and glossy black fan shroud with red accents.
The fan shroud on the AMD Radeon HD 7990 6GB video card is open all the way around, so the hot air is spread out in pretty much all directions. With three fans, this is the only real way to do it due to airflow restrictions and trying to keep the noise levels down.
At each end of the card you can make out the four u-shaped copper heatpipes that help keep the Tahiti XT2 cores that run at 1GHz nice and cool.
For a foot-long graphics card having a backplate for reinforcement is a given, and here we see that it covers up pretty much everything.
AMD Eyefinity Technology on the AMD Radeon HD 7990 allows the option to expand a single monitor desktop all the way up to five displays at once on a single HD 7990 thanks to the dual-link DVI and four mini-DisplayPort video outputs. The nice thing about the rear bracket on this card is that half of it is open for ventilation!
Here we see the AMD Radeon HD 7990 6GB 'Malta' video card's pair of 8-pin PCI Express power connectors that are located along the top of the video card near the end of the PCB. AMD suggests a 1000 Watt (1kW) power supply when running this graphics card.
On a recent visit to AMD we were able to see some bare AMD Radeon HD 7990 video cards, and this is an image of what the front of the PCB of this foot-long card looks like. You can clearly see the two Tahiti XT2 GPUs and even the PLX PCIe bridge chip that pairs the two together.
AMD is using a PLX PEX8747 PCI-Express 3.0 48-lane bridge chip that is capable of 96GB/s of inter-GPU bandwidth to keep data flowing between the processors and to the motherboard's PCIe 3.0 x16 slot.
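That 96GB/s figure also works out as rough math: assuming PCI-Express 3.0's roughly 1GB/s of usable bandwidth per lane in each direction, 48 lanes x ~1GB/s x 2 directions comes to approximately 96GB/s of aggregate bandwidth through the PLX bridge.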
Not too much is going on underneath the backplate, but you can see half of the 6GB of Hynix branded GDDR5 memory ICs.
Now we can get to testing since we know what the AMD Radeon HD 7990 is and more about the Gigabyte and Sapphire retail cards!
Test System
Before we look at the numbers, let's take a brief look at the test system that was used. All testing was done using a fresh install of Windows 7 Ultimate 64-bit and benchmarks were completed on the desktop with no other software programs running.
Video Cards & Drivers used for testing: AMD Catalyst 13.2 Beta 6 - ASUS ARES II (Radeon 7990)
AMD Catalyst 13.5 Beta 2 - AMD Radeon HD 7990
NVIDIA GeForce 313.96 - NVIDIA GeForce GTX 690
NVIDIA GeForce 314.07 - NVIDIA GeForce GTX Titan
NVIDIA GeForce 314.09 - ASUS GeForce GTX 680
Intel X79/LGA2011 Platform
The Intel X79 platform that we used to test all of the video cards was running the ASUS P9X79 Deluxe motherboard with BIOS 0305 that came out on 12/25/2012. The Corsair Vengeance 16GB 1866MHz quad channel memory kit was set to 1866MHz with 1.5v and 9-10-9-27 1T memory timings. The OCZ Vertex 3 240GB SSD was run with firmware version 2.25.
The Intel X79 Test Platform
Component: Brand/Model
Processor: Intel Core i7-3960X
Memory: 16GB Corsair 1866MHz
Solid-State Drive: OCZ Vertex 3 240GB
CPU Cooler: Intel RTS2011LC
AMD Radeon HD 7990 'Malta' Video Card GPU-Z Information:
Batman: Arkham City is a 2011 action-adventure video game developed by Rocksteady Studios. It is the sequel to the 2009 video game Batman: Arkham Asylum, based on the DC Comics superhero Batman. The game was released by Warner Bros. Interactive Entertainment for the PlayStation 3, Xbox 360 and Microsoft Windows. The PC and Onlive version was released on November 22, 2011.
Batman: Arkham City uses the Unreal Engine 3 game engine with PhysX. For benchmark testing of Batman: Arkham City we disabled PhysX to keep it fair and ran the game in DirectX 11 mode with 8x MSAA enabled and all the image quality features cranked up. You can see all of the exact settings in the screen captures above.
Benchmark Results: The NVIDIA GeForce GTX Titan and GeForce GTX 690 were faster at 1920x1080, but at 2560x1600 the AMD Radeon HD 7990 was able to pull ahead of the GeForce GTX 690 by 5 FPS or 5.6%.

Battlefield 3
Battlefield 3 (BF3) is a first-person shooter video game developed by EA Digital Illusions CE and published by Electronic Arts. The game was released in North America on October 25, 2011 and in Europe on October 28, 2011. It does not support versions of Windows prior to Windows Vista as the game only supports DirectX 10 and 11. It is a direct sequel to 2005's Battlefield 2, and the eleventh installment in the Battlefield franchise. The game sold 5 million copies in its first week of release and the PC download is exclusive to EA's Origin platform, through which PC users also authenticate when connecting to the game.
Battlefield 3 debuts the new Frostbite 2 engine. This updated Frostbite engine can realistically portray the destruction of buildings and scenery to a greater extent than previous versions. Unlike previous iterations, the new version can also support dense urban areas. Battlefield 3 uses a new type of character animation technology called ANT. ANT technology is used in EA Sports games, such as FIFA, but for Battlefield 3 it has been adapted to create a more realistic soldier, with the ability to transition into cover and turn the head before the body.
Benchmark Results: The NVIDIA GeForce GTX 690 and the AMD Radeon HD 7990 were performing very close in Battlefield 3 at 1920x1080 and 2560x1600. The ASUS ARES II 6GB card was still the card to beat though!
Borderlands 2
Borderlands 2 is a space western first-person role-playing shooter video game developed by Gearbox Software and published by 2K Games. It is the sequel to 2009's Borderlands and was released for the Microsoft Windows, PlayStation 3 and Xbox 360 platforms on September 18, 2012 in North America.
Borderlands 2 runs on a heavily modified version of Epic Games' Unreal Engine 3. We tested Borderlands 2 with vSync and depth of field disabled. We increased the general image quality settings and turned on 16x AF. PhysX effects were set to low to keep things fair as possible between AMD and NVIDIA cards. FXAA was enabled.
Benchmark Results: The NVIDIA GeForce GTX 690 led the pack in Borderlands 2, but the differences between it and the ASUS ARES II and AMD Radeon HD 7990 were super close.
Dirt: Showdown
Dirt: Showdown is a video game published and developed by Codemasters for Microsoft Windows, Xbox 360 and PlayStation 3. It was released in May 2012 in Europe and in June in North America. It is part of the Colin McRae Rally game series.
Dirt: Showdown removes several of the gameplay modes featured in Dirt 3, and introduces new ones. Gameplay modes can be classified as Racing, Demolition, Hoonigan or Party. We ran the built-in benchmark at Ultra settings to get a true feel of what this engine has to offer!
It is very important to note that Global Illumination and Advanced Lighting carry massive performance penalties when enabled, something not seen in other titles in the Dirt series. We disabled these settings.
Benchmark Results: The AMD Radeon HD 7990 pulled ahead of the ASUS ARES II by just 1-2 FPS at both resolutions. The NVIDIA GeForce GTX 690 and GeForce GTX Titan weren't too far behind though, and both were over 100FPS at 2560x1600 with everything cranked up. All of the cards were able to run this title smoothly with no jitters or issues.
Far Cry 3
Far Cry 3 is an open world first-person shooter video game developed by Ubisoft Montreal and published by Ubisoft for Microsoft Windows, Xbox 360 and PlayStation 3. It is the sequel to 2008's Far Cry 2. The game was released on December 4th, 2012 for North America. Far Cry 3 is set on a tropical island found somewhere at the intersection of the Indian and Pacific Oceans. After a vacation goes awry, player character Jason Brody has to save his kidnapped friends and escape from the islands and their unhinged inhabitants.
Far Cry 3 uses the Dunia Engine 2 game engine with Havok physics. The graphics are excellent and the game really pushes the limits of what one can expect from mainstream graphics cards. We set the game to 8x MSAA Anti-Aliasing and ultra quality settings.
Benchmark Results: The NVIDIA GeForce GTX 690 was found to be 2FPS slower than the AMD Radeon HD 7990 at 2560x1600 in Far Cry 3. Again, these two $999 video cards are pretty damn close! The ASUS ARES II 6GB card with its 100MHz faster core clock was able to lead the AMD Radeon HD 7990.
Metro 2033
Metro 2033 is an action-oriented video game with a combination of survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in the Ukraine. The game is played from the perspective of a character named Artyom. The story takes place in post-apocalyptic Moscow, mostly inside the metro station where the player's character was raised (he was born before the war, in an unharmed city), but occasionally the player has to go above ground on certain missions and scavenge for valuables.
This is another extremely demanding game. Image quality settings were raised to 'Very High' quality with 4x AA and 16x AF. We turned off PhysX and DOF (Depth of Field) for benchmarking.
Benchmark Results: Metro 2033 had the ASUS ARES II up on top again, followed by the AMD Radeon HD 7990 and then the NVIDIA GeForce GTX 690. There was a 22% performance advantage to running the AMD Radeon HD 7990 over the NVIDIA GeForce GTX 690 in this benchmark at 2560x1600.
Sleeping Dogs
Sleeping Dogs is a 2012 open world action-adventure video game developed by United Front Games in conjunction with Square Enix London Studios and published by Square Enix. The game was released on August 14, 2012, for Microsoft Windows. The game uses the Havok physics engine.
We used the Adrenaline Sleeping Dogs Benchmark tool to benchmark this game title to make sure the benchmarking was consistent. We tested with 'Ultra' quality setting at 1920x1080 and 2560x1600 resolutions.
Benchmark Results: With 'ultra' image quality settings the AMD Radeon HD 7990 was found to be 8.4 FPS, or 22%, faster than the NVIDIA GeForce GTX 690 at 2560x1600.
3DMark 11
3DMark 11 is the latest version of the world’s most popular benchmark for measuring the 3D graphics performance of gaming PCs. 3DMark 11 uses a native DirectX 11 engine designed to make extensive use of all the new features in DirectX 11, including tessellation, compute shaders and multi-threading.
We ran 3DMark11 with both the performance and extreme presets to see how our hardware will run.
3DMark11 Performance Benchmark Results:
Benchmark Results: The ASUS ARES II scored P16542 3DMarks, the NVIDIA GeForce GTX 690 scored P15520 3DMarks and the AMD Radeon HD 7990 scored P15520 3DMarks in 3DMark11 with the performance preset. Not bad scores from the $999 and above graphics cards!
3DMark11 Extreme Benchmark Results:
Benchmark Results: When Futuremark 3DMark11 is run with the extreme settings the performance scaling between the test cards is almost linear, which is pretty wild. The AMD Radeon HD 7990 6GB video card was able to score X5839 on 3DMark11 with the extreme preset!
3DMark Fire Strike Benchmark Results - For high performance gaming PCs
Use Fire Strike to test the performance of dedicated gaming PCs, or use the Fire Strike Extreme preset for high-end systems with multiple GPUs. Fire Strike uses a multi-threaded DirectX 11 engine to test DirectX 11 hardware.
Fire Strike Benchmark Results:
Benchmark Results: 3DMark Fire Strike shows the AMD Radeon HD 7990 to be ahead of the NVIDIA GeForce GTX 690 by a fairly large amount, so there is a big difference in results between the two 3DMark versions. 3DMark Fire Strike shows the AMD Radeon HD 7990 to be about 19% faster than the NVIDIA GeForce GTX 690.
Fire Strike Extreme:
Benchmark Results: 3DMark Fire Strike Extreme is a very tough benchmark to run and showed the AMD Radeon HD 7990 to be 17% faster than the NVIDIA GeForce GTX 690.
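It is worth spelling out how these percentages relate to raw frame rates (our own arithmetic from the numbers already quoted, not additional test data): in the Sleeping Dogs result above, an 8.4 FPS gap was called 22%, which implies the GeForce GTX 690 was averaging roughly 8.4 / 0.22 = about 38 FPS at 2560x1600, with the Radeon HD 7990 at about 46-47 FPS. Keeping that scale in mind makes the 17-19% Fire Strike deltas easier to interpret.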
For testing power consumption, we took our test system and plugged it into a Kill-A-Watt power meter. For idle numbers, we allowed the system to idle on the desktop for 15 minutes and took the reading. For load numbers we measured the peak wattage used by the system while running the OpenGL benchmark FurMark 1.10.6 at 1024x768 resolution in full screen mode. We also ran four game titles at 1920x1080, recorded the highest Wattage seen on the meter in each title, and averaged those peak readings for the gaming results.
Power Consumption Results: When it comes to power use the AMD Radeon HD 7990 6GB card did very well at idle thanks to AMD ZeroCore Power Technology. AMD ZeroCore Power Technology is designed to intelligently manage GPU power consumption in response to certain GPU load conditions by clock gating, power gating, memory compression, and a host of other power saving tricks. It also raises and lowers clock speeds via AMD PowerTune and takes into account application use, system temperature and/or user system configuration. This is one of the reasons that all these high-end graphics cards have such good power numbers at idle! At full load we noted that the entire system at the wall was pulling 546 Watts in game titles on average, which is about 100 Watts more than the NVIDIA GeForce GTX 690. The AMD Radeon HD 7990 6GB was more efficient than the ASUS ARES II as it used nearly 120 Watts less power when gaming. The ASUS ARES II, AMD Radeon HD 7990 and NVIDIA GeForce GTX 690 are all dual-GPU cards powered by flagship GPUs, so it shouldn't come as a shock to see them pulling 450W and above when gaming. Just be sure to have a beefy 850W or higher PSU for your system with one of these cards.
Temperature & Noise Testing
Temperatures are important to enthusiasts and gamers, so we took a bit of time and did some temperature testing on the AMD Radeon HD 7990 6GB video card.
AMD Radeon HD 7990 6GB Idle Temperature: The AMD Radeon HD 7990 video card had an idle temperature of 32.0C in a room that was 22.0C (72F). Not a bad temperature considering this card has 8.6 billion transistors and 4096 cores! Both cores had nearly identical temperatures, so we'll just focus on one core to simplify things.
AMD Radeon HD 7990 6GB in Furmark: With Furmark fired up we saw the temperature reach 75C, but notice that the GPU started throttling. The average GPU core clock was around 700MHz, but it was bouncing between 500MHz and 1000MHz constantly.
AMD Radeon HD 7990 6GB Temps in Games: In games we found the AMD Radeon HD 7990 6GB graphics card would still reach 75C, but it would do so at 1000MHz core and 1500MHz memory and stay there. Note that we used 3206MB of the frame buffer when gaming at 2560x1600, so we put a good chunk of that frame buffer to good use. Fan speed was right around 2500RPM when running Furmark and in game titles.
We tossed the temperature results up in a chart and, as you can see, the AMD Radeon HD 7990 did pretty well when it comes to thermal performance, and we have absolutely nothing but good things to say when it comes to temperatures and cooling performance.
Sound Testing
For sound testing we use an Extech sound level meter with ±1.5dB accuracy that meets Type 2 standards. This meter ranges from 35dB to 90dB on the low measurement range, which is perfect for us as our test room usually averages around 36dB. We measure the sound level two inches above the corner of the motherboard with 'A' frequency weighting. The microphone wind cover is used to make sure no wind is blowing across the microphone, which would seriously throw off the data.
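As a point of reference for reading the decibel numbers that follow (a general note on the math, not a claim about this particular meter beyond its published spec): sound level in dB is 10·log10 of a power ratio, so the meter's ±1.5dB accuracy corresponds to roughly a ±40% uncertainty in sound power (10^0.15 is about 1.41), and a 3dB difference between two cards means one is radiating about twice the acoustic power of the other.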
When it comes to noise levels the AMD Radeon HD 7990 was pretty quiet at idle, but was fairly loud at full load. Our results directly contradict what AMD showed internally. AMD also used Furmark, but this really isn't a fair comparison against NVIDIA cards as the Radeon HD 7990 is heavily throttled, as we just showed you moments ago. We'll replace the battery in our sound meter to double check our scores, but the way we measure sound and our ears tell us that at full load the Radeon HD 7990 is a tad louder than the GeForce GTX 690.
Overclocking The 7990
To take a quick look at overclocking we fired up AMD Catalyst Control Center and used AMD OverDrive to overclock both GPUs on the mighty AMD Radeon HD 7990 6GB 'Malta' video card.
The AMD Radeon HD 7990 6GB comes clocked at 1000MHz on the core and 1500MHz on the memory. You can go up to 1100MHz on the core and 1575MHz on the memory in AMD OverDrive.
We were easily able to overclock the Radeon HD 7990 to 1100MHz on the core and 1575MHz on the memory. The card was rock solid in games and we tried the power control settings at various percentages with no issues.
Let's take a look at some Futuremark 3DMark11 on the performance preset to see how the overclock helped performance.
AMD Radeon HD 7990 at 1000MHz core and 1500MHz memory:
AMD Radeon HD 7990 at 1100MHz core and 1575MHz memory:
Something isn't right here! The score went from P15225 to P15242 with a 100MHz overclock on the core? We let AMD know about this and they had no answer.
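Some quick math on that result (our own arithmetic on the scores reported above): 1100MHz is a 10% core overclock over the stock 1000MHz, yet the score moved from P15225 to P15242 - a gain of just 17 points, or about 0.1%. Even allowing for the partly CPU-bound nature of 3DMark11's performance preset, a 10% clock increase returning a 0.1% score increase strongly suggests the higher clocks were either not being applied during the run or were being throttled back almost immediately.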
Final Thoughts and Conclusions
The AMD Radeon HD 7990 6GB 'Malta' video card was found to be a beast in every sense. The Mediterranean island of Malta is the largest of the three major islands that constitute the Maltese archipelago. Here we have the AMD Radeon HD 7990 being the largest and fastest of the three cards that make up the AMD Radeon HD 7900 series, so the internal codename makes sense. This card also caps off the AMD Radeon HD 7000 series, and it was able to showcase the power of the Tahiti XT2 GPU cores and AMD's 28nm GCN technology.
When it comes to performance you really need a 30-inch display running 2560x1600 or a multi-panel AMD Eyefinity setup to get the most out of the Radeon HD 7990. We don't have a 4K display available to test on, but AMD says that the Radeon HD 7990 video card is ready for Ultra HD 4K gaming and provided us with the performance slide above. It shows that the Radeon HD 7990 does a tad better than the NVIDIA GeForce GTX 690 in 11 game titles. Our performance testing showed that the AMD Radeon HD 7990 beat the NVIDIA GeForce GTX 690 more times than not at 2560x1600. The performance of the AMD Radeon HD 7990 was solid, but we were expecting a bit more from this card for the price. Sure, it beats the NVIDIA GeForce GTX 690 more times than not, but NVIDIA released that card back on May 2nd, 2012. It took AMD basically a year to release a dual-GPU card that could take the performance crown. While we are on the topic of dates, did you know that this is AMD's first new dual-GPU card since the AMD Radeon HD 6990 was launched in March 2011? This card has been in the making for a long time and we were told that it was coming over a year ago.
As with all high-end graphics cards, the AMD Radeon HD 7990 costs an arm and a leg. This card will set you back $999, but it comes with eight game titles as part of the bundle that have a full retail value of over $325. This certainly helps offset the price of this card and makes a compelling case for purchasing it over the NVIDIA GeForce GTX 690 that it was able to outperform more times than not. We should also mention that we completed our testing with CATALYST 13.5 Beta 2 drivers, but AMD also sent over some preview drivers that really help with frame latencies and frame pacing performance. A number of sites are showing weakness when it comes to frame latencies and micro stutters on AMD cards, but AMD has taken action and has some software fixes in the works. Legit Reviews hasn't gotten into frame capturing for our video card reviews just yet because we wanted to see how things play out. AMD appears to have a software fix for now and it is safe to assume that future GPUs that haven't taped out yet will have some new technologies to avoid the stutters. NVIDIA is clearly ahead of the game in that respect, but we expect AMD to have a solution before long. Is it worth a $5,000-$10,000 investment to capture frames and report on something that has a fix in the works? We'll let you tell us!
Legit Bottom Line: The AMD Radeon HD 7990 6GB video card is a monster and takes the dual-GPU performance crown away from NVIDIA and the GeForce GTX 690, but after waiting all this time is it enough?
Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles.
While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historian do.
In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately1,000 paintings of 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.
For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster.
The experiment was performed by extracting 4,027 numerical image context descriptors – numbers that reflect the content of the image such as texture, color, and shapes in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities.
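To make the idea of numerical image descriptors a little more concrete, here is a rough, self-contained C sketch of the general approach. It is not the researchers' actual code (which used thousands of descriptors and far more sophisticated statistics), and every function and variable name in it is our own invention: it simply computes a handful of simple descriptors (mean brightness, contrast, a crude edge measure and a coarse histogram) for grayscale images and builds a Euclidean distance matrix, which is the kind of raw material a clustering step could turn into a "similarity network" of painters.

```c
#include <math.h>
#include <stdio.h>

#define W 64            /* toy image width  */
#define H 64            /* toy image height */
#define NDESC 12        /* 4 global stats + 8 histogram bins */

/* Compute a small descriptor vector for one grayscale image (values 0-255). */
static void describe(unsigned char img[H][W], double d[NDESC])
{
    double sum = 0.0, sumsq = 0.0, edges = 0.0;
    double hist[8] = {0};

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            double v = img[y][x];
            sum += v;
            sumsq += v * v;
            hist[img[y][x] / 32] += 1.0;            /* 8-bin brightness histogram */
            if (x + 1 < W)                           /* crude horizontal edge strength */
                edges += fabs(v - (double)img[y][x + 1]);
        }
    }

    double n = (double)(W * H);
    double mean = sum / n;
    d[0] = mean;                                     /* average brightness        */
    d[1] = sqrt(sumsq / n - mean * mean);            /* contrast (std deviation)  */
    d[2] = edges / n;                                /* texture/edge measure      */
    d[3] = hist[7] / n;                              /* fraction of bright pixels */
    for (int b = 0; b < 8; b++)
        d[4 + b] = hist[b] / n;                      /* normalized histogram      */
}

/* Euclidean distance between two descriptor vectors: smaller = more similar. */
static double distance(const double a[NDESC], const double b[NDESC])
{
    double s = 0.0;
    for (int i = 0; i < NDESC; i++)
        s += (a[i] - b[i]) * (a[i] - b[i]);
    return sqrt(s);
}

int main(void)
{
    /* Placeholder images; a real experiment would load ~1,000 digitized paintings. */
    static unsigned char paintings[3][H][W];
    double desc[3][NDESC];

    for (int p = 0; p < 3; p++)
        describe(paintings[p], desc[p]);

    /* Pairwise distance matrix - the input a clustering step would consume. */
    for (int i = 0; i < 3; i++)
        for (int j = i + 1; j < 3; j++)
            printf("distance(painting %d, painting %d) = %.3f\n",
                   i, j, distance(desc[i], desc[j]));
    return 0;
}
```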
According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism.
“This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said.
Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998.
Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists.
She also has used the computer program as a consultant to help a client identify bacteria in clinical samples.
“The program has other applications, but you have to know what you are looking for,” she said.
Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. “this is just the tip of the iceberg,” she said.
At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said.
She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology.
“Everyone has the ability to apply themselves in different areas,” she said. | 计算机 |
By Karl Hodge
Upgrades How this once essential component became a niche product
The growth of USB audio
Where did all the soundcards go?
The difference between the graphics and sound markets is at its most pronounced in the field of I/O devices. Sure, there are discrete devices and specialist cards for getting sound into PCs, but they're not as mainstream as the market for USB audio. "We've seen a huge insurgence of people wanting USB," says Steve Erickson, "People want 5.1, for instance, but they can't get it out of their laptop, or they want to have an optical connection, or they want an extra line control or an extra headphone jack. Sales for USB headsets in gaming are definitely growing, too". "That plays into the type of games that are really popular today, the MMO type stuff, World of Warcraft, the FPS high-end graphics stuff these are more of a headphone experience". In the music sector too, USB break-out boxes, audio interfaces and external sound modules continue to be popular. Creative Labs identify home recording hobbyists as an important sector driving demand for these devices. "That's really a connectivity thing. It's quarter inch versus eighth inch, digital I/O versus analogue. That we do see more and more growth in". The same can be said of more casual users; people who might want to route their PC through their hi-fi or digitise their collection of 80s vinyl records. USB connectivity offers them a convenient way to get audio in and out of their PCs. Unlike gamers, many of them are more reluctant to build a special rig dedicated to a single purpose. USB can offer ports that onboard audio just can't. Reading between the lines, we're getting a picture here of a market that was once in thrall to the soundcard in the same way it is to add-on graphics, but that is now fragmenting. It's breaking up into power users and mainstream punters; pros and hobbyists. The soundcard is becoming specialist kit. Beyond stereo This wasn't supposed to happen. The soundcard was supposed to get more sophisticated at the high end, with volume producers churning out basic boards at the bottom end. Our PCs were supposed to become media centres, serving video and multi-channel audio to every room in the house. While this is still the vision of the industry (just look at those new Windows 7 adverts) it isn't something consumers have been adopting. We still have discrete PCs for different jobs. And, although 5.1 and 7.1 are built into many PCs, gamers are in broad agreement that 3D stereo is all they need. "And I would say that for gaming that's not necessarily a bad thing," says Steve Erickson, "We're able to do stuff with multi-channel virtualisation that's really amazing. We can make the brain think that sound is coming from behind it, just with stereo headphones". As for anything higher than 5.1, Erickson is sceptical about how useful that is. "It's funny, there's always the feature thing you have to do – 7.1 has always been that. I think Creative, Logitech and a couple of others had a 7.1 speaker system at one time, but no one's really sold one for the last two or three years". For Erickson, who admits that home cinema has "stayed flat" for Creative Labs, the motivation for offering 7.1 and higher multi-channel audio isn't a quest for the ultimate sound experience: "It doesn't really cost anything, you just add an extra output," says Erickson. "In reality, users gravitate towards 5.1, whether it's for watching movies etc". Altec Lansing, with a great deal of continued investment in surround sound and multi-speaker audio, have a different take: "Sometimes high volume, late night gaming is best enjoyed 'privately'. 
However, high SPL (Sound Pressure Level) audio is more than just an experience for the ears," says Adrian Bedggood, "The entire body experiences sound, and the ear itself uses many cues from the room, reflections and the body to shape the sonic experience. While headphones do provide a terrific experience of isolation, we think projected sound at all the SPL levels with a loudspeaker provides the ultimate in 'immersion'."
Market demand
Adrian says there's still a lot of innovation to come in the surround sound market, citing height channels as one particular feature that gamers should be embracing: "There are now a wide variety of surround schemes with as many as 9.2 channels."
Overall, the industry experts admit that the demand for soundcard upgrades is on the wane. They just disagree about the reasons why. Creative Labs are still leaders in the sector, with most of the gamers we spoke to sporting Creative kit in their systems; the majority choosing one of the company's Vista and Windows 7 compatible X-Fi models. "I would say the demand for discrete soundcards has gone down, but USB solutions have climbed at about the same rate," says Steve Erickson. "The overall number of units is about the same – it's just that the mix has changed."
Integrated chip maker IDT would like to lay a greater claim to the decline in the soundcard market, though. It is, they say, because onboard sound is now just as good. "The demand for soundcards has fallen because the quality of integrated solutions has increased dramatically," says IDT Vice President Pietro Polidori. "This hasn't happened with graphics, as there is no sign of integrated graphics providing the performance of add-in cards, but for PC audio it's game over: there's no need for a separate card."
Whichever camp you side with, a cursory look at online stores tells you that there are fewer soundcards available than there were five years ago. Fewer manufacturers too. But that doesn't mean PC audio is dead. Our experts suggest that there are now more types of user, all with different demands. Gamers and home cinema enthusiasts, music fans and home recording hobbyists. These, in turn, are subdivided according to spending power and enthusiasm into smaller and smaller groups, each with their own tailored part of the market. When a sector fragments to this extent, two kinds of developer survive: volume producers and high-end, niche manufacturers. Companies like Creative Labs and Plantronics are able to dominate the market with their size and financial clout. As for everyone else? They're around to mop up the gravy.
First published in PCFormat Issue 235
Notes on the status of X.Org technologies
Jim Gettys has kindly contributed the attached long-running draft document describing the status of various X.Org technologies. This draft is from October 2008, and much of its content is from 2004, so it is a bit old. However, it still contains much useful information. If someone wants to take over its maintenance and put it up on the wiki, let me (bart at cs dot pdx dot edu) know. No fair making fun of it or critiquing it; it was very kindly donated and is known to need work.
roadmap-2-clean.pdf
Open Source Desktop Technology Road Map
Jim Gettys, Version 2.0, October 23, 2008 Abstract
Navigating the myriad technologies that comprise the desktop (and palmtop) on open source systems is daunting to say the least, for newcomers of all sorts, open source developers, developers in companies using the technologies internally, and commercial ISVs, and even difficult to navigate for those immersed in open source systems on a day to day basis. This document attempts to give a sketch of the names and relationships of these technologies and projects, and a glimpse into their status and development. Some technologies have never proved themselves, and/or have been rendered obsolete by later development and are available primarily for legacy code. This document attempts to clarify much of this natural evolution and market selection. Ultimately, some technologies become so rare as to enable their interment into the strata of software history, and it can be important to know which technologies are in such a fossil state, or stuck in the Labrea Tar Pits and possibly doomed to extinction, if not yet dead. A few may manage to struggle their way out of the tar to safety on dry land again. Some indication of the licensing terms is made. For commercial software, make sure you understand the differences between licenses. For example, GPL and LGPL'ed libraries have very different consequences; one requires that source code of applications linked against them be made available, and the other does not require such disclosure. It is also possible for software to be available simultaneously under multiple licenses, sometimes allowing the implementer to choose which applies. See the Open Source Initiative for an explanation of these licenses. Where known, approximate dates of expected completion are included, but there is no guarantees made. If you would like to ensure the timely completion of technologies under development, you should work with the community to determine if further resources are needed, and if so, to contribute the talent, resources and funding to do so. Note that this document is still a bit weak in futures and I plan further work in this area. As in a map of a physical area, having information about current areas and how they interrelate was the first goal. Acknowledgments
This document is the work primarily of its author, and the opinions here are my own; blame me for any errors and biases. Please let me know of any inaccuracies, and in particular, pointers to road maps of projects mentioned here. I would much prefer to have good pointers to similar project road maps than my current (mis) understanding of their time lines and development state, which is, of course, in a constant state of flux. Similarly, if you believe I have overlooked some key piece of open source desktop middleware technology (as opposed to end user applications which are too numerous to list), please let me know. My thanks to Keith Packard, Jamey Sharp, Kevin Whitwell, Waldo Bastian, and Eric Raymond, Zenaan Harkness, David Alan Gilbert, Maarten Stolte, Maarten Stolte, Kurt Pfeifle, Brenda J. Butler, Zenaan Harkness, Eero Tamminen, Brian Gallaway Sergey V. Oudaltsov, John Smirl, and Vincent for constructive comments and feedback on Version 1 of this document. Table of contents
Open Source Desktop Technology Road Map Abstract Acknowledgements Table of contents Introduction Specifications ICCCM Freedesktop specifications X Window System Key protocol extensions/libraries Xlib - basic X library 3D libraries Mesa - The 3D Graphics library Direct Rendering Infrastructure (DRI) XInputExtension SHAPE XSYNC XVideo DOUBLEBUFFER The X Resize and Rotate Extension (RandR) Security Record XTest Render Xft2 library Xinerama Xnest X extensions under active development Obsolete X extensions X libraries under active development X toolkits GTK+ toolkit Qt toolkit Other toolkits Moribund X toolkits Motif TK Other key libraries Fontconfig - font configuration library Freetype 2 - font rendering Cairo - vector graphics library Hardware Abstraction Layer (HAL) DBUS - message bus system XML libraries Pkgconfig Zeroconf Multimedia Multimedia frameworks Helix community aRts Gstreamer Mplayer VideoLAN Xine and Xinelib Audio Advance Linux Sound Architecture (ALSA) Audio servers aRtsd Enlightened Sound Daemon (ESD) Jack MAS Microsoft interoperability SAMBA File systems File formats WINE Winelib DOS emulation .Net and Mono Displaying Windows applications on Open Source systems X implementations for Windows Cygwin and Cygwin/X Cygwin/X Commercial X implementations Fonts Printing Postscript and PDF Common Unix Printing System (CUPS) - print spooling system Thin clients Linux Terminal Server Project (LTSP) Athena Computing Environment Java VNC Introduction
The most visible desktop projects are the KDE and Gnome desktop projects. These projects provide the basic toolkits, window managers, menu systems and control panels found in modern user interfaces along with many end user applications. It is important to note that the work of freedesktop.org is to ensure that applications and infrastructure can be shared between projects, and to enable this sharing in a way that end users do not know or care what environment these applications may be "native" to. In large part, this goal of freedesktop.org is being met, though there is more work to be done. The Gnome project's roadmap covers its next few releases. Other major applications projects, which themselves may be platforms on which other applications are being built include the Open Office project (Sun's ?StarOffice suite is based on ?OpenOffice), providing a entirely free office suite, and their plans can be found in their road map. Better integration with other applications on the desktop is high on that list; Open Office has used their own toolkit and needs better integration with Gnome and KDE. The Mozilla project is also of special mention, who have built a world class free web application suite supporting all the widespread Web technologies (e.g., CSS, Javascript, etc.), including browser, mail client, bug tracking system, and other technology, used not only in their applications but also by other applications in the open source desktop. Mozilla's road map covers both its recent history and current plans. Another implementation of web technologies underlies the KHTML Rendering engine of the KDE project and Apple in Mac OS X, and is now called webkit; it may be becoming a viable alternative to the Firefox gecko rendering engine. Firefox has the distinction of having seriously undermined Microsoft's control of the web; its 20% market share (along with the additional marketshare of webkit, most notably in Mac OSX Safari) has wrested back control to web standards from their proprietary technologies. Native plugins exist, often many, for most of the commonly used web datatypes (e.g., flash, ?RealPlayer, PDF). There are a few reasonably common datatypes for which there is no good native plugin available (fewer and fewer as the months go by). Windows plugins can often then be used via WINE. One of the interesting problems is in fact, too many plugins for a given datatype. Better user interfaces to invocation of plugins have helped ameliorate this problem in current desktops, and Linux distributions have matured and reduced the number of options presented to a naive user to a reasonable defaults to help with this embarrassment of riches. A few datatypes remain difficult, but great strides have been made since V1 of this document. The desktop applications themselves are far too numerous to begin to mention. A (large) subset of open source applications of all sorts numbering in the many thousands can be discovered on the Freshmeat web site, in addition to the KDE and Gnome desktop projects. All of these projects build on the technologies covered in this road map (and sometimes additionally run on Windows and Mac OS X, most particularly the X Window System, but attempting to provide a road map to those projects is outside of the scope of this document. Specifications
Historically, the X specifications were developed and ratified in the MIT X Consortium, and its successor organization, X.org. X.org has morphed successfully from an industry consortium to an organization in which individuals, both at a personal level and as part of work they do for their companies have voice, working as part of the larger freedesktop.org and free standards community. Current X.org releases form the core of the free desktop. As discussed below, the X Window System was designed to allow for extension, and many extensions as outlined above have been developed, deployed, and sometimes discarded over the years. Note that an API is just one binding to the specific protocol; there are and have been multiple such APIs and implementations at times to the same underlying set of protocols. Besides the APIs and protocols mentioned below, there are a set of other protocols and (sometimes multiple) implementations of APIs that are involved in the overall open source desktop. Most of these are primarily of interest to toolkit, window manager, and desktop environment programmers rather than directly to most application programmers. This section attempts to outline the most important of these, and their current status. ICCCM
The original "Inter-Client Communications Conventions Manual" outlines the original set of conventions required of applications (mostly implemented in toolkits rather than directly by applications) to "play well" in the modular environment of the X architecture, allowing for interchangable window managers, and other facilities. It was (mostly) sufficient to implement the CDE desktop, but insufficient for more modern environments. These (along with the EWMH (extended window manager hints) are built on top of the X11 core protocol using its general atom and property mechanism. Freedesktop specifications
Freedesktop.org was founded to foster the discussions between the Gnome and KDE desktop projects to extend the ICCCM in ways required for more modern environments. It now often hosts core desktop infrastructure projects (e.g., X, dbus, etc.). Areas needing work to ensure further interoperability of applications build in one toolkit framework to be fully usable in others has included drag-and drop, window manager extensions, desktop entry files that describe information about applications, application embedding, UTF-8 support, bookmark exchange, menus, mime database, desktop settings, to name a few. Descriptions of the status of these specifications along with the specifications themselves are available and I recommend you look there for more information. X Window System
The X Window System, Version 11, or X11, or most commonly called X, is the network transparent window system used on Linux, UNIX, and other platforms including Macintosh OS/X, and Microsoft Windows. It provides the basic infrastructure from which graphical user interfaces are built on Linux and UNIX. X11 was first released in 1988, and has an unrivaled reputation for stability; applications running on a MicroVAX of that era will interoperate against the latest X implementations across today's network, unchanged. This stability has been ensured by a careful, extensible protocol design framework, and attention to detail in the addition of new features. I gave a USENIX talk on open source software development using the X Window System history that may be of interest. New X extensions have been defined in recent years to bring X's original capabilities up to (and in some cases well beyond) the proprietary state of the art. Whenever possible, these programmer's APIs have been built to allow even downwards compatibility to ease deployment of modern applications to older X server implementations. A good example of this is the Xft2 library, which, while performing best on X implementations where the X Render extension is present, will in fact provide high quality anti-aliased text on old X servers. In some areas X still needs work; much of this work is underway as described below and in more detail elsewhere. In the X environment GUIs are built using Toolkit libraries, of which the most common at this date are Qt and GTK+. Motif based applications from the earlier generation of development on UNIX are now extremely rare (except as legacy applications inside and of corporate environments). A component of an X Window System based environment not found as an independent component in other window systems is the external "window manager", which allows users to control the size, location and decoration of application's windows on the screen. They are, in fact, applications like any other application in X11, though you can generally only run one window manager at a time. Window managers are, for the most part, interchangeable components, and the standards defined originally by the X Consortium such as the ICCCM, and its successor X.org, along with the new specifications developed on freedesktop.org govern the protocols between applications and window managers. Window managers in common use today include KDE's window manager, Compiz Fusion or Metacity used by Gnome, and many, many others. Those that have been kept up to date with the freedesktop.org specifications are generally interchangeable and a matter of personal taste, though both major desktop projects have window managers they prefer, and which may integrate best into that environment. Some of these (e.g., Compiz Fusion) provide amazing visual eye-candy if requested, though mercifully, its default behavior is now sane. Other components, such as panels, start buttons, file managers, and many basic applications are provided by the desktop systems. The largest and most well known of these projects are the Gnome desktop project, the KDE desktop project, and the CDE desktop previously used on UNIX systems. CDE is dead. A detailed road map of these projects is outside the scope of this document. The projects have a life of their own, and you are best consulting them as to their plans. They encompass many thousands of open source applications at this date. 
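To make the application-to-window-manager conventions mentioned above a bit more concrete, here is a small sketch of our own (not taken from any toolkit or specification document): it sets the EWMH _NET_WM_NAME property, the freedesktop.org successor to the ICCCM WM_NAME hint, using nothing but stock Xlib calls.

```c
#include <string.h>
#include <X11/Xlib.h>

/* Set the EWMH _NET_WM_NAME property (UTF-8 window title) on a window,
 * roughly the way a toolkit would when honoring the freedesktop.org
 * window manager hints. */
static void set_utf8_title(Display *dpy, Window win, const char *title)
{
    /* Atoms are interned names shared between clients and the window manager. */
    Atom net_wm_name = XInternAtom(dpy, "_NET_WM_NAME", False);
    Atom utf8_string = XInternAtom(dpy, "UTF8_STRING", False);

    XChangeProperty(dpy, win, net_wm_name, utf8_string, 8, PropModeReplace,
                    (const unsigned char *)title, (int)strlen(title));

    /* Older, ICCCM-only window managers still read WM_NAME, so set it too. */
    XStoreName(dpy, win, title);
}
```

A window manager that follows the specifications watches for properties like this one and redraws the title bar accordingly; toolkits such as GTK+ and Qt do exactly this kind of property juggling on the application's behalf, which is why most application programmers never touch these calls directly.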
There are multiple implementations of the X Window System which share code, both open source and provided by commercial vendors. The commonly deployed implementation on open source systems is currently provided by X.org which hosts its development here at freedesktop.org. XFree86, while still existing, has become a relic of the past, triggered by its license change and general disgust with its policies. There is much mythology of X's size; this is mostly an artefact of how memory usage is reported on systems (the entire frame buffer, off screen memory and any register space is reported against the X server's process's size, even if X itself is only consuming a megabyte or two of space itself). Similarly, some applications request X to save large amounts of pixels on their behalf, when other implementation approaches often easily avoid such memory usage. X is being successfully used on systems from IBM's Linux watch with 8 megabytes of compressed flash and 8 megabytes of RAM with a tiny 96×120 screen, to current PDAs like HP's iPAQ, to DMX based projector walls containing tens of millions of pixels. With recent work, the minimal X footprint (X server and cut down Xlib) is currently just over 1 megabyte of code (uncompressed), excluding toolkits that are typically much larger, and could be cut smaller. After all, X11 was developed on VAX 11/750s that had less than one MIP with 2 megabytes of RAM. Key protocol extensions/libraries
These protocol extensions generally come with a C language library, that is often just a wrapper around the protocol, but sometimes includes additional functionality. The Freedesktop.org X server and the XFree86 X server along with all base protocol libraries are MIT licensed, Commercial vendors of X technology may have additional restrictive licenses placed on their implementations as allowed by the MIT license. Xlib - basic X library
This library provides the basic protocol bindings and client side support for extensions, as well as a number of other facilities, some of which are useful, and some of which do not or have not seen serious use recently. The Motif toolkit uses more of these features than more modern toolkits such as GTK+ or Qt, which, for example, have found Xlib's facilities inadequate in a number of areas (e.g., internationalization). As the basic library for the X protocol, its API is very stable. Several sections of the Xlib API are seldom used in modern practice, either because the facilities never achieved widespread acceptance (as in the X Color management part of the API), or because modern toolkits have found that they needed more advanced facilities (as in the Locale section of the Xlib API), where the modern toolkits provide better facilities. A replacement for Xlib's protocol bindings called Xcb can offer better performance by making it easier to avoid round trip messages to the window system in some areas. It has recently been deployed underneath the Xlib bindings to allow a migration strategy for applications that use plug-ins. The Xlib interface remains exactly API compatible with the old implementation. Work can now begin to take advantage of this in toolkits. Xcb was also carefully designed for thread safety, which was very difficult indeed in the old Xlib implementation. Some work was underway to enable applications to be properly notified of connection failure with the X server (often seen when X is used over wireless networks, and sometimes over the wired internet) and allow for graceful shutdown. It was put on hold in favor of Xcb and would be nice to resurrect now that this work is maturing. This will enable the migration of running applications between X displays and movement of sessions. You would like to be able to go home and (securely) retrieve the applications running on your desktop at work. The GTK+ toolkit already has support for migration of applications, except for proper shutdown in the case of failure, and architecturally, this should be true for Qt as well. We hope that this will become usable during 2004, and widely deployed during 2005. 3D libraries
3D is provided in the open source environment by industry standard OpenGL. Both closed source commercial and open source implementations of OpenGL are available. With the increasing industry cooperation in providing documentation and programming resources, the open source implementations are rapidly becoming truly competitive, and may reach parity or exceed proprietary implementations during 2009 and 2010. A few vendors (e.g., Nvidia) still do not provide documents or resources for supporting their hardware and should be avoided whenever possible. Mesa - 3D graphics library
Mesa is an open source 3-D graphics library with an API which is very similar to that of OpenGL. Mesa is used as the core of many hardware OpenGL drivers for XFree86 within the DRI project. Software only implementations of Mesa are generally available even if hardware accelerated versions are not, but such implementations of Mesa will not be sufficient for more than very simple applications. GLX is the binding of OpenGL to the X protocol, to allow for network transparent OpenGL applications. Mesa has been a project for more than 10 years, and various parts of it are available under a number of open source licenses. Direct Rendering Infrastructure (DRI)
DRI is the direct rendering infrastructure for the X server for OpenGL direct rendering, and provides the device driver and coordination with the window system to allow 3D applications direct access to the display hardware. The DRI does not assume or require that the drivers be based on Mesa. Several non-Mesa, closed source drivers have used the DRI. The DRI provides direct access to graphics hardware in a safe and efficient manner. It includes changes to the X server, to several client libraries, and to the kernel. The first major use for the DRI is to create fast OpenGL implementations. It has been in use for a number of years, and is widely deployed with drivers for much of the common 3D hardware. DRI2 has started deployment. MPX
Peter Hutterer has done amazing work called "MPX", or Multi-Pointer X, enabling multiple input devices and multiple cursors in the X Window System. Part of this work has just been released in the X.org 7.5 release. XInputExtension
XInput provides support for "non-core" input devices, such as trackballs, dial boxes, tablets, etc. It provides adequate facilities for these devices, but work continues to extend XInput to support "hot-plug" input devices. Addition of new input devices may still require manual configuration of the X server, but probably not by the end of 2009. Work is also needed to aid use of new facilities provided by the base operating system (e.g., /dev/input) in Linux. This area needs some serious work. GTK+ application migration can already be demonstrated, and what is there is only adequate for session migration. Shared displays (e.g., a projector being simultaneously used by multiple users) bring to fore another issue: that of network transparent access to input devices. If I have an application I have migrated to a remote display, I may want my keyboard, mouse and other input devices to follow. While X applications like x2x help, this is really just a bandaid, and a more generic network input event mechanisms are needed, with good integration into the environment. You should be able to use devices anywhere in your environment. With hotplug a reality, building such a network environment is now possible and awaits eager hackers. Xinput V2 and a revision of the X keyboard extension are planned for the next year. SHAPE
The Shape extension provides non-rectangular windows in the X environment, and is universally deployed in X implementations. XSYNC
The X Synchronization Extension provides facilities for applications to synchronize with each other, and with real time. XSYNC is widely deployed and in its current form, very stable. Facilities to allow XSYNC to synchronize with video vertical retrace, audio sample clocks and other time bases are easy to add (and were the original intent of XSYNC), but while some work has been done to do this on Linux systems, it is not yet available in production X servers. It is also used by toolkits to synchronize repainting with the compositing manager. XVideo
The XVideo extension provides support for video overlays, and similar facilities desired for video playback. It is commonly available on displays with hardware support for video playback. It provides facilities for the scaling and conversion of YUV data. The better implementations no longer use chroma keying for display of video data, so that video can be composited as a first class citizen in the desktop.
While I was discussing a particularly tricky Nexus configuration issue with a power user of Nexus last week I suggested that he write and ship a custom plugin with Nexus OSS. His response, “I’m not going to modify Nexus, it is covered under the AGPL?” Before I corrected him to tell him that Nexus 2.0 switched back to the Eclipse Public License back in February I tried to find out what the AGPL meant to this developer. The results were interesting, but before I get into that reaction, I’d like to take this time to make an unmistakable statement for those of you who missed the switch:
Nexus OSS is covered by the Eclipse Public License Version 1.0
Any questions? I was going to try to use the “blink” tag, but I was told that might be overkill. Basic clarification here is that Nexus OSS has nothing to do with the AGPL. If someone tells you that, point them at the 36 point, red letter announcement in this blog post.
Still not convinced? Take a look at the NOTICE.txt file in the Nexus OSS distribution you just downloaded. You’ll see the following statement:
This program and the accompanying materials are made available under the terms of the Eclipse Public License Version 1.0,
which accompanies this distribution and is available at http://www.eclipse.org/legal/epl-v10.html.
If you are looking for more proof, here is an excerpt from Jason’s interview with InfoQ in February:
InfoQ: You recently announced that the upcoming Nexus repository would be licensed under the EPL-1.0 rather than the AGPLv3. What prompted the change of license?
Jason van Zyl: We find that the community is not receptive to the use of the AGPL in general, and we’ve had a few cases with potential contributors unwilling to publicly release their Nexus plugins because of the AGPL. The AGPL is a fairly aggressive license and just hasn’t been around as long as other well known licenses like the EPL. The AGPL tends to make lawyers wary and we don’t want to hinder adoption because of legal concerns. To date we have only had a small handful of plugins contributed to the Nexus project and we hope to encourage more participation from the community and expand the plugin ecosystem by adopting the EPL.
Why was he under this impression? This somewhat problematic comparison matrix had incorrect information until this morning, and I don’t think we made a big deal about the license switch for Nexus 2.0 to the Eclipse Public License Version 1.0 during our launch of the Nexus 2.0 features in February. We were focused on the compelling features we released with Nexus 2.0, but this license switch is a big deal.
After I corrected him, we started talking about what the AGPL really means. He took five minutes to tell me what his corporate lawyers had told him, and I took an equivalent amount of time to tell him what I had heard. The only thing we could agree on about the AGPL was that two different legal professionals had conflicting interpretations of the license with his expressing concern that the AGPL hadn’t been tested in a court. I understood his concerns, and skipped the rest of the conversation, “Well it doesn’t matter, because Nexus OSS is covered by the same license that covers the Eclipse IDE.”
Public Sector eCommerce is undergoing changes in preparation and support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal systems, Printers and Services and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from.
Note: Each product catalog has separate shopping cart and checkout processes.
Personal Computers and Printers
Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies Server, Storage, Networking and Services
Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
Adobe Photoshop CS6 Beta Now Available
Adobe Photoshop CS6 Beta is now available for download. From Adobe: First Major Release since April 2010 Packed with New Features and Huge Performance Enhancements SAN JOSE, Calif. — March 22, 2012 — Adobe Systems Incorporated (Nasdaq:ADBE) today announced Adobe® Photoshop® CS6 beta, a preview of what’s to come in the next release of the industry standard in digital imaging, is available as a free download from Adobe Labs. Customers can download the beta, try out the experience and provide feedback to the product team. Packed with groundbreaking new innovations,featuresand incredible performance enhancements, Photoshop CS6 beta is available for the Mac OS and Microsoft® Windows® platforms. The final release is expected in the first half of 2012. “Photoshop CS6 will be a milestone release that pushes the boundaries of imaging innovation with incredible speed and performance,” said Winston Hendrickson, vice president products, Creative Media Solutions, Adobe. “We couldn’t wait to share this beta of Photoshop CS6 with our customers and are looking forward to hearing from them and seeing the ways they are incorporating the beta into their daily creative workflows.” New Features in Photoshop CS6 Beta Photoshop CS6 beta demonstrates Adobe’s focus on huge performance enhancements, imaging magic and creativity tools that offer customers a new experience in digital imaging. Key features include new additions to the Content-Aware tools: Content-Aware Patch allows greater control by letting users select and duplicate an area of an image to fill in or “patch” another, and Content-Aware Move lets users select and magically move an object to a new place in the image. In addition, the Photoshop CS6 beta offers all the features of Adobe Photoshop CS6 and Adobe Photoshop CS6 Extended, such as new 3D editing features and quantitative imaging analysis capabilities. These features will be included in the shipping version of Photoshop CS6 Extended when it becomes available. Pricing and Availability The Photoshop CS6 beta is available immediately as a free download in English and Japanese. At installation, users will be required to provide an Adobe ID to complete a one-time login and online product activation. For information on how to install Photoshop CS6 beta, visit www.adobe.com/go/photoshopcs6. Customers can submit feedback via the Photoshop CS6 beta forum. Users can also connect with the Photoshop team via the community-powered site; on Facebook; YouTube; Photoshop.com blog; or via Twitter. More information and resources are available on Adobe's site. B&H carries Adobe products.
Posted: 3/22/2012 8:44:48 AM CT Posted By: Bryan
Posted to: Canon News, Nikon News Category: Adobe News
2015-40/2214/en_head.json.gz/6690
An image of the original KHO beta.
Re:Twilight Soul has a pretty interesting past. If you're interested, keep reading....

Silvyria
It all started out when Taro (owner) wanted to make an online RPG. He had discovered various different programs that would help him, until he decided the one he liked was known as "Eclipse." He didn't have any experience, but tutorials helped him out with basic things. He mapped a small game, no more than 30 maps in size, and named it "Silvyria." There was no advertising done for it, and there weren't many future plans. The game was only up for a week or so, and he and a few friends played it together.

Pre-KHO
Realizing his game would get nowhere, Taro wanted to make a game that would get the attention of lots of people, but he didn't know a very good way of doing it. After some thought, he realized a fan game would be a good option, because fans of a series might like to play an online game of it. Thinking of different game series, Kingdom Hearts popped into his head. It seemed easy to implement into an online RPG. So, he went to work, gathering graphics, making some custom ones, and turning it into an online game - he called it "Kingdom Hearts Online," which was abbreviated as "KHO."

KHO
Taro started working on KHO. He set up a website and forum for it, and did some advertising. Taro was known as "Vivin" at the time. After the game got some attention and people signed up on the forums, he got ready for the beta version. On July 19, 2008, Vivin announced sign-ups for beta testing, where 12 members would get to try the game. The sign-up sheet filled up very quickly. On July 22, 2008, the beta was released to the 12 members.

A few months later, Vivin released the first official version of KHO. The game wasn't populated too much. The highest number of members online at once was around six, but on average there were two or three. Not too much happened on the forums, aside from a few dedicated members who visited every day. KHO's graphics and maps were very messy and glitchy. After all, it was Vivin's second game and he had almost no experience. However, the members who played enjoyed it.

Several months later, after the game got more populated, two new members joined: Atsuya and Shricx. Atsuya became the first staff member of KHO, and Shricx became the first moderator. Eventually, Vivin and Atsuya began talking about remaking KHO together, to fix all of the problems in it and give it better graphics. They began, and a new game went into progress, with KHO ending at version 1.
Twilight Soul
An event going on in Twilight Soul.
At the time, Vivin and Atsuya were just going to remake KHO and rerelease it. However, after seeing how different it was from KHO, they decided to change more than just the game. They created a new forum for the game, and renamed the game to "Twilight Soul," or TS.

Twilight Soul is when the game truly began. This is when lots of members started to join, and a true community was formed. The forums were active and lots of people logged onto the game. At times, there were 20 or more people online. And during this time, TS was only in beta. Vivin and Atsuya made the beta available to everyone who joined, and kept it online the whole time as they worked.

Vivin went through many name changes, and is now known as Taro. He added more members onto the staff team. There were new mappers, moderators, and developers, all of whom helped work on TS. Game progress went much faster.

So much more happened in Twilight Soul that there is too much to write down. However, it did go through 3 episodes (versions). The entire game was completed. It featured almost every world from KH1 and KH2, along with many other custom areas for fun. The game file was very large and there wasn't much left to do. Until Taro found a new version of Eclipse. This engine was improved in every way possible, and the staff members seemed to like it too. This is when the next phase of Twilight Soul began. A remake.

Re:Twilight Soul
Here we are, at Re:Twilight Soul. The "Re:" was taken from other Kingdom Hearts game remakes, such as Re:Chain of Memories and Re:Coded. Just like Twilight Soul was a remake of Kingdom Hearts Online, Re:Twilight Soul is a remake of Twilight Soul. And this time, there are even more improvements.

It feels like an entirely new game, and it has only just begun. This game has come a very long way, starting in 2008 with the KHO beta up until now. It has been almost three years, and this game is still alive and populated. Who knows where the game will go from here?...

That's it for the history of this game. Did you enjoy reading what this game has gone through the past three years? It's been fun, and it will get even better.
2015-40/2214/en_head.json.gz/7168 | Microsoft loophole mistakenly gives pirates free Windows 8 Pro license keys
By Anna Washenko
Looking for a free copy of Windows 8 Pro? An oversight in Microsoft’s Key Management System – made public by Reddit user noveleven – shows that with just a bit of work, anyone can access a Microsoft-approved product key and activate a free copy of Windows 8 Pro.
The problem is in the Key Management System. Microsoft uses the KMS as part of its Volume Licensing system, which is meant to help corporate IT people remotely activate Windows 8 on a local network. The Achilles’ heel of the setup, according to ExtremeTech, is that you can make your own KMS server, which can be used to partially activate the OS. That approach requires reactivation every 180 days, though, so it’s not a practical system.
However, the Windows 8 website has a section where you can request a Windows 8 Media Center Pack license. Media Center is currently being offered as a free upgrade until Jan. 31, 2013. Supply an email address and you'll be sent a specific product key from Microsoft. If you have a KMS-activated copy of Windows 8, with or without a legitimate license key, then going to the System screen will display a link that reads "Get more features with a new edition of Windows." If you enter your Media Center key there, the OS will become fully activated.

It's a little surprising that with Microsoft's complex KMS, this type of thing could slip through the cracks, allowing people to take advantage of the system. It seems most likely that after the uproar in response to Microsoft's plans to remove Media Center from Windows 8 Pro, the company may have rushed the free upgrade, resulting in a loss for Microsoft and a gain for anyone who takes the time to acquire a free Windows 8 Pro copy. It's unclear whether or not there's a patch for this – other than removing the free Media Center download altogether. Though ending the free Media Center upgrade would be an easy fix, it wouldn't be a popular choice among customers who just bought a Windows 8 computer and who want the feature. We'll have to wait and see how the company responds to this latest hit.
2015-40/2214/en_head.json.gz/7929 | Locative Games
Home - The Grid: Run Your City. How to Play. Fleck: Grow Your World!
Turf Wars - a GPS Game for the iPhone. Pervasive game cheap phone mobile. Manhattan Story Mashup - Nokia Research Center Project. Meet your heart between (MYHT)
Space Invaders pervasive game project - "Marshotron"
PopSci's Future Of: Pervasive Games. Think Design Play. Booyah - Home. A fun new way to explore your city. New generation of location games catches on. Location-based social networks Foursquare and Gowalla, which launched the craze of “checking in” at locations such as restaurants or stores in exchange for points, are often described as games. But they’re fairly simple examples as far as games go. Checking in at a bar with Gowalla (or Loopt, or Foursquare, or Brightkite) is done in a matter of seconds. But new location-based games are emerging that hope to command much more of a player’s attention. Booyah‘s MyTown, for example, has over 2 million active users, and the population grows by more than 100,000 players per week. Most users of the Monopoly-style game spend on average more than an hour on the app a day. Then there’s Parallel Kingdom (pictured) for the iPhone and Android platforms, designed by a company called PerBlue in Madison, Wisc. Now Parallel Kingdom is more lenient in the way actual location is featured in the game. “We also found that people in general don’t go to that many locations in their lives on a daily basis.
Slashwars. IGFEST. Log - design, cities, physical & social interaction, play. Books I’ve Read in 2011 In Articles on 9 January 2012 with Comments Off Thought I’d do this again. I read 27 books last year, almost exclusively on my Kindle. Below is a list, in order of finish date (latest first). I highlighted five favorites and provided a bit of commentary.
The Wild Rumpus. Hubbub – physical, social games for public space. Urban games festival. Log - The theory and practice of urban game design. The Theory and Practice of Urban Game Design In Articles on 23 January 2009 tagged cities, conceptualisation, DGG, education, game design, HKU, ideation, NLGD, play, seminars, urban games, urbanism, VNA, workshops with 7 comments A few weeks ago NLGD asked me to help out with an urban games ‘seminar’ that they had commissioned in collaboration with the Dutch Game Garden. A group of around 50 students from two game design courses at the Utrecht School of the Arts were asked to design a game for the upcoming Festival of Games in Utrecht. The workshop lasted a week. My involvement consisted of a short lecture, followed by several design exercises designed to help the students get started on Monday. Lecture In the lecture I briefly introduced some thinkers in urbanism that I find of interest to urban game designers.
New Games for New Cities at FutureEverything – Hubbub. Chromaroma. IPerG - Integrated Project of Pervasive Games. Pervasive Games - gaming in physical space. Location-based game. A map of players' trails in a location-based game. A location-based game (or location-enabled game) is a type of pervasive game in which the gameplay evolves and progresses via a player's location. Thus, location-based games must provide some mechanism to allow the player to report their location, frequently this is through some kind of localization technology, for example by using satellite positioning through GPS. "Urban gaming" or "street games" are typically multi-player location-based games played out on city streets and built up urban environments.
About the Challenge. ARIS - Mobile Learning Experiences - Creating educational games on the iPhone. My Grove by Kranky Panda Studios. Shadow Cities - Magical location based MMORPG for iPhone, iPod touch and iPad. Situationist App By Benrik. Geo Wars App. Parallel Kingdom. GingerSquid. Tourality – The Ultimate Outdoor GPS Game. The Situationist – props & rules. Gamification has been under well deserved fire, not least because so many implementations end up as a soulless retrofitting of points and badges. Sadly, these work well enough that they can be deployed without coming close to the potential of applied game design, while casting gamification as simply manipulative. Hence the derisive and increasingly accurate term pointsification. Which is why it’s nice to see the Situationist, an iPhone app that dispenses with all these extrinsic rewards. You enter a list of situations that you want to experience (e.g. “hug me for 5 seconds exactly”), upload a picture, and it alerts nearby users of the app so they can make that situation happen. It was created by artists collective Benrick, inspired by a radical movement called the Situationist International, active 1968-1973, who sought: It’s reminiscent of Jane McGonigal and Ian Bogost’s game Cruel 2 B Kind, a large scale team game where you wield random acts of kindness as weapons:
Related: Fleck: Grow Your World!
- Booyah - Home
- Chromaroma
- ARIS - Mobile Learning Experiences - Creating educational games on the iPhone
2015-40/2214/en_head.json.gz/8004 | Arnold's Press Pause and Rewind: October 21st
There isn't a whole lot I wish for from this industry. Okay, I'm a liar. Sorry, but I was trying to be modest. Anyway, what did occur to me recently was videogames becoming increasingly dependent on various forced gameplay gimmicks. What do I mean? Well, simple: developers need to stop forcing us to play a game one sole and specific way. Take, for example, Lair. There's not a doubt in my mind that the biggest reason for the game's universal panning was the forced SixAxis control scheme. Factor 5 implemented a design and then forced it on us by not allowing us to play the game any other way. This landed a critical deathblow to the game. The unfortunate thing is that this isn't the end of it. Games will keep forcing concepts on gamers without offering the option of a standard/classic mode.

SKATE is another example. While it's a really good game, the fact that you can't switch camera views is pretty ridiculous. After a while, the game's default camera view becomes a bit nauseating, as it's too focused on the skater, pans around violently, and impairs your judgment of distance.

Furthermore, when sports games begin to make radical changes, they too should offer a classic mode. Some sports games do this, some don't. FIFA is one of those games that doesn't. FIFA 08 is far more sim than the series has ever been, and, quite honestly, I'm feeling a little alienated. The game almost feels a little too hard, as even on an amateur difficulty I can't score a goal - and I've never been a bad FIFA player. And while it's not a sports game, per se, it seems like Polyphony is well aware of this kind of concern, implementing two physics types into Gran Turismo 5 - professional and standard. Not alienating a fanbase after a radical change is a fantastic way of retaining the proper accessibility, and I fear that the more this generation progresses, the less of that we will see.

Another qualm is controller configurations. In adventure games I don't really mind not being able to configure the controls on my own - largely because action/adventure games don't really have very complex setups. But if it isn't offered in racing games and sports games, that's just about the worst decision a developer can make. Different people prefer different setups for their racers. I know I like using the triggers to accelerate and brake, as opposed to the analog stick or X and Square buttons. But I know many others that don't prefer to drive that way.
Furthermore, it's pivotal in sports games because I frequently find the control setup in a new iteration to have changed over a past entry, which forces me to get used to a new configuration. In that case, I'd like to just go to the options menu and re-configure the pad to my liking. Again, some games allow for this, some don't. A word to all developers: don't make decisions like that for us. The gamer pays $60 and shouldn't have to be limited in how they play a game, be it a controller gimmick, a lack of configurations, or an intended gameplay mechanic.

10/21/2007 Arnold Katayev
2015-40/2214/en_head.json.gz/8191 | Google Open Source Program Manager Chris DiBona: Best of Both Worlds
"You are not going to get malware on a Chrome OS. You are not going to get security problems on a Chrome OS that has the developer's switch," said Google's Chris DiBona. "But at the same time, if you are a developer, that sort of locking down stops you from innovating. It stops you from developing very quickly. So we wanted to make it possible to have the best of both worlds there."
By Jack M. Germain • LinuxInsider • ECT News Network
In 1996, two Stanford University students, Larry Page and Sergey Brin, created a unique search engine called "BackRub" that ran on the school's server. After one year, BackRub's bandwidth needs outgrew what the university could support. Its creators rebranded BackRub into Google, a respelled reference to "googol." It is a mathematical term for the number represented by the numeral 1 followed by 100 zeros.
Chris DiBona
Google began as a business after its founders accepted a US$100,000 funding grant from Sun Microsystems cofounder Andy Bechtolsheim in August 1998. Page and Brin embedded their mission statement in their corporate name. They would organize a limitless amount of information on the World Wide Web.
Several years later, Google's founders devised a list of 10 things they knew to be true about running their business. Item No. 2 was "it's best to do one thing really, really well." Google is now much more than a unique search engine. And Google does much more than catalog a world of information on its massive servers. Google does many things. But perhaps all of what it does still meets that key founding principle of organizing vast volumes of data on the Web.
In this interview, LinuxInsider discusses with Christopher DiBona, Google's open source program manager, how Linux, Chrome, Android and a host of Google-created proprietary code all mesh with open source software to maintain Google's massive information infrastructure.
LinuxInsider: Given all of the software platform options, what drew you into working with Linux and open source software?
Chris DiBona: Back in the mid '90s, I was working on a science assignment at the time. I had a choice of working in the Sun workstation lab at the school. That was crowded and hot. Or I could dial Linux on a 386 or a 486 (CPU). At the time, I was working in a computer book shop. So I accumulated all these computer books and textbooks. I traded my books for a friend's real Unix machine, an AT&T 381. It was not fully featured. Finally I got involved with a Linux machine. Later when I moved to California, I got involved with running a Linux User's Group. Now here I am 20 years later.
LI: How much is Google driven by open source?
DiBona: When you personally go and use Google, like Gmail or our online stuff, on top of that you have software that we've written. We combine our software with what you typically would expect from a server and a desktop. We pull in a standard amount of open source libraries and standard libraries. Some of these are released [as open source] and some of them are not. All of these things come together to make Google -- well, Google.
Some of what we have added is completely state of the art, such as the Android stack. We've released something like 3,000 projects of various size and quality and development models. So any kind of project you want, we've done that model for our company.
LI: How hard is it to tie all of that together for users who are on a Windows box or an Apple product? How do you make it almost platform-agnostic?
DiBona: Take our Chrome platform. Most of the people who use it have no idea that there is this open source Chromium thing inside it. That shows where open source is and where it is going. You have to come to the realization that open source technical people understand and appreciate these things. But most consumers not only don't care, they have literally no interest in it. All they want to know is that their software is good, that their software does what they want, and that it works well. That you got to it by way of open source or that we open sourced it -- they don't care.
LI: Do you find that consumer response disheartening?
DiBona: There is satisfaction to be found in that because -- in the case of Chromium -- we know that we are doing that in a way that is very exciting from an engineering perspective with the release of technology. And we are really moving all browsers forward in the way that we work with Chrome. That's pretty cool. There are a great number of people out there using Chrome and Chromium and have no clue that it is open source. It is one of those things that we are very satisfied in doing the correct amount of work there. We know that the consumers just love the product, and we do what we can to make that persist within the rubric of open source.
LI: What differences are there between Chrome and Chromium? Does Chromium really drive Chrome, or are they two separate things?
DiBona: Well, they are not separate by any measure. I would say that Chrome is complementary with Chromium. When you look at what Chrome does, we are very holistically minded about what surfing means online. For instance, if you are going to be running Flash and you have the standard plug-in architecture that a browser has ... Flash sort of exists outside the sandbox. So if Flash breaks down in one tab, it will break down across all tabs. That is not what you want.
That should really be a red flag. So we do a lot of proprietary things in Chrome with things like Flash and PDF that we couldn't really do in Chromium because those proprietary offerings are really not available in open source. To really have a plug-in system that works in the sandbox model, you kind of have to have a closed source element. So Chrome is really the closed source stuff merged with the open source stuff called "Chromium."
Remember that Chromium is all the HTML rendering and all of the browser stuff and all the data-compression stuff. It's pretty amazing. So Chrome exists to have the things that can really not be open sourced. This makes it well secured and well managed by the software, so that people have what we consider to be a very good quality Web surfing. There are some really interesting Chromium offshoots out there. It's neat to see what people do with Chromium.
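To make that isolation point concrete, here is a minimal, hypothetical sketch in Python (it is not Chrome's code, and the tab names are invented): each "tab" runs as its own OS process, so a fault in one cannot take down the rest, whereas a plugin running inside a single shared process would bring everything down with it.

```python
# Toy illustration of process-per-tab isolation (not Chrome's real architecture):
# a crash in one "tab" process leaves the others untouched.
import multiprocessing as mp

def run_tab(name: str, should_crash: bool) -> None:
    if should_crash:
        raise RuntimeError(f"plugin crashed while rendering {name}")
    print(f"{name}: rendered fine")

if __name__ == "__main__":
    tabs = [("tab-1", False), ("tab-2", True), ("tab-3", False)]
    procs = [mp.Process(target=run_tab, args=t, name=t[0]) for t in tabs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
        print(f"{p.name}: exit code {p.exitcode}")  # only tab-2 exits nonzero
```

Chrome's sandbox does far more than this, of course; the sketch only shows why pushing risky components out of process contains the damage.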
LI: How does the Chrome OS fit into this security scenario?
DiBona: In many ways, the Chrome OS is very much in the spirit of Chromium. What's really remarkable about the Chrome OS is the developer's switch under the battery. Say you leave your Chrome notebook under the seat on a bus and lose it. You can go buy another Chrome notebook and sign into your account and have all of your data restored. It is incredibly secure.
LI: How does that feature work?
DiBona: There is lots to why that works. It comes down to that developer's switch under the battery. We have a cryptographically assured chain of custody, if you will, from the chip on the motherboard all the way to the communications for the device. So that's pretty amazing. But the problem with that is people use that same mechanism to make it impossible to update the operating system and do interesting things with Linux, say on the laptop or their tablets.
What we did instead is said with the flip of a switch, you basically can do whatever you want with this hardware. You can install an operating system without bootloaders or whatever. That makes it possible for the Ubuntus of the world and the Debians of the world to install on a Chrome OS laptop. So when you switch it back, it will say where is the signed binary. It will give you, again, that chain of custody that you want. That's the secured computing environment.
Chrome is pretty amazing at this. You are not going to get malware on a Chrome OS. You are not going to get security problems on a Chrome OS that has the developer's switch. But at the same time, if you are a developer, that sort of locking down stops you from innovating. It stops you from developing very quickly. So we wanted to make it possible to have the best of both worlds there. So a responsible developer who understands the risks of surfing the Web is going to be able to do that.
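The "chain of custody" DiBona describes can be sketched in a few lines. The example below is an illustration only: it uses an HMAC as a stand-in for the asymmetric firmware signatures a real verified-boot design would use, and the root key, stage names, and developer-switch behavior are simplified assumptions rather than Chrome OS internals.

```python
# Illustration of a verified-boot chain of custody (not Chrome OS code).
import hashlib
import hmac

ROOT_KEY = b"key-baked-into-read-only-firmware"   # hypothetical root of trust

def sign(blob: bytes) -> bytes:
    return hmac.new(ROOT_KEY, blob, hashlib.sha256).digest()

def verified(blob: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(blob), tag)

stages = [("bootloader", b"bootloader image"),
          ("kernel", b"kernel image"),
          ("root filesystem", b"rootfs image")]
signatures = {name: sign(blob) for name, blob in stages}

def boot(developer_switch, tampered_stage=None):
    for name, blob in stages:
        if name == tampered_stage:
            blob = blob + b" (modified)"          # simulate tampering
        if verified(blob, signatures[name]):
            print(f"{name}: signature OK, handing off")
        elif developer_switch:
            print(f"{name}: unsigned, developer switch is on, booting anyway")
        else:
            print(f"{name}: verification failed, refusing to boot")
            return

boot(developer_switch=False, tampered_stage="kernel")
boot(developer_switch=True, tampered_stage="kernel")
```

The design trade-off in the interview falls out of the last two branches: with the switch off, only signed images run; with it on, the owner takes responsibility and the machine boots whatever the developer installed.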
LI: Based on what you just said, let's talk about the Android OS. It seems to be the direct opposite in terms of security.
DiBona: Oh, I fundamentally disagree. The Nexus devices are extremely open. Take Ubuntu, for instance. Whenever they demo Ubuntu, it is done on a Nexus device. When we, Google, sell a device, it is very specifically unlocked. Or it is unlockable in a very clear manner.
Things are different with carriers. Things work a little differently in the U.S. People walk into a cellphone store and want a free cellphone. The telephone company or AT&T store or a T-Mobile store or whoever pays for that. They say in return for you getting a $500 device for free, you're committing to this two-year plan. And part of that structure allows them to lock down the phone so you can't change the operating system on it.
LI: How do you avoid that?
DiBona: I don't know if the people really pay attention to the terms of those deals -- but I know that developers who are savvy to these restrictions should just go buy a Nexus device or another unlocked device. Or work with a carrier who cooperates.
For instance, if you have T-Mobile, after the first three or six months into the contract, they will unlock your phone for you. Of course, this only works for certain kinds of phones. There are a lot of things out there that if the developers were just a little conscientious, these things wouldn't have to plague them.
LI: It almost seems -- and I don't mean this negatively -- that Google is almost shooting itself in both feet at the same time. It has the Chrome OS, and it has the Android OS. It almost seems they are competing against themselves.
DiBona: I would not characterize Google as shooting itself in both feet. Google actually develops a number of OSes. There is the Nexus Q; there is Google TV; there is Chrome OS; there is the Android. We are actually in this operating system-rich environment.
Google is not a small company any more. Just on the Google side alone, we have over 30,000 employees. Then there are the Motorola employees. Chrome OS and Android have different philosophies on what they are presenting to the users. I think that Google as a company is big enough both as a company and personnel-wise that that is OK.
Some customers respond extremely well to the Chrome OS model, especially those that use Google services like docs and spreadsheets. That's also true of the Android. But it is really a different way of approaching the user. Now they both have Chrome in common. You can have the Chrome browser in Android, and that is obviously fundamental with the Chrome OS.
LI: Don't those conflicting choices make for a confusing marketing strategy?
DiBona: It may seem odd that people are consuming both of them. We are very happy with the outcome. It is sort of like when you have two children who are kind of competitive with each other. You wouldn't get rid of one of your kids. They are both great kids. You just have to make sure that their competitiveness does not hurt each other.
LI: What do you see as your biggest obstacles as a manager indealing with all of this?
DiBona: You shouldn't over expand on what my job actually is. The primary focus of my job -- and it is very cutting-edge, actually, and is very exciting -- is open source compliance; making sure that we don't screw up with other persons' licenses. It involves making sure that when we choose a license for a project that we release, that it is consistent with our values and our philosophies for that project.
Chrome is a great example of that. We used BSD because we wanted the code to get back into the webkit and be used by other browser vendors. BSD was the most common denominator in Firefox and the webkit. And even Microsoft could use it.
We wanted to get the technology in the hands of everybody -- not just our browser, but everybody's browser. In a lot of these projects now, we have to provide infrastructure for development in the form of Git and Gerrit. Gerrit is a code review front end for Git (a distributed revision control and source code management system). That means that whenever we buy a company, we have to make sure that they are in compliance. I hate to say it, but I am a very high-functioning bureaucrat who looks after licenses.
Jack M. Germain has been writing about computer technology since the early days of the Apple II and the PC. He still has his original IBM PC-Jr and a few other legacy DOS and Windows boxes. He left shareware programs behind for the open source world of the Linux desktop. He runs several versions of Windows and Linux OSes and often cannot decide whether to grab his tablet, netbook or Android smartphone instead of using his desktop or laptop gear.
2015-40/2214/en_head.json.gz/8742 | stories filed under: "computing" Computers
Michael Ho - Thu, Jul 9th 2015 5:00pm
computing, memcomputer, memcomputing, memristors, moore's law, np-complete, quantum computing, qubits, silicon, supercomputers, transistors
d-wave, hp, ibm
DailyDirt: It's Not Just A Good Idea... It's Moore's Law
from the urls-we-dig-up dept
Computers are such an important part of our daily lives now that it's difficult to imagine how we could get along without them sometimes. Obviously, people do. But growing accustomed to supercomputer capabilities available at our fingertips all the time is much more than a luxury. We expect computers to get better and better at an astonishing (exponential) rate, but will we notice if/when that rate slows down? Here are just a few links on keeping up -- or possibly exceeding -- the performance expectations that Moore's Law has instilled in us.
IBM has unveiled a 7nm chip -- the world's first commercially-viable and functioning chip using silicon-germanium materials produced with extreme ultraviolet (EUV) lithography. Moore's Law is beginning to show some cracks now, but this test chip shows the game isn't over just yet. [url]
A working memory-crunching computer (memcomputer) prototype (the first of its kind) has been demonstrated by solving an NP-complete problem. This kind of computer requires a completely different design so that it can simultaneously process and store information -- unlike conventional computers which can't. Memprocessors (built from components such as memristors, memcapacitors, etc) can be made, but scaling them up is still a challenge -- so it'll be a while before anyone is making a memsupercomputer (or a supermemcomputer?). (A brute-force sketch after this list shows why NP-complete instances overwhelm conventional machines so quickly.) [url]
Ditching transistors (or even memristors) completely could be a solution to ridiculously fast computers, and the way to get there might be quantum computing qubits. The trick is constructing qubits that are stable, error-free and scalable. So far, a 1000+ qubit computer has been made recently, but this quantum computer requires various superconducting components chilled to a nearly absolute zero temperature -- so it won't be used in laptops anytime soon. [url]
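As referenced in the memcomputer item above, here is a tiny brute-force subset-sum sketch (illustrative only, and not how a memcomputer or quantum annealer actually works): the number of candidate subsets doubles with every element you add, which is why these problems swamp conventional machines.

```python
# Brute-force subset sum: the work grows as 2^n, so even modest inputs
# become intractable on a conventional machine.
from itertools import combinations

def subset_sum_brute_force(values, target):
    checked = 0
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            checked += 1
            if sum(combo) == target:
                return combo, checked
    return None, checked

values = [13, 7, 42, 8, 21, 5, 30, 11]
solution, checked = subset_sum_brute_force(values, 60)
print("solution:", solution, "found after checking", checked, "subsets")
print("subsets for n=8:", 2 ** 8, "  for n=40:", 2 ** 40)  # exponential blowup
```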
computing, drm, security
How DRM Makes Us All Less Safe
from the you're-in-danger-thanks-to-bad-copyright-laws dept
May 6th is the official Day Against DRM. I'm a bit late writing anything about it, but I wanted to highlight this great post by Parker Higgins about an aspect of DRM that is rarely discussed: how DRM makes us less safe. We've talked a lot lately about how the NSA and its surveillance efforts have made us all less safe, but that's also true for DRM.
DRM on its own is bad, but DRM backed by the force of law is even worse. Legitimate, useful, and otherwise lawful speech falls by the wayside in the name of enforcing DRM—and one area hit the hardest is security research.
Section 1201 of the Digital Millennium Copyright Act (DMCA) is the U.S. law that prohibits circumventing "technical measures," even if the purpose of that circumvention is otherwise lawful. The law contains exceptions for encryption research and security testing, but the exceptions are narrow and don’t help researchers and testers in most real-world circumstances. It's risky and expensive to find the limits of those safe harbors.
As a result, we've seen chilling effects on research about media and devices that contain DRM. Over the years, we've collected dozens of examples of the DMCA chilling free expression and scientific research. That makes the community less likely to identify and fix threats to our infrastructure and devices before they can be exploited.
That post also reminds us of Cory Doctorow's powerful speech about how DRM is the first battle in the war on general computing. The point there is that, effectively, DRM is based on the faulty belief that we can take a key aspect of computing out of computing, and that, inherently weakens security as well. Part of this is the nature of DRM, in that it's a form of weak security -- in that it's intended purpose is to stop you from doing something you might want to do. But that only serves to open up vulnerabilities (sometimes lots of them), by forcing your computer to (1) do something in secret (otherwise it wouldn't be able to stop you) and (2) to try to stop a computer from doing basic computing. And that combination makes it quite dangerous -- as we've seen a few times in the past.
DRM serves a business purpose for the companies who insist on it, but it does nothing valuable for the end user and, worse, it makes their computers less safe.
Say That Again
Parker Higgins - Thu, May 1st 2014 12:16pm
4th amendment, cloud, computing, mobile phones, privacy, reasonableness, supreme court, technology
The Supreme Court's Real Technology Problem: It Thinks Carrying 2 Phones Means You're A Drug Dealer
from the how-can-it-judge-reasonableness dept
I spent a lot of the last week shaking my head at the commentary on the Supreme Court and its (lack of) technical expertise. Much of the criticism came in response to the oral arguments in Aereo, and broke down into two areas: it either misunderstood the nature of Supreme Court oral arguments and their transcripts, or mistook familiarity with a handful of Silicon Valley products for actual tech savviness.
But in a series of cases this week about law enforcement searches of cell phones, we caught a glimpse of the Supreme Court’s real technology problem. Here's what it comes down to: it's not essential that the Court knows specifics about how technology itself works—and as Timothy Lee argues, that might even tempt them to make technology-based decisions that don't generalize well. However, it is essential that the Court understands how people use technology, especially in areas where they're trying to elaborate a standard of what expectations are "reasonable."
So when Chief Justice Roberts suggests that a person carrying two cell phones might reasonably be suspected of dealing drugs, that raises major red flags. Not because of any special facts about how cell phones work, but because (for example) at least half of the lawyers in the Supreme Court Bar brought two cell phones with them to the courthouse that day. Should those attorneys (along with the many, many other people who carry multiple devices) reasonably expect less privacy because the Chief Justice is out of touch with that fact?
Contrast that with Justice Kagan's point about storage location in the same argument. Justice Kagan suggested, correctly, that people don't always know what is stored on their device and what is stored "in the cloud." The actual answer to that question should be immaterial; the point is that it's absurd for a person's privacy interest to hinge on which hard drive private data is stored on.[1] Instead, the important fact here, which Justice Kagan recognizes, is that the distinction between local and cloud storage just doesn't matter to many people, and so it can't be the basis of a reasonable-expectation-of-privacy test.
If you’re feeling less generous, you might take Justice Kagan’s point as evidence that she herself doesn’t know where her files are stored. And in fact, that’s probably true—but it’s not important. You don’t actually need to know much about file systems and remote storage to know that it’s a bad idea for the law to treat it differently.
That’s not to say that technical implementation details are never relevant. Relevant details, though, should (and almost always do) get addressed in the briefs, long before the oral argument takes place. They don’t usually read like software manuals, either: they’re often rich with analogies to help explain not just how the tech works, but what body of law should apply.
What can’t really be explained in a brief, though, is a community’s relationship with a technology. You can get at parts of it, citing authorities like surveys and expert witnesses, but a real feeling for what people expect from their software and devices is something that has to be observed. If the nine justices on the Supreme Court can’t bring that knowledge to the arguments, the public suffers greatly. Again, Justice Kagan seems to recognize this fact when she says of cell phones:
They're computers. They have as much computing capacity as laptops did five years ago. And everybody under a certain age, let’s say under 40, has everything on them.
Justice Kagan is not under 40, and might not have everything stored on a phone (or on an online service accessible through her phone). But that quote shows me that she at least knows where other people’s expectations are different. Chief Justice Roberts’s questions show me exactly the opposite.
The justices live an unusual and sheltered life: they have no concerns about job security, and spend much of their time grappling with abstract questions that have profound effects on this country’s law. But if they fail to recognize where their assumptions about society and technology break from the norm—or indeed, where they are making assumptions in the first place—we’re all in trouble.
Reposted from ParkerHiggins.net.
[1] That speaks to a need to revisit the sort-of ridiculous third-party doctrine, which Justice Sotomayor has suggested, but one battle at a time.
Predictions
Mike Masnick - Wed, Jan 4th 2012 2:16pm
computing, copyright, cory doctorow, regulations, war
The Ongoing War On Computing; Legacy Players Trying To Control The Uncontrollable
from the must-watch dept
I don't think I've ever had so many people all recommend I watch the same thing as the number of folks who pointed me to Cory Doctorow's brilliant talk at the Chaos Communication Congress in Berlin last week. You can watch the 55 minute presentation below... or if you're a speed reader, you can check out the fantastic transcript put together by Joshua Wise, which I'll be quoting from:
The crux of his argument is pretty straightforward: all of these attempts to "crack down" on copyright infringement online, with things like DRM, rootkits, three strikes laws, SOPA and more, are really just forms of attack on general purpose computing. That's because computers that can run any program screw up the kind of gatekeeper control some industries are used to, and create a litany of problems for those industries:
By 1996, it became clear to everyone in the halls of power that there was something important about to happen. We were about to have an information economy, whatever the hell that was. They assumed it meant an economy where we bought and sold information. Now, information technology makes things efficient, so imagine the markets that an information economy would have. You could buy a book for a day, you could sell the right to watch the movie for one Euro, and then you could rent out the pause button at one penny per second. You could sell movies for one price in one country, and another price in another, and so on, and so on; the fantasies of those days were a little like a boring science fiction adaptation of the Old Testament book of Numbers, a kind of tedious enumeration of every permutation of things people do with information and the ways we could charge them for it.
[[355.5]] But none of this would be possible unless we could control how people use their computers and the files we transfer to them. After all, it was well and good to talk about selling someone the 24 hour right to a video, or the right to move music onto an iPod, but not the right to move music from the iPod onto another device, but how the Hell could you do that once you'd given them the file? In order to do that, to make this work, you needed to figure out how to stop computers from running certain programs and inspecting certain files and processes. For example, you could encrypt the file, and then require the user to run a program that only unlocked the file under certain circumstances.
[[395.8]] But as they say on the Internet, "now you have two problems". You also, now, have to stop the user from saving the file while it's in the clear, and you have to stop the user from figuring out where the unlocking program stores its keys, because if the user finds the keys, she'll just decrypt the file and throw away that stupid player app.
[[416.6]] And now you have three problems [audience laughs], because now you have to stop the users who figure out how to render the file in the clear from sharing it with other users, and now you've got four! problems, because now you have to stop the users who figure out how to extract secrets from unlocking programs from telling other users how to do it too, and now you've got five! problems, because now you have to stop users who figure out how to extract secrets from unlocking programs from telling other users what the secrets were!
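To see why the problems keep multiplying, here is a deliberately naive sketch (XOR stands in for whatever real scheme a vendor might ship; the key and file are invented, and the point is structural rather than cryptographic): the player has to carry the key in order to unlock the content, so the key can always be recovered.

```python
# Toy DRM "player" (XOR is not real encryption; the point is structural):
# to render the file, the player must ship with the key, so a determined
# user can always dig the key out and unlock the file directly.
KEY = b"key-hidden-inside-the-player"   # invented for the example

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

song = b"some licensed recording"
locked = xor(song, KEY)          # what the store actually distributes

# The "authorized" player unlocks it only under its own conditions...
print(xor(locked, KEY).decode())

# ...but anyone who inspects the player's binary or memory finds KEY,
# decrypts the file once, and can share it in the clear: problems two
# through five in the talk follow from that single structural fact.
```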
From there he goes on to put together a fantastic analogy of how a confusion over analogies, rather than (perhaps) outright cluelessness (or evilness) explains why bad copyright laws keep getting passed:
It's not that regulators don't understand information technology, because it should be possible to be a non-expert and still make a good law! M.P.s and Congressmen and so on are elected to represent districts and people, not disciplines and issues. We don't have a Member of Parliament for biochemistry, and we don't have a Senator from the great state of urban planning, and we don't have an M.E.P. from child welfare. (But perhaps we should.) And yet those people who are experts in policy and politics, not technical disciplines, nevertheless, often do manage to pass good rules that make sense, and that's because government relies on heuristics -- rules of thumbs about how to balance expert input from different sides of an issue.
[[686.3]] But information technology confounds these heuristics -- it kicks the crap out of them -- in one important way, and this is it. One important test of whether or not a regulation is fit for a purpose is first, of course, whether it will work, but second of all, whether or not in the course of doing its work, it will have lots of effects on everything else. If I wanted Congress to write, or Parliament to write, or the E.U. to regulate a wheel, it's unlikely I'd succeed. If I turned up and said "well, everyone knows that wheels are good and right, but have you noticed that every single bank robber has four wheels on his car when he drives away from the bank robbery? Can't we do something about this?", the answer would of course be "no". Because we don't know how to make a wheel that is still generally useful for legitimate wheel applications but useless to bad guys. And we can all see that the general benefits of wheels are so profound that we'd be foolish to risk them in a foolish errand to stop bank robberies by changing wheels. Even if there were an /epidemic/ of bank robberies, even if society were on the verge of collapse thanks to bank robberies, no-one would think that wheels were the right place to start solving our problems. [[762.0]] But. If I were to show up in that same body to say that I had absolute proof that hands-free phones were making cars dangerous, and I said, "I would like you to pass a law that says it's illegal to put a hands-free phone in a car", the regulator might say "Yeah, I'd take your point, we'd do that". And we might disagree about whether or not this is a good idea, or whether or not my evidence made sense, but very few of us would say "well, once you take the hands-free phones out of the car, they stop being cars". We understand that we can keep cars cars even if we remove features from them. Cars are special purpose, at least in comparison to wheels, and all that the addition of a hands-free phone does is add one more feature to an already-specialized technology. In fact, there's that heuristic that we can apply here -- special-purpose technologies are complex. And you can remove features from them without doing fundamental disfiguring violence to their underlying utility.
[[816.5]] This rule of thumb serves regulators well, by and large, but it is rendered null and void by the general-purpose computer and the general-purpose network -- the PC and the Internet. Because if you think of computer software as a feature, that is a computer with spreadsheets running on it has a spreadsheet feature, and one that's running World of Warcraft has an MMORPG feature, then this heuristic leads you to think that you could reasonably say, "make me a computer that doesn't run spreadsheets", and that it would be no more of an attack on computing than "make me a car without a hands-free phone" is an attack on cars. And if you think of protocols and sites as features of the network, then saying "fix the Internet so that it doesn't run BitTorrent", or "fix the Internet so that thepiratebay.org no longer resolves", then it sounds a lot like "change the sound of busy signals", or "take that pizzeria on the corner off the phone network", and not like an attack on the fundamental principles of internetworking.
The end result, then, is that any attempt to pass these kinds of laws really results not in building a task-specific computing system or application, but in deliberately crippling a general purpose machine -- and that's kind of crazy for all sorts of reasons. Basically, it effectively means having to put spyware everywhere:
[[1090.5]] Because we don't know how to build the general purpose computer that is capable of running any program we can compile except for some program that we don't like, or that we prohibit by law, or that loses us money. The closest approximation that we have to this is a computer with spyware -- a computer on which remote parties set policies without the computer user's knowledge, over the objection of the computer's owner. And so it is that digital rights management always converges on malware.
[[1118.9]] There was, of course, this famous incident, a kind of gift to people who have this hypothesis, in which Sony loaded covert rootkit installers on 6 million audio CDs, which secretly executed programs that watched for attempts to read the sound files on CDs, and terminated them, and which also hid the rootkit's existence by causing the kernel to lie about which processes were running, and which files were present on the drive. But it's not the only example; just recently, Nintendo shipped the 3DS, which opportunistically updates its firmware, and does an integrity check to make sure that you haven't altered the old firmware in any way, and if it detects signs of tampering, it bricks itself.
[[1158.8]] Human rights activists have raised alarms over U-EFI, the new PC bootloader, which restricts your computer so it runs signed operating systems, noting that repressive governments will likely withhold signatures from OSes unless they have covert surveillance operations.
[[1175.5]] And on the network side, attempts to make a network that can't be used for copyright infringement always converges with the surveillance measures that we know from repressive governments. So, SOPA, the U.S. Stop Online Piracy Act, bans tools like DNSSec because they can be used to defeat DNS blocking measures. And it blocks tools like Tor, because they can be used to circumvent IP blocking measures. In fact, the proponents of SOPA, the Motion Picture Association of America, circulated a memo, citing research that SOPA would probably work, because it uses the same measures as are used in Syria, China, and Uzbekistan, and they argued that these measures are effective in those countries, and so they would work in America, too!
[audience laughs and applauds] Don't applaud me, applaud the MPAA!
But his point is much bigger than copyright. It's that the copyright fight is merely the canary in the coal mine for this kind of attack on general purpose computing in all sorts of other arenas as well. And those fights may be much bigger and more difficult than the copyright fight:
And it doesn't take a science fiction writer to understand why regulators might be nervous about the user-modifiable firmware on self-driving cars, or limiting interoperability for aviation controllers, or the kind of thing you could do with bio-scale assemblers and sequencers. Imagine what will happen the day that Monsanto determines that it's really... really... important to make sure that computers can't execute programs that cause specialized peripherals to output organisms that eat their lunch... literally. Regardless of whether you think these are real problems or merely hysterical fears, they are nevertheless the province of lobbies and interest groups that are far more influential than Hollywood and big content are on their best days, and every one of them will arrive at the same place -- "can't you just make us a general purpose computer that runs all the programs, except the ones that scare and anger us? Can't you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?"
[[1576.3]] And personally, I can see that there will be programs that run on general purpose computers and peripherals that will even freak me out. So I can believe that people who advocate for limiting general purpose computers will find receptive audience for their positions. But just as we saw with the copyright wars, banning certain instructions, or protocols, or messages, will be wholly ineffective as a means of prevention and remedy; and as we saw in the copyright wars, all attempts at controlling PCs will converge on rootkits; all attempts at controlling the Internet will converge on surveillance and censorship, which is why all this stuff matters. Because we've spent the last 10+ years as a body sending our best players out to fight what we thought was the final boss at the end of the game, but it turns out it's just been the mini-boss at the end of the level, and the stakes are only going to get higher.
And this is an important fight. It's why each of the moves to fight back against attempts to censor and break computing systems is so important. Because the next round of fights is going to be bigger and more difficult. And while they'll simply never succeed in actually killing off the idea of the all-purpose general computer (you don't put that kind of revelation back in Pandora's box), the amount of collateral damage that can (and almost certainly will) be caused in the interim is significant and worrisome.
His point (and presentation) are fantastic, and kind of a flip side to something that I've discussed in the past. When people ask me why I talk about the music industry so much, I often note that it's the leading indicator for the type of disruption that's going to hit every single industry, even many that believe they're totally immune to this. My hope was that we could extract the good lessons from what's happening in the music industry -- the fact that the industry has grown tremendously, that a massive amount of new content is being produced, and that amazing new business models mean that many more people can make money from music today than ever before -- and look to apply some of those lessons to other industries before they freak out.
But Cory's speech, while perhaps the pessimistic flip side of that coin, highlights the key attack vector where all of these fights against disruption will be fought. They'll be attacks on the idea of general purpose computing. And, if we're hoping to ward off the worst of the worst, we can't just talk about the facts and data and success stories, but also need to be prepared to explain and educate about the nature of a general purpose computer, and the massive (and dangerous) unintended consequences from seeking to hold back general computing power to stop "apps we don't like." | 计算机 |
2015-40/2214/en_head.json.gz/9216 | POV Dispatch: Let's Get Small: Big Breakthroughs in the World of the Nanoscale
The POV Dispatch is our Autodesk internal newsletter, published monthly, where we discuss the big ideas that are important to us and our customers. It is published by our Corporate Strategy & Engagement (CS&E) team of which Autodesk Labs is a part. In addition to articles authored by members of our CS&E team, we take guest submissions. We were thrilled to get a write-up by Autodesk Distinguished Research Scientist Andrew Hessel. I thought I would share it with you.
Let's Get Small: Big Breakthroughs in the World of the Nanoscale
by Andrew Hessel, Distinguished Research Scientist
First, the Landscape
Everyone's talking about design today, but there's still one thing about design that many people don't realize: it can be done on an extremely small scale. In fact, the nanoscale. The world that I spend a lot of time thinking about is invisible to most people. It exists far below what a conventional microscope can perceive. In fact, it can only be clearly seen at the level of the electron microscope, or the atomic force microscope.
Next, the Inhabitants
Many objects exist at this scale. They include single-cell organisms, like bacteria. Bacteria are complex, free-living organisms. They range in size, but one of the most familiar types of bacteria, E. coli, measures roughly 5 microns across, or about 5,000 nanometers. Bacteria are all around us in the world and plenty live on us and inside of us. They are so plentiful that their cells outnumber our own cells by a factor of 10, totaling about 380 trillion cells of bacteria in our bodies. In fact, the human body is about 2-3 pounds of bacteria by weight. The Human Microbiome Project was launched in 2005 to produce a map of our bacteria.
Viruses are even smaller; there are millions or billions of virus species. No one knows for sure how many there are, but we know that the vast majority of them are harmless to humans, and that their sizes vary greatly. For example, the viruses that cause the common cold are about 30 nanometers in size, while pandoraviruses are closer to 1,000 nanometers. Viruses aren't alive in the conventional sense, because they need a host cell in which to replicate.
Let's Go Even Smaller
There are many things that are even smaller than bacteria and viruses, for example:
Subcellular components like ribosomes which function like 3D printers for proteins;
Antibodies, which are part of the immune system's police force and are constantly scanning our bodies for invaders; and
Chemical molecules like glucose, the sugar that acts as the fuel for our cells.
Basically, the nanoscale world, though invisible, is teeming with diversity, activity, and complexity.
Designing the World of the Infinitesimal — and Autodesk's Role
Here's the surprising part about this world of tiny things: despite the small size and complexity of nano-biological systems, they can be manipulated with precision. This is because biology is built from the bottom up, and it has a digital programming language (DNA), and software tools can assist designers with their more sophisticated genetic designs.
Autodesk has not traditionally created tools for designers working at this scale, or with these self-assembling materials. In 2012, after more than two years of background research, Autodesk's Bio/Nano Programmable Matter (BNPM) group was established in the Office of the CTO (OCTO) to explore the commercial opportunities of this evolving realm. This team of 20 developers and scientists is led by Senior Principal Research Scientist Carlos Olguin.
Enter Project Cyborg
This group is now developing a web-based platform specifically for Bio/Nano called Project Cyborg — shorthand for Cybernetic Organism, or the synergistic intersection of living and non-living materials. The platform includes a CAD shell, physics and biophysics engines, and it also supports cloud computation. A number of speculative applications are also being developed on the platform to demonstrate to scientists how it might potentially be used.
One major goal of Cyborg is the democratization of bio-nano design. We're doing this in three ways:
Technically, this is accomplished by making tools that are intuitive and easy to use, and by automating design tasks that would otherwise be difficult or repetitive, therefore speeding up the iterative development cycle.
Economically, we are democratizing these tools because they are offered free of charge.
And practically speaking, we're doing it by leveraging the power of special printers that are able to quickly translate electronic designs into reality.
Printing Out Life
One of these printers is called a bio-printer: a 3D printer that deposits "bio-inks" instead of plastic or other inert materials, as traditional 3D printers do. Bio-inks are solutions of living cells that can be precisely deposited in an additive way. San Diego-based Organovo Inc., a leader in this field, uses this technology to make synthetic liver samples for drug screening, and also synthetic blood vessels — and it is even working on printing entire organs that are suitable for transplant. We are working closely with Organovo to make this work easier — if not fully automated.
Another powerful printer is the DNA synthesizer: basically, it's a 3D printer for the DNA molecule. UCSF-based collaborator Shawn Douglas uses this DNA as a structural material to craft precision nanoscale objects, including "robots" that can recognize and kill cancer cells, one at a time. BNPM team member Joseph Schaeffer is an expert supporting this work. Other groundbreaking scientists, like Craig Venter and George Church, and even the students in the iGEM synthetic biology program are using synthetic DNA as a programming language, and in the process sprouting an entirely new branch from the tree of life, which we dub synthetica. The potentials here are so vast that they include the engineering of all life, up to and including our own species.
Let's Get Viral
Currently the capabilities of DNA printers are still quite modest, and the price per base pair (bp) is still quite high, anywhere from $.20 to $1 per bp. For this reason I have focused my own work on engineering organisms with the smallest genomes: viruses, which have genomes that range from 3,000 bp to about 1 million bp.
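To put those figures in rough perspective, here is a back-of-the-envelope estimate using only the numbers quoted above (the per-base-pair price range and the genome sizes); the snippet below is purely illustrative:

```javascript
// Illustrative cost estimate based only on the figures quoted above:
// $0.20 to $1.00 per base pair (bp), and genome sizes given in bp.
function synthesisCost(genomeBp) {
  return { low: genomeBp * 0.20, high: genomeBp * 1.00 };
}

console.log(synthesisCost(5386));     // 5,386 bp phage: ~$1,077 to ~$5,386
console.log(synthesisCost(3000));     // smallest viral genomes: ~$600 to ~$3,000
console.log(synthesisCost(1000000));  // ~1 million bp: ~$200,000 to ~$1,000,000
```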
Viruses are, in a weird way, consistent with Autodesk's core capabilities, because they are the biological equivalent of shrink-wrapped software. The virus capsid — the protein shell that serves as the container for the virus's DNA or RNA — ultimately dictates which host cells (i.e., "computers") a virus can "run on."
Autodesk BNPM members Jackie Quinn, Merry Wang, and I, along with our external partners, have been working to make synthetic viruses easy to design and print. This opens up a range of possible applications because viruses have diverse functions in nature, many of which have been used by scientists to create practical applications, including vaccines, diagnostic tools, and even battery electrodes. Even cancer fighting viruses have been engineered, some of which are already in late-stage clinical trials.
To gain some hands-on knowledge of this new realm, we've been working with a phage, a virus that infects the E. coli bacterium. It's small (5,386 bp), harmless to humans, and synthetic variants of it have been made and researched for over a decade. The tool we've been using for this work is called the Virus Design Studio; it's still very basic, but it includes DNA editing software as well as automated biosecurity features, including the addition of a digital signature. It can be used to 3D print a plastic model of the phage or, just as easily, the biological virus itself.
Our goal is to first automate and accelerate the end-to-end virus design process, and to then explore a diverse range of applications for the new process. Using today's DNA synthesis technology, 3D printing a virus can take 2 weeks, but some researchers have already shown that it can actually be done in about 3 days. The trend is clear: doing work like this is going to cost much less in the near future, and take less time.
Bio/Nano: Somewhere We Can Boldly Go...
By combining advanced bio/nano design software with this latest generation of amazing 3D printers, there's a very real opportunity for Autodesk to play a key role in the next-generation biotechnology industry — turning makers into drug-makers — and much, much more! Developments in this field could bring about a creative explosion of R&D in terms of treating cancer, organ transplantation, etc. The opportunities are endless and the surface has barely been scratched.
Thanks Andrew. I really enjoyed how your article covers this complex topic in plain English with small words. This information will help me greatly when I cover the DNA origami exhibit on the Autodesk Gallery tour.
With apologies to Steve Martin, getting small is alive in the lab.
Posted by Scott Sheppard at 04:00 AM in Point of View
meyerweb.com
Archive: 11 October 2006
Jackals and HYDEsim
Culture, Politics, Tools
Long-time readers (and Jeremy) probably remember HYDEsim, the big-boom ‘simulator’ I hacked together using the Google Maps API and some information in my personal reading library.
Well, with North Korea setting off something that might have been a nuclear device, it’s starting to show up in the darndest places. Everyone’s favorite millenial talk show host, Glenn Beck, not only mentioned it on his radio program this past Monday, but also put a link on the main page of his site for a couple of days. Then it got Farked. I suppose it’s only a matter of time now before it gets Slashdotted as well.
With the increased attention, some old criticisms have arisen, as well as some misunderstandings. For example, on Fark, someone said:
I thought it was funny how people are playing with this and think they were “safe” if they weren’t in the circle.
Here’s a mockup I did of the kind of blast damage you could expect from a single 1980’s era Russian ICBM carrying 10 MIRV warheads, each capable of 750KT yield.
Oh my yes. That’s something that the HYDEsim code can theoretically support, since every detonation point is an object and there’s no limit on the number of objects you can have, but I never managed to add this capability. That’s because trying to figure out the UI for placing the MIRV impact points broke my head, and when I considered how to set all that in the URI parameters (for direct linking), a tiny wisp of smoke curled out of my left ear. Still, one of these days I should probably at least add a “MIRV ring impact” option so the young’n’s can get an idea of what had us all scared back in the old days.
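For the curious, the data side of that feature really is the easy part. The sketch below shows one way a ring of MIRV detonation points could be generated and packed into URI parameters; it is purely illustrative, is not HYDEsim's actual code, and every name in it is made up.

```javascript
// Illustrative only -- not HYDEsim's actual code.
// Each detonation is just another point object, so a MIRV "ring impact"
// can be generated around a central aim point and serialized into the URL.
function mirvRing(centerLat, centerLon, warheads, ringRadiusKm, yieldKT) {
  const detonations = [];
  for (let i = 0; i < warheads; i++) {
    const angle = (2 * Math.PI * i) / warheads;
    // ~111 km per degree of latitude; a degree of longitude shrinks with cos(latitude)
    const dLat = (ringRadiusKm * Math.cos(angle)) / 111;
    const dLon = (ringRadiusKm * Math.sin(angle)) /
                 (111 * Math.cos(centerLat * Math.PI / 180));
    detonations.push({ lat: centerLat + dLat, lon: centerLon + dLon, kt: yieldKT });
  }
  return detonations;
}

// One possible URI encoding: a repeated, comma-separated triple per warhead.
function toQueryString(detonations) {
  return detonations
    .map(d => `pt=${d.lat.toFixed(4)},${d.lon.toFixed(4)},${d.kt}`)
    .join('&');
}
```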
The interesting challenge is that a strategic nuclear strike of that variety is going to involve a whole bunch of optimum-altitude air bursts. HYDEsim takes the simpler—and also, in this darkened day and age, more realistic—approach of calculating the effects of a ground burst. The difference is in no sense trivial: a ground burst has a lot of energy, both thermal and radiological, absorbed by the ground (oddly enough!). On the other hand, its highest overpressure distances are actually greater.
This is because shock energy drops with distance, of course. An optimum-altitude air burst would be a mile or two above the ground, so the highest pressures would be directly beneath the explosion, and would be smaller than if the same weapon exploded on the ground. With an air burst there’s less ground and man-made clutter to attenuate the shock waves as they spread out, so the total area taking some degree of damage due to overpressure is actually greater. (There are also very complex interactions between the shock waves in the air and those reflected off the ground, but those are way beyond my ability to simulate in JavaScript.)
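The scaling behind that statement is worth making explicit. To a first approximation (the classic cube-root scaling law), the distance at which a given peak overpressure occurs grows roughly with the cube root of the yield. The sketch below illustrates that relationship only; it is not the model or the data HYDEsim uses, and the reference radius is a placeholder.

```javascript
// Generic cube-root scaling sketch -- not HYDEsim's actual model or data.
// If a reference yield W0 produces a given overpressure at radius R0,
// a yield W produces the same overpressure at roughly R0 * (W / W0)^(1/3).
function scaledRadius(r0Km, w0KT, wKT) {
  return r0Km * Math.cbrt(wKT / w0KT);
}

// Example with a placeholder 5 psi radius of 1.5 km at 20 KT.
// Doubling the yield only multiplies the radius by 2^(1/3), about 1.26.
console.log(scaledRadius(1.5, 20, 40).toFixed(2));  // ~1.89 km
console.log(scaledRadius(1.5, 20, 750).toFixed(2)); // 750 KT warhead: ~5.02 km
```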
Also, direct thermal radiation is spread over a much greater area with an air burst than with a ground burst—again, there’s less stuff in the way. The amount of fallout depends on the “cleanliness” of the warhead, but for an air burst it can actually be expected to be less than a groundburst.
People also claim that radiological energy (X-rays, neutron radiation, gamma radiation, etc.) will be the deadliest factor of all. Actually, it’s just the opposite, unless you’re discussing something like a neutron bomb. The amount of harmful direct-effect radiation that comes directly from the explosion is far, far smaller than the thermal energy. And yes, I know thermal radiation is direct-effect, but there’s a large practical difference between heat and other forms of radiation.
Put another way, if you’re close enough to an exploding nuclear warhead that the amount of radiation emitted by the explosion would ordinarily kill you, the odds are overwhelmingly high that the amount of shock wave and thermal energy arriving at your position will ensure that there won’t be time for you to worry about the radiation effects. Or anything else, really.
Remember: I’m talking there about direct radiation, not the EMP or fallout. That’s a whole separate problem, and one HYDEsim doesn’t address, to the apparent disgust of another Farker:
The site is useless without fallout and thermal damage.
Well, I don’t know about useless, but it’s admittedly not as representative of the totality of nuclear-weapons damage as it might otherwise be. Of course, HYDEsim is not specifically about nuclear detonations, as I showed when I mapped the Hertfordshire oil refinery explosion and djsunkid mapped the Halifax explosion of 1917. But I certainly admit that the vast majority of explosions in the range the tool covers are going to be from nuclear weapons.
The problem with mapping fallout is that it’s kind of weather dependent, just for starters; just a few miles-per-hour difference in wind speed can drastically alter the fallout pattern, and the position of the jet stream plays a role too. Also, the amount of fallout is dependent on the kind of detonation—anyone who was paying attention during the Cold War will remember the difference between “dirty” and “clean” nuclear warheads. (For those of you who came late: to get a “dirty” warhead, you configure a device to reduce the explosive power but generate a lot more fallout.)
Thermal effects are something I should add, but it’s trickier than you might expect. There’s actually an area around the explosion where there are no fires, because the shock effects snuff them out. Beyond that, there’s a ring of fire (cue Johnny Cash). So it’s not nearly as simple as charting overpressure, which is itself not totally simple.
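In data terms, that just means the thermal band is an annulus while the overpressure contours are filled discs. The sketch below is purely illustrative (placeholder radii and invented layer names), not how HYDEsim actually represents things:

```javascript
// Illustrative only -- placeholder radii, not real weapons-effects data.
// Overpressure contours are filled discs; the thermal "ring of fire" is an
// annulus with a fire-free zone inside it where the blast snuffs ignition out.
const effectLayers = [
  { type: 'overpressure-5psi', innerKm: 0,   outerKm: 1.5 },
  { type: 'overpressure-1psi', innerKm: 0,   outerKm: 4.0 },
  { type: 'thermal-ignition',  innerKm: 1.2, outerKm: 3.0 },
];

function effectsAt(distanceKm) {
  return effectLayers
    .filter(l => distanceKm >= l.innerKm && distanceKm <= l.outerKm)
    .map(l => l.type);
}

console.log(effectsAt(0.5)); // ['overpressure-5psi', 'overpressure-1psi']
console.log(effectsAt(2.0)); // ['overpressure-1psi', 'thermal-ignition']
```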
And then there’s there whole “how to combine thermal-effect and overpressure rings in a way that doesn’t become totally confusing” problem. Get ambitious, and then you have the “plus the show fallout plume without making everything a total muddle” follow-on problem. Ah well, life’s empty without a challenge, right?
Okay, so I went through all that and didn’t actually get to my point, which is this: I’ve been rather fascinated to see how the tool gets used. When it was first published, there was a very high percentage of the audience who just went, “Cooool!”. That’s still the case. It’s the same thing that draws eyes to a traffic accident; it’s horrible, but we still want to see.
However, I also got some pushback from conservative types: how dare I publish such a thing, when it could only be useful to terrorists?!?!? Rather than play to the audience and inform them that I simply hate freedom, I mentioned that it was desirable to have people like you and me better understand the threats we face. It’s not like the terrorists can’t figure this stuff out anyway.
Now I’ve seen a bunch of people from the same ideological camp use HYDEsim to mock the North Koreans’ test, which apparently misfired and only achieved a yield of about 0.5KT. Others have taken that figure and plotted it in American cities, giving some scale to the dimension of this particular threat. Still others have done that, but with the yield the North Koreans had attempted to reach (thought to be 4KT), or even with yields up to 50KT. In most cases, these last are shown in conjunction with commentary to the effect of “now do you understand why this is a problem?”.
This is why I do what I do, whether it’s write books or publish articles or speak at conferences or build tools or just post entries here: to help people learn more about their world, and to help them share what they know and think and believe with others. Sometimes that’s worth saying again, if only to remind myself.
Some Space to Think
Mostly about games, but with occasional detours into other nerdy territories.
The Dragon Age RPG is one I've been excited about for a while, not because it's based on a video game I'm nuts for, but because of its avowed goal of being a game to bring people into the hobby. Games make that claim all the time, but there were three things going on with DARPG that raised my interest: It's a boxed set (hopefully a real one, not a faux one like the 4e starter set), it's got a hook into a good franchise that is neither too weird nor too overwhelming but can still bring in eyeballs, and it's by Green Ronin, a company that I would describe as pretty darn sharp.As if to demonstrate that sharpness, Green Ronin put DARPG up for preorder recently, and offered up a free PDF along with the preorder. It boggles my mind that this is not standard practice, but it's not, so GR gets props for a smart move. They get an initial wave of buzz and interest based off people reading and talking about the PDF, and they hopefully can build on that when the actual game releases.It's also a move that benefits me a lot because, hey, I get to read it. I'm always happy to cheer on my own enlightened self interest.Here's the short form: The Dragon Age RPG looks to have the shortest distance from opening the box to playing at the table of any game I've seen in over a decade, possibly since red box D&D.[1] It is not a revolutionary game by any stretch of the imagination, and for most gamers with a few games under it's belt, it's going to seem absolutely tired. Old ideas like random chargen and hit points are all over the place. With the exception of the Dragon Die and the stunt system, experienced gamers aren't goignt fo find much new here.But that makes it exactly what it should be. As a game for existing gamers, Dragon Age is ok, but not as impressive as other Green Ronin offerings. As a game for a new gamer, it's exactly right.First, by sticking to very strongly established mechanics (many of which will be at least conversationally familiar to people who've played video games) with a minimum of complexity, they've made a game that is easy to learn to play. The simplicity, brevity (main rulebook is 64 pages) and the clarity[2] combine to make a game that can be learned from the text, without depending on arcane oral tradition. I think back to my youth and this seems a very big deal.Second, the setting is equally familiar. Not just because some players will know it from the video game, but because the video game's setting is designed to be quickly recognizable. Elves live in the woods and have bows. Dwarves live underground and have axes. Humans run the show. Magic is mysterious and risk-filled. Sure, each of these points has more depth as you drill into them, but the basic are immediately recognizable to anyone with a little pop culture knowledge.Last, the game minimizes the barriers to play by avoiding the temptation of weird dice. By making it playable with nothing but the dice you can salvage from a Risk box, you get a couple of advantages. There's no awkwardness as you finish reading the rules but find yourself needing to wait until you've taken a trip to that creepy store [3] to get supplies. There's more of a sense of the familiar. And perhaps best of all, you can scale up with your group size - adding a few more d6s is a lot easier than, say, having to share one set of polyhedrals.Put it all in a box set and you've got a product that I'm really excited about. I could see giving this game as a gift to a non-player, and that's almost unprecedented.Now, it's not all sunshine and puppies. 
As noted the game is pretty simple (though I admit it's at a level of simplicity I dig, since I think my wife would not be bothered by it) and a few corners got cut to support the size and the release schedule. You can't play a Grey Warden, which is kind of a kick in the head, since that's so central to the computer game. The logic's clear: this set covers levels 1-5, next one will be 6-10 (then 11-15 and 16-20 or so I understand) and subsequent sets will be adding rules for things like specialty careers including things like Grey Warden. I suspect we'll also get magic items and runes in later sets too.There are a few layout decisions that raise my eyebrow - magic precedes combat, which is weird in terms of the order rules are explained for example - but they're all quickly set aside by the presence of indexes, glossary and comprehensive reference pages. It should not be so exciting to me to see a game do what should be the basics, but it is.The sample adventure is in the GM's book rather than in its own booklet. This makes sense in terms of cost, and it's not a bad thing, but I admit I flash back to my well worn copy of Keep on the Borderlands, and I regret that as long as they were trying to recapture the magic of redbox, they didn't revive that tradition.And that's really what's going on here. Unlike the old school, this is not an attempt to recreate old D&D, rather, it's an attempt to answer the same questions, only with decades of experience with how it went the first time. This makes the choices of what rules are included (and which ones aren't included) really fascinating to me. The Green Ronin guys know their stuff, and you can assume every choice in the design is a deliberate one.Choices like a very traditional hit point and damage system are not made because they couldn't think of another way, but rather because that choice maximized the accessibility of the game. On reading, it really feels like they pulled it off, and I'm genuinely excited to give it a play sometime and find out. One way or another I wish them luck: success with a game designed to bring new players into the hobby benefits us all.1 - The only other real contender in the intervening time is Feng Shui. There are simpler games, sure, but they lack the structure to answer the question of "OK, what do I do now?".2 - Randomization has one huge benefit for new players - it removes optimization choices. There's more to it than that, but by putting the harder decision of chargen in the hands of the dice, game-stopping questions are removed from play.3- Yes, that's an unfair characterization, but not everyone is lucky enough to be near one of the many friendly, clean, well lit gamestores with helpful staff. And even for those who are, the store is an unknown, and unknowns are scary and off-putting, especially for teenagers. Posted by
Rob Donoghue
DARPG
Aseem Agarwala Home
I am a research scientist at Google, and an affiliate assistant professor at the University of Washington's Computer Science & Engineering department, where I completed my Ph.D. in 2006; my advisor was David Salesin. My areas of research are computer graphics, computer vision, and computational imaging. Specifically, I research computational techniques that can help us author more expressive imagery using digital cameras. I spent nine years after my Ph.D. at Adobe Research. I also spent three summers during my Ph.D. interning at Microsoft Research, and my time at UW was supported by a Microsoft fellowship. Before UW, I worked for two years as a research scientist at the legendary but now-bankrupt Starlab, a small research company in Belgium. I completed my Masters and Bachelors at MIT majoring in computer science; while there I was a research assistant in the Computer Graphics Group, and an intern at the Mitsubishi Electric Research Laboratory (MERL). As an undergraduate I did research at the MIT Media Lab.

I also spent much of 2010 building a modern house in Seattle, and documented the process in my blog, Phinney Modern.
Ani-Jobs
A Resource for the Animation, Visual Effects, and Gaming Industries
Atari Gets Desperaux Game
Posted on December 4, 2008 by admin

AWN reports that Atari has acquired the rights to Universal's film "The Tale of Desperaux" to make a videogame.
Brash Ent. had been set to release the game about the heroic, giant-eared mouse, but they went out of business last month. The company, who specialized in film adaptations, planned to also distribute a game for Nintendo DS produced by Universal, the studio’s first self-funded game.
Brash completed the game before going out of business, but they and Universal had to rush to find another publisher before the film’s December 19 release date.
Atari will release the game on the Wii, PlayStation 2, and Universal's DS. The company has been in the market lately for film adaptations for games, picking up "Chronicles of Riddick: Assault on Dark Athena" as well as "Ghostbusters" from Activision. Atari has also announced it will be releasing a Wii music game and sequels of its more popular franchises.
Inbox Insider: Be excited, not afraid
Last week, the Direct Marketing Association's E-mail Executive Council launched a YouTube channel to create a more contemporary platform to evangelize about e-mail marketing. User-generated clips, garnered through an open call to the EEC's 1,400-member base, provide tips for effective e-mail marketing. It sounds like any brand's strategy, using social media as a means to be — or seem — relevant.

Lana McGilvray, co-chair of the EEC's speakers' bureau and VP of marketing for Datran Media, said social media is a "perfect complement to e-mail." She said that most of the council's members have integrated it into their models.

I was interested to learn about the initiative's beginnings. McGilvray said as the EEC contemplated how to focus its social media program, it was inspired by multi-media PSA campaigns, iTunes University, the popularity of online courses and the viral exchange of content. McGilvray emphasized that the digital environment is "rich and engaging," and was an obvious choice in selecting social as a means of communication. On the video clips, she said the goal is to "engage with," rather than "teach at."

It does seem like organizations' hesitance to use the digital space has largely disappeared. We are even seeing e-mail marketing's evolution, including the tying of mobile to e-mail campaigns. Strategies, such as that of the EEC, are welcome.

Organizations should keep moving forward - there is no other option. And they should be excited — not scared — by it.
Mary L. (Anderson) Granger
Privacy Policy Updated: March 2014
Legacy.com, Inc. provides various features and tools that allow users to access obituaries, express condolences and share remembrances of friends and loved ones. Legacy.com, Inc. offers these features and tools at www.legacy.com and other websites and applications ("App") powered by Legacy.com (collectively, the "Services"). Please read the following privacy policy ("Privacy Policy") carefully before using the Services.
1. WHAT THIS PRIVACY POLICY COVERS
3. INFORMATION WE COLLECT
4. HOW WE USE YOUR INFORMATION
5. PRIVACY ALERT: YOUR POSTINGS GENERALLY ARE ACCESSIBLE TO THE PUBLIC
6. CHILDREN’S PRIVACY
7. OWNERSHIP
8. SECURITY MEASURES
9. CORRECTING OR UPDATING INFORMATION
10. OPT-OUT PROCEDURES
11. ADVERTISING AND LINKS
12. CHANGES TO THIS PRIVACY POLICY
13. ADDITIONAL INFORMATION
This Privacy Policy covers how Legacy.com, Inc. (collectively, "Legacy.com", "we", "us", or "our") treats user or personally identifiable information that the Services collect and receive. If, however, you are accessing this Privacy Policy from a website operated in conjunction with one of our affiliates ("Affiliates"), the privacy policy of the Affiliate will govern unless such policy states otherwise.
Subject to the above, this Privacy Policy does not apply to the practices of companies that Legacy.com, Inc. does not own, operate or control, or to people that it does not employ or manage.
In general, you can browse the Services without telling us who you are or revealing any information about yourself to us. Legacy.com, Inc. does not collect personally identifiable information about individuals, except when specifically and knowingly provided by such individuals on interactive areas of the Services. "Personally Identifiable Information" is information that can be used to uniquely identify, contact, or locate a single person, such as name, postal address, email address, phone number, and credit card number, among other information, and that is not otherwise publicly available. Any posting made while using the Services, and any other information that can be viewed by the public and therefore is not considered "personal information" or "Personally Identifiable Information", and is not the type of information covered by this Privacy Policy.
Personally Identifiable Information.
Examples of Personally Identifiable Information we may collect include name, postal address, email address, credit card number and related information, and phone number. We may also collect your date of birth, geo-location, social networking profile picture and the date of birth and date of death for the deceased person in connection with certain features of the Services. We also maintain a record of all information that you submit to us, including email and other correspondence. We may collect Personally Identifiable Information when you register to receive alerts or offerings, sponsor, access or submit information in connection with certain Services, post other content through the Services, purchase products or other services, opt-in to receive special offers and discounts from us and our selected partners or participate in other activities offered or administered by Legacy.com.
We may also collect Personally Identifiable Information about your transactions with us and with some of our business partners. This information might include information necessary to process payments due to us from you, such as your credit card number.
Legacy.com allows certain social media platforms to host plug-ins or widgets on the Sites which may collect certain information about those users who choose to use those plug-ins or widgets.
We do not intentionally collect Personally Identifiable Information about children under the age of 13. Please see the section on "Children’s Privacy" below.
Other Anonymous Information.
Like most websites, Legacy.com also receives and records information on our server logs from your browser automatically and through the use of electronic tools such as cookies, web beacons and locally shared objects (LSOs). Our server logs automatically receive and record information from your browser (including, for example, your IP address, and the page(s) you visit). The information gathered through these methods is not "personally identifiable;" i.e., it cannot be used to uniquely identify, contact, or locate a single person. Some browsers allow you to indicate that you would not like your online activities tracked, using “Do Not Track” indicators (“DNT Indicators”), however we are not obligated to respond to these indicators. Presently we are not set up to respond to DNT Indicators. This means that we may use latent information about your online activities to improve your use of our Services, but such usage is consistent with the provisions of this Privacy Policy.
We will use your information only as permitted by law, and subject to the terms of our Privacy Policy;
Use of Personally Identifiable Information:
We do not sell or share your Personally Identifiable Information with unrelated third parties for their direct marketing purposes.
Personally Identifiable Information and other personal information you specifically provide may be used:
to provide the Services we offer, to process transactions and billing, for identification and authentication purposes, to communicate with you concerning transactions, security, privacy, and administrative issues relating to your use of the Services, to improve Services, to do something you have asked us to do, or to tell you of Services that we think may be of interest to you.
to communicate with you regarding the Services.
for the administration of and troubleshooting regarding the Services. Certain third parties who provide technical support for the operation of the Services (our web hosting service, for example), may need to access such information from time to time, but are not permitted to disclose such information to others.
We may disclose Personally Identifiable Information about you under the following circumstances:
In the course of operating our business, it may be necessary or appropriate for us to provide access to your Personally Identifiable Information to others such as our service providers, contractors, select vendors and Affiliates so that we can operate the Services and our business. Where practical, we will seek to obtain confidentiality agreements consistent with this Privacy Policy and that limit others’ use or disclosure of the information you have shared.
We may share your Personally Identifiable Information if we are required to do so by law or we in good faith believe that such action is necessary to: (1) comply with the law or with legal process (such as pursuant to court order, subpoena, or a request by law enforcement officials); (2) protect, enforce, and defend our Terms of Use, rights and property; (3) protect against misuse or unauthorized use of this the Services; or (4) protect the personal safety or property of our users or the public (among other things, this means that if you provide false information or attempt to pose as someone else, information about you may be disclosed as part of any investigation into your actions.)
Use of Anonymous Information:
Certain information that we collect automatically or with electronic tools or tags (such as cookies) is used to anonymously track and measure user traffic for the Services and to enhance your experience with the Services and our business partners. For example:
IP Addresses/Session Information. We occasionally may obtain IP addresses from users depending upon how you access the Services. IP addresses, browser, and session information may be used for various purposes, including to help administer the Services and diagnose and prevent service or other technology problems related to the Services. This information also may be used to estimate the total number of users downloading any App and browsing other Services from specific geographical areas, to help determine which users have access privileges to certain content or services that we offer, and to monitor and prevent fraud and abuse. IP addresses are not linked to Personally Identifiable Information
Cookies. A cookie is a small amount of data that often includes an anonymous unique identifier that is sent to your browser from a website’s computers and stored on your computer’s hard drive or comparable storage media on your mobile device. You can configure your browser to accept cookies, reject them, or notify you when a cookie is set. If you reject cookies, you may not be able to use the Services that require you to sign in, or to take full advantage of all our offerings. Cookies may involve the transmission of information either directly to us or to another party we authorize to collect information on our behalf. We use our own cookies to transmit information for a number of purposes, including to:
require you to re-enter your password after a certain period of time has elapsed to protect you against others accessing your account contents;
keep track of preferences you specify while you are using the Services;
estimate and report our total audience size and traffic;
conduct research to improve the content and Services.
We let other entities that show advertisements on some of our web pages or assist us with conducting research to improve the content and Services set and access their cookies on your computer or mobile device. Other entities’ use of their cookies is subject to their own privacy policies and not this Privacy Policy. Advertisers or other entities do not have access to our cookies.
Page Visit Data. We may record information about certain pages that you visit on our site (e.g. specific obituaries) in order to recall that data when you visit one of our partners’ sites. For example, we may record the name and address of the funeral home associated with an obituary to facilitate a flower order.
We may share anonymous information aggregated to measure the number of App downloads, number of visits, average time spent on the Services websites, pages viewed, etc. with our partners, advertisers and others.
Your own use of the Services may disclose personal information or Personally Identifiable Information to the public. For example:
Submissions and other postings to our Services are available for viewing by all our visitors unless the sponsor or host of a Service selects a privacy setting that restricts public access. Please remember that any information disclosed on a non-restricted Service becomes public information and may be collected and used by others without our knowledge. You therefore should exercise caution when disclosing any personal information or Personally Identifiable Information in these forums.
When you post a message to the Services via message board, blog, or other public forum available through the Services, your user ID or alias that you are posting under may be visible to other users, and you have the ability to post a message that may include personal information.
If you post Personally Identifiable Information online that is accessible to the public, you may receive unsolicited messages from other parties in return. Such activities are beyond the control of Legacy.com, Inc. and the coverage of this Privacy Policy. Please be careful and responsible whenever you are online. In addition, although we employ technology and software designed to minimize spam sent to users and unsolicited, automatic posts to message boards, blogs, or other public forums available through the Services (like the CAPTCHA word verification you see on email and registration forms), we cannot ensure such measures to be 100% reliable or satisfactory.
Legacy.com, Inc. does not intentionally collect from or maintain Personally Identifiable Information of children under the age of 13, nor do we offer any content targeted to such children.
In the event that Legacy.com, Inc. becomes aware that a user of the Services is under the age of 13, the following additional privacy terms and notices apply:
Prior to collecting any Personally Identifiable Information about a child that Legacy.com, Inc. has become aware is under the age of 13, Legacy.com, Inc. will make reasonable efforts to contact the child’s parent, to inform the parent about the types of information Legacy.com, Inc. will collect, how it will be used, and under what circumstances it will be disclosed, and to obtain consent from the child’s parent to collection and use of such information.
Although Legacy.com, Inc. will apply these children’s privacy terms whenever it becomes aware that a user who submits Personally Identifiable Information is less than 13 years old, no method is foolproof. Legacy.com, Inc. strongly encourages parents and guardians to supervise their children’s online activities and consider using parental control tools available from online services and software manufacturers to help provide a child-friendly online environment. These tools also can prevent children from disclosing online their name, address, and other personal information without parental permission.
Personally Identifiable Information collected from children may include any of the information defined above as Personally Identifiable Information with respect to general users of the Services and may be used by Legacy.com, Inc. for the same purposes. Except as necessary to process a child’s requests or orders placed with advertisers or merchants featured through the Services, Legacy.com, Inc. does not rent, sell, barter or give away any lists containing a child’s Personally Identifiable Information for use by any outside company.
A child’s parent or legal guardian may request Legacy.com, Inc. to provide a description of the Personally Identifiable Information that Legacy.com, Inc. has collected from the child, as well as instruct Legacy.com, Inc. to cease further use, maintenance and collection of Personally Identifiable Information from the child. If a child voluntarily discloses his or her name, email address or other personally-identifying information on chat areas, bulletin boards or other forums or public posting areas, such disclosures may result in unsolicited messages from other parties.
Legacy.com, Inc. and/or the Affiliate Newspaper(s) are the sole owner(s) of all non-personally identifiable information they collect through the Services. This paragraph shall not apply to Material subject to the license granted by users to Legacy.com, Inc. pursuant to Section 3 of the Terms of Use governing the Services.
The security and confidentiality of your Personally Identifiable Information is extremely important to us. We have implemented technical, administrative, and physical security measures to protect guest information from unauthorized access and improper use. From time to time, we review our security procedures in order to consider appropriate new technology and methods. Please be aware though that, despite our best efforts, no security measures are perfect or impenetrable, and no data transmissions over the web can be guaranteed 100% secure. Consequently, we cannot ensure or warrant the security of any information you transmit to us and you do so at your own risk.
You may modify and correct Personally Identifiable Information provided directly to Legacy.com, Inc. in connection with the Services, if necessary. Legacy.com, Inc. offers users the following options for updating information:
Send an email to us at Contact Us; or
Send a letter to us via postal mail to the following address:
Legacy.com, Inc., 820 Davis Street Suite 210, Evanston, IL 60201 Attention: Operations
You may opt out of receiving future mailings or other information from Legacy.com, Inc. If the mailing does not have an email cancellation form, send an email to Contact Us detailing the type of information that you no longer wish to receive.
11. THIRD PARTY ADVERTISING AND AD DELIVERY
This Service contains links to other sites that may be of interest to our visitors. This Privacy Policy applies only to Legacy.com and not to other companies’ or organizations’ Web sites to which we link. We are not responsible for the content or the privacy practices employed by other sites.
Legacy.com works with third parties, including, but not limited to Adtech US, Inc. and Turn Inc. (collectively, the "Ad Delivery Parties:"), for the purpose of advertisement delivery on the Services including online behavioral advertising (“OBA”) and multi-site advertising. Information collected about a consumer’s visits to the Services, including, but not limited to, certain information from your Web browser, your IP address and your email, may be used by third parties, including the Ad Delivery Parties, in order to provide advertisements about goods and services of interest to you. These Ad Delivery Parties retain data collected and used for these activities only as long as necessary to fulfill a legitimate Legacy.com business need, or as required by law. The Ad Delivery Parties may also set cookies to assist with advertisement delivery services. For more information about Adtech US, Inc. cookies, please visit http://www.adtechus.com/privacy/. If you would like to obtain more information about the practices of some of these Ad Delivery Parties, or if you would like to make choices about their use of your information, please click here: http://www.networkadvertising.org/choices/ The Ad Delivery Parties adhere to the Network Advertising Initiative’s Self-Regulatory Code of conduct. For more information please visit http://www.networkadvertising.org/about-nai
Legacy.com shall obtain your prior express consent (opt-in) before using any of your “sensitive consumer information” as that term is defined in the NAI Principles.
Legacy.com may also share your social media identification and account information with the corresponding social media service to allow the service to provide you with advertisements about goods and services of interest to you.
Please keep in mind that if you click on an advertisement on the Services and link to a third party’s website, then Legacy.com’s Privacy Policy will not apply to any Personally Identifiable Information collected on that third party’s website and you must read the privacy policy posted on that site to see how your Personally Identifiable Information will be handled.
We may periodically edit or update this Privacy Policy. We encourage you to review this Privacy Policy whenever you provide information on this Web site. Your use of the Services after changes of the terms of the Privacy Policy have been posted will mean that you accept the changes.
Questions regarding the Legacy.com Privacy Policy should be directed to Contact Us or
(As provided by California Civil Code Section 1798.83)
A California resident who has provided personal information to a business with whom he/she has established a business relationship for personal, family, or household purposes ("California customer") is entitled to request information about whether the business has disclosed personal information to any third parties for the third parties’ direct marketing purposes.
Legacy.com, Inc. does not share information with third parties for their direct marketing purposes. If, however, you are accessing this Privacy Policy from one of our Affiliate sites, the privacy policy of our Affiliate will apply to the collection of your information unless the Affiliate’s privacy policy specifically states otherwise. You should review the privacy policy of the Affiliate to understand what information may be collected from you and how it may be used.
California customers may request further information about our compliance with this law by emailing Contact Us.
iPad Game Review: Time of Heroes (Universal)

Reviewed by Tom Slayton

Time of Heroes is a 3D turn-based strategy game with heavy role playing elements from German developer Smuttlewerk Interactive. This is the first installment of the series dubbed as The Arrival. The developer is also working on the second chapter of Time of Heroes called The Prophet's Revenge.

Gameplay

Time of Heroes is a turn-based role-playing game of tactical combat. If you've played Final Fantasy Tactics or Heroes of Might and Magic, you will feel right at home here. At the beginning of each chapter, you will place your armies on a game board divided into hexes. From there, you will attempt to move your armies in such a way as to achieve your objectives (which is usually kill and don't be killed). Your armies have different skills and abilities, as do your enemies, so rushing headlong into the breach is ill-advised. As with all games of this type, there is no single best army to sweep the battlefield with. This "rock/paper/scissors" approach means that you will have to spend time organizing your troops in such a way as to maximize their damage-dealing and minimize their damage-taking. Additionally, the RPG aspect of the game adds yet another layer of strategy as you groom and develop your heroes into something that suits your play style. Time of Heroes is designed to be a thoughtful game rather than a twitchy one, which suits me just fine. I have always loved turn-based strategy games. I enjoy the relaxed pace, the deep range of attack/defense options (compared to real-time games), and the fact that victory is completely dependent on your ability to develop and execute a strategy rather than your reflexes. That being said, this game is not for the easily distracted or those who like to consume their games in bite-sized chunks. Although Time of Heroes is single-player only, it does support Game Center Achievements and Leaderboards if you are looking to compare your skills against other heroes.

Graphics

Smuttlewerk's previous game, Companions, was an excellent top-down real-time(ish) role-playing game that featured hand-drawn 2D sprites. Time of Heroes utilizes an entirely different engine, opting instead for a 3D experience. The downside of this is the fact that you lose quite a bit of detail when you go 3D. However, the upside is, it is much easier to identify differences in terrain; a crucial element in Time of Heroes. When I first launched the game, I wasn't particularly impressed with the visuals. However, after I played for a while, I found that the advantages of a 3D engine vastly outweighed the disadvantages, and I reversed my original feelings of disdain.

Sound

Time of Heroes has decent sound and music, although neither is good enough to make you scramble for your headphones. As with all games I review, I turned both completely down to see if the game experience suffered. I found that the game was noticeably more difficult without the sound effects due to Smuttlewerk's good use of audio cues and interface feedback; a good sign that the developers spent some time working on this.

In-App Purchases (IAPs)

Time of Heroes offers a variety of IAP options, none of which (thankfully) take the form of consumables. For $1 a pop, you can unlock a series of powerful weapons, armor, magical artifacts, and aura enhancements. I didn't find the game to be balanced toward their purchase in any way, and was pleased very much by this fact.

Conclusion

Time of Heroes is a solid turn-based strategy role-playing game.
If you're a fan of the genre, you're not going to want to pass this up (especially at the currently ridiculous price of $1). If you're new to the genre, the presence of a solid tutorial and an intuitive interface should go a long way toward bringing you up to speed. The 3D engine isn't as pretty as the sprite-based graphics in their previous game, but they are quite functional for a game like this, and don't take much getting used to.

Ratings (scale of 1 to 5):

Graphics: 3.5 - The 3D engine is functional and silky smooth, but not big on eye-candy.
Sound: 4 - Good use of sound effects and a decent soundtrack.
Controls: 5 - The interface is intuitive, responsive, and accurate.
Gameplay: 5 - A solid turn-based strategy game with RPG elements. If you're looking for something fast-paced and twitchy, look elsewhere.

Playing Hints and Tips: Play through the tutorial before you dive into the game. Remember that the battles are scripted, not randomly generated. You will very often find yourself facing additional foes at the most inopportune times, so conserve your resources, even when victory appears to be near.

App Facts:
Developer: Smuttlewerk Interactive
Release Date: January 19, 2012
Price: $0.99
Buy App: Time of Heroes
Windows 8: What We Know So Far
By Michael Muchmore
February 3, 2012
We've seen the developer preview, and we're about to get a beta of Microsoft's next major platform, Windows 8. Here's what we know about it.
Most people with any interest in technology have now seen or maybe even had hands-on experience with Microsoft's next big thing: Windows 8. And it's clear most that the hybrid mobile-desktop operating system represents a huge risk for the software giant. But many would argue that it's a risk the company must take in order to become a major force in the brave new world of tablet computing. The most important thing to realize about Windows 8 is that it's effectively two operating systems in one: The touch-tablet friendly, tile-based Metro interface that runs lightweight Web-app like programs, and the traditional desktop operating system. You get to the second through the first, and, though the desktop is arguably more powerful, it's relegated by Microsoft to being just another app among your Metro home screen tiles. (If terms like Metro are unfamiliar, see my Windows 8 Glossary) Whether users will acclimate to the mind-shift required by moving between the two paradigms is something that will partly determine the OS's ultimate success.
And what becomes of the more than 1.25 billion Windows desktop users who may not move to the tablet format? As you'll see in the course of this story, desktop users will certainly not be overlooked by Windows 8: Microsoft has committed to supporting any machine that runs Windows 7 with Windows 8. And not only will the newer OS offer improved features in the standard desktop view, but their systems will start up and run noticeably faster—one of the most compelling aspects of Window 8.
I first compiled this article before we'd even seen the Developer Preview of Windows 8, and boy have we learned a lot since then. Microsoft has been doling out generous amounts of information about its gestating next operating system, mostly via the Building Windows 8 blog. The team responsible for developing the OS, led by Microsoft's President Windows and Windows Live Division Steven Sinofsky, has given the world detailed glimpses into the workings of Windows 8, and offered the public unprecedented feedback opportunities to influence its development. In fact, the latest post on the blog at the time of this writing consists of dozens of user concerns the team has addressed in response to comments on the blog.
In the first post, Sinofsky made the bold assertion that Windows 8 will represent the biggest rethinking of the PC operating system since Windows 95. That's quite a statement, considering big-time releases like XP, Vista, and Windows 7 have intervened. Windows 95 was the first version to truly break the bonds of DOS, and Windows 8 also promises to move the PC in a drastically new direction. The team has rethought its every aspect, from the interface down to the file system and memory use. Before we launch into what we know, I'd like to throw out there a few of the unknowns. (I won't, however, go in the whole Rumsfeld issue of known unknowns versus unknown unknowns.) Chief among these is how different Windows 8 will be on non-Intel tablets compared with traditional Windows desktop and laptop configurations. We have yet to see a very powerful Metro app such as those in the Microsoft Office suite. Will non-Intel tablets be able to run powerful desktop apps, or only Metro apps? Will the company ever offer a Metro-only or a Desktop-only version of the OS? Will it be a real threat to the iPad in the tablet space? How much will it cost? When will it go on sale?
We'll get more answers when the beta arrives later in February. Putting those questions aside for now, click on to find out what we do know so far.
Windows 8 will be compatible with existing PCs
With all the focus on tablets and what Microsoft has called a "touch-centric interface," Windows 8's role in the exiting installed PC base can easily get lost in the mix. We're talking about over 400 million Windows 7 machines, so it's not insignificant for Microsoft to offer an upgrade path for the existing users. In the inaugural post on the Building Windows 8 blog, Windows lead Steven Sinofsky states in no uncertain terms that Windows 8 will run on existing PCs: "It is also important to know that we're 100 percent committed to running the software and supporting the hardware that is compatible with over 400 million Windows 7 licenses already sold and all the Windows 7 yet to be sold." What will happen to the even greater number of PCs that run earlier versions of Windows, particularly XP, is less clear.
Windows 8 Will run two Kinds of Apps—Metro and Desktop
GDC 2012: Deadly Premonition follow-up, enhanced version teased
March 8, 2012 | By Christian Nutt
More: Console/PC, GDC, Business/Marketing
In his GDC presentation on the Harvest Moon series, Deadly Premonition executive producer Yasuhiro Wada teased two new possible installments of the series in collaboration with its director, Swery. While Wada spent the bulk of his talk reviewing the 16 year history of the farming franchise he created, he touched, in the end, on his future plans with his new company Toybox. While working at Deadly Premonition's Japanese publisher, Marvelous Entertainment, Wada was the executive producer of the Xbox 360 cult hit. The game was developed by the Osaka-based Access Games and directed by Hidetaka "Swery" Suehiro. The original game came out on both PlayStation 3 and Xbox 360 in Japan, but, "We faced some challenges... We weren't able to release the full experience" that was intended, he said. Because of this, the game "was not released on the PS3 in the West."
Despite its weaknesses, he recognizes that the game has "developed a cult following."
He said he hopes to announce that this will change -- hinting at a new, enhanced version of the original game. "We're working on a PS3 release. We still have a few hurdles to overcome, but hopefully we'll be able to show you something at E3," said Wada. He also said that at his new company, Toybox, he is working with Swery on a new title, and hopes to have news about that at E3, too. He strongly hinted that it would follow in the footsteps of Deadly Premonition without saying anything outright. [UPDATE: 1UP's Bob Mackey spoke to Wada after the presentation and confirmed via Twitter that the only game Wada intended to tease at the conference was an enhanced version of the original Deadly Premonition.]
Testimony Concerning The Next Generation Internet Initiative,
Before The U.S. Senate Subcommittee On Science, Technology, And Space
by Dr. Martha Krebs
Director Of Energy Research, U.S. Department Of Energy
November 4, 1997

Introduction
Mr. Chairman and Members of the Subcommittee, I am pleased to appear before you today to discuss the management of the Next Generation Internet Initiative (NGI) and the Department of Energy's (DOE) activities related to this initiative.
The Next Generation Internet Initiative is a Presidential research initiative to develop the foundations for the Internet of the 21st century, announced by the President and Vice President last October and discussed in the 1997 State of the Union Address. This initiative builds on the very successful government research that created the current Internet.
The Internet is one of the true success stories of technology transfer from U.S. government funded research to the broader U.S. economy. It has created many thousands of well-paying jobs across the country and billions of dollars of market value for U.S. based companies.
The Internet has given us new ways to shop, learn, entertain, conduct business, and keep in touch. Prime time network TV ads run for companies such as Amazon.com whose only business presence is on the Internet. The Internet has added new words to our vocabulary such as E-mail, The Web and Cyberspace. It has become a powerful force for democracy and individual empowerment. By erasing the barriers of space and time it is bringing us closer as a nation and to the world we live in.
But users have seen that the current Internet is limited by the technology that spawned it. Serious shortcomings in basic technology for Internet management, services, and security are already hampering the use of the Internet for business and other critical functions. Slow or congested data links preclude the use of much of the Internet for video-conferencing, distance learning, and multimedia.
In furtherance of the Administration's Next Generation Internet initiative last May, several public and private organizations sponsored a major workshop that examined the premises of the NGI initiative. The workshop was attended by over 130 representatives of companies, universities and federal laboratories, and government agencies. The workshop concluded that fundamental improvements are required to realize the potential of the Internet, development and testing of these improvements will take several years, no company or industry sector has the resources or ability carry out this research program, and only a partnership among federal and private resources can succeed.
Federally funded research continues to play an indispensable role in development of the Internet, producing most of the fundamental, pre-competitive, advances in networking technology that drive the Internet forward. These fundamental advances are commercialized by companies and offered as products. Substantial, recently formed companies that are based directly on federal research investments include Sun Microsystems, Cisco Systems, and Netscape. The NGI continues this successful model.
Next Generation Internet Initiative
The NGI has three goals which, taken together, will create the foundation for the 21st Century Internet. These are:
Promote experimentation with the next generation of network technologies.
Develop a next generation network testbed to connect universities and federal research institutions at rates that are sufficient to demonstrate new technologies and support future research.
Demonstrate new applications that meet important national goals and missions.
Over the past year we have worked to design the NGI to achieve these goals by building on the base of federal research investment, meeting the needs of the participating agencies for improved networks, and partnering with the private sector to facilitate early incorporation of advances into the commercial Internet.
Four major milestones have been met during this planning process.
The NGI Concept Paper, which summarizes the purpose and strategy for the initiative, was published in draft last May and in final form last July.
The Presidential Advisory Committee for High Performance Computing and Communications, Information Technology, and the Next Generation Internet warmly endorsed the NGI in a letter to the President's Science Advisor after extensive review of the initiative and recommendations for improvements that were accepted and incorporated into the final Concept Paper.
The NGI planning workshop, comprised of over 130 Internet experts and stakeholders, endorsed the initiative.
The draft Implementation Plan was issued last July, which laid out the detailed milestones, metrics, and responsibilities for each agency participating in the initiative. (The Implementation Plan will soon be updated with final FY 1998 appropriations.)
Management of the initiative is shared between individual participating agencies and the Large Scale Networking Working Group, reporting to the Committee on Computing, Information, and Communications of the National Science and Technology Council. The management structure has been designed to avoid conflicts among agency priorities and initiative goals through the following measures:
Detailed initiative planning and execution is entrusted to the Implementation Team, which includes the responsible program managers from each of the participating agencies.
Initiative deliverables are linked to the specific agencies responsible for them and are incorporated into agency plans and budgets.
Agencies exchange research managers to review proposal and projects, thereby enhancing cross-coupling among initiative components.
This management structure has evolved from the processes developed since 1989 for the High Performance Computing and Communications program and involves many of the same people.
DOE's Large Scale Networking Research
Although the Department of Energy is not participating in the NGI during fiscal year 1998, we are continuing previously planned research that is closely related to NGI. This research is coordinated with that of other agencies through the Large Scale Networking Working Group, co-chaired by officials from DOE and NSF. DOE brings special strengths to network research. The Department closely integrates network research, network deployment and management, and network applications. As a result, the needs of applications are quickly translated into research topics, and research advances are quickly incorporated into our production networks. The DOE2000 initiative provides a good example of this approach. The initiative will use advanced computing technologies to accelerate research and development and advanced collaboration technologies to make DOE's unique facilities and resources more accessible to research and development partners in national laboratories, academia and industry.
DOE depends strongly on the Internet for its research programs; our programs are constrained by the Internet's current limitations. Programs that depend on the Internet include, among others, science-based stockpile stewardship, high energy and nuclear physics, fusion energy, global climate research, the human genome project, chemistry and materials research, transportation technology development, and environmental technology development.
All of these programs depend on the generation, manipulation and exchange of increasingly large sets of data. The DOE is also calling on its scientists in National Laboratories, universities and industries to work in larger and closer collaborations and deliver results more quickly. Tighter budgets in the DOE argue for remote access to both data and facilities. The development of technologies for national security and new energy markets argue for protection of critical information while making it available to researchers through the Internet. Prompt shared access to data and computational capability will be key to making our large international collaboratories in high energy physics and fusion efficient and productive. Underlying all these requirements for improved research and operational management will be new technologies for managing networks and tracking project and program use on the net. The ultimate conclusion here is that the DOE needs a higher capability network to meet its commitments on its missions.
Since 1974 the Office of Energy Research has developed and provided state-of-the-art data networks for remote access to supercomputers and experimental facilities. In the mid 1980s we joined our network, ESnet, to those of DARPA, NSF, and other organizations to create the modern Internet. Our user facilities -- light sources, accelerators, research reactors, electron microscopes, and other unique research equipment -- serve thousands of users each year, many via Internet connections to run experiments and analyze data. Our supercomputers similarly serve thousands of remote users and are being interconnected by very fast Internet links to run problems that are too large for any one of them. The DOE2000 initiative is creating and demonstrating more efficient ways for distributed research teams to collaborate via the Internet. Each of these activities requires improved technologies for Internet management, services, and security, as well as much higher data transmission rates for their future success.
DOE's Research Related to NGI
We are conducting research relevant to each of the three NGI goals, coordinated with similar work in other agencies. Concerning research for advanced network technologies, we are conducting research on improved techniques for network measurement and management, focusing on software links to applications and user work stations. We are experimenting with protocols and software for improved Quality of Service, and looking at ways to provide better network services to applications. We also are testing and deploying experimental techniques for network security and authentication. To ensure that technology advances can be used together, we are coordinating our work with complementary activities by DARPA, NIST, NASA, and NSF, and we are exchanging program managers to review each other's proposals and projects.
Regarding the development of the prototype network testbed, we have interconnected ESnet with other federal research networks at coordinated exchange points to create a virtual network testbed for advanced applications and network research. We are experimenting with new network protocols for management, services, and security. We also are experimenting with project-specific virtual networks over ESnet that will allow us to conduct potentially unstable network research without disrupting the production network. For several years we have shared a networking contract with NASA for their NREN and our ESnet. We will rely on NSF for most network connections to DOE funded university researchers. Funding limitations preclude DOE participation in the very high speed network testbed at this time. We view this as a very important DARPA-led project and hope to join at a later date.
Supporting demonstration of new applications, one of DOE's great strengths is our integration of mission critical applications with supporting technology development. DOE applications that will both test and benefit from advanced networking technology include the DOE2000 collaboratory pilots (diesel engine development, distributed materials characterization, and remote fusion experiments), the Oak Ridge/Sandia distributed computing project, the accelerated strategic computing initiative, and analysis of massive data sets for high energy and nuclear physics. These ongoing applications require very high-speed data rates, have varied needs for quality of service, require different levels of authentication and security, involve interactions among people, computers, and experimental equipment, and include participants distributed among laboratories, universities, companies, and foreign sites. We will be working closely with other agencies and the private sector to harmonize the network technologies used by these applications. For example, in the DOE2000 initiative we are working closely with DARPA to test security infrastructure, and we are collaborating with several companies to incorporate or modify commercial-off-the-shelf (COTS) software and instruments. Discussions are underway with NSF regarding cooperation in collaboratory technologies.
The Department of Energy brings critical skills and important applications to large scale networking and the NGI. We are coordinating our network research and applications with the agencies participating in the NGI. We believe that we will reap both cost savings and mission improvements from the resultant networking advances. We look forward to constructive dialog with the Congress regarding the Department's programs in networking research and the possibility of formal affiliation with the NGI.
I thank the Chairman and the Committee for your interest in this important investment for the country's future.
Source: This speech came from a web page which no longer exists. Copyright information: Gifts of Speech believes that for copyright purposes, this speech is in the public domain since it is testimony before the U. S. Congress. Any use of this speech, however, should show proper attribution to its author. | 计算机 |
Linux vs Windows: Another Great OS Leap Forward On the Way?
By Linux News Desk
A company based in The Philippines is claiming it has developed software that would allow Windows-based applications to run smoothly on Linux - paving the way for the production of more PCs preloaded with Linux instead of Windows. Codenamed "David," Manila-based SpecOps Labs says it will unveil a working model of this middleware tomorrow and adds that it could be commercially available before the end of this year. SpecOps Labs (formerly known as Softlabs) began the ambitious project last year knowing it would eventually put them directly against Microsoft (a.k.a. Goliath in case the reference was lost on you.)"David will break the bonds of the giant Windows software and forever change the way the world computes," SpecOps Chief Executive Fredrick Lewis says defiantly. Lewis believes that the cost of purchasing PC's will decline once "David" becomes widespread and OEMs begin preloading his company's software so that the free LinuxOS can seamlessly run Microsoft programs, CIO, CTO & Developer Resources SpecOps projects revenues of around $35 million within two years, from OEMs and the so-called "white-box builders" - the small resellers or distributors that assemble and sell personal computers without major brand names. According to SpecOps' Web site: "The next generation (of David) will, in effect, incorporate the operating system into the Web browser, virtually eliminating the need for an operating system eventually, except to boot computer and launch the browser." Just like its namesake, the biblical hero David, SpecOps Labs new David middleware "is expected to level the OS industries playing field worldwide and free all consumers from the bonds of MS Windows - giving them freedom to use OS of their choice." Fighting talk. But it's early days yet, in spite of the fact that Victor Silvino, country manager of IBM Business Partners of IBM Philippines, has indicated that IBM is "keen on supporting [SpecOps] both from a hardware and software perspective." Published April 21, 2004 Reads 24,050 Copyright © 2004 SYS-CON Media, Inc. — All Rights Reserved.
Introducing "Cooperative Linux" - Linux for Windows, No Less
Windows On Linux: Skepticism About "David" Surfaces More Stories By Linux News Desk
SYS-CON's Linux News Desk gathers stories, analysis, and information from around the Linux world and synthesizes them into an easy to digest format for IT/IS managers and other business decision-makers.
Comments (14)
Kevin Clancy 02/05/05 01:09:25 AM EST
It is all a big fat lie; the web site has gone dead in August 2004.
karthikeyan 07/08/04 09:35:29 AM EDT
sir, I am a computer engineering student studying in my final year. Myself and my friends want to do a project in Cooperative Linux. We want more details regarding the project and the basic concepts of it.
plz mention the basic things that are needed to implement the project.
Mode1Bravo 04/27/04 05:51:40 AM EDT
All I can say is I'll believe it when I see it....
lee bogs 04/26/04 07:20:25 AM EDT
Finding the missing link in
Linux-Windows compatibility
Posted: 10:56 PM (Manila Time) | Apr. 25, 2004
By Erwin Lemuel G. Oliva
INQ7.net
IN a small seminar room of the De La Salle University (DLSU), Caslon Chua, chief software architect of SpecOps (which stands for special operations laboratories), took members of the local and international media through a guided tour of a software program called "David."
Introducing what the company is touting as the next breakthrough in computing, Chua told the audience that the demonstration was about to begin.
"But before I start, I should tell you that the David bridge software has been running the Microsoft Powerpoint presentation on this computer," said Chua pointing to the computer running on Red Hat Linux distribution.
The audience seems unmoved.
Chua who is a Ph. D. holder in computer science and the current graduate school director of the College of Computer Studies in DLSU, was about to demonstrate the "bridge" software which SpecOps developed.
This software, the company claimed, would eventually link two different environments of computing: the free operating system Linux and the commercial Microsoft Windows operating system. Both operating systems are now very popular, and the former is slowly attracting new users due to its lower cost.
Linux is an operating system developed by programmer Linus Torvalds of Finland. It was eventually given to the computing world for free use. Unlike Windows, Linux is an operating system -- the computer program that runs a computer system -- that can be used without the need to pay costly software licenses.
Companies like Red Hat, however, have recently adopted Linux and developed so-called distribution copies, which are often modified or improved versions of the free operating system with additional components.
During Thursday's public demonstration, the bespectacled Chua began showing his audience that Microsoft applications such as Office 2000 would not run on a Linux system. He then instructed his aide to install the David bridge software. After a few minutes, he again asked the aide to install the Microsoft Word program--the installation dialogue box for Office 2000 popped out in the middle of the computer screen, asking the user what to do next.
The aide was then instructed to click on the "next" button, prompting the system to ask for a CD-Key (an alphanumeric password). A few more minutes passed, the aide accepted the end-user license agreement, then proceeded to the installation of Microsoft Word, Powerpoint and Excel applications on the computer running Red Hat Linux.
Chua subsequently opened each Microsoft Office application, and showed that the "look-and-feel" of the applications remained intact, only this time it was running on a Linux box.
By the time he ended the demo, the audience was applauding.
But what is David?
According to Peter Valdes, chief technology officer of SpecOps, David uses a "new approach" in simulating the Windows environment in a Linux-powered system. Not wanting to reveal the company's trade secrets, he nonetheless said that David was breakthrough technology for today's computing world.
According to the SpecOps Labs website, David is set "to provide a platform, which will serve as a viable alternative to the MS Windows Operating System."
The company's Version 1.0 of David "will be a middleware program that will sit on top of the free and open-source Linux operating system, and enable it to seamlessly run most Windows applications," the company said.
In the future, David will become part of an operating system that will be integrated into a web browser, "virtually eliminating the need for an operating system eventually, except to boot the computer and launch the browser."
Attempts to "bridge the gap" between Windows and Linux have been made before, according to Valdes.
These projects include SunSoft's WABI (Windows Application Binary Interface), the TWIN open source project, ODIN, and the WINE project. The first two projects were abandoned, while the third targeted OS/2, an operating system developed by IBM.
WINE was the most prominent of all open source efforts to bridge the Windows gap. It was begun in 1993 to allow Windows 3.1 applications to run on Linux. Eventually, support for Win32 applications was added. Currently, the project is working on support for Windows NT and 2000 applications.
As of 2002, SpecOps said that the WINE project remained in the hands of developers, and out of reach of Windows users. The project also inherited the flaws inherent in the Windows system so early adopters experienced system crashes and performance problems.
Lindows and Crossover Office are two commercial initiatives that adopted the WINE project approach. But none of these efforts have generated consumer acceptance that is comparable to what Microsoft has achieved with its Windows OS.
Lindows was nearer to the heart of Windows users. However, the company was slapped with a legal suit by Microsoft, after the software giant claimed that its company name infringed on the copyright of Microsoft's Windows brand. This has delayed the company's efforts, and subsequently changed its direction and vision.
According to SpecOps's technical executives, David used reverse engineering to create a "Windows Subsystem Simulation Environment" to allow Windows applications to run "natively" on the Linux operating system.
It also corrected design flaws in the Microsoft Windows system to make the simulation more efficient and avoid system crashes.
SpecOps said that David incorporated into its architecture the top features of the preceding Windows compatibility projects.
However, unlike other simulation applications that still requires the user to have a copy of Microsoft Windows to run the applications on the computers, the David bridge software only requires users to install the middleware into a Linux system before installing Microsoft applications.
One advantage offered by David is that it requires minimal hardware additions, according to SpecOps. There is no need for additional memory and disk storage to execute and store the middleware code and the need for a separate computer server to run a so-called "Virtual Terminal Software" for emulating Windows applications in a Linux environment has been done away with, the company said.
SpecOps also claimed that David supports 16-bit applications (DOS/Windows v3.x) and 32-bit applications (Windows 95 applications; Windows NT/2K/XP applications).
Gentoo Ken 04/25/04 11:28:54 PM EDT
Hmm, interesting.
It will be much more fun if... it is... an open source s/w. :p
Roger Henderson 04/25/04 08:42:38 PM EDT
So where the hell is it?!? The article states:
Codenamed "David," Manila-based SpecOps Labs says it will unveil a working model of this middleware tomorrow and adds that it could be commercially available before the end of this year."
The article was written on the 21st of April - it is the 26th now. I would have thought tech like this would warrant a mention after release, not before, esp. if the reporter only needs to wait one day. Sounds like vapourware to me...
janka 04/24/04 02:40:25 PM EDT
Not a total waste of time if it will run, say, AutoCad and Timberline plus a few other speciality programs not available for Linux yet, but the time spent would be better invested doing ports to Linux. M$ compatibility will with time become irrelevant. Bill "Crash" Who? :)
James Jones 04/23/04 10:07:17 PM EDT
We've been here before. Remember "a better Windows than Windows"? One of the ways MS killed off OS/2 was to put IBM in the position of perpetually playing catchup to keep those Windows apps running under OS/2. Finally they added a call to win32s.dll that had no purpose save to break an assumption built into DOS compatibility mode programs under OS/2 (that they ran in a 512 MB address space), and IBM gave up--they eventually removed that limitation, but it was years after it no longer made a difference. If David really works as advertised, how does SpecOps Labs plan to avoid this fate--and even if they do, won't that undercut any motivation for the producers of software for Windows to move to Linux?
Raven Morris 04/23/04 02:06:28 PM EDT
Re: zero
I am curious, what did you find unprofessional about their web site ?
I thought the design was excellent, however I thought the pictures chosen for many of the pages were quite comical, sort of like they were parodying other companies (which they sort of are when they talk about Microsoft).
My only real complaint about the web site was that they lied about the WINE project. They made claims that WINE inherits the instabilities of Microsoft Windows, and list "Blue Screens Of Death" and "system lockups when the apps crash", neither of which are possible. WINE is merely a user-level application, it can't crash the system. For many programs it runs equally as stable as Windows, for some, even more stable.
As an example the game Grand Theft Auto 1 runs much better in WINE than it ever did in Windows, which it would regularly crash the Win32 kernel after a bad memory leak that happens in certain situations. WINE on the other hand just crashes the process and lets you re-run the app immediately. It also crashes much less often to begin with in WINE than it did back when I played it on Windows 98. Incidentally, the DOS version of the game never crashed at all.
Anyhow, my point being that they blatantly lie about the WINE project ... which is quite odd considering that much of their code base is supposedly coming *from* the WINE project. They had better GPL their code when it is done, it's going to be quite annoying if they go and find some way to circumvent the license by separating integral parts of their code from it, or something similar.
G'day.
zero 04/23/04 07:54:16 AM EDT
I was excited when I read the news but cooled down after I read their Website. The point is that with a properly run company, a proper Website is a must. Their Website (http://www.specopslabs.com/) really needs some work. If it is not properly constructed it is better not to show it. If their attitude towards their Website is so unprofessional, what do you expect of their software?
E-Dan 04/23/04 04:31:08 AM EDT
The question is whether such a solution - like Wine - will not come to Microsoft's advantage at some point, in the long run. A lot of people have started putting together their own PCs. Purchasing Windows XP is quite the investment. Although Linux ships with a lot of goodies, there are some things that MS are good at, and perhaps it is not such a bad thing that they are able to sell their more useful products to Linux users - without actually making Linux versions.
PianoMan 04/22/04 01:00:41 PM EDT
Ah, read their web site, you are incorrect in your assumption. You will find the web site is in business plan format and contains very humorous observations. http://www.specopslabs.com/ Their product is a "better" WINE type technology.
My guess, this is the sssshhh method IBM is going to use to run M$Office on Linux.
K 04/22/04 09:23:45 AM EDT
Sounds like a browser based Terminal Server to me. It has been around for years. Nothing new here.
Raven Morris 04/22/04 04:25:33 AM EDT
This sounds a fair bit flakey to me. Integrating the operating system into the web browser is one of Microsoft's biggest faults -- why try to emulate this with a fresh start? And it seems quite unlikely that this company lets you run Windows apps better than WINE (which has had many years of development time).
Still, the more publicity and mainstreaming of GNU/Linux systems means more hardware drivers, software and games being produced, so it's all good. | 计算机 |
Revision as of 05:49, 15 January 2014 by Matthew Todd
Open Source Research (OSR) adopts the following basic rules (first written down here):
First law: All data are open and all ideas are shared
Second Law: Anyone can take part at any level
Third Law: There will be no patents
Fourth Law: Suggestions are the best form of criticism
Fifth Law: Public discussion is much more valuable than private email
Sixth Law: An open project is bigger than, and is not owned by, any given lab.
This wiki gathers resources for open research aimed at finding new medicines for diseases. A wiki is intended for project status and notes - the actual collaboration occurs on other pages. In the case of the malaria project, for example, these may be found here.
1 History and Context
1.1 The Start: Open Source Software Development
1.2 A Note on Open Access
1.3 Stage 2: Open Data
1.4 A Note on Open Innovation
1.5 Crowdsourcing
1.6 Open Science
1.7 Open Source Drug Discovery
2 Philosophy of Open Research
2.1 Why Take Part?
2.2 Ownership
3 Logistics of Open Research
3.1 Licence
History and Context
The Start: Open Source Software Development
Open source as a term in software development implies a project is open to anyone, and the final product emerges from a distributed team of participants. There may be a funded kernel of work, but the subsequent development by the community is not explicitly funded. There are many examples of high quality, robust and widely used applications that have been developed using the open source model, such as the Firefox and Chrome web browsers, the Linux operating system and the Apache web server. It's important to appreciate the commercial significance of such products. There are thriving open source software development communities on the web at, for example, Sourceforge and GitHub. Central to the operation of these sites and projects is the sharing of data and ideas in near-real time.
A Note on Open Access
"Open Access" refers to a scientific paper that is free to read, rather than behind a paywall. While this is an important issue, and is absolutely required of any publications arising from open source projects, open access needs to be distinguished from open research. The former describes a mechanism of publishing work that is complete. The latter describes a way for humans to work together.
Stage 2: Open Data
Many valuable initiatives advocating open data have emerged in which large datasets are deposited to assist groups of researchers (e.g., Pubchem, ChEMBL and SAGE Bionetworks); the release of malaria data in 2010 falls into this class. These very important ventures employ the internet as an information resource, rather than as a means for active collaboration. For people to work together on the web, data must be freely available. Yet the posting of open data is only a necessary and not sufficient condition for open research. Open data may be used without a requirement to work with anyone. The GSK malaria data, for example, may be browsed and used by people engaged in closed, proprietary research projects - there is no obligation to engage in an open research project.
An important feature of open data is that it maximises re-use (or should be released in a way that permits re-use). Essentially the generator of data should avoid making assumptions about what data are good for. The data acquired by the Hubble space telescope has led to more publications by teams analysing the data than from the original teams that acquired the data.
The Panton Principles describe important recommendations for releasing data into the open.
A Note on Open Innovation
As an effort to stimulate innovation, several companies have adopted an "open innovation" model. This is a somewhat nebulous term that means companies must try to bring in the best external ideas to complement in-house research (NRDD Article). The mechanisms of bringing in new ideas are:
Prizes for solutions to problems (e.g., Innocentive). A competition means that teams work in isolation and do not pool ideas. Such a mechanism does not change the nature of the research, rather the motivation to participate. The pharmaceutical industry itself essentially already operates on this model.
Licensing agreements with academic groups/start-ups (e.g., Eli Lilly's PD2 program). In such arrangements, companies may purchase the rights to promising ideas. Vigilance of intellectual property may of course shut down any open collaboration at a promising stage. It has therefore been proposed to limit open innovation science to "pre-competitive areas" (e.g., toxicology) but to date the industry has been unable to define what the term "pre-competitive" means beyond the avoidance of duplication of effort and the requirement for public-domain information resources (NRDD article).
For more on this distinction see Will Spooner's article.
Open Source Research describes a way of working that is fundamentally different from open innovation, since when something is open source everything is shared. This is not the case in open innovation, where teams are free to operate in secret.
Crowdsourcing
The use of a widely distributed set of participants to accelerate a project is a strategy that has been widely employed in many areas. The writing of the Oxford English Dictionary made use of volunteers to identify the first uses, or best examples of the use, of words. Pioneering work on distribution of computing power required on science projects (where the science itself was not necessarily an open activity) was achieved with the SETI@Home and Folding@Home projects.
With the rise of the web, several highly successful crowdsourcing experiments have emerged in which tasks are distributed to thousands of human participants, such as the Foldit and Galaxyzoo projects. What is notable about such cases is the speed with which the science progresses through the harnessing of what has been termed the “cognitive surplus”.
Open Science
Open science is the application of open source methods to science. Thus data must be released as they are acquired, and it must be possible for any reader of the data to have an impact on the project. There should be a minimisation of groups working on parts of the project in isolation and only periodically releasing data - ideally complete data release and collaboration happen in real time, to prevent duplication of effort, and to maximise useful interaction between participants.
Though there is no formal line to distinguish crowdsourced projects from open science projects, it could be argued that open science projects are mutable at every level. For example, while anyone could participate in the original Galaxyzoo project, the software, and the basic project methodology, were not open to change by those who participated. On the other hand in the Polymath project, while there was a question to answer at the outset, the direction the project took could be influenced by anyone, depending on how the project went. In the Synaptic Leap discovery of a chemical synthesis of a drug, the eventual solution was influenced by project participants as it proceeded.
Open Source Drug Discovery
Drug discovery is a complex process involving many different stages. Compounds are discovered as having some biological activity, and these are then improved through iterative chemical synthesis and biological evaluation. Compounds that appear to be promising are assessed for their behaviour and toxicity in biological systems. The move to evaluation in humans is the clinical trial phase, and there are regulatory phases after that, as well as the need to create the relevant molecule on a large scale.
Since no drug has ever been discovered using an open source approach it is difficult to be certain about how OSDD would work. However it seems likely that the biggest impact of the open approach would be in the early phases before clinical trials have commenced. Open methods could also have an impact on the process chemistry phase, in creating an efficient chemical synthesis on a large scale.
Open work cannot be patented, since there can be no delays to release of data, and no partial buy-ins. If a group opts out of the project to pursue a "fork", they leave the project. Open source drug discovery must operate without patents. The hypothesis is that through working in an open mode, research and development costs are reduced, and research is accelerated. This offsets the lack of capital support for the project. Costs of clinical trials and product registration would have to be sourced from governments and NGOs. Whether this is possible is one of the central questions of OSDD.
A one-day meeting on open source drug discovery for malaria was held in February 2012. General issues surrounding the feasibility of open source drug discovery were discussed, followed by more specific malaria-related ideas. These talks are gradually going up on YouTube with annotations, and they frame many of the relevant issues, for example the landscape of drug discovery in neglected diseases, and whether patents are necessary in drug discovery. An important message is that open source drug discovery is where anyone may participate in driving the research, which is different from a more general use of the word "open" where data are made freely available, but perhaps after a delay which essentially prevents participation by others.
Philosophy of Open Research
Why Take Part?
What of motivations? Why would people want to contribute to this project? Partly to solve a problem. Partly to be involved with quality science that is open, and hence subject to the most brutal form of ongoing peer-review. Partly for academic credentials since regular peer-reviewed papers will come from the project. Partly to demonstrate competence publicly - open science is meritocratic and status-blind. Perhaps a mixture of all these things.
A competition is possible in the future, i.e. with a cash prize. Progress towards a very promising lead compound series has been rapid, but there is a long road to a compound that looks sufficiently promising that it moves towards clinical trials. There's a lot of tweaking needed, and perhaps even the move to another series. It is not obvious what will happen. It is certain the project will need a lot more input than it has received to date. A prize may increase traffic and input. The competition would be teamless, however, awarded based on performance of individuals within a group where everything is shared. Not sharing data or ideas leads to disqualification. Such a competition is difficult to judge, difficult to award, and hence almost certainly worth doing. More about this is here.
Ownership
A final point - the project is open. Nobody owns it. Those people most active in the project lead it while they are active. If you wish to contribute, in any capacity, please do so. There is no need to "clear" anything with existing project members by email first. To date it has been very common for current participants to receive questions/suggestions from people by email, which is to be discouraged. In the development of Linux, the need for Linus Torvalds to approve everything caused a serious bottleneck, leading to the observation that "Linus doesn't scale". Nobody scales, but the team does. So it's more efficient if all the project discussions are held publicly. Many people do not like this idea. In science the idea of "beta testing" something is alien. When data are released in science there is an expectation that the data are correct, and essentially finished. This project eschews this view. All data are released immediately, all discussions are public, anyone can participate.
Logistics of Open Research
The way the project is run is one of the novelties, though as with everything in this project nothing is static and advice is always welcome on improvements. Raw experimental data are recorded in an online, openly-readable electronic lab notebook. The Synaptic Leap is being used to discuss ideas and results, as well as plan future work. The project's Google+ page is a light way to keep up with developments and discuss. The project's Twitter feed is a broadcast mechanism for updates. LinkedIn was used in the past on another project as a way of connecting with relevant experts, but has not been used much so far in this project. A wiki (that includes this page) is used to host the current overall project status. Updates on the project's progress can also be found at our Facebook page, and this is also a place for interaction. If you wish to participate in this project, you can sign up to all these sites, and you would then be sent the Twitter/G+ passwords so you can use the same accounts.
The "live" philosophy of open research is that everything is released early and released often - a mantra derived from software development. To gain advantage from community inputs it is important to be clear and open about what you need.
Licence
Licences are essential in open source projects, to avoid any misunderstanding. An appropriate default licence for open research is CC-BY-3.0: any results are both academically and commercially exploitable by whoever wishes to do so, provided the project is cited. This allows for full commercial benefit from open research while maintaining well-worn standards of giving credit where credit is due.
Retrieved from "http://openwetware.org/wiki/Open_Source_Research"
Matthew Todd
Paul M. Ylioja | 计算机 |
Home | Cheatbook | Latest Cheats | Trainers | Cheats | Cheatbook-DataBase 2015 | Download | Search for Game | Blog
Browse by PC Games Title: A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z | 0 - 9
The encyclopedia of game cheats. A die hard gamer would get pissed if they saw someone using cheats and walkthroughs in
games, but you have to agree, sometimes little hint or the "God Mode" becomes necessary to beat a particularly hard part
of the game. If you are an avid gamer and want a few extra weapons and tools the survive the game, CheatBook DataBase is
exactly the resource you would want. Find even secrets on our page. Nancy Drew - Treasure in the Royal Tower Walkthrough
Nancy Drew - Treasure in the Royal Tower Walkthrough
Version 1.6 12/1/08 DDDDD
DDDDDDDDDDDDD
DDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDD DDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDD DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD D
DDDDD DDDDDDDDDDDDDDDDDDDDDDDDD DDDDDDDD
DDDDD DDDDDDDDDDDDDDDDDDDDDDDDDD DDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDD DDDDDD
DDDDDDDDDDDDDDDDD DDDDDD DDDDDDD
DDDDDDDDDDDDDDDD DDDDDD DDD DDDD
DDDDDDDDDDDDDD DDDDDD DDDDDDDDDDDDDDD
DDDDDDDDDDDDDDD DDDDDDDDD DD DDDDDDDDDDDD D
DDDDDDDDDDDDDDDD DDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDD DDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDD DDDD
DDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDD
DDDDDD
DD DDDDDDD
Nancy Drew: Treasure in the Royal Tower Walkthrough
by The Lost Gamer (AKA Michael Gray)
Videogame Humor: Gamecola.net
For the latest version of this guide, check
http://the_lost_gamer.tripod.com/guides.html
001. General information
002. Video Guide
003. Characters
004. Walkthrough
005. Credits
001-General Information
This is a walkthrough for the PC game called Nancy Drew:
Treasure in the Royal Tower. It's the fourth game in the
Nancy Drew series, in which you play as Nancy Drew and go
around and solve mysteries.
If you want to contact me, e-mail
[email protected], but make the subject blank
if you do. If you want to reproduce this guide in some
fashion, you should contact me before doing so.
002-Video Guide
Hey! Want to see how to beat the game instead of reading
about it? Well, I've got a video walkthrough, and you can
see it at...
http://www.youtube.com/view_play_list?p=CCF19C5F666C0EAC
The video guide comes with my humorous commentary. It's
more fun than a barrel of monkeys.
003-Characters
Nancy Drew: Our heroine! She's a super-mystery solver, but
she's taking a break from mysteries in order to go skiing
in Wisconsin. Like most of Nancy's vacations, this one
soon turns into a mystery.
Ezra Wickford: The person who designed Wickford Castle in
1920. He's dead by now.
Christi Lane: The owner of Wickford Castle.
Dexter Egan: The caretaker of Wickford Castle. Since
Christi is away on business, Dexter is taking care of the
castle. You can usually find him at the front desk.
Jacques Brunais: The ski instructor, who represented France
in the last Winter Olympics.
Professor Hotchkiss: She's a somewhat crazy professor who
hides in her room all day writing about Marie Antoinette.
Lisa Ostrum: A photojournalist who spends most of her time
reading a magazine in the lounge of the castle/hotel.
George Fayne: Nancy's friend. You can call George to get
advice during this game.
Bess Marvin: George's cousin. Bess will always be at
George's when Nancy calls.
Ned Nickerson: Nancy's boyfriend. You can call him during
this game, too.
004-Walkthrough
To start the game, select "New Game". You can take a
tutorial to learn how to move around in the game, and then
you can play the game at either junior or senior detective
level. The difference between the levels is the difficulty
of the puzzles, but luckily for you, I'm a logic puzzle
fiend, so I cover the puzzles in both of the levels.
"Dear George,
So much for my Wisconsin ski vacation! I arrived here at
Wickford Castle last night just before a blizzard swept in!
The mountain is completely shut down, and all the
surrounding roads are closed.
I think I'm one of the few guests who made it to the castle
at all. This place is huge, and old--and slightly creepy
under the circumstances. You should hear this wind!
What's more, the owner, Christi Lane, my father's
friend, is away on business! I tried to ask the caretaker,
Dexter Egan, how I could contact her, but he said he didn't
know. Doesn't that seem odd? I couldn't help feeling like
there was something he wasn't telling me.
All this makes me a little nervous, but I'm determined to
enjoy myself. After all, this is a vacation, right? I
have big plans to explore the castle.
That Ezra Wickford, the original owner, must have been
quite a character to have built such an extraordinary
place! It's filled with strange dead end corridors for one
thing, and I noticed one of the towers is totally different
from the other ones.
Of course I'll have to save some time to meet Jacques
Brunais, the French ski instructor. Tell Bess she'll be
the first to know if he's half as gorgeous in person as he
looks on his website!
So, George, I guess things never quite go according to
plan. But at least this time, the culprit is just a
snowstorm! Talk to you soon!
Nancy"
Boy, Nancy sure is sending a lot of photos to George, isn't
she? Nancy says she has to mail the letter, but let's
examine her room before we do so.
First off, take the card that's on the desk in front of
you. That's your room passkey, which lets you get in and
out of your room (Room 205). Then open the drawer and
look at the card in there. It gives you your locker
information: You have Locker 310, with the combination 517.
Head to the dresser, and pull out the top drawer. Take the
menu from inside (you'll need it later). You can look
inside your suitcase to find a pamphlet on the castle. It
appears that the strange tower Nancy saw was imported from
France, which is why it doesn't look like the other ones.
Read the Sassy Detective magazine that Nancy has on the
round table there for information on dusting for
fingerprints. Apparently, fingerprints become less clear
over time, as the oil in fingers is used up. You need to
read this because (surprise), you're going to work with
fingerprints later on in the game.
Okay, leave the room and start exploring. You're on the
second floor, where there's not much that's important,
except for the elevator, Nancy's room, Professor
Hotchkiss' room (room 214, you can hear her typing if you
get close enough), and the stairs that lead to all levels
of the building.
There is also a big set of stairs that lead from the second
floor to the lobby of the building (Nancy can't take the
other set of stairs because it's too dark). Go to the
lobby, and head to the lounge, which is left of the lobby.
Inside in Lisa Ostrum. Start talking to her.
Lisa is a friendly person, and she tells you that someone
has vandalized the library. Dexter locked the library so
no one can get in, but he probably has an extra key at the
front desk (hint, hint).
Lisa also brings you up to speed on the other people in the
hotel. Jacques Brunais is a ski instructor who completely
screwed up in the last Winter Olympics, which is probably
why he's not in France anymore. Professor Hotchkiss is a
nutty old woman who was recently robbed, but never said
what got stolen.
Lisa also lets you know that she's a photojournalist who's
studying we
Pirate Bay Goes Legal: What Now For PC Games?
By Alec Meer on June 30th, 2009 at 3:47 pm.
In case you’ve not caught the news, download site The Pirate Bay – a name that’s become almost synonymous with bittorrent and illegal filesharing – has been bought out for a princely sum, and its new owners have announced their intention to go legal. A little laughable, given this is a site with ‘pirate’ in its name, and also likely to be the end of TPB’s world-straddling popularity. No doubt it’ll hang around for a while, as the similarly born-again Napster does, but it seems highly unlikely the bulk of its existing audience will continue to visit if they can’t get stuff for free. The question for us chaps is whether the potential effective closure of the internet’s leading filesharing hub will have any effect on what’s so often accused of being PC gaming’s smoking gun. In other words: are we about to witness a sharp decline in PC game piracy?
the p-word oh god the p-word, uh oh here we go.
Underdogs, Ho
By Alec Meer on March 12th, 2009 at 2:55 pm.
Legally grey abandonware site Home of The Underdogs disappeared off to whatever under it is dogs come from a little while back, after the site's hosts ran out of money. While we could all argue about the rights and wrongs of hosting out-of-print games without the blessing of their creators/owners until the undercows come home, it's hard to deny it was a temptingly useful way to play the PC games of yesteryear. Even when you already owned dusty floppy copies of 'em. With it gone, tracking down specific retro PC games has become a whole lot harder, and that's no doubt happy news as far as some are concerned. Seems the Underdogs might yet find a new home, however.
home of the underdogs, Retro, the p-word oh god the p-word.
The Witcher 3 Developer Sticking To Their Guns On DRM
By Matt Hawkins . June 21, 2013 . 4:30pm
CD Projekt Red’s The Witcher 1 & 2 have been at the technological forefront, pushing that envelope, and they’re set to do it all over again with The Witcher 3. However, there’s another frontline they’re equally known for, and that’s their fight against DRM. The studio has long been a staunch opponent of digital rights management. As a direct result, they’ve viewed by many as a trusted friend of the consumer.
Some would argue that they’ve paid a hefty price for such goodwill; CD Projekt Red has estimated that 4.5 million copies of The Witcher 2 have been pirated thus far. Still, their stance remains as strong as ever, though when it was discovered that the Xbox One would have its own DRM initiatives, a console that would eventually be home to The Witcher 3, many were confused.
I asked John Mamais, the game’s executive producer, about this, back when Microsoft’s new console was still going to require always-on Internet.
“We can’t control Microsoft’s business decisions, but we want to be on their platform, so we’re going to do our best to make awesome games for their platform,” Mamais replied. “At the same time, the PC version will remain DRM-free. That’s our policy, as a company. Those who are concerned can still purchase a copy through GOG.com.”
“There’s a lot of cool things about [Microsoft's] platform. Hopefully they’re going to do really well, and hopefully Sony’s going to do really well, and hopefully PCs are going to do really well.”
So, how did it feel for CD Projekt Red, one of the front-runners of anti-DRM, to see the subject of DRM be such a hot topic at E3 this year?
“It’s kind of a recent phenomena, to take this DRM stuff off. We were doing it early on… we were one of the front-runners of it,” Mamais said. “And because we fought this war already, and carried that flag, you could say that it’s somewhat validating to raise the same concerns that we’ve done already.”
He elaborated: “It was a constant battle; during The Witcher 1, we went toe-to-toe with that game’s publisher, Atari, over DRM. Because of the feeling that, if there’s no DRM, it’s going to leak earlier, and sales will be lost.”
“I mean, it is kind of a valid opinion… I think both sides have their good arguments. But because I’m a gamer, I do tend to not to like the DRM stuff.”
Testsubject909
It’s good to hear. When I’ll get the Witcher 3, I’ll be sure to pick up the PC version.
Though… honestly whenever someone talks about CD Projekt Red, my mind goes straight to Cyberpunk…
Solomon_Kano
Same. I immediately wonder about Cyberpunk when CDPR comes up.
mirumu
They may have lost some sales due to piracy, but I’d be willing to bet they also gained a lot of sales due to their anti-DRM stance and for the fact they put their effort into making some really good games. There are many PC gamers around saying they’ll buy this game on GoG day 1 sight unseen. I plan to do the same myself. Goodwill goes a long way.
Testsubject909
Not only that, they must have saved money on not bothering to either purchase or create a DRM to put unto their games.
Yes, I don’t know what it costs to license things like SecuROM or Starforce, but I don’t imagine it’s cheap, and then there’s the cost of integrating it into the game.
Amine Hsu Nekuchan
Not to mention such number are always extreme and both likely don’t represent the actual number of pirates, much less don’t even consider people maybe have pirated it then bought it, pirated it, but never played it, bought it but then pirated it for some reason, etc.
http://www.falgram.com/ Falmung
Also DRM all it does is delay the game’s leak by a day or two. Once the crack is out its always bundled with the game wherever its shared so the DRM its completely useless. Whoever is going to pirate will pirate. But the ones who are going to buy won’t have to worry about ever having problems with DRM as a legitimate buyer.
MrTyrant
I admire cd projekt because of things like this. Their anti-DRM policy and how well they treat their fans; all the updates, dlcs and expansions were free. Yeah, the huge overhaul update of Witcher 2 could be downloaded for free from their main page; they were not willing to charge their fans again for the same game, and they even helped one guy who downloaded an illegal torrent lol
I’ll give all my money to Witcher 3 because I believe it will be one of the greatest games and the same goes for their incoming cyberpunk game.
StaticDestroyer
I downloaded the witcher 2 in my poorer days. And I’ve bought it three times over since then, I know I’m not the only one to do so and I’d wager a lot of their actual sales were from the very same people that pirated it to begin with. CD Projekt’s policies have made me a big fan and my customer loyalty is definitely theirs. I already have witcher 3 on preorder and am awaiting the day I can do the same for cyberpunk. It’s all too uncommon that developers take a stance that benefits their consumer base more than themselves and I really feel they should be commended for it, they’re a credit to their industry.
yo1234
I hate the fact that people take advantage of their goodwill. But that’s the world i guess…
No interest in these games but respect for the devs
MrTyrant
You should play it, it's a good series with great lore. Different from most wrpg, so you cannot compare or pre-judge so easily. It's more action based.
Crevox
In my opinion, it remains a marketing campaign and nothing more. I still don’t understand the big fuss about DRM, and to them, it’s an easy way to garner goodwill and support from people who hate it.
Eilanzer
…I agree about the marketing part…Even more after this partnership with the not so old drm crap of xbone…
I can understand why DRM exists and why it sounds tempting for the industry side to use it, but you can't just paint the anti-DRM stance off like that completely. It can just as likely be an act of good will and trust towards the customer. Admittedly a very risky route for them to take, even – which would explain why their decision gets much respect.
There is definitely a marketing aspect to it, although it’s quite a stretch to suggest it’s all there is.
A quick look across to the music industry makes it pretty clear that worst case scenarios with DRM can and do actually happen. Many people have lost everything they paid for with services like Microsoft’s Plays for Sure and MSN Music. That’s not to say there haven’t been significant DRM problems in the game industry too. Even the much praised Steam was an utter shambles at launch. The hate is so prevalent amongst those who have been PC gamers for any significant time because many have been directly burned by it. If you haven’t, that’s great. I just wish we all could say the same thing.
zazza345
I'm sorry, but their goodwill is gone. They were willing to publish on a console that was supposed to have terrible DRM. I will not buy a product from them from now on.
Sarcasm 101
It's the end of the world, brace yourselves guys!
The Rapture is coming wahhhhh *foams at the mouth*
Fair enough. It’s your money and it’s your choices and your principles.
They were willing to publish on the Xbox One because they didn’t want to exclude the Xbox One fans from their game. Not because they wanted in on the DRM. Is it not goodwill to give your game to everyone regardless of their platform of choice?
In fairness, they didn't know about the Xbox One's DRM until the rest of us did. Personally I would have preferred for them to have cancelled the Xbox One version once the DRM and anti-consumer functionality was announced. I think it would have sent an important message. Of course it's all academic now with Microsoft's backtracking, but I can relate to where you're coming from.
That said, as long as the PC version existed and was 100% DRM free I’d still hold goodwill for them.
Kocham Polskę!! (I love Poland!!)
artemisthemp
DRM only causes issues for paying consumers, while pirates (ARR) will crack it in 1-2 days.
By Steve Jones, 2007/05/31
What are the Differences Between SQL Server 2000 and SQL Server 2005?
In part I of this series I looked at the administrative differences and in this part I'll cover some of the development differences between the versions. I'm looking to make a concise, short list of things you can tell a developer who is interested, but not necessarily knowledgeable about SQL Server, to help them decide which version might be best suited to meet their needs.
And hopefully help you do decide if an upgrade is worth your time and effort.
One short note here. As I was working on this, it seemed that there are a great many features that I might put in the BI or security space instead of administrator or development. This may not be comprehensive, but I'm looking to try and show things from the main database developer perspective.
The Development Differences
Developing against SQL Server 2005 is in many ways similar to SQL Server 2000. Most all of the T-SQL that you've built against SQL Server 2000 will work in SQL Server 2005, it just doesn't take advantage of the newer features. And there are a great many new extensions to T-SQL to make many tasks easier as well as changes in other areas.
One of the biggest changes is the addition of programming with .NET languages and taking advantage of the CLR being embedded in the database engine. This means that you can write complex regular expressions, string manipulation, and most anything you can think of that can be done in C#, VB.NET, or whatever your language of choice may be. There's still some debate over how much this should be used and to what extent this impacts performance of your database engine, but there's not denying this is an extremely powerful capability. The closest thing to this in SQL Server 2000 was the ability to write extended stored procedures and install them on the server. However this was using C++ with all the dangers of programming in a low level language.
However there are many new extensions to T-SQL that might mean you never need to build a CLR stored procedure, trigger, or other structure. The main extension for database developers, in my mind, is the addition of the TRY/CATCH construct and better error information. Error handling has been one of the weakest parts of T-SQL for years. This alone allows developers to build much more robust applications.
There are also many other T-SQL additions, PIVOT, APPLY, and other ranking and windowing functions. You might not use these very often, but they come in handy. The same applies to Common Table Expressions (CTEs), which make some particular problems very easy to solve. The classic recursion of working through employees and their managers, or menu systems, have been complex in the past, but with CTEs, they are very easy to return in a query.
One of the other big T-SQL additions is the OUTPUT clause. This allows you to return values from an INSERT, UPDATE, or DELETE (DML) statement to the calling statements. In an OUTPUT statement, just like in a trigger in SQL Server 2000, you can access the data in the inserted or deleted tables.
One of the programming structures that many developers have gotten more and more exposure to over the last decade is XML. More and more applications make use of XML, it's used in web services, data transfers, etc. XML is something I see developers excited about and with SQL Server 2005 there is now a native XML data type, support for schemas, XPATH and XQUERY and many other XML functions. For database developers, there is no longer the need to decompose and rebuilt XML documents to get it in and out of SQL Server. Whether you should is another story, but the capabilities are there.
There are a couple other enhancements that developers will appreciate. The new large datatypes, like varchar(max) allow you to store large amounts of data in a column without jumping through the hoops of working with the TEXT datatype.
Auditing is much easier with DDL triggers and event notifications. Event notifications in particular, allowing you to respond to almost anything that can happen in SQL Server 2005, can allow you to build some amazing new applications.
The last enhancement in T-SQL that I think developers will greatly appreciate is ROW_NUMBER(). I can't tell you how many times I've seen forum posts asking how to get the row number in a result set, but this feature is probably greatly appreciated by developers.
There are a number of other areas that developers will find useful. Service Broker, providing an asynchronous messaging system can make SOA applications a much easier to develop. Until now, this is a system that appears easy to build, but allows unlimited opportunities for mistakes. Native web services are also a welcome addition to allow you to extend your data to a variety of applications without requiring complex security infrastructures.
Reporting Services has grown tremendously, allowing more flexibility in how you deploy reports to end users. Integration Services is probably the feature that most requires development skills as this ETL tool now really is more of a developer than a DBA system. However with the added complexity, it has grown into an extremely rich and tremendously capable tool.
There are other changes with SQL Server, ADO.NET has been enhanced, Visual Studio has been tightly integrated with it's extensions for various features as well as its influence on the Business Intelligence Design Studio, and the Team System for DB Pros. The Full-Text Search capabilities have been expanded and they work better, allowing integration with third party word-breakers and stemmers as well as working with noise words.
Why Upgrade?
This is an interesting question. As with part I of this series, I'm not completely sure of how to recommend this. If your server is running well as an administrator, there's no reason to upgrade. As a developer, however, it's a bit more complicated.
Developers, almost by definition, are looking to change things on a regular basis. For developers, they are fixing things, enhancing them, or rebuilding them. In the first or even second case, it may not make much sense to upgrade if your application is working well. In the latter case, I'd really think hard about upgrading because a rebuild, or re-architecture, takes a lot of time and resources. If you're investing in a new application, or a new version of an application, then SQL Server 2005 might make sense to take advantage of the features of SQL Server 2005.
I'm guessing that many of these features will be around through at least the next two versions of SQL Server. While I can see there being a radical rewrite after Katmai (SQL Server 2008), I can't imagine that many things won't still be around in the version after that. They may get deprecated after that, but they should be there for that version, which should see support through 2018 or 2019. If you are struggling with ETL, trying to implement messaging, or web services, then it also might make sense to upgrade your database server to SQL Server 2005.
A quick summary of the differences:
Server Programming Extensions
Limited to extended stored procedures, which are difficult to write and can impact the server stability.
The incorporation of the CLR into the relational engine allows managed code written in .NET languages to run. Different levels of security can protect the server from poorly written code.
T-SQL Error Handling
Limited to checking @@error, no much flexibility.
Addition of TRY/CATCH allows more mature error handling. More error_xx functions can gather additional information about errors.
T-SQL Language
SQL Language enhanced from previous versions providing strong data manipulation capabilities.
All the power of SQL Server 2000 with the addition of CTEs for complex, recursive problems, enhanced TOP capabilities, PIVOT/APPLY/Ranking functions, and ROW_NUMBER
Limited support using triggers to audit changes.
Robust event handling with EVENT NOTIFICATIONS, the OUTPUT clauses, and DDL triggers.
Large Data Types
Limited to 8k for normal data without moving to TEXT datatypes. TEXT is hard to work with in programming environments.
Includes the new varchar(max) types that can store up to 2GB of data in a single column/row.
Limited to transforming relational data into XML with SELECT statements, and some simple query work with transformed documents.
Native XML datatype, support for schemas and full XPATH/XQUERY querying of data.
v1.1 of ADO.NET included enhancements for client development.
v2 has more features, including automatic failover for database mirroring, support for multiple active result sets (MARS), tracing of calls, statistics, new isolation levels and more.
No messaging built into SQL Server.
Includes Service Broker, a full-featured asynchronous messaging system that has evolved from Microsoft Message Queue (MSMQ), which is integrated into Windows.
An extremely powerful reporting environment, but a 1.0 product.
Numerous enhancements, run-time sorting, direct printing, viewer controls and an enhanced developer experience.
DTS is a very easy to use and intuitive tool. Limited capabilities for sources and transformations. Some constructs, such as loops, were very difficult to implement.
Integration Services is a true programming environment allowing almost any source of data to be used and many more types of transformations to occur. Very complex environment that is difficult for non-DBAs to use. Requires programming skills.
Workable solution, but limited in its capabilities. Cumbersome to work with in many situations.
More open architecture, allowing integration and plug-ins of third party extensions. Much more flexible in search capabilities.
These are the highlights that I see as a developer and that are of interest. There are other features in the security area, scalability, etc. that might be of interest, but I think these are the main ones.
I welcome your comments and thoughts on this as well. Perhaps there are some features I've missed in my short summary that you might point out and let me know if you think it makes sense to discuss some of the security changes. As far as BI stuff, hopefully one of you will send me some differences in an article of your own.
Total article views: 36842
Views in the last 30 days: 74
SQL Server 2005 for SQL2k Developer (Part 1)
There are many changes in SQL Server 2005, especially for the SQL Server developer. New options, fea...
Inside SQL Server Development
After the announcement last week by Microsoft that there would be no Beta 3 for SQL Server 2005 and ...
SQL Server 2008: Table-valued parameters
A new feature of SQL Server 2008 is Table-valued parameters (TVP). This feature will allow Develope...
Just Wow!! features
Over the years with each release I’m often impressed by some of the features contained in SQL server...
news sql server 2005 Join the most active online SQL Server Community | 计算机 |
2015-40/2214/en_head.json.gz/14095 | Apps etc > Android > Instructions
Instructions: settings
The settings are reached by tapping the Android menu symbol (usually three dots in a vertical line) at the top right-hand corner of any Universalis page and picking "Settings" from the menu that appears. Some manufacturers have changed the standard Android menu symbol, so press whatever you do see at the top right-hand corner. If you don't see anything at all, you must be using a device with a physical menu button: in which case, press that button.
Here is a list of the settings and what they mean.
There is a General Calendar, shared by the whole Church, and then there are local calendars which have saints and celebrations of more local interest. For example, Saint Benedict is celebrated with a memorial in the universal Church but with a feast in Europe, while Saint Willibrord, who isn't in the General Calendar at all, is celebrated with an optional memorial in some English dioceses and a solemnity in the Netherlands.
Not all local calendars are included in Universalis, but an increasing number are. Pick the one that looks best for you.Liturgy of the Hours
Invitatory Psalm: four different psalms may be used as the Invitatory Psalm, although Psalm 94 (95) is the traditional one. Universalis lets you choose whether to rotate between the permitted options ("Different each day") or stick to Psalm 94 (95) permanently ("Same every day").
Psalm translation: the Grail translation is the one used in most English versions of the Liturgy of the Hours worldwide. For copyright reasons we have to use a version of our own in our web pages, so we offer it here as an alternative in case you have got used to it.
Readings at Mass
Readings & Psalms: in the English-speaking world, the most usual translation is the Jerusalem Bible for the Scripture readings and the Grail version of the psalms. In the USA, the New American Bible is used. Universalis lets you choose either. We apologize to Canada and South Africa: we are still trying to negotiate with the owners of the NRSV, which you are using at Mass.
Prayers and Antiphons: If you are using Universalis as a private spiritual resource, the Mass readings of the day are probably all that you want. If you are taking it to Mass with you, you may want the Entrance Antiphon and the other prayers and antiphons from the printed missals. This option lets you choose.
Priest's Private Prayers: You can choose whether to include in the Order of Mass (and in Mass Today) the prayers that are said silently or quietly by the priest.
Extra languages
Gospel at Mass: You can choose whether to view the original Greek text of the Gospel alongside the English. This may nor work on your device: some manufacturers do provide the correct font on their Android devices, others do not, and the app has no way of knowing. (The feature to ask about is “polytonic Greek”).
Order of Mass: You can view Latin or one of a number of other European languages in parallel with the English text of the Order of Mass. This is intended to help you follow Mass when you are abroad. The Mass Today page will also show you the parallel texts, but it will display the daily content (prayers, psalms and readings) in English only.
Liturgy of the Hours: You can choose whether to view the Latin text of the Hours alongside the English.
Set up daily emails: If you like, our web site can send you daily emails with all the Hours or just a selection of them. Press this button to set it up. | 计算机 |
What's Wrong with "HTML5"
In the past year or so, the term "HTML5" has increasingly been picked up by the tech press as the successor to "DHTML", "Web 2.0" or "Ajax". When used by the tech press, it is becoming a generic term for "the next generation of web technology", except that the term "HTML5" is less precise than even that.
Consider the first paragraph of an article about HTML published this week:
HTML5 is the hot topic nowadays. Everyone from Apple to Google and everyone else in between have shown their support for the standard. Word has it that HTML5 is the Adobe Flash-killer. It seems that the World Wide Web Consortium [W3C] — which is the main international standards organization for the World Wide Web — doesn’t agree. If anything, the W3C doesn’t think that HTML5 is "ready for production yet."
The problem with HTML5 appears to be that it currently lacks a video codec. In addition, digital rights management [DRM] is also not supported in HTML5, which obviously makes a problem for various companies.
My engineer friends have all but given up on pushing back against the "HTML5" moniker. After all, it's just a term for the tech press. Everyone who knows anything understands that it means just as little as requests for "Ajaxy animations" a few years back, right? I had started to agree with this line of reasoning. It's true: there's no point in being pedantic on this front for the sake of being pedantic.
Unfortunately, the term, and therefore the way the technology is understood by tech writers, is causing some fairly serious problems.
First, keep in mind that unlike "Ajax" or "Web 2.0", which were pretty benign, vague terms, HTML5 sounds like a technology. It can have "beta" versions, and one day it will be "complete" and "ready for production". Take a look at this snippet from an InfoWorld article:
His advice on HTML5 was endorsed by industry analyst Al Hilwa of IDC.
"HTML 5 is at various stages of implementation right now through the Web browsers. If you look at the various browsers, most of the aggressive implementations are in the beta versions," Hilwa said. "IE9 (Internet Explorer 9), for example, is not expected to go production until close to mid-next year. That is the point when most enterprises will begin to consider adopting this new generation of browsers."
And because HTML5 is portrayed as a "Flash killer", whether it is "complete" or in "beta" sounds really relevant. In comparison, "Web 2.0" was never portrayed as a technology, but rather the ushering in of a new era of web technologies which would allow people to build new kinds of applications.
The truth is, the "completion" of HTML5 is absolutely irrelevant. At some point, the W3C will approve the HTML5 spec, which will mean nothing about the overall availability of individual features. At some point, the other related specs (like Web Storage and Web Sockets) will be approved, and also mean nothing about the overall availability of individual features.
In response to this conundrum, most of my friends immediately throw out the idea of getting more specific with the tech press. And they're right. The tech press is not going to understand Web Sockets or local storage. They're having trouble even grasping HTML5 video.
The thing is, the core problem isn't that the name is too fuzzy. It's what the name implies. HTML5 sounds like a technology which goes through a beta period and is finally complete. Instead, what the tech press calls "HTML5" is really a continual process of improving the web browsers that people use. And honestly, that's how I'd like to see the tech press cover us. Not as group of people working towards a singular milestone that will change the web as we know it, but as a group that has gotten our groove back.
Tech reporters: please stop talking about the current state of web technologies as an event (the approval of the HTML5 spec). There are interesting stories happening all the time, like Scribd, YouTube, and Gmail leveraging newer web technologies to improve their products. Little guys are doing the same every day. Not everything is about whether "Flash is dead yet". It's worth talking about the tradeoffs that guys like Hulu make, which prevent them from moving to web technologies. But make no mistake: large swaths of the web will be using next-generation browser features well before the last guy does. The process of getting there is an interesting story, and one you should be covering.
It is difficult to get a man to understand something when his salary depends upon his not understanding it.—Upton Sinclair
Mankind has always sought mechanical ways to store information. When the computer arrived on the scene it was quickly put to the task. We'll skip ahead to 1970, by which time large corporations had moved their accounting operations to mainframe computers, usually from IBM. [1] IMS
As the volume of computer records increased, so too did demand for a way to organize them. Every problem is an opportunity, and computer vendors, then as now, were only too glad to provide solutions. The king of the hill was IBM's Information Management System (IMS). You may be surprised to learn (I was) that IMS is still alive and kicking in 2010. IDC is even happy to explain why IMS is a good thing [pdf]. IMS is the granddaddy of what were later known — in contrast to Relational — as “hierarchical” database management systems. Grand as it may sound, a hierarchical database is just nested key-value pairs, hardly more technically sophisticated than Berkeley DB [pdf] or, truth be told, the Fast File System [pdf]. If you gave Berkeley DB 40 years and IBM's resources, you'd have IMS. So what's the problem you ask? Ah, yes, the problem. Hierarchical databases leave a lot to be desired. To name two voids: They don't do very much to check the data for consistency, and they're notoriously inflexible. The Promise
E. F. Codd
The shortcomings of pre-relational databases were well known at the time. Into the breach stepped E. F. Codd with his seminal paper, A Relational Model of Data for Large Shared Data Banks (1970) [pdf]. Codd provided something brand new: an algebra and a calculus for operating on data arranged in tables. It was the first and, to date, only mathematical system to describe database operations. Put another way: every other system of storing and retrieving data rests on the same thing most software does: intuition and testing. Codd brought mathematical rigor to databases. Experts in the field understood the ramifications of what Codd had wrought. Listen to this recollection of Don Chamberlin, a contemporary:
… Codd had a bunch of queries that were fairly complicated queries and since I'd been studying CODASYL, I could imagine how those queries would have been represented in CODASYL by programs that were five pages long that would navigate through this labyrinth of pointers and stuff. Codd would sort of write them down as one-liners. These would be queries like, "Find the employees who earn more than their managers." [laughter] He just whacked them out and you could sort of read them, and they weren't complicated at all, and I said, "Wow." This was kind of a conversion experience for me, that I understood what the relational thing was about after that.
Would that we all had been there.
It would take IBM 10 years to convert Codd’s ideas into a product. Buy the mid-1980s, the RDBMS market had all the names you might recognize: DB2, Sybase, Oracle, and Ingres. All were based on Relational Theory. The revolution had arrived, and it was not televised. It would be written in SQL. The Delivered
Underpinned though they were by mathematics, the first products were pinned down by the computing power of the day. It was felt they could not support OLTP, that their main use would be for decision support, and that many users wouldn't be programmers at all, but non-technical people. These end users wouldn't be trained in mathematics and certainly weren't going to be up on relational calculus. A more (yes, here it comes) user friendly way would be needed for mere mortals to query the system. In keeping with the fashion of the day, it was (yet another) “fourth generation language”: SQL. That's right: We came to be saddled with SQL because a real relational language, no matter how powerful and elegant, would be too hard. Thanks, Pops. Perhaps they were right, though. Perhaps, were it not for SQL's approachability, so-called relational DBMSs would never have achieved their popularity. After all, it wasn't only end users who were unversed in relational calculus, and it's not as though every programmer, or even many programmers, can or are willing to learn some math in order to their jobs. Indeed, that remains the case today….
Further to the point, one vendor, Ingres, in fact did and does offer a non-SQL language, QUEL [pdf], based on relational calculus. If you're like most people you've never heard of it and until now never had reason to think there might be another, yea, better query language. Ingres quickly added SQL when it found it couldn't compete otherwise. (None other than Chris Date implemented Ingres's SQL interpreter, and he did it as a front end to QUEL!) The market for database management software is no different from other IT markets: faddish, ignorant, profit-driven. The vendors' customers (in aggregate) aren't particularly well informed about relational theory. They face challenges and shop for solutions that vendors are only too happy to provide. If relational theory is hard to understand and explain — never mind hard to implement and/or seen by customers as an impediment — well, education isn't the vendor's business. The vendor swaps licenses for money, and the customer is happy. Or, happyish. Competitive Cacophony
Although Codd and later many others published papers on relational theory, the implementations were separate and disjoint, mutually incompatible. Not only were their SQLs were different, surely, but so was (and is) everything else, in particular
The wire protocol. Of course there was no TCP/IP in 1970, let alone an Internet. RDBMS vendors had to contend with an array of communications protocols, each one proprietary to the hardware vendor and all, thankfully, now defunct. Likewise, every RDBMS wire protocol was equally proprietary and different, with the result that no client can communicate with another vendor's server. (This is even true of Microsoft and Sybase, which at one time did interoperate.) One difference: TCP/IP extinguished the other protocols. Nothing similar has happened at the RDBMS layer. The client API. Most languages current in 1970 had built-in I/O; C wouldn't demonstrate the superiority of a standard I/O library until the mid-1980s, by which time the RDBMS vendors had each written their own. Those ancient languages — COBOL, FORTRAN, PASCAL, BASIC — have each found their dustniche in history, but the clunky RDBMS APIs are still with us, each with its own odd personality. We are accustomed to the idea that we can use whatever email software we please, use any web browser we please. But when it comes to database servers, proprietary lock-in was and remains the rule of the day. Today
Stagnation and the Triumph of Ignorance
Apart from competitive machinations, SQL DBMSs are not much changed from 1985. SQL has improved a bit. Machines are certainly faster, which has made the DBMS easier to administer. There are more graphical tools, which are certainly popular and sometimes convenient. But someone cryogenically frozen in 1985 could learn to use a 2010 system in an afternoon. The same cannot be said of, say, the COBOL or C programmer faced with Java or C#. This is not a good thing. We have met the enemy, and he is us.—Pogo
The IT people of the 1970s can be forgiven for demanding a simplistic solution to a complex problem, for being unaware of the then-new relational theory. They probably should be saluted for recognizing the value of the technology even without understanding its foundation or full potential. That excuse is not available to us in 2010. The lack of progress in this area reflects our collective failure to learn the theory and seize on its promise.
Relational theory has made incremental progress, but you'll have a very difficult time finding support for it in commercial (or noncommercial) offerings. Or, indeed, among the database experts you know. SQL's shortcomings have been known from the start [pdf]. Its warts become obvious to anyone who encounters the GROUP BY clause. But instead of demanding something better, something correct, what demand there is is for magic, for ways to “make the SQL go away”. SQL should go away, but not by concealing it behind an Object Relational Mapper or some dynamic language construct that melds it into Python or Ruby or Lua. SQL should go away because it's not serving its intended purpose or audience. BASIC died; every other 4GL died; every other Pollyannic end-user query idea died. Yet SQL lives on, a zombie of the 1970s, when television was wireless and telephones were wired. Think of it: Why do so many programmers dislike SQL? After all, SQL is a tool designed for the non-technical, now used primarily by programmers. Someone spends years mastering C++ or pretty much any other modern language you care to name, and there, smack in the middle of his code, like sand in the gears, is that blast from the past, SQL. Could it be programmers think it's impeding their work, that it's stupid? Lots of programmers, though, wish the database was stupid, would just do as it's told instead of rejecting their queries or data. They don't regard relational theory as their ally; they regard it as an obstacle. (Strange they don't regard the compiler they same way.) Ignorant of relational theory, they suppose the DBMS is just another heap of old code and out-outmoded ideas, and yearn for a chance to do and use something shiny and new. Worst are those ignorant of history, because they're doomed to repeat it. They're pouring IMS wine in new XML or NoSQL bottles. These nonsystems — sometimes called post-relational, the horror! — lack even the most rudimentary DBMS features: consistency checking and transactions, to name two. Yet there they go, charging at Cloud9Db or somesuch, cheered on by vendors happy to pocket their money. It's good they're young, because it will take 40 years to catch up to IMS. Potholes on the High Road
Well, here's another nice mess you've gotten me into.
But what of the programmer who does take the time to develop at least a passing knowledge of relational theory? His is not a happy lot, for ignorance is bliss. He will come to know the awful state of most databases, with their dog's breakfast of naming conventions and endless normalization failures, only some of which are acknowledged mistakes. A badly coded application lies dormant and invisible (perhaps in a version control system), but a badly designed table is out in the open for all to see and, worse, for all to use. Such tables are common and frequently important. And, because important, cannot be changed. He will find his boss and his peers do not share his enthusiasm. Yes, they'll say, it's a fine theory, in theory. But not in practice. And he will find, finally, that they are right, even if they lie in a bed of their own making. The databases he meets are not so much designed as accumulated, forcing his queries to be much more complicated than strictly necessary. The server will fail him, too: His finely wrought queries will, often as not, defeat the query optimizer's poor power to reduce it to an efficient plan. If he finds himself wishing for a new shiny instead, who could blame him?
All Hail the Status Quo
The only thing necessary for evil to flourish is for good men to do nothing.
The RDBMS market stabilized (or, rather, stagnated) in more or less its current form well over 10 years ago, arguably 25 years ago. Why? One need not look for a complicated explanation; a simple one will do. Vendors are happy because the barriers to entry are very high. The lack of interoperability and the complexity of RDBMSs makes the cost of switching vendors exorbitant. Consequently vendors can charge high prices relative to their costs with little fear of the customer walking away. Proof can be found both in the lack of innovation and in the profitability of the market leaders. Customers are happyish because RDBMSs are highly reliable pieces of software that have successfully managed their data for decades. They've made a big investment in SQL and have no particular eagerness for a better language, particularly one that requires higher math to understand. Indeed, management continues to seek dumbed-down tools, as it has done for decades. Consider Edsger W. Dijkstra’s recollection of his Algol 60 days:
It was the obsession with speed, the power of IBM, the general feeling at the time that programming was something that should be doable by uneducated morons picked from the street, it should not require any sophistication. Yes … false dreams paralyzed a lot of American computing science.
Application programmers generally view the RDBMS as a foreign beast, unloved and hard to tame. Witness the unending discussion of the “object-relational impedance mismatch”, the many object-relational mapping frameworks, and other efforts to minimize the programmer's exposure to the RDBMS. While some of the disaffection surely stems from ignorance, the unlovely client API libraries certainly provide motivation, too. Even experts in SQL have a vested interest in the status quo. Relatively few have studied relational theory, meaning most don't realize what a misbegotten steaming pile of shaving cream SQL is. On the contrary, they've mastered it. To them, SQL is a marketable skill, and a better language would threaten to make that skill obsolete. Even the promise of reliable query optimization has little appeal to the person paid to know the tricks and rewrite slow queries. Is There No Hope?
Men switch masters willingly, thinking to make matters better.
There is always hope. How much depends on how many. In the RDBMS market, as in any market, sellers seek to satisfy buyers. The RDBMS market is different because the buyers are ignorant and that's good for the vendors. The surest way to perturb the stability would be smarter buyers demanding something useful. The history of IT amply demonstrates the odds of that are long, but it definitely would work. No part of $20 billion a year will be simply refused.
What's new since the ossification of SQL is the Internet and the emergence of free software as a viable corporate technology stack. Unlike in the 1970s, demand can create its own supply: free database projects could implement a better language, a better library (just one!), and agree on a single wire protocol. Any significant success would move the vendors. Look what baby-toy MySQL did: by creating a simple, hassle-free, usable RDBMS, it forced every major vendor to offer a scaled-down version of its flagship product at no cost. Now imagine a world in which there are several interoperable servers sporting a real query language in addition to poor old SQL. The proprietary vendors would have to pay users to take their wares. Or, innovate. What do you think they'd do? I wager they'd follow the money. So, yes, there's hope. If enough people demand a true RDBMS, if enough people build one, we could start a revolution. Not on television. Probably on YouTube, and definitely not in SQL.
release date:May 24, 2011
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 dvd for installation on a x86 platform back to top | 计算机 |
Posted March 18, 2010 September 24, 2014 By 0
REDMOND, Wash. — March 18, 2010 — Microsoft Corp. today announced that Mitsubishi Caterpillar Forklift Europe BV (MCFE), a leading forklift manufacturer based in the Netherlands, received the Gartner CRM Excellence Award in the category of Efficiency for its customer relationship management (CRM) project. Through this awards program, Gartner Inc. and 1to1 Media recognize companies that are doing an exceptional job at bringing together vision, strategy, customer experience, organizational collaboration, process, IT and metrics to create value for the customer and the enterprise.
MCFE, a company that manufactures, sells and distributes forklifts and related spare parts, was able to streamline order entry and processing, IT support requests, and dealer communications. This resulted in the company reducing the time for it to customize business applications from 35 to 10 days, saving development costs by 60 percent and improving order processing from five minutes to 90 seconds.
“Businesses need to deliver essential line-of-business applications more quickly and at a lower cost,” said Brad Wilson, general manager of Microsoft Dynamics CRM. “We are honored that MCFE is recognized by Gartner and 1to1 Media for its project. This implementation is just one example of how our customers are unlocking new value from their existing Microsoft investments and delivering business solutions that are easy to design, easy to manage, and, for their users, easy to use.”
In addition to MCFE, a broad range of companies around the world are using xRM, the flexible application development framework of Microsoft Dynamics CRM, to accelerate the development and deployment of high-impact business applications — whether on the premises or in the cloud via Microsoft Dynamics CRM Online.
Customer successes include the following:
The Arbor Day Foundation
, a U.S. nonprofit organization, has designed 15 relationship management applications to organize charitable events, grant management and outreach for fundraising efforts. The foundation estimates that it reduced application development time by 300 percent, enabling more efficient and responsive interaction with partners and sponsors.
CAPTRUST Financial Advisors
, a U.S.-based financial services organization, has used xRM to design more than 20 business applications, including an online fiduciary management tool for its retirement plan sponsors. The company was also able to build a portal for financial advisors twice as fast than with any another development strategy.
Comag Marketing Group, a U.S.-based marketing firm, has consolidated more than 40 disparate relationship management applications and is planning to consolidate another 80 applications. By standardizing on Microsoft technology, it has drastically reduced the cost of administering all these applications.
Ensto
., a manufacturer in Finland, has designed several applications, including a supplier relationship management and quality relationship management solution. With xRM, Ensto is able to save $138,000 (U.S.) with each solution it implements.
Melbourne Business School
, in Melbourne, Australia, has designed several relationship management applications to manage prospective students, alumni, donors, guest lecturers and other constituents. The school is able to process applications 50 percent faster, for 50 percent less cost.
The New Zealand Ministry of Economic Development
(MED) has developed several relationship management solutions, including a Consumer Affairs Reporting Tool, an Energy Safety Intelligence application and a Grants Information Management System. With the Microsoft Dynamics CRM business management solution already in use, it became clear that it offered a framework that could be extended cost-effectively to develop and deliver other new custom applications at an accelerated pace across other parts of the business.
The North Carolina Department of Crime Control and Public Safety
developed alcohol and lottery permit review applications that increased agent productivity by 80 percent and reduced the new application process time from five days to one.
The Product Release and Security Services
(PRSS) team at Microsoft designed product release relationship management solutions that increased product release compliance to 98 percent and accelerated product time to market across the company by up to 94 percent.
Travel Dynamics International
, a leading luxury cruise operator in North America, designed a reservation and booking relationship management system. It has experienced a 400 percent increase in productivity and projects an annual gain in sales volume of 10 percent.
ValMark Securities Inc
., a U.S. financial services firm, developed several custom solutions across several departments, including a policy relationship management solution known internally as the ValMark Back Office Support System (VBOSS). VBOSS has contributed to an 89 percent improvement in customer satisfaction and increased staff productivity.
“We set a time frame of 30 days from start to finish for building and rolling out an xRM application. We comfortably hit that target,” said Mike Ashley, IT director, Arbor Day Foundation. “We estimate that it would have taken three months or more to extend our old system in the same way.”
“Each client comes to us with a unique set of business problems and challenges to solve,” said Mark Barrett, managing partner at Ascentium Corp. “With xRM, we’re able to create specialized applications in as little as a day as opposed to months. We’re able to help our clients achieve their business goals quickly and increase our value as a partner to their business.”
Examples of how customers are benefiting from using Microsoft Dynamics CRM in new ways demonstrate Microsoft’s promise of the Dynamic Business, a vision for helping companies realize their full potential through the strategic use of flexible business applications that remain relevant as their business needs evolve.
More information about how customers are saving time and money while driving successful relationships with Microsoft Dynamics CRM through xRM usage can be found at http://crm.dynamics.com.
Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://www.microsoft.com/news. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://www.microsoft.com/news/contactpr.mspx.
The 2012 Inbox: Grow Your List Now With Mobile and Social
Jeanne Jennings | July 12, 2010 | Comments
Look for opportunities to leverage social and mobile "inboxes" to drive relationships and revenue.
This is one of six "The 2012 Inbox" columns this month, as the e-mail columnists of ClickZ examine the near future of e-mail marketing.
When we surveyed readers for this series of columns, the biggest challenge identified for 2010 was cultivating an active list with high quality subscribers (33.3 percent of those surveyed). Looking forward, readers felt that the proliferation of mobile devices (47.4 percent) and the abandonment of e-mail in favor of social networks (25 percent) would have the biggest impact on e-mail marketing in the future.
I don't disagree. But while many e-mail marketers look at these as threats, I see them as opportunities, especially if you start leveraging them in your favor now. One way to do this is to focus on using them to grow your e-mail list.
Mobile and social are new channels (or sub-channels) of communication. You don't have to use each in a vacuum; in fact, multichannel efforts usually produce better response than single channel efforts.
Case in point: US Airways encouraging customers, on cocktail napkins, to send a text message to enroll in its frequent flyer program. This is a brilliant use of offline and mobile to build an e-mail list. It's much easier to send a text to sign up for e-mail than it is to send an e-mail or visit a website. This takes a traditional form of offline acquisition and makes it better by integrating mobile marketing.
The napkin is a nice touch, but it's not required. If you're communicating with someone via text message, it's nice to ask them to visit your website to sign up for your e-mail newsletter. But it's more efficient to just ask them to text their e-mail address to you to be added to the list.
Quick note on texting - it won't ever replace e-mail. The character limitation and text-only format is great for some things, but not so good for others. I see text and e-mail like radio and television - both will exist and we (senders and recipients) will use the channel that's best for the type of communication involved.
Social networks also hold a huge opportunity for e-mail marketers - in a variety of ways. Social networks are like "forward to a friend" on steroids. With a few clicks, a reader can share your e-mail content with everyone on their social media contact list(s). Studies have been done showing the success of this - and as long as social networks are around, this will only happen more, not less.
Savvy marketers are looking at ways to leverage social networks for e-mail list growth. The obvious, and most simple way to do it, is to include social sharing links in your e-mail message. This allows readers to easily post items from your e-mail messages (articles, offers, etc.) to their social media networks for their contacts to see.
The link readers post will take people to the Web page where the content resides (if your e-mail content doesn't also reside on the Web somewhere, you need to change that). In order to complete the loop, be sure to include a call-to-action to sign up for your e-mail program on every one of these content pages.
But that's not all I have to say about social. People who see social as being the slayer of e-mail are missing what I think is a critical point: Facebook, LinkedIn, and the other primary social networks all include an inbox.
It's not the inbox that we, as e-mail marketers, are used to sending to. It's a separate inbox, one which can usually only be reached by individuals and companies that the recipient trusts. This creates a level of permission that's completely controlled by the recipient. It's non-transferrable and revocable at any time.
Unlike traditional e-mail, subscribers don't hand over their e-mail addresses. They "friend" your organization or "link" to your company via the social network. As long as you stay in their good graces, you can have an online relationship with them via the social network and its inbox. Start sending spam or irrelevant content and the reader can sever the relationship with you instantly.
People you want to communicate with may be more willing to give you access to their Facebook or LinkedIn inbox than their traditional inbox, because they have control. Over time, you may (a) gain their trust enough to gain access to their traditional inbox or (b) find that the social media inbox relationship works fine for both of you and keep it at that. Either way, dabbling in social makes sense, as long as you continue to invest in your tried-and-true e-mail program.
Many organizations, both big and small, are already leveraging this "second inbox." Case in point: The Email Experience Council (eec).
The eec wants to communicate with its members and prospects via whichever channel is best for the recipients. The e-mail I received in my traditional inbox looks different than the message I received in my Facebook inbox out of necessity. But both have the same goal - to let me know about this webinar and entice me to register. Two different inboxes, two different creative executions, but one primary goal.
Bottom line: embrace the new channels. Use them to grow your traditional e-mail list and look for opportunities to leverage these new "inboxes" to drive relationships and revenue.
Contact Jeanne
Jeanne Jennings is a recognized expert in the email marketing industry and managing director of digital marketing for Digital Prism Advisors. She has more than 20 years of experience in the email and online marketing and product development world. Jeanne's direct-response approach to digital strategy, tactics, and creative direction helps organizations make their online marketing initiatives more effective and more profitable. Digital Prism Advisors helps established businesses unlock significant growth and revenue opportunities in the digital marketplace; our clients learn to develop and implement successful digital strategies, leveraging data and technology to better meet bottom line goals. Want to learn more? Check out Jeanne's blog and Digital Prism Advisors.
Viglen has been selected as sole supplier of Managed services for Personal Computers and Notebooks by the University of East Anglia. The three year contract is estimated to be worth up to £3million.
Following their recent successful deployment of an HPC cluster for the University, Viglen won a rigorous tendering process to supply up to 1000 desktops and notebooks in each year of the contract starting on 1st January 2011. The contract will run until 31st December 2013 with a possible extension to the end of 2014.
The University wanted to minimise total cost of ownership whilst providing a high level of service to end users. Viglen showed strong evidence of a stable product line and cost effective upgrade and service management options. In addition, Viglen were able to satisfy the Universities requirement for sustainable solutions with energy saving configurations and disposal of redundant systems and packaging in line with WEE regulations. Viglen’s eco-friendly blue boxes will be used in the deployment of PC’s to reduce waste.
Viglen were invited to bid, along with other suppliers via the National Desktop and Notebook Agreement (NDNA). Recently awarded the number one spot in Lot 3, One-Stop Shop (Desktops and Notebooks) of the NDNA, Viglen were judged to offer best overall value and highest standards of service. This prestigious position is further recognition of Viglen’s pedigree in the IT service providers market and allows NDNA members to choose whether to purchase desktops and notebooks from Viglen with or without additional tendering.
In July 2010 Viglen were selected to partner with the University of East Anglia in the provision of a High Performance Computing Cluster Facility. The two phases of the two year contact are worth a total of approximately £750,000 and were awarded under the National Server and Storage Agreement (NSSA).
Viglen are very happy to be engaging with the University of East Anglia on another project so soon after our recent HPC partnership. We are excited to be involved with such a prestigious institution and look forward to the continuation of our flourishing relationship.
Bordan TkachukCEO, Viglen
About The University of East Anglia
The University of East Anglia (UEA) is one of the top research institutions in the UK and internationally recognised for excellence in teaching. Ranked eighth best for science among UK universities, it has a new medical school and is a leading member of the Norwich Research Park, one of the largest groupings of biotechnologists in the world. It has over 3,500 employees and 15,000 students. | 计算机 |
Google Plus Profile
The Complete Index & Map
Twelve Mile Circle
Delphia
On July 17, 2014 · 1 Comments
The start for this research came from a recent tragic incident, a drowning at Triadelphia Reservoir in Maryland. My mental sympathies extended to the young victim’s family and friends of course. Afterwards I began to wonder how the reservoir got its unusual name, with a triad (a group of three) applied to "Delphia."
Philadelphia Sunset by Peter Miller, on Flickrvia Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)
The most common application of the suffix Delphia had to be the City of Philadelphia (map) in Pennsylvania, colloquially known as the City of Brotherly Love.(¹) Regardless of whether this unofficial motto should apply, and it’s open to debate, the phrase derived from a colonial-era translation of ancient Greek. Philadelphia was "taken by William Penn to mean ‘brotherly love,’ from philos ‘loving’ + adelphos ‘brother’."
Peeling that back farther, the ancient Greek word δελφύς (delphús) — and apologies in advance if the original word rendered incorrectly on the page — meant womb. The same term also applied to Dolphin, essentially a "fish" with a womb. The Oracle of Delphi in ancient Greece originated from the same root, and according to legend "Apollo first came to Delphi in the shape of a dolphin" which created a nice symmetry with the various word meanings.
Let’s set all those aside. My command of ancient Greek was even worse than my understanding of living foreign languages. I probably butchered the explanation. Let’s focus on a modern translation of the suffix to mean "brother" and return to Triadelphia.
Tri(a)delphia
Triadelphia Reservoir by Doug Miller, on Flickrvia Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0) license
Triadelphia, the reservoir in Maryland that straddled the Montgomery County / Howard County line derived its name from an earlier placename, a town called Triadelphia. Spellings often dropped the initial "a", and in fact the USGS listed both Triadelphia and Tridelphia as acceptable variations. Residents abandoned the town in the later part of the Nineteenth Century after a series of floods along the Patuxent River. Its former site was later submerged beneath the waters of the reservoir. The Sandy Spring museum explained the name,
Triadelphia ("three brothers") was founded in 1809 by brothers-in-law Thomas Moore, Isaac Briggs, and Caleb Bentley, who married Brooke sisters. Its water wheels powered a cotton spinning mill… Around the mills sprang up a structured little city… The town throbbed with 400 people.
That answered the question of three brothers. Similarly another Triadelphia, this time in West Virginia, seemed to have three men associated with its founding as well (map). Numerous sources speculated that perhaps these men were three sons of an early resident, the town’s first mayor, Colonel Joshiah Thompson. Research conducted in 1941 as part of the Depression-era Work Projects Administration offered a different explanation however. It attributed the name to three close friends who settled in the area circa 1800 and donated the townsite, the previously-mentioned Thompson along with Amasa Brown and John D. Foster.
I discovered a final Triadelphia in Morgan County, Ohio, via the Geographic Names Information System. The "History of Morgan County, Ohio" mentioned Triadelphia however it did not provide an explanation beyond "It was laid out in 1838 by A. Roberts." That book was published in 1886 so the source of the triad was apparently unknown or unworthy of mention even back then so it remained a mystery to me. I also found a Flickr set on the abandoned Deerfield Township school located in Triadelphia (also Google Street View) although that went down a bit of a tangent.
Profile of Speer Pavilion, Ouachita Baptist University by Trevor Huxham, on Flickr, via Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0) license
The more well-known Arkadelphia had to be
Does OpenStack need a Linus Torvalds?
OpenStack has been dubbed by some enthusiasts as the Linux of the cloud - an open source operating system for public or private clouds. But there's one stark difference between the two projects: OpenStack doesn't have a Linus Torvalds, the eccentric, outspoken, never-afraid-to-say-what-he-thinks figurehead of the Linux world.
Brandon Butler
|Network World US
OpenStack has been dubbed by some enthusiasts as the Linux of the cloud - an open source operating system for public or private clouds. But there's one stark difference between the two projects: OpenStack doesn't have a Linus Torvalds, the eccentric, outspoken, never-afraid-to-say-what-he-thinks leader of the Linux world.
Torvalds personifies Linux in many ways. OpenStack doesn't have that one central figure right now. The question is: Does OpenStack need it?
MORE OPENSTACK: OpenStack: We can tell Amazon what to do, instead of the other way around OPENSTACK FUD? Gartner report throws cold water on uber-hyped OpenStack project Some would argue yes. Torvalds, because of the weight he holds in the project, calls the shots about how Linux is run, what goes in, what stays out of the code, and he's not afraid to express his opinions. He provides not only internal guidance for the project, but also an exterior cheerleading role.
Others would say OpenStack does not need a Torvalds of its own. The project is meant to be an open source meritocracy, where members are judged based on their code contributions to the project. OpenStack has been fighting an image that the project is just full of corporate interests, which is part of the reason Rackspace ceded official control of the project to the OpenStack Foundation recently.
What would a Torvalds of OpenStack do? For one, he or she could provide an authoritative voice for the project. The position would allow someone to express a vision for what OpenStack will be, who and what is in the project and where it's going.
Perhaps most importantly, he or she could say no. As OpenStack continues to gain momentum, more and more companies will attempt to leverage the buzz around the project and call themselves OpenStack when they're not. A Torvalds of OpenStack could help keep that in line.
For example, when VMware controversially applied to be a member of OpenStack, there was debate within the foundation's board of directors about if the company would be let in, which it ultimately was. If there is one central figure for the OpenStack project, that decision could have been much easier, instead of taking hours of deliberations and creating what some consider to be wedges within the project.
RELATED: Oops. OpenStack board member says letting VMware into project was a mistake Here are some people who could step up to the plate and be the Linux Torvalds of OpenStack.
Jonathan Bryce
If anyone is the official face of OpenStack right now, it's Bryce. A former member of the OpenStack team at Rackspace, Bryce now serves as the inaugural executive director of the OpenStack Foundation, which coordinates high-level decisions about the future of the OpenStack project. He's the default spokesperson for the project and has clearly taken on a leadership role. At the most recent OpenStack summit, he served as an emcee throughout the show and opened with the first keynote address. Along with his right-hand man Mark Collier, COO of OpenStack, Bryce is pretty much running the day-to-day operations of the project. One question about Bryce: Is he provocative enough to be a Torvalds for OpenStack? He's generally a more conciliatory type than a raucous pot-stirrer. But, who says OpenStack's version of Torvalds needs to be a mirror image of Linus's style?
One of the co-founders of the OpenStack project, Kemp is in many ways seen as the brainchild behind the OpenStack movement. While CTO of NASA, he led the team that created Nova, the core compute engine that makes up OpenStack. At the recent OpenStack Summit Kemp had a prominent keynote role in which he articulated the promise of the OpenStack project. He has the vision for OpenStack and he's all in with the project too, having launched his own startup Nebula that has bet big on OpenStack.
Joshua McKenty
Another co-founder of the OpenStack project from his time at NASA, McKenty is seen by some as a face of the startup community that has developed around OpenStack. His company, Piston Cloud Computing is a pure-play OpenStack startup that takes the project's open source code and makes it "enterprise ready." He's certainly outspoken enough, always willing to share his two cents on anything related to the project, or the tech industry in general, is edgy and has no problem being in the spotlight. He's about as close to Torvalds as you'll get at OpenStack right now, at least in terms of personality and style.
Alan Clark or Lew Tucker (right)
The two men elected chair and vice chair of the newly formed OpenStack Foundation have clearly taken a leadership role within the project. Clark is SUSE's open source director and Tucker serves as Cisco's Cloud vice president and CTO, but he's also worked at Sun and SalesForce.com. Combined, Clark and Tucker seem to provide a steady, experienced hand to guide the project. Both are non-controversial, intellectual visionaries who clearly have the best interests of the project in mind, plus represent the linkage between the project and major corporate sponsors and partners.
The OpenStack Foundation
OK, so the entire 24-member foundation isn't one person, but in many ways the creation of the Foundation is meant to be the unifying voice of the project. But can a Torvalds of OpenStack really be a group of 24 people? Already we've seen some divisions within the group, such as around the decision to let VMware in. That hardly makes the Foundation a singular voice for the project. Rather, it's more of a conglomeration of whatever a majority of the group can agree on.
Does OpenStack, the open source cloud computing project, need a Linus Torvalds?
Perhaps the reason there is no Torvalds of OpenStack is because the forces that be within OpenStack don't want it that way. OpenStack is meant to be an open source project that anyone and everyone is welcome to, if they contribute back to the community. If there was an outspoken Torvalds-equivalent at OpenStack, perhaps it could undermine what the project is all about. OpenStack is not about one person, it's about a project, and having Linus-lookalikes could undermine that.
Network World staff writer Brandon Butler covers cloud computing and social collaboration. He can be reached at [email protected] and found on Twitter at @BButlerNWW.
Guest said: It's definitely not McKenty. That guy is a loudmouth kid, and his company contributes close to ZERO code to the project.
By Spencer . November 29, 2013 . 6:00pm
Siliconera spoke with Manami Matsumae who composed music for a little game called Mega Man. She also worked on U.N. Squadron, Magic Sword, and is the composer on Mighty No. 9. In this interview, we talked about the art of making Mega Man style music how she got involved with Shovel Knight.
How has making video game music different from the NES days when you had to deal with hardware limitations, compared to now?
Manami Matsumae, Composer: Nowadays, since we can use any kind of sound source, and that there are no limitations to the number of tones we can use, a variety of musical expressions have become possible. Therefore, people who create music nowadays are in a situation in which they can easily create music of their own. I personally have no problems with that, but during the NES era, there was a limitation to the tones that could be used, which meant that we could think up anything and everything using a piano, yet it would be impossible to implement the music as is. So, we had to do plenty of trial and error in order to make sounds, and I find that there was a lot of virtue to making music using sounds produced under such limitations.
As of now, I’m a member of Brave Wave, a music label which just began to be active recently. Mohammed Taher is the main lead behind the whole idea, and is the one making albums. I am involved with several songs myself as well as helping Mohammed with artistic decisions. Creating music specifically that will NOT go into games is actually quite fun! There are many musicians in the world, and I hear melodies, chords and rhythms that I would never have thought of on my own. Those are quite stimulating to me, and my own music is better as a result. I ask readers to please look forward to what Brave Wave is making. We will make good music together.
Do you find it interesting that chiptunes is a style of music now?
Chiptunes have become something quite cool these days. For the last year, I have been working on a number of chiptune tracks, so I have listened to some composed by other famous people, all of whom are really amazing! They really understand the nuances of musical tones when creating songs. There are many people today who enjoyed music produced using the NES sound library who will find chiptunes nostalgic. I see this trend growing further. Can you tell us the process of making music for a Mega Man game or Mighty No. 9? How is it different from other games that you’ve worked on?
Whether my music goes into Mega Man or Mighty No. 9, the process is basically the same. I look at the character, as well as the game in motion, and then compose the song. For Mighty No. 9, which I recently made, all I had was a picture of the character (when I composed the song, the video it was featured in did not exist yet). However, some keywords that helped me imagine the game were “near future,” “nostalgia,” “sense of justice,” “power” and “love.” I made the song with those keywords in mind. Making music after getting a sense of what something’s image is allows you to portray the intent behind the song easily to another person.
Going back to the original Mega Man, did you ever make music for a robot master that didn’t make it in the game? Which song is your favorite and which one was the hardest to create?
I remember the development of Mega Man taking around two months. It was half a year after I had entered the company, so my work speed was slow, yet I had to make both the sound effects and music at the same time. So, there were certainly no other songs for a robot master that did not make it into the final game. There were only six bosses.
I like the song that comes after selecting a stage most. Even now, there have been several arrangements, and the song is still loved by many. Guts Man was the hardest one to make, and I was hilariously bad at his stage. (laughs) I had imagined a stage with lots of stones and rocks and such, but it turned out to be quite different. The song’s tones have this sense of deception in it as a well!
U.N. Squadron (called Area 88 in Japan) is one of my favorite games and part of it is because the soundtrack is so energetic. Can you tell us any development stories from when that game was in development?
Thank you so much! I am very happy to hear you say that. I love the music in that game as well. I’ve since forgotten some details since this occurred 25 years ago, but when I made the song for Area 88, there were pictures of the game available, but the game itself wasn’t up and running yet. Unfortunately, we didn’t have much time to waste, so I had to make the music anyway. Three days later, I took the music I had made, and by then, Area 88 was up and running. I tried matching the song with that stage, matched the scene where the scenery rises from the city onto the sky to the song’s chords, and everyone was quite surprised.
How did you get involved with the Shovel Knight team? And why do you think so many developers are trying to create Mega Man-like games?
I actually cannot speak any English at all. Therefore, I never thought about working together with a game company from outside of Japan. That said, Mohammed from Brave Wave Productions contacted me about the game. We had worked together prior to Shovel Knight, and he told me that I was well known as the composer for Mega Man, so he had contacted Yacht Club Games to see if they would be interested in merging our waters together. Mohammed ended up emailing me and asking me to make two songs for Shovel Knight, and I ended up working together with them. (laughs) Recently, there have been quite a few games that harken back to the style of 8-bit titles! I wonder why. This is just my opinion, but personally speaking, games these days seem to allow for anything and everything to be put in them, and I feel that there is too much of it. While there are people who think, “technology is always evolving day by day, so it’s great to play games with beautiful graphics while listening to flashy music,” there are also those who just want to enjoy simple games, which is why development studios are going the other way as well.
If you were going to give advice to a young composer creating a score for an action game what would you tell him or her? How about a flight game like U.N. Squadron?
I myself am still studying, so I don’t think I’m in a position to be giving advice to anyone. That said, if I had to give some pointers to others, I think it’s best to think of how to make music by looking at a game screen and trying to understand the message it is trying to tell, and how the music can make that game more exciting. If it’s an action game, then go with a theme that shows harshness or fighting spirit. For a game like U.N. Squadron, go with a song with speed and solidarity, and so forth. Every game has a main theme, so you should make music that goes along with it.
As a treat for the Siliconera readers, Ms. Matsumae shared the "One Shot, One Kill song" from the World 1-2 Encore album and a remixed version done in the style of Mighty No. 9.
Download the original track and the Mighty No. 9 style remix. Read more stories about Interviews & Mega Man & The Mighty No. 9 on Siliconera.
Vince
May I seek permission to share this at M#9′s forums? Of course, I will put up the source as from this site. ;)
Zero_Destiny
Of course we’re a public blog. So long as there’s proper credits it’s all good. Just remember to link to this article in question. :)
GH56734
How kind of him to share those soundtracks :)
Bigabu Beaze
Whens expiration date on this news Spence?
FlyingPony
Cool beans. I always like Megaman soundtrack, so can’t wait to see how this turn out.
Shippoyasha
My Vita is BEYOND READY for this game. I love Megaman soundtracks, but I can only take so many soundtrack and indie game releases (as awesome as they are). Finally the creator of Megaman, Keiji Inafune is back and this time with less game company politics weighing him down!
Notquitesure?
If their making chiptune it would be awesome to see some collaboration with She (Atomic, coloris….)
Kurizu208
AABAR
Sorry to be a nitpick, but I thought World 1-2 is the original & Encore is the remix? – still, Manami’s work is awesome!
Haha I remember Area 88, loved that game as a kid but boy was it hard at the time.
colorblindnightmare
I love Matsumae, what a kind, enlightened human being. To say nothing of her talent!
Monday, 5 November 2012 09:08 GMT
Hotline Miami is the gritty crime title from Dennaton Games. VG247’s Dave Cook spoke with creator Jonatan Söderström to discuss the game’s development and his brush with piracy.
Set in the 1980s, the game stars an amnesic hitman sent out into the Miami’s seedy, neon-lit criminal underbelly to take slaughter goons.
It’s a brutally difficult game. Line of sight, stealth and attack timing are all crucial to surviving. But it’s incredibly rewarding when you succeed.
The game’s soundtrack is worthy of note. With dirty synth stabs and an ’80s vibe, it makes for memorable listening.
Interested? Download Hotline Miami on Steam and try it out.
How do you get a job in the games industry? To the freshly graduated coder or designer, the sector may appear to be an impenetrable fortress under the command of big players like EA or Activision. But if you look closely at current trends, you’ll see that people have begun to dig under its walls and into the compound, essentially creating their own back door. New routes to market have opened. It isn’t mandatory to serve for years on a studio’s food chain, slowly rising up the ranks to the position you think you deserve. You can create that role for yourself. Several heavyweight figures have begun weighing in on this scene, offering their own expertise to aspiring developers.
Peter Molyneux is an avid champion of self-styled developers, Valve’s Chet Faliszek has recently been giving lectures on the topic, and prominent names like ex-Infinity Ward community man Rob Bowling have broken free of the triple-a sector to set up studios on their own.
It’s inspiring, but the notion that going into business for yourself is easy is nothing more than illusion. Working independently – in any industry – is hard, risky work, and recently, no one has signified this more than Hotline Miami creator Jonatan Söderström.
Just last week, Söderström revealed that his game – the critically acclaimed 1980s ‘fuck-’em-up’ Hotline Miami – was suffering rampant piracy on The Pirate Bay. Rather than shut the download links down, he gave the community the most recent version that rang true to his desired standard of quality.
It was an admirable move, but a rather sad one at the same time. It was almost like an admission of defeat, a realisation that piracy – gaming’s largest elephant in the room – cannot be strong-armed out the door. Should the industry just learn to live with it? Does it not counter the supposed ‘golden age’ figures like Molyneux, Faliszek and Bowling so avidly champion? Perhaps more should be done to safeguard fledgling coders and teams, because after all, you never know which one of these new developers could go on to become the next Valve, given the chance.
Don’t laugh it off, it could happen, and we should be supporting the idea, all of us. That said, not everyone suffers the same hardships when entering game development. But if it isn’t piracy, then it’s the staggering cost involved with getting indie games on XBLA or PSN, or strenuous development costs.
Or if it isn’t that, it’s the difficult odds of making your money back on iTunes or Android’s marketplace. With so many new coders and teams entering the development gold rush, submitting a game on App Store becomes a wind-pissing exercise. Things need to change.
To self-inflict insult to injury, Söderström revealed on Twitter that he had been broke for some time while developing Hotline Miami, so VG247 simply had to arrange a chat with him to find out what it feels like to be part of a small team entering an industry crawling with parasites and hidden dangers. What has been your reaction to the critical reception of Hotline Miami so far? People really seem to have really taken to it.
Jonatan Söderström: I’m really happy about it, as we didn’t know quite what to expect when we released the game, so it was really nice to see people enjoying it.
That’s excellent. I want to also try and get a feel for where the concept of the game came from, and how you took those initial steps to get the project rolling.
It started when I made a prototype back in 2007 I think, and we were looking for something quick to do this year – back in Spring. So I showed the prototype to Dennis and he really liked it, so he took a few days and started just drawing graphics for the game. He showed this to me, and then I started putting another prototype together, and it turned out really well so we just kept working on it. We then got in touch with Devolver Digital and things sort of got out of hand from there.
”I’m not making games to make money. I do want to make money, but it’s not my major intention with my creativity. I just like expressing myself, making cool stuff, and like, if you don’t want to pay for the game but want to play it anyway, I’m not going to stop people from doing that.”
It was only meant to be a small project from the start, but suddenly we just kept coming up with new ideas and kept wanting to make a bigger game.
And how have you responded – personally – to the overnight success and notoriety of the game?
I’m not sure I quite grasp what’s happening right now, so I’m just focusing on working on the game, making sure it works the way we intended it to, and then I guess it will sink in across the next couple weeks.
We saw last week that you were helping people who were torrenting the game on The Pirate Bay, by offering them the patched version. That must have been tough.
Yeah. I’m not making games to make money. I do want to make money, but it’s not my major intention with my creativity. I just like expressing myself, making cool stuff, and like, if you don’t want to pay for the game but want to play it anyway, I’m not going to stop people from doing that.
I prefer if they play a version of the game that isn't bugged out, so they get a good impression of it. I don't want people to pirate the game or anything like that, but I know it's an issue and there's nothing we can do about it. I'm not sure I want to do anything about it, but I just want people to enjoy the game.
Regardless of whether or not we can stop piracy, we did see you mention on Twitter that you've been 'broke' for a while. Is the plan then to recoup your investment and perhaps get the game out on other platforms?
I want to make enough money to make bigger games, and that’ probably not something I’m able to do if I have to get a job, and of course I want to be able to pay rent and buy food from what I do. So, we’re looking to make another game as soon as possible, and hopefully it will turn out as good as Hotline Miami.
I think a lot of people don’t realise just how ‘all or nothing’ game development is. Like, you can’t do a part-time job and be a seriously dedicated developer at the same time. Is it true that you can only really focus on one or the other?
I think you can still make games as a hobby, but it’s very time-consuming. The more work you put into it, the more you have to work yourself to put back into the project – because it becomes a hurdle to get into it when there are so many things going on.
It starts to become more like work after a while once it’s nearing completion, because there’s the last boring bits left to do, and it’s easy to just feel like you don’t want to do it any more. So it’s good to have some kind of motivation, such as being able to make money off it, and to see some good come out if that isn’t just letting people play it for free.
Regarding the game itself, why did you decide to set it in the 1980s – GTA: Vice City thematic parallels aside of course?
I grew up watching a lot of movies and stuff from the ’80s, a lot of ultra-violent stuff. Then when I saw Drive, it drew a lot of vibe from that era but made it feel more modern, while putting emphasis on the stylish bits of the ’80s. That was a really big inspiration.
“Part of learning the game means exposing yourself to danger a lot so you can figure out how to tackle every situation in the game. We looked at Meat Boy for that part of the game, and we thought it had a good flow, and allowed people to master it without feeling too punished..”
It felt like a really fresh take on something old,and it seems like a lot of games are nostalgic and look back at what happened during the early days of game development. I wanted to make a fresh take on that, not to just re-produce it, and we wanted to figure out a way to do the ’80s thing while feeling fresh and modern at the same time.
It has a lot in common with Super Meat Boy in terms of both difficulty, and the ease of restarting and attempting your run again in an instant.
Yeah, both me and Dennis wanted to make a challenging game that still felt like it was fair, like, you could still beat it in a reasonable amount of time, and you didn’t feel punished for failing a couple of times. Part of learning the game means exposing yourself to danger a lot so you can figure out how to tackle every situation in the game. We looked at Meat Boy for that part of the game, and we thought it had a good flow, and allowed people to master it without feeling too punished.
If you take a game today and compare its challenge to something like – I don’t know – Pac-Man or Donkey Kong, those are two very different styles of challenge. Was Hotline Miami a comment on the coddling nature of some games today?
I guess some game developers are afraid that some people will be intimidated by a game that doesn’t hold the player’s hand too much. I think in some ways there is a good reason for that, like, if people don’t understand how to play your game then that’s a bad thing.
But when it comes to difficulty it doesn’t have to be a bad thing that the player doesn’t immediately get what to do, or how to do it, that they have to go through a learning curve before they can play the game the way it’s meant to be played.
One thing some players have found hard to grasp is the line of sight mechanic. Was that a tricky thing to code?
Coding it wasn’t that difficult, but there are a lot of things in the game that I’m not sure are apparent – like game design choices – like people complain about the enemies being stupid and not noticing if they pass a dead body or something like that.
We had thought about all of that stuff, but we decided not to add it because it made the game more difficult. If enemies reacted to bodies and started chasing you, or like, hearing other enemies fire their guns – if they reacted to that it would have made the game harder and less fun to play. Game design-wise it's been a little bit difficult getting everything right in terms of how the enemies are supposed to behave, but actually coding it hasn't been difficult. Although we had some AI problems where the path-finding wasn't working correctly.
Enemies would walk through walls and stuff like that, but it’s fairly simple from a coding perspective, but maybe that’s because I’m not really used to it yet.
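For readers curious what coding a check like that actually involves, below is a minimal sketch of one common approach in top-down games: sample points along the line between guard and player through a tile grid and stop at the first wall. It is written in C purely for illustration; it is not Dennaton's GameMaker code, and the map layout and sampling density are assumptions.

```c
/* Illustrative only: a tile-grid line-of-sight test of the kind used in
 * top-down games. Not taken from Hotline Miami; the layout is hypothetical. */
#include <stdio.h>
#include <math.h>

#define W 8
#define H 8

/* 1 = wall, 0 = open floor (hypothetical room) */
static const int walls[H][W] = {
    {0,0,0,0,0,0,0,0},
    {0,0,0,1,1,0,0,0},
    {0,0,0,0,1,0,0,0},
    {0,1,1,0,1,0,0,0},
    {0,0,0,0,0,0,1,0},
    {0,0,1,0,0,0,1,0},
    {0,0,1,0,0,0,0,0},
    {0,0,0,0,0,0,0,0},
};

/* Returns 1 if no wall tile lies between the two points.
 * The line is sampled at small steps; each sample is mapped to a tile. */
static int can_see(double x0, double y0, double x1, double y1)
{
    double dx = x1 - x0, dy = y1 - y0;
    double span = fabs(dx) > fabs(dy) ? fabs(dx) : fabs(dy);
    int steps = (int)(span * 16.0) + 1;      /* 16 samples per tile crossed */

    for (int i = 1; i < steps; i++) {
        double t = (double)i / (double)steps;
        int tx = (int)(x0 + dx * t);
        int ty = (int)(y0 + dy * t);
        if (walls[ty][tx])
            return 0;                        /* view blocked by a wall tile */
    }
    return 1;
}

int main(void)
{
    /* Guard at the top-left corner, player near the bottom-right corner. */
    printf("guard sees player: %s\n",
           can_see(0.5, 0.5, 6.5, 6.5) ? "yes" : "no");
    return 0;
}
```

Swap the sampling loop for a proper grid traversal such as Bresenham's line algorithm, add a facing-angle test, and you have the core of the mechanic; as Söderström suggests, the hard part is tuning the design around it rather than the code itself.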
It’s set in the ’80s, the decade that saw a huge influx of bedroom coders creating games off their own back and making money off their creations independently. Indies have so many routes to market now, and you have first-hand experience of that. Is the industry coming full circle?
I guess so, but I haven’t really thought about it to be honest. But yeah, that sounds like it’s true. I like a guy called thecatamites who made a game called Space Funeral, which is a really strange RPG game. You can’t see any influences from it in Hotline Miami but it’s been very inspiring because you can feel the creativity and fun that was had while making the game. I guess that’s the only parallel you can pull between this game and that. We had fun making this game, and we wanted to create an interesting world and let our imaginations run wild without conforming to any pre-defined game standards.
Indies do have more options open to them today, and given that freedom, do you think the barriers to indie console development will have to come down going forward?
I would hope they do come down a little, but I’m not sure what to expect from the future. I’m not really thinking too much about that stuff, and I just let whatever happens happen. What are your plans for the Hotline Miami IP in the near future in that case?
We have a sort of sequel, but we’ll probably release that as DLC. We would like to start working on that and have it explain the storyline a little bit more and maybe expanding a bit on the gameplay. We’re thinking about doing a level editor as well. It would be cool to put the game out on consoles, but for technical reasons – it was made in GameMaker – it’s not very easy to port to another system.
We did see that you were pondering a PS Vita version. Is that on the cards potentially?
We’ve talked about it at least, but I’m not sure how far into talking with Sony we are. It would be cool but I’m not sure it’s going to happen at the moment.
eBusiness & Channel Strategy Blog
Sucharita serves eBusiness & Channel Strategy Professionals. She is a leading expert on eCommerce, multichannel retail, consumer behavior, and trends in the online shopping space. She is also a noted authority on technology developments that affect the online commerce industry and vendors that facilitate online marketing and merchandising.
In her research, Sucharita covers such consumer-oriented topics as eCommerce forecasting and trends, merchandising best practices, conversion optimization, and social computing in the retail world. She has also authored "The State Of Retailing Online," a joint study conducted annually with Shop.org and a leading industry benchmark publication.Previous Work ExperiencePrior to Forrester, Sucharita was the director of marketing at Saks Fifth Avenue, where she managed the customer acquisition, retention, and market research efforts for the $2 billion luxury retailer's online channel. Prior to Saks, she held management positions at Toys R Us, where she was a merchant in the Babies R Us division and a store manager in one of the company's largest toy stores. She also worked for the Walt Disney Company, where she developed and managed marketing plans for new business initiatives, including the Disney Stores, the Disney Cruise Line, and Club Disney.
Additionally, she was involved in the expansion of Cap Cities/ABC properties, specifically ESPN Zone, ESPN Magazine, and the Go.com network. She has written two nonfiction books and has contributed to BusinessWeek Online.EducationSucharita holds a B.A. in economics from Harvard University and an M.B.A. from the Stanford Graduate School of Business.(Read Full Bio)(Less)275Research CoverageAmazon, Apparel, B2C eCommerce, Best of the Web, Cisco Systems, Cognizant Technology Solutions, Commerce Solutions, Consumer Electronics, Consumer Mobility, Consumer Retail & CPG, Cross Channel Strategies, Customer Experience Management... (More) | 计算机 |
Select PSP titles will soon be re-released for the PS3 using the Dual Shock 3 …
It looks like Sony is hoping to get some extra mileage out of its existing portable library. The upcoming "PSP Remaster" program will take existing games, give them a new high-definition facelift, add support for the Dual Shock 3, in some cases 3D graphics, and then re-release the title on a Blu-ray disc.
The first PSP Remaster game in action That's not all, though! "Users will also be able to utilize the same save data from the original PSP game for the 'PSP Remaster' version and enjoy the game on the go with the PSP system and continue the game at home on a large TV screen using PS3," Sony explains. "Ad-hoc mode gameplay will also be supported through 'adhoc party for PlayStation Portable' application on the PS3 system."
The PlayStation Portable had some great games that suffered from a lack of a dual-stick control option, and now they'll get it... on the big screen, no less! We'll wait until we see some examples of this with our own eyes, but this seems like a good way to give some old games some new life and some very high-class features. So far the only game announced for this program is Monster Hunter Portable 3rd HD Ver. in Japan, but US titles will likely be announced at E3.
It's odd that the materials specify these will be Blu-ray disc releases, as they seem perfect candidates for a digital release. Also, will we need to have this version and the PSP version to share save-game files, or will the remake give us access to both? We'll be digging for those answers, but until then, there are some neat ideas at play here. | 计算机 |
Your online research resource for Standards and Standard Setting.
MetaLibrary
Home > Standards Blog
Advanced Search Find out more about this site's sponsor
Gesmer Updegrove has represented more than 132 standards consortia and open source foundations, including: View Full Client List
Order your copy of
The Lafayette Campaign: A Tale of Deception and Elections
The Alexandria Project
Subscribe to Standards Today
Subscribe & Share
Home Lafayette Deception (a Cyber Thriller) (15/16)
Adventures in Self-Publishing (40/1)
Alexandria Project (a Cyber Thriller) (37/15)
China (13/19)
General News (24/22)
Intellectual property Rights (64/20)
Monday Witness (20/17)
ODF vs. OOXML: War of the Words (an eBook) (7/17)
Cybersecurity (7/15)
On the Media (5/18)
Open Source/Open Standards (98/19)
The Free Standards Group: Squaring the Open Source/Open Standards Circle
Monday, May 29 2006 @ 10:27 PM CDT
Contributed by: Andy Updegrove
Before there was Linux, before there was open source, there was (and still is) an operating system called Unix that was robust, stable and widely admired. It was also available under license to anyone who wanted to use it, because it had been developed not by a computer company, but by personnel at AT&T's Bell Labs, which for a time was not very aware of its status as the incubator of a vital OS. Mighty was the rise of that OS, and regrettably, so was the waning of its influence. Happily, Unix was supplanted not only by Windows NT, but also by Linux, the open source offshoot of Unix. But today, Linux is at risk of suffering a similar fate to that suffered by Unix. That risk is the danger of splintering into multiple distributions, each of which is sufficiently dissimilar to the others that applications must be ported to each distribution - resulting in the "capture," or locking in, of end-users on "sub brands" of Linux.
The bad news is that the rapid proliferation of Linux distributions makes this a real possibility. The good news is that it doesn't have to, because a layer of standards called the Linux Standard Base (LSB) has already been created, through an organization called the Free Standards Group (FSG), that allows ISVs to build to a single standard, and know that their applications will run across all compliant distributions. And happily, all of the major distributions have agreed to comply with LSB 3.1, the most recent release.
I recently interviewed Jim Zemlin, the Executive Director of FSG, as well as Ian Murdock, the creator of Debian GNU/Linux, and the FSG's CTO and Chair of the LSB Working Group. That interview appears in the May issue of the Consortium Standards Bulletin and covers a great deal of ground. Some of the most interesting details, though, relate to how this open standards process interacts with, and serves, the open source process that creates Linux itself. Below, I've excerpted those parts of the interview, so that you can see how it's done. [Disclosure: I am on the Board of Directors of the FSG, and am also FSG's legal counsel.]
FSG — Linux Interface
1. Which open source projects does FSG actively engage with?
Primarily the Linux distributions but also many of the constituent projects, particularly if those projects provide a platform that developers can target that could benefit from better integration with the broader Linux platform. Good examples here include the GNOME and KDE desktop environments. Each of these desktop environments is a platform in its own right, but a desktop isn't much use unless it is well integrated with the operating system underneath. Furthermore, ISVs targeting the Linux desktop ideally want to provide a single application that integrates well regardless of which environment happens to be in use.
2. How does FSG work with the Linux development team and the Linux process?
Actually, the LSB doesn't specify the kernel--it only specifies the user level runtime, such as the core system libraries and compiler toolchain. Ironically, then, the _Linux_ Standard Base isn't Linux specific at all--it would be entirely possible (and probably not altogether hard) for Solaris to be made LSB compliant. The LSB is entirely concerned with the application environment, and the kernel is usually pretty well hidden at the application level.
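To make that distinction concrete, here is a minimal illustrative sketch (not drawn from the interview or from the LSB documents themselves) of what "targeting the user-level runtime" means for an ISV: the application below touches only standard C library interfaces of the kind such a specification covers, and makes no assumptions about the kernel or about any particular distribution.

```c
/* Illustrative only: an application that relies solely on the standard
 * user-level runtime (libc), with no kernel- or distribution-specific code.
 * Any platform that provides the specified runtime can run it unchanged. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);                 /* plain libc call */
    if (now == (time_t)-1) {
        fprintf(stderr, "time() failed\n");
        return EXIT_FAILURE;
    }

    printf("Hello from the standard runtime, not from any one distro.\n");
    printf("Current time: %s", ctime(&now)); /* ctime() appends a newline */
    return EXIT_SUCCESS;
}
```

The same logic is what makes the spec portable beyond Linux: as the answer above notes, a system such as Solaris could in principle be made compliant, because nothing in code like this cares which kernel sits underneath.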
3. Does the Linux community participate in FSG as well?
Yes, though most participation comes from engineers that work for the various companies that have an interest in Linux (Intel, IBM, Novell, HP, Ubuntu, etc.). However, there's nothing particularly unusual about that. Most open source development these days is done by commercial interests, not by college students working out of their dorm rooms, which seems to be the common perception. (Of course, a lot of it starts there, but the best developers eventually figure out how to get paid to do it.) Whether you're interacting with paid engineers or unpaid volunteers, though, a key to success in the open source community is getting the right people to buy in to what you're doing and, ideally, getting them to participate. In general, the FSG mission resonates well with the open source community, and we have little difficulty getting that buy in and participation.
FSG — Linux Dynamics
1. I've heard you describe the relationship of the open source and open standards processes in "upstream" and "downstream" terms. Given that open source development is "real time" and ongoing-release, while standards have traditionally operated on a fixed basis, with nothing changing for a period of time, how do you make this work?
One way to understand this is to look at the attributes of a successful open source project. Success is relative to the number of developers and users of a particular set of code. Apache is a good example. As the community iterates code with similar functionality, for example a web server or a C compiler, the participants end up aligning themselves around one or in some cases two projects. Smaller projects tend to die. The ones that succeed then join the many other packages that are integrated into a platform such as Linux. The trick in standardizing then is to decide which snapshot in time — which interfaces from those packages at that point across all these packages - will guarantee interoperability. By coordinating with these disparate upstream projects which versions of their code are likely to be broadly adopted downstream with the distro vendors, we provide a framework for those working both upstream and downstream. In the case of the Linux distros, we help them cooperate in order to bring meaning to the term "Linux" in terms of the type of interoperability that is commonly expected on an operating system platform such as Windows or Mac OS.
This effort requires ongoing awareness of the spec development process itself both upstream and downstream, and a rapid feedback framework for all parties. It also requires a coordinated parceling of the testing efforts to the appropriate sub-projects. In other words, we are applying the bazaar method of open source coding to the development of standards. That is how the community plays and we are a part of that community.
2. At the process level, what other aspects of open source development are most problematic for standard setting, and vice versa?
Before answering that question, there's one very important thing to understand about the FSG, and that's that we don't define standards in the same way that a traditional standards body defines standards. And that's just the nature of the beast: The open source community is vast, complex, amorphous, and continually in motion. It's also an integral part of what we do. So, the FSG by nature isn't just a well-defined consortium of technology vendors that can define things unilaterally. It's a well-defined consortium of vendors, certainly, but it's also more than that, in that the vast, complex, amorphous, continually moving open source community needs to be represented at the table. In a lot of ways, what we're doing at the FSG, namely bringing together open standards and open source, is unprecedented. Clearly, our interactions with the open source community affect the processes we use to build the LSB and our other standards. We can't just say "this is the way things are" the way we'd be able to do if our constituency was smaller and more self-contained. Instead, the way we define standards is far more about consensus building and observation--we watch what's happening in the open source community and industry and track what's emerging as a "best practice" through natural market forces and competition.
One of the challenges of the LSB project, then, is understanding what technologies have become or are becoming best practice, so that we can begin the process of incorporating those technologies. Another challenge is dealing with a moving target--after all, although the process of defining the standard is different, at the end of the day, the standard has to be every bit as precise as, say, a plumbing specification, or it won't guarantee interoperability. Fortunately, we already have a model to follow here, namely the Linux distributions, which perform the analogous task at the technology level by assembling the various open source components into a cohesive whole.
So, our task essentially boils down to tracking the technologies that ship in the majority of Linux distributions, and in building a layer of abstraction, a metaplatform of sorts, above the multiplicity of distributions so that application developers can target a single, generic notion of Linux rather than each distribution individually.
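As a rough illustration of what that layer of abstraction has to guarantee in practice, the sketch below (illustrative only, not part of the real LSB test suite) shows the flavor of a conformance-style check: it verifies at run time that a shared library an application expects from the generic platform is present and answers as expected, using zlib purely as a stand-in for whatever interface a specification might require.

```c
/* Illustrative conformance-style probe; not from the actual LSB test suite.
 * It checks that a library expected from the platform (zlib here, standing
 * in for any specified interface) is present and exports a known symbol. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *lib = dlopen("libz.so.1", RTLD_NOW);
    if (lib == NULL) {
        fprintf(stderr, "FAIL: required library missing: %s\n", dlerror());
        return 1;
    }

    /* Look up a symbol the specification would require the library to export. */
    const char *(*version)(void) =
        (const char *(*)(void))dlsym(lib, "zlibVersion");
    if (version == NULL) {
        fprintf(stderr, "FAIL: expected symbol not exported: %s\n", dlerror());
        dlclose(lib);
        return 1;
    }

    printf("PASS: platform provides zlib %s\n", version());
    dlclose(lib);
    return 0;
}
```

Compiled with something like `cc probe.c -ldl`, a check of this shape either passes on a compliant system or pinpoints exactly which promised interface is missing; multiply that across thousands of interfaces and you get a sense of the testing effort the working group parcels out, as described earlier.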
We also work to increase participation in the ongoing development of the standard and to facilitate collaboration among the key stakeholders to more rapidly reach consensus around the best practices. The goal here is to capture in the LSB roadmap not just what exists in the current generation of the major distributions, but what's coming in the next as well. After all, ISVs developing Linux applications today will often see the next generation as a primary target.
3. What compromises (technically and process-wise) have the Linux and FSG communities had to make in order for the LSB to be practical while not impeding the work of either side?
The biggest challenge in what we do is probably no different than in any other standardization effort: Balancing the need for standards with the need for vendors to differentiate from each other. However, in the open source world, this tension is probably more pronounced due to the speed at which development occurs. I'd say the biggest compromise the open source community makes is understanding the importance of standards, backward compatibility, and all the sorts of things that tend not to be "fun" but which are vital to commercial acceptance--and being committed to doing what needs to be done. On the FSG side, the biggest compromise is being fairly hands off and leaving it to the marketplace to determine which of many alternatives is the best practice. The real key is making sure interoperability problems don't crop up in the process, and the key to making sure that doesn't happen is ensuring all the parties are in a constant dialogue to make sure the right balance is struck. We see that as one of the roles of the FSG--providing a neutral forum for these kinds of conversations between the key stakeholders.
1. Where else are organizations modeled on the FSG needed?
I wouldn’t frame it as where else is an FSG needed but rather where should the FSG go from here? At the end of the day, the LSB is a development platform standard. Some developers target the operating system in C or C++; others target middleware platforms like Java or LAMP; others are moving further up the stack to the web, where applications span site and even organizational boundaries (think of the various "mashups" that are happening around the so-called "Web 2.0" applications like Google Maps). Today, we cover the C/C++ level pretty well, but we need to move up the stack to cover the other development environments as well. The ultimate goal is to provide an open standard developers can target at any layer of the stack that's independent of any single vendor.
So, the short answer is that we aspire to provide a complete open standards based platform (“metaplatform” is actually a more accurate way to describe it), and Linux is obviously just one part of such a platform. We need to move up the stack along with the developers to incorporate the higher level platforms like Java and LAMP. We need to extend the coverage of the operating system platform too, as we've done in LSB 3.1 with the addition of desktop functionality and are doing around printing, multimedia, accessibility, internationalization, and other areas in LSB 3.2. Even at the operating system level, there's nothing inherently Linux specific about the LSB, so there's nothing preventing us from encompassing other open platform operating systems, such as the BSDs or Solaris. In the end, it's about all open platforms vs. closed platforms, where the closed platform du jour is Windows.
So, the real question is, how can the open metaplatform better compete against Windows? For one, Windows has .NET. Linux (and the other open platform operating systems) have Java, but it's not as well integrated, and it's not as well integrated because of the Java licensing. Sun has indicated they're going to open source Java as soon as they address the compatibility concerns. We have a lot of experience in that area, so perhaps we can help. In the end, it all comes down to a strong brand and tying compatibility testing to the use of that brand, which is the approach we take with the LSB. There's no reason a similar approach couldn't work for Java, and the benefit of a integrated Java with the open metaplatform would be enormous.
Obviously, doing all of that is an enormous amount of work, undoubtedly an impossible task for any single organization to accomplish on its own. Then again, so is building a complete operating system, and a lot of little companies (the Linux distribution vendors) managed to do it by taking preexisting pieces and fitting them together into integrated products. And, as it turned out, the whole was a lot more valuable than the sum of its parts. We take the same approach on a few levels. First of all, the LSB is an open process, so the best way to get something into the standard (assuming it's a best practice, i.e., shipping in the major Linux distributions) is to step up and do the work (i.e., write the conformance tests, etc.). In other words, we leverage the community the same way an open source software project would. Second, there are a lot of open standards efforts tackling pieces of the overall problem, and we seek to incorporate their work. In that sense, we're essentially an integrator of standards, a hub of sorts, much as the Linux distributors are essentially integrators of technology. We don't have to solve the total problem ourselves, just provide an open framework in which the relevant pieces can be fitted together.
2. In the long term, should the standardization process and the open source process merge? In other words, is there a benefit to there being an independent FSG, or in the future would it be better if the open source community incorporated this role into its own work?
Currently, there is no better way to balance the needs of a competitive distribution community with application interoperability. An independent standards provider bridges the gap between the open source community and the distributions implementing their software by allowing best practices of the latter to be standardized, thus making it easier for ISVs and end users to actually use the platform. The open source community does not want to concern itself with this standardization concern, nor should they. An independent consortium can drive consensus while being politically sensitive to the needs of its constituents. 3. What is the single thing that open source advocates most need to "get" about standards, and need to work harder to accommodate? Same question in reverse?
It would be helpful if those in some of the upstream projects participated more closely with our standards efforts. They are already doing this but there is always room for more participation. Closely tracking of the projects into the standard (or just through a database) will provide a great deal of service to ISVs and the distribution vendors. We plan on offering this service. In the other direction, standards bodies need to recognize that open source development is fundamentally different than traditional software development. When working with the open source community, participation and buy-in are critical—you can't just declare something to be so and expect the open source community to just follow suit—as is the ability to move quickly. For the FSG's part, we understand all of this very well—after all, we grew out of the open source community—but it's an observation other standards efforts would do well to keep in mind as open source and open standards increasingly intersect.
The entire interview can be read here.
For further blog entries on ODF, click here
subscribe to the free Consortium Standards Bulletin (and remember to Buy Your Books at Biff's)
| | | What's Related
not very aware
Consortium Standards Bu...
Biff's)
More by Andy Updegrove
More from Open Source/Open Standards
The Free Standards Group: Squaring the Open Source/Open Standards Circle | 7 comments | Create New Account
The following comments are owned by whomever posted them. This site is not responsible for what they say.
Authored by: Marcion on
Tuesday, May 30 2006 @ 06:47 AM CDT
"Linux is at risk of suffering a similar fate to that suffered by Unix. That risk is the danger of splintering into multiple distributions, each of which is sufficiently dissimilar to the others that applications must be ported to each distribution - resulting in the "capture," or locking in, of end-users on "sub brands" of Linux."
There are lots of assumptions in there. Firstly, I do not think that Unix, suffered by there being lots of unicies; it suffered because there were not enough versions. At the pivotal moment of the personal computer revolution, there was not one decent version of Unix easily available to users on their low-powered home/office computers. So as mainframes gave way to white boxes, Unix-like had to build the market again from scratch in a sea of DOS.
There are at least two differences between the bad Unix days and today. Firstly, I can never be captured or locked in because I have all the source-code on my hard drive. I can make it run on whatever new platform that comes along, mobile phones, xboxes or whatever. Secondly, the Internet has made possible near-instant distribution of software from creator to user.
I like the fact there are different distributions of GNU/Linux, different horses for different courses. A server is not the same as a mobile phone or a desktop. You could turn all distributions into Redhat or into Debian, but there is really no real advantage and lots of disadvantages. How could Gentoo or Damn Small Linux fit into the Standard Base?
The problems/opportunities for Linux adoption is not the core OS but the applications that run on it - you do not install Linux to have Linux, you install it to run applications.
The key for not being locked-in is not the OS but the abililty to pick up your data and walk. So I am not interested in the compiler toolchain or some other low-level library but the document formats. If free/open source application A uses X format and application B uses Y format, then it can end up as bad as proprietary software.
We are now getting to the point where there are a set of common file formats that will work across all posix platforms and even the proprietary operating system that comes installed on cheap new computers.
We already have png for images and ogg for media files, now we also have the benefit of OpenDocument, so you can take a file from an Abiword user on Gnome, give it to a Koffice user on KDE to correct, who then hands it on to an OpenOffice user on that proprietary operating system, and all the way through you will not lose formatting, version information and so on.
"...allows ISVs to build to a single standard, and know that their applications will run across all compliant distributions"
Well just make it compile on GCC on one version of GNU/Linux and then it will work on the rest if you give out the source code. That is what after all distributions are for, taking the source code from all the projects and providing it in a useful form for the end-users.
What we are talking about is medium to large proprietary software companies wanting to make one closed-source binary that will work on every Linux/BSD/Unix system.
If you want to make such a blob then make it an Redhat RPM, as the rest of the GNU/Linux userbase probably will ignore your application by virtue of it being not open-source. When in Rome do as the Romans do. For an open-source operating system needs open-source applications, thats just how it is.
GNU/Linux is not Windows, so trying to make it into Windows means that you have a poor copy of Windows. This applies both to software side and the business side. If you desire monopoly, i.e. a single binary and single distribution then use Windows.
The Linux model is distributions competing to provide the best service, the Windows model is stagnant monopoly. Do not cross the streams.
"the potential for Linux to fragment" "splintering"
Well it is fragmented, that is the whole point, and that is a good thing. I would instead use the term 'modular'. Do not think wood, think lego. We have a few kernels - Linux, BSD, Mach and so on, a toolkit - GCC, binutils, Glibc etc, and a huge number of libraries and applications. We also have distributions that put them together in different combinations - whether that be Debian giving out isos or Nokia compiling an OS for a mobile phone. "Linux" does not need to be a single operating system like Windows XP with clear boundaries of what is in and out. Instead the 'stack' will slowly take over everything, like a creeping plant or Giger's Xenomorph. Mac OS X often uses GCC, Solaris is moving towards GCC. Firefox runs on them all. So, in conclusion, we need common open data formats, and let the distributions innovate and differentiate however they want to create and deliver that data.Thanks for reading. Have a nice day.
Authored by: Anonymous on Wednesday, May 31 2006 @ 07:57 AM CDT
"So, the real question is, how can the open metaplatform better compete against Windows?"
Well,... of course distributors could add (for example) graphics drivers that work "out of the box".
Linux haters claim all sorts of reasons for not doing this,... but are they right?
The following quote is taken from: http://linux.coconia.net/politics/kmodsGPL.htm
The above mentioned possibility of hiding the entire code of a program as an application library, is the reason that the GPL demands that any application that links to GPL'd shared libraries, must itself be GPL'd (a program is GPL'd, if it is licensed under the GPL).
It has been claimed that distributing a GPL'd kernel with binary closed source kernel modules is illegal. This claim has been advanced, to stop Linux distributors from shipping with Nvidia and ATI drivers that work "straight out of the box". A recent example of this is the Kororaa controversy.
Those wishing to cripple Linux, make many unsubstantiated claims, some of which are wrong, in order to prevent Linux distributors shipping Nvidia and ATI drivers that work "out of the box". Here is a sample:
1) GPL and non-GPL components cannot be included together on a CD.
2) Closed source kernel modules distributed with a GPL'd kernel clearly violates the GPL.
3) Don't include closed source kernel modules as the situation is murky. You might get sued.
4) Closed source kernel modules link to the kernel in the same way that applications link to libraries, therefore you cannot include them with a GPL'd kernel.
One, is wrong. Two, is not clear at all. Three, which sounds correct, is also wrong. Think about it, who is going to sue you? The Free Software Foundation? Not likely. Perhaps Microsoft might be interested in enforcing the GPL. Four, seems to have some merit, but is wrong...
For the full article, see http://linux.coconia.net/
God of War: Ascension PS3 Q&A: Taking Kratos to the Next Level
Category: PlayStation 3 & PSN News (blog.us.playstation) | By: PS4 News | Tags: god of war ascension ps3 qa taking kratos next level | March 9, 2013 // 5:13 am
Sony Blog Manager Fred Dutton has shared the God of War: Ascension PS3 Q&A update below, which takes Kratos to the next level.
To quote: With God of War: Ascension's launch just a few days away now, we sat down with the game's Lead Combat Designer Jason McDonald and Lead Game Designer Mark Simon to find out how they've kept the formula fresh six games into the series, and what challenges the addition of multiplayer presented to the team.
We'll have more insight from Game Director Todd Papy next week before Santa Monica's lean, mean new actioner hits the shelves on March 12th. In the meantime, over to Mark and Jason...
PlayStation.Blog: How difficult is it retro-fitting a prequel story onto the existing God of War trilogy?
Mark Simon: It's kinda nice actually. At the end of God of War 3 Kratos is completely rage-filled. His sole-focus has been figured out. With Ascension, we can go back to a different time period before he was this character. He has a wider range and you can explain things about him that you didn't know before. You get to find out what turned him into the guy that he is - what makes him snap, and why is it that breaking a bond with a god like Ares does this.
PSB: If I was to plot Kratos' anger on a graph, I'd say he starts at 'seriously ticked off' for God of War 1 and climbs to 'ball of latent fury' for God of War 3. But from what I've played of Ascension, he starts this prequel with a serious rage on. What gives?
MS: Sure, he does! But that's due to the way the story is told. It's like Slumdog Millionaire, or something like that. He's not at the beginning of the story when you start the game. It's told in a non-linear fashion. It builds up to why he is in the prison - why he was taken there by The Furies.
PSB: Do you ever worry that you're going to run out of Greek gods for Kratos to beat up?
MS: Every game is a challenge, but the Greek mythos is so wide and varied. We could never do every myth that it has for us. We don't find it limiting; it's more exciting to explore more areas of it - new gods, new titans, new locales.
Take the Furies. They're primordial. They're from before the gods - they're more powerful than the gods. Some of their abilities are just ridiculous - so powerful. They make really great nemeses for Kratos.
PSB: Do you have an in-house expert who spends all their time going through Homer looking for new myths and characters?
MS: The cool thing about the studio is that some ideas come from the director, and then a lot of it comes from the rest of the team. Someone comes in and says 'Y'know this would be really cool!' Suddenly you're in a brainstorming session, and before you know it you're building it into the game. That's the great thing about our studio - ideas come from everywhere.
Jason McDonald: But if you look at the typical desk in the office you'll see Greek mythology books, random Greek materials - we do often reference that so we have to have those around.
MS: And the movies! Immortals, Jason and the Argonauts - all of them. We can't get enough!
PSB: There are no office fact-finding outings to Greece then?
JM: No, but you should recommend that!
MS: Santorini, maybe. There's got to be some myths around that island, right?
PSB: Every God of War game has had a different director. How hard is it to maintain a consistent feel in every game?
JM: Even when the director changes, the core of the team remain the same. There's a number of people who've been there for every title. Each director, when they assume that role, was really skilled to begin with, so it's not like 'Oh my god, what do I need to do?' They know exactly what they need to do. Every director puts their spin on it. Like [Ascension director] Todd Papy was a designer, so with this game he kept a close eye on mechanics.
MS: I think that after a project this size and scope, it's not an unhealthy thing to go 'You know what? The director is going to move onto another thing if he wants to'. We're a team full of leads. So if one director decides he doesn't want to do it on the next project, there are a lot of people who can help out.
PSB: The series is known for its visceral violence and I've already seen some brutal kills in Ascension. Was there ever a moment during development where you said 'Okay, we went too far with that one...'?
MS: It's got to feel impactful. If you swing a club and hit someone it's got to feel like you've just hit them with a club. If it doesn't, it feels gamey. We don't want that gamey feeling - we want it to feel like you're actually impacting someone's head. It makes that sound, it feels like that - you kind of cringe thinking about it, but that's what melee combat should feel like.
PSB: Which of the additions that you've made to the God of War formula this time around are you most happy with?
JM: The Rage system turned out really well. Everyone uses it differently and it's nice to see that come together. The multiplayer though - seeing all that come together and people having fun - that's an experience that is very unique to this game and I'm very proud we were able to accomplish it.
PSB: How did the decision to add multiplayer come about?
JM: I don't remember anyone saying 'it's multiplayer time, let's do it!' It was more that we were curious about it. We hadn't tried multiplayer before so we were asking ourselves 'can it be done? Is there fun to be had?'
So we tried out a few tests using Kratos, as he was already built. What we found was that people would sit down with two Kratoses and have a lot of fun. They'd sit there for hours. It was un-tuned and very rough, but when we saw people enjoying it we thought it had merit. After that it was all about putting the God of War spin on it - making sure the scale reaches what we expect, and not just eight players bundled into a room fighting each other. We had to design modes and rules to make sure it wasn't repetitive.
PSB: How difficult was it to keep the combat balanced?
MS: You always start with something very simple - people fighting one another. Then you start adding new things and you watch the balance go out the window. Then you try desperately to get it back again before introducing another new thing. That's how we iterate. We didn't start with everything all at once.
PSB: How useful was the beta? Did players' behavior take you by surprise?
MS: I learned a lot just looking at the data that comes in. I'd be like, 'Woah, I can't believe this guy opened 17 chests' or 'this guy actually killed three guys at once?!'
And the thing that I was surprised by was some of the stuff that I thought would be cool. Like, I thought it would be cool when the god throws the spear down in the middle of a match - everybody would get the same cinematic and we would have this big spectacle in the middle of a match.
I thought that would be cool. And it was cool... the first time. But it wasn't cool the second time or the third time. We found that players really wanted to keep the action going as long as they could. And when the match was over they generally wanted to get right back to it again. So we took the camera cut out so as not to stop the action. The beta was really helpful for that kind of feedback.
PSB: Finally, I've got ask for your take on the PlayStation 4 announcement last month. What do you make of the new system?
JM: Very excited! We don't get hardware leaps like this that often, so to have one coming up is very exciting. The social stuff in particular. The power is going to be awesome and we'll have amazing artists and engineers who will be able to draw so much out of it - but the social stuff is great. It's something gaming is moving towards so the more features that support people playing together, the better.
Follow us on Twitter, Facebook and drop by the PS3 Hacks and PS3 CFW forums for the latest PlayStation 3 scene and PS4 Hacks & JailBreak updates with PlayStation 4 homebrew PS4 Downloads.
Buxtehude, Prelude in F Major, BuxWV 156 (mp3)
Do you hear music? If everything works as intended, you should hear streaming mp3. However, there seem to be
infinite complications of operating systems, internet browsers, plug-ins, and so forth. The method for streaming
mp3 that we used is to call an mp3 playlist, which in turn calls the actual mp3 file. In most cases, the call to the playlist invokes your local mp3 player in streaming audio mode. For dial-up connections, it may take almost
forever before a sufficient level of "buffering" is reached before playback begins. Good luck!
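A minimal sketch of that playlist indirection, in Python (the file name and URL below are hypothetical): write a one-line .m3u that points at the real mp3, and link to the playlist rather than to the mp3 itself.
    # Hypothetical names throughout: this just writes a one-line .m3u playlist
    # that points at the real mp3, which is the indirection described above.
    playlist_name = "buxwv156.m3u"
    mp3_url = "http://www.example.org/music/buxwv156.mp3"

    with open(playlist_name, "w") as f:
        f.write(mp3_url + "\n")     # an .m3u can be nothing more than a list of URLs

    # The page then links to the playlist, so the browser hands it to the local
    # mp3 player in streaming mode:
    print('<a href="%s">Buxtehude, BuxWV 156 (streaming mp3)</a>' % playlist_name)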
The whole point of JEUX is the music. Even if you don't have the right combination of hardware and software to enjoy
the MIDI files, we have recorded many of these pieces in MP3 format. More MP3 files will be added to this site soon.
These pages are devoted to my favorite music, and the JEUX SoundFont, which I designed to help me create MIDI realizations of pipe organ music written before 1750. The project has grown to encompass organ music from the Romantic era as well! —John W. McCoy
Author's home page
We have testimonials!
The Interpretation of Baroque Keyboard Music in the Age of MIDI
"Music is funny stuff" — Eva Heinitz, Grande Dame du Violincello
"La manque d'expression est peut-�tre la plus grand �normit�. J'aimerais mieux que la musique dise quelque chose de diff�rent de ce qu'elle devrait dire, plut�t que de ne rien dire du tout." — Jean-Jacques Rousseau
The question of authenticity: Bringing the music to life, versus the "Urtext" performance
The JEUX soundfont: MIDI sounds for baroque organ music, with a French accent
Download the JEUX SoundFont, free of charge! Version 1.4 is now ready for distribution (July 4, 2000) - List of changes
Click here to link to the stop list and technical details for the JEUX SoundFont.
What does it sound like? That depends on the playback parameters relating to reverb and other options specified when the piece is recorded. In this case I used a SoundBlaster Live! card on a PC running Windows ME. The sound you hear also depends on your own speakers and soundcard. Notebook speakers probably won't be able to play the low notes, but you might be surprised at the quality delivered from a notebook PC when you use a good pair of stereo headphones! The background music for this page is Buxtehude's Toccata in F Major, BuxWV 156, in .MP3 format. It lasts almost 7 minutes and uses many of the capabilities of the JEUX soundfont. The .MP3 format is compressed, of course! For an even better example of what JEUX can do, here's an extended recording of a passacaglia by Buxtehude. The file is in CD-quality .WAV format, well over 50 MB, so don't try this unless you are using a high-speed internet connection!
Who's using it? JEUX is already being used creatively for arrangements and new compositions. I am amazed by the effects and unexpected sonorities that others have been able to coax from it; indeed, I think the exploration of this virtual pipe organ has only just begun! Further, minds more industrious than mine have also begun to show us how to create high-quality realizations of MIDI files using the JEUX SoundFont and MP3 technology. As examples of what is possible, I must give special mention to Philip Goddard's beautifully crafted compositions and their MP3 recordings, and the organ music of Bach and other delightful confections of the "Lorin Swelk Orchestra".
Ten years of JEUX ! After working with the JEUX soundfont for a decade, I am only beginning to exploit its resources. I begin to understand why some of my teachers stuck with one instrument for most of their careers: it does in fact take years of experience to
get the most out of a particular instrument, especially one with registers. For my purposes, there is no need for more registers in JEUX.
What I need most is time to develop my personal art.
Download my harpsichord realizations for General MIDI (no special soundfonts required, but results will depend on your sound card) or for the "Hpschd" SoundFont
Download my organ music realizations for the JEUX soundfont
My muses
JEUX has clones!
Note! In order to make room for a lot more music, the individual MIDI files are gradually being replaced with ZIP files containing extensive collections of MIDI files. While this modification is taking place, there might be a bit of confusion on this web site, but the results should be worth the inconvenience. Let me know if you find broken links.
The question of authenticity: Bringing the music to life, versus the "Urtext" performance
MIDI realizations can hardly make any claim to be "authentic" renditions of Baroque music. In this respect, they have a lot in common with live performances. There is only one way we will ever hear an authentic performance of Bach's organ music, and that involves having a time machine. Perhaps this fact explains the numerous claims for authenticity that have been advanced over the years: they cannot be tested! The most admirable musicians understand this. Making few claims, they let their music speak for itself.
Even though the "authentic" performance will remain elusive, it is possible, of course, to increase our understanding of music of the past by studying documents, scores, and instruments that survive from earlier centuries. The satisfaction this study brings more than makes up for the impossibility of discovering "the" authentic performance. The following brief essays reflect my experience as one who has played and listened to music of the baroque era for most of his life.
More thoughts on some dreadful "Urtext performances".
The worst criticism that can be leveled at a professional musician is that he or she "only plays the notes". For reasons incomprehensible to me, many harpsichordists now go out of their way to avoid coloristic effects in performance, in spite of the undeniable fact that the history of their instrument, like that of the organ, is littered with attempts from the earliest times to add registers and effects whose only possible purpose was to extend the palette of tone colors. Sometimes harpsichordists even include defensive or apologetic statements about this in their program notes. Yet we find that even some early fortepianos were equipped with janissaries, bassoon stops, drums, and other contraptions. Are modern harpsichordists afraid someone might think the music entertaining, instead of a deep, serious exercise in abstract counterpoint? Does that mean organists(sometimes playing the very same music) should play all early music drawing only the Montre or the plenum, on the grounds that the use of additional timbres would trivialize the music of Buxtehude, the Couperins, Pachelbel, or Bach?
The music itself seems to answer these questions. Pachelbel's best-known treatment of Vom Himmel hoch features joyous celestial trumpet fanfares over the cantus firmus. This delightful idea will be lost on the listener if the figures are not played like trumpet calls. To me, the effect sounds even better using the Clairon and Trompette. On the harpsichord, if I had enough hands, I would not hesitate to use the 4' register here. Likewise, I find in one of his settings of An Wasserflüssen Babylon a very clear sound picture (in spite of the violent and bitter imagery of the other verses of this Psalm), in which the flowing waters wash away the sadness of the captive Israelites and carry them back in memory to their beautiful city. How to communicate this to the listener? Should not the registration be allowed to contribute to the effect?
The MIDI orchestrator, of course, has more hands than an octopus. Compositions intended for two manuals and pedal can be rescored for most improbable combinations of stops that could not possibly be produced on a real pipe organ. My guide in these matters is the intent of the music, to the extent that I can discern it. If, for example, the cantus firmus seems to play against three or more interlaced voices, or wanders to notes that can't be reached because my virtual organist's hands and feet are busy elsewhere, that will not deter me from giving the cantus firmus its own manual, if the result sounds better.
Did composers want their pieces played only in one "original" registration? I doubt it! Certainly there is little evidence of this except in France. The instruments differed significantly from one church to another, and from one town to the next. They differed greatly in size. Some were old, some had just been built or rebuilt. Some were built by great masters, some by unknowns. They differed in pitch by up to a fourth. Composers traveled long distances to study, to audition for a lucrative post as church organist, to curry favor from princes, or even to flee from various difficulties. We imagine John Bull leaving England and discovering the astonishing resources of the Dutch organs. Pachelbel, by no means unusual for his times, served as organist in his native Nürnberg only briefly before moving on to Altdorf, Regensburg, Vienna, Eisenach, Erfurt, Stuttgart, and Gotha, after which he returned permanently to Nürnberg. Along the way he encountered a plague and a French invasion which hastened him to his next venue. Thus, we have to assume that composers encountered a wide variety of instruments, some "modern", some decidedly antique with entirely different stops.
Except in France, there was little attempt to communicate how a piece should sound, even when it was published, or when it was copied for a distant patron. Manuscripts, too, traveled long distances, all over Europe and even into the New World. And in France, where the tradition of indicating registrations developed in the late 17th Century, the documents that survive indicate a range of options for such registrations as "Grand Jeu" and "Jeu de Tierce." There is enough consistency to allow the formulation of "typical" recipes, and enough variation to allow for different personalities and instruments.
Many recordings of historical instruments seem intended to show how the instrument sounds, even when not strictly contemporaneous with the featured composer. Why is so much Buxtehude recorded on Silbermann organs? Not for authenticity in the sense of recreating original registration, but for the pleasure of hearing some really nice sounds that combine great music with an equally great instrument, and in a way that shows both to advantage.
The Problem of Recordings
Recordings are the ultimate anachronism of the "authentic" instrument movement, since, of course, the required
equipment did not exist in the Baroque era. We understand the need for recordings, but how some of them can be
said to serve the cause of authenticity is simply beyond comprehension. We have two main complaints:
It has become very popular in "period instrument" chamber music for the recording engineer to fiddle with the sound levels in such a way that the
harpsichord or organ seems to be on wheels. When the keyboard has a "featured" passage, we hear it suddenly loom forward
toward the microphone, and then when the passage is completed, or sometimes earlier, it recedes into the background, lying in wait for its next solo. Whenever we hear such a recording, we know that, whatever the musicians were doing among themselves to achieve a good ensemble, it has not been captured on the recording. Would it be so terrible to put the microphones where they belong, establish the proper balance, and then leave well enough alone? My guess is that the engineers, and probably others involved in the recording process, are still laboring under the idea that in contrapuntal music, one voice must be louder than the others in order to prevent the listener from forgetting the subject of a fugue! This idea was wrong when it was adopted by pianists, and it is even more wrong when it is applied to real Baroque instruments. Above all, it is certainly inappropriate to give the illusion that large instruments such as
harpsichords and organs are moving about the stage!
It has also become popular to perform the solo works of Bach with exaggerated rubato, apparently so that the performer will be recognized as having a more profound and meaningful understanding of the music than anyone else. This
practice has been overdone to the extent that it is hard to find even four 16th notes played evenly. Typically, the recipe for sounding profound is to make the first 16th note quite long, and the next three progressively shorter. This
rarely works very well; the result reminds us of a certain famous 20th Century composer of circus music whose dances seem intended for a little man with one leg shorter than the other. Instead of communicating whatever deep thoughts were
behind, say, the Goldberg Variations, this method conveys instead mainly an appalling degree of narcissism, reducing
the work of the master to an endless string of mannerisms.
I can hardly believe that today's performers somehow missed the advice that they should have heard from their teachers, "When playing Bach, go for the long lines." Too much tinkering gives the same distracting effect that we might hear from a misguided actor reciting Shakespeare, if he tried to give a different inflection to each syllable, without regard for the meanings of the words and the additional layers of meaning of the passage in the context of the scene. It is more likely that some performers so deeply believe that everything they learned about Baroque performance practice in school was wrong, that they have thrown out the proverbial baby with the bathwater.
There are ways of applying expressive rubato without detracting from the architecture and meaning of a piece. One
has only to study the recordings of Jean Langlais playing the organ music of César Franck to understand that a flexible
tempo can be highly expressive in an appropriate context. But to apply any sort of rubato to the music of Bach, one
has to be clear about the context. In Bach's large-scale organ works, there are many passages that are immediately recognizable as written-out cadenzas or where the improvisational quality of the passage is obvious. These sections frequently come between sections featuring a strict counterpoint or dance rhythms. The dramatic rubati are best left to the improvisational sections; otherwise, the sense of the composition is lost. More subtle rubati may sometimes
help to clarify the structure of the phrases and thus guide the listener through the structure of a piece. Applying
exaggerated rubati at a more detailed level is simply distracting.
Due to the difficulty of implementing meantone tuning with the MIDI software I use (Cakewalk Home Studio 8.0), I have made few attempts to duplicate historic tunings. Different tuning recipes are legion, as are different standards (historically justified or not) for concert pitch. I wonder if some of the tunings have not been used in recent times merely as pawns in a game of musicological one-upsmanship. On the other hand, my first piano was tuned far out of perfect temperament, rather like the tunings fashionable in the 18th Century, and I sometimes miss the interesting effects this had, making the sharp keys especially brilliant and the flat keys more subdued.
The tuning that matters most for the organ, and also for the MIDI that hopes to sound like it, is that mutations are tuned to their just intervals. It is possible to get away without doing this for the Nazard, perhaps, but the mutation pitches will not blend properly into a focussed Cornet timbre unless they are tuned. That means quints are raised by 2 cents, tierces are lowered by 14 cents, sevenths are lowered by 31 cents, and ninths are raised by 4 cents. (Yes, the virtual JEUX organ has a usable Septade III, Nonade IV, and a very nice Septième VI, a sort of ultra-cornet.) The JEUX soundfont accomplishes this fine tuning in the "Melodic Preset" definitions, so that the "stops" of the soundfont can be used without further adjustment.
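Those cent figures are easy to check; a few lines of Python (my choice of language here, not anything used on this site) reproduce the offsets of the just intervals from their equal-tempered counterparts.
    # A quick check of the figures just quoted: the number of cents by which a
    # just interval differs from its equal-tempered equivalent.
    import math

    def cents(ratio):
        return 1200.0 * math.log2(ratio)

    offsets = [("quint (3:2) vs ET fifth", 3 / 2, 700),
               ("tierce (5:4) vs ET major third", 5 / 4, 400),
               ("seventh (7:4) vs ET minor seventh", 7 / 4, 1000),
               ("ninth (9:8) vs ET major second", 9 / 8, 200)]
    for name, ratio, et in offsets:
        print("%-34s %+6.1f cents" % (name, cents(ratio) - et))
    # prints roughly +2.0, -13.7, -31.2 and +3.9, matching the text above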
The other important tuning, applicable to all soundfonts, is that all the sound samples have to be configured so that they are in tune with each other. This turns out to be exceedingly difficult if one is working with real sound samples. The SoundFont file structure supports fine tuning only to the nearest cent. For very high notes, it is easy to calculate on the back of an envelope that beats will be noticeable even if samples are within 1/2 cent of each other. Also, it is possible to hear strong heterodyne effects when matching simple tones at very low pitches. These problems occur on real pipe organs, too, accounting for an old tradition of drawing only one rank of each pitch level on a given manual.
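The back-of-the-envelope calculation is simple enough to sketch; the frequencies below are chosen only as illustrations.
    # Two samples c cents apart at frequency f beat at roughly f*(2**(c/1200)-1) Hz.
    def beat_rate(freq_hz, cents_apart):
        return freq_hz * (2.0 ** (cents_apart / 1200.0) - 1.0)

    for freq in (440.0, 2093.0, 8372.0):          # a', a high c, and a very high c
        for c in (0.5, 1.0):
            print("%7.0f Hz, %.1f cent apart: %.2f beats per second"
                  % (freq, c, beat_rate(freq, c)))
    # at ~8.4 kHz even half a cent of mismatch already beats about 2.4 times per second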
In practice, I found it best to tune each sample to the same "Flute 7-3" sound sample that the AWEPOP soundfont was based on. I continue to make adjustments. I find JEUX is acceptably in tune with at least some GM soundfonts, so that it is possible to realize the Handel organ concertos. The design of organs always involves compromises. With respect to tuning, if the mutations are true, they will be out of tune with the harmony if the organ is based on equal temperament. If we had a simple way to accomplish meantone temperament in the MIDI world, the tierces of the Grand Cornet and the Grand Jeu would be far more resonant in most of the intervals encountered in music of the 17th and 18th Centuries. However, the Cornet is most often used as a solo stop, and the Grand Jeu is supposed to be brash. We therefore opted to have pure tierces in preference to cleaner harmony.
The indications of the composers provide the best guide for ornamentation. I generally begin trills on the auxiliary note unless there is a strong reason to do otherwise. Melodic contour is not such a strong reason, because the notes of the trill are part of the melody they adorn. Authentic indications for the "tremblement appuyé" of some composers are a strong reason, provided we understand that the preceding note must then be slurred into the trill—this context provides evidence for the correct articulation of a delightful little Basse de Trompette by Dandrieu. It is widely appreciated that most of Beethoven's written-out trills start on the upper note, and the reasons for this are sometimes obvious—maintaining harmonic suspense, for example. Evidently, it is not true that all trills after Bach or even after Mozart should begin on the main note! Armed with some experience in French harpsichord music, as well as that of English and German composers who used their own ornamental vocabulary, I try to tackle ornamentation in a sequencing program such as Cakewalk without sacrificing these principles. Making the switch from keyboard to PC mouse makes one think hard about the nature of ornamentation and its musical function!
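For readers wondering what "writing out" an ornament means once the keyboard is traded for a mouse, here is a hedged sketch of expanding a trill sign into explicit note events, beginning on the upper auxiliary as argued above; the tick values and MIDI note numbers are arbitrary illustrations, not taken from my files.
    # Expand a trill into (pitch, start_tick, length) events, upper note first.
    def expand_trill(main_pitch, start, total_len, unit=60, upper_step=2):
        events, t, upper = [], start, True
        while t + unit <= start + total_len:
            events.append((main_pitch + upper_step if upper else main_pitch, t, unit))
            upper, t = not upper, t + unit
        return events

    # a trill on e'' (MIDI note 76) lasting a half note of 480 ticks:
    for event in expand_trill(76, start=0, total_len=480):
        print(event)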
One ornament harpsichordists do not have to consider is tremolo. My readings have revealed that the use of the tremulant in early organ music was far more extensive than most organists are willing to use today. Who would suspect that a French fugue would call for multiple tierce registrations and tremulant? Why do we find so many specifications like "Jeux d'anches mais sans tremblemens", unless the tremulant was ordinarily used when the reed chorus was drawn? The tremulant of historic organs was a tricky thing to adjust, and would not have been maintained through countless restorations and alterations over the centuries if it were not important for the music! A similar train of thought makes me suspect organists have always had a secret desire for what harpsichordists consider cheap tricks, in that Nightingale stops are found all over Europe. I would like to know when they were used! Other than a number of ornithologically suggestive passages in Handel's organ concertos, I am unable to guess. And what pieces were being played when churchgoers were so delighted to hear the Zimbelstern?
Rhythmic Alterations
"It don't mean a thing if it ain't got that certain je-ne-sais-quoi." — Peter Schickele
There are many situations in the Baroque repertoire that call for changes in note values. The extent to which these changes are necessary to make a piece sound well varies greatly. In the harpsichord music of D'Anglebert, I find it necessary to alter almost every note shorter than a quaver. On the other hand, many of Bach's chorales and fugues seem to demand (or even to accept) no alterations whatever. The strength of MIDI realizations is the possibility to achieve the precise effect desired, if one has the patience to do so. Perhaps the degree of patience this requires is one reason relatively few MIDI realizations exist for the French harpsichord repertoire. For French organ music, I posit additional causes, notably the lack of a suitable "instrument", a gap I have attempted to fill with the JEUX soundfont.
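As one small illustration of such an alteration in MIDI terms, the sketch below applies a mild long-short inequality to pairs of evenly written notes; the 2:1 ratio is only an example, and as noted above the appropriate degree varies greatly from composer to composer.
    # Lengthen the first of each pair of short notes and shorten the second,
    # working on (start_tick, length) pairs as a sequencer would store them.
    def inegal(pairs, ratio=2.0):
        out = []
        for i in range(0, len(pairs) - 1, 2):
            s1, length = pairs[i]
            long_len = int(round(2 * length * ratio / (ratio + 1.0)))
            out.append((s1, long_len))                         # first note, lengthened
            out.append((s1 + long_len, 2 * length - long_len)) # second note, shortened
        if len(pairs) % 2:
            out.append(pairs[-1])                              # leave an unpaired note alone
        return out

    eighths = [(i * 120, 120) for i in range(4)]               # four even eighth notes
    print(inegal(eighths))    # -> [(0, 160), (160, 80), (240, 160), (400, 80)]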
About the JEUX Soundfont
The JEUX soundfont is a collection of sounds designed for use with Creative Labs' AWE32 and similar sound cards that support the SoundFont 2 file structure. I have tried to duplicate the characteristic sounds of real pipe organ stops (including individual ranks, mixtures, choruses, and effects), with a particular goal of assembling a collection of stops suitable for MIDI realizations of organ music written before 1750. Through careful selection of stops, I have found it is possible to realize music of later composers as well, for the palette of tone colors I have included is very broad. The JEUX soundfont may be freely distributed but may not be sold or used commercially under any circumstance without my written permission.
What is a Soundfont?
A soundfont is a specific file structure used by AWE32, AWE64, and similar sound cards from Creative Labs and other vendors. Sound samples are organized into "Instruments", which in turn are organized into "Melodic Presets". The "Melodic Presets" are the "patches" you specify in your MIDI sequences with numbers 0 through 127, or by selecting from a "patch list" that assigns names to this same range of numbers. In JEUX, you can think of the soundfont "Instruments" as ranks or mixtures on a real pipe organ, and the "Melodic Presets" are the stop knobs, tabs, or buttons on the console.
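In plain programming terms, the hierarchy looks roughly like the sketch below; this is my own illustration, not any official API, and the preset numbers shown are hypothetical.
    # A Preset (a "stop") points at one or more Instruments (ranks or mixtures),
    # each of which maps key ranges to samples with per-zone tuning in cents.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Zone:
        key_range: Tuple[int, int]     # MIDI note numbers covered by this sample
        sample_name: str
        fine_tune_cents: int = 0       # the SoundFont format allows whole cents only

    @dataclass
    class Instrument:                  # roughly "a rank or a mixture"
        name: str
        zones: List[Zone] = field(default_factory=list)

    @dataclass
    class Preset:                      # roughly "a stop knob on the console"
        name: str
        bank: int
        program: int                   # 0..127, what the sequencer selects
        instruments: List[Instrument] = field(default_factory=list)

    nazard = Preset("Nazard 2 2/3", bank=42, program=13,   # program number hypothetical
                    instruments=[Instrument("flute_quint",
                                 [Zone((36, 96), "flute_sample", +2)])])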
There are actually two specifications for the soundfont format. The current one, for AWE soundcards, uses the file extension "sf2". Soundfonts are most easily edited with the Vienna Sound Studio software from Creative Labs. Vienna Sound Studio Version 2.3 is far more reliable than its predecessors. A large number of parameters can be specified, though most of them are not very useful for building organ stops! I have tried to use the parameters in a consistent manner in order to simplify maintenance and development tasks.
If you have the SoundBlaster Live! sound card, please be sure you have upgraded to version 2.0 or later of LiveWare from Creative Labs. This upgrade is required to fix errors in the original SoundBlaster Live! software that caused distortion in some soundfonts that worked correctly on AWE 32 or AWE 64 sound cards. JEUX was one of the soundfonts affected by this problem! The upgrade can be done by downloading files from the Creative Labs web site, but be prepared for a herculean effort, as the total size of the download approaches 30 MB! That should be enough to fix a lot of bugs and create some new ones besides.
Two products of Seer Systems, "Reality" and "SurReal", were designed to use soundfonts to generate MIDI realizations directly from software. "Reality" also recorded the output directly to .WAV format on the hard drive. Seer Systems was planning to release a new product, "Wavemaker", to provide a more inexpensive method to write .WAV files from the output of "SurReal". This promising product does not seem to have caught on. Other software-based synthesizers using the soundfont format are available, but none actually duplicates the sound of JEUX. The hardware from Creative Labs is undoubtedly the best vehicle for soundfonts, but in recent years the complications of drivers and operating systems have made it very difficult to obtain a workable environment. For example, my notebook
PC, running Windows Vista, turns out to have a BIOS and an internal architecture that is incompatible with adding an external soundcard that supports MIDI synthesis.
Where does this leave the MIDI musician? If the original capabilities of the SoundBlaster cards are unavailable due to the absence of appropriate drivers, computer architecture, etc., then it will be necessary to work within
more limited capabilities of software-based synthesizers. However, I have some suggestions on how this might be
accomplished!
Measurement of the sounds produced by SoundBlaster cards shows that a number of the parameters defined in the published
soundfont standard are implemented in a non-standard way. For example, the parameters that are nominally calibrated in decibels turned out to be implemented in units of approximately 0.4 decibels in the SoundBlaster cards. Some of the filters were found to differ from the standard as well. As a result, any part of the JEUX soundfont that depends on these
parameters will necessarily sound different when a different synthesizer is used. Therefore, the only available method for ensuring reliable performance of the JEUX soundfont in a wide variety of hardware and software environments
would seem to be to tweak the sound samples in such a way that they do not depend on any of these parameters.
For the future, then, the most reliable method of soundfont construction must be to prepare each sample so that it can be used without any adjustment of the internal soundfont parameters. All the filtering and all the volume adjustments
will have to be incorporated into the individual samples. The sound samples will have to be adjusted to an appropriate volume level before they are added to the soundfont. In the case of JEUX, this approach will significantly increase the size of the soundfont, because many of the current samples are used at different volumes or with different filter
thresholds in the various "stops". The parameters of the SoundBlaster synthesizer were a great idea, but until and unless the SoundBlaster cards can be made truly portable, we will have to do without them. Unfortunately, converting the JEUX soundfont to this hardware- and software-independent format can only be done in an environment that fully supports
the parameters that JEUX now uses. Will we ever have time to undertake this project?
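If that conversion is ever attempted, the "baking" step might look something like the following sketch, which applies a fixed gain and a simple one-pole low-pass directly to the sample data; it assumes the numpy library, and the -6 dB gain and 4 kHz cutoff are placeholders rather than values taken from JEUX.
    # Bake a volume adjustment and a gentle low-pass into a sample so the
    # soundfont no longer relies on synthesizer-side parameters.
    import numpy as np

    def bake(samples, rate, gain_db=-6.0, cutoff_hz=4000.0):
        gain = 10.0 ** (gain_db / 20.0)                    # dB to linear amplitude
        alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / rate)
        out = np.empty(len(samples))
        y = 0.0
        for n, x in enumerate(samples.astype(float) * gain):
            y += alpha * (x - y)                           # y[n] = y[n-1] + a*(x[n]-y[n-1])
            out[n] = y
        return out

    rate = 44100
    t = np.arange(rate) / rate
    sine = np.sin(2.0 * np.pi * 440.0 * t)                 # stand-in for a real sample
    processed = bake(sine, rate)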
I gathered sound samples from all over, even my own kitchen. Most, however, were available free of charge or other stated restrictions on the internet. A few were included in the AWEPOP soundfont of Arend Zwaag, my starting point for the development of JEUX. Some of the samples have been synthesized by mathematical transformations of other sounds, and many have been subjected to filterings or digital transformations to bring them into the required form (44.1 KHz sampling rate, for instance). I would very much like to find better quality samples of some of the characteristic organ sounds, such as Bourdon 16', Doublette, Gambe 8', and some of the open flutes. Such samples need to be uncompressed, with a sampling rate of at least 22 KHz, free from extraneous noises.
As far as I know, the first attempt at a non-commercial, comprehensive library of organ stops for MIDI was a set of specifications adapted by Benjamin Tubb from those of the Sound Canvas Pipe Organ Project (SCPOP) developed by Raphael Diodati and Filippo Tigli (1997). The Italian stop names of SCPOP have persisted in all of the spinoffs, and I have retained a few for convenience in the JEUX soundfont, too. These specifications were of the form, to make a "Fondi 16-8" registration, combine Flute at velocity 110, Shakuhachi at velocity 70, Whistle at velocity 60, and Recorder at velocity 100, the latter lowered by one octave. These specifications seem to have been used as a starting point by Arend Zwaag, who constructed the very nice AWEPOP soundfont (with similar Italian stop names). A somewhat different approach, using only ROM sounds, was taken by Ralph van Zant in the AWEORGAN soundfont. A few commercial soundfonts are also available, most recently from Andreas Sims, though I do not know of any that have targeted the baroque organ styles. The "Church Organ" patch usually encountered in General MIDI is, in my opinion, a miserable substitute for the sound of the pipe organ, to be used only as a last resort.
How I Designed JEUX
My intent has been to develop a soundfont for use with MIDI sequences created through Cakewalk or other sequencing programs. I have not added a MIDI keyboard to my home computer setup, but I see no reason why the JEUX soundfont should not be useful in that environment as well. The JEUX soundfont will be easier to use if the design principles are understood:
The stops are designed to have a reasonable and consistent level of reverb. I set my AWE64 sound card for "Hall 2" and "Chorus 3" effects (chorus effect is used only in the Viola Celeste 4 stop).
Samples are tuned to each other as well as possible. However, the soundfont parameters permit fine tuning only to the nearest cent, not enough to eliminate all heterodyne effects for very high ranks (Cymbale, mixtures), or for very low notes held longer than 5-10 seconds. These effects, of course, occur in real pipe organs, too. That's why many classical registrations contain only one rank at each pitch level. Mutation ranks are tuned to just intervals. Experimentation showed that failure to perform this step is a significant problem in the AWEORGAN and AWEPOP soundfonts. Without this adjustment, mixtures, especially those containing the tierce, do not sound well. Quints are raised by 2 cents, tierces are lowered by 14 cents, sevenths are lowered by 31 cents, and ninths are raised by 4 cents. JEUX has a very nice Septade III, a Septième VI cornet-like combination, and even a Nonade IV.
Volume levels have been greatly reduced from those of the AWEPOP soundfont in order to restore some balance with General MIDI soundfonts and to reduce static noise generated within the AWE sound card or within the speakers' pre-amplifier. It is always possible to turn up the volume control on the speakers! Stops are designed to be used, whether singly or in combination, at a volume level of 100 (MIDI "velocity" parameter). If swell effects are desired, use the MIDI "expression" parameter (controller 11). Stops are based on classic French concepts, but with provisions for German, Spanish, Italian, and English tastes by providing additional flute and reed color stops and mutation ranks. At least a few of the sound samples (notably the Bombarde and Basse de Trompette) can be traced to French organs. It turns out that constructing a soundfont suitable for organ music of the German Baroque is far easier than duplicating French sounds. I have tried endless variations of the Plein Jeu and Grand Jeu. I feel these characteristic sounds are at last recognizable, at times with thrilling results.
The second bank contains, among others, a series of foundation and reed combinations of the type usually called for in French organ music of the Romantic era, notably that of César Franck. Depending on the demands of the music and the polyphony limits of your sound card, some care is needed in using these combinations. When chords of more than 6 notes are encountered, and more than one track is required to play these same notes, the polyphony limits of the AWE64 sound card are exceeded, and some of the top notes may not be heard. When this happens, it is necessary to select a simplified registration or omit some of the duplicated notes in the chords.
The virtual Grand Orgue has more or less normal-scale principals (and a wide-scale Nazardos VIII for Spanish registrations). The principals on the virtual Positif use narrow pipes, based on the same flute samples from the AWEPOP soundfont. The resulting contrast between the virtual manuals provides interesting effects. The reeds on the Positif are also weaker than those assigned to the virtual Grand Orgue, though there is nothing in the design of JEUX to prevent the stops from being used as if they were on other manuals! (I would very much like to obtain better samples of authentic principals, particularly notes above the treble clef, sampled at 44.1 KHz and free of noise and reverb.)
The virtual Echo manual has a unique "Gobletflöte" created from a large wineglass I found in my kitchen. It also has a selection of suitably weak reeds. Stops on the Echo can be used "as is" for the effects required in some of the Daquin noëls.
The remaining stops can be considered assigned to the Récit, Bombarde, or Pedal divisions according to the needs of the music. My limited experience with the music of the French classical organ school, as well as the recent recording of the wonderful Dom Bédos organ at Bordeaux, has convinced me of the wisdom of assigning the Grand Cornet to the Récit and the largest reeds to the Bombarde manuals, for this repertoire. In the JEUX soundfont, these divisions have some special ranks of limited range, including the remarkable Bombarde 16, Basse de Trompette 16, and Gros Cromorne 8. For a more German rendition of chorales, the Posaune 16 is fairly well behaved, though like all low reeds its speech is a little slow. Organists know to add the Prestant to compensate for this problem.
The effects department, added purely for entertainment value, features huge bells, a very nice Carillon, music box ("Petit Carillon"), Zimbelstern (notes 60-71 give the effect of the Zimbelstern starting up and continuing its revolution with 7 tuned bells, while notes 48-59 give the effect of the apparatus coming to a stop), and real nightingales in stereo, very effective in certain passages in the Handel organ concertos.
The samples have varying lengths. The longest are the bells and nightingales, which could be removed if you find your sound card does not have enough memory! The Vox Humana, following modern practice, has built-in tremolo. However, some early French vox ranks seem not to have had their own tremulant, and were used in combination with other ranks, so I included an early French registration. The other stops will generally respond to the MIDI "modulation" parameter (controller 1). Some may be surprised to learn that French classical organists favored the tremulant in fugues and with the Jeu de Tierce and Cornet. Many early recipes for the Grand Jeu require "tremblement fort", sure to raise eyebrows today.
The Grand Jeu turns out to be one of the more difficult sounds that had to be duplicated. To do it right would require more layers of sound than the AWE64 sound card could handle. Further, the combination of reeds and mutation ranks brings us face to face with a characteristic of the classic French organ that is not usually encountered in modern instruments: the reeds were much weaker in the treble than in the bass, while the cornet ranks tended in the opposite direction. Therefore, the Grand Jeu registration requires both the reeds and the Cornet, sounding like a Cornet in the high range and becoming more clarinet-like in the bass. When all of these pipes get going on a real classic French organ, like the one at Notre-Dame de Guibray in Falaise, France, there is a distinct wheezing sound, probably the accumulation of reverberations, overtones not quite in tune, and a substantial amount of wind, an exciting effect that the soundfont can only hint at.
JEUX can be loaded as a "user" soundfont into any available bank of the AWE sound card. My MIDI sequences assume it is in bank 42. It will be necessary to set the "bank selection method" to "controller 0" for each track (accomplished easily in Cakewalk software), and the list of stops will have to be defined to your sequencer, either by importing a jeux.insfile or by typing in the stop names in Cakewalk. My MIDI sequences use controller messages to set the banks for you—at least it works for me! Stronger stops or combinations may still produce unwanted soundcard noise or other effects in some situations:
If a high degree of stereo separation is used (the MIDI "pan" parameter). This was a major problem in the AWEPOP soundfont.
If a large number of notes is played simultaneously.
If strong bass stops are doubled in octaves (a situation which also tends to bring out the worst performance from small speakers—the worst bass "booming" comes from low notes of the wide-scale flutes and is exacerbated by high reverb levels).
A variety of plenum registrations are provided. They will be most useful if attention is paid to how they are constructed (i.e., wide or narrow ranks, whether 5ths or reeds are included, etc.) and to which virtual manual they belong.
It is not advised to use stops very far above their intended pitch level, especially when the track requires notes above the treble clef. This comes from internal processing limitations in the AWE sound cards, which can raise a sample sound no more than 2 octaves above its original pitch. Experimentation will reveal these limitations.
Compound stops such as the Grand Cornet, Jeu de Tierce, Nasard III, etc. are built from ranks of the appropriate scale, at least over most of their range. Cymbale is of intermediate scale, Fourniture of relatively narrow scale. For very high mixtures such as Cymbale, it is physically impossible to achieve perfect intonation, because soundfont sample pitch can be adjusted only in increments of one cent.
Good speakers make all the difference! I switched from Altec-Lansing ACS-40 (limited bass response, lots of booming and distortions) to Cambridge SoundWorks, which I find entirely satisfactory. The volume knob goes up to number 10, but I have not found any reason to turn it past 4.
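Referring back to the bank-42 advice above: outside Cakewalk, the same "controller 0" bank selection can be expressed directly as MIDI messages. The sketch below uses the third-party mido library, which is merely my assumption for illustration; the program number shown is hypothetical.
    # CC 0 (bank select MSB) chooses bank 42, and the following program change
    # picks the stop within that bank.
    import mido

    def select_stop(port, channel, bank=42, program=0):
        port.send(mido.Message('control_change', channel=channel,
                               control=0, value=bank))    # bank select MSB
        port.send(mido.Message('program_change', channel=channel,
                               program=program))          # the "stop" number

    # On a machine with a MIDI output attached, one might then write:
    # with mido.open_output() as port:
    #     select_stop(port, channel=0, bank=42, program=8)   # program 8 is hypothetical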
A Note on 3-D Sound
For cathedral organ presence, a four-speaker system (two front, two rear) will probably not be any more realistic than a normal stereo system. Real pipe organ sounds come from any part of the organ case, at different heights as well as left, right, front, or rear. In particular, the Echo division, if there was one, was frequently housed at the top of the organ case, well suited for celestial sounds.
If MIDI supported at least two dimensions of stereo separation (right-left and up-down), we could produce a reasonable approximation of realistic cathedral presence. Until such a MIDI structure comes along, one-dimensional stereo separation will have to suffice.
A Problem with Reverb
In the MIDI environment, reverb is an "effect" added to the output of the MIDI synthesizer. Reverb is usually created
by "echoing" the "dry" MIDI output signal at a fixed time delay. Whatever the delay, there will exist a group of frequencies such that the reverb effect will be almost exactly out of phase with the dry signal. For JEUX and the SoundBlaster
hardware and drivers, this happens at E in the bass clef. The result is that the low E's, especially those with fairly simple acoustical characteristics (such as principals and flutes), are significantly softer than the other notes. Further, when the
note ends, there is an audible "bump" when the echo signal continues to sound after the dry signal has stopped.
This effect is sometimes heard on real organs, when the dimensions of some part of the case result in reflected sound cancelling
out the sound produced by a particular pipe. Whenever you hear a "bump" at the end of an organ sound, you know that something
like this has occurred.
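A short calculation makes the relationship concrete. A single echo at delay d, added back to the dry signal, produces nulls near f = (2k+1)/(2d); I do not know the actual delay the SoundBlaster effect uses, so the sketch simply inverts the relation to see what delay would put the first null on an E.
    # The k = 0 case of f = (2k+1)/(2d): the lowest cancelled frequency.
    def first_null_delay(freq_hz):
        return 1.0 / (2.0 * freq_hz)

    for name, f in (("E2", 82.41), ("E3", 164.81)):
        print("%s (%.2f Hz): first null at a delay of about %.1f ms"
              % (name, f, 1000.0 * first_null_delay(f)))
    # roughly 6.1 ms for E2 and 3.0 ms for E3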
In order to produce a smoother reverb, it seems to me that the software needs to "reflect" the sound from several different
logical points (thus, several independent delays) chosen in such a way that there is less interference with the dry signal. Real
reverb, of course, involves echoes from large, complex surfaces. Has anyone succeeded in modeling reverb in a realistic
room? Does anyone know of a VST plug-in that might produce the sort of reverb we need, and that can be added to the dry
signal in a digital audio editing program such as Audacity?
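I have not found a plug-in to recommend, but the "several independent delays" idea can at least be sketched: a few feedback comb filters with mutually prime delay lengths, mixed with the dry signal, in the spirit of a Schroeder reverberator. The delay times and gains below are illustrative guesses rather than a tuned room model, and the sketch assumes the numpy library.
    import numpy as np

    def comb(x, delay, feedback=0.75):
        # feedback comb filter: each echo feeds a further, quieter echo
        y = np.zeros(len(x) + delay)
        for n in range(len(x)):
            y[n + delay] += x[n] + feedback * y[n]
        return y[:len(x)]

    def simple_reverb(dry, rate=44100, wet=0.3):
        delays = [int(rate * t) for t in (0.0297, 0.0371, 0.0411, 0.0437)]
        wet_sum = sum(comb(dry, d) for d in delays) / len(delays)
        return dry + wet * wet_sum

    rate = 44100
    t = np.arange(rate) / rate
    dry = np.sin(2 * np.pi * 82.41 * t) * np.exp(-3.0 * t)   # a decaying low E
    out = simple_reverb(dry, rate)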
Click here to send me E-mail! | 计算机 |