2014 Facebook Final Four: Duke, Florida, Virginia, Nebraska
The 2014 NCAA Division I Men’s Basketball Championship tournament is decided on the court, and not by posts on Facebook, but if the latter were true, Duke University, the University of Florida, the University of Virginia, and the University of Nebraska would advance to the Final Four at AT&T Stadium in Arlington, Texas, and the Duke Blue Devils would be cutting down the nets April 7.
Facebook Kicks Around College Football’s AP Top 25 Preseason Teams
David Cohen on August 30, 2013 1:06 PM
The 2013 college football season kicked off with a handful of games Thursday night, and Data Editor Robert D’Onofrio helped Facebook users prepare for the first weekend of tailgating by analyzing the social network’s fans of the top 25 teams in the AP preseason rankings, uncovering some surprising findings, and some that were not so surprising.
Facebook Dating App Coffee Meets Bagel Puts Big Ten Conference Under A Microscope
David Cohen on June 12, 2013 7:29 AM
Social dating application and website Coffee Meets Bagel would not be the first name to come up when thinking of analysis of the Big Ten Conference, but it did just that, examining the connections (or lack thereof) between alumni of universities in the conference.
University Of North Alabama Schools The Field In Facebook Engagement
Justin Lafferty on September 26, 2012 6:47 PM
The University of North Alabama may not have the enrollment or national following of schools such as Ole Miss, Nebraska, or Marquette, but it has bettered them in one category: Facebook engagement.
MeAndMyDrum
The beat of a different blogger
What To Do With Low-Paying CPC Keywords
by Mark Sierra on May 14, 2009
When you’re researching keywords to build your niche site for the purposes of making money with Google Adsense, it’s important to find keywords with a high cost-per-click (CPC) and relatively low search volume (low competition). This is the low-hanging fruit that I’ve outlined recently.
But what if the CPC stinks, but the search volume looks ideal? Do you build the site anyway and hope for the best? Well, you could, but you might make more money with what I’m about to tell you. Before I get to that, here’s a disclaimer:
If you are pregnant, thinking of becoming pregnant, or know someone who is pregnant, then inform your spouse immediately. Success may result if this information is used properly. If you experience profit for more than four hours, then seek the counsel of a financial adviser and invest responsibly. Dizziness may occur if you spin in place really, really fast. Do not operate heavy machinery after taking this advice because, well, this information has nothing to do with educating you on that.
Okay, good. Got that out of the way. There have been many times when I’ve found keywords that fit this scenario. I tend to avoid CPCs lower than, say, $1.75. Not all the time; it could be lower or higher, and that could be based on my overall feel of the niche I’m researching or if it’s late at night and I’m just too darn tired to care.
But in either case, my eye naturally gravitates to the upward end of the spectrum and yours should, too. We need to find a way to maximize our profits while minimizing our efforts.
So what can you do to boost your profits with these keywords?
If the niche interests you, but Adsense doesn’t seem to be the way to go, then why not build an eBay store using something like BANS or phpBay Pro? The money you’d make would then come from commissions made off sales or new customer signups. It’s like giving all that information you’ve collected a second chance, which can be very rewarding.
But don’t set up shop just yet! There’s more you need to know that could help maximize your profits even more. Details for that will be in my next post.
In the meantime, I’d like to direct your attention to a buddy of mine, Mark Mason. He’s put together a video on Josh Spaulding’s amazingly popular report called the $5 Mini-Site Formula. Both the video and report are free, btw. The video will explain in detail the process I use when building niche sites and profiting from them.
GIMP 2.6 released, one step closer to taking on Photoshop
GIMP 2.6 has been officially released. The new version is the first to include …
- Oct 2, 2008 2:20 am UTC
A new release of the venerable GNU Image Manipulation Program (GIMP) is now available for download. Version 2.6 offers a variety of new features, user interface improvements, and is also the first release to include support for the Generic Graphics Library (GEGL), a powerful, graph-based image editing framework.
The GIMP is an open source graphics editor that is available for Linux, Windows, and OS X. It aims to provide Photoshop-like capabilities and offers a broad feature set that has made it popular with amateur artists and open source fans. Although the GIMP is generally not regarded as a sufficient replacement for high-end commercial tools, it is beginning to gain some acceptance in the pro market.
One of the most significant limitations of the GIMP is that it has traditionally only supported 8 bits per color channel. This weakness is commonly cited as a major barrier to GIMP adoption by professional artists, who require greater color depth. This problem has finally been addressed by the new GEGL backend, which delivers support for 32-bpc. The inclusion of GEGL is a major milestone for the GIMP and takes it one step closer to being a viable Photoshop replacement for professional users. In this release, GEGL is still not quite ready to be enabled by default, but users can turn it on with a special option.
GIMP 2.6 also includes some minor user interface enhancements. The application menu in the tool palette window has been removed, and its contents have been merged into the document window menu. A document window will now be displayed at all times, even when no images are open. The floating tool windows have also been adjusted so that they are always displayed over the document window and cannot be obscured. To reduce clutter and make the windows easier to manage, the floating windows will no longer be listed in the taskbar.
The GIMP user interface has long been a source of controversy, and is characterized by some users as one of the worst on the Linux desktop. The modest changes made in this release are nice improvements, but probably won't be enough to satisfy the most vehement haters. A more extensive redesign is in the works, however, and the developers are gathering insight from users and experts. The empty window behavior in version 2.6 is based on one of the first specification drafts that emerged from the redesign project.
There are a number of important functionality improvements that will be welcomed by users, too. The freehand selection tool now has support for polygonal selections and editing selection segments, the GIMP text tool has been enhanced to support automatic wrapping and reflow when text areas are resized, and a new brush dynamics feature has added some additional capabilities to the ink and paint tools. Version 2.6 also has a few improvements for plug-in developers, like a more extensive scripting API for manipulating text layers.
For the next major release, the developers plan to improve GEGL support and integrate the development work that was done in Summer of Code projects. One of the Summer of Code projects that could land in 2.8 brings support for editing text directly on the image canvas, thus obviating the need for a text input dialog. Another project that we could see in 2.8 adds support for marking specific brushes and gradients with tags so that they are easier to find and organize.
Users can download the latest release from the GIMP web site. For more details about the new version, check out the official release notes.
Michael Hall's Blog
Posts tagged with 'community'
Communicating Recognition
by Michael Hall
Tagged: community, economics-of-community, foss, ubuntu
Recognition is like money, it only really has value when it’s being passed between one person and another. Otherwise it’s just potential value, sitting idle. Communication gives life to recognition, turning its potential value into real value.
As I covered in my previous post, Who do you contribute to?, recognition doesn’t have a constant value. In that article I illustrated how the value of recognition differs depending on who it’s coming from, but that’s not the whole story. The value of recognition also differs depending on the medium of communication.
Over at the Community Leadership Knowledge Base I started documenting different forms of communication that a community might choose, and how each medium has a balance of three basic properties: Speed, Thoughtfulness and Discoverability. Let’s call this the communication triangle. Each of these also plays a part in the value of recognition.
Speed
Again, much like money, recognition is something that is circulated. Its usefulness is not simply created by the sender and consumed by the receiver, but rather passed from one person to another, and then another. The faster you can communicate recognition around your community, the more utility you can get out of even a small amount of it. Fast communications, like IRC, phone calls or in-person meetups let you give and receive a higher volume of recognition than slower forms, like email or blog posts. But speed is only one part, and faster isn’t necessarily better.
Thoughtfulness
Where speed emphasizes quantity, thoughtfulness is a measure of the quality of communication, and that directly affects the value of recognition given. Thoughtful communications require consideration upon both receiving and replying. Messages are typically longer, more detailed, and better presented than those that emphasize speed. As a result, they are also usually a good bit slower too, both in the time it takes for a reply to be made, and also the speed at which a full conversation happens. An IRC meeting can be done in an hour, where an email exchange can last for weeks, even if both end up with the same word-count at the end.
Discoverability
The third point on our communication triangle, discoverability, is a measure of how likely it is that somebody not immediately involved in a conversation can find out about it. Because recognition is a social good, most of its value comes from other people knowing who has given it to whom. Discoverability acts as a multiplier (or divisor, if done poorly) to the original value of recognition.
There are two factors to the discoverability of communication. The first, accessibility, is about how hard it is to find the conversation. Blog posts, or social media posts, are usually very easy to discover, while IRC chats and email exchanges are not. The second factor, longevity, is about how far into the future that conversation can still be discovered. A social media post disappears (or at least becomes far less accessible) after a while, but an IRC log or mailing list archive can stick around for years. Unlike the three properties of communication, however, these factors to discoverability do not require a trade off, you can have something that is both very accessible and has high longevity.
Most communities will have more than one method of communication, and a healthy one will have a combination of them that complement each other. This is important because sometimes one will offer a more productive use of your recognition than another. Some contributors will respond better to lots of immediate recognition, rather than a single eloquent one. Others will respond better to formal recognition than informal. In both cases, be mindful of the multiplier effect that discoverability gives you, and take full advantage of opportunities where that plays a larger than usual role, such as during an official meeting or when writing an article that will have higher than normal readership.
Who do you contribute to?
When you contribute something as a member of a community, who are you actually giving it to? The simple answer of course is “the community” or “the project”, but those aren’t very specific. On the one hand you have a nebulous group of people, most of which you probably don’t even know about, and on the other you’ve got some cold, lifeless code repository or collection of web pages. When you contribute, who is that you really care about, who do you really want to see and use what you’ve made?
In my last post I talked about the importance of recognition, how it’s what contributors get in exchange for their contribution, and how human recognition is the kind that matters most. But which humans do our contributors want to be recognized by? Are you one of them and, if so, are you giving it effectively?
The owner of a project has a distinct privilege in a community, they are ultimately the source of all recognition in that community. Early contributions made to a project get recognized directly by the founder. Later contributions may only get recognized by one of those first contributors, but the value of their recognition comes from the recognition they received as the first contributors. As the project grows, more generations of contributors come in, with recognition coming from the previous generations, though the relative value of it diminishes as you get further from the owner.
After the project owner, the next most important source of recognition is a project’s leaders. Leaders are people who gain authority and responsibility in a project, they can affect the direction of a project through decisions in addition to direct contributions. Many of those early contributors naturally become leaders in the project but many will not, and many others who come later will rise to this position as well. In both cases, it’s their ability to affect the direction of a project that gives their recognition added value, not their distance from the owner. Before a community can grown beyond a very small size it must produce leaders, either through a formal or informal process, otherwise the availability of recognition will suffer.
Leadership isn’t for everybody, and many of the early contributors who don’t become one still remain with the project, and end up making very significant contributions to it and the community over time. Whenever you make contributions, and get recognition for them, you start to build up a reputation for yourself. The more and better contributions you make, the more your reputation grows. Some people have accumulated such a large reputation that even though they are not leaders, their recognition is still sought after more than most. Not all communities will have one of these contributors, and they are more likely in communities where heads-down work is valued more than very public work.
When any of us gets started with a community for the first time, we usually end of finding one or two people who help us learn the ropes. These people help us find the resources we need, teach us what those resources don’t, and are instrumental in helping us make the leap from user to contributor. Very often these people aren’t the project owners or leaders. Very often they have very little reputation themselves in the overall project. But because they take the time to help the new contributor, and because theirs is very likely to be the first, the recognition they give is disproportionately more valuable to that contributor than it otherwise would be.
Every member of a community can provide recognition, and every one should, but if you find yourself in one of the roles above it is even more important for you to be doing so. These roles are responsible both for setting the example, and keeping a proper flow of recognition in a community. And without that flow of recognition, you will find that your flow of contributions will also dry up.
Why do you contribute to open source?
It seems a fairly common, straight forward question. You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”. These are all excellent reasons for creating or improving something. But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached. When I ask “Why do you contribute to open source”, I’m asking why you give it away.
This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when the contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: Recognition.
If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you would find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on. It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen. Even the most permissive licenses require attribution, something that tells everybody who made it.
Now let’s flip that question around: Why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it? I don’t mean your intent, I’ll assume that you want to recognize contributions, I mean do you have the processes and people in place to give it?
We’ve gotten very good about building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process. But human recognition is still what matters most. Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important, letting other people know you appreciate it is even more important.
If you are the owner or a leader in a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized and that people know it simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency, not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way, if we are aware of the importance of recognition in a community we can be deliberate in our approaches to making sure it flows freely in exchange for contributions.
When is a fork not a fork?
Technically a fork is any instance of a codebase being copied and developed independently of its parent. But when we use the word it usually encompasses far more than that. Usually when we talk about a fork we mean splitting the community around a project, just as much as splitting the code itself. Communities are not like code, however, they don’t always split in consistent or predictable ways. Nor are all forks the same, and both the reasons behind a fork, and the way it is done, will have an effect on whether and how the community around it will split.
There are, by my observation, three different kinds of forks that can be distinguished by their intent and method. These can be neatly labeled as Convergent, Divergent and Emergent forks.
Convergent Forks
Most often when we talk about forks in open source, we’re talking about convergent forks. A convergent fork is one that shares the same goals as its parent, seeks to recruit the same developers, and wants to be used by the same users. Convergent forks tend to happen when a significant portion of the parent project’s developers are dissatisfied with the management or processes around the project, but otherwise happy with the direction of its development. The ultimate goal of a convergent fork is to take the place of the parent project.
Because they aim to take the place of the parent project, convergent forks must split the community in order to be successful. The community they need already exists, both the developers and the users, around the parent project, so that is their natural source when starting their own community.
Divergent Forks
Less common than convergent forks, but still well known by everybody in open source, are the divergent forks. These forks are made by developers who are not happy with the direction of a project’s development, even if they are generally satisfied with its management. The purpose of a divergent fork is to create something different from the parent, with different goals and most often different communities as well. Because they are creating a different product, they will usually be targeting a different group of users, one that was not well served by the parent project. They will, however, quite often target many of the same developers as the parent project, because most of the technology and many of the features will remain the same, as a result of their shared code history.
Divergent forks will usually split a community, but to a much smaller extent than a convergent fork, because they do not aim to replace the parent for the entire community. Instead they often focus more on recruiting those users who were not served well, or not served at all, by the existing project, and will grow a new community largely from sources other than the parent community.
Emergent Forks
Emergent forks are not technically forks in the code sense, but rather new projects with new code, but which share the same goals and targets the same users as an existing project. Most of us know these as NIH, or “Not Invented Here”, projects. They come into being on their own, instead of splitting from an existing source, but with the intention of replacing an existing project for all or part of an existing user community. Emergent forks are not the result of dissatisfaction with either the management or direction of an existing project, but most often a dissatisfaction with the technology being used, or fundamental design decisions that can’t be easily undone with the existing code.
Because they share the same goals as an existing project, these forks will usually result in a split of the user community around an existing project, unless they differ enough in features that they can target users not already being served by those projects. However, because they do not share much code or technology with the existing project, they most often grow their own community of developers, rather than splitting them from the existing project as well.
All of these kinds of forks are common enough that we in the open source community can easily name several examples of them. But they are all quite different in important ways. Some, while forks in the literal sense, can almost be considered new projects in a community sense. Others are not forks of code at all, yet result in splitting an existing community none the less. Many of these forks will fail to gain traction, in fact most of them will, but some will succeed and surpass those that came before them. All of them play a role in keeping the wider open source economy flourishing, even though we may not like them when they affect a community we’ve been involved in building.
Community Donations Funding Report
Last year the main Ubuntu download page was changed to include a form for users to make a donation to one or more parts of Ubuntu, including to the community itself. Those donations made for “Community projects” were made available to members of our community who knew of ways to use them that would benefit the Ubuntu project.
Every dollar given out is an investment in Ubuntu and the community that built it. This includes sponsoring community events, sending community representatives to those events with booth supplies and giveaway items, purchasing hardware to improve development and testing, and more.
But these expenses don’t cover the time, energy, and talent that went along with them, without which the money itself would have been wasted. Those contributions, made by the recipients of these funds, can’t be adequately documented in a financial report, so thank you to everybody who received funding for their significant and sustained contributions to Ubuntu.
As part of our commitment to openness and transparency we said that we would publish a report highlighting both the amount of donations made to this category, and how and where that money was being used. Linked below is the first of those reports.
Calling for Ubuntu Online Summit sessions
Tagged: events, uds, work
A couple of months ago Jono announced the dates for the Ubuntu Online Summit, June 10th – 12th, and those dates are almost upon us now. The schedule is open, the track leads are on board, all we need now are sessions. And that’s where you come in.
Ubuntu Online Summit is a change for us, we’re trying to mix the previous online UDS events with our Open Week, Developer Week and User Days events, to try and bring people from every part of our community together to celebrate, educate, and improve Ubuntu. So in addition to the usual planning sessions we had at UDS, we’re also looking for presentations from our various community teams on the work they do, walk-throughs for new users learning how to use Ubuntu, as well as instructional sessions to help new distro developers, app developers, and cloud devops get the most out of it as a platform.
What we need from you are sessions. It’s open to anybody, on any topic, any way you want to do it. The only requirement is that you can start and run a Google+ OnAir Hangout, since those are what provide the live video streaming and recording for the event. There are two ways you can propose a session: the first is to register a Blueprint in Launchpad, which is good for planning sessions that will result in work items, the second is to propose a session directly in Summit, which is good for any kind of session. Instructions for how to do both are available on the UDS Website.
There will be Track Leads available to help you get your session on the schedule, and provide some technical support if you have trouble getting your session’s hangout setup. When you propose your session (or create your Blueprint), try to pick the most appropriate track for it, that will help it get approved and scheduled faster.
Ubuntu Development
Many of the development-oriented tracks from UDS have been rolled into the Ubuntu Development track. So anything that would previously have been in Client, Core/Foundations or Cloud and Server will be in this one track now. The track leads come from all parts of Ubuntu development, so whatever you session’s topic there will be a lead there who will be familiar with it.
Track Leads:
Łukasz Zemczak
Leann Ogasawara
Antonio Rosales
Marc Deslauriers
Application Development
Introduced a few cycles back, the Application Development track will continue to have a focus on improving the Ubuntu SDK, tools and documentation we provide for app developers. We also want to introduce sessions focused on teaching app development using the SDK, the various platform services available, as well as taking a deeper dive into specific parts of the Ubuntu UI Toolkit.
Alan Pope
Zsombor Egri
Nekhelesh Ramananthan
Cloud DevOps
This is the counterpart of the Application Development track for those with an interest in the cloud. This track will have a dual focus on planning improvements to the DevOps tools like Juju, as well as bringing DevOps up to speed with how to use them in their own cloud deployments. Learn how to write charms, create bundles, and manage everything in a variety of public and private clouds.
Marco Ceppi
Patricia Gaughen
Jose Antonio Rey
Community
The community track has been a staple of UDS for as long as I can remember, and it’s still here in the Ubuntu Online Summit. However, just like the other tracks, we’re looking beyond just planning ways to improve the community structure and processes. This time we also want to have sessions showing users how they can get involved in the Ubuntu community, what teams are available, and what tools they can use in the process.
Daniel Holbach
Laura Czajkowski
Svetlana Belkin
Pablo Rubianes
Users
This is a new track and one I’m very excited about. We are all users of Ubuntu, and whether we’ve been using it for a month or a decade, there are still things we can all learn about it. The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server. From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level.
Elizabeth Krumbach Joseph
Nicholas Skaggs
Valorie Zimmerman
So once again, it’s time to get those sessions in. Visit this page to learn how, then start thinking of what you want to talk about during those three days. Help the track leads out by finding more people to propose more sessions, and let’s get that schedule filled out. I look forward to seeing you all at our first ever Ubuntu Online Summit.
App Developer Sprint
Tagged: canonical, phone, sdk
I’ve just finished the last day of a week long sprint for Ubuntu application development. There were many people here, designers, SDK developers, QA folks and, which excited me the most, several of the Core Apps developers from our community!
I haven’t been in attendance at many conferences over the past couple of years, and without an in-person UDS I haven’t had a chance to meet up and hang out with anybody outside of my own local community. So this was a very nice treat for me personally to spend the week with such awesome and inspiring contributors.
It wasn’t a vacation though, sprints are lots of work, more work than UDS. All of us were jumping back and forth between high information density discussions on how to implement things, and then diving into some long heads-down work to get as much implemented as we could. It was intense, and now we’re all quite tired, but we all worked together well.
I was particularly pleased to see the community guys jumping right in and thriving in what could have very easily been an overwhelming event. Not only did they all accomplish a lot of work, fix a lot of bugs, and implement some new features, but they also gave invaluable feedback to the developers of the toolkit and tools. They never cease to amaze me with their talent and commitment.
It was a little bitter-sweet though, as this was also the last sprint with Jono at the head of the community team. As most of you know, Jono is leaving Canonical to join the XPrize foundation. It is an exciting opportunity to be sure, but his experience and his insights will be sorely missed by the rest of us. More importantly though he is a friend to so many of us, and while we are sad to see him leave, we wish him all the best and can’t wait to hear about the things he will be doing in the future.
Make Android apps Human with NDR
Tagged: application-development, experiment, opensource, programming, projects, touch
Ever since we started building the Ubuntu SDK, we’ve been trying to find ways of bringing the vast number of Android apps that exist over to Ubuntu. As with any new platform, there’s a chasm between Android apps and native apps that can only be crossed through the effort of porting.
There are simple solutions, of course, like providing an Android runtime on Ubuntu. On other platforms, those have been shown to present Android apps as second-class citizens that can’t benefit from a new platform’s unique features. Worse, they don’t provide a way for apps to gradually become first-class citizens, so the chasm between Android and native still exists, which means the vast majority of apps supported this way will never improve.
There are also complicated solutions, like code conversion, that try to translate Android/Java code into the native platform’s language and toolkit, preserving logic and structure along the way. But doing this right becomes such a monumental task that making a tool to do it is virtually impossible, and the amount of cleanup and checking needed to be done by an actual developer quickly rises to the same level of effort as a manual port would have. This approach also fails to take advantage of differences in the platforms, and will re-create the old way of doing things even when it doesn’t make sense on the new platform.
NDR takes a different approach to these, it doesn’t let you run your Android code on Ubuntu, nor does it try to convert your Android code to native code. Instead NDR will re-create the general framework of your Android app as a native Ubuntu app, converting Activities to Pages, for example, to give you a skeleton project on which you can build your port. It won’t get you over the chasm, but it’ll show you the path to take and give you a head start on it. You will just need to fill it in with the logic code to make it behave like your Android app. NDR won’t provide any of the logic for you, and chances are you’ll want to do it slightly differently than you did in Android anyway, due to the differences between the two platforms.
To test NDR during development, I chose the Telegram app because it was open source, popular, and largely used Android’s layout definitions and components. NDR will be less useful against apps such as games, that use their own UI components and draw directly to a canvas, but it’s pretty good at converting apps that use Android’s components and UI builder.
After only a couple days of hacking I was able to get NDR to generate enough of an Ubuntu SDK application that, with a little bit of manual cleanup, it was recognizably similar to the Android app’s.
This proves, in my opinion, that bootstrapping an Ubuntu port based on Android source code is not only possible, but is a viable way of supporting Android app developers who want to cross that chasm and target their apps for Ubuntu as well. I hope it will open the door for high-quality, native Ubuntu app ports from the Android ecosystem. There is still much more NDR can do to make this easier, and having people with more Android experience than me (that would be none) would certainly make it a more powerful tool, so I’m making it a public, open source project on Launchpad and am inviting anybody who has an interest in this to help me improve it.
My phone is lonely, let’s fix that
Tagged: django
I’ve been using Ubuntu on my only phone for over six months now, and I’ve been loving it. But all this time it’s been missing something, something I couldn’t quite put my finger on. Then, Saturday night, it finally hit me, it’s missing the community.
That’s not to say that the community isn’t involved in building it, all of the core apps have been community developed, as have several parts of our toolkit and even the platform itself. Everything about Ubuntu for phones is open source and open to the community.
But the community wasn’t on my phone. Their work was, but not the people. I have Facebook and Google+ and Twitter, sure, but everybody is on those, and you have to either follow or friend people there to see anything from them. I wanted something that put the community of Ubuntu phone users, on my Ubuntu phone. So, I started to make one.
Community Cast
Community Cast is a very simple, very basic, public message broadcasting service for Ubuntu. It’s not instant messaging, or social networking. It doesn’t do chat rooms or groups. It isn’t secure, at all. It does just one thing, it lets you send a short message to everybody else who uses it. It’s a place to say hello to other users of Ubuntu phone (or tablet). That’s it, that’s all.
As I mentioned at the start, I only realized what I wanted Saturday night, but after spending just a few hours on it, I’ve managed to get a barely functional client and server, which I’m making available now to anybody who wants to help build it.
The server piece is a very small Django app, with a single BroadcastMessage data model, and the Django Rest Framework that allows you to list and post messages via JSON. To keep things simple, it doesn’t do any authentication yet, so it’s certainly not ready for any kind of production use. I would like it to get Ubuntu One authentication information from the client, but I’m still working out how to do that. I threw this very basic server up on our internal testing OpenStack cloud already, but it’s running the built-in http server and an sqlite3 database, so if it slows to a crawl or stops working don’t be surprised. Like I said, it’s not production ready. But if you want to help me get it there, you can get the code with bzr branch lp:~mhall119/+junk/communitycast-server, then just run syncdb and runserver to start it.
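If you’re wondering what that looks like in practice, here is a rough sketch of the kind of model and REST view I’m describing. To be clear, this is an illustration rather than the actual code from the branch, so the field and class names here are my shorthand, not the real definitions.

```python
# Rough sketch of the Community Cast server pieces; the real code is in the
# branch, so treat these field and class names as illustrative only.
from django.db import models
from rest_framework import generics, serializers


class BroadcastMessage(models.Model):
    """One short, public broadcast from one user to everybody else."""
    sender = models.CharField(max_length=100)
    message = models.CharField(max_length=140)
    created = models.DateTimeField(auto_now_add=True)


class BroadcastMessageSerializer(serializers.ModelSerializer):
    class Meta:
        model = BroadcastMessage
        fields = ('id', 'sender', 'message', 'created')


class MessageList(generics.ListCreateAPIView):
    """GET returns recent messages as JSON, POST creates a new one."""
    queryset = BroadcastMessage.objects.order_by('-created')
    serializer_class = BroadcastMessageSerializer
```

Wire a single URL to a view like MessageList and the client has everything it needs: one endpoint to list messages and the same endpoint to post them.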
The client is just as simple and unfinished as the server (I’ve only put a few hours into them both combined, remember?), but it’s enough to use. Again there’s no authentication, so anybody with the client code can post to my server, but I want to use the Ubuntu Online Accounts to authenticate a user via their Ubuntu One account. There’s also no automatic updating, you have to press the refresh button in the toolbar to check for new messages. But it works. You can get the code for it with bzr branch lp:~mhall119/+junk/communitycast-client and it will by default connect to my test instance. If you want to run your own server, you can change the baseUrl property on the MessageListModel to point to your local (or remote) server.
There isn’t much to show, but here’s what it looks like right now. I hope that there’s enough interest from others to get some better designs for the client and help implementing them and filling out the rest of the features on both the client and server.
Not bad for a few hours of work. I have a functional client and server, with the server even deployed to the cloud. Developing for Ubuntu is proving to be extremely fast and easy.
First steps towards a converged email client for Ubuntu
Tagged: coreapps, upstream
Yesterday we made a big step towards developing a native email client for Ubuntu, which uses the Ubuntu UI Toolkit and will converge between phones, tablets and the desktop from the start.
We’re not starting from scratch though, we’re building on top of the incredible work done in the Trojitá project. Trojitá provides a fast, light email client built with Qt, which made it ideal for using with Ubuntu. And yesterday, the first of that work was accepted into upstream, you can now build an Ubuntu Components front end to Trojitá.
None of this would have been possible without the help of Trojitá’s upstream developer Jan Kundrát, who patiently helped me learn the codebase, and also the basics of CMake and Git so that I could make this first contribution. It also wouldn’t have been possible without the existing work by Ken VanDine and Joseph Mills, who both worked on the build configuration and some initial QML code that I used. Thanks also to Dan Chapman for working together with me to get this contribution into shape and accepted upstream.
This is just the start, now comes the hard work of actually building the new UI with the Ubuntu UI Toolkit. Andrea Del Sarto has provided some fantastic UI mockups already which we can use as a start, but there’s still a need for a more detailed visual and UX design. If you want to be part of that work, I’ve documented how to get the code and how to contribute on the EmailClient wiki. You can also join the next IRC meeting at 1400 UTC today in #ubuntu-touch-meeting on Freenode.
Ubuntu App Developer Week starts Today!
Starting at 1400 UTC today, and continuing all week long, we will be hosting a series of online classes covering many aspects of Ubuntu application development. We have experts both from Canonical and our always amazing community who will be discussing the Ubuntu SDK, QML and HTML5 development, as well as the new Click packaging and app store.
You can find the full schedule here: http://summit.ubuntu.com/appdevweek-1403/
We’re using a new format for this year’s app developer week. As you can tell from the link above, we’re using the Summit website. It will work much like the virtual UDS, where each session will have a page containing an embedded YouTube video that will stream the presenter’s hangout, an embedded IRC chat window that will log you into the correct channel, and an Etherpad document where the presenter can post code examples, notes, or any other text.
Use the chatroom like you would an Ubuntu On Air session, start your questions with “QUESTION:” and wait for the presenter to get to it. After the session is over, the recorded video will be available on that page for you to replay later. If you register yourself as attending on the website (requires a Launchpad profile), you can mark yourself as attending those sessions you are interested in, and Summit can then give you a personalized schedule as well as an ical feed you can subscribe to in your calendar.
If you want to use the embedded Etherpad, make sure you’re a member of https://launchpad.net/~ubuntu-etherpad
That’s it! Enjoy the session, ask good questions, help others when you can, and happy hacking.
My App Showdown Wishlist
Tagged: ubblopomo
Today we announced the start of the next Ubuntu App Showdown, and I have very high hopes for the kinds of apps we’ll see this time around. Our SDK has grown by leaps and bounds since the last one, and so much more is possible now. So go get yourself started now: http://developer.ubuntu.com/apps/
Earlier today Jono posted his Top 5 Dream Ubuntu Apps, and they all sound great. I don’t have any specific apps I’d like to see, but I would love to get some multi-player games. Nothing fancy, nothing 3D or FPS. Think more like Draw Something or Words With Friends, something casual, turn-based, that lets me connect with other Ubuntu device users. A clone of one of those would be fun, but let’s try and come up with something original, something unique to Ubuntu.
What do you say, got any good ideas? If you do, post them in the App Showdown subreddit or our Google+ App Developers community and let’s make it happen.
New roof, new developer portal, new scopes, what a week!
It’s been a crazy busy week, and it’s only Tuesday (as of this writing)! Because I’m exhausted, this is going to be a short post listing the things that are new.
New Roof
I wrote earlier that I was having a new roof put on my house. Well that all started unceremoniously at 7:30am on Monday, and the hammering over my head has been going on non-stop for two full working days. Everybody who joined me on a Google+ Hangout has been regaled with the sounds of my torment. It looks nice though, so there’s that.
New Developer Portal
Well, new-ish. We heavily revamped the Apps section to include more walk-through content to help new Ubuntu app developers learn the tools, the process and the platform. If you haven’t been there yet, you really should give it a read and get yourself started: http://developer.ubuntu.com/apps/
New HTML5 APIs
In addition to the developer portal itself, I was able to publish new HTML5 API docs for the 14.04 release of Ubuntu. Not only does this include the UbuntuUI library from the previous release, it also introduces new platform APIs for Content Hub, Online Accounts and Alarms, with more platform APIs coming soon. The Cordova 3.4 API docs are proving harder to parse and upload than I anticipated, but I will hopefully have them published soon. If you’re an HTML5 app developer, you’ll be interested in these: http://developer.ubuntu.com/api/html5/sdk-14.04/
New Scopes
While not exactly a secret, we did start to make some noise about the new Scopes framework and Unity Dash that bring in a lot of improvements. As much as I liked the Home lens searching everything and aggregating results, it just wasn’t reaching the potential we had hoped for it. The new setup will allow scopes to add more information that is specific to their result types, control how those results are displayed, and more clearly brand themselves to let the user know what’s being searched. You can read more about the enhancements at http://developer.ubuntu.com/2014/02/introducing-our-new-scopes-technology/
Like I said, it’s been a crazy busy week. And we’re not done yet!
There is no “Touch”, only “Ubuntu”
Tagged: unity
There’s been a lot of talk about Ubuntu’s phone and tablet development over the last year, and it’s great that it’s getting so much attention, but people have been getting the name of it all wrong. Now, to be fair, this is a problem entirely of our own making, we started off talking about the phone (and later tablet) developments as “Ubuntu Touch”, and put most of the information about it on our wiki under a page named Touch. But there is no Ubuntu Touch! It’s not a separate OS or platform, there is only one OS and it’s simply called Ubuntu.
Ubuntu 14.04 Stack
What people are referring to when they say Touch or Ubuntu Touch, is really just Ubuntu with Unity 8. Other than the shell (and display server that powers it), it’s the same OS as you get on your desktop.
Everything under the hood is the same: same tools, same filesystem, even the same version of them, because it’s all built from the same source. Calendar data is stored in the same place, audio and video is played through the same system, even the Unity APIs are shared between desktop and phone.
So why is the name important? Not only is it more accurate to call them both Ubuntu, it’s also one of the (in my opinion) most exciting things about having an Ubuntu phone. You’re not getting a stripped down embedded Linux OS, or something so customized for phones that it’s useless on your desktop. You’re getting a fully featured, universal operating system, one that can do everything you need from a phone and everything you need from a desktop.
Future Ubuntu Stack
This is the key to Ubuntu’s convergence strategy, something that nobody else has right now. Android makes a terrible desktop OS. So does iOS. Chrome OS won’t work for a phone either, nor OSX. Even Microsoft has built two different platforms for mobile and desktop, even if they’ve slapped the same interface on both.
But with Ubuntu, once Unity 8 comes to the desktop, you will have the same OS, the same platform, on all of your devices. And while you will run the same version of Unity on both, Unity 8 is smart enough to change how it looks and how it works to meet the needs and capabilities of what you’re running it on. Better still, Unity will be able to make these changes at run time, so if you dock your convertible tablet to a keyboard, it will automatically switch from giving you a tablet interface to a desktop interface. All of your running apps keep running, but thanks to the Ubuntu SDK those too will automatically adjust to work as desktop apps.
So while “Ubuntu Touch” may have been a useful distinction in the beginning, it isn’t anymore. Instead, if you need to differentiate between desktop and mobile versions of Ubuntu, you should refer to “Unity 8” if talking about the interface, or “Ubuntu for phones” (or tablet) if you’re talking about device images or hardware enablement. And if you’re a developer and you are talking about the platform APIs or capabilities, you’re talking about the “Ubuntu SDK”, which is already available on both desktop and mobile installs of Ubuntu.
Ubuntu API Website Teardown
Tagged: python
Ubuntu API Website
For much of the past year I’ve been working on the Ubuntu API Website, a Django project for hosting all of the API documentation for the Ubuntu SDK, covering a variety of languages, toolkits and libraries. It’s been a lot of work for just one person, to make it really awesome I’m going to need help from you guys and gals in the community.
To help smooth the onramp to getting started, here is a breakdown of the different components in the site and how they all fit together. You should grab a copy of the branch from Launchpad so you can follow along by running: bzr branch lp:ubuntu-api-website
First off, let’s talk about the framework. The API website uses Django, a very popular Python webapp framework that’s also used by other community-run Ubuntu websites, such as Summit and the LoCo Team Portal, which makes it a good fit. A Django project consists of one or more Django “apps”, which I will cover below. Each app consists of “models”, which use the Django ORM (Object-Relational Mapping) to handle all of the database interactions for us, so we can stick to just Python and not worry about SQL. Apps also have “views”, which are classes or functions that are called when a URL is requested. Finally, Django provides a default templating engine that views can use to produce HTML.
If you’re not familiar with Django already, you should take the online Tutorial. It only takes about an hour to go through it all, and by the end you’ll have learned all of the fundamental things about building a Django site.
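If that all still sounds a little abstract, here is a tiny, self-contained example of a model and a view working together. It is purely illustrative and not code from this project, but it shows the shape of what you’ll find in each app.

```python
# Purely illustrative, not code from the API website itself.
from django.db import models
from django.shortcuts import render


class Article(models.Model):
    """The ORM turns this class into a database table for us, no SQL needed."""
    title = models.CharField(max_length=100)


def article_list(request):
    """A view: called when its URL is requested, renders a template to HTML."""
    articles = Article.objects.all()  # an ORM query instead of hand-written SQL
    return render(request, 'article_list.html', {'articles': articles})
```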
Branch Root
When you first get the branch you’ll see one folder and a handful of files. The folder, developer_network, is the Django project root, inside there is all of the source code for the website. Most of your time is going to be spent in there.
Also in the branch root you’ll find some files that are used for managing the project itself. Most important of these is the README file, which gives step by step instructions for getting it running on your machine. You will want to follow these instructions before you start changing code. Among the instructions is using the requirements.txt file, also in the branch root, to setup a virtualenv environment. Virtualenv lets you create a Python runtime specifically for this project, without it conflicting with your system-wide Python installation.
The other files you can ignore for now, they’re used for packaging and deploying the site, you won’t need them during development.
./developer_network/
As I mentioned above, this folder is the Django project root. It has sub-folders for each of the Django apps used by this project. I will go into more detail on each of these apps below.
This folder also contains three important files for Django: manage.py, urls.py and settings.py
manage.py is used for a number of commands you can give to Django. In the README you’ll have seen it used to call syncdb, migrate and initdb. These create the database tables, apply any table schema changes, and load them with initial data. These commands only need to be run once. It also has you run collectstatic and runserver. The first collects static files (images, css, javascript, etc) from all of the apps and puts them all into a single ./static/ folder in the project root, you’ll need to run that whenever you change one of those files in an app. The second, runserver, runs a local HTTP server for your app, this is very handy during development when you don’t want to be bothered with a full Apache server. You can run this anytime you want to see your site “live”.
settings.py contains all of the Django configuration for the project. There’s too much to go into detail on here, and you’ll rarely need to touch it anyway.
urls.py is the file that maps URLs to an application’s views, it’s basically a list of regular-expressions that try to match the requested URL, and a python function or class to call for that match. If you took the Django project tutorial I recommended above, you should have a pretty good understanding of what it does. If you ever add a new view, you’ll need to add a corresponding line to this file in order for Django to know about it. If you want to know what view handles a given URL, you can just look it up here.
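As an example, a single made-up entry looks something like the snippet below, written in the Django 1.x style the project was built on; the real regular expressions and view names are all in developer_network/urls.py.

```python
# A made-up example urls.py entry; the real patterns and views live in
# developer_network/urls.py.
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    # e.g. /qml/sdk-14.04/ would match, and the named groups are passed
    # to the view as keyword arguments
    url(r'^(?P<topic_name>[\w\.-]+)/(?P<release_version>[\w\.-]+)/$',
        'web.views.version_detail', name='version-detail'),
)
```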
./developer_network/ubuntu_website/
If you followed the README in the branch root, the first thing it has you do is grab another bzr branch and put it in ./developer_network/ubuntu_website. This is a Django app that does nothing more than provide a base template for all of your project’s pages. It’s generic enough to be used by other Django-powered websites, so it’s kept in a separate branch that each one can pull from. It’s rare that you’ll need to make changes in here, but if you do just remember that you need to push your changes to the ubuntu-community-webthemes project on Launchpad.
./developer_network/rest_framework/
This is a 3rd party Django app that provides the RESTful JSON API for the site. You should not make changes to this app, since that would put us out of sync with the upstream code, and would make it difficult to pull in updates from them in the future. All of the code specific to the Ubuntu API Website’s services are in the developer_network/service/ app.
./developer_network/search/
This app isn’t being used yet, but it is intended for giving better search functionality to the site. There are some models here already, but nothing that is being used. So if searching is your thing, this is the app you’ll want to work in.
./developer_network/related/
This is another app that isn’t being used yet, but is intended to allow users to link additional content to the API documentation. This is one of the major goals of the site, and a relatively easy area to get started contributing. There are already models defined for code snippets, Images and links. Snippets and Links should be relatively straightforward to implement. Images will be a little harder, because the site runs on multiple instances in the cloud, and each instance will need access to the image, so we can’t just use the Django default of saving them to local files. This is the best place for you to make an impact on the site.
./developer_network/common/
The common app provides views for logging in and out of the app, as well as views for handling 404 and 500 errors when they arise. It also provides some base models for the site’s page hierarchy. This starts with a Topic at the top, which would be qml or html5 in our site, followed by a Version which lets us host different sets of docs for the different supported releases of Ubuntu. Finally each set of docs is placed within a Section, such as Graphical Interface or Platform Service to help the user browse them based on use.
./developer_network/apidocs/
This app provides models that correspond directly to pieces of documentation that are being imported. Documentation can be imported either as an Element that represents a specific part of the API, such as a class or function, or as a Page that represents long-form text on how to use the Elements themselves. Each one of these may also have a given Namespace attached to it, if the imported language supports it, to further categorize them.
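To tie these last two apps together, here is roughly how the hierarchy fits. This is a simplified sketch rather than a copy of the code, so check common/models.py and apidocs/models.py in the branch for the real field definitions.

```python
# Simplified sketch of the page hierarchy; the real models are in
# developer_network/common/models.py and developer_network/apidocs/models.py.
from django.db import models


class Topic(models.Model):            # e.g. "qml" or "html5"
    name = models.CharField(max_length=64)


class Version(models.Model):          # one set of docs per supported release
    topic = models.ForeignKey(Topic)
    release = models.CharField(max_length=32)      # e.g. "14.04"


class Section(models.Model):          # e.g. "Graphical Interface"
    version = models.ForeignKey(Version)
    name = models.CharField(max_length=64)


class Namespace(models.Model):        # optional grouping, if the language has them
    name = models.CharField(max_length=128)


class Element(models.Model):          # a specific class, function, etc.
    section = models.ForeignKey(Section)
    namespace = models.ForeignKey(Namespace, null=True, blank=True)
    name = models.CharField(max_length=128)


class Page(models.Model):             # long-form docs on using the Elements
    section = models.ForeignKey(Section)
    namespace = models.ForeignKey(Namespace, null=True, blank=True)
    title = models.CharField(max_length=128)
    body = models.TextField()
```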
./developer_network/web/
Finally we get into the app that actually generates the pages. This app has no models, but uses the ones defined in the common and apidocs apps. This app defines all of the views and templates used by the website’s pages, so no matter what you are working on there’s a good chance you’ll need to make changes in here too. The templates defined here use the ones in ubuntu_website as a base, and then add site and page specific markup for each.
If you’re still reading this far down, congratulations! You have all the information you need to dive in and start turning a boring but functional website into a dynamic, collaborative information hub for Ubuntu app developers. But you don’t need to go it alone, I’m on IRC all the time, so come find me (mhall119) in #ubuntu-website or #ubuntu-app-devel on Freenode and let me know where you want to start. If you don’t do IRC, leave a comment below and I’ll respond to it. And of course you can find the project, file bugs (or pick bugs to fix) and get the code all from the Launchpad project.
10 minute Doctor Who app
It may surprise some of you (not really) to learn that in addition to being a software geek, I’m also a sci-fi nerd. One of my current guilty pleasures is the British Sci-Fi hit Doctor Who. I’m not alone in this, I know many of you reading this are fans of the show too. Many of my friends from outside the floss-o-sphere are, and some of them record a weekly podcast on the subject.
Tonight one of them was over at my house for dinner, and I was reminded of Stuart Langridge’s post about making a Bad Voltage app and how he had a GenericPodcastApp component that provided common functionality with a clean separation from the rest of his app. So I decided to see how easy it would be to make a DWO Whocast app with it. Turns out, it was incredibly easy.
Here are the steps I took:
Create a new project in QtCreator
Download Stuart’s GenericPodcastApp.qml into my project’s ./components/ folder
Replace the template’s Page components with GenericPodcastApp
Customize the necessary fields
Add a nice icon and Suru-style gradients for good measure
That’s it! All told it took my less than 10 minutes to put the app together, test it, show it off, and submit my Click package to the store. And the app doesn’t look half bad either. Think about that, 10 minutes to get from an idea to the store. It would have been available to download too if automatic reviews were working in the store (coming soon).
That’s the power of the Ubuntu SDK. What can you do with it in 10 minutes?
Update: Before this was even published this morning the app was reviewed, approved, and available in the store. You can download it now on your Ubuntu phone or tablet.
Winning the 1%
Yesterday, in a conference call with the press followed immediately by a public Town Hall with the community, Canonical announced the first two hardware manufacturers who are going to ship Ubuntu on smartphones!
Now many have speculated on why we think we can succeed where so many giants have failed. It’s a question we see quite a bit, actually. If Microsoft, RIM/Blackberry and HP all failed, what makes us think we can succeed? It’s simple math, really. We’re small. Yeah, that’s it, we’re just small.
Unlike those giants who tried and failed, we don’t need to dominate the market to be successful. Even just 1% of the market would be enough to sustain and continue the development of Ubuntu for phones, and probably help cover the cost of developing it for desktops too. The server side is already paying for itself. Because we’re small and diversified, we don’t need to win big in order to win at all. And 1%, that’s a very reachable target.
My first Debian package uploaded
Today I reached another milestone in my open source journey: I got my first package uploaded into Debian’s archives. I’ve managed to get packages uploaded into Ubuntu before, and I’ve attempted to get one into Debian, but this is the first time I’ve actually gotten a contribution in that would benefit Debian users.
I couldn’t have done with without the the help and mentorship of Paul Tagliamonte, but I was also helped by a number of others in the Debian community, so a big thank you to everybody who answered my questions and walked me through getting setup with things like Alioth and re-learning how to use SVN.
One last bit of fun: I was invited to join the Linux Unplugged podcast today to talk about yesterday's post. You can listen to it (and watch IRC comments scroll by) here: http://www.jupiterbroadcasting.com/51842/neckbeard-entitlement-factor-lup-28/
Razing the Roof
Today was a distracting day for me. My homeowner's insurance is requiring that I get my house re-roofed[1], so I've had contractors coming and going all day to give me estimates. Beyond just the cost, we've been checking on state licensing, insurance, etc. I've been most shocked at the differences in the level of professionalism from them; you can really tell the ones for whom it is a business, and not just a job.
But I still managed to get some work done today. After a call with Francis Ginther about the API website importers, we should soon be getting regular updates to the current API docs as soon as their source branch is updated. I will of course make a big announcement when that happens.
I didn’t have much time to work on my Debian contributions today, though I did join the DPMT (Debian Python Modules Team) so that I could upload my new python-model-mommy package with the DPMT as the Maintainer, rather than trying to maintain this package on my own. Big thanks to Paul Tagliamonte for walking me through all of these steps while I learn.
I’m now into my second week of UbBloPoMo posts, with 8 posts so far. This is the point where the obligation of posting every day starts to overtake the excitement of it, but I’m going to persevere and try to make it to the end of the month. I would love to hear what you readers, especially those coming from Planet Ubuntu, think of this effort.
[1] Re-roofing, for those who don’t know, involves removing and replacing the shingles and water-proofing paper, but leaving the plywood itself. In my case, they’re also going to have to re-nail all of the plywood to the rafters and some other things to bring it up to date with new building codes. Can’t be too safe in hurricane-prone Florida.
Getting into Debian
Quick overview post today, because it’s late and I don’t have anything particular to talk about today.
First of all, the next vUDS was announced today; we're a bit late in starting it off, but we wanted to have another one early enough to still be useful to the Trusty release cycle. Read the linked mailing list post for details about where to find the schedule and how to propose sessions.
I pushed another update to the API website today that does a better job balancing the 2-column view of namespaces and fixes the sub-nav text to match the WordPress side of things. This was the first deployment in a while to go off without a problem, thanks to having a new staging environment created last time. I’m hoping my deployment problems on this are now far behind me.
I took a task during my weekly Core Apps update call to look more into the Terminal app’s problem with enter and backspace keys, so I may be pinging some of you in the coming week about it to get some help. You have been warned.
Finally, I decided a few weeks ago to spread out my after-hours community activity beyond Ubuntu, and I've settled on the Debian new maintainers Django website as somewhere I can easily start. I've got a git repo where I'm starting to write the first unit tests for that website, and as part of that I'm also working on Debian packaging for the Python model-mommy library which we use extensively in Ubuntu's Django website. I'm having to learn (or learn more) Debian packaging, Git workflows and Debian's processes and community, all of which are going to be good for me, and I'm looking forward to the challenge.
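For anyone curious what those tests look like: model_mommy's make() builds a model instance with any required fields automatically filled in, which keeps fixtures out of the way. Here is a minimal example of the style I mean -- the app, model and URL names are invented for illustration, not taken from the real site.

    # Minimal model_mommy example; every name here is hypothetical.
    from django.test import TestCase
    from model_mommy import mommy


    class ApplicantPagesTest(TestCase):
        def test_applicant_list_shows_name(self):
            # mommy.make() fills the required fields with valid values, so the
            # test only spells out the data it actually cares about.
            applicant = mommy.make('example_app.Applicant', name='Jane Developer')
            response = self.client.get('/applicants/')
            self.assertContains(response, applicant.name)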
Encyclopedia of the people fights pranksters
By James Niccolai | 26 September 06
When Cardinal Joseph Ratzinger was elected as pope last year, some internet users who logged onto Wikipedia to see what he looked like found a rather different image: that of the evil emperor from 'Star Wars'.
The prank lasted only a minute or so before someone fixed it, but it highlighted one of the chief problems faced by the online encyclopedia: how to remain open enough that anyone can contribute entries and edit them, while also keeping pranksters at bay.
In the coming days Wikipedia will begin using a new system at the German version of its website that it hopes will fix that problem. If successful, the system is likely to be introduced at Wikipedia sites in other languages.
At the German site, users who have been registered for four days or more will be able to flag a recent entry as being correct and unvandalised, effectively locking it for a period of time. People will be able to update the entry with new material, but it won't be visible as part of the main entry until another trusted contributor has flagged the updates as being correct.
Four days may not seem a long time to become 'trusted', but it should be enough to deter troublemakers who come up with an idea to deface the site on a whim, said Jimmy Wales, Wikipedia's founder, at the IDC European IT Forum in Paris today.
The idea isn't to flag all of the half million or so entries on the German site, just the contentious or topical ones likely to attract attention-seekers, he said. People who visit the entry will be able to see that it's flagged as vandal-free, and have the option to click through and see the recent additions that have not yet been vetted.
He unveiled the German plan in August at the Wikimania conference in Cambridge, Massachusetts, and it's likely to go live any day now.
"I'm just off to look at the software to see if it's ready," Wales said today. The exact details are still being worked out and the flagging system may be updated even after it goes live, depending on what works best, he added.
It's one of several measures that Wikipedia has taken to improve the quality of its content, some of which have drawn criticism for making the encyclopedia less open. At Wikimania, Wales also talked about creating 'stable' or 'static' pages for entries that are considered complete, to help people who want to cite them in published works.
The plan being tested in Germany appears designed to root out mischief, as opposed to inaccuracies that may be harder to detect.
The question of accuracy spurred Wikipedia co-founder Larry Sanger to unveil plans last week for a 'progressive fork' of Wikipedia called Citizendium (a compendium created by citizens), which will enlist so-called experts to resolve disputes and verify that entries are correct. The experts will have to publish their credentials online to verify they are who they claim to be.
Citizendium will begin life as a mirror of Wikipedia, the contents of which can be used by anyone under its GFDL (GNU Free Documentation License). But articles that have been updated at Citizendium will remain that way in the future, according to a description.
Wales noted that the content of Citizendium will also be available under the GFDL, so if the site is successful, Wikipedia will be able to incorporate the changes back into its own site.
Asked about Citizendium in Paris, Wales said that he and Sanger had a 'difference of vision', but he said the two men are 'still friends'.
Some animosity seems to exist, however. One of the entries flagged as 'disputed' at Wikipedia is Sanger's biography. Sanger calls himself a co-founder of Wikipedia, a claim that Wales disputes.
"He used to work for me," Wales said in Paris. "I don't agree with calling him a co-founder, but he likes the title." | 计算机 |
San Francisco Conference Observations: Enterprise Transformation, Enterprise Architecture, SOA and a Splash of Cloud Computing
by The Open Group Blog | February 6, 2012 · 12:30 AM
By Chris Harding, The Open Group
This week I have been at The Open Group conference in San Francisco. The theme was Enterprise Transformation which, in simple terms, means changing how your business works to take advantage of the latest developments in IT.
Evidence of these developments is all around. I took a break and went for coffee and a sandwich, to a little cafe down on Pine and Leavenworth that seemed to be run by and for the Millennium generation. True to type, my server pulled out a cellphone with a device attached through which I swiped my credit card; an app read my screen-scrawled signature and the transaction was complete.
Then dinner. We spoke to the hotel concierge, she tapped a few keys on her terminal and, hey presto, we had a window table at a restaurant on Fisherman’s Wharf. No lengthy phone negotiations with the Maitre d’. We were just connected with the resource that we needed, quickly and efficiently.
The power of ubiquitous technology to transform the enterprise was the theme of the inspirational plenary presentation given by Andy Mulholland, Global CTO at Capgemini. Mobility, the Cloud, and big data are the three powerful technical forces that must be harnessed by the architect to move the business to smarter operation and new markets.
Jeanne Ross of the MIT Sloan School of Management shared her recipe for architecting business success, with examples drawn from several major companies. Indomitable and inimitable, she always challenges her audience to think through the issues. This time we responded with, “Don’t small companies need architecture too?” Of course they do, was the answer, but the architecture of a big corporation is very different from that of a corner cafe.
Corporations don’t come much bigger than Nissan. Celso Guiotoko, Corporate VP and CIO at the Nissan Motor Company, told us how Nissan are using enterprise architecture for business transformation. Highlights included the concept of information capitalization, the rationalization of the application portfolio through SOA and reusable services, and the delivery of technology resource through a private cloud platform.
The set of stimulating plenary presentations on the first day of the conference was completed by Lauren States, VP and CTO Cloud Computing and Growth Initiatives at IBM. Everyone now expects business results from technical change, and there is huge pressure on the people involved to deliver results that meet these expectations. IT enablement is one part of the answer, but it must be matched by business process excellence and values-based culture for real productivity and growth.
My role in The Open Group is to support our work on Cloud Computing and SOA, and these activities took all my attention after the initial plenary. If you had, thought five years ago, that no technical trend could possibly generate more interest and excitement than SOA, Cloud Computing would now be proving you wrong.
But interest in SOA continues, and we had a SOA stream including presentations of forward thinking on how to use SOA to deliver agility, and on SOA governance, as well as presentations describing and explaining the use of key Open Group SOA standards and guides: the Service Integration Maturity Model (OSIMM), the SOA Reference Architecture, and the Guide to using TOGAF for SOA.
We then moved into the Cloud, with a presentation by Mike Walker of Microsoft on why Enterprise Architecture must lead Cloud strategy and planning. The “why” was followed by the “how”: Zapthink’s Jason Bloomberg described Representational State Transfer (REST), which many now see as a key foundational principle for Cloud architecture. But perhaps it is not the only principle; a later presentation suggested a three-tier approach with the client tier, including mobile devices, accessing RESTful information resources through a middle tier of agents that compose resources and carry out transactions (ACT).
In the evening we had a CloudCamp, hosted by The Open Group and conducted as a separate event by the CloudCamp organization. The original CloudCamp concept was of an “unconference” where early adopters of Cloud Computing technologies exchange ideas. Its founder, Dave Nielsen, is now planning to set up a demo center where those adopters can experiment with setting up private clouds. This transition from idea to experiment reflects the changing status of mainstream cloud adoption.
The public conference streams were followed by a meeting of the Open Group Cloud Computing Work Group. This is currently pursuing nine separate projects to develop standards and guidance for architects using cloud computing. The meeting in San Francisco focused on one of these – the Cloud Computing Reference Architecture. It compared submissions from five companies, also taking into account ongoing work at the U.S. National Institute of Standards and Technology (NIST), with the aim of creating a base from which to create an Open Group reference architecture for Cloud Computing. This gave a productive finish to a busy week of information gathering and discussion.
Ralph Hitz of Visana, a health insurance company based in Switzerland, made an interesting comment on our reference architecture discussion. He remarked that we were not seeking to change or evolve the NIST service and deployment models. This may seem boring, but it is true, and it is right. Cloud Computing is now where the automobile was in 1920. We are pretty much agreed that it will have four wheels and be powered by gasoline. The business and economic impact is yet to come.
So now I’m on my way to the airport for the flight home. I checked in online, and my boarding pass is on my cellphone. Big companies, as well as small ones, now routinely use mobile technology, and my airline has a frequent-flyer app. It’s just a shame that they can’t manage a decent cup of coffee.
Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF practitioner.
OpenNebula 2012: A Year of Innovation in Open Source Cloud Computing
A Review of OpenNebula Progress in 2012
By Ignacio M. Llorente
January 2, 2013 07:45 AM EST
Time flies, and we are approaching the end of another successful year at OpenNebula!. We've had a lot to celebrate around here during 2012, including our fifth anniversary. We took that opportunity to look back at how the project has grown in the last five years. We are extremely happy with the organic growth of the project. It's five years old, it's parked in some of the biggest organizations out there, and that all happened without any investment in marketing, just offering the most innovative and flexible open-source solution for data center virtualization and enterprise cloud management. An active and engaged community, along with our focus on solving real user needs in innovative ways and the involvement of the users in a fully vendor-agnostic project, constitute, in our view, the OpenNebula's recipe to success.
As 2012 draws to and end, we'd like to review what this year has meant for the OpenNebula project and give you a peek at what you can expect from us in 2013. You have all the details about the great progress that we have seen for the OpenNebula project in our monthly newsletters.
TechnologyDuring 2012, we have worked very hard to continue delivering the open-source industry standard to build and manage enterprise clouds, providing sysadmins and devops with an enterprise-grade data center virtualization platform that adapts to the underlying processes and models for computing, storage, security, monitoring, and networking. The Project has released 4 updates of the software: 3.2, 3.4, 3.6 and 3.8 within a rapid release cycle aimed at accelerating the transfer of innovation to the market. These new releases have incorporated full support for VMware, a whole slew of new computing, storage, network, user, accounting and security management features in the core, and many improvements to Sunstone, Self-service, oZones, and the AWS and OCCI interfaces. Thanks to this innovation, OpenNebula brings the highest levels of flexibility, stability, scalability and functionality for virtualized data centers and enterprise clouds in the open-source domain.
The roadmap of these releases was completely driven by users needs with features that meet real demands, and not features that resulted from an agreement between IT vendors planning to create their own proprietary cloud solution. Most of the OpenNebula contributors are users of the software, mostly sysadmins and devops, that have developed new innovative features from their production environments. We want to give a big two thumbs up to Research in Motion, Logica, China Mobile, STAKI LPDS, Terradue 2.0, CloudWeavers, Clemson University, Vilnius University, Akamai, Atos, FermiLab, and many other community members for their amazing contributions to OpenNebula. During 2012, we have tried to keep updated the list of people that have contributed to OpenNebula during the last five years. Send us an email if we forgot to include your name on the list.
We also announced the release of the new OpenNebula Marketplace, an online catalog where individuals and organizations can quickly distribute and deploy virtual appliances ready-to-run on OpenNebula clouds. Any user of an OpenNebula cloud can find and deploy virtual appliances in a single click. The OpenNebula marketplace is also of interest to software developer looking to quickly distribute a new appliance, making it available to all OpenNebula deployments worldwide. OpenNebula is fully integrated with the new OpenNebula Marketplace. Any user of an OpenNebula cloud can very easily find and deploy virtual appliances through familiar tools like the Sunstone GUI or the OpenNebula CLI.
Additionally, a set of contextualization packages have been developed to aid in the contextualization of guest images by OpenNebula, smoothing the process of preparing images to be used in an OpenNebula cloud. We have also extended the mechanisms offered to try out OpenNebula. The Project now provides several Sanboxes with OpenNebula 3.8 preinstalled for VirtualBox, KVM, VMware ESX and Amazon EC2, and simple how-to guides for CentOS and VMware, and for CentOS and KVM.
It is also worth emphasizing the aspects that makes OpenNebula the platform of choice for the enterprise cloud: it is a production-ready software, easy to integrate with third party tools, and with unique features for the management of enterprise clouds. In 2012, C12G announced several releases of the OpenNebulaPro distribution: 3.2, 3.4, 3.6 and3.8, and the brand-new OpenNebulaApps suite, a suite of tools for users and administrators of OpenNebula to simplify and optimize cloud application management. OpenNebulaPro provides the rapid innovation of open-source, with the stability and long-term production support of commercial software. C12G also announced new training sessions andjumpstart packages.
2013 will bring important changes in the Release Strategy and Quality Assurance Process of the project that will make OpenNebula even more enterprise-ready and community-friendly. All of the benefits of the OpenNebulaPro distribution, as a more stable and certified distribution of OpenNebula, will be incorporated into OpenNebula and so publicly available for the community.
The Team is now focused on the upcoming 4.0 release that will bring many new features which will come in very handy for the day to day enterprise cloud management, including improvements in SunStone facelift and usability, enhancements in the core with audit trails or new states in the the virtual machine lifecycle, or support for disk snapshots and RBD block devices.
CommunityMany people and organizations have contributed in different ways to the project, from the expertise and dedication of our core committers and hundreds of contributors to the valuable feedback of our thousands of users. Some of our users and contributors have reached us with valuable testimonials, expressing their opinion of OpenNebula and the reasons of their choice over other cloud manager platforms. These testimonials include opinions by industry and research leaders like China Mobile, Dell, IBM, Logica, FermiLab, CERN, European Space Agency and SARA. We are looking forward to hearing from you!.
During 2012, we have seen a truly remarkable growth in the number of organizations and projects using OpenNebula, and many leading companies and research centers were added to our list of featured users: CITEC, LibreIT, Tokyo Institute of Technology, CloudWeavers, IBERGRID, MeghaCloud, NineLab, ISMB , RENKEI, BrainPowered, Dell, Liberologico, Impetus, OnGrid, Payoda, Cerit-CS, BAIDU, RJMetrics, RUR, MIMOS... Send us an email if you would like to see your organization or project on the list of featured users.
An interesting study was published by C12G Labs, resulting from a survey among 820 users with a running OpenNebula cloud. The results stated that 43% of the deployments are in industry and 17% in research centers, KVM at 42% and VMware at 27% are the dominant hypervisors, and Ubuntu at 31% and CentOS at 26% are the most widely used linux distributions for OpenNebula clouds.
"Because it simply works" was the most frequent answer to the question "Why would you recommend OpenNebula to a colleague?" that we made to our users in a short survey that tells us how we are doing. Other frequent answers were "Because it is easy to install, maintain and update" or "Because it is easy to customize". "Rich functionality and stability" and "support for VMware" are also frequently mentioned by the survey respondents.
Several new components have been contributed to the OpenNebula ecosystem: Carina, CLUES, a new version of Hyper-V drivers (result of our collaboration with Microsoft), Green Cloud Scheduler, Onenox, OpenVZ drivers,Contrail's Virtual Execution Platform, one-ovz-driver, and a new OpenNebula driver in Deltacloud. We would like to highlight RIM's contribution of Carina. The Carina project was motivated by the need to speed up the deployment of services onto the OpenNebula private cloud at RIM, it is a successful attempt to standardize the process for automating multi-VM deployments and setting auto-scaling and availability management policies in the cloud. We are looking forward to other upcoming contributions, like the components that China Mobile is developing for its Big Cloud Elastic Computing System. Regarding implementation of standards, new versions of rOCCI have been released to provide OpenNebula with a fully compliant OGF OCCI API.
Thanks also to our community, OpenNebula is now part of the repositories of the main Linux distributions: OpenSUSE, Fedora, Debian, Ubuntu and CentOS. Moreover, there is a new book on OpenNebula and people from many organizations like Puppet Labs, IBM, China Mobile and RIM, or projects like FutureGrid have contributed new guides and experiences to our blog. One of the benefits of having a truly international community is that several users have been able to contribute partial and complete translations of OpenNebula's user-facing interfaces. We started using Transifex to help us manage these translations, we also want to give a big thumbs up to our community for the translation efforts. Sunstone and Self-service are available in 9 different languages, and more are underway, making a total of 17!.
We also want to highlight a very special mention of OpenNebula by Neelie Kroes, VP of the European Commission and Comissioner for Digital Agenda, during a talk about how the EU is supporting Open ICT systems, namely open-source, open-procurement, and open-data.
In the coming year, we will continue our collaboration with other communities and will launch new initiatives to support our wide community of users and developers, and the ecosystem of open-source components and innovative projects being created around OpenNebula.
OutreachOpenNebula presented 20 keynotes, invited talks and tutorials in the main international events in cloud computing including CloudScape, FOSDEM, Open Source Datacenter, LinuxTag, NASA Ames, RootCamp Berlin, Matchmaking in the Cloud, CloudOpen, FrOSCon, Libre Software Meeting, BeLUG, GigaOM Structure:Europe, or LinuxCon Europe. C12G Labs started a series of Webinars focused on different aspects and possible deployments achieved by OpenNebula. Moreover, here's been a lot of coverage in the media of OpenNebula during 2012. We created a page to keep track of the OpenNebula apparitions in the press.
If OpenNebula has become such a successful open source project is thanks to its awesome community of users and contributors. We would like to thank all the people and organizations who have contributed to OpenNebula by being active with the discussions, answering user questions, or providing patches for bugfixes, features and documentation. We appreciate your feedback and welcome your comments on all issues. The team will be monitoring this post for the next weeks or so and will try and answer all the questions we can.
Thanks for continuing to spread the word and stay tuned because we are announcing important changes in our release cycle and processes to make OpenNebula even more enterprise-ready and community-fiendly.
We'd also like to take this opportunity to wish you health, happiness and prosperity in 2013 to you and your loved ones!.
On behalf of the OpenNebula Project.
Dr. Llorente is Director of the OpenNebula Project and CEO & co-founder at C12G Labs. He is an entrepreneur and researcher in the field of cloud and distributed computing, having managed several international projects and initiatives on Cloud Computing, and authored many articles in the leading journals and proceedings books. Dr. Llorente is one of the pioneers and world's leading authorities on Cloud Computing. He has held several appointments as independent expert and consultant for the European Commission and several companies and national governments. He has given many keynotes and invited talks in the main international events in cloud computing, has served on several Groups of Experts on Cloud Computing convened by international organizations, such as the European Commission and the World Economic Forum, and has contributed to several Cloud Computing panels and roadmaps. He founded and co-chaired the Open Grid Forum Working Group on Open Cloud Computing Interface, and has participated in the main European projects in Cloud Computing. Llorente holds a Ph.D in Computer Science (UCM) and an Executive MBA (IE Business School), and is a Full Professor (Catedratico) and the Head of the Distributed Systems Architecture Group at UCM. | 计算机 |
Unoffical google blog which updates Google products and Updates from google Labs, gmail features,Google, you tube, Trends,Google news and Google search Technology.
Send Gmail attachments while offline
One of the most requested features for Offline Gmail has been the ability to include attachments in messages composed while offline. Starting today, attachments work just the way you would expect them to whether you are online or offline (with the exception that when you're offline you won't be able to include inline images). Just add the attachment and send your message.If you have Offline Gmail enabled, you'll notice that all your mail now goes through the outbox, regardless of whether you're online or offline. This allows Gmail to capture all attachments, even if you suddenly get disconnected from network. If you're online, your mail will quickly be sent along to its destination.If you haven't tried offline access yet, follow these instructions to get started: 1. Select Enable next to Offline Gmail. 2. Click Save Changes. 3. After your browser reloads, you'll see a new "Offline" link in the upper righthand corner of the Gmail page, next to your username. Click this link to start the offline set up process and download Gears if you don't already have it.http://gmailblog.blogspot.com/2009/11/send-attachments-while-offline.html
srinivas kalakota
features of gmail,
gmail offline
Google Chrome OS First Look
oday we released Chromium OS, the open source project behind Google Chrome OS. Google Chrome OS is an operating system that is intended for people who spend most of their time on the web. It aims to provide a computing experience that is fast, simple and secure. The Chromium OS project as you'll see it today is comprised of the code that has been developed thus far, our early experiments with the user interface, and detailed design docs for many parts that are under active development.To get a feel for the Google Chrome OS user experience, you can watch the demo from this morning's announcement eventhttp://chrome.blogspot.com/2009/11/announcing-chromium-os-open-source.html
chrome OS,
google chrome unvieled
12 Things to Know About Google's Go Programming Language
Google's new programming language, called Go, took the application development world by storm when the search giant released it Nov. 10. The ambitious technology's pedigree features programming experts from the Unix world, including Ken Thompson, who teamed with Dennis Ritchie to create Unix. Created as a systems programming language to help speed up development of systems inside Google, Go is now viewed as a general-purpose language for Web development, mobile development, addressing parallelism and a lot more.Google's new programming language, called Go, took the application development world by storm when the search giant released it Nov. 10.The ambitious technology comes with a pedigree featuring programming experts from the Unix world, including Ken Thompson, who teamed with Dennis Ritchie to create Unix. Created as a systems programming language to help speed up development of systems inside Google, Go is now viewed as a general-purpose language for Web development, mobile development, addressing parallelism and a lot more.Ironically, Google launched Go just a week before Microsoft's Professional Developers Conference, which typically dominates the software development landscape while it is running. This time there might be a little Go buzz at the event.Go is an experimental language that is still in the process of being tweaked and maturing, but it holds huge potential. The Google Go team blogged about Go, saying, "Go combines the development speed of working in a dynamic language like Python with the performance and safety of a compiled language like C or C++. Typical builds feel instantaneous; even large binaries compile in just a few seconds. And the compiled code runs close to the speed of C."1. Where did the idea for Go come from?Pike, Thompson and Robert Griesemer of Java HotSpot virtual machine and V8 JavaScript engine fame, decided to make a go of developing a new language out of frustration with the pace of building software. Said Pike:"In Google we have very large software systems and we spent so long literally waiting for compilations, even though we have distributed compilation and parallelism in all of these tools to help, it can take a very long time to build a program. Even incremental builds can be slow. And we looked at this and realized many of the reasons for that are just fundamental in working in languages like C and C++, and we needed a different approach. We also decided the tools that everybody used were also slow. So we wanted to start from scratch to write the kind of programs we need to write here at Google in a way that the tools could be really efficient and the build cycles could be very short." 2. Go is a multipurpose languagePike said Go is appropriate for a broad spectrum of uses, including Web programming, mobile programming and systems programming. "We based it on our ideas of what we think systems programming should be like," he said.Then a Google engineer told the team he wanted to do a port to ARM processors for the Go language because he wanted to do some work in robotics. With the ARM support, "We can now run Go code in Android phones, which is a pretty exciting possibility," Pike said. "Of course, ARMs also run inside a lot of the other phones out there, so maybe it's a mobile language."He added, "I think people, once they absorb it a little bit more, will see the advantage of having a modern language in some ways that actually runs really fast. 
And it's an interesting candidate to think of as an alternative for JavaScript in the browser."Although getting Go supported inside browsers is going to be a seriously challenging undertaking ... but it is an interesting thing to think about because it has a lot of the advantages of JavaScript as a lightweight, fun language to play with. But it's enormously more efficient. So some of the big, heavy, client-oriented applications out there like Google Wave would be much zippier if they were written in Go, but of course they can't be written in Go because it doesn't run in a browser yet. But I'd like to see some stuff in that direction, too, although how that's going to happen I don't know."http://www.eweek.com/c/a/Application-Development/12-Things-to-Know-About-Googles-Go-Programming-Language-859839/
Go programming language,
google go computing
Google Chrome OS: A Nice Place to Visit, But?
Google's Chrome operating system could mark a turning point in computing, but many questions remain. Today's rumor is the OS will be released to developers next week, answering some questions but probably raising even more. Google had previously promised Chrome OS, in some form, before the end of this year. Chrome OS strikes me as being just enough Linux to allow an underpowered computer to run Chrome browser and connect to cloud-based applications. How exciting can that really be?On a netbook, Chrome OS may be enough to provide mobile functionality. On a desktop, Chrome OS may turn a PC into a glorified terminal, relying on the Internet for nearly everything the user does.There are many questions about Chrome OS, some of which may be answered when Google releases whatever it decides to make available to make good its promise to release the OS, in some form, before the end of this year.Among those questions:Just how limited will Chrome OS be? What will and won't it do?Will it natively run third-party applications on the hardware where it resides? Or just to connect to applications in the Internet cloud?Will cloud apps need to be written specifically for Chrome?Will Chrome create a standard for the look-and-feel for cloud application?Might Chrome only run applications that Google hosts?Will Chrome require--or even use--a hard drive? Might Chrome OS netbooks have a small silicon drive and nothing else?When Google promises an end to security hassles, such a viruses, malware, or updates, what trade-offs are required?Google has previously said Chrome is intended to be lightweight and get users connected to cloud applications quickly. The company seems to believe that cloud apps will become pervasive and will not require a very powerful machine to run them.Thus, Google is creating a very lightweight browser (Chrome) to run atop what amounts to an embedded operating system (Chrome OS) running on netbooks (to be released next year).I also expect the OS to include Gears, Google's technology for offline access to its cloud-based applications.What will Chrome do beyond that? Maybe nothing. If Google really believes its cloud rhetoric and is really serious that Chrome OS will be virus-free, maybe the new OS won't run applications, just the browser and Gears?Add a robust security mechanism, to make certain the cloud-based applications and Web sites haven't been tampered with, and Chrome could be a more secure operating system than we're used to. If only by keeping the computer from doing anything besides interacting with Web sites and web-based applications.I find that idea strangely attractive, though it will certainly result in devices with limited functionality, just like today's netbooks. However, performance may actually be better since netbooks could be freed from laboring to run Windows and heavy Windows applications.Google Chrome OS introduces a new computing model and may even change how we think about operating systems. Its importance hinges upon how widely and quickly cloud applications take center stage, what trade-offs customers are willing to make, and most importantly, what Chrome OS actually turns out to be.http://www.pcworld.com/businesscenter/article/182152/google_chrome_os_a_nice_place_to_visit_but.html
features of google chrome OS
Will Google's Wave Replace E-Mail—and Facebook?
Google has big plans for Google Wave, its new online communication service—and they won't all come from Google.The Web search giant is hoping that software developers far and wide will create tools that work in conjunction with Wave, making an already multifaceted service even more useful. Google (GOOG) is even likely to let programmers sell their applications through an online bazaar akin to Apple's App Store, the online marketplace for games and other applications designed for the iPhone. "We'll almost certainly build a store," Lars Rasmussen, the Google software engineering manager who directs the 60-person team in Sydney, Australia, that created Wave, told BusinessWeek.com. "So many developers have asked us to build a marketplace—and we might do a revenue-sharing arrangement."Combining instant messaging, e-mail, and real-time collaboration, Wave is an early form of so-called real-time communication designed to make it easier for people to work together or interact socially over the Internet. Google started letting developers tinker with Wave at midyear and then introduced the tool on a trial basis to about 100,000 invited users starting on Sept. 30. Invitations were such a hot commodity that they were being sold on eBay (EBAY). For Google the hope is that Wave, once it's more widely available, will replace competing communications services such as e-mail, instant messaging, and possibly even social networks such as Facebook.http://www.businessweek.com/technology/content/oct2009/tc2009104_703934.htm
Google Wave first Invitees
Google Wave is about to open to new users. Starting today, Google will send 100,000 invites to some of those who were eager to use an early version of the service. Google's blog lists three categories of users that will receive invites: Google Wave Sandbox users, those who signed up and offered to give feedback on Google Wave and some Google Apps users. When you receive an invitation to Google Wave, you'll be able to invite other people so you can use Google Wave together."Google received more than 1 million requests to participate in the preview, said Lars Rasmussen, engineering manager for Google Wave, and while it won't be able to accommodate all those requests on Wednesday it is at least ready to begin the next phase of the project," writes CNet.Like Gmail's early version released in April 2004, Google Wave lacks many basic features: you can't remove someone from a wave, you can't configure permissions or write drafts. The interface is not very polished and some of the options are difficult to find, but it's important to keep in mind that Google Wave is just one of the ways to implement an open protocol. Gmail revolutionized email with an interface inspired by discussion boards: messages are grouped in conversations and it's easy to handle a large amount of messages. Google Wave wants to revolutionize real-time communication by extending a protocol mostly used for instant messaging, XMPP.Combining email, instant messaging and wikis seems like a recipe for confusion, but Google Wave pioneers a new generation of web applications, where everything is instantaneous. As Google explains, each wave is a hosted conversation and users can edit the conversation in real-time.http://googlesystem.blogspot.com/2009/09/new-batch-of-google-wave-invites.html
google wave,
google wave preview
Personalized YouTube Homepage
YouTube tests a new homepage that is customizable and centered on your activities. Instead of displaying the same content for all YouTube users, the new homepage looks different, depending on your preferences and your activities. Here's what's new:* recommended videos, a feature that relies on your previous activity: favorite videos, subscribed channels* latest from your subscriptions: 12 videos from 3 of your subscribed channels* friend activity: a list of videos uploaded, favorited or rated by your YouTube contacts. This information is displayed only if your contacts added it to their public profiles.* inbox: messages, friend invites, received videos.* statistics about your videos (total views, subscribers) and your activity (subscriptions, comments)."The goal with all of this is to gauge people's interest in having a YouTube that's tailored to the individual. Ultimately, we want to get you one step closer to the videos you'll enjoy most every time you come to the site," explains YouTube. http://googlesystem.blogspot.com/2008/03/personalized-youtube-homepage.html
Google you tube,
youtube homepage
Google Offical Blog
Google System
Followme on Twitter | 计算机 |
2014-35/1128/en_head.json.gz/15189 | Home Blog Contact Follow us on Twittah!
Social Media Icons for Joomla! Mark Willis
Mark Willis, Dennysville Party Town Chair and School Board Member, is the National Committeeman for Maine.
Mark serves the Maine Republican Party in Dennysville as Town Chair and as member of the Dennysville School Board. He has a Bachelor degree in International Relations, a Master's degree in Information Systems Management, and a Doctor of Law degree from George Mason School of Law.
A U.S. Army Counterintelligence Agent in Haiti and Bosnia in the 1990s, he followed with 10 years as Senior Software Engineer at US Army Security and Intelligence Command (INSCOM) and the Information Technology Liaison between INSCOM and NSA Personnel Divisions and is employed in the private sector as a Manager of an Application Development Security Team. Mark ran for National Committeeman based upon the following principles:
1.His top priority will be to identify, promote, endorse, and help fund those candidates in the State of Maine who believe in the Constitution of the United States, Freedom and most importantly, Liberty.
2.He will be a direct line between the RNC and the GOP grassroots at the county level by attending at least one county GOP meeting in all 16 counties within 1 year of being elected. 3.He will work with other like-minded members of the RNC to introduce resolutions as necessary. This includes resolutions to abolish the TSA and demand the repeal of section 1021 of the NDAA
Mark served in the US Army on Active Duty from 1993-1998 as a Counterintelligence Agent with duty in Haiti (1995) and Bosnia (1996, 1998). In 1995, Mark conducted over 160 counterintelligence missions in Haiti which gathered critical information significantly contributing to a zero casualty rate of U.S. soldiers from May-November 1995. In 1998, Mark served at General Headquarters, Tuzla, Bosnia-Herzegovina where he managed the counterintelligence activities, reports and ad hoc support to all counterintelligence teams and provided daily briefings to the HQ General Staff on theater related counterintelligence operations. A FARMER AND SMALL BUSINESS OWNER
Mark and wife Violet own Kilby Ridge Farm, one of the last remaining small village farms in Maine. Married 18 years, their children are Declan, 12, and Brynne, 4.
Mark and Violet continue to restore this 200 year old, 20 acre coastal farm and specialize in Icelandic sheep, heritage poultry and heirloom vegetables. Mark and Violet are very proud of the fact that they have not taken one cent of government money to restore their farm. They “pay as they go” in order to retain their independence.
Their motto is "Culinary Excellence from Pasture to Plate”.
Outside of Work and Farming, Mark and his family enjoy fly fishing in the Greater Grand Lake Steam Area as well as hunting and trapping, nature walks, hiking, beach-combing and general outdoor exploring. back to top HomeBlogContacts mainedelegates.com © | 计算机 |
2014-35/1128/en_head.json.gz/16706 | (0) Innovation is Even More Critical During Times of Economic Turmoil – Interview with AMD Graphics Products Group. Page 3
[12/24/2008 11:42 AM | Graphics]by Anton Shilov Even though graphics processing units are very complex these days and take years to develop, just as central processing units, the right GPU at the right time can completely reshape the market in just one or two quarters. This is why it is crucial to make the right business decisions for the next month and right technology decisions for the next two-three years. With these rules in mind, we decided ask ATI, graphics products group of Advanced Micro Devices, a dozen of question regarding current state of the business and future products.
Pages: [ 1 | 2 | 3 | 4 | 5 | 6 | 7 ]
X-bit labs: Do you think that chipsets with integrated graphics processors (and eventually central processing units with built-in graphics cores) will impact the TAM of discrete graphics cards going forward?David Cummings: Integrated graphics has already had a significant impact on discrete graphics and now represents the lion’s share of graphics sales by unit volume. In fact, AMD has a vested interest in integrated graphics as we not only make chipsets with integrated graphics, we are actively and aggressively developing graphics cores that reside on the same piece of silicon as the CPU.AMD's highest-performance core-logic with ATI Radeon HD 3300 integrated graphics processorIf anything, I believe that on the other side of current economic uncertainty, the market for discrete graphics will remain healthy for the foreseeable future. As mentioned earlier, PC gaming is alive and well. Visual computing continues to grow in importance, as demonstrated by everything from user interfaces to new applications like Microsoft Photosynth and Seadragon. Additionally, new applications of GPUs such as transcoding and stream computing are likely to have a positive effect on discrete sales.X-bit labs: What do you think ATI/AMD should do in order to gain market share, revenue and profitability amid the economic crisis?David Cummings: Continue to innovate.X-bit labs: Do you think that the launch of Larrabee GPUs by Intel will fundamentally change the market of discrete GPUs?David Cummings: To date, Intel has been offering controlled glimpses of their Larrabee vision. Based on information they have released, we know Larrabee will be a multi-core x86 architecture and that it will target the personal computer graphics market. This may or may not happen in the 2009/2010 timeframe.First, I’d like to bring up the saying, “When all you have is a hammer, everything looks like a nail.” Intel is taking their existing x86 technology and attempting to apply it to what they see as a market they have yet to tap – personal computer 3D graphics. The challenge they are going to have is in convincing the existing PC ecosystem of software developers, OEMs, system integrators and many other stakeholders to add the additional cost of Larrabee to their existing BOM, or substitute Larrabee in place of existing graphics solutions that have evolved over the last two decades, in response to the needs of this very same ecosystem. The reality is that our discrete graphics products are incredibly powerful, multi-core processors in their own right that have developed in lockstep with the needs of the hardware and software vendors. I like our chances.X-bit labs: Unlike graphics cards, TAM of video game consoles has been showing dramatic growth levels. Do you think that video game consoles also impact the market of graphics cards if not represent a threat to gaming PCs in general?David Cummings: I think the key here is that while PC gaming is showing slower growth relative to video game consoles, it is growing and that is good for the graphics market.As you probably know, AMD designed the graphics powering both the Xbox 360 and the Nintendo Wii. At the same time, we have a team that works very closely with the software development community. From this unique vantage point, we have watched the evolution of today’s video game market. 
What we have witnessed is the addition of thousands if not millions of new video game players, thanks to the Nintendo Wii and the many mobile devices that have provided another vector for delivery of video games. The casual gamer has never been so well served. At the same time, many of the gamers we would classify as mainstream and enthusiast now own both a gaming PC and an Xbox 360. Often, this group will prefer to play one genre of games on the Xbox 360 and another genre on the PC.X-bit labs: Do you think that since the most popular video game console – Nintendo Wii – has very basic graphics capabilities, this will lower demands towards high-quality graphics in PC video games and consequently will impact sales of advanced graphics cards. David Cummings: I think the Nintendo Wii is a brilliant product that took a novel approach to the game play experience, an approach that has been incredibly successful. If anything the Wii shows that immersive game play is about more than just graphics. I think the Wii will ultimately be good for the video game industry, both console and PC, as it has introduced non-gamers to this entertainment medium.X-bit labs: Do you think that emergence of high-definition video standard will help the PC graphics adapter market to grow?David Cummings: Definitely. It already has. There is a growing audience of PC gamers with their [Blu-ray enabled] systems hooked up to 52” HD television sets. As prices continue to drop on LCD televisions, those numbers will explode. Table of contents:
Graphics Adapters Business and Market Today
Graphics Processors Strategy
Graphics Processors Design
Future Graphics Products
Mainstream GPGPU? It Is Already Mainstream... Nearly! | 计算机 |
2014-35/1128/en_head.json.gz/16768 | Red Hat to open RHN Satellite Server
In response to Oracle's recent release of new enterprise Linux management …
Red Hat's director of product management has revealed plans to release the source code of the Red Hat Network Satellite Server, the underlying infrastructure of the company's Red Hat Network (RHN) management technologies. RHN, an enterprise systems management platform that enables administrators to manage and maintain entire networks of Red Hat Linux computers, is at the heart of Red Hat's service-oriented business model. RHN facilitates update automation, system monitoring, broad permission control, configuration management, automated installation, package management and many other features that are critical for managing a large-scale Linux deployment. The availability of the source code for Red Hat's Network Satellite Server could allow third-party Red Hat derivatives and possibly other distributions to build their own systems management platforms that can integrate with RHN and Red Hat's support services. It could also provide a foundation for other distributors that want to provide their own support services networks to compete with RHN. The timing of this move makes it seem like a response to Oracle's recent release of the Oracle Management Pack for Linux, which is based on Oracle's Enterprise Manager 10g. Oracle's Linux Management Pack provides functionality similar to that of RHN and is intended to augment Oracle's Unbreakable Linux support services. Since October, Oracle has been competing directly with Red Hat, offering support to Red Hat customers for a fraction of the price of Red Hat's own support offerings. It is too early to say how the availability of Satellite Server source code will influence competition between Red Hat and Oracle, but it seems likely that at least some commercial Linux distributors will evaluate the possibility of integrating the software into their own support frameworks. Expand full story | 计算机 |
2014-35/1128/en_head.json.gz/16970 | EP/J007617/1
A Population Approach to Ubicomp System Design
Chalmers, Dr M
Calder, Professor M
Girolami, Professor M
School of Computing Science
Programme Grants
Fundamentals of Computing
Human-Computer Interactions
Modelling & simul. of IT sys.
Programme Grant Interviews - 7 September 2011
Adding and removing modules (also called plug-ins, add-ins and other names) without technical support or supervision is becoming the norm for everyday software applications such as web browsers, image editors and email tools, e.g. Firefox, Photoshop and Outlook. This same approach is becoming important in mobile phone software, where 'in-app purchase' of modules is becoming more and more popular, and a huge money-spinner for developers. The consequences are not all good: users often do not know which modules to use or change to suit their goals, or whether a program will crash after such changes. Because modules come from different developers, their combination may never have been tested prior to public use. Evaluators and developers struggle to help, because established approaches to software definition, design and analysis are based on the structure of a program being the same wherever and whenever it is used. In contrast, one would be hard put to define a single software structure that accurately describes what a program like Firefox is. Use is similarly hard to pin down, as individuals make systems fit with their own uses and contexts, and share their innovations with others.
As a modular program becomes complex, the result is often a 'plug-in hell' of broken software dependencies, functions, uses and expectations. If, instead, software structure is kept simple, then design opportunities are lost as developers avoid the difficulty and cost of implementing innovative programs. More generally, software theory and engineering have fallen behind practice. We lack ways to reason predictively in a principled way about real-world structure and use. We lack tools for designers, evaluators and users that support adaptation, and we lack principles and techniques that can deal with the scale of human and software interaction.
Our primary objective is to deliver a new science of software structures, with design, theory and tools that reflect software in real-world use, and that are able to tackle the complex problem of how to design to support change and appropriation. The key concept is the 'software population': a statistical model of the variety we see when we look at how the same initial program has been used and adapted by its users. A population model is kept up to date by each instance of a program logging how it is used and changed. The population idea affords a common design vocabulary for users' understanding and adaptation of programs, for evaluators' analysis of programs in use, and for developers making informed changes to the modules available to users. As a result, users will have programs that may vary but are more comprehensible, robust and adaptable than is the case today. We will enable each individual user to make a practical decision that only he/she is qualified to make: how to balance the changed robustness and functionality of one's system with changes to the system's support for individual and social interactions. One will have tools built into one's program that make clear what it consists of, and how its structure relates to the programs and experiences of other users.
One can find out about other modules that are compatible with one's program, how it will work after adding in one or more new modules, and therefore which configurations of modules will and will not work. In order to test whether the approach works at the scale of, for example, typical iPhone applications, we will build and deploy programs among large numbers of users, for weeks or months: mobile games and social networking applications. We will work with industrial partners including the Edinburgh Festivals, as well as using popular sports and events, such as soccer (e.g. the 2014 FIFA World Cup Brazil), athletics (e.g. the London 2012 Olympics and the Glasgow 2014 Commonwealth Games), and popular TV programmes. Overall, we plan for 500,000-1,000,000 users of our systems in the course of the programme.
Key Findings
http://www.gla.ac.uk
2014-35/1128/en_head.json.gz/17572 | Game Dev Feels Companies Rely Too Much On Copy-Cat Designs, Lack Of Innovation
By William Usher | 2012-02-01 13:50:16
Kixeye's Will Harbin had some eye-opening words for developers out there of any and every size. He believes that too many companies have become reliant on data, feedback and statistics to determine the design outcome of a game, as opposed to good old-fashioned creativity and innovation. Kixeye hasn't exactly entered into mainstream media recognition amongst most core gamers, but they have titles like Battle Pirates and War Commander to their name. In an interview with GameIndustry.biz, Harbin states that...
"...we definitely do not design our games around data. We try to make improvements around data, and actually I had to give this speech to my team yesterday about being a little bit less reliant of A/B tests and data and trusting our intuition and instinct, since it seems to be more effective at moving the needle at a larger scale than doing a lot of micro data analysis."... "But it's very helpful when you come to a decision where you have two or more options, it's good to allow data to influence some decisions. But you can't do game design through data exclusively. It can be an aid, it can be a tool, but if you're not creative, if you're not a thoughtful gamer or you're not super passionate about the space then you're not going to make a good game."
It's true, very true. I was just thinking about why a game like Limbo sticks out in the way that it does...you could easily tell that the designers had a vision and a goal with the game. It was an engaging experience because it was a unique and creatively compelling game that didn't stick to the "norms," so to speak. PlayDead's investors even encouraged the studio to add some kind of multiplayer to make the game more "competitive" with other games on the market...I'm glad they stuck to their guns and kept the game a single-player, thought-provoking title instead. Anyway, Harbin goes on to say that because so many developers and publishers rely on data feedback, they oftentimes end up copying another game either to mimic its success or play it safe. Harbin also bites into social gaming company Zynga, essentially saying that Zynga isn't eating into the core market with its games, and that Playdom, Zynga's rival, isn't doing anything different from Zynga itself. You can check out the entire interview over at GameIndustry.biz.
2014-35/1128/en_head.json.gz/17735 | For information on Income Withholding of Child Support, click this button:
If you run the 1997 to 2003 versions of Excel on your computer,
please read this important announcement:
Georgia's electronic calculators are created in Excel® ("Excel"), a Microsoft Corporation product. The Microsoft Corporation has announced that it will discontinue support of the 2003 version of Excel effective April 8, 2014. Visit http://support.microsoft.com/gp/msl-windows-xp-office for more information.
It has always been the goal of the Georgia Commission on Child Support (the "Commission") to ensure accurate child support worksheet calculations regardless of the Excel version used by the public. Unfortunately, once the Microsoft Corporation discontinues support of the 2003 version of Excel on April 8, 2014, the Commission will no longer be able to maintain compatibility between the 2003 version of Excel and the 2007 through 2013 and future versions of Excel.
If you are using the 2003 version of Excel (or any earlier version) and attempt to download and run the calculator in one of those early versions, there is no guarantee that the calculator will run properly. There will be no support or assistance to help you resolve issues you encounter. This may mean that you will need to purchase new Microsoft Excel software. We regret any inconvenience and any cost you may incur to upgrade your version of Excel. The Commission cannot assist anyone with the cost of new Excel software.
© Administrative Office of the Courts Information Technology
2014-35/1128/en_head.json.gz/17990 | Hello guest register or sign in or with: Mods - Empires: Dawn of the Modern World Game
Empires: Dawn of the Modern World
Stainless Steel Studios, Activision | Released Oct 20, 2003
Empires: Dawn of the Modern World is a history-based real-time strategy computer game developed by Stainless Steel Studios and released on October 21, 2003. Considered an unofficial sequel to Empire Earth, the game requires players to collect resources to build an empire, train military units, and conquer opposing civilizations.
Based on a slightly compressed version of world history, Empires covers five eras, from the Medieval Age to World War II. The game features nine civilizations: England, the Franks, Korea and China are playable from the Medieval Age to the Imperial Age; the United States, Russia, Germany, France and the United Kingdom are playable in the World War I and World War II ages. The game attracted positive critical reaction.
Summary List
A patch is being released for part 1 by October 14, 2010.
Fixing the second map.
Empires Bodies
Released Jul 4, 2009
3.0 version released! News: the 3.0 version fixes some bugs. The 2.0 version also keeps blood on the map forever. This mod keeps all bodies AND BLOOD on the map...
Uncontacted Tribes
Uncontacted Tribes is an Empires: Dawn of the Modern World campaign that focuses on a small town in a remote and unknown location in Russia. The campaign...
Stainless Steel Studios
2014-35/1128/en_head.json.gz/18687 | Douglas D. Nebert
445 National Center, Reston, VA 22092, USA
E-mail: [email protected]
James Fullton
Clearinghouse for Networked Information Discovery and Retrieval (CNIDR)
P.O. Box 12889, RTP, NC 27709
E-mail: [email protected]
Search for information based on its geographic location in traditional library information systems has been relegated to geographic classification codes and, for catalogued maps, a geographic coordinate reference and subject headings. The ability to provide searchable geographic characteristics of both mapped and non-map related holdings can be provided through a set of extended geographic coordinates associated with the document. These coordinates, when understood as part of a searchable attribute set within the American National Standards Institute Z39.50 information retrieval standard, provide a new and consistent way to search and retrieve documents based on their geography. This paper discusses work to date in collaboration with the Project Alexandria team to provide for search and retrieval of documents based on complex geographic `footprints.'
KEYWORDS: spatial information, geographic information, search and retrieval, Z39.50, data discovery
Traditional library catalogs contain references to documents held in the library in conventional formats. Automated card catalogs and related search and retrieval systems have provided the ability to find documents based on field search of data elements such as title, author, and subject as per existing subject heading systems. Classification systems, by their nature, seek to organize information into hierarchies that allow users to search on more general or more specific terms to discover documents held by the library. A document may also have several subject headings associated with it because most information can be categorized or approached by different disciplines in different ways. Representative geographic locations of documents, for the purposes of cataloging and retrieval, are not amenable to cataloguing within a hierarchical classification system, except where documents are referenced to well-defined political boundaries. Whereas one can conveniently subdivide the Earth into a set of (sometimes disputed) nested political subdivisions, this organization may not be necessarily appropriate for information discovery by the earth scientist, oceanographer, or climatologist interested in a user-defined region that has not been formally classified. Geographic location can be described using mathematical constructs defining position with respect to an origin using latitude and longitude as measured in degrees away from the origin. Point locations on the Earth and other bodies can be described reliably with such a common spatial reference system. Moreover, bounding rectangles and complex chains of coordinates can be developed in this coordinate space to circumscribe edges or `footprints' of the coverage of a map, digital data set, or even an environmental publication that references a specific geography. The search of relevant documents with complex geographic footprints has been an operation traditionally restricted to geographic information systems (GIS) software. Its incorporation into the indexing and search capabilities of a digital library is a logical extension to accommodate geographically referenced data of all types.
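To make the footprint idea concrete before turning to specific projects and standards, the following minimal sketch (purely illustrative: the coordinates are invented, and it ignores the polar and date-line cases discussed later in this paper) stores a bounding rectangle with each catalog entry and tests it against a user-defined search region:

from dataclasses import dataclass

@dataclass
class BoundingBox:
    west: float    # longitudes, in decimal degrees
    east: float
    south: float   # latitudes, in decimal degrees
    north: float

    def intersects(self, other):
        # Two latitude/longitude rectangles overlap when they overlap on both axes.
        return (self.west <= other.east and other.west <= self.east and
                self.south <= other.north and other.south <= self.north)

documents = {
    "Geologic Map of the Golden Quadrangle": BoundingBox(-105.25, -105.0, 39.625, 39.75),
    "Alaska climate summary":                BoundingBox(-170.0, -130.0, 51.0, 72.0),
}
search_region = BoundingBox(west=-106.0, east=-104.0, south=39.0, north=40.0)
print([title for title, box in documents.items() if box.intersects(search_region)])
# ['Geologic Map of the Golden Quadrangle']

A complex footprint can be reduced to such a rectangle for a fast first pass, with the full coordinate chain reserved for a more precise second test.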
Organizational Background
There are a number of efforts on the Internet to catalog digital spatial data -- either as maps (pictures) or digital spatial data sets that may be printed or loaded into a desktop mapping or GIS, respectively [1]. Most of these activities communicate through the GeoWeb project supported by the U.S. Bureau of the Census and hosted by the State University of New York at Buffalo. GeoWeb, through on-line World-Wide Web pages linking to its participants and an Internet `list server' that reflects mail messages to all subscribers, provides a forum to discuss and implement geographic information retrieval systems on the Internet. Services such as the Virtual Tourist and an extended interface to the Xerox Map Browser allow Internet users to generate simple maps for immediate display. Other systems such as the U.S. Environmental Protection Agency and National Oceanic and Atmospheric Administration's Geophysical Data Center provide a point-and-click interface for the public to download large volumes of spatially-referenced data through custom interfaces.
As the volume of digital geographic information and the number of information providers on the Internet increase, the ability for a user to discover, evaluate, and download appropriate information is greatly hindered. The hypertext paradigm of the Internet allows every site to organize and connect its holdings to other holdings in a very random way, making browsing for specific data an unpredictable undertaking. Indexes of the entire World Wide Web, such as the Lycos system offered by Carnegie Mellon University provide some means to identify data, but are limited to pure text searching and cannot search for information with spatial or temporal extent in a consistent way. A systematic approach to serving geographic information on the Internet is required.
The Alexandria Digital Library (ADL), a project funded by the National Science Foundation Digital Libraries Initiative, is developing a comprehensive digital library capability for the Map and Image Library at the University of California, Santa Barbara. The project includes applied research on spatial data cataloging, scanning and metadata creation (ingest), data compression and enhancement, and search and on-line service of raster and vector data for local and, eventually, remote data repositories. A primary goal of the ADL project is to provide a means to search and retrieve data on both text and spatial characteristics.
The Federal Geographic Data Committee (FGDC) has been developing a spatial data clearinghouse capacity over the past year. Member agencies are encouraged to develop metadata records, serve these records as searchable documents on the Internet, and link the records to on-line stores of digital spatial data, where available. The ability to search multiple data servers for data sets that are spatially relevant is a key element for success of this distributed clearinghouse concept.
Standards and Protocols
There are a number of protocols relevant to the service of digital spatial information on the Internet. These include markup and cataloging conventions and data service protocols used by libraries and the wider Internet community.
The U.S. Library of Congress, through its MAchine-Readable Catalog (USMARC), implements a storage and classification system that provides for human-readable and machine-searchable characteristics of catalogued documents [2]. Where relevant, library holdings with an explicit geographic reference (e.g. `Geologic Map of the Golden Quadrangle, Jefferson County, Colorado') are catalogued using the USMARC Geographic Subject Subdivision (USMARC 65x subfields a and z, with searchable element 052). Additionally, maps in a catalog will be coded with the bounding latitude and longitude coordinates using the searchable USMARC field 034 subfields e,f,g,h (coordinates) and a human-readable counterpart, field 255 subfield c. The bounding latitude and longitude fields define a bounding rectangle that encloses the area of interest. These coordinate fields provide a limited capability for the description of map information but are not customarily applied to non-map data. In addition these coordinate fields do not provide for the encoding of complex geographic footprints (e.g. river basins, congressional districts, study areas) that describe the true, searchable extent of digital spatial data and related reports. The FGDC, through Executive Order 12906 [3] signed by President Clinton in April 1994, has directed all federal agencies to use the `Content Standards for Digital Geospatial Metadata' (CSDGM) -- a federally-developed standard to establish a formal vocabulary for digital spatial data set descriptions. Among the approximately 300 data elements described in this standard are a set of bounding coordinates that correspond to the USMARC subfields given above, and the ability to encode one or more coordinate chains that describe the true footprint of a digital map document. Although the CSDGM only requires the bounding rectangle of a document to be recorded, it provides for the more complex footprints to be stored and searched for spatial relevance.
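The relationship between the required bounding rectangle and an optional coordinate chain can be sketched as follows; the record fields are invented stand-ins rather than the actual USMARC subfield codes or CSDGM element names:

def bbox_from_chain(chain):
    # Derive the four bounding coordinates from a footprint chain of
    # (latitude, longitude) vertices.
    lats = [lat for lat, lon in chain]
    lons = [lon for lat, lon in chain]
    return {"west": min(lons), "east": max(lons),
            "south": min(lats), "north": max(lats)}

record = {
    "title": "Study-area hydrology report",   # hypothetical entry
    "footprint_chain": [(35.0, -83.0), (36.2, -83.4), (35.4, -83.8), (35.0, -83.0)],
}
record["bounding_coordinates"] = bbox_from_chain(record["footprint_chain"])
print(record["bounding_coordinates"])
# {'west': -83.8, 'east': -83.0, 'south': 35.0, 'north': 36.2}

Where only the rectangle is catalogued, search can still proceed; where the chain is present, it supports the more truthful footprint comparison described below.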
The bounding rectangle used in both USMARC and CSDGM is defined by bounding lines of latitude and longitude which makes it useful for describing many traditional maps, such as topographic quadrangles, that also follow such lines. Aerial photography, satellite imagery, and data sets whose edges are defined by political or other application-defined boundaries are examples of information whose footprints can be approximated through the use of a bounding rectangle but are more truthfully represented by a complex footprint defined by many points.
The World-Wide Web uses the Hyper-Text Markup Language (HTML) as the primary method for document linkage and presentation on the Internet. HTML is a simple, but still unofficial, subset of the Standard Generalized Markup Language (SGML) with adequate functionality to display text documents with in-line graphics. Both text and graphics can be used in HTML as a hyper-text link to another place in the current document or to another document on the Internet. In-line graphics, known as imagemaps, allow the user to click on regions within the bitmap and traverse a link to a specific document. This interface has been demonstrated as a retrieval mechanism for individual maps by a number of organizations, including the U.S. Geological Survey.
The American National Standards Institute Z39.50-1992 standard is being used within the library community for catalog and document search and retrieval [4]. The Z39.50 standard provides for the use of common attribute sets whose use and operations are well-known to both client and server. The latest version of the standard also allows a server to `explain' its searchable attributes and operators to a client to permit an intelligent query of non-common attributes. A geographic data profile (GEO) is being defined by the FGDC to incorporate the data elements of the CSDGM including bounding coordinate and footprint fields and is being implemented in a freely-available Z39.50 server (I-Site) developed by the Clearinghouse for Networked Information Discovery and Retrieval (CNIDR) in Research Triangle Park, North Carolina.
Z39.50-COMPLIANT SOFTWARE DEVELOPMENT
CNIDR was formed in 1992 through a three-year grant from the National Science Foundation to sponsor a development center for wide-area network search and retrieval software. Initially proposed as a maintainer for the public-domain version of the Wide-Area Information Server (WAIS) software, the scope was expanded to focus on the integration of the various Internet access protocols (ftp, Archie, Gopher, World-Wide Web, and WAIS). Commercial and public-domain versions of the WAIS software are based on the 1988 version of the Z39.50 standard. The 1988 version is limited to free-text search of documents, whereas the 1992 version of the standard supports fielded search. CNIDR developed a series of public-domain releases of the WAIS software known under the freeWAIS name, versions 0.1, 0.2, and 0.3.
Figure 1. Configuration of I-Site Z39.50-compliant software developed by the Clearinghouse for Networked Information Discovery and Retrieval.
In 1994 CNIDR released server software that supports the Z39.50-1992 standard and an Application Programming Interface (API) that permits users to integrate the search engine or database of choice with the information server process. This ZDist software is a dramatic departure from the tightly coupled index and search provided through freeWAIS, allowing for extensibility as well. The I-Site package was developed in late 1994 to include the ZDist server, a World-Wide Web (WWW) gateway, the search API, and a text search engine known as ISearch. Together these provide a complete information service that is accessible to Z39.50 clients and WWW clients such as Mosaic and Netscape without requiring a commercial database. Users requiring special search engines or databases can incorporate them, replacing the default search engine through use of the search API. The I-Site software package is described on the Internet at the URL:
http://vinca.cnidr.org/software/Isite/Isite.html
and may be downloaded using anonymous ftp to the following location:
ftp://ftp.cnidr.org/pub/NIDR.tools/Isite/
in which executables for SunOS, Ultrix, Solaris, OSF, and Linux are available. Source code is available from the same directory for other platforms. ISite is written using the GNU version of C++, called g++, which is required to compile the package on platforms not listed above.
The configuration of the I-Site software suite is illustrated in figure 1, including its interaction with a WWW server and multiple client types. The I-Site software includes the Zclient gateway, the Zdist server, and the search Application Programming Interface (API). Data are indexed using default free-text indexing (I-Index), an external data base management system, or other search engine supplied by the user. The configuration allows for multiple search engines -- including one being developed for spatial search -- to be coupled to perform a search. The server is connected to the search and retrieval systems through the search API.
The search API currently supports free-text indexing and search of text documents (Isearch) and a command-line based search protocol (Script) that allows one to define a search script to pass along query terms and perform a retrieval from a database or other organized collection of information. A simple C-based API for direct software integration is available for these basic functions (sapi.c) to enable programmers to make direct connections into databases that have embedded C interfaces.
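The plug-in arrangement can be pictured with a small adapter sketch. This is not the actual sapi.c interface, only a hypothetical illustration of how a server process could route one query through interchangeable search back ends and combine their answers:

class SearchEngine:
    # Hypothetical adapter role: anything that can turn a query string into a
    # set of matching document identifiers can sit behind the server.
    def search(self, query):
        raise NotImplementedError

class FreeTextEngine(SearchEngine):
    def __init__(self, docs):            # docs: {doc_id: text}
        self.docs = docs
    def search(self, query):
        terms = query.lower().split()
        return {doc_id for doc_id, text in self.docs.items()
                if all(term in text.lower() for term in terms)}

class SpatialEngine(SearchEngine):
    def __init__(self, footprints):      # footprints: {doc_id: (west, east, south, north)}
        self.footprints = footprints
    def search(self, query):             # query: "west,east,south,north"
        w, e, s, n = map(float, query.split(","))
        return {doc_id for doc_id, (fw, fe, fs, fn) in self.footprints.items()
                if fw <= e and fe >= w and fs <= n and fn >= s}

def run(engines_and_queries):
    # A server coupled to several engines can intersect their result sets.
    results = [engine.search(query) for engine, query in engines_and_queries]
    return set.intersection(*results) if results else set()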
The Zserver software, the core of Isite, is a Z39.50-1992 service implementation that is designed to accept a request from a Z39.50 client, translate the search request through the search API to one or more local or remote stores of information, and return a list of relevant documents. These documents may be returned in Standard Unstructured Text Records, Generalized Record Syntax -- a way to `wrapper' data objects -- and USMARC records. Client access provided with the I-Site package includes a Zclient query program that can be used in building other interfaces or can be incorporated into a WWW server as a gateway script. Zclient is not an interactive client but can be used by programmers as an example of how the Z39.50 client library can be used. With this gateway installed, forms can be written in Hyper-Text Markup Language (HTML) to customize a WWW query interface. I-Site also supports Z39.50-1992 clients such as Willow, available from the University of Washington, and, through an integrated gateway, clients using forms-capable WWW browsers. An interface has also been provided to accept formatted requests using an electronic mail gateway for users without WWW or Z39.50 clients.
Z39.50-1992 supports a series of implementation profiles; the most commonly used profile is `bib-1' -- a field-level definition for cataloging of bibliographic entries. A profile includes a set of numbered attributes (field-like constructs) that may be queried, along with the operations or characteristics that apply to each attribute. This set of attributes may be registered as part of the standard to ensure that implementors can support well-known set objects in server and client software. Once an entry, or document, reference is located by a query the user may retrieve the document in one of several formats. The structural contents of a document, within a given profile, are defined by a schema and can be used within the server to convert documents from one format to another.
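The practical effect of a shared attribute set is that a client can pose a fielded query using attributes whose meaning the server already knows, rather than free text alone. The sketch below is deliberately schematic: real Z39.50 queries use registered numeric attribute identifiers and a structured query grammar, which are not reproduced here.

query = [
    ("title",  "contains", "geologic map"),
    ("author", "contains", "survey"),
    ("date",   ">=",       1990),
]

def matches(record, clauses):
    ops = {"contains": lambda field, value: value.lower() in str(field).lower(),
           ">=":       lambda field, value: field >= value,
           "<=":       lambda field, value: field <= value}
    return all(ops[relation](record[attribute], value)
               for attribute, relation, value in clauses)

record = {"title": "Geologic Map of the Golden Quadrangle",
          "author": "U.S. Geological Survey", "date": 1994}
print(matches(record, query))   # True

A geographic profile extends the same idea by registering spatial attributes, such as the bounding coordinates, so that servers and clients agree on how a search rectangle is expressed.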
Integration of Geographic Search into Z39.50
A prototype spatial search system was integrated into a version of the public-domain WAIS (version 8-b5.1) software in 1992 by CNIDR to index and retrieve documents based on text and spatial characteristics. The indexing routine was modified to recognize a string construct in text documents being indexed that contained a series of ordered coordinates defining a bounding chain, or footprint, of the document. All other words were indexed as searchable text in the dictionary. A mixed query using words and a spatial term would be processed such that documents were ranked based on the word score first (default behavior in this version of WAIS), the documents were separately flagged based on spatial relevance with respect to the search area. The two document scoring arrays were multiplied together to present a final set of relevant documents to the user -- those documents that had certain words and fell within the search region. A query in the general-purpose text window would have the form:
pipelines or roads inside(35,-83 36.2,-83.4 35.4,-83.8 35,-83)
where the term `inside' was used to declare the string of latitude and longitude coordinates within parentheses that define the search footprint.
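A sketch of how such a query string might be taken apart, separating the free-text terms from the `inside' footprint, is shown below; the parsing details are invented, but the combination rule (word score multiplied by a 0-or-1 spatial flag) is the one described above:

import re

def parse_query(q):
    # Footprint pairs are "lat,lon" separated by spaces, as in the example above.
    m = re.search(r"inside\(([^)]*)\)", q)
    vertices = []
    if m:
        for pair in m.group(1).split():
            lat, lon = map(float, pair.split(","))
            vertices.append((lat, lon))
        q = q[:m.start()] + q[m.end():]
    return q.strip(), vertices

text, footprint = parse_query(
    "pipelines or roads inside(35,-83 36.2,-83.4 35.4,-83.8 35,-83)")
print(text)        # pipelines or roads
print(footprint)   # [(35.0, -83.0), (36.2, -83.4), (35.4, -83.8), (35.0, -83.0)]

def combine(word_scores, spatial_flags):
    # Final relevance: word score times 1 if the document footprint is spatially
    # relevant to the search region, 0 otherwise.
    return [w * s for w, s in zip(word_scores, spatial_flags)]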
To avoid forcing the user to enter latitude and longitude values by hand, a map query tool was added to WAIS clients for the Windows, Macintosh, and X-Windows environments. This interface enabled the user to enter search points or regions graphically against an orthographic map of the world, and the software pasted the coordinate string into the text query window in the above format.
This prototype system worked well for small collections of geographic footprints, particularly those with a convex footprint. The use of a concave search or target footprint would yield unpredictable results because of the polygon overlay algorithm used. Also, the prototype software compared every target footprint with the search region, which worked reasonably well on small collections but took a very long time on large collections. This serial, non-indexed search implementation was not suited to collections of more than hundreds of documents. As a result of the unpredictable spatial search behavior and its limited scalability, these features were not incorporated into the general distribution of freeWAIS.
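One simple way to avoid such a serial scan (an illustration only, not what the prototype or its successors actually implemented) is to hash each footprint's bounding rectangle into a coarse grid, so that only documents sharing a grid cell with the search region need the detailed footprint comparison:

import math

CELL = 1.0                       # grid cell size in degrees
index = {}                       # (lon_cell, lat_cell) -> set of document ids

def cells(west, east, south, north):
    for i in range(math.floor(west / CELL), math.floor(east / CELL) + 1):
        for j in range(math.floor(south / CELL), math.floor(north / CELL) + 1):
            yield (i, j)

def add(doc_id, bbox):
    for cell in cells(*bbox):
        index.setdefault(cell, set()).add(doc_id)

def candidates(search_bbox):
    found = set()
    for cell in cells(*search_bbox):
        found |= index.get(cell, set())
    return found                 # candidates still need the exact footprint test

add("golden_quad", (-105.25, -105.0, 39.625, 39.75))
add("alaska",      (-170.0, -130.0, 51.0, 72.0))
print(candidates((-106.0, -104.0, 39.0, 40.0)))   # {'golden_quad'}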
In 1994 the Informatics Department at the University of Dortmund in Germany released an enhanced version of the freeWAIS product called freeWAIS-sf. This software added the ability to index discoverable portions of text documents as fields for direct query. Field types include text, date, and numeric data and permit queries more like those associated with a database than with free-text search of the entire documents, which is still supported. The FGDC adopted the freeWAIS-sf software for testing within the Clearinghouse and defined four consistently-named fields for the bounding coordinates to be used in Clearinghouse servers (ebndgcoord, nbndgcoord, wbndgcoord, and sbndgcoord for the East, North, West, and South-bounding coordinates, respectively). Through use of these coordinates and an intelligent entry form, users can specify a search rectangle and quickly identify target documents whose rectangles overlap, include, or are included within the search rectangle. This search is conducted using a single text query using a compound expression with `greater-than' and `less-than' constructs to rapidly find the targets against the indexed fields. Because the freeWAIS-sf (and all other versions of freely-available WAIS-derivative software) were not based on the current version of the Z39.50 protocol, an alternate solution was sought. A contemporary solution was required to provide interoperability with other Z39.50 services and to take advantage of new service features.
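Under that scheme, the question "does a document's rectangle overlap the search rectangle?" reduces to four range constraints over the indexed fields. The sketch below uses the Clearinghouse field names quoted above; the exact freeWAIS-sf operator syntax is not reproduced, only the constraints themselves:

def overlap_constraints(west, east, south, north):
    # Select documents whose bounding rectangle overlaps the search rectangle.
    return [
        f"wbndgcoord <= {east}",    # document's west edge is not beyond the search east edge
        f"ebndgcoord >= {west}",    # document's east edge reaches past the search west edge
        f"sbndgcoord <= {north}",   # document's south edge is not above the search north edge
        f"nbndgcoord >= {south}",   # document's north edge reaches above the search south edge
    ]

print(" AND ".join(overlap_constraints(west=-106.0, east=-104.0, south=39.0, north=40.0)))
# wbndgcoord <= -104.0 AND ebndgcoord >= -106.0 AND sbndgcoord <= 40.0 AND nbndgcoord >= 39.0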
Integrating Spatial Search into I-Site
Work being conducted by CNIDR for both the U.S. Geological Survey and National Aeronautics and Space Administration (NASA) indicated a need to extend the I-Site Z39.50-1992 software suite to include a basic spatial search engine. Both organizations have large collections of documents and digital data sets that have a defined geographic extent. As a testbed, a collection of several thousand NASA data set descriptions in Directory Interchange Format were extracted from the NASA Master Data Directory and were indexed for search in I-Site using a subset of the bib-1 registered attributes so they could be accessed by Z39.50 clients commonly used in the library environment. The bounding coordinates described in the DIF files did not have equivalent attributes in bib-1, so elements from the draft GEO profile of Z39.50 were used instead.
The collection was indexed using a parser provided with the I-Site software to recognize the location of the bibliographic and coordinate fields in the target collection and produce a searchable index that can be accessed using Z39.50 clients. A query form was generated in HTML to collect general text and field (attribute) query including the spatial coordinates and selection of a spatial operator to consider in the search as shown in figure 2.
In this example, a user is searching for all data sets using climate as a topical search term within the full document (DIF Full Text) and a bounding rectangle set of search coordinates, as entered under the Spatial Search Parameters. Only those documents whose footprint overlaps the query region will be returned. The first 15 documents selected will be provided to the user in the form of `headlines' or document titles.
Figure 2. HTML form interface to the NASA DIF collection viewed in NCSA X-Mosaic.
Figure 3 illustrates the result set returned to the user from the query. Several global data set references and one Alaskan weather reference were found as a result of this query. Clicking the highlighted `Full' hypertext marker will retrieve the document in its full form. Yet to be implemented are summary records (a subset of all attributes) or variants such as a USMARC representation of the DIF entries.
Thus far, the prototype has demonstrated the use of bounding coordinates similar in approach to that taken in the freeWAIS-sf implementation used in the FGDC Spatial Data Clearinghouse. A library of spatial processing routines has been acquired from the Defense Intelligence Agency for use in the FGDC effort that includes indexing, processing of point-in-polygon and polygon overlap even in mathematically difficult regions of the poles and 180 degrees longitude. CNIDR and the FGDC will be working on integrating this indexing code into the I-Site implementation in the near future to provide robust spatial data search for documents with rectangular, concave, or convex polygon footprints. This software will be available for use by the Clearinghouse and the general public by mid-summer 1995.
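For reference, the textbook ray-casting test below shows what a point-in-polygon check looks like in plain latitude/longitude space; it deliberately ignores the pole and 180-degree-meridian cases, which is precisely why a purpose-built spatial library is needed for robust search:

def point_in_polygon(lat, lon, vertices):
    # Even-odd rule over (latitude, longitude) vertices, treated as planar
    # coordinates. Edges at constant latitude are skipped automatically because
    # the straddle test below is false for them.
    inside = False
    n = len(vertices)
    for i in range(n):
        lat1, lon1 = vertices[i]
        lat2, lon2 = vertices[(i + 1) % n]
        if (lat1 > lat) != (lat2 > lat):
            lon_cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < lon_cross:
                inside = not inside
    return inside

footprint = [(35.0, -83.0), (36.2, -83.4), (35.4, -83.8)]
print(point_in_polygon(35.6, -83.3, footprint))   # True
print(point_in_polygon(35.6, -84.5, footprint))   # False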
Figure 3. The set of documents returned for the climate query for Alaska.
Directions and Limitations
The forms-based interface to the I-Site server allows one to find information using several information fields in a format similar to that used to access a relational database. Interfaces to information collections that include spatially-referenced documents would benefit from having a map-based interface. Research into more complex mechanisms to visualize documents in multiple topical and temporal dimensions is underway [5], but the protocol support within HTML does not support the complexity and versatility needed for more advanced spatial and temporal searches. At present, even the imagemap linkages are restricted to a single click. This precludes a user from defining a complex search region with multiple points such as a polygon, rectangle, or circle -- basic features to a geographic user interface required by ADL and other projects.
Inclusion of a geographic query tool is being considered in two forms by the HTML developer community. Within HTML 3.0 (in draft)[6] is a feature called the `scribble widget' that allows the user to enter many coordinates over an existing bitmap and forward the coordinates to the server to perform an action such as data retrieval. This would allow the unmodified WWW client to access and interact with spatial information in a more sophisticated way. A second method of providing a WWW client with a map interface would be the inclusion of a geographic `helper application' that would display geographic information and allow for the preparation of a geographic query similar to the map query tool built within the prototype spatial WAIS software. Such an application would be launched when a spatial data file is received or a special instruction is given by the client. The scribble widget option places most of the control and query burden on the server, whereas the client-side helper application lets the client do more of the interface work. For information providers it may be more desirable to focus resources on the development of robust servers rather than worry about both client and server software development and support. The development of a very large number of spatial data services on the Internet -- either WWW or Z39.50 servers using a common protocol -- will at some point make gateways and referral services a bottleneck. The Harvest System from the University of Colorado employs the concept of automated information brokers that search the Internet for information resources and summarize the information for more rapid and relevant retrieval without placing the burden on a single computer or index[7]. Use of a system such as Harvest, that is not restricted to a specific information protocol, with well-known spatial and temporal attributes could complement the development of a network of Z39.50 servers with a high degree of interoperability. The success of any attempt to federate digital spatial information will require agreement on the searchable attributes to be posted to the Internet -- a task being undertaken by the FGDC.
Indexing of information of geographic interest by bounding coordinates is not commonly done for non-map data. The use of flexible, freely-available software that uses the Z39.50-1992 search and retrieve standard makes such indexing possible. As more digital spatial information, reports, and reconnaissance data come on-line it is necessary to provide reliable means of accessing it without being restricted to a geographic place names hierarchy.
The prototype spatial search demonstrated in this paper provides examples of how conventional stores of catalog information from a non-library setting can be indexed and presented using known Z39.50 attribute tags including elements that describe spatial characteristics of target data sets. Although only rectangular search has been demonstrated to date, the spatial data are accessible through standard Z39.50 clients and WWW clients. Spatial search capability will be provided with the I-Site software as part of the search engine toolbox.
Nebert, D.D. Trends in Internet Service of Maps and Spatial Data Sets, presented at Association of American Geographers, Chicago, IL, March 1995. Available in electronic form at http://h2o.er.usgs.gov/public/AAG/page1.html
U.S. Library of Congress, Machine Readable Catalog (MARC) system, Volumes 1-6: Washington, D.C.
U.S. Office of the President. Coordinating Geographic Data Acquisition and Access: The National Spatial Data Infrastructure, Executive Order 12906, April 11, 1994. Available in electronic form at ftp://fgdc.er.usgs.gov/pub/general/documents/execord.txt
National Information Standards Organization Information Retrieval Service Protocol for Open Systems Interconnection (ANSI Z39.50-1994): Bethesda, Maryland. 1994. Available in electronic text form at ftp://ftp.loc.gov/pub/z3950/1sthalf.txt and 2ndhalf.txt
Rao, R. et al., Rich Interaction in the Digital Library. Communications of the ACM 38, 4 (April 1995), 29-39.
Internet Engineering Task Force. HyperText Markup Language Specification Version 3.0. Available in electronic form at ftp://ds.internic.net/internet-drafts/draft-ietf-html-specv3-00.txt
Bowman, C.M., et al. The Harvest Information Discovery and Access System. In: Proceedings of the Second International World Wide Web Conference: 763-771, Chicago, Illinois, October 1994. Available in electronic form at ftp://ftp.cs.colorado.edu/pub/cs/techrepts/schwartz/Harvest.Conf.ps.Z
2014-35/1128/en_head.json.gz/19543 | Tom Gillis
I write about directions in cloud, security and enterprise computing.
Follow Tom Gillis on Twitter
Tom Gillis’ RSS Feed
Tom Gillis’ Website
Tom Gillis’ Profile
Contact Tom Gillis
I have been working at tech start-ups and larger companies for more than 15 years. My most recent job was VP/GM of the Security Technology Group at Cisco. Prior to that, I was part of the founding team of IronPort Systems. I have had the good fortune to experience many ups and downs on the tech roller coaster, ranging from a successful multi-billion dollar IPO to financing a seed company in the ashes of the dot com collapse. I have recently started a new company, currently in stealth mode, that is looking at putting enterprise grade infrastructure into the cloud.
The *End* of the Engineer?
Author’s Comment: The response to this post [originally published 07/14/2011] has been frothy and fantastic. I’m thrilled. This is a complex issue, and it warrants discussion. The topic I explored—the future of engineering—represents a very significant shift. Yes—I used strong language to illustrate the point. But the resulting discussion shifted away from my key message. I am not suggesting that engineering doesn’t matter—it does. I grew up with engineers, I’m an engineer myself, and I lead a team of brilliant engineers working with me to change the world every day. My point is that the successful companies of tomorrow must rely on more than technology to create value. To prepare for this shift, the skills—and the outlook—of the engineering leader must also evolve. Tatamimi’s comment articulated the issue well. “We need to broaden the definition of engineering. Technical leadership for tomorrow will need to possess a deep understanding of the customer problem. The skill set required to achieve that understanding is changing.” With that, witness A New Era – the Era of the Customer… I’m an engineer who grew up in a family of engineers. They probably won’t be too happy with the blasphemous statement I am about to make, but it’s the truth: The era of the engineer is over. (Sorry, Dad.) Allow me to defend myself by putting this statement into some historical context. If we look back at the evolution of commerce in this country, we see that it is constantly changing. Zoom out 150 years ago, to when we lived in a largely agrarian society. Landowners ruled. But with the onset of the Industrial Revolution, more value was created by companies that had the ability to efficiently manufacture and distribute goods. Henry Ford and the others who thrived in the industrial era were successful because they found ways to efficiently produce higher-quality products at a lower cost than their competitors. But those competencies would not secure success decades later. Industry evolved so that companies created more value by focusing on a particular part of the value chain—manufacturing, distribution, or inventory management. Ultimately these once “niche” core competencies also evolved into commodities. For example, today manufacturing is something that is largely outsourced.
I believe that we are now experiencing a similar paradigm shift in the technology industry. Three decades ago the core competency that separated good from great was determined by the ability to produce something that was “better, faster, and cheaper” than any alternative. Intel was spectacular at delivering speeds and feeds to the market more consistently over time than any other company. Dell destroyed Compaq, Microsoft destroyed Apple (version 1.0), and Oracle destroyed Sybase. In aggregate, over the past three decades, companies with a strong engineering core competency created the most value.
But just as we’ve seen manufacturing, distribution, and supply chain management mature to the point of commoditization, engineering development is now on the same trajectory. As China and India continue to evolve, their supply of engineering talent is likely to outpace demand, driving down the cost of engineering a product and increasing the availability of this skill. Having the ability to design a product that runs at 3.2 GHz instead of 2.8 GHz, for example, will not be sufficient for lasting value creation. I’m not suggesting that great engineering talent won’t be important to great companies, but we are rapidly reaching a point where multibillion-dollar value creation will not be enabled by a bunch of techies who have a new algorithm or architecture that is better, faster, and cheaper. The great companies of tomorrow will be built around something else: a competency of customer understanding. This understanding includes a vision of solving problems the customer has yet to anticipate. It requires the soft skills that allow a company to gather a million points of often-conflicting data—and turn them into a clearly articulated solution. Exhibit 1 for this argument: Apple. In the better, faster, cheaper era of the engineer, Apple version 1.0 fell to the brink of irrelevance. The name of the game was processor speeds and memory capacity, and the successful companies of that era were consistently outdelivering Apple. Steve Jobs (full disclosure: his picture is on my wall of heroes) decided to compete using a different rule set. He pulled Apple out of the era of the engineer and into the era of marketing. Today the iPhone has set the standard for what a cell phone should be. But by the standards of better, faster cheaper, the iPhone is pretty terrible. It doesn’t have the fastest processor or the most memory or the highest display resolution. Yet it’s the phone I want. Why? Because Apple has developed a core competency of customer understanding. This deep customer understanding needs to be infused into all aspects of the value chain. The way we design, build, distribute, and support our products must align behind the goal of making the customer thrilled. I’ve seen the benefits of this way of thinking firsthand. While I’m proud of the engineering work we did at IronPort, our success did not result from our engineers developing a file system that could outperform alternative file systems by an order of magnitude. Our core competency resulted from developing a deep customer understanding, which led us to build, sell, and support products that solved the customer problem better than the alternatives. One of our competitors delivered features at much higher velocity than we did, which gave that company a short-term advantage in the war of paper-based evaluations, where speeds and feeds represented value. But our focus on delighting our customers won in the long run: IronPort sold with an enterprise value of $830M. Our competitor, the feature fans, had an enterprise value of less than $200M. Value creation correlated with customer understanding—not engineering velocity. In the coming decades, success will be defined by the ability to understand the complex problems that customers face, and the ability to solve these problems elegantly. Technology development is important, as is finance, manufacturing, and distribution. But these areas are not core competencies for the industry leaders. 
The next billion-dollar company will be run by history majors who are skilled in wading through a massive jumble of facts and who have the ability to distill these facts down to a clear set of objectives that a global team can fulfill. Great companies of tomorrow will not be defined by products that are better, faster, and cheaper, but by products that are sexier and smarter. That’s why I’m encouraging my kids to pursue a liberal arts education. I can’t think of another course of study that would prepare them better for the future. I hope calligraphy is a part of the curriculum, too.
josec
The problem with this ridiculously facile article is that the author takes a company which he admittedly loves (Apple), pinpoints what Apple is good at, and then assumes that all other tech companies in America are in the same line of buisness (consumer electronics) and competing with Apple, and therefore, need to have the same core skills which is a complete load of bunk.
Apple v1 was destroyed by processor speed an memory capacity? Garbage. It was made irrelevant bespoke the fact that it had faster processors and a greater memory capacity. Remember MS’s 640K limit? The iPhone 4 may not have the fastest processor or the most memory, we are after all near the end of its refresh cycle, but it’s hard to really compare since the processor is unique and optimised more for instructions per joule, since its a mobile device, and even now it still has best in class battery life. Its also not really relevant to compare memory since the iOS process and software model require less mememory than, say Android which dies very inelegantly as it runs out. The iPhone 4, however, has a 960×640 resolution screen. How long is your list of phones with higher resolution screens?
Bespoke = despite. Spelling checkers!
sscutchen
I think your are correct when it comes to end of engineers as leaders of companies; as the people setting the vision. But it is still the engineers that are the global team that meets the clear set of objectives.
dsgarnett
Excellent insights. Have to admit I’m not exactly certain about the “End of the Engineer” part. But there is absolutely no question that the ascendant leaders in technology are now those who understand consumers and how to dig deep to develop those products which offer consumers the most value. As an ad agency exec, I’ve noted this challenge most often in advertising. So much tech advertising (the Droid ads to point to one example) is actively inhuman – representing the idea that tech products are examples of engineering and not products that add serious value to consumer lives. Even further, I think we see this struggle show itself with those who carry a chip on their shoulders for new technology (like Apple’s) that just works and doesn’t require engineering background for success. But consumer savvy work is going to win that battle every time. Thought provoking article. Thanks.
jones1618
Paradigm shift? No, more like visibility shift. While it is true that shaving megahertz and kilobytes doesn’t give you command of a market anymore, engineering still defines market leaders like Amazon, Google, and Facebook. Sure, their “customer understanding” is key to their success (and most visible) but they couldn’t serve their customer experience without mountains of engineered infrastructure. What about Apple? Are you claiming that they’re just a pure design firm now? No, they may fly the design banner but below decks the Good Ship Lollipop is manned by a notoriously cut-throat band of engineering scalawags bent on shaving components and squeezing price/performance blood out of their suppliers. Yes, component engineering and manufacturing have moved overseas but system and product engineering still dictate what the user experience designers (who also need a fair amount of engineering know-how) can deliver. So, go ahead and encourage your “Liberal Arts” kids but make sure that they learn math, science, programming and basic web technology. Think about it: We live in a world where artists use tech skills every day and librarians are becoming computer scientists at heart. Engineering isn’t dead. It has just been absorbed into the host culture.
Very well, put, matey. I think Jobs would smile when hearing his tech ninjas referred to as a “cut-throat band of engineering scalawags bent on shaving components and squeezing price/performance blood out of their suppliers.” I know I did. Arrrrr…
me1248
Excellent point: engineering/technical skills have gone mainstream. Artists, scientists, … “The Ubiquity of Engineering”. And understanding the customer well has always been in fashion.
jbelkin
What you’re saying is an over-simplication. The more precise point is that Apple IS at the corner of TECHNOLOGY & LIBERAL ARTS (as they use in their presenations) or aka: marketing AND engineering. The problem with relying solely on engineering is either a spec based race or “technology” delivered as requested by the product manager or marketing with no idea of its best use. It’s like putting a 900 horsepower engine in a jetta. it’s a checkoff on the list of to-do’s but pointless … just look at Nokia’s attempt at a touchscreen. What’s the point of a touchscreen that is no-responsive , slow and inaccurate but yet – the engineers said there’s you’re touchscreen and marketing without the guts to say it’s crap just went out to try and sell it – defeating both purposes. Engineers think anyone can just tinker with it and make it better … but that is NOT what the average person wants and Marketers think they can sell anything on gullible consumers but that is no longer the case … with most corporations, there is no Steve Jobs who both marketers & engineers respect the hell out of Steve Jobs, (I’m the guy who invented the consumer PC market, and relaunched mp3 players, digital storefront, ereaders, smartphones and tablets to the masses, built the fastest retailer to a $1 billion dollars and (in my spare time, I funded Pixar) … what were you saying again about why I’m wrong?) The bottom line with 98% of comapnies are bureacrats who are just there to protect their lower mgmt jobs or their turf … see MS or Nokia.
taltamimi
Nice write-up! I’m not sure I would go so far as saying that engineering is dead. Also, I’m not sure someone with a history or literature education will have the interest in the level of detail that’s required to build the kind of things you guys build in the technology industry. I think what’s needed is a broadening of the definition of engineering. Engineers can no longer afford to focus on the traditional technical measures of performance and quality. They need to broaden their horizons to include the end-user view. I think engineering education programs have to include a wider variety of non-technical material (in addition to the usual electives of economics and business). The products of engineering are used by humans, yet the human element is surprisingly underrepresented in engineering curricula. I think an engineering-focused program in sociology and marketing would be very useful for filling this gap. Bohlen, Beal, and Rogers showed us a long time ago that technologies go through a lifecycle. Geoffrey Moore used those ideas to demonstrate that different stages of that cycle require different strategies. Engineering excellence, product innovation, buyer convenience, and price competitiveness… each of these strategies has a time and a place. In The Innovator’s Dilemma, Clayton Christensen showed us that focusing on typical measures of technical performance can be lethal to a company under certain circumstances (what the author calls performance oversupply). I think many engineers are not aware of these key (human) principles of doing business in the technology industry. Yes, Apple is very customer-focused, but it is also a very strong engineering company. Mac OS X, the basis of iOS and a key differentiator of Apple’s products, is a software engineering tour-de-force that has been under development since it was a non-Apple product called NextSTEP back in the 1980s. Consumer understanding drives engineering at Apple; it doesn’t replace it. I enjoyed reading your article. Thanks!
lamuncha
Wonder where he was canned from. Sounds like a bitter old has-been. Not everyone can sweep floors, someone has to design the broom. Maybe he missed Immelt’s comment yesterday about the lack of engineering talent. Maybe he is smarter than Immelt.
justsans85
Could not disagree with you more. Technological advancements happen in cycles. The observations you have made is highly biased to what is happening in the present. The importance of marketing is always there. During times of fast paced technological advancements sometimes marketing becomes less significant. You may not be able to sell a typewriter when your competitor sells a laptop no matter how much marketing you do. But in times of slow technological progress the effect of marketing becomes more apparent. During these times the products offered by the competitors and the person who is more customer centric wins the battle. But humanity advances only through technological progress. It may be a good thing for you to focus on liberal arts right now. But it would be a completely different thing to say that might be good in the times your kids grow up. They might be caught off guard in a new rising tech boom. Perhaps nanobots.
escapefromcalifornia
I’ve been hearing this for a few decades now – the ‘Age of the Customer’ has arrived, and manufacturing/design/engineering are old school disciplines. Its an HBS-centered view of the world – wherein marketing and administration are the keys to the future. And look where that’s taken us… I think you’re missing the underlying economics that are driving your observation. What you are describing is the effects of the maturity cycle that every industry goes through eventually – wherein technical innovation gives way to commoditization gives way to customer insight as being key value drivers of the day. P&G can sell soap at a premium because it knows the channel and the customer today – but it started as an innovative manufacturer. This cycle will repeat as new industries emerge – there’s nothing to fear or run away from – the only question is where the next revolution will happen. Nanotech, fusion/energy, or ??? I agree that our kids should study liberal arts – but science and engineering remain critical elements of their learning experience, too. But even more importantly, they need to learn to work with passion, and to add value to whatever they do. Then they’ll be usefully occupied for life. | 计算机 |
2014-35/1128/en_head.json.gz/19675 | 2/13/200700:00 AMCommentaryCommentaryConnect Directly0 commentsComment NowLogin50%50%
Drupal and the Power of Community
We often are so focused on the future opportunities of collaborative technologies that we may not see the outcome of successful collaborative work right in front of us. The Drupal Open Source content management and community development platform is a powerful Web 2.0 system that is being used to facilitate distributed teams, publish blogs, host communities, and serve thousands of Internet websites. But the Drupal project itself is an example of how a large loosely knit group of people can produce powerful results.
You may not recognized the Drupal name but you have used the product. According to estimates the number of sites using Drupal is in excess of 50,000! These include MTV UK, Sony's MusicBox, Leo Laporte's TWiT site, NowPublic, The Onion, Spread Firefox, Linux Journal, and several political sites such as Vote Hillary, Draft Obama, and Chris Dodd for President. Web 2.0 darlings Flock and SocialText use Drupal on their corporate websites. Drupal.org, the project's home site, is one of the busiest Drupal sites serving over half a million software downloads a month. This includes downloads of Drupal itself (about 80,000 a month) and numerous extensions available for the product.One of the reasons Drupal is so popular is its robust and growing community of developers. Although architecture may not attract system integrators it can be a primary reason they stick around. The core Drupal development team, led by Dries Buytaert, has staunchy insisted on developing a platform that is modular and extensible. Because of this system integrators from around the world have used Drupal as the basis for thousands of websites.As standard functionality Drupal provides support for things like blogging (both individual and group blogs), remote authoring (using tools like Windows Live Writer and Performancing), collaborative books (think wikis, but slightly more structured), RSS syndication and aggregation, and search engine friendly urls, to name just a few. But perhaps the most powerful capabilities come from innovations available in user contributed extensions to Drupal. Here are just some of the incredible things you can do with these extensions:Easily integrate services such as Flickr, Amazon, Facebook, Google Analytics, Google AdSense, and online calendar systems within your website. Support various forms of media such as flash video, podcasts, image and photo galleries.Drupal can be a powerful content aggregator repurposing syndicated Internet content and can even leverage Yahoo's term extraction services. The innovation doesn't stop at the corporate firewall. Enterprises can use Drupal's powerful taxonomy and keyword management capabilities as well as its support for single sign-on systems. Distributed authentication was architected into the platform from nearly the start of the project. The support for single sign-on is very good.The core developers also incorporate significant user contributed innovations into the platform itself enabling the development of the next generation of extensions that continue to outpace offerings from competitive products such as those from Microsoft and IBM. For example, the team is considering adding native support for OpenID in a future major release. But, for now, support for OpenID is available as a user contributed module.Why does the use of Drupal continue to grow? One reason is because a number of companies actively promote Drupal. The most prominent is Lullabot, a web development firm led by Jeff Robbins. Their weekly podcasts are incredibly valuable for anyone working with Drupal. Even developers not involved in Drupal would have found their recent podcast discussing PHP development tools insightful.But there are many others in the Drupal community contributing to its success. There has been a recent explosion of Drupal screencasts explaining how to use, configure, customize, and extend Drupal. 
In addition, groups.drupal.org hosts local interest groups (my favorite is the Grand Rapids Drupal user group, they call themselves GRupal) as well as those interested in contributing to the community but who aren't software developers. For example, there are groups focused on marketing, education, and use within enterprises. A great example is the Drupal Dojo group. They host regularly scheduled web conferences and make screencasts available illustrating Drupal capabilities.And there is interest in Drupal from large companies as well. Most notably, IBM DeveloperWorks published a series of tutorials that were written by engineers in their Internet Technology Group. The tutorials provide an introduction to site theming (customizing the website's appearance), module development (how to extend Drupal), as well as basic information about setting up a development environment.Yahoo uses Drupal for internally managing user interface patterns and is also hosting an Open Source CMS conference in March. This will have a "sub-conference" covering just Drupal.There is probably nothing in Drupal that products from the big vendors can't do and may have implemented somewhere. The difference is companies using Drupal are meeting customer needs faster and cheaper because they are sharing innovations within the community. This is resulting in a growing community that is increasing the pace in which new innovations are brought to market. Companies such as Optaros and SpikeSource, for example, are stepping in to service large corporate customers and deliver solutions quickly and inexpensively.In my opinion, unless the large software vendors figure out a way to leverage a community to this extent their days of competing for Internet marketshare may be numbered. At one time we used to measure the success of Microsoft, Netscape, and Apache by tracking how many websites were hosted by a particular web server (Netcraft conveniently publishes these numbers). But, these statistics are becoming less important since systems like Drupal are increasingly being used and run on any number of webserver platforms.In an era where Google is giving away services for free, the cost of deployment and the time it takes to bring innovations to market is becoming much more important. Many smart system integrators are recognizing the power of an open community like Drupal and are effectively competing in this new environment. Time will tell if the large software vendors can adapt. | 计算机 |
2014-35/1128/en_head.json.gz/20198 | Front Page > Privacy Policy
This privacy policy is intended to protect an individual's privacy and seeks to explain the type of information The Telegraph collects from visitors to its site, what The Telegraph does with that information, and how users may find out more about this profile. This policy may change from time to time so please check it frequently. Type of Information Collected
Technical and routing information about the Customer's computer is collected when he/she visits The Telegraph's site. This facilitates use of the site by the Customer. For example, the Internet Protocol address of the customer's originating Internet Service Provider may be recorded, to ensure the best possible service and use the Customer's IP address to track his/her use of the site. The Telegraph also records search requests and results to ensure the accuracy and efficiency of its search engine. These information may be collected by using cookies. "Cookies" are small date files, typically made up of a string of text and numbers, which assign to the Customer a unique identifier. Cookies may be sent to the Customer's browser and / or stored on The Telegraph servers. The cookies enable The Telegraph to provide the Customer with better access to the site and a more tailored or user friendly service. The Customer may set the browser to not to accept cookies but that would limit the functionality The Telegraph can provide to the Customer while visiting the site. The Telegraph site contains advertisements and/or contents which may have cookies maintained or tracked by the ad server or third parties. The Telegraph does not have control or access to such cookies. The Customer should contact these companies directly if Customer have any questions about their collection or use of information.
Finally, The Telegraph collects aggregate information about the use of its site, such as which pages are most frequently visited, how many visitors The Telegraph receives daily, and how long visitors stay on each page etc.
The information The Telegraph collects about the Customer in the course of its relationship is used to provide the Customer with both general and tailored information about offers, services or other useful information from The Telegraph or others. The Telegraph also may combine information the Customer has provided in communications offline with the information given online (or vice versa). The Telegraph uses demographic and site usage information collected from visitors to improve the usefulness of our site and to prepare aggregate, non-identifying, information used in marketing, site advertising, or similar activities.
As the services or offerings evolve, the types of information The Telegraph collects may change. Please check this policy frequently for the most current explanation of The Telegraph date practices.
With whom the information is shared. The Telegraph does not sell the Customer's email address or other identifying information to third parties. The Telegraph may provide to others the aggregate statistics about activities taking place on its site or related site activity for purposes of marketing or promotion. The Telegraph may disclose information about the Customer to others if The Telegraph has a good faith and belief that it is required to do so by law or legal process, to respond to claims, or to protect its rights, property or safety. Copyright © 2014 The Telegraph. All rights reserved. | 计算机 |
2014-35/1129/en_head.json.gz/456 | Home > Risk Management
OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods | 计算机 |
2014-35/1129/en_head.json.gz/895 | The Advanced Computer Architecture Laboratory (ACAL) was established in 1985 as a research unit of the Electrical Engineering and Computer Science Department at the University of Michigan. With a research budget exceeding $4 million per year, the Lab serves as the focal point for interdisciplinary research into the theory, design, programming, and applications of advanced computer systems. Over the past two decades, ACAL researchers have made pioneering contributions to the design of high-performance computer systems such as, for example, Intel's Pentium chip and Compaq's Alpha chip, spearheading technical advances in pipelining, branch prediction, parallel processing, timing analysis and optimization, and automatic test generation. ACAL members and alumni have been and continue to be instrumental in the exploration and development of key technologies for high-performance microprocessors and embedded microcontrollers. | 计算机 |
2014-35/1129/en_head.json.gz/3210 | PLEX86 x86- Virtual Machine (VM) Program
Plex86
| CVS
| Successes
| In the Media
IBM Eyes 50,000Plus Indian Employees
And they're doing high-level work. Look at last week's decision to consolidate SOA work in Bangalore By Paul McDougall InformationWeek India will overtake China in 7 years 2965IQ and the Wealth of Nations...was not peer-reviewed....its viewed as highly critical,and its been dubbed... Mar 13, 2006 12:03 AM Meet the new face of IBM software. Siddharth Purohit lives in Bangalore, India, and is an expert at developing the kind of reusable code on which the company is staking much of its future. As such, Purohit represents two of IBM's biggest bets--Indian talent and software built around service-oriented architectures. IBM is on a hiring binge in India. The company employs about 39,000 people in the country, up 70% from 23,000 a year ago. That rate of growth should continue "for quite some time," says Amitabh Ray, who heads IBM's global delivery operations in India. At that clip, IBM will have at least 55,000 workers in India by next year. And the figure could easily pbutt 60,000--or 20% of its current worldwide workforce of 300,000. Jeby Cherian, part of a new world order at IBM Make no mistake: This isn't the kind of routine, brute-force coding for which India is known. IBM last week revealed it would spend $200 million a year on a Bangalore development center to centralize work on one of its most strategic efforts--building SOA-based software systems that consultants can resell to customers in various industries. "We're moving all of that development to India," says Jeby Cherian, head of IBM's new Global Solutions Delivery Center in Bangalore. Previously, IBM did this work in a number of development centers worldwide. Along with churning out software components, workers at the Bangalore center will design new ways in which businesses can combine those components with other technologies to solve some of their thorniest, and costliest, problems: straight-through processing for banks, for example, and inventory optimization systems. That's hardly commodity work. India's growing role behind Hollywood scenesBy Anand Giridharadas International Herald Tribune MONDAY, MARCH 13, 2006 MUMBAI, India After thanking the Academy and their mothers, Oscar winners of the future may... The SOA Bet IBM needs growth. Its software sales were flat last quarter, and its global services business was down 5%. Software based on SOA is one of its big growth bets. It plans to invest $1 billion this year around SOAs, which let companies reorganize IT infrastructures around processes. Software to "check shipping status" exists as a reusable component, one of many that can be mixed and matched to create, say, an online inventory Debt Management system. SOAs are all the rage because they're easier to maintain and update, and because they offer a way to Web-enable processes with less custom programming. Most companies using SOAs spend less than $1 million annually on the technology, but 60% of them will increase spending by an average of 17% this year, AMR Research predicts. Enter Purohit. IBM's India gamble is that it can find enough people like him to make its strategy work. Purohit, 40, obtained a master's degree in computer science from the New York Insbreastute of Technology in the late 1980s, then spent 17 years in the United States on technology and consulting gigs. It wasn't a tough call when IBM offered him a position at the new center. "This is the vision and situation I've been waiting for," says the married father of two girls, ages 2 and 6. 
Purohit is the chief architect on a number of key projects at the Bangalore center, including one to build an SOA-based system that will let shipping companies monitor the contents of containers throughout cross-ocean journeys. It's a critical capability for ensuring the integrity of goods that are temperature sensitive or could pose security concerns. The first customer is shipping company Maersk Logistics. The system features wireless container-level tracking devices developed by IBM researchers in Zurich, Switzerland. It's a sensor network that transmits data from the devices to databases that can be accessed by numerous parties, such as shipping managers, customers, and port authorities, using a variety of front-end applications. Purohit's challenge was to identify and buttemble the technologies required for such a system, develop software components where needed, and buttemble everything into a working whole. "This group here has the charter to bring all of IBM's technologies and services together on behalf of the customer," Purohit says. "We're creating business solutions and buttets that can be reused. This is breaking new ground." Another example: Teams at the Bangalore center are designing a system that uses telemetry devices, embedded processors, and mathematical algorithms to help automakers predict and manage costs from warranty claims. While IBM's hiring numbers are huge, its rivals have similar ambitions. Infosys, India's second-largest IT outsourcer, added more than 3,200 employees in its most recent quarter. India's tech and business-process outsourcing industry will employ 1 million more people in 2010 than it does today, as it grows from $22 billion in revenue to $60 billion, predicts India's National buttociation of Software and Service Companies and consulting firm McKinsey. Consolidation In India IBM says its existing delivery centers outside India won't be closed but will be "remapped" into demo centers. "They will become more customer facing," Cherian says. Still, IBM's decision to put in India virtually all of the design and development of the bundled solutions its consultants offer won't comfort U.S. workers who hoped such high-end work wouldn't go abroad, at least not this quickly. The company employs about 150,000 workers in the United States but has quietly eliminated a number of domestic positions in recent months. It has lowered its costs in global services, improving gross margins about 3 percentage points last quarter, to 27.4%, compared with a year ago. The systems created in Bangalore will be marketed and sold to customers through IBM's Business Consulting Unit, which posted a 6% decrease in revenue in its most recent quarter. With the bulk of the unit's offerings to be designed in India, IBM will need to find a lot more people with the skills and experience of a Purohit. That won't be easy in India's tightening labor market (see story, India Calls Its Talent Home). Expect the hiring pace to continue, Ray says. Ray predicts IBM and its customers will get two main advantages from the move to India: Costs will be lower, and greater centralization will speed design and innovation. "In the previous model, these solutions were splintered across a number of development centers," he says. "We can get cross-visibility--something that's applicable in retail might be applicable in automotive." 
IBM's challenge will be to make sure this centralized development group is close enough to customers' real-world business problems and to the consultants and researchers around the world working on them. A pilot project in Washington and Oregon shows where IBM can excel. Called GridWise, it links 200 homes to see if it's feasible to make power consumption more price sensitive. The plan is to use the Internet to connect home thermostats to a real-time feed of energy prices, letting homeowners automatically lower the temperature in response to spikes. GridWise runs over an Internet-based messaging system that IBM developed. It works because IBM energy experts understood the problem and what information is vital to maintaining a stable power supply. "They've been able to bring a technology I presume they developed elsewhere and apply it here," says David Chbuttin, staff scientist at the Pacific Northwest National Laboratory, which runs the GridWise project, set to go live next month. Chbuttin says he's not concerned or surprised that IBM is sending this kind of development work to India. The energy industry wants the lowest cost for the right product. Plus, global development is the reality, he laments, given the dearth of American computer science and engineering graduates. "We're dependent on foreign intellectual engineering capacity already," Chbuttin says. In Need Of Growth IBM needs big ideas that drive software and consulting sales, not just cost cutting from a lower-wage operation. But the SOA market is a risky bet because companies, while keen on the concept, aren't spending big money on it. They're doing one-off component-software projects, but very few are creating an entire architecture based on it that requires major investment. If they ever do embrace that "holistic view," says David Grossman, an analyst at Thomas Weisel Partners, IBM's broad portfolio of technologies and services give it an advantage over more specialized vendors like BEA Systems and Systinet. India will overtake China in 7 years 2966Believe what you will, ultimately I put more faith in Statistics than in Theory, since it will reflect the truth A single event, as important as it may be is not a... IBM this week will unveil an online library to help companies track its components and services. Called SOA Business Central, it also will stock offerings from IBM software partners such as Actuate, a financial services specialist. The library is one small example of how IBM CEO Sam Palmisano "is making a double-down bet on SOA," says Sandy Carter, IBM's VP for SOA strategy. IBM's system integration arm handled 1,800 SOA engagements last year, and Carter expects a big increase this year. India will overtake China in 7 years 2963India will continue to be marred by political instability in the future , It's population does not have the savvy to elect a honest government. The few Intelligent Indians out their... Behind IBM's SOA push will be Indian managers and technologists like Purohit. Finding such experienced talent in India will be tough, but there's no doubting the ambition behind this latest expansion. Purohit says he and his colleagues at the center are expected to deliver "thought leadership." That and a heap of revenue growth is just what IBM needs to show results from its Indian hiring spree. Amazon.com Widgets
List | Previous
India will overtake China in 7 years 2963 Alt Computer Consultants from Newsgroups/p>
Globalization still has problems | 计算机 |
2014-35/1129/en_head.json.gz/3780 | Oracle® Fusion Middleware Command Reference for Oracle WebLogic Server
1 Introduction and Roadmap
This section describes the contents and organization of this guide—Command Reference for Oracle WebLogic Server.
Document Scope and Audience
Guide to This Document
New and Changed Features in This Release
This document describes Oracle WebLogic Server command-line reference features and Java utilities and how to use them to administer Oracle WebLogic Server.
This document is written for system administrators and application developers deploying e-commerce applications using the Java Platform, Enterprise Edition (Java EE). It is assumed that readers are familiar with Web technologies and the operating system and platform where Oracle WebLogic Server is installed.
The document is organized as follows:
This chapter, Chapter 1, "Introduction and Roadmap," describes the scope of this guide and lists related documentation.
Chapter 2, "Using the Oracle WebLogic Server Java Utilities," describes various Java utilities you can use to manage and troubleshoot an Oracle WebLogic Server domain.
Chapter 3, "weblogic.Server Command-Line Reference," describes how to start Oracle WebLogic Server instances from a command shell or from a script.
Chapter 4, "WebLogic SNMP Agent Command-Line Reference (Deprecated)," describes using Simple Network Management Protocol (SNMP) to communicate with enterprise-wide management systems.
"Using Ant Tasks to Configure and Use a WebLogic Server Domain" in Developing Applications with Oracle WebLogic Server.
Oracle WebLogic Scripting Tool
Configuring Server Environments for Oracle WebLogic Server
Oracle WebLogic Server Administration Console Help
For a comprehensive listing of the new WebLogic Server features introduced in this release, see What's New in Oracle WebLogic Server. | 计算机 |
2014-35/1129/en_head.json.gz/5205 | Photo Software
Getting To Know Photokit Sharpener
The PhotoKit Sharpener is an all-in-one sharpening tool for Adobe Photoshop. Here is a brief introduction to the PhotoKit Sharpener for Adobe Photoshop.
About PhotoKit Sharpener
There are a lot of sharpening tools for Adobe Photoshop, but the PhotoKit Sharpener is different in the sense that it provides a complete sharpening workflow in itself (from capturing an image to its output). The PhotoKit Sharpener is available for both the Windows and the Mac platforms, and it's compatible with the CS, CS2, CS3 and CS4 version of Adobe Photoshop.
The PhotoKit Sharpener has put a lot of creative control into the hands of the end user by catering to individual images according to the tastes of the end user. The tool is used as a plug-in to Adobe Photoshop and its intuitive interface blends in with the parent application. The learning curve for this tool is almost flat. It is available for a trial as a demo download and is available for purchase for around $100.
Digicam Dictionary How-To Articles archives | 计算机 |
2014-35/1129/en_head.json.gz/6572 | Contact Haas
Haas Home
Fall 2008 CalBusiness
Power of Ideas
About CalBusiness
CalBusiness
Top Acrobat at Adobe
CEO Shantanu Narayen, MBA 93, focuses on the next generation of software and leadership.
By Hubert Huang
Adobe Systems has come a long way since Shantanu Narayen, MBA 93, first started working at the San Jose software company a decade ago. Although Narayen has been CEO for less than a year, he has been instrumental in building Adobe into a 7,000-employee tech giant whose software sits on more than 700 million computers and devices worldwide. Now Narayen is keenly focused on steering Adobe through an increasingly Internet-focused software landscape — and developing leaders within the company to tackle that challenge.
For executing a vision that spurs financial prosperity, fosters employee development, and fulfills its responsibility to the community, Narayen has been named the Haas School's Business Leader of the Year. Each year, Haas honors a member of its community who exemplifies the type of business and thought leader the school is committed to creating. The school will present the award to Narayen, a member of the Haas School's advisory board, at its annual gala Nov. 7 at the Ritz-Carlton in San Francisco.
"Narayen's success at Adobe is an inspiration to all of us at Haas," says Dean Rich Lyons. "We are so fortunate to have such an innovative, forward-looking leader as an exemplar for our community." Narayen joined Adobe in 1998 as vice president of engineering after working at Apple and Silicon Graphics. He became executive vice president of worldwide products in 2001 and was promoted in 2005 to president and COO, which placed all product research and development, day-to-day global operations, marketing, and corporate development under his purview. He became CEO in December 2007. "Adobe's success over the last eight or nine years is largely because of Shantanu," says Bruce Chizen, who preceded him as CEO. "His ability to learn and understand the complexity of sales, nuances of marketing, and legal and financial issues of running a company is unlike that of any individual I've ever worked with."
But Narayen won't take full credit for that success. Rather, he's a strong believer in giving individuals who show initiative additional responsibility and room to grow as leaders. "To create new businesses and drive growth, you need to have a leader who wakes up wanting to make an impact," Narayen says. Under Narayen's guidance, Adobe has enjoyed 20 percent annual growth since 2002, with sales reaching a record $3.2 billion in 2007. And the third release of Adobe's Creative Suite — an integrated collection of desktop applications such as Photoshop, Illustrator, and InDesign — outsold its precursor by 40 percent. Narayen also co-managed with Chizen the $3.4 billion acquisition of then-competitor Macromedia in 2005. Some pundits questioned the merger, but Narayen saw how well Macromedia's product lines complemented Adobe's. "We had the video authoring tools, they had the video playback. We were great in imaging and illustration; they were great in animation," Narayen says. "It was actually quite obvious."
Macromedia's Dreamweaver, Fireworks, and Flash are all key components in Creative Suite, which now boasts 43 percent market share among the country's 6 million creative professionals. "What's most gratifying is we've brought to market something neither company could have created as successfully standing alone," Narayen adds.
Not Flashy Narayen doesn't fit the stereotype of the bold, acquisitive CEO. Unlike many executives who answer questions like a smooth-talking politician, Narayen responds in an unassuming tone. And consistent with his relaxed demeanor, he works out of a modestly sized, non-corner office, in contrast to the typical workspace of other Silicon Valley CEOs.
Narayen was born in Bombay but spent most of his childhood in Hyderabad, India. His mother taught American literature; his father ran a plastics company. He graduated with a bachelor of engineering from Osmania University. He then moved to the United States, where he earned an MS in computer science from Bowling Green State University.
After earning his master's degree, Narayen moved to California to work in tech. Looking to develop management and leadership skills to handle greater responsibility, Narayen enrolled in the Haas School's Evening & Weekend MBA Program while working for Apple. Applying principles learned in the classroom directly to the workplace, he gained a real-world understanding of how businesses function. Even the commute to Haas itself proved invaluable.
"Back then the program was in San Francisco, so a bunch of us would drive together," Narayen recalls. "I can't tell you how many great conversations we had carpooling back and forth."
After getting his MBA, Narayen worked at Silicon Graphics and went on to co-found Pictra, a company that led the way in digital photo-sharing on the Internet. While trying to sell Pictra to Adobe – albeit unsuccessfully – he caught the eye of Chizen and was then hired by Adobe. "Running Pictra was an incredible experience," Narayen says. "I did everything from collecting the mail to figuring out corporate strategy."
Narayen joined the Haas School's advisory board in 2005 and has become a strong proponent of the school's Leading Through Innovation strategic initiative.
"Innovation is a great theme to rally around and speaks to the core values you want," Narayen says. "What's great about the Leading Through Innovation initiative is that Haas – much like businesses – really thought about its strategic plan. When you're training the next generation of business leaders, that's what you need to do."
Bringing Sand Hill to Adobe
Developing a new generation of entrepreneurs at Adobe is a major priority for Narayen, who sponsored the company's Entrepreneurs in Residence program. The program lets any employee pitch an idea to the company, much as a startup would one of the venture capital firms lining Menlo Park's Sand Hill Road. If the company accepts the pitch, it finances the employee's venture and sets metrics for the employee to qualify for additional funding. "By the time Shantanu's done at Adobe, he will have recruited and trained a number of candidates ready to take over senior leadership," says Charles Geschke, co-chairman and co-founder of Adobe, "all while leading an aggressive expansion."
Ultimately, Narayen's ability to foster innovation within Adobe will play a large role in determining its future expansion. As software delivered through the Internet and mobile devices increasingly becomes the norm, Narayen is continuing to direct Adobe's expansion beyond the desktop. The unveiling of Acrobat.com in June demonstrated how Adobe will leverage existing products to gain an advantage in Internet-based computing. Acrobat.com — a suite of hosted services including word processing, file sharing, PDF creation, and Web conferencing – can be accessed through the Internet browser, but also ties into the latest version of the widely used Adobe Acrobat.
"We've always understood cross-platform and heterogeneous systems better than any other company," Narayen says. "We probably distribute more software than anyone else today."
Giving to the Community
As important as Adobe's financial success is to Narayen, he places equal emphasis on preserving the core values of community responsibility instilled by Adobe's co-founders. Adobe donates 1 percent of profits to charity and encourages employees to do the same through its matching gift program. Recently, the company launched the Adobe Foundation, a private nonprofit foundation dedicated to driving social change and improving Adobe's surrounding communities. The company also has expanded its Adobe Youth Voices program, which empowers youth in underprivileged areas by teaching them to share their experiences through visual media. "The culture of Adobe makes it a special place to work, and it's been a special place since John Warnock and Chuck Geschke co-founded it and Bruce Chizen expanded it," Narayen says. "Now that we're a global company, we know we can have a deeper presence in these markets by enabling education and working with kids — not just shipping our products there." [Top of page]
Business Leader of the Year, Shantanu Narayen, MBA 93
"Adobe's success over the last eight or nine years is largely because of Shantanu."
—Bruce Chizen, former Adobe Systems CEO
Copyright © 1996-2014 | University of California, Berkeley | Haas School of Business | 计算机 |
2014-35/1129/en_head.json.gz/7601 | PC version of Death Rally gets new trailer
by: Sean M
Remedy has just released a new trailer for Death Rally, the PC remake of an iOS and Android game that was itself a remake of a game from 1996.
The game itself centers around racing cars armed with machine guns and mines with the goal of killing your opponents. By surviving each race you will be able to upgrade your car from money earned in each race. It appears as if Remedy is taking a more comedic approach with this trailer, as it shows some guy in his underwear wearing a racing helmet doing karate moves interspersed with footage from the game. I'm not sure what some guy doing karate moves in his underwear has to do with racing cars, but oh well.
The game will be released on Steam on August 3, so if you're interested be sure to pick it up.
| 计算机 |
2014-35/1129/en_head.json.gz/9051 | Hello, No Javascript
LEGAL | PRIVACY | TERMS
HASBRO and its logo, TRANSFORMERS, and its associated characters are trademarks of Hasbro and are used with permission. © 2012 Hasbro. All rights reserved. Game © 2012 Activision Publishing, Inc. All rights reserved. Activision is a registered trademark of Activision Publishing, Inc. KINECT, Xbox, Xbox 360, Xbox LIVE, and the Xbox logos are trademarks of the Microsoft group of companies and are used under license from Microsoft. “PlayStation”, the “PS” Family logo and “PS3” are registered trademarks and the PlayStation Network logo is a trademark of Sony Computer Entertainment Inc. The ESRB rating icon is a registered trademark of the Entertainment Software Association. All other trademarks and trade names are the properties of their respective owners. Activision makes no guarantees regarding the availability of online play or features, and may modify or discontinue online service in its discretion without notice, including for example, ceasing online service for economic reasons due to a limited number of players continuing to make use of the service over time. Online interactions not rated by the ESRB. | 计算机 |
2014-35/1129/en_head.json.gz/9577 | ILOVEDUST Launches
Boca Ceravolo, April 12, 2010
News of a Motion Design studio launch is always exciting, more so when the studio already possesses a considerable repertoire and a strong reputation in Graphic Design.
Originally a Graphic Design / Illustration studio with a clientele ranging from Pepsi to Microsoft and Sony, ilovedust had been flirting with Motion Design for a while, but it wasn’t until recently that they finally decided to launch a Motion department.
With four new projects for clients such as Nike and Mtv , ilovedust showcases their range of styles and technique, from traditional animation to 3D, as well as some great storytelling skills, particularly in the “Nike Chase” project – done in collaboration with Curious Pictures and Director Ro Rao.
We were fortunate enough to get a first glance at the new ilovedust site – which launches today, by the way – and catch up with CD Ingi Erlingsson for an exclusive and very interesting interview.
Make sure to spend some time on their new site and check out the work, motion or static, bombastic stuff!
Looks like they are here to stay. Welcome!
How did ilovedust come to be and when did you first become a part of the team?
ilovedust was started back in 2003 by Mark Graham and Ben Beach. They were both working for a fashion label and decided their time would be better spent on their own ventures. They set up shop in a dusty studio space in Southsea, UK and went to work building a portfolio of initially local clients, but were soon working for some of the biggest companies in the world like coke, Bloomingdales and T-mobile. I joined them in early 2006 after graduating and a short stint in New York working for a motion design company called Surround.
Originally an Illustration / Graphic Design shop, what made ilovedust wander into Motion Design?
Back when I joined the major bulk of work was made up of illustration, with the occasional web site or logo thrown into the mix. Because of my animation background we were always experimenting and playing around with animation work and one day we were working on a print campaign for Pepsi and the opportunity to direct and produce a TV ad came up. We jumped on it head first and the next day we were on a plane headed for New York to cast and shoot the ad. At the time we didn’t have too much of a clue about what we were getting ourselves into, but we surrounded ourselves with some great, talented people that helped guide us through it all. Being in at the deep end has always been a big part of our ethos, we feel we learn the most when we bite off just a little more than we can chew. After that we gathered momentum and started to pick up more and more motion work, which led us to the decision to start up a dedicated department.Illustration and design are still a big part of what we do, but I feel we’ve found a great partner in motion design and animation. Our designers find inspiration in the animation work and the animators get the same from the designers. It helps us evolve and keeps things exciting and interesting, so it’s a great combination for us.
Were there any specific challenges involving the setup of the department, and how do you balance things during the setup of the London branch?
We recognised early on in the process that in order to make the most of our opportunities it would be essential to be situated in London. Here we have access to some of the best freelancers, facilities and creative minds around, so it was a no brainer to set up here. So in early 2009 we started off by renting a small space (which we soon outgrew), hiring a few key people and then went to work. We’re lucky enough to have found and hired some incredibly talented people who have helped us develop a style and approach and also fit right into the family.
It was important to us to get the motion work up to the same standard as our print and illustration work so it took a lot of trial, error, swearing and experimenting. We were lucky enough to be able to balance the personal, experimental work with enough paid work to keep us afloat until we were ready to show what we could do.
Regarding the Nike project: how did the project first begin? Can you take us through the main evolution stages/process of the project? For example: did ilovedust pitch on this directorially?Was there a specific element that the agency was looking for which would determine who won the pitch? I.e. was it the character design mainly, or other things?
Initially when the project started AKQA asked us to pitch treatments and style frames. I think we pitched about 10 ideas to them, all of which we felt pretty strongly about, which I think helped us win the pitch. The final script turned out a little different to our treatments as we’d based them on the lead character being a runner, but in the end she was a dancer. The main stages of the project played out pretty quickly after the script was signed off, we built our characters, designed the environments, shot the motion capture and then got to work putting all the pieces together. The last piece of the puzzle was the sound design which was done by our friend Wevie in Brighton. The director’s cut version we decided to put on the site is a lot closer to our original direction, mixed with a bit of angry robot and mayhem.
How did you guys end up collaborating with Curious Pictures and Ro Rao as live action director?
Ro and the guys at Curious were already working with AKQA on the campaign, producing and directing the other 4 spots of the 5 spot campaign. They shot the footage in LA under the watchful eye of the AKQA creative team, with us keeping tabs on the progress remotely from the UK. It was great to work with Ro and the guys on the live action as it came out really great and helped all 5 spots work as a series.
How do you see yourself in comparison to other studios in the industry, both locally in the UK/Europe and internationally? Do you see a certain advantage or disadvantage having grown from an illustration and graphic design company?
I think we have a definite advantage having evolved out of a design and illustration environment. The more animation we do the less we need to worry about the technical challenges which allows us to really let our design experience take the lead. We also have a team of 12 full time designers/illustrators so when it comes to pitches and work we have a huge resource to pull ideas and design from. We have the advantage of being able to do full campaigns in-house, from the print ads to the tv ads and websites.
In terms of our positioning in comparison to other studios I’d say we were somewhere in between the small 3-4 person shops and the big production companies. We’re still young and fresh to the game, but there are over 20 of us in all across two studios so we have a lot of aggregated experience behind us.
What lies in the future for the ilovedust motion department ? And for the company as a whole?
For the motion department it’s all about growth. We’re looking to take on a few new key people and expand and build on our expertise to really take our work to the next level, along with continuing build new relationships with designers, animators sound designers and creative types around the world that we can collaborate with. We’re also constantly working on self initiated studio work so there will be plenty more of that coming from us in the next year. In terms of things we haven’t done yet, we’d love to do some music videos. For the company as a whole I think it’s a similar goal. We want to build on our previous experience and use it to do bigger and better work. We have some great existing clients like Nike, Sony, Microsoft and Dunkin Donuts that we will be continuing to build our relationship with, as well as making new relationships in the future.
Share Tags: 2d, 3d, animation
Lilian Darmono April 12th, 2010 well done guys! you certainly have come a long way !!! gorgeous work!
Tyquane Wright April 13th, 2010 great work dust…I love dust more now :) | 计算机 |
2014-35/1129/en_head.json.gz/13179 | The browser does not support javascript.
The USGS Land Cover Institute (LCI)
What is LCI?
USGS Land Cover US Land Cover
CONUS Descriptions
Global Land Cover
North American Land Cover
Get Land Cover Data
* DOI and USGS privacy policies apply. USGS releases 1973–2000 time-series of land-use/land-cover for the conterminous U.S. Access the publication and data.
The USGS Land Cover Institute (LCI) is a focal point for advancing the science, knowledge, and application of land use and land cover information. What can LCI do for you?
The USGS and other agencies and organizations have produced land cover data to meet a wide variety of spatial needs. The USGS LCI has been established to provide access to, and scientific and technical support for, the application of these land cover data.
Links and pointers to non-USGS sites, as well as any product mentioned, are provided for information only and do not constitute endorsement by the USGS, U.S. Department of the Interior, or U.S. Government, of the referenced organizations, their suitability, content, products, or services, whether they are governmental, educational, or commercial. Further details can be found in the USGS Policies and Notices page.
LCI Factsheet
URL: http://landcover.usgs.gov
Page Contact Information: LCI Administrator
Page Last Modified: December 2012
| 计算机 |
2014-35/1129/en_head.json.gz/13281 | Difference between revisions of "Version Retention Policy"
Background

The OpenEmbedded community has agreed and the TSC has discussed the creation of a policy for how long to retain (and when to replace and remove) old recipes and what should happen at that time, with regards to oe-core and meta-oe and associated layers.
It is expected that OE will have a related meta-oe or similar layers which older components can be moved into while they are still useful and desirable to maintain. However, these will be alternative versions and not the "core" version any longer.
Within the oe-core we can divide the components into two classes. Critical infrastructure components and standard components. The critical components include the toolchain, autotools, and key libraries. Virtually everything else fits into the standard components bucket.
We also have use cases such as:
Upstream provides support (new releases) and clear guidelines on upgrading for version 4.0 (current), version 3.8 (previous and stable) and version 3.6 (further previous, stable). Upstream is also working on version 4.1.x (unstable, active development).
Upstream provides no clear policy about what's supported other than current.
Community standards indicate a specific version should be used rather than the latest for some reason
An architecture requires specific versions.
Policy

The goal of oe-core is to remain a stable, yet up-to-date set of software that forms the foundation of the OpenEmbedded world. While not everyone will be able to agree on a broad definition of "stable, yet up-to-date," the following guidelines will help define the rules for the inclusion and/or replacement of different versions in oe-core.
First, each of the packages needs to be divided into two categories: Critical Infrastructure and Core components. If an item is neither of these, then the oe-core is likely the wrong place for the component. The definition of which packages are in which categories is outside of the scope of this policy.
By default we want to use the latest stable version of each component. The latest stable version of each component is defined by the component's upstream. When there is no clear policy from upstream we simply have to apply best judgment.
There of course will be exceptions to the default policy. However, when an exception occurs it must be clearly stated and documented when and why we did not use the latest stable version -- or why we may have multiple versions available of a given component. This will allow us to reevaluate the exceptions on a timely basis and decide the exception is no longer reasonable.
Most of these exceptions will be located in the critical infrastructure components, specifically the toolchains. In many cases we will need to support variants of these components either for stability or architectural reasons.
Another common exception is to meet specific policy or compatibility objectives within the system, such as the need to support both GPLv2 and GPLv3 versions of selected components.
If multiple versions are provided, usually the latest stable version will be preferred; however, best judgment will be used to determine the preferred version.
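Where a distribution or build needs to pin one of these exceptions explicitly, BitBake's PREFERRED_VERSION variable is the usual mechanism. The variable name and the "%" wildcard are standard BitBake syntax; the recipe names and version strings below are only illustrative:

 # in a distro configuration or local.conf
 PREFERRED_VERSION_gcc = "4.5%"
 PREFERRED_VERSION_gdb = "7.2"

The "%" wildcard matches any suffix, so point releases of the pinned series are still picked up automatically.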
As existing versions are removed, if they are still desirable, they should be moved into meta-oe or a suitable layer.
We also have the issue of upcoming development versions. It is suggested that these be worked on in specific development layers until they have reached sufficient maturity to be considered stable and ready for inclusion in oe-core.
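For recipes that track such in-development upstream versions, the usual way to keep them available without making them the default is a negative DEFAULT_PREFERENCE in the recipe itself (a standard BitBake variable; the value shown is the common convention, and the recipe name is only an example):

 # in the development recipe, e.g. foo_git.bb
 DEFAULT_PREFERENCE = "-1"

Users who explicitly want the development version can still opt in with PREFERRED_VERSION, while everyone else keeps the latest stable release.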
Related to this are:
We want to encourage distributions that are tracking the latest to try and stay with the latest.
We want to encourage recipes which people are interested in to be maintained, long term, in meta-oe.
We want to encourage distributions to work with and add to / maintain the core rather than deciding we have too frequent of an unhelpful churn (which is to say 4.0.1 -> 4.0.2 of $whatever is good, 4.0.1 -> 4.4.3 of $whatever is not).
Retrieved from "http://openembedded.org/index.php?title=Version_Retention_Policy&oldid=4185" Personal tools | 计算机 |
2014-35/1129/en_head.json.gz/14927 | Attendee Info
TGS Forum
Exhibitor Info Contact
Organized by: Computer Entertainment Supplier’s Association / Nikkei Business Publications, Inc.
In cooperation with: International Game Developers Association Japan Chapter
11/5/2008 Linked to videos of presentations.
9/17/2008 12 titles to be presented at the SOWN have been announced! SENSE OF WONDER NIGHT 2008 (SOWN 2008) is a new event that will be held at the Tokyo Game Show 2008. SOWN 2008 will shine the spotlight on game developers who are seeking new possibilities of expression through the medium of games and will serve as a vehicle for a new style of presentation that broadens the possibilities of games.
The objective of SOWN 2008 is to unleash on the world new games that can give people a "sense of wonder" a feeling that something will change in their world and make them gasp at the moment they lay eyes on the games or hear the game concepts. Moreover, we would like to open up new possibilities for games by
sharing ideas with the participants who view these presentations.
Presentation outline
October 10, 2008 (Friday), 18:00 to 20:30
Held in conjunction with the International Party (17:15 to 20:30)
Restaurant NOA, 1F, International Conference Hall
Presentation Titles
http://www.geocities.jp/yareyare_yaugari/
Yareyare
Depict
http://depict.villavanilla.net/
Jesus Cuauhtemoc Moreno Ramos
The Unfinished Swan
http://iandallas.com/games/swan/
Ian Dallas, University of Southern California
WorldIcelansista
http://wil.tv/pc/
Twin Tower
http://nagoya.cool.ne.jp/o_mega/product/tower.html
http://pixeljunk.jp/library/Eden/
GOMIBAKO(tentative)
http://www.jp.playstation.com/scej/title/gomibako/
Trash Box Team, PlayStation C.A.M.P!
Moon Stories
http://www.ludomancy.com/blog/sown/
Daniel Benmergui
The Misadventures of P.B. Winterbottom
http://www.winterbottomgame.com/
The Odd Gentlemen
Genocide Automation
http://www11.plala.or.jp/normal/
Naoya Sasaki
Nanosmiles
http://www.freem.ne.jp/game/win/g01665.html
Yu Iwai
Review by the screening committee
“Sense of Wonder Night” objectives
To introduce games with a game design and ideas that are experimental and creative, and that cannot be called conventional or traditional
To heighten awareness of the importance of creating games that give people a “sense of wonder”, and to revitalize the game industry with the selected games
To offer people creating experimental games opportunities for the future
To create new domains in game design
The games that we are looking forward to considering for presentation will be demos of prototypes, games with experimental elements that have already been released or that are planned for release, and games developed by students who have hit upon something out of the ordinary. There will be no distinctions made whatsoever between professionals and amateurs. We welcome submissions of games created by small venture businesses as well as doujin games developed by individuals.
If you would like to make a presentation of a game that can give people a “sense of wonder”, please read through the following entry guidelines and then submit your game for consideration.
Games that meet the “Sense of Wonder Night” selection criteria
・A game realizing an innovative user interface
A game that employs features such as natural language processing, image recognition or gesture control to present a new kind of experience
・A game that is created through an automatic generation process
A game that creates a world where the game play or the environment in which the user plays is changed dramatically according to selections made by the player
・A game with an interactive story-telling concept
A game that presents a story experience in a new way and that has the potential of developing into a tool to create a totally new story
・A game that has emerging elements
A game that creates a new form of game play by skillfully incorporating the physical system into the game play elements and by combining the AIs
・A “terrific!” game, even though the reasons are not immediately clear
A game that at any rate gives a strong impression that “this is terrific” as soon as you see it
Games that do not meet the “Sense of Wonder Night” selection criteria
・A game where the focus is on elements that do not necessarily have any relation to the game itself
Games where the wonder is centered on the actual elements that comprise the game such as an innovative background setting or situation, character design, graphics, story and audio
・A new genre created by the rehashing and combination of already existing genres.
However, a genre that creates a truly new game experience will be considered for selection.
・A game whose reason for being new is the targeting of a specific demographic
Examples are games that target women only or games that are designed only for a mature audience. However, if such games leave a deep impression on a large number of people they will be considered for selection.
・A game whose appeal lies in a purely technological innovation, experimental business model or distribution mechanism that has no effect on game play itself.
Such games will not be completely excluded from the screening process, but the game must clearly show that it is capable of directly and definitely changing the game experience.
The above guidelines are vague and incomplete. Things that are unexpected always go beyond rules prescribed in this way by words when they appear, and that is why many people are surprised. Therefore, please understand that the above points are merely guidelines.
The games selected for presentation do not have to be completed games or successful games. The reason for this is that there is much to be learned even from game designs that may ultimately fail, and the knowledge thus acquired may become elements that lead on to the next step. The games selected for presentation should offer an element of wonder, but enjoyment is not an absolute condition.
The size of a game’s budget or of the development team, whether the game’s release has been decided, or whether it has already been released will not have any influence whatsoever on the screening process. Moreover, games do not have to be complete when they are submitted. Furthermore, games can be developed on any type of hardware: a consumer game console, a hand-held game console, mobile phone, personal computer, or original hardware. The reason for this is while there are cases where well-established teams have succeeded in creating games that are different, there are also cases where small teams have succeeded in the introduction of unusual games to the market. These scenarios have been repeated many times in the game industry.
Furthermore, there will be no distinctions made between submissions from people in Japan and those from people overseas. Equally high hopes are being held for both the submissions from Japan and those from overseas.
However, since selections will be based on the information supplied at the time that the games are submitted, the submission of a playable demo and/or video clips of the game being played is an extremely important factor.
Selections were made by the “Sense of Wonder Night” screening committee.
Screening committee members
Keita Takahashi
Kenji Sugiuchi
Takashi Katayama VerX
Simon Carless
Independent Games Festival/Gamasutra
Born in 1975 in Mojiko, Fukuoka Prefecture
Joined Namco (Now NAMCO BANDAI Games) in 1999
Released the “Katamari Damacy” software for PlayStation 2 in 2004
Currently developing “NobiNobiBOY” software for PlayStation 3
Joined ASCII Media Works in 1988
After a stint at editing the computer information magazine “Log in”, he was in charge of planning and developing the game construction tool “Maker Series”.
Currently working for Enterbrain, he oversees the producers of that series.
Representative works are the RPG Maker Series and Panzer Front Series
Participated in setting up an online game business as part of the SoftBank Group, and then established ELEVEN-UP Inc.
After a stint as the president and representative director of ELEVEN-UP, he is currently a director of VerX Inc., a company which is under the umbrella of the Vector Group, and is in charge of producing online games and establishing new services.
Is Chairman of the Independent Games Festival, which holds its awards at Think Services' Game Developers Conference yearly, and has helped popularize independent and innovative gaming for 11 years. He is also a Group Publisher at Think Services, encompassing game products such as the Maggie award-winning Game Developer magazine and the double Webby award-winning Gamasutra website, the top information sources for professional video game developers in North America.
Kiyoshi Shin
International Game Developers Association
Japan Chapter
Is also a game journalist, a lecturer at the Ritsumeikan University College of Image Arts and Sciences, and a member of the board of directors of the Computer Entertainment Supplier’s Association (CESA)
As of July 17
The “Sense of Wonder Night” event has received a lot of inspiration from the “Experimental Gameplay Workshop” that was started at the Game Developers Conference in 2002. We would like to express our thanks to the many people who helped to make these workshops a success and to all our friends.
SOWN Secretariat:
(Only email enquiries will be accepted)
Copyright(c) 2002-2008 CESA/Nikkei Business Publications, Inc. All rights reserved.
Added a new Tablatures category in the Link Archive. You can use the archive to find material that's not on this site.
Burn To Shine Tour Ends
Just because the tour is over doesn't mean we don't have plenty of fun stuff planned at .net. We're almost ready to open up The Official Ben Harper Store, and a cool giveaway is just around the corner. In the meantime, check out the setlist from the last show of the tour in Asheville, NC.
Ben Harper Featured in Details
The December issue of "Details" magazine features an 8-page spread on Ben called "The Harper Style."
Game Physics Engine Development
Thanks for stopping by. This is the website for the book Game Physics Engine Development by Ian Millington, the second edition of which was published by Morgan Kaufmann in 2010. This site contains links to other books of mine, lets you keep up to date with any news on the book, and will include an errata if needed.
But most likely you're here to get the source code for the book. The source code is open source and hosted on GitHub. You can find it by clicking "Source Code" at the top of the page.
Source Moved to GitHub
In the book it says the code is hosted on Google Code. This is no longer true. I moved the code to GitHub in 2010. This is because it is far easier for you to alter the code, and for me to respond to corrections, when it is hosted on a distributed version control system. It doesn't make much sense, I don't think, to have open source projects hosted on a centralised version control system.
You can find the code at http://github.com/idmillington/cyclone-physics.
This site and its content is © 2006-2012 Ian Millington. Additional resources and content may be © 2008-2010 Elsevier Inc.
TikiWiki
By Martin Streicher, Tuesday, July 15th, 2003
http://sourceforge.net/projects/tikiwiki
The Internet was originally conceived to improve communication between far-flung researchers. Today, of course, the Internet can be used by anyone, virtually anywhere, to send and receive information of all kinds. Email, newsgroups, web sites, and more recently, blogs, and RSS feeds are all methods to share information. While all of those forms of communication are popular and effective, they’re also implicitly static: an article in a newsgroup can’t be changed once it’s posted, and the content of a web page is typically maintained and controlled by the page’s owner. Certainly, people can post replies to groups and submit comments to a web site’s forums, but even that new material remains as standalone, static amendments to the original content. Wikis, on the other hand, are implicitly collaborative. Content — any content — in the wiki can be changed, extended, or created anew at any time by any user (although some access controls are typically available). In fact, a wiki is largely a content management system, where the wiki itself is just one way to organize and present the information. And that’s the kernel of the idea behind TikiWiki, or just Tiki, an expansive wiki that also provides for articles, file and image galleries, forums, weblogs, and many other forms of sharing information. At more than 250,000 lines of code, and more than 375 different features, the Tiki developers describe their work as “A catch-all PHP application, so you don’t have to install so many!” This month, iki project leaders Luis Argerich and Garland Foster invite us in to their hut to discuss their ambitious project. What is Tiki? Luis Argerich: Tiki is a general-purpose content management system that can be used for intranets, online communities, portals, forums, and many other kinds of applications. It’s not unique, but it has many, many features, a very active development rhythm, and very detailed documentation.
Garland Foster: Tiki is a full-featured content management (CMS) system. It has a lot of tiny and not so tiny details, making it unique. We have a workflow engine, graphic creation using jGraphPad, an XML-RPC interface to edit blogs using desktop tools, PDF generation, wiki structures, quizzes, configurable trackers, a caching system to cache external links in any object, categories, themes that can be changed for individual sections, etc., etc. And the “et ceteras” are not trivial at all! We like to say, “You can do * using Tiki.” Why “yet another” content management system? Argerich: While there were a lot of open source content management systems at the time Tiki was started, no one package fulfilled the list of features we wanted. There were also a lot of problems with licensing and the way the projects were handled. For example, some started as free products and then switched to paid releases. We wanted a 100%-free CMS, a huge list of features, and no license restrictions. Tiki was born.
Foster: When we started there were some very nice pieces of software in PHP, some good content management systems, nice forums, blogs, and even nice wikis. But what if we wanted everything in a single package so users, permissions, and administration can be shared? That’s how Tiki was started, and now we can’t stop the gigantic snowball.
Tiki sounds a little daunting. Does it feel “monolithic?” Argerich: Tiki does have a zillion features, yet it’s simple and friendly. It’s easy to install, easy to use, and has very complete documentation. You can turn off all the features but one and have a nice wiki, a blog, or forum software. If you need to add a feature to your site you don’t need to download another PHP product — just enable the feature and roll. Another key aspect is the use of Smarty and templates so you can easily customize Tiki to look like anything you want without touching a single line of PHP code.
Your project is very ambitious. What’s been your biggest challenge? Foster: Keeping the balance between the number of features and the quality of features. We decided to start adding as many features as we could as fast as possible, so more and more users could help us refine the features. In general, in the near future, we’d like to focus on making features better and better more often than adding new features. But so far, it’s been “Expand first. Conquer later.” Maintaining the documentation and the translations to different languages has also been a big challenge.
Tiki is the second most-active project on SourceForge.net. What do you attribute that ranking to? Foster: We have a very friendly and open-minded approach to open-source development: we welcome everybody to help us and discuss Tiki features and implementation. I’ve found that elitism is a very common disease in open source communities and we want to avoid that by all means.
Argerich: We’re really surprised that Tiki has remained in the top three most-active projects for such a long time. Somehow, we were lucky to find a group of very intelligent and nice people interested in Tiki. The Tiki online community is just great. We are constantly getting ideas, feedback, and suggestions from the mailing lists and forums. How many people use Tiki? Argerich: We don’t track Tiki’s usage. My guess is that Tiki is not a very famous piece of software — but that’s surely going to change after this interview!
Foster: It’s almost impossible to measure. Tiki is an application that can rule the world of intranets. Speaking honestly, as long as Tiki can be useful to at least a single user, it’s a success.
How do you coordinate such a large project? Argerich: Developers are free to do whatever they want as long as they don’t dramatically change an existing feature. They also have to make whatever feature they add optional. So, they can fix bugs or develop planned features as they see fit. We have a task list of things to do for each new version, and developers can pick things they would like to implement. We have ordered chaos.
Foster: I fly around the whole project, planning releases, planning tasks and to-dos for each release, and writing documentation.
How much time do you spend on Tiki? Argerich: For some reason, I need money to survive, so I do have a full-time, salaried job. Tiki is a part-time occupation. The time I spend on Tiki varies, depending on subtle factors such as hunger, the caffeine level in my blood, sleepiness. I spend maybe a half hour per day and more on the weekends.
Foster: I have a job, too. Tiki is a part-time pastime. What’s next for Tiki? Foster: Our next version, 1.7, dubbed “Eta Carinae” is scheduled for mid-July 2003. It’ll add some new features and dramatically improve others, as well as enhance usability. It will be a bigger release, but much more usable and nice to the user. Then for 1.8 we want to target the enterprise and education areas, so we’ll be adding features and improving others to make Tiki attractive for companies, schools, and universities. The workflow engine, quizzes, and other important areas will be extensively surveyed to improve them. 1.9 will focus on performance and scalability to make Tiki very attractive to hosting companies and big websites, as well as hobbyists that have a large user base. The 2.0 version will be the result of testing, fixing, and improving the 1.6-1.9 releases. We may not add any features to 2.0, but instead concentrate on making it very fast, stable, and usable. Then we’ll start to survey the extensive list of requests for enhancements (RFEs) and plan new features for 2.1
And what’s on your wish list? Argerich: I’d really like to have a developer or a group of developers in charge of different Tiki features so they can make each Tiki feature as good as any ad-hoc PHP application in the field. It would be really nice to make Tiki the application you use instead of a lot of different PHP applications. Some artists contributing to make a better user interface are also needed. How can people contribute? Foster: If you like Tiki, you can contribute. If you don’t like it, then you can help us to make it better. We badly need PHP coders and artists, as well as testers and translators. Feel free to contact us, and you will be on board in a breeze.
PURPOSE: TikiWiki, or just Tiki, is an open source, web application that provides a full wiki, as well as articles, sections, user and group management (including optional LDAP interaction), polls, and quizzes, file and image galleries, weblogs, and much more.
SYSTEM REQUIREMENTS: Tiki is based on PHP, MySQL, and Smarty and can run on any platform that supports those software systems. LICENSE: Lesser GNU Public License (LGPL)
FOUNDED: Tiki 1.0 was released October 2002
PROJECT LEADERS: Luis Argerich, Garland Foster, Eduardo Polidor
DEVELOPMENT STATUS: Production/Stable
Martin Streicher is the Editor of Linux Magazine. You can reach Martin at [email protected].
Posted by ESC on April 15, 2003
In Reply to: Re: Easter eggs in software posted by R. Berg on April 15, 2003
: : : : : : : : EASTER WORDS AND SYMBOLS
: : : : The egg. The egg has been the symbol of renewed life after death and resurrection in many cultures. "The Greeks and Romans buried eggs, real or dummy, in their tombs, scenes on Athenian vases show how baskets of eggs were left on graves, Maoris used to put an egg in the hand of a dead person before burial. Still today Jews present mourners on their return from the funeral of a relative with a dish of eggs as their first meal. Christianity took this ancient sign of rejoicing at rebirth and applied it to the Resurrection of Jesus.The tradition of painting the Easter egg in bright colours may have its origin in a legend that tells that Simon of Cyrene, who carried Christ's cross, was an egg merchant. When he returned from Calvary to his basket of produce, which he had left by the roadside, he found that all the eggs miraculously colored and adorned." From "How Did It Begin?" by R. Brasch (Pocket Books, New York, 1969).
: : : : Easter bunny. And what about the Easter bunny? Early Christians often celebrated their sacred occasions on the same days as pagan holidays to "blend in" and avoid persecution. They ".astutely observed that the centuries-old festival to Eastre, commemorated at the start of spring, coincided with the time of year of their own observance of the miracle of Christ's Resurrection.It just so happened that Eastre, a fertility goddess (the ancient word eastre means 'spring'), had as her earthly symbol the prolific hare, or rabbit. Hence, the origin of the 'Easter bunny.'" From Sacred Origins of Profound Things: the Stories Behind the Rites and Rituals of the World's Religions" by Charles Panati (Penguin Books, New York, 1996).
: : : : Easter Lily. The lily is a symbol of purity because of its whiteness and delicacy of form. It also symbolizes innocence and the radiance of the Lord's risen life. It is called the Easter lily because the flowers bloom in early spring, around Easter time. From the Hallmark Press Room online at http://pressroom.hallmark.com/easter_symbols.html Accessed April 11, 2003.
: : : : Easter Parade and Wearing New Clothes. In the early church, those who were baptized at the Easter Vigil dressed in white robes and wore the robes during Easter week as a symbol of their new life in Christ. People who had been baptized in previous years wore new clothes to indicate their sharing in the new life. New clothes at Easter became a symbol of Easter grace. In Europe during the Middle Ages, people in their new clothes would take a long walk after mass, which has evolved into the tradition of Easter Parades. An American belief is that good luck can be ensured for the year by wearing three new things on Easter Sunday. From the Hallmark Press Room online.
: : : : Easter Sunrise Service. The Easter custom of the sunrise religious service was brought to America by Protestant immigrants from Moravia who held the first such service in Bethlehem, Pa., in 1741. Origins of the early morning time stem from a passage in the Bible from the book of Luke: ".but on the first day of the week, at early dawn" women visited Jesus' tomb and found it empty. Sunrise services also may be related to the Easter fires held on hilltops in continuation of the New Year fires - a worldwide observance in antiquity. Those rites were performed at the vernal equinox, welcoming the sun and its great power to bring new life to the world. From the Hallmark Press Room online.
: : : : Hot crossed buns. "At the feast to Eastre, an ox was sacrificed and the image of his horns carved into ritual bread - which evolved into the twice-scored Easter biscuits we call 'hot cross buns.' In fact, the word 'bun' derives from the Saxon for 'sacred ox,' 'boun.'" Sacred Origins of Profound Things. A cross bun kept from one Good Friday to the next was thought to bring luck, the buns were supposed to serve as a charm against shipwreck, and hanging a bun over the chimneypiece ensured that all bread baked there would be perfect. Another belief was that eating hot cross buns on Good Friday served to protect the home from fire. From the Hallmark Press Room online.
: : : ...all of which is fine and wonderful, but does anyone know why the hidden little gimmicks that programmers and/or producers embed in software (whether computer programs or these days even DVD movies) are also called easter eggs?
: : Oh, never mind. Happy Easter.
: I think it's because they're hidden and found.
I am guessing that they are called "Easter eggs" because they are like the "hidden picture" drawings of our (or at least my) childhood. It's a line drawing with eggs or other objects "hidden" in it.
Re: Hidden picture ESC 04/15/03
Hidden eggs R. Berg 04/15/03
Re: Hidden eggs ESC 04/16/03
Re: Hot crossed buns - an update ESC 04/18/03
Ars Technica's Jon Stokes and Ryan Paul sit down with the engineering director …
- Jan 20, 2010 5:30 am UTC
Privilege levels and permissions for Web apps
RP: I also wanted to ask if there's a way for an application to request heightened privileges for certain kinds of activities. For example, in the Mozilla ecosystem with XULRunner, they distinguish between internal application-level privileges for extensions and Web-level privileges. Is there a way to have a secure mechanism so that Web applications get higher privileges to more? MP: I think it's a slippery slope. That's the challenging thing—that's how current operating systems work: there's explicit ECLs and permission grants. I guess the thing that I've learned from traditional OSes is, if you look at how that goes wrong, is that users tend to have a very hard time managing it. We have over 200 Googlers using this every week, and we tend to just inflict a new build on them and see if they use things more or less, and we just iterate from there. If you contrast that with the Web model, the Web mostly takes the view of "you shouldn't be able to do anything bad from a Web application." Which mostly serves the Web really, really well. You cruise the Web without worrying too much about badness lurking out there. It's not 100 percent true, because of malware and browser exploits and stuff like that, but for the most part you just cruise the Web and don't sweat it too much.
We want to be really careful that we don't end up in the bad ECL/permission list kind of place that conventional operating systems are, so we're being careful with it. But we're definitely struggling with this right now, with the geolocation UI, which is one of the hardest ones we've had to add so far. It's fairly tricky; you need to add it carefully because you don't want users to be spied on in terms of where they are physically in the world—they clearly need to have some say in whether websites know where they are. But you also want to figure out how to make it so it's not obnoxious.
Like, some phones today, if you go to websites that use HTML5 geolocation, they're constantly popping up boxes that say, "Do you want show your location? I know you just told me an hour ago, but are you sure you want to show your location?" And that gets really obnoxious, too, and users get trained to just click "Yes" and not read the text. (My kids call it the "'yeah, whatever' button.") That's dangerous, too. So, we're trying to figure this out, and I guess what I'd say is, we're trying to keep it simple, but make sure that users know what's going on. But we don't have the total answer yet. For the most part, the way browsers have been addressing this is info bars and dialog boxes, but I would say in general that we're moving more towards info bars and away from dialog boxes, because dialog boxes tend to lock up your browser experience. A good example is when Google Calendar pops up a dialog box saying I have an appointment, the whole browser locks up until I deal with that thing, which is not very good. So a lot of the browsers, including Firefox, have been moving for geolocation towards an info bar approach, where something pops up at the top of the browser window and notifies you; but you can keep cruising and ignore it, or you can decide to deal with it.
A good example is what Chrome does when you type in a password. It offers to remember the password, but you can just ignore it and it'll go away eventually. That's the path we're going down right now with our UI thinking in terms permission grants for some of these things, but it's still in flux and we haven't tested a lot of it with users, so we'll see what we learn when we try it.
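To make the permission flow above concrete, here is a minimal sketch using the standard HTML5 Geolocation API. The Permissions API used to check state before prompting arrived in browsers after this interview, so treat that part as an assumption about current browsers; the messages and options are placeholders, not anything Google described.

```typescript
// Minimal sketch: request location only on a user action, and check the
// permission state first so the user is not re-prompted on every visit.
// navigator.permissions is newer than the APIs discussed in this interview.
async function showNearby(): Promise<void> {
  const status = await navigator.permissions.query({ name: "geolocation" });
  if (status.state === "denied") {
    console.log("Location is blocked; fall back to manual entry.");
    return;
  }
  // Triggers the browser's permission info bar the first time it is called.
  navigator.geolocation.getCurrentPosition(
    (pos) => console.log(`lat=${pos.coords.latitude} lon=${pos.coords.longitude}`),
    (err) => console.log(`Declined or failed: ${err.message}`),
    { enableHighAccuracy: false, maximumAge: 600_000 } // reuse a fix up to 10 minutes old
  );
}
```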
Extending the Chrome OS UI
RP: It seems like Chrome extensions are going to be the primary point of extensibility for extending the UI and the operating system. Is that correct?
MP: In terms of the browser chrome, yeah, that's probably fair. In terms of toolbars or things like that that you might want to add, sure, that's the way you would do it.
RP: So are there any plans to add extra UI integration points for extensions that are going to be specific to Chrome OS, like, to be able to customize features that are specific to the platform?
MP: We haven't thought of any, yet, but I don't know... that's an interesting idea. Did you have anything specific in mind?
RP: I don't know, I was just thinking about the menu stuff, and how, on a widescreen netbook there would be some value in having a sidebar infrastructure kind of thing that might not be as useful for a regular browser, but might be worthwhile in an environment where the browser is all you have.
MP: Yeah, that's a good point. And it is true that when you cruise the Web today on a netbook you see a lot of dead space. Particularly on news sites, there's a lot of dead space on the left and right edges of the screen, because nobody ever expected to run at these aspect ratios, and with such wide screens.
But that's an interesting idea... yeah, you're right, we could allow some sidebars. We have been looking at sidebar UIs for Chrome OS, experimenting with that, and with stuff like, should the tabs be on the top, on the side, on the left, on the right; or with panels, should they be on the bottom, or on the right? We've gone back and forth and tried a lot of different things, and we're not done trying yet. They keep moving around every week or two.
A lot of our approach is, we have over 200 Googlers using this every week, and we tend to just inflict a new build on them and see if they use things more or less, and we just iterate from there. And we've got another year of iteration left before consumers get their hands on it, so it's going to change a lot as a result.
The native apps vs. Web apps issue, round II
MP: Ryan, I'd love to get your take [on Chrome OS]. Is there anything that you think is weird, or that doesn't compute in your head?
RP: Well, I'm taking a wait-and-see approach because I think there's a lot of potential here. But the main thing that sticks out in my mind is that it's hard at first to see what kind of value you can really deliver beyond what you can already get today with a regular Linux distribution running Chrome. And I think that really the integration is going to be everything. But I'm still a little bit skeptical about the whole idea of making the browser be the core part of the user experience, and not having anything external to that. Because if you look at the way that people are interacting with Web services, increasingly they're using desktop client applications, particularly in the Twitter space, for example. I think that it'll be interesting to see if there's a way to serve those kinds of needs while sticking with a browser model. MP: Uh-huh. Yeah, and if we have to extend what browser apps can do to do that, we should do that.
JS: That's what I was talking about earlier with Fluid and Mailplane.
MP: I still think what we've got to do is go look at those apps and see, "why did this have to be created as a native app?" And add that capability so that you don't have to do this anymore.
A good example is that a lot of native apps end up being made in these things because people want to run things in the background, and people are still trying to figure that out [for Web apps]. There's work going on in "persistent workers," with the Web community trying to figure out, how do we make an API in HTML 5 that lets something upload photos in the background without consuming a tab and without having to have the website open?
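The "persistent workers" idea never shipped under that name (service workers and background sync later covered parts of it), but the tab-bound half of the pattern can be sketched with a plain dedicated Web Worker. Everything below is illustrative: the worker file name and the /upload endpoint are placeholders, and the worker still dies when its tab closes, which is exactly the limitation being described.

```typescript
// Page side: hand selected photos to a dedicated worker so the UI thread stays free.
const uploader = new Worker("upload-worker.js"); // placeholder file name
document.querySelector<HTMLInputElement>("#photos")!.addEventListener("change", (e) => {
  const files = Array.from((e.target as HTMLInputElement).files ?? []);
  uploader.postMessage(files); // File objects can be structured-cloned to the worker
});

// upload-worker.js (worker side): upload one file at a time off the main thread.
// Note: this worker is tied to the tab; closing the tab still cancels the upload.
self.onmessage = async (e: MessageEvent<File[]>) => {
  for (const file of e.data) {
    const body = new FormData();
    body.append("photo", file);
    await fetch("/upload", { method: "POST", body }); // placeholder endpoint
  }
};
```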
EB: Or notifications, for that matter. Take Twitter desktop clients, where people are notified every time they get a new tweet. There's a notifications API that would allow something like that within the browser. MP: We're also looking at the issue of uploads, because that's a common use case, where I'm trying to upload photos to Picasa and I have to keep the tab open. Sometimes people install helper apps for that, so we're trying to figure out how to make that be a built-in part of the Web instead.
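The notifications piece mentioned here did later standardize as the Notifications API. A minimal sketch of the in-browser alternative to a desktop Twitter client follows; the title and body text are placeholders.

```typescript
// Ask once for permission (an info-bar style prompt), then surface new
// mentions as system notifications without any native helper app.
async function notifyMention(text: string): Promise<void> {
  if (Notification.permission === "default") {
    await Notification.requestPermission(); // one-time prompt
  }
  if (Notification.permission === "granted") {
    new Notification("New mention", { body: text }); // placeholder title and body
  }
}
```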
The built-in media player
MP: One thing that we mentioned during the launch that nobody's picked up on, but you guys might be into, is the integrated media player.
Another big aspect to what we're doing is we're integrating a whole media player into Chrome and into Chrome OS. People often get confused about this, and it's a fairly subtle but important point. Because in a sense what we're doing is integrating the equivalent of Windows Media Player into Chrome itself. To have a computer, that's the other thing you need: you need a way to play JPEGs and MP3s and PDFs and all that stuff when you're offline. For example, you might just have a USB key that has a bunch of MP3s on it, so you want to be able to plug that in and listen to those MP3s. There might not be any controlling webpage for that activity, but it's clearly something you need to be able to do in any reasonable operating system or browser. So we're doing a lot of work to make Chrome and Chrome OS handle those use cases really well.
RP: Is your approach to that standards-based, with HTML5 audio and video?
MP: Yes, definitely. Which is not to say... we love Flash, too, so we run with Flash as well. But we also love video tag and audio tag and all that stuff.
RP: Are you going to be exposing more codecs through the video tag?
MP: We're not specifically looking at adding more codec support, but what I'm talking about is really orthogonal to the video/audio tag. Because the video/audio tag is designed for when you have a webpage that says, "I want to play this video." But there's also a use-case where you just have this USB key with an MP3 or a video file on it. How do I browse that when I'm not necessarily on any website whatsoever?
Another example is when someone mails you an MP3 in Gmail. You shouldn't have to exit the browser to listen to that. We want to make that work directly in Chrome, and make it a really clean, smooth experience in ChromeOS.
So that's the other big part of what we're doing on the UI front, is integrating a whole media player framework into the browser. Which is not unusual, because other browsers do this, too. But it's not something Chrome has done historically.
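From the web platform side, "playing an MP3 with no controlling webpage" can at least be sketched with a file input plus the HTML5 audio element; the #mp3-picker element is an assumed <input type="file"> on the page. The deeper Chrome OS integration described here, such as browsing a USB key directly, goes beyond what a page can do and is not shown.

```typescript
// Play a locally stored MP3 (for example, from a USB key) entirely in the
// browser: wrap the picked File in an object URL and feed it to an Audio element.
const picker = document.querySelector<HTMLInputElement>("#mp3-picker")!; // assumed <input type="file">
picker.addEventListener("change", () => {
  const file = picker.files?.[0];
  if (!file) return;
  const player = new Audio(URL.createObjectURL(file)); // object URL wraps the local file
  void player.play();
});
```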
SANTA CLARA, CA -- (Marketwire) -- 11/29/12 -- AccelOps, Inc., a pioneering developer of software that integrates security (SIEM), performance and availability monitoring into a single easy-to-use application, doubled its customer base and revenues for its 2012 financial year, which ended September 30, 2012. AccelOps also introduced significant cloud, database and Bring Your Own Device (BYOD) monitoring enhancements to its software, and closed an $18 million round of financing during the year.
AccelOps software is used to monitor more than 350 data centers worldwide. Customers include managed service providers, government agencies and enterprises in all vertical sectors, including financial services, education, healthcare, manufacturing and retail. The third-generation product continues to pioneer comprehensive, integrated monitoring across the entire IT infrastructure.
AccelOps' real-time analytics and next-generation Security Information and Event Management (SIEM) gives users a single-screen view of all on-premise and cloud resources: servers, storage, network, security, virtualization, users and applications. Patented real-time analytics technology cross-correlates log and event data to make sense of complex IT patterns and events as they happen. The AccelOps modular architecture is designed for today's virtualized hybrid cloud environments and eliminates the need for multiple single-function tools, providing greater visibility while reducing unnecessary cost and complexity for modern IT organizations.
"2012 was a turning point for the SIEM market, and the time has come for a new generation of integrated IT monitoring," said Elie Antoun, President and CEO of AccelOps. "Cloud security and operational effectiveness are inseparable. Our customers tell us how much more secure and effective their environments are when they replace existing SIEM, network, server and other single function monitoring tools with the 'single pane of glass' view our software provides."
Additional information is available at www.accelops.com.
About AccelOps AccelOps provides a new generation of integrated security, performance and availability monitoring software for today's dynamic, virtualized data centers. Based on patented distributed real-time analytics technology, AccelOps automatically analyzes and makes sense of behavior patterns spanning server, storage, network, security, users, and applications to rapidly detect and resolve problems. AccelOps works across traditional data centers as well as private and hybrid clouds. The software-only application runs on a VMware ESX or ESXi virtual appliance and scales seamlessly by adding additional VMs to a cluster. Its unmatched delivery of real-time, proactive security and operational intelligence allows organizations to be more responsive and competitive as they expand the IT capabilities that underpin their business. For more information, visit www.accelops.com.
AccelOps and the AccelOps logo are trademarks of AccelOps, Inc., a privately held Delaware corporation. Other names mentioned may be trademarks and properties of their respective owners.
Agnes Lamont
Marketingsage
Marc Wick discusses the GeoNames project: how it started, what it uses to keep running, where it is being used and where the project is heading. He also discusses free and open software, how an increasingly GPS-enabled world is driving the need for free data, the politics in data access and more. Dahna McConnachie (Good Gear Guide) on 21 November, 2007 05:13
GeoNames is a free and open source geographical database. Primarily for developers wanting to integrate the project into web services and applications, it integrates world-wide geographical data including names of places in various languages, elevation, population, and all latitude / longitude coordinates. Users are able to manually edit, correct and add new names with a user-friendly wiki interface. The data is accessible through a number of webservices and a daily database export. Launched at the end of 2005, GeoNames is already serving up to over 3 million web service requests per day, and contains over eight million geographical names. A growing number of organisations such as the British Broadcasting Company (BBC) and Greenpeace are ditching proprietary aggregation databases and turning to GeoNames. Other high profile organisations using the database include Microsoft Popfly, LinkedIn, Slide.com, and Nike. Like many open source projects, it all started somewhat accidentally when founder, Marc Wick needed to develop a holiday apartments application. We speak to Wick about the project, the challenges involved in using public data, and the changing times in the GIS world. What is your technical background?I have a degree in Computer Science from the Federal Institute of Technology in Zurich (ETHZ). I worked as Software Engineer for Siemens Transportation Systems and for some major Banks in Switzerland mainly in an environment of Oracle, Java, Unix. What gave you the idea for the project?I was developing a holiday apartments application and needed gazetteer data for it. Since commercial data was prohibitively expensive I searched for free data and started aggregating it. Later I realized that other applications have exactly the same needs and are also aggregating free data. I subsequently released the data as the GeoNames project as it seems a waste of effort if a lot of people are doing the same task without sharing it. The idea was to share what I had already aggregated, if other people join in and help improve it, that is all the better.(The theory being) that if we all team up together we will manage to build a better gazetteer than each of us would be able to do on our own. A lot of applications share this point of view and have switched from their own proprietary aggregation database to GeoNames. Among them are Greenpeace and the BBC. Why did you make the project open source, and why did you choose the creative commons attribution license over other available licenses?It is a common engineering principle to use the simplest possible implementation that could possibly work. I believe we should use the same principle when choosing a free license and pick the freest license possible. If there is no absolute requirement for restrictions like ShareAlike (SA) or Noncommercial (NC), don't use them. Let the project free. For many applications SA or NC licenses are as closed and restrictive as a commercial license. It is a pain so many 'free' projects are using only half-free licenses. There are more than a couple of 'free' geo data projects whose data we cannot use because they are using a less liberal license. They are as inaccessible to us as the commercial data providers. 
(Read more information on Creative Commons Licences here)How much has GeoNames grown from when it was launched?Initially there were three or five sources, namely the National Geospatial-Intelligence Agency in the US (an agency related to the US department of defence), the United States Geological Survey (USGS) and worldgazetteer. The number of sources has now grown to around 100. The number of services in the web service API has grown from a single search service to around 30 different services from elevation to timezone. The number of supported administrative levels has grown from one to up to four for some countries.
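To make the web service side concrete, here is a small sketch against two of the services described above, full-text search and timezone lookup. The endpoint and parameter names follow the public geonames.org documentation; the username value is a placeholder, since GeoNames requires a registered account name on each request.

```typescript
// Query the GeoNames search and timezone web services (JSON variants).
const USER = "demo_user"; // placeholder: register your own username at geonames.org

async function searchPlace(name: string) {
  const url = `http://api.geonames.org/searchJSON?q=${encodeURIComponent(name)}&maxRows=5&username=${USER}`;
  const data = await (await fetch(url)).json();
  return data.geonames; // matches include name, countryName, lat, lng, population
}

async function timezoneAt(lat: number, lng: number) {
  const url = `http://api.geonames.org/timezoneJSON?lat=${lat}&lng=${lng}&username=${USER}`;
  return (await fetch(url)).json(); // includes timezoneId and gmtOffset
}
```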
Dahna McConnachie
Mono gets real
Novell yesterday took the wraps off a test version of Mono, an open source development platform that allows the creation of .NET applications that can run on Linux and Unix as well as Windows machines.Mono opens the way towards cross-platform interoperability for emerging .NET applications and environments while allowing developers to write in higher level, richer programming languages. The project is the brainchild of Miguel de Icaza, who joined Novell when it acquired his company, Ximian, last year. As with many software projects, Mono is behind schedule. Originally Ximian (as was) talked about the delivery of Mono sometime in 2002. This date has gradually been pushed back until later this quarter.
On arrival, Mono 1.0 promises to incorporate key .NET-compliant components, including a C# compiler, a Common Language Runtime just-in-time compiler and a full suite of class libraries. The development platform will help coders to write rich client, Web services and server-side applications.
To cut costs and optimize benefits, HR portals must be tweaked and adjusted. Here are some ways to rev up your portal's performance for ease of use, employee relevance, and functionality.
Samuel Greengard
Vinu Raman can still recall the dark days of the Web. It was only a few years ago that the supply-chain manager for Hewlett-Packard would have to click across dozens of intranet sites in search of information. He’d also find himself constantly logging on at different sites and battling to keep all his browser bookmarks current. "Sometimes, it would seem as if the Web was creating more chaos than it was solving," he says.
Fortunately for Raman, that’s no longer the case. As HP has added capabilities and improved the functionality of its portal, he and his colleagues are finding that it is transforming the way they work -- and think. Now when workers at the computing giant need information, they simply click to the appropriate tab using a browser and find what they’re looking for. An HR tab, for example, offers a range of information from wage reviews to benefits, employee assistance to performance-management criteria. In fact, Raman can post a requisition online when he needs to fill a position and view résumés as they stream in. He also can access the portal from home or while traveling.
Hewlett-Packard is one of a growing number of companies that are redesigning and reorganizing their Web interface to make the information age a reality. They -- and human resources departments -- are recognizing that in order to cut costs and fully realize the benefits of e-business, it’s essential to optimize performance, usability, and functionality. "A well-designed portal can bring order from chaos," says David Rhodes, a principal at consulting firm Towers Perrin in Stamford, Connecticut. "It can put an incredible number of resources at employees’ fingertips."
Just a couple of years ago, organizations were busy piling on features as quickly as upstart dot-coms accumulated venture capital. In today’s tough economy, however, the emphasis is clearly on tweaking and improving portal performance, while adding tools as they make sense. Not surprisingly, HR has a key role to play in the process, since many enterprise portal applications use data and systems that originate in human resources or pass through it. "HR is usually at the center of a successful portal deployment," says Michael Rudnick, national enterprise portal leader for consulting firm Watson Wyatt Worldwide in Stamford, Connecticut.
Building a first-class portal requires more than a pretty home page. Yet with many technical and practical issues converging, achieving success is no sure thing. It takes considerable planning to put the right information in the right place, and ensure that it’s up-to-date and available in a digestible format. As a result, many companies are now conducting formal usability studies and focus groups, and analyzing surfing patterns, in order to optimize their portals and maximize their gains. Many are also creating a task force or interdepartmental team to oversee projects and interface with outside companies that link to the portal, such as HMOs and 401(k) providers.
Know what users needOne company that has raised the stakes of its portal is General Growth Properties, a Chicago firm that owns 96 shopping centers and manages 46 other malls throughout the United States. A few years ago, the company realized that communicating with about 3,000 employees in 146 locations required a sophisticated portal. "It’s essential for today’s employees to have the information they need to make good decisions," says Judy Herbst, vice president of human resources.
In April 1999, the company went live with an HR portal from Ultimate Software, headquartered in Weston, Florida, that focused primarily on employee self-service. Workers could update basic information such as an address or phone number, and view paycheck and tax information. Over time, GGP has included links to its learning-management system and added the ability to select and change benefits, and actually enroll in classes online. "The portal has become the main point of entry into HR and other online services," Herbst says.
Yet GGP learned early on that additional features don’t always translate into a better portal. Only about 50 percent of its employees have access to computers, so it was essential to add kiosks at various locations. HR also worked hard to assume a strategic role in the design and development of the portal. That meant understanding how users click through the site, what they stumble over, and what they’d like to see. GGP uses Web tracking software and has conducted internal usability studies. An interdepartmental team that oversees the site solicits feedback from employees on a regular basis, and the company has helped train key employees in the skills required to manage the portal. Currently, 85 percent of employees use the portal.
"In order for a portal to succeed, those using it must be able to customize content and have the specific information they need right at their fingertips." Building a first-class portal requires more than a pretty home page, Rudnick says. One glaring problem is that workers often find themselves lost once they click beyond the start page. That’s because many organizations have simply tied together a variety of intranets, usually defined by departmental boundaries. HR might have its own site, while finance and operations | 计算机 |
Is Microsoft's Hyper-V in Windows Server 8 finally ready to compete with VMware?
Summary: Microsoft's built-in virtualization in Windows 2008 Server R2 has so far been unable to attain significant enterprise penetration. But all of that could change with Windows Server 8.
SUBSCRIBE TO: Virtualization
Virtualization, Enterprise Software, VMware, Storage, Servers, Operating Systems, Networking, Microsoft, Hardware, Windows 39
If you've been following my writings on ZDNet for the last several years, you've probably realized by now that I am something of a virtualization and server technology junkie.
In my day job as a systems integration professional, I spend a great deal of time with these technologies, helping my customers attain greater server efficiency and density in their datacenters.
Figuring out how to fine-tune and optimize server and datacenter infrastructure is what I do, and in doing so I get to play with any number of virtualization and server operating stacks from all kinds of vendors. This includes Mid-range UNIX and mainframe virtualization stacks, VMware vSphere and also to a much more limited extent, Microsoft's Hyper-V.
It's actually kind of interesting that March 1, 2012 marks just over four years since I started writing for ZDNet, when I got my start as a guest columnist for Mary Jo Foley's All About Microsoft blog.
My second ZDNet article, published in Mid-February of 2008, was a review of the very first version of Hyper-V, which was introduced in beta form as an add-on to Windows Server 2008.
Microsoft's Hyper-V puts VMware and Linux on notice (Feb 2008)
Hyper-V: The no-brainer virtualization stack for Windows (July 2008)
I knew at that time that Hyper-V had a number of compelling features that could potentially allow it to gain significant inroads against VMware, particularly in Microsoft technology-centric environments.
However, despite excellent performance and overall value compared to its much more expensive competitor, the product was missing a number of key virtual infrastructure management and high availability features that was necessary to seal the deal for large enterprises in order to consider it to be in the running for x86 server virtualization platform of choice.
Four years later, enter Windows Server 8 Beta. In 2012, VMware continues to be the primary x86 virtualization platform for large enterprises and its position as industry leader in that space seems secure. But sometime at the end of this year, presu | 计算机 |
Value Assessment Tool for ICT Projects at the European Commission
Stefka Dzhumalieva, Franck Noël and Sébastien Baudu
The Directorate-General for Informatics (DIGIT) enables the European Commission to make effective and efficient use of information and communication technologies to achieve its organisational and political objectives. More than 10 years ago, many European Commission departments, led by the Internal Audit Service and Directorate-General for Agriculture, selected COBIT as a framework for the assessment and improvement of IT processes.
IT governance processes at DIGIT involve strategy and portfolio management, project and development methodology, and enterprise architecture. Major challenges faced by DIGIT today involve improving the integration of business and IT planning cycles as well as optimising investments of scarce resources to maximise the business value of IT.
The Value Assessment Tool (VAST) research is one of many IT governance implementation initiatives led by Francisco Garcia Moran, director general of DIGIT, and Declan Deasy, director of information systems and interoperability solutions. This research takes advantage of frameworks such as ISACA’s Val IT and categorises non-financial benefits of projects to highlight and compare their expected value.
When speaking to DIGIT officials about the governance of IT within the Commission, their alignment with ISACA principles and their focus on the five areas of governance that are supported by COBIT are evident.
Georges Ataya, CISA, CISM, CGEIT, CISSP
Academic Director of IT Management Education, Solvay Brussels School of Economics and Management
Electronic government (e-government) is now mainstream for transforming the public sector so that it achieves its political objectives in an effective, efficient and transparent manner. Today, practically all policy initiatives result in related information and communication technologies (ICT) projects,1 and ICT has become a key enabler for policy impact, transparency and compliance to norms and standards.
At the same time, public organisations have increasingly limited resources, so new investments have to be made carefully. This trend has been reinforced due to the current economic crisis. Additionally, the complex characteristics of the public sector2 may further influence new initiatives, so projects could often go beyond their initial scope and budget, and require more time than had been envisaged. They, however, may still be considered successful.
While concepts such as cost-effectiveness and return on investment (ROI) can be easily used to define the success of a project in the private sector, within the public sector the created ‘public value’ has the biggest weighting.3 Costly and risky projects must be undertaken to comply with legislative requirements; extended scope is often accepted to satisfy all stakeholders; deadlines are extended to cover all business needs put forward. Therefore, the ‘public value’ created by an ICT project will determine its success and it should be differentiated from the conventional concepts of project benefits.
Based on similar reflections within the European Commission, the recently set up Corporate Project Office (CPO) had to look for a way to evaluate and prioritise promising ICT projects. Such evaluation needed to distinguish between the public value created by the project (qualitative value) and the cost effectiveness of the project (quantitative value), and at the same time take into account the environment in which this new ICT project would be developed, implemented and operated.
A number of well-established methodologies used in private and public organisations were evaluated for their potential to be reused in the European Commission’s context. The analyses concluded that none of the evaluated solutions fit with the specific organisational setup. Therefore, building on these methodologies, the aim was to define a custom-made, easy-to-use and automated solution, allowing the Commission’s services to assess the expected value of envisaged projects. The result of the work is the Value Assessment Tool (VAST) of the European Commission and is the subject of this article.
ICT Context at the European Commission
The European Commission is a complex, decentralised organisation composed of 41 services with a great level of autonomy, each under the leadership of a director-general. This organisational setup is reflected in the ICT aspects of the institution:
At the business process level, services are fully autonomous and harmonisation is done on an ad hoc and voluntary basis.
At the information systems level, services are also autonomous, but corporate systems are mainly developed by the Directorate-General for Informatics (DIGIT), which is also responsible for defining the development and operating the underlying infrastructure for which certain layers are managed centrally.
For proximity support services, a consolidation exercise is underway.
At the infrastructure level, the network, for example, is managed centrally.
DIGIT is also responsible for e-government: internally with the eCommission initiative and with Member States through the Interoperability Solutions for European Public Administrations (ISA) programme.
Like other public administrations, the European Commission is subject to constraints: constant or diminishing resources, in the face of increasing demand. Therefore, priorities have to be carefully established, and the launch of new ICT projects should be based on their expected value. Within the Commission, this aspect is reinforced by the nature of the organisation, since duplications might easily occur when ICT is managed at several, sometimes independent, levels.
To alleviate such difficulties, it is essential to assess the value promised by a given project at an early stage of its inception and, most important, to distinguish between its potential qualitative and quantitative value. It is also important to benefit from a fair comparison element between projects coming from different services. These elements triggered the need for a value assessment methodology.
However, due to the organisational context of the European Commission, such a methodology could not serve its purpose if it was used only at corporate level by the CPO. A potential value assessment methodology needs to be widely adopted by the decentralised structures responsible for ICT. Only suitability to the whole ICT community would reveal the full potential of the selected method.
Therefore, for such a methodology to be accepted, a first requirement is its ease of use and ‘self-training’. Qualitative value assessments should take 30 minutes, given the fact that people conducting the assessment have familiarised themselves with the new ICT project through the standard project documentation for the Commission (e.g., business case and vision) or they are part of the project team. The latter case of usage could be defined as self-assessment.
Furthermore, this methodology has to be usable by both ICT and non-ICT professionals so that they can complement each other’s views (business and technological) during the project assessment process. Moreover, when decisions have to be taken based on the assessment, the chosen method should provide meaningful, but at the same time, concise, output so that it could be used as a communication means with top management.
Lastly, going beyond the organisational setup and environment, a specific and unique requirement to the Commission is the need to estimate the value of a project at the level of the European Union.
Value Assessment Methodologies Review
Taking into account the ICT organisational context of the Commission and the specific requirements that it imposed, well-established and practically oriented frameworks/ methodologies from both the private and the public sector were selected and examined.
Val IT Framework
Val IT is a governance framework initiated to address the lack of IT investment and management guidelines. Its goal is to ensure the delivery of optimal value from IT investment at adequate costs and levels of risk. The Val IT framework provides extensive guidelines and describes processes to be set up and followed in three main domains: Value Governance, Portfolio Management and Investment Management.4
On the positive side, it was considered that Val IT gives a holistic, high-level overview of the mechanisms that can be used to manage the value derived from IT. However, it was not possible to be applied at the European Commission due to the highly decentralised organisational setup when managing ICT. Bearing in mind this constraint, Val IT was used only as an insight for the endeavour.
Demand and Value Assessment Methodology for Better Government Services
The Demand and Value Assessment Methodology for better government services is an initiative of the Australian government that assesses the:
Demand of e-government services from the viewpoint of end users
Value of such services, based on the more traditional costs and benefits, but taking into account social and governance implications
It is supported by a spreadsheet-based tool5.
The major advantages of the Demand and Value Assessment tool are that it covers both financial and non-financial value and is assisted by a semi-automated tool giving graphical representation of the results. These also closely matched the projects requirements. However, the assessment criteria of the methodology were chosen for the assessment of service provision at the national or local government level and, therefore, differ greatly from those considered at the European level. Furthermore, as it is rather detailed and provides an ‘open’ structure (criteria and objectives need to be defined by the evaluator), the use of the methodology entailed training, and this was not in line with the aim of easy use and quick results.
Economic Efficiency Assessment Methodology (WiBe) 4.0
WiBe has been used since 1992 by the German federal administration to ensure the economic efficiency of its ICT projects.6 It is based on two main steps:7
Identifying parameters that may have an impact on the economic efficiency of the project (a general catalogue of criteria is provided)
Determining the economic efficiency of the project with the support of detailed guidelines
The core of WiBe is an exhaustive list of criteria, of which some can be quantified in monetary terms, and some in non-monetary terms. In order to evaluate a project, one should pick the applicable criteria from this catalogue. The strong point of this approach is that it may entail accurate cost/benefit analysis in both monetary and non-monetary terms. However, due to the differences between the chosen criteria, the cross-comparison between projects could be questioned. Again, a disadvantage for the use of this methodology is that it may require up to one day or even more of training if it is used for the first time.8
MAREVA
MAREVA was launched in 2005 by the French eGovernment Agency (ADAE) and is widely used by the French governmental organisation, and more recently in Quebec.9 This methodology bases its assessment on the following axes:10
Profitability
Risk control
Values both qualitative and quantitative for the whole public sector
Values both qualitative and quantitative generated outside of the public sector (citizens and enterprises)
The project’s necessity
MAREVA is composed of two spreadsheet-based files. One addresses the first axis, and the other targets the other four axes. It has been positively evaluated that this approach is structured, and the tool comes with a training package. However, the method focuses on the financial aspects, is greatly detailed, and requires training to understand the various concepts and the calculations. On the contrary, the approach of the European Commission was to focus on the qualitative value.
Value-measuring Methodology
The Value-measuring Methodology (VMM) was developed between 2001 and 2002 under the co-ordination of the Federal Chief Information Officer Council and has the main objective of sound ICT investment management. VMM encompasses four steps:11
Develop a decision framework.
Perform an alternative analysis.
Pull the information together.
Communicate and document.
This approach is closely linked to the establishment of a business case for new projects and could assist portfolio management practitioners.
The major advantage of this methodology is that it aims to assess both qualitative and quantitative values and builds clear processes to be followed. However, VMM does not propose the set of criteria to be used; it suggests only how to select these criteria and prioritise them to fit the assessed investment closely. While this may entail a close match and precision, the criteria selection process may require time. Further, the cross-comparison between initiatives may be weak if the criteria differ from project to project.
The scope used to evaluate these methodologies has been confirmed by Gartner’s report on Public-Value-of-IT Frameworks,12 in which worldwide examples were selected and reviewed, underlining both their strengths and their weaknesses. Four out of the five assessed methodologies form part of this report.
Project Approach
The analyses of the existing methodologies provided a good overview and important insights on the subject of value assessment in the private and public sectors. They have also shown that for such a methodology to serve its purpose, it should be carefully tailored to the environment in which it will be used. However, considering that none of the methodologies fully satisfied the requirements posed by the specific ICT context of the European Commission, a decision was taken to develop a customised solution.
As already explained, the assessment needed to go beyond the traditional financial benefits. Therefore, both qualitative and quantitative criteria had to be used, and the qualitative criteria had to evaluate explicitly the value for the European Union promised by the new project. Furthermore, all ICT projects’ assessments should use the same set of criteria to allow cross-comparison and prioritisation. Although keeping its focus on ICT, it should use, as much as possible, business-oriented terms to assure suitability for both business and IT services. Finally, in order to assure effortless adoption, ease of use, and concise and quick presentation of results, a spreadsheet-based tool was selected. Thus, a certain level of automation and enabling flexibility was achieved while the overall approach was still in its adoption phase.
To assure the success of the tool, several iterations of development, tests and feedback sessions with a subset of the European Commission services were completed until a stable version was produced, supported by guidelines for a methodological reference for the use of the tool and for the analyses of the results. The next section of this article looks at the custom-made value assessment tool in more detail.
Presentation of the Value Assessment Tool
VAST is a spreadsheet-based tool that consists of an Index page, five Value perspectives and a graphical Results page (depicted in figure 1). The tool is also supported by guidelines that serve as a methodological reference and help its use. Each of these parts is presented in the following sub-sections.
Project Identification
The Index page collects general information about the project: project name, contacts, business owner of the project and date of assessment. This Index page also serves as a central point of the tool, and in doing so, it provides shortcuts to the other parts of the tool.
Value Perspectives
VAST consists of five value perspectives: four estimate the qualitative value of the ICT projects (Value for European Union [EU], Value for European Commission [EC], Risks and Necessity) and one estimates the quantitative value (Financial Costs and Benefits). The four qualitative perspectives consist of sets of criteria that are grouped into a number of sections and sub-sections. The quantitative perspective requires financial information on the project, and some exact figures need to be provided. The five perspectives, with their objectives, are outlined as follows.
The Value for EU perspective looks at the assessment of the external value of an ICT project. Any benefits delivered outside the Commission itself (value to the European society or to European citizens) are considered external value. If the project does not have external users and is used for purely administrative purposes, this value perspective can be omitted.
The Value for EC perspective encompasses criteria that assess the internal value of an ICT project. All factors that can contribute to the improvement of the Commission performance are considered to deliver an internal value, including:
Political value—Whether the IT solution contributes to achieving the Commission’s strategic objectives
Administrative value—Whether the project will contribute to the work efficiency and effectiveness
IT governance value—Whether the project will contribute to the rationalisation of the Commission’s information systems portfolio
Internal users’ value—The value for the Commission’s employees
The Risks perspective indicates risks related to the need for adequate project management to deliver the ICT project. It also assesses technical, security, business, legal and acceptance-related risks.
The Necessity perspective assesses the need for supporting or developing the project by looking at four subject areas: external demand, internal demand, business needs and technical needs. This perspective tries to answer questions such as ‘Do we really need to undertake this project?’ and ‘Why do we need to support it?’
The Financial Costs and Benefits perspective aims to quantify, in monetary terms, the costs and benefits of the ICT project. The approach consists of identifying every cost for development, maintenance, support, training and infrastructure and the benefits from saved time, reduction in direct operation costs and reduction in IT costs.
Results of the Value Assessment
Each qualitative criterion has a pre-assigned weight (from 1 to 3), which is based on its importance. Furthermore, each criterion has four possible assessments for which it receives between 0 and 3 points multiplied by the criterion weight. This approach promotes both a consistent way of evaluation and fine-tuning and precision of the achieved results. The quantitative criteria are expressed in numbers.
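To make the arithmetic concrete, the short sketch below reproduces the weighted-scoring idea in Python; the criterion names, weights and ratings are invented for illustration and are not the actual VAST criteria.

    # Illustrative sketch of the VAST-style qualitative scoring arithmetic.
    # Criterion names, weights (1-3) and selected ratings (0-3) are hypothetical.
    criteria = [
        ("Contributes to strategic objectives", 3, 2),
        ("Improves work efficiency",            2, 3),
        ("Reuses existing components",          1, 1),
    ]

    score = sum(weight * rating for _, weight, rating in criteria)
    maximum = sum(weight * 3 for _, weight, _ in criteria)

    # Expressing the result as a share of the maximum is one simple way to
    # consolidate a perspective for a one-page results view.
    print(f"Perspective score: {score}/{maximum} ({100 * score / maximum:.0f}%)")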
The defined formulas are calculated by the spreadsheet application and are consolidated in the Results page of VAST. The Results page is literally one page (printed) that graphically presents the four qualitative perspectives. The quantitative perspective follows a similar approach and is represented in a different graph.
The assessment is based on the already-provided data. However, to have a complete overview of the project, some supplementary information needs to be introduced into the Results page. This supplementary information is comprised of the project executive summary, the addressed business domain, the main stakeholders and the time frame. Adding this information to the value assessment allows the Results page to be used independently from the tool.
Guidelines
VAST is supported by guidelines that explain each criterion addressed in the five value perspectives. The document lists the criteria in the exact same way as the tool so that a user can easily navigate through it. The guidelines also provide general information on how the tool should be used and how to analyse the achieved results and use the tool as a methodological reference. The guidelines are, thus, entirely sufficient for self-training on VAST.
Practical Applications of the Value Assessment Tool
For an evaluation and validation of the tool, three iterations of development were undertaken. The chosen approach and the set of criteria were presented to the Commission IT community, and the tool was provided for free use by the Commission services. Interested parties were requested to provide detailed feedback from tests undertaken. Building upon these remarks, an adjusted and stable version of the tool was produced. Throughout these iterations, the achievement of the requirements put forward has been confirmed. Some limitations of the tool were also revealed.
The IT professionals at the Commission agreed that it is rather difficult to express in a concise manner the benefits of ICT projects, and they agreed that the VAST tool, which allows qualitative and quantitative value to be distinguished, helps to do this. The VAST tool also sheds light on otherwise overlooked areas of projects. For example, if a system aims at serving needs of the European citizens (external user needs), the spotlight would usually focus on the satisfaction of these needs. However, the project might be using innovative technology or be producing reusable modules. These possibilities additionally increase its value, and VAST can demonstrate this.
Utilising the exact same criteria for each assessment allows a valid cross-comparison of similar projects. For example, VAST can be very useful when the business requests a higher number of projects than what the IT entity can feasibly deliver. The use of the tool allows justified prioritisation, and the projects with higher value can be put forward. However, when comparison needs to be done between projects coming from different services, it is considered best that the evaluation be performed via a mediator (e.g., by the CPO).
The objective for the tool to be suitable for both business and the IT professionals was confirmed by the Health and Consumer Protection policies service. It established a practice to perform value assessment with VAST by representatives from the business and IT at the beginning of a project. Going together through each criterion, discussions identified the weak areas of the project. In this way, many concepts are clarified for the two parties, and a true partnership between business and IT can emerge from this practice.
The results of VAST comprise one self-explanatory page, which gives an overview of not only the qualitative and the quantitative value, but also the risks and necessity of the ICT project. Therefore, the tool can be used for communication purposes to engage senior management and stakeholders of the project. The Results page can be attached to other documentation or can be the subject of a specific meeting. Going even further, as found by the service for Trade Policies, VAST can be considered a helpful management tool.
Overall, it has been confirmed by the Commission services that the tool is easy to use and adopt. A project assessment lasts between approximately 30 minutes and one hour (at first use), assuming that the individuals conducting the assessment have familiarised themselves with the standard project documentation required at the Commission or that they are part of the project team on the business or technology side. It was also confirmed that there is no need for specific training, and where difficulties appear, the guidelines provide a sufficient level of clarification. However, the financial costs and benefits perspective was often noted as challenging to fill in, as concepts like saved time and workload are difficult to express in monetary terms.
Other European institutions (beyond the Commission) also showed interest in adopting VAST. However, as the tool is highly tailored to the Commission context, the adoption requires certain customisation, which has been the case with the European Chemical Agency.
Although this method of internal validation of the tool served its purposes, in order to assess VAST’s wider applicability and benefits, a systematic comparison of VAST with similar value frameworks should be conducted.
Summary and Conclusions
Public organisations are increasingly managed with limited resources, and decisions for new investments have to be taken cautiously. At the same time, risky and costly projects have to be undertaken due to the circumstances in which such organisations work: compliance with legal regulations, various stakeholders’ needs, etc. Thus, costly projects, from a financial perspective, can still bring benefits to public organisations and their stakeholders. This is especially true for the traditionally costly and complex ICT projects.
The public sector requires clear differentiation between the benefits of an ICT project (qualitative value) and its financial cost effectiveness (quantitative value). This work addresses precisely this issue. Building upon the well-established value assessment methodologies, VAST, the Value Assessment Tool of the European Commission, was delivered. To validate the work, three iterations of development were undertaken with the Commission’s services, which confirmed the achievement of the tool’s requirements and, thus, revealed its benefits: demonstrated value and benefits of ICT projects, cross-comparison and prioritisation, enhanced communication between project stakeholders, suitability for both the business and IT communities, ease of use, and adoptability.
Despite the benefits of VAST, the process of validation also revealed its weak points. It was observed that the financial costs and benefits perspective is challenging to fill in as it is sometimes hard to express in financial terms some of the benefits of a project (e.g., saved time and workload). Further, comparison between projects emanating from different services may be difficult if the assessment is not performed by a mediator, for example the CPO. The tool is highly tailored to the Commission’s context, and to be adopted by other organisations, it may require customisation. Finally, the tool has been tested only within the Commission, and to prove its general applicability, further tests should be undertaken. However, as it has been developed in the public domain, the tool package is freely available upon request.
Going beyond the scope of this work, the approach of using a tool for the assessment of important, but intangible areas of ICT management was positively accepted within the European Commission. Using the same approach, a tool for evaluation of the IT governance maturity of the organisation is in its pilot phase.
This article was previously published in Electronic Government and Electronic Participation: Joint Proceedings of Ongoing Research and Projects of IFIP EGOV and ePart 2010 Conferences (Trauner, Austria, 2010), edited by Jean-Loup Chappelet, Olivier Glassey, Marijn Janssen, Ann Machintosh, Jochen Scholl, Eftimios Tambouris and Maria A. Wimmer. Permission to republish was granted by the editors of the IFIP EGOV Conference.
The latest version of VAST and its guidelines can be downloaded at http://ec.europa.eu/dgs/informatics/vast.
Endnotes
1 Schäuble, Wolfgang; 2007, www.wolfgang-schaeuble.de/fileadmin/user_upload/PDF/070301egovernment.pdf
2 Rainey, Hal; Robert Backoff; Charles Levine; ‘Comparing public and private organizations’, Public Administration Review, 36/2/1976, p. 233–244
3 Halachmi, Arie; Tony Bovaird; ‘Process reengineering in the public sector: learning some private sector lessons’, Technovation, 5/17/1997, p. 227–235
4 IT Governance Institute, Enterprise Value: Governance of IT Investments, The Val IT Framework 2.0, USA, 2006
5 Australian Government, Information Management Office, Demand and Value Assessment Methodology for Better Government Services, Canberra, Australia, 2004
6 ePractice, 2004, www.epractice.eu/en/library/281229
7 Federal Ministry of the Interior, Department IT 2 (KBSt), WiBe 4.0—Recommendations on Economic Efficiency Assessments in the German Federal Administration in Particular with Regard to the Use of Information Technology, Berlin, 2004
8 Ibid.
9 ePractice, 2007, www.epractice.eu/en/cases/mareva
10 Le portail du ministère du Budget, des Comptes publics, de la Fonction publique et de la Réforme de l’État, 2007, https://mioga.minefi.gouv.fr/DB/public/controlegestion/web/pages/CHAP_4_8_Methode_MAREVA.html
11 Chief Information Officer Council, 2007, www.cio.gov/documents/ValueMeasuring_Methodology_HowToGuide_Oct_2002.pdf
12 Gartner, Industry Research, ‘Worldwide Examples of Public-Value-of-IT Frameworks’ (ID Number: G00146056), 2007
Stefka Dzhumalieva, a member of the strategy and portfolio management team at the European Commission since 2008, works on enforcing IT governance within the European Commission and steering the development of the e-Commission program. She can be reached at stefka. [email protected].
Franck Noël was deputy head of the unit and led the strategy and portfolio management team at the European Commission. He is currently head of the IT office at the European Court of Auditors and is responsible for IT governance in the institution. He can be reached at [email protected].
Sébastien Baudu, a member of the strategy and portfolio management team at the European Commission since 2006, works on enforcing IT governance within the European Commission and steering the development of the e-Commission program. He can be reached at [email protected].
Far Cry Nexus Launched
posted by Dark0ne
Following on from a great 2012 I’m keeping the momentum up with the launch of Far Cry Nexus. For those of you who are unaware, Far Cry 3 came out towards the start of December and in my personal opinion (and a few of the online publications I browse) it’s one of, if not the best game released during 2012. I managed to sneak this gem in to my collection during the recent Steam Christmas sale and I haven’t been able to put it down since. It’s actually managed to curb my current DotA 2 addiction, which is quite a feat.My experience with the Far Cry series has been a bit hot and cold. While I enjoyed the initial areas of the original Far Cry back in 2004 I found myself uninstalling it once I was getting assaulted by invisible mutants in dense jungle locations and never completed it. In 2007, Crysis came out which I enjoyed immensely, completing it several times. Development of Far Cry 2 moved from Crytek (Far Cry, Crysis) to Ubisoft Montreal (Assassin’s Creed, Tom Clancy franchise) and was set in Africa, rather than a tropical island. The setting and the move from Crytek to Ubisoft left me a bit cold so I never got into the game, despite owning it. In 2011 Crysis 2 came out, and despite really wanting to like it, I’m not afraid to say I thought it was an absolutely terrible game and lacked any of the open, sandbox elements that made the original so good. With my confidence in Crytek shattered, I opened up a bit to the idea of Ubisoft Montreal’s Far Cry 3 which was back on tropical islands. I lost my initial skepticism of the game within the first hour of play and am now thoroughly enjoying it and highly recommend it.If you’re not in the know about Far Cry 3, then here’s a very quick break-down: if you like open sandbox games like Skyrim, Oblivion, Morrowind, Fallout 3 et al, where you can move freely around a large map and pretty much do anything and everything then you’ll like Far Cry 3. To put it in even more layman’s terms; it’s much like Skyrim with guns on a tropical island. It has hints of influences from Assassin’s Creed, but by and large it’s an amazingly unique game that is sure to keep you engaged for many, many hours.It’s all sounding great so far, right? Unfortunately there’s a snag: while Ubisoft have released a map editor for multiplayer maps, they haven’t released an SDK or editor for the main game, which makes modding a bit more tough. Not as tough as the recent XCOM game, but still, it does present a barrier to entry for some. With no official modding forums over at the Ubisoft Official Forums talk about modding has been restricted to a few rather large threads where new hints and tips are shared by budding modders. New mod announcement threads are lost among the myriad of other Far Cry 3 related discussions to do with anything and everything. Modders working hard to ensure they, and you, can make the most of the game deserve more than that, they deserve a dedicated place where they can share their work without getting drowned out by all the other game related noise. And that’s why I think the game needs a Nexus.Over the Christmas and New Year period I’ve worked hard on setting up Far Cry Nexus and getting in contact with as many of the known authors as I can, asking them if they’d be willing to share their work on the site and talking to a few about what I can do to help. 
The initial response has been good, and once again I’m here hoping to support a great game that could be even greater if only the modders were given the room they need to make the most of the game.You’ll already find the file database has been populated with some of the mods, and hopefully that will continue to fill up as word spreads and we can put a spotlight on the modding community for Far Cry 3. Honestly, if you’re a bit burned out with modding for some of the other games we support right now I can highly recommend Far Cry 3 for filling the void. If you’re finding it a bit hard to talk about modding on the official forums amongst all the other general chit chat for the game then our Far Cry 3 forums have been split up into separate categories to better accommodate modding chatter.I’ll continue to look into ways of helping to support Far Cry 3 modding, and I’d love to hear from any of you currently working hard to mod the game. Comments (93)
jvk1166z - Morrowind Creepypasta
Better Jumping and Falling
Blood Cover
Bleach (main menu theme)
thoughts on human-computer interaction, user interfaces, design
Time to claim success on electronic sketching of UIs?
In Las Vegas this week (March 18th), Microsoft demoed Microsoft Expression Blend 3 with SketchFlow. SketchFlow is a new tool that appears to be a commercial strength version of our previous research tools in this space:SILK -- for sketching, storyboarding, and prototyping GUIsDenim -- for sketching, storyboarding, and prototyping Web sites (w/ lots more in terms of functionality and testing than was possible in SILK)K-Sketch - informal prototyping of animations Much of the functionality of these three research tools has been embedded in a full strength web development system (Expression Blend 3). The image above shows the UI, but doesn't really capture it (you need to watch the video below).Watch this video of the demo (go to about 2/3 of the way in If you go too far in and they are already sketching, just back up a bit. The video is quite long.): http://live.visitmix.com/ (click on Day 1 Keynote)More info on it here:http://electricbeach.org/?p=145The talks use the term "informal" all over the place. Clearly our "informal user interfaces" work has had impact on industry. I know this often takes many years (we first showed SILK in 1995!). But I want to thank Brad Myers (my PhD advisor) and all of the students, staff, and postdocs that have worked on this project (especially Jimmy, Jason, Mark, Richard, and Yang). You should all be quite happy to see this come to fruition. It sometimes takes many years to see impact and many other researchers never see it.What do you guys think? Are we done? What don't they do?PS Watch the Buxton intro at the beginning to see a lot of the motivation. I wonder if they will claim to never have seen our work and were instead motivated by Buxton's recent book?
James Landay
sketching UIs informal UIs prototyping SILLK denim expresion blend
Professor of Computer Science at Stanford, specializing in human-computer interaction. I am also a Cornell Tech, fellow. My current research interests include Design, Crowdsourcing, Automated Usability Evaluation, Demonstrational Interfaces, Ubiquitous Computing, User Interface Design Tools, and Web Design. I am using ideas from these domains to help solve problems in the areas of Environment, Health, and Education. Previously, I was a Professor in Information Science at Cornell Tech, a Professor in Computer Science & Engineering at the University of Washington, a Lab Director at Intel Research, and a Professor in Computer Science at UC Berkeley.
visit statistics
Time to claim success on electronic sketching of U... | 计算机 |
ASCII (American Standard Code for Information Interchange) is one of the early character encoding systems for computers. It is a 7 bit, 128 character system that was designed to represent the Latin alphabet, numerals and punctuation. It is not designed to represent characters from other alphabets. This often causes problems because many programming languages were originally developed for ASCII, and only later added support for Unicode and other character sets.
ATOM is a content syndication standard, similar to RSS, which allows websites to publish feeds that allow other sites, news readers and web servers to automatically read or import content from each other.
See also RSS.
bridge language
A bridge language is a widely spoken, international language, such as English, French or Spanish, that is used as an intermediate language when translating between two less widely spoken languages. For example, to translate from Romanian to Chinese, one might translate first from Romanian to English, and then English to Chinese because few people speak Romanian and Chinese directly.
See also interlingua.
A character set can be as simple as a table that maps numbers to characters or symbols in an alphabet. ASCII, for example, is an old system that represents the American alphabet (in ASCII, the number 65 represents 'A'). Unicode, in contrast, can represent a much larger range of symbols, including the large pictographic symbol sets for languages such as Chinese and Japanese.
Character encoding is a representation of the sequence of numeric values for characters in text. For many character set standards, there is only one coding, so it is possible to confuse the two ideas. In Unicode, on the other hand, there is one numeric value for each character, but that value can be represented (encoded) in binary data of different lengths and formats. Unicode has 16-bit, 32-bit, and variable length encodings. The most important is UTF-8, which is to be used for all data transmission, including Web pages, because it is defined as a byte stream with no question of size or byte order. Fixed-length formats also have to specify processor byte order (Big-Endian or Little-Endian).
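As a small illustration of the point about byte length and byte order (using Python purely as a calculator, not as part of any particular standard):

    # One character, three encodings: lengths and byte order differ.
    ch = "é"                        # U+00E9
    print(ch.encode("utf-8"))       # b'\xc3\xa9'  - 2 bytes, byte-order free
    print(ch.encode("utf-16-le"))   # b'\xe9\x00'  - little-endian 16-bit unit
    print(ch.encode("utf-16-be"))   # b'\x00\xe9'  - big-endian 16-bit unit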
A content management system is a piece of software that manages the process of editing and publishing content to a website or blog. A CMS enables editors to supervise the work of writers, manage how articles or posts are displayed, and so on. These systems also make it easier to separate content production (writing) from design related tasks, such as a page layout. Word Press, Movable Type, Drupal and Joomla are examples of widely used content management systems.
A corpus (plural corpora) is a large and structured collection of texts used for linguistic research. In the context of translation tools, a corpus consist of one or more aligned texts. These corpora typically contain texts that are about a certain domain and consequently can help to find the terminology used in a domain.
Copyleft is a use of copyright law to enforce policies that allow people to reprint, share and re-use published content without prior written permission from the author. Copyleft licences require that derivative works use the same licence, so that they are as Free as the original work.
Copyright is a form of intellectual property law giving the author of a work control over its use, re-use in different media, translation, and distribution.
Creative Commons is an organization that was founded to promote new types of copyright terms, also known as copyleft. The organization has developed legal templates that define new policies for sharing and distributing online content without prior knowledge or consent from the original producer.
Disambiguation is the process of determining or declaring the meaning of a word or phrase that has several different meanings depending on its content. The English word "lie", for example, could mean "to recline" (I need to lie down), or "to tell a falsehood". Machine translation systems often have a very difficult time with this, while it is an easy task for humans, who can usually rely on context to determine which meaning is appropriate.
disambiguation markup
Disambiguation markup is a way to embed hints about the meaning of a word or phrase within a text, so that a machine translator or other automated process can understand what the author intended. For example, the expression "<div syn=similar>like</div>" would tell a text processor that the word like is synonymous with similar, information a program could use to avoid misinterpreting like as "to like someone".
The principal database and catalogue of human languages, providing linguistic and social data for each language. In particular, Ethnologue lists estimates of the number of speakers of each language in each country and worldwide. It is available in printed form and on the Internet at http://www.ethnologue.org. Ethnologue's database includes information on more than 6,900 known languages, and continues to grow.
Free, Libre and Open Source Software. An umbrella term for all forms of software which is liberally licensed to grant the right of users to study, change, and improve its design through the availability of its source code. FLOSS is an inclusive term generally synonymous with both free software and open source software which describe similar development models, but with differing cultures and philosophies.
fuzzy matching
Fuzzy matching is a technique used with translation memories that suggests translations that are not perfect matches for the source text. The translator then has the option to accept the approximate match. Fuzzy matching is meant to speed up translation; however, it carries a greater risk of inaccuracy.
gettext is a utility, available in several programming languages, for localizing software. It works by replacing texts, or strings, with translations that are stored in a table, usually a file stored on a computer's disk drive. The table contains a list of x=y statements (e.g. "hello world" = "hola mundo").
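The snippet below is only a simplified illustration of that table-lookup idea; real gettext implementations read compiled .mo catalogs rather than an in-memory dictionary, and the message strings here are made up.

    # Simplified illustration of gettext-style lookup (not the real gettext API).
    catalog = {"hello world": "hola mundo", "goodbye": "adios"}

    def _(message):
        # Fall back to the original string when no translation is available,
        # which mirrors how untranslated messages are normally handled.
        return catalog.get(message, message)

    print(_("hello world"))   # hola mundo
    print(_("good morning"))  # good morning (untranslated fallback)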
GNU / GPL
GNU or GNU's Not Unix, is a recursive acronym for a set of software projects announced in 1983 by a computer scientist at MIT named Richard Stallman. The GNU project was designed to be a free, massively collaborative, open source software initiative. In 1985 the Free Software Foundation was founded to support the GNU project; it later published the GNU General Public License (GPL), the licence under which much GNU software is distributed.
OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods
Thank you for visiting, but the Gay Hong Kong community information portal is no longer available.
I have been told that the University computer center (ITSC) recently received one (1) complaint from an alumnus about this site being located on the campus network. That complaint triggered an investigation, and ITSC determined that this site was in violation of the following section of their Acceptable Usage Policy:
Users should not use the network resources for activities that are not related to the University (e.g. commercial and private activities).
It was explained to me that while the actual computer serving this site belongs to me personally, and resides in my rented flat in the University's Staff Quarters, the use of the campus network was inappropriate for this content.
So I have taken that content down, and redirected all of its page requests to this message.
Please note that I have no significant objection to any of this, especially since the site is no longer needed and I have been wondering what to do with it. While the Gay Hong Kong community information portal was heavily accessed years ago, for the past few years it has been very rarely updated, as there are other sites which have taken on the mantel of information provision and are doing a much better job at it than I ever could.
It has been a lot of fun, and I thank you for your interest and support over the past 19 years.
Edward Spodick (Spode)
Hannes Tschofenig
Internet Draft Siemens
draft-ietf-nsis-rsvp-sec-properties-01.txt
Expires: September, 2003
RSVP Security Properties
<draft-ietf-nsis-rsvp-sec-properties-01.txt>
Status of this Memo

This document is an Internet-Draft and is in full conformance
with all provisions of Section 10 of RFC2026.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other
documents at any time. It is inappropriate to use Internet-Drafts
as reference material or to cite them other than as "work in
progress".

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt
Abstract
As the work of the NSIS working group has begun, there are also
concerns about security and its implications for the design of a
signaling protocol. In order to understand the security properties
and available options of RSVP, a number of documents have to be read.
This document summarizes the security properties of RSVP and examines
them from a different point of view. This work in NSIS is part of the
overall process of analyzing other signaling protocols and learning
from their design considerations. This document should also provide a
starting point for security discussions.
1 Introduction...................................................3
2 Terminology....................................................3
3 Overview.......................................................5
3.1 The RSVP INTEGRITY Object....................................5
3.2 Security Associations........................................6
3.3 RSVP Key Management Assumptions..............................7
3.4 Identity Representation......................................7
3.5 RSVP Integrity Handshake....................................11
4 Detailed Security Property Discussion.........................12
4.1 Discussed Network Topology..................................12
4.2 Host/Router.................................................13
4.3 User to PEP/PDP.............................................17
4.4 Communication between RSVP aware routers....................25
5 Miscellaneous Issues..........................................28
5.1 First Hop Issue.............................................28
5.2 Next-Hop Problem............................................28
5.3 Last-Hop Issue..............................................30
5.4 RSVP and IPsec..............................................31
5.5 End-to-End Security Issues and RSVP.........................33
5.6 IPsec protection of RSVP signaling messages.................33
5.7 Accounting/Charging Framework...............................34
6 Conclusions...................................................34
7 Security Considerations.......................................36
8 IANA considerations...........................................36
9 Open Issues...................................................36
10 Acknowledgments...............................................36
Appendix A. Dictionary Attacks and Kerberos......................36
Appendix B. Example of User-to-PDP Authentication................38
11 References....................................................39
12 Author's Contact Information..................................42
13 Full Copyright Statement......................................43
1 Introduction
This document summarizes the security properties of RSVP and should
also provide a starting point for further discussions.
The content of this document is organized as follows:
Section 3 provides an overview of the security mechanisms provided by
RSVP including the INTEGRITY object, a description of the identity
representation within the POLICY_DATA object (i.e. user
authentication) and the RSVP Integrity Handshake mechanism.
Section 4 provides a more detailed discussion of the mechanisms used
and of the protection that they provide.
Finally a number of miscellaneous issues are described which address
first-hop, next-hop and last-hop issues. Furthermore the problem of
IPsec security protection of data traffic and RSVP signaling message
is discussed.
2 Terminology
To begin with the description of the security properties of RSVP it
is natural to explain some terms used throughout the document.
- Chain-of-Trust
The security mechanisms supported by RSVP [RFC2747] rely heavily on
optional hop-by-hop protection using the built-in INTEGRITY object.
Hop-by-hop security with the INTEGRITY object inside the RSVP message
thereby refers to the protection between RSVP supporting network
elements. Additionally there is the notion of policy aware network
elements that also understand the POLICY_DATA element within
the RSVP message. Since this element also includes an INTEGRITY
object there is an additional hop-by-hop security mechanism that
provides security between policy aware nodes. Policy ignorant nodes
are not affected by the inclusion of this object in the POLICY_DATA
element since they do not try to interpret it.
To protect signaling messages that are possibly modified by each RSVP
router along the path it must be assumed that each incoming request
is authenticated, integrity and replay protected. This provides
protection against unauthorized nodes injecting bogus messages.
Furthermore each RSVP-router is assumed to behave in the expected
manner. Outgoing messages transmitted to the next hop network element
experience protection according to RSVP security processing.
Using the above described mechanisms a chain-of-trust is created
whereby a signaling message transmitted by router A via router B and
received by router C is supposed to be secure if router A and B and
router B and C share a security association and all routers behave
as expected. Hence router C trusts router A although router C does not
have a direct security association with router A. We can therefore
conclude that the protection achieved with this hop-by-hop security
for the chain-of-trust is as good as the weakest link in the chain.
If one router is malicious (for example because an adversary has
control over this router) then it can arbitrarily modify messages and
cause unexpected behavior and mount a number of attacks not only
restricted to QoS signaling. Additionally it must be mentioned that
some protocols demand more protection than others (this depends
between which nodes these protocols are executed). For example edge
devices, where end-users are attached, may more likely be attacked in
comparison to the more secure core network of a service provider. In
some cases a network service provider may choose not to use the RSVP
provided security mechanisms inside the core network because a
different security protection is deployed.
Section 6 of [RFC2750] mentions the term chain-of-trust in the
context of RSVP integrity protection. In Section 6 of [HH01] the same
term is used in the context of user authentication with the INTEGRITY
object inside the POLICY_DATA element. Unfortunately the term is not
explained in detail and the assumption is not clearly specified.
- Host and User Authentication
The presence of the RSVP protection and a separate user identity
representation means that both user and host identities are used
for RSVP protection. Therefore user and host
based security is investigated separately because of the different
authentication mechanisms provided. To avoid confusion about the
different concepts Section 3.4 will describe the concept of user
authentication in more detail.
- Key Management
For most of the security associations required for the protection of
RSVP signaling messages it is assumed that they are already available
and hence key management was done in advance. There is however an
exception with the support for Kerberos. Using Kerberos an entity is
able to distribute a session key used for RSVP signaling protection.
- RSVP INTEGRITY and POLICY_DATA INTEGRITY Object
RSVP uses the INTEGRITY object in two places of the message. The
first usage is in the RSVP message itself and covers the entire RSVP
message as defined in [RFC2747] whereas the latter is included in the
POLICY_DATA object and defined in [RFC2750]. In order to
differentiate the two objects regarding their scope of protection the
two terms RSVP INTEGRITY and POLICY_DATA INTEGRITY object are used.
The data structure of the two objects however is the same.
- Hop vs. Peer
In the past there was considerable discussion about the terminology
of the nodes that are addressed by RSVP. In particular two favorites
have been used: hop and peer. This document uses the term hop, which is
different to an IP hop. Two neighboring RSVP nodes communicating with
each other are not necessarily neighboring IP nodes (i.e. one IP hop
away).
3 Overview
3.1 The RSVP INTEGRITY Object
The RSVP INTEGRITY object is the major component of the RSVP security
protection. This object is used to provide integrity and replay
protection for the content of the signaling messages exchanged
between two participating RSVP routers. Furthermore the RSVP INTEGRITY object provides
data origin authentication. The attributes of the object are briefly
described:
- Flags field
The Handshake Flag is the only defined flag and is used to
synchronize sequence numbers if the communication gets out-of-sync
(i.e. for a restarting host to recover the most recent sequence
number). Setting this flag to one indicates that the sender is
willing to respond to an Integrity Challenge message. This flag can
therefore be seen as a capability negotiation transmitted within each
INTEGRITY object.
- Key Identifier
The Key Identifier selects the key used for verification of the Keyed
Message Digest field and hence must be unique for the sender. Its
length is fixed at 48 bits. The generation of this Key Identifier
field is mostly a decision of the local host. [RFC2747] describes
this field as a combination of an address, the sending interface and
a key number. We assume that the Key Identifier is simply a (keyed)
hash value computed over a number of fields with the requirement to
be unique if more than one security association is used in parallel
between two hosts (i.e. as is the case with security associations
that have overlapping lifetimes). A receiving system uniquely
identifies a security association based on the Key Identifier and the
sender's IP address. The sender's IP address may be obtained from the
RSVP_HOP object or from the source IP address of the packet if the
RSVP_HOP object is not present. The sender uses the outgoing
interface to determine which security association to use. The term
outgoing interface might be confusing. The sender selects the
security association based on the receiver's IP address (of the next
RSVP capable router). To determine which node is the next capable
RSVP router is not further specified and is likely to be statically
configured.
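As a non-normative illustration only, a unique Key Identifier could, for
example, be derived by truncating a hash computed over the fields
mentioned above; the exact input fields and the hash function are not
mandated by [RFC2747] and are chosen here arbitrarily.

    # Non-normative sketch: derive a 48-bit Key Identifier from the sender
    # address, outgoing interface and key number. The input fields and hash
    # function are illustrative assumptions, not part of [RFC2747].
    import hashlib

    def key_identifier(address: str, interface: str, key_number: int) -> int:
        material = f"{address}|{interface}|{key_number}".encode()
        return int.from_bytes(hashlib.sha256(material).digest()[:6], "big")

    print(hex(key_identifier("192.0.2.1", "eth0", 1)))   # 48-bit value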
- Sequence Number
The sequence number used by the INTEGRITY object is 64 bits in length
and the starting value can be selected arbitrarily. The length of the
sequence number field was chosen to avoid exhaustion during the
lifetime of a security association, as stated in Section 3 of
[RFC2747]. In order for the receiver to distinguish between a new and
a replayed sequence number, each value must be monotonically
increasing modulo 2^64. We assume that the first sequence number seen
(i.e., the starting sequence number) is stored by the receiver. The
modulo operation is required because the starting sequence number may
be an arbitrary number. The receiver therefore only accepts packets
with a sequence number larger (modulo 2^64) than that of the previous
packet. As explained in [RFC2747], this process is started by
handshaking and agreeing on an initial sequence number. If no such
handshaking is available, then the initial sequence number must be
part of the establishment of the security association.
The generation and storage of sequence numbers is an important step
in preventing replay attacks and is largely determined by the
capabilities of the system in the presence of system crashes,
failures and restarts. Section 3 of [RFC2747] explains some of the
most important considerations.
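As an illustration only (not normative text from [RFC2747]), the
acceptance rule for sequence numbers can be sketched as follows,
assuming the receiver stores the last accepted value per security
association and that "larger modulo 2^64" is interpreted as a
half-space comparison:

   # Minimal sketch of the replay check: accept only sequence numbers
   # that are "larger" than the last accepted one, modulo 2^64.
   MOD = 2 ** 64

   def is_newer(seq, last_accepted):
       # A value is considered newer if it is ahead of last_accepted
       # by less than half the sequence space.
       diff = (seq - last_accepted) % MOD
       return 0 < diff < MOD // 2

   def accept(sa, seq):
       if is_newer(seq, sa["last_seq"]):
           sa["last_seq"] = seq
           return True
       return False   # replayed or stale sequence number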
- Keyed Message Digest
The Keyed Message Digest is an RSVP built-in security mechanism used
to provide integrity protection of the signaling messages. Prior to
computing the value for the Keyed Message Digest field, the field
itself must be set to zero and a keyed hash is computed over the
entire RSVP packet. The Keyed Message Digest field is variable in
length but must be a multiple of four octets. If HMAC-MD5 is used
then the output value is 16 bytes long. The keyed hash function
HMAC-MD5 [RFC2104] is required for an RSVP implementation, as noted
in Section 1 of [RFC2747]. Hash algorithms other than MD5 [RFC1321],
such as SHA [SHA], may also be supported.
The key used for computing this Keyed Message Digest may be obtained
from the pre-shared secret, which is either manually distributed or
the result of a key management protocol. No key management protocol,
however, is specified to create the desired security associations.
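A minimal sketch of the digest computation, using Python's standard
hmac module purely for illustration (the field offset handling and
message assembly are simplified assumptions, not the exact wire
format):

   import hmac
   import hashlib

   def keyed_message_digest(rsvp_packet: bytes, digest_offset: int,
                            digest_len: int, key: bytes) -> bytes:
       # Zero the Keyed Message Digest field before hashing, then
       # compute HMAC-MD5 over the entire RSVP packet.
       msg = bytearray(rsvp_packet)
       msg[digest_offset:digest_offset + digest_len] = bytes(digest_len)
       return hmac.new(key, bytes(msg), hashlib.md5).digest()  # 16 bytes

   def verify(rsvp_packet, digest_offset, digest_len, key):
       received = rsvp_packet[digest_offset:digest_offset + digest_len]
       expected = keyed_message_digest(rsvp_packet, digest_offset,
                                       digest_len, key)
       return hmac.compare_digest(received, expected)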
3.2 Security Associations
Different attributes are stored for security associations of sending
and receiving systems (i.e. unidirectional security associations).
The sending system needs to maintain the following attributes in such
a security association [RFC2747]:
- Authentication algorithm and algorithm mode
- Key
- Key Lifetime
- Sending Interface
- Latest sequence number (sent with this key identifier)
The receiving system has to store the following fields:
- Source address of the sending system
- List of last n sequence numbers (received with this key identifier)
Note that the security associations need to have additional fields to
indicate their state. It is necessary to have overlapping lifetimes
of security associations to avoid interrupting an ongoing
communication because of expired security associations. During such a
period of overlapping lifetimes it must be possible to authenticate
with either one or both of the active keys. As mentioned in
[RFC2747], a sender and a receiver might have multiple active keys
simultaneously.
If more than one algorithm is supported, then the algorithm used must
be specified for each security association.
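Purely as an illustration of the attributes listed above (the field
names are assumptions, not taken from [RFC2747]), the two
unidirectional association records could be sketched as:

   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class SendingSA:
       algorithm: str        # e.g. "HMAC-MD5", including its mode
       key: bytes
       key_lifetime: float   # expiry time of the key
       sending_interface: str
       last_sent_seq: int    # latest sequence number sent

   @dataclass
   class ReceivingSA:
       sender_address: str   # source address of the sending system
       last_seqs: List[int] = field(default_factory=list)  # last n seen
       state: str = "active" # additional state field, see text above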
3.3 RSVP Key Management Assumptions
[RFC2205] assumes that security associations are already available.
Manual key distribution must be provided by an implementation, as
noted in Section 5.2 of [RFC2747]. Manual key distribution, however,
places different requirements on the key storage: a simple plaintext
ASCII file may be sufficient in some cases. If multiple security
associations with different lifetimes are to be supported at the same
time, then a key engine, for example PF_KEY [RFC2367], would be more
appropriate. Further security requirements listed in Section 5.2 of
[RFC2747] are the following:
- The manual deletion of security associations must be supported.
- The key storage should persist across a system restart.
- Each key must be assigned a specific lifetime and a specific Key
Identifier.
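A simple key store honouring these requirements might look like the
following sketch (an assumption for illustration only; a real
implementation would more likely use PF_KEY [RFC2367] or similar):

   import json, time

   class KeyStore:
       def __init__(self, path):
           self.path = path                  # persists across restarts
           try:
               with open(path) as f:
                   self.keys = json.load(f)  # key_id -> {key, expiry}
           except FileNotFoundError:
               self.keys = {}

       def add(self, key_id, key_hex, lifetime_seconds):
           # Each key carries its own lifetime and Key Identifier.
           self.keys[key_id] = {"key": key_hex,
                                "expiry": time.time() + lifetime_seconds}
           self._save()

       def delete(self, key_id):             # manual deletion
           self.keys.pop(key_id, None)
           self._save()

       def _save(self):
           with open(self.path, "w") as f:
               json.dump(self.keys, f)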
3.4 Identity Representation
In addition to host-based authentication with the INTEGRITY object
inside the RSVP message, user-based authentication is available, as
introduced with [RFC2750]. Section 2 of [RFC3182] states that
"Providing policy based admission control mechanism based on user
identities or application is one of the prime requirements." To
identify the user or the application, a policy element called
AUTH_DATA, which is contained in the POLICY_DATA object, is created
by the RSVP daemon at the user's host and transmitted inside the RSVP
message. The structure of the POLICY_DATA element is described in
[RFC2750]. Network nodes like the PDP then use the information
contained in the AUTH_DATA element to authenticate the user and to
allow policy-based admission control to be executed. As mentioned in
[RFC3182], the policy element is processed and the policy decision
point replaces the old element with a new one for forwarding to the
next hop router.
A detailed description of the POLICY_DATA element can be found in
[RFC2750]. The attributes contained in the authentication data policy
element AUTH_DATA, which is defined in [RFC3182], are briefly
explained in this Section. Figure 1 shows the abstract structure of
the RSVP message with its security relevant objects and the scope of
protection. The RSVP INTEGRITY object (outer object) covers the
entire RSVP message whereas the POLICY_DATA INTEGRITY object only
covers objects within the POLICY_DATA element.
+--------------------------------------------------------+
| RSVP Message                                            |
| INTEGRITY +-------------------------------------------+|
| Object    |POLICY_DATA Object                         ||
|           +-------------------------------------------+|
|           | INTEGRITY +------------------------------+||
|           | Object    | AUTH_DATA Object             |||
|           |           +------------------------------+||
|           |           | Various Authentication       |||
|           |           | Attributes                   |||
|           |           +------------------------------+||
|           +-------------------------------------------+|
+--------------------------------------------------------+
Figure 1: Security relevant Objects and Elements within the RSVP
The AUTH_DATA object contains information for identifying users and
applications, together with credentials for those identities. The
main purpose of these identities seems to be their use for
policy-based admission control and not for authentication and key
management. As noted in Section 6.1 of [RFC3182], an RSVP message may
contain more than one POLICY_DATA object, and each of them may
contain more than one AUTH_DATA object. As indicated in the figure
above and in [RFC3182], one AUTH_DATA object may contain more than
one authentication attribute. A typical configuration for
Kerberos-based user authentication includes at least the Policy
Locator and an attribute containing the Kerberos session ticket.
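For illustration only (the attribute names below are assumptions and
no wire encoding is implied), such a configuration could be
represented abstractly as:

   # Abstract representation of a POLICY_DATA element carrying one
   # AUTH_DATA object with a policy locator and a Kerberos ticket.
   auth_data = {
       "attributes": [
           {"type": "POLICY_LOCATOR",
            "subtype": "ASCII_DN",
            "value": "CN=Alice,O=Example Org,C=DE"},   # hypothetical DN
           {"type": "CREDENTIAL",
            "subtype": "KERBEROS_TKT",
            "value": b"<kerberos service ticket bytes>"},
       ]
   }
   policy_data = {
       "integrity": None,    # optional POLICY_DATA INTEGRITY object
       "elements": [auth_data],
   }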
A successful user authentication is the basis for doing policy-based
admission control. Additionally other information such as time-of-
day, application type, location information, group membership etc.
may be relevant for a policy.
The following attributes are defined for the usage in the AUTH_DATA
object:
a) Policy Locator
The policy locator is a string containing an X.500 distinguished name
(DN) used to locate user- and/or application-specific policy
information. The following types of X.500 DNs are listed:
- ASCII_DN
- UNICODE_DN
- ASCII_DN_ENCRYPT
- UNICODE_DN_ENCRYPT
The first two types are the ASCII and the Unicode representation of
the user or application DN identity. The two "encrypted"
distinguished name types are either encrypted with the Kerberos
session key or signed with the private key of the user's digital
certificate (i.e., digitally signed). The term encrypted together
with a digital signature is easy to misconstrue. If user identity
confidentiality is to be provided, then the policy locator has to be
encrypted with the public key of the recipient. How to obtain this
public key is not described in the document. Such an issue may be
specified in a concrete architecture where RSVP is used.
b) Credentials
Two types of cryptographic credentials are currently defined for a
user: authentication with Kerberos V5 [RFC1510], and authentication
with the help of digital signatures based on X.509 [RFC2495] and PGP
[RFC2440]. The following list contains all credential types currently
defined in [RFC3182]:
+--------------+--------------------------------+
| Credential   | Description                    |
| Type         |                                |
+===============================================+
| ASCII_ID     | User or application identity   |
|              | encoded as an ASCII string     |
| UNICODE_ID   | User or application identity   |
|              | encoded as a Unicode string    |
| KERBEROS_TKT | Kerberos V5 session ticket     |
| X509_V3_CERT | X.509 V3 certificate           |
| PGP_CERT     | PGP certificate                |
+--------------+--------------------------------+
Table 1: Credentials Supported in RSVP
The first two credentials only contain a plaintext string and
therefore do not provide cryptographic user authentication. These
plaintext strings may be used to identify applications, which are
included for policy-based admission control. Note that these
plaintext identifiers may, however, be protected if either the RSVP
INTEGRITY object and/or the INTEGRITY object of the POLICY_DATA
element is present. Note that the two INTEGRITY objects can terminate
at different entities depending on the network structure. The digital
signature may also provide protection of application identifiers. A
protected application identity (and the entire content of the
POLICY_DATA element) cannot be modified as long as no policy-ignorant
nodes are used in between.
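The distinction drawn above can be summarised in a small dispatch
sketch (an illustrative assumption, not text from [RFC3182]):

   def authentication_strength(credential_type):
       # Plain identifiers carry no cryptographic proof of their own;
       # they rely on an INTEGRITY object (or a digital signature over
       # the AUTH_DATA object) for protection.
       if credential_type in ("ASCII_ID", "UNICODE_ID"):
           return "identification only"
       if credential_type == "KERBEROS_TKT":
           return "cryptographic (Kerberos V5 session ticket)"
       if credential_type in ("X509_V3_CERT", "PGP_CERT"):
           return "cryptographic (digital signature required)"
       raise ValueError("unknown credential type")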
A Kerberos session ticket, as previously mentioned, is the ticket of
a Kerberos AP_REQ message [RFC1510] without the Authenticator.
Normally, the AP_REQ message is used by a client to authenticate to a
server. The INTEGRITY object (e.g., of the POLICY_DATA element)
provides the functionality of the Kerberos Authenticator, namely
replay protection, and shows that the user was able to retrieve the
session key by following the Kerberos protocol. This is, however,
only the case if the Kerberos session key was used for the Keyed
Message Digest field of the INTEGRITY object. Section 7 of [RFC2747]
discusses some issues for the establishment of keys for the INTEGRITY
object. The establishment of the security association for the RSVP
INTEGRITY object with the inclusion of the Kerberos ticket within the
AUTH_DATA element may be complicated by the fact that the ticket can
be decrypted by node B whereas the RSVP INTEGRITY object terminates
at a different host C. The Kerberos session ticket contains, among
many other fields, the session key. The Policy Locator may also be
encrypted with the same session key. The protocol steps that need to
be executed to obtain such a Kerberos service ticket are not
described in [RFC3182] and may involve several roundtrips, depending
on many Kerberos-related factors. As an optimisation described in
Section 7.1 of [RFC2747], the Kerberos ticket does not need to be
included in every RSVP message. Thus the receiver must store the
received service ticket. If the lifetime of the ticket has expired,
then a new service ticket must be sent. If the receiver has lost its
state information (because of a crash or restart), it may transmit an
Integrity Challenge message to force the sender to retransmit a new
service ticket.
If either the X.509 V3 or the PGP certificate is included in the
policy element then a digital signature must be added. The digital
signature computed over the entire AUTH_DATA object provides
authentication and integrity protection. The SubType of the digital
signature authentication attribute is set to zero before computing
the digital signature. Whether or not a guarantee of freshness with
the replay protection (either time | 计算机 |
2014-35/1130/en_head.json.gz/8123 | Robin Hunicke Interview
Published on 4th November 2011 by Brendan Caldwell 1 - A Journey with ThatGameCompany
2 - Robin Hunicke Interview
Robin Hunicke InterviewBG: Would you say that, having worked for EA and now ThatGameCompany, you’ve seen both sides of the developer-publisher conflict? RH: Oh, for sure. I’ve been on both sides of the fence, as it were. I’ve actually had conversations with my producing partner at Sony that people have had to have with me as their producing partners. I’ve seen it all. Actually, I can’t say that. I’ve seen a lot. But I haven’t seen it all, not yet, knock on wood. BG: Do you think that a studio-distributor relationship is less or more antagonistic than a publisher-developer one?
RH: I don’t know. I’ve never worked with just a plain distributor, like a game for the iPhone or web.
BG: I mean with Sony as TGC’s acting distributor.
RH: Well, our relationship with Sony is a little bit like a fairytale. You know, Kellee [Santiago] and Jenova [Chen] graduated from college and they had this crazy idea that they were going to make games about emotions and create this new genre of art game... and Sony went for it! Not only did they go for it but they supported them through three really experimental projects. If it weren’t for very important people at Sony believing very strongly in our mission I’m not sure those games would exist.
BG: That’s a feeling that a few developers seem to share; that Sony is the more fertile ground for new ideas.
RH: Absolutely. I can’t believe sometimes how amazing and diverse the portfolio is for PlayStation Network. If you look at the games... The last event that I did, there were so many amazing games in the room with us that it was hard for me to stand around and watch Journey because I wanted to go play [PixelJunk] 4am... I wanted to go play Papo & Yo and Closure and Retrograde. I mean, there are all these games coming out and they’re all people that I know and really appreciate. It was actually kind of shocking and, the Vita? I can’t wait to play Sound Shape. I’m dying to play it right now!
BG: Is Vita something which ThatGameCompany would be interested in exploring as a platform?
RH: I can’t imagine why not. Right now, the thing that we’re so interested in about Journey is that it’s a networked game and we’ve been able to have a relationship with the fans by doing the beta and creating this conversation with our fans. And that’s definitely something we’re excited about... I can’t think of any reasons why the Vita would be any different, especially since it seems like it’s got a great development environment. I’ve heard great things about it. But right now it’s just Journey... It’s hard not to day-dream of the future but we really have to stay focused until we’re done.
BG: When you say that Kellee and Jenova where interested in creating this new kind of art game genre, there's a lot of that on the PC as well – and Tale of Tales are here [at GameCity Festival] as well.
RH: They’re so great. We love their games – or their notgames, rather. BG: Is that ‘notgames’ manifesto that they adhere to something that you would apply to your studio as well?
RH: It’s hard to say. I mean, they’d have to be the judge of that. It’s their manifesto.
BG: I just ask because although Journey is very clearly artistic, it still has an aim and goal – maybe it still has what you’d call a win condition?
RH: Well, it’s hard to talk about that because I don’t want to spoil it, so I’m not going to say too much. But Journey is our attempt to create genuine connection between two people – that’s what it is. Whether it’s a game or a notgame, [I don’t know]. It has some traditional elements but the way that you experience them might not be what you would expect. We’re really waiting to hear from you guys –the players – when it’s done. We’re really excited to hear what you have to say because it doesn’t matter what we thought the game was going to be about or what the experience was going to be. What matters is what the players take away from it. It’s something that we cannot predict. I mean, we have no idea. Prev
Next 1 - A Journey with ThatGameCompany
Share This Interview
brendan caldwell
World of Warplanes Interview
Frozen Synapse Community Interview
World of Tanks Developer Interview
Curve Interview: For Country and Console
Resident Evil 6 will reboot series
Crisis Core: Interviewing Yoshinori Kitase | 计算机 |
2014-35/1130/en_head.json.gz/8265 | Microsoft Ups the Price of Office for Mac Users by Roughly 17%
47 comment(s) - last by blankslate.. on Feb 24 at 2:37 AM
Microsoft adds about $20 to single license Office for Mac software
When it comes to pricing and licensing of Office software these days, Microsoft certainly isn’t making any new friends. If you are a Mac user that relies on Office software for business or school, you will be paying more for the next upgrade you purchase.
Microsoft has raised the price of Office for the Mac by as much as 17% and has stopped selling multi-license bundles for the productivity suite. The price change puts Office for Mac 2011 on the same pricing schedule as Office 2013 for Windows, despite the fact that it is much older software.
Microsoft hopes that the move will push Mac users to adopt its subscription Office 365 offering.
Under the new pricing schedule, a single-license of Office for Mac Home & Student has jumped from $120 to $140, while Office for Mac Home & Business has been bumped from $200 to $220. Microsoft previously offered Mac users a Home & Student bundle with three licenses for $150 and a Home & Business two-license bundle for $250, which have now been discontinued.
If you need multiple licenses, the new pricing means a significantly larger expense than in previous years. By contrast, Office 365 Home Premium will cost about $100 per year or $10 per month for a single household license covering up to five computers. Office 365 Small Business Premium costs $150 per year per user and allows that one user to install the application on up to five devices that they own.
It's worth noting that if you want a new version of Office for Mac computers, some retailers are still offering the software at the previous prices. However, both Microsoft and Apple are now charging the higher prices.
RE: Apple customers have been paying a premium for MS products since the 80s...
ven1ger
That's assuming that you're paying extra for porting something new. Umm...it says Office for Mac 2011, considering the cost has gone up now after 2 years, doesn't seem to hold water that it to pay for the costs of porting the software.MS wants to move to a subscriber based payment system much like the Office 365. This way they have continuous payment from users to use their software and then to make more profit later, they'll steadily bump up their subscriber fees every couple of years. Many offices are still using 2003 and that's 10 years ago, they get no $$$ from those until they find they have to upgrade to 2007 or 2010. Then maybe it's another 10 years to get additional $$$ from those that don't upgrade with each iteration. With a subscription formula, they get money each year from everyone that buys into it, $$$$$$$... Parent
My statements were in reference to the comment from SAN-Man quote: Most of you probably don't remember Microsoft started out writing productivity software for Apple way back when. It's always been expensive. I am indicating that there is at least one reason the Apple version always costs more then for the Windows version. Parent
Report: Retail Office 2013 Software Can Only Be Installed on a Single PC for Life
Office 365 Launches Today for $100/Year | 计算机 |
2014-35/1130/en_head.json.gz/8337 | Rom Files
Gameboy / Colour
Active Affiliates
Gaming portal takes you to the Sega CD
The Mega CD was first shown in Japan at the Tokyo Toy Show in 1991 and later released on December 1st for �49800. In the first year of release in Japan, Sega sold 100 000 systems, but would have sold more if the price wasn't so high. Sega of Japan did not inform Sega of America about their Mega CD until a few months later. It was first shown in the US at CES in Chicago, Illinois in March 1992 and announced for release in November. It was released earlier than this, on 15th October in America (for US$299) but not until Spring 1993 in Europe where it was very expensively priced and so only 4% of European Mega Drive users owned a Mega CD in the end. UK had the biggest following of the Mega CD in Europe when it debuted in April 1993 for �270. 60 000 of the 70 000 Mega CDs shipped to Europe were sold by August 1993. The Australian release for the Mega CD was 19th April 1993.
The Mega CD (or Sega CD in America) came about just after when the Super NES was released and Sega was beginning to lose some sales on the Mega Drive/Genesis, so they released the Mega CD as an add-on to pick up sales and make sure they remained at the top of the market (By 1992 Sega had a 55% share in the US video game market). It was not the first CD-based video game system on the market, though. NEC had already released their PC Engine CD/Turbo CD/Turbo Duo, but was not very successful. The Mega CD/Sega CD was superior to NEC's system as well. Originally a CD tray unit that sat under the console, it was redesigned in 1993 as a top-loading unit that was smaller, cheaper (US$230), more reliable and would fit next to the Mega Drive II/Genesis II. Some European countries did not receive the Mega CD until this second version came out, thus the slow sales in the continent. The above snippet of information was written by Console Database
Rom Files >> Atari 5200
0-9 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z All
Castle Crisis
Language: N/A
Choplifter!
Countermeasure
Contact Us if you have any questions or comments.
Copyright © 2014 GamerZ Paradise. All rights reserved. Privacy Policy.
Graphics design and layout are the property of GamerZ Paradise. Reproduction is strictly prohibited.
SNES roms GBC roms N64 roms GBA roms ATARI roms NES roms GENESIS roms NDS roms NEO GEO roms MAME roms SATURN roms | 计算机 |
2014-35/1130/en_head.json.gz/9307 | AmerisourceBergen is looking for a Digital and Web Media Coordinator- GNP. Callahan & Associates is looking for a Web & Graphic Assistant. KOAMTAC, Inc. is looking for a In-house Graphic/Web Designer. Eat This Not That is looking for a Web Editor, Eat This Not That. MakerBot Industries is looking for a Junior Web Engineer. 3D Systems is looking for a Creative Director for Web Design. CREATIVE CIRCLE is looking for a Web Production Artist - EXPERT HTML CODER/EMAIL. Brennan Center for Justice at NYU School of Law is looking for a Web Assistant. see all
10 Little Known Facts About New York City Facebook Users
Brian Ward on August 11, 2010 11:40 AM
New York City is the largest city in the United States. Between the Five Boroughs there is an abundance of different world cultures and an unmatched atmosphere throughout. The city has developed a culture and identity that makes it completely individual. The Big Apple’s sites and sounds bring in tourists from all over the world, their sports fans are passionate, and the city is a prominent place for music and history in general. We have researched the statistics of Facebook users to find out what makes New Yorkers tick. Check out our findings below!
New York City is a tricky place to calculate its total Facebook users. For our purposes, our stats are based on those living within 50 miles of New York, New York. In this area there are 7,669,620 users with profiles split between 53% female and 47% male.
Alex Rodriguez recently hit his 600th home run. The Yankees, in general, still seem to be the team to beat as they continue their winning ways after taking the World Series last year. The Yankees have 315,780 fans in New York while their National League counter part the Mets have just over 81,000 fans. The former World Series, and Mets division, rival’s the Phillies have only about 19,000 fans in the city.
The Giants seem to be the primary team in New York with over 40,000 more fans than the Jets. Maybe it is just the Eli Manning, or former player Michael Strahan and all of his new commercials?
One of the largest schools in the city, and Manhattan specifically, is New York University. Over half of the 179,100 NYU graduates are still living in the city. Currently, there are over 18,700 Facebook users enrolled at the college. If they follow the pattern, it seems quite a few of them will be extending their stay in New York.
The fans of the traditional print media outlets seems on the decline from the perspective of the city’s Facebook users. The Wall Street Journal has the largest following with 5,580 fans. New York magazine has just over three thousand fans. Suprisingly, there are less than a thousand New York Times fans who live in New York city, despite the publication having more than 697,000 Facebook fans.
The Big Apple is known for its tourist attractions, from its parks to its skyscrapers and everything in between. There are almost 22,000 fans of Central Park living in the city. Over in Brooklyn, there are about 10,000 fans of Prospect Park. The Bronx zoo has over 8,600 fans. The Statue of Liberty has just over a thousand fans in the city. A big party and celebration in Coney Island is the annual Mermaid Parade. As it turns out, it only has 1,600 local people interested on the site. Who’s ready to go do some sight seeing?
With his single, “Empire State of Mind”, being of the most-played songs around the city, it was no surprise that Jay-Z has over 170,00 fans in the city. I guess we can go ahead and declare him the king of New York. Bob Dylan put Greenwich Village on the map in the ’60s. There was just over 30,000 fans of Mr. Dylan in the city on Facebook though. Maybe the times really are changing?
Movies and Television There has been thousands of TV shows and movies over the years that have been set in New York. Martin Scorsese made the famous ’70s movie Taxi Driver about growing up in the city. The movie has about 11,000 fans in the city.
Television show Friends has over 430,000 fans on Facebook who live in New York City. Long live the re-runs. Though a slightly innaccurate picture of the city, How I met Your Mother is still making over 84,00 people laugh in New York.
New York City is a massive place. To represent the entire city is almost impossible. Its cultures and ways make it an unmatchable place. It is a very active place on Facebook though. With over seven million Facebook users living somewhere in the five boroughs, it is safe to say that New York is, if nothing else, a strong social networking city.
Now It’s Twitter’s Turn To Borrow From Facebook10 Ways To Keep Facebook From Distracting YouHOW TO: Cut Through Facebook's News Feed Algorithm And Create A ListFacebook Prompts Old News Feed Users To Examine Acquaintances
Tags Cities Demographics Lists Statistics Comments
Previous Post How To Steal Your Competitor’s Customers On Facebook Next Post Facebook Geolocation Services Are Coming! Mediabistro Course | 计算机 |
2014-35/1130/en_head.json.gz/9447 | Putting fun back into hacking
By Robert Lemos Special to ZDNet News
August 5, 2002, 10:15 AM PT
LAS VEGAS--In a dim section of the main ballroom at the Alexis Park Hotel, hackers were trying to break into the computer systems of current stock market favorite Weiss Labs. A mix of teenagers to thirty-somethings, the hackers at the Defcon gathering here breathed second-hand cigarette smoke and quaffed Red Bull energy drink by the liter, their hearts beating to a techno rave track. They're a dangerous bunch, too: A hacker from the research arm of a rival company broke into the Weiss Labs server and found a flaw, causing the computer to eat up its own memory, finally crashing the system. "They didn't get root, but they did manage to DoS us," said Crispin Cowan, head of the Weiss Labs' team. That is, the break-in didn't get all the way to the core of the server, but a denial-of-service (DoS) attack overwhelmed it with traffic. That may seem like a blase response to a serious security incident, but it's in the spirit of things at Defcon. Cowan and nearly a hundred other hackers and security experts were playing the latest incarnation of the conference's hacking contest, capture the flag. In real life, Cowan is a chief researcher with Wirex Communications, a maker of security software for the Linux operating-system, and his fictitious company, Weiss Labs, was one of eight teams taking part in the contest. This incarnation of capture the flag, the brainchild of a Seattle group of high-minded security geeks known as the GhettoHackers, pits rival hacking groups against each other in a game of corporate espionage. Each grou | 计算机 |
2014-35/1130/en_head.json.gz/10651 | TextMate 2 open sourced
Started by tiagosilva29
Allan Odgaard, the developer of the popular TextMate text editor for Mac OS X, has open sourced version 2.0 of his software. According to Odgaard, TextMate 2 is still in alpha status, but the source code is now available on GitHub. TextMate 2 was announced as largely completed in June 2009, but users have been waiting on a final release of version 2 ever since; an alpha release was made available in December 2011.Odgaard explains that he open sourced his code base to give users the ability to "tinker" with his software. He also explains that he chose the GPLv3 licence to prevent proprietary forks of his tool, and points out that he wants to counteract what he perceives as a growing trend by Apple to close down the Mac OS X platform. He does not rule out, however, the possibility that he will re-license the code base under a more liberal licence in the future. He also invites developers to submit pull requests on GitHub and otherwise get involved in the development of the software.There have been some concerns in the community of TextMate users that this move to an open source code base means that the program will not be actively developed further, but, according to a report by The Unofficial Apple Weblog (TUAW), this is not the case and Odgaard has stated that he will continue his work on the program.The current stable version of the program, TextMate 1.5.11, is still under a proprietary licence and can be purchased from the MacroMates site for €45.63 (£35.92). A licence for educational use is also available free of charge.Source: The H Online
+Quillz
Quillz
A Talking Pokemon
OS: iOS, OS X, Windows
Are there still people seriously waiting for TextMate 2 at this point? It may as well be vaporware.
It's all in the hands of the tinkering army.Allan Odgaard's Ars Technica Interview
+Matthew S.
Matthew S.
What the what
Location: Georgetown, ON
OS: Windows 8.1 / Mac OS X 10.9.x iOS 7
Time to go compile me a copy... | 计算机 |
2014-35/1130/en_head.json.gz/10725 | Excited for Ubuntu Linux 11.10? The Countdown Has Begun
Promising 'a whole new world,' a dedicated site is counting down the seconds until the release of 'Oneiric Ocelot.'
By Katherine Noyes | PC World | 03 October 11
It may not compare with the hype that comes out of Cupertino or Redmond, but there's no denying that the new “This Is the Countdown” website launched in the past few days adds a considerable dose of excitement to the upcoming launch of Ubuntu 11.10, or “Oneiric Ocelot.”
“A whole new world,” reads the text on the site. “A whole new computer.”
Also available on the site is a downloadable flyer in PDF format, complete with QR code and tear-off strips.
Release Candidate on the Way
Oneiric Ocelot, or Ubuntu 11.10, is, of course, the next upcoming version of Canonical's Ubuntu distribution of the free and open source Linux operating system, and it's certainly anticipated with a fair bit of excitement. Following the launch of the first beta version of the software in early September, the second beta release came out later in the month.
Among the new additions in that second beta version are a new kernel, now based on version 3.0.4; an updated GNOME desktop (currently version 3.1.92 on the way to GNOME 3.2); and improved support for installing 32-bit library and application packages on 64-bit systems.
OneConf, meanwhile, has now been integrated into the Ubuntu Software Center to make synching applications between computers easier. A new set of community-supported ARM architecture images will also become available before the software's final release.
On Sept. 29, the final freeze for the release went into effect. A release candidate is due this Thursday, followed by the final version on Oct. 13.
Ahead of the Pack
I've long felt that a relative lack of marketing is one of the main reasons desktop Linux hasn't gained more mainstream acceptance, so I'm particularly excited to see this latest move on behalf of Ubuntu.
Canonical may have raised more than a few eyebrows with some of the decisions it made for the last Ubuntu--Natty Narwhal--but I still believe many of them make a great deal of sense, particularly for Linux newcomers.
Now, this fresh attention to marketing is one more sign that Ubuntu, of all the Linux distributions out there, is currently the one with the best chance at achieving widespread mainstream acceptance. I can't wait to see the final version next week. | 计算机 |
2014-35/1130/en_head.json.gz/10942 | TechnicaLee Speaking
Software designs, implementations, solutions, and musings by Lee Feigenbaum
Semantic Web Technologies in the Enterprise
By Lee Feigenbaum on November 28, 2006 10:31 AM
Over the past two years, my good friend and coworker Elias Torres has been blogging at an alarming rate about a myriad of technology topics either directly or indirectly related to the Semantic Web: SPARQL, Atom, Javascript, JSON, and blogging, to name a few. Under his encouragement, I began blogging some of my experiences with SPARQL, and two of our other comrades, Ben Szekely and Wing Yung, have also started blogging about semantic technologies. And in September, Elias, Wing and Ben blogged a bit about Queso, our project which combines Semantic Web and Web 2.0 techniques to provide an Atom-powered framework for rapid development and deployment of RDF-backed mash-ups, mash-ins, and other web applications.
But amidst all of this blogging, we've all been working hard at our day jobs at IBM, and we've finally reached a time when we can talk more specifically about the software and infrastructure that we've been creating in recent years and how we feel it fits in with other Semantic Web work. We'll be releasing our components as part of the IBM Semantic Layered Research Platform open-source project on SourceForge over the next few months, and we'll be blogging examples, instructions, and future plans as we go. In fact, we've already started with our initial release of the Boca RDF store, which Wing and Matt have blogged about recently. I'll be posting a roadmap/summary of the other components that we'll be releasing in the coming weeks later today or tomorrow, but first I wanted to talk about our overall vision.
The family of W3C-endorsed Semantic Web technologies (RDF, RDFS, OWL, and SPARQL being the big four) have developed under the watchful eyes of people and organizations with a variety of goals. It's been pushed by content providers (Adobe) and by Web-software organizations (Mozilla), by logicians and by the artificial-intelligence community. More recently, Semantic Web technologies have been embraced for life sciences and government data. And of course, much effort has been put towards the vision of a machine-readable World Wide Web—the Semantic Web as envisioned by Tim Berners-Lee (and as presented to the public in the 2001 Scientific American article by Berners-Lee, Jim Hendler, and Ora Lassila).
Our adtech group at IBM first took note of semantic technologies from the confluence of two trends. First, several years ago we found our work transitioning from the realm of DHTML-based client runtimes on the desktop to an annotation system targeted at life-sciences organizations. As we used XML, SOAP, and DB2 to develop the first version of the annotation system along with IBM Life Sciences, we started becoming familiar with the enormity of the structured and unstructured, explicit and tacit data that abounds throughout the life sciences industry. Second, it was around the same time that Dennis Quan—a former member of our adtech team—was completing his doctoral degree as he designed, developed, and evangelized Haystack, a user-oriented information manager built on RDF.
Our work continued, and over the next few years we became involved with new communities both inside and outside of IBM. Via a collaboration with Dr. Tom Deisboeck, we became involved with the Center for the Development of a Virtual Tumor (CViT) and developed plans for a semantics-backed workbench which cancer modelers from different laboratories and around the world could use to drive their research and integrate their work with that of other scientists. We met Tim Clark from the MIND Center for Interdisciplinary Informatics and June Kinoshita and Elizabeth Wu of the Alzheimer Research Forum as they were beginning work on what would become the Semantic Web Applications in Neuromedicine (SWAN) project. We helped organize what has become a series of internal IBM semantic-technology summits, such that we've had the opportunity to work with other IBM research teams, including those responsible for the IBM Integrated Ontology Development Toolkit (IODT).
All of which (and more) combines to bring us to where we stand today.
While we support the broad vision of a Semantic World Wide Web, we feel that there are great benefits to be derived from adapting semantic technologies for applications within an enterprise. In particular, we believe that RDF has several very appealing properties that position it as a data format of choice to provide a flexible information bus across heterogeneous applications and throughout the infrastructure layers of an application stack.
Name everything with URIs
When we model the world with RDF, everything that we model gets a URI. And the attributes of everything that we model get URIs. And all the relationships between the things that we model get URIs. And the datatypes of all the simple values in our models get URIs.
URIs enable selective and purposeful reuse of concepts. When I'm creating tables in MySQL and name a table album, my table will share a name with thousands of other database tables in other databases. If I have software that operates against that album table, there's no way for me to safely reuse it against album tables from other databases. Perhaps my albums are strictly organized by month, where | 计算机 |
2014-35/1130/en_head.json.gz/11187 | AMD + ATI = slow down!
Inbox, have mercy! Readers are hearing all kinds of rumors about fallout from …
- Jul 25, 2006 12:58 am UTC
So AMD is buying ATI. You already knew that. But did you know that the entire deal still has several important milestones to reach before it's put in the books? It will, and the fact of the matter is that neither AMD nor Intel used this past weekend to change corporate life as we know it. There's going to be a lot of "analysis" out there over the next several days, but remember this key point: very few people actually know anything for certain. And when it comes to claims about motherboard licensing, ATI, NVIDIA, and chipsets, bear this one key point in mind: this is not a done deal.
A merger like this takes time. So far, the Boards of both companies have approved the deal. Shareholders will also need to approve the deal, and given all of the confusion out there over just what makes this deal a "good idea," I think it's safe to say that the deal will be closely reviewed by everyone with a serious stake in the matter. For starters, shareholders will have to think about what the change means for a company that has prided itself on being "brand neutral." Indeed, AMD CEO Hector Ruiz told investors in New York that the deal isn't about AMD becoming a "platform" company, insisting that the company's "best of breed" approach remains intact, squarely against Intel's platform strategy. How that fits with such an expensive acquisition is not yet clear. Shareholders may also be concerned about a acquisition that involves significant new debt to finance the cash portion of the deal, reportedly somewhere to the tune of $3 billion from Morgan Stanley Senior Funding. That concern could be compounded by AMD's latest financial results, which show that the company is indeed mired in a costly price war with Intel. With Conroe coming, competition should get even more heated.
Just as important, due diligence is required on both sides. The Canadian courts will have to look over the deal much in the same way that the US government will be involved in assessing the the fairness of the arrangement and its potential effects. The earliest the deal would clear would be the fourth quarter of this year.
None of this is to say that the deal won't happen. But quick, reactionary moves by the market's onlookers (Intel, NVIDIA, etc.) are not likely, especially when we start to think about the inter-relationships of everyone involved. There is no clear and unambiguous reason to believe that Intel has flipped ATI the bird, or better yet, that Intel is now hot on the heels of NVIDIA. Furthermore, if we take Ruiz's comments seriously, we shouldn't look at this as the end of NVIDIA and AMD cooperation, either. Intel, for instance, offers its own chipsets and yet currently also supports chipsets from both ATI and NVIDIA (although ATI was already expecting business from Intel to slow over the next few quarters). For AMD's part, they insist that their Torrenza strategy of pushing for third-party adoption of key AMD technologies is still intact. The fact of the matter is that it will likely be several months before we can see what this acquisition really means for the players in this market and how they are going to react. Caveat lector. | 计算机 |
2014-35/1130/en_head.json.gz/12399 | The Soul of A New Machine > Wiki current version
The computer revolution brought with it new methods of getting work done--just look at today's news for reports of hard-driven, highly-motivated young software and online commerce developers who sacrifice evenings and weekends to meet impossible deadlines. Tracy Kidder got a preview of this world in the late 1970s when he observed the engineers of Data General design and build a new 32-bit minicomputer in just one year. His thoughtful, prescient book,The Soul of a New Machine, tells stories of 35-year-old "veteran" engineers hiring recent college graduates and encouraging them to work harder and faster on complex and difficult projects, exploiting the youngsters' ignorance of normal scheduling processes while engendering a new kind of work ethic. These days, we are used to the "total commitment" philosophy of managing technical creation, but Kidder was surprised and even a little alarmed at the obsessions and compulsions he found. From in-house political struggles to workers being permitted to tease management to marathon 24-hour work sessions, The Soul of a New Machine explores concepts that already seem familiar, even old-hat, less than 20 years later. Kidder plainly admires his subjects; while he admits to hopeless confusion about their work, he finds their dedication heroic. The reader wonders, though, what will become of it all, now and in the future. --Rob Lightner --This text refers to the Hardcover edition. edit this info
Books, Tracy Kidder Details
Author: Tracy Kidder
Genre: Computers & Internet
What's your opinion on The Soul of A New Machine?
Ask a question about 'The Soul of A New Machine'
More The Soul of A New Machine reviews
review by splogue.
The soul of a great team!
How can a book about such a geeky subject be so gripping? I think I know. The book is about the development of new technology, yes, but more than that this is the narrative of teamwork. Every so often, a team of people with a shared goal come together and create greatness. This book is a look into that fascinating process and the culture that encouraged it. I couldn't put this down until the last page was turned -- it is just fantastic.
see all The Soul of A New Machine reviews (1)
rate more like 'The Soul of A New Machine'
splogue
"The soul of a great team!" | 计算机 |
2014-35/1130/en_head.json.gz/13837 | How iOS 7 Screwed Developers Who Make Apps For Both iPhone And Android
Kyle Russell
Getty Images See Also
While many have framed the redesign to Apple's new operating system for iPhones and iPads, iOS 7, as a response to rhetoric that Apple's software has begun to look "stale," "boring," and "outdated," Instapaper founder and early Tumblr employee Marco Arment thinks that Apple had a different strategy in mind.
Rather than simply reinvigorating consumer excitement by giving them something new to look at, Arment thinks that Apple's big changes are meant to force developers to focus on the iOS platform for the next few months.
He compares it to the release of the iPad in 2010. Rather than figuring out how to bring their apps to Android, many iOS developers instead spent time looking at how to make their apps work on a tablet form-factor device.
As we've covered before, the changes that iOS 7 introduces will require more than a simple update. If developers don't want to look out of date, they're going to have to rethink how major aspects of how their apps work.
Arment thinks that this will have the greatest impact on developers of apps that are on both iPhone and Android devices:
"iOS 7 is also going to be a problem for cross-platform frameworks. Fewer assumptions can be made about the UI widgets and behaviors common to all major platforms. And any UI targeting the least common denominator will now look even more cheap and dated on iOS 7, since the new standard on the OS is so far from the old one."
He also predicts that developers who were planning big redesigns that would apply to both likely had to go back to the drawing board to figure out how to make their plans not look out of place with the new look coming to Apple's platform:
"[W]hatever app developers were planning to do this fall is probably on hold now, because everyone’s going to be extremely busy updating and redesigning their apps for iOS 7. Anyone thinking about expanding into another platform now has a more pressing need to maintain marketshare on iOS."
Click here to read Arment's full blog post >
Developers can't assume things will work on both. | 计算机 |
2014-35/1130/en_head.json.gz/14137 | The rise and fall and rise of HTML
By Glyn Moody
HTML began life as a clever hack of a pre-existing approach. As Tim Berners-Lee explains in his book, “Weaving the Web”:
Since I knew it would be difficult to encourage the whole world to use a new global information system, I wanted to bring on board every group I could. There was a family of markup languages, the standard generalised markup language (SGML), already preferred by some of the world's top documentation community and at the time considered the only potential document standard among the hypertext community. I developed HTML to look like a member of that family.
One reason why HTML was embraced so quickly was that it was simple – which had important knock-on consequences:
The idea of asking people to write the angle brackets by hand was to me, and I assumed to many, as unacceptable as asking one to prepare a Microsoft Word document by writing out its binary-coded format. But the human readability of HTML was an unexpected boon. To my surprise, people quickly became familiar with the tags and started writing their own HTML documents directly.
Of course, once people discovered how powerful those simple tags could be, they made the logical but flawed deduction that even more tags would make HTML even more powerful. Thus began the first browser wars, with Netscape and Microsoft adding non-standard features in an attempt to trump the other. Instead, they fragmented the HTML standard (remember the blink element and marquee tag?), and probably slowed down the development of the field for years.
Things were made worse by the collapse of Netscape at the end of the 90s, leaving Microsoft as undisputed arbiter of (proprietary) standards. At that time, the web was becoming a central part of life in developed countries, but Microsoft's dominance – and the fact that Internet Explorer 7 only appeared in 2006, a full five years after version 6 – led to a long period of stagnation in the world of HTML.
One of the reasons that the Firefox project was so important was that it re-affirmed the importance of open standards – something that Microsoft's Internet Explorer had rendered moot. With each percentage point that Firefox gained at the expense of that browser, the pressure on Microsoft to conform to those standards grew. The arrival of Google's Chrome, and its rapid uptake, only reinforced this trend.
Eventually Microsoft buckled under the pressure, and has been improving its support of HTML steadily, until today HTML5 support is creeping into Visual Studio, and the company is making statements like the following:
Just four weeks after the release of Internet Explorer 9, Microsoft Corp. unveiled the first platform preview of Internet Explorer 10 at MIX11. In his keynote, Dean Hachamovitch, corporate vice president of Internet Explorer, outlined how the next version of Microsoft’s industry-leading Web browser builds on the performance breakthroughs and the deep native HTML5 support delivered in Internet Explorer 9. With this investment, Microsoft is leading the adoption of HTML5 with a long-term commitment to the standards process.
“The only native experience of HTML5 on the Web today is on Windows 7 with Internet Explorer 9,” Hachamovitch said. “With Internet Explorer 9, websites can take advantage of the power of modern hardware and a modern operating system and deliver experiences that were not possible a year ago. Internet Explorer 10 will push the boundaries of what developers can do on the Web even further.”
Even if some would quibble with those claims, the fact that Microsoft is even making them is extraordinary given its history here and elsewhere. Of course, there is always the risk that it might attempt to apply its traditional “embrace and extend” approach, but so far there are few hints of that. And even if does stray from the path of pure HTML5, Microsoft has already given that standard a key boost at a time when some saw it as increasingly outdated.
That view was largely driven by the rise of the app, notably on the iPhone and more recently on the iPad. The undeniable popularity of such apps, due in part to their convenience, has led some to suggest that the age of HTML is over, and that apps would become the primary way of interacting with sites online.
Mozilla responded by proposing the idea of an Open Web App Store:
An Open Web App Store should:
exclusively host web applications based upon HTML5, CSS, Javascript and other widely-implemented open standards in modern web browsers — to avoid interoperability, portability and lock-in issuesensure that discovery, distribution and fulfillment works across all modern browsers, wherever they run (including on mobile devices)set forth editorial, security and quality review guidelines and processes that are transparent and provide for a level playing fieldrespect individual privacy by not profiling and tracking individual user behavior beyond what’s strictly necessary for distribution and fulfillmentbe open and accessible to all app producers and app consumers.
As the links to earlier drafts on its home page indicate, HTML5 has been under development for over three years, but it really seems to be taking off now. Some early indications of what it is capable of can be seen in projects to replace browser plugins for PDFs and MP3s with browser-native code.
HTML5 is also at the heart of the FT's new Web App:
Creating an HTML5 app is innovative and breaks new ground – the FT is the first major news publisher to launch an app of this type. There are clear benefits. Firstly, the HTML5 FT Web App means users can see new changes and features immediately. There is no extended release process through an app store and users are always on the latest version.
Secondly, developing multiple ‘native’ apps for various products is logistically and financially unmanageable. By having one core codebase, we can roll the FT app onto multiple platforms at once.
We believe that in many cases, native apps are simply a bridging solution while web technologies catch up and are able to provide the rich user experience demanded on new platforms.
In other words, the FT was fed up paying a hefty whack of its revenue to Apple for the privilege of offering a native app. And if the following rumour is true, the FT is not the only well-known name to see it that way:
Project Spartan is the codename for a new plat | 计算机 |
2014-35/1130/en_head.json.gz/14323 | LinuxInsider > E-Commerce > Analytics | Next Article in Analytics
Data, Information and Knowledge
Data by itself is useless. An MP3 file is garbage without software to render it into a song, which is a kind of information. Ditto with your bank balance and the video you shot over the holiday, or the formula or source code for a new product. Instead of making data security our top priority, wouldn't we be better off focusing on data transformation?
By Denis Pombriant • CRM Buyer • ECT News Network
I still see far too many examples of content confusing the ideas of data and information. Sometimes it seems a writer is simply trying to avoid being redundant when using data and information in the same sentence to mean the same thing. Of course, they are different, and the result is unnecessary confusion.
I just wrote a paper for a European law journal on the topic, and I learned more about it than is healthy for one person. The piece will be out in August. Generally, I admire the effort the Europeans are making to get it right, though they are less concerned with data and information per se than they are with privacy and security. These things all intersect but in sometimes unpredictable ways. The more I think about things, the less I am sure of -- and the more questions I have.
The European parliament is trying to figure out laws that protect individual rights to privacy, which necessarily affect what data is kept and what is not. That makes sense, and it sounds simple, but how do you do that? Does a person walking on a street have a right to privacy and thus a right to determine how you use a crowd photo?
What if a corporation like Google or a government takes the photo? Are we to prevent photos, based on the premise that someone someday might do something to a person in one of the photos based on the picture? From there it gets silly, but there are some concrete situations that are nothing to laugh at.
The Persistence of Memory
Take the case of a nurse in Connecticut who was arrested for possessing a small amount of pot. The case was dismissed when she agreed to take some drug education courses, according to an article in The New York Times. In the good old days, that would have been the end of it, because according to Connecticut law -- and the laws in many other states -- her record was wiped clean with the dismissal. Under Connecticut law, she can even testify under oath that she has never been arrested now that the record has been cleared.
That all makes good sense to me. It might not be factually correct, but these expungement laws are one of the fictions we create in modern life to keep the world spinning. However, with the Internet, there's no such thing as expungement, and a search still comes up with the original news article that -- while true when it was published -- is now false.
It matters, because this nurse can't find a job any more, thanks to the simple expedient of prospective employers doing a rudimentary search on every new job applicant. What to do? She's suing the news organizations that wrote the story for slander, but the story was true when it was reported. Yikes!
The Internet and our modern world are full of examples like this. Society used to be able to conveniently forget small indiscretions, and we all got on with life. Now that's being taken away, without anyone even giving permission or any new law being adopted. The Internet is the defacto repository of all things digital about us -- but should it be? The Europeans take all this very seriously, and perhaps we should too.
Information Alchemy
It seems to me that the biggest issue we have with data and information today is not data security, even though lots of it gets stolen (I'm talking to you People's Liberation Army unit 61398). In fact, I think we've put too much emphasis on physically securing data and given too little thought to how it is transformed into information.
After all, data by itself is useless. An MP3 file is garbage without software to render it into a song, which is a kind of information. Ditto with your bank balance and the video you shot over the holiday, or the formula or source code for a new product.
Wouldn't we be better off focusing on data transformation? A new photo sharing service, SnapChat, takes this approach by delivering photos that disintegrate after 10 seconds. That's far from ideal for most applications, but it's on the right track.
Generally, I think data ought to be handled like milk in a supermarket; it ought to have an outdate after which it automatically becomes archival. You might be able to access archival data, but transforming it back into its original information content would have to be restricted in some way.
Look, we can still access information about various flat Earth theories, but we all know this is archival and historic and no longer scientific. Some of us can still take it seriously if we want to, but we can't take it to the bank or whatever -- you know what I mean.
We don't have anything like that for data yet -- something that says this no longer yields the information it once did. On a parallel path, if we were better able to control the conversion of data to information so that only the data's owners could decrypt it, might we have less data theft and the loss of intellectual property that goes with it?
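To make the milk-carton idea concrete, here is a minimal sketch (mine, not Pombriant's; the class name, the AES-GCM cipher choice, and the expiry policy are all assumptions invented for illustration) of a record that carries an outdate and will only convert its data back into information for a caller who holds the owner's key, and only before that date:

```java
import java.time.Instant;
import java.util.Optional;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

/** A blob of data with an "outdate": after that moment it is archival and
 *  this class will no longer render it back into information. */
public final class ExpiringRecord {
    private final byte[] ciphertext;   // the stored data, encrypted at rest
    private final byte[] iv;           // nonce used when the record was sealed
    private final Instant outdate;     // after this moment the record is archival

    public ExpiringRecord(byte[] ciphertext, byte[] iv, Instant outdate) {
        this.ciphertext = ciphertext.clone();
        this.iv = iv.clone();
        this.outdate = outdate;
    }

    /** Converts data back into information only if (a) the caller holds the
     *  owner's key and (b) the record has not passed its outdate. */
    public Optional<byte[]> toInformation(SecretKey ownerKey) throws Exception {
        if (Instant.now().isAfter(outdate)) {
            return Optional.empty();   // archival: policy says "do not render"
        }
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, ownerKey, new GCMParameterSpec(128, iv));
        return Optional.of(cipher.doFinal(ciphertext));
    }
}
```

The enforcement here lives only in application code, which is really the column's point: nothing in today's storage or network infrastructure expires data or restricts its transformation for us.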
If any of this makes sense, then it's not data security we should focus on as much as secure data conversion or transformation into information -- those are different issues with different approaches. When you think of it this way, the differences between data and information are starkly clear. It gives us all good reason to consciously choose the right words to convey our meaning.
Denis Pombriant is the managing principal of the Beagle Research Group, a CRM market research firm and consultancy. Pombriant's research concentrates on evolving product ideas and emerging companies in the sales, marketing and call center disciplines. His research is freely distributed through a blog and Web site. He is the author of Hello, Ladies! Dispatches from the Social CRM Frontier and can be reached at [email protected].
Research and Markets: North American Development Survey 2012 v.2: Over Half of North American Software Developers are Moonlighting
Research and Markets (http://www.researchandmarkets.com/research/6x854n/north_american) has announced the addition of the "North American Development Survey 2012 v.2" report to their offering.
This series started in the Winter of 1998 and is the most comprehensive research survey series in existence focused exclusively on developers and IT managers. In this survey, we examine the changing face of Operating platforms; Languages including particular emphasis on Scripting Languages; Web Services and Service Oriented Architectures with deeper drill-down for Software as a Service and Cloud Computing, highlighting trend updates and significant changes.
This series explores: global demographics, platform use and migrations, language use, internal and external cloud computing, SaaS, SOA, security, Linux and open source software, Java development, general internet development, architecture and technology adoption, software development requirements, development tools, development issues, and application management.
Conducted biannually; based upon 393 in-depth developer interviews for Fall 2012.
New Evans Data Survey Shows 53% work on apps outside of work
Over half of all software developers work on apps on their own personal time according to the newly released North American Development Survey, a survey of over 400 software developers in North America conducted last month. Of those who do work on apps outside of work, 34% spend 20 to 40 hours per week, while 29% spend more than 40 hours per week on their own projects. The more experience the developer has, the more likely he is to work long hours on his own.
"There's been a lot of conjecture over the last couple of years about just who are the people writing all those apps for app stores," said Janel Garvin, CEO of Evans Data Corp. While there obviously are specific companies focused on that space, and maybe a handful of hobbyists or students, Garvin sees lots of evidence that the bulk of those apps are being developed by the same developers who write traditional software for many types of companies as their day job.
For more information visit http://www.researchandmarkets.com/research/6x854n/north_american
By Janice Helwig and Mischa Thompson, Policy Advisors

Since 1999, the OSCE participating States have convened three "supplementary human dimension meetings" (SHDMs) each year – that is, meetings intended to augment the annual review of the implementation of all OSCE human dimension commitments. The SHDMs focus on specific issues and the topics are chosen by the Chair-in-Office. Although they are generally held in Vienna – with a view to increasing the participation from the permanent missions to the OSCE – they can be held in other locations to facilitate participation from civil society. The three 2010 SHDMs focused on gender issues, national minorities and education, and religious liberties.

But 2010 had an exceptionally full calendar – some would say too full. In addition to the regularly scheduled meetings, ad hoc meetings included:
- a February 9-10 expert workshop in Mongolia on trafficking;
- a March 19 hate crimes and the Internet meeting in Warsaw;
- a June 10-11th meeting in Copenhagen to commemorate the 20th anniversary of the Copenhagen Document;
- a (now annual) trafficking meeting on June 17-18;
- a high-level conference on tolerance June 29-30 in Astana.
The extraordinary number of meetings also included an Informal Ministerial in July, a Review Conference (held in Warsaw, Vienna and Astana over the course of September, October, and November) and the OSCE Summit on December 1-2 (both in Astana).

Promotion of Gender Balance and Participation of Women in Political and Public Life
By Janice Helwig, Policy Advisor

The first SHDM of 2010 was held on May 6-7 in Vienna, Austria, focused on the "Promotion of Gender Balance and Participation of Women in Political and Public Life." It was opened by speeches from Kazakhstan's Minister of Labour and Social Protection, Gulshara Abdykalikova, and Portuguese Secretary of State for Equality, Elza Pais. The discussions focused mainly on "best practices" to increase women's participation at the national level, especially in parliaments, political parties, and government jobs. Most participants agreed that laws protecting equality of opportunity are sufficient in most OSCE countries, but implementation is still lacking. Therefore, political will at the highest level is crucial to fostering real change. Several speakers recommended establishing quotas, particularly for candidates on political party lists. A number of other forms of affirmative action remedies were also discussed. Others stressed the importance of access to education for women to ensure that they can compete for positions. Several participants said that stereotypes of women in the media and in education systems need to be countered. Others seemed to voice stereotypes themselves, arguing that women aren't comfortable in the competitive world of politics.

Turning to the OSCE, some participants proposed that the organization update its (2004) Gender Action Plan. (The Gender Action Plan is focused on the work of the OSCE. In particular, it is designed to foster gender equality projects within priority areas; to incorporate a gender perspective into all OSCE activities, and to ensure responsibility for achieving gender balance in the representation among OSCE staff and a professional working environment where women and men are treated equally.) A few participants raised more specific concerns.
For example, an NGO representative from Turkey spoke about the ban on headscarves imposed by several countries, particularly in government buildings and schools. She said that banning headscarves actually isolates Muslim women and makes it even harder for them to participate in politics and public life.

NGOs from Tajikistan voiced their strong support for the network of Women's Resource Centers, which has been organized under OSCE auspices. The centers provide services such as legal assistance, education, literacy classes, and protection from domestic violence. Unfortunately, however, they are short of funding. NGO representatives also described many obstacles that women face in Tajikistan's traditionally male-oriented society. For example, few women voted in the February 2010 parliamentary elections because their husbands or fathers voted for them. Women were included on party candidate lists, but only at the bottom of the list. They urged that civil servants, teachers, health workers, and police be trained on legislation relating to equality of opportunity for women as means of improving implementation of existing laws.

An NGO representative from Kyrgyzstan spoke about increasing problems related to polygamy and bride kidnappings. Only a first wife has any legal standing, leaving additional wives – and their children - without social or legal protection, including in the case of divorce.

The meeting was well-attended by NGOs and by government representatives from capitals. However, with the exception of the United States, there were few participants from participating States' delegations in Vienna. This is an unfortunate trend at recent SHDMs. Delegation participation is important to ensure follow-up through the Vienna decision-making process, and the SHDMs were located in Vienna as a way to strengthen this connection.

Education of Persons belonging to National Minorities: Integration and Equality
By Janice Helwig, Policy Advisor

The OSCE held its second SHDM of 2010 on July 22-23 in Vienna, Austria, focused on the "Education of Persons belonging to National Minorities: Integration and Equality." Charles P. Rose, General Counsel for the U.S. Department of Education, participated as an expert member of the U.S. delegation. The meeting was opened by speeches from the OSCE High Commissioner on National Minorities Knut Vollebaek and Dr. Alan Phillips, former President of the Council of Europe Advisory Committee on the Framework Convention for the Protection of National Minorities. Three sessions discussed facilitating integrated education in schools, access to higher education, and adult education. Most participants stressed the importance of minority access to strong primary and secondary education as the best means to improve access to higher education.

The lightly attended meeting focused largely on Roma education. OSCE Contact Point for Roma and Sinti Issues Andrzej Mirga stressed the importance of early education in order to lower the dropout rate and raise the number of Roma children continuing on to higher education. Unfortunately, Roma children in several OSCE States are still segregated into separate classes or schools - often those meant instead for special needs children - and so are denied a quality education. Governments need to prioritize early education as a strong foundation. Too often, programs are donor-funded and NGO run, rather than being a systematic part of government policy.
While states may think such programs are expensive in the short term, in the long run they save money and provide for greater economic opportunities for Roma. The meeting heard presentations from several participating States of what they consider their "best practices" concerning minority education. Among others, Azerbaijan, Belarus, Georgia, Greece, and Armenia gave glowing reports of their minority language education programs. Most participating States who spoke strongly supported the work of the OSCE High Commissioner on National Minorities on minority education, and called for more regional seminars on the subject.

Unfortunately, some of the presentations illustrated misunderstandings and prejudices rather than best practices. For example, Italy referred to its "Roma problem" and sweepingly declared that Roma "must be convinced to enroll in school." Moreover, the government was working on guidelines to deal with "this type of foreign student," implying that all Roma are not Italian citizens. Several Roma NGO representatives complained bitterly after the session about the Italian statement. Romani NGOs also discussed the need to remove systemic obstacles in the school systems which impede Romani access to education and to incorporate more Romani language programs. The Council of Europe representative raised concern over the high rate of illiteracy among Romani women, and advocated a study to determine adult education needs. Other NGOs talked about problems with minority education in several participating States. For example, Russia was criticized for doing little to provide Romani children or immigrants from Central Asia and the Caucasus support in schools; what little has been provided has been funded by foreign donors.

Charles Rose discussed the U.S. Administration's work to increase the number of minority college graduates. Outreach programs, restructured student loans, and enforcement of civil rights law have been raising the number of graduates. As was the case at the first SHDM, with the exception of the United States, there were few participants from participating States' permanent OSCE missions in Vienna.

OSCE Maintains Religious Freedom Focus
By Mischa Thompson, PhD, Policy Advisor

Building on the July 9-10, 2009, SHDM on Freedom of Religion or Belief, on December 9-10, 2010, the OSCE held a SHDM on Freedom of Religion or Belief at the OSCE Headquarters in Vienna, Austria. Despite concerns about participation following the December 1-2 OSCE Summit in Astana, Kazakhstan, the meeting was well attended. Representatives of more than forty-two participating States and Mediterranean Partners and one hundred civil society members participated. The 2010 meeting was divided into three sessions focused on 1) Emerging Issues and Challenges, 2) Religious Education, and 3) Religious Symbols and Expressions. Speakers included ODIHR Director Janez Lenarcic, Ambassador-at-large from the Ministry of Foreign Affairs of the Republic of Kazakhstan, Madina Jarbussynova, United Nations Special Rapporteur on Freedom of Religion or Belief, Heiner Bielefeldt, and Apostolic Nuncio Archbishop Silvano Tomasi of the Holy See.
Issues raised throughout the meeting echoed concerns raised during the OSCE Review Conference in September-October 2010 regarding the participating States' failure to implement OSCE religious freedom commitments. Topics included the treatment of "nontraditional religions," introduction of laws restricting the practice of Islam, protection of religious instruction in schools, failure to balance religious freedom protections with other human rights, and attempts to substitute a focus on "tolerance" for the protection of religious freedoms.

Notable responses to some of these issues included remarks from Archbishop Silvano Tomasi that parents had the right to choose an education for their children in line with their beliefs. His remarks addressed specific concerns raised by the Church of Scientology, Raelian Movement, Jehovah's Witnesses, Catholic organizations, and others, that participating States were preventing religious education and in some cases, even attempting to remove children from parents attempting to raise their children according to a specific belief system. Additionally, some speakers argued that religious groups should be consulted in the development of any teaching materials about specific religions in public school systems.

In response to concerns raised by participants that free speech protections and other human rights often seemed to outweigh the right to religious freedom especially amidst criticisms of specific religions, UN Special Rapporteur Bielefeldt warned against playing equality, free speech, religious freedom, and other human rights against one another given that all rights were integral to and could not exist without the other.

Addressing ongoing discussion within the OSCE as to whether religious freedom should best be addressed as a human rights or tolerance issue, OSCE Director Lenarcic stated that, "though promoting tolerance is a worthwhile undertaking, it cannot substitute for ensuring freedom of religion of belief. An environment in which religious or belief communities are encouraged to respect each other but in which, for example, all religions are prevented from engaging in teaching, or establishing places of worship, would amount to a violation of freedom of religion or belief."

Statements by the United States made during the meeting also addressed many of these issues, including the use of religion laws in some participating States to restrict religious practice through onerous registration requirements, censorship of religious literature, placing limitations on places of worship, and designating peaceful religious groups as 'terrorist' organizations. Additionally, the United States spoke out against the introduction of laws and other attempts to dictate Muslim women's dress and other policies targeting the practice of Islam in the OSCE region. Notably, the United States was one of few participating States to call for increased action against anti-Semitic acts such as recent attacks on Synagogues and Jewish gravesites in the OSCE region. (The U.S. statements from the 2010 Review Conference and High-Level Conference can be found on the website of the U.S. Mission to the OSCE.)

In addition to the formal meeting, four side events and a pre-SHDM Seminar for civil society were held.
The side events were: "Pluralism, Relativism and the Rule of Law," "Broken Promises – Freedom of religion or belief in Kazakhstan," "First Release and Presentation of a Five-Year Report on Intolerance and Discrimination Against Christians in Europe" and "The Spanish school subject 'Education for Citizenship:' an assault on freedom of education, conscience and religion." The side event on Kazakhstan convened by the Norwegian Helsinki Committee featured speakers from Forum 18 and Kazakhstan, including a representative from the CiO. Kazakh speakers acknowledged that more needed to be done to fulfill OSCE religious freedom commitments and that it had been a missed opportunity for Kazakhstan not to do more during its OSCE Chairmanship. In particular, speakers noted that religious freedom rights went beyond simply 'tolerance,' and raised ongoing concerns with registration, censorship, and visa requirements for 'nontraditional' religious groups. (The full report can be found on the website of the Norwegian Helsinki Committee.)

A Seminar on Freedom of Religion and Belief for civil society members also took place on December 7-8 prior to the SHDM. The purpose of the Seminar was to assist in developing the capacity of civil society to recognize and address violations of the right to freedom of religion and belief and included an overview of international norms and standards on freedom of religion or belief and non-discrimination.
Three Perspectives of Service-Oriented Architectures
NEWS AT SEI Authors
Grace Lewis
Dennis B. Smith
This library item is related to the following area(s) of work:
Performance and Dependability
This article was originally published in News at SEI on: January 1, 2006

A previous column described the role of service-oriented architectures (SOAs) in a modern information technology environment. With the advent of universal Internet availability, many organizations have leveraged the value of their legacy systems by exposing all or parts of them as services. A service is a coarse-grained, discoverable, and self-contained software entity that interacts with applications and other services through a loosely coupled, often asynchronous, message-based communication model [Lewis 05]. A collection of services with well-defined interfaces and shared communications model is called a service-oriented architecture (SOA). A system or application is designed and implemented using functionality from these services.

The characteristics of SOAs (e.g., loose coupling, published interfaces, standard communication model) offer the promise of enabling existing legacy systems to expose their functionality as services, presumably without making significant changes to the legacy systems. However, constructing services from existing systems to obtain the benefits of an SOA is neither easy nor automatic. There are no effective published strategies for building end-to-end systems1 based on SOAs; there are no approaches for understanding end-to-end quality of service (QoS); the technologies that SOAs are based on are still immature, and it is not clear what works and what does not work [Ma 05, Manes 05].

This column distinguishes three different perspectives that should be addressed in developing an effective SOA or in developing a component of an SOA-based system. For each perspective, it identifies the issues, tasks, and risks and outlines a set of research issues.

Development Perspectives in SOAs
The three different development perspectives are illustrated in Figure 1. Middleware or infrastructure providers (shown in the middle level of Figure 1) must identify the network and communications protocols and standards to be employed. They must also determine what additional SOA infrastructure capabilities are necessary and provide them as common services (e.g., service registry, service-orchestration mechanisms). Infrastructure providers must allow the discovery of services and the data exchange between application and services.

Application developers (shown at the top level of Figure 1) must locate or select appropriate services to be used by the application and develop application-specific code to invoke the selected services. They are concerned with whether services invoked by the application meet a full range of capability, QoS, and efficiency-of-use expectations.

Service providers (shown at the bottom level of Figure 1) must identify a needed service, identify legacy code that potentially can satisfy the needed service, develop the appropriate interfaces, and finally modify the legacy code and/or develop new code to provide the service so that it has useful capability to the widest range of applications possible.

Figure 1: High-Level View of an SOA

The focus, tasks, and risks that must be addressed by each perspective are outlined in the following sections.

1. Middleware or Infrastructure Providers

Developers of SOA infrastructure or middleware must provide a stable infrastructure for the discovery and invocation of services. Specific challenges include
- development of a set of common infrastructure services for discovery, communication, security, etc.
- identification and development of binding mechanisms to satisfy the largest set of potential service users

Based on these challenges, the tasks for infrastructure developers are
- development of a set of common infrastructure services for discovery, communication, security, etc.
- provision of tools for application and service developers

Infrastructure developers face some risks:
- Constraints of the infrastructure may limit the availability of services and applications that use it.
- The effort for development, support, and training for the use of tools and infrastructure may be underestimated.
- Delays in schedule will affect the schedule of applications and services that depend on the infrastructure.
- Changes in the infrastructure will affect the applications and services that use it.

2. Application Developers

Developers of applications that use services must either discover and connect to services statically at design time or build the applications so that services can be located and invoked dynamically at run time. The semantics of the information being exchanged are always a challenge, as is the question of what to do when a service is no longer available in the case of static binding. Specific challenges for application developers include
- identification of the right services
- understanding of the semantics of the information being exchanged
- determination of rules to follow when services are no longer available (in the case of static binding)
- creation of an architecture that is stable enough to accommodate changes in services that are often outside of the control of the organization
Tasks for developers of applications that use services include
- understanding the SOA infrastructure: bindings, messaging technologies, communication protocols, service-description languages, and discovery services
- discovering services to be incorporated into applications2
- retrieving service-description documentation—description, parameters, bindings, transport protocol
- invoking the identified services in applications
  - composition
  - service request error handling
  - availability handling

Application developers, therefore, face these risks:
- Available services might not meet functional and non-functional application requirements.
- Services may change or disappear without notification.
- With proprietary SOAs, there might be dependencies on tools and programs provided by the infrastructure developers that require training and that may conflict with the development and deployment environments.
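The invocation, error-handling, and availability-handling tasks above are easier to picture in code. The fragment below is a hedged illustration rather than SEI guidance; the endpoint URL, the timeout values, and the fallback behavior are assumptions invented for the example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Optional;

/** Illustrative client-side wrapper around one remote service call. */
public class QuoteServiceClient {
    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(3))   // availability handling: bounded waits
            .build();

    /** Invokes a hypothetical quote service and degrades gracefully. */
    public Optional<String> fetchQuote(String symbol) {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://services.example.org/quotes?symbol=" + symbol))
                .timeout(Duration.ofSeconds(3))
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                return Optional.empty();             // service request error handling
            }
            return Optional.of(response.body());
        } catch (Exception e) {
            // Service unavailable, moved, or changed: fall back rather than fail the app.
            return Optional.empty();
        }
    }
}
```

Static binding to a fixed URL like this is exactly the fragility the risks list warns about: if the service moves or its interface changes, only the catch block stands between the user and a broken application.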
3. Service Providers

Developers of services must describe and develop services that applications can easily locate and use with acceptable QoS. If there are existing systems that can provide service functionality, developers must focus on the development of proper interfaces, the extraction of information from the systems, and the wrapping of this information according to the requirements of the SOA infrastructure. If services are non-existent, developers must focus on the service-specific code that must be written or reused from legacy systems. In this case, there are additional issues of service initialization. If code is reused, developers must analyze the feasibility of migration from legacy to target.

Specific challenges for service providers include
- mapping of service requirements to component capabilities, often having to anticipate requirements because the complete set of potential service users is not known
- description and granularity of services with acceptable QoS
- development of new service-specific code and wrappers to existing code
- determination of migration feasibility of potential services from legacy applications, if required
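Granularity is the least intuitive item in that list, so a small, hypothetical illustration may help (the interface and field names are invented, not taken from the article). The first interface forces a client into several network round trips; the second returns a whole customer document in one exchange, at the cost of sending data the client may not need:

```java
/** Fine-grained: each attribute is a separate remote request/response pair. */
interface CustomerServiceFine {
    String getName(String customerId);
    String getAddress(String customerId);
    String getCreditRating(String customerId);
}

/** Coarse-grained: one round trip returns a whole customer record. */
interface CustomerServiceCoarse {
    CustomerRecord getCustomer(String customerId);
}

/** The document exchanged by the coarse-grained operation. */
record CustomerRecord(String name, String address, String creditRating) {}
```

Neither choice is free, and the end-to-end discussion later in the column returns to this trade-off: too-fine interfaces multiply trips across the network, while too-coarse ones inflate every response message.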
Tasks for service developers, therefore, include
- gathering requirements from potential service users: Who would use the services and how would they use them?
- understanding the SOA infrastructure: bindings, messaging technologies, communication protocols, service-description languages, and discovery services
- developing code that receives the service request, translates it into calls into existing systems, and produces a response
- describing and publishing the service
- developing service-initialization code and operational procedures
Risks faced by service providers are these:
- Developed services may not be used because they do not meet functional and/or non-functional requirements.
- The effort to translate legacy data types into data types that are allowed by the SOA infrastructure can be greater than expected, especially in the case of complex data types such as audio, video, and graphics.
- If dealing with proprietary SOAs, there may be multiple constraints imposed on developed services; and there might be dependencies on tools and programs provided by the infrastructure developers that require training and that may conflict with the development and deployment environments.

Why End-to-End SOA Engineering is Important

In a system-of-systems (SoS) context, it is common for these three types of components to be developed independently by separate organizations. This, in fact, is the idea behind SOAs: loose coupling and separation of concerns. Nonetheless, decisions made locally by any one of these development groups can have an effect on the other groups and can potentially have a global effect on the SoS:
- The granularity of service interfaces can affect the end-to-end performance of an SoS because services are executed across a network as an exchange of a service request and a service response. If service interfaces are too coarse-grained, clients will receive more data than they need in their response message. If service interfaces are too fine-grained, clients will have to make multiple trips to the service to get all the data they need.
- If developers of the SOA infrastructure offer only an asynchronous communication mechanism, system developers requiring high performance and availability will encounter problems.
- If service developers do not gather functionality and QoS needs from potential users of services, they might develop and deploy services that are never used.
- If service interfaces are constantly changing and there is no formal mechanism for communicating these changes, users of the SOA-based system might find that certain system functionality is no longer available because the system was not able to anticipate and accommodate these changes.

Conclusions and Next Steps

Most literature and analyses related to SOAs focus on only one of the development perspectives and tend to assume an ideal environment in which there are no conflicts among the three perspectives. The SEI is beginning to develop a research agenda to examine each of the perspectives—application developer, infrastructure developer, and service provider—and to answer questions such as how to select the appropriate services and infrastructure for the organization's system goals, how to determine the QoS delivered by a system when some of its components are discovered and composed at runtime, and how to build services that can be used in a wide range of applications. All of these three perspectives require awareness of the needs and challenges of the others so they can contribute overall to the quality of the SOA-based system.

Addressing these issues can ultimately lead to a disciplined approach to building end-to-end systems based on SOAs. Such an approach must focus on an interrelated set of critical issues, including
- QoS in a system where service discovery, composition, and invocation play a major role
- insights on paths and limitations of technologies that enable SOAs
- understanding critical functional and QoS needs of potential users of services
- service-level agreements in a dynamic environment
References

[Lewis 05] Lewis, Grace and Wrage, Lutz. Approaches to Constructive Interoperability (CMU/SEI-2004-TR-020 ESC-TR-2004-020). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2005.

[Ma 05] Ma, Kevin. "Web Services: What's Real and What's Not." IT Professional 7, 2 (March-April 2005).

[Manes 05] Manes, Anne. VantagePoint 2005-2006 SOA Reality Check. Midvale, UT: Burton Group, 2005.
1 By end-to-end, we mean the complete pathway through applications, the communication infrastructure and network, and the actual services to perform a specific task.
2 Dynamic discovery, composition, and invocation are still extremely limited with existing technologies.
Grace Lewis is a senior member of technical staff at the Software Engineering Institute (SEI) of Carnegie Mellon University (CMU), where she is currently working in the areas of constructive interoperability, COTS-based systems, modernization of legacy systems, enterprise information systems, and model-driven architecture. Her latest publications include several reports published by Carnegie Mellon on these subjects and a book in the SEI Software Engineering Series. Grace has more than 15 years of experience in software engineering. She is also a member of the technical faculty for the Master's in Software Engineering program at CMU. Dennis Smith is the lead for the SEI Integration of Software Intensive Systems (ISIS) Initiative. This initiative focuses on addressing issues of interoperability and integration in large-scale systems and systems of systems. Earlier, he was the technical lead in the effort for migrating legacy systems to product lines. In this role he developed the method Options Analysis for Reengineering (OARS) to support reuse decision-making. Smith has also been the project leader for the CASE environments project. This project examined the underlying issues of CASE integration, process support for environments and the adoption of technology. Smith has published a wide variety of articles and technical reports, and has given talks and keynotes at a number of conferences and workshops. He has an MA and PhD from Princeton University, and a BA from Columbia University. The views expressed in this article are the author's only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about this topic.
Franchise: Ys
"In my life, I've wandered everywhere... Around this world, hope would always be there."— Excerpt from the opening of the English translation of Ys: The Oath in FelghanaYs (typically pronounced "ees"note Basically like "geese" without the G; this pronunciation is the one used by the series's current American publisher, XSEED Games, though "yees" was used instead in the English localization of Ys: The Ark of Napishtim) is an action RPG series developed by Falcom and published by its developer in Japan and currently published by XSEED Games in North America and Europe, with a large number of companies having localized licensed ports in the past (such as the TurboGrafx-CD versions) that has spanned over twenty years and thirteen consoles. The games chronicle the adventures of Adol Christin, a wandering swordsman with fiery red hair who always seems to be in the right place at the wrong time as far as world-threatening disasters are concerned. The eponymous Ys is a mythical island floating in the sky, which is visited in some games and merely referenced in others.The games have a few recurring characters (leaving aside Adol, who is the main playable character in every game except for the prequel Ys Origin and Ys Strategy) and take place in the same world and continuity, but otherwise keep things fresh by introducing a brand new cast, location, and scenario each game, not unlike fellow traveling swordsman-starring series The Legend of Zelda. Games are generally played with a top-down perspective, with early games requiring Adol to "ram" into enemies in just the right spot to kill them and later installments having a Hack and Slash style of gameplay (Hack and Slash combat was present since Ys V). (The Eternal/Complete remakes of I & II still have ramming combat, which carried over to the PSP version of those games, Ys I & II Chronicles.) The games themselves have gotten multiple remakes and "re-imaginings" in order to fit them better into the series' ever-expanding mythology.The games are also famous for their power-rock soundtracks composed by various members of Falcom's JDK Sound Team, most famously by Yuzo Koshiro (Ys I-II) and Mieko Ishikawa (Ys II-III) and performed by Ryo Yonemitsu (Music for the TurboGrafx-16 versions and the Perfect Collections) and more recently Yukihiro Jindo (Arrangments of Oath in Felghana and Ys I & II Chronicles). In addition to standalone soundtrack CDs, the TurboGrafx-16 games have much of their in-game soundtracks encoded in the same standardized Red Book format as a typical audio CD, allowing the game discs themselves to double as soundtrack CDs when placed into a CD player or other optical media player. The Windows games store their audio files in the open, patent-free Ogg Vorbis format and can be found and played by digging through the game's files and getting the .ogg files from the music folder. 
A few of the game re-releases also do special things with their soundtracks, and specifically their soundtrack history (note: Ys I & II Chronicles for the PSP and the port of Ys: The Oath in Felghana to the same system have the option to choose between multiple versions of the games' soundtracks within the games themselves; specifically, Chronicles contains the original PC-88 soundtracks from the first two games as well as the newer Eternal/Complete versions, both in addition to the JDK-performed remixes (the other two options are synthesized) new to Chronicles, while Oath contains the soundtracks from the PC-88 and Sharp X68000 versions of the original Ys III in addition to the soundtrack originally composed for Oath itself).

The games long suffered from extensive No Export for You syndrome after the series' lackluster initial release push in the very early 90s, which is the primary reason the series was practically unheard of outside of Japan for so long. Beginning with the release of Ys: The Ark of Napishtim on the PlayStation 2 and PlayStation Portable, Ys: Book I & II on the Wii Virtual Console, and Legacy of Ys: Books I & II for the Nintendo DS, the games started to reach a much wider audience, and the American video game publishing and localization company XSEED Games announced an exclusive partnership with Falcom in 2010 that included the localization of Ys SEVEN, Ys: The Oath in Felghana, and Ys I & II Chronicles for the PSP for the North American market, in that order. In March of 2012, XSEED and Falcom followed this up with a series of releases on Valve's Steam service, starting with the original Windows version of Oath and the long-Japan-only Origin, followed by a further-updated PC version of I&II Chronicles (note: ironically giving English-speakers the definitive version of that game); they then capped it off with the release of Memories of Celceta, Falcom's definitive version of IV, on the PS Vita, released in English in November of 2013.

Thanks to all this, the series' No Export for You tendencies are well and truly over, as Ys is now one of XSEED's most consistent sellers and the company has openly stated they'd love to work on any future releases, and between Steam and the Playstation market, virtually every single major Ys title is now available in English, both at retail and as a download. Ys V remains as the sole gap, and even that only extends to official localization; a fully playable fan translation patch is available courtesy of Aeon Genesis.

The main games in the series (excluding mobile phone games and compilations) are:
- Ys I: Ancient Ys Vanished: Omen (1987, 1998)
- Ys II: Ancient Ys Vanished: The Final Chapter (1988, 2000)
  Remakes of the first two games are generally packaged together (known as the Complete compilation), most recently with 2009's Ys I & II Chronicles for the PlayStation Portable, which was released in English in North America in February 2011 (and on Steam in February 2013).
- Ys III: Wanderers from Ys (1989, 1991, and a re-imagining in 2005 in Japan and in North America and Europe through Steam in 2012 as Ys: Oath in Felghana. A PSP version was released in Japan in 2010, and November the same year for North America.)
- Ys IV: The Dawn of Ys (1993) & Ys IV: Mask of the Sun (1993, 2005)
  Interestingly, neither of these titles was developed by Falcom directly; Hudson Soft developed Dawn of Ys, while Tonkin House handled Mask of the Sun. Both followed a plot outline laid out by Falcom, but ultimately had multiple significant differences.
Another version, Ys: Memories of Celceta, was released for the Playstation Vita, finally giving fans a Falcom-developed version of IV. Origina | 计算机 |
History of Acton Court
House & Grounds Today
Tours of House & Grounds
A Royal Progress
King Henry sat here
A well kept secret
The Poyntz Arms
Sundial designed by Nicholas Kratzer
In 1535, one of England’s most colourful kings, Henry VIII, came to stay at Acton Court with his second wife, Anne Boleyn, while on his summer Progress around the West Country. The owner of Acton Court, Nicholas Poyntz, wanted to impress his sovereign, so for Henry’s pleasure, he built a magnificent new East Wing on to the existing moated manor house. The new wing was a splendid testament to Nicholas Poyntz’s loyalty to his King. He went to immense trouble and expense to impress Henry, decorating the state apartments lavishly and fashionably. He was well rewarded as it is thought he was knighted during the royal visit.
Today, the East Wing which was built in just nine months comprises most of what remains at Acton Court. It offers a rare example of 16th century royal state apartments and some decorations which are said to be the finest of their kind in England.
Also surviving, hidden in the masonry until it was discovered during conservation work in 1994, is the King’s “en suite” garderobe, or privy.
Sir Nicholas went on building at Acton Court until his death in 1556. The surviving Eastern half of his long gallery can still be admired. It was a daring construction with large windows and a painted frieze of biblical text and moralising verses in Latin. During recent archaeological excavations at Acton Court, there were many exciting finds, thought to be associated with King Henry’s visit. These included examples of the finest Venetian glass of its time, Spanish ceramics, and some of the earliest clay tobacco pipes yet discovered. Dating from the late 16th century, these support the view that Sir Walter Raleigh gave one of the first demonstrations in England of the technique of smoking during a visit to Acton Court.
One item of particular importance was found by chance in a nettle patch next to the building. It is a Cotswold limestone sundial designed by the royal horologist, Nicholas Kratzer, dated 1520.
The Poyntz family owned Acton Court from 1364 until 1680 when the direct line of succession ended and the house was sold. It was subsequently reduced in size and converted for use as a tenant farmhouse. The building’s fortunes declined to the point of dilapidation in the 20th century. It is due in part to this neglect that Acton Court was left largely untouched and as a result a unique Tudor building has been preserved virtually intact.
Sir Nicholas Poyntz
King Henry's “en suite” garderobe or privy
Clay pipe discovered at Acton Court
Playdead’s new game over three years away, to use Unity
Friday, 26th August 2011 05:11 GMT
Limbo developer Playdead has confirmed its next game may be more than three years from release, and will use an entirely new engine.
“A good game takes time,” Dino Patti, CEO of the indie studio, told Edge.
“I think the new production will take at least three and a half years.”
Playdead said in July it was doubtful the new game could be shown this year.
Saying the new game is “a little more ambitious than Limbo”, game director Arnt Jensen explained that the new title will be built on the Unity Engine.
“It was just too much work. It’s like having a double product, doing both engine and game. There are a lot of things we don’t want to make from the beginning,” he said.
Patti revealed that Playdead can’t recycle Limbo’s engine, as by the time the game shipped, it had become too specialised.
“Limbo’s engine only works when it’s black and white now. It can’t render colour anymore,” he said.
Atmospheric platformer Limbo released to great acclaim and success on Xbox Live Arcade, and was recently made available on Steam.
Thanks, Kotaku.
The CD Writer--DVD-9 to 5: The New Copying Reality
Hugh Bennett
Posted Jun 1, 2003 - December 2004 Issue
It just isn't practical for people to copy DVDs—that's what a columnist for a major computer magazine recently proclaimed. Had this declaration come from a Hollywood executive living in denial, from a consumer electronics manufacturer being politically correct, or even from an uninformed member of the popular media, I wouldn't have been surprised. But I would have expected better from the technical press.
So, just how hard is it to copy a DVD movie? Not as difficult or time-consuming as some make it out to be—and it's getting easier and more convenient every day.

The biggest historical barrier to copying DVD movies was, obviously, that few consumers owned DVD recorders. Like the CD-R industry before it, early-generation writable DVD hardware and media were prohibitively expensive professional prototyping tools and came with imperfect playback compatibility. However, unlike CD-R, which took years to hit the mainstream, writable DVD products are being thrust into low-cost consumer markets. Witness the latest estimates from Strategic Marketing Decisions projecting that 27 million DVD recorders will already be installed worldwide by year's end, rising to some 244 million over the next three years.

With this first line of defense evaporating, the next level of protection against easy copying comes from the Content Scrambling System (CSS) currently installed on most commercial DVD movie titles. As has been well-publicized, however, programmers broke this digital encryption in 1999, throwing the digital copying door wide open despite legal efforts to stop the proliferation of the techniques employed. Since then, a profusion of Internet-distributed freeware has rendered CSS irrelevant.

Beyond digital encryption, an ever-growing number of prerecorded movies enjoy the de facto copy protection of being DVD-9s (8.4GB single-sided, dual-layer). Once a challenge to manufacture and thus a high-priced novelty, DVD-9s became necessary to accommodate lower compression rates as well as supplementary material and therefore were enlisted to publish most recent mainline titles. That, of course, complicated matters from a copying perspective. While DVD-5 discs (4.7GB single-sided single-layer) could easily be decrypted and transferred onto writable DVD media, DVD-9's 8.4GBs won't fit. Equivalent-capacity writable discs simply aren't available.

Some argue that it's only a matter of time before this dual-layer writable DVD media makes its way onto the scene. However, I'm more pessimistic about that possibility. Indeed, Matsushita, MPO, Philips, ITRI and others have demonstrated the technical feasibility of dual-layer DVD-RAM, ROM/RAM, and other formats. These attempts, albeit impressive, are laboratory experiments that give little consideration to cost, ease of manufacture, and market realities. For example, given the extreme difficulties involved in fabricating even current 4.7GB DVD-RAM discs, dual-layer writable DVD discs—of any variety—would arguably remain for some time too complex and therefore too unpredictable and expensive to produce. They would then be entering into a commodity marketplace where price, rather than capability, rules the roost. Further complicating matters will be the inability of the existing installed base of DVD recorders to write dual-layer discs and the likelihood that they couldn't (or wouldn't) be updated to do so.

Of course, DVD-9s can be decrypted and viewed from computer hard drives but, realistically, most consumers want to watch their movies on TV. The more likely scenarios, therefore, are already being played out courtesy of the latest, and grossly underestimated, generations of DVD manipulation tools.

It's possible to jump the DVD-9 hurdle by splitting content into two discs or re-authoring to prune out the additional material.
Deleted scenes, documentaries, trailers, and multiple angles, languages, and audio tracks can often all be hacked away so only the main movie remains and fits onto a single writable DVD disc. Accomplishing that used to take a fistful of software and plenty of know-how, but free and commercial tools continue to appear to automate and demystify the process.

The latest approach to emerge is ruthlessly effective and the one most likely to have broad ongoing consumer appeal. It involves recompressing, or selectively recompressing, the video content to a lower bit rate to make it fit onto one writable DVD disc. Excluding extraneous material increases storage space and lessens the required degree of recompression. Obviously, some visual quality is sacrificed, but the result is perfectly acceptable and analogous to that of converting Red Book audio CDs to lower-resolution MP3 files. And while it's true that some products can take upward of five hours to accomplish this, free software is now available to decrypt and recompress full DVD-9s in 45 minutes or less. When paired with a 4X DVD recorder, that's less than one hands-off hour from start to finish. With inevitable advances in software engineering, as well as with processor and recorder speeds, it might not be long before the whole procedure could be performed in less than 15 minutes.

Looking ahead, it's also possible that the MPEG-4 compression and metadata navigation capabilities now making their way into set-top DVD players could "MP3" the video market. Satisfied with "good enough" quality, consumers might simply rip and record multiple movies to a single writable DVD disc and exchange this material online. Alternatively, if next-generation systems catch on, it's sobering to consider that higher-capacity technology, such as Blu-ray Disc (23.3 to 27GB), holds three to four decrypted DVD-9 discs without sacrificing a thing.

Legal and ethical issues aside, the hardware and software needed to copy any DVD movie is already here, and it will continue to get cheaper, faster, easier, and more convenient to use. Ignoring, dismissing, or denying just doesn't cut it.
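To put rough, illustrative numbers on that recompression step (the figures below are mine, not the column's, and assume a two-hour feature with the audio carried over untouched): squeezing a full 8.4GB DVD-9 onto a 4.7GB writable disc means keeping only about 56 percent of the bits, which in average-bitrate terms works out to

\[
\frac{8.4 \times 10^{9} \times 8\ \text{bits}}{7200\ \text{s}} \approx 9.3\ \text{Mbps}
\quad\rightarrow\quad
\frac{4.7 \times 10^{9} \times 8\ \text{bits}}{7200\ \text{s}} \approx 5.2\ \text{Mbps}.
\]

Because the audio streams are normally left as-is, the video track absorbs essentially all of that reduction, which is why some softening is visible but, as the MP3 comparison suggests, generally acceptable.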
No. 2717: KANT'S CRITIQUE OF PURE REASON
by Andrew Boyd
Today, a critique of pure reason. The University of Houston�s College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them. All areas of human endeavor have their short list of luminaries. Baseball has Babe Ruth. Psychiatry, Sigmund Freud. And philosophy, Immanuel Kant.
Kant was a child of the Enlightenment. His early interest in classics gave way to philosophy when he entered the University of K�nigsberg, also known as Albertina. Upon graduation, Kant worked as a private tutor, then returned to Albertina as an unsalaried teacher. For fifteen years his main source of income was payment from his students. Busy as he was with teaching, Kant wrote prolifically. His reputation grew, and at age forty-six he was appointed chair of logic and metaphysics. Yet for his many highly respected works, by far the most influential was the Critique of Pure Reason, published in Kant�s late fifties.
The Critique was a revolutionary look at an age-old problem that had found new life during the Enlightenment. Inspired by works like Descartes� Meditations and Newton�s Principia, the Enlightenment celebrated reason. People could, and sh | 计算机 |
News from the world of maths: Game over for checkers
Computer scientists at the University of Alberta have solved checkers, the popular board game with a history dating back to 3,000 BC.
After 18-and-a-half years and sifting through 500 billion billion (a five followed by 20 zeroes) checkers positions, Dr. Jonathan Schaeffer and colleagues have built a checkers-playing computer program that cannot be beaten. Completed in late April this year, the program, Chinook, may be played to a draw but will never be defeated. The results of this research were published recently in the journal Science.
"This is a tremendous achievement — a truly significant advance in artificial intelligence," said Dr. Jaap van den Herik, editor of the International Computer Games Journal. "I think we've raised the bar — and raised it quite a bit — in terms of what can be achieved in computer technology and artificial intelligence," said Schaeffer, chair of the University of Alberta Department of
Computing Science. "With Chinook, we've pushed the envelope about one million times more than anything that's been done before."
A self-described "awful" checkers player, Schaeffer created Chinook to exploit the superior processing and memory capabilities of computers and determine the best way to incorporate artificial intelligence principles in order to play checkers.
With the help of some top-level checkers players, Schaeffer programmed heuristics — rules of thumb — into a computer software program that captured knowledge of successful and unsuccessful checkers moves. Then he and his team let the program run, while they painstakingly monitored, fixed, tweaked, and updated it as it went.
An average of 50 computers, with more than 200 running at peak times, were used every day to compute the knowledge necessary to complete Chinook. Now that it is complete, the program no longer needs heuristics — it has become a database of information that knows the best move to play in every situation of a game. If Chinook's opponent also plays perfectly the game would end in a draw.
"We've taken the knowledge used in artificial intelligence applications to the extreme by replacing human-understandable heuristics with perfect knowledge," Schaffer said. "It's an exciting demonstration of the possibilities that software and hardware are now capable of achieving."
Schaeffer started the Chinook project in 1989, with the initial goal of winning the human world checkers championship. In 1990 it earned the right to play for the championship. The program went on to lose in the championship match in 1992, but won it in 1994, becoming the first computer program to win a human world championship in any game — a feat recognized by the Guinness Book of World Records.
Chinook remained undefeated until the program was retired in 1997. With his sights set on developing Chinook into the perfect checkers program, Schaeffer restarted the project in 2001. "I'm thrilled with this achievement," he said. "Solving checkers has been something of an obsession of mine for nearly two decades, and it's really satisfying to see it through to its conclusion."
"I'm also really proud of the artificial intelligence program that we've built at the University of Alberta," he added. "We've built up the premier games group in the world, definitely second-to-none. And we've built up a strong, international, truly world-class reputation, and I'm very proud of that."
News from the world of maths: Understanding embryo development moves forward thanks to biology and maths
Research aimed at understanding the mechanisms underlying embryo development has taken a step forward thanks to collaborative work between biology and mathematics. A study of wing formation in the fruit fly (Drosophila melanogaster), led by the researchers Marco Milán and Javier Buceta, both in Barcelona, has led to the discovery of a new genetic function involved in this process, and furthers our understanding of the internal laws which regulate it.
The development of a living being is based on general laws written into the genetic code of each cell, which enable the cell to develop a specialist function by modifying the way it divides, its form and its behaviour. These changes are coordinated through a series of instructions that must be correctly
Appeared on: Thursday, April 26, 2007
Adobe to Open Source Flex
This initiative will let developers worldwide participate in the growth of the industry's most advanced framework for building cross-operating system rich Internet applications (RIAs) for the Web and enabling new Apollo applications for the desktop.
The open source Flex SDK and documentation will be available under the Mozilla Public License (MPL).
Available since June 2006, the free Adobe Flex SDK includes the technologies developers need to build effective Flex applications, including the MXML compiler and the ActionScript 3.0 libraries that make up the popular Flex framework. Together, these elements provide the modern, standards-based language and programming model used by leading businesses such as BMC Software, eBay, salesforce.com, Scrapblog, and Samsung to create RIAs deployed on the Adobe Flash Player.
This announcement expands on Adobe's commitment to open technology initiatives, including the contribution of source code for the ActionScript Virtual Machine to the Mozilla Foundation under the Tamarin project, the use of the open source WebKit engine in the "Apollo" project, and the release of the full PDF 1.7 specification for ISO standardization. By committing to releasing Flex source code to developers as open source, Adobe is embracing collaboration with the worldwide developer community and enabling other open source projects to take full advantage of the powerful capabilities of the Flex framework.
Using the MPL for open sourcing Flex will allow full and free access to source code. Developers will be able to freely download, extend, and contribute to the source code for the Flex compiler, components and application framework. Adobe also will continue to make the Flex SDK and other Flex products available under their existing commercial licenses, allowing both new and existing partners and customers to choose the license terms that best suit their requirements.
Starting this summer with the pre-release versions of the next release of the Flex product line, code named "Moxie," Adobe will post daily software builds of the Flex SDK on a public download site with a public bug database. The release of open source Flex under the MPL will occur in conjunction with the final release of Moxie, currently scheduled for the second half of 2007.
Microsoft Ups The Spam War Ante
Sep 24, 2004 (11:09 AM EDT)
Read the Original Article at http://www.informationweek.com/news/showArticle.jhtml?articleID=47902821
Cheapbulletproof.com makes an offer that's sure to appeal to spammers: "We guarantee your website will not get shutdown!!" Microsoft is putting that claim to the test. Last week, it took aim at this "bulletproof" Web hosting company, filing a lawsuit against company owner Levon Gillespie and numerous John Doe defendants who allegedly utilized his services.
The lawsuit is one of nine that Microsoft has filed against spammers in the past month. In less than two years, Microsoft has supported more than 100 anti-spam enforcement actions worldwide, about 70 of which the company has filed itself.
The suit is significant in that it represents a new front in the war on spam: spam service providers. "It's the first time we've taken action against a Web host hosting spam content," says Aaron Kornblum, Internet safety enforcement attorney at Microsoft.
"This particular Web host is providing a vital service to spammers," he explains. "He is giving spammers a place to host their content to sell their products and services. Spammers need a place to drive their customers to, and without these Web hosts setting up pages like these, spammers wouldn't be able to do business."
Microsoft's legal complaint alleges violations of the Washington Commercial Electronic Mail Act, the Washington Consumer Protection Act, the federal Can-Spam Act of 2003, and the Lanham Act (under which trademark claims are brought). The Washington State laws let Microsoft target those who assist spammers. Kornblum says Microsoft is making solid progress in its legal battles with spammers, but adds that it's too early to tell how litigation will affect the spam problem. The company has seen a variety of outcomes in individual cases so far. These include default judgments, when defendants fail to appear in court, settlements, and summary judgments--in California this summer, a judge found the facts overwhelmingly supported Microsoft's allegations and ruled accordingly. Some defendants have declared bankruptcy to avoid financial and legal responsibility; others have elected to fight.
"We're trying to change the economics of spam," he says. "That's our primary goal in our enforcement efforts. We're trying to make spamming a more expensive proposition and to drive spammers out of the market so that customers and consumers can trust the E-mail that they receive." | 计算机 |
Source: The New York Times
Date: August 19, 1996
Byline: Stacy Lu
Computers Making Inroads in Crossword Market
Puzzlers, take note: the biggest clue to doing crosswords in the '90s may be a nine-letter word that starts with a "C" -- computers.
Computer-generated puzzles are changing and expanding the crossword business. Although almost no one expects such puzzles to replace man-made ones completely, computer-generated puzzles now account for an increasing share of those on the market.
Crossword magazines began using software to mass-produce puzzles in the mid-1980s, and in the last several years individual puzzle writers have developed sophisticated programs for their own use. Commercial software is available for creating original crosswords, and puzzle sites abound on the Internet.
For many, computer-generated puzzles are indistinguishable from those created by people. But some critics worry that technology will produce bland, standardized puzzles, without any of the wit or nuance that have made crosswords a favorite American pastime.
Puzzle-making is well suited for computers. What machines handle best is the sort of complex and highly analytical process involved in making a crossword. It must assign words from a data base that fit both horizontally and vertically in a grid that meets classical American standards. That means a symmetrical pattern containing no two-letter words or uncrossed letters.
Filling in a grid requires a huge data base of words and phrases, which puzzle constructors usually gather from existing lists like dictionaries or reference works, while contributing their own favorites. Finally, the computer numbers the grid. In all but the simplest puzzles, the writer will then create the clues.
Crossword magazines took to computers first, partly because their less-complicated puzzles are easier to mass produce, and partly to satisfy a large and growing audience.
Puzzle magazine newsstand sales in the United States and Canada total approximately $70 million a year for 170 titles, or 1,200 to 1,300 issues, according to Meg McMann, vice president of publisher research and sales programs for Warner Publisher Services, a national distributor based in New York. While the number of titles has not changed recently, many now publish 16 issues a year, up from 12, Ms. McMann said.
Publishers are circumspect about how much of their product actually comes from computers. Still, Fran Danon, editor in chief of Penny Marketing of Norwalk, Conn., which owns Dell Magazines and Penny Press, said technology had a "tremendous" impact on her business.
"Only with the aid of computers have puzzle publishers been able to expand their lines," she said. "When I started working at Penny Press in 1983 we had five titles, and now there are 32." All told, Penny Press and Dell publish 62 titles a year.
Doug Heller, a crossword constructor in Flourtown, Pa., said he supervised the development of a computer program in 1988 for Kappa Publishing Group Inc. of Fort Washington, Pa., that helped the publisher increase its number of annual issues of puzzle magazines to about 800 from 200. Heller's latest version, which he developed for his own use, creates about 30 relatively simple puzzles a second.
Nevertheless, this same assembly line potential has aficionados worried.
"There are puzzles in the print media that are disseminated without a whole lot of quality," said Stanley Newman, managing director of puzzles and games for Times Books, a division of Random House Inc., in turn a unit of Advance Publications Inc. "Without the guardianship of professional editors, there's going to be a barge full of garbage swimming out to sea."
But Eric Albert of Newton, Mass., a former computer scientist for Bell Laboratories well known in the crossword industry for his elaborate proprietary software, argues that simplistic and poorly made puzzles done by hand are the problem.
"They're filled with weird old words, really boring definitions, virtually no humor whatsoever. Those puzzles are doomed. Computers can do them better or faster.
"The most difficult crossword puzzles to construct are the easier ones, and they're the ones editors pay the most for, because they're in shorter supply," he said. "With software, here's a way we can jumble all the words that people know."
Albert is one of the few constructors who supports himself entirely by puzzle-making. Most daily publications pay $15 to $75 a puzzle, and specialty magazines and commercial projects pay $100 to $500.
As well as compiling a huge word data base, Albert has rated each word for its suitability in puzzle-solving. Although this is a gargantuan project that even he admits takes "a certain amount of lunacy," he expects that someday people will probably sell such ratings systems.
Other constructors, like Bob Klahn of Wilmington, Del., have written programs that create grids, but prefer to produce puzzles in tandem with their computers. "It makes perfect sense to have the computer draw diagrams for you," he said. "I don't want it to be a substitute for mental gymnastics."
Most modern crosswords also have themes, which still need to be conceived by a human. Ditto creative, witty clues.
Yet crossword computing quality has already fooled the experts. Will Shortz, the crossword editor of The New York Times, estimates he receives about 75 puzzle submissions a week, more than any other editor. He said he chooses the best crossword puzzles for the newspaper regardless of whether they are done by hand or with a computer, and occasionally could not tell which was which.
"If the person submits the puzzle with handwritten letters in the grid, or if it's a very poor crossword, I can assume it's done by hand," he said. "Otherwise, it can be hard to tell."
Some industry leaders hope that electronic puzzles will help attract a new audience of computer users of all ages. Newman of Random House, the largest crossword book publisher at 30 titles a year, said the company was also enthusiastic about the possibilities of on-line crosswords. He believes that print and computer puzzlers are two almost entirely different audiences.
"The potential is there to reach the type of people that don't do crosswords now -- that don't buy books and magazines but are active on the Web and are software buyers. They're waiting for the right presentation to have it enter their world," Newman said.
Packaged software programs have had modest success. Lyriq International of Cheshire, Conn., issued its Lyriq Crosswords Premium Edition -- consisting of puzzles from The Washington Post, Penny Press and others -- on diskette in 1991. The product sold 35,000 copies at about $10 each in the first year, according to Randal Hujar, a Lyriq co-founder.
"When you look at the dynamics of who does crosswords, the overlap was not too high," he said. "But hobbies were moving to the computer, newspapers were going electronic, and we thought crosswords were going to be heading in that direction."
In January, Lyriq was purchased by Enteractive Inc., a New York multimedia publisher, in a stock transaction valued at about $3 million.
Do-it-yourself crosswords have been on the market for several years. Cogix Corp. in San Anselmo, Calif., makes Crossword Wizard, software that can produce an infinite number of puzzles. Total Wizard sales since the product was introduced in 1994 have totaled more than $2 million, said Camilo Wilson, the president of Cogix.
A bigger ticket for crossword expansion may be the World Wide Web, whose audience seems to have an insatiable appetite for games.
Riddler.com, a site developed by Interactive Imaginations Inc. of New York, features puzzles by Random House, which owns an equity stake in Riddler. Visitors to the site -- about 7,500 a day so far -- must enter demographic information to register for play, which determines which ads the player will see. Advertisers have included Toyota, Apple, Microsoft, Capital Records, Ziff Davis and NBC.
"From April 1995 to July 1996, we've increased our revenue twentyfold," said Michael Paolucci, co-founder of Interactive Imaginations.
To date, however, there are still too many puzzlers using old-fashioned pen and paper for technology's takeover to be complete. Merl Reagle, whom many name as one of the best constructors in America, has been making puzzles for 40 years. Reagle writes his crosswords in cafes or in bed, using a computer only to check for form errors.
"I've been doing it for so long it's like second nature," he said.
Besides, Reagle said, crossword fans like to test their mettle against a human, not a computer. "They like matching wits with me," he said. "And I know what makes a puzzle hard enough for it to be a puzzle, but not so hard that you can't solve it."
Ramblings in Valve Time
A blog by Michael Abrash
Raster-Scan Displays: More Than Meets The Eye
Posted on January 28, 2013 by MAbrash

Back in the spring of 1986, Dan Illowsky and I were up against the deadline for an article that we were writing for PC Tech Journal. The name of the article might have been "Software Sprites," but I'm not sure, since it's one of the few things I've written that seems not to have made it to the Internet. In any case, I believe the article showed two or three different ways of doing software animation on the very simple graphics hardware of the time. With the deadline looming, both the article and the sample code that would accompany it were written, but one part of the code just wouldn't work right.
As best I can remember, the problematic sample moved two animated helicopters and a balloon around the screen. All the drawing was done immediately after vsync; the point was to show that since nothing was being scanned out to the display at that time (vsync happens in the middle of the vertical blanking interval), the contents of the frame buffer could be modified with no visible artifacts. The problem was that when an animated object got high enough on the screen, it would start vanishing – oddly enough, from the bottom up – and more and more of the object would vanish as it rose until it was completely gone. Stranger still, the altitude at which this happened varied from object to object. We had no idea why that was happening – and the clock was ticking.
I’m happy to report that we did solve the mystery before the deadline. The problem was that back in those days of dog-slow 8088s and slightly faster 80286s, the display was scanning out pixels before the code had finished updating them. And if that explanation doesn’t make much sense to you at the moment, it should all be clear by the end of today’s post, which covers some decidedly non-intuitive consequences of an interesting aspect of the discussion of latency in the last post – the potentially problematic AR/VR implications of a raster scan display, and the way that racing the beam interacts with the raster scan to address those problems.
Raster scanning
Raster scanning is the process of displaying an image by updating each pixel one after the other, rather than all at the same time, with all the pixels on the display updated over the course of one frame. Typically this is done by scanning each row of pixels from left to right, and scanning rows from top to bottom, so the rightmost pixel on each scan line is updated a few microseconds after the leftmost pixel, and the bottommost row on the screen is updated a few milliseconds (roughly 15 ms for 60 Hz refresh – less than 16.7 ms because of vertical blanking time) after the topmost row. Figure 1 shows the order in which pixels are updated on an illustrative if not particularly realistic 8×4 raster scan display.
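To make the timing concrete, here's a small illustrative program (my own sketch, not something from the original post or any particular display's datasheet) that computes when each pixel of a rolling-scan display is updated within one 60 Hz frame. The 1280×800 resolution is just an assumed HMD-ish panel, and blanking intervals are ignored, which is why the bottom row comes out nearer 16.6 ms than the roughly 15 ms quoted above.

```cpp
// Illustrative only: when is each pixel of a rolling-scan display updated within one
// 60 Hz frame? The 1280x800 resolution is an assumption, and blanking intervals are
// ignored, so the whole 16.7 ms frame is treated as active scan-out.
#include <cstdio>

static double update_time_ms(int x, int y, int width, int height, double refresh_hz) {
    double frame_ms = 1000.0 / refresh_hz;
    double pixel_ms = frame_ms / (double(width) * height);   // time budget per pixel
    return (double(y) * width + x) * pixel_ms;                // left-to-right, top-to-bottom
}

int main() {
    const int w = 1280, h = 800;
    std::printf("top-left pixel:     %8.4f ms\n", update_time_ms(0,     0,     w, h, 60.0));
    std::printf("top-right pixel:    %8.4f ms\n", update_time_ms(w - 1, 0,     w, h, 60.0));
    std::printf("bottom-left pixel:  %8.4f ms\n", update_time_ms(0,     h - 1, w, h, 60.0));
    std::printf("bottom-right pixel: %8.4f ms\n", update_time_ms(w - 1, h - 1, w, h, 60.0));
    return 0;
}
```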
Originally, the raster scan pattern directly reflected the way the electron beam in a CRT moved to update the phosphors. There’s no longer an electron beam on most modern displays; now the raster scan reflects the order in which pixel data is scanned out of the graphics adapter and into the display. There’s no reason that the scan-in has to proceed in that particular order, but on most devices that’s what it does, although there are variants like scanning columns rather than rows, scanning each pair of lines in opposite directions, or scanning from the bottom up. If you could see events that happen on a scale of milliseconds (and, as we’ll see shortly, under certain circumstances you can), you would see pixel updates crawling across the screen in raster scan order, from left to right and top to bottom.
It’s necessary that pixel data be scanned into the display in some time-sequential pattern, because the video link (HDMI, for example) transmits pixel data in a stream. However, it’s not required that these changes become visible over time. It would be quite possible to scan in a full frame to, say, an LCD panel while it was dark, wait until all the pixel data has been transferred, and then illuminate all the pixels at once with a short, bright light, so all the pixel updates become visible simultaneously. I’ll refer to this as global display, and, in fact, it’s how some LCOS, DLP, and LCD panels work. However, in the last post I talked about reducing latency by racing the beam, and I want to follow up by discussing the interaction of that with raster scanning in this post. There’s no point to racing the beam unless each pixel updates on the display as soon as the raster scan changes it; that means that global display, which doesn’t update any pixel’s displayed value until all the pixels in the frame have been scanned in, precludes racing the beam. So for the purposes of today’s discussion, I’ll assume we’re working with a display that updates each pixel on the screen as soon as the scanned-in pixel data provides a new value for it; I’ll refer to this as rolling display. I’ll also assume we’re working with zero persistence pixels – that is, pixels that illuminate very brightly for a very short period after being updated, then remain dark for the remainder of the frame. This eliminates the need to consider the positions and times of both the first and last photons emitted, and thus we can ignore smearing due to eye movement relative to the display. Few displays actually have zero persistence or anything close to it, although scanning lasers do, but it will make it easier to understand the basic principles if we make this simplifying assumption.
Raster scanning is not how anything works in nature
To recap, racing the beam is when rendering proceeds down the frame just a little ahead of the raster, so that pixels appear on the screen shortly after they’re drawn. Typically this would be done by rendering the scene in horizontal strips of perhaps a few dozen lines each, using the latest reading from the tracking system to position each strip for the current HMD pose just before rendering it.
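Here's a minimal sketch of how such a strip-by-strip loop might be structured. It's my own illustration rather than anything from a real renderer: the tracker, the raster-position query, and the per-strip renderer are trivial stand-ins so the example compiles and runs, and the specific numbers (strip height, padding) are assumptions rather than recommendations.

```cpp
// Minimal sketch of a racing-the-beam render loop (hypothetical names, not production
// code). The tracker, raster-position query and per-strip renderer are trivial
// stand-ins so the example is self-contained.
#include <chrono>
#include <cstdio>

struct Pose { double yaw_deg; };                 // stand-in for a full head pose

static const int    kScreenHeight = 1080;
static const double kFrameSeconds = 1.0 / 60.0;
static const auto   kFrameStart   = std::chrono::steady_clock::now();

static double seconds_into_frame() {
    std::chrono::duration<double> d = std::chrono::steady_clock::now() - kFrameStart;
    return d.count();
}
static double raster_scanline_now() {            // stand-in: raster sweeps top to bottom
    return kScreenHeight * seconds_into_frame() / kFrameSeconds;
}
static Pose sample_tracker() {                   // stand-in: pose drifts during the frame
    return Pose{ 60.0 * seconds_into_frame() };
}
static void render_strip(int first_line, int num_lines, const Pose& pose) {
    std::printf("strip %4d..%4d rendered with yaw %.3f deg\n",
                first_line, first_line + num_lines - 1, pose.yaw_deg);
}

int main() {
    const int strip_height  = 120;               // assumed: 9 strips per 1080-line frame
    const int padding_lines = 240;               // assumed: covers worst-case strip render time

    for (int first = 0; first < kScreenHeight; first += strip_height) {
        // Hold this strip back until the raster is within padding_lines of it, so each
        // strip is rendered as late as possible while still finishing ahead of the beam.
        while (raster_scanline_now() + padding_lines < first) { /* spin */ }
        render_strip(first, strip_height, sample_tracker());
    }
    return 0;
}
```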
This is an effective latency-reducing technique, but it’s hard to implement, because it’s very timing-dependent. There’s no guarantee as to how long a given strip will take to render, so there’s a delicate balance involved in leaving enough padding so the raster won’t overtake rendering, while still getting close enough to the raster to reap significant latency reduction. As discussed in the last post, there are some interesting ways to try to address that balance, such as rendering the whole frame, then warping each strip based on the latest position data. In any case, racing the beam is capable of reducing display latency purely in software, and that’s a rare thing, so it’s worth looking into more deeply. However, before we can even think about racing the beam, we need to understand some non-intuitive implications of rolling display, which, as explained above, is required in order for racing the beam to provide any benefit.
So let’s look at a few scenarios. If you’re wearing an HMD with a 60 Hz rolling display, and rendering each frame in its entirety, waiting for vsync, and then scanning the frame out to the display in the normal fashion (with no racing the beam involved at this point), what do you think you’d see in each of the following scenarios? (Hint: think about what you’d see in a single frame for each scenario, and then just repeat that.)
Scenario 1: Head is not moving; eyes are fixated on a vertical line that extends from the top to the bottom of the display, as shown in Figure 2; the vertical line is not moving on the display.
Scenario 2: Head is not moving; the vertical line in Figure 2 is moving left to right on the display at 60 degrees/second; eyes are tracking the line.
Scenario 3: Head is not moving; the vertical line in Figure 2 is moving left to right relative to the display at 60 degrees/second across the center of the screen; eyes are fixated on the center of the screen, and are not tracking the line.
Scenario 4: Head is rotating left to right at 60 degrees/second; the vertical line in Figure 2 is moving right to left on the display at 60 degrees/second, compensating for the head motion so that to the eye the image appears to stay in the same place in the real world; eyes are counter-rotating, tracking the line.
Take a second to think through each of these and write down what you think you’d see. Bear in mind that raster scanning is not how anything works in nature; the pixels in a raster image are updated at differing times, and in the case of zero persistence aren’t even on at the same time. Frankly, it’s a miracle that raster images look like anything coherent at all to us; the fact that they do has to do with the way our visual system collects photons and makes inferences from that data, and at some point I hope to talk about that a little, because it’s fascinating (and far from fully understood).
Here are the answers, as shown in Figure 3, below:
Scenario 1: an unmoving vertical line.
Scenario 2: a line moving left to right, slanted to the right by about one degree from top to bottom. (The slant is exaggerated in Figure 3 to make it more visible; in an HMD, a one-degree slant is much more visible, for reasons I’ll discuss a little later.)
Scenario 3: a vertical line moving left to right.
Scenario 4: a line staying in the same place relative to the real world (although moving right to left on the display, compensating for the display movement from left to right), slanted to the left by about one degree from top to bottom.
How did you do? If you didn’t get all four, don’t feel bad; as I said at the outset, this is not intuitive – which is what makes it so interesting.
In a moment, I’ll explain these results in detail, but here’s the underlying rule for understanding what happens in such situations: your perception will be based on whatever pattern is actually produced on your retina by the photons emitted by the image. That may sound obvious, and in the real world it is, but with an HMD, the time-dependent sequence of pixel illumination makes it anything but.
Given that rule, we get a vertical line in scenario 1 because nothing is moving, so the image registers on the retina exactly as it’s displayed.
Things get more complicated with scenario 2. Here, the eye is smoothly tracking the image, so it’s moving to the right at 60 degrees/second relative to the display. (Note that 60 degrees/second is a little fast for smooth pursuit without saccades, but the math works out neatly on a 60 Hz display, so we’ll go with that.) The topmost pixel in the vertical line is displayed at the start of the frame, and lands at some location on the retina. Then the eye continues moving to the right, and the raster continues scanning down. By the time the raster reaches the last scan line and draws the bottommost pixel of the line, it’s something on the order of 15 ms later, and here we come to the crux of the matter – the eye has moved about one degree to the right since the topmost pixel was drawn. (Note that the eye will move smoothly in tracking the line, even though the line is actually drawn as a set of discrete 60 Hz samples.)
That means that the bottommost pixel will land on the retina about one degree to the right of the topmost pixel, which, due to the way images are formed on the retina and then flipped, will cause the viewer to perceive it to be one degree to the left of the topmost pixel. The same is true of all the pixels in the vertical line, in direct proportion to how much later they’re drawn relative to the topmost pixel. The pixels of the vertical line land on the retina slanted by one degree, so we see a line that’s similarly slanted, as shown in Figure 4 for an illustrative 4×4, 60 Hz display.
Note that for clarity, Figure 4 omits the retinal image flipping step and just incorporates its effects into the final result. The slanted pixels are shown at the locations where they’d be perceived; the pixels would actually land on the retina offset in the opposite direction, and reversed vertically as well, due to image inversion, but it’s the perceived locations that matter.
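The size of that slant is just the eye's speed relative to the display multiplied by the active scan-out time. A quick back-of-the-envelope check, using only the figures already given in this post:

```cpp
// Back-of-the-envelope check of the slant in scenario 2: the eye sweeps across the
// display at 60 degrees/second while the raster takes roughly 15 ms to get from the
// top row to the bottom row.
#include <cstdio>

int main() {
    const double eye_speed_deg_per_s = 60.0;    // smooth pursuit relative to the display
    const double active_scan_s       = 0.015;   // topmost to bottommost row, roughly

    double slant_deg = eye_speed_deg_per_s * active_scan_s;   // eye travel during scan-out
    std::printf("perceived slant: about %.1f degree(s)\n", slant_deg);  // ~0.9 degrees
    return 0;
}
```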
If it’s that easy to produce this effect, you may well ask: Why can’t I see it on a monitor? The answer depends on whether the monitor waits for vsync; that is, whether the entire rendered frame is scanned out to the display only once per displayed frame (i.e., at the refresh rate), or scanned out to the display as fast as frames can be drawn (so multiple rendered frames affect a single displayed frame, each in its own horizontal strip – a form of racing the beam).
In the case where vsync isn’t waited for, you won’t see lines slant for reasons that may already be obvious to you – because each horizontal strip is drawn at the right location based on the most recent position data; we’ll return to this later. However, in this case it’s easy to see the problem with not waiting for vsync as well. If vsync is off on your monitor, grab a screen-height window that has a high-contrast border and drag it rapidly left to right, then back right to left, and you’ll see that the vertical edge breaks up into segments. The segments are separated by the scan lines where the copy to the screen overtook the raster. If you move the window to the left and don’t track it with your eyes, the lower segments will be to the left of the segments above them, because as soon as the copy overtakes the raster (this assumes that the copy is faster than the raster update, which is very likely to be the case), the raster starts displaying the new pixels, which represent the most up-to-date window position as it moves to the left. This segmentation is called tearing, and is a highly visible artifact that needs to be carefully smoothed over for any HMD racing-the-beam approach.
In contrast, if vsync is waited for, there will be no tearing, but the slanting described above will be visible. If your monitor waits for vsync, grab a screen-height window and drag it back and forth, tracking it with your eyes, and you will see that the vertical edges do in fact tilt as advertised; it’s subtle, because it’s only about a degree and because the pixels smear due to long persistence, but it’s there.
In either case, the artifacts are far more visible for AR/VR in an HMD, because objects that dynamically warp and deform destroy the illusion of reality; in AR in particular, it’s very apparent when artifacts mis-register against the real world. Another factor is that in an HMD, your eyes can counter-rotate and maintain fixation while you turn your head (via the combination of the vestibulo-ocular reflex, or VOR, and the optokinetic response, or OKR), and that makes possible relative speeds of rotation between the eye and the display that are many times higher than the speeds at which you can track a moving object (via smooth pursuit) while holding your head still, resulting in proportionally greater slanting.
By the way, although it’s not exactly the same phenomenon, you can see something similar – and more pronounced – on your cellphone. Put it in back-facing camera mode, point it at a vertical feature such as a door frame, and record a video while moving it smoothly back and forth. Then play the video back while holding the camera still. You will see the vertical feature tilt sharply, or at least that’s what I see on my iPhone. This differs from scenario 4 because it involves a rolling shutter camera (if you don’t see any tilting, either you need to rotate your camera 90 degrees to align with the camera scan direction – I had to hold my iPhone with the long dimension horizontal – or your camera has a global shutter), but the basic principles of the interaction of photons and motion over time are the same, just based on sampling incoming photons in this case rather than displaying outgoing ones. (Note that it is risky to try to draw rolling display conclusions relevant to HMDs from experiments with phone cameras because of the involvement of rolling shutter cameras, because the frame rates and scanning directions of the cameras and displays may differ, and because neither the camera nor the display is attached to your head.)
Scenario 3 results in a vertical line for the same reason as scenario 1. True, the line is moving between frames, but during a frame it’s drawn as a vertical line on the display. Since the eye isn’t moving relative to the display, that image ends up on the retina exactly as it’s displayed. (A bit of foreshadowing for some future post: the image for the next frame will also be vertical, but will be at some other location on the retina, with the separation depending on the velocity of motion – and that separation can cause its own artifacts.)
It may not initially seem like it, but scenario 4 is the same as scenario 2, just in the other direction. I’ll leave this one as an exercise for the reader, with the hint that the key is the motion of the eye relative to the display.
Rolling displays can produce vertical effects as well, and they can actually be considerably more dramatic than the horizontal ones. As an extreme but illustrative example (you’d probably injure yourself if you actually tried to move your head at the required speed), take a moment and try to figure out what would happen if you rotated your head upward over the course of a frame at exactly the same speed that the raster scanned down the display, while fixating on a point in the real world.
The answer is that the entire frame would collapse into a single horizontal line, because every scan line will land in exactly the same place on the retina. Less rapid motion will result in vertical compression of the image. Vertical motion in the same direction as the raster scan will similarly result in vertical expansion. Either case can cause either intra- or inter-frame brightness variation.
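As a rough illustration of the vertical case, here is a small sketch (my own numbers: an assumed 60-degree vertical field of view scanned top to bottom in about 15 ms) of where successive scan lines land, perceptually, when the gaze sweeps down the display during scan-out, as it does when the head pitches up while the eyes stay fixated on a point in the world. At the full raster rate every line coincides with the first one, which is the collapse just described; at half the rate the image is compressed to half height.

```cpp
// Sketch only: perceived vertical position of each scan line when the gaze sweeps down
// the display during scan-out. Field of view and scan time are assumptions; matching
// the raster rate would require an absurdly fast head rotation, as the post points out.
#include <cstdio>

int main() {
    const int    lines      = 5;                       // a handful of illustrative scan lines
    const double fov_deg    = 60.0;                    // assumed vertical extent of the display
    const double scan_s     = 0.015;                   // assumed active scan-out time per frame
    const double raster_dps = fov_deg / scan_s;        // raster sweep rate: 4000 deg/s

    // Gaze-down-the-display rates to try: none, half the raster rate, the full raster rate.
    const double gaze_rates[] = { 0.0, raster_dps / 2.0, raster_dps };

    for (double gaze_dps : gaze_rates) {
        std::printf("gaze sweeping down display at %.0f deg/s:\n", gaze_dps);
        for (int i = 0; i < lines; ++i) {
            double t = scan_s * i / (lines - 1);                 // when line i is drawn
            double perceived_deg = (raster_dps - gaze_dps) * t;  // offset below line 0 on retina
            std::printf("  line %d perceived %5.1f deg below line 0\n", i, perceived_deg);
        }
    }
    return 0;   // full rate: all lines coincide (collapse); half rate: 2x vertical compression
}
```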
None of this is hypothetical, nor is it a subtle effect. I’ve looked at cubes in an HMD that contort as if they’re made of Jell-O, leaning this way and that, compressing and expanding as I move my head around. It’s hard to miss.
Racing the beam fixes everything – or does it?
In sum, rolling display of a rendered frame produces noticeable shear, compression, expansion, and brightness artifacts that make both AR and VR less solid and hence less convincing; the resulting distortion may also contribute to simulator sickness. What’s to be done? Here we finally return to racing the beam, which updates the position of each scan line or block of scan lines just before rendering, which in turn occurs just before scan-out and display, thereby compensating for intra-frame motion and placing pixels where they should be on the retina. (Here I’m taking “racing the beam” to include the whole family of warping and reconstruction approaches that were mentioned in the last post and the comments on the post.) In scenario 4, HMD tracking data would cause each scan line or horizontal strip of scan lines to be drawn slightly to the left of the one above, which would cause the pixels of the image to line up in proper vertical arrangement on the retina. (Another approach would be the use of a global display; that comes with its own set of issues, not least the inability to reduce latency by racing the beam, which I hope to talk about at some point.)
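To make the warping variant concrete, here is one possible sketch of the idea (again my own illustration, not Valve's implementation): render the whole frame once with the pose available at render time, then, just before each strip is scanned out, shift that strip by however far the head has yawed since then. A real system would do a proper re-projection rather than the pure horizontal pixel shift used here, and the sign of the shift depends on the coordinate conventions in use.

```cpp
// Sketch of per-strip warping before scan-out. Assumes yaw-only head motion and a simple
// horizontal pixel shift as the "warp"; real re-projection would be a full 3D transform.
#include <vector>

struct Frame {
    int width, height;
    std::vector<unsigned> pixels;            // rendered once, with the render-time pose
};

// Shift one horizontal strip of 'src' by 'shift_px' pixels into 'dst' (edges fill with black).
static void warp_strip(const Frame& src, Frame& dst, int first_line, int num_lines, int shift_px) {
    for (int y = first_line; y < first_line + num_lines && y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            int sx = x - shift_px;           // where this output pixel came from
            dst.pixels[y * src.width + x] =
                (sx >= 0 && sx < src.width) ? src.pixels[y * src.width + sx] : 0u;
        }
    }
}

// Called once per strip, just ahead of the raster reaching it. Sign convention is arbitrary.
static void warp_for_scanout(const Frame& src, Frame& dst, int first_line, int num_lines,
                             double render_yaw_deg, double latest_yaw_deg, double pixels_per_degree) {
    int shift_px = int((latest_yaw_deg - render_yaw_deg) * pixels_per_degree + 0.5); // crude rounding
    warp_strip(src, dst, first_line, num_lines, shift_px);
}

int main() {
    Frame src{320, 240, std::vector<unsigned>(320 * 240, 0xFFFFFFFFu)};
    Frame dst = src;
    // Suppose the head yawed 0.5 degrees since the frame was rendered, at 20 pixels/degree:
    warp_for_scanout(src, dst, /*first_line=*/120, /*num_lines=*/40,
                     /*render_yaw_deg=*/0.0, /*latest_yaw_deg=*/0.5, /*pixels_per_degree=*/20.0);
    return 0;
}
```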
So it appears that racing the beam, for all its complications, is a great solution not only to display latency but also to rolling display artifacts – in fact, it seems to be required in order to address those artifacts – and that might well be the case. But I’ll leave you with a few thoughts (for which the bulk of the credit goes to Atman Binstock and Aaron Nicholls, who have been diving into AR/VR perceptual issues at Valve):
1) The combination of racing the beam and compensating for head motion can fix scenario 4, but that scenario is a specific case of a general problem; head-tracking data isn’t sufficient to allow racing the beam to fix the rolling display artifacts in scenario 2. Remember, it’s the motion of the eye relative to the display, not the motion of the head, that’s key.
2) It’s possible, when racing the beam, to inadvertently repeat or omit horizontal strips of the scene, in addition to the previously mentioned brightness variations. (In the vertical rotation example above, where all the scan lines collapse into a single horizontal line, think about what each scan line would draw.)
3) Getting rid of rolling display artifacts while maintaining proper AR registration with the real world for moving objects is quite challenging – and maybe even impossible.
These issues are key, and I’ll return to them at some point, but I think we’ve covered enough ground for one post.
Finally, in case you still aren’t sure why the sprites in the opening story vanished from the bottom up, it was because both the raster and the sprite rendering were scanning downward, with the raster going faster. Until it caught up to the current rendering location, the raster scanned out pixels that had already been rendered; once it passed the current rendering location, it scanned out background pixels, because the foreground image hadn’t yet been drawn to those pixels. Different images started to vanish at different altitudes because the images were drawn at different times, one after the other, and vanishing was a function of the raster reaching the scan lines the image was being drawn to as it was being drawn, or, in the case of vanishing completely, before it was drawn. Since the raster scans at a fixed speed, images that were drawn sooner would be able to get higher before vanishing, because the raster would still be near the top of the screen when they were drawn. By the time the last image was drawn, the raster would have advanced far down the screen, and the image would start to vanish at that much lower level.
Continuous Time Recurrent Neural Networks for Generative and Interactive Musical Performance
Bown, Oliver and Lexer, Sebastian (2006) Continuous Time Recurrent Neural Networks for Generative and Interactive Musical Performance. In: EvoWorkshops2006: EvoMUSART; the 4th European Workshop on Evolutionary Music and Art, 10 - 12 April 2006, Budapest, Hungary. [Creative Arts and Design > Sound Arts & Design]

Creators: Bown, Oliver and Lexer, Sebastian

Description: 'Continuous-Time Recurrent Neural Networks for Generative and Interactive Musical Performance' was a paper delivered at the 4th European Workshop on Evolutionary Music and Art, subsequently published in proceedings. The paper begins by identifying the dominant paradigm located at the junction of computer music and artificial intelligence as one which identifies the goal of developing generative or interactive software agents which exhibit musicality. Where once such a goal remained the preserve of dedicated research institutes with access to massively parallel computer networks, the authors demonstrate that this is no longer the case with increased computational capacity in consumer hardware and extensible music programmes such as Max/MSP, PD and Supercollider.

The researchers identify a problem with this potentially liberating convergence of raw number crunching and flexible programming languages: musicians tend to tailor their own dedicated solutions to their own perceived needs. Rather than follow this route, the researchers determined to engineer a generic behavioural tool that can be developed in different directions by different practising musicians, each with their own aesthetic instincts and compositional requirements.

The vehicle through which the researchers develop their solution is a specific form of artificial intelligence called a Continuous-Time Recurrent Neural Network. The researchers evoke a number of previous approaches that have sought to simulate complex systems in both analogue and computer-based settings. The advantage of their chosen model is that the AI is able to evolve in response to stimuli - in this case musical events - and for that evolution to function as a kind of training, meaning that the system can be programmed to adapt to different kinds of musicians' work. The researchers argue that a CTRNN-based evolutionary music system is ideally suited to working with improvising musicians. They provide evidence for this argument, in the form of applied case studies.

Official Website: http://www.springerlink.com/content/y126n478j116p632/
Type of Research: Conference, Symposium or Workshop Item (Paper)

Additional Information (Publicly available): Current Research

I am working with CRiSAP co-director Cathy Lane on a project to develop audio analysis software, in particular for use by composers producing artistic works with collections of speech recordings. This software will enable users to search through a database of sound for similarities in timbre or pitch content. We are making use of a state-of-the-art set of open-source low-level tools currently being developed by the OMRAS II group at Goldsmiths, University of London, which are designed to perform this task over a network for Google-style searching of music databases. Our project involves embedding this existing software in a graphical user interface program which composers and sonic artists can use creatively on their own collections of recorded sound, and aims to make the software accessible to end users without needing an understanding of the low level behaviour of audio similarity technology.

As well as the development of the software, the project also involves research into the artistic practice of composing music using automatic analysis of audio similarity, and in particular the specific case of the analysis of vocal sounds, and some creative works will be developed as an outcome of the project. In general this project reflects CRiSAP's commitment to research into the development of new software tools for the sonic arts, and their philosophy that the rapid development of prototype software for creative use is a valuable goal.

Your affiliations with UAL: Colleges > London College of Communication
Date: 04 March 2006
Related Websites: http://www.olliebown.com/main_blog/?page_id=4, http://www.lcc.arts.ac.uk/41501.htm
Event Location: Budapest, Hungary
Frank Washkuch
Navigating the online contest trend
Marketers are increasingly using web-based sweepstakes and contests in their integrated marketing campaigns, bringing a classic direct marketing tactic to consumers via the Internet. Brands that have recently employed web-based sweepstakes include tortilla company Mission Foods, Kodak and Ragu. Companies use web-based contests to increase their engagement with consumers and to add a valuable element to their websites, said Norma Rojas, senior marketing director at Mission Foods, which launched the “Mission Menus Challenge” competition in late June. “The reason why we have a contest primarily online is to enhance and strengthen the campaign overall,” she said. “We are doing this right now as a way to engage people who use our product.... I think the online contest is a great component.” Mission Foods' integrated effort also contains TV, out-of-home, a microsite, social media, e-mail and in-store signage. The contest encourages consumers to enter to win a $10,000 kitchen appliance. Rojas added that making the contest customer-friendly is a priority for marketers. “We wanted to make sure it's easy to find and to fill out the forms, and that it doesn't take a long time,” she said. Kodak launched a business-to-business contest last month to promote its i4000 Series of scanners with digital and direct agency Catalyst, its longtime direct marketing partner. Online contests also allow brands to repeatedly communicate with customers who have opted in for further communications, notes Frank Magnera, account director at Catalyst. “Any contest or sweepstakes where you can continue to reach out to folks after they have registered is a way to keep your name top of mind,” he said, adding that contests also work best with recipients who have some knowledge of the product being given away. “We find early in the process that everyone wants to win something, but it's not very attractive to someone who doesn't know anything about the scanner. But when we've been communicating on a regular basis, we find out they're much more interested.”
From the July 2010 Issue of Direct Marketing News »
open source, open genomics, open content
Hell Goes Sub-Zero, Sony Does Open Source
For me, Sony has always been the antithesis of open source. So this comes as something of a shock:

Sony Pictures Imageworks, the award-winning visual effects and digital character animation unit of Sony Pictures Digital Productions, is launching an open source development program, it was announced today by Imageworks' chief technology officer, Rob Bredow. Five technologies will be released initially:

* OSL, a programmable shading language for rendering
* Field3d, a voxel data storage library
* Maya Reticule, a Maya Plug-in for camera masking
* Scala Migration, a database migration tool
* Pystring, python-like string handling in C++

Imageworks' production environment, which is known for its photo-real visual effects, digital character performances, and innovative technologies to facilitate their creation, has incorporated open source solutions, most notably the Linux operating system, for many years. Now the company is contributing back to the open source community by making these technologies available. The software can be used freely around the world by both large and small studios. Each project has a team of passionate individuals supporting it who are interested in seeing the code widely used. The intention of the open source release is to build larger communities to adopt and further refine the code.

OK, so it's only a tiny, non-mainstream bit of Sony, but it's surely a sign of the times....

Follow me @glynmoody on Twitter @glynmoody and identi.ca.
glyn moody
sony pictures imageworks,
voxel
Why Single Sign On Systems Are Bad
Wow, here's a really great article about identity management from, um, er, Microsoft. Actually, it's a rather remarkable Microsoft article, since it contains the following sentences:

On February 14, 2006, Microsoft Chairman Bill Gates declared that passwords would be gone where the dinosaurs rest in three to four years.

But as I write this in March 2009, it is pretty clear that Bill was wrong.

But it's not for that frisson that you should read it; it's for the following insight, which really needs hammering home:

The big challenge with respect to identity is not in designing an identity system that can provide SSO [Single Sign On], even though that is where most of the technical effort is going. It's not even in making the solution smoothly functioning and usable, where, unfortunately, less effort is going. The challenge is that users today have many identities. As I mentioned above, I have well over 100. On a daily basis, I use at least 20 or 25 of those. Perhaps users have too many identities, but I would not consider that a foregone conclusion.

The purist would now say that "SSO can fix that problem." However, I don't think it is a problem. At least it is not the big problem. I like having many identities. Having many identities means I can rest assured that the various services I use cannot correlate my information. I do not have to give my e-mail provider my stock broker identity, nor do I have to give my credit card company the identity I use at my favorite online shopping site. And only I know the identity I use for the photo sharing site. Having multiple identities allows me to keep my life, and my privacy, compartmentalized.

Yes yes yes yes yes. *This* is what the UK government simply does not want to accept: creating a single, all-powerful "proof" of identity is actually exactly the wrong thing to do. Once compromised, it is hugely dangerous. Moreover, it gives too much power to the provider of that infrastructure - which is precisely why the government *loves* it. (Via Ideal Government.)

Follow me @glynmoody on Twitter @glynmoody and identi.ca.
bill gates,
id cards,
id management,
Open Source Cognitive Science
A new site with the self-explanatory name of "Open source cognitive science" has an interesting opening post about Tools for Psychology and Neuroscience, pointing out that:

Open source tools make new options available for designing experiments, doing analysis, and writing papers. Already, we can see hardware becoming available for low-cost experimentation. There is an OpenEEG project. There are open source eye tracking tools for webcams. Stimulus packages like VisionEgg can be used to collect reaction times or to send precise timing signals to fMRI scanners. Neurolens is a free functional neuroimage analysis tool.

It also has this information about the increasingly fashionable open source statistics package R that was news to me, and may be of interest to others:

R code can be embedded directly into a LaTeX or OpenOffice document using a utility called Sweave. Sweave can be used with LaTeX to automatically format documents in APA style (Zahn, 2008). With Sweave, when you see a graph or table in a paper, it's always up to date, generated on the fly from the original R code when the PDF is generated. Including the LaTeX along with the PDF becomes a form of reproducible research, rooted in Donald Knuth's idea of literate programming. When you want to know in detail how the analysis was done, you need look no further than the source text of the paper itself.

Follow me @glynmoody on Twitter @glynmoody and identi.ca.
open source cognitive science,
openeeg,
visionegg
Transparency Saves Lives
Here's a wonderful demonstration that the simple fact of transparency can dramatically alter outcomes - and, in this case, save lives:

Outcomes for adult cardiac patients in the UK have improved significantly since publication of information on death rates, research suggests.

The study also found more elderly and high-risk patients were now being treated, despite fears surgeons would not want to take them on.

It is based on analysis of more than 400,000 operations by the Society for Cardiothoracic Surgery.

Fortunately, people are drawing the right conclusions:

Experts said all surgical specialties should now publish data on death rates.

Follow me @glynmoody on Twitter @glynmoody and identi.ca.
Profits Without Intellectual Monopolies
Great interview with Mr Open Innovation, Eric von Hippel, who has these wise words of advice:

It is true that the most rapidly developing designs are those where many can participate and where the intellectual property is open. Think about open source software as an example of this. What firms have to remember is that they have many ways to profit from good new products, independent of IP. They've got brands; they've got distribution; they've got lead time in the market. They have a lot of valuable proprietary assets that are not dependent on IP.

If you're going to give out your design capability to others, users specifically, then what you have to do is build your business model on the non-design components of your mix of competitive advantages. For instance, recall the case of custom semiconductor firms I mentioned earlier. Those companies gave away their job of designing the circuit to the user, but they still had the job of manufacturing those user-designed semiconductors, they still had the brand, they still had the distribution. And that's how they make their money.

Follow me @glynmoody on Twitter @glynmoody and identi.ca.
eric von hippel,
intellectual monopolies,
open innovation,
RIAA's War on Sharing Begins
Words matter, which is why the RIAA has always framed copyright infringement in terms of "piracy". But it has a big problem: most people call it "sharing"; and as everyone was told by their mother, it's good to share. So the RIAA needs to redefine things, and it seems that it's started doing just that in the Joel Tanenbaum trial:

"We are here to ask you to hold the defendant responsible for his actions," said Reynolds, a partner in the Boulder, Colorado office of Holme, Robert & Owen. "Filesharing isn't like sharing that we teach our children. This isn't sharing with your friends."

Got that? P2P sharing isn't *real* sharing, because it's not sharing with your friends; this is *evil* sharing because it's bad to share with strangers. Apparently.

Watch out for more of this meme in the future.

Follow me @glynmoody on Twitter @glynmoody and identi.ca.
joel tanenbaum,
riaa,
war on sharing
It's Not Open Science if it's Not Open Source
Great to see a scientist come out with this in an interesting post entitled "What, exactly, is Open Science?":

granting access to source code is really equivalent to publishing your methodology when the kind of science you do involves numerical experiments. I'm an extremist on this point, because without access to the source for the programs we use, we rely on faith in the coding abilities of other people to carry out our numerical experiments. In some extreme cases (i.e. when simulation codes or parameter files are proprietary or are hidden by their owners), numerical experimentation isn't even science. A "secret" experimental design doesn't give skeptics the ability to repeat (and hopefully verify) your experiment, and the same is true with numerical experiments. Science has to be "verifiable in practice" as well as "verifiable in principle".

The rest is well worth reading too. (Via @phylogenomics.)

Follow me @glynmoody on Twitter @glynmoody and identi.ca.
open access,
open science,
Why Hackers Will Save the World
For anyone that might be interested, my keynote from the recent Gran Canaria Desktop Summit is now online as an Ogg video.
gnome,
gran canaria,
kde,
ogg,
Bill Gates Shows His True Identity
And so it starts to come out:

Microsoft is angling to work on India's national identity card project, Mr. Gates said, and he will be meeting with Nandan Nilekani, the minister in charge. Like Mr. Gates, Mr. Nilekani stopped running the technology company he helped to start, Infosys, after growing it into one of the biggest players in the business. He is now tasked with providing identity cards for India's 1.2 billion citizens starting in 2011. Right now in India, many records like births, deaths, immunizations and driving violations are kept on paper in local offices.

Mr. Gates was also critical of the United States government's unwillingness to adopt a national identity card, or allow some businesses, like health care, to centralize data keeping on individuals.

Remind me again why we bother listening to this man...

Follow me @glynmoody on Twitter or identi.ca.
centralised databases,
Why the GNU GPL v3 Matters Even More
A little while back, I wrote a post called "Why the GNU GPL Still Matters". I was talking in general terms, and didn't really distinguish between the historical GNU GPL version 2 and the new version 3. That's in part because I didn't really have any figures on how the latter was doing. Now I do, because Matt Asay has just published some plausible estimates:

In July 2007, version 3 of the GNU General Public License barely accounted for 164 projects. A year later, the number had climbed past 2,000 total projects. Today, as announced by Google open-source programs office manager Chris DiBona, the number of open-source projects licensed under GPLv3 is at least 56,000.

And that's just counting the projects hosted at Google Code.

In a hallway conversation with DiBona at OSCON, he told me roughly half of all projects on Google Code use the GPL and, of those, roughly half have moved to GPLv3, or 25 percent of all Google Code projects.

With more than 225,000 projects currently hosted at Google Code, that's a lot of GPLv3.

If we make the reasonable assumption that other open-source project repositories Sourceforge.net and Codehaus have similar GPLv3 adoption rates, the numbers of GPLv3 projects get very big, very fast.

This is important not just because it shows that there's considerable vigour in the GNU GPL licence yet, but because version 3 addresses a particularly hot area at the moment: software patents. The increasing use of GPL v3, with its stronger, more developed response to that threat, is therefore very good news indeed.

Follow me @glynmoody on Twitter or identi.ca.
chris dibona,
gnu gpl v3,
oscon,
Pat "Nutter" Brown Strikes Again
To change the world, it is not enough to have revolutionary ideas: you also have to have the inner force to be able to realise them in the face of near-universal opposition/indifference/derision. Great examples of this include Richard Stallman, who ploughed his lonely GNU furrow for years before anyone took much notice, and Michael Hart, who did the same for Project Gutenberg.

Another of these rare beings with both vision and tenacity is Pat Brown, a personal hero of mine. Not content with inventing one of the most important experimental tools in genomics - DNA microarrays - Brown decided he wanted to do something ambitious: open access publishing. This urge turned into the Public Library of Science (PLoS) - and even that is just the start:

PLoS is just part of a longer range plan. The idea is to completely change the way the whole system works for scientific communication.

At the start, I knew nothing about the scientific publishing business. I just decided this would be a fun and important thing to do. Mike Eisen, who was a post-doc in my lab, and I have been brain-storming a strategic plan, and PLoS was a large part of it. When I started working on this, almost everyone said, "You are completely out of your mind. You are obviously a complete idiot about how publishing works, and besides, this is a dilettante thing that you're doing." Which I didn't feel at all.

I know I'm serious about it and I know it's doable and I know it's going to be easy. I could see the thermodynamics were in my favor, because the system is not in its lowest energy state. It's going to be much more economically efficient and serve the customers a lot better being open access. You just need a catalyst to GET it there. And part of the strategy to get it over the energy barrier is to apply heat—literally, I piss people off all the time.

In case you hadn't noticed, that little plan "to completely change the way the whole system works for scientific communication" is coming along quite nicely. So, perhaps buoyed up by this, Brown has decided to try something even more challenging:

Brown: ... I'm going to do my sabbatical on this: I am going to devote myself, for a year, to trying to the maximum extent possible to eliminate animal farming on the planet Earth.

Gitschier: [Pause. Sensation of jaw dropping.]

Brown: And you are thinking I'm out of my mind.

Gitschier: [Continued silence.]

Brown: I feel like I can go a long way toward doing it, and I love the project because it is purely strategy. And it involves learning about economics, agriculture, world trade, behavioral psychology, and even an interesting component of it is creative food science.

Animal farming is by far the most environmentally destructive identified practice on the planet. Do you believe that? More greenhouse production than all transportation combined. It is also the major single source of water pollution on the planet. It is incredibly destructive. The major reason reefs are dying off and dead zones exist in the ocean—from nutrient run-off. Overwhelmingly it is the largest driving force of deforestation. And the leading cause of biodiversity loss.

And if you think I'm bullshitting, the Food and Agricultural Organization of the UN, whose job is to promote agricultural development, published a study, not knowing what they were getting into, looking at the environmental impact of animal farming, and it is a beautiful study!
And the bottom line is that it is the most destructive and fastest growing environmental problem.

Gitschier: So what is your plan?

Brown: The gist of my strategy is to rigorously calculate the costs of repairing and mitigating all the environmental damage and make the case that if we don't pay as we go for this, we are just dumping this huge burden on our children. Paying these costs will drive up the price of a Big Mac and consumption will go down a lot. The other thing is to come up with yummy, nutritious, affordable mass-marketable alternatives, so that people who are totally addicted to animal foods will find alternatives that are inherently attractive to eat, so much so that McDonald's will market them, too. I want to recruit the world's most creative chefs—here's a REAL creative challenge!

I've talked with a lot of smart people who are very keen on it actually. They say, "You have no chance of success, but I really hope you're successful." That's just the kind of project I love.

Pat, the world desperately needs nutters like you. Let's just hope that the thermodynamics are in your favour once more.

Follow me @glynmoody on Twitter or identi.ca.
No Patents for Circuits? Since You Insist...
I love this argument:

Arguments against software patents have a fundamental flaw. As any electrical engineer knows, solutions to problems implemented in software can also be realized in hardware, i.e., electronic circuits. The main reason for choosing a software solution is the ease in implementing changes, the main reason for choosing a hardware solution is speed of processing. Therefore, a time critical solution is more likely to be implemented in hardware. While a solution that requires the ability to add features easily will be implemented in software. As a result, to be intellectually consistent those people against software patents also have to be against patents for electronic circuits.

People seem to think that this is an invincible argument *for* software patents; what the poor darlings fail to notice is that it's actually an invincible argument *against* patents for circuits.

Since software is just algorithms, which is just maths, which cannot be patented, and this clever chap points out that circuits are just software made out of hardware, it follows that we shouldn't allow patents for circuits (but they can still be protected by copyright, just as software can.) So, thanks for the help in rolling back what is patentable...
Has Google Forgotten Celera?
One of the reasons I wrote my book Digital Code of Life was that the battle between the public Human Genome Project and the privately-funded Celera mirrored so closely the battle between free software and Microsoft - with the difference that it was our genome that was at stake, not just a bunch of bits. The fact that Celera ultimately failed in its attempt to sequence and patent vast chunks of our DNA was the happiest of endings.

It seems someone else knows the story:

Celera was the company founded by Craig Venter, and funded by Perkin Elmer, which played a large part in sequencing the human genome and was hoping to make a massively profitable business out of selling subscriptions to genome databases. The business plan unravelled within a year or two of the publication of the first human genome. With hindsight, the opponents of Celera were right. Science is making and will make much greater progress with open data sets.

Here are some rea[s]ons for thinking that Google will be making the same sort of mistake as Celera if it pursues the business model outlined in its pending settlement with the AAP and the Author's Guild....

Thought provoking stuff, well worth a read.

Follow me @glynmoody on Twitter or identi.ca.
Building on Open Data
One of the great things about openness is that it lets people do incredible things by adding to it in a multiplicity of ways. The beauty is that those releasing material don't need to try to anticipate future uses: it's enough that they make it as open as possible. Indeed, the more open they make it, the more exciting the re-uses will be. Here's an unusual example from the field of open data, specifically, the US government data held on Data.gov:

The purpose of Data.gov is to increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government. Although the initial launch of Data.gov provides a limited portion of the rich variety of Federal datasets presently available, we invite you to actively participate in shaping the future of Data.gov by suggesting additional datasets and site enhancements to provide seamless access and use of your Federal data. Visit today with us, but come back often. With your help, Data.gov will continue to grow and change in the weeks, months, and years ahead.

Here's how someone intends to go even further:

Today I’m happy to announce Sunlight Labs is stealing an idea from our government. Data.gov is an incredible concept, and the implementation of it has been remarkable. We’re going to steal that idea and make it better. Because of politics and scale there’s only so much the government is going to be able to do. There are legal hurdles and boundaries the government can’t cross that we can. For instance: there’s no legislative or judicial branch data inside Data.gov and while Data.gov links off to state data catalogs, entries aren’t in the same place or format as the rest of the catalog. Community documentation and collaboration are virtual impossibilities because of the regulations that impact the way Government interacts with people on the web.

We think we can add value on top of things like Data.gov and the municipal data catalogs by autonomously bringing them into one system, manually curating and adding other data sources and providing features that, well, Government just can’t do. There’ll be community participation so that people can submit their own data sources, and we’ll also catalog non-commercial data that is derivative of government data like OpenSecrets. We’ll make it so that people can create their own documentation for much of the undocumented data that government puts out and link to external projects that work with the data being provided.

This is the future.

Follow me @glynmoody on Twitter or identi.ca.
British Library Turns Traitor
I knew the British Library was losing its way, but this is ridiculous:

The British Library Business & IP Centre at St Pancras, London can help you start, run and grow your business.

And how might it do that?

Intellectual property can help you protect your ideas and make money from them. Our resources and workshops will guide you through the four types of intellectual property: patents, trade marks, registered designs and copyright.

This once-great institution used to be about opening up the world's knowledge for the benefit and enjoyment of all: today, it's about closing it down so that only those who can afford to pay get to see it.

What an utter disgrace.

Follow me @glynmoody on Twitter or identi.ca.
Patents *Are* Monopolies: It's Official
As long-suffering readers of this blog will know, I refer to patents and copyrights as intellectual monopolies because, well, that's what they are. But there are some who refuse to accept this, citing all kinds of specious reasons why it's not correct.

Well, here's someone else who agrees with me:

A further and more significant change may come from the President's nomination of David Kappos of IBM to be the next Director of the Patent Office. While in the past, IBM was a prolific filer of patent applications, many of them covering business methods and software, it has filed an amicus brief in Bilski opposing the patentability of business method patents. However, and perhaps not surprisingly, IBM defends approval of software patents.

Mr. Kappos announced his opposition to business method patents last year by stating that "[y]ou're creating a new 20-year monopoly for no good reason."

Yup: the next Director of the USPTO says patents are monopolies: it's official. (Via @schestowitz.)

Follow me @glynmoody on Twitter or identi.ca.
Harvard University Press on Scribd
This sounds like a great move:

It’s a recession. Save the $200,000 you were going to spend on that Harvard education and check out some of the books Harvard University Press is selling on Scribd starting today.

It's so obviously right: saving trees, and making academic materials more readily available, thus boosting the recognition achieved by the authors - exactly what they want. Plus readers can buy the books much more cheaply, allowing many more people to access rigorous if rather specialist knowledge.

Or maybe not: a quick look through the titles shows prices ranging from mid-teens up to $45. Come on, people, these are *electrons*: they are cheap. The whole point is to use this fact to spread knowledge, reputation and joy more widely.

When will they (HUP) learn?

Follow me @glynmoody on Twitter or identi.ca.
Gadzooks - it's ZookZ from Antigua
I've been following the rather entertaining case of Antigua vs. US for a few years now. Basically, the US government has taken a "do as I say, not as I do" attitude to the WTO - refusing to follow the latter's rules while seeking to enforce them against others. The net result is that plucky little Antigua seems to have won some kind of permission to ignore US copyright - up to a certain point - although nobody really knows what this means in practice.

That's not stopping an equally cheeky Antigua-based company from trying to make money from this situation:

ZookZ provides a new way to get pure movie and music enjoyment. We deliver unlimited, high-quality movies and music through a safe, legal and secure platform for one low monthly subscription fee.

ZookZ makes it simple for any to enjoy digital entertainment. Our user-friendly interface provides access to all our digital assets. We offer unlimited downloads of all movies and music for one low monthly price. Files are delivered in MP3 and MP4 formats that are compatible with most mobile devices and players so you can enjoy your entertainment when, where and how you want. ZookZ is changing the way people use and enjoy digital entertainment. Unlike other companies, once you download the file, you can view or listen to it on any medium of your choice –without restrictions.

ZookZ is not a peer-to-peer file sharing system and prohibits that use of its product. Customers directly download safe content from our secure database, not from an unknown third party. ZookZ guarantees that all our digital media is free from viruses, adware and spyware. We are dedicated to providing high-quality, safe and secure digital files.

ZookZ operates under the parameters of the 2007 WTO ruling between Antigua and the United States, and is the only website that can legally offer members unlimited digital entertainment.

The FAQ has more details.

I doubt whether the US media industries will sit back and let ZookZ try to implement its plan, and I suspect that this could get rather interesting to watch.
Why Most Newspapers are Dying
This is something that's struck me too:

as is oh-so-typical in these situations, Osnos does nothing at all to engage or respond to the comments that call out his mistakes. You want to know why newspapers are failing? It's not because of Google, it's because of this viewpoint that some journalists still hold that they're the masters of the truth, handing it out from on high, wanting nothing at all to do with the riff raff in the comments.

This is perhaps the biggest single clue that newspapers do not understand how the Internet has changed relationships between writers and readers. Indeed, one of my disappointments with the Guardian's Comment is Free site is that practically *never* do the writers deign to interact with their readers. Given that the Guardian is probably the most Web-savvy of the major newspapers, this does not augur well...
(Open) Learning from Open Source
As regular readers of this blog will know, I am intrigued by the way that ideas from free software are moving across to different disciplines. Of course, applying them is no simple matter: there may not be an obvious one-to-one mapping of the act of coding to activities in the new domain, or there may be significant cultural differences that place obstacles in the way of sharing. Here's a fascinating post that explores some of the issues around the application of open source ideas to open educational resources (OER):

For all my fascination with all things open-source, I'm finding that the notion of open source software (OSS) is one that's used far too broadly, to cover more categories than it can rightfully manage. Specifically, the use of this term to describe collaborative open education resource (OER) projects seems problematic. The notion of OSS points to a series of characteristics and truths that do not apply, for better or worse, to the features of collaborative learning environments developed for opening up education.

While in general, open educational resources are developed to adhere to the letter of the OSS movement, what they miss is what we might call the spirit of OSS, which for my money encompasses the following:

* A reliance on people's willingness to donate labor--for love, and not for money.
* An embrace of the "failure for free" model identified by Clay Shirky in Here Comes Everybody.
* A loose collaboration across fields, disciplines, and interest levels.

Open educational resources are not, in general, developed by volunteers; they are more often the product of extensive funding mechanisms that include paying participants for their labor.

Unusually, the post does not simply lament this problem, but goes on to explore a possible solution:

an alternate term for OERs designed in keeping with the open source ideals: community source software (CSS)

Worth reading in its entirety, particularly for the light it sheds on things we take for granted in open source.

Follow me @glynmoody on Twitter or identi.ca.
Now You Too Can Contribute to Firefox...
...with your money:

This pilot allows developers to request an optional dollar amount for their Firefox Add-on. Along with requesting this amount, we’re helping developers tell their stories with our new “About the Developer” pages, which explain to prospective contributors the motivations for creating an add-on and its future road map. Since contributions are completely optional, users will have ample time to evaluate an add-on to determine whether or not they want to help a developer.

Some details:

How will payments work?

We are working with PayPal on this pilot to provide a secure and international solution for facilitating payments. Developers can optionally create a PayPal ID for each of their Firefox Add-ons. Users will be presented with a “Contribute” button that gives them the option of paying the suggested amount or a different amount.

This is a nice touch, too:

Why did you call this “Contributions” and not “Donations”?

At Mozilla, we use the word “Contributor” for community members who contribute time and energy to our mission of promoting choice and innovation on the Internet. Our goal is that users who contribute money to developers are supporting the future of a particular add-on, as opposed to donating for something already received.

Quite: this isn't just about getting some well-deserved dosh to the coders, but also about giving users a way to feel more engaged. Great move.
Bill Gates Gets Sharing...Almost
Yesterday I wrote about Microsoft's attempt to persuade scientists to adopt its unloved Windows HPC platform by throwing in a few free (as in beer) programs. Here's another poisoned chalice that's being offered:

In between trying to eradicate polio, tame malaria, and fix the broken U.S. education system, Gates has managed to fulfill a dream of taking some classic physics lectures and making them available free over the Web. The lectures, done in 1964 by noted scientist (and Manhattan Project collaborator) Richard Feynman, take notions such as gravity and explains how they work and the broad implications they have in understanding the ways of the universe.

Gates first saw the series of lectures 20 years ago on vacation and dreamed of being able to make them broadly available. After spending years tracking down the rights--and spending some of his personal fortune--Gates has done just that. Tapping his colleagues in Redmond to create interactive software to accompany the videos, Gates is making the collection available free from the Microsoft Research Web site.

What a kind bloke - spending his *own* personal fortune of uncountable billions, just to make this stuff freely available.

But wait: what do we find when we go to that "free" site:

Clicking will install Microsoft Silverlight.

So it seems that this particular free has its own non-free (as in freedom) payload: what a surprise.

That's a disappointment - but hardly unexpected; Microsoft's mantra is that you don't get something for nothing. But elsewhere in the interview with Gates, there's some rather interesting stuff:

Education, particularly if you've got motivated students, the idea of specializing in the brilliant lecture and text being done in a very high-quality way, and shared by everyone, and then the sort of lab and discussion piece that's a different thing that you pick people who are very good at that.

Technology brings more to the lecture availability, in terms of sharing best practices and letting somebody have more resources to do amazing lectures. So, you'd hope that some schools would be open minded to this fitting in, and making them more effective.

What's interesting is that his new-found work in the field of education is bringing him constantly face-to-face with the fact that sharing is actually rather a good thing, and that the more the sharing of knowledge can be facilitated, the more good results.

Of course, he's still trapped by the old Microsoft mindset, and constantly thinking how he can exploit that sharing, in this case by freighting it with all kinds of Microsoft gunk. But at least he's started on the journey, albeit unknowingly.

Follow me @glynmoody on Twitter or identi.ca.
Hamburg Declaration = Humbug Declaration
You may have noticed that in the 10 years since Napster, the music industry has succeeded in almost completely ruining its biggest opportunity to make huge quantities of money, alienating just about anyone under 30 along the way (and a fair number of us old fogies, too). Alas, it seems that some parts of the newspaper industry have been doing their job of reporting so badly that they missed that particular news item. For what does it want to do? Follow the music industry's lemming-like plunge off the cliff of "new intellectual property rights protection":

On the day that Commissioner Viviane Reding unveils her strategy for a Digital Europe during the Lisbon Council, and as the European Commission's consultation on the Content Online Report draws to a close this week, senior members of the publishing world are presenting to Information Society Commissioner Viviane Reding and Internal Market Commissioner Charlie McCreevy, a landmark declaration adopted on intellectual property rights in the digital world in a bid to ensure that opportunities for a diverse, free press and quality journalism thrive online into the future.

This is the first press communiqué on a significant meeting convened on 26th June in Berlin by news group Chief Executives from both the EPC and the World Association of Newspapers where the 'Hamburg Declaration' was signed, calling for online copyright to be respected, to allow innovation to thrive and consumers to be better served.

This comes from an extraordinary press release, combining arrogant self-satisfaction with total ignorance about how the Internet works:

A fundamental safeguard of democratic society is a free, diverse and independent press. Without control over our intellectual property rights, the future of quality journalism is at stake and with it our ability to provide our consumers with quality and varied information, education and entertainment on the many platforms they enjoy.

What a load of codswallop. What makes them think they are the sole guardians of that "free, diverse and independent press"? In case they hadn't noticed, the Internet is rather full of "quality and varied information, education and entertainment on the many platforms", most of it quite independent of anything so dull as a newspaper. As many others have pointed out, quality journalism is quite separate from old-style press empires, even if the latter have managed to produce the former from time to time.

Then there's this:

We continue to attract ever greater audiences for our content but, unlike in the print or TV business models, we are not the ones making the money out of our content. This is unsustainable.

Well, at least they got the last bit. But if they are attracting "ever greater audiences" for their content, but are not making money, does this not suggest that they are doing something fundamentally wrong? In a former incarnation, I too was a publisher. When things went badly, I did not immediately call for new laws: I tried again with something different.
How about if newspaper publishers did the same?

This kind of self-pitying bleating would be extraordinary enough were it coming out of a vacuum; but given the decade of exemplary failure by the music industry taking *exactly* the same approach, it suggests a wilful refusal to look reality in the face that is quite extraordinary.

Speaking personally, the sooner all supporters of the Humbug Declaration are simply omitted from every search engine on Earth, the better: I'm sure we won't miss them, but they sure will miss the Internet...

Follow me @glynmoody on Twitter or identi.ca.
I Fear Microsoft Geeks Bearing Gifts...
Look, those nice people at Microsoft Research are saving science from its data deluge:

Addressing an audience of prominent academic researchers today at the 10th annual Microsoft Research Faculty Summit, Microsoft External Research Corporate Vice President Tony Hey announced that Microsoft Corp. has developed new software tools with the potential to transform the way much scientific research is done. Project Trident: A Scientific Workflow Workbench allows scientists to easily work with large volumes of data, and the specialized new programs Dryad and DryadLINQ facilitate the use of high-performance computing.

Created as part of the company’s ongoing efforts to advance the state of the art in science and help address world-scale challenges, the new tools are designed to make it easier for scientists to ingest and make sense of data, get answers to questions at a rate not previously possible, and ultimately accelerate the pace of achieving critical breakthrough discoveries. Scientists in data-intensive fields such as oceanography, astronomy, environmental science and medical research can now use these tools to manage, integrate and visualize volumes of information. The tools are available as no-cost downloads to academic researchers and scientists at http://research.microsoft.com/en-us/collaboration/tools.

Aw, shucks, isn't that just *so* kind? Doing all this out of the goodness of their hearts? Or maybe not:

Project Trident was developed by Microsoft Research’s External Research Division specifically to support the scientific community. Project Trident is implemented on top of Microsoft’s Windows Workflow Foundation, using the existing functionality of a commercial workflow engine based on Microsoft SQL Server and Windows HPC Server cluster technologies. DryadLINQ is a combination of the Dryad infrastructure for running parallel systems, developed in the Microsoft Research Silicon Valley lab, and the Language-Integrated Query (LINQ) extensions to the C# programming language.

So basically Project Trident is more Project Trojan Horse - an attempt to get Microsoft HPC Server cluster technologies into the scientific community without anyone noticing. And why might Microsoft be so keen to do that? Maybe something to do with the fact that Windows currently runs just 1% of the top 500 supercomputing sites, while GNU/Linux has over 88% share.

Microsoft's approach here can be summed up as: accept our free dog biscuit, and be lumbered with a dog.

Follow me @glynmoody on Twitter or identi.ca.
Batik-Makers Say "Tidak" to Copyright
Yesterday I was talking about how patents are used to propagate Western ideas and power; here's a complementary story about local artists understanding that copyright just ain't right for them:

Joko, speaking at this year’s Solo Batik Fashion Festival over the weekend, said that the ancient royal city was one of the principal batik cities in Indonesia, with no fewer than 500 unique motifs created here that are not found in any other region. The inventory process, however, was hampered by the reluctance of the batik makers to claim ownership over pieces.

The head of the Solo trade and industry office, Joko Pangarso, said copyright registration work had begun last year, but was constantly held up when it was found a particular batik only had a motif name because the creator declined to attach their own.

“So far only 10 motifs have been successfully included in the list,” he said. “The creators acknowledged their creations but asked for minimal exposure.

Interestingly, this is very close to the situation for software. The batik motifs correspond to sub-routines: both are part of the commons that everyone draws upon; copyrighting those patterns is as counter-productive as patenting subroutines, since it makes further creation almost impossible without "infringement". This reduces the overall creativity - precisely the opposite effect that intellectual monopolists claim. (Via Boing Boing.)

Follow me @glynmoody on Twitter or identi.ca.
National Portrait Gallery: Nuts
This is so wrong:

Below is a letter I received from legal representatives of the National Portrait Gallery, London, on Friday, July 10, regarding images of public domain paintings in Category:National Portrait Gallery, London and threatening direct legal action under UK law. The letter is reproduced here to enable public discourse on the issue. For a list of sites discussing this event see User:Dcoetzee/NPG legal threat/Coverage. I am consulting legal representation and have not yet taken action.

Look, NPG, your job is to get people to look at your pix. Here's some news: unless they're in London, they can't do that. Put those pix online, and (a) they get to see the pix and (b) when they're in London, they're more likely to come and visit, no? So you should be *encouraging* people to upload your pix to places like Wikipedia; you should be thanking them. The fact that you are threatening them with legal action shows that you don't have even an inkling of what you are employed to do. Remind me not to pay the part of my UK taxes that goes towards your salary....
Are Patents Intellectual Monopolies? You Decide
Talking of intellectual monopolies, you may wonder why I use this term (well, if you've been reading this blog for long, you probably don't.) But in any case, here's an excellent exposition as to why, yes, patents are indeed monopolies:

On occasion you get some defender of patents who is upset when we use the m-word to describe these artificial state-granted monopoly rights. For example here one Dale Halling, a patent attorney (surprise!) posts about "The Myth that Patents are a Monopoly" and writes, " People who suggest a patent is a monopoly are not being intellectually honest and perpetuating a myth to advance a political agenda."

Well, let's see.

Indeed, do read the rest of yet another great post from the Against Monopoly site.

Follow me @glynmoody on Twitter or identi.ca.
What Are Intellectual Monopolies For?
If you still doubted that intellectual monopolies are in part a neo-colonialist plot to ensure the continuing dominance of Western nations, you could read this utterly extraordinary post, which begins:

The fourteenth session of the WIPO Intergovernmental Committee on Genetic Resources, Traditional Knowledge and Folklore (IGC), convened in Geneva from June 29, 2009 to July 3, 2009, collapsed at the 11th hour on Friday evening as the culmination of nine years of work over fourteen sessions resulted in the following language; “[t]he Committee did not reach a decision on this agenda item” on future work. The WIPO General Assembly (September 2009) will have to untangle the intractable Gordian knot regarding the future direction of the Committee.

At the heart of the discussion lay a proposal by the African Group which called for the IGC to submit a text to the 2011 General Assembly containing “a/(n) international legally binding instrument/instruments” to protect traditional cultural expressions (folklore), traditional knowledge and genetic resources. Inextricably linked to the legally binding instruments were the African Group’s demands for “text-based negotiations” with clear “timeframes” for the proposed program of work. This proposal garnered broad support among a group of developing countries including Malaysia, Thailand, Fiji, Bolivia, Brazil, Ecuador, Philippines, Sri Lanka, Cuba, Yemen India, Peru, Guatemala, China, Nepal and Azerbaijan. Indonesia, Iran and Pakistan co-sponsored the African Group proposal.

The European Union, South Korea and the United States could not accept the two principles of “text-based negotiations” and “internationally legally binding instruments”.

Australia, Canada and New Zealand accepted the idea of “text-based negotiations” but had reservations about “legally binding instruments” granting sui generis protection for genetic resources, traditional knowledge and folklore.

We can't possibly have developing countries protecting their traditional medicine and national lore - "genetic resources, traditional knowledge and folklore" - from being taken and patented by the Western world. After all, companies in the latter have an inalienable right to turn a profit by licensing that same traditional knowledge back to the countries it was stolen from (this has already happened). That's what intellectual monopolies are for.

Follow me @glynmoody on Twitter or identi.ca.
This Could Save Many Lives: Let's Patent It
Bill Gates is amazing; just look at this brilliant idea he's come up with:

using large fleets of vessels to suppress hurricanes through various methods of mixing warm water from the surface of the ocean with colder water at greater depths. The idea is to decrease the surface temperature, reducing or eliminating the heat-driven condensation that fuels the giant storms.

Against the background of climate change and increased heating of the ocean's surface in areas where hurricanes emerge, just imagine how many lives this could save - a real boon for mankind. Fantastic.

Just one problemette: he's decided to patent the idea, along with his clever old chum Nathan Myhrvold.

The filings were made by Searete LLC, an entity tied to Intellectual Ventures, the Bellevue-based patent and invention house run by Nathan Myhrvold, the former Microsoft chief technology officer. Myhrvold and several others are listed along with Gates as inventors.

After all, can't have people just going out there and saving thousands of lives without paying for the privilege, can we?

Follow me @glynmoody on Twitter or identi.ca.
Do We Need Open Access Journals?
One of the key forerunners of the open access idea was arxiv.org, set up by Paul Ginsparg. Here's what I wrote a few years back about that event:

At the beginning of the 1990s, Ginsparg wanted a quick and dirty solution to the problem of putting high-energy physics preprints (early versions of papers) online. As it turns out, he set up what became the arXiv.org preprint repository on 16 August, 1991 – nine days before Linus made his fateful “I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones” posting. But Ginsparg's links with the free software world go back much further.

Ginsparg was already familiar with the GNU manifesto in 1985, and, through his brother, an MIT undergraduate, even knew of Stallman in the 1970s. Although arXiv.org only switched to GNU/Linux in 1997, it has been using Perl since 1994, and Apache since it came into existence. One of Apache's founders, Rob Hartill, worked for Ginsparg at the Los Alamos National Laboratory, where arXiv.org was first set up (as an FTP/email server at xxx.lanl.org). Other open source programs crucial to arXiv.org include TeX, GhostScript and MySQL.

arxiv.org was and is a huge success, and that paved the way for what became the open access movement. But here's an interesting paper - hosted on arxiv.org:

Contemporary scholarly discourse follows many alternative routes in addition to the three-century old tradition of publication in peer-reviewed journals. The field of High-Energy Physics (HEP) has explored alternative communication strategies for decades, initially via the mass mailing of paper copies of preliminary manuscripts, then via the inception of the first online repositories and digital libraries.

This field is uniquely placed to answer recurrent questions raised by the current trends in scholarly communication: is there an advantage for scientists to make their work available through repositories, often in preliminary form? Is there an advantage to publishing in Open Access journals? Do scientists still read journals or do they use digital repositories?

The analysis of citation data demonstrates that free and immediate online dissemination of preprints creates an immense citation advantage in HEP, whereas publication in Open Access journals presents no discernible advantage. In addition, the analysis of clickstreams in the leading digital library of the field shows that HEP scientists seldom read journals, preferring preprints instead.

Here are the article's conclusions:

Scholarly communication is at a cross road of new technologies and publishing models. The analysis of almost two decades of use of preprints and repositories in the HEP community provides unique evidence to inform the Open Access debate, through four main findings:

1. Submission of articles to an Open Access subject repository, arXiv, yields a citation advantage of a factor five.
2. The citation advantage of articles appearing in a repository is connected to their dissemination prior to publication, 20% of citations of HEP articles over a two-year period occur before publication.
3. There is no discernable citation advantage added by publishing articles in “gold” Open Access journals.
4. HEP scientists are between four and eight times more likely to download an article in its preprint form from arXiv rather than its final published version on a journal web site.

On the one hand, it would be ironic if the very field that acted as a midwife to open access journals should also be the one that begins to undermine it through a move to repository-based open publishing of preprints. On the other, it doesn't really matter; what's important is open access to the papers. Whether these are in preprint form, or appear as fully-fledged articles in peer-reviewed open access journals, is a detail, for the users at least; it's more of a challenge for publishers, of course... (Via @JuliuzBeezer.)

Follow me @glynmoody on Twitter or identi.ca.
Not Kissing the Rod, Oh My Word, No
Becta today [6 July 2009] welcomes Microsoft's launch of the new Subscription Enrolment Schools Pilot (SESP) for UK schools, which provides greater flexibility and choice for schools who wish to use a Microsoft subscription agreement.

Great, and what might that mean, exactly?

The new licensing scheme removes the requirement that schools using subscription agreements pay Microsoft to licence systems that are using their competitor's technologies. So for the first time schools using Microsoft's subscription licensing agreements can decide for themselves how much of their ICT estate to licence.

So BECTA is celebrating the fact that schools - that is, we taxpayers - *no longer* have to "pay Microsoft to licence systems that are using their competitor's technologies"? They can now use GNU/Linux, for example, *without* having to pay Microsoft for the privilege?

O frabjous day! Callooh! Callay!

Follow me @glynmoody on Twitter or identi.ca.
Policing the Function Creep...
Remember how the poor darlings in the UK government absolutely *had to* allow interception of all our online activities so that those plucky PC Plods could maintain their current stunning success rate in their Whirr on Terruh and stuff like that? Well, it seems that things have changed somewhat:

Detectives will be required to consider accessing telephone and internet records during every investigation under new plans to increase police use of communications data.

The policy is likely to significantly increase the number of requests for data received by ISPs and telephone operators.

Just as every investigation currently has to include a strategy to make use of its subjects' financial records, soon CID officers will be trained to always draw up a plan to probe their communications.

The plans have been developed by senior officers in anticipation of the implementation of the Interception Modernisation Programme (IMP), the government's multibillion pound scheme to massively increase surveillance of the internet by storing details of who contacts whom online.

Er, come again? "CID officers will be trained to always draw up a plan to probe their communications"? How does that square with this being a special tool for those exceptional cases when those scary terrorists and real hard naughty criminals are using tricky high-tech stuff like email? Doesn't it imply that we are all terrorist suspects and hard 'uns now?

Police moves to prepare for the glut of newly accessible data were revealed today by Deputy Assistant Commissioner Janet Williams. She predicted always considering communications data will lead to a 20 per cent increase in the productivity of CID teams.

She told The Register IMP had "informed thinking" about use of communications data, but denied the plans gave the lie to the government line that massively increased data retention will "maintain capability" of law enforcement to investigate crime.

Well, Mandy Rice-Davies applies, m'lud...

Follow me @glynmoody on Twitter or identi.ca.
Are Microsoft's Promises For Ever?
This sounds good:

I have some good news to announce: Microsoft will be applying the Community Promise to the ECMA 334 and ECMA 335 specs.

ECMA 334 specifies the form and establishes the interpretation of programs written in the C# programming language, while the ECMA 335 standard defines the Common Language Infrastructure (CLI) in which applications written in multiple high-level languages can be executed in different system environments without the need to rewrite those applications to take into consideration the unique characteristics of those environments.

"The Community Promise is an excellent vehicle and, in this situation, ensures the best balance of interoperability and flexibility for developers," Scott Guthrie, the Corporate Vice President for the .Net Developer Platform, told me July 6.

It is important to note that, under the Community Promise, anyone can freely implement these specifications with their technology, code, and solutions.

You do not need to sign a license agreement, or otherwise communicate to Microsoft how you will implement the specifications.

The Promise applies to developers, distributors, and users of Covered Implementations without regard to the development model that created the implementations, the type of copyright licenses under which it is distributed, or the associated business model.

Under the Community Promise, Microsoft provides assurance that it will not assert its Necessary Claims against anyone who makes, uses, sells, offers for sale, imports, or distributes any Covered Implementation under any type of development or distribution model, including open-source licensing models such as the LGPL or GPL.

But boring old sceptic that I am, I have memories of this:

The Software Freedom Law Center (SFLC), provider of pro-bono legal services to protect and advance free and open source software, today published a paper that considers the legal implications of Microsoft's Open Specification Promise (OSP) and explains why it should not be relied upon by developers concerned about patent risk.

SFLC published the paper in response to questions from its clients and the community about the OSP and its compatibility with the GNU General Public License (GPL). The paper says that the promise should not be relied upon because of Microsoft's ability to revoke the promise for future versions of specifications, the promise's limited scope, and its incompatibility with free software licenses, including the GPL.

That was then, of course, what about now? Well, here's what the FAQ says on the subject:

Q: Does this CP apply to all versions of the specification, including future revisions?

A: The Community Promise applies to all existing versions of the specifications designated on the public list posted at /interop/cp/, unless otherwise noted with respect to a particular specification.

Now, is it just me, or does Microsoft conspicuously fail to answer its own question? The question was: does it apply to all versions *including* future revisions? And Microsoft's answer is about *existing* versions: so doesn't that mean it could simply not apply the promise to a future version? Isn't this the same problem as with the Open Specification Promise? Just asking.
The Engine of Scientific Progress: Sharing
Here's a post saying pretty much what I've been saying, but in a rather different way:

Here we present a simple model of one of the most basic uses of results, namely as the engine of scientific progress. Research results are more than just accumulated knowledge. Research results make possible new questions, which in turn lead to even more knowledge. The resulting pattern of exponential growth in knowledge is called an issue tree. It shows how individual results can have a value far beyond themselves, because they are shared and lead to research by others.

Follow me @glynmoody on Twitter or identi.ca.
Patents Don't Promote Innovation: Study
It's extraordinary how the myth that patents somehow promote innovation is still propagated and widely accepted; and yet there is practically *no* empirical evidence that it's true. All the studies that have looked at this area rigorously come to quite a different conclusion. Here's yet another nail in that coffin, using a very novel approach:

Patent systems are often justified by an assumption that innovation will be spurred by the prospect of patent protection, leading to the accrual of greater societal benefits than would be possible under non-patent systems. However, little empirical evidence exists to support this assumption. One way to test the hypothesis that a patent system promotes innovation is to simulate the behavior of inventors and competitors experimentally under conditions approximating patent and non-patent systems. Employing a multi-user interactive simulation of patent and non-patent (commons and open source) systems ("PatentSim"), this study compares rates of innovation, productivity, and societal utility. PatentSim uses an abstracted and cumulative model of the invention process, a database of potential innovations, an interactive interface that allows users to invent, patent, or open source these innovations, and a network over which users may interact with one another to license, assign, buy, infringe, and enforce patents. Data generated thus far using PatentSim suggest that a system combining patent and open source protection for inventions (that is, similar to modern patent systems) generates significantly lower rates of innovation (p<0.05), productivity (p<0.001), and societal utility (p<0.002) than does a commons system. These data also indicate that there is no statistical difference in innovation, productivity, or societal utility between a pure patent system and a system combining patent and open source protection. The results of this study are inconsistent with the orthodox justification for patent systems. However, they do accord well with evidence from the increasingly important field of user and open innovation. Simulation games of the patent system could even provide a more effective means of fulfilling the Constitutional mandate ― "to promote the Progress of . . . useful Art" than does the orthodox assumption that technological innovation can be encouraged through the prospect of patent protection.

When will people get the message and start sharing for mutual benefit?

Follow me @glynmoody on Twitter or identi.ca.
Help Me Go Mano a Mano with Microsoft
Next week, I'm taking part in a debate with a Microsoft representative about the passage of the OOXML file format through the ISO process last year. Since said Microsoftie can draw on the not inconsiderable resources of his organisation to provide him with a little back-up, I thought I'd try to even the odds by putting out a call for help to the unmatched resource that is the Linux Journal community. Here's the background to the meeting, and the kind of info I hope people might be able to provide....

On Linux Journal.
2014-35/1131/en_head.json.gz/18255 | Bebot User Manual
Introduction and Quick Start
Bebot is a synthesizer played using a touchscreen interface. You can use it as a musical instrument, or just to make sound effects and entertain your friends. It is designed to be fun and intuitive, while also versatile in its range of sounds and playing methods.
The quickest way to get started is by putting this manual aside and just playing around to see what everything does. The demonstration videos at normalware.com are also highly recommended viewing, and provide a good, audiovisual introduction to the application's features. Please note that the videos show a slightly older version of Bebot than the one currently on the App Store, so there may be some minor differences and new features that are not covered in the videos.
This manual is not essential reading, but it may help to give you a better understanding of how Bebot works, and what you can do with it.
The Playing Surface
Once Bebot has loaded, the first thing you'll see is this screen, showing a cartoon robot with a moving background. This is the playing surface. Touching anywhere on the screen will produce a sound. The pitch of the sound is determined by the horizontal position of your finger on the screen, while the vertical position controls the timbre or volume of the sound (depending on the current mode). Note that you can use up to four fingers at the same time, to play multiple sounds simultaneously.
Try sweeping your finger across the screen, or moving it in circles, and observe the effects that your movements have on the sound and the robot's expression. You may also want to try handing the device to a friend and observing the effect it has on their expression.
Control Panel Button
The circular icon in the lower-right corner of the playing surface is the Control Panel Button. Double-tap this button (ie, tap it twice in rapid succession) to open the control panel. This is where you can access all the options and features that Bebot has to offer. It can seem daunting at first, but if you experiment with each control one by one, you'll soon get the hang of it all.
Main Control Panel
This is the panel that first pops up when you open the control panel. It contains a preset selector, three buttons that take you to other panels, an About button which nobody ever presses, and finally a Close button, which closes the control panel.
Much like any modern synthesizer, Bebot allows you to store useful settings and sounds you've made as presets. It also comes with a collection of inbuilt presets, which demonstrate a few of the application's capabilities.
To load a preset, just tap the arrow buttons on either side of the preset name to cycle through the stored presets. Each preset is loaded automatically as you cycle through the list.
To save a preset, you must first have made some changes to the preset since it was loaded. If no changes have been made, the "Save as" button will appear greyed out. Once you make a change to the settings, the preset name will turn grey to indicate that the preset has unsaved changes, and the "Save as" button will become available. Tapping this button will bring up a keyboard, allowing you to provide a name for the new preset. Giving it the same name as an existing preset will overwrite that preset. A dialog box will appear to confirm this, so you can't do it accidentally. The inbuilt demo presets cannot be overwritten.
To delete a preset, first use the arrow buttons to cycle through to the preset you want to delete, and then tap the "Delete" button. A confirmation box will appear to make sure you meant to tap this button. The inbuilt demo presets cannot be deleted, and the delete button is greyed out when one of these presets is selected.
Sub-Panels
Any button ending in a right-arrow will take you to a sub-panel. On the main panel, there are three such buttons.
Synth Controls
This will take you to the panel where you can edit the tonal characteristics of the sound that the application generates.
Effects
This sub-panel lets you apply and tweak the audio effects which modify the sound in interesting ways.
Scale
This sub-panel lets you set up the playing surface so that it functions like a keyboard or slide guitar, allowing you to lock into user-programmable musical scales and "zoom in" to different pitch ranges.
An explanation of each of these sub-panels follows.
The Synth Controls sub-panel is slightly unusual in that the controls that appear on it change depending on the Synth Mode that is currently selected. This is because each mode represents a different method that can be used to generate a sound, so some controls only apply to certain modes. To change the current mode, tap the arrow buttons on either side of the "Synth Mode" selector box. Each mode is explained below.
In this mode, the sound is a sawtooth wave, which sounds like a simple buzz at a certain pitch. A low-pass filter is applied to this sound, and is controlled by the vertical touch position. The Timbre slider controls the effect that the filter has. For people familiar with synthesizers, this controls the resonance of the filter.
This mode generates a pulse wave, also known as a rectangle wave. The Timbre slider works the same as it does in Sawtooth mode.
This mode has an extra slider, to control the pulse width. At its lowest extreme, this will produce a square wave. Increasing this will lengthen the duty cycle, and at its highest extreme will produce a very narrow pulse. You'll be able to hear the difference it makes to the sound by adjusting this slider while playing a sound on the playing surface.
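If you're curious what these waveforms actually look like, the following rough sketch (written in Python with NumPy purely for illustration - it is not Bebot's actual code, which is not public) generates naive sawtooth and pulse waves and shows how the pulse width sets the duty cycle. Real synthesizers normally use band-limited oscillators to avoid aliasing, so treat this only as a conceptual picture.

# Minimal sketch (assumed, not from Bebot): naive sawtooth and pulse oscillators.
import numpy as np

SAMPLE_RATE = 44100

def sawtooth(freq_hz, duration_s):
    """Ramp from -1 to +1 once per cycle."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    phase = (t * freq_hz) % 1.0          # 0..1 position within each cycle
    return 2.0 * phase - 1.0

def pulse(freq_hz, duration_s, width=0.5):
    """+1 while the phase is below `width`, -1 afterwards.
    width = 0.5 gives a square wave; smaller values give narrower pulses."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    phase = (t * freq_hz) % 1.0
    return np.where(phase < width, 1.0, -1.0)

# A 440 Hz square wave and a narrow 10% pulse sound noticeably different,
# even at the same pitch, because their harmonic content differs.
square = pulse(440.0, 1.0, width=0.5)
narrow = pulse(440.0, 1.0, width=0.1)
saw = sawtooth(440.0, 1.0)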
This generates a sine wave. It sounds something like the sound you make when you whistle, or say "ooh". As a sine wave is a single frequency without harmonics (aka partials), there is no lowpass filter in this mode, hence the lack of a Timbre slider. Instead, the vertical position of a touch controls the amplitude (volume) of the sine wave.
Please note that this mode tends to be the most difficult to hear on the internal speaker. Tiny speakers are incapable of producing low frequency sounds at audible volumes, so only higher pitches will be audible in this mode. Plugging in your headphones, or hooking your device up to external speakers, is recommended.
This mode is similar to Pulse mode, but with a different method of control. Instead of controlling the filter, the vertical touch position controls the pulse width. As this means that the touch position has no control over the filter, overall filter controls are provided in the control panel. These are provided in case you wish to filter off the high frequencies produced by this mode, which some people consider to sound somewhat harsh. If you don't want the filter to have any effect, turn the Cutoff slider to maximum (all the way to the right) and the Resonance slider to minimum (all the way to the left).
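For readers who want a feel for what controls like Cutoff, Resonance and Timbre are doing to the sound, here is a rough sketch of a resonant low-pass filter (a Chamberlin state-variable design, written in Python). This is an assumption for illustration only; Bebot's real filter implementation is not public, and the parameter ranges here are made up.

# Minimal sketch (assumed): a resonant low-pass filter.
import numpy as np

def resonant_lowpass(signal, cutoff_hz, resonance, sample_rate=44100):
    """resonance runs 0..1: 0 = no emphasis, near 1 = a strong peak at the cutoff."""
    signal = np.asarray(signal, dtype=float)
    # Chamberlin state-variable filter coefficient; clamp the cutoff for stability
    f = 2.0 * np.sin(np.pi * min(cutoff_hz, sample_rate / 6.0) / sample_rate)
    damp = 2.0 * (1.0 - resonance) + 0.05    # lower damping = more resonance
    low = band = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        low += f * band                      # integrate the band-pass into the low-pass
        high = x - low - damp * band
        band += f * high
        out[i] = low
    return out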
Also known as "delay", this produces repeated copies of the sound, at a certain time interval.
Echo Volume
How loud the echo effect should be overall. Turning this right down to minimum will turn off the echo effect completely. Turning it up to maximum will produce an echo matching the volume of the original sound.
Echo Time
This controls the amount of time between the original sound and the echo. This ranges from very short times that can produce a metallic sound, right up to a two-second delay which can be used to make short loops. The most typically 'echo-like' results can be found in the region of about 10% - 30% of this slider's range.
Echo Repeat
This controls how much of the echo should be fed back to produce another echo. Turning this all the way down will produce just one copy of the sound and no more. Turning it all the way up to maximum will create an infinitely sustaining loop.
Please note that when the Echo Time is at a low setting, the maximum echo repeat is reduced slightly, to minimize any physical harm that could come to you from the people around you, as a result of making irritating feedback sounds. Even so, courtesy and discretion are advised.
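If you'd like to see how an effect like this can be built, the assumed sketch below (Python, not Bebot's actual code) shows that an echo is just a delay buffer with feedback: Echo Time sets the buffer length, Echo Repeat the feedback amount, and Echo Volume how much of the delayed signal is mixed back in with the dry sound.

# Minimal sketch (assumed): a feedback delay line.
import numpy as np

def echo(signal, echo_time_s, repeat, echo_volume, sample_rate=44100):
    signal = np.asarray(signal, dtype=float)
    delay_samples = max(1, int(echo_time_s * sample_rate))
    buf = np.zeros(delay_samples)             # circular delay buffer
    out = np.empty_like(signal)
    pos = 0
    for i, dry in enumerate(signal):
        delayed = buf[pos]
        out[i] = dry + echo_volume * delayed  # mix the echoes with the dry sound
        buf[pos] = dry + repeat * delayed     # feedback: repeat near 1.0 sustains almost forever
        pos = (pos + 1) % delay_samples
    return out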
This is a simple form of distortion, in which the sound is boosted in amplitude and then "clipped" back to the output range. This can produce interesting results, as it changes the timbre and harmonic structure of the sound.
In this control panel, the Overdrive slider controls how much of this effect should be applied. Turning this all the way to minimum will turn the overdrive effect off.
The "Post-mixer" button toggles between pre-mixer and post-mixer overdrive. In most cases, you will want this button to be switched off.
When the Post-mixer button is off, each synthesizer voice is distorted independently, before being mixed together. As each voice produces a set of related harmonics, this usually results in a pleasant alteration of the sound and retains the identity of the note being played.
With the Post-mixer button switched on, the synth voices are mixed together before the resulting mix is distorted. If only one voice is playing, the effect will be the same as with the button off. But with two or more voices playing, the voices will be distorted together. This can produce some interesting results, such as power chords for example. However, most of the time, it will result in cacophonous noise. But sometimes that might be just what you want.
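In signal-processing terms, this kind of overdrive is simply gain followed by clipping. The assumed sketch below (Python, not Bebot's actual code) shows the difference the Post-mixer button makes: clipping each voice separately keeps each note's character, while clipping the summed mix lets the voices intermodulate.

# Minimal sketch (assumed): overdrive as gain plus hard clipping.
import numpy as np

def overdrive(signal, drive):
    """Boost the signal, then hard-clip it back into the -1..+1 range."""
    return np.clip(np.asarray(signal, dtype=float) * (1.0 + drive), -1.0, 1.0)

def mix_voices(voices, drive, post_mixer=False):
    """voices: a list of equal-length sample arrays, one per finger."""
    if post_mixer:
        # sum first, then clip the mix: the voices distort together
        return overdrive(np.sum(voices, axis=0), drive)
    # clip each voice on its own, then sum: each note keeps its own harmonics
    return np.sum([overdrive(v, drive) for v in voices], axis=0)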
The chorus effect can best be described as a "modulated delay". But since that probably doesn't mean much to anyone who doesn't already know what a chorus effect is, it's not a very effective description. Basically, the chorus effect produces a delayed copy of the sound, but adds vibrato to the copy by continually altering the delay time. It can thicken up your sound by making it sound a bit like there are two of you playing in slightly-detuned unison.
Wet/Dry Mix
This slider controls how much of the original ("dry") sound comes through, and how much of the effected ("wet") copy of the sound is added. When set to minimum, only the original sound comes through and no chorus effect is applied. When set to maximum, only the effected copy will come through.
For a chorus effect, you would normally want to set the wet/dry slider about halfway, to get an even mix of both. If you turn this slider all the way up, you will instead get just a vibrato effect. You can hear this effect on the "Theremin" demo preset.
Chorus Depth
This controls how much the delay time should vary. You can consider this the "amount of vibrato" applied to the copy.
Chorus Rate
This controls how rapidly the delay time should vary. You can think of this as the "vibrato speed" control.
Minimum Delay
This controls the minimum amount of time between the original and the copy. You can use this to adjust the phasing of the chorus effect. The effect is very subtle.
As well as the vibrato effect, another interesting use of the chorus effect is to put the Wet/Dry Mix at about halfway, but turn the Chorus Rate down very low (even right down to minimum). This produces a slow phasing effect known as a flanger. You can hear this on the "Power PWM" demo preset.
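If the "modulated delay" description still seems abstract, this assumed Python sketch (not Bebot's actual code) may help: a slow sine LFO continuously varies the delay time of a copy of the signal, and the Wet/Dry control blends that wobbling copy with the original. Setting wet to 1.0 gives the pure vibrato effect, while a half-and-half mix with a very low rate approximates the flanger-like sweep described above.

# Minimal sketch (assumed): chorus as a delay line modulated by a sine LFO.
import numpy as np

def chorus(signal, wet, depth_s, rate_hz, min_delay_s, sample_rate=44100):
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    t = np.arange(n) / sample_rate
    # instantaneous delay (in samples), wobbled between min and min + depth
    delay = (min_delay_s + depth_s * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))) * sample_rate
    idx = np.clip(np.arange(n) - delay, 0, n - 1)
    lo = np.floor(idx).astype(int)            # linear interpolation of the delayed copy
    hi = np.minimum(lo + 1, n - 1)
    frac = idx - lo
    delayed = (1.0 - frac) * signal[lo] + frac * signal[hi]
    return (1.0 - wet) * signal + wet * delayed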
Scale Panel
The scale menu contains all of the settings related to playing Bebot in tune as a musical instrument. As well as the default, theremin-like continuous pitch system, you can use the autotune modes to lock in on musical notes. By setting up the user-defined scale to only contain the notes you want, you can make it easy to play in key and avoid wrong notes. You can also zoom in to the pitch range you want to play, and display a grid showing where each note in the scale is, allowing you to play Bebot somewhat like a keyboard. Some of the inbuilt presets demonstrate this functionality.
Scale Presets
On the main Scale panel, you will find a preset selector, much like the one on the main panel. This is for loading and saving presets that only affect the settings found in the Scale panel.
It is important to note that main-panel presets also contain their own scale preset. In other words, presets saved in the main panel save the state of all settings, while scale presets affect only the settings in the scale panel. When the selector in the Scale panel shows "(Preset)", this indicates that the settings in the scale panel came from the preset loaded in the main panel.
The purpose of having a separate preset selector just for scale options is so that you can apply commonly-used scales to multiple sounds, or multiple scales to any one sound. It may help to think of it as a clipboard for useful scale options, so that you can apply them to other main-screen presets.
For example, let's say you've set up an F minor scale to play along with a particular song in that key. Then you decide you'd like to try playing it with a different sound that you've saved. But when you load the other sound, its scale settings will be loaded along with it.
So, here's how to apply the scale you've set up to an existing main preset. First, save the F minor scale you've set up, as a scale preset. Then, after you've loaded any sound you want to use, you can use the scale preset selector to load the F minor scale back in. You can optionally save this new combination of settings as a main-screen preset, if you think you'll be using it often.
As another example, let's say you want to play a song that changes key from C major to D major halfway through. What you can do in this case is to set up the scales for each key, and save them as two scale presets. During the song, you can switch the scale preset to the other key and continue playing.
Note Grid
This button toggles the note grid display on and off. This is a set of vertical lines that appear on the playing surface, indicating where each note in the scale is. In some cases, the note grid will be a very dense grid of lines, which won't be very useful. But after setting the pitch range and programming a scale (both of which will be covered shortly), the note grid can become very handy.
Once a scale is set up, the note grid can act like the keys on a piano, or the strings on a harp, indicating where to put your finger to play a certain note. The lines in the grid correspond to the coloring of the keys on a keyboard. To help you identify notes and octaves, the lowest note in the current scale is colored differently. If this note is normally white, it will become yellow. If it's normally black, it will become red. This is of particular help when a scale consists of just one color.
Note: It is recommended that you turn the note grid on while you are setting up the pitch range for a scale, so that you can see the effects that your changes are having.
Scale settings
Hitting the "Scale Settings" button will take you to this sub-panel, which is where you set up the pitch range, the autotune mode, and which keys you want to include in the scale. You can think of this as the "scale editor" panel.
Pitch ranges
The two sliders at the top of this panel control the range of pitches covered by the width of the playing surface. The Pitch slider will move the range up and down, so that you can access higher or lower notes. The Zoom slider focuses in on a narrower range of pitches, which can make it much easier to hit the note you want.
The typical use, when setting up a scale, is to first turn on the Note Grid. Then, use the Zoom slider such that one or two octaves are shown on the screen. Finally, use the Pitch slider to find the octave you want to play in, and to align the root note of the scale near the left edge of the screen. A number of the demo presets are already set up in a similar manner, and it may be easier to start by loading one of them first.
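As a rough illustration of how a horizontal touch position might map to a pitch once the Pitch and Zoom sliders are set, consider the following Python sketch. It is not Bebot's code; the MIDI-style note numbering, the left-edge note and the number of octaves shown are assumptions used only to show the relationship between position, range and frequency.

```python
def x_to_frequency(x, left_edge_note=48.0, octaves_shown=2.0):
    """Map a horizontal touch position (0.0 to 1.0) to a frequency in Hz.

    left_edge_note is the MIDI-style pitch at the left of the screen (roughly
    what the Pitch slider shifts); octaves_shown is roughly what the Zoom
    slider narrows or widens.  Both defaults are arbitrary assumptions.
    """
    pitch = left_edge_note + x * 12.0 * octaves_shown
    return 440.0 * 2.0 ** ((pitch - 69.0) / 12.0)   # standard MIDI-note-to-Hz conversion
```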
Autotune modes
Perhaps the most important thing about setting up Bebot to play a musical scale is to set it to one of the three available autotune modes. These will correct the pitch that you're playing by rounding it to the nearest note in the scale.
The default setting (off) applies no pitch correction. "Snap" mode applies full pitch correction, such that any pitch you play is instantaneously rounded ("snapped") to the nearest note in the scale. In this mode, the only pitches that can be played are those of the notes in the scale (much like on a piano keyboard).
The other two settings, "slow" and "fast", are two versions of the same autotune mode. They are a combination of the continuous-pitch scale and the "snap" mode. In these modes, wherever you put your finger down, it is rounded to the nearest scale note (like in snap mode). However, moving your finger left or right will produce a continuous pitch-slide. When your finger stops, the pitch correction will then kick in, and round it to the nearest note again.
The difference between these two modes is how quickly the pitch correction will apply this rounding. In fast mode, the pitch will be quickly rounded to the nearest note as soon as your finger is detected to have stopped. In slow mode, the rounding is done at a slower rate, which can produce a more natural-sounding slide as opposed to the slightly artificial (though more stylized) correction of fast mode. Some playing styles will fit one of these modes better than the other.
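The following Python sketch illustrates the general idea of snapping a continuous pitch to the nearest note of a user-defined scale, with a simple slew standing in for the "slow"/"fast" behaviour. It is only an illustration of the concept, not Bebot's implementation; the C major scale, the MIDI-style pitch numbers and the slew value are assumptions.

```python
# C major as semitone offsets within an octave (0 = C, 2 = D, ... 11 = B).
SCALE = [0, 2, 4, 5, 7, 9, 11]

def snap_to_scale(pitch, scale=SCALE):
    """Round a continuous MIDI-style pitch to the nearest note in the scale."""
    octave, _ = divmod(pitch, 12)
    candidates = [12 * octave + s for s in scale]
    # Also consider the nearest notes in the octaves just below and above.
    candidates += [12 * (octave - 1) + scale[-1], 12 * (octave + 1) + scale[0]]
    return min(candidates, key=lambda note: abs(note - pitch))

def autotune_step(current_pitch, finger_pitch, finger_moving, slew=0.2):
    """One control-rate step of the 'slow'/'fast' behaviour.

    While the finger moves, follow it (a continuous slide); once it stops,
    glide toward the snapped note.  'Slow' and 'fast' would differ only in the
    slew value; 'snap' mode would instead snap the finger pitch at all times.
    """
    target = finger_pitch if finger_moving else snap_to_scale(finger_pitch)
    return current_pitch + slew * (target - current_pitch)
```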
Scale notes
The area beneath the autotune mode selector is where you can program in which notes you want in the scale. The buttons correspond to the notes in one octave of a piano keyboard. Tap the buttons to toggle them on or off. If a button is lit, that note is in the scale. Note: A scale must contain at least one note.
Hotkeys
Back on the Scale panel, this button toggles the hotkeys on or off. The hotkeys are some small buttons that appear in the lower-left corner of the playing surface.
The buttons marked + and - shift the pitch range up or down by one octave. However, please note that they have no effect on sounds that are already playing. That is, if you are playing a note and tap these buttons while your finger is still held down, the pitch of the note won't be affected, but any new notes you play (including if you lift that finger and put it back down) will be played in the new octave.
The button marked "S" activates the "snap" autotune mode. This doesn't actually set the autotune mode to snap in the scale settings. Instead, it makes the current autotune mode work like snap mode while the button is lit. This button will not appear if the autotune is already set to snap mode.
You may notice that this button still appears if autotune is switched off, yet it doesn't have any effect. Actually, this is a bug, and will be corrected in the next update.
Troubleshooting
Most of the technical support requests regarding Bebot concern a lack of sound output. The most common cause of this problem is that it is running on an iPhone with the "silent" switch on. This prevents Bebot from using the internal speaker for sound output. Although some other apps are not affected by this, Bebot uses a particular audio mode that allows iPod music to continue playing, which forces it to obey the silent switch.
Although there is now an on-screen message alerting you when the audio is blocked in this way, it is also important to note that there is a second cause for a lack of audio: The volume may be turned down to zero. This is easily overlooked, and may be the last thing you think of when trying to show Bebot to your friends.
Also, when you're trying to use the internal speaker, make sure that you don't have headphones plugged in, and that your finger is not covering the speaker itself (on the iPhone). This is a small hole on the bottom end of the device, and is easily covered by a part of the hand. When covered, very little sound can be heard.
It should also be noted that because of the tiny size of this speaker, it can tend to introduce distortion into the sound. Where possible, it is recommended that you use headphones or plug the device into an external speaker system for significantly improved sound quality.
If you have any other problems, the first thing to try is to exit out of the application and try starting Bebot again. If it still doesn't work, try resetting your device (ie, turning it off and then on again). In rare cases, the device can be left in a slightly unstable state after running some applications, and may exhibit strange behavior. Resetting the device will return it to its normal state, and should make things run smoothly again.
One frequently asked question is: how can you control the iPod's music while Bebot is still running? Actually, there is a setting in the iPhone/iPod Touch's "Settings" app, which lets you pop up a small iPod control window without exiting the current application.
To use this, first run the Settings app, then tap "General", then scroll down and tap on "Home". This lets you assign a function that happens when you double-tap the home button. Set this to "iPod", and if you see an option below that reading "iPod Controls", make sure that is switched on.
Now, if you have music playing (or paused) when you launch Bebot, you'll be able to double-tap the home button to pop up the iPod window without exiting. If you don't have music playing, Bebot will exit and the full-screen iPod browser will launch. Don't worry - after you've selected some music, you can launch Bebot again and your settings will still be just as they were.
Tips for live performance
Let's say you've decided to incorporate Bebot into some kind of live performance. There are a few things to look out for in this scenario.
Battery time
Make sure that the batteries are charged up, or even have the device plugged into a charging cable during the performance. Realtime synthesis tends to be fairly heavy on the processor, and despite optimizations to improve battery usage, it is something you'll need to look out for. However, it is not advised that you have the device plugged into the USB port of a computer during a critical time such as a performance, in case iTunes performs a sync, causing the application to quit. The wall-socket USB charging adapter is a preferable choice.
Airplane mode
If you're using an iPhone, remember that it's still a phone while Bebot is running. You don't want to receive a phone call while you're performing! So with that in mind, before a performance, you should make sure your iPhone is in "airplane mode". This will disconnect it from the cellular network and prevent it from receiving any calls or messages, as well as any audio interference that may be caused by wireless data transmission. An even better idea is to use an iPod Touch for your performances, as it is easy to forget to turn airplane mode on, with potentially disastrous consequences.
Note: Even though the iPod Touch can't receive phone calls, it still has wireless functions, so airplane mode is still advisable to avoid any potential audio interference.
Auto-Lock
During a performance, it can be annoying if your device decides to go to sleep and lock itself just before you want to use it. This is especially a problem if you have a passcode lock set. It may be a good idea to set the Auto-Lock time in the settings menu to "Never", to prevent this from happening. However, since this will keep the device running at full power, it is only advisable if you have the device plugged into a charger.
About and Contact Details
Bebot was developed by Russell Black, and the graphics were made by Lily McDonnell.
For more information, visit normalware.com. If you have any questions or comments, you can contact the developers at [email protected] | 计算机 |
2014-35/1131/en_head.json.gz/18813 | 71% More echochrome PSP, Coming Tomorrow
+ Posted by Tsubasa Inaba on Oct 08, 2008 // Senior Producer, International Software Development
Hey, PlayStation Blog Readers! Just when you were close to beating all your clear times on all 56 echochrome stages on your PSP, we have even more challenges coming your way! An expansion pack for echochrome is coming to the PSN on October 9th for $4.99.
You can almost double the fun of the original with an additional 40 new stages to master. For those of you who don’t have the original version, not to worry. We have a bundle pack ready for you guys and gals too, so you can pick up the game and expansion pack all at once! Also today, I have a message to you from my buddy (and counterpart) back in Tokyo.
Check out what he has to say to get a glimpse of what makes echochrome as fun and interesting as it is! echochrome wouldn't be what it is without its core philosophy.
Hello, echochrome fans! This is Tatsuya Suzuki, Producer at JAPAN STUDIOS for this project. I hope you have been enjoying your experience so far. Five months after our release in May, we are happy to bring you a whole bunch of additional stages for echochrome PSP/PSN.
In an era when crisp, photo-realistic graphics are a pre-requisite for games, one may look at echochrome and perhaps question its artistic design choices, namely the modernistic approach of its monotone color palette and line art style. However, you may be surprised to learn that this look was only made possible with modern-day technology.
First and foremost, there is the high quality wide screen LCD that is standard to the PSP. All of the artwork in echochrome is based on straight lines, thus clarity and resolution are very important aspects of our game. The mannequin figure is also drawn just with lines, so it's important that its outline is smooth. Another important aspect of the screen is its responsiveness. Since the core of the game's design relies on rotating the stage, clarity and visibility play a key role as you try to find the best perspective for your next move.
And then there is the all-important CPU that feeds the images to the LCD screen. Again, at first, echochrome may not appear to be such a CPU-intensive game. However, in the background, the title is pushing the processing power of the PSP to its limits. It's like a swan swimming on a lake. It looks graceful to our eyes, but beneath the surface of the water that swan is actually kicking its feet pretty hard just to move around.
The core of the graphics rendering and the calculation and rendering of the transformation of the stage as the perspective changes are two separate operations. Jun Fujiki, the creator of echochrome, originally designed it as a PC-based application, thus it relied heavily on the PC's processing power. So, when it came time to bring this game to the PSP, there was a need to dig deep into the source code and fine-tune the code to maximize the performance of the PSP processor. Without the PSP's 333MHz mode, the game would not have been possible.
Music was another aspect where the PSP hardware performance shines. Not only were we inspired by the works of MC Escher, the father of optical illusions in art, but also by his love of the string quartet, consisting of 2 violins, 1 viola and 1 cello. All of the in-game music was written and recorded with a string quartet. Only the PSP's ATRAC support could have allowed for the streaming of such high-quality sound, faithfully reproducing the original studio recordings.
All of these elements are core elements in bringing the world of echochrome to life on the PSP. The team took extra care in creating all of the intricate stages. The design team consisted of members who worked as planners and stage designers for puzzle games in the past, and even then they had difficulty during the initial stages of development. There was a learning curve because of how radical the designs and game rules are. We also turned to some graphic designers who had no previous experience with game design. They were given a mission to explore the limits of their artistic skills and imagination, without worrying about how they would work when the rules of echochrome were applied. Obviously not all of their creations were suitable for play in echochrome, but it certainly was an eye-opener seeing their creativity and the beauty of architecture they were able to create. And these were eventually converted into playable stages.
It was a moment when all of us realized that "it is when common sense and assumptions are let go of, that the hidden path opens up". This later became a philosophical mantra for the development of echochrome. With that said, we are happy to bring these 40 new playable stages to you as an expansion pack. There are plenty of new puzzles and challenges waiting for you inside, and maybe some of these new stages will inspire some new level creations in your own imagination.
2014-35/1131/en_head.json.gz/20128 | More information about Android
Android Open Source Project
Android SDK
Android™ delivers a complete set of software for mobile devices: an operating system,
middleware and key mobile applications.
Android was built from the ground-up to enable developers to create compelling mobile
applications that take full advantage of all a handset has to offer. It was built to be
truly open. For example, an application can call upon any of the phone’s core
functionality such as making calls, sending text messages, or using the camera,
allowing developers to create richer and more cohesive experiences for users. Android
is built on the open Linux Kernel. Furthermore, it utilizes a custom virtual machine
that was designed to optimize memory and hardware resources in a mobile environment.
Android is open source; it can be liberally extended to incorporate new cutting edge
technologies as they emerge. The platform will continue to evolve as the developer
community works together to build innovative mobile applications.
All applications are created equal
Android does not differentiate between the phone’s core applications and third-party
applications. They can all be built to have equal access to a phone’s capabilities
providing users with a broad spectrum of applications and services. With devices built
on the Android Platform, users are able to fully tailor the phone to their interests.
They can swap out the phone's homescreen, the style of the dialer, or any of the
applications. They can even instruct their phones to use their favorite photo viewing
application to handle the viewing of all photos.
Breaking down application boundaries
Android breaks down the barriers to building new and innovative applications. For
example, a developer can combine information from the web with data on an individual’s
mobile phone — such as the user’s contacts, calendar, or geographic location — to
provide a more relevant user experience. With Android, a developer can build an
application that enables users to view the location of their friends and be alerted
when they are in the vicinity giving them a chance to connect.
Fast & easy application development
Android provides access to a wide range of useful libraries and tools that can be used
to build rich applications. For example, Android enables developers to obtain the
location of the device, and allows devices to communicate with one another enabling
rich peer–to–peer social applications. In addition, Android includes a full set of
tools that have been built from the ground up alongside the platform providing
developers with high productivity and deep insight into their applications. | 计算机 |
2014-35/1132/en_head.json.gz/517 | The mainframe is 45 years old
Robert Crawford
Systems programmer
In the early 1960s, IBM took a risk to develop the System/360, the precursor to the modern mainframe. Over 45 years later, IBM can point to unbroken mainframe compatibility.
I recently got some junk mail commemorating the 45th anniversary of IBM's announcement of System/360. This news made me dig out a copy of a paper IBM distributed a few years ago titled "The 360 Revolution" by Chuck Boyer. I reread the document, which outlines the genesis of IBM's seminal computer that evolved into today's mainframe.
System/360: The big gamble
IBM was at the top of the computer heap in the early 1960s, but all was not well. There were many competitors selling their own systems, claiming to be faster, better and cheaper (as the old joke about IBM goes, "You can buy better, but you won't pay more"). IBM itself had three competing, non-compatible computer divisions and flattening revenues. Worsening the problem was the incompatibility between upgrades of the same system family, forcing customers to continuously buy new software and peripherals. IBM executive management decided it was time to take a risk. The primary goals for the new system were compatibility, both in software and hardware, and a smooth upgrade path.
There were many problems along the way, some of them technical. The technical issues included using a relatively new electronic technique called Solid Logic Technology (SLT), which was something between individual transistors and an integrated circuit. Not content just to use the new technology, IBM decided to manufacture it as well, which meant building and manning new factories. Along with the new processor, IBM decided to create a line of peripherals compatible with the new system.
The new system needed new software as well. Programmers were not only tasked with writing an operating system (OS) that worked on all System/360 incarnations, they had to develop one of the first OSes to support multiprogramming and interactive processing. It's telling that software ended up being the biggest wedge in the System/360 development budget, ballooning from the original $30 million to over $500 million. In the end, the programmers created three different operating systems to cover the 360 spectrum: Basic Operating System (BOS), Tape Operating System (TOS) and Disk Operating System (DOS).
Then there were the human factors. As mentioned above, IBM had to stop the upgrades already in progress for the three old computer lines. And, as we IT workers know, opinions are like pocket protectors: every technician has one. This meant IBM had to convince hundreds of technicians with personal and professional stakes in the old machines to drop everything to work on System/360. It took a lot of arm twisting by forceful personalities to get everyone on board before the first circuit was drawn. Even after the announcement there was still dissent throughout the company and many clucking tongues.
The results of the mainframe
Despite the ordered chaos, IBM announced System/360 on April 7, 1964. The system must have been well received, as IBM took orders for around 2,000 systems in the first eight weeks. After sales came the hard part, when IBM had to build the factories to build the machines and hire programmers to write the code to run the system. It took another two years for System/360 to be considered successful. By 1966 there were between 7,000 and 8,000 systems installed, creating $4 billion in revenue. Best of all, IBM delivered on its compatibility promise. The paper mentions that IBM made the original hardware specifications and source code available to everyone.
In a wonderful foreshadowing of open source, the availability of this information caused a huge burst of customer- and vendor-driven innovation, something I think IBM could use now. I think it's also important to note there were some things the System/360 couldn't do. For instance, IBM realized the mainframe left a big hole in the small and medium-sized market, which was eagerly filled by the competition. IBM bridged that gap by creating other lines of smaller (and incompatible) computers, such as System/36 and AS/400. In the end, System/360 did not bring computing to the masses, midwife the Internet or change the music industry, but its influence is considerable. System/360 was the first computer to use 8-bit bytes instead of the 6-bit bytes that were the standard at the time. The IBM mainframe may not have been the first to implement some technologies, but it is certainly where they were perfected. The list includes things like virtual storage, virtual machines, resource security, relational databases, system integrity through storage keys and authorized instructions, print spooling, job scheduling, online processing, multi-programming and multi-processing. Then there are other technologies that other platforms have yet to implement, including granular system maintenance, workload management and rational systems management. I also think it's significant that IBM can proudly point to 45 years of unbroken compatibility. You may not want to run that program that hasn't been assembled since 1974, but, in a pinch, you could. Would you try that on any other platform? Forty-five years later, the future of the mainframe is a little cloudy as other hardware gets cheaper and Windows and Unix system management gets easier. But, even if the mainframe is destined for the bit bucket of history, future generations should remember its contributions and think about whether the computer world would be where it is today if IBM hadn't taken the big gamble. ABOUT THE AUTHOR: For 24 years, Robert Crawford has worked off and on as a CICS systems programmer. He is experienced in debugging and tuning applications and has written in COBOL, Assembler and C++ using VSAM, DLI and DB2. What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at [email protected].
2014-35/1132/en_head.json.gz/1128 | Google I/O Written by Ian Elliot Tuesday, 25 May 2010 The Google developers conference is a place where you might expect to hear about exciting new things. Was it a big deal or was it all marketing and incremental progress?
Given that Google I/O is billed as "Google's developers conference", not a lot seems to have happened to shake the developers' world.
There was a great deal of fuss about the launch of Android 2.2 (Froyo) and the announcement of Google TV, which is planned to run Android and Chrome.
The new Android is a step in the right direction. It is faster thanks to a JIT compiler that augments the VM and there are a host of minor improvements - auto app updates, SD card support, portable hotspot support and Flash support. All worth having but incremental rather than revolutionary.
Possibly more interesting is the news that C# development might be coming to Android. Novell has added MonoDroid to the Mono project and this allows C# programs to talk to the native Java API. It still isn't clear if a similar tool will be allowed to run on the most protected of all platforms, the iPhone/iPad.
Google TV may be an interesting development and you certainly need to keep an eye on it, but at the moment there isn't much you can do about it unless you are a "big player". There are plans to open up the platform more generally, but not until 2011. At the moment all we can do is optimise our web sites so that they work with the new devices.
An interesting development that has mostly been over-shadowed by reports of Google TV is the Google App Engine for Business. In case you have missed the story, Google Apps has been trying to encourage developers to produce products for it by launching the Apps market place and making the Google App Engine API easier to use. Now we have the Google App Engine for Business, which allows users to deploy internal apps on the Google cloud infrastructure. A collaboration with VMware will also make it easier to move applications from on-site to in-cloud without making any changes.
If you have or plan to create any Chrome browser applications then the new Chrome Web Store will make it easier to get to users. Although it has Chrome in the title the apps can be any old web applications - which is a little odd and suggests that the store should be called "any browser" store. Of course you can take advantage of the unique features of Chrome. For example you could use the Native Client SDK which allows a browser to run C/C++ code. The store is live but there are no apps listed at the moment.
Among the announcements was the final release of Google Wave to everyone. You now just register and join the rest of the users trying to work out what to do with this new application. It is also now available on Google Apps.
Google Storage for Developers is, well ... cloud storage aimed at developers. The only question this raises is what storage requirements developers have that would make them need a targeted facility. Google Storage uses a REST interface, can work with very large objects of hundreds of gigabytes, and is replicated across data centers. Intended for use with web applications, access is controlled by key or email authentication. Currently the service is available as a preview and you can have 100GB of storage and 300GB of traffic per month for free.
Of course the big announcement that has made it into the more general news is that Google has open-sourced the VP8 codec, which it recently acquired as part of a takeover. In addition the WebM project has been established with the aim of developing the WebM media format, which uses VP8 for its video. Both initiatives should help with the HTML5 video problem - i.e. whether to use a patented codec or an open source one. VP8 is open source and claims to be high enough quality to do the job. Mozilla and Opera have been involved in the development of WebM and Microsoft partially supports it in IE9 in that if the codec is installed it will use it.
There are also a number of new APIs. An API for downloading fonts may sound tedious but it's very useful. You can now set your web page to download a font from Google and use it independently of the browser. Good, but at the moment there are only 18 fonts in the library. An API for Google Buzz gives us yet another social bookmarking service that we have to integrate with, and in the process we do more for Buzz than it does for us.
So Google I/O produced very little new or radical from a programmer's point of view - we will have to wait to see if Google TV provides anything to work with.
Last Updated ( Monday, 24 May 2010 ) | 计算机 |
2014-35/1132/en_head.json.gz/2052 | Famous Mormons
Computer Scientists, Computer Programmers, Computer Software Engineers, Mathematicians and Statisticians
Alan Ashton
computer software engineer
Dr. Ashton deserves credit for his pioneering work in word processing that has forever changed the way we use computers. Since selling his ownership in WordPerfect he has built Thanksgiving Point.
Ashton is the grandson of the late Mormon Church president David O. McKay.
James Cannon
mathematician
He is a professor of mathematics at BYU who played a key role in the classification of simple groups. Source: Famous LDS Scientists
www.ieee.org
Edwin Catmull
computer animation
Computer animation pioneer; co-founder of Pixar
From CNN: "A company like Pixar wasn't what Ed Catmull had in mind when he first hatched his plan to use computers to make animated films. But in hindsight, this company couldn't exist without a leader who cites Pinocchio, Peter Pan, and Einstein as the cultural heroes of his youth. Catmull grew up in Salt Lake City as one of five children in a Mormon family. As a kid he made "flip-books" filled with crude animation, and dreamed of working for Disney one day. His favorite character was a hybrid of a man and a unicycle." CNN.com
Dr. Ed Catmull, president and co-founder of Pixar Animation Studios, has made groundbreaking contributions to the field of computer graphics in modeling, animation and rendering that have revolutionized the way live-action and animated motion pictures are created. Dr. Catmull is one of the architects of the RenderMan� software product utilized to create animated films such as Pixar’s Toy Story and Finding Nemo and special effects in live-action films. www.ieee.org
Bernard Daines
Creator of new and innovative cluster computing solutions
In 1999 Time magazine did an article on the 100 people most likely to influence the next century and included Bernard on the list. Daines is widely recognized as instrumental in pioneering Ethernet technology, especially the IEEE standards for Fast Ethernet and Gigabit Ethernet networking technology. In 2002, Daines was elected chairman of the board of Linux NetworX.
Who's Who in Internet and Computer Technology
Robert G. Freeman
Robert is a Principal Engineer and Team Manager at The Church of Jesus Christ of Latter-Day Saints. He is the author of eleven popular Oracle books
including the best selling Oracle RMAN Backup and Recovery series and the Oracle Database New Features series from Oracle Press.
Robert G. Freeman's Blog
Tom Hales
mathematician
He is a professor of mathematics at the University of Michigan who proved a long-standing conjecture about the optimal stacking method of spheres. Source: Famous LDS Scientists
After receiving his PhD from Princeton in 1986, Tom Hales took up a post doc at Berkeley, and then positions at Harvard, Chicago and Michigan. Tom's research interests lie in algebra and geometry. In 1998 Tom Hales astonished mathematicians across the world by confirming the 400-year-old Kepler Conjecture, and followed that by proving the even more venerable Honeycomb Conjecture. (For more information on the Kepler and Honeycomb Conjectures see Cannonballs and Honeycomb below.) The proof of the Kepler Conjecture relied in part on extensive and intricate computer calculations, and Tom is now looking at ways to take that further, and investigate to what extent computers can be used to prove other difficult theorems.
Drew Major
computer software engineer
He helped develop the original NetWare, and has played an integral role in designing and developing every release of the Network Operating System which seems to be everywhere. In 2000, Drew was inducted into the Computer Hall of Fame. He was also named as one of the top ten most influential persons in the computer industry by BYTE Magazine.
Sandy Petersen
computer game designer
He is a member of the Gaming Hall of Fame (1990), and was involved in the production of such award winning game titles as Civilization, Doom, Doom 2, Quake, Rise of Rome, Age of Kings, The Conquerors, Age of Empires 3, and The War Chiefs. As with some other successful computer game designers, Petersen's roots are in the board game industry. His illustrious portfolio includes Runequest, Call of Cthulhu, and Petersen's Field Guide to Monsters. As an internationally recognized game designer and writer, Petersen's works have been published worldwide.
He served a mission to Los Angeles and has been active in the Church since his early life. He has five children ages 27, 25, 21, 20, and 16 and has 2 grandchildren. He and his four sons are Eagle Scouts. He is a High Priest in the Rockwall Ward in Texas.
Sandy Petersen on Wikipedia
Kim Peek
mega-savant
Kim Peek, who inspired the 1988 movie "Rain Man," died when he was 58. He was likely the world's most famous savant, enduring mental handicaps while at the same time possessing extraordinary gifts of memory and recall.
Peek was born on Nov. 11, 1951. At 9 months, doctors said he was severely mentally retarded. "They told us we should institutionalize him because he would never walk or talk," Fran Peek said. "But we refused to do that."
By 16 months, Peek demonstrated extraordinary abilities. He could read and memorize entire volumes of information.
"He could find anything he wanted to. He read all of Shakespeare, the Old and New Testaments," Fran Peek said.
An MRI later showed that his brain lacked a corpus callosum -- the connecting tissue between the left and right hemispheres. Peek said his son's brain lacked the normal filtering system for receiving information. The condition left him able to retain nearly 98 percent of everything he read, heard or watched on television. The average person only retains about 45 percent. As both a child and adult, Peek's favorite place was the library, where he devoured books at a confounding rate. At the time of his death, Peek is believed to have committed at least 9,000 books to memory. Deseret news
Roger Porter
He served for more than a decade in senior economic policy positions in the White House, most recently as Assistant to the President for Economic and Domestic Policy from 1989-1993. He served as Director of the White House Office of Policy Development in the Reagan Administration and as Executive Secretary of the President's Economic Policy Board during the Ford Administration. Source: Harvard
Richard B. Wirthlin
He is best known as President Reagan's strategist and pollster. At the White House, he was a close and trusted advisor to President Reagan. He directed all of the President's opinion surveys, analyzed trends and regularly briefed the President and Cabinet officers on American attitudes about everything from education, jobs and taxes to issues of war and peace. He participated in White House planning and strategy sessions, and played a key role in communications planning. He was chief strategist for two of the most sweeping presidential victories in the history of the United States. In 1981 he was acclaimed "Adman of the Year" by Advertising Age for his role in the 1980 campaign.
2014-35/1132/en_head.json.gz/2425 | Is it suitable to change the design of a service over time?
When designing a web service, is it acceptable to change the layout over time? For example, when you first launch a service you may be introducing new concepts and relationships to the user which require some explanation. Would it be suitable to have explanatory information within the page? If so, would it also make sense that over a period of time "most" users would become familiar with the concepts, and therefore this information could then be moved to a less prominent location?
Personally I have always been of the opinion that where possible you should avoid presenting the user with new concepts that are not intuitive, but because of the existing IA and content of the service I feel the above approach could be suitable in this instance.
information-architecture new-experience | asked Feb 29 '12 at 10:57
Sheff
Don't forget about new users. Although the existing users may become used to the features and no longer require explanatory text, new users won't have the benefit of experience so may still find this information useful.
Totally agree. Which is why I think it's important to keep the information somewhere; but since for most users it would no longer be of primary importance, it could be moved. Trying to cater for the majority of users without affecting the minority too much. Obviously I will base such a decision on the data I have available.
– Sheff
"Web service" is a very misleading description. At first I thought you were talking about machine-to-machine interaction (en.wikipedia.org/wiki/Web_service) and the evolvement of the API you provide.
– Jørn E. Angeltveit
...maybe "web-based service" or "web application"?
Apologies for the confusion, as you say I think web application is probably a more appropriate term.
This totally depends on the task/product and the user-group.
As a web developer, you think that your web-site is the center of the world and that your audience keep track of every change and content update you do. But that's not the case. In most cases, the users don't care. They just want to do what they came to do. Be it registration, buying stuff or finding information.
So I would say that in most cases, it's not a problem.
There is a case study that documents how a required registration led to a huge dropout, whereas the redesign to a quick-buy solution increased sales significantly. I think this is a good example of successful redesign. You should also take a look at some of the examples in this question: Case studies to help sell UCD.
That said, some services heavily rely on regular use and regular visits. Social media sites, for instance. Facebook, Google+ and perhaps Youtube depend on users that carry out the same task every day. These kind of sites should be careful with their redesign and only deliver thoroughly tested solutions.
You will always have some users complaining about new design and new features. Sometimes these complaints are justified, but more often than not they are just another expression of reluctance to change. This is an important factor in itself, of course, and the user's subjective perception of the system (the satisfaction with the system) should always be taken into consideration when you're about to deliver something. But if you've done the redesign the right way, then this will be a temporary phase. Which leads us to the final point. As a vendor, you really need to know that you're doing the right thing. You must have conducted the necessary task and user analyses, and you must have done enough evaluation (user testing, surveys, etc.) of the redesign. That's the only way to ensure that a redesign will succeed and not turn into a gigantic failure...
2014-35/1132/en_head.json.gz/2464 | The Asylum Computer Hardware & Troubleshooting
forum=sanctuary lives!
Jun 25 2013 at 4:11 PM
gbaji | Encyclopedia | 31,462 posts
Friar Bijou wrote:Allegory wrote:trickybeck wrote:Actually, I never heard of this back in 2009 when it happened. I bet gbaji can tell you why. gbaji wrote:I seem to recall mentioning it once or twice on the forum as an example of something that the media was avoiding talking about.Forum search says otherwise; nice try. Don't know if forum searches are working past like a year back. If you change your search from "journolist" to "journalist", the oldest post with that word in it is in 2012. You don't honestly think no one wrote the word "journalist" on this forum until about a year ago, do you? Quote:gbaji wrote: that didn't jive with their political leaningsJIBE, not jive. Aren't you supposed to know, like, 200x more about everything than the rest of us? Do you speak Jive? Quote:gbaji wrote:It was a big deal among conservatives because we had long suspected some kind of collusion among liberal journalists.Because the conservatives do the same thing. What are you, nine years old? Which is interesting given that you're basically saying that two wrongs make a right. Jimmy did it, so it's ok for me to! Also, you're excusing known behavior by one group because of speculated behavior among another. So because liberals think that conservatives will do something nefarious, they feel it's ok for them to do it? Um... What if they're wrong about their assumption? What if they're projecting their own willingness to break the rules on conservatives in order to justify what they're doing (It's ok to do this cause we all know that conservatives would do it too). That doesn't make what they were doing right, and it doesn't change the fact that a group of liberal journalists were actually caught doing it. Now, if someone catches conservatives doing this, then we can discuss that separately. But it's pretty weak to use the potential as an excuse for the actual.
____________________________King Nobby wrote:More words please
2014-35/1132/en_head.json.gz/2752 | 12/15/201110:36 AMAdrian LaneCommentaryConnect Directly0 commentsComment NowLogin50%50%
Data Security, Top Down
Focus on what needs to be done, not how to do it
Continuing the database security trends with database activity monitoring (DAM), the next model I want to talk about is policy-driven data security. Conceptually this means you define the security or compliance task to be accomplished, and that task gets divided among a number of technologies that get the work done. Policies are mapped -- from the top down -- into one or more security tools, each performing part of the workload to accomplish a task. In an ideal world, the tool best-suited to do the work gets assigned the task. To make this type of approach work, you must have a broad set of security capabilities and a management interface tightly coupled with underlying features.
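As a purely hypothetical illustration of what mapping one policy into multiple tools might look like -- the field names, tool names and thresholds below are invented for the example and do not reflect any vendor's actual format -- consider this Python sketch:

```python
# Hypothetical illustration only: one compliance task decomposed into work
# items for the assessment, monitoring and audit layers that fulfill it.
failed_login_policy = {
    "task": "detect and report repeated failed database logins",
    "assessment": {"check": "account lockout threshold configured", "frequency": "weekly"},
    "monitoring": {"event": "failed login", "threshold": 5, "window_minutes": 10,
                   "action": "alert security operations"},
    "audit":      {"retain_events": True, "report": "monthly failed-login summary"},
}

def assign_to_tools(policy):
    """Route each component of the policy to the tool best suited to it."""
    routing = {"assessment": "vulnerability scanner",
               "monitoring": "database activity monitor",
               "audit": "log archive / reporting"}
    return {routing[part]: spec for part, spec in policy.items() if part in routing}
```

The point of the sketch is only that the user states the task once, while the orchestration layer decides which underlying tool performs each piece of it.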
For database security, this is the classic coupling of discovery, assessment, monitoring, and auditing -- each function overlapping with the next. The key to this model is policy orchestration: Policies are abstracted from the infrastructure, with the underlying database -- and even non-database -- tools working in unison to fulfill the security and compliance requirements. A policy for failed logins, as an example, will have assessment, monitoring, and auditing components that capture data and perform analysis. A central management interface includes lots of pre-generated rules that coordinate resources that allow customers to cover the basics quickly and easily. In practice, it's rare to see top-down security limited to database security, and it usually covers general network services and endpoints as well. This model is great for reducing the pain of creating and managing security across multiple systems. It also lowers the bar on technical and compliance expertise required for users to manage the system -- that knowledge is built in. Workers don't have to know "how" to do it, just what they need to do. And that is a big deal for mid-market firms that cannot afford to have lots of security and compliance experts on staff. Finally, since it's designed to leverage lots of tools, integrating with other platforms is much easier. However, there are several significant detractors to the model because the lack of flexibility and deployment complexity overwhelms the midmarket buyer it conceptually best serves. Since the technologies are prebundled, you have more tools to accomplish tasks you can solve with a single product from a different vendor, resulting in a much larger footprint on your organization. While basic setup of policies is as simple as selecting a prewritten rule, custom policies have a greater degree of complexity as compared to more traditional database security systems. In this model, DAM is just one of many tools to collect and analyze events, and not necessarily central to the platform. As such, policy-driven security is not an evolution or change to DAM (as it is with business activity monitoring and ADMP); it's more a vision of how DAM fits within the security ecosystem. The intention of the model is to mask the underlying complexities of technology from the users who simply want to get a compliance or security task done. Many in IT don't have the time or desire to be a security expert, and want more of a "point-and-click" solution. This model has been around for a long time -- since at least 2006 with DAM. I know because this was the model I architected and was attempting to build out as a "next generation" DAM solution. Conceptually, it's very elegant, but every firm I have ever seen try to attempt this fails because the underlying technologies have not been best-of-breed, nor did the interfaces deliver on promised simplicity. It requires a great deal of commitment from the vendor to carry this off, and the jury is still out as to whether it will deliver on the intended value proposition. Adrian Lane is an analyst/CTO with Securosis LLC, an independent security consulting practice. Special to Dark Reading.
Adrian Lane is a Security Strategist and brings over 25 years of industry experience to the Securosis team, much of it at the executive level. Adrian specializes in database security, data security, and secure software development. With experience at Ingres, Oracle, and ...
2014-35/1132/en_head.json.gz/3276 | How to battle the botnets
Why we're losing the fight against botnets
By Joaquim P. Menezes | 28 July 07
Botnets - they're dangerous, deceptive, and very difficult to detect and deal with. What's more, according to recent surveys, the botnet threat is growing...rapidly.
Experts say it's imperative that businesses and end users become aware of the acute and growing dangers posed by botnets, and take decisive and effective steps to counter them before it's too late.
But that's easier said than done as botnets are insidious, and use stealth as a key weapon.
Short for robot, a bot is a captured and compromised computer; and of course botnets are networks of such computers. After being commandeered, these machines may be used for a range of nefarious purposes, including scanning networks for other vulnerable systems, launching denial of service (DoS) attacks against a specified target, sending spam emails, and keystroke logging as a prelude to ID or password theft.
Botnets are generally created through spam emails or adware that leaves behind a software agent, also sometimes called a 'bot'. Captured machines can be controlled remotely by the malware creator, referred to as the bot master or bot herder.
If additional software has to be downloaded to complete the capture process, the bot would first do that. "It may use any mechanism - FTP, TFTP, HTTP - to install the software," explains Jim Lippard, director of information security operations at network services provider Global Crossing, whose customers include more than 35 percent of the Fortune 500, as well as 700 carriers, mobile operators and ISPs.
The next thing the bot does is call home. It would "usually do a domain name server (DNS) lookup on a particular name used by the miscreant for that botnet. Then it will find the host for that name, and connect to it using standard Internet Relay Chat (IRC) protocol," Lippard says.
The larger a botnet, the more formidable the attack it can launch. For instance, when a botnet containing tens of thousands of captured machines is used to launch a denial of service attack, the consequences can be serious and irreparable.
There's the well publicised case of the botnet created by Christopher Maxwell that installed adware on vulnerable machines. It was estimated his botnet attacked more than 400,000 computers in a two-week period. Maxwell's attack, it was reported, crippled the network at Seattle's Northwest Hospital in January 2005, shutting down an intensive care unit and disabling doctors' pagers. The botnet also shut down computers at the US Department of Justice, which suffered damage to hundreds of computers worldwide in 2004 and 2005.
Maxwell pleaded guilty and was sentenced to three years in jail, three years of probation and a fine of $250,000.
The motivation of most bot herders is usually financial, say experts who follow this phenomenon closely. Botnets are sometimes rented out to spammers, scam artists or other criminal elements.
Lippard dubs bot software "the Swiss army knife of crime on the internet". There are multiple functional roles in the botnet economy, he says. For instance, there's the bot herder - the person who controls the bot. Lippard talks about two common ways bot herders make money. The first is by installing adware or clickware on to the systems they control.
Jim Lippard said: The reference to PFTP should say TFTP | 计算机 |
2014-35/1132/en_head.json.gz/4833 | by The Honeynet Project, Addison-Wesley 2004,
IEEE Cipher, E61, July 17, 2004
Know Your Enemy. 2nd ed. Learning About Security Threats
by The Honeynet Project
Addison-Wesley 2004.
ISBN 0-321-16646-9. 6 appendices, Resources and References, Index, CD-ROM. 768 pages, $49.99
Reviewed by Bob Bruen, July 18, 2004
The Honeynet Project has come a long way in the two years since the first edition of Know Your Enemy. The table of contents is still divided into three parts (The Honeynet, The Analysis and The Enemy), but the content shows great progress. The underlying idea of the honeynet is to have a place that crackers could break into while being observed. The idea is simple, but the architecture of the system has evolved into a sophisticated one. Moreover, the observation methodology has evolved significantly. Not only are the tools better, but so are the applications of the
tools. This edition has expanded and improved sections on forensics, which
seems rather an obvious outgrowth of the research. As with the rest of
Honeynet tools, forensics is carried out with open source tools. In this
case it is Sleuth Kit, Autopsy, netcat and built-in unix commands like dd.
They also list a number of other useful tools, such as CDs that can boot a
system for analysis or acquisition.
The new material on reverse engineering is a welcome addition. It has always been my opinion that analysis such as this is not complete without reverse engineering binary code or data files. Since blackhats generally do not leave source around, figuring out what they did can only be accomplished by reverse engineering. This section includes material on making reverse engineering more difficult, along with descriptions of code that will do this. It looks like one of those constantly escalating battles. An excellent tutorial on The Honeynet Reverse Challenge from the binary through disassembly to source code provides a practical demonstration on how reverse engineering works.
Since the first edition, Honeynets have gone into generations, GenI and GenII. Each is explained thoroughly, as are Sebek and other additional approaches such virtual honeynets, User Mode Linux and VMWare. There seems to be no limit to what can be done to learn about what happens to our systems. There is also no reason why the same tools and techniques can not be used to analyze normal systems that have not been compromised, but only failed or exhibited unexpected behavior.
The end goal of this work is to learn and understand the behavior of the blackhat. My sense is that the blackhat of today is somewhat different from the blackhat of several years ago, even though the basic techniques have evolved rather than made revolutionary advances. There seems to be more criminal intent now and this is reflected in how the Honeynet Project describes the events. The section on The Enemy has been expanded to include profiling. The psychological analysis has given way to the sociological analysis, that is to say the view has moved from the individual to the group.
The Enemy section has a wonderful analysis of the life cycle of an exploit that alone is worth the price of the book. I highly recommend this edition of Know Your Enemy for all the lessons provided. This is a great project that deserves the attention of all security people. The future looks better because of them. | 计算机 |
2014-35/1132/en_head.json.gz/5154 | Linux certificate program launches in North America
Following a successful pilot test in Europe, the Middle East, and Africa, this new program aims to help newcomers get started.
By Katherine Noyes | PC World | 04 October 12
There's no doubt that demand for IT professionals with Linux skills is growing rapidly, and earlier this year I wrote about a brand-new certification program targeting newcomers to the open source operating system.
At the time, the Linux Essentials program from the Linux Professional Institute (LPI) was gearing up for a June launch in Europe, the Middle East, and Africa, but last Friday the group announced that it is now available in North America as well.
"Jobs for those with knowledge of Linux and open source software are available now," said Jim Lacey, president and CEO of LPI. "This is due in part to the phenomenon of 'Big Data' and the cloud, which are built on open source infrastructure. This rapidly growing IT sector doesn't just require those with hard technology skills but also needs to fill a wide variety of job roles that have a basic understanding and literacy around the open source ecosystem."
A Certificate of Achievement
The Linux Essentials program is the fruit of two years' worth of development in partnership with qualification authorities, academic partners, private trainers, publishers, government organizations, volunteer IT professionals, and Linux and open source experts.
Culminating in a single Linux Essentials exam, the program leads to a certificate of achievement recognizing knowledge of a variety of related subjects, including the Linux community and open source careers; popular operating systems and applications; open source software and licensing; and Linux command line basics, files, and scripts.
Participants in the program also get regional links to employment and apprenticeship opportunities as well as support for skills competitions such as Worldskills International.
Pricing on the resulting PDF certificate of achievement is $85 at private testing or training centers and $65 at academic partners through internet-based testing. More information about the program can be found on the LPI site.
There are, of course, numerous places to boost your Linux skills, both online and off. But if you're a newcomer to the OS based in North America and want to get started learning about Linux, this new program could be a good place to start.
2014-35/1132/en_head.json.gz/5504 | World Wide Web Consortium Issues XSL Transformations (XSLT) and XML Path Language (XPath) as
Two specifications work to transform XML documents and data, supporting presentation flexibility and device independence
Contact America --
Janet Daly, <[email protected]>,
Contact Europe --
Josef Dietl, <[email protected]>,
+33.4.92.38.79.72
Contact Asia --
Yuko Watanabe <[email protected]>,
+81.466.49.1170
(also available in Japanese)
http://www.w3.org/ -- 16 November 1999 --
The World Wide Web Consortium (W3C) today releases two specifications, XSL
Transformations (XSLT) and XML Path Language (XPath), as W3C Recommendations. These new specifications represent cross-industry and expert community agreement on technologies that will enable the transformation and styled presentation of XML documents. A W3C Recommendation indicates that a specification is stable, contributes to Web interoperability, and has been reviewed by the W3C membership, who favor its adoption by the industry.
"Anyone using XML can now take advantage of XSLT, a powerful new tool for manipulating, converting or styling documents," declared Tim Berners-Lee, W3C Director. "XPath adds a simple way of referring to parts of an XML
document. Together, they strike a fine balance between simplicity of use and underlying power."
XSLT and XPath Add Strength, Flexibility to XML Architecture
As more content publishers and commercial interests deliver rich data in XML, the need for presentation technology increases in both scale and
functionality. XSL meets the more complex, structural formatting demands that XML document authors have. XSLT makes it possible for one XML document to be transformed into another according to an XSL Style sheet. As part of the document transformation, XSLT uses XPath to address parts of an XML document that an author wishes to transform. XPath is also used by another XML technology, XPointer, to specify locations in an XML document. "What we've learned in developing XPath will serve other critical XML technologies already
in development," noted Daniel Veillard, W3C Staff contact for the XML Linking Working Group. Together, XSLT and XPath make it possible for XML documents to be
reformatted according to the parameters of XSL style sheets and increase
presentation flexibility into the XML architecture.
Device Independent Delivery of XML Documents Separating content from presentation is key to the Web's extensibility and flexibility. "As the Web develops into a structured data space, and the tools used to access the Web grow more varied, the need for flexibility in styling and structure is essential," explained Vincent Quint, W3C User Interface Domain Leader and staff contact for the XSL Working Group. "With XSLT and XPath, we're closer to delivering rich, structured data content to a wider range of devices." Broad Industry Support, Multiple Implementations Already Available The XSLT Recommendation was written and developed by the XSL Working Group, which includes key industry players such as Adobe Systems, Arbortext, Bell Labs, Bitstream, Datalogics, Enigma, IBM, Interleaf, Lotus, Microsoft, Novell, Oracle, O'Reilly & Associates, RivCom, SoftQuad Inc, Software AG, and Sun Microsystems. Notable contributions also came from the University of Edinburgh and a range of invited experts. The XPath Recommendation pooled together efforts from both the XSL Working Group and the XML Linking Working Group, whose membership includes
CommerceOne, CWI, DATAFUSION, Fujitsu, GMD, IBM, Immediate Digital, Microsoft, Oracle, Sun Microsystems, Textuality, and the University of Southampton. The creators of XML documents now have a variety of open source and
commercial tools which support XSLT and XPath. In addition, many W3C members who reviewed the specifications have committed to implementations in upcoming products, indicated in the wide range of testimonials.
About the World Wide Web Consortium [W3C]
The W3C was created to lead the Web to its full potential by developing
common protocols that promote its evolution and ensure its interoperability.
It is an international industry consortium jointly run by the MIT Laboratory for Computer Science (MIT
LCS) in the USA, the National Institute for
Research in Computer Science and Control (INRIA) in France and Keio University in Japan. Services provided
by the Consortium include: a repository of information about the World Wide
Web for developers and users, reference code implementations to embody and
promote standards, and various prototype and sample applications to
demonstrate use of new technology. To date, over 370 organizations are Members of the Consortium. For more
information see http://www.w3.org/
2014-35/1132/en_head.json.gz/5539 | Category: Internet
What is Remote Control Software?
7 Links to Related Articles 1 Discussion Post Watch the Did-You-Know slideshow
David White
Edited By: Niki Foster
With remote control software, you can access your computer even if you're nowhere near it. Despite its name, however, it doesn't really involve a handheld control like the one you use for your television. In fact, the only buttons you need to push are on a keyboard.Remote control software enables long-distance access of computers that have been set up to allow such access. Login codes, such as usernames and passwords, are required on the receiving end, of course. You can't just pop open remote control software and search anyone else's machine, but this kind of software does come in handy if you travel often and can't or don't want to take your entire hard drive with you.Most remote control software allows you full access to your computer. Functionality such as drag-and-drop, password alteration, and security updating are all possible and encouraged. It's like being there, even when you're not.Remote control software comes in another form, that of desktop management. Information technology (IT) managers or session controllers naturally want to have access to computers in a company's system, for security and upgrading purposes, among others. The accounting department of a large company with many stores will also want to use this kind of remote control software in order to keep track of sales via database analysis and receipt reporting. Ad
These kinds of remote control software are generally standalone applications that are stored on one central machine or mainframe and used to access other computers, either nearby or far away. Another kind of remote control software, however, is web-based. You can buy or pay for access to a similar kind of software suite that you can access via a website, rather than an application that resides on your computer. This kind of remote control software is hosted by someone else, and you pay for the usage without having to pay for subsequent upgrades, which are the responsibility of the hosting company.
What Is a Lightweight Design?
What Is an Access Method?
What Is a Waterproof Remote Control?
How Do I Choose the Best VCR Remote?
How Do I Choose the Best Remote?
What Does a Computer Software Professional Do?
What Is Computer Numerical Control?
Markerrag
This kind of software is a dream come true for computer support types. Instead of having to walk a customer through what needs to be done to resolve a problem during a phone call, the support person can simply ask a customer to install some remote software and then get permission to directly access the computer that needs attention.
The support professionals who use that approach wind up saving a lot of time. view entire post | 计算机 |
2014-35/1132/en_head.json.gz/5672 | ADVERTISE AT BOING BOING! How SOPA will attack the Internet's infrastructure and security Cory Doctorow at 10:34 am Sat, Nov 12, 2011 • The Electronic Frontier Foundation is continuing its series of in-depth analysis of the Stop Online Piracy Act, the most dangerous piece of Internet legislation ever introduced, which is set to be fast-tracked through Congress by Christmas. Today, EFF's Corynne McSherry and Peter Eckersley look at the way that SOPA attacks innovation and the integrity of Internet infrastructure.
In this new bill, Hollywood has expanded its censorship ambitions. No longer content to just blacklist entries in the Domain Name System, this version targets software developers and distributors as well. It allows the Attorney General (doing Hollywood or trademark holders' bidding) to go after more or less anyone who provides or offers a product or service that could be used to get around DNS blacklisting orders. This language is clearly aimed at Mozilla, which took a principled stand in refusing to assist the Department of Homeland Security's efforts to censor the domain name system, but we are also concerned that it could affect the open source community, internet innovation, and software freedom more broadly:
* Do you write or distribute VPN, proxy, privacy or anonymization software? You might have to build in a censorship mechanism — or find yourself in a legal fight with the United States Attorney General.
* Even some of the most fundamental and widely used Internet security software, such as SSH, includes built-in proxy functionality. This kind of software is installed on hundreds of millions of computers, and is an indispensable tool for systems administration professionals, but it could easily become a target for censorship orders under the new bill.
* Do you work with or distribute zone files for gTLDs? Want to keep them accurate? Too bad — Hollywood might argue that if you provide a complete (i.e., uncensored) list, you are illegally helping people bypass SOPA orders. * Want to write a client-side DNSSEC resolver that uses multiple servers until it finds a valid signed entry? Again, you could be in a fight with the U.S. Attorney General.
Hollywood's New War on Software Freedom and Internet Innovation
Warner Bros admits it sends takedown notices for files it hasn't seen and doesn't own Cory Doctorow at 9:10 am Thu, Nov 10, 2011 • Warner Brothers has filed a brief in its lawsuit against file-locker service Hotfile in which it admits that it sent copyright takedown notices asserting it had good faith to believe that the files named infringed its copyrights, despite the fact that it had never downloaded the files to check, and that it sometimes named files that were not under Warners's copyright, including files that were perfectly legal. Among the files that Warner asked Hotfile to remove was a file called "http://hotfile.com/contacts.html and give them the details of where the link was posted and the link and they will deal to the @sshole who posted the fake" and others. The studio also "admits that it did not (and did not need to) download every file it believed to be infringing prior to submitting the file's URL" to the Hotfile takedown tool. That's because "given the volume and pace of new infringements on Hotfile, Warner could not practically download and view the contents of each file prior to requesting that it be taken down."
This is interesting because the DMCA requires a copyright holder issuing a takedown notice to state that it has a "good faith belief that the use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law." It's hard to see how anyone at Warner Brothers could have formed any beliefs—good faith or otherwise—about files it admits that no human being at Warner had even looked at.
The recently-proposed Stop Online Piracy Act, which is backed by the major Hollywood studios, would give copyright holders new powers to cut off websites' access to payment processors and advertising networks. It even includes a new DMCA-style notice-and-takedown scheme. But given the cavalier way that Warner Brothers has used the powers it already has under the DMCA, policymakers may be reluctant to expand those powers even further.
Warner Bros: we issued takedowns for files we never saw, didn't own copyright to
"Piracy-stricken" Viacom CEO tops pay-raise charts Cory Doctorow at 9:09 am Tue, Nov 8, 2011 • Philippe P. Dauman, CEO of Viacom, led the executive compensation raise chart this year with a $50.5 million raise that brought his total annual compensation up to $84.5 (much of the 148.6% raise came in the form of stock options). Meanwhile, Viacom continues to argue that it is in danger of capsizing unless radical changes are made, starting with taking away the right to privately share videos of our personal lives on YouTube. | 计算机 |
2014-35/1132/en_head.json.gz/6534 | Developer: Kyle Choi
& Publisher: Shine Studio, Hong Kong
PC Requirements: Windows 95/98, 100 MHz Pentium or faster, 16 MB RAM minimum, 10 MB hard disk space, 8x CD-ROM drive or faster, 800x600 display, 24-bit True Color preferred, 640x480 display, 16-bit High Color acceptable, Windows-compatible sound device
Walkthrough Walkthrough
by lasanidine
No dying
This game never had an American publisher and has never been extensively advertised. In spite of this, it has been known and played by adventure gamers for its many good qualities and because of the outstanding music it contains.
Comer is a one-man effort in the style often called Myst-like. It is a first person point-and-click game that comes on four disks with a manual that suffers somewhat from a less than perfect translation.
Somewhere in the future there are archeological findings that imply that human beings existed on the earth long before it was commonly believed. These ancients experimented and altered the environment to suit their purposes regardless of the detriment to other living things. Through the centuries figures appeared who had a great influence over the development of the world, and who tried to end the harmful experiments and channel the energies in a more favorable direction. These outstanding people are recognized by us as the �Comers�. They left behind ample proof of their presence and plentiful clues as to their activities. The player, the latest person in this area, has to find these clues and solve the puzzles. This will not only disclose what has taken place before but also reveal who will be the next Comer. It is interesting to note the philosophical nature of this game and the unusual suppositions that went into its development. Whether or not one looks favorably at the underlying arguments is up to personal taste....
Comer uses a first person point-and-click interface and is played entirely with the mouse. Navigation is self-evident, but unfortunately the mouse action leaves a lot to be desired. There is no way to tell the locations of hot spots and it is trial and error to see what works. This is especially annoying at the start of the game.
The mechanics of saving the game are quite good and include an overwrite warning. The problem is that a saved game does not restore the player to the point of the save, but rather to an earlier place so that one has to travel for a while to arrive where the game left off. This does not affect the changes that were made before the save or negate the solved puzzles. The saves must include the *.CMR extension to be loadable.
The game can be started from any one of the disks by clicking on the logo and pulling down the menu. During the game, clicking on the top of the screen brings up the menu; the "Esc" key cancels the video sequences. The puzzles are not overly complicated, but are interesting enough to hold the attention and to entertain. The ending of the game is also somewhat unusual. One can still wander around after all the puzzles are solved until one realizes what this ending means.
Mr. Bill -- who wrote the very nice and thoughtful walkthrough -- has this to say about the ending: "It felt very strange to have a game with no real ending, with no credits, with everything deserted, no trees, no wind in the trees, very sad music, just you alone on a volcanic island in the middle of nowhere. But that's exactly how it would be if the story were real, isn't it?" The graphics
At the time this game was designed the clear slide show-like graphics were the norm. What we see are subdued colors. The shapes are a little blocky but not unpleasantly so. On the whole it makes for an interesting, uncluttered environment. Sound and music
The voices are hard to understand. The music is good. Here is what the designer himself says about it:
"All of 28 music titles of this audio CD were arranged / composed by the author of Comer, entirely with the means of computer. Parts of them are variations from works by the greatest composers of all times, such as Peer Gynt, Vivaldi, Tchaikovsky, and Mahler. Variations were made with a modern and a new age taste, by adding strong ambience and percussions."
This game and a music CD still can be purchased at the developer�s web site. My thoughts about the game
Even though the game was published quite a few years ago it has not lost its freshness and has not aged too much. It has many of the characteristics that appeal to a true blue adventure player and can be played as a family game.
A thread of haunting sadness runs through the game that culminates in an invisible pool of regret. There is a message here if you care to receive it....
100 MHz Pentium or faster
16 MB RAM minimum
10 MB hard disk space
8x CD-ROM drive or faster
800x600 display, 24-bit True Color preferred
640x480 display, 16-bit High Color acceptable
Windows-compatible sound device
XP/Home:
I tried the game on XP/Home on a game partition formatted FAT32, with Win98 compatibility settings. It played without any problem on my computer:
XP/Home
Intel® Pentium® 4 CPU 1.60 GHz
Nvidia GeForce4 Ti 4100
Review Grade: C+
design copyright © 2003
GameBoomers
Group
2014-35/1132/en_head.json.gz/6590 | Blender 2.64 improves green screen and compositing
The effects used in Tears of Steel have greatly influenced development of the latest Blender version
The latest version of the open source 3D modelling and movie production application, Blender, has been released by the Blender Foundation and its developers, who have been concentrating on green screen and compositing functionality in the software. The release of Blender 2.64 includes a number of features that were developed as part of the Blender Foundation's fourth open movie project. The aim with Tears of Steel, which was formerly known as Project Mango, was to produce a film focusing on live action scenes integrated with computer generated imagery (CGI) – the new features added to Blender reflect this focus.
Blender 2.64 now includes a mask editor in the image and movie clip interface. Masks can be defined by manipulating splines and have fine-grained feathering controls. This allows movie makers to add effects into shots and block out unwanted objects or parts of objects. Blender's motion tracking abilities have been significantly improved, a fact that is clearly evident in Tears of Steel. This improvement goes hand in hand with the refined green screen abilities that the developers have also added. Both of these features allow film makers to better combine video footage with 3D generated characters. Additionally, a new compositing backend has been implemented and Blender now sports a redesigned colour management system based on the OpenColorIO project.
Where CGI creation is concerned, Blender 2.64 offers improvements to the mesh and sculpting tools. A new Skin Modifier option allows users to generate a polygon skin from a skeleton made out of vertices. This should make character creation easier to achieve and while the result is not perfect, it can of course be refined manually afterwards. The new release also includes a large number of bug fixes and several other improvements throughout the application. More information about all of the new features and bug fixes is available from the Blender project.
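Blender's bundled Python API can drive the new feature directly; the snippet below is a minimal sketch meant for Blender's own Python console rather than a standalone interpreter, and it simply attaches the new Skin modifier to whatever object is currently active.

    import bpy

    obj = bpy.context.active_object                       # whichever mesh is selected in the scene
    skin = obj.modifiers.new(name="Skin", type='SKIN')    # the Skin modifier introduced in 2.64
    print("added modifier:", skin.name)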
2014-35/1132/en_head.json.gz/6933 | Internet/
Web Services Get Fit
Sarah L. Roberts-Witt
June 30, 2002 Comments
Life Time Fitness's turn-of-the-century resolution was a whopper. 0shares
Web Services Warm Up
The Basic Ingredients
Start Small, Think Big
Who's Who in Web Services
iBiz Stats (v21n12)
Life Time Fitness's turn-of-the-century resolution was a whopper. That's when Brent Zemple, the chief information officer of the Eden Prairie, Minnesotabased health club chain, decided that the membership management systemthe core of its IT operationshad to go. The company was expanding, having bulked up to nearly 30 clubs in the Midwest over the past five years, and the ten-year-old application wasn't exactly fit for the job. "We're talking about a flat-file, proprietary system that you could only access by command line," says Zemple. "There was nothing open about it."
After shopping for more flexible software that was customized for health clubs, Zemple and his ten-person IT team made a bold moveone that was unusual for the health and fitness industry. They decided to build Life Time's own systems, using the new languages and emerging standards for Web services.
Initially, Life Time experimented with the Microsoft .NET offerings to build its new applications, but the team found the tools frustrating to use. Eventually, the company landed on Java, XML, and SOAP as its internal standards, and it turned to development and infrastructure products from Sun Microsystems.
In August 2000, after eight months of work, Zemple's staff rolled out the new membership management system. Built on J2EE, it runs at the company's headquarters. Apache Web servers and a BEA WebLogic application server also run from this central location. The servers act as a front end to the management system for the company's accounting and administration employees as well as for personnel at the health clubs. All employees access the system via a Web browser, and Life Time relies on the iPlanet Directory Server for authentication.
Using XML and SOAP, Zemple's team built a Web service that handles the electronic funds transfer for membership dues. They are now working on a Web service that will let members use the Web to schedule massages and personal training sessions over the Web.
Life Time is also working with a large HMO that subsidizes employee memberships in Life Time clubs. The two companies intend to build a Web service to replace the manual process of verifying attendance. This Web service will let the HMO access a portion of Life Time's membership database electronically, so it can ensure that beneficiaries are making the required number of visits for reimbursement.
"This is something we would never have been able to do with our old system, because it was just too proprietary," says Zemple. "With Web services, it's really pretty easy." No doubt easier than that first trip to the gym.
Next : By Sarah L. Roberts-Witt
More Stories by Sarah L.
Hungry for Video
Digital asset management lets you make all your multimedia content available to subscribers.
What Else Is Hot Now At Krispy Kreme?
Krispy Kreme's corporate portal strategy is an open window to the company's core business operations...
Mining for Counterterrorism
As a result of the events of September 11, 2001, the federal government is paying more attention to ... | 计算机 |
2014-35/1132/en_head.json.gz/7088 | The Sims Online.
SlateWebheadInside the Internet.Nov. 12 2002 11:28 AM
The No-Magic Kingdom
Even without wizards, the Sims Online is a fantasy.
By Steven Johnson
In the world of video games, the most interesting development of the past few years has been the success of massively multiplayer online games like Everquest and Ultima Online. These games create open-ended universes that can be explored by thousands of players simultaneously: Players wage war with each other, build homes, learn skills, and barter goods and services. These games are so popular that a bustling black market has developed—in real-world currencies —for virtual items accumulated in them. On hundreds of auction sites around the Web, you can pay cash to buy magic spells, swords, or even entire characters.
Games like Ultima and Everquest are the closest we've come to the William Gibson/Matrix vision of cyberworlds that exist alongside the real one. But there's one thing holding them back from mainstream appeal, and you can summarize it in one word: orcs. If you're not the sort of person who goes for wizards and magic scepters, it's hard to throw yourself into the Ultima or Everquest experience, which is basically Dungeons & Dragons without the 20-sided dice. Building an open-ended virtual world with thousands of other participants sounds like an irresistible project. I just don't want that world to have elves. A similar anti-fantasy sentiment has fueled a lot of the hype for Will Wright's latest creation, the Sims Online—the multiplayer follow-up to the Sims, the most popular video game of all time. The original Sims is a celebration of the quotidian: Your characters trudge off to work each day, and they clean up the kitchen after dinner. You can even steer them to the bathroom each time their bladders get full. The allure of the Sims Online is having that living-room drama projected onto a broader stage: Instead of managing a household, you're helping to create a living city, with varied neighborhoods and industries, hot spots and slums. You get the fishbowl economies of Ultima and its ilk but without the magic spells and heavy armor. It's a virtual world that, at long last, looks like reality. At least, that's been the sales pitch: The Real World vs. Ultima's Xena. But the early glimpses of the Sims Online, which began a public beta test last month and is scheduled for release by Christmas, suggest that the game has its own kind of distorted reality. It's as far from everyday life, in its own way, as Everquest's dragons and sorcerers. The game, in its earliest incarnation at least, has a bizarre high-school-like quality, where every design element encourages more "team spirit." Right now the most lucrative money-making activity for players is a group exercise where four characters make pizza together. Using the game's chat dialogue function, you recruit three other participants, and a successful payout requires that you coordinate your group actions (supplying dough, toppings, cheese, etc.). There are other incentives for players to collaborate with each other, too: "friendship webs" and a roommate system that encourages you to make connections with other players—both of which reward you with cash in various ways. There are also public lists of the most popular players. Wright's games are justifiably famous for their open-endedness, but as far as I can tell, it would be very hard to play the Sims Online successfully as a loner. The overall effect is a maniacally social and collaborative universe. There's something sweet in this but also something unreal. The Sims Online is the mirror image of 2002's other hot title, Grand Theft Auto: Vice City, which is all about maniacally anti-social behavior: running over pedestrians, abusing prostitutes, or just crashing into things for no good reason. The test of the Sims' reality principle will be whether the final version lets a little menace into the mix. If the game is going to satisfy our craving for multiplayer realism, you should be able to carjack your fellow Sims, not just make pizza with them. Steven Johnson is the author of five books, including Everything Bad Is Good For You andThe Ghost Map, and co-founder of Outside.in. | 计算机 |
2014-35/1132/en_head.json.gz/9039 | Sasser suspect walks free
Probation and no fine for 'world's worst' VXer
The teenage author of the infamous Sasser worm has been sentenced to one year and nine months probation following his conviction for computer sabotage offences. Sven Jaschan, 19, escaped a prison sentence after confessing to computer sabotage and illegally altering data at the beginning of his trial in the German town of Verden this week. Jaschan will also have to serve 30 hours community service at the local hospital but he escaped any fine.The teenager was tried behind closed doors in a juvenile court because he was 17 at the time the worm was created, a mitigating factor that went a long way to ensuring Jaschan escaped a more severe punishment for the havoc he wrought.
Sasser is a network aware worm that exploited a well-known Microsoft vulnerability (in Windows Local Security Authority Subsystem Service - MS04-011) to infect thousands of systems in May 2004. German prosecutors picked three German city governments and a broadcaster whose systems were disrupted by Sasser as specimen victims in the prosecution against Jaschan. These organisations were selected from the 143 plaintiffs with estimated damages of $157,000 who have contacted the authorities. All indications are that this is the tip of a very large iceberg. Anti-virus firm Sophos reckons Jaschan was responsible for more than 55 per cent of the viruses reported it last year, thanks to his role in creating bot the Sasser and NetSky worms."Jaschan's worms caused considerable damage, but was committed when he was still a junior. Some who have to defend computer systems against worms may feel frustrated that the sentence isn't stronger, but it has to be remembered that he was a young kid who did something immensely dumb rather than one of the organised crime gangs intent on stealing money via viruses that we are now commonly encountering," said Graham Cluley, senior technology consultant at anti-virus firm Sophos."It is, however, surprising that there doesn't appear to any fine attached to his sentence. From the sound of things Jaschan will go into work at the security firm that employs him [Securepoint] just as normal tomorrow morning."Jaschan was arrested in the village of Waffensen near Rotenburg, in northern Germany, on suspicion of writing and distributing the Sasser worm in May 2004. The teenager later confessed to police that he was both the author of Sasser and the original creator of the NetSky worm.He was arrested after a tip-off to Microsoft from individuals (Jaschan's erstwhile friends) hoping to cash in through Microsoft's Anti-Virus Reward Program. Investigators questioned Jaschan's mates on suspicion of assisting his virus writing activi | 计算机 |
2014-35/1132/en_head.json.gz/9437 | Phase-locked loop
"PLL" redirects here. For other uses, see PLL (disambiguation).
A phase-locked loop or phase lock loop (PLL) is a control system that generates an output signal whose phase is related to the phase of an input signal. While there are several differing types, it is easy to initially visualize as an electronic circuit consisting of a variable frequency oscillator and a phase detector. The oscillator generates a periodic signal. The phase detector compares the phase of that signal with the phase of the input periodic signal and adjusts the oscillator to keep the phases matched. Bringing the output signal back toward the input signal for comparison is called a feedback loop since the output is 'fed back' toward the input forming a loop.
Keeping the input and output phase in lock step also implies keeping the input and output frequencies the same. Consequently, in addition to synchronizing signals, a phase-locked loop can track an input frequency, or it can generate a frequency that is a multiple of the input frequency. These properties are used for computer clock synchronization, demodulation, and frequency synthesis.
Phase-locked loops are widely employed in radio, telecommunications, computers and other electronic applications. They can be used to demodulate a signal, recover a signal from a noisy communication channel, generate a stable frequency at multiples of an input frequency (frequency synthesis), or distribute precisely timed clock pulses in digital logic circuits such as microprocessors. Since a single integrated circuit can provide a complete phase-locked-loop building block, the technique is widely used in modern electronic devices, with output frequencies from a fraction of a hertz up to many gigahertz.
1 Practical analogies
1.1 Automobile race analogy
1.2 Clock analogy
3 Structure and function
3.1 Variations
3.2 Performance parameters
4.1 Clock recovery
4.2 Deskewing
4.3 Clock generation
4.4 Spread spectrum
4.5 Clock distribution
4.6 Jitter and noise reduction
4.7 Frequency synthesis
5 Block diagram
6.1 Phase detector
6.2 Filter
6.3 Oscillator
6.4 Feedback path and optional divider
7 Modeling
7.1 Time domain model
7.2 Phase domain model
7.2.1 Example
7.3 Linearized phase domain model
7.4 Implementing a digital phase-locked loop in software
Practical analogies
Automobile race analogy
For a practical idea of what is going on, consider an auto race. There are many cars, and the driver of each of them wants to go around the track as fast as possible. Each lap corresponds to a complete cycle, and each car will complete dozens of laps per hour. The number of laps per hour (a speed) corresponds to an angular velocity (i.e. a frequency), but the number of laps (a distance) corresponds to a phase (and the conversion factor is the distance around the track loop).
During most of the race, each car is on its own and the driver of the car is trying to beat the driver of every other car on the course, and the phase of each car varies freely.
However, if there is an accident, a pace car comes out to set a safe speed. None of the race cars are permitted to pass the pace car (or the race cars in front of them), but each of the race cars wants to stay as close to the pace car as it can. While it is on the track, the pace car is a reference, and the race cars become phase-locked loops. Each driver will measure the phase difference (a distance in laps) between him and the pace car. If the driver is far away, he will increase his engine speed to close the gap. If he's too close to the pace car, he will slow down. The result is all the race cars lock on to the phase of the pace car. The cars travel around the track in a tight group that is a small fraction of a lap.
Clock analogy
Phase can be proportional to time,[1] so a phase difference can be a time difference. Clocks are, with varying degrees of accuracy, phase-locked (time-locked) to a master clock.
Left on its own, each clock will mark time at slightly different rates. A wall clock, for example, might be fast by a few seconds per hour compared to the reference clock at NIST. Over time, that time difference would become substantial.
To keep his clock in sync, each week the owner compares the time on his wall clock to a more accurate clock (a phase comparison), and he resets his clock. Left alone, the wall clock will continue to diverge from the reference clock at the same few seconds per hour rate.
Some clocks have a timing adjustment (a fast-slow control). When the owner compared his wall clock's time to the reference time, he noticed that his clock was too fast. Consequently, he could turn the timing adjust a small amount to make the clock run a little slower. If things work out right, his clock will be more accurate. Over a series of weekly adjustments, the wall clock's notion of a second would agree with the reference time (within the wall clock's stability).
History
An early electromechanical version of a phase-locked loop was used in 1921 in the Shortt-Synchronome clock.
Spontaneous synchronization of weakly coupled pendulum clocks was noted by the Dutch physicist Christiaan Huygens as early as 1673.[2] Around the turn of the 19th century, Lord Rayleigh observed synchronization of weakly coupled organ pipes and tuning forks.[3] In 1919, W. H. Eccles and J. H. Vincent found that two electronic oscillators that had been tuned to oscillate at slightly different frequencies but that were coupled to a resonant circuit would soon oscillate at the same frequency.[4] Automatic synchronization of electronic oscillators was described in 1923 by Edward Victor Appleton.[5]
Earliest research towards what became known as the phase-locked loop goes back to 1932, when British researchers developed an alternative to Edwin Armstrong's superheterodyne receiver, the Homodyne or direct-conversion receiver. In the homodyne or synchrodyne system, a local oscillator was tuned to the desired input frequency and multiplied with the input signal. The resulting output signal included the original modulation information. The intent was to develop an alternative receiver circuit that required fewer tuned circuits than the superheterodyne receiver. Since the local oscillator would rapidly drift in frequency, an automatic correction signal was applied to the oscillator, maintaining it in the same phase and frequency as the desired signal. The technique was described in 1932, in a paper by Henri de Bellescize, in the French journal L'Onde Électrique.[6][7][8]
In analog television receivers since at least the late 1930s, phase-locked-loop horizontal and vertical sweep circuits are locked to synchronization pulses in the broadcast signal.[9]
When Signetics introduced a line of monolithic integrated circuits such as the NE565 that were complete phase-locked loop systems on a chip in 1969,[10] applications for the technique multiplied. A few years later RCA introduced the "CD4046" CMOS Micropower Phase-Locked Loop, which became a popular integrated circuit.
Structure and function
Phase-locked loop mechanisms may be implemented as either analog or digital circuits. Both implementations use the same basic structure. Both analog and digital PLL circuits include four basic elements:
Phase detector,
Low-pass filter,
Variable-frequency oscillator, and
feedback path (which may include a frequency divider).
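To see how those four elements interact, the sketch below simulates a very small software PLL: a multiplier as the phase detector, a one-pole low-pass loop filter, and a numerically controlled oscillator whose frequency is nudged by the filtered error until it tracks the input. The sample rate, gain and filter constant are arbitrary illustration values, not recommendations.

    import math

    fs = 100_000.0        # sample rate of the simulation, in Hz
    f_in = 1_000.0        # frequency of the input signal the loop should lock onto
    f0 = 970.0            # free-running frequency of the oscillator
    kp = 800.0            # how strongly the filtered error retunes the oscillator (Hz per unit)
    alpha = 0.002         # smoothing factor of the one-pole low-pass loop filter

    phase_in = phase_vco = 0.0
    error_filtered = 0.0
    tail = []

    for n in range(200_000):                                  # two seconds of simulated time
        ref = math.sin(phase_in)                               # input signal
        vco = math.cos(phase_vco)                              # variable-frequency oscillator output
        error = ref * vco                                      # phase detector: a simple multiplier
        error_filtered += alpha * (error - error_filtered)     # low-pass loop filter
        f_vco = f0 + kp * error_filtered                       # feedback: retune the oscillator
        phase_in += 2 * math.pi * f_in / fs
        phase_vco += 2 * math.pi * f_vco / fs
        if n >= 180_000:
            tail.append(f_vco)                                 # record the settled behaviour

    print("input %.0f Hz, oscillator settles near %.0f Hz" % (f_in, sum(tail) / len(tail)))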
Variations
There are several variations of PLLs. Some terms that are used are analog phase-locked loop (APLL) also referred to as a linear phase-locked loop (LPLL), digital phase-locked loop (DPLL), all digital phase-locked loop (ADPLL), and software phase-locked loop (SPLL).[11]
Analog or linear PLL (APLL)
Phase detector is an analog multiplier. Loop filter is active or passive. Uses a Voltage-controlled oscillator (VCO).
Digital PLL (DPLL)
An analog PLL with a digital phase detector (such as XOR, edge-trigger JK, phase frequency detector). May have digital divider in the loop.
All digital PLL (ADPLL)
Phase detector, filter and oscillator are digital. Uses a numerically controlled oscillator (NCO).
Software PLL (SPLL)
Functional blocks are implemented by software rather than specialized hardware.
Neuronal PLL (NPLL)
Phase detector, filter and oscillator are neurons or small neuronal pools. Uses a rate controlled oscillator (RCO). Used for tracking and decoding low frequency modulations (< 1 kHz), such as those occurring during mammalian-like active sensing.
Performance parameters
Type and order
Lock range: The frequency range the PLL is able to stay locked. Mainly defined by the VCO range.
Capture range: The frequency range the PLL is able to lock-in, starting from unlocked condition. This range is usually smaller than the lock range and will depend, for example, on phase detector.
Loop bandwidth: Defining the speed of the control loop.
Transient response: Like overshoot and settling time to a certain accuracy (like 50ppm).
Steady-state errors: Like remaining phase or timing error
Output spectrum purity: Like sidebands generated from a certain VCO tuning voltage ripple.
Phase-noise: Defined by noise energy in a certain frequency band (like 10 kHz offset from carrier). Highly dependent on VCO phase-noise, PLL bandwidth, etc.
General parameters: Such as power consumption, supply voltage range, output amplitude, etc.
Applications
Phase-locked loops are widely used for synchronization purposes; in space communications for coherent demodulation and threshold extension, bit synchronization, and symbol synchronization. Phase-locked loops can also be used to demodulate frequency-modulated signals. In radio transmitters, a PLL is used to synthesize new frequencies which are a multiple of a reference frequency, with the same stability as the reference frequency.
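The frequency-synthesis case is easy to quantify: with an integer divider (divide-by-N) placed in the feedback path, the loop settles where f_ref equals f_out divided by N, so f_out = N × f_ref. The reference and divider values below are illustrative, not tied to any particular radio design.

    f_ref = 25_000.0                   # reference frequency in Hz, e.g. derived from a crystal
    for n in (3_480, 3_481, 3_482):    # a few example feedback divider settings
        f_out = n * f_ref              # the locked loop forces f_out / n to equal f_ref
        print("divide-by-%d -> %.3f MHz" % (n, f_out / 1e6))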
Other applications include:
Demodulation of both FM and AM signals
Recovery of small signals that otherwise would be lost in noise (lock-in amplifier to track the reference frequency)
Recovery of clock timing information from a data stream such as from a disk drive
Clock multipliers in microprocessors that allow internal processor elements to run faster than external connections, while maintaining precise timing relationships
DTMF decoders, modems, and other tone decoders, for remote control and telecommunications
DSP of video signals; Phase-locked loops are also used to synchronize phase and frequency to the input analog video signal so it can be sampled and digitally processed
Atomic force microscopy in tapping mode, to detect changes of the cantilever resonance frequency due to tip–surface interactions
DC motor drive
Clock recovery
Some data streams, especially high-speed serial data streams, are sent without an accompanying clock. The receiver generates a clock from an approximate frequency reference, and then phase-aligns the clock to the transitions in the data stream with a PLL; this process is referred to as clock recovery.
2014-35/1132/en_head.json.gz/9471 | DBeaver
DBeaver is a universal database manager and SQL Client. It supports MySQL, PostgreSQL, Oracle, DB2, MSSQL, Sybase, Mimer, HSQLDB, SQLite, Derby, and any database that has a JDBC driver. It is a GUI program that allows you to view the structure of a database, execute SQL queries and scripts, browse and export table data, handle BLOB/CLOB values, modify database meta objects, etc. It has a native UI (provided by the Eclipse SWT library), great performance, and relatively low memory consumption.
GPLDatabaseSQLMySQLpostgresql
Minetest Classic
Minetest-Classic is a fork of Minetest-C55, an infinite-world block sandbox game and a game engine, inspired by InfiniMiner, Minecraft and the like. It aims to improve speed, fix bugs, and add features and functionality. The game includes over 400 blocks, craft items, and tools, in both functional and decorative types. Minetest-Classic has a focus on immersive gameplay where in-world mechanisms are preferred over special commands, such as using incinerators instead of a /pulverise command, or craft guides being implemented as part of the in-game book system, rather than as a special menu item. In addition to single player mode, online multi-player is also available.
GPLv3GameOpenGL3Dsandbox
OpenSimulator
OpenSimulator is a multi-platform, multi-user 3D distributed virtual environment platform. Out of the box, it can be used to simulate virtual environments similar to that of Second Life. These can be used as social virtual worlds or for specific applications such as education, training, and visualization. Access is via the regular Second Life open-source viewer or via third-party clients. There are a number of private and public deployments of OpenSimulator, including OSgrid, which has over 8000 regions hosted by independent individuals and organizations spread over the Internet.
BSD Revisedvirtual worldVirtual environmentimmersive environmentSecond Life
Ostinato
Ostinato is a network packet and traffic generator and analyzer with a friendly GUI. It aims to be "Wireshark in Reverse" and thus become complementary to Wireshark. It features custom packet crafting with editing of any field for several protocols: Ethernet, 802.3, LLC SNAP, VLAN (with Q-in-Q), ARP, IPv4, IPv6, IP-in-IP a.k.a IP Tunneling, TCP, UDP, ICMPv4, ICMPv6, IGMP, MLD, HTTP, SIP, RTSP, NNTP, etc. It is useful for both functional and performance testing.
GPLv3Network AnalysisPacket CapturingTraffic GeneratorPacket Generation
webon is a Web content management system. It provides an access log to check who has visited your site. It has a counter that lets anybody know the number of people who have visited your site.
Magento Anybooking Script
A Magento script for online bookings.
2014-35/1132/en_head.json.gz/9751 | Everything You Needed to Know About the Internet in May 1994
A snapshot of a revolution, just before it really took off. By Harry McCracken (@harrymccracken), Sept. 29, 2013
Back in 1994, the Internet was the next big thing in technology — hot enough that TIME did a cover story on it, but so unfamiliar that we had to begin by explaining what it was (“the world’s largest computer network and the nearest thing to a working prototype of the information superhighway”).
And in May of that year, computer-book publisher Ziff-Davis Press released Mark Butler’s How to Use the Internet. I don’t remember whether I saw the tome at the time, but I picked up a copy for a buck at a flea market this weekend and have been transfixed by it.
Among the things the book covers:
E-mail: “Never forget that electronic mail is like a postcard. Many people can read it easily without your ever knowing it. In other words, do not say anything in an e-mail message which you would not say in public.”
Finding people to communicate with: “… telephone a good friend who has electronic mail and exchange e-mail addresses with him or her.”
Using UNIX: “UNIX was developed before the use of Windows or pointing and clicking with a mouse … although there are lots of commands that you can use in UNIX, you actually need to know only a few to be able to arrange your storage space and use the Internet.”
Word processing: “Initially, you may make mistakes because you think you are in Command mode when you’re really in Insert mode, or vice versa.”
Joining mailing lists: “Although it is polite to say ‘please’ and ‘thank you’ to a human, do not include these words in the messages you send to a listserv. They may confuse the machine.”
Newsgroups: “Remember, a news reader is a program that enables you to read your news.”
Online etiquette: “Flaming is generally frowned upon because it generates lots of articles that very few people want to read and wastes Usenet resources.”
“Surfing” the Internet: “Surfing the Internet is a lot like channel surfing on your cable television. You have no idea what is on or even what you want to watch.”
Searching the Internet: “If a particular search yields a null result set, check carefully for typing errors in your search text. The computer will not correct your spelling, and transposed letters can be difficult to spot.”
Hey, wait a minute — does How to Use the Internet cover Tim Berners-Lee’s invention, the Web, which had been around for almost three years by the time it was published? Yup, it does, but the 146-page book doesn’t get around to the World Wide Web — which it never simply calls “the Web” — until page 118, and then devotes only four pages to it, positioning it as an alternative to a then popular service called Gopher:
What Is the World Wide Web?
Menus are not the only way to browse the Internet. The World Wide Web offers a competing approach. The World Wide Web doesn't require you to learn a lot of commands. You simply read the text provided and select the items you wish to jump to for viewing. You can follow many different "trails" of information in this way, much as you might skip from one word to the next while browsing through a thesaurus. The ease of use makes the World Wide Web a favorite means of window-shopping for neat resources on the Internet.
Version 1.0 of the first real graphical browser, Marc Andreessen and Eric Bina’s Mosaic, appeared in November 1993. How to Use the Internet mentions it only in passing, describing it as “a multimedia program based on the World Wide Web; it allows you to hear sounds and see pictures in addition to text.” It devotes far more space to Lynx, a text-only browser that you navigated from the keyboard rather than with a mouse. By the time I first tried the Web in October 1994 or thereabouts, Mosaic was a phenomenon and Lynx was already archaic.
Still, by the standards of early 1994, when the book was published, the text-centric Web was already a hit. As it warns:
More and more people are using the Internet, and WWW is a very popular service. For this reason, you may have to wait a long time to receive a document, or, in some cases, you may not even be able to make a connection.
The book’s original owner, whoever he or she may have been, was keenly interested in this whole Internet thing. When I opened it, I found a clipping on universities and other local institutions that offered lessons in going online. And this Post-it note, which reminds me of the instructions on using Windows 95 that I found in a different old computer book that I bought last year, was affixed to the inside front cover:
In the spring of 1994, How to Use the Internet was probably pretty successful at helping people figure out a newfangled and arcane means of communications. Things progressed so rapidly that it was soon obsolete. But in 2013, it’s useful once again as a reminder of how much the Web has changed the world, and how recently it came to be.
DanB21
I actually tech reviewed this beauty way back when, as a wee lad. adding to ehurtley's note, I remember the author (great/smart feller) demo'ing Mosaic for me at his house while the book was being written. He totally understood the big wow of it, and what it was going to change. It just wasn't practical to recommend it yet for consumers because it was too slow on whatever ridiculous baud we had for home modems back then...
RichaGupta
something about internet, we don't know..http://t.co/AlGUsHsv6T
markburgess
For an even further step back, check out Harley Hahn's "The internet Complete Reference". The book copyright date inside is 1994, but my copy is signed by Harley on October 23, 1993...there are 15 pages on the Web and he references a list of browsers at CERN...where the Web part of the Internet was born in April 1993.
cronocloudauron
Lots of books out there like this in the mid 90's. As was said, they had to cover shell accounts because PPP wasn't common until later. Depending on where you were, you might not have had a local access number till the late 90's. AOL didn't have a local number until after the local cable company began offering broadband.
AskMisterBunny
James Gleick and some people ran a service called Pipeline in NYC in the early 90s that used some weird SLIP emulation called PinkSlip. It was very buggy but the only game in midtown if you wanted the net at home.
ehurtley
Why didn't they cover Mosaic? Because in May 1994, dial-up PPP or SLIP was still *VERY* uncommon. It was far more common to have dial-up access to a UNIX shell account (which is why UNIX shell access is covered on a book about the Internet.)It wasn't until late 1994 that Portland, OR, a fairly tech-savvy city, got its first commercial dial-up SLIP/PPP provider. (How do I know? Because a friend and I are the ones who convinced a dial-up UNIX shell access company to offer SLIP.)Yes, many people had direct connections before then, either through work or college. But this book wasn't aimed at them - it was aimed at home users.
I wrote a book for Random House in 1996 called "The Book Lover's Guide to the Internet." I spent the first half of the book explaining how the net worked and how to access it through AOL, CompuServe, Genie, Prodigy, et al. I think I still have a press account on AOL, for what that's worth. Somewhere I even have a pc with Mosaic on it. I did an author appearance at a B&N in NYC in '97 that was covered by C-SPAN. First question from the audience was "Isn't it true that the government is watching everything you do online?" I think I answered, "Yeah, probably."
mysidia
re: Email -- "Many people can read it easily without your ever knowing it. In other words, do not say anything in an e-mail message which you would not say in public."I would argue that today's average internet user still doesn't manage to understand, even that (the nature of e-mail being sent over the network in cleartext, and the content being potentially accessible to many prying eyes, unless specially encrypted). | 计算机 |