We're accepting applications for summer internships (2006) at Fog Creek. The deadline is February 2006.
Microsoft Jet
I'm glad Microsoft is upgrading the Jet engine once again. Even though they tried to tell application developers that we needed to move to MSDE/Sql Server Express, we know that our customers want their data in a single file and don't want to have to install and administer SQL Server just to get to that file.
In the minus column, Erik Rucker seems to be saying that the new Jet engine won't be redistributable. “Developers can still program against the Access engine, but since it isn’t part of the system any more, application users will need Access on their machines.” Great. That makes it nearly useless to typical ISVs developing Windows apps, so I guess we're stuck with the creaky old Jet 4.0.
I haven't heard the whole story yet, but Erik also says that the main big new Jet feature is this clever way of doing many-to-many relations between tables that was done solely to benefit SharePoint. That's nice, hon', but it doesn't seem like such a big deal. I'm hoping Erik has a Jobsian "just one more thing" up his sleeve.
What I'd much rather see is real, authentic, fast full-text search. Access (Jet) 4.0 just doesn't have full-text search at all, and it will be pretty darn disappointing if Access 12 doesn't have a good, instant full-text search feature that's native and built in to the engine at the deepest levels.
When we work on FogBugz, whether with Access (Jet), SQL Server, or MySQL as the backend, we spend way too much time trying to get full-text search to work even moderately well. SQL Server 2000, even though it technically has a full-text search feature, actually has a very-badly grafted-on full-text engine that is poorly integrated, slow, unreliable, and assumes that programmers have nothing better to do than think about when the indexes are built and where they are stored. In production, the full text engine grafted onto SQL Server 2000 falls down all the time. In particular, if you use a lot of databases and frequently detach and re-attach them (what we were told we have to do in order to use SQL Server as a "file oriented" database like Jet), the full text search indexes get all screwed up. They rely on insanely complex chunks of registry data to associate indices, stored in files, with databases. It is almost monumentally difficult to backup and restore full text indexes.
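To make that concrete, here's roughly what the two approaches look like in T-SQL (a sketch with made-up table and column names, not the actual FogBugz schema):

-- A LIKE search needs nothing beyond the table itself, but it pattern-matches
-- every row, so it gets slower as the data grows:
SELECT BugID, Title
FROM   Bugs
WHERE  Title LIKE '%timeout%'

-- The full-text CONTAINS predicate is much faster, but in SQL Server 2000 it
-- only works after a full-text catalog and index have been created and
-- populated through the separate full-text administration tools, and that
-- catalog lives in its own files outside the database, which is exactly the
-- administrative baggage described above:
SELECT BugID, Title
FROM   Bugs
WHERE  CONTAINS(Title, 'timeout')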
The fact that it's 2005 and I can't buy a relational database from Microsoft that has full text search integrated natively and completely, and that works just as well as "LIKE" clauses, is really kind of depressing. A very senior Microsoft developer who moved to Google told me that Google works and thinks at a higher level of abstraction than Microsoft. "Google uses Bayesian filtering the way Microsoft uses the if statement," he said. That's true. Google also uses full-text-search-of-the-entire-Internet the way Microsoft uses little tables that list what error IDs correspond to which help text. Look at how Google does spell checking: it's not based on dictionaries; it's based on word usage statistics of the entire Internet, which is why Google knows how to correct my name, misspelled, and Microsoft Word doesn't.
If Microsoft doesn't shed this habit of "thinking in if statements" they're only going to fall further behind.
Choco Bay
your social media networking resource
Welcome to Choco Bay, your new one-stop resource for all things social media networking related. If you’re looking to advance your company’s social standing and reputation, our website will guide your way to that goal.
Discover the Demographics of Social Media
Does anyone from your target audience use Instagram? Do your customers in rural areas spend as much time on social media as your city-dwelling customers? These are the types of questions all companies need to ask before they invest time and money in a social media campaign. If you don’t know where your customers spend time online, you can’t be sure you’ll reach them.
An infographic from DocStoc and the Pew Research Center illustrates which demographics spend time on social media. For example, 71 percent of women use social media as compared with 62 percent of men. More city dwellers spend time on social media (70 percent) than those who live in rural areas (61 percent). You should click here for background information on this.
The graphic also features some network-specific insights:
Pinterest appeals most to rural residents, women and those with middle- to high-level incomes.
Instagram appeals most to urban residents and 18-29 year olds.
Facebook is the most popular social media site among adults, followed by LinkedIn.
Do you know where your audience spends time online? The Zena 21 website has much more to say on this matter.
Check out the full graphic for more on the demographics of social media users.
Social Media Demographics
Where the Future of Social Media Marketing Lies
There is no better window into the fast-changing world of social media marketing than Facebook’s preferred marketing developer program. It has only been in existence for 18 months, and already there are over 260 such partners operating worldwide, helping brands plug into Facebook’s ad platforms and parse performance. Within the program, there’s an even more elite group of fourteen “strategic preferred marketing developers,” or SPMDs. Since they have privileged access, and often help Facebook develop ad products, these so-called SPMDs are arguably the best sources to turn to when trying to predict the future of social media as a high-tech marketing platform. You can read more about this at Z Plasma.
In a recent report, BI Intelligence interviewed executives at four leading SPMDs, who pointed to the key factors driving social media marketing’s future, like the changing relationship between paid, owned and earned media. They saw consolidation in their area — the tech side of things, as opposed to the creative side — as inevitable, and believed only the marketing developers with the best technology would win out. As more brand dollars flow into social media, some firms will be able to build scale and others will lose the race and fall by the wayside. Go ahead and click here for additional details.
With over 260 PMDs all vying for the same pool of ad dollars, it is unlikely that they all will be able to remain in business. Our sources see industry consolidation via bankruptcies, mergers, and acquisitions. The key to this game is the technology. It's not about a flashy name and a reputation for social media knowledge. The best social media marketing specialists will have a great tech stack at their foundation. PMDs should see a greater share of revenue come from software and technology licensing, or software-as-a-service. Already, one prominent PMD has folded after failing to reach sustainability. Syncapse was overly dependent on a single client, BlackBerry. And it had not achieved any significant revenue figures for its software package. The lessons for social media marketing specialists? Diversify your client base, and build your company on a foundation of great technology, not fee-skimming. You'll find more info on this online.
Other social media networks like Twitter and Pinterest will build out schemes similar to Facebook's PMD ecosystem, and push agencies and brands to connect with their ad solutions via these partners.
Here are some of the other insights gleaned from our conversations with Facebook's strategic marketing partners:
Influencing Facebook Product Development: SPMDs have influence at Facebook and have pushed Facebook to make many needed changes such as streamlining its paid media ad product line.
Pre-Testing Paid Media: Other elite Facebook marketing partners like Brand Networks and Adaptly understand that owned and earned media isn’t just valuable in and of itself. It’s also valuable as a source of analytics and data that will hint at what types of content will work as paid media. One airline brand using this technique saw total reach more than double to 63% of its targeted fans.
Moving beyond last-click attribution: One Adobe client, a hospitality and entertainment group, realized their apps were driving sales through other online and offline channels. They only realized this once they stopped obsessing on the last click before a sale, and tracked customers across channels. Visit www.social-media-brand-value.com for more on this topic.
Measuring Quality Of Engagement: SPMDs understand that the best metrics don't just measure quantity, but quality too. SPMDs have the best technology and interfaces for sifting through data.
Understanding Facebook Activity In Emerging Markets: SPMDs and PMDs more broadly can be marketers' field experts, sensitizing them to seasonal, cultural, and local economic factors.
Creating the Most Effective Social Networking Campaign
As a marketer, your every encounter with a smartphone-toting consumer should be considered an opportunity: Every mom who searches for product info in a grocery store, every kid who checks in at a concert venue, and every pet-lover who tweets a photo of his dog is a potential buyer to be reached via mobile.
They want to buy. You want to sell. But how do you make the connection happen?
Best-practices for mobile campaigns may be unclear to many, but the mobile opportunity is all around us. Here are five can’t-miss steps to highly effective social-mobile campaigns. You can click here to get more information on this topic.
Engage consumers to turn those likes into action
Get your prospects’ attention and get them to scan a QR code on your point-of-sale displays or visit your URL after seeing a banner ad. Once you have them on your page, ask them to do more than “Like” your brand on Facebook or follow you on Twitter. Social media is about more than just collecting people around your brand. It’s about creating long-term relationships with them through compelling content and engaging campaigns. The Research and Development site has more to say on this.
Turn those likes into love by engaging them around their passion for fashion, sports, music, causes, or even your own products. Offer them rewards to post user-generated content, leave a comment, or share something of yours. Such interactions help you enter the stream of social conversation and share your branded content among your consumers’ friends. When you win the social endorsement game, you win social media.
Skip the app store, and reach them through the Web browser
Don’t make your buyer download and install an app. Instead, deliver a rich user experience on a mobile-optimized site available through the smartphone’s Web browser (for example, Safari on an iPhone or Chrome on a Droid device). You should click here if you’d like to follow up on this.
In the early days of mobile campaigns, a lot of marketers focused on developing native apps that had to be downloaded from the app store. That’s OK if you’re developing a high-utility lifestyle app, but not if you’re running a marketing campaign.
Instead, deliver your message straight from the Web to remove a barrier to entry for your consumer. Plus, the Web gives you the flexibility to tweak messaging, update design, repurpose content, and test calls to action on the fly, without pushing an app update through the app store.
Don’t forget the desktop
Choose a campaign technology that enables you to simultaneously launch campaigns across mobile, social (Facebook), and the Web so that your campaigns have the consistent reach they need to succeed.
It no longer makes sense to run marketing campaigns where the mobile platform is not an access point. Conversely, mobile-only campaigns leave opportunities on the table with desktop prospects. To read more about this, click here. You can’t get the most out of your campaign investment without integrating your social properties and other Web platforms and assets.
And make sure user data and content synchronizes across all three so that a Like from a mobile device is represented on the full Facebook Web asset.
Update your CRM and start selling
All the interest and attribute data you collected during your mobile campaign is a goldmine of business intelligence. The next step is to put the data to work for you. You need a plan for following up with relevant content and offers that convert those known leads into buyers. You’ll find more info about this online.
This step requires discipline. The user data you collected during your campaign isn’t worth much if it’s never brought to your customer relationship management (CRM) system. A clean, data-rich CRM lets you segment your audience and deliver the right messages at the right times—driving new revenue from prospects and additional revenue from existing customers.
Capture interest and attribute data
Left-brain marketers will love this step: With all the new data analysis and marketing automation tools in our arsenals, we need to know more than names, email addresses, and telephone numbers; it’s important to understand our customers’ interests and attributes, too.
At a minimum, interest and attribute data refers to the information your users post publicly to their social profiles. However, it’s possible to go deeper than what’s publicly available. The right mobile technology enables you to paint a more detailed picture of your customers by analyzing their interactions with your campaign.
So, whenever you run mobile campaigns, make sure to capture user interests and attributes. Remember, this is social media, so the data is out there. It’s up to you to gather their insights to craft better messaging, repurpose content, and generate offers that lead to a purchase and create strong, enduring relationships. Check out www.video-dtp.com for more.
When you understand interests and attributes, you understand what matters most to your customers and prospects.
[Source: Andy Lombard]
Microsoft Office Undergoing a Make Over
Microsoft Corp unveiled a new version of its Office suite aimed at traditional PC users as well as the fast-growing tablet market in a major overhaul of the aging workplace software.
The revamped Office makes use of cloud computing and is compatible with touch screens widely used in tablets. It comes as Apple Inc and Google Inc make inroads into the workplace, long Microsoft’s stronghold. Office is Microsoft’s single-biggest profit driver. “The Office that we’ll talk about and show you today is the first round of Office that’s designed from the get-go to be a service,” Microsoft Chief Executive Steve Ballmer told reporters. “This is the most ambitious release of Microsoft Office that we’ve ever done.” Microsoft has a lot riding on the 15th version of Office. Windows is one of the world’s biggest computing platforms, and the Office applications – Word, Excel, PowerPoint, and other tools – are used by more than 1 billion people around the world. If interested in this topic you should click here.
Microsoft last updated Office in 2010, when it incorporated online versions for the first time. The full version of Office 15 is expected to be available in early 2013. Cloud computing refers to a growing trend toward providing software, storage and other services from remote data centers over the Web instead of relying on software or data installed on individual PCs. “Your modern Office thinks cloud first. That’s what it means to have Office as a service,” Ballmer said, adding that a preview version of the software is now available online. Microsoft has also recently announced that subscribers to Office 365 will be able to use a new Office Mobile app on their Android phones. The new app is available immediately in the Google Play store. And it comes just a few weeks after Microsoft released a similar app for iPhone users who subscribe to Office 365. The Android app is free to download. But an Office 365 subscription costs $100 a year. For that fee, users can install Office apps on up to five devices such as PCs, Macs, and smartphones. So far, Microsoft hasn’t created an Office app for tablets. If you’d like to read more then visit the Niners website.
Microsoft Office users with iPhones rejoice: Word, PowerPoint and Excel are now available on your smartphone. The Redmond, Wash., software giant released Office Mobile for the Apple smartphone Friday, finally bringing heavy Office users a way to edit their files on the go from their device.
The app can be downloaded from the App Store and allows users to edit recent files they have been working on that are saved in their SkyDrive. “After signing in to an Office 365 account, you can access, view and edit Microsoft Word, Excel and PowerPoint documents from anywhere,” Microsoft said in a statement. The app is free to download but not everyone will be able to use it. You’ll need a subscription for Office 365 in order to use the app’s document-editing features. Those subscriptions vary in price depending on whether you’re a student, professional or a large company, but the Office 365 for Home subscription costs $100 annually. You should click here to continue reading about this topic.
Office Mobile for iPhone will work with the iPhone 4, 4S and 5 and with the fifth generation of the iPod Touch. No word on when there will be an Office Mobile app for Android.
WordPress.org © 2012-2013 Choco Bay
Forums > Submission Feedback > pickhut's Ghost House review
This thread is in response to a review for Ghost House on the Sega Master System. You are encouraged to view the review in a new window before reading this thread.
Author: mrmiyamoto
Posted: October 31, 2013 (05:24 AM)
Don't you just love how games you adored as a kid are so different as an adult? In the first paragraph, there's nothing after "ever-loving", unless you intended to do that for dramatic effect.
So do you actually own the master system, or are you playing these on ROM? Good review that emphasizes the difficulty in an entertaining way. I also enjoyed the imagery you conjured up in describing the house.
"Nowadays, people know the price of everything and the value of nothing"
*Oscar Wilde*
Author: pickhut
The ever-loving was intentional. I was actually going to add a McMuffin after it, but changed my mind.
I currently don't own a Master System, though, just a small collection of games. I've always wanted to rebuy one, for the fact that it was a memorable part of my younger childhood days, but just never got around to it. I'm more a fan of playing games on their original intended system/platform, for the authentic feel, but if I know a game well enough, I'll do the latter, purchase a download version, or play it off a compilation for a refresher.
I head spaceshit noises
Author: ThoughtFool1
I suppose that this is one game I'll skip on my quest to discover the SMS.
Nice play / juxtaposition on the "in no time" phrase by the way.
Posted: October 31, 2013 (07:39 PM)
Ah, the SMS is certainly more of a nostalgia trip than anything else, at least for me. I was more than surprised at how bad some games actually were when I tried replaying them a decade after they originally came out. But the system definitely has its gems, and maybe you'll have a more enlightening experience than me. You've certainly livened up the SMS section with your reviews, at least.
Posted: November 01, 2013 (12:34 PM)
Why thank you :)
I'm actually semi-surprised with the SMS so far...I mean I'm trying to pick and choose the higher rated games, but from what I can tell the SMS is an unappreciated system.
Author: JedPress
How is the music in this game? I find a lot of SMS music quite charming. For example, I prefer the music in the SMS version of Fantasy Zone to that of the arcade version.
It's not bad, and there's only two themes. They have a spooky catchiness to them, but due to the difficulty, the dramatic one that plays for the Dracula fights gets annoying.
Though, if I were to compare it to Fantasy Zone's soundtrack, it definitely pales.
None of the material contained within this site may be reproduced in any conceivable fashion without permission from the author(s) of said material. This site is not sponsored or endorsed by Nintendo, Sega, Sony, Microsoft, or any other such party. Ghost House is a registered trademark of its copyright holder. This site makes no claim to Ghost House, its characters, screenshots, artwork, music, or any intellectual property contained within. Opinions expressed on this site do not necessarily represent the opinion of site staff or sponsors.
Old article from The Economist
Forum: Linux and Free Software
Topic: Old article from The Economist
started by: bokaroseani
Posted by bokaroseani on Sep. 10 2006,19:35

Open-source business
Open, but not as usual
Mar 16th 2006
From The Economist print edition

As "open-source" models move beyond software into other businesses, their limitations are becoming apparent

EVERY time internet users search on Google, shop at Amazon or trade on eBay, they rely on open-source software - products that are often built by volunteers and cost nothing to use. More than two-thirds of websites are hosted using Apache, an open-source product that trounces commercial rivals. Wikipedia, an online encyclopedia with around 2.6m entries in more than 120 languages, gets more visitors each day than the New York Times's site, yet is created entirely by the public. There is even an open-source initiative to develop drugs to treat diseases in poor countries.

The "open-source" process of creating things is quickly becoming a threat - and an opportunity - to businesses of all kinds. Though the term at first described a model of software development (where the underlying programming code is open to inspection, modification and redistribution), the approach has moved far beyond its origins. From legal research to biotechnology, open-business practices have emerged as a mainstream way for collaboration to happen online. New business models are being built around commercialising open-source wares, by bundling them in other products or services. Though these might not contain any software "source code", the "open-source" label can now apply more broadly to all sorts of endeavour that amalgamate the contributions of private individuals to create something that, in effect, becomes freely available to all.

However, it is unclear how innovative and sustainable open source can ultimately be. The open-source method has vulnerabilities that must be overcome if it is to live up to its promise. For example, it lacks ways of ensuring quality and it is still working out better ways to handle intellectual property.

But the biggest worry is that the great benefit of the open-source approach is also its great undoing. Its advantage is that anyone can contribute; the drawback is that sometimes just about anyone does. This leaves projects open to abuse, either by well-meaning dilettantes or intentional disrupters. Constant self-policing is required to ensure its quality.

This lesson was brought home to Wikipedia last December, after a former American newspaper editor lambasted it for an entry about himself that had been written by a prankster. His denunciations spoke for many, who question how something built by the wisdom of crowds can become anything other than mob rule.

The need to formalise open-source practices is at a critical juncture, for reasons far beyond Wikipedia's reputation. Last year a lengthy process began to update the General Public Licence - the legal document which makes available "free software", such as Linux, an operating system that poses a challenge to Microsoft's dominance. The revision will enable the licence to handle issues such as patents and online services. The drafting process uses the same approach as the software production itself. It relies on an open collaboration that has hundreds of contributors around the world. "What we are actually doing is making a global institution," says Eben Moglen, a professor at Columbia Law School in New York and the legal architect behind the licence.

One reason why open source is proving so successful is because its processes are not as quirky as they may first seem. In order to succeed, open-source projects have adopted management practices similar to those of the companies they vie to outdo. The contributors are typically motivated less by altruism than by self-interest. And far from being a wide-open community, projects often contain at their heart a small close-knit group.

With software, for instance, the code is written chiefly not by volunteers, but by employees sponsored for their efforts by companies that think they will in some way benefit from the project. Additionally, while the output is free, many companies are finding ways to make tidy sums from it. In other words, open source is starting to look much less like a curiosity of digital culture and more like an enterprise, with its own risks and rewards.

Projects that fail to cope with open source's vulnerabilities usually fall by the wayside. Indeed, almost all of them meet this end. Of the roughly 130,000 open-source projects on SourceForge.net, an online hub for open-source software projects, only a few hundred are active, and fewer still will ever lead to a useful product. The most important thing holding back the open-source model, apparently, is itself.

Just browsing

To get a sense of just how powerful the open-source method can be, consider the Firefox web browser. Over the last three years it has crept up on mighty Microsoft to claim a market share of around 14% in America and 20% in parts of Europe. Firefox is really a phoenix: its code was created from the ashes of Netscape, which was acquired by AOL in 1998 when it was clear that it had lost the "browser war" to Microsoft. Today, the Mozilla Foundation manages the code and employs a dozen full-time developers.

From that core group, the open-source method lets a series of concentric circles form. First, there are around 400 contributors trusted to offer code into the source tree, usually after a two-stage review. Farther out, thousands of people submit software patches to be sized up (a useful way to establish yourself as new programming talent). An even larger ring includes the tens of thousands of people who download the full source code each week to scrutinise bits of it. Finally, more than 500,000 people use test versions of forthcoming releases (one-fifth of them take the time to report problems in bug reports).

Traditional profit-seeking firms cannot usually rely on their customers to play an active role in their product development. In fact, they often strongly resist any such interference. For decades software was "proprietary", because secret code could not be copied or used without payment. Moreover, the closed approach is seen as a way to prevent exposing possible security flaws. By contrast, open source encourages sharing, and its greater scrutiny may translate into cleaner code. As a cherished open-source adage has it: "Given enough eyeballs, all bugs are shallow."

The way open-source projects organise themselves is critical to ensuring their quality. Rather than harnessing a magical, bubbling-up of creativity from cyberspace, many open-source projects have established formal, hierarchical governance. "These are not anarchistic things when you look at successful open-source projects - there is real structure, real checks and balances, and real leadership taking place," explains Josh Lerner, a professor at Harvard Business School.

A good example is MySQL, a type of open-source database software used by companies including Google (to serve up advertisements alongside search results), Yahoo! and Travelocity. The company, founded in 1995, has a hybrid business model. It gives away its software under an open-source licence. At the same time, it sells its software along with maintenance and support contracts. The firm has around 8,000 customers who pay 1-10% of the amount they would spend on proprietary products. Yet for every paying customer, MySQL estimates around 1,000 people use the free version.

"We don't mind," says Marten Mickos, boss of MySQL since 2001, "they help us with other things." For example, making the code open encourages a group of users (who may one day become paying customers) to become familiar with it. This creates a talent pool that the firm can draw upon for future employees. Companies developing software products that work with MySQL are potential acquisitions. The community of users freely gives feedback on new features and bugs. It also writes ancillary software and documentation, all of which enhances the value of the core product.

When it comes to the software itself, the company is very much in charge. It rarely accepts code from outside developers (the complexity of database software makes it less amenable to being independently cobbled together). Instead, MySQL employs 60 developers, based in 25 countries, of whom 70% work from home. "We maintain full governance of the source code. That allows us to go to the commercial users of the product and guarantee the product," explains Mr Mickos. "You could say that this is what they pay for."

The question of accountability is a vital one, not just for quality but also for intellectual-property concerns. Patents are deadly to open source since they block new techniques from spreading freely. But more troubling is copyright: if the code comes from many authors, who really owns it? This issue took centre stage in 2003, when a company called SCO sued users of Linux, including IBM and DaimlerChrysler, saying that portions of the code infringed its copyrights. The lines of programming code upon which SCO based its claims had changed owners through acquisitions over time; at some point they were added into Linux.

To sceptics, the suit seems designed to thwart the growth of Linux by spreading unease over open source in corporate boardrooms - a perception fuelled by Microsoft's involvement with SCO. The software giant went out of its way to connect SCO with a private-equity fund that helped finance the lawsuits, and it paid the firm many millions to license the code. Fittingly, Microsoft indemnifies its customers against just this sort of intellectual-property suit - something that open-source products are only starting to do.

For the moment, users of Linux say that SCO-like worries have not affected their adoption of open-source software. But they probably would be leery if, over time, the code could not be vouched for. In response, big open-source projects such as Linux, Apache and Mozilla have implemented rigid procedures so that they can attest to the origins of the code. In other words, the openness of open source does not necessarily mean it is anonymous. Strikingly, even more monitoring of operations is required in open source than in other sorts of businesses.

Openness has been both the making of, and a curse to, Wikipedia. In January 2001 Jimmy Wales's plan for an online encyclopedia written entirely by volunteers over the net was foundering. It was called "Nupedia" and contributions were supposed to go through a rigorous editing process by experts. However, after a year only two dozen articles were in. After Mr Wales and the project's co-ordinator, Larry Sanger, heard about so-called "wiki" software - which makes it easy for people jointly to compose and edit web pages - they changed course. Wikipedia was born and opened to anyone. To welcome the masses, the first entry on its "rules to consider" page was "ignore all rules".

People did. Yet two seemingly contradictory things happened: chaos reigned, and an encyclopedia emerged. So-called "edit wars" dominated the online discussions, biases were legitimised as "another point of view" and specialists openly sneered. Many contributors were driven away by the fractious atmosphere (including Mr Sanger, who went on to pen essays predicting Wikipedia's vulnerability to abuse). Still, the power of decentralised collaboration astounded everyone. After 20 days, the site had over 600 articles; six months later, it had 6,000; by year's end, it totalled 20,000 articles in a plethora of languages (see chart 2).

As problems of vandalism, prejudice and inaccuracy ensued, Mr Wales was reluctant to clamp down. In the end, he had to. The site has set down policies to mediate debates; it has banished unco-operative contributors; it locked down entries that were frequently vandalised (such as one on George Bush) - changes come only from contributors who are designated as leaders on the strength of their work. A blunt new policy was promulgated: "Don't be a dick." And after the furore over the biographical entry last year, Wikipedia changed its rules so that only registered users can edit existing entries, and new contributors must wait a few days before they can start new ones.

At the source

Other sectors have also begun to adopt open-source approaches. Richard Jefferson, the director of CAMBIA, an Australian non-profit research organisation, manages an initiative for biotechnology that uses an open-source licence. Researchers may freely use the techniques - such as a way to place genes into plants - on condition that they openly share any improvements they devise. Other projects, such as the Tropical Disease Initiative and the Synaptic Leap, are forming along similar lines. Synaptic Leap points out that because it is not motivated by profit, it has no motive to keep secret any fruits derived from collaboration in research on, for example, malaria.

CAMBIA is spending time and money on establishing an elaborate system whereby contributions can be assessed. It would consider numerous factors, such as the experience of the researcher and the ranking of their work by the community, to identify promising techniques and the best avenues of research. "As it works now, you choose the labs you work with and basically know what you are going to get before you start because you know the people," says Dr Jefferson. "The power of distributed innovation is to be surprised, and hopefully pleasantly surprised."

Rather than a democracy, open source looks like a Darwinian meritocracy. The tools for extremely productive online collaboration exist. What is still missing are ways to "identify and deploy not just manpower, but expertise," says Beth Noveck of New York University Law School (who is applying open-source practices to scrutinising software-patent applications, with an eye to invalidating dubious ones). In other words, even though open-source is egalitarian at the contributor level it can nevertheless be elitist when it comes to accepting contributions. In this way, many open-source projects look more hierarchical than the corporate organograms the approach is supposed to have torn up.

Even if the cracks in the management of open source can be plugged by some fairly straightforward organisational controls, might it nevertheless remain only a niche activity - occupying, essentially, the space between a corporation and a commune? There are two doubts about its staying power. The first is how innovative it can remain in the long run. Indeed, open source might already have reached a self-limiting state, says Steven Weber, a political scientist at the University of California at Berkeley, and author of "The Success of Open Source" (Harvard University Press, 2004). "Linux is good at doing what other things already have done, but more cheaply - but can it do anything new? Wikipedia is an assembly of already-known knowledge," he says.

The second doubt is whether the motivation of contributors can be sustained. Companies are good at getting people to rise at dawn for a day's dreary labour. But the benefit of open-source approaches is that they can tap into a far larger pool of resources essentially at no cost. Once the early successes are established, it is not clear that the projects can maintain their momentum, says Christian Alhert, the director of Openbusiness.cc, which examines the feasibility of applying open-source practices to commercial ventures.

But there are arguments in favour of open source, too. Ronald Coase, a Nobel prize-winning economist, noted that firms will handle internally what it would otherwise cost more to do externally through the market. The open-source approach seems to turn this insight on its head and it does so thanks to the near-zero cost of shipping around data. A world in which communication is costly favours collaborators working alongside each other; in a world in which it is essentially free, they can be in separate organisations in the four corners of the earth.

Perhaps that is why open source is taking up a permanent place as a facet of modern business. As open source begins to look more corporate, corporations themselves are looking to adopt and adapt more open-source practices.

For example, Toyota has organised its teams in ways that stress the same sort of decentralisation, flexibility and autonomy that exist in the Linux community, according to Philip Evans and Bob Wolf of the Boston Consulting Group in an article in the Harvard Business Review last July. As such, conventional companies would do well to embrace the work-style, the authors note, such as sharing knowledge widely, establishing reputation systems, and creating a community in which people work for peer recognition as much as remuneration. The lesson is that companies stand to gain by giving up a degree of control over their proprietary knowledge - or rather, some of their proprietary knowledge.

Strikingly, mainstream technology companies - once the most proprietary outfits of them all - have started to cotton on to this. Sun Microsystems is making its software and even chip designs open, in a bid to save the company's business from competition from open-source alternatives. Even Microsoft has increasingly made some products open to outside review, and released certain code, such as for installing software, free of charge under licensing terms whereby it can be used provided enhancements are shared. "We have quite a few programs in Microsoft where we take software and distribute it to the community in an open-source way," gushes Bill Hilf, director of platform technology strategy at the company. Open source could enjoy no more flattering tribute than that.
Posted by WDef on Sep. 16 2006,12:23

Quote: "Constant self-policing is required to ensure its quality."

And that's not true of closed-source software? It's true of any software. And the article neglects to mention that proprietary software faces many of the same management and development problems as open source, it's just that these processes are cut off from public view.

It also partly misses the key point of open source - that it enables a huge base of scrutiny and talent on which to draw. Thus it greatly improves the safety of software (eg firefox vs IE), but the article wrongly suggests the opposite.

And it neglects to mention that much of the progress in IT and the internet would not have happened without open source. Eg Apache. Linux, which has generally outstripped progress in the commercial unices and now powers mobile phones, all kinds of domestic devices and embedded systems.

In the end, this is a confused survey, which I suppose reflects the confusion of traditional capitalist enterprise when faced with an entirely different and altogether natural way of producing software.
Microsoft Releases ASP.NET Update on Automatic Channels
Sep 30, 2010 9:16 PM EST
By Larry Seltzer
Today Microsoft released the update for a publicly-disclosed vulnerability affecting ASP.NET servers to all their standard distribution channels, including Windows Update and WSUS (Windows Server Update Services).
The update, designated MS10-070, was released two days ago to the Microsoft Download Center so that administrators could begin to test it on their applications.
The update will show up for all users with all versions of the .NET framework, so if you're on a desktop system that doesn't do web serving it's not a really high priority to apply the update. In fact, one could argue that it doesn't really matter for desktops, unless they are running development IIS web servers and those servers are available to untrusted users.
For servers it's a clearly important update, but not of the "drop everything and apply this now" type. You should test it. And if you don't do .NET applications on the server you shouldn't need it (but then you shouldn't need the .NET Framework either).
Privacy,Security Software,Top Threat,Vulnerabilities,Windows 7,Windows Vista,Windows XP,Servers
crypto,oracle,asp.net,cipher,padding
Aseem Agarwala
I am a principal scientist at Adobe Systems, Inc., and an affiliate assistant professor at the University of Washington's Computer Science & Engineering department, where I completed my Ph.D. in June 2006 after five years of study; my advisor was David Salesin. My areas of research are computer graphics, computer vision, and computational imaging. Specifically, I research computational techniques that can help us author more expressive imagery using digital cameras. I spent three summers during my Ph.D. interning at Microsoft Research, and my time at UW was supported by a Microsoft fellowship. Before UW, I worked for two years as a research scientist at the legendary but now-bankrupt Starlab, a small research company in Belgium. I completed my Masters and Bachelors at MIT majoring in computer science; while there I was a research assistant in the Computer Graphics Group, and an intern at the Mitsubishi Electric Research Laboratory (MERL). As an undergraduate I did research at the MIT Media Lab.

I also spent much of the last year building a modern house in Seattle, and documented the process in my blog, Phinney Modern.
Casting shadows: On Lost in Shadow with director Osamu Tsuchihashi
Friday, July 9, 2010

Shortly after I returned from E3, fresh with a lengthy list of games - many ominously targeted for "Holiday 2010" - that I had good feelings about, I thought it would be good to get into my favorite game from the show a little more in-depth. The man I wanted to get in touch with was clear: Osamu Tsuchihashi, director of that game - Lost in Shadow - who I'd previously chatted a bit with over Marble Saga Kororinpa.
Mr. Tsuchihashi was a delight and very gracious, providing several very interesting insights into the game and its creation process. I really enjoyed hearing his answers, and I hope you will too�and maybe understand a little more about Lost in Shadow in the process.
Thank you for taking the time to talk with us today. We last spoke about Marble Saga Kororinpa, which I enjoyed quite a bit. When I saw the first information about Lost in Shadow last year, I wondered if perhaps you and your team were behind it, and then I found out that you are! I was very pleased to hear that, and now that I have had the time to play the game, I walked away thinking it was the best I played all week at E3.Osamu Tsuchihashi, Director, Lost in Shadow:I am really honored to know that you have been following our team and our work, and for that I would like to thank you.
The fact that we were able to show our game at a world-renowned show like E3 is an accomplishment that we wouldn�t trade for anything, especially with all the other great games that were shown there. With that in mind, I am truly grateful that you enjoyed playing Lost in Shadow.
We are already familiar with the Kororinpa series, of course, but what other games have you and your team worked on in the past that we might be familiar with?Tsuchihashi:We are a development team that came together to work on the two Kororinpa games and Lost in Shadow. Before that, we were all working on different titles.
Okay, so, on to the main topic: Lost in Shadow. First, for our readers that may not be familiar with the game, can you explain the basic premise, and maybe a little of the story?Tsuchihashi:The story starts with a body�s shadow being cut off from his body. All he wants to do is to climb the tower so he can get back to it. The boy�s shadow cannot walk about in the real world. He must proceed through a path made by shadows. The maze-like path that he must follow is full of traps and enemies that he must overcome, all of which are in shadow form. The shadow of the boy will have some help from his partner, the Spangle, in his journey.
Where did the idea for the game come from?Tsuchihashi:In Japan, there is a children�s game similar to tag, but using shadows. As you may have guessed, it is called "Shadow Tag." What's different from normal tag is that, instead of touching the other person, all you have to do is to step on his or her shadow to tag the other person "it."
There are various rules depending on regional versions of the game, but the rules that I played within were as follows: If the runner hides in a shadow, the person who is "it" cannot pursue him. There�s a catch for the runner though; he can stay in the shadow only for a limited amount of time. If that time is exceeded, he is forbidden to hide in any shadows until his penalty runs out, and must seek his sanctuary in the light, where he will most likely fall into the hands of the person who is "it."
Lost in Shadow was based partly on this idea of Shadow Tag, and you will see this kind of situation in the actual game.
Having seen so many ways to use light and shadow in gameplay, I imagine that the game engine is quite unique. Did you have to create an engine specifically for this game? Was it a challenge?
Tsuchihashi: Your assumption is absolutely correct.
Most of the development period for this game was used to create the shadow engine. We had to continue to tweak and correct the engine, right up to the recent debugging. We even had to re-write some data to have it work with this engine. I can tell you that those tweaking days are the reason why I have an ulcer now...
...continues on next page
Lost in Shadow (Wii)
Legal Terms and Conditions
This page states the Terms and Conditions under which you, the Web Site visitor ("You") may use this Web Site, which is owned by American Woodmark Corporation ("AWC").
Please read this page carefully. By using this Web Site, You agree to be bound by all of the Terms and Conditions set forth below. If You do not accept these Terms and Conditions, please do not use this Web Site. AWC may, in its sole discretion, revise these Terms and Conditions at any time; therefore, You should visit this page periodically to review the Terms and Conditions.
Use of Site Material
The contents of this Web Site, such as text, graphics, images and other content (the "Site Material") are protected by copyright under both United States and foreign laws. AWC authorizes You to view and download a single copy of the Site Material for Your personal use. Unauthorized use of the Site Material violates copyright, trademark, and other laws. You agree to retain all copyright and other proprietary notices contained in the original Site Material on any copy You make of such material. You may not sell or modify our Site Material or reproduce, display, distribute, or otherwise use the Site Material in any way for any public or commercial purpose. Use of the Site Material on any other web site or in a networked environment is prohibited.
The names, marks and logos appearing in the Site Material are, unless otherwise noted, trademarks owned by or licensed to AWC or an affiliated company. The use of these marks, except as provided in these Terms and Conditions, is prohibited. From time to time, AWC makes fair use in this Web Site of trademarks owned and used by third parties. Any such marks are clearly noted, and AWC makes no claim to ownership of those marks.
AWC welcomes Your comments on our Web Site, products, and services. However, You acknowledge that if You send us or post on the Web Site creative suggestions, ideas, notes, drawings, concepts, photographs, inventions or other information, (collectively, the "Information"), You grant to AWC a perpetual, unrestricted, royalty-free license to use the Information for any purpose whatsoever, commercial or otherwise, without compensation to the provider of the Information.
As a user of this Web site, You are responsible for Your own communications and are responsible for the consequences of their posting. Therefore, do not do any of the following things: transmit to us material that is copyrighted, unless You are the copyright owner or have the permission of the copyright owner to post it; send material that reveals trade secrets, unless You own them or have the permission of the owner; send material that infringes on any other intellectual property rights of others or on the privacy or publicity rights of others; send material that is obscene, defamatory, threatening, harassing, abusive, hateful, or embarrassing to another user or any other person or entity; send sexually-explicit images; send advertisements or solicitations of business; send chain letters or pyramid schemes; or impersonate another person. By posting Information on the Web Site, You represent that You have the right to do so.
AWC reserves the right to expel users and prevent their further access to this Web Site for violating these terms or the law and reserves the right to remove any communications from this Site. The violation of any of the terms and conditions set forth on this Legal Terms and Conditions page shall result in the immediate revocation of Your license to use the Site Material and obligates You to immediately destroy any copies of the Site Material in Your possession.
In accordance with the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. 512(c)(3), Pub. L. 105-304, AWC has designated the following individual to receive notification of alleged copyright infringement on the Web Site:
Glenn Eanes
American Woodmark Corporation
3102 Shawnee Drive
Winchester, Virginia 22601
Email: [email protected]
(“Copyright Agent”)
A claim of copyright infringement must be submitted in writing via either a physical or electronic medium and contain the following:
Identification of the copyrighted work claimed to have been infringed.
Identification of the material that is claimed to be infringing or to be the subject of infringing activity and that is to be removed or access to which is to be disabled, and information reasonably sufficient to permit the service provider to locate the material.
Information sufficient to allow the service provider to contact the complaining party, such as an address, telephone number and, if available, an electronic mail address.
A statement that the complaining party has a good-faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent or the law.
Although AWC strives for accuracy in all elements of the Site Material, it may contain inaccuracies or typographical errors. Additionally, while users of this Site are bound by these terms and conditions not to submit false material, AWC cannot be responsible for the violation of these terms by users, or for the reliance by users upon false or misleading material submitted by other users. AWC makes no representations about the accuracy, reliability, completeness, or timeliness of the material on this Web Site or about the results to be obtained from using the Web Site. You use the Web Site and its material at Your own risk.
AWC disclaims all liability arising from or related to Your use of the Project Planner. You agree that by using the Project Planner, You are solely responsible for the information and material that you input or post on the Web Site, including, but not limited to measurements, mis-measurements, specs, photographs, sketches, drawings, text, images, graphics, Information, numbers, budgets, plans, and data.
AWC DOES NOT WARRANT THAT THE WEB SITE WILL OPERATE ERROR-FREE OR THAT THE WEB SITE AND ITS SERVER ARE FREE OF COMPUTER VIRUSES OR OTHER HARMFUL MATERIAL. IF YOUR USE OF THE WEB SITE OR THE SITE’S MATERIAL RESULTS IN ANY COSTS OR EXPENSES, INCLUDING, WITHOUT LIMITATION, THE NEED FOR SERVICING OR REPLACING EQUIPMENT OR DATA, AWC SHALL NOT BE RESPONSIBLE FOR THOSE COSTS OR EXPENSES.
THIS WEB SITE AND ITS MATERIAL ARE PROVIDED ON AN "AS IS" BASIS WITHOUT ANY WARRANTIES OF ANY KIND. AWC, TO THE FULLEST EXTENT PERMITTED BY LAW, DISCLAIMS ALL WARRANTIES, INCLUDING THE WARRANTY OF MERCHANTABILITY, NON-INFRINGEMENT OF THIRD PARTIES RIGHTS, AND THE WARRANTY OF FITNESS FOR PARTICULAR PURPOSE. ALTHOUGH AWC STRIVES TO PROVIDE THOROUGH AND ACCURATE MATERIALS ON ITS SITE, WE MAKE NO WARRANTIES ABOUT THE ACCURACY, RELIABILITY, COMPLETENESS, OR TIMELINESS OF THE MATERIAL, SERVICES, SOFTWARE, TEXT, GRAPHICS, AND LINKS.
Disclaimer of Consequential Damages
IN NO EVENT SHALL AWC, ITS AFFILIATES, OR ANY THIRD PARTIES MENTIONED ON THE SITE BE LIABLE FOR ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, INCIDENTAL, CONSEQUENTIAL OR PUNITIVE DAMAGES, LOST PROFITS, OR DAMAGES RESULTING FROM LOST DATA OR BUSINESS INTERRUPTION) RESULTING FROM THE USE OR INABILITY TO USE MATERIAL ON THIS WEB SITE OR SITES LINKED TO THIS WEB SITE, WHETHER BASED ON WARRANTY, CONTRACT, TORT, OR ANY OTHER LEGAL THEORY, AND WHETHER OR NOT AWC IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
This Web Site contains links to web sites owned by third parties. These links are provided solely as a convenience to You and are not an endorsement by AWC of the contents on those other sites. AWC is not responsible for the content of any linked sites and makes no representations regarding the content or accuracy of materials on such sites. If You decide to visit any third-party sites using links from this Web Site, You do so at Your own risk.
By using this Web Site, You agree to defend, indemnify, and hold harmless AWC, its officers, directors, employees and agents, from and against any and all losses, claims, damages, costs and expenses (including reasonable legal and accounting fees) that AWC may become obligated to pay arising or resulting from Your use of the Site Material or the Project Planner or Your breach of these Terms and Conditions. AWC reserves the right to assume or participate, at Your expense, in the investigation, settlement and defense of any such action or claim.
This Privacy Policy is effective as of July 1, 2011. If there are material changes to this Privacy Policy, You will be notified by email provided that Your email address of record is both valid and active at the time of the notification.
At AWC, Your privacy is important to us. AWC does not collect personally-identifying information about You (such as Your name, address, telephone number, or e-mail address) unless You voluntarily submit that information to us through one of our contact or registration pages, the Project Planner, by e-mail or other means. We treat the information in our customer database, including any information You, as a customer, submit through this Web Site, as confidential and do not sell or otherwise disclose such information to third parties unless You have elected to share Your information with third parties by opting-in to third party marketing emails or under strict contracts involving customer service or the enhancement of our customer programs.
IF YOU ARE UNDER 18 YEARS OF AGE YOU MAY NOT OPT-IN TO RECEIVE MARKETING MATERIALS.
When You visit our Web Site, we may automatically collect certain non-identifying information about You, such as the type of browser or computer operating system You use or the domain name of the Web Site from which You linked to us. In addition, we may store some information on Your computer’s hard drive as a "cookie" or similar type of file. We use this information internally, to help us enhance the efficiency and usefulness of the Web Site and to improve customer service. If You object to this, please consult Your Browser’s documentation for information on erasing or blocking cookies.
Any information we gather, whether voluntarily provided by You or automatically collected, may be used for our internal business and marketing purposes but will not be sold or otherwise disclosed to third parties for any purposes, unless You have expressly requested information about a specific vendor or service identified on our Web Site, selected the Share My Plan feature, or have opted-in to receiving third party marketing emails. If You have elected to receive third party marketing emails, but no longer wish to receive such correspondence, You may, at any time, opt-out of third party marketing emails by updating Your profile on the Web Site. Similarly, if You no longer wish to participate in the Share My Plan program, You can update Your profile on the Web Site to remove this feature. You also may, at any time, review, update or delete the information, You have provided to us. If You are concerned about the information You have provided to us, please contact us at: [email protected].
Notwithstanding any other statements or representations here or elsewhere on our Web Site, we reserve our right to disclose any information in our possession if we are required to do so by law, or we believe, in good faith, that such a disclosure is necessary to comply with the law, defend our rights or property, or to respond to an emergency situation.
AWC employs reasonable security measures to protect the personal information You submit to us, once it is received.
This Web Site originates from Chicago, Illinois. AWC does not claim that the Site Materials on this Web Site are appropriate or may be used outside of the United States. Access to the Site Materials may not be legal by certain persons or in certain countries. If You access the Web Site from outside of the United States, You do so at Your own risk and are responsible for compliance with the laws of Your jurisdiction.
These Terms and Conditions are governed by the substantive laws of the Commonwealth of Virginia, without respect to its conflict of laws principles. You agree to submit to the jurisdiction of the courts situated in the Commonwealth of Virginia with respect to any dispute, disagreement, or cause of action related to or involving this Web Site. If any provision is found to be invalid by any court having competent jurisdiction, the invalidity of such provision shall not affect the validity of the remaining provisions of these Terms and Conditions, which shall remain in full force and effect. No waiver of any of these Terms and Conditions shall be deemed a further or continuing waiver of such term or any other term. Except as expressly provided elsewhere in our Web Site, these Terms and Conditions constitute the entire agreement between You and AWC with respect to Your use of this Web Site.
I agree to these Terms and Conditions
2014-23/2620/en_head.json.gz/36743 | 24 ways to impress your friends
Christian Heilmann grew up in Germany and, after a year working for the Red Cross, spent a year as a radio producer. From 1997 onwards he worked for several agencies in Munich as a web developer. In 2000 he moved to the States to work for Etoys and, after the .com crash, he moved to the UK where he led the web development department at Agilisys. In April 2006 he joined Yahoo! UK as a web developer and moved on to be the Lead Developer Evangelist for the Yahoo Developer Network. In December 2010 he moved on to Mozilla as Principal Developer Evangelist for HTML5 and the Open Web. He publishes an almost daily blog at http://wait-till-i.com and runs an article repository at http://icant.co.uk. He also authored
2014-23/2620/en_head.json.gz/41701 | e-Vision home current volume
cross-referenced index submission guidelines editorial staff
WRTC 395 practicum WRTC home
� A Man-Made Natural Wonder by Jeremy Cohen "What’s in the bag m’am?” She shuffled in her purse. “Oh, you know…stuff.” My mom answered the customs officer at the Canadian border with a “who, me?” response, since we were completely innocent travelers. We were crossing over to see Niagara Falls. The Canadian side offered better views of the massive series of naturally formed waterfalls. That was probably the cause of the problems I had in “seeing” the falls. I’ll elaborate on this further, but even though I was looking right at them, it felt like something was missing from the total experience. Walker Percy, an esteemed novelist, summed up this feeling in his essay “The Loss of the Creature.” In the piece, he talks of a loss of sovereignty over experience—our original sense of a place independent of stigma and preconception. He offers a solution: a “recovery” process of gaining back that which humanity has destroyed with its influence. However, according to Percy, full recovery isn’t a typical outcome. Glaciers formed Niagara Falls naturally many years ago. However, nowadays, one cannot imagine what the setting originally looked like before the town surrounding it boomed. Looking out over the falls, the golden arches of a miniature fast-food utopia blazed into the night sky with a phosphorescent glow rivaled only by the multi-story Best Western next door. Ignoring these artificial distractions, we sought out a clerk for a guide on how to tackle the falls, but were redirected to an informational kiosk. The three-dimensional display exclaimed, “Welcome to Niagara” and gave a listing of tours available that day. To say that there weren’t many options available to the tourist who wanted to “see it all” would be akin to blasphemy in the church of the falls. Our first viewing option was a towering mass of steel, rising hundreds of feet into the sky and eclipsing the sun. The giant spire, à la the Seattle Space Needle, offered express elevators and a spectacular 360-degree view from the top. Among other possibilities were helicopter rides, tunnels that bore through the rocky interior behind the falls, a shoddy walkway that actually allowed passage through the falls (raincoats were provided gratis), and a harbor boat that cruised around the base of the falls offering tours. Epitomizing the falls hysteria was a structure we thought was a bridge to the other side. But it was only a bridge that led us halfway over the falls, with a glass floor allowing us to gaze right into the heart of the falls. My mom, brother, and I went on almost all of these tours mainly because we booked a hotel for two days, and there wasn’t anything else to do there. After the whirlwind of “falls” mania, I felt jaded. I got back to my hometown exhausted. My grandparents, who lived in Canada growing up, had talked about the wonders of the great falls my whole life. I know I didn’t feel the same sort of awe at the sight of the falls that they did. I felt like what Percy calls a “consumer of experience.” I was sold the Niagara Falls experience just as the surrounding businesses chose to sell it. With all the roaring rapids and clouds of mist in the square mile of Niagara, the one thing that stood out the most for me was found inside a Burger King, of all places. The trash bins opened automatically upon sensing hand movement. I had seen waterfalls before, especially Niagara, in magazines and on the Discovery Channel, but never before had I seen something like this. 
In fact, everyone in the restaurant seemed delighted to be there, inside, eating hamburgers, while everyone outside complained about rain, sunburn, and getting their photograph taken. I figured the falls should have had a greater effect on people than a Burger King that can be found in thousands of locations across the world—a man-made pre-fab abomination of neon and cinderblock. So I can only assume that what was missing for me flew right over others’ heads as well.
What was missing was authenticity, my own sovereign experience, “it.” How could I recover it with so many preconceptions and enforced rituals of enjoyment for a natural wonder such as the falls? Percy offers several suggestions, but the most applicable one to my experience occurs as “a consequence of a breakdown of the symbolic machinery by which the experts present the experience to the consumer” (470). Perhaps if it were Sunday, and all the tours were shut down, and the crowds had gone home for the day, the vendors and their ilk would lose their hold on the reins and let us experience the falls. The “experts” are those whom we look for to tell us to jump. They define the experience by telling us what the experience is or should be—in this case, that one should ride the “Maid of the Mist” boat tour across the harbor in order to fully experience the falls. Or that, by being hundreds of feet in the air above it, somehow one will feel enlightened and “get it.” Knowing that the only way to “recover” an experience like mine is by wrestling back the hold of the “experts,” it seemed impossible to experience Niagara Falls any other way than how I did. The sheer intensity of the other tourists, who like us were just trying to “see” the falls, brought forth the typical mob scene that led to consumer-crazed chaos. Everyone was selling a piece of the falls—in t-shirts, snow globes, and hats. Everyone else was selling an experience, pushing slogans all saying the same thing: “Come with us, and have the experience of a lifetime!” I found that instead of focusing on the falls themselves, I ended up being endlessly fascinated with the humanity of the place. Strange-looking people, beautiful people, and nondescript people were crawling out of every crevice while I fished for the snippets of their conversations like salmon swimming upstream. Percy explains the phrases “This is it” and “Now we are really living” as meaning that “now at last we are having the acceptable experience” (473). This is the mantra of the nü-falls, Niagara Incorporated.
My trip to Niagara was not unique. It was or has been felt by countless other “patrons.” The falls are a business. I couldn’t recover anything from a site so decayed by incorporation it practically begged to be sponsored by Coca-Cola. Percy talks about breaking down the “machinery” of the experience, but the machine itself cannot be broken down because it has already lobotomized the original essence of the falls and parasitically attached itself to them. Sovereignty cannot exist independent of outside influence because the mere notion of sovereignty requires an individual to declare it. Therefore, recovery was in my hands, but I failed to see it, and all I got was a lousy shirt.
Percy, Walker. “The Loss of the Creature.” Ways of Reading. Eds. David Bartholomae and Anthony Petrosky. 7th Ed. Boston: Bedford/St. Martin’s, 2005. 467-484. Back to volume nine table of contents
Jeremy Cohen Assignment
Printer-friendly PDF Back to volume nine table of contents For more information, contact:
e-Vision MSC 2103 Harrisonburg, VA 22807 540-568-7817
Contact Webmaster at [email protected]
modified April 21, 2009 | 计算机 |
2014-23/2620/en_head.json.gz/41881 | Linux Foundation Appoints Ted Ts’o to Position of Chief Technology Officer
By Linux_Foundation - December 18, 2008 - 11:28am Linux Foundation Appoints Ted Ts’o to Position of Chief Technology Office
Linux Kernel Developer Ted Ts’o to lead Linux Standard Base and ISV relationships, among other initiatives SAN FRANCISCO, December 18, 2008 – The Linux Foundation (LF), the nonprofit organization dedicated to accelerating the growth of Linux, today announced that Linux kernel developer Theodore Ts’o has been named to the position of Chief Technology Officer at the Foundation. Ts’o is currently a Linux Foundation fellow, a position he has been in since December 2007. He is one of the most highly regarded members of the Linux and open source community and is known as the first North American kernel developer. Other current and past LF fellows include Steve Hemminger, Andrew Morton, Linus Torvalds and Andrew Tridgell.
Ts’o will be replacing Markus Rex as CTO of the Linux Foundation. Rex was on loan to the Foundation from his employer Novell. He recently returned to Novell to work as the acting general manager and senior vice president of Novell’s OPS business unit. As CTO, Ts’o will lead all technical initiatives for the Linux Foundation, including oversight of the Linux Standard Base (LSB) and other workgroups such as Open Printing. He will also be the primary technical interface to LF members and the LF’s Technical Advisory Board, which represents the kernel community. “Ted is an invaluable member of the Linux Foundation team, and we’re happy he is available to assume the role of CTO where his contributions will be critical to the advancement of Linux,” said Jim Zemlin, executive director of the Linux Foundation. “We’re also very grateful to Markus Rex for his assignment at the Foundation and thank him and Novell for their commitments to Linux and the LSB.”
“I continue to believe in power of mass collaboration and the work that can be done by a community of developers, users and industry members,” said Ted Ts’o, chief technology officer at The Linux Foundation. “I’m looking forward to translating that power into concrete milestones for the LSB specifically, and for Linux overall, in the year ahead.” Since 2001, Ts’o has worked as a senior technical staff member at IBM where he most recently led a worldwide team to create an enterprise-level real-time Linux solution. He will return to IBM after this two-year fellowship at The Linux Foundation.
Ts’o has been recognized throughout the Linux and open source communities for his contributions to free software, including being awarded the 2006 Award for the Advancement of Free Software by the Free Software Foundation (FSF). Ts’o is also a Linux kernel developer, a role in which he serves as ext4 filesystem maintainer, as well as the primary author and
maintainer of e2fsprogs, the userspace utilities for the ext2, ext3, and ext4 filesystems. He is the founder and chair of the annual Linux Kernel Developers’ Summit and regularly teaches tutorials on Linux and other open source software. Ts’o was project leader for Kerberos, a network authentication system used by Red Hat Enteprise Linux, SUSE Enterprise Linux and Microsoft Windows. He was also a member of Security Area Directorate for the Internet Engineering Task Force where he chaired the IP Security (ipsec) Working Group and was a founding board member of the Free Standards Group (FSG). Ts’o studied computer science at MIT, where he received his degree in 1990.
The Linux Foundation is a nonprofit consortium dedicated to fostering the growth of Linux. Founded in 2007, the Linux Foundation sponsors the work of Linux creator Linus Torvalds and is supported by leading Linux and open source companies and developers from around the world. The Linux Foundation promotes, protects and standardizes Linux by providing unified resources and services needed for open source to successfully compete with closed platforms. For more information, please visit www.linux-foundation.org.
Trademarks: The Linux Foundation and Linux Standard Base are trademarks of The Linux Foundation. Linux is a trademark of Linus Torvalds. Third party marks and brands are the property of their respective holders.
Home › Linux Foundation Appoints Ted Ts’o to Position of Chief Technology Officer | 计算机 |
2014-23/2620/en_head.json.gz/42641 | Business Intelligence & Data Visualisation: SQL Server 2012
The new Business Intelligence and Data Visualisation features and functionality are key to the strategic and technical changes in SQL Server 2012. We will take a look at: - Power View and its impact for Data Visualisation according to the principles of Stephen Few, Tufte and other data visualisation experts
- PowerPivot improvements and new features - Reporting Services and its future in Sharepoint Come to this session in order to 'hit the ground running' with Business Intelligence and Data Visualisation in SQL Server 2012.
Jennifer Stirrup
Jen Stirrup is an award-winning, world recognised Business Intelligence and Data Visualisation expert, author, data strategist, MVP, and technical community advocate who is repeatedly honoured with peer recognition as one of the top 100 most global influential tweeters on Big Data topics. Jen has over 16 years experience in delivering BI and dataviz projects for companies of various sizes around the world.
Jen is an elected member of the Board of Directors for the Professional Association of SQL Server, holding the seat for Europe and holder of the Virtual Chapters portfolio worldwide. Jen has presented at TechEd North America and Europe, Summit, SQLBits, as well as at SQLSaturday and other SQL events in the UK, in the US and on many occasions across Europe such as Hungary, France, Belgium, Germany, Portugal, Poland, Sweden, Norway, Switzerland, Ireland, Denmark, Bulgaria, Austria and the Netherlands. Therefore she works hard! She has proven evidence of being extremely involved in the SQL Community across the UK, Europe and the US, and has been for a number of years now.
She has also presented preconference sessions in Microsoft Business Intelligence and Data Visualisation for SQLPass, SQLSummit and at various countries around Europe. When she is not working for the SQLFamily, Jen jointly runs a Copper Blue Consulting based in Hertfordshire with Allan Mitchell, where they helps business leaders derive value from their Microsoft SQL Server, SharePoint and Office365 investment both here in the UK, Europe and the US, and more recently in Africa. Jen has also won awards for her blog and community work, most notably from SQLServiaPedia as runner-up for best Business Intelligence blog. She is a former Awardee of PASS's prestigious PASSion Award in 2012. Video | 计算机 |
2014-23/2662/en_head.json.gz/16848 | Is code auditing of open source apps necessary before deployment?
Posted on 23 December 2009.
Following Sun Microsystems' decision to release a raft of open source applications to support its secure cloud computing strategy, companies may be wondering if they should conduct security tests of their customized open source software before deployment.
"Given the significant savings to be had from using open source applications, Sun's strategy is a security testing at all stages in the customization process," said Richard Kirk, Fortify European Director.
"It's also good to see Sun announcing its support for the new security guidance from the Cloud Security Alliance, since this means that its open source apps will support the best practice guidelines, which is essential when supporting a private cloud infrastructure," he added.
According to Kirk, whilst the use of encryption and VPNs to extend a secure bridge between a company IT resource and a private cloud facility is very positive - especially now that Amazon is best testing its pay-as-you-go private cloud facility - it's important that the underlying application code is also secure.
Security in any IT resource, he explained, is only as strong as the weakest link, so it's just as important to secure the source code of the software being used as it is to defend the cloud environment, as well as other aspects of a company's IT systems.
"Sun's strategy in opting for open source cloud security tools - including OpenSolaris VPC Gateway, Immutable Service Containers, Security Enhanced Virtual Machine Images and a Cloud Safety Box - is excellent news on the private cloud security front," he said.
"Even so, if businesses go down this route, it's critically important that they invest some of the costs saved by taking the open source path, in security at the program code development and customisation stages. This will help them to create an even more robust solution," he added.
Email Address Spotlight | 计算机 |
2014-23/2662/en_head.json.gz/16939 | Contact Advertise Concept Enables PC Operating Systems to Survive Attacks
Linked by Thom Holwerda on Thu 27th Jan 2011 22:15 UTC, submitted by jimmy1971 "Researchers at North Carolina State University have developed a method to restore a computer operating system to its former state if it is attacked. [...] The concept involves taking a snapshot of the operating system at strategic points in time (such as system calls or interrupts), when it is functioning normally and, then, if the operating system is attacked, to erase everything that was done since the last 'good' snapshot was taken - effectively going back in time to before the operating system attack. The mechanism also allows the operating system to identify the source of the attack and isolate it, so that the operating system will no longer be vulnerable to attacks from that application. The idea of detecting attacks and resetting a system to a safe state is a well-known technique for restoring a system's normal functions after a failure, but this is the first time researchers have developed a system that also incorporates the security fault isolation component. This critical component prevents the operating system from succumbing to the same attack repeatedly."
0 4 Comment(s) http://osne.ws/is1 Permalink for comment 460119
RE[2]: ZFS has it by BlueofRainbow on Sat 29th Jan 2011 04:14 UTC in reply to "RE: ZFS has it" Member since:
The original article is vague in one regard - is-it for a specific operating system or is-it general for any OS on a X86 architecture? The later case would be quite interesting. | 计算机 |
2014-23/2662/en_head.json.gz/17397 | Resource Center Current & Past Issues eNewsletters This copy is for your personal, non-commercial use only. To order presentation-ready copies for distribution to your colleagues, clients or customers, click the "Reprints" link at the top of any article. Hewlett-Packard Board Shakeup Gives CEO a Fresh Start
With the departure of board chairman Ray Lane and two directors, Meg Whitman can focus on reviving growth.
By Aaron Ricadela, Bloomberg April 5, 2013
Hewlett-Packard Co.’s board shakeup, including Ray Lane’s exit as chairman, gives Chief Executive Officer Meg Whitman a clearer path to revive growth and shake off years of tumult at the world’s largest computer maker.
A former president of Oracle Corp., Lane failed to use his extensive experience in enterprise computing to help Hewlett-Packard’s turnaround, and his public gaffes -- including being photographed using an Apple Inc. computer -- also sometimes served as an embarrassment to the company.
Lane, 66, instead bore the stain of the disastrous 11-month tenure of former CEO Leo Apotheker and the company’s acquisition of software maker Autonomy Corp., which led to an $8.8 billion writedown and accusations of accounting fraud. To build on the momentum that Whitman has begun to show, the board is seeking a new chairman with global experience and who can devote more time and energy to revival efforts, Pat Russo, a company director, said in an e-mailed statement. Until then, Ralph Whitworth will serve as interim chairman.
“Somebody has to be symbolically accountable,” said Jeffrey Sonnenfeld, a management professor at Yale University in New Haven, Connecticut. “The hope is that it puts this behind them so it doesn’t become a governance sideshow.”
In addition to Lane’s exit, directors G. Kennedy Thompson and John Hammergren are departing, Palo Alto, California-based Hewlett-Packard said yesterday in a statement. Lane “decided to step down,” Whitworth wrote in a blog posting.
Lane and Whitworth didn’t respond to requests via telephone and e-mail for comment.
Lane, a distinguished-looking, gray-haired elder statesman of Silicon Valley, is known as something of enterprise computing’s Mr. Fixit. He helped repair Oracle’s relationships with its customers in the early ’90s and disciplined the company’s freewheeling sales culture during his seven-year tenure there. He then stepped in alongside Apotheker after the departure of CEO Mark Hurd, who left in August 2010 after the board said Hurd violated Hewlett-Packard’s code of business ethics.
Lane and Apotheker weren’t able to make a transition from the lower-profit personal computers and other hardware the company traditionally sold, to more lucrative software, despite a mandate to expand in that area.
Lane is giving up his chairmanship two weeks after investors re-elected him in a narrow majority of votes, issuing a rebuke of his oversight of the botched Autonomy acquisition.
“It’s not typical to get a withhold vote, and if you do you’d usually resign,” said Charles Elson, director of the John L. Weinberg Center for Corporate Governance at the University of Delaware. “Lane deserves credit for stepping down. If he’d stayed on he’d become the issue.”
After the March 20 vote, during the company’s annual shareholder meeting at the Computer History Museum in Silicon Valley, Lane believed he wasn’t given sufficient credit for remaking the company’s board and ousting Apotheker, a person familiar with his thinking said. Shareholder unrest was also making it difficult for Hewlett-Packard to attract additional, high-quality directors to its board, this person said.
The second board overhaul in two years underscores shareholders’ dissatisfaction with the company’s performance and the takeover of Autonomy. The writedown of Autonomy in November capped three years of management upheaval, strategy shifts and slowing growth that hammered the shares and complicated Whitman’s turnaround efforts.
“Lane is clearly the fall guy for the botched Autonomy acquisition,” said Bill Kreher, an analyst at Edward Jones & Co. who rates Hewlett-Packard a sell. “When he was announced as the chairman, we were pleased with that decision, but at this time it’s in the best interest of the company to move on.”
Hewlett-Packard and its investors looked to Lane -- a former No. 2 at Oracle with deep roots in enterprise technology and venture capital -- to bring stability and help navigate a transformation away from personal computers into products and services for corporate customers.
His reign instead came to be associated with the ill-fated tenure of Apotheker, who was ousted after 11 months on the job; strategy shifts, such as a flip-flop over whether to sell the PC division; and acquisitions, including Autonomy, that did little to revamp Hewlett-Packard.
“It’s the right thing, even if a little late,” said Erik Gordon, a professor at the Ross School of Business at the University of Michigan. Lane “was just barely re-elected to the board and was chairman during HP’s most horrible years. His legendary status didn’ | 计算机 |
2014-23/2662/en_head.json.gz/17695 | The W3C on Web Standards
Digital Publishing and the Web
April 25, 2013 · Published in Content, User Experience, Information Architecture, HTML
∙ 2 Comments
A note from the editors: Each month, a new author from the W3C will keep you informed on what we're up to—and how you can be a part of it. This month's column is from Ivan Herman, W3C Semantic Web Activity Lead.
Electronic books are on the rise everywhere. For some this threatens centuries-old traditions; for others it opens up new possibilities in the way we think about information exchange in general, and about books in particular. Hate it or love it: electronic books are with us to stay.
A press release issued by the Pew Research Center’s Internet & American Life Project in December 2012 describes an upward trend in the consumption of electronic books. The trends are similar in the UK, China, Brazil, Japan, and other countries.
“…the number of Americans over age 16 reading eBooks rose in 2012 from 16 to 23 percent, while those reading printed books fell from 72 percent to 67. …the number of owners of either a tablet computer or e-book reading device such as a Kindle or Nook grew from 18% in late 2011 to 33% in late 2012. …in late 2012 19% of Americans ages 16 and older own e-book reading devices such as Kindles and Nooks, compared with 10% who owned such devices at the same time last year.” What does this mean for web professionals? Electronic books represent a market that’s powered by core web technologies such as HTML, CSS, and SVG. When you use EPUB, one of the primary standards for electronic books, you are creating a packaged website or application. EPUB3 is at the bleeding edge of current web standards: it is based on HTML5, CSS2.1 with some CSS3 modules, SVG, OpenType, and WOFF. EPUB3’s embrace of scripting is sure to encourage the development of more interactivity, which is sought after in education materials and children’s books.
Recently W3C has been working more closely with digital publishers to find out what else the Open Web Platform must do to meet that industry’s needs.
One comment we’ve heard loud and clear is that people care deeply about centuries-old print traditions. For example, Japanese and Korean users have accepted that many websites display text horizontally, from left to right. While that may be ok for the web, when these users read a novel, they expect traditional layout: characters rendered vertically and from right to left. Japanese readers often find it more tiring to read a long text in any other way. To address these requirements, W3C is looking at the challenges that vertical layout poses for HTML, CSS, and other technologies; see for example CSS Writing Modes Module Level 3.
Requirement of Japanese Text Layout summarizes the typesetting traditions and resulting requirements for Japanese. These traditions should eventually be reproduced on the web as well as in electronic books. In June, W3C will hold a digital publishing workshop in Tokyo on the specific issues surrounding internationalization and electronic books.
We have also heard that the “page” paradigm—including notions of headers, footnotes, indexes, glossaries, and detailed tables of contents—is important when people read books of hundreds or thousands of pages. Web technology will need to reintegrate these UI elements smoothly; see for example the CSS Paged Media Module Level 3 (Joe Clark talked about paged media and the production of ebooks in 2010, and Nellie McKesson gave us an update in 2012). In September in Paris, W3C will hold a workshop on the creation of electronic books using web technologies. Note that both this and the Writing Modes Module are still drafts and need further work. That means now is the right time for the digital publishing community to have its voice heard!
In the realm of metadata, important to publishers, librarians, and archivists, the challenge is to agree on vocabulary (and there are many: Harvard’s reference to metadata standards is only the tip of the iceberg). Pearson Publishing recently launched the Open Linked Education Community Group to examine creating a curated subset of Wikipedia data that can be used for tagging educational content.
Here are a few other places to look for activity and convergence:
People take notes in books and highlight text. Most ebook readers these days have built-in support for these features, but they are not widely deployed on the web.
Today search engines tend to ignore electronic books; I expect that will not remain the case for long.
“Offline mode” in web technology is still difficult to use if you try to access more than a single page of a site. Since an ebook is quite often a packaged website, ebook offline mode will need to improve to support browsing.
ebooks business models are likely to drive new approaches to monetization, some of which may be found in native mobile environments but not yet on the web.
Although publishing has some specific requirements not common to the web generally, I think that the distinction between a website (or app) and an ebook will disappear with time. As I have written before, both will demand high-quality typography and layout, interactivity, linking, multimedia, offline access, annotations, metadata, and so on. Digital publishers’ interest in the Open Web Platform is a natural progression of their embrace of the early web.
The W3C
The World Wide Web Consortium (W3C) is an international community where member organizations, a full-time staff, and the public work together to develop web standards. Led by web inventor Tim Berners-Lee and CEO Jeffrey Jaffe, W3C’s mission is to lead the web to its full potential. Contact W3C for more information.
Ad via The Deck
Job listings via We Work Remotely
More from ALA
Matt Griffin on How We Work
Being Profitable
So you own a business. It’s the best job you’ve ever had, and it will be forever—as long as the business stays viable. That means understanding when it's profitable, and when you may have to make some adjustments. Don’t worry—it doesn’t require an accounting degree and it won’t turn you into a greedy industrialist.
Laura Kalbag on Freelance Design
I Don’t Like It
The most dreaded of all design feedback is the peremptory, “I don’t like it.” Rather than slinking back to the drawing board, it’s important to get clarity on what the client is reacting to. Guiding this conversation can turn a show-stopper into a mutual win.
Ten CSS One-Liners to Replace Native Apps
Håkon Wium Lie is the father of CSS, the CTO of Opera, and a pioneer advocate for web standards. Earlier this year, we published his blog post, “CSS Regions Considered Harmful.” When Håkon speaks, whether we always agree or not, we listen. Today, Håkon introduces CSS Figures and argues their case. Håkon Wium Lie
Longform Content with Craft Matrix
Jason Santa Maria recently shared some thoughts about pacing content, and my developer brain couldn’t help but think about how I’d go about building the examples he talked about. The one fool-proof way to achieve heavily art-directed layouts like those is to write the HTML by hand. The problem is that content managers are not always developers, and the code can get complex pretty quickly. That’s why we use content management systems—to give content managers easier and more powerful control over content. Anthony Colangelo
Ten Years Ago in ALA: Dynamic Text Replacement
Ten years ago this month in Issue 183, A List Apart published Stewart Rosenberger’s “Dynamic Text Replacement.” Stewart lamented text styling as a “dull headache of web design” with “only a handful of fonts that are universally available, and sophisticated graphical effects are next to impossible using only standard CSS and HTML.” To help ease these pains, Stewart presented a technique for styling typography by dynamically replacing text with an image.
Yesenia Perez-Cruz
Apple and Responsive Design
Apple has always had a funny relationship with responsive design. They’ve only sparingly used media queries to make minor visual tweaks on important pages, like their current homepage. Though a “handcrafted for all devices” approach seems like the “Apple way,” it’s almost as if they’ve avoided it because of the iPhone’s original pitch—giving users the ability to pinch and zoom their way through the “full” web, as opposed to being shuttled off to the mobile web. Anthony Colangelo
Testing Responsive Images
At long last, the native picture element isn’t just coming: it’s here. The picture element has landed in Canary—Google’s “beta” channel for upcoming Chrome releases—and we can try it out for ourselves right now. Now, we need to test it out, look for bugs, and file issues. Mat Marquis
For people who make websites. | 计算机 |
2014-23/2662/en_head.json.gz/17716 | Find Help Developing Mobile Apps at Build It With Me
Maybe you have an amazing idea for a new iPhone or Android app that you think could be the next big thing, or you've already created a killer phone app that you want help porting to other mobile platforms, Build It With Me is a new Web service that aims to connect developers and people with ideas with one another or with companies looking to fund new mobile apps.To use Build It With Me, you have to sign up for an
account. Once you're signed up, you can start posting ideas for apps
that you'd like to make but you need a designer for, or you can sign up
as a developer and offer your services out to people with app ideas. Depending on whether you're looking for a developer to help with your new idea, or you have skills and you're interested in freelancing or working on the next new idea, click the People or Ideas buttons on the top of the page. You'll be able to filter your results, based on the details of the category. For example, if you're a developer looking for projects to work on, you can scroll through the list or filter based on the type of idea (apps for iPhone, Android, the Web, or Mac or Windows), the status of the project (seeking developer, seeking designer, in development, or launched), or how much, if anything, the designer or idea person is willing to pay for your work. If you're looking for a developer, you can filter the list based on developers or designers, the type of talent you're looking for (a Web, Mac, Windows, iPhone, or Android app developer), and when they're available to start. You can even search for developers with their own ideas,or for those looking to offer their services. People with app ideas and people with the talent to build them can use Build It With Me to find one another and partner on everything from Web apps to mobile apps, and negotiate to share the profits that may come if their app is the next one to make it big. Categories: | 计算机 |
2014-23/2662/en_head.json.gz/17731 | Android fragmentation: something to fear?
The growing popularity of the Android operating system has raised questions …
- Jun 8, 2010 2:28 am UTC
Fragmentation is often cited as a major challenge for the Linux platform and mobile software ecosystem. The word gets thrown around a lot and tends to be used as a catch-all phrase to describe a wide range of loosely connected issues.
The rapid growth of the Android ecosystem and the significant number of new Android devices that are reaching the market with heavy software customizations has raised some questions about whether Google's Linux platform is going to succumb to the fragmentation menace. In this article, we'll take a look at what fragmentation means for mobile Linux and how Google's operating system addresses some of the biggest challenges.
What is fragmentation?
When used to describe software platforms, the term fragmentation generally refers to the proliferation of diverging variants—a situation in which many custom versions of the software platform emerge and coexist with the original. Platform fragmentation can weaken interoperability because applications that are built for one variant might not work on others.
The Linux platform is particularly susceptible to fragmentation because its modularity and open license make it highly conducive to customization and derivation. Although mainstream Linux distributions are all functionally similar, there are a number of major areas where they diverge. Some examples include package management, preferred desktop environment, default application selection, file system layout, and software version choices.
The lack of consistency makes it difficult to build an application that will integrate properly across the broad spectrum of Linux distributions. This is why packaging, compatibility testing, and certain kinds of platform integration tasks are often done by the distros themselves rather than by upstream application developers.
The challenges are more profound in the mobile space than on the desktop because the degree of fragmentation is compounded by the fundamental differences between different kinds of devices. For example, applications that are built to support a specific form factor, screen resolution, or input mechanism might not be compatible with devices that have different characteristics in those areas.
Another issue that arises in the mobile space is that individual handset makers and mobile carriers will make their own changes in order to differentiate their products from competitors. Such changes can sometimes create additional compatibility pitfalls.
Android's approach to minimizing fragmentation
Android fragmentation appears to be analogous to conventional desktop Linux fragmentation in some ways, but the Android platform is a very different kind of software ecosystem and consequently demands different solutions.
Unlike major Linux distributions—which each have their own meticulously curated set of package repositories—most Android variants share the same application delivery channel. When an Android device is released with unusual characteristics or a custom version of the platform, there isn't really a practical way to make sure that the device can run every program in the Android Market. There is obviously no way to patch closed-source, third-party applications that might need to be modified to guarantee compatibility with platform customizations. The distro approach to mitigating fragmentation arguably won't work for a platform that has a vibrant ecosystem of proprietary commercial software.
Google's solution is the Android compatibility definition, a document that defines a set of baseline compatibility standards for the platform. Google uses its control over the Android Market to compel device vendors to conform with the compatibility standard. The Market is one of several pieces of the Android platform that is not open and has to be licensed from Google. In order to obtain a license to ship the Market, a device maker has to first demonstrate that its products meet the criteria established in the compatibility definition.
Google's approach of tying Market access to compliance with the compatibility definition effectively encourages hardware vendors to stay within certain boundaries and not deviate from the default code base to an extent that would make applications incompatible. This is the reason why you see the same standard input model and user experience on virtually all mainstream Android devices.
The parameters of the Android compatibility definition are more restrictive than you might think. For example, the standard says that devices must have a touchscreen, camera, Bluetooth transceiver, and GPS. Any Android product that doesn't have those hardware components is not in compliance with the compatibility standard and consequently cannot ship the Android Market.
These restrictions effectively ensure that all Android devices that are intended to run third-party applications are basically the same with respect to application compatibility. In addition to mandating some consistent hardware specifications, Google has also taken steps to make the Android software more resilient to fragmentation.
One key example is the extensive use of managed code for userspace applications. Most Android programs are compiled into bytecode that is executed by Google's specialized Java runtime engine. Taking that approach—instead of using C and compiling to binary—sacrifices a lot of performance in exchange for significant gains in portability (and possibly security). Because the applications are compiled into bytecode, the software can work seamlessly across multiple processor architectures and there are not going to be any binary compatibility issues.
The exception to this rule is software that relies on Google's Native Development Kit (NDK), which allows some components that are developed in C or C++ to be used in Android through the Java Native Interface (JNI). Mozilla's Android port of Firefox is one example of an application that uses the NDK.
In order to ensure that applications that use native code will work across devices, the Android compatibility definition requires devices to support the NDK and supply access to certain native libraries and frameworks, including OpenGL ES. The compatibility definition says that the mandatory native libraries shipped on the device must also be binary-compatible with the versions that are included in the Android open source project.
Safely enabling differentiation
As we explained earlier in the article, product differentiation is a common cause of fragmentation. Google has found some practical ways to enable third-party platform customization without having to compromise application compatibility.
One of the most important parts of the puzzle is the Intent system, a mechanism that allows one Android application to invoke another and request specific functionality. When you click an e-mail address, for example, the platform will instruct the default e-mail application to start composing a new message and will automatically set the selected address as the recipient. This feature is facilitated by an Intent.
One of the many advantages of the Intent system is that it makes it easy for third-party applications to serve as drop-in replacements for the default applications. If you want to make your own Android e-mail client and you want to make sure that it doesn't break all of the other applications that rely on e-mail functionality, you merely have to support the same standard set of Intents as the default e-mail client. You can also similarly replace the Web browser, the calendar, the media player, the addressbook, and many other core applications.
Unsurprisingly, Google's compatibility standard says that when device makers replace a core application, they must implement support for the same set of standard Intents that is supported by the original application. This approach allows very deep customization in a way that is also respectful of application compatibility.
Version fragmentation
The compatibility standard and the various technical means by which Android discourages fragmentation are relatively effective. That is why Google's Android compatibility program manager Dan Morrill contends that Android fragmentation is a myth.
"Because it means everything, it actually means nothing, so the term [fragmentation] is useless," he wrote in a blog entry. "Stories on 'fragmentation' are dramatic and they drive traffic to pundits' blogs, but they have little to do with reality. 'Fragmentation' is a bogeyman, a red herring, a story you tell to frighten junior developers. Yawn."
It's not hard to understand how he arrived at the viewpoint. Much of the public discussion about the fragmentation issue is severely lacking in nuance and is often misdirected. As I have attempted to describe in this article, Android has many mechanisms in place that mitigate a wide range of common fragmentation problems. Although Android's exposure to fragmentation is much less dire than the critics contend, the platform is also not totally protected.
Android's rapid pace of development and swift version churn create some very significant challenges. New versions of the platform introduce features and APIs that aren't accessible in previous versions. The consequence is that software developed specifically for the latest version of Android might not work on older handsets. Google's own platform version statistics show that more than half of all Android devices that have access to the Market are still using a 1.x version of Android.
The Android SDK defines API "levels" that are associated with each version of the platform. New API levels can introduce additional functionality and potentially deprecate existing functionality. Each application has a manifest file that specifies the minimum and maximum API levels in which the program can operate. The application can only run on devices that are running a version of Android that fits within the boundary of the application's supported API levels. It's a bit like the versioning model that Mozilla uses for Firefox add-ons.
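Alongside the manifest declaration, apps that want a single binary to span several platform versions typically guard newer calls at runtime. A small sketch of the common pattern (the class name is illustrative):

    import android.os.Build;

    // Simple guard used to keep one binary working across API levels.
    public final class VersionGuard {
        private VersionGuard() {}

        // Android 2.2 (Froyo) corresponds to API level 8.
        public static boolean hasFroyoApis() {
            return Build.VERSION.SDK_INT >= 8;
        }
    }

Callers branch on checks like this so that devices stuck on older releases still get a working, if reduced, feature set instead of being filtered out of the Market entirely.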
By default, the Android Market program will hide applications that aren't compatible with the user's device. Obviously, there is some very real fragmentation going on between different Android versions. As we saw at the recent Google I/O conference, development is still moving forward swiftly and new features are going to be arriving in future versions.
The significant performance improvements in Android 2.2 could potentially worsen the fragmentation issue. Computationally intensive applications that are developed to take advantage of Froyo's faster execution speed will obviously not perform well on devices with older versions of the operating system.
Pace of development
The only real way to combat version fragmentation is to slow down the pace of development, a tactic that would probably be detrimental to Android's competitiveness and long-term viability. Google's aggressive pace of development has allowed Android to evolve from the mediocre me-too effort that it was at launch into the extremely compelling, top-notch platform that it is today. It's on a trajectory to leapfrog Apple's iPhone operating system, but it's only going to be able to do that if Google keeps pushing forward at breakneck speed.
Google is faced with a tricky balancing act, because it will be difficult to keep up this pace without alienating existing users who are stuck on older versions of the platform. It's also increasingly clear that some of the hardware vendors, with their inability to roll out timely updates, are a major factor that contributes to the problem.
This is partly due to lack of incentive (they would prefer to sell you a new device rather than add new capabilities to an existing device), but it's also partly Google's fault because the search giant routinely waits until after a new version is launched before it makes all of the new source code public. Increasing the transparency of development and moving from the code-dump "cathedral" model to a more inclusive "bazaar" model would make it easier for handset makers and mobile carriers to prepare for updates concurrent with Google's development process.
Fragmentation isn't going away any time soon, but I don't think that the resulting compatibility challenges are seriously damaging to Android. Google has found practical ways to minimize the impact of fragmentation and to keep the broader Android ecosystem marching to the same beat. The advantages of forward momentum arguably outweigh the cost of version fragmentation, but there are still steps that Google can and should take to encourage device makers to roll out updates. | 计算机 |
2014-23/2662/en_head.json.gz/17802 | Android/Linux kernel fight continues
September 07, 2010 12:58 PM EDT
You could argue that Google's Android, so popular on smartphones now, is the most popular Linux of all right now. There's only one little problem with that: Android has continued to be apart from the Linux mainstream. People became aware of the Android and Linux split when Ryan Paul reported that "Google engineer Patrick Brady stated unambiguously that Android is not Linux." Brady over-stated the case. Android is Linux. To be exact, version, 2.2, Froyo, runs on top of the 2.6.32 Linux kernel. To quote from the Android developer page, Dalvik, Android's Java-based interface and user-space, uses the "Linux kernel for underlying functionality such as threading and low-level memory management." Let me make it simple for you, without Linux, there is no Android. But, Google took Android in its own direction, a direction that wasn't compatible with the mainstream Linux kernel. As Greg Kroah-Hartman, head of the Linux Driver Project and a Novell engineer, wrote in Android and the Linux kernel community, "The Android kernel code is more than just the few weird drivers that were in the drivers/staging/androidsubdirectory in the kernel. In order to get a working Android system, you need the new lock type they have created, as well as hooks in the core system for their security model. In order to write a driver for hardware to work on Android, you need to properly integrate into this new lock, as well as sometimes the bizarre security model. Oh, and then there's the totally-different framebuffer driver infrastructure as well." As you might imagine, that hasn't gone over well in Android circles. This disagreement arose from at least two sources. One was that Google's Android developers had taken their own way to address power issues with WakeLocks. The other cause, as Google open source engineering manager Chris DiBona essentially said, was that Android's programmers were so busy working on Android device specifics that they had done a poor job of co-coordinating with the Linux kernel developers. The upshot was that developer circles have had a lot of heated words over what's the right way of handling Android specific code in Linux. Linus Torvalds dropped the Android drivers from the main Linux kernel. Google tried to do the right thing by hiring two new Android developers to work more closely with the Linux kernel development team to get Android back in sync Linux. At the time, it looked like Google and Android would quickly get back to the same page. It hasn't worked out that way. At LinuxCon, I asked the Linux kernel developers about this, and I got an earful. Google kernel developer Ted Ts'o said that he didn't think it was that big a deal that Android included some non-standard software. "I can't think of any shipping Linux distro, including Red Hat, that doesn't have some out-of-tree packages." And, Ts'o continued, "No one ever said, oh my God, Red Hat or Novell forked the kernel." From where Ts'o sits, the real problem is that "Android has been so successful, and that has inspired many hardware vendors to write device drivers for Android. WakeLocks calls in device drivers become problematic when people want to submit code upstream." The bottom line is that this forces chip vendors, like Qualcomm and Texas Instruments to maintain two versions of Linux, with and without WakeLocks. Needless, these companies aren't happy with the extra work. Chris Mason, Oracle's director of Linux kernel engineering, added that this kind of conflict is not new. 
While James Bottomley, a distinguished engineer at Novell and Linux kernel maintainer, added that getting Android to work smoothly with the rest of Linux will "Take a lot of effort, but it will be worth the time for the larger community." Unfortunately, according to Ts'o, time is not something the Android team has a lot of. They're too busy running to keep up with hardware requirements. Ts'o said that, although, "There's less than 64K of patch, there's been over 1,800 mail messages of discussion." Ts'o made it sound like the Android team is getting fed up with the process. "Android is a small team. They feel that they're spending a vast amount of time getting the code upstream (to the main Linux kernel)." On the Linux Kernel Mailing List (LKML), Ts'o suggested later that "You know, you don't have to wait for the Android engineers to do this work. You (or others who want to be able to use stock upstream kernel with Android devices) could just as easily try to do the 'bit more work' yourselves -- that is, do the open source thing and scratch one's own itch." Later, Ts'o also pointed out on the LKML that mainstream Linux distributions include their own non-standard code. He summed it up with, "Can we please cut out this whole forking nonsense?" In the meantime, of course, Google has other Android worries with its Oracle patent fight. In the end, I'm sure that Android and the mainstream Linux kernel will get back in sync with each other. I don't see it happening anytime soon, though, and I suspect there will be a lot more heated words exchanged before it finally happens. Print
TAGS: Android, developer, Google, Linux, Linux kernel, open source, programming
TOPICS:App Development, Applications, Emerging Technologies, Linux and Unix, Mobile Apps, Open Source, Operating Systems
2014-23/2662/en_head.json.gz/18260 | My thoughts on things in general.
Search Marketing Value For Small Business Advertising
No Comments | Posted by admin | Category:Uncategorized In the business world, there have been changes now and then which have a great impact on small and upcoming businesses. This has forced search engine marketing companies to evolve so that they can effectively and efficiently connect businesses with communities across the globe. Search engines can offer new opportunities and expand the customer base as more and more consumers use the internet for information. By establishing an effective online marketing strategy, small businesses can compete with the established businesses that have flourished in the market for a long period of time. According to San Diego marketing companies, this creates a level playing field on which small businesses and individuals can flourish.
Many SEO experts take shortcuts to achieve quick results that fade within a short duration in the market, which underscores the importance of working with an internet marketing company which will not put your business at risk. With some companies you will be sure your online business is safe, giving you time to think about how you can incorporate other ideas into the business to catch up with big businesses. Choosing the right search engine marketing company will establish a cost-effective marketing campaign while developing a web presence that will generate more traffic. Promoting your business online is essential for connecting with more customers and reducing costs of advertising.
3 Things That Internet Marketing San Diego Companies Can Give You
If you are a business owner, your major concern is how to bring your products or services close to your potential clients. Print and media advertising can be effective but they are costly and their reach is limited. With more and more people turning to the internet to search for providers of their needs, internet marketing has become the best advertising mode and internet marketing San Diego companies are here to serve you.
Here are three things that you can get from internet marketing San Diego companies. First, these companies offer search engine optimization services at affordable prices. You can opt for a package or pay only for the SEO services you need, such as keyword research, content marketing, on-page optimization, and off-site optimization. Second, these companies have the tools to launch large marketing campaigns using email and social media. Email campaigns can reach millions in a few minutes, while social media can improve your web site's ranking in a very short time. Last, your web site will be analyzed to determine what other aspects need improvement so that you can attract more clients. After these weak points are identified, you will be helped to strengthen them to achieve better results. Grab this opportunity to have your business serviced by internet marketing San Diego companies, the best in terms of digital marketing services.
Strategy, Marketing And Cans!
No Comments | Posted by admin | Category:Business While others are clamoring for integrated marketing campaigns, U.S. Can Corp., a leading manufacturer of metal containers, is moving in the opposite direction.
The company in July unveiled a corporate restructuring plan that focuses on segmentation of marketing, marketing strategy and sales, rather than integration.
A growing customer sophistication, coupled with the ever-increasing global marketplace, led to the restructuring of the Oak Brook, Ill.-based company, says its new chairman-CEO, Paul Jones.
Mr. Jones says that when he joined the company in April, he set out to understand the business and how he could prepare it for growth. What he learned, he says, is that U.S. Can’s “manufacturing plants, equipment and people are among the best in the industry,” but the company “had an opportunity in the area of strategic marketing and strategic pricing.” The company’s containers are used for personal care, household, automotive, paint and industrial products.
Conversations with employees, customers and analysts brought Mr. Jones to the conclusion that the key to success lies in a new approach to marketing.
Reorganization for customers
Under the new plan, each of the company’s business operations – aerosol; paint, general line and plastic; custom and specialty products; and international – will be responsible for its own marketing, market strategy, manufacturing, sales and overall business leadership.
With most companies leaning toward heavily integrating the organization, U.S. Can may seem a bit out of step with business trends, but Mr. Jones says it’s all a matter of timing.
“We do need to do a lot more integrating and we will do a lot more integrating, but right now, we’ve got our work cut out for us to get this organization working and focusing on the customers,” he says. “At some point in the future, we’ll take another step, but that’s not part of my plans right now.”
In U.S. Can’s previous structure, one person took charge of all sales and marketing efforts. Mr. Jones says the new approach will result in a more concentrated marketing effort.
“In today’s environment, an organization is much better served to have one person solely focused on marketing and another solely focused on managing the sales function,” he says. “When you have someone over both sales and marketing, 98% of their time is spent on managing the sales organization. The amount of time devoted to marketing is little or none.”
Each of the four operations will be in charge of its own marketing budgets, including merchandising and pricing. However, a marketing council is being formed in which all marketing leaders will come together with the director of communications to make key decisions on issues such as media buying.
While the sales function will not be altered by the new structure, a new sales force automation program is in development that promises to take efforts to a new level. Furthermore, a global accounts management process will be established in October, which Mr. Jones said he thinks will help ensure that customers are served properly and promptly.
In addition to the four business divisions, a Business Support Organization has been created to help foster cooperation among plants. Serving as vice president will be Thomas Scrimo, who was recruited from Greenfield Industries’ consumer products group, Latrobe, Pa.
Mr. Jones says the organization will serve as a quarterback of sorts, “calling signals for those 35 plants out there, so they will all be marching in the same direction.”
Mr. Scrimo and his staff will be responsible for the company’s overall quality assurance, manufacturing systems and programs, plant rationalization and restructuring, and manufacturing technology and strategy for each of the business units.
With all the division leaders located in the company’s Oak Brook headquarters, they also will be able to coordinate an ongoing interchange of ideas and practices.
Against the trend
Still, U.S. Can's new structure "is against the trend that we see among the clients we serve," says Michael Weaver, VP-account group director for NKH&W, a Kansas City, Mo.-based integrated marketing communications agency. "The rest of the world is mainly moving in a different direction, and that is integration." NKH&W's client list includes Phillips Petroleum Co., Yellow Freight and Nesta Chemicals.
“The appointment of someone to oversee the new structure seems like a prudent step,” Mr. Weaver says, “but unless there is line responsibility, facilitating positions often do not work out. A unified structure almost always seems to pay off in the long run.”
Says Rick Kean, executive director of the Business Marketing Association, Chicago: “I don’t know how you can have a seamless operation with so many separate parts. The cool thing about integration is it makes it easier to quantify what’s working and what isn’t.”
If U.S. Can doesn’t go the integrated route, Mr. Kean says, “it would be important that their structure identify a measure of accountability. So often, a major problem is lack of communication and accountability. Maybe the [marketing] council will do this.”
Next millennium
Mr. Jones said he thinks U.S. Can’s new marketing focus will take it into the new millennium as a force to be reckoned with in the container industry. And despite what critics say, he is determined to see it through.
“It’s a case of looking at where we were and where we need to go, and I think this is the right step for this company right now,” he says. “We’ve got an initiative to take marketing to the same level as our manufacturing and make it one of the success stories of this company.”
| Tags: business change, integrated marketing
Talking About “A Moment On Earth”
No Comments | Posted by admin | Category:The Environment It’s difficult to avoid the conclusion that some ideas are easier to get published than others. As an environmentalist, you may have at least heard of Losing Ground, a tough critique of the big U.S. environmental groups. But the average citizen must navigate the swells of environmental literature (and environmental politics) without even a vague insider’s sense of the rumors, jokes, and daily gossip of the green movement. If they know the new books at all, they know them by advertising and media “buzz,” and are likely to have noticed only A Moment On The Earth.
Losing Ground is published by an academic press, and in bookstores is generally tucked almost invisibly within daunting stacks brimming with “new nonfiction.” This is hardly the fate of Gregg Easterbrook’s book, which was expensively acquired, aggressively promoted, and received by talk-show hosts, op-ed editors, and other assorted opinion-makers with almost abject eagerness. Why the stark contrast? Has it anything to do with, say, politics? With the mood of the country? Easterbrook’s claim, certainly, is soothing, so much so that it stands apart. Here is its kernel, in his own words:
In the western nations and especially the United States, which is the first nation to attempt a systematic if flawed, but genuine and systematic attempt to protect the environment, trends are now in the main positive.
Dowie, like the grassroots environmental activists he champions, has reached different conclusions – so different that one reviewer was moved to ask if the two books even hail from the same planet. Easterbrook celebrates a “coming age of environmental optimism,” and insists in the midst of today’s frenzy of free-market anti-environmentalism, that long-term trends will vindicate him. Dowie, for his part tells the story of an official environmental movement at the brink that, led by a privileged and disoriented elite and outflanked by increasingly sophisticated corporations, risks becoming altogether irrelevant in the great game for the future.
It’s an odd divergence, but it has explanations, and they concern overall approach as well as political agendas. Losing Ground takes a tight focus – the American environmental movement, and particularly the failures which, Dowie argues, are products of its elitism and self-absorption. A Moment on the Earth, for its part, wanders from ill-spirited, pro-corporate polemics – here greens appear as “doomsters” and “transrationalists” – to vast ideological vistas in which Easterbrook presumes to speak, repeatedly and with startling hubris, for “nature” itself.
Losing Ground was published several months after A Moment on the Earth. It was easy enough to find “off the record” tales of green activists, pressured to optimism by Easterbrookean funders and development directors, who heaved a sigh of relief as Dowie came to their support. Today, with U.S. greens under heavy attack, even Easterbrook’s moment is fading. Not that he is gone, but certainly his claims are palpably, visibly, misformed. Indeed, he himself has regrets. Speaking in Berkeley in May, Easterbrook admitted that if he had “been smarter” and “seen what was coming” in the GOP-controlled Congress, he would have written the book differently. It’s an astounding admission, especially when made by an author still on his book tour.
Ironically, it was the Environmental Defense Fund that most clearly saw the need to debunk Easterbrook. EDF, with its strident and generally unreflective advocacy of pollution trading and the other mechanisms of market environmentalism, has done more than any other single organization to push the American environmental movement onto the treacherous slopes of cost-analyzed, corporate-biased realpolitik. Easterbrook was still touring when EDF published the first installment of “A Moment Of Truth” to correct Easterbrook’s “scientific errors.” (See page 28.)
Then, as a delicious followup, came something like comedy. Here is Easterbrook’s response (May 5th, National Public Radio) to CEO Fred Krupp’s announcement that EDF had a detailed critique of Moment in the works:
Don’t these guys have better things to do, with Gingrich and Dole up on Capitol Hill? . . .Now I think, sadly, from the point of view of financial self-interest it may be perfectly logical. I mean, obviously, EDF knows that Gingrich and Bob Dole are bad news for the environment, but they may make it easier to raise money for environmental groups. . . . Now if you turn around and look at me, for example, my optimistic ideas may be good for the environment, but they certainly might upset traditional forms of environmental group fundraising.
This is more than a clever, opportunistic defense. It is also peculiarly Dowiesque. Easterbrook’s claim here is that EDF’s virtue is that of the heroic, bulk-mail fundraiser, and this is just Losing Ground’s charge against elite environmentalism.
EDF was right to attack Easterbrook, for Moment is one of the most pernicious documents to claim itself as green in quite-some time. Yet EDF and Easterbrook actually have a great deal in common – most obviously technological optimism and a faith that, in the end, the market will save us all. Moreover, there is a shard of justice in Easterbrook’s charge against EDF.
Losing Ground has its own problems. Dowie’s history is uneven, for one thing, and troubled by an unfortunate, politically correct tone. And his whirlwind review of the philosophical problems raised by ecological crisis is more than just a bit thin. Losing Ground’s more significant problems hover about its core concerns – the tribulations of “legislation and litigation,” justice and elitism, money and politics, both near and long term. And here, crucially, the contrast between Dowie and Easterbrook couldn’t be stronger, for Losing Ground is constructive even in its weaknesses.
Take Dowie’s history of U.S. environmental law, which is alone reason enough to read the book. The story of large, environmental organizations as a “class-bound interest group” that pursues, largely though the litigation/legislation strategy, “a more cautious reform agenda than the movement as a whole” is a story screaming to be told, and Dowie does not disappoint. In a dramatic narrative that begins with a chapter called “Sue the Bastards!” and runs inexorably though “Fix Becomes Folly,” he offers a vivid, unsparing history of the often funder-dominated and always lawyerly process by which the American people were so unwisely encouraged to trust in the sincerity and efficacy of official regulatory politics.
Dowie’s central claim is that “American land, air, and water. . .would be in far better condition had environmental leaders been bolder; more diverse in class, race, and gender; less compromising in battle; and less gentlemanly in their day-to-day dealings with adversaries.”
It is a claim we should take dead seriously. Anti-environmentalists now control the playing field, and much depends on how we choose to understand the path by which they stormed it. No good will be served by denying that official environmentalism is a big part of the problem. Denis Hayes, the director of the first Earth Day and a man who has taken both sides in this debate, recently observed, “There is some sense that [environmentalists] are sort of pointy-headed intellectuals who use complicated analyses and don’t care much about regular people.” With social insecurity high and rising, “regular people” find their concerns rotating tightly around money and the future, and we should not be surprised if they lack great sympathy with a distant “environmentalism” of bureaucrats and well-dressed expertise.
Not all enviros are happy with this analysts, pointing to the accomplishments of the Clean Air and Water acts, Superfund, NEPA, the Endangered Species Act and so on. Dowie does not disagree that the early victories were significant. It’s just that he sees them as limited and more likely to be rolled back than reinforced.
It has not been long since Al Gore won high office, but who today will claim this a greal event? Is it not, to be honest, the token of a “victory” that did more to inflame the right than to empower the greens, a victory that left huge numbers of Americans concluding that the environmental problem was taken care of? Obviously, this isn’t the whole story, but certainly it’s part of it, and it must be faced if the green movement is to have a future. Dowie, then, is essential reading, and this despite the fact that he hardly has the whole story sorted out.
Losing Ground is a report from the front, not a breakthrough of political synthesis. Dowie has talked to people throughout the U.S. movement, and done a fine job of collecting their thoughts, war stories, and theories in a reasonably balanced and tidy package. It may be too tidy, but the fact remains that the opinions here are the common currency of activist circles. But there are crucial contradictions among them. Here are some of them, noted by Paul Rauber in the Sept/Oct 1995 issue of Sierra magazine:
[Dowie] faults environmental groups for not hiring enough people of color – but when they do he faults them for stealing talent from grassroots groups. He complains that environmentalists are failing to reach out to distressed loggers – and then belabors any group that fails to advocate a total ban on logging in the national forests. He insists that they add to their core concerns environmental justice, international human rights, eco-feminism, and spiritual ecology – and then ridicules the “passive supporters of mainstream groups [who] have proven themselves mercurial, faddish, and easily attracted to other causes.”
This is quite fair. Dowie does all of this. But it is a flaw far less decisive than Rauber imagines. The real point, and the reason why Losing Ground is a constructive book, is that the contradictions here are not for Dowie to resolve, but for us all. Dowie is trying to tell hard truths, and to do so while speaking for a strategy in which green mainstreamers and grassroots environmental justice activists evolve new ways of working, ways that allow them to disagree, and yet to continue to work together to pursue their common goals. His sympathy is with the grassroots, but he is generous to opposing views.
The easy victories are over. From now on, environmental battles will be hard won, if they are won at all. There are many lessons to be drawn from looking back on the path that led us here, and I doubt many of them are better than Dowie’s insistence that “justice” is the key to the future.
We should not imagine, though, that in affirming even such a fine word as justice we have done something decisive. There is more at stake here than pretty rhetoric, as there is more to strategy than uncritical cooperation. The mainstream is here to stay, and so is the grassroots. The question is how we can work together so that each of us strengthens, rather than undermines, the other.
It won’t be easy. As Easterbrook has proven, one may imagine oneself a fine environmentalist, and yet eagerly carry water for the anti-environmental right. And Dowie has shown that the elitism and simple-minded realism of the mainstream movement goes beyond absurdity to be actively debilitating, and even nurtures backlash.
What is the lesson of that backlash? It is, I think, that people do not attend solely to isolated single issues like spotted owls or even clean water. “The voters,” as they are called, care as well about large political themes. And if the right has managed to seize the stage and define those themes as excessive regulation and government interference, that is at least strong evidence that environmental protection will not be won by an apolitical strategy that seeks to avoid facing the realities of life on an increasingly polarized planet.
The story of the new right, and of anti-environmental upsurge, and indeed of the militias, is the story of American populism swinging again to its right-wing pole, of freedom seen as a variant of private life and property, and as the antithesis of regulation. This is not inevitably its definition, but with big capital benefiting so richly from its repetition, an alternative will not come easy.
Yet there can be an alternative. Populism has a second, better, past, and we just may be able to build on it. But popu-lism is not enough. If freedom is to cease to be a synonym for property, it will not do for environmentalists to give lip service to the poor, but actually spend their days working for the salvation they imagine hidden in markets and cost analysis. If justice is to once again be widely accepted as a political ideal, it must as well be seen as an aspect of freedom, and as a product of life in healthy, active, communities. Building those communities is what environmental justice is all about, and if we are to escape this dead-end into which we have wandered, we had best think beyond new laws and regulations, and beyond a bright new set of bulk mailings.
| Tags: books, environmental trends, movements
How Does Enterprise Marketing Automation Help Your Company?
No Comments | Posted by admin | Category:Business In this era of integrated marketing, business-to-business marketers have had few tools to help them create and implement customized promotions and campaigns – and even fewer that tie into the Web as well.
But that’s changing with the emergence of a new category of front-office applications called enterprise marketing automation, a spinoff of sales force automation tools designed specifically for the marketing department.
Formally introduced this year, this category has only a handful of vendors so far, but the market is expected to grow rapidly.
“The whole idea of managing marketing and marketing campaigns has been neglected,” says Judith Hurwitz, president of the Hurwitz Group, Framingham, Mass., an analyst company specializing in strategic business applications.
As businesses begin to do more sales and marketing over the Web, they will definitely need tools and technology to help them, Ms. Hurwitz says.
In fact, Web marketing is one of the primary forces driving this category.
Taming the Wild West
“Right now the Web is a wild frontier. With a sophisticated Web tool you can start learning from your experience and make those intelligent decisions about how you spend money,” says Ms. Hurwitz.
However, EMA is more than just a reporting mechanism for Web marketing. It’s designed to provide companies with the ability to integrate information on marketing campaigns, prospects and customers, regardless of the marketing channel.
Armed with that data, marketers can set up processes to collect information, send appropriate responses to prospective customers and eventually push qualified prospects into the correct sales channel.
Tied to sales automation or other sales channel systems, EMA also allows companies to accurately measure marketing return on investment.
That’s the kind of functionality Hitachi Semiconductor was looking for in 1994. When it couldn’t find packaged software that fit its needs, the company built its own, spending $350,000 on its Responsive Sales Lead Management System.
While the homemade system has been successful, it’s only gone so far, says Jim Rey, director-marketing communications for Hitachi Semiconductor, South San Francisco, Calif. The company is now piloting an EMA system from Rubric, San Mateo, Calif., to replace it.
“As the Internet has taken off and as field sales and customers became more progressive, we saw the need for mobilization and a Web-based system, and that’s where our home-grown system has fallen off,” Mr. Rey says.
The Internet is becoming increasingly important to Hitachi Semiconductor as a marketing channel, and that’s putting a strain on marketing systems.
“In the paper world, we were dealing with less than 1,000 inquiries per month. On the Internet, just last month, there were 32,000 downloads of pdf [portable document format] documents,” says Mr. Rey. “On top of that, the views and visits are becoming hundreds of thousands.”
Impressive ROI
As with Hitachi’s current system, the goals of the new project center on collecting as much customer information as possible. Then, its sales reps can provide prospects with custom information and dangle carrots designed to get them to respond to its offers.
While the complete cost of the project has not been determined, Mr. Rey says it will be a lot less than the cost to implement its old system. And even more important, he adds, it will mean an increase in sales.
“I’ve done all the ROI analysis with all of the standard assumptions looking at a 5% close rate, a 10% close rate or a 20%, and the numbers are quite impressive,” he says. “This is definitely the way to go.”
Companies most interested in these systems are those struggling with distributing and managing leads from their marketing channels, says Markus Duffin, director-enterprise marketing solutions at consulting and systems integration firm Cambridge Technology Partners, San Francisco. The systems also are attractive to companies trying to extend their Web marketing channel and coordinate those activities with their telemarketing and direct mail channels.
Cost is steep
The cost of entry is steep. Most of the vendors’ EMA solutions start at about $200,000, with system integration potentially doubling that cost.
But Mr. Duffin says ROI is there for those willing to make the leap.
“I think they’re going to see an increase in revenue at least between 5% and 10%, conservatively, coupled with another 5% to 10% reduction in marketing costs over time,” he says.
“The business case is not that hard to make,” Mr. Duffin adds. “What clients are a little bit leery about is the newness of these products and how their organizations will embrace it.”
| Tags: ema, integrated marketing, proper ROI
The Tech Industry Isn’t Just Good, Clean Fun
No Comments | Posted by admin | Category:The Environment The speed of microprocessors has increased dramatically. Hard drives that once held 40 megabytes now hold gigabytes. The products of the electronics industry are transforming the way we live, work, learn and play. However, despite its promise, high-tech development also has a darker side. The legacy of high-tech production in Silicon Valley, California – the birthplace of the electronics revolution includes the toxic footprint of groundwater pollution, a high worker illness rate and an elevated rate of miscarriage for production workers.
In 30 years, the production of semiconductors, data processing and telecommunications equipment made the electronics industry one of the world’s largest and fastest growing manufacturing sectors. Some industry projections indicate 100 new semiconductor plants will be built before the end of the century. As the industry spreads through the United States and into the Third World, it has become a worldwide player, bringing significant economic and environmental impacts. The passage of NAFTA and GATT have increased the mobility of the electronics industry and highlight the urgency of establishing networks with groups in other countries and other parts of the United States.
Despite the squeaky clean image of computer products and campus-like appearance of their manufacturing facilities, the electronics industry is dependent on some of the most toxic substances ever synthesized. These include toxic gases, large quantities of dangerous solvents, metals, acids and volatile organic compounds.
Exposure to hazardous chemicals in the workplace and toxic releases to surrounding communities have resulted in cancer, central nervous system disorders, birth defects, deaths, unprecedented environmental degradation and substantial groundwater contamination. The “clean” in this so-called clean industry refers to the conditions needed to produce working circuits, not to the working conditions for employees or the environmental impact of production.
A few examples of the serious toxic legacy of high-tech development include:
* Silicon Valley has more EPA Superfund sites than anywhere else in the country due to groundwater contamination caused by electronics firms.
* The semiconductor industry uses more toxic gases, including such lethal gases as arsine and phosphine – than any other industry in the country.
* Until very recently, electronics manufacturing depended on the use of large quantities of ozone-destroying CFCs.
Unless the electronics industry makes a commitment to toxics-use reduction, the next generation of smaller and faster chips will use even more solvents and toxic substances to achieve the necessary requirement for “clean” components.
The same chemicals that are harmful to the environment are also harmful to the industry’s workforce, which is primarily composed of people of color, women and immigrants. According to Dr. Joseph La Dou, director of Occupational and Environmental Medicine at the University of California at San Francisco, systemic poisoning (illness related to exposure to toxic chemicals) of electronics workers is higher than workers in the chemical industry, even those in pesticide manufacturing. Semiconductors workers have the highest rate of all electronics workers.
Reproductive hazards from chemical exposure are a real concern to both environmentalists and workers. Yet employers provide little information and even less protection against reproductive hazards. In the highly competitive field of chip manufacturing, production demands often outweigh safety demands. Often the industry has opted to phase out the workers, rather than phase out the toxics.
Electronics firms often do not have enlightened employee relations policies for production workers, and the industry is notorious for its anti-union stance. Not surprisingly, federal data confirms the industry’s highly polarized workforce. White men dominate managerial and professional positions, while women and people of color – those exposed to the toxic chemicals – dominate the semi-skilled production workforce.
The corporate centers for the electronics industry are concentrated in the Silicon Valley, the Route 128 area near Boston, Japan and western Europe. In search of lower wages and weak environmental standards, the industry is expanding its production into Asia, Latin America and the Caribbean. This relocation is also happening within the United States. Numerous manufacturing plants are relocating to the Southwest – a low wage region with large populations of people of color, a lack of trade union activity, less stringent environmental regulations and workplace safety standards, and weak government and enforcement structures. (See box on Intel below.)
More recently, a new enticement has entered into the picture – subsidies. To attract high-tech businesses, municipalities enter “bidding wars” to see who can offer the largest incentives or weaken environmental and workplace regulations the most. In this “race to the bottom,” states offer multi-billion dollar industrial revenue bonds, tax abatement programs, streamlined environmental permitting process, and enormous direct and indirect subsidies. Oregon has become a new area for the industry to relocate because of its “Strategic Incentive Program.”
Industry “promoters” emphasize increased jobs when an economic development strategy based on micro-electronics and computers is adopted. These big giveaways could result in local communities not having the money necessary to build the infrastructure necessary to accommodate the new growth.
Many of these problems are systemic to the structure of the industry. Few industry or government leaders have confronted the negative impact of high-tech industries. The Silicon Valley Toxics Coalition (SVTC) and other groups are challenging these impacts by creating models to help level the playing field so communities can attain sustainable development and environmentally-responsible manufacturing. This involves encouraging industry to adopt a proactive pollution prevention approach and develop new technologies that will reduce the environmental and occupational health impact of their production processes. It also involves organizing local communities to counter industry’s whipsaw tactics of pitting city against city in the quest for jobs by imposing conditions on new economic development plans.
The electronics industry tries to drive wedges between workers and those concerned about the environment by flaming the debate as a choice between the environment and jobs. To thwart this effort, we are building broad coalitions with environmental, community and labor organizations who agree that the relationship between electronics manufacturing companies, workers and communities must be restructured.
Because environmental problems are woven into the social fabric of our lives we must recognize the need for broader social solutions beyond the mitigation of a particular risk of environmental hazard. We must work for greater, more democratic public participation at every stage of policy making, community and worker empowerment, and corporate accountability. We can’t put the genie back into the bottle. Instead, we must become involved in efforts to ensure that high-tech industrialization benefits the communities and workers without harming the local economies or the environment.
| Tags: bad products, dangerous industries, tech toxic waste
Is Environmentalism Really Dead, Or Just Getting Stronger?
No Comments | Posted by admin | Category:The Environment “Environmental Groups Are Drying Up in the `10s.” “Green Magazines in the Red. “Environmental Movement Struggling as Clout Fades.” The headlines in the nation’s press read like epitaphs.
A Wall Street Journal article observed that, “After years of fighting to save whales and spotted owls, the nation’s big environmental groups are in agony about another dwindling species – their supporters.”
Adding to the perception that the greens have lost their muscle was the dismal lack of legislative victories in the last Congress, when the Democrats controlled both houses and the White House.
But it’s not just the media offering a grim prognosis. “The environmental movement is in massive decline and is going to need a major overhaul if it wants to stage a comeback,” says Eric Mann, director of the Labor/Community Strategy Center in Los Angeles.
A quarter-century past the first Earth Day, is the environmental movement really in its last gasp? Or, in the words of Mark Twain, have reports of its death been greatly exaggerated?
One thing is clear: If the movement is ailing, it’s not due to lack of popular support for the issues. Public opinion polls reflect a strong and consistent commitment to the environment. A Times Mirror survey last June found that 79 percent of respondents describe themselves as active (23 percent) or sympathetic (56 percent) environmentalists. More than half thought environmental laws hadn’t gone far enough, with only 16 percent thinking laws had gone too far. Environmental groups were also given high ratings, with 74 percent feeling highly or moderately favorable toward them. Eighty-nine percent of college students named the environment as the top concern facing the nation in 1992, according to The Student Political Organizing Guide, published by Sierra Club Campus Green Vote and Americans for the Environment.
This support is reflected in the hefty membership rolls and multi-million-dollar budgets of the nation’s largest environmental organizations. Some, like the Nature Conservancy, World Wildlife Fund and Environmental Defense Fund, are experiencing remarkable growth. Others, like the National Audubon Society and Natural Resources Defense Council, lost support when President Clinton was elected, but are now rebounding. Local groups continue to flourish, with the Citizens Clearinghouse on Hazardous Waste (CCHW) in touch with more than 8,000 community organizations.
“Overall, the membership trend was downward,” says Matt MacWilliams, managing partner of MCSSR, a communications consulting firm based in Takoma Park, Maryland, which represents numerous environmental clients. “Not because people were less interested in the environment, but because they thought the problems were being solved when Clinton was elected Since the [congressional] elections, many groups have been picking up membership. Ifs a direct relation to the perceived threat.”
“When Republicans are in office, we seem to get stronger, more united and more powerful,” agrees Luis Sepulveda, president of West Dallas Coalition for Environmental Justice. “Environmental programs get more support.”
While the state of the movement may not be as bad as some have said, most would agree that environmentalists need to regain some of their earlier vigor. Critics from both the right and left point to national groups they say are bloated with bureaucracy, overrun with lawyers and far removed from their grassroots base.
“The most fundamental problem is the collapse of much of the mainstream environmental movement into an explicitly pro-corporate stance,” says Mann.
Not surprisingly, articles in the mainstream press contend that it’s this pro-corporate stance that offers a promising future for environmentalism. For example, an article in the Philadelphia Inquirer argued that EDF’s policy of working with McDonald’s and Prudential Insurance on recycling is just the kind of practical, problem-solving approach that the public wants.
A conservative analysis, “Restructuring Environmental Big Business,” by the Center for the Study of American Business at Washington University in St. Louis, attributes what it calls a public backlash not only to top-heavy bureaucracy but to the movement’s purported exaggeration of environmental dangers and the tendency of most groups to take on such a broad agenda that their mission is lost. “Policy makers and potential donors find it increasingly difficult to tell different organizations apart. Essentially, different environmental groups are offering potential supporters identical products,” according to the report. The report gives high marks to the Nature Conservancy, which has kept its focus on buying land for nature preserves.
All this dissection of the movement has led to some soul-searching. “The national groups, including Greenpeace, did become too large in the 1980s and did grow top-heavy,” says Barbara Dudley, executive director of Greenpeace. “The draw to legislative solutions was too seductive and took the large organizations that evolved from a mass movement too far away from their grassroots.”
“It’s fair to say the environmental movement is confronting extraordinary change, not just in terms of Newt [Gingrich], but far more fundamental and pervasive,” says Lynn Greenwalt, vice president of the National Wildlife Federation (NWF).
Some of the changes are administrative. In an effort to streamline organization, staffs are being cut and publications scaled back. For example, last year, Friends of the Earth pared back the frequency of its newsmagazine from monthly to bimonthly, and NWF is now considering contracting out its mail order business as a way to save money.
But other changes are substantive, with a renewed respect and attention being paid to the grassroots. “A lot of the action is at the state level and we’ll be shifting staff and resources there,” says Greenwalt. “The grassroots is not in Washington. There are plenty of people out there willing to work on their own behalf and for the environment and the human future. That means redeploying resources, money or ideas from here to there.”
The National Audubon Society, based in New York, is in the midst of a major strategic planning process, involving interviews with hundreds of members, staff and board members from around the country, as well as colleagues, political leaders and foundations. “We tried to glean from them some of their best ideas on how we should organize ourselves for the future,” says Tom Martin, Audubon’s chief operating officer.
Like NWF, Audubon plans to shift more attention to the organization’s 500 chapters. “We’re not going to affect Congress through insider lobbying. We have to do it in the home offices [of legislators]. The advocacy will move outside the Beltway,” says Martin.
But the very remedy proposed by many pundits for the environmental movement – to become more pragmatic and single-focused – is not likely to emerge from a grassroots strategy. For example, NWF plans to continue expanding its message beyond wildlife to include public health concerns. “It’s impossible to separate human health issues from those that affect other creatures,” says Greenwalt. “A toxin in the Great Lakes may be devastating to fish and ducks, but it’s also not good for people.”
As activists make clear in the interviews on the future of environmentalism, beginning on page 5, the trend is increasingly toward building coalitions and making connections between environmental and social justice concerns. “In the long run we’ll win the battle because we’re on the side of average Americans,” says environmental consultant Matt MacWilliams. “The other side has been able to marginalize us as elitist and extremist. So the biggest task is to make the environmental message relevant once again to the mainstream. We need to talk to them in terms of what really matters: the health and safety of their families and their communities.”
Whether you call it the mainstream or the grassroots, if national groups follow through on their commitment to communities, the movement could well recapture its former ardor. “Let me reveal a not so obscure secret,” says Greenpeace’s Dudley. “The grassroots environmental groups are a far sight more radical than the national groups.”
Local groups faced with a hazardous facility moving in next door are much less willing to compromise than are the Washington representatives of national groups trying to craft a piece of legislation. In addition, the initial anger and fear about a local site often spreads to other issues as well. “The environmental movement caved in on NAFTA (North American Free Trade Agreement),” says Sue Lynch, executive director of People Against Hazardous Landfill Sites (PAHLS), in Valparaiso, Indiana. “The local groups didn’t. We worked with our local unions and opposed NAFTA. If we cave in, we’re really letting the people down who are counting on us.”
In ways that even the best-intentioned national groups are unable to do, grassroots activism gives a movement its edge. As Lois Gibbs, director of Citizens Clearinghouse for Hazardous Waste, points out, “In most communities, people don’t think they can fight City Hall. We give people back a sense of self-worth and empowerment. We help them build up their self-confidence and say yes, you can.”
Ultimately it will be this self-empowerment that promises not only a healthy future for the environmental movement, but for democracy as well.
| Tags: environmentalism, politics and the environment, trends
OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
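As a rough illustration of that last step -- and only an illustration, since the published MRD defines its own drivers, question sets, and evaluation scales -- a driver-based gauge can be thought of as a weighted roll-up of per-driver judgments:

```c
/*
 * Toy illustration of rolling per-driver judgments up into a single
 * mission-level gauge. The driver names, probabilities, and weights are
 * invented for this sketch; they are not the SEI's actual MRD drivers
 * or its scoring procedure.
 */
#include <stdio.h>

struct driver_rating {
	const char *driver;          /* systemic factor being judged   */
	double p_success;            /* analyst's judgment, 0.0 - 1.0  */
	double weight;               /* relative influence on the goal */
};

int main(void)
{
	const struct driver_rating d[] = {
		{ "Objectives are realistic and stable",      0.80, 1.0 },
		{ "Plan and resources match the scope",       0.55, 1.5 },
		{ "Key technologies are sufficiently mature", 0.60, 1.0 },
		{ "Suppliers deliver to agreed interfaces",   0.40, 1.5 },
	};
	const int n = sizeof d / sizeof d[0];
	double acc = 0.0, wsum = 0.0;

	for (int i = 0; i < n; i++) {
		printf("%-45s %3.0f%%\n", d[i].driver, 100.0 * d[i].p_success);
		acc  += d[i].p_success * d[i].weight;
		wsum += d[i].weight;
	}
	printf("Overall confidence in meeting the mission: %3.0f%%\n",
	       100.0 * acc / wsum);
	return 0;
}
```

The value of the top-down framing is less in the arithmetic than in the fact that a handful of drivers, rather than hundreds of bottom-up risk statements, is what gets examined and argued about.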
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.
Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods
Intel Inside--Everything
Intel's chips have become so pervasive that they are the de facto standard in computers. Now the company is looking to include them in everything from cars to medical devices to handheld devices.

For Intel, this is a market that it barely seemed to even notice in the past. But with cellphone sales now surpassing the 1 billion mark and computing being done even by non-traditional computers, the company clearly sees the need for expanding beyond its traditional boundaries.

Forbes caught up with Pat Gelsinger, senior vice president and general manager of Intel's Enterprise Group, to talk about the new strategy.

Forbes: Intel's direction for decades was faster processing and more power. What's changed?

Pat Gelsinger: If I'm doing a Google map, I don't want the entire planet represented on an iPhone. That's an enormous database, and I need just one little piece of that. That's the quintessential cloud. There is a large aggregation of data or services on the back end, a small amount of data and bandwidth that you need to provide to it. Those applications are ideal for the iPhones of the future. If you look at the three basic elements of a computing resource, it's processing, storage and communications. But communications is the weakest of the three. Moore's Law applies directly to processing. Storage is growing faster than Moore's Law. Communications is growing significantly slower, so applications that demand high bandwidth never effectively use a cloud model.

Those are the three legs of the stool that you're always working with inside of computers, right?

Yes, and applications that rely on large data sets and small bandwidth are well suited to cloud computing. Those that require high bandwidth and high visualization or user interface are very bad for those environments. The other limiters for cloud are associated with things like security, privacy and identity. Having said that, cloud applications will be a key part of the consumer and enterprise experience. I suspect that enterprise data centers will look like virtualized data centers in the future. They end up being the intranet cloud because of the scalability of resources, load balancing and energy efficiency.

Threading of software delineates what goes where on a core. How does that change with more cores?

One of the cool features of Nehalem (Intel's code name for its next-generation general-purpose processor) is turbo mode. If you only have one thread to run, it gives all the power of the die to the core that's running that thread. If you have four threads to run, it spreads the power across all four cores. Dynamically this intelligent architecture adjusts from single thread to multithread and turns on and off portions of the die to accommodate that.

So basically this is an on-chip traffic cop?

Yes, you can think of it that way. It's like throwing a breaker switch to that portion of the die. This is one of the product design trade-offs we've introduced into Nehalem. Our basic tenet is that with every new generation, the core gets faster on single-threaded applications. And we scale more dramatically with core count.

But that performance increase on a single core doesn't grow as fast as in previous generations of single-core chips, right?

Correct. There isn't a big knob of frequency anymore. Frequency is growing modestly. We're not leaping from 100 megahertz to 500 MHz to 1 gigahertz. Now we're going from 3 GHz to 3.3 GHz in far more incremental gains in frequency.
My single-thread performance is at a more modest rate, and when I do turbo, I can do better. On top of that, my core count can scale. When you double the number of transistors, it's easy to double the number of cores. But we're not going to keep doubling the number of cores.

Is it going to be a rational application of cores vs. just adding more and more cores, which was the prediction five years ago?

Core counts in servers will continue to increase at a reasonably rapid pace. It won't double every generation. We've gone from two to four to eight. In the mainstream of clients, the core counts are far more modest. There will be two cores and some four cores, but that will be almost flat for the foreseeable future because the applications don't demand it. There isn't enough threading there to warrant a significant increase in core counts. The exception is that we are looking at Larrabee (Intel's upcoming high-performance server chip), which uses a visual computing co-processor to attack high-throughput workloads.

So something like ray-tracing graphics would work fine for that, right?

Absolutely. And it turns out that graphics rendering is extremely parallel. If you carve up your screen into blocks, and one core owns the upper right block, the next core owns the block next to that, and so on, and then you do some cleanup at the edges, it's essentially an infinitely parallel task to render.

Why is that?

With ray tracing you have a thread for every ray you're shooting through an object or scene. Many of the physics algorithms are extremely parallel, as well. As we look at Terascale applications, we find highly scalable algorithms in many areas of computing. That might be in financial services, oil and gas exploration, image modeling, and human interface and recognition.

What are Terascale applications?

It's this categorization of applications that need a teraflop of computing, a terabyte of memory and a terabit of communication bandwidth. As we get to this teraflop level of performance -- in some applications it may be half of that or 5 teraflops, which is the biggest supercomputer five years ago -- you can solve a lot of problems you wouldn't even think about before. There are gaming applications, but a real world example is volume rendering. It's a technique used by all MRI or CAT scan analysis. There is so much noise that you need to volume-render to pull out an image. This ends up being an extremely parallelizable application. The Mayo Clinic has done a paper with us. They can do things in real-time that were previously off-line tasks. A CAT scan becomes a real-time event.

So Intel is going after all the markets it can?

It's anything that computes. If there is a computer anywhere, that's an interesting market for us. Clearly we are targeting the price points of these embedded markets, as well, and we're bringing a huge software stack with us. Software development for embedded chips is going up three to four times. We have had the solution for years. We just didn't know how to package it.

See Also: The Many Cores Of Intel | Intel's Stimulus Plan | Intel's Unusual Executive Move
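The screen-carving Gelsinger describes above maps directly onto code: each tile of the frame is an independent work item, so more cores simply means more tiles in flight at once. The sketch below is a generic, hypothetical illustration of that decomposition using POSIX threads and a C11 atomic counter -- it is not Larrabee or Intel sample code, and the frame size, tile size, and worker count are arbitrary.

```c
/*
 * Tile-parallel rendering sketch: carve the frame into blocks and let each
 * core grab the next unclaimed block. Purely illustrative; shade_tile() is
 * a placeholder for the real rasterizer or ray tracer.
 */
#include <pthread.h>
#include <stdatomic.h>

#define FRAME_W 1920
#define FRAME_H 1080
#define TILE      64
#define TILES_X ((FRAME_W + TILE - 1) / TILE)
#define TILES_Y ((FRAME_H + TILE - 1) / TILE)
#define WORKERS    8                      /* one worker per core, say */

static atomic_int next_tile;              /* the entire shared "work queue" */

static void shade_tile(int x0, int y0, int x1, int y1)
{
	/* ... shade every pixel in this block ... */
	(void)x0; (void)y0; (void)x1; (void)y1;
}

static void *worker(void *arg)
{
	(void)arg;
	for (;;) {
		int t = atomic_fetch_add(&next_tile, 1);   /* claim the next block */
		if (t >= TILES_X * TILES_Y)
			return NULL;                       /* frame finished */
		int tx = t % TILES_X, ty = t / TILES_X;
		int x1 = (tx + 1) * TILE, y1 = (ty + 1) * TILE;
		shade_tile(tx * TILE, ty * TILE,
		           x1 > FRAME_W ? FRAME_W : x1,   /* "cleanup at the edges": */
		           y1 > FRAME_H ? FRAME_H : y1);  /* clamp the last row/column */
	}
}

int main(void)
{
	pthread_t pool[WORKERS];
	for (int i = 0; i < WORKERS; i++)
		pthread_create(&pool[i], NULL, worker, NULL);
	for (int i = 0; i < WORKERS; i++)
		pthread_join(pool[i], NULL);
	return 0;
}
```

Because the only shared state is a single counter, the decomposition scales with core count in exactly the way the interview describes.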
Sep 14, 2007 By Dave Phillips Loop-based music composition is the practice of sequencing audio samples to create the various parts of a musical work. A sample may contain only a single event such as a bass note or cymbal crash or it may contain a measured pattern of events such as a drum beat, a guitar chord progression, or even an entire piece of music. The former type is sometimes referred to as a "one-shot" sample, while a longer sampled pattern is often simply called a loop.
A loop is usually created at a specific tempo in a precise time period (musical beats and measures) for exact concatenation with other loops. Sequencing a series of timed loops creates realistic tracks that can convince a listener that he or she is listening to a part specifically performed for the piece.
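The phrase "exact concatenation" hides some unforgiving arithmetic: a loop has to be trimmed to precisely the number of samples implied by its tempo, beat count, and sample rate, or clicks and timing drift creep in when loops are chained. The figures below are arbitrary examples, but the formula is the whole trick:

```c
/* Length of a seamless loop: an example two-bar (8-beat) pattern at
 * 120 BPM, recorded at 44.1 kHz. The numbers are arbitrary examples. */
#include <stdio.h>

int main(void)
{
	double bpm = 120.0, beats = 8.0, sample_rate = 44100.0;

	double seconds = beats * 60.0 / bpm;       /* 4.000 s          */
	double samples = seconds * sample_rate;    /* 176400 samples   */

	printf("Loop length: %.3f s = %.0f samples\n", seconds, samples);
	return 0;
}
```

Cut the file even a few hundred samples long and every pass through the loop pushes the downbeat a little later, which is why loop libraries advertise their tempos and bar lengths so precisely.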
Loop sequencing offers certain advantages over MIDI sequences. They may contain performance characteristics that are difficult or even impossible to achieve with MIDI instruments, such as bass slaps and guitar fretboard tapping techniques. As we shall see, loops can be used in many non-obvious ways, thanks to the tools and utilities available for manipulating sampled audio. This two-part article will explore some of the techniques and software used for composing music with audio loops. We'll look at some well-known Linux audio applications that include powerful tools for loop manipulation, and I'll also introduce some utilities specifically designed to help you create and edit your own seamless loops.
Some History
Musical looping is not a new technique. In the parlance of classical musicians, a loop is an ostinato, a musical phrase that is repeated until the composer decides to move on. Sequencing loops is likewise not a novel concept, at least not since Vivaldi's time, nor is the use of existing sound recordings as base material for a new musical work. Composers of musique concrète employed the possibilities of the new tape recorder, establishing such procedures as audio splicing, reversed sound, playback rate change, and many other techniques still common in the modern computer-based studio. As music technology advanced the tape recorder was eventually displaced by the hard-disk recording system, extending the possibilities inherited from tape-based systems and giving birth to exciting new techniques unique to the computer.
The popular music industry was changed forever with the introduction of sample-based song production (see the Wikipedia entry on Sampling (music) for a good summary). Despite the legal complications from copyright holders and the moral outrage from musicians who objected to being replaced by machines, the use of audio samples flourished and has a become common practice in the modern recording studio.
Early use of sampled sound was usually momentary, providing snippets for "hits" or other incidental sounds, but it quickly became obvious that sampled sounds could provide the base material for an entire composition. Now, loop-based composition has come of age and is a mainstay practice in the modern recording world. It has also given rise to a thriving industry of purveyors of extensive collections of sampled loops of every instrument in all musical styles.
The most obvious advantage of using audio loops is the sound. A wide range of performance techniques can not be rendered realistically with MIDI instruments, a problem solved with audio samples. Thanks to evolved practice, libraries of loops are now available with accurate timing, matched timbres, and precise key and tuning information, making it easy to create musical parts by sequencing the loops.
Loops are a particular blessing for those of us with home studios too small for recording drums. As a former drummer, I know how to write a technically realistic MIDI drum part, but alas, there are so many percussion performance techniques unavailable via MIDI. Despite my best efforts at tempo, timing, and velocity manipulation I still think my MIDI drum parts sound "not-so-live". Recently I've been using drum loops from Beta Monkey that include such common (and MIDI-unfriendly) techniques as stick drags, press rolls, hi-hat cutoffs, and pitch variation due to the striking area. I still like writing my own parts, but I'll quickly admit that the sample loops sound more realistic than my best MIDI tracks.
MIDI appears to have a singular advantage over the use of sampled sounds: A MIDI track can be transposed without altering its tempo, or its tempo can be changed without altering its pitch. However, contemporary audio loop tools include high-quality time and pitch compression/expansion with various options for changing or maintaining duration and intonation. Samples can be tuned accurately with these tools, without distorting the sound's timbre, and a loop can be lengthened or shortened to any precise measurement (a.k.a. beat-matching). Thus, if I have two loops pitched a semitone apart I can tune one to the other without altering its length at all. If one loop is shorter I can stretch its durations to accurately match the length and beat-patterns of another loop.
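The stretching and retuning described above can be sketched in a few lines of Python with the librosa library — a modern tool used here purely for illustration, not one of the Linux applications covered in this article; the file names and semitone shift are placeholders.

```python
# Sketch: beat-matching one loop to another, then retuning it by a semitone.
# Assumes librosa and soundfile are installed; file names are placeholders.
import librosa
import soundfile as sf

y_src, sr = librosa.load("drum_loop.wav", sr=None)    # loop to adjust
y_ref, _ = librosa.load("reference_loop.wav", sr=sr)  # loop whose length we want to match

# time_stretch rate > 1.0 shortens the loop, < 1.0 lengthens it.
rate = len(y_src) / len(y_ref)
y_matched = librosa.effects.time_stretch(y_src, rate=rate)

# Retune down a semitone without changing the new length.
y_matched = librosa.effects.pitch_shift(y_matched, sr=sr, n_steps=-1)

sf.write("matched_loop.wav", y_matched, sr)
```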
Disadvantages And Common Problems

Loop-based composition does have its own difficulties. If you build your own sample sets you may discover that audio loops which play back seamlessly are not trivial to create: the loop points must be edited precisely or the result will click, pop, or drift against the beat. | 计算机 |
2014-23/2662/en_head.json.gz/19754 | Business Intelligence Tools and Services
Vendors urged to be more transparent over software audits
In the face of increasing software audits, vendors have been advised to establish formal programmes to avoid damaging relationships with customers.

The amount of activity by vendors keen to catch any lost revenue as a result of unlicensed or pirate products has risen sharply in the last two years of recession, but according to a survey from Ernst & Young, both users and suppliers have to make changes to how they stay compliant and enforce policies.

"The survey confirms that software audits are increasingly becoming a way of life for both customers and vendors. Faced with the challenge of doing more audits, vendors need to establish formal programmes which are conducted efficiently and in a transparent manner which does not damage the relationship," stated the report.

But there was also some advice for users, who the survey revealed continue to have a patchy response to tracking their software assets.

"Users need to get a grip on their software estate so as to minimise the time and resources dedicated to audits and any subsequent penalties for non-compliance," it added.

The costs of failing to keep track of software assets have already been highlighted this week with yesterday's settlement with the BSA, which saw a labelling firm pay tens of thousands for failing to have the necessary Microsoft licenses.

Gartner recently revealed its customers have had more software audits, and compliance specialists FAST Ltd have also noticed an increase in vendor activity.

The Ernst & Young report, Software Compliance without tears, pointed out that there was an increasing opportunity for resellers to step in and help users handle the pressure around the audit process.

"Audits represent a significant and growing cost to both parties, and some users proactively set up internal tracking processes, as well as bringing in external parties to carry out independent audits," it stated.

Over the past couple of years the reaction from the industry has been to promote Software Asset Management (SAM) as a way for users to track their compliance requirements, but increasingly that function is being added to a broader range of services.

Ian McEwan, vice president of EMEA at Frontrange, said that the SAM specialist was extending its range into the SaaS market with the express intention of offering more services, but also to provide customers with more clarity around licenses.

"There have been shifts in the last ten years adapting to the customer needs and it is now a flexible model because people don't have the skills and the knowledge to manage it themselves," he said. Related Topics:
Business Intelligence Tools and Services,
E-mail threats and viruses worsen in 2002
SAP bolstered by software and services growth in Q1
Vendors increase audits and extend reasons for non-compliance
How to handle a software audit
Success Lies in the Cloud - But Which Cloud? | 计算机 |
2014-23/2662/en_head.json.gz/20347 | Forums › The Firm
"Patent Pending?" iA's Militant Stance on Syntax Control in Writer Pro [UPDATED]
Posted by weswanders
12/26/13 - See update and postscript at the end of this article regarding iA's statement that it will abandon its patent applications. Also, please see The Verge's writeup on Writer Pro if you'd like an introduction to the software and the Syntax Control feature.

Oliver Reichenstein, one of iA's principals (he appears in the Writer Pro video posted to The Verge), has made some cryptic and vaguely threatening statements to developers about Writer Pro's new Syntax Control feature:
@MarkedApp @JedMadsen …mostly, I wouldn't suggest at this point to rip off Writer Pro's core innovation. We're well prepared there. :-)
— Oliver Reichenstein (@reichenstein) December 19, 2013
@JedMadsen Thanks, Jed. It looks obvious now, but it was a tough fight; so tough, that I'm ready to go into another fight to protect it. :)
iA also uses the (TM) symbol when mentioning Syntax Control on the Writer Pro website. And in a recent blog post, Mr. Reichenstein included a short blurb on Syntax Control, saying:
Syntax Control is a solid innovation, one we’ve been working on for more than four years. As with every serious design, once you have seen how it works, you can figure out cheap ways to copy it. We’ve trademarked and obtained patent pending for Syntax Control. If you want it in your text editor, you can get a license from us. It’s going to be a fair deal.
This has miffed other writing app developers like The Soulmen, makers of Ulysses III and Daedalus Touch.
@MacSusana I don't think it's a meaningful addition. I'm just offended by the threats. @JedMadsen @MarkedApp @ulyssesapp
— Marcus | The Soulmen (@the_soulmen) December 21, 2013
So, does iA actually have the exclusive right to the idea of Syntax Control, putting unsuspecting future developers on a collision course with iA? It appears the answer is no. What’s more, iA’s claims of beating everyone to the punch appear to be disingenuous at best.
This may be different if iA had designed its syntax recognition from scratch. But in fact, the heavy lifting is already baked into Apple’s developer platform. Since iOS 5 and OS X 10.7, Apple has provided a class called NSLinguisticTagger that segments natural-language text and labels the text with various bits of information, including parts of speech. NSHipster wrote a quick blurb about the class back in 2012, and other apps like Phraseology have already showcased similar syntax-parsing technology.
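To give a sense of how little code the underlying capability requires, here is a rough Python sketch using NLTK's part-of-speech tagger — not Apple's NSLinguisticTagger, and not iA's implementation — that tags each word and emphasizes a single word class, which is the essence of a Syntax Control-style view. The sentence and styling are placeholders.

```python
# Rough stand-in for part-of-speech-driven highlighting (not Apple's NSLinguisticTagger).
import nltk
# Depending on the NLTK version you may first need:
# nltk.download("averaged_perceptron_tagger")   # or "averaged_perceptron_tagger_eng"

tokens = "The quick brown fox jumps over the lazy dog".split()
tagged = nltk.pos_tag(tokens)                   # e.g. [('The', 'DT'), ('quick', 'JJ'), ...]

# Emphasize adjectives and dim everything else, as a syntax-aware editor might.
rendered = " ".join(w.upper() if t.startswith("JJ") else w.lower() for w, t in tagged)
print(rendered)                                 # e.g. "the QUICK BROWN fox jumps over the LAZY dog"
```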
Trademark?
The trademark databases for the U.S., E.U., and the International Register all come up empty when searching for "Syntax Control." And unfortunately for iA, it’s very unlikely that iA can obtain a trademark registration for "Syntax Control" because the name almost certainly fails to function as a trademark. In the United States and the EU, trademarks must be "distinctive" to be registrable. The words used must not merely describe the attached goods or services. This is why, for example, many companies can use the name "Raisin Bran" for breakfast cereals — the name describes the goods (the cereal) and does not distinguish the goods’ source.
iA isn’t out of options. Rights holders are not required to register their trademarks (though there are numerous benefits). In the U.S., iA can obtain what’s known as a "Supplemental Registration" and can eventually claim to have "acquired distinctiveness" in an otherwise-descriptive trademark, but in the U.S. that requires proof that the mark has been used exclusively by iA for five years.
Patent?
Same story with a patent search — application searches in the U.S. and Europe did not show any pending applications from Information Architects or Mr. Reichenstein. iA does have a pending patent application for "Focus Mode," which was a feature of the original iA Writer. But there’s no evidence that iA has even applied for a patent for Syntax Control aside from a throwaway statement that iA has "obtained patent pending."
And what would be the basis for the patent in light of Apple providing the underlying functionality? iA hasn’t said.
Mr. Reichenstein made this tweet one day after I made this post, with a screenshot of a receipt for a provisional patent application:
Syntax Control is the game changer for text editing we thought it would be. Glad we claimed it in time. pic.twitter.com/59bhBVlzYg
In the United States, a typical patent application isn't published until 18 months after it is filed. And what's pictured here is a provisional patent application, which is never published for public viewing or examined by the USPTO. Provisional patent applications require the applicant to file a full, "non-provisional" patent application within one year. If the full patent application is approved, the applicant has patent rights that date back to the filing date of the provisional application. To iA's credit, a provisional patent application does entitle the inventor to say the idea is "Patent Pending." But since the application is never examined or published, this doesn't mean iA has protected its idea, or that iA is entitled to a patent at all.
Now, will iA actually get a patent for Syntax Control? Without knowing what ideas are actually claimed in the patent application, it is hard to make a definitive judgment. However, as detailed in the comments below, it appears Apple showcased something very, very similar to Syntax Control at WWDC 2011. This would mean iA's patent application does not have the novelty required to be patentable.
Meanwhile, iA's patent application for "Focus Mode" in the original iA Writer was examined and given a non-final rejection by the U.S. Patent & Trademark Office on November 5, 2013. The rejection can be viewed here. iA has until February 5, 2014 to respond to the patent examiner's objections.
Copyright

As it happens, iA left out the third prong of intellectual property — and this one actually applies to Syntax Control. In the United States, developers have copyright protection for their source code and, like trademarks, registration isn't required (though it is required to bring a copyright infringement lawsuit). This doesn't protect the concept of Syntax Control — only the actual underlying source code.
iA has a compelling new app that may ultimately be quite successful, particularly given its price point. But as more apps begin to use iOS's syntax-parsing capabilities, it will be interesting to see if iA follows through on its grandiose statements and tries to prevent others from developing "ripoffs" of Syntax Control.
In the meantime, iA’s militant stance on its standout feature doesn’t appear to be grounded in fact. This may well alienate other developers and, ultimately, customers.
UPDATE 12/26/13
iA has tweeted that it will abandon its patent applications. Unless there are also other unpublished patent applications out there, I will assume this refers to the non-provisional application for Focus Mode and the provisional patent application for Syntax Control.
We will drop our patents pending. Thank you @dhh for clearing our minds.
— iA Inc. (@iA) December 27, 2013
iA has also amended its blog post (particularly the excerpt discussed above) to read:
Syntax Control — distinguishing a specific aspect of the text to assist in editing — is a solid innovation, one we’ve been working on for more than four years. As with every serious design, once you have seen how it works and how effective it is, it seems obvious, but it was a long road to get there. We’ve trademarked and obtained patent pending for Syntax Control. If you want it in your text editor, you can get a license from us. It’s going to be a fair deal. If you want to use it in your text editor, just give us credit for introducing it.
Note also that the post further clarifies what Syntax Control does -- i.e., "distinguish[es] a specific aspect of the text to assist in editing."
Finally, iA had some choice words for yours truly on Twitter. I do grant that I should have clarified my words regarding a patent application for Syntax Control -- that is, provisional patent applications are never published for public viewing, so an application search cannot be conclusive. I hope that oversight did not undermine the aim of this post. I am a fan of iA's software, and I was disappointed and confused by its marketing for Writer Pro and its principal's public comments. I therefore wanted to document the reaction to iA's marketing for Writer Pro and provide some background research for the lay reader who may be unfamiliar with intellectual property.
| 计算机 |
2014-23/2662/en_head.json.gz/20678 | Ars Technica sits down with Scott Collins from Mozilla.org
During this past Spring's Penguicon, Ars had a chance to sit down with Mozilla …
by Jorge O. Castro
- Jun 16, 2004 3:50 am UTC
Past Mozilla mistakes
Ars: You mention mistakes made by Microsoft. What do you feel are mistakes that Mozilla has made in the past?
One: There was a fundamental mistake made by Netscape management, twice, which cost us a release at the most inopportune time. I think we can attribute a great deal of our market share loss to this mistake that was pretty much based completely on lies from one executive, who has since left the company (and left very rich) and who was an impediment to everything that we did. He was an awful person, and it is completely on him that we missed a release. We had a "Netscape 5" that was within weeks of being ready to go, and this person said that we needed to ship something based on Gecko within 6 months instead. Every single engineer in the company told management "No, it will be two years at least before we ship something based on Gecko." Management agreed with the engineers in order to get 5.0 out.a
Three months later they came back and said "We've changed our mind, this other executive has convinced us, except now instead of six months, you need to do it in three months." Well, you can't put 50 pounds of [crap] in a ten pound bag, it took two years. And we didn't get out a 5.0, and that cost of us everything, it was the biggest mistake ever, and I put it all on the feet of this one individual, whom I will not name.
Two: We made a version of COM, called XPCOM, a fundamental piece of every component of every part of the software. We took COM all the way down to the lowest levels of the system, and I think that XPCOM is a fantastic useful tool. We had great and important places to use it, but we used it too deeply, we allowed it to influence our memory model across the board, and there continue to be costs associated with that which I don't think we need to be paying. I feel bad that I let it go that deep when I continually fought against it, but I am one of the perpetrators ? I wrote some of the machinery that made it easy to use COM, and that allowed people to go deeper and deeper, which we knew was a mistake.
I've been saying this for years, and people tend to think that I'm damning COM unconditionally, and I'm not. COM is very important to us, and it's a foundation of our scriptability layer, and scriptability is fundamental to how our application works. But I don't think every object should be a COM object. We are finally moving away from using COM for everything, and maybe a synergy with Mono will help us get to the right level of differentiation between objects. This is a deep engineering thing, but I believe that fundamentally that we took the COM metaphor too deep and we paid a price for that: we could have more performance sooner earlier on if we had recognized and stuck to limits in our use of COM.
Three: I was the head of the team that fought hard with Netscape management to get a system of XML to define the user interface. I named it the XML User Interface Language, or XUL (pronounced zool). David Hyatt is the primary implementer of it, and one of the ideas that we had is that we would have a cross-platform interface, but that you specify native controls with XUL. It turned out that specifying native controls with XUL was hard. They were limited, and hard to write, and we waited a long time. People kept telling us that Mozilla would never be good until we had the native controls, and we knew they were wrong, it is good without native controls, but boy, people really really want their native controls. And they really rewarded us when we gave them native controls. When we gave them things like Firefox, which looks native on the platforms that its on, and Camino, which is native in OS X. They thought it was fantastic.
We should have used native controls as soon as it was possible, despite of the fact that they're harder to write, because we ended up going that way anyway.
Ars: Can you name three things that you guys did well?
One: We promised that we would get the code out as open source on March 31, 1998. We moved heaven and earth to do it, and we got it out on time. It was no slam dunk, it was soul sapping, but we kept our promise ? it was very important that we kept our promise. [The PBS documentary Code Rush documents this entire period of Netscape's history, along with the birth of Mozilla.org. -ed]
Two: We got to a 1.0. We got there not on a time schedule, but by a set of things that we needed to do to get a 1.0 out.
Three: I think that we managed to let go of our preconceived ideas of Mozilla. It's only because we're willing to let go of the past that we're able to move on to things like Firefox, Thunderbird, and Camino. I think that willingness to jump from the thing that we had, that courage to move to the untested new thing is the third thing we did right.
| 计算机 |
2014-23/2662/en_head.json.gz/20802 | Forums › Coffeehouse › Microsoft still in denial phase over W8.. possible "relaunch" in February
evildictaitor
2 hours ago, bondsbw wrote:
> In case you thought otherwise, I meant WinRT as in Modern apps, not as in the ARM OS. If you are talking about Windows 8 (Intel), then you seem to be claiming that Microsoft has purposely chosen to target one OS at two different audiences, the desktop audience and the tablet audience, with no intention to bring them together for a cohesive user experience.

That's exactly what I'm saying. Windows8 Metro apps are not aimed at content creators, and according to most of the people I've met at MS, are explicitly not designed to remove desktop applications from the equation.

Metro apps are just a new way of doing things for the post-app world we live in. Users want to be able to download crappy apps without fear of pwnage and to be able to use them in a swishy way with centralized ownership, content controls and design cohesiveness; to put it another way, people do actually want iPhone/Android apps on a PC, and that's what Metro apps give them.

But Microsoft isn't stupid. They fully realize that some applications are never a good fit for that model. Visual Studio (which is used by at least 50% of the employees of Microsoft) is a clear example of an app that would never really fit into Metro. Microsoft also realize that their entire business is founded on binary compatibility with older versions of the OS. That's why most applications just work on Windows8 despite having been written and compiled on machines before Windows8 even existed.

Perhaps it's not obvious to people who've never been to Redmond and seen the work they put into app-compat, but the simple fact of life is that the desktop isn't going away, and it never was going to. Metro might be front-and-centre in the adverts and in the mind of content-consumers (which let's face it, is most home users), but to content-creators, the Desktop is still critical, and will continue to remain and evolve with Windows for the foreseeable future. | 计算机 |
2014-23/2662/en_head.json.gz/21060 | Cloud Expo: Article
Network Neutrality, Victory or Disappointment? | Part 1
A January 14th ruling from the United States Federal Court of Appeals has stirred the pot once again
By Esmeralda Swartz
Despite the fact that the net neutrality debate is a discussion that has been ongoing for years, a January 14th ruling from the United States Federal Court of Appeals has stirred the pot once again. The court's decision has created a renewed upsurge in comments, opinions and future-gazing, with debate squarely landing in two very different camps. And, as is to be expected, there is actually very little neutrality.
One is left to ask if it is in fact possible to look at this topic objectively, without taking sides from the outset. Perhaps the passage of time has helped to put the topic in perspective. It may be that the Internet itself, which plays such a central role in our daily lives, has achieved a sort of self-defining momentum that will in due course make some of the net neutrality debate academic.
In this blog and follow-up posts, I'll try to keep the discussion of the recent court decision short and to the point. The actual decision document is here for you to read if you have a couple of hours to spare, and it's worth reading closely to get the true sense of what this decision is all about. Strangely enough, it's not really about net neutrality at all.
The Appeals Court decision is really all about the FCC's Open Internet Order. Essentially, it tears down the Open Internet Order's rules that prohibit Internet service providers (ISPs) from site blocking and from providing preferential service to chosen edge providers. It does not overrule the transparency requirement, which says that ISPs must disclose their traffic management policies. If this federal court decision stands, it essentially means ISPs are allowed to block sites and provide preferential service to some edge providers, but if they do so, they must tell us all what they are doing.
The basis for this decision is important. The judges did not closely examine the pros and cons of Internet openness because that was not what the case was about. The complaint against the FCC is that it overstepped its jurisdiction and that it was not in fact legally entitled to make these rulings. The judges for the most part agreed with the complaint and struck down two significant rules that, in the eyes of the FCC, sought to preserve the "continued freedom and openness of the Internet."
The Wall Street Journal proclaims this decision as a "Victory for the Unfettered Internet." The New York Times, in contrast, describes this as a "Disappointing Internet Decision" on the grounds that it "could undermine the open nature of the Internet." Most vocal opinions are divided along these lines. They are all reading the same decision, but one group believes this will make the Internet more unfettered and open while the other believes the opposite.
Advocates on each side assert that they uphold the principle of an open and unfettered Internet, but their interpretations of what "open and unfettered" means in practice lead (or drive) them to conflicting conclusions. Since different takes on this concept help drive the debate, let's look at those perspectives to see what light they cast on the outcome.
Some people regard "open and unfettered" as meaning that governments should leave the Internet alone. That means no government censorship, no blocking of sites and no monitoring user activity. The traffic must flow unimpeded. Most participants in the U.S. debate would agree on this, so perhaps some meeting of the minds is possible? Not likely, because there is also an opinion that "open and unfettered" means no government regulation either. That means no control of pricing, no rules that specify in any way how ISPs deliver their parts of this immense global cooperative enterprise, and certainly no treating Internet access like a phone service.
To some others, "open and unfettered" means that the corporations that provide Internet services should themselves play by these rules. In other words, they too, just like governments, should refrain from censorship, blocking and tracking what users do (at least without consent of each user). If those companies do not allow traffic to flow unimpeded, then the Internet is in reality not open and unfettered.
Let's be clear that not everybody views a completely open and unfettered Internet as a good idea. Various governments around the world limit Internet access with various forms of site blocking, censorship, user tracking and traffic interception. Presumably the officials and politicians responsible for this believe that their individual varieties of fettering are a good thing, overall.
We also know that some Internet service providers engage in, or have engaged in, site blocking, port blocking and scrutiny of user activity, again presumably because decision makers in those companies and organizations see benefits to doing so.
Where do the various parties fall in the spectrum as a result of the recent ruling? And how will it impact the existing system? I'll get into that in Part II. In the meantime, check out our other thoughts on the latest technology trends for the coming year.

Published January 27, 2014. Copyright © 2014 SYS-CON Media, Inc. — All Rights Reserved.
Network Neutrality, Victory or Disappointment? | Part 2 — Carriers, Apple, Google, Microsoft... Fighting Over Slices of the Pie
Net Neutrality, Internet of Things and Google: Three Forces Colliding
More Stories By Esmeralda Swartz
Esmeralda Swartz is CMO of MetraTech. She has spent 15 years as a marketing, product management, and business development technology executive bringing disruptive technologies and companies to market. Esmeralda is responsible for go-to-market strategy and execution, product marketing, product management, business development and partner programs. Prior to MetraTech, Esmeralda was co-founder, Vice President of Marketing and Business Development at Lightwolf Technologies, a big data management startup. Esmeralda was previously co-founder and Senior Vice President of Marketing and Business Development of Soapstone Networks, a developer of OSS software, now part of Extreme Networks (Nasdaq:EXTR). At Avici Systems (Nasdaq:AVCI), Esmeralda was Vice President of Marketing for the networking pioneer from startup through its successful IPO. Early in her career, she was a Director at IDC, where she led the network consulting practice and worked with startup and leading software and hardware companies, and Wall Street clients on product and market strategies. Esmeralda holds a Bachelor of Science with a concentration in Marketing and International Business from Northeastern University.
You can view her other blogs at www.metratech.com/blog. | 计算机 |
2014-23/2662/en_head.json.gz/22477 | U.S. Leads Multi-National Action Against “Gameover Zeus” Botnet and “Cryptolocker” Ransomware, Charges Botnet Administrator
WASHINGTON, D.C. – The Justice Department today announced a multi-national effort to disrupt the Gameover Zeus Botnet – a global network of infected victim computers used by cyber criminals to steal millions of dollars from businesses and consumers – and unsealed criminal charges in Pittsburgh, Pennsylvania, and Omaha, Nebraska, against an administrator of the botnet. In a separate action, U.S. and foreign law enforcement officials worked together to seize computer servers central to the malicious software or “malware” known as Cryptolocker, a form of “ransomware” that encrypts the files on victims’ computers until they pay a ransom. Deputy Attorney General James M. Cole, Assistant Attorney General Leslie R. Caldwell of the Justice Department’s Criminal Division, FBI Executive Assistant Director Robert Anderson Jr., U.S. Attorney David J. Hickton of the Western District of Pennsylvania, U.S. Attorney Deborah R. Gilg of the District of Nebraska, and Department of Homeland Security’s (DHS) Deputy Under Secretary Dr. Phyllis Schneck made the announcement.
Victims of Gameover Zeus may use the following website created by DHS’s Computer Emergency Readiness Team (US-CERT) for assistance in removing the malware: https://www.us-cert.gov/gameoverzeus.
“This operation disrupted a global botnet that had stolen millions from businesses and consumers as well as a complex ransomware scheme that secretly encrypted hard drives and then demanded payments for giving users access to their own files and data,” said Deputy Attorney General Cole. “We succeeded in disabling Gameover Zeus and Cryptolocker only because we blended innovative legal and technical tactics with traditional law enforcement tools and developed strong working relationships with private industry experts and law enforcement counterparts in more than 10 countries around the world.”
“These schemes were highly sophisticated and immensely lucrative, and the cyber criminals did not make them easy to reach or disrupt,” said Assistant Attorney General Caldwell. “But under the leadership of the Justice Department, U.S. law enforcement, foreign partners in more than 10 different countries and numerous private sector partners joined together to disrupt both these schemes. Through these court-authorized operations, we have started to repair the damage the cyber criminals have caused over the past few years, we are helping victims regain control of their own computers, and we are protecting future potential victims from attack.”
“Gameover Zeus is the most sophisticated botnet the FBI and our allies have ever attempted to disrupt,” said FBI Executive Assistant Director Anderson. “The efforts announced today are a direct result of the effective relationships we have with our partners in the private sector, international law enforcement, and within the U.S. government.”
“The borderless, insidious nature of computer hacking and cybertheft requires us to be bold and imaginative,” said U.S. Attorney Hickton. “We take this action on behalf of hundreds of thousands of computer users who were unwittingly infected and victimized.”
“The sophisticated computer malware targeting of U.S. victims by a global criminal enterprise demonstrates the grave threat of cybercrime to our citizens,” said U.S. Attorney Gilg. “We are grateful for the outstanding collaboration of our international and U.S. law enforcement partners in this successful investigation.”
“The FBI has demonstrated great leadership in continuing to help combat cyber crime, and our international and private sector partners have made enormous contributions as well,” said Deputy Under Secretary Schneck. “This collective effort reflects our ‘whole-of-government’ approach to cybersecurity. DHS is proud to support our partners in helping to identify compromised computers, sharing that information rapidly, and developing useful information and mitigation strategies to help the owners of hacked systems.”
Gameover Zeus Administrator Charged
A federal grand jury in Pittsburgh unsealed a 14-count indictment against Evgeniy Mikhailovich Bogachev, 30, of Anapa, Russian Federation, charging him with conspiracy, computer hacking, wire fraud, bank fraud and money laundering in connection with his alleged role as an administrator of the Gameover Zeus botnet. Bogachev was also charged by criminal complaint in Omaha with conspiracy to commit bank fraud related to his alleged involvement in the operation of a prior variant of Zeus malware known as “Jabber Zeus.” In a separate civil injunction application filed by the United States in federal court in Pittsburgh, Bogachev is identified as a leader of a tightly knit gang of cyber criminals based in Russia and Ukraine that is responsible for the development and operation of both the Gameover Zeus and Cryptolocker schemes. An investigation led in Washington, D.C., identified the Gameover Zeus network as a common distribution mechanism for Cryptolocker. Unsolicited emails containing an infected file purporting to be a voicemail or shipping confirmation are also widely used to distribute Cryptolocker. When opened, those attachments infect victims’ computers. Bogachev is alleged in the civil filing to be an administrator of both Gameover Zeus and Cryptolocker. The injunction filing further alleges that Bogachev is linked to the well-known online nicknames “Slavik” and “Pollingsoon,” among others. The criminal complaint filed in Omaha alleges that Bogachev also used “Lucky12345,” a well-known online moniker previously the subject of criminal charges in September 2012 that were unsealed in Omaha on April 11, 2014. Disruption of Gameover Zeus Botnet
Gameover Zeus, also known as “Peer-to-Peer Zeus,” is an extremely sophisticated type of malware designed to steal banking and other credentials from the computers it infects. Unknown to their rightful owners, the infected computers also secretly become part of a global network of compromised computers known as a “botnet,” a powerful online tool that cyber criminals can use for numerous criminal purposes besides stealing confidential information from the infected machines themselves. Gameover Zeus, which first emerged around September 2011, is the latest version of Zeus malware that began appearing at least as early as 2007. Gameover Zeus’s decentralized, peer-to-peer structure differentiates it from earlier Zeus variants. Security researchers estimate that between 500,000 and 1 million computers worldwide are infected with Gameover Zeus, and that approximately 25 percent of the infected computers are located in the United States. The principal purpose of the botnet is to capture banking credentials from infected computers. Those credentials are then used to initiate or re-direct wire transfers to accounts overseas that are controlled by cyber criminals. The FBI estimates that Gameover Zeus is responsible for more than $100 million in losses. The Gameover Zeus botnet operates silently on victim computers by directing those computers to reach out to receive commands from other computers in the botnet and to funnel stolen banking credentials back to the criminals who control the botnet. For this reason, in addition to the criminal charges announced today, the United States obtained civil and criminal court orders in federal court in Pittsburgh authorizing measures to redirect the automated requests by victim computers for additional instructions away from the criminal operators to substitute servers established pursuant to court order. The order authorizes the FBI to obtain the Internet Protocol addresses of the victim computers reaching out to the substitute servers and to provide that information to US-CERT to distribute to other countries’ CERTS and private industry to assist victims in removing the Gameover Zeus malware from their computers. At no point during the operation did the FBI or law enforcement access the content of any of the victims' computers or electronic communications. Besides the United States, law enforcement from the Australian Federal Police; the National Police of the Netherlands National High Tech Crime Unit; European Cybercrime Centre (EC3); Germany’s Bundeskriminalamt; France’s Police Judiciare; Italy’s Polizia Postale e delle Comunicazioni; Japan’s National Police Agency; Luxembourg’s Police Grand Ducale; New Zealand Police; the Royal Canadian Mounted Police; Ukraine’s Ministry of Internal Affairs – Division for Combating Cyber Crime; and the United Kingdom’s National Crime Agency participated in the operation. The Defense Criminal Investigative Service of the U.S. Department of Defense also participated in the investigation.
Invaluable technical assistance was provided by Dell SecureWorks and CrowdStrike. Numerous other companies also provided assistance, including facilitating efforts by victims to remediate the damage to their computers inflicted by Gameover Zeus. These companies include Microsoft Corporation, Abuse.ch, Afilias, F-Secure, Level 3 Communications, McAfee, Neustar, Shadowserver, Anubis Networks and Symantec.
The DHS National Cybersecurity and Communications Integration Center (NCCIC), which houses the US-CERT, plays a key role in triaging and collaboratively responding to the threat by providing technical assistance to information system operators, disseminating timely mitigation strategies to known victims, and sharing actionable information to the broader community to help prevent further infections.

Disruption of Cryptolocker
In addition to the disruption operation against Gameover Zeus, the Justice Department led a separate multi-national action to disrupt the malware known as Cryptolocker (sometimes written as “CryptoLocker”), which began appearing about September 2013 and is also a highly sophisticated malware that uses cryptographic key pairs to encrypt the computer files of its victims. Victims are forced to pay hundreds of dollars and often as much as $700 or more to receive the key necessary to unlock their files. If the victim does not pay the ransom, it is impossible to recover their files.
Security researchers estimate that, as of April 2014, Cryptolocker had infected more than 234,000 computers, with approximately half of those in the United States. One estimate indicates that more than $27 million in ransom payments were made in just the first two months since Cryptolocker emerged.
The law enforcement actions against Cryptolocker are the result of an ongoing criminal investigation by the FBI’s Washington Field Office, in coordination with law enforcement counterparts from Canada, Germany, Luxembourg, the Netherlands, United Kingdom and Ukraine. Companies such as Dell SecureWorks and Deloitte Cyber Risk Services also assisted in the operation against Cryptolocker, as did Carnegie Mellon University and the Georgia Institute of Technology (Georgia Tech). The joint effort aided the FBI in identifying and seizing computer servers acting as command and control hubs for the Cryptolocker malware. The FBI’s Omaha and Pittsburgh Field Offices led both malware disruptions and conducted the investigation of Bogachev. The prosecution in Pittsburgh is being handled by Assistant U.S. Attorney Shardul Desai of the Western District of Pennsylvania, and the prosecution in Omaha by Trial Attorney William A. Hall of the Criminal Division’s Computer Crime and Intellectual Property Section (CCIPS) and Assistant U.S. Attorney Steven Russell of the District of Nebraska. The civil action to disrupt the Gameover Zeus botnet and Cryptolocker malware is led by Trial Attorneys Ethan Arenson and David Aaron of CCIPS and Assistant U.S. Attorney Michael A. Comber of the Western District of Pennsylvania. The Criminal Division’s Office of International Affairs provided significant assistance throughout the criminal and civil investigations.
The details contained in the indictment, criminal complaint and related pleadings are merely accusations, and the defendant is presumed innocent unless and until proven guilty.
Anyone claiming an interest in any of the property seized or actions enjoined pursuant to the court orders described in this release is advised to visit the following website for notice of the full contents of the orders: http://www.justice.gov/opa/gameover-zeus.html. Return to Top | 计算机 |
2014-23/2662/en_head.json.gz/22705 | Reviews - Final Fantasy Tactics Game
Final Fantasy Tactics
Square Enix | Released Jan 28, 1998
Final Fantasy Tactics is a tactical role-playing game developed and published by Square (now Square Enix) for the Sony PlayStation video game console. It was released in Japan in June 1997 and in the United States in January 1998. The game combines thematic elements of the Final Fantasy video game series with a game engine and battle system unlike those previously seen in the franchise. In contrast to other 32-bit era Final Fantasy titles, Final Fantasy Tactics uses a 3D, isometric, rotatable playing field, with bitmap sprite characters. | 计算机 |
2014-23/2662/en_head.json.gz/23118 | The Effective Strategy For Choosing Right Domain Names
By Christopher Johnson
Business, Domains, Web Design
Naming is linguistic design, and a good domain name is an important part of the overall design of a website. A name plays a prominent role when people discover, remember, think about, talk about, search for, or navigate to a website. It establishes a theme for the branding of a website before people even visit it for the first time.
Coming up with a good domain name requires a combination of strategy, imagination and good linguistic design practice.
You’ll find some basic pieces of advice all over the Web, and it’s worth mentioning those right away. Ideally, your domain name should be:
Catchy and memorable,
Easy to pronounce,
Easy to spell,
Not too similar to competing domain names,
Not a violation of someone else’s trademark.
These are all good rules of thumb. But they lack specifics. These are really criteria to use to evaluate ideas for names after you’ve thought of them. To come up with a name in the first place, you need to know what type of name is best for you. And before you can answer that question, you have to answer two others: one about your resources, and the other about your Web strategy.
Two Questions
The first question is easy: Are you willing and able to spend lots of money on your domain name? If not, you can forget about a .com domain that’s a single real word, like Twitter.com or Amazon.com. They’re all registered, many by domain speculators, and buying one will cost a lot. You’ll need to look for a different kind of name. Real words on .net and .org domains are pretty hard to come by, too.
Image from the Visual Thesaurus, Copyright © 1998-2009 Thinkmap, Inc. All rights reserved.
The other question is a strategic one and takes more thought: How do you plan to get traffic to your website? Answering this question can help you avoid a lot of confusion about what makes for a good name. Some views on this issue directly contradict others. For example, Rob Monster, CEO of Monster Venture Partners, believes that Google.com and Yahoo.com are “lousy domain names” and that podcast.com and slideshow.com are great ones. Marketing guru Seth Godin advises against real words like these and in favor of unique made-up names like Squidoo.com (his company).
So, what’s going on here? These two views correspond to different strategies for getting Web traffic. Monster is interested in what we might call a “discoverable” domain name. That’s a name that can be found by someone who doesn’t know about your website but is doing web searches on keywords and phrases related to a specific topic, or by typing those words and phrases directly into the navigation bar of the browser. Discoverable names are generically descriptive.
The type of name that Godin is talking about is a “brandable” domain name. A brandable name establishes a distinct identity and communicates indirectly to evoke interesting ideas and feelings. Some brandable names, like Squidoo, provide a unique character string unlikely to be found anywhere except in documents that mention that particular website. That means people who know the name of the website can easily use a search engine to navigate there. Godin makes good use of this advantage, though it may not be a significant source of traffic. A unique character string also makes it possible for mentions of your website to dominate top search results for your name. That helps establish credibility, which may be considerably more important.
Discoverable Or Brandable?
So, do you need a discoverable name or a brandable name? If you intend to rely primarily on organic search results for a specific topic, you might want a discoverable name… but not necessarily. Even if your website has a brandable name, it can still rank well in search engine results for keywords and phrases as long as it’s full of relevant content. Discoverable names are only necessary for people counting on “type-in” traffic.
Domains Bot, a search engine that is geared specifically towards finding a domain name. It works best if you're looking for a compound-word domain rather than an invented word.
Discoverable names are real words and phrases. If you don't have the budget to buy a single real-word domain, then you'll need to go for a phrase. Common phrases are often registered as well, so it can take time to find one. The trick to a discoverable name is not to be clever but to think of a phrase that other people would likely think of as well and would type in a search engine or navigation bar. The catch is that you have to find one that hasn't yet been registered. Instant Domain Search and Domains Bot are great tools for checking the availability of domain names and suggesting available alternative names.
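For anyone who wants to automate the first pass, here is a small Python sketch — the word lists are placeholders — that builds two-word candidates and uses a DNS lookup as a cheap filter before confirming real availability with the tools mentioned above. A domain can be registered yet not resolve, so treat the result only as a hint.

```python
# Cheap first-pass filter for phrase-domain ideas. DNS resolution is only a hint:
# confirm availability with a registrar or WHOIS search before settling on a name.
import socket

first_words = ["fresh", "daily", "smart"]      # placeholder seed words
second_words = ["recipes", "budget", "notes"]  # placeholder seed words

for a in first_words:
    for b in second_words:
        candidate = f"{a}{b}.com"
        try:
            socket.gethostbyname(candidate)    # resolves -> almost certainly taken
        except socket.gaierror:
            print("worth checking:", candidate)
```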
If your marketing plans involve paid search listings and buzz generated by prominent mentions of your website, then you will almost certainly want a brandable name. A brandable name is distinctive, evocative and memorable.
Strategies For Brandable Names
So how do you come up with a brandable name? It takes some creativity. You sometimes hear people, including marketing people, say that a name should be an “empty vessel,” so that it can get all its meaning from other forms of branding. That’s not the most productive way to think when coming up with a name. Most great website names are connected to the purpose of the website in an indirect and interesting way. Often they use sensory images or tap into people’s personal experience in some way.
Some names are metaphors. PageFlakes, for example, uses the unexpected flake metaphor to help people understand something about how to use the website: you drag little boxes of content around, and they stick in the places you drop them, like flakes. Smashing Magazine is based on a word used in an enthusiastic appraisal of a performance, outfit, or design — “That looks smashing!” — but it also evokes the idea of being physically clobbered. That metaphor is brought to the foreground by the tagline: “We smash you with the information that makes your life easier. Really.”
Image credit: eBoy.
How do you come up with a metaphor? First, you have to have a clear understanding of what makes your website special and interesting. Then you have to find a simpler concept that helps people understand that concept by analogy, usually by imagined sensory experiences. The sensory information used in metaphors makes them vivid and memorable. There’s no algorithm for finding a metaphor, but it often involves thinking visually, which should come naturally to Web designers.
Some names have indirect connections to a website’s purpose but not through a metaphor. Flickr.com, for example, relates to photography through the concept of light that’s implicit in the word “flicker.”
Putting Names Together
Because you won’t be looking for a single-word name (unless you have big bucks to spend), you’ll have to build your name out of pieces. There are several different ways to do that:
Compound
Example: YouTube
Two whole words, often two nouns, stuck together. Don’t let anyone tell you that this kind of name is a “fad” and will go away. This has been the most common way to coin new English words as well as to create new names, and that’s unlikely to change in the next few hundred years.
Phrase
Example: Six Apart
Words put together according to normal grammatical rules. Phrase names can be similar to compounds, but have a different pattern of syllabic emphasis. In compounds, the emphasis goes on the first word, the way we emphasize “white” in “the Whitehouse.” In phrases, the emphasis often goes on the second word, the way we emphasize “house” in “a white house.”
Blend
Examples: Microsoft, Farecast
A blend combines a part of a word with another word or word part. The name Microsoft combines the “micro” part of “microcomputer” with the “soft” part of “software.” When blends involve a surprising overlap in sound between the two words, they’re a form of wordplay. Farecast is like that. It combines the words “fare” and “forecast,” and “fare” resembles the first syllable of “forecast.” When you create this kind of blend, be sure to avoid awkwordplay: don’t pile up consonants in ugly ways (like in the name Syncplicity), and don’t use important words to replace syllables that aren’t emphasized (the way the names Mapufacture and Carticipate do).
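A sketch of the mechanical part of that wordplay — finding candidate blends where the end of one word overlaps the start of another — might look like the following Python fragment. It handles orthographic overlap only; sound-alike blends such as Farecast need a phonetic comparison, which is not shown, and the word pair is purely illustrative.

```python
# Generate blend candidates where a suffix of `a` overlaps a prefix of `b`,
# e.g. "pin" + "interest" -> "pinterest".
def blends(a: str, b: str, min_overlap: int = 2):
    for size in range(min(len(a), len(b)) - 1, min_overlap - 1, -1):
        if a.endswith(b[:size]):
            yield a + b[size:]

print(list(blends("pin", "interest")))   # ['pinterest']
```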
Tweaked word
Examples: Flickr, Zune
Sometimes you can find a good domain name that’s basically a real word, but changed in some small way. It might have a modified spelling, like Flickr, or it might have a changed or added sound, like Zune (from “tune”) and iPhone.
Affixed word
Example: Friendster
Some names are new words created by sticking an affix — a prefix or a suffix such as the "-ster" in Friendster — onto an existing word. | 计算机 |
2014-23/2662/en_head.json.gz/23254 | TerpBE
Location: Philly 'burbs
I've been having a problem with Comcast for two months, and it seems to be an endless cycle.
I noticed in January that I was being billed $1.50 for the second cablecard, and $8.90 for an additional outlet, which I don't have. I called Comcast to have the additional outlet charge removed. Once they removed it, my second cablecard stopped receiving some of my channels. (I think the ones that are missing are the difference between the "Plus" and "Platinum" tier).
Since they're Comcast, they couldn't figure out what was wrong and had to send out a tech. He came out, tested the signal, swapped the cablecards, and did a bunch of other stuff. After about 2 hours, he realized that something was set up wrong on Comcast's end with the signal they were sending. Apparently the signal for cc#2 didn't have the platinum package. He had them fix it and got everything working.
Then the next bill came, and I saw I was being billed the $8.90 additional outlet charge again. I called to have it removed, and once again, my second cablecard stopped getting some of the channels. Once again, Comcast said the only thing they could do is send out a tech. Once again, he came out, tested the signal, swapped the cablecards, and did a bunch of other stuff. When it still wasn't working, he called his supervisor who said "Oh, it's fine now, it just takes 24 hours for the Tivo to recognize it". Realizing that I wasn't going to accept that as an answer, he talked with the people that send the signal, and noticed that the "sports package" was on one of the cablecards twice, which was confusing things. He apparently had one of them removed and resent the signal, and everything was working fine.
Now, this month I once again was charged the $8.90 fee. I called Comcast and said I shouldn't have that fee, and that every time they take it away, one of my cablecards stops working. The guy said that he would stay on the phone to make sure that didn't happen. He removed the charge, then the line disconnected, and - surprise, surprise! - the cablecard stopped working immediately. I called back again, was disconnected twice more, talked to two supervisors, and spent 3 hours on the phone. The end result is that they're sending ANOTHER tech out.
Comcast is screwing up something so that when they remove the "additional outlet" charge, they take away some of the channels from cablecard #2. When they eventually get it working, they start charging me the "additional outlet" charge again.
According to everyone I've talked to at Comcast, everything is set up correctly on their end, and the signal they're sending should have all the channels. But, that's what they were telling me before, and every time this happened it turned out to be a problem with the signal they're sending. How can I convince Comcast that it is an issue on their end and there's no need to send out a tech? Does anybody who works for Comcast or who knows a lot about cablecards know what SPECIFICALLY I can tell them to check?
Also, how can I get them to stop billing me the additional outlet charge and ALSO have all of my channels work?
I don't know if this could have anything to do with it, but they said th | 计算机 |
2014-23/2662/en_head.json.gz/23580 | Arambilet: Dots on the I's, D-ART 2009 Online Digital Art Gallery, exhibited at IV09 and CG09 Computer Graphics conferences, at Pompeu Fabra University, Barcelona; Tianjin University, China; Permanent Exhibition at the London South Bank University

Computer art is any art in which computers play a role in production or display of the artwork. Such art can be an image, sound, animation, video, CD-ROM, DVD-ROM, videogame, web site, algorithm, performance or gallery installation. Many traditional disciplines are now integrating digital technologies and, as a result, the lines between traditional works of art and new media works created using computers have been blurred. For instance, an artist may combine traditional painting with algorithm art and other digital techniques. As a result, defining computer art by its end product can thus be difficult. Computer art is by its nature evolutionary since changes in technology and software directly affect what is possible. Notable artists in this vein include James Faure Walker, Manfred Mohr, Ronald Davis, Joseph Nechvatal, Matthias Groebel, George Grie, Olga Kisseleva, John Lansdown, Perry Welman, and Jean-Pierre Hébert.

Contents
1 History
2 Output devices
3 Graphic software
4 Robot Painting
5 References
6 See also
7 Further reading

History

Picture by drawing machine 1, Desmond Paul Henry, c. 1960s

The precursor of computer art dates back to 1956-1958, with the generation of what is probably the first image of a human being on a computer screen, a (George Petty-inspired)[1] pin-up girl at a SAGE air defense installation.[2] Desmond Paul Henry invented the Henry Drawing Machine in 1960; his work was shown at the Reid Gallery in London in 1962, after his machine-generated art won him the privilege of a one-man exhibition. In 1963 James Larsen of San Jose State University wrote a computer program based on artistic principles, resulting in an early public showing of computer art in San Jose, California on May 6, 1963.[3][4]

By the mid-1960s, most individuals involved in the creation of computer art were in fact engineers and scientists because they had access to the only computing resources available at university scientific research labs. Many artists tentatively began to explore the emerging computing technology for use as a creative tool. In the summer of 1962, A. Michael Noll programmed a digital computer at Bell Telephone Laboratories in Murray Hill, New Jersey to generate visual patterns solely for artistic purposes.[5] His later computer-generated patterns simulated paintings by Piet Mondrian and Bridget Riley and became classics.[6] Noll also used the patterns to investigate aesthetic preferences in the mid-1960s.

The two early exhibitions of computer art were held in 1965: Generative Computergrafik, February 1965, at the Technische Hochschule in Stuttgart, Germany, and Computer-Generated Pictures, April 1965, at the Howard Wise Gallery in New York. The Stuttgart exhibit featured work by Georg Nees; the New York exhibit featured works by Bela Julesz and A. Michael Noll and was reviewed as art by The New York Times.[7] A third exhibition was put up in November 1965 at Galerie Wendelin Niedlich in Stuttgart, Germany, showing works by Frieder Nake and Georg Nees.
Analogue computer art by Maughan Mason along with digital computer art by Noll were exhibited at the AFIPS Fall Joint Computer Conference in Las Vegas toward the end of 1965.Joseph Nechvatal 2004 Orgiastic abattOirIn 1968, the Institute of Contemporary Arts (ICA) in London hosted one of the most influential early exhibitions of computer art called | 计算机 |
2014-23/2662/en_head.json.gz/23625 | Network Access Protection (NAP)
Latest news from the Network Access Protection (NAP) team at Microsoft.
Example of using the new NPS templates feature in Windows Server 2008 R2
MS NAP Team
In a previous NAP blog entry, we described the new NPS templates feature in Windows Server 2008 R2. In this blog entry, we show an example of using a template for a RADIUS shared secret. Templates for RADIUS shared secrets allow users to specify a...
NPS templates in Windows Server 2008 R2
NPS templates, the flagship feature of NPS in Windows Server 2008 R2, provide a huge reduction in cost of ownership and deployment for all NPS environments. NPS templates separate common RADIUS configuration elements such as RADIUS shared secrets and...
Changes to the NAP user experience in Windows 7
Windows 7 and Windows Server 2008 R2 are now available as public betas. In Windows 7, the NAP client user interface (UI) has been integrated into the Windows Action Center (previously known as the Windows Security Center). For example, Network Access...
Network Access Protection Design Guide wins big at Society of Technical Communication (STC) awards!
Greg Lindsay (writer) and Allyson Adley (editor) won the Online Best of Show award for the NAP Design Guide at the Puget Sound Chapter of the Society for Technical Communication (STC) awards ceremony on January 29th. Congratulations Greg and Allyson... | 计算机 |
2014-23/2662/en_head.json.gz/24180 | Why Implement P3P?
Table of Contents
A. Distinguishing Data Privacy
B. Data Privacy is a Widespread & Tangible Issue
i. Consumers are Looking for Options
ii. Attitudes of Canadian and Australian Consumers
iii. Businesses are Seeking Consumer Trust
iv. The Technology Infrastructure is Evolving with Privacy in Mind
v. Governments are Engaged
vi. International Community Standards Have Emerged
C. Implementing P3P Makes Sense from Many Perspectives
This is the first question for anyone thinking of implementing P3P. Upon examining the specification and their organization's privacy needs, each person may come up with a different reason, ranging from the desire to make their privacy policies easier to use to promoting privacy-enhancing technologies to the more pragmatic concern of maintaining Web site functionality. Any organization operating on the Internet wishing to increase user trust and confidence in the Web should consider implementing P3P. As you consider whether to implement P3P, it may be helpful to understand the broader context of the privacy issue on the Internet. This section discusses the benefits of implementing P3P and gives readers an overview of how various constituents recognize the privacy problem and their role in addressing the issue. A broader discussion of the privacy debate, its history and details about policy initiatives, laws and technologies is outside the scope of this Guide. If you want to learn more about data privacy concerns and efforts to address those concerns, there are many excellent books and Web sites to read. The P3PToolbox offers a list of suggested resources at http://www.p3ptoolbox.org.
Distinguishing Data Privacy
Before discussing how P3P impacts the privacy debate, it is important to clarify the definition of privacy for the purposes of this Guide. There are various types of privacy concerns reflected in laws and customs around the world. P3P has been developed to address a branch of privacy sometimes called data privacy or information privacy - the concern about an individual's control over personal information about him or her.1 Data privacy has imprinted itself globally as a major subject of concern for people.
The information age has seen companies take advantage of the dramatic increase in data on individuals to provide them with more information on products, new services, and customized assistance or products. In recent years, however, media coverage of marketing companies amassing huge amounts of personal information about individuals and potentially creating detailed profiles about them has resulted in significant public concern. This has given rise to the need for companies to balance new services based on data collection with consumers' concerns about personal information.
Data Privacy is a Widespread & Tangible Issue
As the information revolution began at the end of the 20th century, it gave companies the power to inexpensively collect and process large databases of personal information. Information is powerful, and its collection and use are fundamental to our way of life. But the misuse of personal information can cause a range of problems, from the nuisance of junk mail, to the stress of recovering from identity theft, to potentially devastating forms of discrimination. The Internet, with its exchange of information between computers, companies, schools, individuals and countries, has drawn Web users' attention to information privacy.
Each of the forces that shape our options and attitudes about privacy, whether they be governments, corporations, friends and family, or the technology infrastructure, is in its own way recognizing the importance of the privacy issue and is now involved in addressing concerns about data privacy. Governments are passing laws; companies are posting privacy policies and giving consumers more options; many individuals are questioning requests for personal information; and technology is being designed to not only process information more efficiently but to also store it securely and track its access and use.
Consumers Are Looking for Options
How P3P Can Help? P3P can help balance the information economy's need for information to provide consumers with desired services with each individual's desire for control over information about them by empowering people with tools for notice and control to make decisions based on their own preferences. Consumer polls have consistently demonstrated that privacy protection is a significant concern: consumers are expressing concern about what data is collected from them, how it is protected, what it is used for, and how it is shared with others. A Business Week survey released in March 2000 found that 82% of those polled were not at all comfortable with online activities being merged with personally identifiable information, such as your income, driver's license, credit data, and medical status.2 In another recent survey, conducted by Harris Interactive in December 2001, 86% of respondents felt it was somewhat or very important that the Web sites they visited posted a privacy policy on their Web site.3
Attitudes of Canadian and Australian Consumers
As a result of such concerns about privacy, some consumers in Canada and Australia appear to be staying away from online shopping. Rather than becoming more comfortable with e-commerce as it becomes a more ubiquitous marketplace, some Canadian consumers are growing more concerned about the security and privacy of their personal and credit card information that is transferred online. A Canadian Ipsos-Reid survey found that 83% of consumers who have not shopped online cited not knowing what was being done with their information and who was watching their surfing habits as reasons for their reluctance, and that 69% of frequent Internet purchasers say they have concerns about handing out personal information like credit card numbers online.4 Similar concerns were voiced by Australian consumers in a recent survey conducted by the Australian Privacy Commissioner's office. This survey found:
57% of Australians were more concerned about their privacy on the Internet than with any other form of media.
90% of Australians considered practices, including the monitoring of Internet usage without consent and seeking personal details irrelevant to a transaction, to be an invasion of their privacy.
According to the Australian Federal Privacy Commissioner, Malcolm Crompton, "Companies often fail to grasp the importance their customers place on privacy. Nearly half of Australians say they have already stopped - not thought about stopping, but actually stopped - transacting with organizations they feel they can't trust with their personal information."5
Businesses Are Seeking Consumer Trust
How P3P Can Help? P3P enables businesses to build trust with their customers and potential customers by making the privacy/data-gathering process more transparent. This allows consumers to better understand why and how companies collect information.
This concern about privacy is starting to affect business practices. Companies are increasingly recognizing that providing clear information to their customers and allowing their customers a greater degree of control over the collection and use of their personal information makes good business sense. Beyond overcoming consumer confidence concerns, we are beginning to see an environment develop where privacy will be viewed as a general enabler in a wide range of commercial and non-commercial transactions. Respect for individual privacy is beginning to be used to differentiate one company from another in the marketplace and to build a closer, more focused bond between the company and the customer.
Across the globe, many corporations are hiring executive-level managers, often at the Chief Privacy Officer level, to create and implement corporate-wide data management programs. There are Privacy Officer Associations and international training programs. Companies are recognizing the highly valuable yet volatile nature of customer information and are beginning to take steps to manage it with the care such a valuable asset deserves.
The Technology Infrastructure is Evolving with Privacy in Mind
How P3P Can Help? Although the initial user agents will be focused on traditional Internet browsing, P3P lays the groundwork for standardizing the way in which an organization's privacy practices are communicated via other communications devices such as wireless, PDAs, and voice-based devices. P3P is therefore just as relevant to emerging as it is to existing technologies.
Computer programmers, the millions of individuals responsible for creating the computer revolution, the Internet, and the myriad of applications that we take for granted each day are taking informational privacy much more seriously. Technology ethics courses that include security and privacy issues are now part of curriculum at colleges and universities. Organizations such as the European Data Commissioners, Computer Professionals for Social Responsibility and the Association for Computing Machinery are helping developers recognize the power they wield when architecting new information systems and user applications.
The emergence of P3P is evidence of this shift within the technology community. P3P has been developed to help steer the force of technology a step further toward automatic communication of data management practices and individual privacy preferences.
Governments are Engaged
How P3P Can Help? Governments around the world are closely watching how companies and organizations communicate their data management practices, handle consumer complaints, and transfer personal data. P3P facilitates the process of providing notice of data gathering and can therefore be a useful tool for compliance.
In some jurisdictions, adherence to a set of privacy principles is not just good business; it's also the law. It is an increasingly popular opinion that individuals have an important stake in the proper management of their identity and that information.6 Many policy leaders support, and some jurisdictions enforce, an individual's right to determine who has access to personal information about them, to authorize what it is used for, and to be provided with a mechanism to review and correct that data. Europe. As the global community has faced the issues created by mass collection and exchange of personal data, some have taken the lead to promote strict standards for responsible information management. The European Union has taken the strongest steps to deploy information privacy regulation (called data protection legislation), including the creation of country-level data protection agencies.7 Other non-EU countries such as Canada and Australia have passed comprehensive data protection legislation as well. The European data protection legislation includes strict provisions regarding when and how a European data controller may transfer data to other countries.8
United States. In general, the United States has focused its data privacy laws on specific misuses of information, such as regulations prohibiting disclosure of video rental records, or on specific industries that deal with the most sensitive kinds of personal data, such as the credit, banking, and healthcare industries and information about children. Using existing trade and advertising laws and recognizing the importance of this issue to consumers, individual states' attorneys general and the U.S. Federal Trade Commission have taken action against companies that mislead the public with regard to their privacy practices.
International Community Standards Have Emerged
How P3P Can Help? By implementing P3P, a Web site does not automatically comply with the OECD guidelines or the FTC recommendations; however, when combined with other procedures and technical tools, P3P can help an organization address some of the Fair Information Practices.
In 1980, recognizing the importance of the data privacy issue in international commerce, the Organization for Economic Cooperation and Development (OECD) issued privacy guidelines that have become an important foundation for the privacy debates since that time.9 The guidelines were proposed to harmonize national privacy legislation and, while upholding human rights, prevent interruptions in international flows of data. They represent a consensus on basic principles which can be built into existing national legislation, or serve as a basis for legislation in those countries which do not yet have it.
The guidelines formulate a set of eight principles, often referred to as Fair Information Practices. The principles are:10
Purpose Specification Principle: The purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.
Openness Principle: There should be a general policy of openness about developments, practices and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the Data Controller.
Collection Limitation Principle: There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.
Data Quality Principle: Personal data should be relevant to the purposes for which they are to be used, and, to the extent necessary for those purposes, should be accurate, complete and kept up-to-date.
Accountability Principle: A Data Controller should be accountable for complying with measures which give effect to the principles stated above.
Use Limitation Principle: Personal data should not be disclosed, made available or otherwise used for purposes other than those specified in accordance with the Purpose Specification Principle of the OECD Privacy Guidelines except: with the consent of the data subject; or by the authority of law. Individual Participation Principle: An individual should have the right:
(a) to obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to him;
(b) to have communicated to him, data relating to him within a reasonable time; at a charge, if any, that is not excessive; in a reasonable manner; and in a form that is readily intelligible to him;
(c) to be given reasons if a request made under subparagraphs (a) and (b) is denied, and to be able to challenge such denial; and
(d) to challenge data relating to him and, if the challenge is successful, to have the data erased, rectified, completed or amended.
Security Safeguards Principle: Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorised access, destruction, use, modification or disclosure of data.
Other Variations of the Data Protection Guidelines
The Fair Information Principles represent an international consensus on how best to balance effective privacy protection with the free flow of personal data. These principles have been re-cast by some with variations. For example, organizations in the United States should note the formulation by the Federal Trade Commission of five elements that should be addressed in any data privacy standard:
Notice of the ways in which information will be used;
Consent to the use or third-party distribution of information;
Access to data collected about oneself;
Security and accuracy of collected data; and
Enforcement mechanisms to ensure compliance and obtain redress.
P3P Facilitates Fair Information Practices
The adoption of P3P into Web sites and communication technologies promotes a technology environment that supports the Fair Information Practices, as the sketch following the list below illustrates.
P3P provides an automatic way for organizations to communicate to Web site visitors about the purposes for which personal data is collected.
P3P is based on openness and improving the level of conversation between data subjects and organizations who collect personal information on the World Wide Web.
With P3P, users can be notified prior to collection of information increasing their opportunity to consent or reject a specific request for information.
By improving notice to Web site visitors about what data is being collected about them, P3P will trigger more questions to the organizations collecting the information. This scrutiny will hopefully help organizations to take care to collect only information that is relevant and necessary to the organization.
P3P enables organizations on the Web to automatically communicate their privacy policy enforcement methods. As with human readable privacy policies and depending on jurisdiction, data subjects may bring claims against organizations who mislead data subjects using P3P policies.
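To make the mapping above concrete, here is a minimal sketch of what a P3P policy can look like. The element names and structure follow the W3C P3P 1.0 specification, but the site name, URIs, policy name, and the particular purpose, recipient, and retention values shown are illustrative assumptions only - they are not taken from this Guide and are not a recommendation for any real site.

<!-- Illustrative P3P 1.0 policy; "Example Corp," the URIs, and the specific
     purpose/recipient/retention choices below are assumptions. -->
<POLICIES xmlns="http://www.w3.org/2002/01/P3Pv1">
  <POLICY name="main" discuri="http://www.example.com/privacy.html">
    <!-- Openness and accountability: who is making this statement -->
    <ENTITY>
      <DATA-GROUP>
        <DATA ref="#business.name">Example Corp</DATA>
      </DATA-GROUP>
    </ENTITY>
    <!-- Individual participation: what access the visitor has to data about them -->
    <ACCESS><nonident/></ACCESS>
    <!-- Purpose specification, recipients, and retention for the data collected -->
    <STATEMENT>
      <PURPOSE><current/><admin/></PURPOSE>
      <RECIPIENT><ours/></RECIPIENT>
      <RETENTION><stated-purpose/></RETENTION>
      <DATA-GROUP>
        <DATA ref="#dynamic.clickstream"/>
        <DATA ref="#dynamic.cookies"/>
      </DATA-GROUP>
    </STATEMENT>
  </POLICY>
</POLICIES>

Each machine-readable element corresponds to a disclosure a user agent can read automatically: PURPOSE relates to the Purpose Specification Principle, ENTITY and the discuri attribute to the Openness Principle, ACCESS to the Individual Participation Principle, and so on. The human-readable policy at the discuri location remains the authoritative statement of the site's practices.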
Implementing P3P Makes Sense from Many Perspectives
How P3P Can Help? Each of the players in the information society is concerned about privacy and is struggling to develop and apply codes of conduct with regard to the use of personal data. P3P is an important step in that process - a step toward simplifying the communication about privacy practices and the choices individuals may have with regard to such practices. The P3P framework can facilitate our move toward a privacy sensitive information society.
As you speak with others about implementing P3P within your organization, you'll soon notice that different reasons for implementing will resonate with different people. Here are just a few of the various perspectives that you may encounter.
A marketing perspective: P3P strengthens our users' privacy, building goodwill between our customers and our brand. Privacy is becoming a way of distinguishing a brand, with several companies incorporating strong privacy policies into their advertising.
A policy perspective: Widespread implementation of P3P will help empower consumers to choose their own level of privacy. I'd prefer to have consumers make choices rather than have the government dictate one choice for everyone.
Several industry groups, such as the Privacy Leadership Initiative and the Online Privacy Alliance, are supporters of P3P. A technical perspective: With P3P user agent tools already available and in the marketplace, most Web users are now or soon will be using P3P when they visit our Web site, so we better implement P3P to make sure the Web site functions correctly.
For a review of the user agents currently available and under development, see http://www.p3ptoolbox.org.
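For reference, a P3P-enabled user agent typically locates a site's policy through a policy reference file, most commonly served from the "well-known location" /w3c/p3p.xml, and optionally through a P3P HTTP response header. The file below is a minimal sketch; the paths and the policy fragment name are assumptions, not values taken from this Guide.

<!-- Illustrative policy reference file served as /w3c/p3p.xml;
     the paths and policy fragment name are assumptions. -->
<META xmlns="http://www.w3.org/2002/01/P3Pv1">
  <POLICY-REFERENCES>
    <POLICY-REF about="/w3c/policy.xml#main">
      <INCLUDE>/*</INCLUDE>
    </POLICY-REF>
  </POLICY-REFERENCES>
</META>

A server can also advertise the policy on every response with a header of the form P3P: policyref="/w3c/p3p.xml", CP="...", where the compact-policy tokens in CP must be generated from the full policy according to the P3P 1.0 specification; the "..." is left unfilled here deliberately. Some user agents consult the compact policy when deciding how to handle cookies, which is one reason sites that care about full functionality deploy it.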
A legal perspective: Going through the Web site audit necessary to implement P3P will help our organization get a handle on our true data practices and will help us confirm the accuracy of our human-readable privacy policy or identify areas that need to be updated.
Posting an inaccurate privacy policy or failing to make updates to a policy as a business grows are the biggest legal risks associated with privacy policies. Although P3P alone would not constitute compliance with the various data protection laws enacted in recent years, P3P can be an important part of an overall compliance strategy. P3P helps users make informed decisions based on a Web site's privacy practice disclosures. Informed choice is an important component of this. Flexibility enables P3P to be used in conjunction with various laws and policies the world over.
An individual perspective: Individuals want more control over how their personal information is gathered and used. By implementing P3P, companies recognize the importance of providing individuals with the tools to control their own information. The privacy debate touches every Web user. By empowering individual Web users to make their own privacy decisions, companies are also empowering themselves and their employees.
1For a healthy discussion of various types of privacy concerns reflected in the U.S. law, see Dorothy Glancy, At the Intersection of Visible and Invisible Worlds: United States Privacy Law and the Internet, 16 Santa Clara Comp. & High Tech. L.J. 357, 360 (2000).
2Business Week/Harris Survey, March 20, 2000, Business Week, Our Four-Point Plan E-privacy and e-commerce can coexist. Here's how to safeguard both, March 20, 2000; http://businessweek.com/2000/00_12/b3673006.htm
3Harris Interactive Survey, December 2001. For more information, visit http://www.understandingprivacy.org.
4Newsbytes, Security Concerns Plague E-tailing In Canada - Report, November 17, 2001; www.newsbytes.com/news/01/172521.html
5The West Australian, October 16, 2001; www.thewest.com.au/20011016/business/tw-business-home-sto28010.html
6For a discussion of this issue including a perspective that is promoting some level of ownership of one's personal information, see Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stanford Law Rev. 1373, 1375 (2000). For some other perspectives, see Pamela Samuelson, Privacy As Intellectual Property?, 52 Stan. L. Rev. 1125 (2000); Eugene Volokh, Cyberspace And Privacy: A New Legal Paradigm? Freedom Of Speech And Information Privacy: The Troubling Implications Of A Right To Stop People From Speaking About You, 52 Stan. L. Rev. 1049 (2000); and Jessica Litman, Cyberspace and Privacy: A New Legal Paradigm? Information Privacy/Information Property, 52 Stan. L. Rev. 1283 (2000).
7Additional information can be found in article 29 of the directive, at http://europa.eu.int/eur-lex/en/lif/dat/1995/en_395L0046.html.
8This has triggered complex bi-lateral negotiations such as the Safe Harbor program created by United States and the European Union to address the need for companies to transfer employee and customer information across the Atlantic. See http://www.export.gov/safeharbor/.
9The OECD consists of representatives from 29 countries that work to develop policies to foster international trade.
10For a further discussion of the development of the Fair Information Principles see: http://www.ftc.gov/reports/privacy2000/privacy2000.pdf
P3P Implementation Guide
Please note that this document is a working draft for review and reference purposes only. Any questions or comments should be e-mailed to [email protected]. | 计算机 |
2014-23/2662/en_head.json.gz/24190 | Sega Genesis - 1989-1997
System History
It was 1989. Nintendo's NES had reigned supreme in the videogame market for nearly five years, and it was time for a new system to take over the throne. Sega's Master System, while graphically superior to the NES, failed to make any kind of lasting impression in the U.S. market (although it was very popular in Europe), and Sega knew that their next system would not only have to be superior to everything else out there, but they'd have to have a lot of third-party developers lined up. The lack of third-party support is cited as the main cause of the Master System's demise.
After two years of development, Sega introduced their "next generation" system to the world in late 1989. Known as the Genesis in the West, and the Mega Drive in the East, Sega began an aggressive marketing campaign, not only to customers, but also to developers. Although NEC's TurboGrafx-16 had beaten the Genesis to market by nearly four months, Sega quickly regained lost ground, thanks to their line-up of quality arcade conversions, killer sports games, and most of all, the full support of Trip Hawkins and Electronic Arts. Although the Genesis development kit was reportedly overly expensive and initially difficult to work with, by the end of 1990 there were over 30 third-party developers writing games for the new system, compared to four for the TG16.
From 1990 through to late 1991, the Genesis was pretty much the only kid on the block. The TG16, while boasting excellent games, hadn't a hope of catching up, and Nintendo's Super NES was delayed again and again. One reason for the numerous delays was that software was still being developed for the NES, and Nintendo didn't want to risk losing any of those third-party developers to Sega.
When the SNES was finally released in September of 1991, Sega realized that the first real threat to their grip on the 16-bit market had surfaced. Sega spent a massive amount of money on advertising, promoting its superior game line-up and showing pictures of its "still-in-development" Sega CD. Although it would be nearly two years before the CD made it to market, this stopgap tactic was ingenious. Sales of Genesis consoles and games only dropped slightly in the Christmas season of 1991.
1992 was a turbulent year for the 16-bit systems. Early on, NEC announced that sales and distribution of the TG16 would be handled by a new company known as Turbo Technologies Inc. (TTI), made up of senior staff members of both NEC and Hudson Soft. Meanwhile, Sega and Nintendo battled it out for control of the videogame market. Despite the graphical splendor of the SNES, many criticized its slow processor speed, which was said to be roughly half of the Genesis' 7MHz processor speed. Sega used this publicity well. In the summer of 1992, Sega unveiled its secret project. Known as Sonic The Hedgehog, its stunning visuals pushed the Genesis to the limit, and earned the title of the fastest videogame in history.
The arrival of Sonic was a major blow to Nintendo. It proved that the Genesis wasn't as primitive as Nintendo wanted everyone to believe. New SNES software was slow to arrive in 1992, and Sega's huge third-party support helped carry them through the year, despite the fact that the promised Sega CD had yet to arrive in stores.
1993 was the year that Sega's stronghold on the market began to slip for the first time since its introduction. Third-party support for the SNES was finally coming up to speed, and some truly remarkable games were starting to be released for it. Sega introduced the Sega CD late in the year, and despite all the hype that had been built up over the last two years, the Sega CD sold very poorly, partially due to its $200+ initial asking price. The Sega CD software was also extremely disappointing. While there were a few gems, most of the CDs were nothing more than straight ports of the cartridge versions with Redbook CD audio. With only the Sega CD to carry them through the Christmas season, Nintendo came out on top in 1993. 1994 was the year that the first 32-bit systems saw the light of day. Trip Hawkins had left EA in mid-1993 to form the 3DO company, and 1994 saw the release of the 3DO Interactive Multiplayer. While incredibly underpowered compared to later 32-bit systems (Playstation & Saturn), it was the first glimpse of what gaming could be like beyond the 16-bit realm, and both Sega and Nintendo realized what a threat this was to their supremacy in the videogame market. Although Sega of Japan had been developing their successor to the Genesis (the Saturn) for a few months already, Sega of America began developing the 32X, an add-on that claimed to turn the Genesis into a full-fledged 32-bit system. While this was a good idea in concept, the 32X turned out to be the final nail in the Genesis' coffin (with the Sega CD being the first). The add-on, while cheaper than a new system, was still terribly overpriced. The games were disappointing, and definitely not up to the standards set by the 3DO. In late 1994, Nintendo introduced Donkey Kong Country, a game that was promoted as giving 32-bit quality graphics and gameplay without any add-ons. It proved through advertising that the SNES was indeed the most powerful 16-bit system on the market and even rivaled the quality of the full-fledged 32-bit (3DO, CD-I) and 64-bit (Jaguar) systems. Sega never recovered from the 32X fiasco, and many believe that the sales of the Saturn were also hurt by it. Hoping to revive interest in the Genesis, Sega released the Nomad, a $179 portable color Genesis. Its high price kept it from becoming very popular, but soon the Nomad could be found discounted for under $50 and it did succeed in invigorating interest in the Genesis somewhat.
From 1994 to 1997, the focus of the videogame market gradually shifted away from the 16-bit systems, and even the first-generation 32- and 64-bit systems. The 3DO, CD-I, and Jaguar were all laid to rest in the mid-1990s, and sales of the Playstation and the Saturn (in Japan) took off. The 16-bit market share slowly dwindled, and 1997 marked the final year of production for the Genesis.
Games
If you're new to the Genesis, I'd definitely recommend checking out a game in the Sonic series. Sonic 1, 2, 3 and Sonic & Knuckles are superfast side-scrolling platform games and a must-see. Sonic Spinball and Sonic 3D Blast are pinball and isometric platform games respectively, and are, in my opinion, not as good.
The Genesis was noted for its abundance of sports titles. It is generally agreed that the Electronic Arts line of sports games (EA Sports) is the best overall. Role Playing Games are few and far between on the Genesis, and there are a lot of Japanese RPGs that were never translated. Having said this, both the Phantasy Star and "Shining" series (Shining in the Darkness, Shining Force 1 & 2) are considered among the best RPGs on any 16-bit system.
Accessories
The most notable accessory for the Genesis has to be the Sega CD. Although its transfer rate and access time are roughly those of a 1x speed CD-ROM, it more than doubled the Genesis' available RAM, and added an additional sound processor and a chip that enabled hardware scaling and rotation, similar to the SNES's famed Mode 7.
The 32X add-on was Sega's answer to the 32- and 64-bit systems that began to arrive in 1994. It was extremely underpowered compared to the full-fledged 32-bit systems (such as Sega's own Saturn) and it consequently sold very poorly. The production run of the 32X lasted only a few months, and Sega ended up losing valuable market share to Nintendo and Sony. The Power Base Converter allowed Master System games to be played on the Genesis. It bypassed the main 68000 processor in the Genesis and used the Z80 sound processor to run the original Master System code. While rumors circulated of a Game Gear to Genesis converter, it never made it past the prototype stage.
The Mega Mouse was released around the same time as the SNES mouse, and to my knowledge only one title (the terrible excuse for a drawing program, Art Attack) supports it.
The Activator was Sega's attempt at a "virtual reality" interface for the Genesis. It was a flat, octagonal piece of plastic and wires that translated the movement of someone standing inside it into movement in a game. Control was clumsy and imprecise, and after a few frustrated minutes of play, the urge to sit down and pick up a gamepad is nearly uncontrollable.
Emulation
1997, the year the Genesis was discontinued, was also the year that Genesis emulators reached near-perfection. Both Bloodlust Software's Genecyst and Steve Snake's KGen are nearly equal in quality, both able to play over 80% of the existing Genesis and MegaDrive game library.
Unfortunately, most of the Sega Genesis ROM sites, including the Genesis section of The Dump, have been shut down by an organization representing Sega.
Links
http://www.sega.com
http://www.ea.com
This section was Researched and Authored by Jonathan J. Burtenshaw (Harry Tuttle). Author's Note: "I searched far and wide for a Sega Genesis FAQ. I couldn't find one, so most of this information ended up coming from my own brain. If any information is incorrect, please let me know". [mail to [email protected]] | 计算机 |
2014-23/2662/en_head.json.gz/24463 | A Look at SQL Server 2008
Paul Thurrott's Supersite for Windows
When one thinks of Microsoft's platforms, thoughts turn naturally to such products as Windows, Windows Server, Microsoft .NET (including the Web-based ASP .NET technologies) and perhaps Office. But Microsoft's success as a maker of platforms is based on far more than just the platforms themselves. It includes a developer ecosystem rooted in the Visual Studio tools and .NET development languages. And it includes the related data back-end, which centers on SQL Server.
The next version of SQL Server, codenamed Katmai and dubbed SQL Server 2008, will debut in February 2008 alongside Windows Server 2008 and Visual Studio 2008. This is no coincidence of timing. Indeed, these products will most certainly be completed at varying times, with Visual Studio 2008 scheduled for completion in late 2007 and both Windows 2008 and SQL 2008 not due until after the launch event. (SQL Server 2008 will be the last of the th | 计算机 |
2014-23/2662/en_head.json.gz/25275 | Forums > Submission Feedback > pickhut's The Mansion of Hidden Souls review
This thread is in response to a review for The Mansion of Hidden Souls on the Saturn. You are encouraged to view the review in a new window before reading this thread.
Author: mrmiyamoto
Posted: October 30, 2013 (01:55 AM)
Wait, so...they made a markedly similar game for Sega CD? I played this on Sega Saturn, and it's weird, but...I found the game strangely mysterious. Perhaps it was impressionable age (I was roughly 9-10 years old), but the tinny, hollow voiceovers creeped me out, and the disturbing laughing by a particular girl set me over the edge. I never did beat the game, but I have vivid memories of playing it.
Also, if you've played a fair amount of Sega Saturn, I'm trying to figure out the name of this other game I played all the time. It was about this, like, space marine, and he was red in color. You controlled him as he went around shooting guns at things. It had a mechanic which entailed a stopping and pivoting shooting style, and it was semi top down point of view. Ring any bells? For the LIFE of me, I can't find out what that game was, but I remember the images like it was yesterday.
"Nowadays, people know the price of everything and the value of nothing"
*Oscar Wilde*
So talking about that game again, I did some research and reviewed a list of American Saturn games. Turns out the game was Crusader: No Remorse. Wow, memory lane for sure watching Youtube videos of that game. "Nowadays, people know the price of everything and the value of nothing"
Author: pickhut
Yeah, I had a review up for the first game prior to this review. I provided a link in the first paragraph (predecessor), if you wanna see that one, I guess.
Crusader is one of the games I haven't tried out yet for the Saturn. Never got around to it. I was going to purchase the game and its sequel on gog.com, but never got around to that, as well.
I head spaceshit noises
Author: EmP (Mod)
I adored Crusader: No Remorse back in the day. I managed to nab a super cheap copy on release as the shop I worked for at the time over-ordered like mad, so got it below cost then played it like crazy. Oddly, what I remember the most is the internal e-mail system you could browse in between missions and catch up with the base's gossip, and there being this huge long-running argument about a joke someone made about a butcher shop chicken and how it made no sense.
Always been scared to offer a reply to keep nostalgia intact.
For us. For them. For you.
Posted: October 30, 2013 (01:58 PM)
Yeah, the game seemed so ahead of its time. Glad someone else besides me has memories of it. :D
2014-23/2662/en_head.json.gz/25335 | Breaking down the FTC's definition of 'native advertising' in games
August T. Horvath wonders if the FTC really groks native advertising, especially in gaming and entertainment contexts
By August T. Horvath
April 18, 2014
A few months ago I experienced the phenomenon of Advertising Week, a kind of Comic-Con of online advertising that occurs each year around Times Square. My two main objectives were to meet some of the Angry Birds in person and to find out exactly what “native advertising” is. I had more success with the first goal. Even though many vendors actively promoted native advertising and used it in every other sentence, explaining what it was proved challenging.
Several other entities, especially advertising enforcers, seem disturbingly certain that they know what native advertising is. At a recent presentation, a Federal Trade Commission staffer announced, with tongue-in-cheek pride, the FTC’s first native advertising enforcement action: a 1915 case involving an advertisement posing as a magazine news article. It was a cute way to make the point that nothing in advertising law is really new, and to reinforce the FTC’s perennial position that any truth-in-advertising issue can be resolved by reference to the broad principles stated in the FTC Act. Somehow, though, it was unsatisfying. I came away wondering if the FTC really groks native advertising, especially in gaming and entertainment contexts.
Semi-formally, the FTC has defined “native advertising” as “blending advertisements with news, entertainment, and other editorial content in digital media” and has asserted it to be synonymous with “sponsored content.” Except for the “digital media” part, it encompasses the advertorials that have been around for over a century in print media. This, and the name of the FTC’s December 2013 “Blurred Lines: Advertising or Content?” workshop on native advertising, signal the FTC’s primary concern with native advertising, which is that sponsored content should be clearly labeled as such.
There are a couple of problems with the FTC’s definition of native advertising. First, it is basically tantamount to “disguised advertising.” If in-media advertising is clearly designated as advertising, then it has separated itself from the surrounding content, broken the frame, interrupted the experience; it is no longer native. If it’s native, by definition, it’s blended; the lines are blurred; you can’t tell it from the “content.” It is inherently suspicious, if not deceptive. Second, it has dubious relevance to the gaming context, or to most entertainment media.
If you are looking for a true ancestor of video game native advertising in the (relatively) low-tech world, a better one might be product placement in movies and TV shows. Here, the FTC’s enforcement position deviates from its own principle that sponsored content should be clearly designated as such. In 2005, responding to a Commercial Alert complaint about product placement in TV shows, the FTC declined to require advertisers who pay to have their products appear in programs to flash a superimposed disclosure such as “ADVERTISEMENT” at viewers. Essentially the FTC skirted its own principle by defining the mere placement of one’s brand in front of consumers as not an advertisement that makes objective, and thus potentially deceptive, claims about the product. Product placement in traditional media has reached its most advanced form in Asia, where top film and TV stars participate in “Commercial Films” (CFs) that are extended dramatic programs sponsored by an advertiser. Typically the advertiser’s product plays a key plot role, such as in a romantic comedy where the sponsor’s smartphone is crucial in uniting the lovers — similar to, but more pervasive than, America Online’s ground-breaking product placement in the 1998 Tom Hanks/Meg Ryan film You’ve Got Mail. It’s certainly possible that a product in such a story could be shown doing things it can’t really do, and nothing prevents an agency or litigant from challenging any deceptive implied claim. If, for example, Chrysler had sponsored the appearance of the legendary 1969 Charger “General Lee” in The Dukes of Hazzard, the FTC might very well have taken issue with the show’s depiction of the car’s aerial capabilities and crashworthiness.
Games are more like movies than magazines or search portals, and the FTC's hands-off logic for product placement in movies and TV shows that does not make objective claims holds true for the most common form of video game native advertising, which is to throw a trademark in front of users in the context of a game environment. The FTC's position implicitly acknowledges an important benefit of this practice. Most games are interactive works of fiction in which the programmer interacts with the user to create a more or less guided but unique narrative experience. Many games build a realistic environment that calls for the suspension of disbelief. Corporate brands and advertising are a part of our modern environment, and it enhances the realism of any simulated environment, be it in a movie or a game, when genuine, familiar brands appear. We have all seen movies and TV productions in which the producer has invented a bogus brand for a common consumer product because it could not use the trademark of a real one. This unrealism jars us momentarily, interfering with our suspension of disbelief. Product placement serves not only the sponsor's commercial ends, but also the producer's dramatic purposes and user-viewer's experience, because it is realism-enhancing to see real, rather than ersatz, product billboards in a street-chase game and real logos on the boards of an ice hockey simulation.
Perhaps not surprisingly, Wikipedia has a more balanced definition of native advertising than the FTC: Online advertising "in which the advertiser attempts to gain attention by providing content in the context of the user's experience [whose] formats match both the form and the function of the user experience in which it is placed." It's a definition better suited to native advertising in the gaming context, because it acknowledges that native advertising in gaming is not just snuck into the experience in disguise, it is affirmatively part of the experience, in the same way that products, brands and advertising are part of our lives. Deceptive is still deceptive, of course; but native is not necessarily deceptive.
Contributing Author
August T. Horvath
August T. Horvath is a partner in Kelley Drye's New York office. He focuses his practice on advertising law and antitrust matters. Mr. Horvath...
2014-23/2662/en_head.json.gz/27211 | Quantum Computation for Quantum Chemistry: Status, Challenges, and Prospects - Session 1
Download: Video (WMV, MP4), Audio (WMA, MP3), Slides (XPS, PDF), Transcript (DOC)
Speaker Michael Freedman, Krysta Svore, Matthias Troyer, and Markus Reiher
Affiliation Microsoft Research, ETH Zurich
Host Michael Freedman, Krysta Svore
Date recorded 12 November 2012
9:00 – 9:15 AM
Speaker: Michael Freedman, Microsoft Station Q
Michael Freedman is Director of Station Q, Microsoft’s Project on quantum physics and quantum computation located on the UCSB campus. The project is a collaborative effort between Microsoft and academia directed towards exploring the mathematical theory and physical foundations for quantum computing.
Freedman joined Microsoft in 1997 as a Fields Medal-winning mathematician whose accomplishments included a proof of the 4-dimensional Poincaré conjecture, the discovery (with Donaldson and Kirby) of exotic smooth structures on Euclidean 4-space, applications of minimal surfaces to topology, and estimates for the stored energy in magnetic fields. Freedman has received numerous awards and honors: the Fields Medal, election to the National Academy of Sciences and the American Academy of Arts and Sciences, the Veblen Prize, a MacArthur Fellowship and the National Medal of Science. His work since joining Microsoft has been primarily on the interface of quantum computation, solid state physics, and quantum topology.
Quantum Computing: A Short Tutorial
Speaker: Krysta Svore, Microsoft Research QuArC
Krysta Svore is a Researcher in the Quantum Architectures and Computation Group (QuArC) at Microsoft Research in Redmond, WA. She is passionate about quantum computation and determining what problems can be better solved on a quantum computer. Her research focuses on quantum algorithms and how to implement them on a quantum device, from how to code them in a high-level programming language, to how to optimize the resources they require, to how to implement them on quantum hardware. Her team works on designing a scalable, fault-tolerant software architecture for translating a high-level quantum program into a low-level, device-specific quantum implementation. Dr. Svore received her Ph.D. with Honors in Computer Science from Columbia University in 2006 under Dr. Alfred Aho and Dr. Joseph Traub. She was a visiting researcher at MIT under Dr. Isaac Chuang, at Caltech under Dr. John Preskill, and at IBM Research under Dr. David DiVincenzo and Dr. Barbara Terhal.
9:30 – 9:45 AM
Motivation for the meeting
Speaker: Matthias Troyer, ETH Zurich
While a quantum computer can solve many electronic structure problems in polynomial time, the time needed for interesting problems might still exceed the age of the universe on the fastest imaginable quantum computer. In this introductory presentation I will present limitations of the largest and fastest quantum computer that we might imagine building. I will then discuss the consequences of these limitations for solving problems in quantum chemistry and materials science, to set the stage for the discussions during the meeting.
Bio:
Matthias Troyer is professor of computational physics at ETH Zurich and consultant for Microsoft Research Station Q. He is a recipient of an ERC Advanced Grant of the European Research Council and a Fellow of the American Physical Society. His research activities center on numerically accurate simulations of quantum many body systems, with applications to quantum magnets, correlated materials, ultracold quantum gases, quantum devices and topological quantum computing. He achieves progress in simulations through novel simulation algorithms combined with high performance computing approaches. He has initiated the open-source ALPS projects for the simulation of quantum many body systems in condensed matter physics.
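A rough illustration of the kind of resource accounting behind the "age of the universe" remark in the abstract above (the gate counts and gate time here are placeholder numbers chosen for easy arithmetic, not figures from the talk): if an algorithm requires $N_{\mathrm{gates}}$ logical gates executed one after another, each taking time $t_{\mathrm{gate}}$, the wall-clock time is roughly

$$ T \approx N_{\mathrm{gates}} \cdot t_{\mathrm{gate}} . $$

For example, $10^{18}$ sequential gates at $10\,\mathrm{ns}$ per gate give $T \approx 10^{10}\,\mathrm{s}$, on the order of three centuries, and roughly $10^{26}$ such gates would exceed the age of the universe ($\sim 4 \times 10^{17}\,\mathrm{s}$). This is why the discussion focuses on reducing gate counts algorithmically rather than only on building faster hardware.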
9:45 – 10:30 AM
What Could Quantum Computers Accomplish for Chemical Reactions?
Speaker: Markus Reiher, ETH Zurich
Abstract:
In the past 15 years, my group has worked on various problems in chemistry ranging from its fundamental relativistic basis to applications in template chemistry and transition metal catalysis.
While the electron correlation problem is one of the major issues in Theoretical Chemistry and seemingly well suited to being tackled by quantum computers, other issues involving the huge size of chemical compound / configuration space are probably much more important when actual chemical problems are to be solved.
In my talk, I will elaborate on some prominent examples which we encountered in our work in order to highlight persistent difficulties. Then, I shall discuss whether or not these problems will be amenable to solution by virtue of quantum computers.
Markus Reiher has been professor for theoretical chemistry at ETH Zurich since 2006. He was born in Paderborn/Westphalia in 1971. In 1998, he was awarded a PhD in theoretical chemistry from the University of Bielefeld, working with Juergen Hinze. In 2002, he finished his habilitation thesis in the group of Bernd Artur Hess at the University of Erlangen and continued as a private docent, first in Erlangen and then at the University of Bonn. In 2005, he accepted an offer for a professorship in physical chemistry from the University of Jena, where he worked until he moved to ETH Zurich. His research covers many different areas in theoretical chemistry and ranges from relativistic quantum chemistry, (vibrational) spectroscopy, density functional theory, transition metal catalysis and bioinorganic chemistry to the development of new electron-correlation theories and smart algorithms for inverse quantum chemistry.
> Quantum Computation for Quantum Chemistry: Status, Challenges, and Prospects - Session 1 | 计算机 |
2014-23/2662/en_head.json.gz/27859 | Founder and Vice President of Research
Bill Joy founded Sun in 1982, coming to the company from U.C. Berkeley where he was the author of Berkeley UNIX (BSD) and the "vi" text editor. Berkeley UNIX was an early example of an "open source" operating system, and provided early and strong support for TCP/IP and the Internet in 1980.
At Sun Bill has led Sun's new technical initiatives for a number of years. Major new technical initiatives he has led and/or designed include Sun's Network File System (NFS), the business and technical strategy for the Java programming language and platform, and, most recently, the Jini distributed system.
Bill is co-author (with James Gosling and Guy Steele) of the Java Language Specification, the definitive description of the Java Programming Language. His most recent work on the Jini technology for networked computing devices using Java (with Mike Clary) is described in the August, 1998 issue of Wired magazine, and at http://java.sun.com/products/jini.
Bill is also a designer of the SPARC microprocessor architecture (with Robert Garner and Anant Agrawal), the picoJava architecture for the embedded market and systems on a chip (with Marc Tremblay and Mike O'Connor) and the ultraJava architecture for high performance em | 计算机 |
2014-23/2662/en_head.json.gz/27897 | Google Won't Develop Dedicated Apps For Windows 8, Windows Phone 8
Shane McGlaun (Blog) - December 14, 2012 8:16 AM
52 comment(s) - last by nikon133.. on Dec 16 at 5:45 PM
Google says if the demand grows it might develop for Windows 8
Microsoft has been working hard to entice developers to make apps for its new Windows 8 computer operating system and its new smartphone platform, Windows Phone 8. One of the companies that Microsoft hoped would make official apps for its devices was Google, which owns popular e-mail services like Gmail and the cloud-based storage service Drive.
Google has now said that it has no plans to develop dedicated apps for Windows 8 or Windows Phone 8 for Gmail or Drive. In an interview with v3, Google Apps product management director Clay Bavor indicated that Google has chosen not to develop for the new Windows platforms due to a lack of interest from clients.
"We have no plans to build out Windows apps. We are very careful about where we invest and will go where the users are but they are not on Windows Phone or Windows 8," he said.
He adds, "If that changes, we would invest there, of course."
Bavor said that Google was committed to continually improving and updating the apps it offers for iOS and Android products. Source: v3
RE: Oh really?
... You do realize that from the perspective of the Store, there is no difference. All Win 8 store apps work on all versions of Win 8, on all devices it is on... WP8 however is different... slightly. | 计算机 |
2014-23/2662/en_head.json.gz/28240 | The Forge Forums » General Forge Forums » Site Discussion » The Winter of the Forge looms near
Topic: The Winter of the Forge looms near (Read 19719 times)
The Winter of the Forge looms near
Hi everybody,
I aim to move the Forge into its winter stage by the end of the year. For those of you who don't know this, I announced quite a while ago that the Forge was never intended to be a permanent site. Especially since, well, bluntly, I (and Clinton, and Ed Healy, and a lot of other people active at the founding) have unequivocally won the battle we wanted to win. "The Big Bang has Bung," I like to say.
That's the main reason besides time and effort that I have never tried to update the Forge into a physical format more suited to 2010 rather than 1999.
However, it's been a slow process to downsize, and sometimes stalled. That's a time and effort thing too, but it's also a matter of deciding exactly how. You see, I want the site to be good throughout the whole process - it may surprise some of you, but I do not think downsizing and closure are a bad thing and necessarily about disuse, stagnation, and failure. I'm kind of hoping the final phase, "winter," will be a really good thing.
Vincent and I came up with a partial plan, which we'll iron out in terms of software and policy details for a little while before implementing. Comments are welcome, but bear in mind, this is not a democracy and your input may well be ignored. The main reason I'm posting it is policy transparency, so you can decide whether the winter-Forge is somewhere you care about and think you can make use of.
1. Most publisher forums will be moved to the Archives. I am hoping that we can either continue to host or to transfer forums to the publishers' ownership, if desired, but they will no longer be an active feature via the Forge page itself. (Please do not post with helpful software advice. Vincent will make his own decisions about that, and I frankly know nothing about it, so it won't help me.)
1'. For purposes of blunt self-interest, the Lumpley and Adept forums will remain, at least for a while. Or maybe they'll go with the others if the whole transfer-technology proves to be easy.
2. The Conventions and Connections forums will go to the Archives. Those services can be picked up by anyone who wants to start them elsewhere.
3. Some of the First Thoughts functions and the Playtesting function will be combined into "Development," which is intended to be a very practical forum about games in design. First Thoughts will go to the Archives and Actual Play will go back to where it once was, the top of the page, with an introduction encouraging first users to post there (and how).
4. The current Endeavor forum(s) will go into the Archives, but temporary versions will be implemented on request for a given project. So you know, I'm thinking of resurrecting the Ronnies in early December, in somewhat more practical form.
Well, that was pretty much what Vincent and I talked about. To summarize, the forums are to become (in order), Actual Play, Development, Publishing, and sometimes a specific Endeavor; plus perhaps the Adept and Lumpley forums until they find their new homes.
Let Vincent know if you want your Publisher forum to stay active and you can work out the details of how and all that in that computer talk.
Best, Ron
Re: The Winter of the Forge looms near
Congratulations on the Forge being successful enough that you can shut parts of it down. That's a serious accomplishment, Ron, no question, and a testament to how important it has been as an instrument of change.
As a sidenote, if we can keep the special Game Chef 2010 Endeavor forum going through early December, when the playoffs will wrap up this year's activities, that would be great. The Forge has been a great co-host this year and I'd definitely be interested in coming back here for 2011 if Game Chef is the kind of periodic, productive endeavor that you and Vincent are still interested in supporting. But if not, we'll make something else work.
mcdaldno
Joe Mcdaldno
Ron,
This is really cool. The Forge unlocked so many ideas and design processes for me. And for the roleplaying game world, I think.
I think that you're right: it's achieved so many of its goals and helped realize so many diverse potentials.
And it's right and just that it come to a close.
So: props. cheers. good choice.
I am a game chef. Buried Without Ceremony
Huh. Seems like I've been preaching to the choir. So, winter is coming? I'm looking forward to it.- Frank
matthijs
Congratulations to all the people who created & maintained the Forge! Well done. You've done an amazing thing for a lot of people, and I'm personally very grateful for it.
Nørwegian Style
I'm glad to see we're still keeping Dev and AP -- those are the most useful places for my purposes.
Incidentally, having done a modicum of archive reading, I can safely say you guys did some amazing work, and I'm glad to have found this resource; it opened up my eyes to the things that RPGs can do, and the things major RPGs don't. I personally have about five times more fun gaming now than I ever did before.
Thank you, Vince, Ron, and everyone who fought the good fight to make this happen.
Teataine
By hook or by crook.
I just wanted to say that I applaud this decision. (particularly the new Development merger)
Gregor Vuga
Thanks for the kind words.
Jonathan, the current Game Chef Endeavor is certainly going to stay up through its whole course. That's exactly the sort of thing I hope to see running throughout the final phase, one or two at a time.
My current thinking on the Development forum is that the Forge will no longer be a place for initial musings on game designs. Playtested or not, the project in question should be a project with some kind of real material basis - a website, a document, anything like that. The whole point is to be more of a literal forge, a workshop where things are being made. I expect traffic to drop somewhat but I hope the existing traffic will be very productive.
Best, Ron
I just returned from 5 days at the Lucca Comics and RPGs fair, and people called me saying "the Forge is closing".
Well, the eventual "coming of the winter" is public knowledge from something like 2005, so my initial reaction was "already? It's too soon!" (but with the tone of voice one usually uses for "the sky is falling! The sky is falling!").
Reading this thread I realize that the reports of the Forge's death have been greatly exaggerated, and I like many of these changes (putting the actual play forum to the top again, for example). But at the same time, I am worried about any change that would give the impression to people that the Forge is not THE place to talk about actual play and design anymore. Not because of forum rivalry or something stupid like that, but because.. well, there is still not any other forum that could take the Forge's place on that aspect (not even the one I am moderator of, I have no illusion about that). And after 5 years from the Forge Diaspora, this is not improving (rather the other way around)
That sounds great, Ron. Just like in Mouse Guard, it seems like the winter phase is going to be pretty awesome!
Rafu
Raffaele, from Italy
Quote from: Moreno R. on November 02, 2010, 11:54:58 AM
"But at the same time, I am worried about any change that would give the impression to people that the Forge is not THE place to talk about actual play and design anymore. Not because of forum rivalry or something stupid like that, but because.. well, there is still not any other forum that could take the Forge's place on that aspect (not even the one I am moderator of, I have no illusion about that). And after 5 years from the Forge Diaspora, this is not improving (rather the other way around)"
If the war is over and "we" won, we no longer have the need for a citadel, don't you think? The time has come for all of the harbors and marketplaces of the world to become a little more like the Forge, instead of the Forge being a singular "special place".
Raffaele Manzo, or "Rafu" for short. From (and in) Italy. Here's where I blog about games (English posts). Here's where I micro-blog about everything.
Chris_Chinn
Quote: "Not because of forum rivalry or something stupid like that, but because.. well, there is still not any other forum that could take the Forge's place on that aspect (not even the one I am moderator of, I have no illusion about that). And after 5 years from the Forge Diaspora, this is not improving (rather the other way around)"
Keeping discussion on focus is a difficult and very labor intensive thing - and so far, no one has come up with an alternative. Secondarily, it's also a lot easier to get social reward by forming social forums around "We're excited about this thing!" than it is to build forums, educate people in treating it in a goal-oriented manner, and then having to be the moderator who constantly has to remind everyone to stay focused.
While a lot of people developed great networks of folks to design -with-, carrying over the lessons from the Forge for their own work, there hasn't been a lot of good passing along of that information to newcomers or to the public at large.
When the Forge finally does close, the real question is whether this understanding can be passed along or if it'll be a wheel folks will have to re-invent. (Several lessons, in terms of play, design, and publishing have been well absorbed already, so those will be fine, at least).
Chris
Hello
Looking forward to the winter phase too! Ron, what do you mean by the "physical format more suited to 2010 rather than 1999" (my emphasis)?
At this moment, I am trying to explain the "4 seasons" concept to people in the Italian forums who never played Ars Magica and are still in the "the sky is falling, the forge will close at the end of the year" phase (but I have seen that it's a very common misconception even in the American forums). To explain how the concept applies to the Forge, Ron, what do you think was the transition from Spring to Summer? (I suppose Autumn started with the closing of the theory subforums, the site reorganization and the start of the "two year" policy at the Forge Booth)
@Rafu: the Forge never was a "Citadel", not a fortified and closed one at least. If Ron likes the Ars Magica Covenant metaphor, I am not against the coming of the Winter. The problem is that I really don't see any other covenant like it, not even in its Spring phase. Most of the ones that started at the time of the diaspora became deranged like Calabais. So the Forge is still "a singular Special Place" in fact, not by my desire.
Hi everyone,
The interview thread Moreno linked to has some text about this that I'll paraphrase here. The beginning of Winter doesn't mean the end of the Forge. The end of the Winter is the end of the Forge. Here's what I want to clarify about the content of Winter, which has only become clear in my own mind over the past few months.
1. The duration of Winter is not pre-set. Autumn lasted for three years, for instance, somewhat longer than I'd informally expected. I have no idea how long the Winter will be.
2. The activities during Winter will be much more directed toward the production of games, rather than merely musing and speculating about them. The bulk of the current function of First Thoughts will become off-topic for the site, so don't think that the new Development forum is merely combining First Thoughts and Playtesting. It's more like Playtesting without the absolutely strict playtesting requirement, instead with a concrete requirement in terms of available documents.
3. I'd like to encourage a focus on discovering and highlighting outstanding intellectual issues about role-playing, especially those which still seem problematic in the context of the Big Model, but not limited to that construction. I'd also like to encourage a more dedicated actual play discussion culture, in terms of discovering existing independent games which aren't well known, and in terms of playing older games with a critical eye. All those are good things as it stands in Actual Play, but I would like to see some great things. I also want to break the uncritical devotion to New Hawtness and Indie Designer Celebrity over my knee like a dry stick.
4. I would like to see some Endeavor forums really be exciting, and for people to learn from what works and what doesn't when organizing an on-line activity for our hobby. Obviously next year's Game Chef will be welcome again if the organizer wants, and we can use the same model perhaps to organize and add value to the Forge Midwest convention. Anyone can ask me and Vincent about setting up such a forum at any time; one of these forums' features is that they will have planned start and stop dates (with some flexibility in practice; things don't always go as planned).
5. I desperately want help in setting up the Forge Wiki we almost got going a few years back. What I have in mind is not a standard multi-user Wiki, but rather a means of understanding the ideas developed here through user-friendly explanations with organized linkage into the whole history of Forge threads. The Winter will be dedicated to making such a thing functional, perhaps including a Forge Forum specifically for bitching about the Wiki's contents once it gets going, not only for corrections purposes, but also so nuances and alternate views can be acknowledged as part of the resources too.
With any luck this will help alleviate the "sky is falling" reactions happening here and there.
Moreno, you asked about which phases are which seasons. I might have to go back and check a few time-periods on the Forge to be sure, but my current thinking is:
Spring: 2000-2004
Summer: 2004-2007
Autumn: 2007-2010
Winter: 2010-?
Very roughly, the conceptual transitional point in 2004, 2007, and 2010 could be placed in August, at GenCon, but that also means that by that point, the old season is showing its limitations and flaws, and the features of the new are becoming evident.
As it happens, transitions in topics for intellectual development, waves of new arrivals and departures at the site, and distinct shifts in the culture of design all fall into those same rough transition points.
Also, using the seasonal terms doesn't have anything to do with Ars Magica. It's a pretty standard metaphor when talking about something which is born, delivers something, and passes away. I mean, sure, it's consistent with the Ars Magica usage, so I don't mind the connection, but it's not like I was directly inspired by that game.
Best, Ron | 计算机 |
2014-23/2662/en_head.json.gz/28303 | Top 5 Paid Games for Linux
With Linux matching Windows and Mac head-to-head in almost every field, indie developers are ensuring that gaming on Linux doesn't get left behind. We've covered various types of games that are available for Linux, from the best MMORPGs to the top action-packed First Person Shooters. While most of these games are free, there are a few paid games that have come out for Linux. Here's a look at the top 5 paid games that are making noise:
Minecraft
Minecraft is a new cross-platform indie game, which has recently gained a lot of popularity. It is a 3D sandbox game, where players must try and survive in a randomly generated world. In order to do this, they must build tools, construct buildings/shelters and harvest resources. If you're still curious, then do check out the best Minecraft structures created by addicted players from around the world. Minecraft comes in two variants – Beta and Classic, both with single-player and multiplayer options. The Classic version (both single-player and multiplayer) is free. On the other hand, Minecraft Beta, which is still under heavy development, will retail at 20 Euros (that's about 28.5 USD) when finished. For the moment, the game can be pre-purchased and played as a beta for 14.95 Euros. Users who buy the beta version won't have to pay anything for the stable release once it comes out. In case you're still confused about what the whole hype is about, the original article includes a nice video explaining the basics of the game.
World of Goo
This multiple award-winning game, developed by former EA employees, has been one of the most popular games for the Linux platform. World of Goo is a physics-based puzzle game by 2D Boy that works on Windows, Linux, Mac, Wii and even iOS. The game is about creating large structures using balls of goo. The main objective of the game is to get a requisite number of goo balls to a pipe representing the exit. In order to do so, the player must use the goo balls to construct bridges, towers, and other structures to overcome gravity and various terrain difficulties such as chasms, hills, spikes, or cliffs. The graphics, music, and the effects come together to provide a very Tim Burton-esque atmosphere to the game. The game consists of 5 chapters, each consisting of multiple levels. In all, there are about 48 levels, making the experience truly worthwhile. In case you've missed it, World of Goo was part of the Humble Indie Bundle 1 and 2. However, the game can still be purchased at $19.95 from the Ubuntu Software Center or from the official website.
Amnesia Dark Descent
We've covered Amnesia in detail before. In this game, you play the role of Daniel, who awakens in a dark, godforsaken 19th century castle. Although he knows his name, he can hardly remember anything about his past. Daniel finds a letter, seemingly written by him, which tells him to kill someone. Now, Daniel must survive this spooky place using only his wits and skills (no knives, no guns!). Amnesia brings some amazing 3D effects along with spectacularly realistic settings, making the game spookier than any Polanski movie. As of now, the game retails at as low as 10 USD. Before buying, you can also try out the demo version of the game. One warning though: don't play this game with the lights turned down; it's really that scary!
Vendetta Online
Vendetta Online is a science fiction MMORPG developed by Guild Software. Quoting the website: "Vendetta Online is a 3D space combat MMORPG for Windows, Mac, Linux and Android.
This MMO permits thousands of players to interact as the pilots of spaceships in a vast universe. Users may build their characters in any direction they desire, becoming rich captains of industry, military heroes, or outlaws. A fast-paced, realtime "twitch" style combat model gives intense action, coupled with the backdrop of RPG gameplay in a massive online galaxy. Three major player factions form a delicate balance of power, with several NPC sub-factions creating situations of economic struggle, political intrigue and conflict. The completely persistent universe and detailed storyline add to the depth of immersion, resulting in a unique online experience." The game has been around since 2004, and since then it has evolved a lot, with developers claiming it is one of the most frequently updated games in the industry. Gamespot rated Vendetta as 'good', but there have been some criticisms about its limited content compared to its high subscription price. The game uses a subscription-based business model and costs about $9.99 per month. Subscribers get a discount on subscriptions for longer blocks of time, bringing the price down to $6.67 a month. A trial (no credit-card required) is also available for download on the official website.
Osmos
Osmos is a puzzle-based game developed by Canadian developer Hemisphere Games. The aim of the game is to propel yourself, a single-celled organism (Mote), into other smaller motes by absorbing them. In order to survive, the user's mote has to avoid colliding with larger motes. Motes can change course by expelling mass, which causes them to move away from the expelled mass (conservation of momentum). However, doing this also makes the mote shrink. In all there are 3 different zones of levels in Osmos, and the goal of each level is to absorb enough motes to become the largest mote on the level. With its calm, relaxing ambiance thanks to the award-winning soundtrack, Osmos creates a truly unique gaming experience. The game has received a great response so far. On Metacritic, it has a metascore of 80, based on 22 critic reviews. Apple selected Osmos as the iPad game of the year for 2010. Osmos retails at $10 for the PC, Mac and Windows versions. The game is available across Windows, Linux, Mac OS X and iOS.
Why pay?
In the free world of Linux and Open-source, many people argue that if everything is 'free', why should I pay for a game? Of course, the Linux world is free, but free doesn't mean free as in 'free beer'; the word free implies freedom. Most of the popular games for Windows now come with SecuROM and other such DRM restrictions that restrict one's fair-use rights. This means that the user will only be able to use the software on one machine, sometimes requiring constant activations. Games and other software developed in the FOSS world don't have such absurd restrictions. Users are free to use and distribute the game, and yes, there's none of that activation or cd-key nonsense. While these games respect the user's freedom, keeping them free (as in free beer) is not a viable option because developers have to devote a lot of time and money to making these games. So, shelling out a few dollars for these games will help the indie developers pay their rent as well as come up with many new games for this emerging gaming platform.
linux games, | 计算机 |
2014-23/2662/en_head.json.gz/28678 | Page: Speakers
OW2con-2013: OW2Con 2011 Speakers
OW2Con 2011 Speakers
Nicolas Barcet
Nick Barcet joined Canonical in September 2007 as Ubuntu Server Product Manager, focusing on bringing together the requirements that our users have in order to make our server product the easiest platform to deploy in business, enterprises and Internet data centers. More recently Nick is transitioning to a Cloud Architect role to help organizations (customers and partners) define their cloud strategy and build their own cloud infrastructures. Previously Nick worked at Intel as a Technical Marketing Manager and at Novell as an Identity Management consultant and pre-sales manager. As such he was involved in multiple very large deployment projects.
Florent Benoit
Florent Benoit is leading the development of the OW2 EasyBeans EJB3 container and is one of the key developers of the OW2 JOnAS Java EE application server. Florent is an expert in Java EE, and has developed critical JOnAS components. He received a master of Computer Science from the University of Grenoble (France). Florent is a member of the EJB 3.1 and Java EE 6 expert groups and is a speaker at some European Java conferences.
Gael Blondelle
Gaël Blondelle has strong experience in Open Source, and more specifically in communities like OW2 and Eclipse. Gaël Blondelle works for Obeo as Open Source Business Developer. He represents Obeo at the Eclipse Modeling Working Group. Gaël is currently JUG leader in Toulouse JUG. He co-founded Petals Link in 2004, the company which supports the OW2 projects for SOA: Petals ESB and Petals Master. Gaël acted as CTO of Petals Link until 2010. From 2007 to 2010, he was Chairman of the Technology Council at the OW2 consortium, the global open source consortium for middleware. Since he started in the software industry in 1996, he has been working mainly in Telco, Java and SOA technologies. He started at Alcatel as a software engineer on phone simulator environments and then as a research engineer on corporate mobile communication tools. By 2000, he became consultant and trainer on Java, J2EE, XML and Web Services technologies at Valtech. He then acted as a middleware architect at France Telecom before creating Petals Link.
Alain Boulze
Alain Boulze, EasiFab founder. Alain Boulze has 25 years of experience in information system design and architecture, as well as in project management. From 2004 to 2010, he was a team leader at the INRIA labs in Grenoble and led and worked on several Open Source R&D projects and communities (Eclipse, OW2, fOSSa). He then founded EasiFab out of this context in order to develop and provide online services to help streamline IT production, and especially tooling to manage the service lifecycle.
Hugo Brunelière
Hugo Bruneliere is an R&D engineer working in the field of Model Driven Engineering (MDE) for the AtlanMod team, with focuses on (model driven) reverse engineering, tool interoperability (based on model transformation) and global model management. He had notably been working, around these different topics, as the responsible for the INRIA coordination on the MODELPLEX (MODELling solution for comPLEX software systems) IST European project during three years and a half. For several years, he has been active in the Eclipse community as the leader of the MDT-MoDisco project, a committer on the EMFT-EMF Facet project and a regular user of EMF, M2M-ATL and other Eclipse Modeling projects.
He is a regular speaker at the Eclipse Community's major events, which are EclipseCon (North America) and EclipseCon Europe (former Eclipse Summit Europe), as well as an organizer of DemoCamps (in Nantes) for the yearly Eclipse Simultaneous Releases. In addition to frequently interacting with the various team's partner companies within the context of different collaborative projects, he has also published and presented more than 10 papers in various journals, conferences and workshops around MDE.
Tom Cahill
As Vice President EMEA at Jaspersoft, Tom Cahill is responsible for building the EMEA sales organization and driving both channel and direct sales. Jaspersoft is the market leader in Open Source Business Intelligence, the world's most widely used BI software, with more than eight million total downloads worldwide and more than 10,000 commercial customers in 96 countries. Prior to Jaspersoft, Tom Cahill held sales management responsibilities at a number of European and US-based software organisations with responsibility for US, European and Asian regions (Valista, Serco, CampusIT) covering both Fortune 500 accounts as well as SMB markets. Previously, in the ICT industry, Tom held the position of Managing Director at Interxion, Europe's leading operator of Internet Data Centres, where he established and managed the UK and Irish operations. Tom has also held the posts of COO at ETP and General Manager Europe at Music Control (now a Nielsen Company). Tom has a degree in International Marketing from Dublin City University and is fluent in several European languages.
Cédric Carbone, CTO, Talend
Cédric Carbone is Talend's Chief Technical Officer (since the creation of Talend) and OW2 Board Member (since the creation of OW2). He leads the technical team (100 people located in France, USA and China), sits on the Talend steering committee and the OW2 Board, and is a member of the OW2 Cloud Expert Group and OW2 BI Initiative. Prior to joining Talend in 2006, he managed the Java practice at Neurones, a leading systems integrator in France. Cédric has also lectured at several universities on technical topics such as XML or Web Services. He holds a master's degree in Computer Science and an advanced degree in Document Engineering.
Denis Caromel, INRIA Professor and ActiveEon founder
Denis Caromel is full professor at University of Nice-Sophia Antipolis and CNRS-INRIA. Denis is also co-founder and scientific adviser to ActiveEon, a startup dedicated to providing support for CLOUD Computing. His interests include parallel, concurrent, and distributed computing, in the framework of GRID and CLOUD. Denis Caromel gave many invited talks on Parallel and Distributed Computing around the world (Jet Propulsion Laboratory, Berkeley, Stanford, ISI, USC, Electrotechnical Laboratory Tsukuba, Sydney, Oracle-BEA EMEA, Digital System Research Center in Palo Alto, NASA Langley, IBM Tom Watson and IBM Zurich, Boston HARVARD MEDICAL SCHOOL, MIT, Tsinghua in Beijing, Jiaotong in ShangHai). He acted as keynote speaker at several major conferences (including Beijing MDM, DAPSYS 2008, CGW'08, Shanghai CCGrid 2009, IEEE ICCP'09, ICPADS 2009 in Hong Kong, WSEAS in Taiwan). Recently, he gave two important invited talks at the Sun Microsystems HPC Consortium (Austin, Tx), and at Devoxx (gathering about 3500 persons). http://www-sop.inria.fr/oasis/caromel/
Paolo Ceravolo
Paolo Ceravolo is an Assistant Professor at the Department of Information Technologies, UNIMI.
His research interests include Ontology-based Knowledge Extraction and Management, Process Measurement, Semantic Web technologies, Emergent Process applied to Semantics, Uncertain Knowledge and Soft Computing. On these topics he published several scientific papers and book chapters. Recently he has been conducting research activities within the research projects MAPS, KIWI, TEKNE, and SecureSCM. Currently, Paolo Ceravolo is directly involved in the ARISTOTELE projects. He is involved in the organization of different conferences such as: Innovation in Knowledge-Based & Intelligent Engineering Systems (KES), IEEE/IES Conference on Digital Ecosystems and Technologies (IEEEDEST), Knowledge Management in Organizations (KMO), OnTheMove (OTM). Since 2008 he has been secretary of the IFIP 2.6 Working Group on Database Semantics.
Pierre Chatel, Thales Communications
Pierre CHATEL is an R&T Software Engineer at Thales Communications who contributes to the CHOReOS FP7 EU project. He graduated from Pierre & Marie Curie (Paris VI) University with a PhD Thesis in Computer Science in 2010. During his PhD, he worked in the LIP6 laboratory of Paris VI and, at the same time, in the SC2 laboratory at Thales Land & Joint Systems under a 'CIFRE' grant from ANRT. There, he had the opportunity to contribute to the SemEUsE ANR project. The subject of his thesis is 'A qualitative approach for decision making under non-functional constraints during agile service composition'. Pierre was also a teacher in the Master of Computer Science at University Vincennes - Saint Denis (Paris VIII) in the field of distributed computing from 2007 to 2009, and a research engineer at LIP6 in 2010.
Jamil Chawki
Bruno Cornec, HP
Bruno Cornec has been managing various Unix systems since 1987 and Linux since 1993 (0.99pl14). Bruno first worked 8 years around Software Engineering and Configuration Management Systems in Unix environments.
Teodor has a degree in computer science from the Academy of Economic Studies in Bucharest.Frederic Dang TranFrederic Dang Tran is working as R&D Engineer on Middleware & Advanced Service Platform Department, for Orange Labs, Paris, France. Jean-Vincent Drean, XWikiJean-Vincent Drean is software R&D engineer, and technology manager for XWiki CloudRoberto Di Cosmo, IRILLRoberto Di Cosmo is an Italian born computer science investigator, based in France. He graduated from the Scuola Normale Superiore di Pisa and has a PhD from the University of Pisa, before becoming tenured professor at the École normale supérieure in Paris, then professor at the Paris 7 University. Di Cosmo was an early member of the AFUL, association of the french community of Linux and Free Software users, he's also known for his support in the Open Source Software movement. He was one of the founders, and the first president, of the Open Source Thematic Group within the System@tic innovation cluster. He also leads the new Free / Open Source Research and Initiative (IRILL) at INRIA, the largest IT research organization in Europe.Bruno Dillenseger, France Telecom Orange LabsBruno Dillenseger is a computing scientist and engineer. He has been working during the past 18 years in the area of distributed computing middleware. His contributions range from academic papers to code in open source projects. Since 2002, he is leading OW2's (formerly ObjectWeb) CLIF project, providing a highly adaptable Java framework for load testing. With this orientation 1 P stands for slide-based presentation, D for demonstration and L for lab (hands-on performed by participants)2 expected duration in minutesLudovic Dubost, XWikiA graduated of PolyTech (X90) and Telecom School in Paris, Ludovic Dubost starts his career as software architect at Netscape Communications Europe. Then he joins NetValue as CTO, one of the first French start-up that went public. He leaves NetValue after the purchasing of it by Nielsen/NetRatings and then he launches XWiki in 2004. Marc DutooMarc Dutoo, Open Wide R&D Leader. Marc Dutoo heads Open Wide's R&D department since 2006. There hecurrently leads the EasySOA project, which aims at a light Service Oriented Architecture (SOA) platform. He has 10 years experience on Open Source ECM, SOA, BPM and entreprise Java technologies. A member of the OW2 consortium Technology Council and of the Eclipse SOA Project Management Committee, he's also an occasional teacher and a regular speaker at events like Linux Solutions (2009, 2010), Eclipse Con (2010) and Summit (2008, 2009), Open World Forum (2008), ICT (2008), SOA BPM (2007).Stéfane FermigierStefane Fermigier is the founder of Nuxeo and is an evangelist for open source ECM. Prior to the foundation of Nuxeo, he co-founded and was the first president of the AFUL (French-speaking association of Linux and free software users). He's also a cofounder and currently serves as the chairman of the Open Source Working group of the Systematic innovation cluster in the Paris Region. He has published many articles and books on free software and is a regular speaker of conferences in this field.Florent GarinFlorent Garin is the co-founder of DocDoku, an IT consultancy and software development organization based in France, our headquarters are in Toulouse. We are specialized in object oriented languages, web and mobile technologies. 
Our customers are mostly big companies and innovative startups.Prior to DocDoku, Florent has 10 years’ experience working for the IT departments of fortune 500 companies, and more specifically in the PLM area in the aerospace domain.Florent is also « JEE architect » and « Java Developer » Oracle certified. He’s the author of the successful french written Android book « Concevoir et développer des applications mobiles et tactiles ».Tugduall GrallTugdual Grall is responsible of the product strategy within eXo Platform. eXo Platform provides open-source collaborative solution based on JavaEE and many associated standard (JSR-286, JSR-170, Open Social, ...). Tugdual has joined eXo Platform in June 2008. His current areas of focus are Enterprise 2.0 Portals, including ECM, Enterprise Social Networking, and IT Architecture. Before working for eXo Platform, he was principal product manager for J2EE and web services on Oracle Fusion Middleware. Tugdual joined Oracle in January 1999 initially with Oracle France in consulting, and since April 2002, he has worked with Oracle Application Server product management. His areas of focus include J2EE and web services with Oracle Containers for J2EE (OC4J). Tugdual has gained extensive development and project management experience with the Oracle development tools and the underlying architecture, working on a number of internet application development projects. Tugdual has been speaker in various conferences, and maintains an active blog at http://blog.grallandco.comChristophe GravierChristophe Gravier is researcher at Telecom Saint Etienne, University of Saint Etienne. Stephan HadingerStephan Hadinger is Chief Architect Cloud Computing for Orange Business Services. He has spent his entire career at France Telecom - Orange and has more than 15 years experience in IT. He led the desktop and messaging internal infrastructure (more than 100,000 PCs), and the IPTV and Livebox (home gateway) platforms handling millions of retail customers. His last mission was the creation of the Orange API business unit (api.orange.com) for 3rd party developers. He is now leading the global Cloud architecture at Orange both for the enterprise market and for the group's internal needs.Christophe Hamerling, PetalsLinkChristophe Hamerling is Research Engineer on Service Oriented Architecture Projects at PetalsLink, a French Open Source SOA Software Editor and active OW2 member.Christophe is currently working on European Research projects such as SOA4All (http://soa4all.eu) as main Developer and Architect of the large scale distributed SOA infrastructure involving OW2 based middleware technology such as OW2-Petals Enterprise Service Bus and OW2-ProActive Framework. Christophe is also OW2-Petals SOA product family core commiter and focus his technology interests on Distributed/Cloud Computing, Open Source and Java stuff.His next challenge : Leading PetalsLink Cloud activity to enable SOA in the Cloud.Christophe shares his technology life (and more) on http://chamerling.org and on http://twitter.com/chamerlingBenjamin JaillyBenjamin Jailly is a PhD Student at Télécom Saint-Etienne and Télécom SudParis, two engineering schools of the same group "Institut Télécom". He received his Master Degree in telecommunication with a specialization in Computer Vision at Télécom Saint-Etienne in 2009. His current research work funding come through a project call named "Futur&Ruptur". 
His Research interests deal with the introduction of multimedia such as interactive video tool and augmented reality in conjunction with semantic Web approaches in order to propose new online services such as remote control of devices thanks to multimedia streams.Rodrigue Le GallRodrigue Le Gall Co-Founded BonitaSoft SA and serves as the Chief Scientific Officer. Mr. Gall is incharge of BonitaSoft Services & Support operations. Prior to BonitaSoft he was head of the Bonita Design and Web development group for Bull, the French Integrator. Mr. Gall has also a strong experience as IT Consultant in different major companies such HP. Mr. Gall holds a Master degree in Computer Science from ENSIMAG (France).Ignacio Llorente, OpenNebulaIgnacio M. Llorente, Ph.D in Computer Science (UCM) and Executive MBA (IE Business School), is a Full Professor in Computer Architecture and Technology and the Head of the Distributed Systems Architecture Research Group at Complutense University of Madrid, and Chief Executive Advisor and co-founder of C12G Labs, a technology start-up. He has 17 years of experience in research and development of advanced distributed computing and virtualization technologies, architecture of large-scale distributed infrastructures and resource provisioning platforms. His current research interests are mainly in the area of Infrastructure-as-a-Service (IaaS) Cloud Computing, co-leading the research and development of the OpenNebula Toolkit for Cloud Computing and coordinating the Activity on Management of Virtual Execution Environments in the RESERVOIR Project, main EU funded research initiative in virtualized infrastructures and cloud computing. He founded and co-chaired the Open Grid Forum Working Group on Open Cloud Computing Interface; and participates in the European Cloud Computing Group of Experts and in the main European projects in Cloud Computing.Bernard LupinBernard Lupin is software architect at Orange - IST (Global Information System and Technology), he's in charge of the Java Skill Center. Bernard works on an internal PaaS ( Platform as a Service) project, running on top of the JOnAS application server. He is also contributing and supporting various Orange Java Enterprise projects. Juho Makkonen, Avoin Interactvie OyJuho Makkonen, M.Sc. (Tech.), is an entrepreneur with a background in research, web development, and online community building. He is a co-founder and the CEO of Avoin Interactive Oy, a startup company that is developing the Kassi web service. Before founding the company Juho worked over three years in Aalto University as a researcher and a web developer. His prior work history includes various positions in the areas of software development, web design, and journalism.Jamie MarshallJamie Marshall is the Chief Technical Officer of Prologue (France), a renowned industry leader in the field of cloud computing. Jamie is the author of the business application languages ABAL and ABAL++ based on virtual microprocessor technologies and precursors to Java. He has been working in this field for the past 20 years.He is currently managing the team of IT experts for Prologue working hand in hand with partners around the world in a variety of projects in the fields of cloud computing and thin client terminal services. Prior to joining Prologue in 1986 Jamie worked in the banking and finance IT sector for Burroughs Machines Ltd in the UK addressing . He is english native.Philippe MerlePhilippe MERLE is a senior researcher at INRIA in the ADAM research-team. 
He obtained its PhD thesis in 1997 at the University of Lille. Its research covers software engineering and middleware for adaptable distributed applications. He is involved in OW2 since its first days. He was the president of the OW2 College of Architects (ex Technical Council). He is the leader of three OW2 projects: FraSCAti, Fractal, and OpenCCM. Currently, its main involvement is on the OW2 FraSCAti project, which targets the next generation of SOA runtime platforms.Patrick MoreauPatrick Moreau arrived in 2009 at the Technology Transfer and Innovation Department of INRIA as head of software assets. He has worked nine years in the industry, in the departments of R & D of Schlumberger. After an experience project manager, he was responsible for a development department of electronics and software. He then directed the research laboratory in communications and embedded computing. Patrick Moreau then has worked eight years in technology consulting companies in management positions.Frederic MunozFreddy joined Antelink in August 2010, after working for 3 years at INRIA. During his period at INRIA, Freddy worked actively in the European project DiVA (Dynamic variability in complex adaptive systems) mainly as a research assistant and integration manager. As a research assistant his work focused on model driven engineering, self-adaptive systems, empirical studies, and software testing. As an integration manager he focused on the establishment of a continuous integration policy and quality metrics. Freddy obtained in 2007 a Master in Computer Science from INSA Rennes and in 2010 a PhD in Computer Science from the University of Rennes 1.Benoit PelletierBenoit Pelletier is a software engineer for BULL. He obtained his degree in computer science from INSA Rennes in France. During 7 years, he worked in several large projects at BULL Service as developper and then as designer. His expertise is mainly focused in the area of technologies for distributed systems. Nowadays, he has joined the JOnAS team at BULL R&D France where he is involved in the clustering and administration features.Marius PredaMarius Preda is researcher at Telecom Saint Etienne, University of Saint Etienne, France. Guillaume PierreGuillaume Pierre is an associate professor at VU University Amsterdam. His research focuses on the management of very large-scale distributed systems. He particularly studied Web applications as a good example of demanding large-scale systems, but I am also interested in domains such as peer-to-peer systems, Grid and Cloud computing. His researchaddresses a variety of questions such as how to make applications scale, how to control their non-functional properties, and how to design large-scale decentralized infrastructures where they can be deployed.Guillaume Pierre is a member of the Contrail European research project (and member of the OW2 consortium) which builds an open-source Cloud integrating Infrastructure as a Service (IaaS), services for federating IaaS Clouds, and Platform as a Service (PaaS) on topof federated Clouds.Olli Pitkänen, Aalto UniversityOlli Pitkänen is a senior research scientist and a docent at Aalto University. He holds a doctorate in information technology, a master's degree in software engineering, and a master's degree in laws. He has worked as a researcher and a teacher at Aalto University, at Helsinki University of Technology and at Helsinki Institute for Information Technology HIIT (http://www.hiit.fi) since 1993. 
Prior to academia he had worked as a software engineer and practiced law in the private sector. He has also been a member of the board in several ITcompanies. In 1999-2001 and 2003, he was a visiting scholar at University of California, Berkeley. He has also been a visitor at The Interdisciplinary Centre for Law and Information & Communication Technology, K.U. Leuven, Belgium. His research interests include legal, societal, and ethical issues related to future media, digital services, and information and communication technologies (ICT).Mathieu PoujolMathieu Poujol is Senior Consultant and Manager for IT Strategy and Marketing. Christian RazaChristian Raza is Sales Director at Actian Corporation (formerly Ingres). Alban RichardAlban Richard is CEO of UShareSoft, bringing more than 25 years experience in the IT industry, including engineering, marketing and product management executive roles, with P&L and general management responsibilities. He has a successful track record for building more than 20 server software product lines, including world's leading technologies such as Oracle's Directory Server Enterprise Edition and Sun Java Management Extensions (JMX) for which Richard was one of the two inventors. Prior to joining UShareSoft, Richard spent more than 18 years with Sun Microsystems in the US and France, most recently leading Sun's Directory Server Enterprise Edition product line and managing the company's R&D Engineering Center (GEC) in Europe. Richard is also a Board Member for the OW2 Consortium. Philippe RoosePhilippe Roose is associate professor at the University of Pau (Anglet) since 2001. He obtained his PhD thesis in 2000 and his upper French thesis (HdR) in 2008.He is member of the LIUPPA computer science laboratory, team T2I-Alcool. He created the ALCOOL Group (Software Architecture, Components and Protocols) 2007. The last ten years, he published more than 40 scientific papers, books, chapters. He supervised more than 6 PhD thesis.In 2006 he obtained and was leader of a three years ANR Grant for a project called “TCAP - Video-flow transport over wireless sensor networks”.In 2010 he obtained and still is the leader of another three years ANR Grant called “MOANO - Models and Tools for pervasive applications focusing on Territory Discovery”. This project involves four public CS Labs and an INRIA team.His research domain areas are adaptation, middleware, software architecture, components and services, mobile and distributed applications and multimedia information system.Marc SallièresMarc Sallieres is CEO and co-founder of Altic, integrator of open source BI solutions. Guillaume Sauthier, Bull S.A.S.Guillaume Sauthier is currently holding a senior developer position in Bull where he is working on the OW2 JOnAS application server since 2003. He is now responsible of the JOnAS 5 OSGi architecture, ensuring a best usage of this technology. Guillaume is also fluent in WSDL and speaks XML from the time he's been involved in Apache Axis (the first), now he is a contributor on Apache Felix, iPOJO and CXF projects, without speaking about his daily work on OW2 projects (JOnAS, EasyBeans, …). OSGi is also on his skills board, being a power user of Felix and iPOJO for at least 4 years. He's been involved in the OW2 community since the beginning (since Objectweb in fact), being part of the technology council, helping on Opal, managing our Bamboo instance, proposing new tools for OW2, ...Stefano Scamuzzo, Engineering Stefano Scamuzzo has been working in IT field since 1989. 
Initially involved in European research projects on hypertext technology, he then undertook the technical management of complex projects in several technological areas such as document and workflow applications, web based applications, enterprise portals and business intelligence applications. He is presently Senior Technical Manager in the Research and Innovation Division of Engineering Ingegneria Informatica and member of the SpagoWorld Executive Board, mastering the domains of Service Oriented Architecture and Business Intelligence with a particular focus on open source solutions. He teaches training courses on Service Oriented Architecture at the Engineering Group ICT Training School in Italy. Alessandra ToninelliAlessandra works as a consultant for SpagoBI (www.spagobi.org). She is involved in BI projects, as well as communication and training activities on SpagoBI. Alessandra obtained her PhD in Computer Science Engineering at the University of Bologna. She has a solid background in IT research, developed over many years of international work experience on semantic web technologies for data representation. She has authored several publications for international journals and peer-reviewed conferences.Stephane Woillez, MicrosoftStephane Woillez is Cloud Computing Consultant at Microsoft, Technical Advisor for the Windows Azure platform. Before Microsoft, Stephane has worked 10 years for IBM, first as Security Consultant, then Software Technical leader for the Communication sector. Stephane was the Cloud leader of the IBM Software Group in France. Before IBM, Stephane worked for France Telecom as project manager for a technical integration project. Stephane has a research degree in parallel computing; he worked at the ONERA on the parallelization of the FLU3M application, a fluid dynamic code used for the Arianne 5 projectJunfeng Zhao, Pekin UniversityDr. Minghui Zhou Dr. Minghui Zhou is very interested in conducting research in summarizing system evolution data and improving the understanding and control of such systems. She has been leading a team to work on open source middleware for a long time, and looking for approaches to help global distributed development. Currently she is an associate professor in School of Electronics Engineering and Computer Science, Peking UniversityRainer Zimmermann | 计算机 |
2014-23/2662/en_head.json.gz/30580 | BLM>Idaho>Programs>Cadastral Survey
Cadastral Survey: In the Spotlight
Survey plats for Idaho are available for downloading at the BLM GLO Records web site. The plats may be searched for by State, County, Township number and direction, Range number and direction, Meridian and Survey type.
The Request for Cadastral Survey Form 9600-4 is available for downloading. This is a fillable Adobe PDF.
Cadastral Survey
The Branch of Cadastral Survey is responsible for surveying all public lands in the United States.
History:
The rectangular survey system, also known as the Public Land Survey System (PLSS), was developed from 1785 to about 1849.
Until 1785 all land descriptions and surveys were by the indiscriminate method of the metes-and-bounds system.
The first surveys, using Thomas Jefferson's idea of a rectangular survey system, were done north of the Ohio River and south of Lake Erie.
The General Land Office (GLO), under the Treasury Department, was created in 1812 to oversee the land surveys and sales of public land. The GLO was later placed under the Department of the Interior when it was established in 1849.
In 1946 the GLO, the Grazing Service, the Oregon and California Administration, Alaska Fire Control, and others were joined to form the Bureau of Land Management, making Cadastral Survey the oldest program within the BLM.
The Public Land Survey System forms the basis for all legal land descriptions in the United States except for the original 13 colonies, Texas, Hawaii, and portions of Louisiana. These legal land descriptions are needed to meet the requirement that all federal land, to be sold or otherwise conveyed, must be surveyed and have a legal land description.
Location:
The Branch of Cadastral Survey responsible for surveying public lands in Idaho is located at:
Bureau of Land Management
Idaho State Office
1387 S. Vinnell Way
Boise, Idaho 83709
Field offices are located at Burley, Boise, Coeur d'Alene, Fort Hall, Idaho Falls, and Lapwai.
Stanley French, Chief, Cadastral Surveyor, 208-373-3981
Jeff Lee, Field Chief, 208-373-3984
G. Mike Dress, Office Chief, 208-373-4094
2014-23/2662/en_head.json.gz/30857 | Web Page Basics
Graphics, Sound, and Video
Lists, Tables, Frames
CSS3 For Dummies Extras
Finding Sound Files to Include on Your Web Page
By Bud E. Smith from Creating Web Pages For Dummies, 9th Edition
It’s fairly easy to set up a sound file on your Web site to play by download. If the user has the necessary software and hardware set up, he or she can click your sound file link. Although, it’s easy to find an MP3 file on the Web, a lot of conditions limit how you can use them. Also, some people who want to distribute viruses, spyware, and other malware use MP3 files to attract visitors to their sites and the innocent visitors leave with more than they bargained for.
The most desirable MP3 files are free copies of hit songs. These files are illegal and attract increasingly aggressive litigation and prosecution. So, although they’re common on the Web, they tend to be hidden away or only available to those who use file-sharing services (which may themselves carry viruses, spyware, and other such dismal free gifts).
One trustworthy source for virus-free, legal, free MP3s is the music-download section of CNET’s download.com. Not many hit tunes are there, but not many problems are, either.
You can search the Web for other sites, many of which are trustworthy, but make sure your virus-protection software is up to date and running first!
Check the permissions on any MP3 file you find carefully. Once you’re certain the file is free of copyright problems and available for free use, you can do either of two things:
Link to it in place from your own site: This is easy and convenient. If this gives the appearance of the file being on your site, there is the possibility of problems with copyright, as you’ve made someone else’s property appear to be your own, so make sure the file is free to use. If you bring a lot of file downloads to the site, you’re also abusing their bandwidth, as someone else’s bandwidth is serving your purposes. This is unlikely to be a problem for an occasional download, but for heavy use it’s unfair to the file’s host.
Download the file and transfer it to your own site: If permissions allow this, it gives you control and responsibility. It also gives you the bill if there are a whole lot of downloads and you’re paying for download bandwidth.
In either case, to make the file available for download, you link to it from your Web page to the location where it’s hosted, whether that’s on someone else’s site or your own site.
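For example, a bare-bones download link is just a standard HTML anchor tag pointing at the MP3 file's address. (The file name song.mp3, the sounds folder, and the example.com address below are hypothetical placeholders; substitute the actual location and name of your sound file.)
<a href="http://www.example.com/sounds/song.mp3">Download the song (MP3)</a>
If the file sits on your own site instead, point the link at your own server, for example <a href="sounds/song.mp3">. When a visitor clicks the link, the browser either downloads the file or hands it off to whatever media player is set up on that computer, which is why the earlier point about the user having the necessary software and hardware still applies.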
Creating Web Pages For Dummies, 9th Edition | 计算机 |
2014-23/2662/en_head.json.gz/31183 | Government // Open Government
7/19/201310:30 AMElena MalykhinaNewsConnect Directly4 commentsComment NowLogin50%50%
Massachusetts Computer Services Tax Riles IT IndustryProposed 6.25% sales tax on certain computer and software services would slow innovation and hurt the state's tech industry, opponents say.Massachusetts' legislature has proposed imposing a 6.25% sales tax on computer and software services. Opponents of the bill say the new tax would negatively affect the state's technology industry, create a drag on innovation and hurt other industries and businesses that use these services. The tax is part of a transportation bill meant to address the Massachusetts Bay Transportation Authority's budget deficit. The bill was adopted by the House of Representatives on July 17 and by the Senate on July 18. It initially passed in June, but Massachusetts Governor Deval Patrick returned it to lawmakers in a dispute over toll revenue. If enacted, the legislation would enforce a 6.25% sales tax on certain computer and software services in the state. The stakes could be significant. Massachusetts is the sixth-largest tech employer in the U.S., and 9% of the state's private sector talent works at tech firms, according to industry trade group TechAmerica.
[ Former Department of Transportation CIO Nitin Pradhan shares his 5 Habits Of Highly Effective Government IT Leaders. ]
The tech industry, however, had been slow to challenge the proposed tax, according to Senator Stephen Brewer, who chairs the Senate Committee on Ways and Means. "We heard precious little from industry," he told the Boston Globe. That's changed in recent days as opponents have challenged the bill, arguing that it would put the state at a competitive disadvantage. TechAmerica issued a letter to members of the Massachusetts General Court asking to "fix the computer system design services tax so that Massachusetts can maintain -- if not strengthen -- its place as a leading high-tech state." "While we understand the need to fund critical transportation infrastructure projects, there must be careful consideration of the impact that new taxes pose on businesses. [The tax] does not strike that balance and punishes businesses -- particularly the technology sector," Kevin Callahan, TechAmerica's director of state government affairs for Massachusetts, said in the letter. "The purpose of the tax is to increase state revenue; however, driving business out of Massachusetts would ultimately have the opposite result."
Callahan also contended that the effects would be felt not only by the IT sector, but many other industries as well, including retailers, restaurants, banks and healthcare providers. Consumers, too, would "bear the weight of this new tax," said Callahan.
Another group, the Massachusetts Taxpayers Foundation, said the tax on computer services could cost the state's employers an additional $500 million annually. "The tax takes clear aim at the state's innovation economy, which is the essence of the state's competitive edge and at the core of its economic future. Many of the key investments in computers and software that help to incubate groundbreaking discoveries and cutting-edge ideas will now be subject to the sales tax," the nonprofit research organization said in a statement reacting to the initial bill in June.
Only three other states have a sales tax on computer services -- New Mexico (5.1%), Hawaii (4%) and South Dakota (4%).
majenkins,
re: Massachusetts Computer Services Tax Riles IT Industry Every time someone says they plan to do something in the tech industry that someone else doesn't like one of first, if not the very first, things the people that don't like it say is it will slow/hurt innovation. Well if innovation were really that fragile then it would have been completely killed by now. I am not saying this tax is a good thing or a bad thing, heck everybody wants more money, but please stop with the clich+�d Gǣthis will slow/hurt innovationGǥ nonsense. Just find another dead horse to beat for a while, canG��t you.
re: Massachusetts Computer Services Tax Riles IT Industry It would be nice if this article would have shed some light on what exactly is going to be subject to a sales tax and what still isn't taxable in that state before and after the new law. Here in Ohio, I've always had to charge my clients the state sales tax on certain types of computer work, but not for all types of work, so perhaps Massachusetts is just catching up to Ohio. I'm also wondering what to think about the last line in this article that fails to include Ohio, since I charge sales tax for some computer work! Sadly, the article fails to clarify any important facts regarding the proposed tax, such as whether it's just normal repair, hardware, and software sales that are taxable or otherwise, and it makes me think there's more to this story than the author bothered to investigate and include in the article - just copying the A.P. wire newsfeed, are we?
bshajenko01401,
re: Massachusetts Computer Services Tax Riles IT Industry majenkins: You're right, innovation doesn't just stop. But it does pack up and move elsewhere. Back in the 70's and 80's, Connecticut had a booming high-tech industry that was not only able to offer competitive jobs to all engineering and technology graduates, but also entice those from other states. In fact, CT was called the "High-Tech Corridor" between Boston and NY. Then in the late 80's they passed new legislation to control growth in high-tech with new taxation. What followed was a collapse of the high-tech industry as companies relocated to other states. To this date, CT has not recovered. It's a nice state, as long as your profession is NOT in high tech or in any way related to high tech. Since engineering talent is a scarce resource, and at the top of the food chain, it is fragile and can affect so many others if critical mass cannot be maintained. A bill, just like the one proposed, can trigger major investors to move their R&D and IT centers to other states, and if R&D leaves, others follow. What executive would be against saving their company an additional 6.25% on R&D and IT costs? Doing business in MA already carries a high cost of living as well as high income, property, sales, and corporate taxes. Adding an additional burden could be the tipping point. I believe it is too high a price to pay for covering the transportation deficit.
PeterW408,
re: Massachusetts Computer Services Tax Riles IT Industry Tech in general is global, and IT services - meaning software development - is an extremely competitive global market. Adding a local tax simply pushes work from local vendors to non-local ones. 6.25% is a big number; we win or lose contracts for less than 2% difference. It worries me that I will have to charge 6.25% tax to any clients that I have in MA. But it worries me more that any vendors that I use (1099 staff) will have to charge me 6.25%, and I will have to mark that up (2.5x) and pass it on, and then charge 6.25% on top of that. The work to figure this out will wind up costing way more than the 6.25%, and I don't yet know what I can pass on and what I can absorb. I do know that national and global companies that do business in MA will assume that they might be on the hook regardless of where the work is performed, and simply choose to pick a different vendor. This is a disaster, for me, and for small software companies in MA. | 计算机 |
2014-23/2662/en_head.json.gz/32309 | Jabber XML Protocol
Principal URIs
General: Articles, Papers, News, Reports
[June 2007] Jabber Software Foundation (became XMPP Standards Foundation in January 2007) was founded in 2001 as an open forum for definition and extension of the streaming XML technologies that grew out of the open-source Jabber project started by Jeremie Miller in 1999. Since the beginning, the organization has focused on defining open protocols rather developing open-source software. In 2002, the JSF contributed the core Jabber protocols to the Internet Engineering Task Force (IETF) under the name Extensible Messaging and Presence Protocol (XMPP). Meanwhile, the JSF has continued to lead the Jabber/XMPP developer community through publication of XMPP extensions, hosting of interoperability testing events, and deployment of a certification authority for XMPP servers... The XMPP Standards Foundation (XSF) builds open protocols for presence, instant messaging, and real-time communication and collaboration on top of the IETF's Extensible Messaging and Presence Protocol (XMPP), and also provides information and infrastructure to the worldwide community of Jabber/XMPP developers, service providers, and end users. Widely considered the lingua franca of instant messaging, XMPP is an Internet standard for presence, real-time messaging, and streaming Extensible Markup Language (XML) data that grew out of the popular Jabber open-source technologies first released in 1999. With approval of XMPP by the IETF in 2004, the XSF continues to develop XMPP extensions that meet the needs of its many stakeholders: open-source and commercial developers (including Apple, HP, Nokia, and Sun), organizations large and small (including the U.S. defense establishment and most Wall Street investment banks), Internet and mobile service providers (including Google, NTT, and Portugal Telecom), and an estimated 40-50 million end users worldwide..."
Jabber is an open, secure, ad-free alternative to consumer IM services like AIM, ICQ, MSN, and Yahoo. Under the hood, Jabber is a set of streaming XML protocols and technologies that enable any two entities on the Internet to exchange messages, presence, and other structured information in close to real time.
[September 21, 2000] Jabber: 'open source XML-based instant messaging.' "Essentially, Jabber defines an abstraction layer utilizing XML to encode the common essential data types. This abstraction layer is managed by an intelligent server which routes data between the client APIs and the backend services that translate data from remote networks or protocols. By using this compatible abstraction layer, Jabber can provide many aspects of an Instant Messaging (IM) and/or Presence service in a simplified and uniform way. At the core, Jabber is an API to provide instant messaging and presence functionality independent of data exchanged between entities. The primary use of Jabber is to give existing applications instant connectivity through messaging and presence features, contact list capabilities, and back-end services that transparently enrich the available functionality."
"XML is used in Jabber to define the common basic data types: message and presence. Essentially, XML is the core enabling technology within the abstraction layer, providing a common language with which everything can communicate. XML allows for painless growth and expansion of the basic data types and almost infinite customization and extensibility anywhere within the data. Many solutions already exist for handling and parsing XML, and the XML Industry has invested significant time in understanding the technology and ensuring full internationalization. XML Namespaces are used within all Jabber XML to create strict boundaries of data ownership. The basic function of namespaces is to separate different vocabularies of XML elements that are structurally mixed together. By ensuring that Jabber's XML is namespace-aware, it allows any XML defined by anyone to be structurally mixed with any data element within the protocol. This feature is relied upon frequently within the protocol to separate the XML that is processed by different components."
The Jabber Project: Statement on IETF Activity: "Jabber is an open development project and is commited to fully support any open real-time messaging protocols, including the IETF recommended protocol. When such protocol is available, users of Jabber software and services will automatically be allowed to communicate with users of the IETF protocol. As support for the IETF efforts grows, Jabber is aiming to create a leading open-source platform around its IETF support. The IMPP Working Group is currently entering the design phase for its protocol. Developers of the Jabber Project have been following the creation of the existing requirements draft closely, and expect to participate in the development of the protocol within the IMPP group. Based on the requirements draft, the protocol already used internally to Jabber is an existing possible candidate for the recommended protocol. As the IETF activity in designing the protocol continues, Jabber will be aligned as closely as possible to the discussions and requirements, and lobbied as a test platform for IMPP development efforts."
On XML (Instant) Messaging, see [1] Common Profile for Instant Messaging (CPIM); [2] Jabber XML Protocol; [3] WAP Wireless Markup Language Specification; [4] MessageML; [5] XML Messaging Specification (XMSG); [6] Wireless Village Initiative.
Jabber.org Web site Jabber Software Foundation. In January 2007, the corporation announced a change of name to the XMPP Standards Foundation (XSF).
Jabber.com - Business support for Jabber
Jabber, Inc. FAQ document "Jabber, Inc. develops and markets flexible, extensible, scalable, and secure presence-powered, real-time messaging solutions, including enterprise instant messaging (EIM) software for the enterprise, government, communications / service providers, and financial services markets. Its flagship products, the Jabber Extensible Communications Platform. Jabber XCP and JabberNow, are based on the Internet Engineering Task Force (IETF)-approved Extensible Messaging and Presence Protocol (XMPP)."
Jabber: User FAQ document
Jabber User Guide
Jabber Software Clients "There are literally hundreds of Jabber clients."
Wikipedia: list of Jabber client software
Jabber mailing lists
Jabber Wiki
See also: Extensible Messaging and Presence Protocol (XMPP)
[October 4, 2004] "IETF Publishes XMPP RFCs Core Jabber Protocols Recognized As Internet-Grade Technologies." - "The Internet Engineering Task Force (IETF) today officially published the specifications for the Extensible Messaging and Presence Protocol (XMPP) as RFCs within the Internet Standards Process. These documents, which formalize the XML streaming protocols first developed by the Jabber open-source community in 1999, are the result of two years of work by the IETF's Extensible Messaging and Presence Working Group and represent the state of the art in open instant messaging (IM) and presence technologies. The specifications published today are as follows: (1) RFC 3920: Extensible Messaging and Presence Protocol (XMPP): Core — The core XML streaming technology that powers Jabber applications, including advanced security and internationalization support. (2) RFC 3921: Extensible Messaging and Presence Protocol (XMPP): Instant Messaging and Presence — Basic IM and presence extensions, including contact lists, presence subscriptions, and whitelisting/blacklisting. (3) RFC 3922: Mapping the Extensible Messaging and Presence Protocol (XMPP) to Common Presence and Instant Messaging (CPIM) — A mapping of XMPP to the IETF's abstract syntax for IM and presence. (4) RFC 3923: End-to-End Signing and Object Encryption for the Extensible Messaging and Presence Protocol (XMPP) — An extension for interoperable, end-to-end security... "Combined with XMPP development and integration by the likes of Apple, HP, Oracle, and Sun, publication of these RFCs is yet another vote of confidence in the power of Jabber technologies," said Peter Saint-Andre, Executive Director of the Jabber Software Foundation and editor of the XMPP specifications. "We now have a stable, secure foundation for developing a wide range of presence and messaging applications and for building out the real-time Internet." In contributing XMPP to the Internet Standards Process, the JSF ceded change control over its core technologies to the IETF. Now that the protocols have passed through the IETF's rigorous cross-area and security review, attention turns to the enormous base of Jabber servers, clients, and code libraries, which are currently being upgraded to comply with the XMPP specifications. In addition, the JSF continues to develop many popular XMPP extensions through its JEP series, covering everything from advanced IM and extended presence, to real-time content syndication, to bindings for SOAP and other application protocols..." See: (1) "Extensible Messaging and Presence Protocol (XMPP)"; (2) "Jabber XML Protocol."
[September 22, 2004] "Jabber Readying IM Appliance for SMBs." By Ryan Naraine. From InternetNews.com (September 22, 2004). "Instant messaging software firm Jabber Inc. has announced plans to release a plug-and-play IM appliance designed for small- to medium-sized businesses (SMBs). The new appliance is scheduled to ship in the first quarter of next year and will run on the Jabber Extensible Communications Platform (Jabber XCP). Jabber, which is working on an XMPP-to-SIP Gateway to achieve interoperability with IBM's Lotus IM product, said the release of an SMB instant messaging appliance will coincide with a new version of the Jabber XCP Platform and an upgraded Jabber Messenger desktop client. Jabber XCP is a presence, messaging and XML routing infrastructure that is used to create real-time applications, systems and services. Enterprise customers use Jabber XCP to presence-enable real-time application like workflow systems, transactional financial trading systems, alert and notification systems and customer service portals... It will also support Web Services to embed presence and messaging into other applications using SOAP-based APIs and a Presence Mirror to allow access to users' availability information via a database connection. Jabber said the product suite will also include wireless instant messaging clients for RIM (BlackBerry), PocketPC, SmartPhone, J2ME, Symbian, SMS and WAP..."
[December 24, 2003] "Jabber XCP Generates Corporate IM." By Michael Caton. In eWEEK (December 16, 2003). "Jabber Inc.'s Jabber Extensible Communications Platform has a lot under the covers that brings IM beyond user-to-user communications. Unfortunately, Jabber XCP lacks the graphical management tools found in competing products. Jabber XCP 2.7 is available now, priced at $30 per user. In eWEEK Labs' tests, we found a good deal to like in the way Jabber XCP and its included Jabber Messenger work together to deliver instant messaging, but the lack of a management console is a troubling shortcoming of the platform. In terms of base price, Jabber XCP is competitive with Microsoft Corp.'s Live Communications Server 2003. It costs much less than IBM's Lotus Sametime 3.1 but doesn't offer Sametime's Web conferencing features. Jabber Inc. originated out of the Jabber Open Source Project, when Webb Interactive Services Inc. created a software company around the core developers of the original open-source Jabber server. Open-source versions of products that leverage XMPP (Extensible Messaging and Presence Protocol), the XML-based Jabber communications protocol, are available through the Jabber Software Foundation at www.jabber.org. The JSF manages the standardization process for adding extensions to XMPP for backward compatibility. The Jabber XCP product differs from the open-source Jabberd server in that it is a multithreaded and modular application. Jabber offers an interesting wrinkle on IM As a framework application, Jabber XCP offers companies a flexible platform for delivering IM- and presence-aware applications. Overall, we liked the IM experience Jabber XCP provides, including its default options for indicating presence, which are broader than those in competing enterprise IM clients, and its ability to customize the Jabber IM client... Because Jabber XCP relies heavily on XML as the core to communications, seeing how the product works and making modifications can be relatively straightforward. For example, customizing the client's look and feel essentially involves making changes to three XML files..."
[August 12, 2003] "Instant logging: Harness the Power of log4j with Jabber. Learn How to Extend the log4j Framework with Your Own Appenders." By Ruth Zamorano and Rafael Luque (Orange Soft). From IBM developerWorks, Java technology. August 12, 2003. With source code. ['Not only is logging an important element in development and testing cycles -- providing crucial debugging information -- it is also useful for detecting bugs once a system has been deployed in a production environment, providing precise context information to fix them. In this article, Ruth Zamorano and Rafael Luque, cofounders of Orange Soft, a Spain-based software company specializing in object-oriented technologies, server-side Java platform, and Web content accessibility, explain how to use the extension ability of log4j to enable your distributed Java applications to be monitored by instant messaging (IM)'] "The log4j framework is the de facto logging framework written in the Java language. As part of the Jakarta project, it is distributed under the Apache Software License, a popular open source license certified by the Open Source Initiative (OSI). The log4j environment is fully configurable programmatically or through configuration files, either in properties or XML format. In addition, it allows developers to filter out logging requests selectively without modifying the source code. The log4j environment has three main components: (1) loggers control which logging statements are enabled or disabled. Loggers may be assigned the levels 'ALL, DEBUG, INFO, WARN, ERROR, FATAL, or OFF'. To make a logging request, you invoke one of the printing methods of a logger instance. (2) layouts format the logging request according to the user's wishes. (3) appenders send formatted output to its destinations... The log4j network appenders already provide mechanisms to monitor Java-distributed applications. However, several factors make IM a suitable technology for remote logging in real-time. In this article, we cover the basics of extending log4j with your custom appenders, and document the implementation of a basic IMAppender step by step. Many developers and system administrators can benefit from their use..."
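The log4j article above describes the appender extension point without reproducing its code. As a rough sketch of the mechanism it relies on — not the article's actual IMAppender — the class below follows the classic log4j 1.x pattern: extend AppenderSkeleton, format each event with the configured layout, and hand the result to a delivery step. The delivery step is stubbed out here; a Jabber-based appender would push the formatted string over an IM connection instead.

```java
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

/**
 * Minimal custom appender in the log4j 1.x style. The class name and the
 * stubbed delivery method are invented for this illustration.
 */
public class StubIMAppender extends AppenderSkeleton {

    /** Called by log4j for every logging event that passes the filters. */
    @Override
    protected void append(LoggingEvent event) {
        // Use the layout configured in log4j.properties / log4j.xml
        // to turn the event into a line of text.
        String formatted = this.layout.format(event);

        // Stand-in for real delivery: an IM appender would send this
        // string to a Jabber recipient rather than print it.
        deliver(formatted);
    }

    private void deliver(String message) {
        System.out.print("[IM stub] " + message);
    }

    /** This appender needs a layout to format events. */
    @Override
    public boolean requiresLayout() {
        return true;
    }

    /** Release resources (e.g., close the IM connection) on shutdown. */
    @Override
    public void close() {
        this.closed = true;
    }
}
```

Configuration would then reference the class by name in log4j.properties, just as with the built-in appenders, so applications pick up the new destination without code changes.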
[June 02, 2003] "HP Turns to Jabber for Enterprise IM." By John K. Waters. In Application Development Trends (June 02, 2003). "Instant messaging (IM) is fast emerging as a useful and productivity-enhancing enterprise technology, and many businesses have begun to embrace it in a serious way. In fact, according to the Gartner Group, instant messaging is proving to be a real driver of enterprise communications as companies seek to integrate IM and so-called presence technologies into their enterprise applications. Gartner analyst Maurene Caplan Grey believes that vendor alliances in the IM space are fueling the current drive toward adoption of a common IM and presence protocol... One example of this trend can be seen in the alliance between industry heavyweight Hewlett-P | 计算机 |
2014-23/2662/en_head.json.gz/35549 | Musings: Computer Thoughts and Info
Random thoughts on computing, systems administration, and programming - including tutorials and example code in Perl, PHP, and VBS - and anything else that wanders into mind.
N810 - I Am Free
I've recently purchased a Nokia N810 Internet Tablet. I spent a lot of time reviewing the reviews, checking the specs, comparing it to other devices, and it came out on top. The deciding factor was a little application that can be downloaded and installed. It's called "I-am-free" and is maintained by Owen Williams. This application displays a picture of a shiny gem, nothing more. Now, I've not actually installed it, and have no intention of doing so. The fact that it exists is enough.
Its existence is part joke but mostly a statement of beliefs. You see, for a few days, there existed an application on the iPhone store called "I Am Rich." It was put up by Armin Heinrich and it displays a picture of a shiny gem, nothing more. The difference between this application and the I-am-free application is, of course, price. The "I Am Rich" app cost $999, the maximum the iPhone store allows. If you think that no one would be insane enough to spend a grand on something that does nothing, well, eight people say you're wrong. I suppose it would be more than eight if Apple hadn't pulled it off the iPhone store within a few days.
Thus, we have the real difference between the iPhone and the N810. The iPhone is a locked-down proprietary system where nearly everything you do with it will cost money. The N810 runs a version of Linux and nearly everything you can do with it is free. Sure, the N810 includes a GPS navigation application that wants you to pay for a subscription to get the advanced features, but there are several other completely free mapping apps that you can install. The N810 software repositories, the Linux way of distributing software, contain hundreds of other programs available, all for free.
Linux is a Free and Open Source Software (FOSS) operating system. That means that it's free, as in free beer, and free, as in free speech; both forms of freedom are important. Free, as in beer, means the software is free of Digital Rights Management (DRM) and all the other stupid tricks companies put in to stop people from using their products without permission. DRM systems, these days, are so complicated that they are often the primary difficulty encountered while installing and using a piece of software. Free, as in beer, also means that you can use the software for free, which is good, very good. FOSS also means that if the software doesn't work the way you want, you are free to change it. This second freedom, free as in free speech, means that the source-code for the software is available to anyone that wants it. That means that you can modify it to meet your requirements. Or, if you're not a programmer, you can pay someone else to modify it. This kind of freedom may not be critical to your average Joe playing with an N810, but if you were a company using an application for business, then the ability to customise the code can be very useful. More importantly, it allows communities of people to collect around applications or particular hardware platforms, like the N810, and improve them. These communities often drive the development of free, as in beer, applications. Of the two FOSS freedoms, free as in free speech is the most important over the long term.
I know, because there is a vibrant community of people supporting the N810, that my new purchase will still be useful long after iPhone users have to send their toy in to Apple to get a replacement battery installed. Yes, it's so easy to change the N810's battery that I'm thinking of carrying spares while travelling.
I know the N810 software repositories will exist long after Apple has yanked the last iPhone app from its store. Yes, anyone can put up an N810 repository if they want, several have already; I could put up my own repository and compile my own applications if I really wanted to, and I might at some point. And, when the day comes that technology standards have long-since left both the N810 and the iPhone behind, I know I will find a niche use for the N810 while the iPhone will be landfill. I can already think of several, from a car OBDII reader (car computer interface) to a digital photo frame. Being based on FOSS, the possibilities are only limited by the imagination.
I "was" wondering why there are thousands of apps that actually excite me to have on the iphone. On the n800 I get to Play more I get to Learn more about software and maybe see a pioneer or two being born. I love it here but I do not think it would exist without the leaves that bring in the sunlight. Everything is as it should be. I like it in the shade.
| 计算机 |
2014-23/2662/en_head.json.gz/37011 | About Symbols
What are Universal Symbols?
Symbol Design and Testing
Pilot Testing
"Symbols can express a message in a compact form, may be more noticeable in a 'busy' environment than a written message, have more impact than words and ...be understood more quickly than (written) messages. " (ISO Bulletin December 20011)
Universally recognized graphic symbols, such as traffic symbols or those used to delineate parking spaces for the physically handicapped, can be an effective tool for communicating important information to those unable to read or with limited English proficiency (LEP). Pictures or images are what make symbols useful. Universal symbols are pictograms or images that consistently enable logical associations that help communication across languages and cultures. The testing conducted to develop the Universal Health Care Symbols is one of the most comprehensive symbol design efforts ever undertaken. This page is designed for those who are interested in learning how the health care symbols were developed and tested. If you are ready to use the new Universal Health Care Symbols, go to Using Symbols to download symbol artwork and the ‘how to’ best practice workbook. Health care executives and the design community need tools and methodologies to enable LEP populations to navigate in the health environment. The number of people who speak a language other than English at home has grown significantly. This segment of the population grew by 38 percent in the 1980s and by 47 percent in the 1990s2. By 2000, 47 million persons over the age of 5 spoke a language other than English in the home. Attention to federal and state laws, which require health facilities to have signage available in the language of their patients, has increased with the growth of this population. Presidential Executive Order 13166, "Improving Access to Services for Persons with Limited English Proficiency" and National Standards for Culturally and Linguistically Appropriate Services in Health Care adopted by United States Department of Health and Human Services underscore these requirements but offer no new solutions. what are universal symbols? top Symbols are visual images that represent a referent, a word or a real world object, place or concept. Long before written language, pictographs (symbols) served as a means of communication. As societies grew and written languages developed, pictographs were employed to provide information to people who were largely illiterate. However, pictographs served mainly an informal function until the 20th century. When air travel and expanding world immigration increased, universal symbols served as an international communication tool. In hospitals, universal symbols on signs are rare. Alternatives such as signs using text, in one or more languages, or letters and numbers are quite common as are symbols or landmarks specific to a facility or a hospital. The idea of symbols for health care signage came from the subway system in Mexico City which uses cultural icons to identify destinations. Symbols have been used for more than 30 years, making the city's subway system accessible to tourists and those unable to read. To explore the idea, a call for qualifications was issued on January 2003 by Hablamos Juntos to find a consultant to help explore the use of symbols in health care signage. JRC Design , a design firm located in Scottsdale Arizona, was commissioned to prepare a white paper on the feasibility of using symbols for health care wayfinding, including recommendations for future steps. The conclusion of the white paper was not only that symbols were a viable option for wayfinding in health care, but that a set of tested symbols, publicly available, would give designers and health facilities a much-needed alternative. 
The report entitled Symbol Usage in Health Care Settings for People with Limited English Proficiency Part 1: Evaluation of Use of Symbol Graphics in Medical Settings was completed in April 2003. The history and usage of visual symbols as communication tools in health care settings around the world is examined and several symbols developed for health care environments are also included. None of the health symbols found, except those from a project in Australia, were tested for public recognition and comprehension. A companion report Part 2: Implementation Recommendations provided suggestions for developing a set of tested symbols for use in health care environments.
How were the symbols developed? The testing that was conducted to develop the set of 28 universal health care symbols is one of the most comprehensive symbols design efforts ever undertaken. The multi-step process began with the selection of referents for the project. This was accomplished with a terminology survey designed to identifying the 30 most common destinations in health facilities. A team of seven graphic designers from around the country then worked with a symbols testing consultant to design and test candidate symbols using a testing method recommended by the International Organization for Standardization (ISO). See Project's Who's Who to learn about the Design Team. Three hundred participants from four language groups: English, Spanish, Indo-European and Asian languages provided input on the comprehension value of candidate symbols. Seventeen of the 28 symbols could be understood by at least 87% of the multilingual participants.
To compare the new symbols to typical word signage, the Society for Environmental Graphic Design (SEGD) worked with a wayfinding consultant to pilot test the symbols in the wayfinding systems of four hospitals across the country. The results were impressive: More than 75% of people who were tested felt that the symbols were more effective than text - symbols were easier to see and understand, and preferred even by those that could read English More than 80% of hospital staff interviewed felt that symbols would ease the process of giving directions to patients and visitors
The research team also found that symbols were flexible and simple to implement in a variety of health care environments, including those with complex wayfinding programs using signs, print materials and internet features like informational kiosks. The steps taken to develop the universal symbols are briefly described below. For more details on any aspect of the project go to the Archive section of this website.
Step 1: Referent Selection The first step was to identify the top 30 referents most often used in health facilities. A survey, developed from an inventory of existing signage in health facilities located in Hablamos Juntos demonstration sites, with over 220 health care terms was used. The results of this survey led to the 28 referents for which symbols were developed. The terminology survey (Health Care Facility Signage survey) was disseminated to health care facilities in the demonstration sites. Up to ten people, in each facility were asked to prioritize destinations in order of importance for their users. The survey was aimed at persons who frequently interact with visitors and patients to provide direction, or those who understood visitor/patient traffic patterns in their facilities such as information booth staff or volunteers, customer service representatives, discharge planning or social work staff, admitting staff managers, director/chief/head nurses and medical officers or physicians. The highest priority terms became the referents for the project. To see a sample of the survey for Round 1 click here. See Archive for more details.
Step 2: Symbol Design and Testing
Public information symbols used on signs to help patients and visitors navigate in health care facilities have rarely been evaluated from the user’s perspective. Important component of this project are the method used to test symbols and the recruitment of multilingual participants to provide input on candidate symbols . The over-arching goal was to develop symbols that would be effective for the broadest possible group of people. This meant avoiding cultural taboos or relying on visual clues that were strictly American or Western in nature. It also meant testing the symbols with people of various cultures and ethnic backgrounds. Finally, it meant letting these people, through the results of an iterative testing process, influence the selection of the final symbols to be included in the final set. The design team, consisting of graphic designers experienced in symbol design, met in August 2004 in the first of three Charettes, to review and collect symbols that best represented each referent. Symbols developed through this Charette were use in the first round of comprehension estimate surveys. Results from each round of comprehensibility surveys were used to guide the redesign/refinement of new symbols and to determine the final set of the health care symbols developed through this project. A total of 600 symbols were collected or created for the project.
Symbol Survey - Comprehensibility Estimation Testing
The International Organization for Standardization (ISO) recommends Comprehension Estimate testing to test public information symbol. In this project the comprehension estimate survey instrument, with 28 open-ended questions with 100-180 unique symbols (five to six symbols per referent) was used in three rounds of testing. The research question asked for each referent was “Which public information symbols for this referent is the most meaningful to users of health care facilities?”, meaning that they serve to cross language boundaries, for the user populations in the regions surveyed. For each of the 28 referents in the study, respondents were presented with 5-6 symbols. They were then asked to assign a number to each symbol representing the percent of the U.S. population that speaks their language who they think would understand a given symbol to mean a given referent. When asked through an interpreter, the question was modified to ask about “people who speak your language”. As an example of the survey design; these circles show the symbols that were tested for the referent “Chapel” in the three rounds of testing. The first circle was used in the first round, the second in the second round and so on. To see a sample of the survey for Round 1, click here. To learn more on the testing process see the Technical Report.
Candidate symbols were tested for their comprehensibility with participants from four language groups: English, Spanish, Indo-European and Asian language. The locations of the Hablamos Juntos demonstration sites created a natural opportunity to gather a national sample of health care facility users in ten different states. Hablamos Juntos site leaders designated a survey administrator and recruited volunteers from limited English-speaking populations. This provided, for each round of testing, a non-probability convenience sample of approximately 100 accessible and cooperative adult patients or visitors who speak a variety of languages. In many cases, interpreters assisted respondents to complete the survey. Selecting the Final Symbol Set Designing symbols to represent objects, a procedure, complex action or to show interaction between people is more challenging than creating an image to represent an object. The team learned this early on when to their surprise they learned that many of the symbols they created for the survey did not test well in the first round. The gap between what designers and survey participants thought would work was reinforced when symbols that were rated less than 79 did not show any significant improvement in the second round. After all the testing was completed, 17 referents had at least one symbol meeting the threshold (greater than 87) from which to establish a final set, and 11 referents with no symbol reaching the required threshold. The team met in July for a two-day final charrette in Chicago to select the final symbols set. Survey results determined the final symbols for the seventeen referents with symbols meeting the testing threshold. These images were refined to maintain consistency in figure sizes, weights, borders and to achieve a balance among the symbols as a set. When two or more symbols for a referent tested within a few points of each other, the team selected the symbol that best supported congruence in the set overall. For the eleven referents with low scoring symbols, the design team identified elements of the image content which seemed to be present in the higher rated symbols. Symbols for these 11 referents were selected, refined or further developed based upon the lessons learned through all phases of testing. For some referents, where all symbols tested scored poorly, the refinements resulted in entirely new symbols. In this final phase, new symbols were not tested in their final design iterations. Step 3: Pilot Testing top
To compare the new symbols to more traditional word signage, final symbol candidates were tested in the wayfinding systems in four hospitals across the country: Somerville Hospital in Massachusetts; Saint Francis Medical Center in Grand Island, Nebraska; Grady Memorial Hospital in Atlanta; and Kaiser Permanente in San Francisco. Pilot testing also helped to learn how the tested symbols can be used effectively in health facilities. Pilot site testing took place from April through May. A symbol/referent matching test was administered to visitors and use of collateral material, such as maps and printed materials was also tested. In focus groups, staff offered insights about their current signage system and made recommendations about the use of symbols in wayfinding. The participants – visitors and patients in the pilot site facilities – had language proficiency ranging from little or no English (an interpreter gave instructions to these participants) to sufficient English to take the test on their own. The participants were of four language groups: English, Spanish, Indo-European and Asian. Tested languages from the latter two groups varied according to the demographics of the site area and included Creole, Nuer, Hindi, Amharic, Portuguese, Loatian, Mandarin, Cantonese and Vietnamese. Besides completing the matching test, these participants were also timed in finding six destinations on the site - four with symbols signage and two with traditional work signage. Lessons Learned top
The development of a cohesive symbol system, particularly for health care, is a controversial undertaking in the design world. The paramount goal was to create a symbol set that was simple, uniform, distinctive and clearly understood that could be used in health care wayfinding systems. The symbols had to have a graphic clarity and credibility in design to make others want to use them. Using well established symbol testing methods recommended by the ISO and extensive iterative testing (which included pilot testing in hospitals across the country) makes this one of the most comprehensive symbols design efforts ever undertaken. In the end, this work confirmed that a thoughtful and well-designed symbol system can assist English speakers as well as people from many languages and cultures. Symbols are not the panacea for a poor signage system, nor will they alone solve wayfinding issues. However, they can be a part of a viable and dynamic system to assist all people, regardless of their reading skill level, to feel more comfortable and confident within a health care facility. The full potential of these symbols, including their usability and effectiveness in wayfinding, will be determined through implementation in real-world health care environments. This will require more than just adding these new symbols to existing signs. It will take systems designed with openness to what visitors need for wayfinding, where symbols are a part of more comprehensive solutions. In the long term, the investment of time and money should be recouped when less time, money and energy is required to physically guide people, public and staff alike, through the site. To help facilities implement these symbols, a workbook with best practices is available on this website. The suggested practices were drawn from best practices in other fields, such as airports, parks and cities, and lessons learned in the hospital pilot sites. Because air travelers, park visitors and pedestrians in cities are different from visitors of health facilities, more work is needed to develop best practices for health care environments. SEGD is committed to helping build practices more suitable for health environments and will continue to disseminate achievements in health facilities among its designer constituency. Reports top
Universal Symbols In Health Care Workbook, Best Practices for Sign Systems. This workbook is for health and hospital administrators, facilities managers, architects and designers of wayfinding systems. It covers the importance of universal symbols, the benefits they provide to hospitals and healthcare facilities and offers practical suggestions for implementation taken from best practices in other fields that effectively use symbols as part of their wayfinding systems. Technical Report
Symbol Usage In Health Care Settings for People with Limited English Proficiency - Part Three Symbols Design Technical Report. This report describes details of developing and testing health care symbols and includes tools used in the process. White Paper Symbol Usage in Health Care Settings for People with Limited English Proficiency - Part One: Evaluation Of Use Of Symbol Graphics In Medical Settings. This is a white paper with a brief review of symbols use that looks at the feasibility of using symbols in health care. The report finds evidence of health care symbol development and use in a variety of countries, include the United States, and concludes that not only are symbols viable for health care signage, but that a set of tested health care symbols would give designers an alternative beyond multilingual signs.
Symbol Usage in Health Care Settings for People with Limited English Proficiency - Part Two: Implementation Recommendations. Part two makes suggestions for developing a set of universal symbols. 1 ISO Bulletin December 2001. Graphical Symbols
2 U.S. Census (October 2003) Language Use and English-Speaking Ability: 2000. Census 2000 Brief. | 计算机 |
2014-23/2662/en_head.json.gz/37023 | The Web celebrates a birthday
by Mark Ollig
This week we remember a historic milestone. The spark which ignited what is today known as the “Web” began on March 12, 1989. For it was on this date, when Tim Berners-Lee, a British scientist working in Switzerland’s European Organization for Nuclear Research, otherwise known as CERN, delivered what, according to his supervisor, was a “vague but exciting” proposal. This proposal described developing a distributed information management system for the CERN laboratory. Berners-Lee wrote about a client/server computing model for a distributed hypertext system. A computer with an installed client software program would allow a user to effortlessly browse information stored in remotely located, hypertexted, computing servers on CERN’s network.
His proposed model is what led to the merging of hypertext with the Internet. Berners-Lee called his creation a “global hypertext system.”
He suggested a global hypertext “space” be created in which any network-accessible information could be referred to by what he called a Universal Document Identifier (UDI). Today, it is known as the Uniform Resource Locator or URL, which we type, or paste into a web browser, in order to access a particular resource or website. Berners-Lee finished coding the new client web-browser software he called the “WorldWideWeb Program” in 1990. It was later renamed “Nexus.”
He used a workstation computer called a NeXT (NeXTcube) to write the code for the first web browser.
The NeXT computer was from a company founded by Apple’s Steve Jobs and other folks, who had worked on Apple’s Macintosh and Lisa computers. Early web browsers had names like Erwise, Viola, Cello, and Mosaic.
Berners-Lee also wrote the web server program using the NeXTcube.
This computer was the first web server. Here is a photograph of the NeXTcube as it was displayed in 2005 at CERN’s Microcosm science museum: http://tinyurl.com/bytes-lee3. A copy of: “Information Management: A Proposal,” which was the document Berners-Lee submitted to CERN, and which eventually led to the World Wide Web, is to the left of the keyboard in the photograph. I smiled while noticing the faded sticker Berners-Lee most likely attached to the front of the NeXTcube’s tower case, saying: “This machine is a server. DO NOT POWER DOWN!!”
This caution sticker took me back to the day when I used them at work.
It sometimes took hours to run the Reflection program script files I would code using a Text Pad editor. The lines of text, or “tuples” I created in it were for adding hundreds, and sometimes thousands of numbers, or other specific, call-processing related information, into various software tables contained inside the digital telephone switches I maintained.
It was very important not to touch any of the keys on the keyboard, so as not to interrupt the script file program actively running while connected (via telnet) to the digital switch.
While these automated script files were dumping information into a particular digital switch, I had time to leave my workstation and grab a cup of coffee. As a precaution, yours truly would tape a piece of paper over his computer screen, warning others with the words: "Do Not Touch Keyboard! Executing Script File in Progress!"
By December of 1990, Berners-Lee had completed the world's first webpage. Here is a screen capture of "Tim's Home Page": http://tinyurl.com/bytes-lee1.
Some people still think the Internet and the Web are the same; however, this is not true.
The Internet is a type of mesh topology network, which includes computers, routers, gateways, and cables used to carry (transmit/receive) logical packets of voice, video, and data information. Think of a packet like the contents of an envelope with a mailing address on it.
If you put the right address on a packet, and hand it off to a device connected on the Internet network, the programming inside the device (a router or server for example) will determine the best path to use to get the packet to its final destination. The Internet quickly delivers packets over its network to anywhere in the world using TCP/IP (Transmission Control Protocol/Internet Protocol). This is yours truly’s condensed description of how the Internet performs.
Tim Berners-Lee created a hyper-texting program which would link the information contained inside webpages stored on computers connected to the Internet, and make them simple to access, clearly discernible, and easily distributable to others. Berners-Lee realized his distributed information management system concept proposal for CERN, could also be implemented throughout the world; thus creating a World-Wide Web. His proposal’s conclusion from 1989 includes; “We should work toward a universal linked information system.” The front page of Berners-Lee’s proposal is here: http://tinyurl.com/bytes-proposal.
You can see his detailed proposal at: http://tinyurl.com/bytes-cern2.
And so I end this week’s column by saying, “Happy 25th birthday to the World Wide Web, and thank you to Sir Tim Berners-Lee.”
We can only imagine how far the Web will progress during the next 25 years. | 计算机 |
2014-23/2663/en_head.json.gz/234 | The Internet is now a dominant tool for regular people
The Internet has succeeded in becoming a tool that many regular people turn to in lieu of alternatives for communicating and for finding information.
< Small Business and Web Sites The Value of Experience >
I've written a few essays that deal with the idea of computer applications that are "tools" (such as Thoughts on the 20th Anniversary of the IBM PC, Metaphors, Not Conversations, and The "Computer as Assistant" Fallacy). I think that examining aspects of people's use of tools is important to see where our use of computer technology will go.
There's an old saying that "When all you have is a hammer, everything looks like a nail." Sometimes it is said in a derogatory way, implying that the person is not looking for the "correct" tool. For example, people used to laugh at how early spreadsheet users did their word processing by typing their material into cells, one cell per line, rather than learn another, new product.
I think a more interesting thing to look at is what makes a tool so general purpose that we can logically (and successfully) find ways to use it for a wide variety of things, often ones not foreseen by its creators. Tools that can be used for many different important things are good and often become very popular.
I have lots of experience watching the early days of the spreadsheet. Many people tried to create other numeric-processing/financial forecasting products soon afterwards, but none caught on like the spreadsheet. Most of these tools were tuned to be better suited for particular uses, like financial time-series. What they weren't as well tuned for were free format, non-structured applications. What were successful were later generation spreadsheets that kept all of the free format features, and added additional output formatting options, such as commas and dollar signs, graphing, mixed fonts, and cell borders and backgrounds.
The automobile caught on especially well here in the US partially because of its general purpose nature. First used for recreation, taking you out to the "country" (wherever that may be), it could also be used in rural areas to go into town or visit friends, for commuting in suburban and urban areas, visiting people at great distances, as part of work (such as the old family doctor), as a means for status or "freedom", etc., etc. In our large growing country, no other means of transportation met as many needs during the years when we built up much of society around it.
The Internet successes
Given those general thoughts, let's look at applications of the Internet. Where have they become accepted and entrenched general purpose tools among regular people?
The first and most obvious accepted use is as a communications tool with people you already know. Email and instant messaging have gone way past the early adopter phase. For many families, communities, and businesses, it has become one of the dominant forms of communication. Email is up there with telephone and visiting, and more and more is displacing physical mail and fax.
This is pretty amazing. It took the telephone years to reach this level of acceptance for such mundane uses. Fax never reached it for personal uses.
The second most obvious accepted use is as an information gathering tool. Research such as that published by the Pew Internet Project comes up with numbers like these:
- Over 50% of adult Internet users used the Internet (most likely the Web) for job-related research. On any given day, 16% of Internet users are online doing research.
- 94% of youth ages 12-17 who have Internet access say they use the Internet for school research. 71% of online teens say that they used the Internet as the major source for their most recent major school project or report. (From The Internet and Education.)
- During the 2000 Christmas season, 24% of Internet users (over 22 million people) went to the Web to get information on crafts and recipes, and to get other ideas for holiday celebrations. 14% of Internet users researched religious information and traditions online. (This is for an event that happens every year of their lives.) (From The Holidays Online.)
- 55% of American adults with Internet access have used the Web to get health or medical information. "Most health seekers treat the Internet as a vast, searchable library, relying largely on their own wits, and the algorithms of search engines, to get them to the information they need." (From The Online Health Care Revolution.)
- Of veteran Internet users (at least 3 years), 87% search for the answer to specific questions, 83% look for information about hobbies. (From Time Online.)
Anecdotal evidence I've seen:
- Driving directions web sites like Mapquest are becoming a preferred tool for drivers, supplementing other forms of getting directions.
- More and more people research vacations on the Web before committing to accommodations or activities.
- 50% of all tableservice restaurants have web sites (and it isn't so that you'll order a meal to be delivered by Fedex).
- Search engines are very popular, and people will switch to better ones because they know the difference.
- Web site addresses are replacing 800 numbers in advertising and public service announcement "for more information".
At the Basex KM & Communities 2001 West conference, IBM Director of Worldwide Intranet Strategy and Programs, Michael Wing, presented some statistics that show where things are going. In surveying IBM employees' feelings about what were the "Best" (most credible, preferred, and useful) sources of information, in 1997 their Intranet was listed a distant 6th, after the top co-workers, then manager, INEWS, senior executive letters, and external media. In 2000, the Intranet was tied for first with co-workers, and the Internet (outside IBM) had moved into a place just below external media and senior executive letters.
For many people, the general Internet is on par with other, older public information sources, and sources they have relationships with or an affinity for (certain web sites, people through email, etc.) are trusted even more. The huge rush of people to the Internet during times of tragedy or rapidly unfolding events that are of deep importance to them shows this. When you get a call from a friend, or a co-worker pokes their face into your office with some news, an awful lot of people go straight to the Internet to learn more.
So, I feel that the Internet has passed that magic point for most users (which is over half of the US population who can read) where it is one of those tools that they already know how to use, and will depend upon to do all sorts of things, often instead of using other "better" ways of getting things done.
Where the Internet hasn't come that far
In contrast to these successes in changing behavior to favor Internet usage, I don't believe that buying on the Internet has passed that point for most people. Amazon and similar ventures that rely on purely electronic Internet-based transactions have failed to become the way we'll buy everything from toothpaste to lawn chairs. Some people do, but not the majority for any large portion of their purchasing. (Of course, for researching the purchase, the Internet is becoming extremely important.) A few categories, like travel, have broken out into popular acceptance, but not to the level of communications or information seeking.
Also, it seems that the Internet has not passed that point to be a major tool for passive entertainment. While it has become key to getting information out about movies (and credited for creating the main "buzz" to launch some), few people go to "watch the Internet". It is used to transfer music, but then only for songs the user wants, not as a passive receiver from a dominating "service". Just as TV is being affected by the new generations of people who wield the remote control to flit from program to program as they see fit on dozens or more channels, the "user choice" view of the Internet, much like the use for searching, is what you mainly find. Somebody else doesn't tell you what's interesting -- you decide and then go to the Web to look for more about it.
The implications for those with information that they want others to find, or who want people to communicate with them, is that they should include the Internet in their plans. For communications, an email address is a minimum, though for some types of interactions an instant messaging screen name is also important. For disseminating information, a web site is extremely important, as is being findable through various means (either in search engines and directories, and/or through other means of providing information, such as on printed material or in general advertising in any medium or links from likely places on the Web).
As we find over and over again about new technologies, users choose what they want to use them for. The purveyors of the technology can advise, but they can't control. We should learn from what they gravitate to.
The Internet has succeeded in becoming a tool that many regular people turn to in lieu of alternatives for communicating and for finding information. It has become a new, often-used tool in their personal toolbox.
What's good is that these two uses, communications and finding information, have proven to be ones for which people willingly pay.
As further proof and insight into the use of Internet searching as a general purpose tool, read Richard W. Wiggins' fascinating article about the evolution of Google during the 9/11 tragedy: The Effects of September 11 on the Leading Search Engine.
© Copyright 1999-2014 by Daniel Bricklin
2014-23/2663/en_head.json.gz/416 | Structuring Elements
The field of mathematical morphology provides a
number of important image processing operations, including
erosion, dilation, opening and
closing. All these morphological operators take two
pieces of data as input. One is the input image, which may be either
binary or grayscale for most of the operators. The other is the
structuring element. It is this that determines the precise
details of the effect of the operator on the image.
The structuring element is sometimes called the kernel, but we
reserve that term for the similar objects used in
convolutions.
The structuring element consists of a pattern specified as the
coordinates of a number of discrete points relative to some origin.
Normally cartesian coordinates are used and so a convenient way of
representing the element is as a small image on a rectangular grid.
Figure 1 shows a number of different structuring elements of
various sizes. In each case the origin is marked by a ring around that
point. The origin does not have to be in the center of the structuring
element, but often it is. As suggested by the figure, structuring
elements that fit into a 3×3 grid with its origin at the center are
the most commonly seen type.
Figure 1 Some example structuring elements.
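As a rough code sketch (an illustration added here, not part of the original reference), a 3×3 cross-shaped element with its origin at the center can be written as a small array plus a pair of origin coordinates. Note that this sketch uses the common convention of marking blanks with zeros, which the text below points out can be confusing:

    import numpy as np

    # 3x3 cross-shaped structuring element.
    # 1 marks a point that belongs to the element; 0 marks a blank
    # (a grid position that is not part of the element at all).
    cross = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=np.uint8)

    # Origin given as (row, column); here it is the center, though it need not be.
    origin = (1, 1)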
Note that each point in the structuring element may have a value. In
the simplest structuring elements used with binary images for
operations such as erosion, the elements only have one value,
conveniently represented as a one. More complicated elements, such as
those used with thinning or grayscale morphological
operations, may have other pixel values.
An important point to note is that although a rectangular grid is used
to represent the structuring element, not every point in that grid is
part of the structuring element in general. Hence the elements shown
in Figure 1 contain some blanks. In many texts, these blanks
are represented as zeros, but this can be confusing and so we avoid it here.
When a morphological operation is carried out, the origin of the
structuring element is typically translated to each pixel position in
the image in turn, and then the points within the translated
structuring element are compared with the underlying image pixel
values. The details of this comparison, and the effect of the outcome
depend on which morphological operator is being used.
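For example, a deliberately simple sketch of binary erosion written in this translate-and-compare style (again an added illustration, assuming the cross-shaped element defined earlier; library routines such as scipy.ndimage.binary_erosion do the same job far more efficiently) might read:

    import numpy as np

    def binary_erosion(image, element, origin):
        """Keep a pixel only if every point of the translated element
        lands on a 1 in the input image (points of the element that fall
        outside the image are treated here as failing the test)."""
        rows, cols = image.shape
        offsets = [(r - origin[0], c - origin[1])
                   for r, c in zip(*np.nonzero(element))]
        out = np.zeros_like(image)
        for i in range(rows):
            for j in range(cols):
                fits = True
                for dr, dc in offsets:
                    r, c = i + dr, j + dc
                    if not (0 <= r < rows and 0 <= c < cols) or image[r, c] == 0:
                        fits = False
                        break
                if fits:
                    out[i, j] = 1
        return out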
©2003 R. Fisher, S. Perkins, A. Walker and E. Wolfart.
2014-23/2663/en_head.json.gz/793 | Home > Archive > 2011 > December > 31
Dave Winer, 56, is a software developer and editor of the Scripting News weblog. He pioneered the development of weblogs, syndication (RSS), podcasting, outlining, and web content management software; former contributing editor at Wired Magazine, research fellow at Harvard Law School, entrepreneur, and investor in web media companies. A native New Yorker, he received a Master's in Computer Science from the University of Wisconsin, a Bachelor's in Mathematics from Tulane University and currently lives in New York City.
"Dave was in a hurry. He had big ideas." -- Harvard.
"Dave Winer is one of the most important figures in the evolution of online media." -- Nieman Journalism Lab.
10 inventors of Internet technologies you may not have heard of. -- Royal Pingdom.
8/2/11: Who I Am.
scriptingnews1mail at gmail dot com.
What a mess
Earlier I reported that Twitter was down just as I was publishing two big pieces. Then things got worse. The URL shortener we use that runs behind r2.ly went down too. So you were doubly-blocked by a creaky infrastructure that I was actually writing about in one of the pieces that didn't get through.
It's poetry in motion. Actually non-motion. People worry about SOPA, but you should also worry about the house-of-cards we've built around Twitter. We seriously need to simplify things. The people at Twitter need to hear this. A simple change in their software, that moves the link out of the 140 characters would completely obviate the need for shorteners, and allow us to remove a whole level of brittleness in the infrastructure. It's always been this way, and the problem has persisted for years. They store a lot of metadata with each tweet, so there's clearly no reason their infrastructure would have any trouble at all supporting it.
Anyway, I temporarily switched to bit.ly, so the blork.ly address we use here is likely to work. As long as bit.ly is up.
Heh. What a mess.
12/31/2011; 1:07:28 PM. .
Blogger of the Year
Every year, when I have it together, I name someone blogger of the year.
It's always a person, never an organization -- because that's essential imho to being a blogger. To think of a tech pub like the NY Times or TechCrunch as a blogger is to miss the point. And anyone who is edited, in any way, is not a blogger. Because once you accept editing, you've allowed another mind into the writing. You're no longer finding out what someone thinks. It's not quite as clear.
Not to say one is better than the other. Just different.
Now, for past bloggers -- they continue to impress, in different ways of course because they're individuals. That's what made them such excellent bloggers in the first place! They are: Joel Spolsky, Jay Rosen, NakedJen, Julian Assange.
This year's BOTY is...
In a minute. First, let me say who I thought of, and why I think he may well be my choice for BOTY next year.
Richard Stallman.
It came as a surprise to me that Stallman is a blogger. Somehow I tripped across his feed. Added it to my river, and since then have been very impressed. Stallman is, in every way, what I think of as a Natural Born Blogger. His impulse is to share. And he has an opinion. And he states it, concisely and with irony and humor. It's really good stuff.
And the thing I like about it most is that it is so concise. Twitter has made this a value we appreciate, and that's something to thank Twitter for. With conciseness as an established practice, we're heading into a new kind of blogging. What I call a linkblog. That's why I think Stallman might be a great choice next year, if things turn out as I think they will. I think enough of us will be linkblogging outside the silos to make it interesting. If this happens it will make new kinds of aggregators possible. New news flows. Because freedom has been bottled up in the silos for too long. Too much have we waited for them to innovate in ways that don't cause more money to flow to them. Over time the low-hanging fruit becomes riper and riper. Eventually a strong wind willl come and blow it all to the ground. That's what I look forward to. It didn't happen in 2011, so the linkblog isn't the trend, yet. Maybe it will happen in 2012.
But in 2011, the most value for me was with bloggers who continue to make a strong personal investment in the web outside the silos. So I was looking for someone who had something to say, and who beat the drum regularly. Someone who said things in clear language that was accessible to large numbers of people. And who said things we really need to hear. Things to think about, to consider, to be reminded of. Someone who leads and inspires. I think you're going to be surprised at my choice. Seth Godin.
His blogging meets all these criteria and more. I never know what's coming from his corner of the world, but I know when I read it I'm going to have something to think about. Whether I agree or not. All good bloggers do that, push you in a new direction. Change the world by changing minds. Godin doesn't force his ideas on us. He presents them as part of a smorgasboard of thought that's available in any quantity you like. There's no programming involved. He draws you back to him by the quality of his provocation. He's really the best at what he does, and what he does is important.
So bravo Seth Godin!
Thank you for all the blogging, and please keep us well-supplied with lots of new ideas and ways of looking at things. 12/31/2011; 11:12:51 AM. .
Twitter is down
I published my end-of-year think piece, The Un-Internet, about an hour ago by posting it to my linkblog. Which, among other places, flows to Twitter. Which is down. This has blunted the impact. No retweets, no kudos, no condemnation! Proves the point of the piece so damned well.
We're way too dependent on the Un-Internet, which behaves somewhat like the Internet, but has chokepoints that can cut off the flow. The whole point of the Internet, from the point of view of the US Govt, was that it couldn't be cut off this way. I like to say that RSS doesn't have a fail whale.
It's at times like this that it doesn't seem so cute. 12/31/2011; 10:59:13 AM. .
The Un-Internet
The tech world is in an infinite loop. I've written about it so many times, but that's how it goes with loops. You don't have to write original stuff more than once. Each time around the loop, at some point, everything comes back into style. No need to list all the loops, other than to say Here We Go Again! At issue is this: Control. For whatever reason, the people who run the tech companies want it. But eventually the users take it.
I wrote in 1994, my first time as a chronicler of the loops: "The users outfoxed us again. It happens every fifteen years or so in this business, We lost our grounding, the users rebelled, and a new incarnation of the software business has been created."
In the same 1994 piece: "Once the users take control, they never give it back."
You can see it playing out in the Twitter community, and now the Tumblr community.
It isn't a reflection on the moral quality of the leaders of the companies, to want to control their users. But it's a short-term proposition at best. Either the companies learn how to take the lead from their users, or they will be sidelined. Unless the laws of technology are repealed, and I don't think laws like that can be repealed. Lest you think I was smart enough to see this coming in my own early experience as a tech entrepreneur, I wasn't. We were scared of software piracy, didn't understand how we could continue to be in business with software that could be easily copied. So we established controls that made it difficult for non-technical users to copy the software. That created a market of other software that would copy our software. So it was reduced down to whether or not the users would knowingly do something we disapproved of. Many of our users were honorable, they did what I would have done in their place. They stopped using our products. I would regularly receive letters from customers, people who had paid over $200 for the disks our software came on, with the disks cut in half with a scissor. These letters made their point loud and clear. One day everyone took off their copy protection, and the users got what they wanted. I came to believe then that this is always so. This time around, Apple has been the leader in the push to control users. They say they're protecting users, and to some extent that is true. I can download software onto my iPad feeling fairly sure that it's not going to harm the computer. I wouldn't mind what Apple was doing if that's all they did, keep the nasty bits off my computer. But of course, that's not all they do. Nor could it be all they do. Once they took the power to decide what software could be distributed on their platform, it was inevitable that speech would be restricted too. I think of the iPad platform as Disneyfied. You wouldn't see anything there that you wouldn't see in a Disney theme park or in a Pixar movie. The sad thing is that Apple is providing a bad example for younger, smaller companies like Twitter and Tumblr, who apparently want to control the "user experience" of their platforms in much the same way as Apple does. They feel they have a better sense of quality than the randomness of a free market. So they've installed similar controls. Your content cannot be displayed by Twitter unless you're one of their partners. How you get to be a partner is left to your imagination. We have no visibility into it.
Tumblr has decided that a browser add-on is unwelcome. Presumably it's only an issue because a fair number of their users want to use it. So they are taking issue not only with the developer, but with the users. They have admitted that the problem is that they must "educate" their users better. Oy! Does this sound familiar. In the end, it will be the other way around. It has to be. It's the lesson of the Internet.
My first experience with the Internet came as a grad student in the late 70s, but it wasn't called the Internet then. I loved it because of its simplicity and the lack of controls. There was no one to say you could or couldn't ship something. No gatekeeper. In the world it was growing up alongside, the mainframe world, the barriers were huge. An individual person couldn't own a computer. To get access you had to go to work for a corporation, or study at a university. Every time around the loop, since then, the Internet has served as the antidote to the controls that the tech industry would place on users. Every time, the tech industry has a rationale, with some validity, that wide-open access would be a nightmare. But eventually we overcome their barriers, and another layer comes on. And the upstarts become the installed-base, and they make the same mistakes all over again.
It's the Internet vs the Un-Internet. And the Internet, it seems, always prevails.
12/31/2011; 10:00:24 AM. .
© Copyright 1997-2011 Dave Winer. Last build: 12/31/2011; 4:19:49 PM. "It's even worse than it appears."
2014-23/2663/en_head.json.gz/794 | Our Deal with Salon
Wednesday, July 24, 2002 by Dave Winer.
Salon and weblogs 1999 was a seminal year for weblogs. It was the year of Manila, Pitas and Blogger. It was the year we discovered that the Web browser is the best and worst place to edit Web content. In May 1999, Scott Rosenberg at Salon wrote the first weblog piece outside the weblog community, and it rippled through the then-small world of weblogs. I pointed to it from Scripting News, in DaveNet, and later from Weblogs.Com. I said on Scripting News that day "Scott's article will probably be pointed to by lots of weblogs!" It was, and so were all the articles that followed. Writing about weblogs is and probably always will be a good way to get flow. Weblogs for the people But I wished for more. I wanted Salon to open up to amateur authors. This made sense to me, because more than any other publication, Salon was "of" the Web, not of print. The value of a publication like Salon is its reputation, which can direct attention to people with something to say, even people who write for love, not money. Salon's editors could play different roles, scouts, curators, editors, and promoters of talent. New businesses could be spawned from the new weblogs, and therein, imho, lies the elusive business model for the Web. As a recent NY Times article pointed out, Wall Street may have fallen out of love with the Web, but Main Street hasn't. Perhaps Silicon Valley can't compete with Wal-Mart and USA Today; but Joe's Motor Shop can be enhanced by having a nice weblog to communicate with their customers and other people in their community. Joe can be an expert in his domain, a weblog helps him share what he knows, and builds his reputation. That translates into more business and more profits. It's not hard to see how new businesses will start. I've written about this at length over quite a few years.
Our deal with Salon So, today we announced a new business relationship between Salon and UserLand. Salon is granting my wish -- they're opening their site to amateur authors. Scott Rosenberg, now Salon's managing editor, is writing a weblog, where he will follow the developments in their weblog community. As they often do, Salon is leading the way. They are the first publication to offer such a service to its community. We are proud to enable this innovation with our Radio UserLand software.
Both companies stand to grow economically from this. In the past such a service would have been free. Some free weblog services still exist, but more and more they are subsidized by customers who pay. Since we got a fresh start long after the end of the dotcom bubble, our weblog community doesn't have this kind of subsidy. People pay a reasonable price for what they use, and this makes it possible for us to operate the servers and add features and fix bugs. So from my sabbatical desktop, I'd like to thank my colleagues at UserLand for their excellent work; and to thank the people at Salon for choosing our software for their new weblog service. We hope to continue to deserve your trust for many years to come.
© Copyright 1994-2004 Dave Winer. Last update: 2/5/07; 10:50:05 AM Pacific. "There's no time like now."
2014-23/2663/en_head.json.gz/954 | 4:15PM, Wednesday, January 14, 2003 NEC Auditorium, Gates Computer Science Building B03
http://ee380.stanford.edu
BiReality: Mutually Immersive Mobile Telepresence
Norm Jouppi
About the talk:
BiReality uses a teleoperated robotic surrogate to visit remote locations as a substitute for physical travel. Our goal
is to create to the greatest extent practical, both for the user and the people at the remote location, the sensory
experience relevant for business interactions of the user actually being in the remote location. Our second-generation
system provides a 360-degree surround immersive audio and visual experience for both the user and remote participants,
and streams eight 720x480 MPEG-2 videos totaling almost 20Mb/s over 802.11a wireless networking. The system preserves
gaze and eye contact, presents local and remote participants to each other at life size, and preserves the head height
of the user at the remote location. This talk focuses on some of the system challenges inherent in the project, and
includes a short video demonstration.
About the speaker:
Norman P. Jouppi is currently a Fellow at HP Labs in Palo Alto, California. He received his PhD in Electrical
Engineering from Stanford University and joined Digital Equipment Corporation's Western Research Lab in 1984. From 1984
through 1996 he was also a consulting assistant/associate professor in the department of Electrical Engineering at
Stanford University. He was the principal architect of four microprocessors, and also contributed to the design of
several graphics accelerators. His current research interests include audio, video, and physical telepresence as well
as computer systems architecture.
Norman P. Jouppi
1501 Page Mill Road - MS 1181
2014-23/2663/en_head.json.gz/1550 | Science > Computers and the Internet > Computer GlossaryP - SpalmA hand-held computer. PCPersonal computer. Generally refers to computers running Windows with a Pentium processor.printed circuit boardPC boardPrinted Circuit board. A board printed or etched with a circuit and processors. Power supplies, information storage devices, or changers are attached.PDAPersonal Digital Assistant. A hand-held computer that can store daily appointments, phone numbers, addresses, and other important information. Most PDAs link to a desktop or laptop computer to download or upload information. PDFPortable Document Format. A format presented by Adobe Acrobat that allows documents to be shared over a variety of operating systems. Documents can contain words and pictures and be formatted to have electronic links to other parts of the document or to places on the web.Pentium chipIntel's fifth generation of sophisticated high-speed microprocessors. Pentium means “the fifth element.”peripheral)Any external device attached to a computer to enhance operation. Examples include external hard drive, scanner, printer, speakers, keyboard, mouse, trackball, stylus and tablet, and joystick.personal computer (PC)A single-user computer containing a central processing unit (CPU) and one or more memory circuits.petabyteA measure of memory or storage capacity and is approximately a thousand terabytes. petaflopA theoretical measure of a computer's speed and can be expressed as a thousand-trillion floating-point operations per second.platformThe operating system, such as UNIX®, Macintosh®, Windows®, on which a computer is based.plug and playComputer hardware or peripherals that come set up with necessary software so that when attached to a computer, they are “recognized” by the computer and are ready to use.pop-up menuA menu window that opens vertically or horizontally on-screen to display context-related options. Also called drop-down menu or pull-down menu.Power PCA competitor of the Pentium chip. It is a new generation of powerful sophisticated microprocessors produced from an Apple-IBM-Motorola alliance.printerA mechanical device for printing a computer's output on paper. There are three major types of printers: Dot matrix: creates individual letters, made up of a series of tiny ink dots, by punching a ribbon with the ends of tiny wires. (This type of printer is most often used in industrial settings, such as direct mail for labeling.)Ink jet: sprays tiny droplets of ink particles onto paper.Laser: uses a beam of light to reproduce the image of each page using a magnetic charge that attracts dry toner that is transferred to paper and sealed with heat. programA precise series of instructions written in a computer language that tells the computer what to do and how to do it. Programs are also called “software” or “applications.”programming languageA series of instructions written by a programmer according to a given set of rules or conventions (“syntax”). High-level programming languages are independent of the device on which the application (or program) will eventually run; low-level languages are specific to each program or platform. Programming language instructions are converted into programs in language specific to a particular machine or operating system (“machine language”) so that the computer can interpret and carry out the instructions. Some common programming languages are BASIC, C, C++, dBASE, FORTRAN, and Perl.puckpuckAn input device, like a mouse. 
It has a magnifying glass with crosshairs on the front of it that allows the operator to position it precisely when tracing a drawing for use with CAD-CAM software.
pull-down menu: A menu window that opens vertically on-screen to display context-related options. Also called drop-down menu or pop-up menu.
push technology: Internet tool that delivers specific information directly to a user's desktop, eliminating the need to surf for it. PointCast, which delivers news in user-defined categories, is a popular example of this technology.
QuickTime®: Audio-visual software that allows movie-delivery via the Internet and e-mail. QuickTime images are viewed on a monitor.
RAID: Redundant Array of Inexpensive Disks. A method of spreading information across several disks set up to act as a unit, using two different techniques: Disk striping: storing a bit of information across several discs (instead of storing it all on one disc and hoping that the disc doesn't crash). Disk mirroring: simultaneously storing a copy of information on another disc so that the information can be recovered if the main disc crashes.
RAM: Random Access Memory. One of two basic types of memory. Portions of programs are stored in RAM when the program is launched so that the program will run faster. Though a PC has a fixed amount of RAM, only portions of it will be accessed by the computer at any given time. Also called memory.
right-click: Using the right mouse button to open context-sensitive drop-down menus.
ROM: Read-Only Memory. One of two basic types of memory. ROM contains only permanent information put there by the manufacturer. Information in ROM cannot be altered, nor can the memory be dynamically allocated by the computer or its operator.
scanner: An electronic device that uses light-sensing equipment to scan paper images such as text, photos, and illustrations and translate the images into signals that the computer can then store, modify, or distribute.
search engine: Software that makes it possible to look for and retrieve material on the Internet, particularly the Web. Some popular search engines are Alta Vista, Google, HotBot, Yahoo!, Web Crawler, and Lycos.
server: A computer that shares its resources and information with other computers, called clients, on a network.
shareware: Software created by people who are willing to sell it at low cost or no cost for the gratification of sharing. It may be freestanding software, or it may add functionality to existing software.
software: Computer programs; also called "applications."
spider: A process search engines use to investigate new pages on a web site and collect the information that needs to be put in their indices.
spreadsheet: Software that allows one to calculate numbers in a format that is similar to pages in a conventional ledger.
storage: Devices used to store massive amounts of information so that it can be readily retrieved. Devices include RAIDs, CD-ROMs, DVDs.
streaming: Taking packets of information (sound or visual) from the Internet and storing it in temporary files to allow it to play in continuous flow.
stylus and tablet: An input device similar to a mouse. The stylus is pen shaped. It is used to "draw" on a tablet (like drawing on paper) and the tablet transfers the information to the computer. The tablet responds to pressure—the firmer the pressure used to draw, the thicker the line appears.
surfing: Exploring the Internet.
surge protector: A controller to protect the computer and make up for variances in voltage.
Fact Monster/Information Please® Database, © 2007 Pearson Education, Inc.
All rights reserved.
2014-23/2663/en_head.json.gz/2012 | Cut-scenes
Welcome to my Maniac Mansion fan site! Maniac Mansion is one of my favourite computer games of all time, and this site is completely dedicated to it.
My site is mainly destined to people who are already fans of the game, and as such, it is full of spoilers. The only spoiler-free sections are this one and the "Versions" section. So if you are new to Maniac Mansion and want your gaming pleasure to be left intact, don't visit the other sections and go play the game instead! Sincerely,
ManiacMansionFan Plot
The story is a parody of horror B-movies. You control teenager Dave Miller and two of his friends. You are on a mission to rescue Dave's girlfriend, Sandy, who has been kidnapped to the mansion of Dr.Fred, a mad scientist. Dr.Fred is under the influence of a purple evil meteor which landed near his mansion twenty years ago and has controlled his life ever since. Dr.Fred now wants to dominate the world, and, for some unclear reason, needs the fresh brains of teenagers to do this. It is up to you to infiltrate his mansion, avoid getting caught by Dr.Fred's family, gain entrance to the secret lab and save Sandy!
Originally released in 1987, Maniac Mansion was a groundbreaking game in many respects. Back then, most computer adventure games used a parser to let the player type on the keyboard what he wanted his character to do. This often resulted in a frustrating experience in which the player had to struggle to figure out what the program wanted to see (for example, phrases like "open door with key" or "use key in door" wouldn't work if the computer only wanted "unlock door with key", even though all of these phrases describe the same action). When budding game programmer Ron Gilbert was hired by Lucasfilm Games (now LucasArts) and got the green light to create his own adventure game, he didn't want the player to struggle with the parser anymore. He created a tool to allow the player, using a mouse, to choose from a set of common verbs displayed at the bottom of the screen, and combine them with the items and characters displayed in the game. This allowed the player to easily build sentences that the computer would always understand. He named this tool SCUMM (Script Creation Utility for Maniac Mansion) which Lucasfilm would keep using for ten more years.
Maniac Mansion was also one of the very first games featuring "cut-scenes", scenes that cut away from the action to let you know what's going on in a different part of the game, providing background information or clues. Nowadays this feature has become a staple of adventure games and RPGs.
Last but not least, Maniac Mansion featured different playable characters with different abilities, different ways to solve the game, and different endings. While this (unfortunately) didn't really catch on in later adventure games, it is definitely part of what made Maniac Mansion special.
What makes Maniac Mansion so good
Maniac Mansion is a clever mix of horror and comedy. It's tense enough to keep you on your toes and make you jump when you unexpectedly bump into Dr.Fred's family while exploring the mansion, and funny enough to keep it light-hearted and make you laugh. The different characters, different ways to solve the game and different endings make it unbelievably replayable for an adventure game, a genre that is usually known for its low replayability. Once you beat it with one set of characters, you will want to see how you can beat it with another, and how you can get a different ending. There are little secrets everywhere, so much that I kept discovering new tidbits about the game years after first completing it.
Maniac Mansion has black comedy moments and it is often random and unpredictable. It is clear when you play it that the developers had a great deal of freedom to make it, the kind of freedom they probably wouldn't get if they made the game today. Nowadays, most adventure games are much more streamlined and "safe". When you play Maniac Mansion for the first time, you can never guess what's going to happen next... because pretty much anything can happen, and that makes it a wacky, unforgettable gaming experience.
Main credits
Designed and written by Ron Gilbert and Gary Winnick
Scripted by Ron Gilbert and David Fox
Programmed by Aric Wilmunder and Ron Gilbert
Graphic art and animations by Gary Winnick
Original music by Chris Grigg and David Lawrence
Where to find Maniac Mansion
Maniac Mansion is a great but old game, and it can be difficult nowadays to find a first-hand copy. I wish it was easier to find, but there isn't a thing I can do about it. Some budget compilations featuring Maniac Mansion among other classic games exist, though they can be tough to track down and they are not available everywhere. Otherwise, your best bet is to look for a second-hand copy on auction websites. Unfortunately the Nes version is by far the easiest to find, but I urge you not to get that one. Read the "Versions" section to find out why.
Do not ask me about finding illegal copies of the game, I will not answer.
Day of the Tentacle
In 1993, a loose "sequel" was released, named Day of the Tentacle. I put "sequel" between quotation marks because, although it's an enjoyable game, it is only loosely based on Maniac Mansion:
- It was not designed by the same people.
- It is very different in terms of tone, gameplay, atmosphere and humour.
- You only play with one set of characters, there is only one way to beat the game, and there is only one ending, making it a much more streamlined game.
In fact, the only true similarity with Maniac Mansion is that some of the original characters are featured in it, namely Bernard, Dr.Fred's family, and the two tentacles.
A TV series very loosely based on the game ran for three seasons between 1990 and 1993. Apart from a few very superficial similarities, the game and the series have absolutely nothing in common.
All the original written content on this site (i.e. content that is not quoted from the game or its official documentation) was written by ManiacMansionFan. Please do not distribute or reproduce it anywhere without his explicit permission. This site is NOT endorsed by Lucasfilm Ltd. or LucasArts Ltd. It is a non-profit site made by a fan for fans, for entertainment and information purposes only. Maniac Mansion and all Maniac Mansion related characters and items are registered trademarks and/or copyrights of Lucasfilm Ltd. or LucasArts Ltd., or their respective trademark and copyright holders.
2014-23/2663/en_head.json.gz/2357 | Current Classes & Activities
What Is Metadata?
Metadata or "data about data" describes the content, quality, condition, and other characteristics of data. The concept of metadata is familiar to most people who deal with geospatial data issues. A map legend is pure metadata. The legend contains information about the publisher of the map, the publication date, the type of map, a description of the map, geospatial references, the map's scale and its accuracy, among many other things. Metadata are simply descriptive information applied to a digital geospatial file. Metadata utilize a common set of terms and definitions to document geospatial data.
Metadata helps people who use geospatial data find the data they need and determine how best to use it. Metadata benefit the data producing organization as well. As personnel change in an organization, undocumented data may lose their value. Later workers may have little understanding of the contents and uses for a digital database and may find they can't trust results generated from these data. Lack of knowledge about other organizations' data can lead to duplication of effort. The generation of metadata is an essential element of building a digital map database.
The FGDC "Content Standard for Geospatial Metadata" was developed to effect a common set of terminology and definitions for the documentation of digital geospatial data. The standard establishes the names of data elements and compound elements (groups of data elements) to be used for these purposes, the definitions of these compound elements and data elements, and information about the values that are to be provided for the data elements. There are 334 different elements in the FGDC standard, 119 of which exist only to contain other elements. These compound elements are important because they describe the relationships among other elements.
Metadata consist of information that characterizes data. Metadata are used to provide documentation for data products. Metadata helps publicize and support the data an organization has produced. This information must be standardized in order to facilitate information sharing and automated storage and retrieval technology. Online systems for handling metadata need to rely on their being predictable in both form and content. Predictability is assured by conformance to standards. Metadata that conform to the FGDC standard are the basic product of the National Geospatial Data Clearinghouse, a distributed online catalog of digital spatial data. The major uses of metadata are to:
- Maintain an organization's internal investment in geospatial data
- Provide information about an organization's data holdings to data catalogues, clearinghouses, and brokerages
- Provide information needed to process and interpret data to be received through a transfer from an external source
The information included in the standard was selected based on four roles that metadata play:
- Availability: Data needed to determine the sets of data that exist for a geographic location.
- Fitness for use: Data needed to determine if a set of data meets a specific need.
- Access: Data needed to acquire an identified set of data.
- Transfer: Data needed to process and use a set of data.
These roles form a continuum in which a user cascades through a pyramid of choices to determine what data are available, to evaluate the fitness of the data for use, to access the data, and to transfer and process the data. The exact order in which data elements are evaluated, and the relative importance of data elements, will not be the same for all users.
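As a loose illustration of how compound elements group simpler data elements (this sketch and its element names are invented for illustration and are not taken from the FGDC documentation), a fragment of such a record can be modeled as nested dictionaries, with a small helper to check whether a given element is present:

    # Hypothetical, heavily abbreviated metadata record. Compound elements
    # (dictionaries) exist to contain other elements, as described above.
    record = {
        "identification": {
            "citation": {"title": "Example trail map", "originator": "Example GIS Lab"},
            "abstract": "Brief description of the data set and its purpose.",
            "spatial_domain": {"west": -119.9, "east": -119.5,
                               "north": 34.6, "south": 34.3},
        },
        "metadata_reference": {"contact": "gis@example.org", "date": "19990601"},
    }

    def has_element(rec, *path):
        """Walk a nested path such as ('identification', 'citation', 'title')."""
        for key in path:
            if not isinstance(rec, dict) or key not in rec:
                return False
            rec = rec[key]
        return True

    # Supports the 'availability' and 'fitness for use' questions above:
    print(has_element(record, "identification", "spatial_domain", "west"))  # True
    print(has_element(record, "distribution", "access_constraints"))        # False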
2014-23/2663/en_head.json.gz/2574 | Jan 29, 2011 (12:01 AM EST)
Top Features Absent From Windows 7
SteadyState offered a single point of control for many system-protection features that exist in Windows 7, but are no longer managed through a single console.
Windows does not have, by default, a single all-encompassing mechanism for returning the entire system -- user settings, data on disk, etc. -- to a given state. There are plenty of scenarios where this is useful, from rent-by-the-hour PCs to computers in institutional environments like schools or libraries. But for a long time the only way to accomplish something like that was through a not-very-elegant combination of native Windows features and third-party products.
Microsoft introduced SteadyState as a free add-on for Windows XP and Windows Vista, but when Windows 7 came out, admins were dismayed to learn SteadyState didn't work reliably with it, and begged to have SteadyState updated for the new OS. Instead, Microsoft announced that support for SteadyState was being discontinued.
More On SteadyState: What Windows 7 Is Still Missing
A major feature of SteadyState was disk protection, which made it possible for a system to be restored to a given set-point after each reboot.
The "Base System Device" shown here requires a hardware driver only supplied by the motherboard manufacturer through its Web site. The burden is on the user to find and install them.
More On Hardware Driver Updates: What Windows 7 Is Still Missing
A Dell-specific hardware driver installer. Because this is only intended for specific devices, Dell releases it directly from their Web site to speed the distribution process.
AppSnap, a program for updating third-party apps, supports quite a few common programs on its own, but isn't a substitute for a true native mechanism for keeping applications updated.
More On Software Updates: What Windows 7 Is Still Missing
2014-23/2663/en_head.json.gz/8973 | Apple’s HTTP Live Streaming Proposal is Actually Pretty Cool, But it Needs Partners by Peter Kirn It takes two to tango, and lots of people for a line dance.
Yes, as the rest of the Web has noticed, Apple has just proudly touted the fact that it’s streaming its own press event in a format only people with the latest Apple devices can actually watch. Even Mac site TUAW, gearing up for today’s press event, thinks it’s pretty odd. But let’s skip straight to the good stuff: what’s this HTTP live streaming, anyway? The short answer is, it’s something cool – but it’ll be far cooler if Apple can acquire some friends doing the same thing.
Apple PR has this to say about their stream: Apple® will broadcast its September 1 event online using Apple’s industry-leading HTTP Live Streaming, which is based on open standards.
(Update – it may also help if you have a $1 billion server farm, as that could be the reason Apple is doing this at all. I’m, uh, still holding out for some magical nginx module, myself, but okay. How many billions would Apple have needed to reach more than Mac and iOS devices?)
Note that they never actually claim HTTP Live Streaming is a standard, because it isn’t. Apple has proposed it to the Internet Engineering Task Force, but it hasn’t been accepted yet. Meanwhile, as we’ve learned painfully in the case of ISO-certified AVC and H.264, just having a standard accepted is far from the end of the story – standards on paper aren’t the same as standards in use. Ironically, presumably all Apple means by saying HTTP Live Streaming is “industry-leading” is that they’ve done it first, and no one else has.
Apple can claim, correctly, that HTTP Live Streaming is “based on Internet standards.” In lay terms, you take a video, chop it up into bits, and re-assemble it at the other end. While common in proprietary streaming server software (think Flash), that hasn’t been something you can do simply with an encoder, a server, and a standard client. As Ars Technica explains, one key advantage of Apple’s approach is that by using larger slices or buffers – at the expense of low latency – you can count on higher reliability than real-time streams. And unlike previous approaches, the use of HTTP means you don’t have to worry about which ports are open. So you get something that’s reliable, easy to implement, and doesn’t require pricey additional software.
Other than that, it’s all basic stuff, meaning implementations should be easy to accomplish, software stays lightweight, and lots of clients could easily add support on a broad variety of desktop and mobile platforms. Here are the basic ingredients:
MPEG-2 Transport stream, set by the encoder.
Video encoding – Apple’s proposal suggests only that you use something the client can support, so while they require H.264 video and HE-AAC audio for their implementation, you could also use VP8 video and OGG Vorbis audio; you just have to hope the client has the same support.
Stream segmenter – this is the part that actually chops up the video.
Media segment files – the output of the streamer, this could be a video .ts file (the MPEG-2 format), or even, as Apple observes in their developer documentation, a standard M3U (M3U8 unicode) file, just as you may be accustomed to using with Internet radio stations and the like.
The client reads the result, by reading the standard playlist file. That’s the reason multi-platform, open source player VLC can read Apple’s stream.
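To make those pieces concrete, here is a rough sketch of my own (not Apple's reference tooling, and the exact ffmpeg flags vary between builds, so treat it as illustrative): segment a transport stream with ffmpeg, then write the plain-text M3U8 playlist the client will read.

    import glob
    import subprocess

    # Chop an MPEG-2 transport stream into roughly 10-second segments.
    # (Assumes a reasonably recent ffmpeg with the "segment" muxer; older
    # builds relied on separate segmenter tools instead.)
    subprocess.run(["ffmpeg", "-i", "input.ts", "-c", "copy",
                    "-f", "segment", "-segment_time", "10",
                    "seg_%04d.ts"], check=True)

    # The playlist is just a text file listing the segments in order.
    with open("stream.m3u8", "w") as playlist:
        playlist.write("#EXTM3U\n")
        playlist.write("#EXT-X-TARGETDURATION:10\n")
        for name in sorted(glob.glob("seg_*.ts")):
            playlist.write("#EXTINF:10,\n%s\n" % name)
        # For a live stream you would keep appending instead of ending it.
        playlist.write("#EXT-X-ENDLIST\n")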
It all makes perfect sense, and it’s actually a bit odd that it hasn’t been done sooner in this way. For the record, just streaming video over HTTP doesn’t cut it; you need exactly the kind of implementation Apple is proposing. The proposal is so simple, I’d be surprised if someone hadn’t implemented something similar under a different name, but then, I can’t personally find a case of that. Sometimes, technologists overlook just these kinds of simple, elegant solutions.
All of this raises an obvious question: why is Apple crowing about how cool it is that only they are using it? (“Look at me! I’m the only one on the dance floor!”) I suppose the message is supposed to be that other people should join, but that leads to the second question: where are the implementations?
There’s no reason HTTP Live Streaming couldn’t see free encoding tools on every platform, and still more-ubiquitous client tools. John Nack of Adobe muses that it’d be nice to see it in Flash. Browsers could work as clients via the video tag, as Safari does now. VLC appears to work as a client already. One likely missing piece there is the encoder. In their FAQ from the developer documentation, Apple lists two encoders they’ve tested:
Inlet Technologies Spinnaker 7000
Envivio 4Caster C4
This tech is currently used as a way of streaming to iPhones specifically, but it’s not exactly household stuff.
Client implementations shouldn’t be that hard. But that brings us to a climate in the tech world that, for all the progress on open standards, could still use some improvement.
Making interoperable technologies work requires building partnerships. Apple hasn’t exactly been focused on building bridges lately, it seems. Nor are they alone; today’s lawsuit-heavy, savagely competitive, politically-charged tech environment seems to have everyone at each other’s throats. I’m all for competition. Friendly competition can even help standards implementation: witness the near-daily improvements to rival browsers Safari, Firefox, Chrome, and others, all of which are made by a group of engineers who share a common interest in getting compatibility for these innovations in the near term, and within a standard framework. A little one-upsmanship on getting those things done first or better is absolutely healthy.
But even as the draft HTML5 spec continues to evolve and open Web standards improve, badly-needed, genuine working partnerships seem to be fewer and further between. Posturing between competitors isn’t helping.
And nor can I find evidence that, while this is in draft, it’s set up for people to implement. Even the draft document begins by telling you you’re not allowed to use it:
Pursuant to Section 8.f. of the Legal Provisions (see the IETF Trust’s Legal Provisions Relating to IETF Documents effective December 28, 2009), Section 4 of the Legal Provisions does not apply to this document. Thus, to the extent that this Informational Internet Draft contains one or more Code Components, no license to such Code Components is granted. Furthermore, this document may not be modified, and derivative works of it may not be created, and it may not be published except as an Internet-Draft.
So, in other words, you can read the draft, but you can’t use the code in it, and you can’t make derivative works of the draft. (As far as I know, this is standard boilerplate for IETF drafts. But then, much legal writing in general can be summed up in one word: just, “No.”)
1. HTTP Live Streaming is super cool.
2. It’s based on open standards and should be easy to implement.
3. Let’s hope we get implementations.
4. This PR stunt aside, it’s unclear what efforts Apple has made to reach out to anyone else doing an implementation, though information is sketchy.
Regardless, this somewhat odd move will certainly raise visibility of the tech. Whether that lasts beyond today’s media event remains to be seen.
Here’s where to go for more information.
HTTP Streaming Architecture [iOS Developer Library]
Apple proposes HTTP streaming feature as IETF standard [Ars Technica]
Image (CC-BY-SA) Ryan Harvey.
Updated: I’m indeed remiss in not talking about the excellent open-source, Java-based Red5 media server:
http://red5.org/
And to Adobe’s credit, open standards support in Flash – along with tolerance in place of litigation – is part of why such projects can exist.
In fact, I don’t see any reason Red5 couldn’t be the basis of a solution that streams to browsers using the video tag. I’ll try to follow up on that very topic, because I’m ignorant of the details.
StreamGuru:
Peter – Great read. Thanks for publishing. To be clear, Apple's implementation is the most open approach to media delivery and streaming – ever. iOS stream delivery eliminates the need for proprietary servers like the Flash Media Server, Windows Media Server/IIS, Darwin, Helix, etc. etc. This is a significant move forward, and Apple's decision to stream today's event only came as a result of HTTP streaming making it possible to keep up with crushing demand.
What they are doing is cool and has been used for a great number of live events over the past 18 months. Can it be done via free or open source tools? While the spec is reasonably open, pulling off high quality live events at scale isn't child's play. The tools must be enterprise grade and extremely reliable. Net – this is a good thing and advances a space that has been evolving for nearly 15 years. Welcome to the world, Neo. Peter Kirn
@StreamGuru: Yeah, I agree, and I hope I didn't obscure my genuine enthusiasm. The tech looks fantastic. I'm still intrigued by what it'd take to get more implementations. On the client, it's easy; it seems like it's a lack of server implementations, then, that would hold it up. (There's no reason why there shouldn't be the same implementation in Chrome as in Safari, for instance.)
The server side, obviously not child's play — although free software is no stranger to enterprise-class solutions. I'm unclear from Apple's documentation what would be necessary to assemble the server, or why we haven't seen more implementations. http://www.matthewfabb.com/ Matthew Fabb
For live events, Adobe's new "multicasting" technology basically streaming video peer-to-peer, in Flash Player 10.1, while closed source sounds like a cheaper solution for live events. Basically, taking the pressure off of the server and using clients to push part of the video stream, radically reducing the bandwidth and server costs. So while you would have to pay for Adobe's server technology to support this, you wouldn't quite need a $1 billion server farm to pull off a big live event.
That said, while Adobe has promoted this technology, I've yet to hear or see a big live implementation of this. However, Flash Player 10.1 while in beta for quite some time only launched this June, so it's still early. http://www.imagineer.net.au asterix
Good article and well thought through.
Do you think this has big role in apples history of opposing flash? Flash is after all truly cross platform and can realistically integrate whatever wrapper / compression they like without requiring modifications to the html specification.
Either way, apple have always been good at revolutionising technology. Congrats to them for that. I just hope they don't get too power hungry. Peter Kirn
@asterix: That's an easy one — yes. HTTP Live Streaming can replace both Flash client and the pricey Flash server.
@Matthew: I don't know that you need a $1 billion server farm. I was joking about that. I don't know that there's anything about this that would be more intensive than running Adobe's server software, at least theoretically, unless I'm missing something. But, of course, without a whole lot of options here, there's not much to even discuss… and the lack of client software means there's yet another chicken and egg problem. cache
just an FYI, there are free, open source Flash media server software options:
http://red5.org/ The lack of knowledge around Flash (and the immediate buy-in to Apple propaganda) is why people are so against it…
great article though Peter. Peter Kirn
@cache: Good point, added to the story. Red5 is terrific (in fact, I'd love to find an excuse to run a server, even as an experiment.) Are you a Red5 user?
Look, I'll defend the efforts of browser developers to move beyond reliance on Flash as a crutch. After all, they're *browser developers*; I'd say it's partly their job to try to make the browser stack as strong as it can possibly be. And I like Google's approach in particular: they're able to advocate doing this stuff in the browser, while at the same time supporting both technologies in their YouTube product, and – looking at what happens in Chrome – they continue to push updates that improve the performance and reliance of Flash. (I've seen some significant improvements on Linux in recent builds, which is saying something.)
That said, yes, I'd love to talk more about technologies like Red5. The Flash Media Server simply isn't economical for some users, and the industry benefits from having open source developers working on the same problems, even in parallel. I'm not terribly impressed by Apple talking about standards and openness while blatantly ignoring interoperability, which is supposed to be one of the benefits. I think you have to have a balance. cache
I'm not a Red5 user, but have been watching the project for some time. From what I understand, it uses the same RTMP protocol that Adobe's media servers use, but perhaps there's more support for other protocols. I've used the Akamai CDN at my day job for Flash media streaming, and I definitely appreciate a move towards something a little more "standardsy".
This new Apple technology is certainly a different system, and looks promising, at least as a more straightforward streaming technique. Like you said, industry support will determine whether their implementation is vaporware, or yet another Apple-specific technology…
I support the drive to make browsers more featureful and integrated. At the same time, the array of browsers is becoming more splintered, and cross-browser implementations are getting more complicated. It's an interesting time, no doubt. I've been moving away from Flash as my clients want mobile, and apps. What really irks me is Apple's propaganda about how they support open standards… Most recently, I've been making iPad apps, and iPad-optimized sites with HTML5, and while it's nice to use the Webkit transform CSS, it's utterly proprietary. Even Chrome doesn't support most of the 3D transform functions. These days I'm especially unexcited about Apple's new technologies, given their misleading marketing and questionable tactics. Peter Kirn
@cache: Well, technically they say "based on open standards" — which things like MPEG-2 transport and M3U and (since it's ISO-certified) H.264 certainly are.
The problem with this argument is that they imply that Flash isn't — which by their own measure is simply wrong; Flash can claim exactly the same things in this case. And then there's the questionable wisdom of throwing interoperability and compatibility out the window in the name of abstract, on-paper standards.
I disagree that browser implementation is becoming more fragmented — depending on what we're talking about, though. Layout, canvas, WebGL, even the audio tag, all represent steps forward in HTML5, and they work pretty darned well across browsers, especially compared to what we've had in the past. Netsockets I think *might* work out, though it's just not ready yet. Native Client, video, OS integration (like drag-and-drop), for now at least, are definitely a step to greater fragmentation. (Native Client to me is the biggest problem there; I'd like to see Google address that.)
On the other hand, I'd describe the general landscape as being marked by change and innovation, and by and large, I think that's good. I think things are getting better faster than they're getting worse. I think you just have to differentiate between what's experimental and what isn't. We need both, but you can't start pushing experimental features on users before they're fully baked. You wind up discrediting a technology before it has a chance to mature, for one thing. Peter Kirn
And, uh, all of this is overshadowed by the fact that Apple's stream isn't working. I'm sure people aren't all *that* glued to an iPod press conference. As Adobe is quick to point out, the most-watched video stream ever, using Flash and Flash Media Server, worked just fine. And I know Red5 has handled some big jobs.
I mean, the bottom line is, whatever you're doing, it has to *work*. I don't think that means the idea is necessarily wrong, but *your implementation* has to work. You have to keep your server up. http://www.matthewfabb.com/ Matthew Fabb
I meant there's yet to be a large scale test of streaming a live event using the new Flash peer-to-peer video, where a large number of Flash clients are getting the video stream not from the server but other Flash clients. So I don't doubt that Adobe's media server would be able to handle it, it's how smooth or choppy the video stream would be on the client side of things. I imagine in a year's time the majority of web users will have Flash Player 10.1 and then we can see how it runs. Perhaps Adobe might use it for their Adobe MAX conference coming up this fall.
Also I know you were joking about the billion dollar server farm, but for anyone doing live video events with big audiences need pretty huge pockets to handle the bandwidth and server costs. A peer-to-peer solution would have a HUGE reduction in costs and hopefully meaning more online live events as it becomes more affordable. cache
"throwing interoperability and compatibility out the window in the name of abstract, on-paper standards" agreed!
I heard about the Apple stream flailing..
Agreed that things have to work, and that there's a difference between experimenting and having a user base that's ready. I work at an advertising agency, and we want our work to be seen everywhere while pushing the limits of available technology. The penetration of browsers that support all the fancy HTML5 stuff is really not good… never mind the issues with actually using the audio object, different formats supported per browser, WebGL having to be manually enabled in Firefox, HORRIBLE implementations of HTML5 audio and video on the iPad… meanwhile, Flash has become undesirable because of the huge mobile user base. We're definitely in a transitional period. It's coming, but right now it's all very fragmented and "beta". I really hope things unify and users upgrade so we can actually write complex web things without special cases for 17 different browsers (something I'm doing right now on a really fancy Flash-like .js-powered UI).
anyway, thanks for the thoughts. Peter Kirn
@cache: Well, WebGL is a perfect case in point. It's not enabled by default, I think, for a reason – it's not done. I think they're doing the right thing in getting it out there and allowing people to hack on it, because it means that it's more likely it will mature and be ready for people to use.
But you're absolutely right – that doesn't mean you should be shipping things in it, or overhyping it, or claiming it's done when it's not.
There's also some painfully exaggerated hype about what the browser does for you that I've never fully understood. It just seems to me that you should first conceptualize what you want to do, then match the technology to fit. Now, if you're on the browser team, you should absolutely do the opposite; if you want to experiment and push the technology, that's great, too. But you don't have to then claim it's the right tool for absolutely everything. Let the tool speak for itself. You don't have to warp the universe around it. igor
There is already an open source solution for segmenting video files and creating m3u8 playlists (based on ffmpeg); we're using it to publish videos for iPhone/iPad, because there's a 10-minute limit policy for iOS applications in the App Store.
http://www.ioncannon.net/programming/452/iphone-h…
Pingback: MediaCollege.com
Actually, VLC nightly builds (the 1.2 series) can do that HTTP Live Streaming for iPhone somewhat easily. It needs some CLI-fu, but I'm doing it currently without major issues other than delay compared to an RTSP/RTP stream.
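For readers who want to try the ffmpeg route igor describes above, a minimal sketch of the segmenting step is shown below as a small Python wrapper around ffmpeg. It assumes a reasonably recent ffmpeg build that includes the hls muxer; the file names and segment length are placeholders, not anything taken from the comments above.

    import subprocess

    def segment_for_hls(source="talk.mp4", playlist="stream.m3u8", seconds=10):
        """Cut a finished video into HTTP Live Streaming segments plus an .m3u8 playlist."""
        cmd = [
            "ffmpeg", "-i", source,            # input video (placeholder name)
            "-c:v", "libx264", "-c:a", "aac",  # H.264 video and AAC audio, as HLS players expect
            "-f", "hls",                       # use ffmpeg's HLS muxer
            "-hls_time", str(seconds),         # target segment duration in seconds
            "-hls_list_size", "0",             # keep every segment in the playlist (video-on-demand style)
            playlist,
        ]
        subprocess.run(cmd, check=True)        # raises if ffmpeg exits with an error

    segment_for_hls()

The resulting stream.m3u8 playlist and .ts segments can then be served from any ordinary web server.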
Posted Aug 1, 2003 - April 2005 Issue
Synopsis: With Nero 6, Ahead has attempted to make their product friendlier for mainstream users, but hasn't always succeeded in their implementations of consumer-oriented interface changes. Nero remains a solid package for CD and DVD recording with a tremendous (and much-expanded) wealth of features, and packs plenty of power for those users who want it. But it's hard to be all things to all people, and if Ahead is determined to make Nero the tool of choice for consumers and power-users alike, they need to find ways to integrate the two identities more cleanly.
Software users often take comfort in pigeonholing software packages and CD/DVD recording software is no exception. Based on this type of thinking, one could say, Nero is the tool for the technically minded, while Roxio Easy CD & DVD Creator 6 is the tool for consumers. (True, Roxio has its own pro-oriented tool in CeQuadrat WinOnCD, but that product lacks the visibility of ECD or Nero, at least in North America.) Though there is a nugget of truth in generalizing Nero as a pro's preference and ECD as a consumer's choice, with the release of Nero 6 Ultra Edition, the line that had made it so easy to divide the two programs in previous versions has become somewhat blurred (and to a certain extent it already had begun to blur with version 5).Much like Roxio, Ahead has decided to make its flagship product kinder and gentler for the average user. To this end, they have added a project launcher interface similar in many ways to Roxio's program selector. In addition, Ahead has decided (like Roxio) that recording CDs and DVDs wasn't enough for one program and has rebuilt itself into a digital media one-stop shop providing such new elements as Nero Vision Express 2, a tool for capturing video and creating DVD movies and slideshows, a DVD viewer (so you don't have to use a third-party viewer any longer), backup software, audio editing, enhanced MPEG-4 support (a feature they expect to take on greatly increased significance in the next year), and more.While Ahead has for the most part maintained the flexibility and control that made Nero a favorite of more technical end-users in the past, by trying to cross the line into consumer-friendly tool, they have in some ways made the program more confusing to use. That's because those "friendly" features have been implemented to a large extent in an awkward fashion. There's certainly a lot to like about this program, but they need to work harder if they truly hope to make the leap to the consumer side, or to mix their two alter egos into one program. This review looks at some of the highlights of the latest version.Installation Shuffle In an effort to give users control over the installation process, Ahead has divided the install process into a series of separate installations, the theory being that the users can pick and choose the items they want. Unfortunately, if you want to install the entire feature set, this process becomes tedious because as you complete each installation, you need to move down the menu, pick the next item, and go through the same process several times (and there is no system to tell you which ones you've already installed).It would have made more sense to use an Office-like single installation interface (as Roxio did) where users can pick and choose the elements they want to install and can easily see which tools are already installed. While Ahead deserves praise for taking into consideration that every user might not want to install every piece in the package, they could have done a better job designing the installation interface.That said, the applications installed fine on the test PC, a Sony VAIO Pentium 3 running Windows 98SE with 256MB RAM, with a Datoptic Speedzter 5 DVD Recorder used as the test recorder for both CD-R and DVD-R. 
Nero will run on all versions of Windows from 95B to XP (plus NT); minimum system requirements include 300mHz Pentium 3 for basic CD recording; 500mHz for DVD playback; 800mHz for video capture; and 1.6gHz for direct-to-DVD capture and recording; and 64MB RAM (128MB RAM recommended).Is SmartStart a Good Start? Although Nero still provides users with the ability to launch each program in the product separately, they have attempted to integrate the entire package into a single interface dubbed Nero SmartStart. This bone is tossed to the consumer side who want to see exactly what is included in the program; again, Ahead gets credit for effort, but the implementation needs some adjustment.The interface opens with a set of icons with a single sentence for instructions. There is no additional Help available, which really should be a must if they want to make this attractive to a consumer audience. It's up to the user to figure out several key concepts here.First of all, you need to run your cursor over each icon to display the various features associated with that choice. Next, you need to understand (so long as you have a CD and DVD recorder installed) that there are two small toggle switches at the top of the screen labeled CD and DVD, and you need to click the appropriate toggle to reveal the correct tools. In addition, there is another, not terribly obvious, button at the bottom of the window that allows you toggle between "Standard" and "Expert" modes. Clicking the Expert toggle allows you to see additional functions that Ahead has decided are too complex for the standard user. Unless you happen upon the button and view the tool tip, however, it's hard to tell it's there.Finally, there is an Expand window button on the left side of the window. Clicking this button reveals a list of the full suite of tools. It might have been smarter to display this view as the default. It's important to note that Roxio had a similarly awkward interface in ECD 5 before opting for a more streamlined approach in the latest version. Ultimately, this menu approach needs some tweaking to be a truly useful integration tool. | 计算机 |
2014-23/2663/en_head.json.gz/12053 | The Enterprise System Spectator
SSA cornering the market on AS/400 enterprise software. SSA just announced that it is acquiring Infinium (formerly Software 2000), a vendor of financial systems for the AS/400 (IBM iSeries). Infinium's 1,800 customers are located mostly in North America in process manufacturing, gaming, hospitality, and healthcare. Infinium products include modules for human resources, payroll, financial management, CRM, materials management, process manufacturing, and corporate performance management. This latest announcement follows on SSA GT's acquisition earlier this year of Computer Associates Interbiz group, which was CA's collection of enterprise system acquisitions, such as PRMS, KBM, MK, MANMAN, Warehouse BOSS, CAS, Masterpiece, and Maxcim. Last year, SSA bought MAX International, a small company ERP system. Most of the systems in SSA's portfolio are AS/400-based, with its original flagship product, BPCS, being one of the major AS/400 ERP systems through the 1990s. SSA's strategy appears to be to acquire as many of the remaining AS/400 packages as possible and continuing to receive the maintenance revenue stream from their installed base clients. (SSA is the only ERP vendor I've seen that actually has a page on its Web site aimed at parties interested in being acquired). A number of SSA's clients are large multi-national companies that SSA would love to keep in the fold. It remains to be seen whether SSA can keep existing clients satisfied to the extent that it is not worth the effort for them to switch. by Frank Scavo, 10/31/2002 08:26:00 AM | permalink | e-mail this! Read/post comments! (0)
HIPAA privacy compliance may be taking back seat to EDI. In 2003, medical providers will face HIPAA deadlines for privacy regulations as well as mandated use of EDI for submitting claims for payment. But for those that can't meet both deadlines, the lesser of two evils might be to let privacy compliance slip and be sure to get electronic transactions implemented in time. The risk analysis is simple, according to Jeremy Pierotti, consultant with Partners Healthcare Consulting. "The chances of being inspected by the Centers for Medicare and Medicaid for privacy violations are low, but the chances of not getting paid for submitting a non-standard claim are high," he says. Health Data Management has more. by Frank Scavo, 10/30/2002 07:28:00 AM | permalink | e-mail this! Read/post comments! (0)
Manhattan Associates emerging as leader in warehouse management systemsManhattan Associates just announced its 2002 Q3 results, coming in with net income of $6M, a 30% increase over same quarter last year. More impressively, the profit increase didn't come just by cutting expenses. Revenue rose 8% to nearly $43M. In this market, those are excellent results, and Manhattan has been putting up strong numbers on the board since the late 1990s. The standalone WMS market is highly fragmented, with hundreds of niche vendors. Manhattan's main competitor has been EXE Technologies, a well-established vendor with whom it was evenly matched a couple of years ago. But that has all changed, as Manhattan Associates has continued to grow through the recession, while EXE has shrunk. Other major WMS players include Catalyst, HK Systems, McHugh, and Optum. But none has been able to turn in results similar to Manhattan's.
Separately, Manhattan last week signed a letter of intent to buy Logistics.com, for about $20M. The addition of Logistics.com transportation management systems and services will extend Manhattan's core WMS products, allowing it to provide a more comprehensive supply chain planning and execution suite of offerings. by Frank Scavo, 10/28/2002 08:24:00 AM | permalink | e-mail this! Links to this post
Buzzword alert: "rich client"No, it doesn't mean a client with a lot of money. But first, some background. Until recently, software vendors spoke of client/server presentation schemes as falling into two main categories: "thin client" and "fat client." A thin client architecture is one where business rules and processing logic reside at a server level and the desktop client machine is used only for user presentation. Because a thin client architecture typically makes use of a Web browser, or other general purpose GUI presentation program (e.g. Citrix ICA client), no application program code need be installed or maintained on the client machine. Hence, the client is said to be "thin." Fat client, on the other hand, is an architecture where application programs are installed on each desktop machine.
The advantage of thin client is that it is simple to administer and maintain. But, as anyone who uses browser-based applications knows, thin client systems do have some drawbacks. They are typically slower, especially when deployed over a wide-area network, since even routine data validation requires processing at the server level. Furthermore, they can be cumbersome, since the browser interface (even using DHTML) does not have as many capabilities as a native Windows application. The fat client architecture overcomes these limitations by doing much processing locally, and by taking advantage of the GUI functionality of the desktop operating system (e.g. Windows). On the other hand, a traditional fat client system is difficult to administer because you have to maintain program code on each desktop. Software developers are looking for an alternative. Hence, the term "rich client." Now it might be tempting to treat the term "rich client" as simply a synonym for fat client and leave it at that. And, I'm afraid that's what many application vendors are going to do--simply rename their old fat client systems as "rich client." But the real concept of a "rich client" goes farther. Rich client, as its proponents maintain, is an attempt to capture the benefits of a thin client architecture without the limitations. How? By using presentation technology that is more feature-rich than a Web browser but still allows the application logic to reside at the server level. The main contenders for rich client development platforms include Macromedia Flash MX, which takes advantage of the fact that Flash is already installed on the majority of user desktops, Microsoft's .NET Framework, using features such as managed code and Windows Forms, and Sun Java WebStart. Additional candidates for a rich client development environment include Droplets, an application platform for Java and C++ development, Sash Weblications, an IBM toolkit from Alphaworks, the Altio client applet and presentation server, the Spidertop Bali development tools for Java, and the Curl Surge Lab Integrated Development Environment (IDE).
For more details on the trend toward rich client, see Ted Neward's post in his weblog on the O'Reilly Network. Update: For a follow up discussion on tools for rich client, see my post on Nov. 17. by Frank Scavo, 10/25/2002 03:26:00 PM | permalink | e-mail this! Read/post comments! (0)
Getting tough on ROI. InformationWeek has a good article that explores various methods that companies are using to evaluate the return on investment from IT projects. Among the companies with best practices for managing ROI: Schneider National, Inc., which categories IT projects according to whether they reduce costs, increase revenue, or simplify business processes, allowing management to use a different set of criteria in justifying each category of project. Schlumberger, which uses worst-case, likely case, and best-case scenarios, allowing management to focus on those factors that reduce the risk of realizing the worst-case. Vanguard, where executives meet in "sunlight sessions" to debate and challenge the projected benefits of a proposed IT initiative, allowing overly optimistic assumptions to be caught before the project is approved. The article also includes a case study of Citibank, which used a complex valuation simulation tool to trace the potential impact of an executive portal project on Citibank's stock price. Although such an approach has certain appeal, I believe that the complexity far outweighs the benefits for all but the largest companies. In my experience, most companies would benefit simply by doing a reasonable cost/benefit calculation at project initiation, verifying the assumptions, managing risks, and measuring the results after implementation. by Frank Scavo, 10/23/2002 09:33:00 AM | permalink | e-mail this! Read/post comments! (0)
Siebel makes strategic bet on MicrosoftCRM vendor Siebel is the latest major player to pick sides in the Web services platform battle between open J2EE standards and Microsoft's .NET Framework. Although Siebel will continue to support and interoperate with J2EE-built applications, it is putting the bulk of its development effort into .NET. In addition, Siebel will optimize its applications for Microsoft server operating systems and its SQL Server database. In return, Microsoft has agreed to make its Biztalk integration server compliant with Siebel's Universal Application Network (UAN). This will allow existing Siebel installs as well as other software that is UAN-compliant to interoperate via Biztalk. The new version of Biztalk is expected to ship in early 2003. Siebel's move to get close to Microsoft is in contrast with other large enterprise systems vendors, such as SAP, which is a strong supporter of J2EE, and J.D. Edwards (JDE), which is in near total alliance with IBM, another strong supporter of J2EE. Siebel's move is not without risk. Microsoft is rolling out its own small/mid-tier CRM solution. Even though Siebel sells mainly to larger organization, at some point Siebel may find itself competing with Microsoft for CRM deals in mid-size companies. Furthermore, by identifying with Microsoft, Siebel risks alienating those large company prospects that have standardized on J2EE as well as on non-Microsoft databases and operating systems. Reportedly, the bulk of Siebel customers (65%) run over Oracle databases, with 25% on MS SQL Server, and the remaining 10% on IBM's DB2. How will all those Oracle and DB2 shops feel about Siebel's favoring Microsoft technologies? CRN has more analysis of the Siebel/Microsoft announcement. by Frank Scavo, 10/22/2002 09:22:00 AM | permalink | e-mail this! Links to this post
Epicor picks up Clarus e-procurement products for a songLast week, Clarus announced that it's selling its core products to Epicor for a mere $1 million in cash. Clarus is/was a best-of-breed vendor of indirect e-procurement systems such as applications for expense management, private trading exchange (PTX), reverse auctions, and electronic settlement (electronic bill presentation and payment). Clarus was in the same space as major e-procurement players, such as Ariba and Commerce One, and Clarus had a few big name accounts and partners, such as Microsoft. Epicor, formed from the merger of Platinum and Dataworks a few years ago, has gotten a reputation on the street for carrying too many products in its portfolio. Most were acquired by Dataworks, prior to the merger with Platinum. These days, Epicor is focusing on actively developing and marketing only a few of them, specifically its "e by Epicor" product line (based on the Platinum products), its Avante, Vantage, and Vista manufacturing systems (based on the Dataworks offerings), and its well-regarded Clientele products for customer service and technical support. At first glance, it would appear that Epicor is continuing its tendency toward product proliferation with its acquisition of the Clarus products. Nevertheless, if Epicor can integrate the Clarus products horizontally across its other offerings, the acquisition will look like a smart move a few years from now. Especially at the fire-sale price it's paying. AMR provides a good analysis of the acquisition. by Frank Scavo, 10/21/2002 07:32:00 PM | permalink | e-mail this! Read/post comments! (0)
ERP does improve business performance, if implemented correctlyHBR Working Knowledge has a good interview with Harvard Business School's Mark Cotteleer regarding his survey on the effect of ERP on company performance. In the most recent survey, 86% of IT executives describe their ERP implementations as successful and over 60% report that the benefits exceeded expectations. On the other hand, 14% of implementations are "troubled or abandoned," and 40% either just met or fell short of expectations. Recognizing, as Cotteleer says, that "successful" and "painless" are not the same thing, what makes some companies successful while others fail? The answer is in how ERP is implemented. Cotteleer goes on to describe some useful maxims for successful implementations, including: "stay the course" (i.e. give users time to get used to the new system before rushing off to resolve non-critical issues); "the devil is in the data" (e.g. "Over the years we have witnessed pitched battles erupt over, for example, how to define units of measure"); and "know the difference between understanding something and liking it." On that last point, Cotteleer says, "Implementations get bogged down when ... project managers focus on finding a way to make everyone happy. Sometimes that way does not exist. Managers should recognize that and move on when needed."
by Frank Scavo, 10/18/2002 01:56:00 PM | permalink | e-mail this! Links to this post
SAP sales and earnings are ... upSwimming against the tide of diminishing earnings from enterprise application vendors, SAP this week reported 2002 Q3 net income of $198 million, compared with $36 million for the same period in 2001. Interestingly, while product revenue overall was up 3% to $1.65 billion, revenue from CRM sales was up 19% while SCM was down 3%. SAP's press release has more details. SAP is clearly benefiting in this weak market from its large installed base and dominant position worldwide to maintain license revenues at the expense of most other ERP vendors such as Peoplesoft, which just reported an 11% drop in Q3 net profit and a 20% drop in license revenue. SAP is also getting some uplift from sales of SCM and CRM products, at the expense of stand-alone supply chain and CRM vendors such as i2 and Siebel. For evidence just look at Siebel's quarterly results this week, where it reported a whopping net loss of $92 million on a 34% drop in license fees. by Frank Scavo, 10/18/2002 01:53:00 PM | permalink | e-mail this! Links to this post
Buzzword alert: Part 11 complianceOver the past few years, a number of software vendors selling into the pharmaceutical and medical device industries have been claiming that their systems are "Part 11 compliant." Here's some background. In 1997 the US Food and Drug Administration (FDA) issued its final rule on the use of electronic records and electronic signatures, publishing it in the Federal Register under 21 CFR Part 11. Hence, the term "Part 11." Essentially, Part 11 provides criteria by which companies regulated by FDA can use electronic records and electronic signatures as equivalent to paper records with handwritten signatures in meeting FDA regulations. Furthermore, FDA investigators have started to inspect companies' computer systems for compliance to Part 11. In some cases, companies may find it easier to replace legacy systems than to remediate them. This has created a market opportunity for software vendors serving FDA-regulated industries. However, some software vendors, hoping to win a piece of this business, make claims about their systems that go too far. Here are a few examples, without naming the vendors: "[Package name] is fully compliant with 21 CFR Part 11." .... "[Vendor name] has developed proprietary software utilizing 128-bit encryption technology that fully complies with 21 CFR Part 11." .... "This solution is 21 CFR Part 11 compliant and will provide an immediate solution to using electronic signatures with minimum investment and minimal impact on legacy systems." .... "[Package name] is 100% compliant with the US Food and Drug Administration (FDA) final ruling on Electronic Records and Electronic Signatures referred to as 21 CFR Part 11." And my personal favorite: "Are you concerned about Title 21 CFR Part 11 FDA regulations governing electronic records and electronic signatures? Don't be. The FDA edition of [package name] is fully compliant." The basic problem is that these claims imply that the packages themselves are "compliant," whereas FDA regulations and guidance make it clear that it is the end users and their companies that must be compliant. One package may be easier than another to implement in a compliant fashion. But compliance is much more than buying and implementing a certain package. In a recent meeting with one software vendor (the minutes of which are public record), FDA representatives made the following simple and clear comment: "During the meeting we discussed the appropriateness of representing software as 'part 11 compliant.' We explained that the term is a misnomer because people who are subject to part 11 are responsible for compliance with the rule and because achieving compliance involves implementing a collection of administrative, procedural, and technical controls. We suggested that where software has technical features that are required by part 11, it would be appropriate to map those features to particular part 11 controls and then let prospective customers determine for themselves the potential suitability of the software in their own circumstances."
Open source ERPA couple weeks ago, I asked to hear from anyone who knew of a truly open source ERP system. I didn't get any responses, but I did come across a research note by Paul Hamerman of Giga Information Group on Compiere, which appears to fit the definition. Compiere's license agreement, which is modeled after that of Mozilla and Netscape, provides source code to users at no charge. However, the product is built over the Oracle database, and Compiere is an Oracle database reseller. So the company evidently is giving away the Compiere source code and pulling through Oracle license sales as well as training and implementation. It's an interesting approach, although one would imagine that companies looking for "free software" would be reluctant to turn around and buy Oracle database seats. I also wonder how many small and midsize firms, which would be the natural market for Compiere, would be willing to invest in the requisite effort to keep up on patches and fixes. The whole open source movement, to me, makes more sense for operating systems, tools, and utilities, which can leverage a much larger development community. It would seem that the higher you go up the technology stack toward complex business applications, such as ERP, the more difficult the open source model would be to sustain. I would like to be wrong on this one, but I'm still waiting for an "existence proof." by Frank Scavo, 10/15/2002 11:25:00 PM | permalink | e-mail this! Links to this post
Wal-mart still pushing its suppliers to Internet EDIThere was more news this week on Walmart's Internet EDI initiative. Both IBM and Sterling Commerce announced that they have been selected by Wal-mart to provide integration and network services for 8,000 of its suppliers as they move to Internet EDI (EDI-INT AS2 standards). According to the IBM press release, IBM was chosen because of its global reach, its experience in the apparel, consumer products, and retail industries, and its implementation services. IBM will also provide VAN services as a backup contingency to EDIINT. In addition, IBM points out that it has built AS2 support into its WebSphere integration products, allowing Walmart suppliers who use Websphere an easy migration path to AS2. According to the Sterling Commerce press release, Sterling will serve in a similar capacity, giving Walmart and its suppliers a choice of two such providers. Sterling, of course, is a major EDI provider, and one of the companies that helped develop the AS2 specification.
One of the early barriers to Internet-based EDI is that the Internet by itself does not provide the level of data security and reliability that is required for B2B commerce. This is why EDI traditionally uses value added networks (VANs) for transport. The AS2 standard eliminates this barrier by providing, among other things, a reliable and secure Internet messaging protocol, using public key encryption (PKI). When I reported on Wal-mart's EDI AS2 initiative on Sept. 17, the Spectator started to receive a large number of search engine hits for that post, probably from all those Walmart suppliers that are no doubt highly interested in what hoops they will need to jump through for this major customer. As I noted previously, the positive impact of Walmart's action for Internet commerce cannot be understated. It is the sort of "supplier mandate" that could greatly speed up general adoption of Internet-based EDI and B2B commerce in general. by Frank Scavo, 10/12/2002 01:40:00 PM | permalink | e-mail this! Links to this post
More thoughts on Home Depot's priorities. My associate Lewis Marchand sent me some comments on my post regarding Home Depot's data warehouse/business intelligence project. Lewis specializes in business intelligence applications, so I was interested in his feedback. He says, "It definitely surprises me that a firm this size has not gone to BI before, given the complexity of their business. What they are proposing is huge and they are certainly going for broke." He also points out that Home Depot's plans to implement BI systems in three separate areas is aggressive. "I would think they would concentrate on areas with the greatest potential return first, which for them of course is their supply chain." Who knows? Home Depot has over 250,000 employees in 1500 locations. Perhaps it has found that employee performance improvement, satisfaction, and retention is currently the key constraint to success, and therefore a priority for business intelligence. Or, more likely, Home Depot believes that developing BI applications in the HR area is an easier first step than doing so in the supply chain planning function. Either way, it will be an interesting case to watch. by Frank Scavo, 10/10/2002 08:17:00 AM | permalink | e-mail this! Read/post comments! (0)
Everyone agrees: it's a tech buyer's market. Earlier this week, Gartner CEO Michael Fleisher told 5,000 IT executives at the Gartner Symposium ITxpo that if they are planning to buy anything, now would be a good time to do it. "This is, quite simply, the best market ever for technology buyers," said Fleisher. While pointing out that the IT industry is suffering from overcapacity and the absence of any "killer application" on the horizon, he forecast that "there is essentially no chance for a tech recovery in 2003." He also predicted that 50% of all technology brands will disappear by 2004. CRN has more details on Fleisher's talk.
Along the same line, Dylan Tweney, writing for Business 2.0, announces the death of the million dollar software deal. by Frank Scavo, 10/09/2002 09:49:00 AM | permalink | e-mail this! Read/post comments! (0)
Home Depot is on a shopping spree for data warehouse and business intelligence tools. Home Depot announced last week that it is planning a huge roll out of a data warehouse capability that will cost tens of millions of dollars. Business intelligence applications will be rolled out in three phases: 1) HR applications to provide analytical dashboards and metrics to improve employee performance, satisfaction, and retention, 2) inventory planning applications to give material planners near-real-time access to point-of-sale (POS) transactions to better manage supply, demand, and store assortments, and 3) supplier access to the inventory and sales data, so that trading partners can better manage demand and logistics. The system will be built on IBM's DB2 database running on an IBM AIX box with sixty (60!) terabytes of storage. This opportunity at Home Depot is an exception to the trend away from large complex software deals. Perhaps it is indicative of Home Depot's need to catch up with competitors such as Wal-mart (Walmart) that already have large analytic capabilities in place. It will be interesting to see what sort of creative deals vendors put together in order to play in this pond. Still on Home Depot's shopping list: extract, transformation, and load (ETL) tools as well as business intelligence (BI) tools for data analysis. Although, to my knowledge, no specific vendors have been mentioned as being on Home Depot's short list, expect all the major BI vendors as well as select supply chain vendors (e.g. i2, Manugistics) to want a piece of this deal. Since the data warehouse is being built on IBM's DB2 database, it is possible that IBM's Datawarehouse Manager would be considered, as well as one or more of the major ETL vendors, such as Informatica (PowerMart/Center), Ascential (Datastage XE), and SAS (Warehouse Administrator). Some leading vendors of analytic tools include Business Objects (BusinessObjects), Cognos (PowerPlay and Impromptu), Information Builders (WebFOCUS), MicroStrategy (MicroStrategy 7), Computer Associates (Eureka Suite), Sagent (Sagent Solution Platform) and Hummingbird (BI/Suite). by Frank Scavo, 10/07/2002 10:11:00 AM | permalink | e-mail this! Read/post comments! (0)
Buzzword alert: "open source"Over the past month or so, I've noticed some ERP and supply chain software vendors refer to their products as "open source," when in truth all they are doing is making their proprietary source code available to clients, something that many software vendors have been doing for years. You buy a license for Package X, and the vendor provides some (or even all) of the source code so that you can modify it, but only for use within the licensing entity. That last condition is what makes proprietary code propietary. When you license proprietary software, you have no right to redistribute the product, even if you have the source code. Open source, on the other hand, is a specific licensing model whereby the author(s) of the software freely distribute the source code along with the rights to redistribute it. The details of various open source licenses, such as the GNU General Public License (GPL), are more complicated, but that's the gist. For a more information on the meaning of "open source," see the definition on the Open Source Initiative (OSI) web site. Examples of true open source software include the Linux operating system and the Apache web server. To my knowledge, no vendor of significant ERP or supply chain management application systems licenses their system on an open source model. If you know of an ERP or SCM system that is truly open source, please e-mail me. by Frank Scavo, 10/04/2002 09:37:00 AM | permalink | e-mail this! Links to this post
First look at Microsoft's Axapta ERP systemYesterday, we got a quick demo of the Axapta product, courtesy of mcaConnect, a new nationwide Axapta distributor. Microsoft obtained Axapta through its acquisition of Norwegian vendor Navision earlier this year. Axapta has not had very much market presence to this point in the US, with few people even having heard of it prior to 2001, when Navision obtained it through its acquisition of Damgaard in the Netherlands. However, Axapta has a fairly strong presence in Europe, where it is positioned toward mid-tier manufacturing firms. In the US, Microsoft Business Solutions is targeting Axapta at mid-tier manufacturing firms of $50-800M in annual sales. Based on what we saw, however, we think Microsoft would do best to aim at the lower end of that range, because the product is a mixed bag in terms of features and functions. It appears to have strong functionality in certain areas, such as a rule-based dimensional product configurator, a knowledge management module (including balanced scorecard), customer/supplier/employee survey instruments, an integrated project management module, and a Web self-service capability. On the other hand, the product is weak on multi-plant operations. For example, although it supports multiple warehouses for inventory control, it does not allow multiple facilities for planning and scheduling (i.e. multi-plant MPS and MRP). That last point alone could disqualify Axapta from many upper mid-market deals.
From a technology perspective, the Axapta product is built using its own development environment called Morphx, that generates a language called X++ (a blend of C++ and Java). I would speculate that, at some point, Microsoft may want to rewrite Axapta using its own development toolset (Visual Studio). But for the short term, Microsoft will probably be content just to run the X++ code through its .NET SDK so that it can operate within Microsoft's .NET framework. It is interesting to note that, because of their use of MS Visual Studio, other mid-market ERP vendors such as Frontstep (formerly Symix) and Made2Manage, are actually "more Microsoft than Microsoft" when compared to Axapta. And I would be surprised to see this change any time soon. It is not a trivial exercise to rewrite an entire ERP system in a new development tool set. Microsoft is more likely to devote new Axapta development efforts to address any gaps in functionality that will be required for it to compete in the US against more established players and to provide integration to Microsoft's new CRM solution. So, my advice to prospective buyers is this: if having Microsoft standing behind your ERP system is important to you, consider Axapta. But if a Microsoft-standard development environment is what you are looking for, look elsewhere for now. Which brings us to the bottom line: the strongest point in favor of Axapta, of course, is that Microsoft is behind it. When considering the financial viability of many of the other mid-market ERP vendors, many buyers will find this a strong plus. Another strong point in Axapta's favor is its distributor network. There are reportedly 50 resellers in the US with rights to sell the product, of which about 20 are actively building Axapta sales and service groups. Many of these are former distributors of competing ERP Tier II or III products that have fallen on hard times. These resellers are experienced in selling and servicing the mid-market, and they are hungry. We have only seen Axapta in one deal so far in southern California (which it won), but we expect to see it more often as these distributors gain traction. by Frank Scavo, 10/03/2002 03:27:00 PM | permalink | e-mail this! Links to this post
Large system implementations require organizational disciplineDrew Rob, writing for Datamation, has a good case study of a $22 million implementation of Peoplesoft ERP and procurement applications at the City of Los Angeles. Although there were significant issues and complaints about the new system, a post-implementation audit found that nearly all of the problems were with users and entire departments not adhering to City policies. The project team implemented the new system "by the book," but it appears that large numbers of users hadn't been reading the book to begin with. As Robb points out, "Those complaining … were actually moaning about their own processes. PeopleSoft [was implemented] in accordance with mandated city policies and processes. Yet many within the city were not in compliance."
I often complain about the poor state of software quality. However, this case study illustrates that large system implementations often fail not because of software deficiencies but because of lack of organizational discipline. Nevertheless, this story does have a happy ending. After additional post-implementation work to improve organizational disciplines, the City of LA was able to cut check processing staff in half, cut warehouse staff by 40 heads, reduce inventory from $50M to $15M, and give each City vendor a single point of contact. Furthermore, LA saved $5M a year in contract consolidation and significantly improved the number vendor discounts taken for timely payment.
Read about the case study in Datamation. by Frank Scavo, 10/01/2002 03:51:00 PM | permalink | e-mail this! Links to this post
(c) 2002-2014, Frank Scavo. Independent analysis of issues and trends in enterprise applications software and the strengths, weaknesses, advantages, and disadvantages of the vendors that provide them.
Fatty Acid Synthesis
Origin of Acetyl-CoA for Fat Synthesis
Regulation of Fatty Acid Synthesis
ChREBP: Master Lipid Regulator in Liver
Elongation and Desaturation of Fatty Acids
Triacylglyceride Synthesis
Lipin Genes: TAG Synthesis and Transcriptional Regulation
Phospholipid Structures
Phospholipid Metabolism
Plasmalogen Synthesis
Omega-3, and -6 Polyunsaturated Fatty Acids (PUFAs)
Eicosanoid Metabolism
Sphingolipid Metabolism
Cholesterol Metabolism
Return to The Medical Biochemistry Page
© 1996–2013 themedicalbiochemistrypage.org, LLC | info @ themedicalbiochemistrypage.org
One might predict that the pathway for the synthesis of fatty acids would be the reversal of the oxidation pathway. However, this would not allow distinct regulation of the two pathways to occur even given the fact that the pathways are separated within different cellular compartments.
The pathway for fatty acid synthesis occurs in the cytoplasm, whereas, oxidation occurs in the mitochondria. The other major difference is the use of nucleotide co-factors. Oxidation of fats involves the reduction of FADH+ and NAD+. Synthesis of fats involves the oxidation of NADPH. However, the essential chemistry of the two processes are reversals of each other. Both oxidation and synthesis of fats utilize an activated two carbon intermediate, acetyl-CoA. However, the acetyl-CoA in fat synthesis exists temporarily bound to the enzyme complex as malonyl-CoA.
The synthesis of malonyl-CoA is the first committed step of fatty acid synthesis and the enzyme that catalyzes this reaction, acetyl-CoA carboxylase (ACC), is the major site of regulation of fatty acid synthesis. Like other enzymes that transfer CO2 to substrates, ACC requires a biotin co-factor.
The rate of fatty acid synthesis is controlled by the equilibrium between monomeric ACC and polymeric ACC. The activity of ACC requires polymerization. This conformational change is enhanced by citrate and inhibited by long-chain fatty acids. ACC is also controlled through hormone mediated phosphorylation (see below).
The acetyl groups that are the products of fatty acid oxidation are linked to CoASH. As you should recall, CoA contains a phosphopantetheine group coupled to AMP. The carrier of acetyl groups (and elongating acyl groups) during fatty acid synthesis is also a phosphopantetheine prosthetic group, however, it is attached a serine hydroxyl in the synthetic enzyme complex. The carrier portion of the synthetic complex is called acyl carrier protein, ACP. This is somewhat of a misnomer in eukaryotic fatty acid synthesis since the ACP portion of the synthetic complex is simply one of many domains of a single polypeptide. The acetyl-CoA and malonyl-CoA are transferred to ACP by the action of acetyl-CoA transacylase and malonyl-CoA transacylase, respectively. The attachment of these carbon atoms to ACP allows them to enter the fatty acid synthesis cycle.
The synthesis of fatty acids from acetyl-CoA and malonyl-CoA is carried out by fatty acid synthase, FAS. The active enzyme is a dimer of identical subunits.
All of the reactions of fatty acid synthesis are carried out by the multiple enzymatic activities of FAS. Like fat oxidation, fat synthesis involves 4 enzymatic activities. These are, β-keto-ACP synthase, β-keto-ACP reductase, 3-OH acyl-ACP dehydratase and enoyl-CoA reductase. The two reduction reactions require NADPH oxidation to NADP+.
The primary fatty acid synthesized by FAS is palmitate. Palmitate is then released from the enzyme and can then undergo separate elongation and/or unsaturation to yield other fatty acid molecules.
Reactions of fatty acid synthesis catalyzed by fatty acid synthase, FAS. Only half of the normal head-to-tail (head-to-foot) dimer of functional FAS is shown. Synthesis of malonyl-CoA from CO2 and acetyl-CoA is carried out by ACC as described. FAS is initially activated by the incorporation of the acetyl group from acetyl-CoA. The acetyl group is initially attached to the sulfhydryl of the 4'-phosphopantothenate of the acyl carrier protein portion of FAS (ACP-SH). This is catalyzed by malonyl/acetyl-CoA ACP transacetylase (1 and 2; also called malonyl/acetyltransferase, MAT). This activating acetyl group represents the omega (ω) end of the newly synthesized fatty acid. Following transfer of the activating acetyl group to a cysteine sulhydryl in the β-keto-ACP synthase portion of FAS, the three carbons from a malonyl-CoA are attached to ACP-SH (3) also catalyzed by malonyl/acetyl-CoA ACP transacetylase. The acetyl group attacks the methylene group of the malonyl attached to ACP-SH catalyzed β-keto-ACP synthase (4) which also liberates the CO2 that was added to acetyl-CoA by ACC. The resulting 3-ketoacyl group then undergoes a series of three reactions catalyzed by the β-keto-ACP reductase (5), 3-OH acyl-ACP dehydratase (6), and enoyl-CoA reductase (7) activities of FAS that reduce, dehydrate, and reduce the substrate. This results in a saturated four carbon (butyryl) group attached to the ACP-SH. This butyryl group is then transferred to the CYS-SH (8) as for the case of the activating acetyl group. At this point another malonyl group is attached to the ACP-SH (3b) and the process begins again. Reactions 4 through 8 are repeated another six times, each beginning with a new malonyl group being added. At the completion of synthesis the saturated 16 carbon fatty acid, palmitic acid, is released via the action of the thioesterase activity of FAS (palmitoyl ACP thioesterase) located in the C-terminal end of the enzyme. Not shown are the released CoASH groups.
Origin of Cytoplasmic Acetyl-CoA
Acetyl-CoA is generated in the mitochondria primarily from two sources, the pyruvate dehydrogenase (PDH) reaction and fatty acid oxidation. In order for these acetyl units to be utilized for fatty acid synthesis they must be present in the cytoplasm. The shift from fatty acid oxidation and glycolytic oxidation occurs when the need for energy diminishes. This results in reduced oxidation of acetyl-CoA in the TCA cycle and the oxidative phosphorylation pathway. Under these conditions the mitochondrial acetyl units can be stored as fat for future energy demands.
Acetyl-CoA enters the cytoplasm in the form of citrate via the tricarboxylate transport system (see Figure). In the cytoplasm, citrate is converted to oxaloacetate and acetyl-CoA by the ATP driven ATP-citrate lyase (ACLY) reaction. This reaction is essentially the reverse of that catalyzed by the TCA enzyme citrate synthase except it requires the energy of ATP hydrolysis to drive it forward. The resultant oxaloacetate is converted to malate by malate dehydrogenase (MDH).
Pathway for the movement of acetyl-CoA units from within the mitochondrion to the cytoplasm for use in lipid and cholesterol biosynthesis. Note that the cytoplasmic malic enzyme catalyzed reaction generates NADPH which can be used for reductive biosynthetic reactions such as those of fatty acid and cholesterol synthesis. SLC25A1 is the citrate transporter (also called the dicarboxylic acid transporter). SLC16A1 is the pyruvate transporter (also called the monocarboxylic acid transporter).
The malate produced by this pathway can undergo oxidative decarboxylation by malic enzyme. The co-enzyme for this reaction is NADP+ generating NADPH. The advantage of this series of reactions for converting mitochondrial acetyl-CoA into cytoplasmic acetyl-CoA is that the NADPH produced by the malic enzyme reaction can be a major source of reducing co-factor for the fatty acid synthase activities.
Regulation of Fatty Acid Metabolism
One must consider the global organismal energy requirements in order to effectively understand how the synthesis and degradation of fats (and also carbohydrates) needs to be exquisitely regulated. The blood is the carrier of triacylglycerols in the form of VLDLs and chylomicrons, fatty acids bound to albumin, amino acids, lactate, ketone bodies and glucose. The pancreas is the primary organ involved in sensing the organisms dietary and energetic states via glucose concentrations in the blood. In response to low blood glucose, glucagon is secreted, whereas, in response to elevated blood glucose insulin is secreted. The regulation of fat metabolism occurs via two distinct mechanisms. One is short term regulation which is regulation effected by events such as substrate availability, allosteric effectors and/or enzyme modification. ACC is the rate-limiting (committed) step in fatty
acid synthesis. There are two major isoforms of ACC in mammalian tissues. These are identified as ACC1 (also called ACCα) and ACC2 (also called ACCβ). The ACC1 gene (symbol = ACACA) is located on chromosome 17q12 and encodeds a 2,346 amino acid proteins. The ACACA gene spans approximately 330 kb and is composed of 64 exons which includes 7 alternatively spliced minor exons. Transcriptional regulation of ACACA is effected by 3 promoters (PI, PII, and PIII), which are located upstream of exons 1, 2, and 5A, respectively. The PI promoter is a constitutive promoter, the PII promoter is regulated by various hormones, and the PIII promoter is expressed in a tissue-specific manner. The presence of the alternatively spliced exons does not alter the translation of the ACC1 protein which starts from an ATG present in exon 5. The ACC2 gene (symbol = ACACB) is located on chromosome 12q24.11 and ecodes a protein of 2,458 amino acids.
ACC1 is strictly cytosolic and is enriched in liver, adipose tissue and lactating mammary tissue. ACC2 was originally discovered in rat heart but is also expressed in liver and skeletal muscle. ACC2 has an N-terminal extension that contains a mitochondrial targeting motif and is found associated with carnitine palmitoyltransferase I (CPT I) allowing for rapid regulation of CPT I by the malonyl-CoA produced by ACC. Both isoforms of ACC are allosterically activated by citrate and inhibited by palmitoyl-CoA and other short- and long-chain fatty acyl-CoAs. Citrate triggers the polymerization of ACC1 which leads to significant increases in its activity. Although ACC2 does not undergo significant polymerization (presumably due to its mitochondrial association) it is allosterically activated by citrate. Glutamate and other dicarboxylic acids can also allosterically activate both ACC isoforms.
ACC activity can also be
affected by phosphorylation. Both ACC1 and ACC2 contain at least eight sites that undergo phosphorylation. The sites of phosphorylation in ACC2 have not been as extensively studied as those in ACC1. Phosphorylation of ACC1 at three serine residues (S79, S1200, and S1215) by AMPK leads to inhibition of the enzyme. Glucagon-stimulated increases in cAMP and subsequently to increased PKA
activity also lead to phosphorylation of ACC where ACC2 is a better substrate for PKA than is ACC1. The activating effects of insulin on ACC are complex and not completely resolved. It is known that insulin leads to the dephosphorylation of the serines in ACC1 that are AMPK targets in the heart enzyme. This insulin-mediated effect has not been observed in hepatocytes or adipose tissues cells. At least a portion of the activating effects of insulin are related to changes in cAMP levels. Early evidence has shown that phosphorylation and activation of ACC occurs via the action of an insulin-activated kinase. However, contradicting evidence indicates that although there is insulin-mediated phosphorylation of ACC this does not result in activation of the enzyme. Activation of α-adrenergic receptors in liver and skeletal muscle cells inhibits ACC activity as a result of phosphorylation by an as yet undetermined kinase.
Control of a given pathways' regulatory enzymes can also occur by alteration of enzyme synthesis and turn-over rates. These changes are long term regulatory effects. Insulin stimulates ACC and FAS synthesis, whereas, starvation leads to decreased synthesis of these enzymes. Adipose tissue lipoprotein lipase levels also are increased by ins | 计算机 |
2014-23/2663/en_head.json.gz/14077 | You're Awesome!
TIFF IMAGES: AN INFORMATIONAL GUIDE
Resource Center » TIFF Images: An Informational Guide
What is a TIFF Image?
The TIFF (or TIF) image file format is short for "tagged information file format" and is used as a storage format for images, photographs, and line art drawings. The TIFF file format was originally created to offer a standardized file for computer scanners in the 1980s; this was done to avoid compatibility problems with various companies' proprietary formats.
When to Use the TIFF Format
TIFF images are best reserved for print applications because they are bitmap (or pixel-based) images. Even though the TIFF format creates very large files, there is absolutely no loss of quality; this is why they are so popular in print productions and for imprint on promotional products like personalized pens. TIFF preserves the alpha transparency, the layers, and the similar aspects of image files saved from image editing software, such as Photoshop and Fireworks. These extra types of features are stored differently in each type of image editing software.
History of TIFF Images
The TIFF format was originally developed by the Aldus Corporation, and they are credited with starting the entire desktop publishing industry. The company developed PageMaker, the first real desktop publishing program, which was soon surpassed by Adobe Systems. When Adobe and Aldus merged, the TIFF became the property of Adobe Systems. The TIFF file format hasn't had a major update since 1992, but extensions and specifications have been sporadically updated.
Downloads for TIFF Specifications
TIFF has given rise to many formats and many different specifications for its properties and uses. The TIFF format has several versions available for download that can be used for any application. Currently, the most recent TIFF version is the TIFF 6.0 specification, but versions such as GeoTIFF, TIFF 4.0, and TIFF 5.0 are still available for use and download. Tag extensions and geographical data for TIFF formats are also available in these specifications.
Programs That Use and Read Multi-Page Tiffs
A multi-page TIFF file is an image file that stores multiple pages in a single file, a format commonly used for scanned faxes and email attachments. Certain programs are needed to read and convert multi-page TIFFs into usable formats for other users and programs. Many of these programs also allow for the creation and editing of multi-page TIFF files, as well as of other images that need to be grouped together.
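For readers comfortable with a bit of scripting, the short example below is one way to pull the individual pages out of a multi-page TIFF. It is only a sketch: it assumes the free Python Pillow library is installed, and the file name used is just a placeholder.

```python
# Sketch: split a multi-page TIFF (for example, a scanned fax) into
# one PNG file per page using the Pillow imaging library.
# "scanned_fax.tiff" is a placeholder file name.
from PIL import Image, ImageSequence

with Image.open("scanned_fax.tiff") as tiff:
    for number, page in enumerate(ImageSequence.Iterator(tiff), start=1):
        # Each iteration yields one page of the multi-page TIFF.
        page.save(f"page_{number:03}.png")
        print(f"Saved page {number}")
```

Dedicated viewers and converters do the same job without any code, which is usually the more convenient route for day-to-day use.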
Places to Find TIFF Software
A logical starting point for TIFF software is the Adobe site, but many other companies offer TIFF editors and viewers for free (or for a minimal fee) depending upon the intended use. These programs exist for Linux, Windows, and Apple systems and may be found at each of the individual web sites.
What TIFF Extensions are Available
TIFF 4.0, TIFF 5.0, and TIFF 6.0 are revisions of the base specification, while GeoTIFF is one of the extensions currently available. The intention to keep TIFF a tightly standardized format has largely been abandoned, because there are now approximately fifty or more variations on the original format. Different encodings and extensions can also be applied to a TIFF file, including Zip-in-TIFF, TIFF/IT, and Adobe's extensions for PageMaker.
Even though TIFF images are popular for print production and design, many designers and promotional products companies prefer vector files. These have extensions like .ai, .eps, .cdr, and .fh. Vector artwork makes sure that the logo on your promotional items can be scaled to any size without losing sharpness.
Kohan II: Kings of War
Developer: TimeGate Studios
Publisher: Global Star Software
Genre: Real-Time Strategy
Similar To: Kohan: Immortal Sovereign
Rating: Teen
Published: 12/03/04
Reviewed By: Ryan Newman
Overall: 7.5 = Good
Minimum Req.: P4 1.5GHz, 256MB RAM, 64MB video card, DirectX 9 comp. sound card
Reviewed On: P4 2.5GHz, 512MB RAM, ATI Radeon 9800 Pro
TimeGate Studios came out strong in 2001 with Kohan: Immortal Sovereign. A slower-paced real-time strategy game, it focused on economic development, city planning, and squad-based combat. The sequel, Kohan II: Kings of War, makes the leap into the third dimension while expanding upon the economic model and introducing new characters into the universe. By maintaining the strong core design of the original, the sequel manages to remain enjoyable, despite feeling like a glorified expansion pack. After the Ceyah was defeated, Kohan Darius Javidan, immortal and the game's hero unit, took his human allies and settled down into a time of peace. The peace is now coming to an end, with the factionalized humans fighting amongst themselves, as well as with the remaining Ceyah and a new threat that is slowly making its presence felt in the land of Khaldun. As Javidan remains locked away in his kingdom, the other Kohans and their followers attempt to band their shattered peoples together to fight off the new foe. Joining the humans are the Drauga, Gauri, Haroun, Shadow, and the Undead, all of which are playable throughout the main campaign. Each race has its own benefits: the Elven-like Haroun's structures repair themselves, while the Dwarven-like Gauri are more productive at gathering resources. Each also has its own detriments: the Shadow's companies cannot be as large as the other races', while the brawling Drauga have low defense, with the humans as the only race suffering no penalties. To spice things up, there are also factions - Ceyah, Council, Fallen, Nationalist, and Royalist - which, like actual conflicts, contain members of all races, with in-fighting going on between those following different causes and with each faction providing its own bonuses and special units. A shame there couldn't have been a way to fit in politicking, as the situation is ripe for some.
Immediately noticeable to fans of the original is the new 3D engine. Despite looking fairly basic close up, the benefit - and the only one I can think of - is that it is now easier to distinguish the advantages of the environment. The environment is crucial to how units move on the map and, along with icons displaying the exact ramifications of moving troops to a certain location, having objects in 3D makes it easier and quicker to tell whether they should remain or be moved. Pretty much everything affects units on the map, from trees providing cover from arrows, to cavalry getting a bonus to attack on open land, and while it may not always seem necessary to know the details, it also makes life much easier when setting up defenses and attacking from an advantageous position. Aside from the graphical face lift, the economic model has also been beefed up. What was so interesting in the original was that each city would grow in size, with each size having allotted spaces for buildings to be constructed in, and each of these buildings would offer additional benefits that needed to be balanced against the needs of the kingdom. The upgrades usually involved increased production, more gold, or possibly a military benefit or a middle-of-the-road option (keep some wood and get some gold). So, if a lumber mill was built, it could be upgraded to provide additional lumber, because all resources in the negative would subtract from the income, be sold for additional gold, or a mixture of both. Now, that system has been added upon by an additional branch of upgrades, but these tend solely for the military. The same lumber mill from part one can now be upgraded to increase production, sell more for additional income, or do a little of both, but it can also then be used to upgrade how far archers can shoot, the strength of their shots, and so on. The resources are broken down into iron, copper, ore, wood, and mana crystals, which are automatically collected once the resource-specific building is built. There are also additional deposits on the map that can be harvested once engineers build mines upon them. The economic system is still integral and is one of the more enjoyable parts of the game, but it never feels as important as it did in the original. The economics come across as less important because of the smaller map sizes. Kohan was really a war of posts: you built your kingdoms in certain spots to choke off the enemy, to be near a resource, as well as to build outposts and captured those belonging to other kingdoms in order to gain similar advantages. In Kohan II, the player cannot build a settlement wherever they want; they now have to build them only on select spots. They can, however, still build outposts. This seems to have affected the size of the maps because they are much smaller than those found in the original. Now outposts, which are important because militia come out to attack passing enemies and it (along with kingdoms) have a re-supply radius that slowly heals armies within its borders, take the predominant role in choking off spots. Since units cannot be raised in outposts, the settlement spots are much nearer to each other now so that units don't have lengthy distances to walk. There are still some large maps, mind you, but I found that, overall, they are in the minority. Because of the smaller size, units are engaged much quicker than before, which makes the game faster. 
The levels are also very linear, so time spent in one mission is roughly half of what I would spend in the same in the original, sometimes even less than that. This doesn't leave much time for the economy to be fleshed out and, frankly, it doesn't seem all that necessary, as it did before. In Kohan, it was crucial to have enough supplies to keep your resource-draining armies on the field, but now, I rarely needed such a balancing act. One thing that hasn't changed - thankfully - is combat. Raised in cities, companies consist of a captain and four units, as well as the optional two flanking units and two support units. The main and flanking units are the same, for the most part, consisting of archers, cavalry, and different types of infantry (swordsmen, pikemen, etc.), but some units, like catapults and juggernaut units, are available only as a main army selection. The supporting units are healers, stronger ranged units, and magicians. Like the original, the supporting units can make all the difference, and they seemed a bit stronger now, with magicians setting fire to, tossing lightning at, and poisoning enemy troops. Custom companies also make a return, with the player able to quickly train a group or make a savable company with the flank and supports they prefer, which is handy. Captains or heroes can lead the men, with the heroes being slightly stronger but not controllable, like in WarCraft III or Warlords: Battlecry. I did find that the units were less inclined to listen to me than they were in the original. In both titles, companies can be routed or told to escape. When told to escape, they immediately switch to the fastest formation and head for the hills, but a rout makes them uncontrollable. The formations here are also different than in other titles as they actually have a use; there are three formations, combat, skirmish, and column, which goes strongest but slowest, decreases combat ability but increases in speed and sight, and weakest but fastest, respectively, so an escaping squad that is attacked will likely be decimated because their attacking proficiency is greatly penalized at the expense of speed. But I had a problem with just being listened to in general. Just to disengage a unit so it could attack a target of opportunity, I would have to tell it to retreat, and then try again. This wouldn't be due to numerical superiority on part of the enemy or any reasonable explanation like that; I could have four squads fighting one weakened enemy squad and still fight to tell them to go and attack a nearby portal or settlement building. That was incredibly frustrating, and it required almost constant babysitting of the troops. When playing Kings of War, I felt like I was playing a slightly inferior expansion for the original, despite the new engine. The graphical update is nice, but it didn't enhance my experience any, and the audio, being of decent technical quality but suffering from so-so voiceovers and repetitious unit responses, didn't really pull me in. The levels were also much more straightforward than before, leaving little room for ingenuity; it was really just strengthening the town closest to the enemy, then launching assaults. Because of the linearity, though, the story is of greater focus, and it isn't bad. The new characters are good additions, as are the new units (which I found to be more different from a graphical than a gameplay standpoint), but it never felt like it was enough. 
One improvement was the menu and navigational system, with clear icons and graphical representations of things like company movement speed, benefits and negatives of resource usage, effects of veteran status on companies, and so on, being easy to distinguish and just more pleasant to deal with. The online portion is hurting due to a lack of players, but even then it's hard to really get into the nuts and bolts of the game because of rushing. I did find the players more helpful than normal, which is a plus for those new to the series and was common with the original, as it garnered a strong following, so there is a natural community aspect that can be looked forward to.

Overall: 7.5/10

The original Kohan kept me up late many a night. Sending waves of companies against unbending hordes of undead, balancing my fledgling economy, and expanding my kingdom were more addictive to me then than just about any other strategy title. Kohan II: Kings of War quickens the pace, making it more engaging more quickly, but the sacrifice was that I just didn't find myself as enamored with it as I was with its predecessor. It's still a solid game, but I found the original to be better. If you're new to the series, I would suggest starting here as it is much more inviting. Despite my preference, those looking for a solid strategy game still cannot go wrong here.
Related Links: Official Site
What's New at the Nexus? Vol. 2
Written by The GN Staff on 5/2/2003
Tyler Sager
The most influential game for me, as I'm sure it was for a lot of people, was a charming little game called "Pong". I remember clearly seeing this at a friend's house, in all its black-and-white pixelated glory. After one play I knew this was something I was very interested in having for my very own. By that time, we didn't have to buy the hard-wired Pong system, but instead we were able to pick up the modern marvel known as the Atari 2600, which could play not only Pong, but hundreds of other titles as well. Countless hours were spent in those very first virtual tennis matches, be it alone against the maniac computer, or against family and friends. Pong was my first exposure to the world of electronic gaming, and I've never been the same since.

Ben Zackheim
Pattycake. Is this a game you might ask? Sure is. I played it all the time in high school. Requires hand/eye coordination, multitasking, memorization, motor skills, intense concentration, creativity, explosive speed. It's multiplayer (granted, cooperation only) and nothing beats a good grudge match between two sets of dueling pattycakers. Especially armed ones. However, a close second would have to be Pong since it was the first digital representation of the oldest and, up to that point, second best game - Hit The Ball With A Stick. With Pong, old met new and nothing has been the same since.

* The product in this article was sent to us by the developer/company for review.
Your privacy is important to the Digital Analytics Inc. (“DAA”) and its online community. Our goal is to provide you with a personalized online experience that provides you with the information, resources, and services that are most relevant and helpful to you. This Privacy Policy has been written to describe the conditions under which the DAA website (the “site”) and the services available on the site are being made available to you by DAA. The Privacy Policy discusses, among other things, how data obtained during your visit to this Site may be collected and used. The Privacy Policy also discusses important limitations about the way you may use materials and services you find on the site. Read the Privacy Policy carefully. By using this site, you will be deemed to have accepted the terms of this Policy. If you do not agree to accept the terms of the Privacy Policy, you are directed to discontinue accessing or otherwise using the site or any materials obtained from it.
We protect your personal information using industry-standard safeguards. We may share your information with your consent or as required by law as detailed in this policy, and we will let you know when we make significant changes to this Privacy Policy by posting changes to the site.
IMPORTANT INFORMATION REGARDING YOUR ACCOUNT
Sites Covered by this Privacy Policy
This Privacy Policy applies to the DAA website (the “site” or the “sites”) located at [INSERT URL(S)].
The process of maintaining a website is an evolving one, and DAA may decide at some point in the future, without advance notice, to modify the Privacy Policy by posting a new Policy on the site. Please review the changes carefully. If you agree to the terms, simply continue to use the site. If you object to any of the changes to the Privacy Policy, please do not continue to access the site, as your continued use of the site after we’ve posted a notice of changes to the Privacy Policy shall constitute your consent to the changed terms or practices. Children’s Privacy
DAA is committed to protecting the privacy needs of children, and we encourage parents and guardians to take an active role in their children’s online activities and interests. DAA does not intentionally collect information from children under the age of 13, and DAA does not target its sites to children. Only persons who are more than 18 years old or an emancipated minor may use the sites. By accessing the sites, you are legally acknowledging that you are over the age of 18 or an emancipated minor. If you are under the age of 18, you don’t have the legal right to access the sites.
California Shine the Light Law
California residents who provide personal information in obtaining products or services for personal, family or household use are entitled to request and obtain from us, once per calendar year, information about the customer information we shared, if any, with other businesses for their own direct marketing uses. If applicable, this information would include the categories of customer information and the names and addresses of those businesses with which we shared customer information for the immediately prior calendar year. To obtain this information from us, please contact us, and choose "Request for California Privacy Information" for the subject of your message, and we will send you a reply e-mail containing the requested information. Not all information sharing is covered by the "Shine the Light" requirements and only information on covered sharing will be included in our response.
We have implemented industry-standard security safeguards designed to protect the personal information that you may provide. We also periodically monitor our system for possible vulnerabilities and attacks, consistent with industry standards. You should be aware, however, that since the Internet is not a 100% secure environment, we cannot ensure or warrant the security of any information that you submit to the site. There’s also no guarantee that information may not be accessed, disclosed, altered, or destroyed by breach of any of our physical, technical, or managerial safeguards. It’s your responsibility to protect the security and integrity of your account details, including your username and password. Please note that emails, instant messaging, and similar means of communication with other site users are not protected or encrypted, so you should not communicate any confidential information through these means.
THE TYPES OF INFORMATION WE COLLECT
Registration
In order to use the DAA website, you may need to create an account by providing us with at least your name, email address, and a password. You can choose to provide other information about yourself during the registration process (for example, your gender, location, company affiliation, etc.). We use this additional information to provide you with more customized services, and this information may be viewable by others. You understand that, by creating an account, DAA and others will be able to identify you by your profile, and you agree to allow DAA to use this information in accordance with this Privacy Policy and our Terms of Use [INSERT LINK TO TERMS OF USE]. You must follow this link to our Terms of Use in order to understand the terms of your relationship with DAA. On some pages of the site, you may be able to request information, subscribe to mailing lists, participate in online discussions, collaborate on documents, provide feedback, submit information into registries, register for events, apply for membership, or join technical committees or working groups. The types of personal information you provide to us on these pages may include name, address, phone number, e-mail address, user IDs, passwords, billing information, or credit card information.
Account Profile Information
Once you've created an account, you may choose to provide additional information on your user profile, such as descriptions of your job title, professional experience, your educational background, professional affiliations and memberships, and technical skills. This information that you voluntarily provide may be seen by other users.
Non-Personal Information
Non-personal information is data about usage and service operation that is not directly associated with a specific personal identity. DAA may collect and analyze non-personal information to evaluate how visitors use the site.
Aggregate Information
DAA may gather aggregate information, which refers to information your computer automatically provides to us and that cannot be tied back to you as a specific individual. Examples include referral data (the sites you visited just before and just after our site), the pages viewed, time spent at our site, and Internet Protocol (IP) addresses. An IP address is a number that is automatically assigned to your computer whenever you access the Internet. For example, when you request a page from one of our sites, our servers log your IP address to create aggregate reports on user demographics and traffic patterns and for purposes of system administration.
Log Files and IP Addresses
We may collect information from the devices and networks that you use to visit the site in order to help improve the services we provide. Every time you request or download a file from the site, DAA may store data about these events and your IP address in a log file. We may use this information to analyze trends, administer the site, track users’ movements, and gather broad demographic information for aggregate use or for other business purposes. When you access or leave the site by clicking on a hyperlink, we receive the URL from the site from which you last visited or the one to which you’re directed. We may receive the Internet Protocol (“IP”) address of your computer or proxy server used to access the site, your operating system, the type of browser you used, and the type of device and/or operating system you use, your mobile device carrier or your ISP. We also may receive location data passed to us from third-party services or GPS-enabled devices that you have set up in order to customize your experience based on location information.
We use cookies and similar technologies, including mobile device identifiers, to help us recognize you when you log into our site. By accessing the site, you are consenting to the placement of cookies and other similar technologies in your browser in accordance with this Privacy Policy and our Terms of Use. Cookies are small packets of information that a site’s computer stores on your computer. DAA can then read the cookies whenever you visit our site. We may use cookies in a number of ways, such as to save your password so you don’t have to re-enter it each time you visit our site, to deliver content specific to your interests and to track the pages you’ve visited. These cookies allow us to use the information we collect to customize your experience so that your visit to our site is as relevant and as valuable to you as possible. You may modify and control how and when cookies are set through your browser settings. Most browsers offer instructions on how to reset the browser to control or reject cookies in the “Help” section of the toolbar. We do not link non-personal information from cookies to personally identifiable information without your permission.
FIRST PARTY COOKIES
These are cookies that are set by this website directly. Google Analytics: We use Google Analytics to collect information about visitor behavior on this website. Google Analytics stores information about what pages you visit, how long you are on the site, how you got here and what you click on. This Analytics data is collected via a JavaScript tag in the pages of our site and is not tied to personally identifiable information. We therefore do not collect or store your personal information (e.g. your name or address) so this information cannot be used to identify who you are.
You can find out more about Google’s position on privacy regarding its Analytics service at: http://www.google.com/analytics/terms/gb.html
THIRD PARTY COOKIES AND PRIVACY POLICIES
These cookies are set on your machine by external websites whose services are used on this site. Cookies of this type are set by the sharing buttons across the site that allow visitors to share content on social networks such as LinkedIn, Twitter, Facebook and Google+. This site also uses the AddThis service.
In order to implement these buttons, and connect them to the relevant social networks and external sites, there are scripts from domains outside our website. You should be aware that these sites are likely to be collecting information about what you are doing all around the internet, including on this website.
The DAA website is hosted by Timberlake.com. You can view their privacy policy.
We use a session cookie to remember your log-in for you and to remember what you’ve put in the shopping basket. We consider these necessary to provide that service for you. If these are disabled then it will disable other functionality on the site.
PIXEL TAGS
Like many websites, we may also use pixel tags, also known as beacons, spotlight tags or web bugs, to improve our understanding of site traffic, visitor behavior, and response to promotional campaigns, as a supplement to our server logs and other methods of traffic and response measurement. Pixel tags are sometimes used in conjunction with small Javascript-based applications, also for the purpose of traffic measurement. We may also implement pixel tags provided by other companies, for the same purpose. You can disable pixel tags by changing your browser settings to omit images and disable Javascript; or there are commercial software packages available that can omit pixel tags and most advertisements.
CLICKSTREAM
No clickstream data collection is being used for this website at the current time. However, we have many vendors who are the industry leaders in this area as members in the DAA.
New Technologies
As new technologies emerge, DAA may be able to improve our services or provide you with new ones, which means that DAA may create new ways to collect information on the site. If we offer a new service or new features to our existing site, for example, these changes may result in our collecting new information in order to improve your user experience.
Personal Information Personal information is information that is associated with your name or personal identity. DAA uses personal information to better understand your needs and interests and to provide you with better service. On some of our web pages, you may be able to request information, subscribe to mailing lists, participate in online discussions, collaborate on documents, provide feedback, submit information into registries, register for events, apply for membership, or join technical committees or working groups. The types of personal information you provide to us on these pages may include name, address, phone number, e-mail address, user IDs, passwords, billing information, or credit card information.
Members-Only Web Site DAA may provide a members-only section of our web site. Information you provide on DAA’S membership application may be used to create a member profile, and some information may be shared with other of our individual member representatives and organizations. Member contact information may be provided to other members on a secure web site to encourage and facilitate collaboration, research, and the free exchange of information among our members, but we expressly prohibit members from using member contact information to send unsolicited commercial correspondence. DAA’S members automatically are added to our member mailing lists. From time to time, member information may be shared with event organizers and/or other organizations that provide additional benefits to our members. By providing us with your personal information on the membership application, you expressly consent to our storing, processing, and distributing your information for these purposes.
Company Information
Company information is information that is associated with the name and address of member organizations and may include data about usage and service operation. The primary representative of any of our member organizations may request usage reports to gauge the extent of their employees' involvement in consortium activities. You should be aware that information regarding your participation in technical committees or working groups, for example, may be made available to your company's primary representative and to DAA's staff members.
Administrators and Moderators
If you contact a DAA Administrator or Moderator, we collect information that helps us categorize your question or report, respond to it, and, if applicable, investigate any breach of our Terms of Use or this Privacy Policy. We also may use this information to track potential problems and trends in order to improve our services to you and to the community as a whole.
Group Participation
We may collect information when you use the site, such as when you join and participate in any group, participate in any polls or surveys, or otherwise interact with other users within the community.
Links to Third-Party Sites and Services The site may provide links to third-party sites for the convenience of our users. If you access those links, you will leave our site. DAA does not control these third-party sites and cannot represent that their policies and practices will be consistent with this Privacy Policy. For example, other sites may collect or use personal information about you in a manner different from that described in this document. You should be aware that materials available through third-party sites may be protected from unauthorized copying and dissemination by U.S. copyright law, trademark law, international conventions, and other intellectual property laws, and the usage of such materials may be subject to limitations that are more or less restrictive than those expressed herein. Therefore, you should use other sites with caution, and you do so at your own risk. We encourage you to review the privacy policy of any site before submitting personal information.
We may receive information when you use your account to log into a third-party site or application in order to recommend tailored content to you and to improve your user experience on our sites. We may provide reports containing aggregated impression information to third parties to measure Internet traffic and usage patterns.
When you join the DAA, you acknowledge that information you provide on your membership profile may be seen by others and used by DAA as described in this Privacy Policy and our Terms of Use. If you are a registered member of DAA, you should be aware that some items of our personal information may be visible to other members and to the public. DAA’s member database may retain information about your name, e-mail address, company affiliation (if an organizational member), and such other personal address and identifying data as you choose to supply. That data may be generally visible to other of our members and to the public. Your name, e-mail address, and other information you may supply also may be associated in DAA’s publicly accessible records with our various committees, working groups, and similar activities that you join, in various places, including: (i) the permanently-posted attendance and membership records of those activities; (ii) documents generated by the activity, which may be permanently archived; and, (iii) along with message content, in the permanent archives of DAA’s e-mail lists, which also may be public. Consent to Use by DAA
DAA may use personal information to provide services that support the activities of the organization, DAA members, and their collaboration on DAA activities. When accessing the site, your personal user information may be tracked by DAA in order to support collaboration, ensure authorized access, and enable communication between members.
The personal information you may provide to DAA may reveal or allow others to discern aspects of your life that are not expressly stated in your profile (for example, your picture or your name may reveal your gender). By providing personal information to us when you create or update your account and profile, you are expressly and voluntarily accepting the terms and conditions of our Terms of Use and freely accepting and agreeing to our processing of your personal information in ways set out by this Privacy Policy. Supplying information to us, including any information deemed “sensitive” by applicable law, is entirely voluntary on your part. You may withdraw your consent to DAA’s collection and processing of your information by changing closing your account.
Communications from DAA
We use the information you provide to customize your experience on the site. We may communicate with you using email or other means available to us regarding the availability of services, service-related issues, or promotional messages that we believe may be of interest to you. We may, for example, send you welcome messages, emails regarding new features or services, and promotional information from DAA, or our affiliates, members, and partners. You may opt out of receiving promotional messages from DAA by following the instructions contained in the email. As long as you’re a registered user, however, you can’t opt out of receiving service messages from us. DAA may also use personal information in order to customize content on the site to you, such as news relevant to you or to your industry or company. Communications from Others
Member contact information may be provided to other members on a secure site to encourage and facilitate collaboration, research, and the free exchange of information among our members. Please remember that any information (including personal information) that you disclose on our site, such as forums, message boards, and news groups, becomes public information that others may collect, circulate, and use. Because we cannot and do not control the acts of others, you should exercise caution when deciding to disclose information about yourself or others in public forums such as these.
Sharing Information with Members and Affiliates
DAA may share your personal information with our members and affiliates, as necessary to provide you with the services on the site. From time to time, member information may be shared with event organizers and/or other organizations that provide additional benefits to our members. By providing us with your personal information during the user registration process and by agreeing to the terms of this Privacy Policy, you expressly consent to our storing, processing, and distributing your information for these purposes.
Information you put on your profile and any content you post on the site will be seen by others. In keeping with our open process, DAA may maintain publicly accessible archives for our activities. For example, posting an email to any of DAA’s hosted mail lists or discussion forums, subscribing to one of our newsletters or registering for one of our public meetings, may result in your email address becoming part of the publicly accessible archives. Content contained on the site may result in display of some of your personal information outside of DAA. For example, when you post content to a forum that is open for public discussion, your content, including your name as the contributor and your email address, may be displayed in search engine results. In addition, your public profile may be indexed and displayed through public search engines when someone searches for your name.
You are responsible for any information you post on the site, and this content may be accessible to others. Accordingly, you should be aware that any information you choose to disclose on the site can be read, collected, and used by other users in the forum, and in the case of forums open to the public, by third parties. DAA is not responsible for the information you choose to submit on the site. DAA does not rent or sell or otherwise distribute personal information that you have shared with us, except as permitted in this Privacy Policy and our Terms of Use. We will not disclose personal information that is associated with your profile unless DAA has a good faith belief that disclosure is permitted by law or is reasonably necessary to: (1) comply with a legal requirement or process, including, but not limited to, civil and criminal subpoenas, court orders or other compulsory disclosures; (2) investigate and enforce this Privacy Policy or our Terms Use; (3) respond to claims of a violation of the rights of third parties; (4) respond to member service inquiries; (5) protect the rights, property, or safety of DAA, our users, or the public; or (6) as part of the sale of the assets of DAA or as a change in control of the organization or one of its affiliates or in preparation for any of these events. DAA reserves the right to supply any such information to any organization into which DAA may merge in the future or to which it may make any transfer in order to enable a third party to continue part or all of the organization’s mission. Any third party to which Foundation transfers or sells its assets will have the right to use the personal and other information that you provide in the manner set out in this Privacy Policy. Polls and Surveys
DAA may conduct polls and surveys of our users, and your participation in this type of research is at your sole discretion. DAA may follow up with you regarding your participation in this research. You may at any time opt out of participating in our polls and surveys.
Given the international scope of DAA’s activities, personal information may be visible to persons outside your country of residence, including to persons in countries that your own country’s privacy laws and regulations deem deficient in ensuring an adequate level of protection for such information. If you are unsure whether this Privacy Policy is in conflict with applicable local rules, you should not submit your information. If you are located within the European Union, you should note that your information will be transferred to the United States, which is deemed by the European Union to have inadequate data protection. Nevertheless, in accordance with local laws implementing the European Union Privacy Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data, individuals located in countries outside of the United States of America who submit personal information do thereby consent to the general use of such information as provided in this Privacy Policy and to its transfer to and/or storage in the United States of America. By utilizing the site and/or directly providing personal information to us, you hereby agree to and acknowledge your understanding of the terms of this Privacy Policy, and consent to have your personal data transferred to and processed in the United States and/or in other jurisdictions as determined by DAA, notwithstanding your country of origin, or country, state and/or province of residence. YOUR OPTIONS AND OBLIGATIONS
Rights to access, correct, or delete your information; closing your account.
You may access, modify, correct, or delete your personal information controlled by DAA regarding your profile or close your account. You can also contact us for any account information which is not on your profile or readily accessible to you. If you close your account, all of your content will remain visible on the site.
You should be aware that information that you’ve shared with others or that others have copied may also remain visible after you have closed your account or deleted the information from your own profile. In addition, you may not be able to access, correct, or eliminate any information about you that other users have copied or exported out of the Sites, because this information may not be in our organization’s control. Your public profile may be displayed in search engine results until the search engine refreshes its cache.
We will keep your information for as long as your account is active or as needed to comply with our legal obligations, even after you've closed your account, such as to meet regulatory requirements, resolve disputes between users, prevent fraud and abuse, or enforce this Privacy Policy and our Terms of Use. We may be required to retain personal information for a limited period of time if requested by law enforcement. We also may retain indefinitely non-personally identifiable, aggregate data to facilitate our ongoing operations.
Your Obligations
Be respectful and courteous. The DAA is a community group, and you have certain obligations both to the DAA and to your fellow users. In order to ensure the integrity of the DAA community effort, you must respect the terms of our Privacy Policy, our Terms of Use, any other applicable policies of the DAA, as well as the rights of other community users, including their intellectual property rights. You must not upload or otherwise disseminate any information that may infringe on the rights of others or that may be deemed to be defamatory, injurious, violent, offensive, racist or xenophobic, or that may otherwise violate the purpose and community spirit of the DAA or its members.
If you violate any of these guidelines or those detailed in our Terms of Use, the DAA may, at its sole discretion, suspend, restrict, or terminate, your account and your ability to access the site.
Opting Out
From time to time DAA may email you electronic newsletters, announcements, surveys or other information. If you prefer not to receive any or all of these communications, you may opt out by following the directions provided within the electronic newsletters and announcements.
Contacting Us
Questions about this Privacy Policy can be directed to [email protected].
Moserware
Jeff Moser's software development adventures.
SKU Driven Development
I like ice cream. Sure, I like enjoying a coffee cup filled with ice cream after working out at the gym (and sometimes even when I don't), but I also enjoy its perceived simplicity. All of my favorite flavors have some form of vanilla base with something added to it. For example, Fudge Tracks to me is vanilla + chocolate + peanut butter. Cookie Dough is, unsurprisingly, vanilla + cookie dough + chocolate. I said "perceived" simplicity because my father-in-law works in the ice cream business and I know there are lots of smart people in the lab working on formulas and many people that design the manufacturing processes to ensure the final result is "just right." There are lots of little "gotchas" too. For example, when adding cookie dough, you might have to reduce the butterfat content to keep the nutrition label from scaring people. Also, many people (e.g. me) think that everything starts with vanilla, but it's really a "white base" that may or may not have vanilla in it.

But despite all of the little details under the covers, ice cream still has a simplicity that our grandmas can understand. As I work more with software professionally, I'm becoming more convinced that if your architecture is so complicated that you couldn't realistically explain it at a high level to your grandma (while she's not taking a nap, that is), it's probably too complicated.

My favorite ice cream flavors all start out roughly the same, but then get altered by the addition of things. The end result is a unique product that has its own Universal Product Code (UPC) on its container. Can we make software development that simple? Can we start with some core essence like vanilla ice cream and create various versions of products, each with its own unique Stock Keeping Unit (SKU)? This idea isn't foreign to our industry. Microsoft has at least five different SKUs of Vista and Visual Studio has over 10. The test is, could we as developers of products with much smaller distribution create different SKUs of our software? More importantly, even if we only planned to sell one version of our software, would it still be worth putting an emphasis on thinking in terms of partitioning our product into several "SKUs"? I am starting to think that there might be merit in it. What will it take to think that way? I think it requires just a teeny bit of math.

The Calculus of SKU Driven Development

Even as a math major in college, I never really got excited about calculus or its related classes like differential equations. It tended to deal with things that were "continuous" whereas my more dominant computer science mind liked "discrete" things that I could count. With whole numbers, I could do public key cryptography or count the number of steps required in an algorithm. With "continuous" math, I could do things like calculate the exact concentration of salt in a leaky tank of water that was being filled with fresh water. In my mind, salt doesn't compare with keeping secrets between Alice and Bob.

Although I could answer most of the homework and test problems in calculus, I never really internalized or "connected with" it or its notation of things like "derivatives."

That is, until this week. While trying to find a solution to the nagging feeling that software could be simpler, I came across a fascinating paper that was hidden away with the title of "Feature oriented refactoring of legacy applications." It had a list of eight equations that had an initial scary calculus feel to them.
But after the fourth reading or so, they really came alive. The first equation essentially says that if you want to make something like Cookie Dough Ice Cream (here labeled "H") from your vanilla base product (labeled "B"), you'll need... cookie dough! See? Math is simple. The actual cookie dough is expressed by "h." The "db/dh" part is telling us "here's how you have to modify the vanilla ice cream base when you're making Cookie Dough to keep the nutrition reasonable." The letter "b" is simply the raw vanilla ice cream base. The "*" operator says "take the instructions for how to modify the base and actually do them on the base." Very intuitive if you think about it. The only trick is that uppercase letters represent SKUs (aka "Features") and lowercase letters represent the ingredients (or "modules") in that SKU. The paper was nice enough to include a picture to visualize this.

We'll skip to the last significant equation. It is also the most scary looking, but it's just as simple. The scary part is that if we want to add chocolate chips, "j", to our existing Cookie Dough Ice Cream, "H(B)", we will start to see a "2" superscript, called a "second order derivative." The "d2b/(dJdH)" just means that "if I have both chocolate chips and cookie dough, I'll need to lower the butterfat content of the vanilla base even more to make the nutrition label not scare people." Then, make the cookie dough healthier to allow for the chocolate chips (dh/dJ) and then finally add the chocolate chips (j). That is, say that if I just added chocolate chips to vanilla (db/dJ), I'd only have to lower the vanilla butterfat by 5%. Similarly, if I just added cookie dough, I'd have to lower the butterfat by 7%. If I have both chocolate chips and cookie dough, I have to lower the butterfat an additional 3% (d2b/(dJdH)) for a total butterfat lowering of 3 + 5 + 7 = 15%.

Why Software Development Can Be Frustrating

The above calculus shows, in a sort of sterile way, why developing software can frustrate both you as a developer and, as a result, your customers. The fundamental philosophy of SKU Driven Development is that you absolutely, positively, must keep your derivatives (first order and higher) as small as humanly possible, with zero being your goal. If you don't, you'll feel tension and sometimes even pain.

It starts off with the marketing guys telling you that customers really want SKU "J" of your product because they really need all that "j" will give them. Moreover, your top competitors have promised to have "j" and if you don't add it to your existing product, "H(B)", then customers have threatened to jump ship.

So management gets together and then eventually asks you their favorite question: "how long will it take to make SKU 'J' of our product? How much will it cost?" Enlightened by the calculus above, you look at equation 4 and then count every time you see "J." These items denote changes to the existing code. The final "j" represents the actual additional feature that marketing wants.
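To make that bookkeeping concrete, here is a toy sketch in Python of the idea the equations describe. To be clear, this is my own illustration and not code or notation from the paper; the starting butterfat number and the function names are all made up. The point is just that a feature contributes a new module plus a change to the base, and that some changes (the second-order terms) exist only because two features are present at the same time.

```python
# A toy model of the ice cream algebra above (my own invention, not the paper's).
# A "feature" adds a module (h or j) and changes the base (db/dh, db/dJ);
# the second-order term d2b/(dJdH) is a change needed only when both features coexist.

def add_cookie_dough(product):                    # feature H
    product["modules"].append("cookie dough")     # h: the new module itself
    product["butterfat"] -= 7.0                   # db/dH: lower the base butterfat by 7%
    return product

def add_chocolate_chips(product):                 # feature J
    product["modules"].append("chocolate chips")  # j: the new module itself
    product["butterfat"] -= 5.0                   # db/dJ: lower the base butterfat by 5%
    if "cookie dough" in product["modules"]:
        product["butterfat"] -= 3.0               # d2b/(dJdH): extra 3% only when both exist
    return product

vanilla = {"modules": ["white base"], "butterfat": 16.0}   # the base "b" (16% is made up)
j_of_h_of_b = add_chocolate_chips(add_cookie_dough(vanilla))
print(j_of_h_of_b["butterfat"])   # 16 - 7 - 5 - 3 = 1.0, i.e. a total lowering of 15%
```

Notice that the 3% interaction lives inside add_chocolate_chips as a special case; every one of those special cases is a place a future change (or removal) has to know about, which is exactly why the goal is to keep the derivatives as close to zero as possible.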
You'll likely have a conversation like this: "well, we can't just add 'j.' We need to change our core library to get ready for J, and update this user control to get ready for J, oh and then we'd need to update this menu and that toolbar and this context menu and this documentation, oh yeah and then make sure that we didn't cause a regression of any of the functionality in our existing product, H(B), and then finally we could add 'j' and then test and document it."

Sure, you didn't use nice simple letters like "J", but that's what makes math so elegantly expressive. It doesn't matter what "J" is, the math doesn't lie.

Here's another pain point: let's say that in testing feature #4 of your product, a lot of bugs came up and moreover, marketing has proven beyond a reasonable doubt that no one even cares about feature #4. It was just some silly feature that a programmer added to feel more macho. Feature #4 is stopping your company from shipping and making any more money. You have to cut it! "Go, cut it out! Here's a knife" they shout to you.

"But, it's not that simple!" you reply.

"Why not? It's just as if I told you to make a version of a car that has an option to not have a GPS, or a TV, or cruise control, or rear defrost. The auto industry has been doing this for decades. Why can't you do it? Is this too much to ask!? Don't get in the way of my next bonus buddy!"

Ok, it's usually more peaceful than that. Well, usually anyway.

You can't just remove feature #4, "F4", because your product is now P = F6(F5(F4(F3(F2(F1(B)))))). Right about this time, you can feel the weight of all the derivatives. They are going to drive you to do a lot of searching and testing. It's not that you're a bad programmer; it's just that programs that haven't been built with the "SKU Driven Development" philosophy in mind tend to have higher amounts of derivatives. Usually, derivatives that could be almost zero are much higher and therefore cost you time and your company money.

Is There Any Hope?

I use the term "SKU Driven Development" because, as I write this, it has zero matches on Google. This is a good thing because it means I can define it to be what I choose it to mean. To me, SKU Driven Development is a software development philosophy that has the following four principles:

1. Always have a "get the product to market" mentality by thinking in terms of shipping a new "SKU" that has the additional functionality. This makes it easier to think of your product in a more natural way of adding and removing options like people can do with ice cream and cars.
2. Build your product in such a way that derivatives are as small as possible. You want them to be zero. This is not always possible, but that's your goal. It is extremely important to keep higher order derivatives small. I think that a developer should seriously think about his or her architecture if it requires significant second order derivatives. Furthermore, a software architect should be forced to give a rigorous defense of why their architecture necessitates third order or higher derivatives and convincingly prove there is no other reasonable way that wouldn't sacrifice something else that is more important.
3. Your product should be stripped down to some fundamental "core" SKU where further functionality can't be stripped or it ceases to be meaningful. This is sort of like a car with an engine and seats, but no air conditioning, radio, clock, cruise control, etc. Only once you have stripped things down to the core can you start to think in terms of additive SKUs.
4. Each piece of new functionality and every derivative that it necessitates must be testable.
This principle ensures that a quality product continues to be a quality product, even with the additional functionality.

The problem isn't that our industry hasn't thought about this; the problem is that there are many different things to "keep in mind" while trying to achieve it. As developers, we have many tools that can help:

- Object Oriented Programming and Component-Based Software Engineering help you think of the world in higher level pieces rather than individual functions or methods.
- Software Platforms / Frameworks like the .NET Framework and Rails help one have a richer base of functionality to start with. This is the "B" in the calculus.
- Aspect Oriented Programming (AOP) is a way of cleanly describing and injecting a change to an existing code base.
- Feature Oriented Programming (FOP) is a way of expressing programs in terms of "features." It was from a FOP paper that I "borrowed" the calculus.
- Separation of Concerns is a concept that has you break up programs "into distinct features that overlap in functionality as little as possible."
- Refactoring is a technique to take the product as you have it today and make it more SKU oriented.
- Product Line Architecture, which includes things like Software Product Line Engineering, Generative/Automatic Programming, Domain Specific Languages, Model Driven Architecture, and Algebraic Hierarchical Equations for Application Design (AHEAD).
- Software Design Patterns like the Strategy, Observer, Model View Controller, and Model View Presenter patterns. This leads to things like the fancy new ASP.NET MVC Framework.
- Test Driven Development (TDD) helps you achieve principle #4 by having you ensure that changes and additions are tested by creating a test first.
- Behavior Driven Design (BDD) extends TDD to a higher level focusing on end functionality of what the application should do.
- Mixins in languages like Ruby and Scala allow you to add functionality to a class in parts. It's like adding interfaces to a class, but the interfaces can have functionality. This is sort of like multiple inheritance, but cleaner.
- Plugins and technologies like .NET's System.Addin classes make it easier to build an application out of many parts.
- Dependency Injection can help make your app more customizable by allowing you to easily switch implementations of pieces.
- Inversion of Control (IoC) helps developers apply the Hollywood Principle of "don't call us, we'll call you." This helps in designing components that can be added to other products easily and therefore helps to create SKUs more readily.
- The Composite UI Application Block (CAB) and Smart Client Software Factory (SCSF) ideas and tools help blend components together into a cohesive end product.
- Monads came from the functional programming world, where they've been in use in languages like Haskell for years. They help add functionality to some core concept in a nice way. They're starting to be introduced into the mainstream now with LINQ and its future evolutions. There is nothing to fear about them; it's just that they have an unfortunate sounding name.
- Processes like Scrum help ensure that you develop meaningful SKUs on a regular schedule.
- ...The list could go on and on.

Having all of these things that we have to "keep in mind" as developers makes it hard to keep up. But, it's also why we're paid to do what we do. Marketing wants to give us a Marketing Requirements Document (MRD) with additional features F1, F2, F3, and we're paid to turn our existing product, B, into F3(F2(F1(B))).
It's not as easy as we'd like (f1 + f2 + f3) because all of those derivatives get in the way.

There's no silver bullet and it's unlikely that completely derivative free programming will ever be possible for real applications. SKU Driven Development isn't prescriptive beyond the four principles I outlined above. It's sort of a "try your best, but keep these overarching goals in mind." Academics are already showing inroads into how it might be possible with simple examples and things like AHEAD.

It's going to take time. I'm going to try to head towards the SKU mentality. I'll probably go down the wrong path many times. I'll probably create or at least suggest designs that are too complicated and don't stand up to growth and maintainability. Over the years I want to get better, but I don't see a clear path there yet. It will probably involve using some of the tools I outlined above.

Lots of things to think about and "keep in mind" while enjoying my next bowl of ice cream.
Jeff Moser
Mike Petry
Good stuff! I browsed to the Product Line Architecture Research Group site and tagged it as a bookmark for some future reading. I like your idea and I think you have found a good model for the complexity of software development, but I don't see it playing well with the management types that I am used to. Where I work, our managers use colors to indicate status (Green - good, Yellow - so-so, and Red - bad). I don't see talking about derivatives furthering my cause for my resources. Of course it may be fun to try it on them just to watch the drool run down their chins and their eyes glaze over - doh! Of course I am just joking! If we were able to truly communicate the complexity of our work, our managers would close down shop, cut their losses and go into the dry-cleaning business. Your list of software engineering practices to minimize derivatives is very complete. Right now I would like to explore product line architectures. PLAs lead to the thought that you are not just developing products but you are developing and nurturing the means of production to be used for future efforts. I think this type of thinking will keep our brilliant managing brethren from being entirely short-term bottom-line fixated and get them to think more big picture.
Thanks for the heartfelt comments, Mike. It's always tough working in a business where no one can physically see your work. However, ideas like feature-oriented programming can help.

Since writing this post, I've been in contact with the paper's authors. They recommended I read the follow-up that extends/solidifies the algebra. Another one that focuses on representing programs as trees. An interesting one on using colors in an IDE to represent granularity/features along with hiding an #ifdef approach in PLAs. Finally, one discussing problems with AspectJ.

All of them are interesting and offer ideas on how tools might make the problem more manageable. Unfortunately, no "silver bullet" exists. There are things like the Adapter pattern, or focusing on interfaces, but in our languages (especially without mixins), it's hard.
| 计算机 |
2014-23/2663/en_head.json.gz/20985 | Jarvis' e-mail, sent before the hostile takeover was revealed on June 6, 2003, came from the marketer once billed as Oracle CEO Larry Ellison's right-hand man. Jarvis has since left the company.
Confidential court documents show Oracle intended its takeover bid to sow doubt among rival's customers. Bottom line: Whether caused by an orchestrated FUD campaign or the simple fact that Oracle's bid cast legitimate doubt on PeopleSoft's future, it's clear that PeopleSoft has suffered.
More stories on this topic CNET News.com has reviewed a confidential court document that includes excerpts of Jarvis' e-mail and other Oracle documents that mention FUD. A June 2003 memo with no author listed, for instance, instructed Oracle salespeople to exploit the acquisition to "create FUD (fear, uncertainty and doubt) with prospects and customers alike as they understand the implications of this acquisition."
Portions of the potentially embarrassing documents could emerge in the latest court case pitting the two influential software companies against each other. Oracle is suing PeopleSoft in Delaware's Court of Chancery to eliminate antitakeover defenses including a "poison pill" agreement designed to keep the company independent.
PeopleSoft has argued that Oracle is not serious about completing the $7.7 billion acquisition and instead has wielded the offer to cast doubt on the future of the world's second-largest provider of enterprise application software and encourage customers to consider rival products. As soon as the Oracle offer was public, PeopleSoft has claimed in court filings, its salespeople "encountered palpable resistance among previously enthusiastic prospects" because of fears that some products would be discontinued.
The internal documents were excerpted in Oracle's Aug. 6 response to "interrogatories" from PeopleSoft. Interrogatories are a standard way to obtain information from an opposing party in a lawsuit. PeopleSoft had originally cited the excerpts when asking "what is meant by Oracle's stated objective to create fear, uncertainty, and doubt in connection with the tender offer." In its written reply, Oracle said its intention when making the offer was simply "to acquire 100 percent of the outstanding voting shares of PeopleSoft." Oracle's reply to PeopleSoft's interrogatories was provided by the Delaware court clerk's office. Brock Czeschin, an Oracle attorney at Richards Layton & Finger, specified in an accompanying letter that nine sections of the reply were "Highly Confidential--Attorneys' Eyes Only" and must be removed before the reply was made available to the public. However, the document provided by the Delaware court and reviewed by CNET News.com was not redacted.
Another excerpt included in the PeopleSoft interrogatory and Oracle's reply was an e-mail message sent by Keven Blake to Oracle President Charles Phillips and others on Sept. 10, 2003. It said that "(we) have successfully spread enough FUD along with our own capabilities to have a good average shot at winning." Oracle spokeswoman Jennifer Glass said Tuesday that the company will not "comment on the documents in question." Glass pointed to a recent statement from Oracle's Phillips saying: "We believe that the combined companies will provide customers with superior benefits and a stronger long-term alternative." A PeopleSoft representative did not respond to an interview request.
Mark Ostrau, a partner at Fenwick & West in Mountain View, Calif., said that the FUD documents alone do not prove that Oracle's offer for PeopleSoft was prompted by any ulterior motive that would have a legal impact. It would be more interesting to review early correspondence between executives and board members while the purchase was being contemplated, Ostrau said.
"You could read these (documents) either way," said Ostrau, who has represented PeopleSoft in the past but is not involved in the current litigation. "It's either following up on the grand sinister plan, or it's some enterprising salespeople trying to capitalize on it. It's aggressive behavior but not a smoking gun."
Whether caused by an orchestrated FUD campaign or the simple fact that Oracle's bid cast legitimate doubt on PeopleSoft's future, it's clear that PeopleSoft has suffered. The company has taken a hit on revenue and profits since Oracle launched the bid, missing analyst projections for the first half of the year and warning it's unlikely to meet full-year earnings targets. News this week that PeopleSoft expects to report third-quarter revenue growth was tempered by lower-than-expected profit. PeopleSoft shares closed Tuesday at $22.83, up from a 52-week low of $15.39. FUD's long history The concept of FUD enjoys a venerable history in the computing field. According to The Jargon File, an online dictionary of hacker slang, Gene Amdahl used the term as an attack on IBM after he left in the early '70s to found his own company: "FUD is the fear, uncertainty and doubt that IBM salespeople instill in the minds of potential customers who might be considering (Amdahl) products."
In a 1995 case pitting Addamax against the Open Software Foundation and Hewlett-Packard, Addamax claimed that the defendants used FUD to paralyze the industry and unreasonably raise customers' fears. One internal HP memo cited in that case was titled "Impact of FUD on Sun" and discussed ways to sabotage the AT&T-Sun Microsystems operating system by describing it as "nonstandard."
FUD also was used by cryptographers in the 1990s to scare politicians about criminals using data-scrambling encryption products to cloak their communications. More recently, the term has cropped up in the Microsoft and Linux war, with free software advocates using it to describe disinformation they say SCO Group and Microsoft have spread about the merits of software other than Windows.
Even Microsoft has been known to charge others of spreading FUD, an allegation that it made last September when complaining about software patents.
Unintentional disclosures have been a problem in the Delaware case before. Donald Wolfe, an attorney for PeopleSoft, wrote a letter to Judge Leo Strine on Sept. 21 saying he had no idea how the press had learned of a conference call a week earlier between the judge and the lawyers involved in the case.
"I personally do not believe that anyone on the PeopleSoft team would have had reason to regard disclosure of either the scheduling or the purpose of the conference as having any value to the company from a public relations standpoint," Wolfe wrote. "Nonetheless, if Your Honor would like me to pursue this further, I will certainly do so."
Other disclosures in Oracle's response to the interrogatories say that:
"No proposals have been made" by Oracle to condition its PeopleSoft offer on eliminating its customer assurance program, which offers customers a money-back guarantee if the company is purchased.
Oracle first learned of the Justice Department's staff recommendation to file suit to block the proposed merger on Feb. 10, one day before the news was made public.
Oracle's general counsel has sent the Securities and Exchange Commission a series of letters about the customer assurance program saying it "may negatively affect PeopleSoft's ability to recognize revenue on the contracts containing those terms."
Oracle's board and representatives of Credit Suisse First Boston worked closely when revising the prices the company had offered to pay for PeopleSoft shares. Oracle executives spoke "with the holders of a majority of PeopleSoft shares" around June 18, 2003, before upping the price. They raised it again in February 2004, because "Oracle believed an increased bid would aid in its efforts to wage a successful proxy contest."
CNET News.com's Alorie Gilbert contributed to this report. 1
crypto FUD
"FUD also was used by cryptographers in the 1990s to scare politicians about criminals using data-scrambling encryption products to cloak their communications."Surely you don't mean that - maybe you meant "FUD also was used by the FBI and other law enforcement agencies in the 1990s to scare politicians about criminals using data-scrambling encryption products to cloak their communications."
October 7, 2004 12:39 PM (PDT) | 计算机 |
2014-23/2663/en_head.json.gz/21333 | A world class studio based in Canberra, Australia We've been making awesome games for over 14 years. Australia's only AAA developer, we've developed critically acclaimed titles for console and PC and are always working towards improving player experience, pushing the edge of immersive and satisfying gameplay.
© 2014 Take-Two Interactive Software and its subsidaries. All rights reserved, 2K, the 2K logo and Take-Two Interactive Software are all trademarks and/or registered trademarks of Take-Two Interactive Software, Inc | 计算机 |
2014-23/2663/en_head.json.gz/22691 | Bring the Customer Along
4/30/2012 by Shweta Darbha
Recently I wrote an article with a rhetorical question for a title: “Are Customers Ready for Agile?” The idea stemmed from the fact that software development organizations have followed Waterfall methodology for so long that they h've ...
Maximizing the Value of Your Stand-up
Over the last several years, I've been both a participant and a facilitator in many different stand-ups. As we know, the true value of the stand-up lies in the team's ability to continually strive toward the "commitment" for the current sprint cycle. The stand-up isn't a status report, yet often it becomes easy for team members to slip into a pattern of providing status-related information. I've used the time-honored stand-up approach for a while now, but I've often thought that a mature team could take these 15 minutes to a different level as it continues to evolve using Agile/Scrum.
How Scrum Is Changing the Global Delivery Model for Software Development
4/25/2012 by Dr. Sanjeev Raman PMI-ACP SPC SA SPM/PO SP CCA
We're all familiar with the Waterfall offshore paradigm of software development, in which clients and vendors engage in an asynchronous, sequential model for software development. The client spends money and time to develop a formal project charte...
An Argument for Comprehensive User Stories
4/23/2012 by Shylesh Mysore
As Scrum practitioners know, a user story is a high-level requirement of a feature, provided from the perspective of a stakeholder who desires the new capability. These requirements enable the development and testing team to think about a solution...
Yes, We're Doing Scrum
4/23/2012 by Joe Morgan
In July 2011, the Scrum Alliance website featured an article by Alan E. Cyment entitled "Compasses, Trees and Pains." It posed the question, "Am I doing Scrum or not?" Interestingly, one reader responded, '"Yeah, we're doing Scrum, but we have thr...
Back to Basics: Daily Scrum
4/16/2012 by Ovidiu Pitic
More than a decade ago I was programming as part of enterprise software teams, fairly well protected from everything not related to our planned deliverables. I had a lot of fun.
But then I moved up to become an architect and team manager, and I g...
The Role of the Product Owner in Moving a Backlog Item to Done (or, It's Not Over Until the Fat Lady Sings)
4/16/2012 by Timothy Korson
Abstract: This article explores how to achieve the productivity benefits of an up-front enabling specification, given the reality that Scrum is an empirical framework in which emergent understanding of the story under development is inherent.
The Value of a Business-Oriented Team
3/29/2012 by Gastón Guillerón
More and more companies and organizations are enthusiastically adopting Agile, defining roles for each project, learning the practices, and entering the world of backlogs and burn-down charts.However, the theoretical simplicity of Scrum is often difficult to apply in real projects. Several organizational factors (assimilation of the methodology, resistance to change, organizational culture) can turn the trip into a nightmare. Luckily, there are ways to mitigate this problem.
Organizing a Scrum Conference When You're the Only Black Belt in Town
3/21/2012 by Juan Banda
For disclosure, I'll start by saying that even though I've practiced several martial arts, I've never achieved a black belt in any. However, I've trained in different Japanese martial arts with black belts who moved to my hometown for various reasons. They all wanted to continue practicing and teaching their martial arts; some of them found dojos to join while others needed to start from zero, attracting students and finding resources and a place to train.
"I Have No Impediments"
3/19/2012 by Christopher Broome
"I have no impediments."
It's the most common sign-off for every team member in the daily Scrum. It's also a lie.
We've all been there. Standing in a little circle of people, listening to the carousel of "This is what I did yesterday, this is wh... | 计算机 |
2014-23/2663/en_head.json.gz/25466 | 11/29/2012Police Tech & Gearwith Tim DeesBlurry photos: New program fixes the unfixableVladimir Yuzhikov, a computer engineer specializing in image and signal processing, has created software called SmartDeblur, a free software used for de-blurring images.In the days of my misspent youth, I was a semi-professional photographer. This essentially means that people occasionally paid me money for photos (and not the kind taken after bursting out of the closet of a cheap motel room).
SmartDeblur is his free, no-questions-asked software used for (as the name suggests) de-blurring images. (PoliceOne Image)In learning the technical aspects of the art — then done with light-sensitive film and chemicals — I was told that most exposure errors could be fixed (at least to a certain extent) in the darkroom. Underexposed film could be “pushed” in processing, and overexposures remedied in printing.
What couldn’t be fixed was bad focus. You miss your focus with the lens, and you need a mulligan. That has held true for the digital photography era, but there may now be a fix for that, as well.
Vladimir Yuzhikov is a computer engineer specializing in image and signal processing. He clearly knows so much more about the science of reconstructing poorly focused images than I do that I’m not even prepared to try and explain how his method works.
He offers to explain it on his website and the explanation includes several complex formulas I might have been able to work out when I was 18 years old — then again, maybe not. Even if we had the formulas then, the fastest computers then in existence wouldn’t have had a prayer of running them. Now, the processor in your laptop can handle it.
SmartDeblur is his free, no-questions-asked software used for (as the name suggests) de-blurring images. It doesn’t even have to be installed on the computer that runs it. You open the application from an executable file and are greeted with a window and several slider controls.
Load your blurry image into the program, and start monkeying with the sliders. A progress bar at the bottom of the window moves to show how the image is being processed according to each change in parameters, and on my reasonably powerful desktop machine, each change took about 15 seconds to resolve.
The software has its limits. If an image is too blurry, the best result you can get is still pretty useless. But if you’re trying to see detail that is just out of reach in the blurred image, you may be able to clear it up enough to see what you need to see.
The example furnished by Yuzhikov simulates a car license plate that is illegible in the original, but easily read in the processed version. His website includes several other images processed by his software, and the results are impressive.
Adobe Photoshop has included an “unsharp mask” filter for some time that can simulate sharpness in a softly focused image, but it doesn’t do much to clear up details. It’s better adapted for removing artistic blur effects from photos and making them look more utilitarian and businesslike. SmartDeblur uses an entirely different approach to the problem.
The current version of the software is free, but I expect it will soon be licensed to a commercial developer and improved so that detail will be recoverable from images even more poorly focused than the example. What it won’t do is produce information that isn’t there.
Popular crime TV shows depict the gearhead techie grabbing a tiny portion of a video frame and “enhancing” it to produce a portrait-like image of the bad guy, reflected in the chrome bumper of a passing car. Until we have video hardware capable of recording 50 megapixels of data in every video frame (a standard video frame has about 307,000 pixels), that ain’t happening.
Get this software while it’s still available for free. You may not need it today, but a future case might include a blurry still image you want to clear up for detail. About the authorTim Dees is a writer, editor, trainer, and former law enforcement officer. After 15 years as a police officer with the Reno Police Department and elsewhere in Northern Nevada, Tim taught criminal justice as a full-time professor and instructor at colleges in Wisconsin, West Virginia, Georgia, and Oregon.
He was also a regional training coordinator for the Oregon Dept. of Public Safety Standards & Training, providing in-service training to 65 criminal justice agencies in central and eastern Oregon.
Tim has written more than 300 articles for nearly every national law enforcement publication in the United States, and is the author of The Truth About Cops, published by Hyperink Press. In 2005, Tim became the first editor-in-chief for Officer.com, moving to the same position for LawOfficer.com at the beginning of 2008. He now writes on applications of technology in law enforcement from his home in SE Washington state.
Tim holds a bachelor’s degree in biological science from San José State University, a master’s degree in criminal justice from The University of Alabama, and the Certified Protection Professional credential from ASIS International. He serves on the executive board of the Public Safety Writers Association.
Dees can be reached at [email protected]. Keep up on the latest products by becoming a fan of PoliceOne Products on Facebook | 计算机 |
2014-23/2663/en_head.json.gz/26831 | Cuyahoga Valley
DOPWIC
Greenspace Plan
NRAC
Towpath Trail
Whiskey Island
Work Access
Towpath Trail & Greenway Extension
Towpath Trail Extension
Alignment & Design Study
Project Background and Significance
The Towpath Trail has become a defining feature in the Cuyahoga Valley landscape. Constructed in the 1820s as part of the Ohio & Erie Canal, it was a simple dirt path on which to lead animals pulling canal boats. When the economically unprofitable canal finally ceased to be used after the 1913 flood, the towpath survived as a silent witness to an earlier era.
The rediscovery of the towpath began with the establishment of the Cuyahoga Valley National Park in 1974. One of the major projects completed by the National Park Service was the conversion of approximately 20 miles of the towpath into a shared use trail. The success of this segment of towpath has sparked a campaign to extend the Towpath Trail to over 100 miles as a continuous journey through the federally designated Ohio & Erie Canalway National Heritage Area. In addition, the trail will serve as the northeast Ohio section of the Ohio to Erie Trail (Cincinnati to Columbus to Cleveland), now in progress.
Cleveland Metroparks completed additional segments of the Towpath Trail in its Ohio & Erie Canal Reservation, situated immediately north of the Cuyahoga Valley National Park. The northern terminus of the Towpath Trail is now at old Harvard Avenue, just east of Jennings Road.
The current project will complete the Towpath Trail in Cuyahoga County by creating about six miles of trail and greenway from old Harvard Avenue to the proposed Canal Basin Park at downtown Cleveland, under the Detroit-Superior Bridge.
In 2002, the Cuyahoga County Planning Commission completed the Alignment & Design Study. This project produced a detailed preferred alignment for an off-road route and neighborhood connectors, as well as a suggested design vocabulary for the project. Trailhead and interpretive opportunities were also refined. The project also included an environmental regeneration plan for the surrounding landscape, such as ecological restoration of hillsides, soil enhancements, improvements to drainage patterns, constructed and enhanced wetland pockets, and creation or restoration of riparian buffers and natural edges along the river channel.
In October 2004, nine agencies and organizations signed a Memorandum of Understanding concerning the roles and responsibilities for completion of the Towpath Trail. The members of the Towpath Trail Partnership Committee are the Cuyahoga County Executive, City of Cleveland, Cleveland Metroparks, Cuyahoga County Department of Public Works, Cuyahoga County Planning Commission, National Park Service, Northeast Ohio Areawide Coordinating Agency, Ohio Canal Corridor, and the Ohio Department of Transportation. The Management Committee consists of the County Executive, City of Cleveland, Cleveland Metroparks, and Ohio
Canal Corridor.
Engineering, design, and construction are being administered by the Cuyahoga County Department of Public Works. Once built, Cleveland Metroparks will handle day-to-day maintenance, interpretation, and security. Generally, the City of Cleveland will own the land under the trail.
Stage 1 is the three-quarter mile section from old Harvard Road to the south entrance of the Steelyard Commons shopping center. A consulting team led by DLZ Ohio is currently working on engineering and design. Studies by the U.S. Army Corps of Engineers identified environmental concerns north of Harvard Avenue. A new trail route has been identified. In the meantime, a temporary route has been created on Harvard Avenue and Jennings Road, connecting to Steelyard Commons.
For more information, visit the Stage 1 website.
Stage 2 is the one-mile section that is part of Steelyard Commons. This segment opened in early 2007 and provides a direct connection to the Tremont neighborhood of Cleveland at West 14th Street. The trail, including two underpasses, was fully paid by First Interstate Properties. First Interstate also built a wide sidewalk, giving users two riding options.
Stage 3 is the section from the north entrance of Steelyard Commons to Literary Road (north of the I-490 bridge). The Michael Baker Corp. is leading the consulting team. The work will be coordinated with the City of Cleveland's improvements to Clark Field, a major park and outdoor organized sports facility.
The final section of trail (Stage 4) will bring the project to Canal Basin Park, a new 18-acre urban park to be created at the northern terminus of the Ohio & Erie Canal. When constructed in the late 1820s, the canal originally included a large basin for the loading and unloading of canal boats, situated in the Flats where the canal connected to the Cuyahoga River (just south of Settler's Landing Park, in the vicinity of the Detroit-Superior Bridge).
The Michael Baker Corp. was selected to lead the design of Stage 4.
From Canal Basin Park, it is anticipated that connector trails will provide access to Lake Erie and across Cleveland's lakefront.
Please check updates for updated information about the project and construction status, and maps for a current project map.
Please feel free to email us with your comments and questions.
Rick Sicha, Principal Planner
Cuyahoga County Planning Commission
Cuyahoga County Planning Commission 2079 East 9th Street, Suite 5-300 Cleveland, OH 44115 | 计算机 |
2014-23/2663/en_head.json.gz/27655 | Microsoft Releases Silverlight 2
Microsoft Corp. today announced the availability of Silverlight 2, one of the industry’s most comprehensive and powerful solutions for the creation and delivery of applications and media experiences through a Web browser.Silverlight 2 delivers a wide range of new features and tools that enable designers and developers to better collaborate while creating more accessible, more discoverable and more secure user experiences.
Microsoft also announced further support of open source communities by funding advanced Silverlight development capabilities with the Eclipse Foundation’s integrated development environment (IDE) and by providing new controls to developers with the Silverlight Control Pack (SCP) under the Microsoft Permissive License."We launched Silverlight just over a year ago, and already one in four consumers worldwide has access to a computer with Silverlight already installed," said Scott Guthrie, corporate vice president of the .NET Developer Division at Microsoft. "Silverlight represents a radical improvement in the way developers and designers build applications on the Web. This release will further accelerate our efforts to make Silverlight, Visual Studio and Microsoft Expression Studio the preeminent solutions for the creation and delivery of media and rich Internet application experiences."Silverlight adoption continues to grow rapidly, with penetration in some countries approaching 50 percent and a growing ecosystem that includes more than 150 partners and tens of thousands of applications. During the 17 days of the 2008 Olympics Games in Beijing, NBCOlympics.com, powered by Silverlight, had more than 50 million unique visitors, resulting in 1.3 billion page views, 70 million video streams and 600 million minutes of video watched, increasing the average time on the site (from 3 minutes to 27 minutes) and Silverlight market penetration in the U.S. by more than 30 percent. Broadcasters in France (France Televisions SA), the Netherlands (NOS), Russia (Sportbox.ru) and Italy (RAI) also chose Silverlight to deliver Olympics coverage online. In addition, leading companies such as CBS College Sports, Blockbuster Inc., Hard Rock Cafe International Inc., Yahoo! Japan, AOL LLC, Toyota Motor Corp., HSN Inc. and Tencent Inc. are building their next-generation experiences using Silverlight."CBS College Sports Network streams more than 20,000 hours of live content annually for our 150-plus college and university official athletic partners, so we demand that our video player environment be both consumer friendly and robust," said Tom Buffolano, general manager and vice president, Digital Programming and Subscription, CBS Interactive-Sports. "Silverlight was the perfect choice to help develop and power our new, exclusive online collegiate sports experience, as it features the best price and performance of any streaming media solution on the market today. Silverlight also gives us the most flexibility in expanding the product in the future as we develop embeddable players and mobile platforms and explore new advertising integration opportunities."Continued Commitment to Openness and InteroperabilityMicrosoft announced plans to support additional tools for developing Silverlight applications by providing funding to Soyatec, a France-based IT solutions provider and Eclipse Foundation member, to lead a project to integrate advanced Silverlight development capabilities into the Eclipse IDE. Soyatec plans to release the project under the Eclipse Public License Version 1.0 on SourceForge and submit it to the Eclipse Foundation as an open Eclipse project.Microsoft also will release the Silverlight Control Pack and publish on MSDN the technical specification for the Silverlight Extensible Application Markup Language (XAML) vocabulary. 
The SCP, which will augment the powerful built-in control set in Silverlight, will be released under the Microsoft Permissive License, an Open Source Initiative-approved license, and includes controls such as DockPanel, ViewBox, TreeView, Accordion and AutoComplete. The Silverlight XAML vocabulary specification, released under the Open Specification Promise (OSP), will better enable third-party ISVs to create products that can read and write XAML for Silverlight."The Silverlight Control Pack under the Microsoft Permissive License really addresses the needs of developers by enabling them to learn how advanced controls are authored directly from the high-quality Microsoft implementation," said Miguel de Icaza, vice president, Engineering, Novell. "By using the OSP for the Silverlight vocabulary, they further solidify their commitment to interoperability. I am impressed with the progress Microsoft continues to make, and we are extremely satisfied with the support for Moonlight and the open source community."Beyond funding development in the free Eclipse IDE, Microsoft currently delivers state-of-the-art tools for Silverlight with Visual Studio 2008 and Expression Studio 2. In addition, support is now extended to Visual Web Developer 2008 Express Edition, which is a free download."We wanted to build a cutting-edge, rich Internet application that enables our customers to search our vast database of content and metadata so they can access movie reviews, watch high-quality movie trailers, and either rent or buy movies from our new MovieLink application," said Keith Morrow, chief information officer, Blockbuster. "Because Silverlight 2 now includes several new rich controls such as data grids and advanced skinning capabilities, as well as support for the .NET Framework, allowing us to access our existing Web services, we were able to easily maintain the high standards of the Blockbuster brand and bring the application to market in record time."Delivering Features for Next-Generation Web ExperiencesHighlights of new Silverlight 2 features include the following: .NET Framework support with a rich base class library. This is a compatible subset of the full .NET Framework. Powerful built-in controls. These include DataGrid, ListBox, Slider, ScrollViewer, Calendar controls and more. Advanced skinning and templating support. This makes it easy to customize the look and feel of an application. Deep zoom. This enables unparalleled interactivity and navigation of ultrahigh resolution imagery. Comprehensive networking support. Out-of-the-box support allows calling REST, WS*/SOAP, POX, RSS and standard HTTP services, enabling users to create applications that easily integrate with existing back-end systems. Expanded .NET Framework language support. Unlike other runtimes, Silverlight 2 supports a variety of programming languages, including Visual Basic, C#, JavaScript, IronPython and IronRuby, making it easier for developers already familiar with one of these languages to repurpose their existing skill sets. Advanced content protection. This now includes Silverlight DRM, powered by PlayReady, offering robust content protection for connected Silverlight experiences. Improved server scalability and expanded advertiser support. This includes new streaming and progressive download capabilities, superior search engine optimization techniques, and next-generation in-stream advertising support. Vibrant partner ecosystem. Visual Studio Industry Partners such as ComponentOne LLC, Infragistics Inc. and Telerik Inc. 
are providing products that further enhance developer capabilities when creating Silverlight applications using Visual Studio. Cross-platform and cross-browser support. This includes support for Mac, Windows and Linux in Firefox, Safari and Windows Internet Explorer. Get Silverlight 2Silverlight 2 is available for download at www.microsoft.com/silverlight. Customers already using a previous version of Silverlight will be automatically upgraded to Silverlight 2. | 计算机 |
2014-23/2663/en_head.json.gz/27732 | American McGee's Alice (c) Electronic Arts
Windows, Pentium III-400, 64 MB Ram, 580MB HDD, 4x CD-ROM
Friday, December 15th, 2000 at 01:25 PM
By: DaxX
American McGee's Alice review
I'm sure a lot of you loyal readers out there are wondering, "Who is American McGee? Is his first name really American?" To answer those questions, American McGee is an ex-id Software member who left to start this project in 1998 after mapping levels for Doom2, Quake, and Quake2. His first name is really American. Another question you might ask is "Why did he attach his name to a game, who the hell does he think he is?" Now I don't know the answer to that question, but I can say that he picked a pretty great game to attach his name to.
Wow. Ooo. Very nice. The engine powering Alice is none other than the Quake3 engine, arguably the most visually pleasing engine to date. Hell, it's not really even very arguable. This game is beautiful. The levels are lush and detailed. Every single level is simply stunning to look at. You can tell a lot of time and effort went into crafting each level. You won't see rehashed textures. You won't see reused houses, crates or rocks. Everything is fine-tuned. The colours (I'm Canadian I spell it that way, OK??) are rich and varied. The characters are well animated and stylized.
Great pains were taken to convert the Alice world into a dark and twisted place. They've done a perfect job of creating a world that remains true to the original story, yet twisted it just enough to make it interesting. The main characters are really well stylized, they all retain a look we recognize, yet are eerily evil. Weapon effects are great, especially fire effects.
Some handy features are a blue dot, which shows you where you're aiming (sometimes a problem in 3rd person games) and little feet that show where you'd jump to if you just press the jump button (jumping almost ALWAYS a problem in 3rd person games). There is almost no clipping issues, which is wonderful, and even small details like how Alice holds weapons as she climbs up ropes and ledges are virtually free of clipping problems.
The sound is probably some of the best I've heard in a game. The music, for starters, is almost perfect. It's simple, ambient, and blends into the background well and it sounds fantastic. Most of the tunes are very creepy, they really give the sense of a twisted and evil Wonderland. I hear the music was done by an ex-NIN member, which would explain how well the music comes across as creepy, but it's not fast or heavy.
Ambient sounds are worked in wonderfully as well. There are enough to add variety to the levels, but they aren't constant. This is good because it lets you listen to the background music and also gives you that creepy "alone" feeling. Some of the creepiest you'll hear are the little psychotic kids cackling and crying. Steam vents burst into action, rivers murmur, gears grind, electricity crackles. It's very immersive.
Weapon sounds are pretty good. Small details like the croquet mallet "squeaks" when you hit something (if you remember, the croquet mallet is actually a bird) and the jack-in-the-box plays a little music before it opens make a big difference. Overall they are nothing to write home about. Alice has 3 types of gibbing - slice in 2, slice off the head, and explosive gib. The slicing ones sound fantastic, you hear a satisfying sound and hear the blood splatter on the ground. The voice acting in this game is exceptional too. Congrats are in order for the voice of the Cheshire Cat, he sounds amazing. Other developers take heed - voice acting should be done by PROFESSIONALS, not by some jackass off the street. Hitman developers, do you hear me? "Excuse me I have to go to the bathroom"...ugh.
The gameplay is addictive but frustrating. I REALLY like this game but I REALLY don't like jumping puzzles and I REALLY don't like dying every 5 minutes because I fall into space or I fall into lava, or I fall into acid, etc. There is a lot of jumping puzzles in this game. They've done a good job with the little feet icon to help you figure out how to jump, but I still don't think that jumping puzzles = good gameplay. It's the single biggest problem I have with this game, and it IS quite a big problem.
That aside, however, the game is great. The enemies are difficult but not impossible. Every enemy you kill leaves behind a certain amount of health and mana (magic, consumed by most weapons) so it's pretty easy to keep maxed out. The level variety is fantastic, the thing that will keep people playing will be the need to see every single level in the game, not the challenge.
The enemy variety, sadly, leaves a bit to be desired. By the end I was pretty sick of the screaming skulls flying around and the card players get old fast. However, the variety isn't lacking that much and I'm glad they spent their time perfecting a few enemies rather than creating a ton.
AI wise, the enemies are decently smart. They will follow you around although they occasionally get stuck behind walls. They have a realistic field of view but they don't work together at all. For instance, a lot of the time you'll have an enemy that shoots explosive things, and in front of them will be a short-range fighting creature. The one behind will just keep firing explosive things and they'll keep hitting the enemy in front. I'm not sure if enemies can damage each other though, I usually kill them all before I could find out.
Certain other frustrations - some of the levels are too maze-like. The mirror maze and the queen hedge maze are both examples. They're not terribly confusing but I never like running around not knowing where I'm going or if I'm going around in circles. I think they could have been cut down to be less maze-like.
This game is fun because of the beautiful levels, graphics, and sound. This game is not fun because of the constant jumping puzzles and the dying that results from them. Overall, though, this game is quite addictive. The jumping puzzles aren't THAT frequent and they aren't THAT bad, but they've always annoyed me as a gameplay mechanism so I'm mentioning them a lot. Overall the game is a great experience and has kept me entranced for a long time.
In terms of the controls, there's not much to say, standard control set up. You can control things like the distance between Alice and the camera but the default is pretty good. The camera is very well done, it never gets stuck in awkward spots and follows Alice fluidly. I normally don't like 3rd person games because of camera and control problems but this game showed me that 3rd person can be done well, the Quake 3 engine is good at 3rd person, and that I am capable of enjoying a 3rd person game.
American McGee's Alice is simply a very good game. It's a good addition to your video game library. Buy it for a loved one for Chrismas or be your own early Santa and buy it for yourself.
Written By: DaxX
[ 42/50 ] Gameplay[ 10/10 ] Graphics[ 10/10 ] Sound[ 09/10 ] Storyline[ 10/10 ] Controls[ 07/10 ] Fun Factor
By: Prolix
With the creation of Lewis Carroll's novel Alice in Wonderland, the world became unwittingly exposed to a psychedelic subculture promoting alternate views on reality by means of acid and mushroom trips. The world of Wonderland depicted in the novel and Disney's movie was twisted and surreal. Perhaps it was this haunting view that allured me to American McGee's Alice. One of the first games to utilize the Quake 3 engine, Alice brings forth Carroll's demented view of Wonderland via 3rd person perspective.
The main character of the game is Alice herself, however, she is much more gothic and demented this time around. It seems as though when Alice left Wonderland, things started to crumble and guess who has to save it? With the help of a decrepit Cheshire cat, Alice must battle the undead and the queen's guards in order to save Wonderland from a horrible fate. The entire game is done via 3rd person, which is superbly done without any camera angle flaws.
One word sums up the eye candy in Alice, stunning. Throughout the entire game I felt as though I was really in Wonderland. Perhaps the best aspect of the game is the amazing environments the designers behind Alice created. Most of the levels are absolutely breathtaking and twisted at the same time. It is absolutely incredible to see what the Quake 3 engine is capable of. A few of the levels just have to be seen to be believed. The entire cast of characters has all been redone to fit American Mcgee's gothic vision of Wonderland and is first-rate. My favorite character would have to be Tweedle Dumb and Tweedle Dee, who are about as evil as they come in this rendition of Wonderland. Be forewarned, my 600mhz Pentium 3 struggled at times pumping out the beautiful world of Wonderland, so those of you with lower end machines might want to pass this one up. Alice's graphics are sure not to disappoint even the most cynical gamer.
The sound and control are flawless as well in Alice. The music is fantastic and sets the mood for each section of Wonderland perfectly. The voice effects are one of my favorite aspects of the game. As the Cheshire cat spoke to me, I really felt a sense of his wisdom and understanding of Wonderland. As for controlling, each key is configurable and I was able to get away with using my Quake 3 configuration.
Gameplay in Alice consists of moving from one linear location to the next. Conceivably, the only downside to the game is the fact that you can't stray from the path the designers want you to take. However, the level design and visual splendor more than make up for this minor inconvenience. Each level is designed to perfection and gives a genuine feel for the twisted imagery of Wonderland. The weapons at Alice's disposal range from a knife to deadly jacks, each weapon draws from the fantasy of Wonderland. The enemies are also true to the novel and movie, each being more gothic and disturbed than their original counterparts, my favorite being the card guards. As you progress through the game, the story behind your return to Wonderland becomes clearer. Despite the great fantasy behind Wonderland, a lot of it fails to return in American McGee's Alice. The story falls a little thin and in the end, it is insignificant.
The worst part of American McGee's Alice is knowing it is eventually going to end. Throughout the entire game, I felt so captivated and a part of this twisted world and I had a hard time quitting the game. Despite my personal love of the game, Alice isn't for everyone. A lot of people might be put off by this disturbing view of Wonderland, or just never cared for Wonderland in the first place. Unfortunately, Alice brings nothing innovative to 3rd person gaming, but relies on artistic talent instead. If you find yourself intrigued by Wonderland, I would defiantly suggest giving Alice a try, one of the best games this year.
Written By: Prolix
Copyright (c) 1998-2009 ~ Game Over Online Incorporated ~ All Rights Reserved
Game Over Online Privacy Policy | 计算机 |
2014-23/2663/en_head.json.gz/30265 | > Layout
Once the Copyeditor completes the "clean" copy in Step 3 of Copyediting, that version of the submission goes to the Layout stage. The Section Editor, on receiving notification of the completion of the copyediting, needs to then select a Layout Editor, if that has not already been done, and request that the Layout Editor begin work, by using the email icon under Request Layout. The Layout Editor will prepare galleys for the submission in each of the journal's publishing formats (e.g., HTML, PDF, PS, etc.). The Supplementary Files, which remain in the original file format in which they were submitted, will be reviewed by the Layout Editor and Proofreader to ensure that basic formatting is in place, and that the files conform as well as possible to journal standards. When the Layout Editor has completed the initial production of the galleys, which are the files that will be published online, the Layout Editor will email the Section Editor.Note that the Editor can schedule a submission for publication at any point in the editing process; this information will be made available to Layout Editors and Proofreaders and will give them the ability to preview the issue before publication. | 计算机 |
2014-23/2663/en_head.json.gz/31610 | Original URL: http://www.theregister.co.uk/2010/03/16/intel_xeon_5600_launch/
Intel pushes workhorse Xeons to six cores
Go Westmere, young man
The first volley in the volume x64 server price war was officially fired today, with Intel rolling out its "Westmere-EP" Xeon 5600 processor. Rival Advanced Micro Devices is widely expected to counter with its "Magny-Cours" Opteron 6100 processors on March 29, to be followed by the long-awaited launch of Intel's "Nehalem-EX" Xeon 7500s on March 30.The Xeon 5600s are the kickers to the very successful Xeon 5500s, the first server chips Intel got onto the field with the much-needed QuickPath Interconnect. QPI is important because it got processor cores and memory bandwidth back into whack after being out of kilter for years with the old Xeon frontside bus architecture.With the Xeon 5600s, Intel is increasing the core count from four to six with the top-end parts, but the memory slots per socket remain the same. With 4GB DDR3 DIMMs being affordable and 8GB DIMMs being merely expensive instead of outrageous - as they were a year ago - Intel is counting on DDR3 DIMM capacities to make up for holding the memory slots constant. Moreover, it's also counting on server OEMs being thrilled that they merely have to drop the Xeon 5600s into the same machines they created to support the Xeon 5500s, since they are socket-compatible.Intel stole a whole bunch of its own thunder for the Xeon 5600 launch back in early February, when it talked about the power gating and security features of the chip at the International Solid State Circuits Conference in San Francisco. The "transformational" Xeon 5500s launched with much anticipation in March 2009 and provided a much-needed goose to the server racket that had been hammered into the ground by the economic meltdown. Intel and its partners are hoping that the Westmere-EP follow-ons can keep building momentum for x64 server sales.The Xeon 5500s had two or four cores, 4MB or 8MB of L3 cache, and their 730 million transistors were implemented in 45 nanometer high-k process. Having perfected its 32 nanometer high-k metal gate processes late last year with desktop and laptop processors that were announced in January, Intel is deploying the next rev of its 32 nanometer processes to make the Xeon 5600s. That 45-to-32 nanometer process shrink, combined with better power gating to core and now non-core parts of the chip (allowing for the quiescing of segments of the chip that are not in use), means Intel can boost the maximum core count to six and pump the maximum L3 cache size up to 12MB and still stay in the same thermal envelope.The 32 nanometer, six-core Westmere-EP chip
The Xeon 5600 weighs in at 1.17 billion transistors and is 240 square millimeters in size. It is implemented in two halves of three cores each, as you can see. The core regions have their own clock speed and power supply, and with the tweaks to the Westmere design the L3 cache and memory controller regions - what Intel calls the "uncore" areas - get their own separate power gating. This allows Intel to be a whole lot more stingy about power usage with the Xeon 5600s.As El Reg previously reported, the Xeon 5600s have had their on-chip DDR3 main memory controllers tweaked so they can support low-voltage DDR3 main memory. This low-voltage memory runs at 1.35 volts instead of the 1.5 volts of standard DDR3 chips, and the net effect is that memory DIMMs run about 20 per cent cooler when using the low-voltage parts without sacrificing performance, Intel said back at ISSCC, but now the company is only claiming a 10 per cent savings in power.The Xeon 5600 processors, Intel divulged back in February, have a set of native cryptographic instructions that implement the Advanced Encryption Standard (AES) algorithm for encrypting and decrypting data. But in a conference call with journalists, Boyd Davis, general manager of marketing for Intel's Data Center Group, said that the company has also grabbed its Trusted Execution Technology (TXT) security features from the vPro business PC platform and hardened it so it can be used to secure virtualized server environments. Specifically, the TXT functions built into the Xeon 5600 platform can be used to prevent the insertion of malicious software prior to the launching of the hypervisor when a machine boots.Here's how the Xeon 5600s stack up, and how they compare to the Xeon 5500s and 3400s that have not been replaced in the lineup:The current Intel one-socket and two-socket server and workstation chip lineup
As with the Xeon 5500s, not every feature is enabled in every chip. In the table above, TDP is Intel's thermal design point rating, in watts. TB is short for Turbo Boost, whic | 计算机 |
2014-23/2663/en_head.json.gz/32672 | Nominations Sought for 2014 IEEE/SEI Watts S. Humphrey Software Process Achievement Award
October 1, 2013—Nominations are now open for the 2014 IEEE/SEI Watts S. Humphrey Software Process (SPA) Award. Since 1994, the SEI and the Institute of Electrical and Electronics Engineers (IEEE) Computer Society have cosponsored the SPA Award. This award recognizes outstanding achievements in improving an organization's ability to create and evolve high-quality software-dependent systems. It is based on achievement and not necessarily made every year, and multiple awards may be made in a year.
The SPA Award competition is open to all software professionals who participate in software development, support, or management, and are employed by and participate in the software work of an organization that produces, supports, enhances, or otherwise provides software-intensive products or services.
Achievements recognized by the SPA Award can be the result of any type of process-improvement activity. They need not have been based on a specific framework, model, or body of software engineering principles, practices, techniques, or methods.
The SPA Award may be presented to an individual, group, or team. Nominees are most often employees of an organization that produces, supports, enhances or provides software-dependent systems. However, the nominee's work may have been undertaken in other contexts. The nominee's organization may be for-profit, not-for-profit or non-profit;may be industrial, academic, government organizations or foundations;and need not be based in the United States.
The SPA award is named for Watts S. Humphrey, known as the "Father of Software Quality." Humphrey, following a long career with IBM, served at the SEI from 1986 until his death in 2010. He dedicated the majority of his career to addressing problems in software development including schedule delays, cost increases, performance problems, and defects. During Humphrey's tenure at the SEI, he and his team identified characteristics of best practices in software engineering that began to lay the groundwork for what would eventually become the Software Capability Maturity Model (CMM) and, eventually, CMMI. In 2005, Humphrey received the National Medal of Technology for his work in software engineering.
"We have found by applying to software the principles that made the industrial revolution possible, software engineering teams can achieve improvements in quality, predictability, and productivity that exceed our wildest dreams." –Watts S. Humphrey
"Past winners have compelling stories that show not only how their process improvement activities have increased their ability to deliver better quality software on time and within budget," said John Goodenough, SEI fellow and longtime member of the SPA Awards Committee. "They also demonstrate how they are sustaining the introduction of disciplined software engineering practices throughout their organization."
In many cases, Goodenough explained, the winners are organizations that have grown significantly over several years. As a consequence, their approaches to instilling a disciplined engineering culture are often instructive in terms of the costs incurred, the benefits obtained, and the overall effectiveness of their indoctrination strategies. "Winners are not just highly effective software engineering organizations today;they have structures and practices in place to insure that this effectiveness is maintained and improved in future years. They are committed to continuous improvement, and as award winners, they show why this commitment pays off in business success."
Award winners will receive an engraved plaque commemorating their achievement. Presentation of the award will be made at a practitioner and researcher community event at which the winner will be invited to keynote. Award recipients will also produce an SEI technical report describing their accomplishments, experiences, and lessons learned.
Past winners of the Watts S. Humphrey Software Process Achievement Award include
2009 Infosys Technologies Limited
2006 Productora de Software S.A. (PSL)
2004 IBM Global Services, Australia: Application Management Group
2002 Wipro: Software Process Engineering Group
1999 Oklahoma City Air Logistics Center: Aircraft Management Test Software & Industrial Automation Branches
1998 Advanced Information Services, Inc.: Development Group
1997 Hughes Electronics: Software Process Improvement Team
1995 Raytheon Company: Software Engineering Process Group, Equipment Division
1994 Goddard Space Flight Center: Software Engineering Laboratory
For more information about the Watts S. Humphrey Software Process Award, please visit http://www.sei.cmu.edu/process/casestudies/processawards/.
To nominate an individual or group for a Humphrey SPA Award, please visit http://www.computer.org/portal/web/awards/spa and click the Nominate link. Filter News by Category
SEI Bulletin Media Contact
If you are a member of the media or analyst community and would like to schedule an interview with an SEI expert, please contact:SEI Public RelationsRichard LynchMedia Line: 412-268-4793Email: [email protected] other useful information sources, please visit the Contact Us page. © 2014 Carnegie Mellon University | 计算机 |
2014-23/2663/en_head.json.gz/34559 | Dennis Richie
SEARCH : Home : Unix Contributors
Contribution - (Full Biography)
When he joined in 1967, Bell Labs was a corporation jointly owned by American Telephone and Telegraph Company and its subsidiary Western Electric. Soon after, with Ken Thompson and others, he started work on Unix. After Unix had become well established in the Bell System and in a number of educational, government and commercial installations, he, along with Steve Johnson and with help from Ken Thompson, transported the operating system to the Interdata 8/32, thus demonstrating its portability and laying the groundwork for the widespread growth of Unix: the Seventh Edition version from the Bell Labs research group was the basis for commercial Unix System V and also for the Unix BSD distributions from the University of California at Berkeley. The last important technical contribution made by Dennis Richie to Unix was the Streams mechanism for interconnecting devices, protocols, and applications.
Father of C and Co-Developer of Unix, Dies - Oct 8, 2011
Dennis Richie Home Page
Unix Programmer's Manual, First Edition (1971)
C Programming Language
Why is the picture on a Playing Card | 计算机 |
2014-23/2664/en_head.json.gz/1950 | Research shows that computers can match humans in art analysis
Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles.
While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historian do.
In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately1,000 paintings of 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.
For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster.
The experiment was performed by extracting 4,027 numerical image context descriptors – numbers that reflect the content of the image such as texture, color, and shapes in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities.
According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism.
“This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said.
Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998.
Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists.
She also has used the computer program as a consultant to help a client identify bacteria in clinical samples.
“The program has other applications, but you have to know what you are looking for,” she said.
Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. “this is just the tip of the iceberg,” she said.
At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said.
She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology.
“Everyone has the ability to apply themselves in different areas,” she said. | 计算机 |
2014-23/2664/en_head.json.gz/2482 | Knowledge Center How-To Articles
How-To Categories Becoming a Professional Photographer
Camcorder Operation
Camera Phone Operation
Digital Camera Operation
Filmmaking Tips
Online Sharing & Social Networking
Adding Audio Effects in Adobe Premiere Sound is just as important to a production as video is. Great audio effects enhance your video projects and add value to your production. They help audiences become more drawn in to the artificial world...[more]
Understanding DVD Aspect Ratio: Pan and Scan and Letter Box Aspect ratio is the ratio between the width and height of an image. The most popular aspect ratios are 4:3 and 16:9. The 4:3 aspect ratio is the standard aspect ratio for NTSC, PAL, and digital...[more]
Beyond the Battery: How to Use Alternative Power Sources on DV Camera If you find yourself in a situation where you DV camera's battery is about to die, then you need to find alternative power sources. The only way to power a camera without the battery is...[more]
Animating Using Keyframes in Adobe Premiere Adobe Premiere is a wonderful video editing program where the only limit is your imagination. With some creative thinking, you can add very dynamic animations to your project. This is all possible because of key...[more]
FireWire vs USB FireWire and USB are two types of connection which can be used to connect external storage devices and cameras to your computer. Most computer users will use USB devices as a way to connect external...[more]
All About FireWire Firewire and USB are two competing types of connection used for cameras and external storage devices. Although USB is the most popular form of connection for consumers there are plenty of reasons why video editors...[more]
A List of the Final Cut Pro Keyboard Shortcuts Below is a list of some of the Final Cut Pro keyboard shortcuts. These shortcuts should work regardless of what version of Final Cut Pro you have. The majority of the shortcuts require that you use...[more]
Final Cut Pro 7: 6 New Features Final Cut Pro 7 is a non-linear video editing program. Apple is the manufacturer of Final Cut Pro 7. Final Cut Pro 7, which was released in 2009, is only able to run on Mac computers that are...[more]
4 Reasons to Use Compressor Compressor is a software that is designed by Apple for use with their Final Cut Studio package. This software allows users to encode projects into the necessary DVD format. Compressor can also convert video from...[more]
The Difference Between NTSC and PAL NTSC stands for National Television Standards Committee. PAL stands for Phase Alternating Line. NTSC is the standard broadcast format in the United States, while PAL is the standard broadcast format in Europe, Australia, and parts of Asia. If you...[more]
How To Get a DVD Studio Pro Upgrade A DVD Studio Pro Upgrade can be used to make the application more stable and add additional features. It's very important that you learn how to upgrade the software so it can be made as...[more]
DVD Studio Pro: Adding Sound To a Menu DVD studio pro is one of the most popular video editing packages available for Apple computers. This is a very powerful application that makes video editing and DVD authoring much easier. One of the amazing...[more]
How to Record Yourself on a Digital Video Camera If you have a digital video camera, you might be interested in finding out how you can record yourself. Recording yourself is actually very easy, and can be used to make a video recording of...[more]
LCD Indicator and Symbols While Recording Explained When recording, there are a number of LCD indicator symbols that mean a variety of different things. There are lots of different symbols that are all designed to tell you different things about your video...[more]
How to Change the Language on Screen of Your DV Camera View Finder Most digital video cameras support multiple language screen options. This is useful if you buy a camera from a foreign country. Depending on the camcorder you have, you might be able to use different languages....[more]
3 Ways to Prevent LCD Screen Damage on Your Digital Video Camera Anyone with a digital video camera will need to find a way of preventing LCD screen damage. The LCD screen is used to view what you are recording, and to replay any videos you've already...[more]
The Difference Between a 16:9 Aspect Ratio and 4:3 Aspect Ratio Aspect ratio is defined as the ratio of width to height of an image. A 4:3 aspect ratio means that for every 4 inches of width in an image, you will have 3 inches of height....[more]
Progressive and Interlaced Displays Explained Progressive/interlace displays are the two main categories for screens today. Put simply, progressive is better because it will minimize picture flickering. Progressive displays re-draw every single horizontal line each cycle. For example, a 1080p (p...[more] | 计算机 |
2014-23/2664/en_head.json.gz/2567 | Original URL: http://www.theregister.co.uk/2011/02/14/google_mozilla_and_microsoft_do_do_not_track/
Google, MS, Mozilla: Three 'Do Not Tracks' to woo them all
So many ways to do one simple thing
With the arrival of Microsoft's IE9 release candidate, we now have three separate "do not track" mechanisms from three separate browsers makers. There's room for them all. But it would be nice if we could agree a single mechanism that makes it as easy as possible for netizens to sidestep behavioral ad tracking, as the US Federal Trade Commission has requested.In a December report on web privacy, the FTC recommended a "simple, easy to use choice mechanism for consumers to opt out of the collection of information about their Internet behavior for targeted ads". The most practical method, the commission said, would "involve the placement of a persistent setting, similar to a cookie, on the consumer’s browser signaling the consumer’s choices about being tracked and receiving targeted ads".Mozilla has already built such a mechanism into the latest Firefox beta: a "Do Not Track" http header that lets netizens tell the world they don't want to be tracked. All that's left is for websites and ad networks to actually recognize the thing – and for other browser makers to adopt it too.Neither is on the immediate horizon. Mozilla only proposed its DNT header last month, and the open source outfit is still in the early stages of sweet-talking the rest of the web. "Mozilla has garnered support from a number of stakeholders, starting with our users and developers," Mozilla global privacy and public policy leader Alex Fowler tells The Reg. "We continue to engage with key players in the online advertising industry and are seeing strong interest in server-side implementations of the DNT header."Meanwhile, both Google and Microsoft have rolled out their own do-not-track mechanisms. Hours after Fowler and Mozilla unveiled their proposal, Google released a Chrome extension that lets you opt-out of tracking cookies from multiple advertising networks, including the web's top 15. It works even if you regularly clear your cookies.Of course, Google is among those running the top 15 ad networks. This is very much a case of self-regulation, and it's not much of a change f | 计算机 |
2014-23/2664/en_head.json.gz/7557 | Jaspersoft 4.1 unifies analysis from multiple data sources
In what is positioned as the first of a series of releases to enable support for "existing and emerging data environments", Jaspersoft has released the latest version of Jaspersoft BI Suite, the open source business intelligence suite. In Jaspersoft 4.1, the most prominent new feature is a new unified analysis environment; instead of requiring the use of a variety of tools to access data from different sources, the new environment provides a single web-based user interface for data from OLAP, relational and big data sources. This will allow "BI Builders" – the people who create the reports and analysis – to stay within the Jaspersoft BI web-based environment. With the integration of analysis into the BI web application framework, the company says it now offers a "100% web application architecture" based around W3C standards. The BI web framework has been improved with the ability to more simply customise it using CSS markup techniques. Jaspersoft 4.1 now supports a native 64-bit installer, enabling developers to take advantage of the more powerful hardware where additional performance or scalability is needed. Jaspersoft's BI suite is structured as an open core project. Its basic components are available as open source code in community versions, but the complete suite with additional features is only available as a commercial product. A table is provided to compare the various editions. The community versions of Jaspersoft products are available to download under the GPL or LGPL licence. See also:
JasperSoft integrates with R statistics, a report from The H.BI suite: Jaspersoft 4 with new user interface, a report from The H. | 计算机 |
2014-23/2664/en_head.json.gz/8315 | Home > Risk Management
OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods | 计算机 |
2014-23/2664/en_head.json.gz/8708 | There are two ways you can get on-line, or so conventional wisdom has it: one for the computer whizzes, and the other for the rest of the world.
If words like "Unix" and "Winsock" and acronyms like PPP and FTP don't scare you, you can buy a flat-rate account with a small- or middle-sized company that will hook you up directly to the Internet without a lot of hand-holding.
But everyone else, most computer advice columns suggest, would do better to fork out their money to one of the "big three" online services: America Online, Compuserve and Prodigy (to be joined later this year by the Microsoft Network). They may cost a little more, but they make life easier -- and besides, only a big, well-funded company can provide a decent level of service and support, right?
Last week I decided that, in this as in so many other things, the conventional wisdom is dead wrong.
Here's how I spent last Friday morning: I'd heard about an article in Newsweek I wanted to read. I don't subscribe to that magazine, but I do subscribe to Prodigy, and Newsweek has recently launched a nicely designed, multimedia-enhanced version of itself there.
I fire up Prodigy. Unfortunately, I haven't used it in some time, and I soon discover I'll have to download the latest version of Prodigy's software before Newsweek can appear on my screen. A half-hour later -- after I obey a three-page set of instructions and do a needlessly complex little dance with temporary directories -- the new programs are safely on my hard drive. But when I launch them, the program stops dead in its tracks: It wants my password.
My password? I haven't thought about it since I entered it, a couple of years before, when I first joined Prodigy. Ever since, it has resided somewhere in the bowels of Prodigy's "automatic logon" program. Who knows what it is? I can't even remember if it's a word I'd chosen or one they'd assigned me. And I certainly never wrote it down anywhere; I have followed the experts' advice.
By now, it's been about an hour since I decided to look up the Newsweek article. In that time I could've bought the paper magazine at the corner store and read it cover to cover. Instead, I'm waiting for a human being to pick up the phone at Prodigy's headquarters in White Plains, N.Y.
White Plains says to call Prodigy's 800 number. Prodigy's 800 number turns out to be a voice-mail menu from hell. After wrestling with mind-numbing touch-tone options, I find the recording that promises to explain how to change your password and how to cancel your account. But all the instructions involve logging on to Prodigy -- which is precisely what I can't do at the moment.
I'm getting a little steamed, so I call back White Plains. It's pushing lunch time here on the west coast, which mean's it's 2 or 3 p.m. back at Prodigy HQ. I explain to the operator that I wish to change my password or, failing that, cancel my account.
"Wait a second," he says, putting me on hold. Finally, he returns: "We can't deal with that right now. Everyone's gone home early for the holiday weekend."
This is the best a national communications service can do? The truth is that I get far better service, and far more immediate response, fro | 计算机 |
2014-23/2664/en_head.json.gz/9320 | Our talk with Warhammer Online's Carrie Gouskos, part 2
by Eliot Lefebvre on May 12th 2010 3:00PM
Are the other cut cities ever going to be introduced?
They're not off the table. There are two pieces to the city: it's a whole lot of stuff to do, and we weren't even completely satisfied with the two that we had. Now that we're taking a look at the new city gameplay, we might go and say "this is great, we can find an interesting way to reintroduce those cities for sure." That's the core piece of it -- the other, well, I was going to cast aspersions and ask "would you rather have some awesome new zone or another city?" But I think they could be interchanged. So, no, they're definitely not off the table.
It's interesting that people always ask that question. It will come back if you mention anything... people will say "well, what about the cities?" It's fun, because the players keep us honest.
Where do you think the game is currently the weakest, and what's being done to address it?
When we look at our game, the major thing we want to do is take whatever part of the game has the biggest impact on players -- how can we improve that quality? Sometimes, that means bugs, but usually it has something to do with some kind of core system. The cities are a good example of how we took a big part and just said we were going to tackle this, it's going to have a large impact on a lot of players. In particular, because more players are going to have a chance now to try the cities out, and that's really cool. I think the next big thing that I mentioned is open RvR, that's something we really need to tackle. That's going to have the most impact on the largest number of players in our game. That's always the focus, the most impactful thing.
What is the game's biggest strength?
Because we just really highlighted those two -- we looked at scenarios, we looked at cities -- we're really, really satisfied with the movement in both of those directions. I think those are really shining right now. There's so much RvR in our game that you're always going to find one piece or another that people prefer, and so it will almost always answer both of those questions. Just because we're so focused, we really want to focus our game on RvR.
Can you give us any hints about what's coming with 1.3.6 and beyond?
This is my least favorite question, because we want to give you and your readers something valuable. One of the things I'm very passionate about -- and it's gotten us into trouble -- is saying "this new thing is coming, it's going to be awesome," and then for one reason or another it's not panned out the way we wanted, or it had to get cancelled. So I'm really careful about teasing the future.
We've talked a little about 1.3.6, such as my producer's letters talking about how we're working on improvements to auction houses and items, various things along those lines. I've got nothing else, but I think I would highlight that we do have some really awesome stuff coming -- and 1.4 is a patch that I'm very excited about.
Is a boxed expansion still in the cards for Warhammer Online?
Absolutely not out of the question, that's certainly something that I would like. Expansion is a funny word, and it sets expectations in a certain way, but we also don't plan on only doing specific patches going forward. That's probably the best answer I can give you. When people ask "oh, when are you going to do more content?" -- we do content in every patch. "When are you going to do an expansion?" We are planning on running the game as much as we possibly can, as long as we possibly can, and definitely more than just patches.
What would you like to say to the often-vocal portion of the population who claim the game is in maintenance mode or dying?
It's funny -- there are certain outlets on the internet, like forum threads and comments on blog posts, one thread where someone called me "the Sarah Palin of Warhammer Online", for example -- where people are really negative. I mean, really negative. And not just about MMOs, you see it about everything. YouTube comments, people can be really negative. In the case of WAR, it does feels like it comes from people who used to play the game, but don't currently. And I'm not sure why those people care. That being said, I really do hope that our current playerbase doesn't feel that way, and they understand that we're doing quite a lot. We run the game all over the world, we have weekly maintenance, mostly where we do major crash fixes, bug fixes, and exploit fixes, things like that. We have a weekend warfront every weekend - - sometimes we recycle them, but we have new ones far more often than we have repeat ones, and so we're always coming up with things to do on the weekends. Some of those we're going to change the rules around a little bit, more and more. We do a major patch every couple of months. We do major live events several times a year. And there's always more that we're working on. So we're doing quite a lot, so it baffles me when I see it -- and I definitely see it when people say this -- but we're really happy right now. We're in a really good place, our playerbase is really happy -- but if anyone out there is really concerned about whether we're dying or not, they're more than welcome to get a subscription and come help us out.
On behalf of the Massively team, we'd like to thank Ms. Gouskos for her time and her responses. < < Back to page 1 of 2
Share Reader Comments (45) Posted: May 12th 2010 8:09PM Deadalon said | 计算机 |
2014-23/2664/en_head.json.gz/15872 | Ats Diary Week Two
Part of the AtsDiary.
Today we finished estimating the UserStories and had our PlanningGame meeting. We weren't able to do a SpikeSolution for all of our cards, unfortunately, so some remained high risk and difficult to estimate. The planning meeting went very well, though: we got through about 40 cards in an hour. Even so, I think the process could have been streamlined some. More about it in AtsPlanningGame.
After planning, we came back to our office and defined AtsEngineeringTasks. We didn't estimate them for two reasons. First, one of the developers who's supposed to be working with us hasn't arrived yet. Second, the remaining developer isn't familiar enough with the application to feel comfortable estimating tasks. So we estimated the tasks for just one story. We'll do those tasks and then compare our estimate to the actual number of IdealHours? we spent on each task. That will allow us to calibrate our estimates so we can be more accurate in the future.
The last thing I did today was to create the first AtsStatusReport. Although it took me about an hour and a half to complete, I think the time was well spent. Doing these reports will keep the users up to date on the progress of the application, but its primary purpose is to satisfy the project's GoldOwners that things are progressing smoothly. Without this regular feedback, since the GoldOwners won't actually be using the app or participating in planning meetings, the GoldOwners wouldn't have the confidence in our process they need to continue funding development.
We started development today. There's not much to report; we spiked a high-risk problem successfully, which has made me feel much better overall about the risk factors on the project. We also discussed the risks on the project as part of AtsRiskManagement. Then we started in on our first major development task and set up the AtsUnitTests?. (We're using JavaUnit.)
We doing AtsPairProgramming, but I'm a little ambivalent about it so far. It does help me focus a bit better, and we did get a ton of stuff done today, but it's hard for me to see my PairPartner? sitting there and just watching. Because I have much more experience with ATS than my partner does, as well as more Java language experience, there wasn't much in the way of collaboration between us. On the other hand, my partner is very enthusiastic about it and says he's learning a lot. I own this task and am sitting at the keyboard; maybe the next task, which my partner owns, will go more smoothly. (There's only two of us on the project at this point, so we're pretty much stuck with each other.)
Today we did nothing but write a Perl script to automatically deploy ATS. ATS is a distributed application consisting of an applet front-end and a "servlet" back-end that allows our in-house distribution protocol to run on HTTP. In the first phase of ATS, deploying ATS was always a day+ ordeal involving dark rites and chicken blood.
For this phase of ATS, I knew that we had to release more often, because when a distribution problem did arise, it was almost impossible to track down. ExtremeProgramming's ContinuousIntegration seemed like a very good idea, but I knew that if we had to continue to sacrifice goats to deploy ATS, ContinuousIntegration wouldn't work. So today we wrote a script that compiles everything, UnitTests the build, packages it up, deploys it, and then unit and function tests the application in its distributed state.
So far, we don't have very many AtsUnitTests? (the idea was introduced in the first phase of the project, but it never caught on), and my partner is definitely not TestInfected, but he's humoring for now. We only have one functional test -- to see if the application is truly deployed (i.e., is the 'distributed' flag turned on?), but that's a good start. I've used UnitTests extensively on other projects, and I'm looking forward to having the feeling of confidence they provide available on this project.
("We," by the way, is just Al and myself. We were supposed to get another developer last week, but he keeps getting delayed.) 10 March 2000
Our third and final developer arrived yesterday. We spent a good deal of time reviewing the situation, re-estimating and redistributing AtsEngineeringTasks (remember, developers must estimate their own tasks), and setting up a second machine. As a result, our AtsLoadFactor skyrocketed. I calculated it for the first time today at lunch and it came out to 5.8! Our schedule commitments had been based on a load factor of 3 (and I had thought that was high), so I panicked a bit. I posted the figure prominently on the AtsTrackingWhiteboard, underlined the "Committed to 3.0" section a few times in red, and went to lunch.
This afternoon, we all buckled down and got some work done. I don't claim it was the whiteboard that did it, but my notes certainly did draw attention and people discussed it. After an uninterrupted afternoon's work, I recalculated our load factor -- it had dropped to 3.7. I guess we're early enough in the iteration that we can still have a significant effect on the load factor. I was concerned that it would be hard to change.
Our initial development efforts were a little fragmented. Rather than estimate all of the AtsEngineeringTasks in the iteration, we just estimated the tasks for one AtsUserStory. The thinking was that we weren't really sure how long the tasks would take, so we'd do one story's worth and use that to calibrate our estimates.
In retrospect, that wasn't necessarily a good idea. The tasks have a lot of dependencies on each other, and they were spread across two developers. (I have very few tasks, as I'll be spending most of my time mentoring.) When we ordered the tasks, we concentrated more on risks (WorstThingsFirst) than on dependencies, and we ended up thrashing around a bit. We set up our tests, then went to code, and discovered that we'd end up stubbing in more code than we'd actually write, and that we'd have to redo most of our work once the stubs were implemented. So we got together again and reprioritized our tasks based on dependencies.
Since we reprioritized, dependencies haven't been too big of an issue. There's been some cases here or there where we've had to stub in a class that we knew the other developer was working on, but since we're using a revision control system that supports concurrency, it's not a big deal. The tests give us confidence that merges won't cause problems.
A lot of our initial development effort has been spent on the test framework. For the most part, though, we're still coming in under our estimates. Most of the stuff we're working on right now involves database work, so our tests have to tweak the data in the database. (We could use a MockDatabase, but for us that's not DoTheSimplestThingThatCouldPossiblyWork.) We're gradually factoring out a nice set of methods that will set up our database tests for us. In the process, we've also added some nice simple methods to our Database class and identified additional refactorings on it that we can do in the future.
The developers are starting to become TestInfected. One developer mentioned that the tests give him more confidence in the code. They're having more difficulty with CodeUnitTestFirst, with both developers mentioning that they're having trouble with that mindset. I'm fortunate, though, in that they're willing to humor me for now. Next week, we'll review the methodology and discuss what's working and what's not. One problem we have had is that we can't seem to get our debugger to recognize breakpoints that are behind JavaUnit tests. (I suspect it's related to the reflection JUnit does.) This hasn't been too big of a problem, fortunately, since we've rarely needed that kind of debugging, so println has sufficed. (Note: This turned out to be a case of PoorUserInterfaceDesign. JUnit wasn't at fault.)
Overall, this is the most fun I've ever had on a software project, and I think the other developers agree. There's a lot of back-and-forth banter between the two pairs, typically "complaining" about how I asked for a test or changed a test to break somebody's code. The tone is light-hearted but we're getting a lot of work done. The team's definitely starting to jell. One thing that's been absolutely critical to this process is that our offices are across the hall from each other, and we can shout back and forth at each other. (No doubt greatly annoying the other residents of this section.) It would have been even better if we could have gotten a single large room to ourselves, but I count myself lucky to get what we did.
So far, the process of DictactingByOsmosis? has worked well. I'm being incredibly nitpicky about style ("May I type for a second? I've got a few tiny changes..." or "I'm just going to clean up a few things..."), but it's paying off. A developer refactored some code while I was away, and when I came back, the style was so close to the one I had been using that I was vaguely surprised to see it but figured I had done the refactoring and forgotten it.
I've also been using polite questions as a form of DictatingByOsmosis?. When I pair up with a developer who's working on some code I haven't seen before, I ask to see the test. ("So, what's the test for that look like?") I also emphasize the importance of the tests and accept them as final authority on whether a task's done or not. ("Do the tests run? They do? Great! What's next?") Again, though, I'm very fortunate in that both developers are flexible enough to try a new way of doing things. There's a little skepticism about some things, but almost no resistance. | 计算机 |
2014-23/2664/en_head.json.gz/18047 | The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 dvd for installation on a x86 platform back to top | 计算机 |
2014-23/2664/en_head.json.gz/18352 | Extended Events and the new Profiler in SQL Server 2012
Extended Events made its appearance in SQL Server 2008, but in SQL Server 2012, the number of lightweight events have been expanded. All of the events and fields available in SQL Profiler are now available in Extended Events as well as a graphic user interface, covering more use cases and enabling new debugging opportunities. After a quick review of how Extended Events work, I’ll cover the enhancements in detail. In addition to more events, Extended Events also exposed in the 2012 SQL Server PowerShell provider and SMO, and I’ll show working with Extended Events in PowerShell.
Bob Beauchemin
Slide Deck
Bob Beauchemin is a database-centric application practitioner and architect, instructor, course author, writer, and Developer Skills Partner for SQLskills. Over the past few years he's been writing and teaching his SQL Server 2005-2012 courses to students worldwide through the Ascend program, the Metro (SQL Server 2008 Jumpstart) program, and other database developer-centric classes. He is lead author of the books "A Developer's Guide to SQL Server 2005" and "A First Look at SQL Server 2005 For Developers", author of "Essential ADO.NET" and has written articles on SQL Server and other databases, database security, ADO.NET, and OLE DB for MSDN, SQL Server Magazine, and others. | 计算机 |
2014-23/2664/en_head.json.gz/19114 | This Week In Video Game Criticism: From Kotaku To Content Degradation About GameSetWatch
2011 Independent Games Festival Announces Student Showcase Winners
January 9, 2011 9:00 PM | Simon Carless
The Independent Games Festival has announced the eight Student Showcase winners for the thirteenth annual presentation of its prestigious awards, celebrating the brightest and most innovative creations to come out of universities and games programs from around the world in the past year. This year's showcase of top student talent include slapstick physical comedy adventure Octodad, from DePaul University's Team DGE2, University of Montreal student Richard E. Flanagan's boldly styled Myst-like adventure Fract, and Tiny and Big, an ambitious, comic-book styled 3D action platformer from Germany's School of Arts and Design Kassel.
In total, this year's Student Competition took in more than 280 game entries across all platforms -- PC, console and mobile -- from a wide diversity of the world's most prestigious universities and games programs, a 47% increase from entrants in the 2010 Festival, making the Student IGF one of the world's largest showcases of student talent.
All of the Student Showcase winners announced today will be playable on the Expo show floor at the historic 25th Game Developers Conference, to be held in San Francisco starting February 28th, 2011. Each team will receive a $500 prize for being selected into the Showcase, and will are finalists for an additional $2,500 prize for Best Student Game, revealed during the Independent Games Festival Awards on March 2nd.
In conjunction with this announcement, IGF organizers are also revealing that this year's Independent Games Festival Awards at GDC will be hosted by Anthony Carboni. Carboni is host and producer of Bytejacker, the acclaimed indie and downloadable game video show and website, and one of the most enthusiastic and devoted followers of the independent game scene.
The full list of Student Showcase winners for the 2011 Independent Games Festival, along with 'honorable mentions' to those top-quality games that didn't quite make it to finalist status, are as follows:
e7 (Gymnasium Koniz Lerbermatt)
Fract (University of Montreal)
GLiD (Bournemouth University)
Octodad (DePaul University)
PaperPlane (ENJMIN)
Solace (DigiPen Institute of Technology)
Tiny and Big (School of Arts and Design Kassel)
Toys (Future Games Academy)
Honorable mentions: About Love Hate and the other ones (School of Arts and Design Kassel); EXP (NHTV); Paul and Percy (IT University Copenhagen); Senseless (University of Advancing Technology); StarTwine (Carleton University); Ute (School of Arts and Design Kassel).
This year's Student IGF entries were distributed to an opt-in subset of the main competition judging body, consisting of more than 60 leading independent and mainstream developers, academics and journalists. Now in its ninth year as a part of the larger Indendent Games Festival, the Student Showcase highlights up-and-coming talent from worldwide university programs.
It has served as the venue which first premiered numerous now widely recognized names, including Cloud from the USC-based team that became Thatgamecompany (Flower), as well as DigiPen's Narbacular Drop and Tag: The Power of Paint, which would evolve first into Valve's acclaimed Portal, with the latter brought on-board for the upcoming Portal 2.
"Even as general awareness of independent developers and their creative output increases year over year, our Student Showcase continues to be the best place to be genuinely surprised and delighted by entirely unknown talent," said IGF Chairman Brandon Boyer. "This year's lineup continues that tradition, with eight distinctive and wholly unique games from teams and individuals that we'll surely be hearing much more about in the future."
For more information on the Independent Games Festival, please visit the official IGF website -- and for those interested in registering for GDC 2011, which includes the Independent Games Summit, the IGF Pavilion and the IGF Awards Ceremony, please visit the Game Developers Conference website.
Tags: Categories: Name | 计算机 |
2014-23/2664/en_head.json.gz/19566 | MINE ACTION GATEWAY
E-MINE
THE UN MINE ACTION GATEWAY
Fourteen UN department, agencies, programmes and funds play a role in mine-action programs in 30 countries and three territories. A policy developed jointly by these institutions (Mine Action and Effective Coordination: the United Nations Inter-Agency Policy) guides the division of labor within the United Nations. Much of the actual work, such as demining and mine-risk education, is carried out by nongovernmental organizations. But commercial contractors and, in some situations, militaries, also provide humanitarian mine-action services. In addition, a variety of intergovernmental, international and regional organizations, as well as international financial institutions, also support mine action by funding operations or providing services to individuals and communities affected by landmines and explosive remnants of war.
The Strategy presents the common objectives and commitments that will guide the UN in mine action over the next 6 years. DOWNLOAD THE PDF
The vision of the United Nations is a world free of the threat of landmines and explosive remnants of war, where individuals and communities live in a safe environment conducive to development and where the needs of victims are met. The inter-agency partners working towards the achievement of this vision are: UN Department of Peacekeeping Operations (DPKO) DPKO integrates mine action into worldwide UN peacekeeping operations in line with a November 2003 Presidential Statement of the Security Council. Mr. Hervé Ladsous, the Under-Secretary-General for Peacekeeping Operations chairs the Inter-Agency Coordination Group on Mine Action, which brings together representatives from all UN mine-action entities. UNMAS provides direct support and assistance to UN peacekeeping missions. Careers & Business Opportunities
United Nations Mine Action Service (UNMAS) UNMAS is located in the Department of Peacekeeping Operations Office of Rule of Law and Security Institutions and is the focal point for mine action in the UN system. It is responsible for ensuring an effective, proactive and coordinated UN response to landmines and explosive remnants of war. Careers & Business Opportunities
United Nations Office for Disarmament Affairs (UNODA) UNODA advises and assists the UN Secretary-General in his work related to the Anti-Personnel Mine-Ban Treaty and the Convention on Certain Conventional Weapons. ODA promotes universal participation in international legal frameworks related to landmines and explosive remnants of war and assists countries in complying with their treaty obligations Careers & Business Opportunities
United Nations Development Programme (UNDP) Through its country offices and its New York-based Mine Action Team of the Bureau for Crisis Prevention and Recovery, UNDP assists mine-affected countries to establish or strengthen national and local mine action programmes. Careers & Business Opportunities
United Nations Children's Fund (UNICEF) UNICEF was created to work with others to overcome the obstacles that violence, poverty, disease and discrimination place in a child's path. This includes children in mine-affected countries globally. UNICEF supports the development and implementation of mine risk education and survivor assistance projects and advocacy for an end to the use of landmines, cluster munitions and other indiscriminate weapons. Careers & Business Opportunities
United Nations Office for Project Services (UNOPS) UNOPS is a principal service provider in mine action, offering project management and logistics services for projects and programmes managed or funded by the United Nations, international financial institutions, regional and sub-regional development banks or host governments. Careers & Business Opportunities
United Nations Mine Action is also supported by: Food and Agricultural Organisation (FAO) The FAO has a mandate to provide humanitarian relief, which sometimes requires the organization to participate in mine action in complex emergencies, particularly in rural areas. Office for the Coordination of Humanitarian Affairs (OCHA) OCHA shares information with all other organizations about the humanitarian impact of landmines and works with UNMAS on resource mobilization. OCHA is manager of the UN Central Emergency Revolving Fund and coordinator of the "Consolidated Appeal Process," both of which provide or mobilize financial resources for mine action. United Nations Entity for Gender Equality and the Empowerment of Women (UN Women) UN Women, among other issues, works for the elimination of discrimination against women and girls; empowerment of women; and achievement of equality between women and men as partners and beneficiaries of development, human rights, humanitarian action and peace and security. UN High Commissioner for Human Rights (OHCHR) The OHCHR does not have any specific mandate in the field of mine action, but it does carry out several relevant projects. OHCHR, for example, seeks to protect the rights people with disabilities, including survivors of landmines or unexploded ordnance. Office of the United Nations High Commissioner for Refugees (UNHCR) UNHCR's involvement in mine action ranges from contracting and mine clearance services, to training, advocacy against the use of anti-personnel mines and victim assistance. World Food Programme (WFP) WFP is involved in the clearance of landmines and unexploded ordnance to facilitate delivery of food assistance in emergency situations. World Health Organisation (Injuries and Violence Prevention Department) (WHO) WHO is primarily responsible for the development of standards, the provision of technical assistance and the promotion of institutional capacity building in victim assistance. It works with the ministries of health of affected countries and cooperates closely with UNICEF and the International Committee of the Red Cross. World Bank (WB) The World Bank helps address the long-term consequences of landmines and unexploded ordnance on economic and social development. It also plays a significant role in mobilizing resources. (Hosted by E-Mine)
UN MINE ACTION
Copyright 2013 United Nations | 计算机 |
Lecture 15 Scribe Notes
Buffer Cache and Virtual Memory
Running Out of Space
How to Evict
Blocks vs. Pages
Improving Performance of fork()
Distributed Systems and Security (Defensive Programming)
Buffer cache is the primary cache for File System data. It is used to store things such as dallying and prefetch data. In order to do its job, the buffer cache needs:
File system data
Location of the data on the disk: Map cache memory to a disk location. (The cache is useless if it does not keep exact track of what data it is caching).
A mapping of file locations to pages: This requires the use of the fmap function.
The second and third attributes must be implemented for the buffer cache to work correctly. In addition, memory utilization decreases if multiple copies of the same page are stored within the buffer cache; this wastes space and is unnecessary, as only one copy is needed. Thankfully, fmap() ensures that each block has at most one location in the buffer cache.
The fmap structure is as follows:
fmap(file, offset)
Returns either the primary location or address of a page, or a 0 if such a page is not available.
File: This argument may be thought of as an inode.
Offset: This argument gives the position within the file, i.e., which page-sized portion of the file is being requested.
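As a rough illustration only (not the actual interface used in lecture), fmap() can be thought of as a lookup over a table that records which file and offset each cached page holds. The struct fields, the table size, and the linear search below are assumptions made for this sketch; a real implementation would more likely use a hash table keyed on (file, offset).

#include <stddef.h>
#include <sys/types.h>

struct inode;                 /* stands in for the file system's file object */

#define NCACHEPAGES 1024

struct cache_entry {
    struct inode *file;       /* which file this page belongs to (NULL = free) */
    off_t offset;             /* offset of the page within that file */
    void *data;               /* the cached page itself */
};

static struct cache_entry cache[NCACHEPAGES];

/* fmap: return the cached copy of (file, offset), or NULL (the "0" in the
 * notes) if that part of the file is not in the buffer cache. */
void *fmap(struct inode *file, off_t offset)
{
    for (size_t i = 0; i < NCACHEPAGES; i++)
        if (cache[i].file == file && cache[i].offset == offset)
            return cache[i].data;
    return NULL;
}

Because each cached page occupies exactly one slot, every (file, offset) pair has at most one location in the cache, which is the property described above.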
Another problem that may arise when using a buffer cache is a possible violation of the Safety Property. For example, let two processes, A and B, write to a file named a.txt. A writes to the file by executing the command echo "foo" >> a.txt, while B executes its own write command, echo "bar" >> a.txt. The following diagram depicts this scenario; the final contents of a.txt may be ambiguous depending on the contents of the cache:
If the write commands above are executed and sequential consistency is maintained, the only possible outputs in a.txt are "foobar" or "barfoo". However, if there are multiple copies of a.txt in the cache and each process happens to write to a different copy, a.txt may end up containing only "foo" or only "bar" instead of both, because the disk may be updated from only one of the cache copies of a.txt.
Virtual Memory
Process Virtual Memory and Physical Memory typically look like this:
Process Virtual Memory
The operating system provides each process with virtual memory so each process believes it has access to all the physical memory. The operating system also tries to make the buffer cache as large as possible without hurting performance.
The process code in virtual memory is typically fixed size and read-only. Both the process data and the read-only data are initialized by the disk. Since the data is read-only, it is wise to try and store it all in the buffer cache. This is accomplished with one simple step:
Map the process code (typically read-only, unless dynamic) and read-only data from the buffer cache, which is achieved with the following sequence.
This step yields the next illustration for the buffer cache.
It makes sense to map the code of each process in virtual memory to the same location in physical memory (the buffer cache), if possible, to make better use of memory. Since the sharing is read-only, processes A and B remain isolated: neither can alter the other's state. In order to maintain read-only sharing and process isolation, the processor must generate exceptions on writes to read-only pages (as it normally should).
Running Out of Space
When the buffer cache runs out of memory, an EVICTION (flushing a portion of the buffer cache) must take place. Two things need to happen when performing an eviction:
Write changed data to disk (Optional)
Mark the memory space as free/reusable
If the data has changed since the last read, it must be written back to disk. A structure (for example, a dirty bit per cached page) is necessary to track this; otherwise the cache data would have to be compared against the disk data on every eviction.
After an eviction, fmap() must be updated so that it no longer reports the evicted page as present in the cache; the same is true for the mapping of cache memory to disk locations. However, another important thing to consider is that a page should not be taken away while a process might still be using it without some way to get it back. If a process needs access to data that has been evicted, the processor must generate an exception. To address this concern, a page table is required.
Page Table
The page table is a processor-interpreted data structure mapping virtual addresses to physical addresses.
It is important to generate an exception if a process attempts to access a page not present in the cache. The page can then be loaded from disk into physical memory, and the process can resume at the location where it left off. Therefore, the processor now needs an additional rule when it comes to dealing with pages. The previous rule and the new rule are stated below:
The processor must generate an exception on a write to read-only pages.
The processor must generate an exception on access to a non-present page.
The page table can then be utilized with the following function:
pagetable(pid_t, address)
Returns the physical address of a page and its permissions (R, RW, kernel-only), or 0 if no such page exists.
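A very simplified, software-only view of that lookup is sketched below. Real hardware walks a multi-level page table; the flat per-process array, the flag names, and the 4 KiB page size here are all assumptions made for illustration.

#include <stdint.h>

#define PTE_P 0x1u            /* present */
#define PTE_W 0x2u            /* writable (otherwise read-only) */
#define PTE_U 0x4u            /* user-accessible (otherwise kernel only) */
#define NPROC 64
#define NPTE  1024

struct pte {
    uintptr_t frame;          /* physical address of the page frame */
    uint32_t  flags;          /* combination of PTE_P, PTE_W, PTE_U */
};

static struct pte pt[NPROC][NPTE];   /* one toy page table per process */

/* pagetable(pid, vaddr): physical address plus permissions, or 0 if the
 * virtual page is not mapped (so the access must generate an exception). */
uintptr_t pagetable(int pid, uintptr_t vaddr, uint32_t *perm)
{
    struct pte *e = &pt[pid][(vaddr >> 12) % NPTE];  /* 4 KiB pages */
    if (!(e->flags & PTE_P))
        return 0;
    *perm = e->flags;
    return e->frame | (vaddr & 0xFFFu);              /* keep offset in page */
}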
Bcpmap()
Recall that the buffer cache may be full at the same time that a process requires a page that is not currently present in the cache. Therefore, it may be necessary to remove a page from the buffer and also to find the address to store the new page into. This requires a buffer cache process map. The function bcpmap() works in the following manner:
bcpmap(pid_t, address)
Returns a zero if target is not found. Otherwise it provides information about the virtual page. The virtual page information consists of the file, the offset, a mapped variable (indicates if page is in buffer cache), the mapped address (mapped_addr), and whether it is a copy on write (is_it_copy_on_write, more on this later).
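Put as a data structure, the record bcpmap() hands back might look like the sketch below. The field names follow the description above; the exact layout and the NULL-for-zero convention are assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <sys/types.h>

struct inode;

struct vpage_info {
    struct inode *file;        /* backing file (think: inode) */
    off_t offset;              /* offset of this page within the file */
    bool mapped;               /* is the page currently in the buffer cache? */
    void *mapped_addr;         /* where in physical memory, if mapped */
    bool is_it_copy_on_write;  /* shared read-only until the first write */
};

/* Returns NULL (the "zero" above) if the process has no such virtual page,
 * otherwise a pointer to its record. */
struct vpage_info *bcpmap(pid_t pid, uintptr_t vaddr);

This is the same record that will be consulted later when deciding how to handle copy-on-write faults.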
With this addition, the steps necessary for an eviction now become:
Mark the memory space as free/reusable.
Modify fmap() to mark file data as unmapped.
Modify bcpmap() to mark file data as unmapped.
Modify pagetable() to mark virtual addresses as unmapped.
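Tying the list together, a sketch of a single eviction could look like the code below. The dirty/free fields and the four helper functions are hypothetical; each helper stands for the bookkeeping needed to keep one of the structures above consistent.

#include <sys/types.h>

struct inode;

struct cache_slot {
    struct inode *file;
    off_t offset;
    void *data;
    int dirty;                 /* changed since it was read from disk? */
    int free;                  /* is this slot reusable? */
};

/* Hypothetical helpers, one per bookkeeping structure. */
void write_block_to_disk(struct inode *file, off_t offset, const void *data);
void fmap_unmap(struct inode *file, off_t offset);
void bcpmap_unmap_all(struct inode *file, off_t offset);
void pagetable_unmap_all(struct inode *file, off_t offset);

void evict(struct cache_slot *s)
{
    if (s->dirty)                               /* optional write-back        */
        write_block_to_disk(s->file, s->offset, s->data);
    s->free = 1;                                /* 1. memory is reusable      */
    fmap_unmap(s->file, s->offset);             /* 2. fmap() now returns 0    */
    bcpmap_unmap_all(s->file, s->offset);       /* 3. per-process bcpmap()    */
    pagetable_unmap_all(s->file, s->offset);    /* 4. processor page tables   */
}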
How to Evict
When the cache is full and a program needs a page that is not present, it is necessary to evict, and a decision is required on what to evict. The problem of deciding what to evict becomes a scheduling problem. As with other scheduling problems, two policies can be implemented:
First In First Out (First page in, first page out)
Shortest First (Least Recently Used or page with fewest accesses)
First In First Out
Treating the disk pages as numbers, it is possible that some files make use of several pages:
From this point on, it is only necessary to worry about accesses to the cache. For now, consider a cache that only has room for three pages and uses the FIFO (first in, first out) replacement policy with the page reference sequence: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. The following result is obtained with this algorithm (the sequence is shown in the top row, the remaining rows show the cache contents):
The cache is initially empty. If a page is not currently in the cache, a disk access is required. Each circle represents a disk access where a page is loaded from disk because it was not in the cache. The circles, or disk loads, are called swaps. Notice that the cache is full after loading page 3, and that to load page 4 an eviction is required. Page 1 is evicted because it was the first page loaded into the cache. At some point no swaps (circles) occur because the required pages are already in the cache. However, page 3 is required after this. Looking back, page 1 was again the page that had been loaded first (it was reloaded at the 5th reference in the sequence) and is therefore the one that gets swapped out for page 3. The number of total swaps for this example is 9 loads.
Next consider what happens when 4 pages are used in the cache instead of 3.
Although one would intuitively think that adding more memory to the cache would increase performance, in this case the number of loads actually increases (from 9 with three frames to 10 with four). This phenomenon, where increasing the number of pages stored in cache increases the number of cache misses when using a FIFO scheduling algorithm, is known as Belady's Anomaly.
Least Recently Used (LRU)
Now load the pages using LRU, where one evicts pages that were accessed furthest in the past.
With 3 pages there are 10 loads, which makes the algorithm seem worse, but with 4 pages the result improves.
The total loads are now 8, which is much better than with 3 pages or when using FIFO. In addition, this algorithm does not suffer from Belady's anomaly. An optimal algorithm would require knowledge of future page requests: it would evict the page that will be accessed furthest in the future. This is not possible in general, but there are algorithms that can somewhat predict the future. For example, it was mentioned earlier that a file may use two or more pages, so a file may use pages 1 and 2 or 3 and 4. If page 1 is accessed, then it is most likely that page 2 will be required, since it is part of the same file. The same is true if the file used pages 3 and 4. The purpose of making better algorithms is to reduce the number of loads. A load requires a disk access, which brings additional costs such as inter-request delays. Eliminating loads therefore speeds things up, since a page that is already in memory can be used immediately.
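The counts above are easy to verify with a small user-space simulation. The program below is a sketch written for these notes (not part of any real kernel); it replays the reference sequence under FIFO and LRU and prints the number of loads for 3 and 4 frames.

  #include <stdio.h>

  /* Count page loads for a reference string using FIFO or LRU.
     'stamp' holds the load time for FIFO and the last-use time for LRU,
     so in both cases the victim is the entry with the smallest stamp. */
  static int count_loads(const int *refs, int n, int frames, int use_lru)
  {
      int cache[8], stamp[8];
      int used = 0, loads = 0;

      for (int t = 0; t < n; t++) {
          int hit = -1;
          for (int i = 0; i < used; i++)
              if (cache[i] == refs[t]) { hit = i; break; }

          if (hit >= 0) {                    /* already resident         */
              if (use_lru) stamp[hit] = t;   /* only LRU updates recency */
              continue;
          }

          loads++;                           /* a swap: load from disk   */
          if (used < frames) {               /* free frame available     */
              cache[used] = refs[t];
              stamp[used] = t;
              used++;
          } else {                           /* evict oldest stamp       */
              int victim = 0;
              for (int i = 1; i < used; i++)
                  if (stamp[i] < stamp[victim]) victim = i;
              cache[victim] = refs[t];
              stamp[victim] = t;
          }
      }
      return loads;
  }

  int main(void)
  {
      const int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
      const int n = (int)(sizeof refs / sizeof refs[0]);

      for (int frames = 3; frames <= 4; frames++)
          printf("%d frames: FIFO %d loads, LRU %d loads\n",
                 frames,
                 count_loads(refs, n, frames, 0),
                 count_loads(refs, n, frames, 1));
      return 0;
  }

Running it prints 9 and 10 FIFO loads for 3 and 4 frames (Belady's anomaly) and 10 and 8 LRU loads, matching the counts discussed above.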
The relationship between blocks and pages can be viewed using a simple analogy:
Block : File System :: Page : Physical Memory
A block avoids external fragmentation of disk data by implementing fixed-size allocations of disk data. Similarly, a page avoids external fragmentation of primary memory by implementing fixed-size allocations. A page is usually 4096 bytes, so it makes sense to have a block size of 4096 bytes as well.
When using fork(), some of the contents in the copied process may be shared (e.g. kernel, code) between the processes in physical memory, while other information (e.g. global data, heap, stack) may be copied to physical memory.
To simplify things, only consider the code, the stack, and the kernel.
In order to increase performance, we can initially have the child's stack point to the same location in physical memory as the parent's stack, and mark the stack as read-only. Only when a write is attempted on the stack is an exception thrown, which allows the OS to take over and create separate copies of the stack for the child and the parent. A large reason for copy-on-write fork's efficiency is that a fork is often followed shortly by an exec() call, which wipes out all of the process's memory anyway, so copying the stack during the initial fork would have been pointless.
However, when that write exception occurs, the OS needs to determine whether the faulting page is marked copy-on-write (in which case a private copy must be made) or whether the write is a genuine segmentation fault.
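A sketch of how the write-fault handler might distinguish the two cases is shown below. The helper names are assumptions standing in for the bookkeeping structures (such as bcpmap()) described earlier.

  #include <stdbool.h>
  #include <stdint.h>
  #include <sys/types.h>

  /* Hypothetical helpers. */
  bool page_is_copy_on_write(pid_t pid, uintptr_t vaddr);   /* e.g. via bcpmap() */
  uintptr_t current_frame(pid_t pid, uintptr_t vaddr);
  uintptr_t allocate_frame(void);
  void copy_frame(uintptr_t dst, uintptr_t src);
  void remap_writable(pid_t pid, uintptr_t vaddr, uintptr_t frame);
  void kill_process_segfault(pid_t pid);

  void handle_write_fault(pid_t pid, uintptr_t vaddr)
  {
      if (page_is_copy_on_write(pid, vaddr)) {
          /* Lazily make the private copy, only now that a write happened. */
          uintptr_t new_frame = allocate_frame();
          copy_frame(new_frame, current_frame(pid, vaddr));
          remap_writable(pid, vaddr, new_frame);
      } else {
          /* A write to a page that really is read-only: segmentation fault. */
          kill_process_segfault(pid);
      }
  }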
Regarding Fork Bombs
Fork bombs still pose a problem in this implementation. The reason is that bcpmap() entries, pids, and process descriptors are all examples of information that is not shared between forked processes, and fork bombs make use of this fact to try to fill the buffer cache. The WORKING SET is the set of pages that are currently being actively accessed and used. If the working set is much greater than the available physical memory, many evictions are required to fulfill all requests, and those evictions lead to large numbers of page loads from disk, each of which can take millions of cycles to complete. Eventually nearly every memory access turns into a disk access (which is what a fork bomb strives to achieve), and the system slows to a crawl. This situation, in which the machine spends its time swapping pages rather than doing useful work, is known as THRASHING.
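For reference, the classic fork bomb is only a few lines of C. It is shown here purely to illustrate the attack described above; do not run it on a machine you care about.

  #include <unistd.h>

  /* WARNING: do not run this. Every process loops forever creating
     children, and each child has its own unshared pid, descriptors,
     and page-table/bcpmap state, so process count and memory use grow
     until the working set dwarfs physical memory and the system thrashes. */
  int main(void)
  {
      for (;;)
          fork();
  }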
A distributed system is one in which a process talks to another process on another system, which means communicating over a network. Distributed systems are difficult to implement. As an example, suppose there are multiple computers communicating over a network. In order to abstract such a system, a Remote Procedure Call (RPC) is utilized. The RPC allows communication to take place between two systems by letting a program execute a procedure on another computer. For more information on RPC see http://www.cs.cf.ac.uk/Dave/C/node33.html.
Remote Procedure Call
The RPC looks like a function call. An RPC implementation contacts another computer first, and then the other computer executes the function code.
Some functions are not meant to be RPCs, but there are others that make sense to implement remotely, such as download_web_page(const char *url). An RPC consists of the following steps (a sketch of a client-side stub follows the list):
The arguments are marshalled into a sequence of bytes.
The bytes are transmitted to the remote computer.
The bytes are unmarshalled on the remote computer and the function is executed there.
The return values are marshalled and the response is sent back.
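Below is a sketch of what a client-side stub for download_web_page() might look like, with the four steps marked. The wire format (a 4-byte length followed by the URL) and the transport helpers are invented for illustration; real RPC systems generate this kind of stub from an interface description.

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical transport helpers. */
  int send_to_server(const uint8_t *buf, size_t len);
  int receive_from_server(uint8_t *buf, size_t maxlen);

  /* Client-side stub: to the caller it looks like an ordinary function. */
  char *download_web_page(const char *url)
  {
      /* 1. Marshal the argument into a sequence of bytes. */
      uint32_t len = (uint32_t)strlen(url);
      uint8_t *req = malloc(4 + len);
      if (req == NULL) return NULL;
      memcpy(req, &len, 4);
      memcpy(req + 4, url, len);

      /* 2. Transmit the bytes to the remote computer. */
      int err = send_to_server(req, 4 + len);
      free(req);
      if (err != 0) return NULL;

      /* 3. The remote computer unmarshals the request and runs the real
            function; 4. it marshals the result and sends the response.
            Note the call may block forever if the network fails. */
      static uint8_t resp[65536];
      int got = receive_from_server(resp, sizeof resp - 1);
      if (got < 0) return NULL;
      resp[got] = '\0';
      return strdup((char *)resp);
  }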
There is a weird aspect when it comes to networks and RPCs. For a regular function, it is expected that a function returns. However, in a network, many functions may never return due to unreliability, intrusion, or disconnection. The application and functions must be built to handle these possibilities. Also, the openness of the transmission protocol has many implications for security. Data must be transmitted securely when necessary since any computer can eavesdrop or misrepresent itself as the target. Worse still, a hostile computer could pretend to be someone else and transmit data that you will receive and act on. Step 3 above is where many vulnerabilities lie - incoming data must be checked carefully for authenticity and safety unless you want a compromised system.
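As a small illustration of that last point: a server that blindly trusts the length field it unmarshals in step 3 can be made to read or copy out of bounds, so the checks have to happen before the data is used. The buffer sizes here are assumptions.

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define MAX_URL 2048

  /* Returns 0 on success, -1 if the incoming request is malformed. */
  int unmarshal_request(const uint8_t *buf, size_t buflen,
                        char *url_out /* MAX_URL bytes */)
  {
      uint32_t claimed;
      if (buflen < 4) return -1;            /* too short to hold a header */
      memcpy(&claimed, buf, 4);
      if (claimed > buflen - 4) return -1;  /* length field is lying      */
      if (claimed >= MAX_URL) return -1;    /* would overflow url_out     */
      memcpy(url_out, buf + 4, claimed);
      url_out[claimed] = '\0';
      return 0;
  }

Authenticity (is the sender who it claims to be?) needs more than this, typically authentication and encryption at the transport layer, but even basic sanity checks like these prevent a whole class of compromises.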
Linux Documentation Project Reviewer HOWTO
A. GNU Free Documentation License
A.5. 4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has less than five).
C. State on the Title Page the name of the publisher of the Modified Version, as the publisher.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
I. Preserve the section entitled "History", and its title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. In any section entitled "Acknowledgements" or "Dedications", preserve the section's title, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section entitled "Endorsements". Such a section may not be included in the Modified Version.
N. Do not retitle any existing section as "Endorsements" or to conflict in title with any Invariant Section.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
Is crowdsourcing less ethical than free-to-play?
By James Brightman
Thu 20 Mar 2014 6:24am GMT / 2:24am EDT / 11:24pm PDT
Business, Free-to-Play, GDC 2014
Ben Cousins looks at the "ethical scorecard" for a variety of business models; F2P is under fire from the establishment, he says
Free-to-play (F2P) for many is still a dirty word. The biggest complaint often voiced is that F2P simply isn't ethical, that it's loaded with bait-and-switch ploys, and that it's targeting children as "whales" more than any other group. If you ask Ben Cousins, GM of Scattered Entertainment, none of this is actually true. In a GDC talk titled "Is Your Business Model Evil? The Moral Maze of the New Games Business," he looked at an analytical model for determining how ethical a business model is or isn't. His "ethical scorecard" yielded surprising results.

The key ethical issues listed on the scorecard include:

- Are purchase decisions aimed at under-18s?
- Is it easy for an under-18 to spend without parental approval?
- Can the consumer play the game before spending?
- Are independent reviews available before spending?
- Are there time-limited offers? Random chance? Emotional appeals?
- Is the minimum purchase size under $20?
- Can the consumer spend more than $240 on one game?
- Can customers get refunds easily?

By assigning a +1/-1 point value to a yes/no response on each of these questions, a simple score can be determined; the higher the number, the better a model looks in ethical terms. In the end, Cousins' scorecard showed F2P games (whether for kids or adults) scoring towards the middle to even positive territory, whereas crowdsourced games, subscription and arcade were found to be less ethical overall.

Interestingly, while crowdsourcing is vastly different from F2P, there are some parallels. Around 50 percent of total revenues on most crowdsourced games comes from just 10 percent of users; that means that crowdsourcing actually follows a similar "minnows and whales" pattern to F2P, and yet Cousins noted, "I've never seen people accusing Kickstarter as manipulative or coercive."

Why is that? Cousins believes it's a result of how the "establishment" in the games industry perceives games and business models. The models that are treated as benign ended up scoring badly, and yet F2P, which scored fairly well by comparison, is often under fire. Cousins said that developers are "dealing with the shock of the new" when it comes to business models like F2P. They fear the worst, but over time, as more research comes out and as they see that bad things aren't really happening, that fear dissipates. Even the telephone, as a new technology, was considered to be dangerous at first. In fact, pinball machines were illegal in New York City up until 1976 because Mayor LaGuardia alleged that these pinball games were robbing school children of money.

Cousins also said that F2P is suffering from the "outsider effect," meaning that F2P developers are often outsiders compared to the establishment, and if you don't know the people it's easy to assume the worst. Another factor is that the establishment wants to protect their definition of a "real game." Cousins said that this is often defined as "how I experienced games in my childhood" or in a "golden age." The establishment doesn't believe that F2P games respect the history of gaming. The truth, however, is that F2P is the world's biggest business model by participation, and soon it will be by sales as well, Cousins believes. This rapid growth can be threatening to an establishment worried that its power will be diminished. Another argument is that games are art, and that art shouldn't be directly mixed with business or marketing.
F2P games essentially marry commerce and design in a very upfront manner, and some in the establishment consider that to be sacrilegious. For other models, the aggressive marketing takes place outside the game, not in it. But it's for that reason that Cousins thinks game reviewers need to pay special attention to monetization in F2P games. He said that reviewers should specifically be looking to review the monetization, and not just the game itself.

Ultimately, the stakes for F2P are massive now, and competition is extremely fierce as a result. Cousins said he's seen some F2P competitors attacking each other in order to gain an advantage. "I think that's unnecessarily bringing up issues that don't exist," he said. If anything, it's time to "stop infighting, and come together as an industry."

Cousins believes that as F2P continues to grow (he thinks F2P revenues will surpass traditional by 2017) and the overall industry expands, there will be more and more scrutiny from the outside, and the industry will need to work together to defend itself.
When Drop-In Replacements Aren't
The nms scripts are advertised as being "drop-in replacements" for Matt's scripts. This is largely true, but there are a couple of caveats.
1. They are drop-in replacements for Matt's version 1.9. Anything earlier than that is just beyond help.
2. We have an $emulate_matts_code flag. If it's unset then the scripts are more secure, but the "drop-in replaceability" drops.

It's the first of these that is causing us some problems recently. The nms project seems to be becoming pretty well-known around the web. A lot of ISPs have seen Matt's formmail being used as a spam relay and have changed to our version.

But in many cases the version of Matt's script that they were using was 1.6. This version is infamous for having absolutely no protection against being used as a spam relay. You just set the script up on your server and anyone could use it. So that's what a lot of ISPs seem to have done. They've set up the script on one central server and told all of their clients to configure their HTML forms to use that script.

Now they've installed the nms version of formmail. They haven't read much of the documentation (because, hey, it's a drop-in replacement!) so they don't know that you now have to give it a list of domains that are allowed to use the script. This means that none of their clients' domains are permitted to access the script, and anyone who tries to use a form on a client's site gets an error message. And the default error message has a link to nms in it - so the client's client thinks it's all our fault and we get another email to the support mailing list.

There should be some sort of intelligence test for running an ISP.
An Interview with Walter Wright
My first encounter with Walter was three years ago at the not Still Art Festival which Carol Goss had organized. At the time, I had no information about him except that he had been one of the earliest artists in residence at the Experimental Television Center, right after Nam June Paik had finished his residency there creating the Paik/Abe Video Synthesizer with engineer Shuya Abe, with a grant they'd received from the New York State Council on the Arts. The following year (1997) at the NSA Festival, we spoke more at length. It was right around that time that I had begun thinking about doing this project, and had just seen the flyers from ETC about the Upstate Video History Project. Benton Bainbridge of The Poool and NNeng was there in the afternoon setting up with the other members of NNeng and had arranged to interview Walter and Carol. I sat in and listened to them spin tales of the early days of video. It was that session that inspired me to pursue the project. Walter's background in architecture led him into computer graphics. Around 1966 he became interested in computer graphics and electronic music. He worked at the University of Waterloo developing some of the early 3D software, creating some of the first CAD (computer aided design - architecture) programs. Simultaneously, they had acquired a Moog synthesizer and were working with visuals and sound with equal interest. Some of the early inspiration for Walter came from the work being done by the Experiments in Art and Technology group, and particularly the work being done at Bell Laboratories. There Charles Dodge was involved in analyzing music digitally, taking the magnetic field data of sound signals and feeding them into a computer and turning out data that could be analyzed in a variety of ways. At the time the Waterloo group also did visualizations by photographing the screen and taking a series of photo prints and animating them on movie film, creating some of the earliest computer animation. He later moved to New York and worked for several computer graphics companies there, including Computer Image Corporation, which used the Scanimate, an analog-based computer (meaning wires and switches instead of punchcards or disks with software) capable of generating video. A lot of the early TV logos were created using the Scanimate.
Around this time, Walter got involved with The Kitchen, a relatively new art and performance space founded by Steina and Woody Vasulka, two of the early video art pioneers. At the time he lived only a few blocks from the Kitchen's first location, which was in the Mercer Arts Center (Hotel) in what used to be the kitchen (hence the name). He had read Ira Schneider and Beryl Korot's Radical Television and was aware of the movement in video, and got himself one of the new SONY portapak systems. Soon Walter became the Associate Director and was responsible for organizing the Open Screening, a weekly forum for people to show new works.
"We would set up the matrix (of monitors) in different configurations. Sometimes we would run multiple channels of different material, or people would set up shows with different channels of material. Other people set up shows where there was video with live electronic music. Some people set up shows with a live camera using the monitor matrix. (Nam June) Paik did some did piano performances with it."
The Kitchen forum was a great place for early experimentation in video exhibition. Since video was a relatively new medium at the time, there was no established tradition of venues or exhibition formats, so anything was fair game. This led to a lot of performance, installation, and single- and multi-channel work which have since become the established modes of video presentation. Due to the funding provided by NYSCA at the time, there was money for equipment which allowed the artists to do those early experiments and truly test the possibilities.
Part of Walter's residency involved showing the system to a wide variety of groups, including schools, colleges, public access television centers, arts centers such as Visual Studies Workshop in Rochester, and museums. The people had never seen the Paik/Abe or even a video camera for that matter, so part of the show involved doing a little "performance," or demonstration to show what it could do.
"In order to show it off, it seemed obvious that one should show how it worked. So part of the thing became doing a performance, so I used to cart around a lot of cameras, a prerecorded sourdtrack and do a performance." After the performance, he would conduct a workshop to show people how the system worked, often showing tapes of work made using the equipment. This kind of demonstration in many ways served as a kind of performance that borders on being educational as much as informational, a trend that seems to have found its way into some of the performances now being done on the internet, the newest medium to embrace performance.
At he Making Connections conference, David Ross, the Director at the San Francisco Museum of Modern Art, gave a presentation called "The Success of the Failure of Video." In that presentation he alluded to a certain animosity between three different camps of the video art community, the image processors, the documentarians and the social activists. Walter didn't see the same kind of schism at the Center. "We did it all. People were trying to see what they could do with the medium" he says. It was all considered one big area of experimentation, whether the final output was going to be a performance or installation, documentary or single channel processed work.
Some people built sophisticated hardware interfaces for the machines. There was a genuine interest in blurring the distinctions between such things as performance and installation. "What was it? Performance or installation" Walter inquires. The demand for demonstrations and performances was not to last though. As funding from NYSCA dried up and video equipment became more available, the call for performance in video dropped off. As Walter recalls, you could do it maybe once a year, and even then. But there just wasn't much call for it.
"From the museum end, the thing they saw as being most amenable to the way they operated was the sculpturul installation, because the artists provided them with a piece. It was usually on a lasredisk at that point. It fit inside a frame or object. Or there was some plan for reproducing it in the museum. Whereas documentary video - who knows? Does it go into the film department? Does it go into community programs? It landed up in various places."
But not in the museums. They were focusing more on installation which was the most suitable medium for museum-based presentation. Single channel video and performance had to look elsewhere for support. Single channel work has found limited support through television, notable PBS and public access programming, but has never been able to achieve the level of success of museum based shows. "As far as performance goes, that was probably, thinking about it from their perspective, possibly the most risky thing they could get into. Because it takes so long to set up, and the equipment hardly ever worked. Who knows what you were going to get as an audience. You'd certainly never be able to do it twice, because it was so put together and it wasn't taped. And I think also the people doing it were 'of the moment' people."
Performance was truly a thing of the moment, a theatrical concept. The theater of the 60's had really embraced a more open approach which brought in elements of improvisation and other ideas to challenge the old status and traditions of the theater, and video grabbed these principles up right away. Much of the early video work was being created by people with an anti-institutional attitude, and as such were against the institutionalism the museum presented. They were looking at video as a utopian instrument which could really empower people to improve their lives and the world around them. There was a real social aspect to the performances of that time, it wasn't considered purely entertainment.
"I think people who are doing it and the people who are in those groups, a lot of them had a social mission, their reason for being there were to change the social structure. Others just bought into that. I wasn't particularly an activist, but, hey, I bought into it. It seemed like a cool idea to me. Cooler than bombing Vietnam."
The screenings at The Kitchen certainly followed along these principles with the open screenings. It was not a place concerned with the preciousness of the image in the same was museums and other parts of the art world were.
Jumping ahead in time, I asked Walter about his observations of the current state of video art and performance. He remarks that there are some similar trends between what was done then and what current artists are interested in. He sees now to some extent "a reinvention of the 60's" in terms of techniques. Analog synthesizers are very much in vogue now, after having been threatened with being pushed aside by the newer digital synths. Musicians these days find both kinds equally attractive for performance and recording. Walter recognizes the same kinds of filter sweeps and arpeggiation from early electronic music experiments in the diverse genres of techno music today. The raves that are being produced in ever greater numbers today are combining the same experiments in visuals and sound that were done thirty years ago. Some things just don't change. The interesting thing, and the reason he says it's being reinvented, is that none of the newer artists (or very few) are aware of the history of video and all the early experiments. As you'll read in the interview with Benton Bainbridge, he didn't become aware of this history until only recently, and it was events such as the Syracuse conference that finally uncovered some of these things. New video performers are discovering for themselves these same techniques without knowledge of what was done before. There must be something universal about what they found.
Ask OSNews: Hackintosh Legality
posted by David Adams on Wed 30th Nov 2011 20:23 UTC

A reader asks: "Can someone comment on the legality of using my brother's old Snow Leopard DVD to install OS X? My brother has Lion, so why can't he choose to give it to me? It doesn't violate Apple's 1 license per 1 computer policy."

Well, first of all, IANAL. This is actually a rather murky legal issue, so if you're really worried about your legal exposure, consult a bona fide legal professional in your local country. Also, I'm going to address this as it relates to US law, because if I try to make it generic to all world laws, well, that would be hard.
That being said, you are really facing two issues. The first is copyright infringement, which is a criminal offense that can carry serious penalties, though the enforcement is almost impossible in the case of an individual. Luckily, installing a legally purchased copy of Mac OS X on your Hackintosh does not require that you violate copyright, with one wrinkle: the DMCA. The Digital Millennium Copyright Act makes the process of breaking even rudimentary encryption to "copy" a copyrighted work illegal in itself. Apple claimed against Psystar that it violated the DMCA when it "illegally circumvented Apple's technological copyright-protection measures." But it's not clear that what an individual needs to do to install Mac OS on a non-Apple computer necessarily violates the DMCA. See this OSNews article from a couple of years ago on the topic. So as far as criminal offenses go, it's probably possible to make a Hackintosh, even in the US, without violating copyright, though if you're going to be a stickler, it's probably going to be a more time-consuming process.
The legal aspect that's simultaneously more clear and more cloudy is the civil aspect: the license agreement. To use Mac OS X or any software, including open source, you implicitly agree to a contract with that software's author. That's the End User License Agreement, or EULA. Apple's EULA says, in essence, that you're only allowed to install it on an Apple-brand computer. So it's clear that if you make a Hackintosh, you're violating the EULA and could be subject to civil legal action if Apple decides to pursue it. So it's simple, right?
Not so simple. You can make a contract, and you can even get someone to agree with the terms, but that doesn't necessarily make it a valid contract. There are all sorts of factors that might make a contract invalid, particularly when one of the parties only "agrees" in the loosest sense, such as with so-called "shrink wrap" contracts like EULAs. Even a contract signed and notarized and sealed with wax isn't necessarily valid. There are many factors that can cause a contract to be void from the get-go. (There are a lot of these factors, and each one of them is a potential rabbit hole of common law precedents and vague analyses, with several that could possibly apply to a shrink wrap EULA if it were to be vigorously challenged in the courts. There's a reason that civil court cases often go on for months or even years.)
But to get back to your original question, it's not "illegal" to violate a contract. When you decide to cancel your cable TV or mobile phone service early, you're breaking your contract. When you stay parked in a parking space longer than the sign allows you, you're violating a contract. What happens is that you may be on the hook to pay the penalties that are specified, or will at least be obliged to argue with the other party before a judge. The consequences of violating a contract are usually limited to a monetary penalty.
So, in short, it's probably possible to do what you propose without breaking any laws, but you will be running afoul of Apple's license, so you'll have to be comfortable with Apple's stern disapproval. You'll also be running the risk that they could come after you in court for violating a possibly-invalid contract. So I wouldn't go taunting Apple's legal team on YouTube while you play with your Hackintosh.
Also see this Low End Mac article for a similar take on your question.
Original URL: http://www.theregister.co.uk/2008/12/09/iwf_wikipedia_ban/
Why the IWF was right to ban a Wikipedia page
Wikimedia's hypocrisy
Posted in Law, 9th December 2008 11:22 GMT
Guest opinion There has been a storm of controversy over a decision by the Internet Watch Foundation (IWF) to blacklist a page of Wikipedia. But the criticism of Britain's online watchdog is unfair and hypocritical.

Last Thursday, the IWF received a complaint from a member of the public about an image that appeared on a Wikipedia entry for German rock band The Scorpions. The image was the original sleeve design for the band's 1976 album Virgin Killer and featured a young naked girl. The sleeve was banned in many countries when the album was released.

The IWF assessed the image, agreed that it may be illegal, and added the page on which it featured to a blacklist of URLs. That blacklist is updated twice each day and is used by many ISPs in the UK to block their customers' access to illegal images. They are not legally required to follow the blacklist – but many choose to do so.

As of Saturday, customers of affected ISPs could no longer access the page featuring the image; but nor could they edit any page of Wikipedia. The site is written by a network of 75,000 editors, many of them from the UK.

The Wikimedia Foundation, the non-profit operator of Wikipedia, appears to blame the IWF for this. It has issued a press release entitled Censorship in the United Kingdom disenfranchises tens of thousands of Wikipedia editors.

It has also published an FAQ that says this is the first time Wikipedia has been censored in the UK, and it notes that it has also been censored at various times in China, Syria and Iran. Wikimedia does not appear to like the IWF: "It is not a government agency nor does it act with the authority of the police, and its accountabilities and responsibilities are unclear," it said.

"We are frankly baffled as to why the IWF would choose to target Wikipedia – an encyclopedia, run by a charitable organization, which has been repeatedly gauged as equivalent in quality to conventional encyclopedias – for censorship," it said.

The blocking of the image is a very different issue from the blocking of editors, though. The former is within the control of the IWF, the latter is not. The blocking of editors is a consequence of the technical means used by ISPs to block pages and the approach that Wikipedia takes to regulating its army of editors.

All traffic from affected ISPs now looks to Wikipedia like it comes from the same IP address. That causes a problem for Wikipedia. It doesn't mind who looks at its pages – but it wants to control who can change them. It has its own blacklist, a list of people from certain IP addresses who are forbidden from changing Wikipedia's pages. Wikimedia does this because it does not like what they write. So its criticism of the IWF is hypocritical.

Wikimedia has attacked the IWF for censorship but the focus of its complaint – the impact on its own editors – is a direct result of Wikimedia's own censorship policy.

Wikimedia's policy is a sensible one. Without it, the quality of Wikipedia wil