Posts Tagged with "revenue"
State of Mozilla and 2009 Financial Statements
November 18th, 2010

Mozilla has just filed its audited financial statements for 2009. This is the perfect time to look at the state of the Mozilla mission, our successes, our opportunities and our challenges. This year we're trying a different format to better reflect the scope of Mozilla and to make better use of video and visual information. We're hosting this year's State of Mozilla and Financial Statements at our main website rather than at this blog. Please take a look!
Categories: Mozilla | Tags: Corporation, history, revenue | 9 comments
November 19th, 2009

Today we are posting our audited financial statements and tax form for 2008. We have also posted our FAQ. As in past years, I'll use this event as an opportunity to review both our financial status and our overall effectiveness in moving the mission forward.
The financial highlights are:
Mozilla remains strong financially despite the financial crisis of 2008. Our investment portfolio was somewhat reduced, but overall revenues remained steady and more than adequate to meet our needs. We continue to manage our expenses very carefully.
Mozilla remains well positioned, both financially and organizationally, to advance our mission of building openness, interoperability and participation into the Internet.
Our revenue and expenses are consistent with 2007, showing steady growth. Mozilla’s consolidated reported revenues (Mozilla Foundation and all subsidiaries) for 2008 were $78.6 million, up approximately 5% from 2007 reported revenues of $75.1 million. The majority of this revenue is generated from the search functionality in Mozilla Firefox from organizations such as Google, Yahoo, Amazon, eBay, and others.
2008 revenues include a reported loss of $7.8 million in investments in the Foundation’s long-term portfolio (approximately 25%) as a result of economic conditions and investment values at the end of 2008. Excluding investment gains and losses, revenues from operational activity were $86.4 million compared to $73.3 million in 2007, an annual increase of 18%.
Mozilla consolidated expenses for the Mozilla Foundation and all subsidiaries for 2008 were $49.4 million, up approximately 48% from 2007 expenses of $33.3 million. Expenditures remain highly focused in two key areas: people and infrastructure. By the end of 2008, Mozilla was funding approximately 200 people working full or part-time on Mozilla around the world. Expenditures on people accounted for roughly 58% of our total expenses in 2008. The largest concentrations of people funded by Mozilla are in the U.S., Canada, and Europe, with smaller groups in China and New Zealand and individuals in many parts of the world.

Total assets as of December 31, 2008 were $116 million, up from $99 million at the end of 2007, an increase of 17% to our asset base. Unrestricted assets at the end of 2008 were $94 million compared with $82 million in 2007, a 15% increase.

The restricted assets remain the same as last year: a "tax reserve fund" established in 2005 for a portion of the revenue the Foundation received that year from the search engine providers, primarily Google. As noted last year, the IRS has opened an audit of the Mozilla Foundation. The IRS continues to examine our records for the years 2004-2007. We do not yet have a good feel for how long this will take or the overall scope of what will be involved.
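The percentage changes quoted above follow directly from the reported dollar amounts. The short Python sketch below (added here purely as an illustration, with the figures transcribed from this post) recomputes them; the small differences from the stated values are just rounding.

    # Back-of-the-envelope check of the year-over-year changes reported above
    # (figures in millions of USD, transcribed from the text).
    figures = {
        "reported revenue":    (75.1, 78.6),   # stated: up ~5%
        "operational revenue": (73.3, 86.4),   # excludes investment losses; ~18%
        "expenses":            (33.3, 49.4),   # stated: up ~48%
        "total assets":        (99.0, 116.0),  # stated: up 17%
        "unrestricted assets": (82.0, 94.0),   # stated: a 15% increase
    }

    for name, (y2007, y2008) in figures.items():
        change = (y2008 - y2007) / y2007 * 100
        print(f"{name}: {y2007} -> {y2008} ({change:+.1f}%)")

    # Reported revenue reconciles with operational revenue minus the
    # $7.8 million investment loss: 86.4 - 7.8 = 78.6.
    assert round(86.4 - 7.8, 1) == 78.6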
Total grants, donations, and contributions in 2008 were approximately $1 million, matching the approximately $1 million total of 2007. Mozilla supported projects such as Mozdev, the Software Freedom Conservancy, and accessibility support for the jQuery library, HTML 5 video, and Firebug.
We believe that Mozilla’s financial setting will continue with relative stability. We continue to use our assets to execute on the mission. Moving the Mission Forward
2008 was another exciting and robust year for Mozilla. Our scope of activities continued to grow, our community of committed contributors and users expanded, our geographical diversity deepened, and our effect on increasing openness, participation, innovation and individual empowerment in Internet life is significant. Here are some examples.

In February we launched Mozilla Messaging to develop Mozilla Thunderbird as well as new possibilities in the broader messaging arena. 2008 was primarily a start-up year for Mozilla Messaging. In 2009 we're starting to see the Mozilla Messaging team deliver on the promise. The final version of Thunderbird 3 — a vastly improved product — is due to be released shortly. In addition, the initial developer version of Raindrop — a prototype for a new way of integrating different kinds of messages — has been released.
In 2008 we developed a set of two-year goals (the “2010 goals”), setting out major areas we’d like to see the Mozilla project address in 2009 and 2010. The 2010 goals build upon the Mozilla Manifesto, which articulates the values underlying the Mozilla project and our products. Two of these are familiar — openness in general and continued vitality of Firefox. Two are newer: the mobile web and helping people manage the explosion of data around us. These reflect our desire to see the values of the Mozilla Manifesto infused into these areas of Internet life. We began an on-going process of strengthening some of the Mozilla project’s basic assets. We began broadening our “module ownership” system beyond code to include governance activities. We began a long-overdue update of the mozilla.org website. In September Mark Surman joined as the new Executive Director of the Mozilla Foundation. These activities continued in 2009, along with new Education and Drumbeat programs.
We expanded the scope of our innovation efforts under the “Mozilla Labs” banner. We launched a range of projects including our first Design Challenge, Test Pilot (user testing program), Ubiquity (natural language interface to browser interaction), and a Developer Tools program. We also expanded existing projects like Weave, Personas and Prism. This focus on innovation continues during 2009.
The activities of Mozilla’s support, localization, campus representative and design communities expanded significantly through 2008 and 2009, reaching more people in more ways.
Mozilla continues to grow ever more global. In June 2008 Firefox 3.0 launched simultaneously in 46 languages. A year later, Firefox 3.5 featured 70 languages. In 2008 Firefox became the majority browser in specific countries. This started with Indonesia, which passed 50% in July 2008, and grew to include Slovenia and Macedonia by the end of 2008. Since then, Slovakia, the Philippines, Poland, Hungary, Latvia, Bosnia Herzegovina, and Ghana have joined this group. Our local communities also work with other Mozilla products and activities such as Thunderbird, Seamonkey and Service Week (in 2009). We intend to continue to invest significantly in global participation.

Product and Competition
The number of people using Mozilla products increased dramatically throughout 2008 and 2009. This user base makes Mozilla relevant to the Internet industry, helping us move the Internet to a more open and participatory environment. It also helps us build public benefit, civic and social value as components of the Internet’s future.
The number of people using Firefox on a daily basis increased from 28 million in 2006 to 49 million in 2007. In 2008 we moved up to 75 million daily users. As of November 2009 the daily number has grown to 110 million, bringing the total number of users to approximately 330 million people.
Our market share rose to approximately 21.69% in December of 2008. This breaks out into U.S. market share of approximately 20.2%, and more than 32% in Europe. Our statistics for Asia are similar, with our own estimates around 20%. Our South American market share rose to 27% by the end of 2008. These numbers have all continued to rise in 2009 as well. In February, 2008 we crossed the half-billion download mark; in July, 2009 we exceeded 1 billion downloads. As of November, 2009 Firefox’s market share worldwide reached 25%.
In June 2008 we released Firefox 3.0, bringing dramatic improvements to the online browsing experience. These improvements included features to help users quickly navigate to favorite websites, manage their downloads more easily, and keep themselves safe from malware attacks. Firefox 3 was downloaded over 8 million times in the first 24 hours, earning Mozilla a Guinness World Record. In June 2009 we released Firefox 3.5, with additional performance and feature improvements. In November 2009 we celebrated the fifth anniversary of Firefox.

Work on Firefox for mobile devices began in earnest in 2008 with the first development milestones released. We expect to release the first product versions late in 2009. The mobile market has many challenges for us, in particular the fragmentation of the development platform (a plethora of operating systems, handsets and carriers) and a market where touching a consumer directly is more difficult. However, the market is beginning to change and a great, open browser will both help that process and benefit from it. We have much more to do, but have laid a good foundation for long-term contribution to the mobile Web.
SeaMonkey remains a vital project with millions of users. Bugzilla continues as a backbone tool for numerous organizations. A revitalized Thunderbird 3 should ship in 2009.

Looking Forward
The past few years have seen an explosion of innovation and competition in web browsers, demonstrating their critical importance to the Internet experience and marking the success of our mission. In 2008 not only did Microsoft and Apple continue developing their web browsing products, but Google announced and released a web browser of its own. Competition, while uncomfortable, has benefited Mozilla, pushing us to work harder. Mozilla and Firefox continue to prosper, and to reflect our core values. We expect these competitive trends to continue, benefiting the entire Web.
The Internet remains an immense engine of social, civic and economic value. The potential is enormous. There is still an enormous amount to be done to build openness, participation and individual opportunity into the developing structure of the Internet. Hundreds of millions of people today trust Mozilla to do this. This is an accomplishment many thought was impossible. We should be proud. We should also be energized to do more and to try new things. It's a big challenge. It's important. We've made this opportunity real. Let's go surprise people once again by showing how much better we can make the Internet experience.
Categories: Mozilla | Tags: reports, revenue | 20 comments
Eyeballs with Wallets
July 21st, 2009 Here’s the business approach of the then-current CEO of a very well-known Internet company. (He’s gone now.)
get eyeballs looking at my site
find or create content that keeps them at my site as long as possible
monetize them as much as possible while they’re there
There are times when each of us will be happy to be a pair of eyeballs with a wallet attached, to be a "monetizable unit." In our physical lives this is a little like going to the mall. I may "window-shop" and I may enjoy a comfortable environment. But there's no doubt in anyone's mind that the point of the mall is for people to purchase things.
There are times, however, when being a wallet attached to eyeballs is not enough. The possibilities available to us online should be broader, just as they are in the physical world. Sometimes we choose to skip the mall and go to the library, or the town square or the park or the museum or the playground or the school. Sometimes we choose activities that are not about consumption, but are about learning and creation and improving the environment around us.
We have public spaces for these activities in our physical lives. We have public assets, and the idea of building some part of our infrastructure for public benefit as a necessary complement to private economic activity.
Mozilla strives to bring this public aspect, this sense of complete human beings, the goal of enriching the full range of human activities, to the Internet. We envision a world where the Internet is built to support these varied aspects of the human experience; a world where robust economic activity lives alongside vibrant social, civic and individual enrichment.
We’re building this world so that we can all live in it.
Categories: Mozilla | Tags: business, people, revenue | 9 comments
Sustainability in Uncertain Times
November 19th, 2008

Today we are posting our audited financial statements and tax form for 2007. We have also posted an FAQ. As in past years, I'll use this event as an opportunity to review both our financial status and our overall effectiveness in moving the mission forward.
2007 was another healthy year for Mozilla both financially and organizationally.
Mozilla is well positioned to remain vital and effective during the current difficult economic times.
Our revenue remains strong; our expenses focused. Mozilla’s revenues (including both Mozilla Foundation and Mozilla Corporation) for 2007 were $75 million, up approximately 12% from 2006 revenue of $67 million. As in 2006 the vast majority of this revenue is associated with the search functionality in Mozilla Firefox, and the majority of that is from Google. The Firefox userbase and search revenue have both increased from 2006. Search revenue increased at a lesser rate than Firefox usage growth as the rate of payment declines with volume. Other revenue and support sources were product revenues from online affiliate programs and the Mozilla Store, public support, and interest and other income on our invested assets.
The agreement between Google and the Mozilla Corporation that accounts for the bulk of the revenue has been renewed for an additional three years, and now expires at the end of November of 2011.
Mozilla expenses (including both the Mozilla Foundation and Corporation) for 2007 were $33 million, up approximately 68% from 2006 expenses of $20 million. Expenditures remain highly focused in two key areas: people and infrastructure. By the end of 2007, Mozilla was funding approximately 150 people working full or part-time on Mozilla around the world. Expenditures on people accounted for roughly 80% of our total expenses in 2007. The largest concentrations of people funded by Mozilla are in the U.S., Canada, and Europe, with smaller groups in China, Japan, New Zealand, and South America.
Our assets as of December 31, 2007 were $99 million, up from $74 million at the end of 2006, an annual increase of 34% to our asset base. Unrestricted net assets (net of liabilities) at the end of 2007 were $82 million compared with $58 million in 2006, a 42% increase over the prior year. In 2005 the Mozilla Foundation established a “tax reserve fund” for a portion of the revenue the Foundation received that year from Google. We did this in case the IRS (the “Internal Revenue Service,” the US national tax agency) decided to review the tax status of these funds. This turns out to have been beneficial, as the IRS has decided to review this issue and the Mozilla Foundation. We are early in the process and do not yet have a good feel for how long this will take or the overall scope of what will be involved.
In 2007, the Mozilla Foundation expanded its grant giving and funding program, providing approximately $700,000 in funds. Mozilla supported projects such as Mozdev Support, the NVDA open source screen reader for Windows, GNOME, and Mozilla-related educational activities at Seneca College. In addition, the Mozilla Corporation contributed $321,326.40 to various individuals and efforts, which supported the open source projects of individual developers, the Bugzilla community, Creative Commons, Oregon State University, and others. This brings total grants, donations, and contributions to over $1 million (roughly tripling 2006 donations).
We believe that Mozilla’s structure and financial management will allow us to continue with relative stability despite the disturbing economic conditions that developed over the summer and fall of 2008. There are no guarantees of course and Mozilla is not immune. We will certainly feel the effects of the economic situation. However, there are a number of reasons why Mozilla is likely to experience less disruption than other organizations.
Our financial objective is sustainability, not financial return on investment, and certainly not the increasing financial return on investment that the markets seek. Success in our fundamental goals is not measured by the stock or investment markets.
Our basic structure — public benefit, non-profit organization — means that we do not have a share price or valuation set by the market. So the downturn in the stock market does not affect us directly.
Mozilla’s participants do want a return on their investment. That return is our effectiveness in creating a part of the Internet that is open, participatory, innovative and promotes decentralized decision-making. Financial resources are one tool in generating this return. But they are not the only tool. The open source software development model is adept at providing multiple tools to achieve our goals. Financial resources are a catalyst, but neither the goal nor the only tool.
We’ve been building in the ability to live with greatly reduced revenue for years. We have a significant amount of retained earnings. We don’t currently anticipate dipping into that fund in the immediate future. We believe our revenues for the near term future will be adequate to fund ongoing work. If the economic setting further worsens, we do have retained earnings to carry us through some difficult times.
Our financial management style has always been that each person who is paid to work on Mozilla needs to be a resource for many other people. We haven’t tried to hire everyone we need to fulfill our mission — that’s not possible.
Moving the Mission Forward
1. Scope
In 2007 we launched a number of initiatives focused on strengthening the Mozilla mission. In February we published the first version of the Mozilla Manifesto and began the ongoing public discussion of the most over-arching goals of the Mozilla project: openness, participation, decentralization, innovation. A few months later we turned to describing the open web and promoting an open Internet as the most fundamental “platform” for ongoing development. There is much work to be done here, both in defining what we mean clearly and in working with others who share the goal. This is possible only because of our success to date — we are able to shift the focus from Firefox as an end in itself to Firefox as a step in achieving something much greater.
In May the Mozilla Foundation started an Executive Director search process to add additional capabilities. This task required designing a search process appropriate for an open organization like Mozilla. We figured out how to create a search committee with board members and individual contributors, created that committee, did a lot of public outreach and discussion, and combined this with classic search techniques. We were able to include a live, streamed, public discussion and the chance for hundreds of Mozilla participants to meet our final candidate as part of the process. It would have been ideal if we could have done this more quickly, as it took us until August 2008 to officially hire our new Executive Director. But we found a rare and great fit in Mark Surman, and this occurred only because of the determinedly open nature of the search process.
In June we launched a focused, increased effort in China. This includes a range of outreach and community activities, particularly in universities, plus a focus on making Firefox a better experience for Chinese users. To do this effectively we created a subsidiary of the Mozilla Corporation known as Mozilla Online Ltd.
In July we launched a call to action to revitalize Mozilla efforts in email and Internet communications. That led to vigorous discussions for several months, and the decision to create a new organization with a specific focus on mail and communications. In the fall of 2007 we laid much of the groundwork for the creation of Mozilla Messaging, which launched officially in February 2008.
The idea of openness is taking root across the industry and in other areas of life. More organizations and people are realizing that choosing openness, collaboration and enabling participation is good for people, and good for a set of business opportunities as well. In addition, we are seeing the vast amount of civic and social benefit that can be created through open, collaborative, shared work product.
2. Geographic Reach
2007 was also a year of geographic expansion, reflecting the increasingly global nature of the Mozilla project.
One aspect of our global expansion is in our user base. By the end of 2007, nearly fifty percent of Firefox users chose a language other than English. Fast-forwarding a bit, the first country in which Firefox usage appears to have crossed the 50% mark is Indonesia, surpassing 50% in July 2008. A set of European countries (Slovenia, Poland, and Finland) see Firefox usage above 40%.
Another aspect of geographic expansion is in the contributor and community base. In 2007 Mozilla contributors from the United States made a series of trips to India, resulting in many contacts and one of our 2008 interns. Mozilla contributors from the United States also made the first trips to Brazil to see our contributors there. This also resulted in ongoing activities in Brazil that are continuing, as well as expanding activities in other South American communities. The number of participants in Eastern Europe is growing dramatically. We started work in China and hired Li Gong to lead this effort. This resulted in the creation of Mozilla Online Ltd. in August. Mozilla has new groups of contributors and employees in Auckland, Beijing, Copenhagen, Vancouver and across Europe.
This global reach is driven by our focus on local contributors, local product and local empowerment. Firefox 2 shipped in thirty-six languages. Firefox 3 shipped in forty-six languages in June 2008 and 4 months later, our Firefox 3.1 beta is now localized in over 50 languages. We continue to invest very heavily in what we call “localization” for short but which in its broadest sense means everything that allows global participation in building and accessing the Internet.
At the end of 2007 our Calendar Project had twenty-six active localizations for Sunbird 0.7 and Lightning 0.7, and Thunderbird 2.0.0.9 offered thirty-six active localizations. SeaMonkey 1.1.7, the last stable release of the year, featured twenty languages. The number of releases is made possible by the enormous dedication of the localization communities, plus a focus on building infrastructure to enable those communities.
These efforts to make the web more accessible did not go unnoticed. In May of 2007 Mozilla was awarded the World Information Society Award by the ITU, the United Nations agency for information and communication technologies. Mozilla was singled out for its “outstanding contribution to the development of world-class Internet technologies and applications.”
Our community remains healthy and vibrant. The percentage of code contributed to Firefox by people not employed by Mozilla remained steady at about 40% of the product we ship. This is true despite a significant number of new employees in 2007. Our geographic expansion is powered by active and committed volunteers, from the localizers to Spread Firefox participants to others who introduce Firefox to new people.

In June of 2007 we launched a new quality assurance effort, building ways for people to get involved without needing to plunge exclusively into our bug-management tool. In October we launched a new support effort, building on the work community members have provided via forums. Today our end user support offering includes an online knowledge base, forums for discussion and troubleshooting, and one-to-one live support. We also made event planning and speaking planning a public activity, and have developed programs to assist more Mozilla contributors to become active public speakers about Mozilla.

4. Product
The number of people using Firefox on a daily basis nearly doubled from 27.9 million in 2006 to 48.9 million in 2007. As of October 2008 that number has grown to 67.7 million. In 2007 and 2008 three titans of the Internet and software industry — Microsoft, Apple and Google — all released competitive Web browsers. Our market share continues to rise, our community continues to grow and Firefox continues to provide leadership in innovation, technology, and user experience. Living among giants is not easy, but the Mozilla community continues to demonstrate that our efforts stand the test of competition and continue to lead the way.
Other Mozilla projects remain vital, with committed contributors and users. Worldwide, SeaMonkey has approximately five million users and Thunderbird has five to ten million users. Bugzilla installations are hard to count since many of them are internal to an organization. But we see Bugzilla installations everywhere, and over sixty thousand copies of Bugzilla were downloaded in 2007, with hundreds of companies identifying themselves as Bugzilla users.
The impact of our userbase allows us to help move the Internet industry to a more open and participatory environment — accessible content, standards-based implementations, and bringing participation and distributed decision-making to new aspects of Internet life.
In 2007 we began a new, focused effort to bring the Firefox experience to mobile devices; early steps included forming a team and identifying mobile platforms as a central part of our work going forward. We’ve begun shipping development milestones and early releases in 2008.
We’ve also started new initiatives to promote innovation across the Mozilla world by providing a home and infrastructure for experimental work via Mozilla Labs. Innovation is a notoriously difficult thing to build into an organization; we’ve adopted a flexible approach that we expect to grow and change over time. The Mozilla community is diverse and creative, our challenge here is to build environments that both encourage individual creativity and that allow us to work at scale.
Mozilla is strong. We’re growing. We’re trying new things. 2007 and 2008 to date have been important, successful years for Mozilla.
I hope Mozilla participants feel proud of what we've accomplished and excited about what is still to come. The Internet is still young, and still in its formative stage. Mozilla has empowered, and can continue to empower, each one of us to build the Internet into a better place.
Categories: Mozilla | Tags: accounting, reports, revenue | 42 comments
Revenue and Motives
March 25th, 2008

John has a post today about how some people impute revenue motives to everything we do. In his case John made a statement about how one of Apple's business practices is bad for the overall security and health of the Internet. (In this case the practice is to encourage consumers to download and install new software by identifying it as an "update" to software the person already has on his or her machine.)
Some of the reactions address the actual issue. But there's also a set of responses along the lines of: 'All Lilly really cares about is using Firefox to make money from Google, and all this talk of what's good for the Internet is just a smokescreen for protecting the revenue stream from Google.' (This is not an actual quote, it's my description of a set of responses.) I'm coming to wonder if any statement or action we take that is controversial or based on mission will get this response. I've had this experience myself when discussing a number of topics.
Periodically I’ll be in a discussion about Mozilla’s plans for something and people respond by saying “Oh, that’s because Google cares about [fill in the blank] and your revenue comes from Google.” On several occasions I’ve been utterly dumb-founded and speechless because I have never even thought of Google in relation to the discussion. (I’d give some examples but I am concerned that we’ll end up rehashing old issues. )
But much of the world is driven by money and all sorts of people say they have different or additional motivations. So suspicion may be warranted. At Mozilla we can only do what John notes — keep pursuing the mission, keep demonstrating by our actions that our mission is the critical piece, and being authentic.
A separate problem is that a focus on money makes it easy to miss other, important topics. In this case the question is: what happens if consumers stop accepting security upgrades because they don't trust the other software that comes along with them? That's a disaster for all of us. That's the question John is raising and it's an important question to consider. Those commentators who dismiss this topic because Mozilla competes with commercial offerings and generates revenue miss this point. If the commentators you turn to dismiss everything for this reason, then I hope you'll add some additional commentators to your resource list.
Categories: Mozilla | Tags: discussion, revenue | 19 comments
The Digital Un-divide
The Digital Divide isn't. The implications of this technology generation are greater accessibility and lower infrastructure costs, and the developing world has been skipping past entire technology generations. It is still going on today: mobile payments took off in the "developing" world long before they did in the developed world.
EWD: A Personal Reflection
Under the title is a link to the EWD archives at the University of Texas, a bit superfluous perhaps, for the same link is in the links section of this blog, but it's there for the reader's ready reference. I want to quote here one sentence from the opening page of that site: "In addition, Dijkstra was intensely interested in teaching, and in the relationships between academic computing science and the software industry."

While I was a long time reader of Dijkstra's books, this was where I connected with him some time in the early 80's as I was in the middle of developments in architecting a series of strategic transaction processing and Decision Support tools for the shipping company where I was then employed. After leaving the company I was finally recognized as the architect of all their strategic systems. I made a visit to Austin on April 3rd 1985 to do an interview with EWD for a now defunct magazine, HollandUSA, which was distributed by KLM in its executive class. My own interest was then very acute because I was struggling daily with a company where the management completely misunderstood the potential and limitations of IT. This is the territory I explored in depth with Dijkstra for what was presumed to be an executive magazine. In personal correspondence Dijkstra later called this interview the best of his life. Unfortunately the magazine never saw fit to publish it, even though they paid for the rights including for my trip to Austin. Quite evidently they did not understand the importance or the relevance of this brilliant countryman of theirs.

My visit with him was a fresh breeze. It was everything I expected and more, including the discovery that we went to the same high school, the Gymnasium Erasmianum in Rotterdam, and one of his early nostrums was that if you want to become a programmer, learn Latin.

I am writing this note simply as one of these life experiences that seems worth sharing, and as an invitation for the reader to explore Dijkstra's work, be it through the archives at UT, or through his books. He taught from a profound understanding that programming was a branch of applied mathematics, and it was this understanding that made him very perceptive in terms of the opportunities and limitations of programming in the business world, because many meaty business problems are quite intractable from a mathematical point of view, and simplification is done at the user's risk.

Dijkstra's life in the deeper sense was spent in the pursuit of making people think. Making people think through a problem before they put pen to paper. He was popular, but his students sometimes disliked him as much as they -- grudgingly, one would think -- respected him, because he insisted on handwritten papers and would not accept output from a word processor. His reasoning: by the number of corrections he could see if the person was thinking before they wrote, something he considered an essential skill in programming. So he lived what he taught, and made his students do the same.
His extensive notes that can be found in the archives are often almost an equivalent of zen koans for the world of IT, and one could only hope that future generations of IT architects keep EWD's dedication to the basics high on their list of priorities, for otherwise IT solutions are bound to wander down the path towards irrelevance and early obsolescence.

For me personally, it was Dijkstra's sense of what computer science and programming are and what they aren't which served to define the near end of the digital divide, and keep a clear focus on the effective, reasonable and functional use of computing in business. It helped me understand everything from the difficulties of implementation (the digital divide at home - full of executives who fight IT all the while supporting it in name) to the risks of overpromising and the pursuit of inappropriate - because not mathematically tractable - applications.

Copyright © 2005 Rogier F. van Vlissingen. All rights reserved.
Comments:

Apurva: Hello, I came across this article from the EWD archives. I have been studying EWD's work for a couple of years now and at the moment I am in Eindhoven, The Netherlands, to continue my studies of this discipline. It is always nice to meet other people who are enthusiastic about EWD's ideas and this comment is just my way of getting in touch with you! My colleague and I have a website where we publish our own series of documents, analogous to the EWDs. You might be interested in having a look at it. Keep in touch, Apurva

He sure was a great scientist (or programmer?)

Your interview is great, really. Dijkstra's thoughts about computer science's role in industry, and about the perceived distance between theory and application, are still of great interest today. Perhaps not too much has changed since 1985...
Company FAQ
What is Blizzard Entertainment?
Best known for blockbuster hits including World of Warcraft® and the Warcraft®, StarCraft®, and Diablo® series, Blizzard Entertainment is a premier developer and publisher of entertainment software renowned for creating some of the industry’s most critically acclaimed games. Blizzard Entertainment’s track record includes fourteen #1-selling games and multiple Game of the Year awards. The company’s online-gaming service, Battle.net®, is one of the largest in the world, with millions of active users.
Where is Blizzard Entertainment located?
Our headquarters, where all game development takes place, are located in Irvine, California. In addition, we have offices in several other locations around the world to support players of our games in those regions.
Where can I get more information about one of your games?
We have websites devoted to each of our titles. Please visit the appropriate site for more information.
How can I get a job or internship at Blizzard Entertainment?
We’re currently hiring qualified applicants for a number of positions. Please check our jobs page for full details.
Where can I buy your games?
Boxed copies of our games can be found at many gaming and electronics retailers. Alternatively, you can purchase copies, along with a variety of other related products, directly from the Blizzard Store.
Can I come visit your office?
Due to production demands and confidentiality issues, we are a closed studio and do not offer tours.
What projects is Blizzard Entertainment currently working on?
In addition to the ongoing development of World of Warcraft, we have teams hard at work on StarCraft II and Diablo III.
What are your plans for the future of Battle.net?
We are building the new Battle.net to be the premier online gaming destination. The new Battle.net experience is a full-featured online game service designed specifically around Blizzard Entertainment titles, and will include a complete set of around-the-game features including a state-of-the-art matchmaking system, achievement system, social networking features, structured competitive play options, a marketplace, and much more. Our vision is to create an environment where gamers can compete online, develop an online persona, and stay connected to friends and the rest of the community while enjoying our games. In doing this, the new Battle.net will deliver the ultimate social and competitive experience for Blizzard Entertainment gamers everywhere.
What is BlizzCon®?
BlizzCon is a gaming festival celebrating the communities that have sprung up around our games. It offers hands-on playtime with upcoming titles, developer panels, tournaments, contests, and more. To learn more about BlizzCon, visit our BlizzCon website.
How is the Warcraft movie progressing?
We continue to work closely with Legendary Pictures on the Warcraft movie. Duncan Jones (Moon, Source Code) has signed on to direct the film.
How does Blizzard Entertainment feel about total conversions or mods of its games?
We've seen some very polished and fun mods and conversions for our games, and have no problems with them, so long as they are for personal, non-commercial use and do not infringe on the End User License Agreement included in our games, nor the rights of any other parties including copyrights, trademarks or other rights. If you have any other legal questions regarding Blizzard Entertainment or our products, please see our Legal FAQ.
What is Blizzard Entertainment's plan for native Mac OS support, now that Boot Camp is available?
We have a recognized track record of native Mac OS support, and we have no plans to break with that tradition. We understand that our Mac player base prefers native software whenever possible, and our cross-platform development practice addresses that.
mfyasirgames
Cloned to Death: Developers Release all 570 Emails That Discussed the Development of 'Threes!'
2048 [Free] has been storming up the charts on the App Store since its release and it seems like everybody's talking about the game. It's particularly disheartening when you know that Threes [$1.99] was released a couple of weeks prior to that. In both games you slide tiles on a board until either you win (for 2048) or you lose (for Threes). Even though 1024, another clone of Threes, was released first, it was 2048 that gained a huge following. There's been no shortage of drama around it since the original creator of 2048 mentioned on Hacker News that he hadn't even heard about Threes before making his game. As things evolved, his website has been updated and now states that it is "conceptually similar to Threes by Asher Vollmer".

Sirvo has been fairly quiet about it up until now. Today they've released a huge article on the development of Threes featuring the 570 emails that the team sent to each other during that process. They explain how the concept was done quite fast but how they struggled with the mechanics, and much, much more. From a monster that was eating the tiles to the now popular "doubling" gameplay that was added 7 months after they started making it, you'll be able to have an in-depth look at how they made Threes and at how difficult it is to make a game that feels so simple. I really recommend that you do read it, because it's fascinating. In any case, 2048 (and 1024 before it) perfectly illustrate how quickly clones can take over the App Store. Or, as the Threes developers put it, "We do believe imitation is the greatest form of flattery, but ideally the imitation happens after we've had time to descend slowly from the peak -- not the moment we plant the flag."

Posted by Jimmy Spence
GDC 2014: Hands-on with Tiger Style's 'Spider: Rite of the Shrouded Moon'
With such hits as Spider: The Secret of Bryce Manor and Waking Mars, Tiger Style are easily one of my favorite iOS developers. That's why it was such a big deal this past October when they revealed a new project through a cryptic teaser site that alluded to something called Blackbird Estate. About a month later, it was then revealed that their new project was actually a sequel to Spider, and Blackbird Estate would be the new location that you'd be exploring as an eight legged creature. Today we got to sit down and get a nice, long demo of Spider: Rite of the Shrouded Moon, and it looks fantastic. If you were a fan of the original Spider, then I think Rite of the Shrouded Moon will be right up your alley. It's familiar enough to the original that it feels like an extension of the world they created, but comes with quite a few new features and secrets to discover, which is something that made the first Spider so special. Spider: Rite of the Shrouded Moon will be done "when it's done" but it sounds like we'll be playing it sometime before the end of the year. I can't wait.
Robinson Cano and the lineup protection myth
There's an increasing amount of chatter that says the Mariners have to "protect" Robinson Cano in the lineup or his offense will be wasted. Don't fall for it.

Superstars have a tendency to be followed by narratives. Often they write their own with spectacular performances, but other times people start to make them up to help rationalize. Many wrestle with the idea of a star player being as good as advertised, and find ways to highlight flaws to be contrary by nature. These flaws are often overstated, and take on a life of their own. That's certainly been the case with Robinson Cano, who has been bombarded with claims of being lazy through his years in New York. He's stayed remarkably healthy, but that doesn't matter because it doesn't fit. Nobody wants to hear something rational, the numbers already do that. Dispelling a narrative doesn't exactly grab the same kind of headlines that creating one does.

There's been an increasing amount of buzz that the Mariners haven't done enough to protect Robinson Cano in the lineup, and that nobody will pitch to him unless they surround him with better hitters. Part of this is based in traditional baseball beliefs, articles remembered or passed down during times of a player's struggles. It's the kind of narrative that's used to explain great production or excuse poor ones, but usually the latter.

Baseball has constantly masqueraded as more of a team sport than it actually is, and the idea of lineup protection is one that enforces that exaggerated belief. Fans constantly remove responsibility from individuals and shift blame to others, a struggle to view a group of players wearing the same hat as individuals. This blame game is typically seen with the arrow pointing straight up -- to the hitting coach, to the manager, to the general manager, to the ownership. The Mariner organization has witnessed this process run its course time and time again. The season has yet to begin, and fans are already shoving that arrow straight up in the air.

The Mariners haven't had a legit superstar position player in so long that maybe fans have forgotten how to sit back and appreciate greatness. The excuses and worries have already begun. Unfortunately, the narrative that Robinson Cano is lazy has even more legs than the idea that he won't produce unless the Mariners surround him with better hitters. And the first narrative is terrible, so what does that make the second?

The Mariners didn't have a particularly great offense last year. You can reasonably prove the Yankees offense was even worse. Though they managed to score a handful more runs, it was certainly aided by their hitter's paradise of a ballpark. Despite that advantage, the Yankees only scored 26 more runs than the Mariners. Strip away the park, and examine their park-adjusted wRC+ -- only the White Sox and Marlins had lower ones. The Yankee offense was unquestionably miserable last year. They were destroyed by injuries and old, ineffective hitters. Cano hit 3rd most of last year (42 games hitting 2nd), and produced a 142 wRC+, the third highest total of his career. He was undeniably the same superstar hitter that he was when the Yankee offense was stacked from top to bottom.

From 2010-2012, when the Yankees had one of, if not the best offense in the majors, Robinson Cano was an outstanding hitter. In 2013, when the Yankees had one of the game's worst offenses, he was equally outstanding. This came when Cano was bookended by hitters like Travis Hafner, Mark Reynolds, and Ichiro. Hitters that used to be good. In order to believe that Cano was protected by these hitters, you have to presume that pitchers pitch to reputation and not ability. You have to believe that pitchers are stupid. They're not. Pitchers knew that Ichiro wasn't the threat he once was, and that Pronk and Mark Reynolds were easily exploitable, or that it generally wasn't 2008 anymore.

Examine the 2013 Yankee offense (stats via ESPN.com). Can you guess the spots where Cano hit the most? Does that look like "lineup protection" to you?

Even if you do believe that reputation goes a long way in the mythical protection of Cano, then you should think Corey Hart will provide plenty of it. Kyle Seager is an upgrade on most that hit around Cano last year, and Justin Smoak and Logan Morrison probably will be as well. They won't be the 2010-2012 Yankee offense, but Cano did just fine without that support.

This isn't meant to be a commentary on the idea of lineup protection as a whole, which has been fully rationalized by writers more accomplished than I. It may apply to some hitters and some teams, but it's probably often assigned incorrectly. The biggest change a hitter may see in reduced protection is a drop in RBIs, and that comes from having worse hitters in front of him. Much of the reasoning behind lineup protection is based on traditional stats that are inappropriately associated to individuals. Even if you're into that, Cano's RBIs didn't show any particular drop last year either.

This is simply about Cano, a hitter who has demonstrated he is a star with all levels of talent around him -- including some worse than the current Mariner offense, despite how bad it has been for a number of years. Cano has been lobbying the Mariners to sign fellow Dominican Ervin Santana and Kendrys Morales. The narrative will be that the Mariners need Morales to protect Cano in the lineup, but don't bite. If he sees a decline in production this year, it'll be because of Safeco, his age, or injury. It won't be because of his teammates. He's a stud. Treat him like one, and place blame correctly.
Canucks trade Luongo to Panthers in four-player deal
(Reuters) - The Vancouver Canucks traded goaltender Roberto Luongo to the Florida Panthers on Tuesday in a stunning move one day before the National Hockey League's trade deadline.

Luongo, the subject of trade rumors for nearly two years, and winger Steven Anthony were traded to Florida for goalie Jacob Markstrom and forward Shawn Matthias, the teams said in separate statements.

"I thought my contract was immovable," Luongo said on TSN radio from Phoenix where the Canucks are scheduled to play the Coyotes later on Tuesday. "I would never have thought I would be traded before the deadline."

A three-time Vezina Trophy finalist as the NHL's top goalie and finalist for the Hart Memorial Trophy as the most valuable player, Luongo helped Vancouver reach the Stanley Cup Final in 2011, where they lost to Boston in a decisive seventh game. But the 34-year-old goalie was unable to maintain his form and has been the subject of trade rumors since losing his starting job to Cory Schneider during the 2012 playoffs.

The Canucks made previous attempts to trade Luongo but his massive 12-year contract worth $64 million that was signed in 2009 proved a major sticking point in making a deal. Schneider was eventually traded to New Jersey in mid-2013 and it was expected Luongo would reclaim the starting job. He filled the role for most of the current season but Eddie Lack has started in each of Vancouver's three games since last month's Olympic break.

Luongo, who spent five seasons with Florida earlier in his career, was in net for the Canadian team that won the gold medal at the 2010 Vancouver Olympics and was the backup goalie on the national team that triumphed at last month's Sochi Games. Luongo, who is under contract through the 2021-22 campaign, spent nearly eight NHL seasons with the Canucks and leaves as the team's all-time leader in shutouts, wins and most wins in a single season.

"Roberto is one of the game's elite goaltenders and we are happy to welcome him back to South Florida," Panthers General Manager Dale Tallon said in a statement. "With this acquisition, we have solidified our goaltending depth with a top-tiered netminder for the next several seasons."

(Reporting by Frank Pingue in Toronto and Larry Fine in New York; editing by Ken Ferris)
BusinessWest
Cover Amherst's 'Biddy' Martin Puts the Focus on Inclusion It's called the 'Committee of Six.' That's the name attached to an elected - and quite powerful - group of professors at [...] read more...
Features Riverfront Club's Mission Blends Fitness, Teamwork, Access to a 'Jewel' Jonathon Moss says he found the item on eBay. It's a framed copy of an engraving and short story in [...] read more...
Holyoke's Leaders Take a Broad View of Economic Growth Alex Morse has a message for Holyoke's residents and businesses: keep your eyes open. Over the past two years, said the [...] read more...
Special Features Planned United, Rockville Merger Has the Industry's Attention They're called MOEs, or mergers of equals. And while neither the phrase nor the acronym is new to the banking industry, they [...] read more...
Integrity and Accountability Are Central to Barr & Barr's Business Philosophy Stephen Killian was asked to put the Great Recession and its many - and still-lingering - consequences into perspective, [...] read more...
Switch to Santander Banner Brings Some Change, but Also Stability When the Sovereign Bank signs suddenly came down across Massachusetts last fall, replaced by the Santander Bank name, it was [...] read more...
North Brookfield Savings, FamilyFirst Ink Merger Agreement Two area mutual banks that serve local customers and small businesses - and are active in their communities - are joining together to [...] read more...
Asset Allocation Is Key to Making Sure Your Goals Are Met The most important investing decision for individual investors is how much to save from your paycheck. The second most [...] read more...
University Without Walls Offers Alternative Options for Adult Students When Orlando Ramos of Springfield sits down to do his homework at the kitchen table, he's often joined by another student [...] read more...
HCS Head Start Strives to Get Preschoolers on the Right Track Fifty years is a long time in any field. So it's no surprise, Nicole Blais said, that early-childhood education has [...] read more...
A chart of colleges and universities in the region Click here to download the PDF read more...
Expert in Eco-friendly Construction Offers 10 Trends to Watch in 2014 What are the major trends likely to affect the green-building industry and markets in the U.S. in 2014? Jerry [...] read more...
A chart of area general contractors Click here to download the PDF read more...
A listing of available commercial properties Click here to download the PDF read more...
REB's New Director Wants to Build on Recent Momentum Dave Cruise's desk - or, more specifically, what sits on it - speaks volumes about his work with the Regional Employment [...] read more...
Opinion The Race to Pick MGM's Pockets As the process for awarding the only Western Mass. casino license moves into its final, critical stages, there is an interesting subplot emerging - [...] read more...
Tackling the Innovation Deficit By L. RAFAEL REIF The long-term future of congressional support for research and development is being shaped right now, and the stakes are high. Those of [...] read more...
Kicks Deal Of The Day: 15 Best Nike Flyknit Kicks On Clearance
For those after great savings on a new pair of Nike Flyknit trainers, especially the Flyknit Lunar1 model, the crew at KicksDeals.com have an exclusive coupon available for a limited time! They've picked out the 15 best options found in the Nike clearance section and you can enjoy an extra 20% off if you don't get caught sleeping. For complete discount details, click here.
Here's why Amazon Prime's price may go up by $20 or more
During an earnings call Thursday, Amazon said it was looking into bumping up the price of Amazon Prime by 20 or 40 bucks. The Prime service currently charges a flat $79 for a year of all-you-can-ship service on millions of items. But despite staying at the same price point since its launch in 2005, Prime has been pretty lucrative for Amazon -- in fact, some research shows Prime subscribers spend twice as much on the online shopping site as non-Prime shoppers. So why is Amazon considering raising barriers to joining rather than lowering them? Because one of the psychological factors for why Prime customers spend so much is they want to make sure they get their money's worth from the upfront subscription cost.
And why shop anywhere else when you can get it delivered fast, for free, from Amazon? It's this conditioning that makes Prime so effective. And if the company is confident it can convince consumers $120 per year is still a good deal for Prime, it's hardly surprising that Amazon is considering changing the price. After all, given its at least 20 million subscribers, a $40 increase could also translate into $800,000,000 in revenue for Amazon.

Disclosure: Amazon founder and CEO Jeff Bezos is the owner of the Washington Post.
PandaLabs Blog
Everything you need to know about Internet threats
PowerLocker
by Luis Regel
PowerLocker, also called PrisonLocker, is a new family of ransomware which, in addition to encrypting files on the victim's computer (as with other such malware), threatens to block users' computers until they pay a ransom (like the 'Police virus').
Although the idea of combining the two techniques may have caused more than a few sleepless nights, in this case the malware is just a prototype. During its development, the malware creator has been posting on blogs and forums describing the progress and explaining the different techniques included in the code.
The malware creator’s message in pastebin
In this post for example, the creator describes how PowerLocker is a ransomware written in c/c++ which encrypts files on infected computers and locks the screen, asking for a ransom.
The malware encrypts the files, which is typical of this type of malware, using Blowfish as the encryption algorithm with a unique key for each encrypted file. It then stores each unique key encrypted with an RSA-2048 public/private key algorithm, so only the holder of the private key can decrypt all the files.
Also, according to the creator, PowerLocker uses anti-debugging, anti-sandbox and anti-VM features as well as disabling tools like the task manager, registry editor or the command line window.
However, all the publicity surrounding PowerLocker that the creator has been generating across forums and blogs before releasing it, has led to his arrest in Florida, USA. Consequently, today there is no definitive version of this malware and there is no evidence that it is in-the-wild.
Nevertheless, we still feel it’s worth analyzing the current version of PowerLocker, as someone else could be in possession of the source code or even a later version.
PowerLocker analysis
The first thing PowerLocker does is to check whether two files with RSA keys are already created, and if not, it generates the public and private key in two files on the disk (pubkey.bin and privkey.bin).
Unlike other ransomware specimens, which use the Windows CryptoAPI service, PowerLocker uses the OpenSSL library for generating keys and encrypting files.
Once it has the keys, PowerLocker runs a recursive search of directories looking for files to encrypt, excluding, not very effectively, files with any of the file names used by the malware: privkey.bin, pubkey.bin, countdown.txt, cryptedcount.txt. It also avoids $recycle.bin, .rans, .exe, .dll, .ini, .vxd or .drv files to prevent causing irreparable damage to the computer. The creator has however forgotten to exclude certain extensions corresponding to files which are delicate enough to affect the functionality of the system, such as .sys files. This means that any computer infected with PowerLocker would be unable to reboot.
Moreover, in this version it is possible to use a parameter to control whether the ransomware encrypts or decrypts files using the pubkey.bin and privkey.bin keys generated when it was first run.
This version does not include the screen lock feature described by the creator, although it displays a console with debug messages, names of the files to encrypt/decrypt, etc. and asks you to press a key before each encryption or decryption.
At present, there is only a half-finished version of PowerLocker which could practically be labelled harmless, and which lacks many of the most important features that the creator has described on the forums and blogs, such as anti-debugging, screen locking, etc.
Despite it not being fully functional we would recommend having a system for backing up critical files, not just to offer assurance in the event of hardware problems, but also to mitigate the damage of these types of malware infections.
Also bear in mind that if you don’t have a backup system and your system is infected, we certainly do not recommend paying the ransom, as this only serves to encourage the perpetrators of such crimes.
PowerLocker analysis performed by Javier Vicente
Password stores… Candy for cyber-crooks?
by Luis Corrons
I am sure that more than once you have used the same password for different websites. Imagine that one of those websites stores your password on their internal servers. You won't have to squeeze your imagination for that as, unfortunately, that's common practice. Now, imagine that those servers are attacked by a group of hackers who manage to get your password. The next thing those hackers will do is use your password to try to access your email account or any other websites you may have registered to and, in many cases, they will succeed.
You can stop imagining now, as that’s exactly what happened to Yahoo in reality a few weeks ago. In this case the stolen data was not obtained directly from Yahoo’s systems. Apparently, Yahoo realized that a number of their user IDs and passwords had been compromised, and after further research, it was discovered that the information had been obtained from a third-party database not linked to Yahoo.
Immediately, Yahoo reset the affected users’ passwords and used two-factor authentication for victims to re-secure their accounts.
In this case we are not talking about a company failing to secure its data but quite the opposite, and we should congratulate Yahoo for having been able to detect the attack and act swiftly to protect its users.
Unlike the Yahoo incident, an attack recently launched on Orange did affect one of the company’s websites. More specifically, the breached site was affected by a vulnerability that allowed the attackers to gain access to personal data from hundreds of thousands of customers, including names, mailing addresses and phone numbers.
Fortunately, it seems that Orange's systems were configured in a way that prevented the customers' passwords from being compromised, which limited the damage done to the more than 800,000 users affected by the attack. According to reports, the customers' passwords and banking details were stored on a separate server which was not impacted by the breach.
In any event, when it comes to protecting passwords from the eventuality of theft, the best policy is simply not to store them. If passwords are not stored, they can’t be stolen, can they? It sounds quite obvious, but not many people seem to apply this simple concept.
Now, the question is, if organizations don’t store users’ passwords, how can they validate users? Very simple. It would be enough to ‘salt’ the original password set by the user when signing up for the Web service, and apply a hash function to that ‘salted’ password. By salting the original password, what you actually do is generate a new, different password using a previously defined pattern (turn letters into numbers, change their order, etc). Next, the system applies the hash function to the alternate password and converts it into a complex string of symbols by means of an encryption algorithm. It is this ‘hashed’ form of the password which is stored in order to validate the user. From that moment on, every time the user types in a password, the system will apply the aforementioned pattern to it, calculate a hash value, and compare it to the hash stored in the password database. If they match, it means that the user has entered the correct password and access is permitted. As you can see, the entire process takes place without the need to store sensitive data such as passwords.
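To make this concrete, here is a minimal sketch in Python of the general store-a-hash-not-the-password approach. Note that it uses a random per-user salt stored alongside the hash, which is the more common variant of the 'salting' idea described above, and that the function names and iteration count are illustrative assumptions, not taken from any of the services mentioned:

import hashlib, hmac, os

def hash_password(password):
    # Generate a random per-user salt; only the salt and the hash are stored.
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, pw_hash

def verify_password(password, salt, stored_hash):
    # Re-compute the hash from what the user typed and compare the results;
    # the original password itself never needs to be stored anywhere.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

If the two hashes match, the user typed the correct password; and if the password database ever leaks, the attacker only obtains salts and hashes, not reusable passwords.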
Another measure that should be implemented on a massive scale is the use of two-factor authentication. Even though it can be a pain at times, when applied, it makes compromising user accounts a lot more difficult. This is a system that financial institutions have been using for a long time, but which should also extend to other Web services as well.
Android users under attack through malicious ads in Facebook
by Luis Corrons
Cyber-criminals are always trying to attract people's attention in order to carry out their crimes. So it should be no surprise that they have now found a combined way of using Facebook (the world's largest social network), WhatsApp (the leading text messaging program for smartphones, recently bought by Facebook) and Android (the most popular operating system for mobile devices) to defraud users.
The group behind this attack uses advertising on Facebook to entice victims and trick them into installing their apps. When you access Facebook from your Android mobile device, you will see a ‘suggested post’ (Facebook’s subtle euphemism for an advertisement) advertising tools for WhatsApp:
As you can see, not only do they use the most popular platforms to attract users, they also appeal to the curiosity of users by offering the chance to spy on their contacts’ conversations. You can see how successful this is by looking at the number of ‘Likes’ and comments it has. Yet this is not the only lure they’ve used. Below you can see another suggested post promising an app that lets you hide your WhatsApp status:
Facebook offers targeted advertising for advertisers, i.e. you can specify which type of users you want to see your ads, where they appear (e.g. in the right-hand column), as suggested posts, etc. In this case it seems that the ad is only shown to Spanish Facebook users who are accessing the social network from an Android mobile device, because these are the types of victims that the cyber-crooks behind this scam are after. In fact you can see this here, as the screenshots are taken from a Spanish Facebook account through an Android mobile device. We also tried using the same account but from a PC, an iPad and an iPhone and in none of these cases were the ads displayed.
If you click on the image you can see here in any of the ads that we’ve shown, you’ll be redirected here:
As any Android user can tell, this is Google Play, specifically, a page for an app. It has the option to install it, and shows over one million downloads and a 3.5-star rating by users (out of 5). If you go down the screen you can see numerous positive comments, and the votes of over 35,000 users who have rated it:
However, a suspicious eye can see that not all the numbers add up:
- The app has a score of 4.5, yet the number of stars is 3.5
- You can see that the score is calculated on the basis of the votes from 35,239 users. Yet if you add up the number of votes that appear on the right, the total is 44,060 votes:
So how can this be happening in Google Play? As some of you may have guessed, this is happening because it is not Google Play. It is really a Web page designed to look like the Play Store, so users think they are in a trusted site. The browser address bar, as you can see in the screenshots here, is hidden at all times. If you click on the ‘Install’ button, a file called “whatsapp.apk” is downloaded.
When it runs, this app displays the following screen:
Pirate Empires: Part 2 - Give 'Em the WIP
30th March 2009 - Give 'Em the WIP
Some J Mods having a sea battle.
Recently, I have been working on getting islands to render properly during sea battles (where islands can be quite large) and in the game's world map (where they can be quite small). It's quite tricky to do with a game of this size, as I have to make sure it all works on older computers as well as reliably over a network. It means we end up doing a lot of our calculations in fixed-point arithmetic. Fixed-point arithmetic means that we use integers (whole numbers) instead of floating-point numbers. Although floating-point numbers allow for more accuracy and can represent fractions easily, they can be slower for the computer to calculate, lose precision and be unpredictable from one computer to the next.
So, the key question we have to answer is: how big is 1? It might be okay for 1 to represent 1cm in sea battles, but if we used that for the world map game, we might only be able to have a game world that�s 1km square - clearly not enough space for a pirate to make a living in. If you make the scale too big, though, animations and camera movements get very choppy. It feels kind of like I've been trying to press bubbles out of wallpaper this week!
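As a rough illustration of that trade-off (a generic sketch, not the actual game code), a fixed-point value is just an integer plus an agreed scale factor, and picking what '1' means fixes both the precision and the largest world you can represent:

SCALE = 100  # 1 integer step = 1/100 of a metre, i.e. 1 cm

def to_fixed(metres):
    return int(round(metres * SCALE))

def from_fixed(value):
    return value / SCALE

def fixed_mul(a, b):
    # Multiplying two scaled integers doubles the scale, so divide it back out.
    return (a * b) // SCALE

A coarser SCALE buys a much larger playable area out of the same integer range, but movement and camera steps become visibly chunkier, which is exactly the wallpaper-bubble problem described above.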
Just the other day, I put the game on our internal 'work-in-progress' (WIP) version of FunOrb in order to show it to a few colleagues. I actually only intended to show it to one 'guinea pig', but when there's a new game working on someone's screen, the whole department can get a little excited. Things quickly escalated into a big battle, as other members of the FunOrb development team jumped in the game to try it out!

A very early build of the tavern (with placeholder graphics).
It is always helpful to see new people playing a game you're working on - it helps you to see the things that are wrong with it, which you have so far been blind to. Even simple things like if you see someone struggling to use your interface can help you to improve the quality of the final game. This 'small' test has given me and Mod Dunk a veritable hoard of little things to fix and improve with sea battles. We have quite a way to go with this game before it meets up to our standards, but we are making progress.
We've also been doing some work on the ports. We have been populating the taverns with a range of sailors - everything from stalwart bonny tars to scum-of-the-earth cut-throats - all of whom you'll be able to recruit for your ship! And we've been making sure that the booty you store in your ship's hold is as 'organised' as every self-respecting pirate would have it: in big piles! (Which reminds me somewhat of my desk...)
Mod Wivlaro, FunOrb Developer (Current grog level: low)
Effective Date: August, 2011
The following Privacy Policy governs the online information collection practices of GWUN LLC ("we" or "us"). Specifically, it outlines the types of information that we gather about you while you are using the NicholasReese.com website (the "Site"), and the ways in which we use this information. This Privacy Policy, including our children's privacy statement, does not apply to any information you may provide to us or that we may collect offline and/or through other means (for example, at a live event, via telephone, or through the mail).
Please read this Privacy Policy carefully. By visiting and using the Site, you agree that your use of our Site, and any dispute over privacy, is governed by this Privacy Policy. Because the Web is an evolving medium, we may need to change our Privacy Policy at some point in the future, in which case we’ll post the changes to this Privacy Policy on this website and update the Effective Date of the policy to reflect the date of the changes. By continuing to use the Site after we post any such changes, you accept the Privacy Policy as modified.
NicholasReese.com strives to offer its visitors the many advantages of Internet technology and to provide an interactive and personalized experience. We may use Personally Identifiable Information (your name, e-mail address, street address, telephone number) subject to the terms of this privacy policy. We will never sell, barter, or rent your email address to any unauthorized third party. “Period.”
How we collect and store information depends on the page you are visiting, the activities in which you elect to participate and the services provided. For example, you may be asked to provide information when you register for access to certain portions of our site or request certain features, such as newsletters or when you make a purchase. You may provide information when you participate in surveys, sweepstakes and contests, message boards and chat rooms, and other interactive areas of our site. Like most Web sites, NicholasReese.com also collects information automatically and through the use of electronic tools that may be transparent to our visitors. For example, we may log the name of your Internet Service Provider or use cookie technology to recognize you and hold information from your visit. Among other things, the cookie may store your user name and password, sparing you from having to re-enter that information each time you visit, or may control the number of times you encounter a particular advertisement while visiting our site. As we adopt additional technology, we may also gather information through other means. In certain cases, you can choose not to provide us with information, for example by setting your browser to refuse to accept cookies, but if you do you may be unable to access certain portions of the site or may be asked to re-enter your user name and password, and we may not be able to customize the site’s features according to your preferences.
We may use Personally Identifiable Information collected on NicholasReese.com to communicate with you about your registration and customization preferences; our Terms of Service and privacy policy; services and products offered by NicholasReese.com and other topics we think you might find of interest.
Personally Identifiable Information collected by NicholasReese.com may also be used for other purposes, including but not limited to site administration, troubleshooting, processing of e-commerce transactions, administration of sweepstakes and contests, and other communications with you. Certain third parties who provide technical support for the operation of our site (our Web hosting service for example) may access such information. We will use your information only as permitted by law. In addition, from time to time as we continue to develop our business, we may sell, buy, merge or partner with other companies or businesses. In such transactions, user information may be among the transferred assets. We may also disclose your information in response to a court order, at other times when we believe we are reasonably required to do so by law, in connection with the collection of amounts you may owe to us, and/or to law enforcement authorities whenever we deem it appropriate or necessary. Please note we may not provide you with notice prior to disclosure in such cases.
NicholasReese.com expects its partners, advertisers and affiliates to respect the privacy of our users. Be aware, however, that third parties, including our partners, advertisers, affiliates and other content providers accessible through our site, may have their own privacy and data collection policies and practices. For example, during your visit to our site you may link to, or view as part of a frame on NicholasReese.com pages, certain content that is actually created or hosted by a third party. Also, through NicholasReese.com you may be introduced to, or be able to access, information, surveys, Web sites, features, contests or sweepstakes offered by other parties. NicholasReese.com is not responsible for the actions or policies of such third parties. You should check the applicable privacy policies of those third parties when providing information on a feature or page operated by a third party.
While on our site, our advertisers, promotional partners or other third parties may use cookies or other technology to attempt to identify some of your preferences or retrieve information about you. For example, some of our advertising is served by third parties and may include cookies that enable the advertiser to determine whether you have seen a particular advertisement before. Other features available on our site may offer services operated by third parties and may use cookies or other technology to gather information. NicholasReese.com does not control the use of this technology by third parties or the resulting information, and are not responsible for any actions or policies of such third parties.
You should also be aware that if you voluntarily disclose Personally Identifiable Information on message boards or in chat areas, that information can be viewed publicly and can be collected and used by third parties without our knowledge and may result in unsolicited messages from other individuals or third parties. Such activities are beyond the control of NicholasReese.com and this policy.
This children’s privacy statement explains our practices with respect to the online collection and use of personal information from children under the age of thirteen, and provides important information regarding their rights under federal law with respect to such information.
This Site is not directed to children under the age of thirteen and we do NOT knowingly collect personally identifiable information from children under the age of thirteen as part of the Site. We screen users who wish to provide personal information in order to prevent users under the age of thirteen from providing such information. If we become aware that we have inadvertently received personally identifiable information from a user under the age of thirteen as part of the Site, we will delete such information from our records. If we change our practices in the future, we will obtain prior, verifiable parental consent before collecting any personally identifiable information from children under the age of thirteen as part of the Site.
Because we do not collect any personally identifiable information from children under the age of thirteen as part of the Site, we also do NOT knowingly distribute such information to third parties.
We do NOT knowingly allow children under the age of thirteen to publicly post or otherwise distribute personally identifiable contact information through the Site.
Because we do not collect any personally identifiable information from children under the age of thirteen as part of the Site, we do NOT condition the participation of a child under thirteen in the Site’s online activities on providing personally identifiable information.
Email: email (at) NicholasReese.com
NicholasReese.com reserves the right to change this policy at any time. Please check this page periodically for changes. Your continued use of our site following the posting of changes to these terms will mean you accept those changes. Information collected prior to the time any change is posted will be used according to the rules and laws that applied at the time the information was collected.
This policy and the use of these Sites are governed by Florida law. If a dispute arises under this Policy we agree to first try to resolve it with the help of a mutually agreed-upon mediator in the following location: Florida. Any costs and fees other than attorney fees associated with the mediation will be shared equally by each of us.
If it proves impossible to arrive at a mutually satisfactory solution through mediation, we agree to submit the dispute to binding arbitration at the following location: Florida, under the rules of the American Arbitration Association. Judgment upon the award rendered by the arbitration may be entered in any court with jurisdiction to do so.
NicholasReese.com is controlled, operated and administered entirely within Florida. This statement and the policies outlined herein are not intended to and do not create any contractual or other legal rights in or on behalf of any party.
Asura's Wrath Interview Part 1: Arms Aren't Everything
By Spencer . August 18, 2011 . 6:02pm
.hack creators CyberConnect2 and Capcom are creating an over-the-top action game where you play as an enraged demigod. Why is Asura vexed? He was betrayed by his fellow demigods and robbed of his strength. Asura was left dormant with his anger festering for 12,000 years. Meanwhile, his daughter Mithra is captured and is about to bring about a "great rebirth." If you missed our hands-on impressions, read those here, then check out our interview with Hiroshi Matsuyama, CEO of CyberConnect2, and Kasuhiro Tsuyachiya, Producer at Capcom.
Let’s go back to the beginning of Asura’s Wrath, you were envisioning a new game and decided to make a deity a main character and then you added sci-fi elements?
Hiroshi Matsuyama, CEO of CyberConnect2: When we started the development process for this game we were talking about what kind of game we want to make. The first thing we said is we don't want to make an action game. There were too many of those out there where you go from stage to stage and battle a boss at the end. We wanted to break the mold and try something new.
From the very beginning, the concept started mixing mythology and sci-fi with themes of anger. We also wanted to create a game that's not just for Japan, but something players around the world would enjoy as well. One of the main concepts in the game is Asura rising and falling. He is betrayed and badly beaten up, but he comes back in the end. The feeling you get when you watch a drama with cliffhangers at the end is a big part of that theme.
And that’s why you end with "to be continued"?
HM: [Laughs.] Yes, exactly!

Asura's Wrath, at least from the demo, has a number of quick time events (QTEs) and CyberConnect2 also had those in the Naruto Shippuden: Ultimate Ninja Storm games. What makes QTEs interesting? Is it because they make games more dramatic?
HM: Yes, we wanted to make the game more dramatic. That was our intention from the start and why we integrated quick time events seamlessly into the game. We get comments from a lot of people that when they see Asura's Wrath they think of the events in Naruto. I want everyone to know these are very different from the QTEs in Naruto. While they are seamless in both games, in Asura's Wrath the QTEs are designed so the players become Asura and feel for him. You mimic what Asura does, for example by moving the analog sticks you spread his hands out. We wanted to add a level of immersion to the game.
In those events, Asura gets fired up and grows more arms, but how else does Asura develop as a playable character?
Kasuhiro Tsuyachiya, Producer at Capcom: Asura's Wrath doesn't have a normal power-up system; it's kind of complicated. I'm sure you've seen the trailers where Asura is fighting with no arms. There will be situations where he has six arms, and six are of course better than two. Sometimes you will have zero arms, and "how am I going to fight with zero arms?" is the type of excitement we want players to feel.
You have to figure out how to fight in whatever state you’re in. It’s not a simple progression from level one to level two where you power up from two to six arms. One more thing… his six arm form is not his ultimate form.
HM: But, that doesn’t mean he’s going to get more arms! [Laughs.] He will get more powerful in different ways, which is based on his anger.
Our interview will continue tomorrow with discussion about Asura’s personality and how CyberConnect2 worked Asura’s highs and lows into combat. Read more stories about Asura's Wrath & Interviews & PlayStation 3 & Xbox 360 on Siliconera. Video game stories from other sites on the web. These links leave Siliconera. RSS | 计算机 |
100 million downloads of Apache OpenOffice
12 hours ago, Bass wrote:
https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces56
I'm not really an OpenOffice/LibreOffice user, but apparently some people are. This doesn't count Linux distros that bundle it, as well as LibreOffice which is a popular fork of it. I'm sure that the actual number of downloads is far greater than 100 million.

Unfortunately, the number of downloads is one of the most useless stats ever foisted on the internet. I was one of those 100 million, and I'm also one of an unknown number who used it once, then never opened it again. It's dirt cheap to download stuff; what is really interesting is how many are actively using it.
Bill Gates sure that Google Glass will be successful
12 hours ago, 00010101 wrote: *snip*
Microsoft did in fact let my project have a lot of free stuff, in fact it was free stuff they valued at well into 6 figures. Then earlier this year they offered this:
https://www.microsoft.com/BizSpark/plus/default.aspx
60k in Azure credits, which I kindly, or not so kindly declined depending on your definition of the word kind.

Then why are you always whining like a little girl?

And NO, I and my team am not special by any means.

And by you and your team, you actually mean just you. But that's not important. You clearly, and delusionally, believe that you are doing something so important that Microsoft should prostrate itself before you and . . . give you loads of free stuff. Not how it works.

I am looking for the free that says, "hey, we'll give you this for free with no strings attached, and because of our good will, we hope you do business with us in the future. We're not going to ask you to port over platforms or change your business plan for this one thing."

Ah, I see your problem. You don't understand how Google makes its money. Yes, Microsoft wants something in return, and so does Google. Google wants access to your customers' personal information and their habits (both online and offline), which they can sell to third parties or use for targeted advertising. That's why all their free stuff requires they sign up with Google to make it work. It's not that Google doesn't have strings; it's that you've just chosen to ignore them.

This patent is interesting though, and weirdly enough, if they can figure out a way to make it work then Microsoft will come up smelling like a privacy hero: releasing a technology that will protect normal people from all those creepy Glass wearers hanging around under mall staircases. That's probably how they'll advertise it, anyway.

And though you'd like to believe it, Gates filing a patent doesn't actually mean he thinks the product will be a success. Filing a patent costs next to nothing, and guarantees (more or less) that if Glass does take off, then Microsoft is ready to monetise protecting normal people from it. Heck, Google won't even allow Glass to be worn at their board meetings; what does that tell you?

Bill Gates sure that Google Glass will be successful
2 days ago, 00010101 wrote: *snip*
Well, I think a lot of companies are starting to put Microsoft tech behind them at this point. I know it still hasn't hit them financially worldwide, but the tail end of that will hit them soon enough. They were hurt bad enough in the retail markets that they started mass eBay-ing their goods at 70-80% off through various vendors. They also had a re-organization, so I think they are probably not in such good shape at this point. I can't say I wish them well, because I don't, but I hope that some of them that made good money have the sense to either fund or start new companies, because it looks like a few SV companies and funds are going to have complete monopoly control of the future of technology otherwise.

And this is all because they wouldn't let you have some free stuff. Amazing.

Heartbleed
3 hours ago, fanbaby wrote:
*facepalm* not again.. Anecdotes, like heartbleed, mean nothing. Open source is more secure for the reasons mentioned above.

Or we could simply ignore the problem, yes.
1 hour ago, bondsbw wrote: *snip*
I think this is the takeaway. As I understand, there was a formal review and the reviewer didn't catch it. But then again, why should your company trust an unpaid external reviewer when it comes to critical software like this? The unfortunate reality is that while open source is a great way to standardize and share code, it really isn't some panacea where the company can get code completely for free. The company either needs to hire reviewers for mission-critical open source software, or (more realistically) they need to hire an outside firm that does this and provides a certain level of guarantee/insurance. The one thing they don't need to do is assume anything mission-critical they find on Github is "good enough as-is".

Precisely!
5 hours ago, KirbyFC wrote: *snip*
This is one of the big myths surrounding open source. The concept of open source began many years ago when some programmers got together and said "Hey, wouldn't it be great if everyone made their source available to look at, modify, etc...." And it's a great idea -- IF you are a programmer. But that's the problem. The vast majority of people in the world are not programmers. In *THEORY* anyone can look at the source code. In *REALITY* the number of people looking at the source code is very small. Other than the people actually working on the code, very few people are looking at the source closely enough to find a serious problem. This is not meant as a criticism of open source, it is simply reality.

This is precisely my point. I think the problem is this notion of 'more eyes on code' when what we really should be concerned with is 'what eyes and if they're any good.' Now the past few outfits I've worked for have employed third parties to look for vulnerabilities, and I think that perhaps the OS community should look into setting up some sort of body of experts who can advise on this sort of thing. A well-maintained site so that volunteers know what's expected of them, can get advice on how to test for known vulnerabilities, how to avoid script injection etc.
I always thought that one of the advantages of open source code is that bugs are picked up more quickly because there are more eyes on the code.

Is this simply a case of 'not all OS projects are created equal' or do we need some sort of formal review process for critical stuff like this to actually prove it was written and tested by people who know what they're doing?
33 minutes ago, Bass wrote:
Why would you pay a yearly subscription for this when iWork is free and Apple puts serious effort into making it run well on iPad?

Because you've got an enterprise vault full of Word documents that you still need to update.
36 minutes ago, Sven Groot wrote: *snip* And as far as I can tell, it's only for the actual in-app payment of the first year. After that, MS gets 100%.

Nope, the in-app purchase is made through your iCloud account, so every time it is renewed, Apple gets 30%. This is why Amazon doesn't support purchases through the Kindle app on iOS. If you start your subscription on iOS then switch to Android, Apple will still get a cut, unless you stop the subscription and renew it on Android.
1 hour ago, cbae wrote: Edit: I just read Apple is getting 30% of the Office 365 subscription.
http://techcrunch.com/2014/03/27/apple-gets-its-30-take-on-office-365-subscriptions-microsoft-sells-through-office-for-ipad/
I can't believe they agreed to that.

Well they didn't really have a choice, did they? The Surface hasn't turned out to be as much of a draw as Microsoft had hoped, and the lack of Office on the iPad hasn't really put a dent in Apple's sales. Besides, if Apple had given Microsoft a free ride then their developers would have been, quite rightly, up in arms. Still, as someone has already pointed out, consumers are not going to sign up for an Office subscription, so most of the sales are going to be for enterprise users who still want to stick with Office (though from what I've seen, most outfits are happy with earlier versions that they don't have to shell out yearly for), and the enterprise customers are not going to be buying this through the app store.

Apple's ongoing developers are more important to Apple than Microsoft, so the 30% is no surprise; there was no way Apple could afford to p*** off its developers by giving MS a free ride on the app store. It's very telling that on the day of the launch, Tim Cook greets the new addition to the iPad and then goes on to tout their iWork, Evernote (a competitor to OneNote) and Paper (which has no comparison).
http://recode.net/2014/03/27/microsoft-is-selling-office-365-within-ipad-apps-and-apple-is-getting-its-30-percent-cut/

Nope, the 30% cut is no surprise. The only surprise is how long it took MS to cave in and agree to it.
Gaming Trend Forums
Topic: [PC-PR] War on Terror Video Game Now Available (Read 1066 times)
[PC-PR] War on Terror Video Game Now Available
War on Terror Video Game Now Available

Kuma\War: The War on Terror at Retail Outlets Nationwide

New York (Oct. 14, 2004) – Kuma Reality Games, the company that blends news coverage with interactive game technology to allow players to experience re-creations of real military events, announced today that Kuma\War: The War On Terror is now available for the PC at retail outlets nationwide. Kuma\War: The War On Terror is a compilation of some of the most critical battles fought in Iraq and Afghanistan since the beginning of the war, as released on the Kuma\War online service. The CD-based retail product contains 15 playable missions featuring military hotspots around the world including Mosul, Fallujah and Sadr City and includes units from the 10th Mountain Division, 101st Airborne, and the U.S. Marine Corps as they take on well-armed insurgents, al Qaeda, Taliban fighters, and terrorists throughout the world.

Kuma\War: The War on Terror also includes one exclusive mission only available in the retail product, detailing Navy Cross Medal recipient Marine Capt. Brian R. Chontosh and his comrades' extraordinary contribution to Operation Iraqi Freedom on March 25, 2003. Capt. Chontosh, who assisted in the Kuma\War re-creation, is currently stationed in Iraq on active duty with the 1st Marine Expeditionary Force.

Buyers of the retail product will also get many improvements over the version previously available online, including speed optimization, Win98 compatibility, new weapons and models, and gameplay enhancements to previously released missions. Kuma\War has been available online since March 2004, and there are currently 23 missions available for play through the online service. Later this week, players will be able to download "John Kerry's Silver Star," the first-ever realistic re-enactment of the controversial events surrounding John Kerry's Silver Star and Vietnam service.

Kuma\War provides players with intelligence gathered from news sources around the world and expert military analysis by Kuma's decorated team of military advisors who provide strategic and tactical perspectives on the events. At the start of each mission, players watch a video news show about the event and can view technical specifications for weapons used, a detailed chronology of the battle, and even satellite photos used to model the actual battlefield. Worldwide sources for the intelligence include the Associated Press and declassified Department of Defense documents, among others.

Kuma\War: The War on Terror is available at Best Buy and other fine retail outlets nationwide for an MSRP of $19.99 and is rated "M" for mature. An internet connection is not required to play. Also included with purchase is a one-month FREE subscription to the Kuma\War online service at www.kumawar.com, a $9.99 value. Each month, subscribers can expect to receive at least three new missions online that further explore the explosive situation in Iraq and other conflicts from the war on terror. As a salute to the men and women in uniform who have served their country, Kuma Reality Games will donate $1 of all paid online subscriptions to The Intrepid Fallen Heroes Fund, which was created to assist the families of the nation's fallen heroes killed in duty.

ABOUT KUMA REALITY GAMES
Kuma Reality Games builds re-creations of real-world events using advanced gaming tools. Kuma\War is available online to subscribers for free trial and download at www.kumawar.com and is available at retail nationwide at $19.99 MSRP.
Owners of the retail product will also receive a free month of the Kuma\War online subscription service (a $9.99 value). Each month Kuma\War online subscribers receive 3 new playable missions, video news shows, extensive intelligence gathered from news sources around the world, and insight from a decorated team of military veterans. Kuma\War is a first and third-person tactical squad-based military PC game that provides multiple updates monthly to the player's computer to reflect unfolding events in the real war. As of today, there are more than 23 playable missions online available for download. Kuma Reality Games, headquartered in New York, New York is a privately held company.
Grievous Angel
I think it's in poor taste to make video games about a war that's still claiming U.S. lives.
Owner, Port Royal Cutthroats, GGNL
The RPG Square
December 4, 2012 · 6:57 PM
The Rise of SquareSoft (Part 3) – It’s Hip to Be Square
Following their success on the Super Nintendo, Square had originally planned to continue to develop for Nintendo systems. They even created a tech demo rendering some of the Final Fantasy VI characters in 3D, which many thought would be a preview of what Final Fantasy might look like on the Nintendo 64. These plans would soon change, though, when a partnership between Nintendo and Sony fell through, which ended with Nintendo staying with cartridges for its new system and Sony deciding to enter the video game market with its CD-enabled PlayStation. With Sakaguchi and his team looking to push themselves with the expanded storage space offered by the CD format, Square controversially announced they would develop Final Fantasy VII for the Sony PlayStation.
Yoshinori Kitase was concerned that the franchise would be left behind unless it embraced 3D graphics like other new games at the time, and so Square made many advances with the new technology: Final Fantasy VII was the first in the series to feature a 3D world map, 2D pre-rendered backgrounds and character models rendered with polygons. Most famous, though, was the introduction of higher-quality Full Motion Videos (FMVs) that became a staple of the series.
Square didn’t just focus on graphics though, as the fantastic story of Final Fantasy VII was a joint effort written by Kazushige Nojima, Kitase and Masato Kato, based off an original draft by Sakaguchi. Previous Final Fantasy series artist Yoshitaka Amano was limited during the production due to other commitments and so Tetsuya Nomura, who previously had worked on Final Fantasy V and VI as a monster designer, was promoted to lead character designer. Even composer Nobuo Uematsu utilised the PlayStation’s internal sound chip to create songs with digitized voice tracks.
Final Fantasy VII was one of the most expensive games of its time and Sony advertised it heavily, especially in North America. It was also the first mainline title in the series to be released in Europe. The game was met with critical and commercial success upon its release and went on to sell 10 million copies worldwide. Final Fantasy VII is often regarded as one of the greatest games ever made and is recognised as the catalyst for popularising RPGs outside of Japan.
Final Fantasy VIII followed soon after VII and expanded on its foundations, presenting a more modern and futuristic world, as well as realistic and highly detailed characters again designed by Nomura. With Square's experience with 3D graphics growing, Final Fantasy VIII's presentation was much more consistent, and it allowed the designers to experiment more with gameplay mechanics, such as the junction system and the addictive card mini-game Triple Triad.
Final Fantasy IX was the last main installment to be developed for the PlayStation and briefly returned the series to its medieval fantasy roots. Hiroyuki Ito returned as director, while the character designs were handled by Hideo Minaba and were made more cartoonish to reflect the older games in the series; it also included black mages, crystals and lots of moogles. Sakaguchi has stated that Final Fantasy IX is his favourite in the series and that it most closely resembles what he initially envisioned Final Fantasy to be. The soundtrack is also said to be Uematsu's favourite composition.
Square seemed to be on a roll with the PlayStation, and as their popularity grew overseas, more of their other games found success as well. Masato Kato was handed directorial duties on Chrono Cross, and with returning composer Yasunori Mitsuda they created a bright and wonderful game that dealt with parallel dimensions and featured a cast of 45 different characters to recruit. The action RPG Legend of Mana was released with some of the most beautiful artwork ever seen in a video game and highlighted the talent of up-and-coming composer Yoko Shimomura, who would go on to score the two Parasite Eve games and many other big-name franchises in the years to come. Showing the enormous depth of talent at Square, Tetsuya Takahashi, who had smaller roles on games like Final Fantasy VI, directed the amazing Xenogears. It featured one of the most intricate and fascinating stories ever conceived and utilised a battle system that incorporated gameplay mechanics like combos found in a fighting game. It seemed like Square could do nothing wrong.
Sakaguchi was also a big fan of a small development studio known as Quest, which made the Ogre Battle games, and he convinced the director Yasumi Matsuno and his team to join Square. Their partnership created more mature and complex games such as the classic strategy RPG Final Fantasy Tactics and the dark and cinematic Vagrant Story.
With a whole new legion of fans from around the world, SquareSoft re-released some of their classic games to a new audience, and PlayStation ports of Final Fantasy I and II, Final Fantasy IV and Chrono Trigger, and Final Fantasy V and VI were given new life, their quality appreciated all over again. Square was now a household name and Final Fantasy was one of the biggest video game series ever. Could anything stop their seemingly endless supply of talent and creativity…?
Filed under Chrono Series, Editorial, Final Fantasy Series, Mana Series, Music, Parasite Eve Series, Uncategorized, Vagrant Story, Xenogears
Tagged as chrono cross, final fantasy vii, legend of mana, playstation, sakaguchi, sony, square, squaresoft, vagrant story, xenogears, yasumi matsuno
3 responses to "The Rise of SquareSoft (Part 3) – It's Hip to Be Square"

duckofindeed · December 11, 2012 at 5:18 AM
I wonder if Nintendo regrets missing out on "FFVII". They shouldn't have been so stubborn about sticking with cartridges. It's weird how "FF" games used to be on Nintendo consoles, and now they are on every console but Nintendo (with the exception of "Crystal Chronicles"). Anyway, interesting article. Now I see where Tetsuya Nomura finally started doing more.
trigger7 · December 12, 2012 at 9:29 PM
Yeah it definitely wasn't the best decision looking back, but it seemed like they couldn't reach an agreement with Sony over money. Nintendo tried again to create a CD system with Philips, but that was also a failure.
Pingback: The Rise of SquareSoft (Part 4) – No Going Back | The RPG Square
AppLift Blog
The mobile games marketing Blog.
April 24th, 2014 by Emilia Knabe |
Posted in AppLift news |
eCPM? API? The Top 21 Ad Tech Buzzwords Demystified
What some of the trendiest words in ad tech actually mean.
The mobile ad tech space has only been around for a few years but still long enough to accumulate a clutter of ad tech lingo that leaves newbies and onlookers to the industry equally confused. Sure, you’ve heard these terms before – but do you really know what they mean? If you no longer want to pretend to know but actually want to become clear on what’s behind these fancy acronyms and names, then the following glossary is for you – ranked in decreasing order of buzziness!
SDK
Acronym for "Software Development Kit"; in the classic software development sense, an SDK is a set of software development tools that enable a developer to create applications for a specific platform, i.e. Apple's iOS or Google's Android platform; in mobile ad tech, however, an SDK is simply a piece of code placed in mobile apps that enables communication with the publisher's application and advertising software platforms. SDKs have a wide range of uses, among them analytics and monetization of mobile applications. Whereas an API serves standard and native formats, an SDK serves interstitial and rich media formats.
Ad Server
A web server that stores and manages ads and delivers them to consumers from an ad network or a different provider. The ad server also performs various other tasks like counting the number of impressions / clicks for an ad campaign and generation of reports. Ad Servers can belong to ad networks or are available as white label solutions.
Ad Tag
A piece of HTML or javascript code placed on a publisher’s mobile website that enables him to sell ad space. The ad tag requests an ad from an ad server and calls upon the browser to open an iframe which then holds the advertisement shown.
API
Acronym for "Application Programming Interface"; a set of rules that enables communication between machines such as a server, a mobile phone or a PC. APIs are usually included in SDKs and enable ad serving of standard and native ad formats in an automated way. They have a wide range of usages in the mobile ad tech sector.
Mediation
Commonly referred to as "ad network mediation"; a technology which delivers an integrated portfolio of ad networks to publishers and enables them to sell their inventory to the different ad networks through one single channel; mediation is possible at SDK and API level and serves as a means for publishers to increase their fill rate.
DSP
Acronym for Demand Side Platform; a technology that enables advertisers to buy impressions across a range of publisher inventory targeted towards specific users, based on information such as their location and previous browsing behavior. Publishers make their ad impressions available through marketplaces on ad exchanges or SSPs, and DSPs decide which ones to buy based on the information they receive from the advertiser. Often the price of those impressions is determined by a second-price auction, through a process known as real-time bidding. That means there's no need for human salespeople to negotiate prices with buyers, because impressions are simply auctioned off to the highest bidder who then pays the price of the second highest bidder.
SSP
Acronym for Supply Side Platform; a technology that gathers various types of advertising demand for publishers including demand from traditional ad networks as well as ad exchanges. This demand is aggregated by Demand Side Platforms which plug into an SSP to bid on publisher's inventory, using Real-Time-Bidding.
DMP
Acronym for Data Management Platform; a centralized computing system for collecting, integrating and managing large sets of data from first-, second-, and third party data sources. It provides processing of that data, and allows a user to push the resulting segmentation into live interactive channel environments.
Impression
A metric that describes the number of times an ad is displayed; an impression occurs each time a consumer is exposed to an advertisement.
Ad Exchange
A technology platform that facilitates the buying and selling of online media advertising inventory from multiple ad networks through bidding practices. It functions as a sales channel between publishers and ad networks and can provide aggregated inventory to advertisers. Ad exchanges' business models and practices may include features that are similar to those offered by ad networks.
Ad Network
A company that connects advertisers to publishers. It aggregates inventory from publishers to match it with advertiser demand. Ad networks use central ad servers to deliver advertisements to consumers which enable targeting, tracking and reporting of impressions.
Ad Inventory
The number of advertisements, or amount of ad space, a publisher has available to sell to an advertiser. In mobile ad tech, ad inventory is often valued in terms of impressions that the publisher can deliver to the advertiser.
Fill Rate
The ratio of ad requests that are successfully filled in relation to the total number of ad requests made, expressed in percentage.
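In other words (with made-up numbers):

def fill_rate(filled_requests, total_requests):
    # Percentage of ad requests that actually came back with an ad.
    return 100.0 * filled_requests / total_requests

fill_rate(85_000, 100_000)  # 85.0, i.e. an 85% fill rate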
Creative
The concept, design and artwork of an ad; a creative can come in various formats such as: GIF, JPEG, JavaScript, HTML, and Flash.
CPM
Cost per Mille; the price paid by an advertiser to a publisher displaying their ad 1000 times.
CPC
Cost per Click; the price paid by an advertiser to the publisher for a single click on the ad that brings the consumer to its intended destination.
CPI
Cost per Install; the price paid by the advertiser for each installation of a mobile app linked to the advertisement.
eCPM
Effective Cost per Mille; a metric for measuring revenue generated across various marketing channels; it is calculated by dividing total earnings by the total number of impressions in thousands.
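As a quick sketch with hypothetical numbers, the calculation looks like this:

def ecpm(total_earnings, impressions):
    # Revenue per thousand impressions, whatever pricing model earned it.
    return total_earnings / impressions * 1000

ecpm(450.0, 300_000)  # 1.5, i.e. a $1.50 eCPM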
RTB
Acronym for Real Time Bidding; a technology that conducts a real-time auction of available mobile ad impressions by receiving bids from multiple demand sources such as DSPs within a set time interval (typically 100ms) and then delivering the ad to the winning bidder; usually a second-price auction where the highest bidder pays the price bid by the second-highest bidder.
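A toy version of that second-price rule might look like the following; real exchanges add bid floors, timeouts and many other checks, and the bidder names here are invented:

def run_auction(bids):
    # bids maps bidder name -> bid price; the winner pays the second-highest bid.
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, highest_bid = ranked[0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else highest_bid
    return winner, clearing_price

run_auction({"dsp_a": 4.20, "dsp_b": 3.80, "dsp_c": 2.50})
# ('dsp_a', 3.8): dsp_a wins the impression but pays dsp_b's price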
Rich Media
Commonly defined as a broad range of advertisements using Flash or HTML5 technology that exhibit dynamic motion like moving, floating or peeling down and occur either over time or in direct response to a user interaction. Rich media formats include video ads and interactive ads that require the engagement of the user.
Programmatic Buying
Buying inventory in an automated way on an RTB exchange or another automated system.
Still confused? Don’t hesitate to drop us a line and we’ll be happy to get back to you!
To make sure not to miss AppLift's insights, subscribe to our newsletter through the form below:
April 23rd, 2014 by Thomas Sommer |
Webinar: Leveraging Customer Lifetime Value for user acquisition campaigns
On Wednesday, April 23 we held the second episode of AppLift Webinars on how to leverage Customer Lifetime Value (LTV) for mobile user acquisition campaigns. We had the pleasure of hosting 3 industry experts: (more…)
April 17th, 2014 by Thomas Sommer |
Google Play catches up on iOS and 4 other stories you shouldn’t miss this week!
Get the 5 most interesting stories from the week that was in the mobile industry.
Also, don’t miss our webinar next week: “Leveraging LTV for user acquisition campaigns”. Wed, April 23 at 9:30am PDT/6:30pm CEST.
Click here to register now!
Happy Easter! (more…)
Posted in Android |
Google Play Developer Program Policy Update: Why It’s No Threat To Native Advertising
A couple of weeks ago, Google updated their Play Store Developer Program Policies with a few substantial changes, notably in terms of app content, app store promotion and advertising. While this is somewhat old news, there are a few points we’d like to focus on more specifically for what they mean for the industry as a whole, and for native advertising in particular. (more…)
April 11th, 2014 by Emil Damholt |
Posted in Blog |
Key trends in the Chinese mobile market and 4 other stories you shouldn’t miss this week!
Get the 5 most interesting stories from the week that was in the mobile industry. (more…)
reference page
Originally published as:
Robert Throop and Lloyd Gordon Ward. "Mead Project 2.0." Toronto: The Mead Project (2007).
A brief introduction to the October 2007 revision.
Site Navigation Mead Project Inventory
the Web Mead Project
Mead Project 2.0
Welcome to the October 2007 edition of the Mead Project. We would like to thank everyone who complained about the look and feel of the old site over the past year or so. We wanted to ignore you, but your criticisms were valid. Except for minor (and usually ill-considered) cosmetic changes, nothing had really changed since we first mounted the site more than a decade ago. That meant that we had not taken advantage of developments in Web technologies since the mid-1990s. So we spent a few days learning about cascading style sheets and "code validators," and invested most of the summer in the redesign and reconstruction of the site, bringing it up to W3C's standards, cleaning out more than 300,000 compliance issues and errors — tedious work but we hope that you think it was worth the effort. Most of the site has been tested against three of the "big four:" Internet Explorer� Firefox� and Opera.� We are still testing Safari.� We are a little PC-centric and didn't know that Apple had a Window's version of the browser until mid-October.But once again, if you have problems, let us know.
Despite the structural revisions, we assure you that an abundance of content errors remain. That's both a warning and a request for your assistance. When you run across something that looks wrong or just plain strange, send us a note. We will try to correct it in a more expeditious manner than has been the case in the past.
What has changed
Probably the most obvious change is our use of cascading style sheets (CSS). Every moment invested in learning the basics of CSS has paid off. While encouraging simplicity, style sheets allow for easy experimentation and encourage a playfulness that otherwise you might not entertain. Admittedly, in our case the result is an even more boring design. We have given up any pretense to being clever or creative. After a bit of playing around, we borrowed the "new look" from the old "reprint series" that were ubiquitous when we were students. The design worked for print, we think it works for the web. What it lacks in inspiration, it has gained in simplicity.More importantly, the two-panel design has made it easier to add in material that we always intended to add but hadn't taken the time to put together: the "Related Documents" section. So far, we have tied together only a handful of documents but that should change over the next year. The same is true of the "Editors' Note" section. We hope to give you a better hint to why each document appears as part of the site. In the course of revision, the cutesy names once used for sections of the site dedicated to particular writers (Baldwin, Dewey, Cooley, James, Mead, Sherif, and Veblen) have disappeared. When the longer names used for the other sites didn't fit into the design we dumped them, opting instead for a simpler structure. Part of that change we regret. We were always fond of the name George's Page. Over the years we had taken a lot of flack from scholars. But for us, it evoked the informality and usefulness that the World Wide Web was conceived to support. Those early days are long gone and the Web has become an integral part of education. It was time to put away childish things, so we did. What has been added
Over the past five years, active work on the Project has shifted focus from Mead to those related to Mead's work. We have started with two important contributors to Social Psychology: William Isaac Thomas and Floyd Henry Allport. Over the years, both men have acquire an heroic stature. Much of that reputation has been based on often-repeated stories and undocumented assertions about their work. In both cases, their contributions were important but, we believe, misunderstood. Untangling the myths from documentable realities has proven to be a great hobby, but it has consumed time we should have been investing in the site. Reference pages
To merge that work into the site, we have added research notes we put together to understand the context of their work. Like this page, they typically bear the heading "A Mead Project reference page." The majority of the notes focus on aspects of Thomas's career, many documenting what appear to be only tangentially related issues, but other scholars may find them useful.
Clipping pages
The same work has also taken us into newspaper archives, particularly the New York Times and the Chicago Tribune. There are now hundreds of newspaper stories included in the site. A lot of the material may look unrelated to Social Psychology, but it is related to careers of Thomas or Allport. Again, our hope is that other readers will find the material useful.
Scrapbook pages The new material has blurred any focus that the Inventory page may once have given the Project. We began assembling pages of links that pull together subsections of the material, electronic scrapbooks with brief notes. The notes have grown into essays. We are sure that the first set will seem entirely tangential to Social Psychology. We wanted to start with the easiest bits. Bear with us, the link to the
Project's goals will become more evident as more appear.The essays haven't been run through a peer review process. That seems unfair. At points the notes are critical of published work and it would be impolite (arrogant?) not to invite others to comment on what we have put out there for anyone to read. We encourage readers to send their comments to us as email. For the foreseeable future, we will link any signed remarks to the page so that readers are aware of divergent interpretations. We hope to move the Scrapbook pages into a context will allow readers to attach their responses to our ideas directly. But until that time, we will include them as part of the scrapbook page itself.The Result of the Additions
The shift in focus has changed the nature of the Project in a way which we hadn't expected and didn't really notice until this revision. Back in the late 1980s, we started the project as a "work around" for a situation that we found personally frustrating. We believed that widely-held beliefs about Mead's ideas were misinterpretations. But his published statements were often difficult to obtain. It was easier for scholars to rely from the secondary literature about Mead than to consult primary sources. As a result, those frustrating misinterpretations persisted. Our solution: republish as much of Mead as possible in machine-readable form to make distribution, familiarity, and study easier. When the Web was established, we abandon plans for a CD and prepared the documents for the new medium. George's Page was born.When we restricted our work to Mead, we constrained "editorial license" by striving for completeness. If Mead wrote it, we have tried to publish it. Some documents continue to evade our best efforts, but it remains the most complete collection of Mead's writing available. As we added material by other writers, we made a conscious decision not to follow the same path with everyone. We are fast approaching 4,000 source document web-pages and documents by others writers far outnumber those by Mead. That change is particularly obvious in the clipping and reference pages. More than any other set of pages, those new source documents make the Project look far more "scattershot." That is not necessarily a bad thing, but it has become far more personal. That is especially evident in the Scrapbook pages. We don't apologize for the change but readers should be warned that as a collection it undoubtedly has lacunae, the "blind spots" in our perspective.
"Promises to Keep"
Finally, although you will find references to several scrapbook pages, only a few of the scrapbook pages are ready for circulation. The missing pages should appear in another update completing the "Thomas cycle" in December 2007, and another documenting the "Allport cycle" in June of 2008. Luther Lee Bernard and Jacob Robert Kantor will follow, then Louis Thurstone and Herbert Blumer. We will attempt to keep things balanced between the sociological and psychological streams.
No notes. ©2007 The Mead Project.
This page and related Mead Project pages constitute the personal web-site of Dr. Lloyd Gordon Ward (retired), who is responsible for its content. Although the Mead Project continues to be presented through the generosity of Brock University, the contents of this page do not reflect the opinion of Brock University. Brock University is not responsible for its content.
Fair Use Statement: Scholars are permitted to reproduce this material for personal use. Instructors are permitted to reproduce this material for educational use by their students.
Otherwise, no part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording or any information storage or retrieval system, for the purpose of profit or personal benefit, without written permission from the Mead Project. Permission is granted for inclusion of the electronic text of these pages, and their related images in any index that provides free access to its listed documents.
The Mead Project, c/o Dr. Lloyd Gordon Ward, 44 Charles Street West, Apt. 4501, Toronto Ontario Canada M4Y 1R8
ISSN Home
ISSN International Centre ISBN Agency, R.R. Bowker National Information Standards Organization (NISO) U.S. Copyright Office
Copyright Clearance Center U.S. Patent & Trademark Office
What Do the Letters ISSN Stand For?
International Standard Serial Number. According to the pertinent national and international standards (ISO 3297; ANSI/NISO Z39.9) the abbreviation"ISSN" denotes the singular and plural forms, according to context. Why Do I Need an ISSN?
The ISSN can be thought of as the social security number of the serials world. Just as an individual's social security number is used in many automated systems to distinguish that person from others with the same or similar names, the ISSN distinguishes a particular serial from others with which it might be confused. The ISSN also helps library patrons, libraries, and others who handle large numbers of serials to find and identify titles in automated systems more quickly and easily. Does the ISSN Have Any Meaning Embedded in the Number?
Unlike the ISBN, which contains country and publisher prefixes, the ISSN contains no inherent meaning. Why Do Some ISSN End in an "X"?
An ISSN is composed of eight digits: the arabic numerals 0 to 9, except that in the case of the last digit, which is a check digit, the upper case X can appear. The purpose of the check digit is to guard against errors caused by the incorrect transcription of the ISSN. The method of determining the check digit for the ISSN is the modulus 11 basis, using the weighting factors 8 to 2. In the case of the ISSN, the Roman numeral X is used where the check digit calculation results in a remainder of 10. Who Assigns ISSN?
ISSN are assigned by a network of over 60 centers worldwide coordinated by the ISSN International Centre located in Paris. ISSN are assigned to serials published in the United States by the U.S. ISSN Center at the Library of Congress. Serials published outside of the United States are assigned ISSN by the national center in their country of publication, or, in the case of countries lacking a national center, by the ISSN International Centre. Information about the ISSN network and ISSN centers worldwide can be found on the ISSN International Centre's home page. Who Is Eligible to Obtain ISSN From the U.S. ISSN Center?
The U.S. ISSN Center generally only assigns ISSN at the direct request of the publisher or an agent (such as an attorney) acting on the publisher's behalf. Libraries and other ISSN users interested in obtaining ISSN should contact the head of the U.S. ISSN Center, Regina Reynolds, [email protected], to discuss other possible arrangements. How Do I Get an ISSN for a U.S. Serial?
U.S. publishers should complete an application form and send it to the U.S. ISSN Center together with a representation of the serial (either a sample issue, or a photocopy of the cover, title page (if present), masthead, publisher information, and any other pages giving information about the serial. How Much Does It Cost to Get an ISSN?
There is no charge for the assignment of the ISSN, or for the use of an ISSN once assigned. (However, the Library of Congress incurs substantial costs to staff and maintain the U.S. ISSN Center. Additionally, the Library of Congress is assessed a considerable fee to belong to the ISSN Network.) Do I Need a Separate ISSN for Each Issue?
No. ISSN are assigned to the entire serial and stay the same from issue to issue unless you change the title of your serial in any way except to increment the date (e.g., The World of Serials 1996 to The World of Serials 1997). What Happens if I Change My Title?
Title changes are costly for libraries and can be costly to publishers as well. If you must change the title, please apply to the U.S. ISSN Center for a new ISSN at least a month in advance. If you are in doubt as to whether a contemplated title change would require a new ISSN, please contact the center ([email protected]). The Whats in a Name? brochure has further information about the costs of serial title changes. How Many ISSN Do I Need?
That depends. For most serials one ISSN for each title under which it has been published is sufficient. But, if your serial is published in different language, regional, or physical editions (e.g., print, electronic), you will probably require a separate ISSN for each edition. Further information about electronic serials is available. Where and How Do I Print the ISSN?
The preferred location for printing the ISSN on a printed serial is on the upper right-hand corner of the cover. Other good locations are the masthead area, the copyright page, or in the publishing statement where information about the publisher, frequency, and other publication facts are given. On a non-print serial, the ISSN should be printed, if possible, on an internal source, such as on a title screen or home page. Other suggested locations on non-print serials are on external sources such as microfiche headers, cassette or disc labels, or other containers. If a publication has both an ISSN and an ISBN, each should be printed. If a publication is in a series which has its own ISSN, both ISSN should be printed, accompanied by the title to which it pertains. Do I Have to Send You Each Issue I Publish?
No. The ISSN office only needs to see one published issue either at the time of registration, or after publication, for ISSN issued prior to the publication of the first issue of a serial. However, please see Copyright Circular 7d, Mandatory Deposit of Copies or Phonorecords for the Library of Congress [PDF, 135K] for information on Copyright deposit requirements you may be subject to. What Is the ISBN?
ISBN or International Standard Book Number is the book counterpart to the ISSN. It is a national and international standard identification number for uniquely identifying books, i.e., publications that are not intended to continue indefinitely. Can a Publication Have Both an ISSN and an ISBN?
Yes. This situation occurs most commonly with books in a series and with annuals or biennials. The ISBN identifies the individual book in a series or a specific year for an annual or biennial. The ISSN identifies the ongoing series, or the ongoing annual or biennial serial. What Is the Relationship Between ISSN and CIP?
CIP or Cataloging in Publication information is only available for books. So, unless the cataloging in publication data is for an individual book in a series, a publication will not normally be eligible for both cataloging in publication and ISSN. What Is the Relationship Between ISSN and Copyright?
There is no connection between Copyright and ISSN. Having an ISSN does not confer any Copyright protection, nor does sending a serial to the Copyright office eliminate your need to send the U.S. ISSN Center a sample issue of a serial for which you were given a prepublication ISSN. Does Registering a Title with an ISSN Mean No One Else Can Use It?
No. Getting an ISSN for a title does not confer any exclusive rights to that title. Nor can titles be copyrighted. The best way to protect a title is to register it with the U.S. Patent & Trademark Office. Does Having an ISSN Mean I Can Mail My Serial at Special Postal Rates?
No. The U.S. Postal Service uses the ISSN as an identification number for certain publications mailed at second class postage rates, but all publications have to meet the same requirements for a second class mailing permit regardless of whether they have an ISSN or not. Contact your local postmaster about obtaining a second class mailing permit. How Are ISSN Used in Bar Codes?
The ISSN is used in several bar codes as the title identifier portion of the code. One such code, the SISAC bar code symbol, can be found on scholarly, technical, medical and other subscription-based serials. The SISAC symbol is used by libraries and library-affiliated organizations. The symbol can also represent articles within journals and is used by document delivery services. The other major bar code that uses the ISSN is the EAN (International Article Number). The EAN is used in the U.S. by major bookstore chains for trade and other book publications. It is used extensively in the U.K. for magazines.
Although the ISSN is used as an element of the above bar codes, NSDP does not issue the actual bar codes. Further information concerning the SISAC bar code symbol is available from Publication ID Division of Product Identification & Processing Systems, Inc. (PIPS), on the Web at http://www.pips.com/ Back to Top
Download the Adobe Acrobat Reader
to view PDF documents.
Note: the privacy practices set forth in this privacy policy are for this web site only. If you link to other web sites, please review the privacy policies posted at those sites.
We collect personally identifiable information, like names, postal addresses, email addresses, etc., when voluntarily submitted by our visitors. This information is only used to fulfill your specific request, unless you give us permission to use it in another manner, for example to add you to one of our mailing lists.
Cookie/Tracking Technology
The Site may use cookie and tracking technology depending on the features offered. Cookie and tracking technology are useful for gathering information such as browser type and operating system, tracking the number of visitors to the Site, and understanding how visitors use the Site. Cookies can also help customize the Site for visitors. Personal information cannot be collected via cookies and other tracking technology, however, if you previously provided personally identifiable information, cookies may be tied to such information. Aggregate cookie and tracking information may be shared with third parties.
Distribution of Information
We may share information with governmental agencies or other companies assisting us in fraud prevention or investigation. We may do so when: (1) permitted or required by law; or, (2) trying to protect against or prevent actual or potential fraud or unauthorized transactions; or, (3) investigating fraud which has already taken place. The information is not provided to these companies for marketing purposes.
Commitment to Data Security
Your personally identifiable information is kept secure. Only authorized employees, agents and contractors (who have agreed to keep information secure and confidential) have access to this information. All emails and newsletters from this site allow you to opt out of further mailings.
Privacy Contact Information
If you have any questions, concerns, or comments about our privacy policy you may contact us using the information below:
We reserve the right to make changes to this policy. Any changes to this policy will be posted.
About Metricfire
Metricfire is based in Berkeley, CA and Dublin, Ireland.
Dave Concannon - Co-founder Dave has spent the last decade producing code for organizations such as the United Nations, the Irish government, and major universities. He’s also worked in the trenches with a number of interesting startups.
Charlie von Metzradt - Co-founder
An Irish software engineer with a grudge, Charlie has long been unhappy with the current state of software performance measurement tools for developers. He has worked on one of Ireland’s biggest websites and in the video games industry, where his code has been used by a couple of million concurrent users.
Metricfire Blog
Copyright © 2012 Metricfire Limited. All Rights Reserved.
Registered Company number 509010. Registered Office: 4 Percy Place, Dublin 4, Ireland
Secure online payments provided by 2Checkout.com, Inc.
Learn More about the unique international organization and its history.
Explore the hidden treasures of the United Nations Headquarters.
150 Interactive Panoramas
More than Two Hours Video
115 Gifts and Artworks The History of the UN
Over 40 Interviews All 192 Member States
Unpublished Photos and Videos Tour of the Headquarters in New York City Over 90 Menus
Around 500 different Screens
For PC and Macintosh Computer
"A Virtual Tour of the United Nations" is a new multimedia CD-ROM adventure which provides an unrivaled source of knowledge about the many facets of this important organization.
Co-produced by Spinning Eye and the UN's Department of Public Information, "A Virtual Tour of the United Nations" features an inside look of the UN, its main bodies and the people that work to make the world a better place. Spinning Eye's team of producers spent over six months taking photographs, filming and researching on-site at UN Headquarters.
This CD-ROM is a must have for anyone interested in the United Nations and a perfect addition to the collection of someone who has always wanted to take a journey through this unique organization. "A Virtual Tour of the United Nations" is a powerful teaching tool that combines an engaging narrative with the liveliness of interactive panoramas and interviews.
© 2010 Spinning Eye Inc. - [email protected]
Microsoft's Anti-Spyware program is causing troubles for people who also use Symantec's Norton Anti-Virus software; apparently, a recent update to Microsoft's anti-spyware application flags Norton as a password-stealing program and prompts users to remove it. According to several different support threads over at Microsoft's user groups forum, the latest definitions file from Microsoft "(version 5805, 5807) detects Symantec Antivirus files as PWS.Bancos.A (Password Stealer)."
When Microsoft Anti-Spyware users remove the flagged Norton file as prompted, Symantec's product gets corrupted and no longer protects the user's machine. The Norton user then has to go through the Windows registry and delete multiple entries (registry editing is always a dicey affair that can quickly hose a system if the user doesn't know what he or she is doing) so that the program can be completely removed and re-installed. I put in calls to Microsoft and to Symantec on this issue, but am still waiting to hear back from both companies. Microsoft said it is shipping updates that fix this problem, but judging from the growing number of other threads on this in that forum, this is shaping up to be a pretty big issue for companies that have deployed Microsoft's free anti-spyware product inside their networks. It's a good idea to keep in mind that Microsoft's Anti-Spyware product is in beta mode: The company's product page explicitly says that Microsoft Anti-Spyware should not be deployed in production systems. I'm not apologizing for Redmond in any way; it just seems like too many people ignore warnings about beta products.
Update: 10:58 p.m. ET: I heard from Microsoft, and they say the problem is limited to customers running Symantec Antivirus (SAV) Corporate Edition versions 7, 8, 9 or 10 or Symantec Client Security (SCS) versions 1, 2 or 3 in combination with Windows AntiSpyware Beta 1. "The beta software will prompt and allow the user to remove a registry key containing subkeys belonging to these Symantec products. The deletion of these registry keys will cause all versions of the SAV and SCS software to stop operating correctly. No files are removed in this situation, only registry keys."
The rest of the statement Microsoft sent me says: "Once this issue was discovered, Microsoft quickly released a new signature set (5807) to remove this false positive. Both companies are working jointly together to identify the number of affected customers, which we believe to be very limited. Microsoft and Symantec are working jointly on a solution to restore normal operation of the Symantec software. Until this solution is available, customers can utilize System Restore in Windows XP to restore to an earlier point prior to the removal of the registry keys, or reinstall their client software." By Brian Krebs
Categories: Latest Warnings
Wednesday, 26 December 2012 12:03 HTML 5 has arrived Written by Nick Farrell
Add new comment All ready go to HTML5, which has been in the works since Roman times, is now officially "feature complete." According the standards-setting Worldwide Web Consortium (W3C). There's still some testing to be done, and it hasn't yet become an official Web standard but now it is safe to say that there won't be any new features added to HTML5.It means that Web designers and app makers now have a "stable target" for implementing it by the time it comes a standard in 2015. The HTML5 language lets developers deliver in-the-browser experiences that previously required standalone apps or additional software like Java, Adobe's (ADBE) Flash or Microsoft's (MSFT, Fortune 500) Silverlight. It supports lightning-fast video and geolocation services, offline tools and touch, among other bells and whistles.It has taken more than a decade for the standard to be developed. W3C CEO Jeff Jaffe said in a prepared statement said that as of today, businesses know what they can rely on for HTML5 in the coming years. "Likewise, developers will know what skills to cultivate to reach smart phones, cars, televisions, e-books, digital signs, and devices not yet known," he added.The latest versions of Microsoft Internet Explorer, Google Chrome, Mozilla Firefox and Apple Safari are already compatible with most HTML5 elements. W3C is already working on HTML 5.1, the first parts of which were just submitted in draft form. Published in
« Two APUs headed to someone’s stockings Lonely Planet shuts down travel forum » | 计算机 |
Because Whistler is Microsoft's response to customer feedback on Windows 2000, here's my lists of wants.
By Jeremy Moskowitz03/01/2001
People who know me know I love Windows 2000. I'm often guilty of espousing the countless aspects where it triumphs over Windows NT (and its competitors): the ease of management, its IntelliMirror features and increased stability, to name just a few. Currently in development, as we've reported in these pages, is the next iteration of Win2K, code-named Whistler. With the release of a new operating system comes the opportunity to enhance current features and add some new ones. The purpose of this article is to voice comments from administrators and consultants who say that Win2K "has missed the mark" in certain areas. It's not meant to attack Microsoft's efforts to make Win2K the feature-rich and stable product it is. It's my hope that these current Win2K issues will be addressed in the Whistler release to
LET’S BEGIN OUR LOOK AT THE DETAILS of the Unified Modeling Language (UML) by exploring how we do basic modeling of things and concepts in the real world. Classes and Objects A class is a collection of things or concepts that have the same characteristics. Each of these things or concepts is called an object. An object that belongs to a particular class is often referred to as an instance of that class. You can think of a class as being an abstraction and an object as being the concrete manifestation of that abstraction. The class is the most fundamental construct within the UML. Reasons why this is so include the following: Classes define the basic vocabulary of the system being modeled. Usin g a set of classes as the core glossary of a project tends to greatly facilitate understanding and agreement about the meanings of terms. Classes can serve as the foundation for data modeling. Unfortunately, there is no standard for mapping between a set of classes and a set of database tables, but people like Scott Ambler1 are working to change that. Classes are usually the base from which visual modeling tools—such as Rational Rose XDE, Embarcadero Describe, and Sparx Systems’ Enterprise Architect—generate code. The most important characteristics that classes share are captured as attributes and operations. These terms are defined as follows: Attributes are named slots for data values that belong to the class. Different objects of a given class typically have at least some differences in the values of their attributes. Operations represent services that an object can request to affect behavior. (A method is an implementation of an operation; each operation of a given class is represented by at least one method within each of the objects belonging to that class.) The standard UML notation for a class is a box with three compartments. The top compartment contains the name of the class, in boldface type; the middle compartment contains the attributes that belong to the class; and the bottom compartment contains the class’s operations. See Figure 1-1. Figure 1-1. Class notation
You can, however, show a class without its attributes or its operations, or the name of the class can appear by itself (see Figure 1-2). Figure 1-2. Alternate class notations
The level of detail you choose to show for your classes depends on who is reading the diagrams on which they appear. For example, a stakeholder who’s focused on the “big picture” is probably interested only in the names of the classes, while a developer working at a more detailed level probably wants to see a full set of attributes and operations. You can also “mix and match” nota tions in a given context. Figure 1-3 shows some examples of classes. Figure 1-3. Sample classes
The names of the classes, attributes, and operations in Figure 1-3 adhere to conventions that aren’t carved in stone but are in fairly wide use. These conventions are as follows: Class names are simple nouns or noun phrases. Each word is capitalized. Attribute names are also simple nouns or noun phrases. The first word is not capitalized, but subsequent words are. Acronyms tend to appear in all uppercase letters. Operation names are simple verbs. As with attributes, the first word is not capitalized and subsequent words are; acronyms tend to appear in all uppercase letters here as well. Note that all words in class, attribute, and operation names are generally run together, as shown in Figure 1-3. Whether you choose these simple conventions—or more elaborate ones— the naming of classes, attributes, and operations should be consistent with the language or platform that you’re using or with your company-specific coding standards. NOTE The title attribute of the Book class has an associated data type(String), whereas the other three attributes in the figure (emailAddress, ID, and password) don’t have types. Note also that each of the three operations (verifyPassword, assignRating, and computeAvgRating) has a different appearance. There are various kinds of details that you can attach to attributes and operations. These are explored in the section “Attribute and Operation Details,” later in this chapter. It’s often desirable to define explicit responsibilities for a class. These represent the obligations that one class has with regard to other classes. Figure 1-4 shows how you can use an extra compartment within a UML class box to indicate responsibilities for a class. Figure 1-4. Class responsibilities | 计算机 |
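To close the loop between the notation and actual code, here is a rough Python sketch of how classes like those in Figure 1-3 might look. Only the Book/title pairing is stated explicitly in the text above; grouping emailAddress, ID, password and the three operations under a single hypothetical Account class is our assumption, not the book's figure, and the UML-style camelCase names are kept for traceability even though they are not idiomatic Python:

```python
class Book:
    def __init__(self, title: str):
        self.title = title  # attribute with an explicit type (String in the UML)


class Account:  # hypothetical grouping for illustration
    def __init__(self, emailAddress, ID, password):
        # attributes shown in the figure without declared types
        self.emailAddress = emailAddress
        self.ID = ID
        self.password = password
        self.ratings = []

    def verifyPassword(self, candidate) -> bool:
        """Operation: check a supplied password against the stored one."""
        return candidate == self.password

    def assignRating(self, rating: int) -> None:
        """Operation: record a rating."""
        self.ratings.append(rating)

    def computeAvgRating(self) -> float:
        """Operation: average of the ratings recorded so far."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0
```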
Strategic supercomputing comes of age
This composite image is taken from a three-dimensional simulation performed to help scientists better understand the sequence of events that led to the containment failure of the Baneberry underground test in December 1970.
As the National Nuclear Security Administration's (NNSA's) Advanced Simulation and Computing (ASC) Program prepares to move into its second decade, the users of ASC's enormous computers also prepare to enter a new phase. Since its beginning in 1995, the ASC Program (originally the Accelerated Strategic Computing Initiative, or ASCI) has been driven by the need to analyze and predict the safety, reliability, and performance of the nation's nuclear weapons and certify their functionality--all in the absence of nuclear weapons testing. To that end, Lawrence Livermore, Los Alamos, and Sandia national laboratories have worked with computer industry leaders such as IBM, Intel, SGI, and Hewlett Packard to bring the most advanced and powerful machines to reality.
But hardware is only part of the story. The ASC Program also required the development of a computing infrastructure and scalable, high-fidelity, three-dimensional simulation codes to address issues related to stockpile stewardship. Most important, the laboratories had to provide proof of principle that users could someday have confidence in the results of the simulations when compared with data from legacy codes, past nuclear tests, and nonnuclear science experiments.
Efforts are now successfully moving beyond that proof-of-principle phase, notes Randy Christensen, who leads program planning in the Defense and Nuclear Technologies (DNT) Directorate and is one of the founding members of the tri-laboratory ASC Program. Christensen says, "With the codes, machines, and all the attendant infrastructure in order, we can now advance to the next phase and focus on improving the physics models in our codes to enhance our understanding of weapons behavior." Livermore's 12.3-teraops (trillion operations per second) ASC White machine and Los Alamos's 20-teraops ASC Q machine are in place, and the next systems in line are Sandia's 40-teraops ASC Red Storm and Livermore's 100-teraops ASC Purple. "In anticipation of ASC Purple in 2005, we are shifting our emphasis from developing parallel-architecture machines and codes to improved weapons science and increased physics understanding of nuclear weapons," adds Christensen. "We are taking the next major step in the road we mapped out at the start of the program."
"Ten years ago, we were focused on creating a new capability, and the program was viewed more as an experiment or an initiative," says Mike McCoy, acting leader for DNT's ASC Program. "Many skeptics feared that the three-dimensional codes we were crafting, and the new machines we needed to run them on, would fail to be of use to the weapons program." These skeptics had three areas of concern: First, would the new three-dimensional codes be useful? That is, would the code developers, working with other scientists, be able to develop new applications with the physics, dimensionality, resolution, and computational speed needed to take the next step in predictivity? Second, would the computers be reliable and work sufficiently well to grind through the incredibly complex and detailed calculations required in a world without underground nuclear testing? Third, would the supporting software infrastructure, or simulation environment, be able to handle the end-to-end computational and assessment processes? For that first decade, the program's primary focus was on designing codes and running prototype problems to address these concerns.
A snapshot of dislocation microstructure generated in a massively parallel dislocation line dynamics simulation.
"Sophisticated weapon simulation codes existed before the ASC and Stockpile Stewardship programs," says Christensen. "However, because of the limited computer power available, those codes were never expected to simulate all the fine points of an exploding nuclear weapon. When the results of these simulations didn't match the results of the underground tests, numerical 'knobs' were tweaked to make the simulation results better match the experiments. When underground nuclear testing was halted in 1992, we could no longer rely so heavily on tweaking those knobs."
At the time that underground testing ceased and NNSA's Stockpile Stewardship Program was born, Livermore weapons scientists were depending on the (then) enormous machines developed by Seymour Cray. Cray designed several of the world's fastest vector-architecture supercomputers and introduced closely coupled processors. "We had reached the limits on those types of systems," says McCoy, who is also a deputy associate director in the Computation Directorate. "From there, we ventured into scalar architecture and the massively parallel world of ASC supercomputers--systems of thousands of processors, each with a large supply of local memory. We were looking at not only sheer capability--which is the maximum processing power that can be applied to a single job--but also price performance. We were moving away from specialized processors for parallel machines to commodity processor systems and aggregating enough memory at reasonable cost to address the new complexity and dimensionality." Not Just Computers and Codes: Making It All Work Designers and physicists in the tri-laboratory (Livermore, Los Alamos, and Sandia national laboratories) Advanced Simulation and Computing (ASC) Program are now using codes and supercomputers to delve into regimes of physics heretofore impossible to reach. What made these amazing tools possible were the efforts of the computer scientists, mathematicians, and computational physicists who brought the machines and the codes to the point of deployment. It wasn't easy. Throughout the era of testing nuclear weapons, approximations were a given for the computations. When calculations produced unusual results, scientists assumed that lack of resolution or faithful replication of geometry or faithful physics models or some combination were the culprits. "It was assumed that, no matter how big the machines were at that time, this inaccuracy would remain a given," says Mike McCoy, deputy associate director of Livermore's Computation Directorate. "But this concern was greatly mitigated, because testing provided the 'ground truth' and the data necessary to calibrate the simulations through the intelligent use of tweaking 'knobs.'" When testing was halted, the nation's Stockpile Stewardship Program came into being. Scientists now needed to prove that computer simulation results could hold their own and provide valuable information, which could be combined with data from current experiments and from underground tests to generate the necessary insights. To bring such parity to computer simulations in the triumvirate of theory, experiment, and simulation, code designers had to address three concerns. First, could supercomputing hardware systems be built to perform the tasks? Second, could a workable simulation environment or support infrastructure be created for these systems? Third, could the mathematical algorithms used in the physics codes be scalable?
Bringing on the Hardware
The move to massively parallel processing supercomputers in the late 1980s was followed by the cessation of underground testing of nuclear devices in 1992 and the start of science-based stockpile stewardship. The ASC Program required machines that could cost-effectively run simulations at trillions of operations per second (teraops) and use the terabytes of memory needed to properly express the complexity of the physics being simulated. This requirement forced a jump to massively parallel processing supercomputers that were, above all, scalable. In other words, these machines needed to be able to run large problems across the entire system without bogging down from communication bottlenecks, which led to the development of high-performance interconnects and the necessary software to manage these switches. Demands on hardware grew, and now the ASC Program at Livermore juggles three technology curves to ensure that users will have the machines they need today, tomorrow, and in the future. (See S&TR, June 2003, Riding the Waves of Supercomputing Technology.)
Creating an Infrastructure
Without a proper infrastructure, the ASC systems are little more than hard-to-program data-generation engines that create mind-numbing quantities of intractable, raw data. The infrastructure (sometimes called the supporting simulation environment) is what makes the terascale platform a real tool. The infrastructure includes improved systems software, input and output applications, message-passing libraries, storage systems, performance tools, debuggers, visualization clusters, data reduction and rendering algorithms, fiber infrastructure to offices, assessment theaters, high-resolution desktop displays, wide-area networks with encryption, and professional user consulting and services at the computer center--all focused on making the machines and codes run more efficiently. (a) When the Cray-1 machine was installed in 1981, it was one of the fastest, most powerful scientific computers available. The last Cray obtained by Lawrence Livermore, in 1989, had 16 central processing units and about 2 megabytes of memory. (b) Nearly a decade later, the massively parallel 10-teraops ASC White arrived at the Laboratory as part of the National Nuclear Security Administration�s Advanced Simulation and Computing Program.
The infrastructure has evolved in balance with the hardware. In 1999, for example, 2 terabytes of data from a three-dimensional simulation might have taken 2 or 3 days to move to archival storage or to a visualization server. By the end of 2000, that journey took 4 hours. Today, those 2 terabytes can zip from computer to mass storage in about 30 minutes. Similar efficiency and performance improvements have occurred with compilers, debuggers, file systems, and data management tools as well as visualization and distance computing. Remote computing capabilities within the tri-laboratory community are easily available to all sites.
Designing Codes and Their Algorithms
Over the past few years, the ASC Program has developed some very capable three-dimensional codes and has maintained or further developed supporting science applications and two-dimensional weapons codes. Because of the enormous size of the computers and their prodigious power consumption, notes McCoy, the applications themselves are generally ignored by the media in favor of headline-producing computers. But if the truth were known, it is these codes and the people who build them, not the computers, that are the heart and soul of the ASC Program. "The computers come, and after a few years, they go," says McCoy. "But the codes and code teams endure." The greatest value of the ASC Program resides in these software assets, and this value is measured in billions of dollars. The backbone of these scientific applications is mathematical equations representing the physics and the numerical constructs to represent the equations. To address issues such as how to handle a billion linear and nonlinear equations with a billion unknowns, computational mathematicians and others created innovative linear solvers (S&TR, December 2003, Multigrid Solvers Do the Math Faster, More Efficiently) and Monte Carlo methods (S&TR, March 2004, Improved Algorithms Speed It Up for Codes) that allow the mathematics to "scale" in a reasonable manner. Thus, as the problem grows more complex, processors can be added to keep the solution time manageable.
The challenge was to move into the world of massively parallel ASC systems in which thousands of processors may be working in concert on a problem. "First, we had to learn how to make these machines work at large scale," says McCoy. "At the same time, we were developing massively parallel multiphysics codes and finding a way to implement them on the new machines. It was a huge effort in every direction." As the machines matured, the codes matured as well. "We've entered the young adult years," says McCoy. "ASC White is running reliably in production mode, with a mean time to failure of a machine component measured in days, not hours or minutes. The proof-of-principle era is ending: The codes are deployed, the weapon designers increasingly are using these applications in major investigations, and this work is contributing directly to stockpile stewardship. With the upcoming 100-teraops ASC Purple, we believe that in many cases where we have good experimental data, numerical error will be sufficiently reduced to make it possible to detect where physics models need improvement. We have demonstrated the value of high-resolution, three-dimensional physics simulations and are now integrating that capability into the Stockpile Stewardship Program, as we work to improve that capability by enhancing physics models. The ASC Program is no longer an initiative, it's a permanent element of a tightly integrated program with a critical and unambiguously defined national security mission."
Looking forward, Jim Rathkopf, an associate program leader for DNT's A Program, notes that with the arrival of Purple, codes will be able to use even higher resolution and better physics. "Higher resolution and better physics are required to reproduce the details of the different phases of a detonation and to determine the changes that occur in weapons as they age and their materials change over time." Predicting Material Behavior
It's exciting times for scientists in the materials modeling world. The power of the terascale ASC machines and their codes is beginning to allow physicists to predict material behavior from first principles--from knowing only the quantum mechanics of electrons and the forces between atoms. Earlier models, which were constrained by limited computing capabilities, had to rely on averages of material properties at a coarser scale than the actual physics demanded. Elaine Chandler, who manages the ASC Materials and Physics Models Program, explains, "We can now predict very accurately the elastic properties of some metals. We're close to having predictive models for plastic properties as well." Equation-of-state models are also moving from the descriptive to the predictive realm. It's possible to predict melt curves and phase boundaries from first principles and to predict changes in the arrangement of atoms from one crystalline structure to another. For example, scientists are running plasticity calculations to look at how tantalum moves and shears, then conducting experiments to see if their predictions are correct. Using this process, they can determine basic properties, such as yield strength.
With the older descriptive modeling codes, scientists would run many experiments in differing regimes of temperature and pressure, then basically "connect the dots" to find out what a metal would do during an explosion. Now, they can perform the calculations that provide consistent information about the entire process. "It's a new world," says Chandler, "in which simulation results are trusted enough to take the place of physical experiments or, in some cases, lead to new experiments."
In the future, ASC Purple and the pioneering BlueGene/L computer will contribute to this new world. BlueGene/L is a computational-science research and evaluation machine that IBM will build in parallel with ASC Purple and deliver in 2005. According to Chandler, BlueGene/L should allow scientists to reach new levels of predictive capability for processes such as dislocation dynamics in metals, grain-scale chemical reactions in high explosives, and mixing in gases. Chandler says some types of hydrodynamics and materials science calculations will be relatively straightforward to port to the BlueGene/L architecture, but others, particularly those involving quantum-mechanical calculations, will require significant restructuring in order to use the architecture of this powerful machine. This is a challenge well worth the effort, because of the unprecedented computer power that BlueGene/L will offer to attack previously intractable problems. "Nearly a half century ago," adds Chandler, "scientists dreamed of a time when they could obtain a material's properties from simply knowing the atomic numbers of the elements and quantum-mechanical principles. That dream eluded us because we lacked computers powerful enough to solve the complex calculations required. We are just now able to touch the edge of that dream, to reach the capabilities needed to make accurate predictions about material properties." Leaping from Milestone to Milestone
With the birth of the Stockpile Stewardship Program (SSP), the need for better computer simulations became paramount to help ensure that the nation's nuclear weapons stockpile remained safe, reliable, and capable of meeting performance requirements. The tri-laboratory (Livermore, Los Alamos, and Sandia national laboratories) Advanced Simulation and Computing (ASC) Program was created to provide the integrating simulation and modeling capabilities and technologies needed to combine new and old experimental data, past nuclear-test data, and past design and engineering experience. The first decade was devoted to demonstrating the proof of principle of ASC machines and codes. As part of that effort, the program set up a number of milestones to "prove out" the complex machines and their advanced three-dimensional physics codes.
In thermonuclear weapons, radiation from a fission device (called a primary) can be contained and used to transfer energy for the compression and ignition of a physically separate component (called a secondary) containing thermonuclear fuel.
The first milestone, accomplished in December 1999 by Livermore researchers on the ASC Blue Pacific/Sky machine, was the first-ever three-dimensional simulation of an explosion of a nuclear weapon's primary (the nuclear trigger of a hydrogen bomb). The simulation ran a total of 492 hours on 1,000 processors and used 640,000 megabytes of memory in producing 6 million megabytes of data contained in 50,000 computer files. The second Livermore milestone, a three-dimensional simulation of the secondary (thermonuclear) stage of a thermonuclear weapon, was accomplished in early 2001 on the ASC White machine--the first time that White was used to meet a milestone. Livermore met a third milestone in late 2001, again using ASC White, coupling the primary and secondary in the first simulation of a full thermonuclear weapon. For this landmark simulation, the total run time was about 40 days of around-the-clock computing on over 1,000 processors. This simulation represented a major step toward deployment of the simulation capability. The quality was unusually high when compared to historic nuclear-test data. A detailed examination of the simulation results revealed complex coupled processes that had never been seen. In 2001, ASC White was also used by a Los Alamos team to complete an independent full-system milestone simulation. In December 2002, Livermore completed another milestone on ASC White when a series of two-dimensional primary explosion calculations was performed. These simulations exercised new models intended to improve the physics fidelity and quantified the effect of increased spatial resolution on the accuracy of the results. The first production version of this code was also released at this time to users. Yet another Livermore team used ASC White to perform specialized three-dimensional simulations of a critical phase in the operation of a full thermonuclear weapon.
In 2003, Livermore teams completed separate safety and performance milestones. For the performance milestone, one team worked remotely on the ASC Q machine at Los Alamos to conduct a suite of three-dimensional primary explosion simulations in support of a Life Extension Program (LEP). Moving even farther from proof-of-principle demonstration and closer to deployment, a code team worked with the LEP team to accomplish this milestone, which addressed complex technical issues and contributed to meeting SSP objectives.
"We accomplished major objectives on time--with the early milestones demonstrating first-of-a-kind proof-of-principle capabilities," says Tom Adams, an associate program leader for DNT's A Program. "Achieving these milestones was the result of an intense effort by the code teams, who were assisted by dedicated teams from across the Laboratory. ASC milestones have now transitioned from these early demonstrations to milestones focused on improving the physics fidelity of the simulations and supporting stockpile stewardship activities. We are now in the position of delivering directly to the SSP."
Adams adds that the upcoming ASC Purple machine is a significant entry point. "Purple is the fulfillment of one of the original goals of the ASC Program, which is to bring a 100-teraops system to bear on stockpile stewardship issues. We need Purple to perform full, three-dimensional simulations for stockpile stewardship on a business-as-usual basis. With Purple, we'll have the computing power and the codes needed to begin to address challenges in detail. Similarly, BlueGene/L will extend material models." And, beyond Purple? Petaops (quadrillion operations per second) systems will allow weapons designers and other users to address the fundamental underlying sources of uncertainty in the calculations. The goal is to be prepared to respond to technical issues that might arise because of component aging or new material requirements in the stockpile.
Delivering the Goods
ASC simulations play a key role in stockpile assessments and in programs to extend the life of the nation's arsenal. Each year, a formal assessment reports the status of the nation's stockpile of nuclear warheads and bombs. (See S&TR, July/August 2001, Annual Certification Takes a Snapshot of Stockpile's Health.) This process involves the three national weapons laboratories working in concert to provide a "snapshot" of the stockpile's health. Together, Livermore and Los Alamos are developing an improved methodology for quantifying confidence in the performance of these nuclear systems, with the goal of fully integrating this methodology into these annual assessments. The new methodology, known as quantification of margins and uncertainties (QMU), draws together information from simulations, experiments, and theory to quantify confidence factors for the key potential failure modes in every weapons system in the stockpile. (See S&TR, March 2004, A Better Method for Certifying the Nuclear Stockpile.)
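In rough terms, QMU compares the performance margin M for each potential failure mode (how far a key quantity sits from its failure threshold) with the total uncertainty U in that margin; a confidence factor M/U comfortably greater than one indicates the failure mode is well covered. The short sketch below only illustrates that bookkeeping with made-up numbers and a simplified way of combining uncertainties; it is not the laboratories' actual methodology.

```python
# Illustrative QMU-style bookkeeping (hypothetical numbers, simplified combination).
from math import sqrt

# Each failure mode: best-estimate margin and individual uncertainty contributions
# (for example, from numerics, experimental data, and material models).
failure_modes = {
    "mode_A": {"margin": 12.0, "uncertainties": [2.0, 3.0, 1.5]},
    "mode_B": {"margin": 5.0,  "uncertainties": [1.0, 2.5]},
}

for name, fm in failure_modes.items():
    total_u = sqrt(sum(u ** 2 for u in fm["uncertainties"]))  # combine in quadrature
    confidence_factor = fm["margin"] / total_u
    status = "adequate" if confidence_factor > 1.0 else "needs attention"
    print(f"{name}: M={fm['margin']:.1f}, U={total_u:.1f}, "
          f"M/U={confidence_factor:.2f} -> {status}")
```

In practice the margins, uncertainties, and acceptance thresholds come from the combination of simulation, experiment, and expert judgment described above.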
The assertion that the nuclear explosive package in a weapon performs as specified is based on a design approach that provides an adequate margin against known potential failure modes. Weapons experts judge the adequacy of these margins using data from past nuclear experiments, ground and flight tests, and material compatibility evaluations during weapons development as well as routine stockpile surveillance, nonnuclear tests, and computer simulations. With the cessation of underground nuclear testing, the assessment of these margins relies much more heavily on surveillance and computer simulations than in the past and therefore requires the simulations to be more rigorous and detailed.
Because no new weapons are being developed, the existing ones must be maintained beyond their originally planned lifetimes. To ensure the performance of these aging weapons, Livermore and Los Alamos weapons scientists use QMU to help them identify where and when they must refurbish a weapons system. When needed, a Life Extension Program (LEP) is initiated to address potential performance issues and extend the design lifetime of a weapons system through refurbishment or replacement of parts. For the W80 LEP now under way, results from ASC simulations are weighed along with data from past nuclear weapons tests and from recent small-scale science tests. These results will support certification of the LEP. Using today's ASC computer systems and codes, scientists can include unprecedented geometric fidelity in addressing issues specific to life extension. They can also investigate particular aspects, such as plutonium's equation of state, scientifically and in detail, and then extend that understanding to the full weapons system. The results of these simulations, along with data from legacy testing and current experiments, improve the ability of weapons designers to make sound decisions in the absence of nuclear testing.
As computational capability increases, designers will have a more detailed picture of integrated weapons systems and can address even more complex issues--for example, how various materials fracture--with even higher resolution.
Right Answers for Right Reasons
Even as inaccuracies due to mathematics and numerics are being resolved by running simulations at ever higher resolutions, the question remains: If a simulation result is unusual, how do scientists know whether it is a problem due to inadequate resolution or simply an error, or bug, in the coding? According to Cynthia Nitta, manager of the ASC Validation and Verification (V&V) Program, an effort was established by the ASC Program to rigorously examine the computational science and engineering simulation results with an eye to their credibility. "Can we trust that the results of simulations are accurate? Do the results reflect the real-world phenomena that they are striving to re-create or predict?" asks Nitta. "In the V&V Program, we are developing a process that should increase the confidence level for decisions regarding the nation's nuclear stockpile. Our methods and processes will establish that the calculations provide the right answers for the right reasons."
[Figure: The verification and validation (V&V) process ties together simulations and experiments using quantitative comparisons.]
The verification process determines whether a computer simulation code for a particular problem accurately represents the solutions of the mathematical model. Evidence is collected to ascertain whether the numerical model is being solved correctly. This process ensures that sound software-quality practices are used and the software codes themselves are free of defects and errors. It also checks that the code is correctly solving the mathematical equations in the algorithms and verifies that the time and space steps or zones chosen for the mathematical model are sufficiently resolved. The validation process determines whether the mathematical model being used accurately represents the phenomenon being modeled and to what degree of accuracy. This process ensures that the simulation adequately represents the appropriate physics by comparing the output of a simulation with data gathered in experiments and quantifying the uncertainties in both.
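As a concrete, though greatly simplified, illustration of the two activities: verification can be probed by checking that a computed answer converges as the mesh is refined, and validation by checking that the converged answer agrees with measurement within the combined uncertainties. The toy problem below assumes a known experimental value and a made-up solver; it is not drawn from any ASC code.

```python
# Toy verification and validation check (illustrative only).

def simulate(num_zones):
    """Stand-in for a simulation whose discretization error shrinks with resolution."""
    exact = 2.0
    return exact + 1.0 / num_zones

# Verification: does the answer converge as the spatial resolution is refined?
coarse, medium, fine = simulate(100), simulate(200), simulate(400)
ratio = abs(medium - coarse) / abs(fine - medium)
print(f"error reduction ratio ~ {ratio:.1f} (should match the expected order of the scheme)")

# Validation: does the resolved answer agree with experiment within uncertainty?
experiment, exp_uncertainty = 2.003, 0.010
numerical_uncertainty = abs(fine - medium)  # crude estimate of remaining numerical error
consistent = abs(fine - experiment) <= exp_uncertainty + numerical_uncertainty
print(f"simulation {fine:.4f} vs experiment {experiment} +/- {exp_uncertainty}: "
      f"{'consistent' if consistent else 'discrepant'}")
```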
Nitta says, "Computer simulations are used in analyzing all aspects of weapons systems as well as for analyzing and interpreting weapons-related experiments. The credibility of our simulation capabilities is central to the credibility of the certification of the nuclear stockpile. That credibility is established through V&V analyses."
Terascale--A Beginning, Not an End
With the proof-of-principle phase ending and new codes being deployed, what does the future hold? With the arrival of the 100-teraops Purple in 2005, many simulations become possible, including a full-system calculation of a nuclear weapon with sufficient resolution to distinguish between phenomenological and numerical issues. But, as McCoy, Christensen, and others point out, 100 teraops is just the beginning.
[Figure: The Terascale Simulation Facility (TSF) at Lawrence Livermore will have two machine rooms for housing ASC Purple and BlueGene/L.]
The ASC Program plans to increase the level of confidence in predictions that such simulations can bring as well as increase the predictive capability, by tying together simulations and experiments even more closely and quantifying the uncertainty of the simulated results. "We're positioning our science codes to run on the Purple and BlueGene/L machines so that we can understand the physics in even greater detail," says Christensen. "It's been a challenging journey over the past decade: In the ASC Program, we've demonstrated that we can acquire and use the world's most powerful computers to perform three-dimensional calculations that capture many details of weapons performance. Now, we must look toward the next goal, which is to be able to predict weapons behavior and quantify the confidence we have in that prediction. If the past decade is any indication--and we believe it is--this is a goal we can, and will, indeed attain."
The Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.
For more information please visit
http://science.energy.gov/about
The Cameron Files: Secret at Loch Ness (c) DreamCatcher Interactive
Windows, Pentium 200, 32MB RAM, 10MB HDD, 8X CD-ROM
Tuesday, March 12th, 2002 at 02:08 PM
By: Westlake
The Cameron Files: Secret at Loch Ness review
Game Over Online - http://www.game-over.com
The Cameron Files: Secret at Loch Ness is the fourth adventure I’ve reviewed from French developer Wanadoo (formerly Index). It uses the same sort of 3D engine as other Wanadoo games (like Dracula: The Last Sanctuary and Necronomicon), but don’t be fooled; it is nowhere in the same class. In fact, everything about Secret at Loch Ness is inferior to other Wanadoo games. I’d say something like “Oh, how the mighty has fallen” except that Wanadoo hasn’t exactly made great games in the past, and “Oh, how the mediocre has fallen” doesn’t have the same ring to it. But where Wanadoo at least made games that were good to look at before, the graphics in Secret at Loch Ness are sub-par, and the puzzles (if you want to call them that) might have taken all of a single day to think up. It’s like somebody at Wanadoo wondered just how little time they could put into a game and still release it, and Secret at Loch Ness is the result.
In Secret at Loch Ness you play Alan P. Cameron, a private investigator from Chicago. You’re hired to look into a missing persons case in Scotland, but when you get there it soon becomes clear that the disappearance is only a secondary issue, and that the main mystery has to do with three ancient crystals. But then, if that weren’t enough, along the way you also encounter a banshee, a character named Peter Evil, a mysterious maid, an old Scottish manor, lots of B-movie science fiction, and even the Loch Ness Monster. It’s like Wanadoo, realizing they didn’t have much of a story, decided to just throw weird things at the player, as if that would be close enough. But it doesn’t work, and when you eventually discover there’s an Evil Person planning an Evil Deed, nothing makes any sense.
Worse, there are almost no puzzles to speak of in Secret at Loch Ness, and the very engine Wanadoo uses for its games might be the main problem. The interface tells you exactly when you can use an object, and there aren’t any red herrings, so all the inventory puzzles are simple. I mean, if you see a boarded-up window, and if the interface tells you that you can interact with the boards, and if you’re holding a crowbar, just how difficult is the puzzle? And do you call it a puzzle at all?
The engine works fine for mechanical puzzles (as games like Myst 3 and Schizm have shown), but Wanadoo apparently doesn’t know how to create them. When they tried in Necronomicon the results were disastrous, and in Secret at Loch Ness, whenever they offer a puzzle that is even remotely mechanical, they also provide explicit instructions on how to solve it. For example, at one point you discover a chair that is also a container. To open the chair, you have to operate six knobs / levers in the right order. Now, without any hints, there are 720 possible combinations, and that’s just too many for trial and error alone. But how hard could it be to, oh, create a riddle or something with hints about “arms” and “legs” and things like that? Instead, Wanadoo includes a piece of paper that tells the exact order to manipulate the knobs / levers, and so the puzzle isn’t any fun at all.
In fact, the most difficult part of the game is tracking down events and objects. Wanadoo created Secret at Loch Ness so that it is completely linear, and to trigger any event you first have to find the objects needed for it. While that’s friendly (in a way), the manor in which you spend most of the game is rather large, and it’s sort of annoying to scour the entire place only to finally discover a dishrag in the sink that for some reason triggers a teacup appearing in a dumbwaiter. Wanadoo tried to help things along for this part of the game by giving Cameron a notebook where he jots down some not-so-subtle hints (“I should go to the parlor next”), but he only provides hints like that sometimes, and pixel-hunting your way through a manor once is sort of boring, and after five or six times it’s just deadly.
Plus, Wanadoo does some other annoying things. Right now, without fully testing the theory, I’m conjecturing that if an adventure game has any of these things -- a maze, a timed puzzle, or a universal tool -- then the game isn’t likely to be very good, because all three things indicate a lack of creativity on the part of the developer when it comes to making puzzles. And, wouldn’t you know, Secret at Loch Ness has all three. In fact, while it has “only” one maze, it has no less than two universal tools (a crowbar and bolt cutters, that are used far too often) and a whole mess of timed puzzles, all of which kill you if you can’t finish them in time. In a nutshell, Wanadoo didn’t do anything right in terms of gameplay, and Secret at Loch Ness ranges from being boring to tedious to annoying at regular intervals.
Maybe there was just a bug going around Wanadoo during the development of Secret at Loch Ness, because even the graphics and sound are rather shoddy. Wanadoo didn’t use a high enough resolution for the backgrounds, and so there is just way too much pixellation going on, and every line is a jagged line. Plus, there are even some weird effects, like trees having white outlines, that make the forests look like Wanadoo was using paint-by-numbers templates. The result is that you won’t ever feel like you’re really traipsing through the Scottish countryside, or really exploring a Scottish manor, and it’s difficult to get involved in a game when absolutely nothing is going right.
Overall, The Cameron Files: Secret at Loch Ness is just an embarrassment. At least when other games go badly, you can usually get a sense that the developers had something good in mind when they started out, and that things just didn’t go well. But with Secret at Loch Ness, Wanadoo aimed low and then hit their target dead on. So I wouldn’t recommend the game to anybody for any reason, and I can now understand why, since Wanadoo is the main horse in DreamCatcher’s adventuring stable, that DreamCatcher is branching out to other genres.
Written By: Westlake
Ratings: (10/40) Gameplay, (10/15) Graphics, (10/15) Sound, (08/10) Interface, (05/10) Storyline, (02/05) Technical, (03/05) Documentation
See the Game Over Online Rating System
CMS Comparison: Drupal, Joomla and Wordpress
Last updated on April 4, 2013
If creating a website for your business is on the horizon, you may be wondering which content management system (CMS) is the best choice for you. Here’s a look at three of the most widely-used ones. All three are open-source software, each developed and maintained by a community of thousands. Not only are all three free to download and use, but the open-source format means that the platform is continuously being improved to support new Internet technologies. With all of these systems, basic functions can be enhanced ad infinitum with an ever-expanding array of add-ons, contributed from their respective communities.
There’s no one-size-fits-all solution here; it depends on your goals, technical expertise, budget and what you need your site to do. For a simple blog or brochure-type site, Wordpress could be the best choice (while very friendly for non-developers, it’s a flexible platform also capable of very complex sites). For a complex, highly customized site requiring scalability and complex content organization, Drupal might be the best choice. For something in between that has an easier learning curve, Joomla may be the answer.
When you have questions or need help, will you be able to find it easily? With all of these systems, the answer is yes. Each has passionate, dedicated developer and user communities, making it easy to find free support directly through their websites or through other online forums or even books. In addition, paid support is readily available from third-party sources, such as consultants, developers and designers. Each of these systems shows long-term sustainability and longevity; support for them will continue to be readily available for the foreseeable future. The more time and effort you are willing and able to invest into learning a system, the more it will be able to do for you. With both Wordpress and Joomla, you can order a wide range of services and options off the menu to suit your needs; with Drupal, you’ll be in the kitchen cooking up what you want for yourself, with all of the privileges of customization that entails.
See the comparison chart below for more insight into the differences in these top content management systems. Still not sure? Download each of the free platforms and do a trial run to help you decide.
www.drupal.org
www.joomla.org
www.wordpress.org
Drupal is a powerful, developer-friendly tool for building complex sites. Like most powerful tools, it requires some expertise and experience to operate.
Joomla offers middle ground between the developer-oriented, extensive capabilities of Drupal and user-friendly but more complex site development options than Wordpress offers.
Wordpress began as an innovative, easy-to-use blogging platform. With an ever-increasing repertoire of themes, plugins and widgets, this CMS is widely used for other website formats also.
Example Sites
Community Portal: Fast Company, Team Sugar
Social Networking: MTV Networks Quizilla
Education: Harvard University
Restaurant: IHOP
Social Networking: PlayStation Blog
News Publishing: CNN Political Ticker
Education/Research: NASA Ames Research Center
News Publishing: The New York Observer
Drupal Installation Forum
Joomla Installation Forum
Wordpress Installation Forum
Drupal requires the most technical expertise of the three CMSs. However, it also is capable of producing the most advanced sites. With each release, it is becoming easier to use. If you’re unable to commit to learning the software or can’t hire someone who knows it, it may not be the best choice.
Less complex than Drupal, more complex than Wordpress. Relatively uncomplicated installation and setup. With a relatively small investment of effort into understanding Joomla’s structure and terminology, you have the ability to create fairly complex sites.
Technical experience is not necessary; it’s intuitive and easy to get a simple site set up quickly. It’s easy to paste text from a Microsoft Word document into a Wordpress site, but not into Joomla and Drupal sites.
Known for its powerful taxonomy and ability to tag, categorize and organize complex content.
Designed to perform as a community platform, with strong social networking features.
Ease of use is a key benefit for experts and novices alike. It’s powerful enough for web developers or designers to efficiently build sites for clients; then, with minimal instruction, clients can take over the site management. Known for an extensive selection of themes. Very user-friendly with great support and tutorials, making it great for non-technical users to quickly deploy fairly simple sites.
Caching Plug-ins
Pressflow: This is a downloadable version of Drupal that comes bundled with popular enhancements in key areas, including performance and scalability.
JotCache offers page caching in the Joomla 1.5 search framework, resulting in fast page downloads. Also provides control over what content is cached and what is not. In addition, page caching is supported by the System Cache Plugin that comes with Joomla.
WP-SuperCache: The Super Cache plugin optimizes performance by generating static html files from database-driven content for faster load times.
Best Use Cases
For complex, advanced and versatile sites; for sites that require complex data organization; for community platform sites with multiple users; for online stores
Joomla allows you to build a site with more content and structure flexibility than Wordpress offers, but still with fairly easy, intuitive usage. Supports E-commerce, social networking and more.
Ideal for fairly simple web sites, such as everyday blogging and news sites; and anyone looking for an easy-to-manage site. Add-ons make it easy to expand the functionality of the site.
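Whichever platform you pick, the caching plug-ins in the comparison above all rest on the same basic idea: render a page from the database once, save the resulting HTML, and serve the saved copy until it expires or the content changes. The sketch below is a framework-agnostic illustration of that idea in Python (the helper names and cache settings are hypothetical); consult each plug-in's own documentation for its real configuration.

```python
# Minimal page-cache sketch (illustrative; not taken from any of the plug-ins above).
import os, time, hashlib

CACHE_DIR = "/tmp/page_cache"   # assumption: a writable cache directory
TTL_SECONDS = 300               # serve cached copies for up to 5 minutes

def render_from_database(path):
    """Stand-in for the expensive CMS work: queries, templates, plug-ins."""
    return f"<html><body>Rendered {path} at {time.ctime()}</body></html>"

def get_page(path):
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(path.encode()).hexdigest()
    cache_file = os.path.join(CACHE_DIR, key + ".html")

    # Serve the static copy if it exists and is still fresh.
    if os.path.exists(cache_file) and time.time() - os.path.getmtime(cache_file) < TTL_SECONDS:
        with open(cache_file) as f:
            return f.read()

    # Otherwise render once, store the static HTML, and serve it.
    html = render_from_database(path)
    with open(cache_file, "w") as f:
        f.write(html)
    return html

print(get_page("/blog/hello-world"))  # first call renders; repeat calls hit the cache
```

The main trade-off to tune is the time-to-live: longer values mean faster pages but staler content.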
Stories from October 30th, 2012
Zachary Knight - Tue, Oct 30th 2012 8:05pm
crowdfunding, haunts, support, transparency, video games
mob rules games
How Being Very Transparent May Have Saved A 'Failed' Kickstarter Project
from the only-mostly-dead dept
For a while now, we have been highlighting many stories about the successful crowdfunding of movies, music, books and games. This new source of funding for creative content has been an exciting time for indie artists and those wanting to break free of traditional funding models. However, this funding model is not without its risks, something that Kickstarter has recognized with a change in the way projects are presented.
So what exactly happens when a successfully funded project fails to meet its completion goals? Well, reader Marcus Wellby sent along a story about one successfully funded game project that has hit some major roadblocks to completion. Haunts: The Manse Macabre, although successfully funded, has run out of money and programmers and was in danger of never being completed.
Haunts sought $25,000 (£15,590) from Kickstarter but the project proved popular and meant the game's developers got $28,739 (£17,895) to fund completion of the game. Prior to the funding appeal, Haunts creator Mob Rules Games had spent about $42,500 getting the basics of the title completed.
The end result was supposed to be a haunted house horror game in which players could take on the role of the house's inhabitants or intruders investigating what lived within it.
Now Mob Rules Games boss Rick Dakan has revealed that the game's development has prematurely halted.
"The principal cause for our dire condition is that there are no longer any programmers working on the game," said Mr Dakan in a blogpost updating backers.
You can see Rick's full explanation of the problems the game has had over at Kickstarter. With all the cash and programming problems, Rick felt so bad about letting down the backers that he was willing to refund, out of his own pocket, anyone who wanted their money back. While most companies will silently kill off projects that do not meet expectations, his forthcoming post about the state of affairs actually had a positive effect on the project's future.
The next day, Rick posted the following update.
I've had a lot of interested emails from programmers offering their help. Thank you all very much! There's a lot to sift through and I'm not sure what the best way to proceed will be, but I am very encouraged by these offers and want to try and figure out the best way to take advantage of this opportunity. I've reached out to a good friend of mine who's an expert in collaborative open source development, and he and I will talk soon. I also want to discuss this exciting development with Blue Mammoth and get their take on it.
By being open about the problems he was having completing the game, the community came in to offer their help. Granted, this is a unique circumstance, but having such a dedicated fan base is wonderful. Had he let the game fester with no updates for longer than he had, he might have been met with more hostility than encouragement. That would have made it far more difficult to find any kind of solution.
Finally, in the most recent update, Rick announced that after considering the situation and the best way to move forward, he will be open sourcing the game with over thirty programmers offering their help to complete it.
We're going to finish developing Haunts: The Manse Macabre as an Open Source project. The source code has been open from the beginning, but now we're going to fully embrace open development model and making the game entirely open source. We've had about thirty programmers from a variety of backgrounds, including many proficient in Go, who have stepped forward and offered to help finish the game. We're still in the process of setting up the infrastructure for issue tracking, source control, documentation wikis, and other tools necessary before we can begin in earnest, but we hope to have that all up and running within the next week or two.
While this story is far from over, it is a great lesson in the risks of any project whether crowdfunded or not. Projects can fail, they can have problems, they can be shuttered. The key takeaways from this story, however, are (1) being transparent (rather than hiding) with supporters can do wonders and (2) being flexible and willing to change course can help. Rick notes that there's been plenty of press coverage about the supposed "failure," but much less about what happened after...
We've gotten a lot of press coverage, most of it in the general vein of, "Look, see, Kickstarter projects can go bad, so be careful!" I think that's a fair and useful point to make. But we're committed to being the follow-up story. You know, the underdog who comes back from the brink of collapse and proves a resounding success!
Yes, this is a story that highlights the risk in any kind of crowdfunding endeavor. Backers may be out the money they put in with nothing to show for it. However, if those who run these projects will be open and honest through the whole process, stumbles and falls included, even if the project never comes to fruition, then the potential that such a failure will damage their reputation and future projects can be mitigated. And heck, maybe you will be struck with a miracle and your project will come back to life.
Tim Cushing - Tue, Oct 30th 2012 11:50am
customer service, elemental, games, surprises, video games
Game Publisher Stardock Apologizes To Its Customers For Releasing A Subpar Game... By Giving Them Its Latest Game Free
from the well-played,-sir dept
One of the best things you can do for your business is have the guts to stand up and take full responsibility for your screwups. Too often, businesses tend to minimize their errors or sweep the screwup under the rug. This works right up until the public notices and when they do, there's all kinds of hell to pay. Word spreads fast on the internet, much faster than most companies seem to realize.
On the bright side, good news travels equally fast when companies do the right thing and take care of their customers. This is one of those all-too-rare occasions when a company goes above and beyond what anyone expects and turns customers into lifelong fans.
The Consumerist has an amazing story of customer service gone exactly right. The company is Stardock, the publisher behind "Elemental: War of Magic," a strategy game that was released as a buggy mess a couple of years back. This (unfortunately) isn't unusual. Games get rushed to market for several reasons and end users are left to either deal with something nearly unplayable or install patch after patch to get their brand new purchase up and running. So, while screwed up releases may not be unusual, what followed absolutely is. Customers who purchased "Elemental" received a letter from the CEO of Stardock that not only apologized for releasing a lousy game, but actually offered something way more valuable than lip service:
Dear Stardock customer,
My name is Brad Wardell. I’m the President & CEO of Stardock. Two years ago, you bought a game from us called Elemental: War of Magic. We had great hopes and ambitions for that game but, in the end, it just wasn’t a very good game.
Elemental was an expensive game. You probably paid $50 or more for it. And you trusted us to deliver to you a good game. $50 is a lot of money and companies have a moral obligation to deliver what they say they’re going to deliver and frankly, Stardock failed to deliver the game we said we were going to deliver…
Its design just wasn’t adequate to make it into the kind of game it should be. So we decided to start over. From scratch. We made a new game called Fallen Enchantress.
So even though it’s been two years, we haven’t forgotten about you. This week, we released Fallen Enchantress. It is a vastly better game and, we believe, lives up to the expectations set for the original Elemental. This game is yours. Free. It’s already been added to your account…
Thank you for being our customers and your patience.
Brad Wardell
[email protected]
@draginol http://www.twitter.com/draginol
Not only is it highly unusual for developers to apologize for crafting an underpar game, it's even more unusual for them to take the extra step and offer their latest game absolutely free. Wardell takes advantage of the technology at hand to keep the affected users from having to make any effort on their part to get their replacement game ("It's already been added to your account...")
Stardock realizes that each game its customers purchase takes a bit of their time and money, and both commodities are in limited supply. This gesture doesn't ask for any more of those two commodities, and goes a long way towards securing something else only available in limited quantities: trust.
Wardell and Stardock are investing in their own future by taking care of their customers now. By doing the unexpected, fans who were burned by "Elemental" will be more likely to take a look at Stardock's upcoming offerings. And even if they felt "Elemental" wasn't that bad, hey... free game! How often does that happen? Either way, a ton of goodwill and positive word-of-mouth is being generated, something no company can purchase.
Alum Takes 'Corporate Route' and Loves It
Tim Ehinger ’89 remembers the moment when he knew he would pursue a life and career outside the United States. He was with classmates in Cologne, Germany, during the Germany/Austria program, sitting outside the cathedral at night. This is how he tells it.
“I just remember looking up at that beautiful cathedral and thinking to myself, ‘I want more of this.’” More beautiful and historic vistas. More experiences in countries and cultures other than his own. More visits to places far from his hometown of Ft. Wayne, Ind.
Ehinger’s wish came true. After graduating from law school at Indiana University in 1992, Ehinger participated in an exchange program sponsored by the German Academic Exchange Service (DAAD), a government sponsored institution that he compares with the US Fulbright Program. In this case the DAAD program brought together young lawyers from various countries to learn about German law. Ehinger says that interacting with peers from Belgium, France, Russia and elsewhere offered a helpful education in culture as well as the law.
When he returned to the U.S., he spent two years as a judicial clerk in the Michigan Court of Appeals in Detroit, all the while planning to seek work in the private sector, focusing on companies with a strong international presence.
Making the most of an Earlham connection, he reached out to Tom Gottschalk ’64, who at the time was General Counsel for General Motors.
“Tom didn’t know me from Adam, but he wrote a nice note back, and with some luck I managed to land an interview with GMAC. It so happened they were looking for a junior lawyer to support their European operations. It was really a matter of being in the right place at the right time with the right resume,” Ehinger recalls. After eight months with in Detroit, he spent five years working for GMAC (General Motors’ financing division) in Europe, based in Zurich and then London. He worked for G.E. Capital for three years before moving on to American Express, his employer for the last decade.
At Home in Corporate Law
Ehinger is now based in London and is the Managing Counsel for Amex’s legal team for Europe, the Middle East and Africa. He loves his job and his company.
“There’s a misperception that big companies must be full of little Napoleons, but I certainly have not found that to be true,” Ehinger says. “Amex is a company that behaves with a high degree of integrity and that has a lot of respect for its customers and employees.”
“We have a consensus-based culture across a very diverse and global employee base, and my experiences at Earlham obviously prepared me well for that,” says Ehinger. “Before Earlham, I didn’t understand what consensus was, but I learned to look for the common shared goal and to find ways to build consensus around such a goal. I’ve found that it’s the best way of leading and achieving results.
“Before I went to Earlham, most of what I had experienced was Indiana. Earlham gave me an opportunity to see more of what the world had to offer. It was definitely a major turning point in my life,” he says. “Now I travel a lot and work every day with people from all around the world. I realize that although there are many differences from one country to the next, there are a lot of similarities among people. And I think everyone smiles at the same things. “Not many Earlhamites take the corporate route and love it, but I do. And I feel like I take what I learned at Earlham with me wherever I go.” | 计算机 |
People behind Debian: Ben Hutchings, member of the kernel team
December 13, 2011 by Raphaël Hertzog
Ben Hutchings, photo by Andrew Mc Millan, license CC-BY-2.0
Ben Hutchings is a rather unassuming guy… but hiding behind his hat, there’s a real kernel hacker who backports new drivers for the kernel in Debian stable so that our flagship release supports very recent hardware.
Read on to learn more about Ben and the kernel team’s projects for Debian Wheezy!
Raphael: Who are you?
Ben: I’m a professional programmer, living in Cambridge, England with my long-suffering wife Nattie. In Debian, I mostly work on the Linux kernel and related packages.
Raphael: How did you start contributing to Debian?
Ben: I started using Debian in 1998 and at some point I subscribed to Debian Weekly News. So in 2003 I heard about the planned Debian 10th birthday party in Cambridge, and thought I would like to go to that. Somehow I persuaded Nattie that we should go, even though it was on the day of our wedding anniversary! We both enjoyed it; we made new friends and met some old ones (small world). From then on we have both been socially involved in Debian UK.
In 2004 there was a bug-squashing party in Cambridge, and we attended that as well. That’s where I really started contributing – fixing bugs and learning about Debian packaging. Then in 2005 I made my first package (sgt-puzzles), attended DebConf, and was persuaded to enter the New Maintainer process.
NM involved a lot of waiting, but by the time I was given questions and tasks to do I had learned enough to get through quite quickly. In April 2006 I was approved as a Debian Developer.
Meanwhile, I looked at the videos from DebConf 5 and thought that it would be useful to distribute them on a DVD. That led me to start writing video software and to get involved in the video team for the next year’s DebConf.
Raphael: You have been one the main driver behind the removal of non-free firmwares from the kernel. Explain us what you did and what’s the status nowadays?
Ben: That’s giving me a bit more credit than I deserve.
For a long time the easy way for drivers to load ‘firmware’ programs was to include them as a ‘blob’ in their static data, but more recently the kernel has included a simple method for drivers to request a named blob at run-time. These requests are normally handled by udev by reading from files on disk, although there is a build-time option to include blobs in the kernel. Several upstream and distribution developers worked to convert the older drivers to use this method. I converted the last few of these drivers that Debian included in its binary packages.
In the upstream Linux source, those blobs have not actually been removed; they have been moved to a ‘firmware’ subdirectory. The long-term plan is to remove this while still allowing the inclusion of blobs at build-time from the separate ‘linux-firmware’ repository. For now, the Debian source package excludes this subdirectory from the upstream tarball, so it is all free software.
There are still a few drivers that have not been converted, and in Debian we just exclude the firmware from them (so they cannot be built). And from time to time a driver will be added to the ‘staging’ section of Linux that includes firmware in the old way. But it’s understood in the kernel community that it’s one of the bugs that will have to be fixed before the driver can move out of ‘staging’.
Raphael: Do you believe that Debian has done enough to make it easy for users to install the non-free firmwares that they need?
Ben: The installer, the Linux binary packages and initramfs-tools will warn about specific files that may be needed but are missing. Users who have enabled the non-free section should then be able to find the necessary package with apt-cache search, because each of the binaries built from the firmware-nonfree source package includes driver and file names within its description. For the installer, there is a single tarball that provides everything.
We could make this easier, but I think we have gone about as far as we can while following the Debian Social Contract and Debian policy.
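To make that search a little more concrete: the warning Ben mentions boils down to comparing the firmware files a kernel module declares against what is actually installed under /lib/firmware. The following is only a rough sketch of that idea using the standard modinfo utility, not the actual initramfs-tools or installer code.

```python
# Rough sketch: list the firmware files a kernel module wants and check whether
# they are present under /lib/firmware. Illustration only, not Debian's code.
import subprocess, os, sys

def missing_firmware(module, firmware_dir="/lib/firmware"):
    out = subprocess.run(["modinfo", "-F", "firmware", module],
                         capture_output=True, text=True, check=True).stdout
    wanted = [line.strip() for line in out.splitlines() if line.strip()]
    return [fw for fw in wanted if not os.path.exists(os.path.join(firmware_dir, fw))]

if __name__ == "__main__":
    module = sys.argv[1] if len(sys.argv) > 1 else "iwlwifi"
    missing = missing_firmware(module)
    if missing:
        print(f"{module}: missing firmware files: {', '.join(missing)}")
        # Package descriptions list file names, so searching for one can find the package.
        print("Try: apt-cache search " + missing[0])
    else:
        print(f"{module}: all declared firmware present (or none declared)")
```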
Raphael: At some point in the past, the Debian kernel team was not working very well. Did the situation improve?
Ben: Back in 2008 when I started working on the Linux kernel package to sort out the firmware issues, I think there were some problems of communication and coordination, and quite possibly some members were burned-out.
Since then, many of the most active kernel team members have been able to meet face-to-face to discuss future plans at LPC 2009 in Portland and the 2010 mini-DebConf in Paris. We generally seem to have productive discussions on the debian-kernel mailing list and elsewhere, and I think the team is working quite well. Several new contributors have joined after me.
I would say our biggest problem today is that we just don’t have enough time to do all we want to. Certainly, almost all my Debian time is now taken up with integrating upstream kernel releases and handling some fraction of the incoming bug reports. Occasionally I can take the time to work on actual features or the other packages I’m neglecting!
“Our biggest problem today is that we just don’t have enough time to do all we want to.”
Raphael: It is widely known that Linux is maintained in a git repository. But the Debian kernel team is using Subversion. I believe a switch is planned. Why was not git used from the start?
Ben: The linux-2.6 source package dates from the time when Linus made his first release using git. I wasn’t part of the team back then so I don’t know for sure why it was imported to Subversion. However, at that time hardly anyone knew how to use git, no-one had experience hosting public git repositories, and Alioth certainly didn’t offer that option.
Today there are no real blockers: everyone on the kernel team is familiar with using git; Alioth is ready to host it; we don’t have per-architecture patches that would require large numbers of branches. But it still takes time to plan such a conversion for what is a relatively complex source package (actually a small set of related source packages).
Raphael: What are your plans for Debian Wheezy?
Ben: Something I’ve already done, in conjunction with the installer team, is to start generating udebs from the linux-2.6 source package. The kernel and modules have to be repacked into lots of little udebs to avoid using too much memory during installation. The configuration for this used to be in a bunch of separate source packages; these could get out of step with the kernel build configuration and this would only be noticed some time later. Now we can update them both at the same time, they are effectively cross-checked on every upload, and the installer can always be built from the latest kernel version in testing or unstable.
I think that we should be encouraging PC users to install the 64-bit build (amd64), but many users will still use 32-bit (i386) for backward compatibility or out of habit. On i386, we’ve slightly reduced the variety of kernel flavours by getting rid of ’686′ and making ’686-pae’ the default (previously this was called ’686-bigmem’). This means that the NX security feature will be used on all systems that support it. It should also mean that the first i386 CD can have suitable kernel packages for all systems.
I have been trying to work on providing a full choice of Linux Security Modules (LSMs). Despite their name, they cannot be built as kernel modules, so every enabled LSM is a waste of memory on the systems that don’t use it. This is a significant concern for smaller Debian systems. My intent is to allow all unused LSMs to be freed at boot time so that we can happily enable all of them.
I recently proposed to dro | 计算机 |
Saturday, March 27, 1999 by Dave Winer.
Welcome back! It feels like forever since I wrote a DaveNet. Such a foreign thing to do. But it's the right time to do it. I've been busy this week doing all kinds of website work for the new version of Frontier that's coming. I've also been arguing with myself about taking a vacation. Some part of me is resisting. But most of me knows it's long overdue. Even if my goal is merely maximizing productivity over the next few months, a couple of weeks away from the keyboard would do me good. And I'm itching to hit the road, so I'd say there's a good chance I will break away for a bit during April.
Loosening up
First a song. A loosener. A dancing tune by Huey Lewis and the News, the master of brass and bass and predictable almost-too-pregnant pauses. I know they're corny, but I like it anyway.
Now *that's* music! ;->
Submission Hey. I've been thinking about submission lately.
Submission. Compliance. The act of submitting, surrendering power to another.
A few weeks ago a friend said that I didn't do it. It's true. Submission is not my nature. In a way I live my life to prove that I don't have to submit to anyone or any thing. As I look at my values from this perspective I realize that everything I do is circling around this issue, on both sides. I wasn't even aware of it.
In Earth's Website, 11/20/98, I said: "I believe in science, and I think there is a science of humanity. We're the puzzle. We can't explore the depths of space, but we can explore the depths of ourselves."
So there's a clue, this is deep within me, the resistance to submission. And then I wondered if everyone else was the same, and I realized they are not. In fact, the vast majority of people submit every day. They have jobs! They take orders. Most of them, most of the time, do their best to submit. But guys like me resist.
But submission is inevitable, even for people who hate to submit. Why? Because we die. And that's the ultimate act of submission. Wheeee! That's what I imagine death is like. You take your hands off the rail, unbuckle the seatbelt and go for the ultimate ride. So, following this trail of logic, using my intellect, it makes sense to start submitting as soon as possible. This is the mathematics I believe in. If you've got a losing strategy, the best tactic is to admit defeat asap and get on with it.
I offer no other further conclusion at this time. I don't know if I recommend submission. I'll let you know how it goes.
My.UserLand.Com
In my last piece, Everyone's Equally Nasty, 3/18/99, I talked about Netscape and RSS files and promised to let you know when we got our syndication/aggregation site, My.UserLand.Com, on the air. It's up now.
http://my.userland.com/
If you have a news site, a weblog, I encourage you to put up an RSS version of the content so we can flow it thru our templates and out to readers' desktops. There's no legal agreement, I see the inclusion of a channel on our site as equivalent to pointing to a website from Scripting News. It's not that big a deal. You're letting us know where a publicly accessible resource is located. It's no more significant than signing a guestbook, or sending me an email pointing to a site you just put up. That's always welcome. You never know when a Dancing Hamsters site is going to pop up (or what kind of music they're going to be dancing to). I like to look at new sites and I like to point to them from my site. It's just that simple.
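For readers wondering what such an RSS version looks like in practice: a channel is just a small XML file describing your site and listing its items. The sketch below generates a minimal feed of that shape; the exact field set expected by my.userland.com or my.netscape.com may differ, so treat this as an approximation and check the target site's documentation (the site names and URLs in the example are placeholders).

```python
# Minimal sketch of generating an RSS-style channel file (structure approximated
# from the early RSS formats; check the aggregator's spec for the exact fields).
from xml.sax.saxutils import escape

def rss_channel(title, link, description, items):
    parts = ['<?xml version="1.0"?>', '<rss version="0.91">', "<channel>",
             f"<title>{escape(title)}</title>",
             f"<link>{escape(link)}</link>",
             f"<description>{escape(description)}</description>"]
    for item_title, item_link in items:
        parts += ["<item>",
                  f"<title>{escape(item_title)}</title>",
                  f"<link>{escape(item_link)}</link>",
                  "</item>"]
    parts += ["</channel>", "</rss>"]
    return "\n".join(parts)

print(rss_channel(
    "My News Site", "http://www.example.com/", "Daily news and links",
    [("An example story", "http://www.example.com/stories/1")]))
```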
Opting
On the other hand, at any time, I can disconnect the link. And you can opt-out simply by removing the file or sending me an email from the account you were using when you registered. We keep our distance. What you choose to say to your readers thru my interface is between you and them. But it's also at-will. Either of us can opt-out at any time. That's the fairest formula in my opinion. It allows us to relax and try out a new idea and be prepared to learn from the experience without risking too much.
So that's my proposal. If you run a site with a news flow, consider producing a RSS version and link it into my.userland.com and sites that are compatible, such as my.netscape.com. This is how updraft is created, how we make a new standard happen. Along with Netscape we've taken care of one side of the equation. Competitors are welcome. Now let's get the content sites working on this stuff. They're the other side.
XML
All this comes under the banner of XML. When it gains critical mass we will also link these sites together in a search engine. We will also offer web hosting for people who do news sites, with a high-level writer's interface, and scalable flows on the other side (broadband, palmtop, XML, plain old HTML). We're now entering the commercialization of XML. I'm still optimistic. Let's go!
All this and more will happen when I get back.. Let's leave the bookmark right here.
In the meantime, I'm going to do some more writing, so expect a couple more DaveNets in the next few days.
Dave Winer
Link Archive Updated
Added a new Tablatures category in the Link Archive. You can use the archive to find material that's not on this site.
Burn To Shine Tour Ends
Just because the tour is over doesn't mean we don't have plenty of fun stuff planned at .net. We're almost ready to open up The Official Ben Harper Store, and a cool giveaway is just around the corner. In the meantime, check out the setlist from the last show of the tour in Asheville, NC.
Ben Harper Featured in Details
The December issue of "Details" magazine features an 8-page spread on Ben called "The Harper Style." | 计算机 |
The Ludologist
My name is Jesper Juul, and I am a ludologist [Noun. Video Game Researcher]. This is my blog on game research and other important things.
Game Studies, issue 09/02
The new issue 09/02 of Game Studies is out.
The Character of Difference: Procedurality, Rhetoric, and Roleplaying Games
by Gerald Voorhees
This essay examines the cultural politics of the Final Fantasy series of computer roleplaying games. It advances an approach to games criticism that supplements Bogost’s procedural method with a thoroughly contextual approach to rhetorical criticism. By accounting for the narrative, visual and procedural representations in various iterations of the series, this essay argues that Final Fantasy games can also be understood as toys that allow players to experiment with different responses to cultural difference.
http://gamestudies.org/0902/articles/voorhees
Moral Decision Making in Fallout
by Marcus Schulzke
Abstract
Many open world games give players the chance to make moral choices, but usually the differences between good and evil paths through a game are slight. In order for moral choices in games to be meaningful they must be fairly calculated and have significant consequences. The Fallout series is one of the best examples of how to give players thoughtful moral problems and multiple paths to resolving them. This essay looks at the series, and Fallout 3 in particular, as examples of how moral choice can be incorporated into video games. One of the oldest fears about art is that it may corrupt observers and lead them to immorality – a criticism that has resurfaced with attacks on video games. Fallout 3 does the opposite. It encourages players to think about the morality of their actions in the virtual world, thereby teaching them the practical wisdom that Aristotle considered essential to being a moral actor.
http://gamestudies.org/0902/articles/schulzke
Cheesers, Pullers, and Glitchers: The Rhetoric of Sportsmanship and the Discourse of Online Sports Gamers
by Ryan M. Moeller, Bruce Esplin, Steven Conway
In this article, we examine online sports gamers’ appeals to fair play and sportsmanship in online forums maintained by game developers. These online discussions serve to document and police acceptable behavior and gameplay for the larger community of game players and to stimulate innovation in game development, especially in online ranking systems.
http://gamestudies.org/0902/articles/moeller_esplin_conway
World of Warcraft: Service or Space?
by Adam Ruch
This article seeks to explore the relationship between the concept of Blizzard’s World of Warcraft in legal terms, in Blizzard’s End-User License Agreement (EULA) and the Terms of Use (TOU), and the concept of the game as conceived by the players of the game. Blizzard present their product as a service, and themselves as a service provider, in the EULA/TOU. Meanwhile, the product itself seems to be more akin to a space or place, which subjective players move about in. This conflict is essentially a difference between a passive viewer accessing certain content within a range available to him, and an individual who inhabits a space and acts within that space as an agent. The meaning of this subjectivity-in-space (or denial of the same) problematizes the relationship Blizzard has with its customers, and the relationships between those customers and Blizzard’s product.
An evolution of the governance of these spaces is inevitable. Where Castronova and Lessig’s answers differ, their basic assertion that the virtual political landscape can and will change seems clear. These changes will be influenced by the values pla | 计算机 |
A now-defunct publisher and developer that was based in Chicago. Best known later in its life for the Mortal Kombat series, Midway's legacy traces back to early pinball and arcade releases, when the company was known as Williams.
Party Pigs: Farmyard Games
Play Olympic-like mini-games with mad buff pigs!
Vin Diesel is looking for a job in Wheelman, a driving-focused open-world game with vehicular special moves to assist in your escape. The story is based around Diesel's character as he comes out of retirement to save a woman from his past.
The Mortal Kombat and DC Comics universes collide in Midway's first crossover Mortal Kombat game developed for the Xbox 360 and PlayStation 3.
Blitz: The League II brings back the classic football game to a new audience with brutal hits, injuries, HD visuals, and a compelling story mode.
Game Party 2
Yeah, they made a sequel.
Mechanic Master
Touchmaster II
Select from 20 brand-new games.
Enter the six-sided ring and wrestle in Midway's first wrestling game from the Orlando-based TNA wrestling promotion.
Chosen One is the third game in Midway's NBA Ballers series.
Game Party
This party game is a party... of games.
Cruis'n
Cruis'n is a Wii port of the arcade game The Fast and The Furious (itself a spiritual successor of the classic Cruis'n games). Since Midway lost the Fast and Furious license, all the cars and references to the movie franchise were removed.
BlackSite: Area 51 is a Sci-fi themed First-Person Shooter by Midway.
Ultimate Mortal Kombat
Ultimate Mortal Kombat for DS by Midway includes Puzzle Kombat and a port of Ultimate Mortal Kombat 3 complete with online play.
Foster's Home for Imaginary Friends: Imagination Invaders
Aqua Teen Hunger Force: Zombie Ninja Pro-Am
Aqua Teen Hunger Force: Zombie Ninja Pro-Am is a golf-action-racing game starring the cast of the Adult Swim cartoon. The game's quality is so poor that it's been suggested that it was released that way intentionally to keep in line with the show's bizarre sense of humor.
The Bee Game
The Bee Game is a GBA/DS platformer starring the German character Maya the Bee. Released in Germany as Die Biene Maja: Klatschmohnwiese in Gefahr, literally Maya the Bee: Clap Poppy Meadow in Danger. It was later released in the US simply as The Bee Game.
TouchMaster
Ever wanted to play all of those TouchMaster games without having to hang out in a bar? No? Well, then this DS version might not be your thing.
Hot Brain
Hot Brain is a brain training game for the PSP
Hour of Victory is a World War II-themed first-person shooter. Calling it mediocre would be a compliment.
The Lord of the Rings Online: Shadows of Angmar
Blitz: Overtime
Blitz for your PSP with online multiplayer, injuries and strategy!
Based on the animated movie of the same name, following the story of a penguin trying to find his rhythm.
Mortal Kombat: Unchained
Mortal Kombat: Unchained is a port of Mortal Kombat: Deception for the PSP.
Mortal Kombat: Armageddon
Mortal Kombat: Armageddon is the seventh installment in the violent fighting game franchise, featuring the most complete Mortal Kombat roster to date.
The Grim Adventures of Billy & Mandy
The Grim Adventures of Billy & Mandy is a fighting game featuring the characters of the Cartoon Network television show.
Spy Hunter: Nowhere to Run
Spy Hunter: Nowhere to Run is the third game in the vehicle-combat franchise. Nowhere to Run is the first game to add on-foot missions to the series.
Rise & Fall: Civilizations at War
Rise & Fall: Civilizations at War is a real-time strategy game where players can directly control individual units.
MLB Slugfest 2006
Published October 13th, 2013 - 13:02 GMTPress Release [1]
Epicor Software Corporation announced that Joseph (Joe) L. Cowan has been appointed President and Chief Executive Officer. Epicor Software Corporation, a global leader in business software solutions [2] for manufacturing, distribution, retail and services organizations, today announced that Joseph (Joe) L. Cowan has been appointed President and Chief Executive Officer. Cowan, who brings extensive executive management experience in software and technology to Epicor, succeeds Pervez Qureshi, who is stepping down to pursue new opportunities. Qureshi will also be stepping down from his position as a director on the Epicor board of directors.
“We are very pleased that Joe Cowan has joined Epicor as its new CEO and will be leading the company in its next phase of growth,” said Jason Wright, a member of the Board of Directors of Epicor. “Joe is a proven executive and strong leader who is highly respected within the technology industry. He possesses outstanding strategic vision and exceptional commercial and operational skills having led a number of major software companies to new levels of growth and profitability.” Along with his appointment as President and CEO, Cowan will be appointed as a member of the Epicor board of directors.
“I am honored to have been selected as the new Epicor CEO and am impressed by the company’s solutions, its relationships with its customers and its opportunities for growth,” said Cowan. “I look forward to working with the experienced Epicor leadership team and our more than 4,800 talented employees worldwide on continuing to deliver innovative technologies and software solutions that help our customers build more successful businesses. I am eager to get started on what will be an exciting new chapter in the Epicor story.”
Most recently, Cowan served as President and CEO of Online Resources, a leading provider of online banking and full-service payment solutions, until its acquisition by ACI Worldwide in March 2013. Previously, he served as CEO of Interwoven, Inc., a global leader in content management software, until its acquisition by Autonomy Corporation plc in 2009. Cowan has significant technology and enterprise software experience having served in a variety of leadership roles at Manugistics, EXE Technologies and Invensys/Baan. Cowan received a B.S. in Electrical Engineering from Auburn University and an M.S. in Engineering from Arizona State University.
“On behalf of the Board, I would like to express our deep appreciation to Pervez Qureshi for his dedication and many contributions to Epicor,” said Wright. “Pervez led us through a complex period during exceptionally challenging economic times in which he oversaw the highly successful integration of Epicor and Activant. He was instrumental in positioning the new Epicor as the global leader it is today with over 20,000 customers in more than 150 countries. We thank Pervez for all that he has accomplished over the past two years and wish him well as he pursues new opportunities.”
Orient Planet PR & Marketing Communications. © 2013 Al Bawaba (www.albawaba.com). Source URL: http://www.albawaba.com/business/pr/epicor-new-president-526734
[2] http://www.epicor.com/Solutions/Pages/Enterprise-Business-Software.aspx | 计算机 |
HTML5 game developer said Amazon's support will help spread the word
Mikael Ricknäs (IDG News Service) on 07 August, 2013 17:09
Developers can now submit Web apps and offer them alongside native Android-based programs on Amazon's Appstore.The change will make it easier for developers to distribute HTML5-based apps via Amazon's store without having to convert them to Android-specific versions."With this announcement we no longer have to perform that post-production work. We can just submit the URL of the game and then Amazon takes care of the rest, so for us its about improving efficiency," said Erik Goossens, CEO at Spil Games, whose Dream Pet Link is one of the first games to take advantage of the new distribution option.Goossens also sees Amazon's move as a statement of support for HTML5, which is important to companies that are betting on the technology."Having a company like Amazon do something like this helps us spread the word," Goossens said, adding that he would like to see the same option on Apple's App Store and Google Play.Developers still have to convert their apps to native iOS and Android versions to make them available via Apple's App Store and Google Play.The development of HTML5 and related technologies such as JavaScript and CSS has had its ups and downs in the last couple of years. Spil, like many other companies, bet too much too early, according to Goossens. But the technologies have now matured to the point where they are great for creating casual games, he said.To help developers, Amazon is offering the Web App Tester, a tool that lets developers test the app on a production-like environment on a Kindle Fire or Android device, without first submitting it to Amazon's store. The tester offers a suite of tools to help developers debug code and ensure apps will look and work great, according the company. Amazon has also published a list of what it thinks are best practices for developing Web apps.Interested developers with HTML5 apps can get started at the Amazon Mobile App Distribution Portal.When announcing its HTML5 apps push Wednesday, Amazon also took the opportunity to highlight the Chromium-based runtime that powers Web apps on its Kindle Fire tablets. With the runtime, the company has made sure Web apps can achieve "native-like performance," Amazon said.The Appstore can be downloaded and installed on any Android-based smartphone, but the store is also a central part of Amazon's Kindle Fire tablet, which ships with it installed. Earlier this year the company expanded the reach of its store by making it available in an additional 200 countries around the world.Send news tips and comments to [email protected]
Mikael Ricknäs | 计算机 |
Snow Leopard Gets a September Ship Date
Snow Leopard, the next major update to OS X, will be available in September, Apple announced during Monday's Worldwide Developer Conference keynote. However, OS X 10.6 will only work on Intel-based Macs, leaving the owners of aging PowerPC-based hardware without the ability to upgrade.
First announced at the 2008 WWDC, Snow Leopard doesn't offer the parade of new features Mac users might have come to expect from a major OS X update. Instead, much of the focus with Snow Leopard has been behind the scenes, with Apple looking to improve the performance and increase the power of its operating system.
During Monday's keynote, Apple senior vice president of software engineering Bertrand Serlet said the next major version of OS X would be characterized by powerful new technologies, refinements to existing features, and support for Microsoft Exchange.
Snow Leopard will cost $29 for Leopard users, with a family pack available for $49. That's a far cry from Apple's usual price on OS X updates--it costs $129 to purchase Leopard, for example.
"We want all users to upgrade to Snow Leopard, because Snow Leopard is a better Leopard," said Serlet of OS X 10.6's price."
The Snow Leopard ship date joins an announcement of new models of MacBook and MacBook Pro and an update to Safari made at the WWDC keynote, available in a PC World live blog.
Macworld will have more details on Snow Leopard's refinements and technologies shortly. | 计算机 |
Town software receives global recognition
Date: April 3, 2013by: Kurt Schultheis | Managing Editor More Photos
The town’s Information Technology Department is a finalist for a prestigious award that IT Director Kathi Pletzke calls “the Emmys for computer geeks like us.”
Just how big of a deal is it? A town Fire Rescue Department software application is in a Computerworld Honors Program Safety & Security category with the likes of Hewlett-Packard and the FBI.
Computerworld, an IT magazine, has named a town firefighter tactical information software app for iPads as one of 268 finalists in 11 different categories (from 29 different countries) for its annual Honors Program awards.
The town’s IT Department, in collaboration with the Fire Rescue Department, created the software application in 2010 that allows firefighters access to tactical information with a tap on an iPad. Until recently, Longboat Key firefighter/paramedics had to flip through a large three-ring binder to get the information that would help them make split-second decisions.
The binders included information such as stairwell locations, building construction details and the gallons per minute that a fire hydrant could pump.
But, because of new software application the town developed, firefighters can now access what they need on an iPad.
The awards “honor visionary applications of information technology moving businesses forward and benefiting society,” according to the Computerworld website. Winners will be announced June 3, in Washington, D.C. “We’re ecstatic,” Pletzke said. “It’s an honor to even be considered for an award of this stature.”
It’s not the first time the town’s application has received praise. The Florida Local Government Information Systems Association recognized the software as “Best Application Serving a Public Organization’s Business Needs” at a July 2012 conference, where Pletzke demonstrated the software to more than 100 people.
Several expressed interest in purchasing the software for their municipalities, which could lead to a new source of revenue for the town. The IT Department has developed similar programs for Longboat Key police, who now complete police reports electronically in the field.
The programs could cut costs, but, more importantly, they make it easier for firefighters and police to do their jobs, giving them more time to spend helping people in the community.
Town Manager Dave Bullock told the Longboat Key Town Commission at its regular meeting Monday night he’s proud of the IT Department for its accomplishment. “It’s really clever to see staff figure out how to invent modern interfaces,” Bullock said. | 计算机 |
Prince is know for his unusual sense of style and funky jams, loving and partying. Now he be known in the file sharing community as "the man". (Source: Prince)
The prince tells the Pirate Bay "Party over, oops out of time"
The Pirate Bay is the Internet's largest Torrent tracker, and has been for some time now. However while other trackers and piracy advocates tend not last very long, The Pirate Bay is actively stirring the pot.For a short time the organization tried to raise money to buy the island nation of Sealand, and more recently it obtained the ifpi.com domain, formerly occupied by the IFPI, the RIAA's parent affiliate. However, its legal battles have come to boil, with Swedish authorities issuing charges against five Pirate Bay admins -- only two of which have actually been identified.Now The Pirate Bay has a surprising new nemesis in the form of the chest-baring, fashioning setting, funk-rock jammer Prince. Prince, recently gave away his album for free in the British newspaper the Mail On Sunday, infuriating his record label. Now Prince has taken a surprising stand in the other direction.Prince Rogers Nelson sees it as his right to defend his copyrights against all who might dare commit what he sees as infringements upon them.Prince plans to launch a triad of attacks, with lawsuits in the U.S., Sweden, and France (France, being known for its strong copyright protections), which aims to put The Pirate Bay's funky feel-good days to an end. Prince is also suing companies who advertise on The Pirate Bay.Prince added some hired guns in the form of John Giacobbi, President of Web Sheriff, and the rest of Web Sheriff to help coordinate his fiery legal onslaught upon on the pirates.Giacobbi already launched into allegations stating that The Pirate Bay makes $70,000 monthly in advertising revenue, a charge The Pirate Bay denies.The Pirate Bay is not the only one who Prince is hoping to make cry like a wounded dove. Prince stated his desire to "reclaim the Internet" and in recent months has become a champion for militant copyright protection, announcing that he would take legal action against YouTube and Ebay, as well as The Pirate Bay.YouTube chief counsel, Zahavah Levine, apparently unimpressed by Prince's legal wrath issued a unconcerned response, "Most content owners understand that we respect copyrights. We work every day to help them manage their content, and we are developing state-of-the-art tools to let them do that even better. We have great partnerships with major music labels all over world that understand the benefit of using YouTube as another way to communicate with their fans."The Pirate Bay also responded, with Pirate Bay admin brokep stating that the organization was not been contacted by Prince, and that Web Sheriff has been sending them take down notices for a prolonged period, which The Pirate Bay's filtering software conveniently trashes.However, even with the world's greatest funkmaster on their heels, all is business as usual at The Pirate Bay. Just last week prosecuters in Sweden vowed to take action against the organization, even though the site continues to adhere to Swedsh copyright laws.Giacobbi, of Web Sheriff countered, "They'll either have to come out and fight or just try and ignore it. In that case, we're going to win a default judgment against them. This could be a ticking time bomb for them. They can't outrun this. We are very confident."There have also been unconfirmed reports of Prince shutting down fan web sites for use of his images, lyrics, songs, or likeness.
RE: When I saw the title, I thought "Prince? I didn't think he was still relevant today."
Probably just as well, you would not want to get sued ;) Parent
Pirate Bay Commandeers Anti-Pirate Website
PirateBay.org Wants to Be Own Country | 计算机 |
Creature House was founded in 1994 by Alex S C Hsu and Irene H H Lee.
Their vector graphics software Expression was built around research in Skeletal Strokes (presented at SIGGRAPH'94). As computer scientists and artists, Hsu and Lee understood the needs of their users; the Expression design featured a functional, intuitive interface, accompanied by a quirky, funny website.
Expression was initially released in September 1996 by publisher Fractal Design. It went on to win many awards. Creature House continued research into computer graphics, and further developed its software. Skeletal Strokes technology enabled animation sequences to have bold outlines and fluid movements. This was unlike any other kind of animation. Expression was on its way to becoming a major software tool for illustration and cel animation.
LivingCels was the last flagship product from Creature House. It used the same drawing engine as Expression, but added a sophisticated animation system. The beta release of LivingCels for Windows and Mac was available for only a few weeks, from August until October 2003. At the end of October 2003, Microsoft acquired the company, and the software vanished. | 计算机 |
Keyloggers: The Undisputed Heavyweight Champions of the Malware World!
Keyloggers work by hooking the Windows or Apple message queue
By Shelly Palmer
Keylogging has taking center stage, and it now deserves our proper attention. After all, keyloggers have been identified as the #1 Global Threat to consumers, corporations and government agencies in the recent 2012 Verizon Data Breach Investigations Report. Symantec Corporation coined 2011 as “The Year of the Hack,” when they saw an 81 percent increase in cyber attacks.
Keyloggers have been credited for many of the world’s most notable breaches: RSA/EMC, Lockheed Martin, Google Epsilon, Oakridge Nuclear Weapons Lab, Citibank, World Bank and tens of millions of consumers around the world.
What makes the keylogger the preferred weapon of choice is that they have been designed to avoid detection from anti-virus and anti-malware programs and the fact that they can be embedded into any type of download (mp3, video) or attached to any type of web link. Social Networking websites, like Facebook, have become the favorite place for hackers to propagate their spyware.
CIO, CTO & Developer Resources Keyloggers work by hooking the Windows or Apple message queue. It is relatively easy to place a hook and inspect all the messages (such as keystroke messages) before they are sent to the application (i.e. desktop application or browser). The keyloggers then log the keystroke messages into a file. Typically, the keylogger communicates with the hacker via an IRC channel and delivers the captured keystroke file to the hacker.
Current anti-virus and anti-malware tools are based on scanning a computer for files with a particular signature. The database containing signatures of known bad files has to be continuously updated. The major caveat in this approach is the existence of the signature of a known problematic file. Spammers and criminals are currently deploying sophisticated software which dynamically changes the file signature, making anti-spam and anti-virus tools no longer effective against keyloggers. Also, there is significant time between detecting a new keylogger on the internet and the anti-keylogging signature being updated on anti-virus/spyware software. This time gap can be a month or longer.
Anti-keylogging keystroke encryption to the rescue. Keystroke encryption uses a different approach to defend against keyloggers. Rather than trying to detect keyloggers, it takes a preventive approach. It takes control of the keyboard at the lowest possible layer in the kernel. The keystrokes are then encrypted and sent to desktop applications and the browser via an “Out-of-Band” channel, bypassing the Windows and Apple messaging queue. Look for keystroke encryption products with a built in self-monitoring capability, such as GuardedID. This prevents it from being bypassed by other software. Published June 25, 2012 Reads 5,171 Copyright © 2012 SYS-CON Media, Inc. — All Rights Reserved.
More Stories By Shelly Palmer
Shelly Palmer is the host of NBC Universal’s Live Digital with Shelly Palmer, a weekly half-hour television show about living and working in a digital world. He is Fox 5′s (WNYW-TV New York) Tech Expert and the host of United Stations Radio Network’s, MediaBytes, a daily syndicated radio report that features insightful commentary and a unique insiders take on the biggest stories in technology, media, and entertainment.
Comments (0) Share your thoughts on this story.
You must be signed in to add a comment.
Sign-in | Register In accordance with our Comment Policy, we encourage comments that are on topic, relevant and to-the-point. We will remove comments that include profanity, personal attacks, racial slurs, threats of violence, or other inappropriate material that violates our Terms and Conditions, and will block users who make repeated violations. We ask all readers to expect diversity of opinion and to treat one another with dignity and respect.
Please wait while we process your request...
Your feedback has been submitted for approval. | 计算机 |
(Redirected from Chinese Simplified)
Logographic
Oracle Bone Script
Seal Script
Clerical Script
Sister systems
Kanji, Chữ Nôm, Hanja, Khitan script, Zhuyin
Hans, 501
Simplified Chinese characters are standardized Chinese characters prescribed in the Xiandai Hanyu Tongyong Zibiao (List of Commonly Used Characters in Modern Chinese) for use in mainland China. Along with traditional Chinese characters, it is one of the two standard character sets of the contemporary Chinese written language. The government of the People's Republic of China in mainland China has promoted them for use in printing since the 1950s and 1960s in an attempt to increase literacy.[1] They are officially used in the People's Republic of China and Singapore.
Traditional Chinese characters are currently used in Hong Kong, Macau, and Republic of China (Taiwan). While traditional characters can still be read and understood by many mainland Chinese and Singaporeans, these groups generally retain their use of Simplified characters. Overseas Chinese communities generally tend to use traditional characters.
Simplified Chinese characters are officially called in Chinese jiǎnhuàzì (简化字 in simplified form, 簡化字 in traditional form).[2] Colloquially, they are called jiǎntizì (简体字 / 簡體字). Strictly, the latter refers to simplifications of character "structure" or "body", character forms that have existed for thousands of years alongside regular, more complicated forms. On the other hand, jiǎnhuàzì means the modern systematically simplified character set, that (as stated by Mao Zedong in 1952) includes not only structural simplification but also substantial reduction in the total number of standardized Chinese characters.[3]
Simplified character forms were created by decreasing the number of strokes and simplifying the forms of a sizable proportion of traditional Chinese characters. Some simplifications were based on popular cursive forms embodying graphic or phonetic simplifications of the traditional forms. Some characters were simplified by applying regular rules, for example, by replacing all occurrences of a certain component with a simplified version of the component. Variant characters with the same pronunciation and identical meaning were reduced to one single standardized character, usually the simplest amongst all variants in form. Finally, many characters were left untouched by simplification, and are thus identical between the traditional and simplified Chinese orthographies.
Some simplified characters are very dissimilar to and unpredictably different from traditional characters, especially in those where a component is replaced by an arbitrary simple symbol.[4] This often leads opponents not well-versed in the method of simplification to conclude that the 'overall process' of character simplification is also arbitrary.[5][6] In reality, the methods and rules of simplification are few and internally consistent.[7] On the other hand, proponents of simplification often flaunt a few choice simplified characters as ingenious inventions, when in fact these have existed for hundreds of years as ancient variants.[8]
A second round of simplifications was promulgated in 1977, but was later retracted for a variety of reasons. However, the Chinese government never officially dropped its goal of further simplification in the future.
In August 2009, the PRC began collecting public comments for a modified list of simplified characters.[9][10][11][12] The new Table of General Standard Chinese Characters consisting of 8105 (simplified and unchanged) characters was promulgated by the State Council of the People's Republic of China on June 5, 2013.[13]
Chinese characters
Oracle-bone
Seal (large
small)
Imitation Song
Strokes (order)
Character-form standards
Kangxi Dictionary
Xin Zixing
Commonly-used characters (PRC)
Forms of frequently-used characters (Hong Kong)
Standard Form of National Characters (RoC Taiwan)
Grapheme-usage standards
Graphemic variants
Hanyu Tongyong Zi
Hanyu Changyong Zi
Tōyō kanji
Jōyō kanji
Traditional characters
Simplified characters
(first round
second round)
Old (Kyūjitai)
New (Shinjitai)
Ryakuji
Yakja
Jiăntǐzì biǎo
Homographs
Literary and colloquial readings
Use in particular scripts
Written Chinese
Zetian characters
Nü Shu
Kanji (Kokuji)
Kana (Man'yōgana)
Hanja (Gukja)
Sawndip
1.1.1 Before 1949
1.1.2 People's Republic of China
1.2 Singapore and Malaysia
2 Method of simplification
2.1 Structural simplification of characters
2.2 Derivation based on simplified character components
2.3 Elimination of variants of the same character
2.4 Adoption of new standardized character forms
2.5 Consistency
3 Distribution and use
3.1 Mainland China
3.3 Taiwan
4.4 Chinese as a foreign language
5 Computer encoding
6 Web pages
Before 1949[edit]
Although most of the simplified Chinese characters in use today are the result of the works moderated by the government of the People's Republic of China (PRC) in the 1950s and 60s, character simplification predates the PRC's formation in 1949. Cursive written text almost always includes character simplification. Simplified forms used in print have always existed; they date back to as early as the Qin dynasty (221–206 BC).
The first batch of Simplified Characters introduced in 1935 consisted of 324 characters.
One of the earliest proponents of character simplification was Lufei Kui, who proposed in 1909 that simplified characters should be used in education. In the years following the May Fourth Movement in 1919, many anti-imperialist Chinese intellectuals sought ways to modernise China. Traditional culture and values such as Confucianism were challenged. Soon, people in the Movement started to cite the traditional Chinese writing system as an obstacle in modernising China and therefore proposed that a reform be initiated. It was suggested that the Chinese writing system should be either simplified or completely abolished. Fu Sinian, a leader of the May Fourth Movement, called Chinese characters the "writing of ox-demons and snake-gods" (牛鬼蛇神的文字). Lu Xun, a renowned Chinese author in the 20th century, stated that, "If Chinese characters are not destroyed, then China will die." (漢字不滅,中國必亡) Recent commentators have claimed that Chinese characters were blamed for the economic problems in China during that time.[14]
In the 1930s and 1940s, discussions on character simplification took place within the Kuomintang government, and a large number of Chinese intellectuals and writers have long maintained that character simplification would help boost literacy in China.[15] In 1935, 324 simplified characters collected by Qian Xuantong were officially introduced as the table of first batch simplified characters and suspended in 1936. In many world languages, literacy has been promoted as a justification for spelling reforms.
The PRC issued its first round of official character simplifications in two documents, the first in 1956 and the second in 1964. The reform met resistance from scholars such as Chen Mengjia, who was an outspoken critic of simplification. Labeled a Rightist when the Anti-Rightist Movement began in 1957, and again severely persecuted during the Cultural Revolution, Chen committed suicide in 1966.[16]
Within the PRC, further character simplification became associated with the leftists of the Cultural Revolution, culminating with the second-round simplified characters, which were promulgated in 1977. In part due to the shock and unease felt in the wake of the Cultural Revolution and Mao's death, the second-round of simplifications was poorly received. In 1986 the authorities retracted the second round completely. Later in the same year, the authorities promulgated a final list of simplifications, which is identical to the 1964 list except for six changes (including the restoration of three characters that had been simplified in the First Round: 叠, 覆, 像; note that the form 疊 is used instead of 叠 in regions using Traditional Chinese).
There had been simplification initiatives aimed at eradicating characters entirely and establishing the Hanyu Pinyin romanization as the official written system of the PRC, but the reform never gained quite as much popularity as the leftists had hoped. After the retraction of the second round of simplification, the PRC stated that it wished to keep Chinese orthography stable. Years later in 2009, the Chinese government released a major revision list which included 8300 characters. No new simplifications were introduced. However, six characters previously listed as "traditional" characters that have been simplified, as well as 51 other "variant" characters were restored to the standard list. In addition, orthographies (e.g., stroke shape) for 44 characters were modified slightly. Also, the practice of simplifying obscure characters by analogy of their radicals is now discouraged. A State Language Commission official cited "oversimplification" as the reason for restoring some characters. The language authority declared an open comment period until August 31, 2009 for feedback from the public.[17]
The officially promulgated version of the List of Commonly Used Standardized Characters, announced in 2013, contained 45 newly recognized standard characters that were previously considered variant forms, as well as a selection of characters simplified by analogy (226 characters) that had seen wide use.
Writer and poet Liu Shahe writes a column on the Chinese edition of the Financial Times dedicated to the criticism of simplified characters.[18]
Singapore and Malaysia
German Dictionary, classical spelling standards
by Tobias Kuban
German Dictionary according to the classical spelling standards for spell-checking in Firefox and Thunderbird
End-User License Agreement
Help support the continued development of German Dictionary, classical spelling standards by making a small contribution through PayPal. How much would you like to contribute?
Support E-mailmoc.liamenulllgoog@gnubierhcsthceretla
German Dictionary, classical spelling standardshttps://addons.mozilla.org/addon/deutsch-alte-rechtschreibung/https://addons.mozilla.org/thunderbird/addon/deutsch-alte-rechtschreibung/http://tobiaskuban.com/firefoxAlteRechtschreibung.phpalterechtschreibung@googlemail.comWhat's new?Learn more about the changes and new features of the latest version here: http://tobiaskuban.com/firefoxAlteRechtschreibung.phpThe open-source Firefox Add-on "German Dictionary (de-DE), classical spelling standards" for spell-checking according to the classical German spelling standards supports Firefox and Thunderbird platform-independently and is based on the LibreOffice Extension "German (de-DE-1901) old spelling dictionaries". The contents of the dictionaries are untouched and in original state. Also the latest versions of Firefox and Thunderbird are supported. If you have questions regarding the Add-on, please feel free to contact me, Tobias Kuban ([email protected]).License:The dictionaries are licensed under GPLv2, GPLv3 or OASIS distribution license agreement, included in text form in folder "dictionaries" of the Add-on. The Add-on itself is licensed under the GNU General Public License, Version 2.Deutsches Wörterbuch (de-DE), alte RechtschreibungWas ist neu?Lesen Sie hier mehr über die Neuerungen und Veränderungen der aktuellen Version: http://tobiaskuban.com/firefoxAlteRechtschreibung.phpDas quelloffene Firefox Add-on "Deutsches Wörterbuch (de-DE), alte Rechtschreibung" für die Rechtschreibprüfung in der alten deutschen Rechtschreibung* unterstützt Firefox und Thunderbird plattformübergreifend und basiert auf der LibreOffice Extension "German (de-DE-1901) old spelling dictionaries". Der Inhalt der Wörterbücher ist unverändert und entspricht den Originalen. Auch die aktuellen Versionen von Firefox und Thunderbird werden unterstützt. Fragen zum Add-on gerne an mich, Tobias Kuban ([email protected]), schicken. * auf mehrfache Anregung hin sei ergänzt, daß diese mitunter auch geläufig ist als klassische deutsche Rechtschreibung, bewährte deutsche Rechtschreibung, bisherige deutsche Rechtschreibung, unreformierte deutsche Rechtschreibung, traditionelle deutsche Rechtschreibung, herkömmliche deutsche Rechtschreibung oder QualitätsrechtschreibungLizenz:Die Wörterbücher stehen unter Lizenz der GPLv2, GPLv3 oder OASIS distribution license agreement, in Textform enthalten im Ordner "dictionaries" des Add-ons. Das Add-on selbst steht unter der GNU General Public License, Version 2.
I recommence to install "Dictionary Switcher" too.
Firefox for Android 12.0 - 32.0, Firefox 3.0 - 32.0, Mobile 5.0 - 32.0, SeaMonkey 2.0 - 2.29, Thunderbird 3.0 - 32.0 Source code released under GNU General Public License, version 2.0 What's this?
GNU GENERAL PUBLIC LICENSEVersion 2, June 1991Copyright (C) 1989, 1991 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USAEveryone is permitted to copy and distribute verbatim copiesof this license document, but changing it is not allowed.PreambleThe licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.The precise terms and conditions for copying, distribution and modification follow.TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) 
Each licensee is addressed as "you".Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.NO WARRANTY11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.END OF TERMS AND CONDITIONS | 计算机 |
Looking for serious team for game
Thread: Looking for serious team for game
ur twin
Let me start off by saying that I know a lot of people come onto these forums and talk about their new games and how they're going to be the best. This post is for people who are serious about making a game that may attract hundreds of thousands of players. This post is for the ones who are really going to sit down for hours upon hours, and complete task given to them, this is for the people who want to take their coding to the next level. If you're that person, then continue reading, if not, then this will do you no good. The game is Explurse, it's a 3D/text based space strategy game that will be built in the browser. Part of it is already built, and I'll get to that later. Explurse is truly revolutionary, and you'll see why. There are so many other games out right now that are just garbage, they're bad and their developers never put much thought into their game. The top 2 games in the genre are Ogame, and Astro Empires. Most other games are just a copy of one of these 2. People think that setting higher speeds and giving players endless resources will make a better game, or that releasing server after server will keep their dieing game alive just a little longer. I am not one of those people, and Explurse is not one of those games. Explurse is an an empire builder, something that other games claim to be, but what are those other games really? What do you actually end up doing on those games? Everything you work for in these other games always ends up being about the same thing, making your "fleet" or military stronger to attack other players and profit. Where is the empire building? Surely there's more to building an empire then just ships. This is what Explurse is about, showing the other side of the genre, showing what it CAN be, and taking the entire genre to the next level. Explurse is more then just a game, it's an entirely new universe with an author writing a very long and deep story line, it has concept art being worked on and many other thing happening, but as great as this all sounds, we have 2 coders, and we need more. The main things I'll reveal about the game at this point is that planets will be proceduraly generated and 3D, they'll have realistic features you can see real-time, such as lightning and massive sand storms. The small portion of the game that is currently up is....in bad shape, and is written in Javascript and Python. We'll most likely just end up trashing that and start fresh. This game has a long and interesting history, and I don't want to get into it. That's all irrelevant now. At this moment, I'm forming the "core" team, the coders who are actually going to own part of the company, and by extension the game. Those who join now will be part of the core team, however, everyone will sign a contract that states that if at any point you fail to do the job detailed in the contract that you will lose your share of the company, so don't think you can just join and get part of it and then not do anything. Explurse is taking things to the next level, and after Explurse, there will be other games, that we will all be making while sitting comfortably in our own game studio. I'm looking for serious coders who have the skill, time, and dedication o take on a project this large. We have a nice development plan written up that will allow developers of all time zones some flexibility. We will not rush the game, you will get breaks, and you will get more then a fair amount of time to complete a task. 
If you're serious about taking part in something big, then email me at: [email protected]
I could go on and comment on how you're throwing some big words out there without any actual proof of the current work or status on the project, without even having so much as a dedicated domain and email adress or anything. I'll leave it up to every person on their own to decide what project to join.
However, if I may put it in a word you didn't use: You're trying to be professional in your work, right?
Then what is so hard about reading forum rules and figure out in which subforum a question like this has to go and in which it doesn't have to go. Geezus… That is what is always the same.
Originally Posted by Airblader
Then what is so hard about reading forum rules and figure out in which subforum a question like this has to go and in which it doesn't have to go. Geezus� That is what is always the same.
I go through multiple forums, If I stopped and read every forum's rules, I wouldn't even get a post out. I not posting to much about the game, and the "dedicated domain" is explurse.com, which I'm currently redoing the site to, so it's down.
All I'm hearing is "My project isn't worth the time for me to actually make sure I follow the forums' rules which allow me to look for people". I find that disrespectful and, more importantly, unprofessional, especially for someone who's promising the new holy grail of games. Besides, it's not even reading rules – it's taking a single second to check whether there is a dedicated subforum for such requests. Which are not even that uncommon for forums of this kind.
Then again, I am just another user here, you don't have to care about what I think. I wish you the best luck with your project.
Last edited by Airblader; 02-05-2013 at 09:08 PM.
Well, now that it's actually in the right place, let's stop bawing about it. | 计算机 |
How to create your own animated GIFs the easy way
You can’t go too far online these days without coming across an animated GIF, either as a quick flashcard-style animation or a short, soundless, looping video. No, that sentence was not written in 1999. Animated GIFs—simple animations made by stringing several still shots together—went quietly out of fashion over the last 12 years in favor of full-on streaming HD digital video and fancy Flash productions.
But now, unbidden and somewhat enigmatically, they’re back. Tumblr in particular is jam-packed with these succinct visual snippets, which are often clever and comical. Those that strike just the right combination tend to go viral, and can end up being viewed by millions of people the world over. Not bad for a file format that is over two decades old.
Graphics Interchange what?
GIF—short for Graphics Interchange Format—was developed in 1987 by CompuServe, and a perpetual argument has raged about how to pronounce the one-syllable acronym, an argument I will not even attempt to negotiate. Suffice it to say that about as many people seem to be proponents of the hard-G pronunciation as of the J. But no matter how you say it, GIF technology allowed the online services company to deliver downloadable color images, a remarkable accomplishment for its time.
The format included two features that quickly earned it favor with programmers as well as the brand-new Web design community: image compression, which reduced the overall file size, and the capacity to hold more than one image (or frame) in a single file. Though it was never intended to serve as a platform for animations, a 1989 revision to the format (popularly known as GIF89a) allowed these frames to be displayed with time delays, permitting frame-by-frame animation. In the mid-1990s, support for the format by the Netscape browser let creators choose the number of times the animation would loop.
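Those two capabilities (multiple frames plus per-frame delays and a loop count) are easy to see by inspecting any animated GIF. Here is a small sketch with the Pillow imaging library; the file name is just a placeholder:

from PIL import Image

with Image.open("example.gif") as im:               # placeholder path
    print("frames:", getattr(im, "n_frames", 1))
    print("loop count:", im.info.get("loop"))       # 0 means loop forever
    for index in range(getattr(im, "n_frames", 1)):
        im.seek(index)                               # jump to frame `index`
        print(index, "delay (ms):", im.info.get("duration"))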
A flood of animated GIFs thereby ensued—no respectable website was considered complete without at least one animated GIF, no matter how inane. From then on, the steady, reliable GIF has endured, due largely to its limited capacity for animation, despite competitors like the PNG (Portable Network Graphics) format.
Not surprisingly, animated GIFs played a key role in the commercialization of the Internet. At the dawn of digital advertising, ad blocks were quite tiny, reflecting the low-resolution displays of the time. To attract attention and maximize their marketing messages, advertisers relied on animated GIFs—short slideshows would display the product, deliver the message, close with the oh-so-amazing price, and then loop over and over again.
As time wore on, the GIF format yielded the advertising space to Flash, but there’s been a bit of a reversal in the past few years. If you’re using a MacBook Air or iOS device, chances are that the animated ads you’re seeing are good ol’ GIFs (if not HTML 5).
GIF limits
Despite its improbable success over time, the GIF format is showing its age. A GIF image is limited to a maximum of 256 colors, which was fine for a typical 8-bit monitor circa 1995, but laughable for a modern Apple Cinema display, for example. And while the palette is variable and can be customized to best match the colors in the source image or animation frames, GIF color representation pales in comparison to the 16.8-million-color palette we’re accustomed to with more modern image formats such as JPEG.
GIF files use a compression technique called LZW (Lempel-Ziv-Welch), which is a lossless algorithm poorly suited to animation and video playback. The greater the pixel dimensions of the document, the higher the color count, and the quicker the frame rate, the larger the resulting file size. A 10-second video clip in MP4 format might be 2MB; the same video clip converted to GIF would be more than 20MB.
GIFs tend to be short and sweet as a result, relying on reduced frame rates and restricted color palettes to keep the file size under control. Without sound, animated GIFs are the digital equivalent of silent movies.
These limitations certainly add to the challenge of creating an interesting GIF—but they may also contribute to its enduring charm.
Creating a GIF
A number of apps allow you to make animated GIFs, bu
Sun Java(TM) System Administration Server 5 2004Q2 Administration Guide
Appendix A Introduction to Public-Key Cryptography
Public-key cryptography and related standards and techniques underlie security features of many Sun Java System products, including signed and encrypted mail, form signing, object signing, single sign-on, and the Secure Sockets Layer (SSL) protocol. This appendix introduces the basic concepts of public-key cryptography. This appendix contains the following sections:
Internet Security Issues
Encryption and Decryption
Certificates and Authentication
Managing Certificates
For an overview of SSL, see Appendix B, "Introduction to SSL."
All communication over the Internet uses the Transmission Control Protocol/Internet Protocol (TCP/IP). TCP/IP allows information to be sent from one computer to another through a variety of intermediate computers and separate networks before it reaches its destination.
The great flexibility of TCP/IP has led to its worldwide acceptance as the basic Internet and intranet communications protocol. At the same time, the fact that TCP/IP allows information to pass through intermediate computers makes it possible for a third party to interfere with communications in the following ways:
Eavesdropping. Information remains intact, but its privacy is compromised. For example, someone could learn your credit card number, record a sensitive conversation, or intercept classified information.
Tampering. Information in transit is changed or replaced and then sent on to the recipient. For example, someone could alter an order for goods or change a person’s resume.
Impersonation. Information passes to a person who poses as the intended recipient. Impersonation can take two forms, Spoofing and Misrepresentation.
Spoofing. A person pretends to be someone else. For example, a person can pretend to have the mail address [email protected], or a computer can identify itself as a site called www.example.com when it is not. This type of impersonation is known as spoofing.
Misrepresentation. A person or organization misrepresents itself. For example, suppose the site www.example.com pretends to be a furniture store when it is really just a site that takes credit-card payments but never sends any goods.
Normally, users of the many cooperating computers that make up the Internet or other networks don’t monitor or interfere with the network traffic that continuously passes through their machines. However, many sensitive personal and business communications over the Internet require precautions that address the threats listed above. Fortunately, a set of well-established techniques and standards known as public-key cryptography make it relatively easy to take such precautions.
Public-key cryptography facilitates the following tasks:
Encryption and decryption allow two communicating parties to disguise information they send to each other. The sender encrypts, or scrambles, information before sending it. The receiver decrypts, or unscrambles, the information after receiving it. While in transit, the encrypted information is unintelligible to an intruder.
Tamper detection allows the recipient of information to verify that it has not been modified in transit. Any attempt to modify data or substitute a false message for a legitimate one is detected.
Authentication allows the recipient of information to determine its origin—that is, to confirm the sender’s identity.
Nonrepudiation prevents the sender of information from claiming at a later date that the information was never sent.

The sections that follow introduce the concepts of public-key cryptography that underlie these capabilities.
Encryption is the process of transforming information so it is unintelligible to anyone but the intended recipient. Decryption is the process of transforming encrypted information so that it is intelligible again. A cryptographic algorithm, also called a cipher, is a mathematical function used for encryption or decryption. In most cases, two related functions are employed, one for encryption and the other for decryption.
With most modern cryptography, the ability to keep encrypted information secret is based not on the cryptographic algorithm, which is widely known, but on a number called a key that must be used with the algorithm to produce an encrypted result or to decrypt previously encrypted information. Decryption with the correct key is simple. Decryption without the correct key is very difficult, and in some cases impossible for all practical purposes. The sections that follow introduce the use of keys for encryption and decryption.
Symmetric-Key Encryption
Public-Key Encryption
Key Length and Encryption Strength
With symmetric-key encryption, the encryption key can be calculated from the decryption key and vice versa. With most symmetric algorithms, the same key is used for both encryption and decryption.
Figure A-1 Symmetric Key Encryption
Implementations of symmetric-key encryption can be highly efficient, so that users do not experience any significant time delay as a result of the encryption and decryption. Symmetric-key encryption also provides a degree of authentication, since information encrypted with one symmetric key cannot be decrypted with any other symmetric key. Thus, as long as the symmetric key is kept secret by the two parties using it to encrypt communications, each party can be sure that it is communicating with the other as long as the decrypted messages continue to make sense.
Symmetric-key encryption is effective only if the symmetric key is kept secret by the two parties involved. If anyone else discovers the key, it affects both confidentiality and authentication. A person with an unauthorized symmetric key not only can decrypt messages sent with that key, but can encrypt new messages and send them as if they came from one of the two parties who were originally using the key.
Symmetric-key encryption plays an important role in the SSL protocol, which is widely used for authentication, tamper detection, and encryption over TCP/IP networks. SSL also uses techniques of public-key encryption, which is described in the next section.
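For illustration only, the following minimal sketch shows symmetric-key encryption and decryption using the standard Java Cryptography Extension (JCE) APIs. It is not code from any Sun Java System product; the choice of the AES algorithm, the 128-bit key size, and the variable names are assumptions made for the example.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricExample {
    public static void main(String[] args) throws Exception {
        // Both parties must share this secret key for the scheme to work.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey secretKey = keyGen.generateKey();

        // Encrypt with the shared key. Production code should name an explicit
        // mode and IV (for example, "AES/CBC/PKCS5Padding") rather than the default.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, secretKey);
        byte[] ciphertext = cipher.doFinal("confidential data".getBytes("UTF-8"));

        // Decrypt with the same key.
        cipher.init(Cipher.DECRYPT_MODE, secretKey);
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, "UTF-8"));
    }
}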
The most commonly used implementations of public-key encryption are based on algorithms patented by RSA Data Security. Therefore, this section describes the RSA approach to public-key encryption.
Public-key encryption (also called asymmetric encryption) involves a pair of keys—a public key and a private key—associated with an entity that needs to authenticate its identity electronically or to sign or encrypt data. Each public key is published, and the corresponding private key is kept secret. (For more information about the way public keys are published, see Certificates and Authentication.) Data encrypted with your public key can be decrypted only with your private key. Figure A-2 shows a simplified view of the way public-key encryption works.

Figure A-2 Public Key Encryption
The scheme shown in Figure A-2 lets you freely distribute a public key, and only you can read data encrypted using this key. In general, to send encrypted data to someone, you encrypt the data with that person’s public key, and the person receiving the encrypted data decrypts it with the corresponding private key.
Compared with symmetric-key encryption, public-key encryption requires more computation and is therefore not always appropriate for large amounts of data. However, it’s possible to use public-key encryption to send a symmetric key, which can then be used to encrypt additional data. This is the approach used by the SSL protocol.
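As an illustration of this pattern, the sketch below generates an RSA key pair and encrypts a small message with the public key so that only the holder of the private key can recover it. It uses the standard Java Cryptography Architecture APIs; the 2048-bit key size, the padding choice, and the variable names are assumptions for the example, not requirements of any Sun Java System product.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class PublicKeyExample {
    public static void main(String[] args) throws Exception {
        // The key pair: the public key is published, the private key is kept secret.
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(2048);
        KeyPair pair = keyGen.generateKeyPair();

        // Anyone may encrypt with the public key. In practice (as in SSL), this
        // typically protects a small symmetric session key rather than bulk data.
        Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        cipher.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = cipher.doFinal("a symmetric session key".getBytes("UTF-8"));

        // Only the private-key holder can decrypt.
        cipher.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        byte[] recovered = cipher.doFinal(ciphertext);
        System.out.println(new String(recovered, "UTF-8"));
    }
}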
As it happens, the reverse of the scheme shown in Figure A-2 also works: data encrypted with your private key can be decrypted only with your public key. This would not be a desirable way to encrypt sensitive data, however, because it means that anyone with your public key, which is by definition published, could decrypt the data. Nevertheless, private-key encryption is useful, because it means you can use your private key to sign data with your digital signature—an important requirement for electronic commerce and other commercial applications of cryptography. Client software can then use your public key to confirm that the message was signed with your private key and that it hasn’t been tampered with since being signed. Digital Signatures and subsequent sections describe how this confirmation process works.
In general, the strength of encryption is related to the difficulty of discovering the key, which in turn depends on both the cipher used and the length of the key. For example, the difficulty of discovering the key for the RSA cipher most commonly used for public-key encryption depends on the difficulty of factoring large numbers, a well-known mathematical problem.
Encryption strength is often described in terms of the size of the keys used to perform the encryption: in general, longer keys provide stronger encryption. Key length is measured in bits. For example, 128-bit keys for use with the RC4 symmetric-key cipher supported by SSL provide significantly better cryptographic protection than 40-bit keys for use with the same cipher. Roughly speaking, 128-bit RC4 encryption is 3 x 10^26 times stronger than 40-bit RC4 encryption. (For more information about RC4 and other ciphers used with SSL, see Appendix B, "Introduction to SSL.")
Different ciphers may require different key lengths to achieve the same level of encryption strength. The RSA cipher used for public-key encryption, for example, can use only a subset of all possible values for a key of a given length, due to the nature of the mathematical problem on which it is based. Other ciphers, such as those used for symmetric key encryption, can use all possible values for a key of a given length, rather than a subset of those values. Thus a 128-bit key for use with a symmetric-key encryption cipher would provide stronger encryption than a 128-bit key for use with the RSA public-key encryption cipher. This difference explains why the RSA public-key encryption cipher must use a 512-bit key (or longer) to be considered cryptographically strong, whereas symmetric key ciphers can achieve approximately the same level of strength with a 64-bit key. Even this level of strength may be vulnerable to attacks in the near future.
Encryption and decryption address the problem of eavesdropping, one of the three Internet security issues mentioned at the beginning of this appendix. But encryption and decryption, by themselves, do not address the other two problems mentioned in Internet Security Issues: tampering and impersonation. This section describes how public-key cryptography addresses the problem of tampering. The sections that follow describe how it addresses the problem of impersonation.
Tamper detection and related authentication techniques rely on a mathematical function called a one-way hash (also called a message digest). A one-way hash is a number of fixed length with the following characteristics:
The value of the hash is unique for the hashed data. Any change in the data, even deleting or altering a single character, results in a different value.
The content of the hashed data cannot, for all practical purposes, be deduced from the hash—which is why it is called “one-way.”
As mentioned in Public-Key Encryption, it’s possible to use your private key for encryption and your public key for decryption. Although this is not desirable when you are encrypting sensitive information, it is a crucial part of digitally signing any data. Instead of encrypting the data itself, the signing software creates a one-way hash of the data, then uses your private key to encrypt the hash. The encrypted hash, along with other information, such as the hashing algorithm, is known as a digital signature. Figure A-3 shows a simplified view of the way a digital signature can be used to validate the integrity of signed data.
Figure A-3 Digital Signing
Figure A-3 shows two items transferred to the recipient of some signed data: the original data and the digital signature, which is basically a one-way hash (of the original data) that has been encrypted with the signer’s private key. To validate the integrity of the data, the receiving software first uses the signer’s public key to decrypt the hash. It then uses the same hashing algorithm that generated the original hash to generate a new one-way hash of the same data. (Information about the hashing algorithm used is sent with the digital signature, although this isn’t shown in the figure.) Finally, the receiving software compares the new hash against the original hash. If the two hashes match, the data has not changed since it was signed. If they don’t match, the data may have been tampered with since it was signed, or the signature may have been created with a private key that doesn’t correspond to the public key presented by the signer.
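A minimal sketch of this sign-and-verify flow using the standard java.security.Signature API appears below; the class computes the one-way hash and the private-key operation in a single step. The SHA256withRSA algorithm name and the variable names are assumptions for the example (the sample certificate later in this appendix happens to use the older MD5 with RSA combination).

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureExample {
    public static void main(String[] args) throws Exception {
        byte[] data = "the document to be signed".getBytes("UTF-8");

        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(2048);
        KeyPair pair = keyGen.generateKeyPair();

        // Signer: hash the data and encrypt the hash with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(data);
        byte[] digitalSignature = signer.sign();

        // Verifier: recompute the hash and check it against the signature
        // using the signer's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(data);
        boolean valid = verifier.verify(digitalSignature);
        System.out.println("Signature valid: " + valid);
    }
}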
If the two hashes match, the recipient can be certain that the public key used to decrypt the digital signature corresponds to the private key used to create the digital signature. Confirming the identity of the signer, however, also requires some way of confirming that the public key really belongs to a particular person or other entity. For a discussion of the way this works, see the next section, Certificates and Authentication.
The significance of a digital signature is comparable to the significance of a handwritten signature. Once you have signed some data, it is difficult to deny doing so later—assuming that the private key has not been compromised or out of the owner’s control. This quality of digital signatures provides a high degree of nonrepudiation—that is, digital signatures make it difficult for the signer to deny having signed the data. In some situations, a digital signature may be as legally binding as a handwritten signature.
A Certificate Identifies Someone or Something
Authentication Confirms an Identity
How Certificates Are Used
Contents of a Certificate
How CA Certificates Are Used to Establish Trust
A certificate is an electronic document used to identify an individual, a server, a company, or some other entity. The certificate also associates that identity with a public key. Like a driver’s license, a passport, or other commonly used personal IDs, a certificate provides generally recognized proof of a person’s identity. Public-key cryptography uses certificates to address the problem of impersonation (see Internet Security Issues.)
To get a driver’s license, you typically apply to a government agency, such as the Department of Motor Vehicles, which verifies your identity, your ability to drive, your address, and other information before issuing the license. To get a student ID, you apply to a school or college, which performs different checks (such as whether you have paid your tuition) before issuing the ID. To get a library card, you may need to provide only your name and a utility bill with your address on it.
Certificates work much the same way as any of these familiar forms of identification. Certificate authorities (CAs) are entities that validate identities and issue certificates. They can be either independent third parties or organizations running their own certificate-issuing server software (such as Sun Java System Certificate Management System). The methods used to validate an identity vary depending on the policies of a given CA—just as the methods to validate other forms of identification vary depending on who is issuing the ID and the purpose for which it is used. In general, before issuing a certificate, the CA must use its published verification procedures for that type of certificate to ensure that an entity requesting a certificate is in fact who it claims to be.
The certificate issued by the CA binds a particular public key to the name of the entity the certificate identifies (such as the name of an employee or a server). Certificates help prevent the use of fake public keys for impersonation. Only the public key certified by the certificate works with the corresponding private key possessed by the entity identified by the certificate.
In addition to a public key, a certificate always includes the name of the entity it identifies, an expiration date, the name of the CA that issued the certificate, a serial number, and other information. Most importantly, a certificate always includes the digital signature of the issuing CA. The CA’s digital signature allows the certificate to function as a “letter of introduction” for users who know and trust the CA but don’t know the entity identified by the certificate.
For more information about the role of CAs, see How CA Certificates Are Used to Establish Trust.
Authentication is the process of confirming an identity. In the context of network interactions, authentication involves the confident identification of one party by another party. Authentication over networks can take many forms. Certificates are one way of supporting authentication.
Network interactions typically take place between a client, such as browser software running on a personal computer, and a server, such as the software and hardware used to host a Web site. Client authentication refers to the confident identification of a client by a server (that is, identification of the person assumed to be using the client software). Server authentication refers to the confident identification of a server by a client (that is, identification of the organization assumed to be responsible for the server at a particular network address).
Client and server authentication are not the only forms of authentication that certificates support. For example, the digital signature on an email message, combined with the certificate that identifies the sender, provide strong evidence that the person identified by that certificate did indeed send that message. Similarly, a digital signature on an HTML form, combined with a certificate that identifies the signer, can provide evidence, after the fact, that the person identified by that certificate did agree to the contents of the form. In addition to authentication, the digital signature in both cases ensures a degree of nonrepudiation—that is, a digital signature makes it difficult for the signer to claim later not to have sent the email or the form.
Client authentication is an essential element of network security within most intranets or extranets. The sections that follow contrast two forms of client authentication:
Password-Based Authentication. Almost all server software permits client authentication by means of a name and password. For example, a server might require a user to type a name and password before granting access to the server. The server maintains a list of names and passwords; if a particular name is on the list, and if the user types the correct password, the server grants access.
Certificate-Based Authentication. Client authentication based on certificates is part of the SSL protocol. The client digitally signs a randomly generated piece of data and sends both the certificate and the signed data across the network. The server uses techniques of public-key cryptography to validate the signature and confirm the validity of the certificate.
Figure A-4 shows the basic steps involved in authenticating a client by means of a name and password. Figure A-4 assumes the following:
The user has already decided to trust the server, either without authentication or on the basis of server authentication via SSL.
The user has requested a resource controlled by the server.
The server requires client authentication before permitting access to the requested resource.
Figure A-4 Using a Password to Authenticate a Client
These are the steps shown in Figure A-4:
In response to an authentication request from the server, the client displays a dialog box requesting the user’s name and password for that server. The user must supply a name and password separately for each new server the user wishes to use during a work session.
The client sends the name and password across the network, either in the clear or over an encrypted SSL connection.
The server looks up the name and password in its local password database and, if they match, accepts them as evidence authenticating the user’s identity.
The server determines whether the identified user is permitted to access the requested resource, and if so allows the client to access it.
With this arrangement, the user must supply a new password for each server, and the administrator must keep track of the name and password for each user, typically on separate servers.
As shown in the next section, one of the advantages of certificate-based authentication is that it can be used to replace the first three steps in Figure A-4 with a mechanism that allows the user to supply just one password (which is not sent across the network) and allows the administrator to control user authentication centrally.
Certificate-Based Authentication
Figure A-5 shows how client authentication works using certificates and the SSL protocol. To authenticate a user to a server, a client digitally signs a randomly generated piece of data and sends both the certificate and the signed data across the network. For the purposes of this discussion, the digital signature associated with some data can be thought of as evidence provided by the client to the server. The server authenticates the user’s identity on the strength of this evidence.
Like Figure A-4, Figure A-5 assumes that the user has already decided to trust the server and has requested a resource, and that the server has requested client authentication in the process of evaluating whether to grant access to the requested resource.

Figure A-5 Using a Certificate to Authenticate a Client
Unlike the process shown in Figure A-4, the process shown in Figure A-5 requires the use of SSL. Figure A-5 also assumes that the client has a valid certificate that can be used to identify the client to the server. Certificate-based authentication is generally considered preferable to password-based authentication because it is based on what the user has (the private key) as well as what the user knows (the password that protects the private key). However, it’s important to note that these two assumptions are true only if unauthorized personnel have not gained access to the user’s machine or password, the password for the client software’s private key database has been set, and the software is set up to request the password at reasonably frequent intervals.
Neither password-based authentication nor certificate-based authentication address security issues related to physical access to individual machines or passwords. Public-key cryptography can only verify that a private key used to sign some data corresponds to the public key in a certificate. It is the user’s responsibility to protect a machine’s physical security and to keep the private-key password secret.
The client software maintains a database of the private keys that correspond to the public keys published in any certificates issued for that client. The client asks for the password to this database the first time the client needs to access it during a given session—for example, the first time the user attempts to access an SSL-enabled server that requires certificate-based client authentication. After entering this password once, the user doesn’t need to enter it again for the rest of the session, even when accessing other SSL-enabled servers.
The client unlocks the private-key database, retrieves the private key for the user’s certificate, and uses that private key to digitally sign some data that has been randomly generated for this purpose on the basis of input from both the client and the server. This data and the digital signature constitute “evidence” of the private key’s validity. The digital signature can be created only with that private key and can be validated with the corresponding public key against the signed data, which is unique to the SSL session.
The client sends both the user’s certificate and the evidence (the randomly generated piece of data that has been digitally signed) across the network. The server uses the certificate and the evidence to authenticate the user’s identity. (For a detailed discussion of the way this works, see Appendix B, "Introduction to SSL.")
At this point the server may optionally perform other authentication tasks, such as checking that the certificate presented by the client is stored in the user’s entry in an LDAP directory. The server then continues to evaluate whether the identified user is permitted to access the requested resource. This evaluation process can employ a variety of standard authorization mechanisms, potentially using additional information in an LDAP directory, company databases, and so on. If the result of the evaluation is positive, the server allows the client to access the requested resource.

As you can see by comparing Figure A-5 to Figure A-4, certificates replace the authentication portion of the interaction between the client and the server. Instead of requiring a user to send passwords across the network throughout the day, single sign-on requires the user to enter the private-key database password just once, without sending it across the network. For the rest of the session, the client presents the user’s certificate to authenticate the user to each new server it encounters. Existing authorization mechanisms based on the authenticated user identity are not affected.

How Certificates Are Used
Types of Certificates
SSL Protocol
Signed and Encrypted Email
Form Signing
Object Signing
Five kinds of certificates are commonly used with Sun Java System products:
Client SSL certificates. Used to identify clients to servers via SSL (client authentication). Typically, the identity of the client is assumed to be the same as the identity of a human being, such as an employee in an enterprise. See Certificate-Based Authentication for a description of the way client SSL certificates are used for client authentication. Client SSL certificates can also be used for form signing and as part of a single sign-on solution.
Examples: A bank gives a customer a client SSL certificate that allows the bank’s servers to identify that customer and authorize access to the customer’s accounts. A company might give a new employee a client SSL certificate that allows the company’s servers to identify that employee and authorize access to the company’s servers.

Server SSL certificates. Used to identify servers to clients via SSL (server authentication). Server authentication may be used with or without client authentication. Server authentication is a requirement for an encrypted SSL session. For more information, see SSL Protocol.
Example: Internet sites that engage in electronic commerce usually support certificate-based server authentication, at a minimum, to establish an encrypted SSL session and to assure customers that they are dealing with a web site identified with a particular company. The encrypted SSL session ensures that personal information sent over the network, such as credit card numbers, cannot easily be intercepted.
S/MIME certificates. Used for signed and encrypted mail. As with client SSL certificates, the identity of the client is typically assumed to be the same as the identity of a human being, such as an employee in an enterprise. A single certificate may be used as both an S/MIME certificate and an SSL certificate (see Signed and Encrypted Email). S/MIME certificates can also be used for form signing and as part of a single sign-on solution.
Examples: A company deploys combined S/MIME and SSL certificates solely for the purpose of authenticating employee identities, thus permitting signed email and client SSL authentication but not encrypted email. Another company issues S/MIME certificates solely for the purpose of both signing and encrypting email that deals with sensitive financial or legal matters.
Object-signing certificates. Used to identify signers of Java code, JavaScript scripts, or other signed files. For more information, see Object Signing.
Example: A software company signs software distributed over the Internet to provide users with some assurance that the software is a legitimate product of that company. Using certificates and digital signatures in this manner can also make it possible for users to identify and control the kind of access downloaded software has to their computers.
CA certificates. Used to identify CAs. Client and server software use CA certificates to determine what other certificates can be trusted. For more information, see How CA Certificates Are Used to Establish Trust.
Example: The CA certificates stored in client software determine what other certificates that client can authenticate. An administrator can implement some aspects of corporate security policies by controlling the CA certificates stored in each user’s client.
The sections that follow describes how certificates are used by Sun Java System products.
SSL Protocol

The Secure Sockets Layer (SSL) protocol is a set of rules governing server authentication, client authentication, and encrypted communication between servers and clients. SSL is widely used on the Internet, especially for interactions that involve exchanging confidential information such as credit card numbers. SSL requires a server SSL certificate, at a minimum. As part of the initial “handshake” process, the server presents its certificate to the client to authenticate the server’s identity. The authentication process uses public-key encryption and digital signatures to confirm that the server is in fact the server it claims to be. Once the server has been authenticated, the client and server use techniques of symmetric-key encryption, which is very fast, to encrypt all the information they exchange for the remainder of the session and to detect any tampering that may have occurred.
Servers may optionally be configured to require client authentication as well as server authentication. In this case, after server authentication is successfully completed, the client must also present its certificate to the server to authenticate the client’s identity before the encrypted SSL session can be established.
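For example, a client written in Java can open an SSL connection, trigger the handshake, and inspect the server’s certificate chain with the standard javax.net.ssl classes, as in the hedged sketch below. The host name www.example.com and port 443 are placeholders; configuring client authentication would additionally require a keystore containing the client’s certificate and private key.

import java.security.cert.Certificate;
import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SslClientExample {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("www.example.com", 443)) {
            // Performs server authentication and negotiates the symmetric session keys.
            socket.startHandshake();

            SSLSession session = socket.getSession();
            System.out.println("Cipher suite: " + session.getCipherSuite());

            // The server's certificate chain, leaf certificate first.
            for (Certificate cert : session.getPeerCertificates()) {
                System.out.println(cert);
            }
        }
    }
}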
For an overview of client authentication over SSL and how it differs from password-based authentication, see Authentication Confirms an Identity. For more detailed information about SSL, see Appendix B, "Introduction to SSL."
Signed and Encrypted Email

Some mail programs support digitally signed and encrypted mail using a widely accepted protocol known as Secure Multipurpose Internet Mail Extension (S/MIME). Using S/MIME to sign or encrypt mail messages requires the sender of the message to have an S/MIME certificate.
An email message that includes a digital signature provides some assurance that it was in fact sent by the person whose name appears in the message header, thus providing authentication of the sender. If the digital signature cannot be validated by the mail software on the receiving end, the user is alerted.
The digital signature is unique to the message it accompanies. If the message received differs in any way from the message that was sent—even by the addition or deletion of a comma—the digital signature cannot be validated. Therefore, signed mail also provides some assurance that the mail has not been tampered with. As discussed at the beginning of this appendix, this kind of assurance is known as nonrepudiation. In other words, signed mail makes it very difficult for the sender to deny having sent the message. This is important for many forms of business communication. (For information about the way digital signatures work, see Digital Signatures.)
S/MIME also makes it possible to encrypt email messages. This is also important for some business users. However, using encryption for email requires careful planning. If the recipient of encrypted email messages loses his or her private key and does not have access to a backup copy of the key, for example, the encrypted messages can never be decrypted.

Form Signing
Many kinds of e-commerce require the ability to provide persistent proof that someone has authorized a transaction. Although SSL provides transient client authentication for the duration of an SSL connection, it does not provide persistent authentication for transactions that may occur during that connection. S/MIME provides persistent authentication for mail, but e-commerce often involves filling in a form on a web page rather than sending a mail message.
The Sun Java System technology known as form signing addresses the need for persistent authentication of financial transactions. Form signing allows a user to associate a digital signature with web-based data generated as the result of a transaction, such as a purchase order or other financial document. The private key associated with either a client SSL certificate or an S/MIME certificate may be used for this purpose. When a user clicks the Submit button on a web-based form that supports form signing, a dialog box appears that displays the exact text to be signed. The form designer can either specify the certificate that should be used or allow the user to select a certificate from among client SSL and S/MIME certificates. When the user clicks OK, the text is signed, and both the text and the digital signature are submitted to the server. The server can then use a Sun Java System utility called the Signature Verification Tool to validate the digital signature.
Single Sign-On

Network users are frequently required to remember multiple passwords for the various services they use. For example, a user might have to type different passwords to log into the network, collect mail, use directory services, use the corporate calendar program, and access various servers. Multiple passwords are an ongoing headache for both users and system administrators. Users have difficulty keeping track of different passwords, tend to choose poor ones, and tend to write them down in obvious places. Administrators must keep track of a separate password database on each server and deal with potential security problems related to the fact that passwords are sent over the network routinely and frequently.
Solving this problem requires some way for a user to log in once, using a single password, and get authenticated access to all network resources that user is authorized to use—without sending any passwords over the network. This capability is known as single sign-on.
Both client SSL certificates and S/MIME certificates can play a significant role in a comprehensive single sign-on solution. For example, one form of single sign-on supported by Sun Java System products relies on SSL client authentication (see Certificate-Based Authentication.) A user can log in once, using a single password to the local client’s private-key database, and get authenticated access to all SSL-enabled servers that user is authorized to use—without sending any passwords over the network. This approach simplifies access for users, because they don’t need to enter passwords for each new server. It also simplifies network management, since administrators can control access by controlling lists of certificate authorities (CAs) rather than much longer lists of users and passwords.
In addition to using certificates, a complete single sign-on solution must address the need to interoperate with enterprise systems, such as the underlying operating system, that rely on passwords or other forms of authentication.
Object Signing

Sun Java System products support a set of tools and technologies called object signing. Object signing uses standard techniques of public-key cryptography to let users get reliable information about code they download in much the same way they can get reliable information about shrink-wrapped software. Most importantly, object signing helps users and network administrators implement decisions about software distributed over intranets or the Internet—for example, whether to allow Java applets signed by a given entity to use specific computer capabilities on specific users’ machines.
The “objects” signed with object signing technology can be applets or other Java code, JavaScript scripts, plug-ins, or any kind of file. The “signature” is a digital signature. Signed objects and their signatures are typically stored in a special file called a JAR file. Software developers and others who wish to sign files using object-signing technology must first obtain an object-signing certificate.
The contents of certificates supported by Sun Java System and many other software companies are organized according to the X.509 v3 certificate specification, which has been recommended by the International Telecommunications Union (ITU), an international standards body, since 1988.
Users don’t usually need to be concerned about the exact contents of a certificate. However, system administrators working with certificates may need some familiarity with the information provided here.
Distinguished Names
An X.509 v3 certificate binds a distinguished name (DN) to a public key. A DN is a series of name-value pairs, such as uid=doe, that uniquely identify an entity—that is, the certificate subject.
For example, this might be a typical DN for an employee of Sun Microsystems, Inc.:
uid=jdoe,[email protected],cn=John Doe,dc=sun,dc=com,c=US
The abbreviations before each equal sign in this example have these meanings:
uid: user ID
e: email address
cn: the user’s common name
o: organization
c: country
DNs may include a variety of other name-value pairs. They are used to identify both certificate subjects and entries in directories that support the Lightweight Directory Access Protocol (LDAP). The rules governing the construction of DNs can be quite complex and are beyond the scope of this appendix. For comprehensive information about DNs, see A String Representation of Distinguished Names at the following URL:
http://www.ietf.org/rfc/rfc1485.txt
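As an aside, the standard javax.naming.ldap.LdapName class can parse an RFC 2253-style DN string into its component name-value pairs. The sketch below is illustrative only, and the example DN it parses is hypothetical.

import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class DnExample {
    public static void main(String[] args) throws Exception {
        LdapName dn = new LdapName("uid=jdoe,cn=John Doe,o=Example Industry,c=US");
        // Each relative distinguished name (RDN) is one name-value pair.
        for (Rdn rdn : dn.getRdns()) {
            System.out.println(rdn.getType() + " = " + rdn.getValue());
        }
    }
}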
A Typical Certificate
Every X.509 certificate consists of two sections:
The data section includes the following information:
The version number of the X.509 standard supported by the certificate.
The certificate’s serial number. Every certificate issued by a CA has a serial number that is unique among the certificates issued by that CA.
Information about the user’s public key, including the algorithm used and a representation of the key itself.
The DN of the CA that issued the certificate.
The period during which the certificate is valid (for example, between 1:00 p.m. on November 15, 2003 and 1:00 p.m. November 15, 2004)
The DN of the certificate subject (for example, in a client SSL certificate this would be the user’s DN), also called the subject name.

Optional certificate extensions, which may provide additional data used by the client or server. For example, the certificate type extension indicates the type of certificate—that is, whether it is a client SSL certificate, a server SSL certificate, a certificate for signing email, and so on. Certificate extensions can also be used for a variety of other purposes.
The signature section includes the following information:
The cryptographic algorithm, or cipher, used by the issuing CA to create its own digital signature. For more information about ciphers, see Appendix B, "Introduction to SSL."
The CA’s digital signature, obtained by hashing all of the data in the certificate together and encrypting it with the CA's private key.
Here are the data and signature sections of a certificate in human-readable format:
Version: v3 (0x2)
Serial Number: 3 (0x3)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: OU=Ace Certificate Authority, O=Example Industry, C=US
Validity:
Not Before: Fri Oct 17 18:36:25 2003
Not After: Sun Oct 17 18:36:25 2004
Subject: CN=Jane Doe, OU=Finance, O=Example Industry, C=US
Subject Public Key Info:
Algorithm: PKCS #1 RSA Encryption
Public Key:
Modulus:
00:ca:fa:79:98:8f:19:f8:d7:de:e4:49:80:48:e6:2a:2a:86:
ed:27:40:4d:86:b3:05:c0:01:bb:50:15:c9:de:dc:85:19:22:
43:7d:45:6d:71:4e:17:3d:f0:36:4b:5b:7f:a8:51:a3:a1:00:
98:ce:7f:47:50:2c:93:36:7c:01:6e:cb:89:06:41:72:b5:e9:
73:49:38:76:ef:b6:8f:ac:49:bb:63:0f:9b:ff:16:2a:e3:0e:
9d:3b:af:ce:9a:3e:48:65:de:96:61:d5:0a:11:2a:a2:80:b0:
7d:d8:99:cb:0c:99:34:c9:ab:25:06:a8:31:ad:8c:4b:aa:54:
91:f4:15
Public Exponent: 65537 (0x10001)
Extensions:
Identifier: Certificate Type
Critical: no
Certified Usage:
SSL Client
Identifier: Authority Key Identifier
Key Identifier:
f2:f2:06:59:90:18:47:51:f5:89:33:5a:31:7a:e6:5c:fb:36:
26:c9
Algorithm: PKCS #1 MD5 With RSA Encryption
6d:23:af:f3:d3:b6:7a:df:90:df:cd:7e:18:6c:01:69:8e:54:65:fc:06:
30:43:34:d1:63:1f:06:7d:c3:40:a8:2a:82:c1:a4:83:2a:fb:2e:8f:fb:
f0:6d:ff:75:a3:78:f7:52:47:46:62:97:1d:d9:c6:11:0a:02:a2:e0:cc:
2a:75:6c:8b:b6:9b:87:00:7d:7c:84:76:79:ba:f8:b4:d2:62:58:c3:c5:
b6:c1:43:ac:63:44:42:fd:af:c8:0f:2f:38:85:6d:d6:59:e8:41:42:a5:
4a:e5:26:38:ff:32:78:a1:38:f1:ed:dc:0d:31:d1:b0:6d:67:e9:46:a8:
d:c4
Here is a certificate displayed in the base-64 encoded form interpreted by software:
-----BEGIN CERTIFICATE-----
MIICKzCCAZSgAwIBAgIBAzANBgkqhkiG9w0BAQQFADA3MQswCQYDVQQGEwJVUzER
MA8GA1UEChMITmV0c2NhcGUxFTATBgNVBAsTDFN1cHJpeWEncyBDQTAeFw05NzEw
MTgwMTM2MjVaFw05OTEwMTgwMTM2MjVaMEgxCzAJBgNVBAYTAlVTMREwDwYDVQQK
EwhOZXRzY2FwZTENMAsGA1UECxMEUHViczEXMBUGA1UEAxMOU3Vwcml5YSBTaGV0
dHkwgZ8wDQYJKoZIhvcNAQEFBQADgY0AMIGJAoGBAMr6eZiPGfjX3uRJgEjmKiqG
7SdATYazBcABu1AVyd7chRkiQ31FbXFOGD3wNktbf6hRo6EAmM5/R1AskzZ8AW7L
iQZBcrXpc0k4du+2Q6xJu2MPm/8WKuMOnTuvzpo+SGXelmHVChEqooCwfdiZywyZ
NMmrJgaoMa2MS6pUkfQVAgMBAAGjNjA0MBEGCWCGSAGG+EIBAQQEAwIAgDAfBgNV
HSMEGDAWgBTy8gZZkBhHUfWJM1oxeuZc+zYmyTANBgkqhkiG9w0BAQQFAAOBgQBt
I6/z07Z635DfzX4XbAFpjlRl/AYwQzTSYx8GfcNAqCqCwaSDKvsuj/vwbf91o3j3
UkdGYpcd2cYRCgKi4MwqdWyLtpuHAH18hHZ5uvi00mJYw8W2wUOsY0RC/a/IDy84
hW3WWehBUqVK5SY4/zJ4oTjx7dwNMdGwbWfpRqjd1A==
-----END CERTIFICATE-----
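Client and server software parse such certificates rather than reading them by hand. As a hedged illustration, the standard java.security.cert.CertificateFactory class can read a certificate in either the base-64 (PEM) or binary DER encoding and expose the fields described above; the file name cert.pem below is a placeholder.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class ReadCertificateExample {
    public static void main(String[] args) throws Exception {
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream("cert.pem")) {
            X509Certificate cert = (X509Certificate) factory.generateCertificate(in);

            System.out.println("Subject: " + cert.getSubjectDN());
            System.out.println("Issuer:  " + cert.getIssuerDN());
            System.out.println("Serial:  " + cert.getSerialNumber());
            System.out.println("Valid:   " + cert.getNotBefore() + " to " + cert.getNotAfter());

            cert.checkValidity(); // throws an exception outside the validity period
        }
    }
}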
How CA Certificates Are Used to Establish Trust

Certificate authorities (CAs) are entities that validate identities and issue certificates. They can be either independent third parties or organizations running their own certificate-issuing server software (such as the Sun Java System Certificate Management System).
Any client or server software that supports certificates maintains a collection of trusted CA certificates. These CA certificates determine which other certificates the software can validate—in other words, which issuers of certificates the software can trust. In the simplest case, the software can validate only certificates issued by one of the CAs for which it has a certificate. It’s also possible for a trusted CA certificate to be part of a chain of CA certificates, each issued by the CA above it in a certificate hierarchy. The sections that follow explain how certificate hierarchies and certificate chains determine what certificates software can trust.
CA Hierarchies
Certificate Chains
Verifying a Certificate Chain
In large organizations, it may be appropriate to delegate the responsibility for issuing certificates to several different certificate authorities. For example, the number of certificates required may be too large for a single CA to maintain; different organizational units may have different policy requirements; or it may be important for a CA to be physically located in the same geographic area as the people to whom it is issuing certificates.
It’s possible to delegate certificate-issuing responsibilities to subordinate CAs. The X.509 standard includes a model for setting up a hierarchy of CAs.
Figure A-6 A Hierarchy of Certificate Authorities
In this model, the root CA is at the top of the hierarchy. The root CA’s certificate is a self-signed certificate: that is, the certificate is digitally signed by the same entity—the root CA—that the certificate identifies. The CAs that are directly subordinate to the root CA have CA certificates signed by the root CA. CAs under the subordinate CAs in the hierarchy have their CA certificates signed by the higher-level subordinate CAs.
Organizations have a great deal of flexibility in terms of the way they set up their CA hierarchies. Figure A-6 shows just one example; many other arrangements are possible.

Certificate Chains
CA hierarchies are reflected in certificate chains. A certificate chain is a series of certificates issued by successive CAs. Figure A-7 shows a certificate chain leading from a certificate that identifies some entity through two subordinate CA certificates to the CA certificate for the root CA (based on the CA hierarchy shown in Figure A-6).
Figure A-7 A Certificate Chain
A certificate chain traces a path of certificates from a branch in the hierarchy to the root of the hierarchy. In a certificate chain, the following occur:
Each certificate is followed by the certificate of its issuer.
Each certificate contains the name (DN) of that certificate’s issuer, which is the same as the subject name of the next certificate in the chain.
In Figure A-7, the Engineering CA certificate contains the DN of the CA (that is, USA CA) that issued that certificate. USA CA’s DN is also the subject name of the next certificate in the chain.

Each certificate is signed with the private key of its issuer. The signature can be verified with the public key in the issuer’s certificate, which is the next certificate in the chain.
In Figure A-7, the public key in the certificate for the USA CA can be used to verify the USA CA’s digital signature on the certificate for the Engineering CA.
Verifying a Certificate Chain

Certificate chain verification is the process of making sure a given certificate chain is well-formed, valid, properly signed, and trustworthy. Sun Java System software uses the following procedure for forming and verifying a certificate chain, starting with the certificate being presented for authentication:

1. The certificate validity period is checked against the current time provided by the verifier’s system clock.

2. The issuer’s certificate is located. The source can be either the verifier’s local certificate database (on that client or server) or the certificate chain provided by the subject (for example, over an SSL connection).

3. The certificate signature is verified using the public key in the issuer’s certificate.

4. If the issuer’s certificate is trusted by the verifier in the verifier’s certificate database, verification stops successfully here. Otherwise, the issuer’s certificate is checked to make sure it contains the appropriate subordinate CA indication in the Sun Java System certificate type extension, and chain verification returns to step 1 to start again, but with this new certificate.

Figure A-8 presents an example of this process.
Figure A-8 Verifying A Certificate Chain
Figure A-8 shows what happens when only Root CA is included in the verifier’s local database. If a certificate for one of the intermediate CAs shown in Figure A-8, such as Engineering CA, is found in the verifier’s local database, verification stops with that certificate, as shown in Figure A-9.
Figure A-9 Verifying A Certificate Chain to an Intermediate CA
Expired validity dates, an invalid signature, or the absence of a certificate for the issuing CA at any point in the certificate chain causes authentication to fail. For example, Figure A-10 shows how verification fails if neither the Root CA certificate nor any of the intermediate CA certificates are included in the verifier’s local database.
Figure A-10 A Certificate Chain that Cannot Be Verified
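The same chain-forming and verification procedure is available to Java applications through the standard PKIX certificate path APIs, as in the hedged sketch below. The variable names leafCert, intermediateCert, and rootCert are assumptions (already-parsed X509Certificate objects), and revocation checking is disabled only to keep the example short.

import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.TrustAnchor;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import java.util.Collections;

public class ChainValidationExample {
    public static void validate(X509Certificate leafCert,
                                X509Certificate intermediateCert,
                                X509Certificate rootCert) throws Exception {
        // The chain to verify: end-entity certificate first, then its issuer.
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        CertPath path = factory.generateCertPath(Arrays.asList(leafCert, intermediateCert));

        // The trusted root CA plays the role of the verifier's local database.
        TrustAnchor anchor = new TrustAnchor(rootCert, null);
        PKIXParameters params = new PKIXParameters(Collections.singleton(anchor));
        params.setRevocationEnabled(false); // production code should check CRLs or OCSP

        // Throws CertPathValidatorException if any link in the chain fails.
        CertPathValidator validator = CertPathValidator.getInstance("PKIX");
        validator.validate(path, params);
    }
}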
For general information about the way digital signatures work, see Digital Signatures. For a more detailed description of the signature verification process in the context of SSL client and server authentication, see Appendix B, "Introduction to SSL."
The set of standards and services that facilitate the use of public-key cryptography and X.509 v3 certificates in a network environment is called the public key infrastructure (PKI). PKI management is a complex topic beyond the scope of this appendix. The sections that follow introduce some of the specific certificate management issues addressed by Sun Java System products.
Issuing Certificates
Certificates and the LDAP Directory
Key Management
Renewing and Revoking Certificates
Registration Authorities
The process for issuing a certificate depends on the certificate authority that issues it and the purpose for which it is used. The process for issuing nondigital forms of identification varies in similar ways. For example, if you want to get a generic ID card (not a driver’s license) from the Department of Motor Vehicles in California, the requirements are straightforward: you need to present some evidence of your identity, such as a utility bill with your address on it and a student identity card. If you want to get a regular driving license, you also need to take a test—a driving test when you first get the license, and a written test when you renew it. If you want to get a commercial license for an eighteen-wheeler, the requirements are much more stringent. If you live in some other state or country, the requirements for various kinds of licenses differ.
Similarly, different CAs have different procedures for issuing different kinds of certificates. In some cases the only requirement may be your mail address. In other cases, your UNIX login and password may be sufficient. At the other end of the scale, for certificates that identify people who can authorize large expenditures or make other sensitive decisions, the issuing process may require notarized documents, a background check, and a personal interview.
Depending on an organization’s policies, the process of issuing certificates can range from being completely transparent for the user to requiring significant user participation and complex procedures. In general, processes for issuing certificates should be highly flexible, so organizations can tailor them to their changing needs.
Sun Java System Certificate Management System allows an organization to set up its own certificate authority and issue certificates. Issuing certificates is one of several management tasks that can be handled by separate Registration Authorities.
The Lightweight Directory Access Protocol (LDAP) for accessing directory services supports great flexibility in the management of certificates within an organization. System administrators can store much of the information required to manage certificates in an LDAP-compliant directory. For example, a CA can use information in a directory to prepopulate a certificate with a new employee’s legal name and other information. The CA can leverage directory information in other ways to issue certificates one at a time or in bulk, using a range of different identification techniques depending on the security policies of a given organization. Other routine management tasks, such as key management and renewing and revoking certificates, can be partially or fully automated with the aid of the directory.
Information stored in the directory can also be used with certificates to control access to various network resources by different users or groups. Issuing certificates and other certificate management tasks can thus be an integral part of user and group management.
In general, high-performance directory services are an essential ingredient of any certificate management strategy. Directory Server is fully integrated with Sun Java System Certificate Management System to provide a comprehensive certificate management solution.
Before a certificate can be issued, the public key it contains and the corresponding private key must be generated. Sometimes it may be useful to issue a single person one certificate and key pair for signing operations, and another certificate and key pair for encryption operations. Separate signing and encryption certificates make it possible to keep the private signing key on the local machine only, thus providing maximum nonrepudiation, and to back up the private encryption key in some central location where it can be retrieved in case the user loses the original key or leaves the company.
Keys can be generated by client software or generated centrally by the CA and distributed to users via an LDAP directory. There are trade-offs involved in choosing between local and centralized key generation. For example, local key generation provides maximum nonrepudiation, but may involve more participation by the user in the issuing process. Flexible key management capabilities are essential for most organizations.
Key recovery, or the ability to retrieve backups of encryption keys under carefully defined conditions, can be a crucial part of certificate management (depending on how an organization uses certificates). Key recovery schemes usually involve an m of n mechanism: for example, m of n managers within an organization might have to agree, and each contribute a special code or key of their own, before a particular person’s encryption key can be recovered. This kind of mechanism ensures that several authorized personnel must agree before an encryption key can be recovered.
Like a driver’s license, a certificate specifies a period of time during which it is valid. Attempts to use a certificate for authentication before or after its validity period fail. Therefore, mechanisms for managing certificate renewal are essential for any certificate management strategy. For example, an administrator may wish to be notified automatically when a certificate is about to expire, so that an appropriate renewal process can be completed in plenty of time without causing the certificate’s subject any inconvenience. The renewal process may involve reusing the same public-private key pair or issuing a new one.
A driver’s license can be suspended even if it has not expired—for example, as punishment for a serious driving offense. Similarly, it’s sometimes necessary to revoke a certificate before it has expired—for example, if an employee leaves a company or moves to a new job within the company.
Certificate revocation can be handled in several different ways. For some organizations, it may be sufficient to set up servers so that the authentication process includes checking the directory for the presence of the certificate being presented. When an administrator revokes a certificate, the certificate can be automatically removed from the directory, and subsequent authentication attempts with that certificate fail even though the certificate remains valid in every other respect. Another approach involves publishing a certificate revocation list (CRL)—that is, a list of revoked certificates—to the directory at regular intervals and checking the list as part of the authentication process. For some organizations, it may be preferable to check directly with the issuing CA each time a certificate is presented for authentication. This procedure is sometimes called real-time status checking.
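As a hedged sketch of the CRL approach, the standard CertificateFactory class can also parse a published revocation list, after which an application can test an individual certificate against it. The file name revocations.crl and the variable cert are placeholders.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509CRL;
import java.security.cert.X509Certificate;

public class CrlCheckExample {
    public static boolean isRevoked(X509Certificate cert) throws Exception {
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream("revocations.crl")) {
            X509CRL crl = (X509CRL) factory.generateCRL(in);
            // True if the CA that issued the CRL has revoked this certificate.
            return crl.isRevoked(cert);
        }
    }
}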
Interactions between entities identified by certificates (sometimes called end entities) and CAs are an essential part of certificate management. These interactions include operations such as registration for certification, certificate retrieval, certificate renewal, certificate revocation, and key backup and recovery. In general, a CA must be able to authenticate the identities of end entities before responding to the requests. In addition, some requests need to be approved by authorized administrators or managers before being serviced.
As previously discussed, the means used by different CAs to verify an identity before issuing a certificate can vary widely, depending on the organization and the purpose for which the certificate is used. To provide maximum operational flexibility, interactions with end entities can be separated from the other functions of a CA and handled by a separate service called a Registration Authority (RA).
An RA acts as a front end to a CA by receiving end entity requests, authenticating them, and forwarding them to the CA. After receiving a response from the CA, the RA notifies the end entity of the results. RAs can be helpful in scaling a PKI across different departments, geographical areas, or other operational units with varying policies and authentication requirements.
Copyright 2004 Sun Microsystems, Inc. All rights reserved.
Four months after the controversial $90 million Medicaid computer system finally began operating, some providers say they aren't getting paid properly, while another said her office was being paid 10 times the expected amount on some claims.The Medicaid Management Information System has been frequently delayed since being contracted in 2005 to a firm now owned by Xerox.It is causing ongoing frustration, with no end in sight, according to Bruce Burns, Concord Hospital's chief financial officer."We've got a $5 million backlog in Medicaid payments," Burns said. Some of the confusion involved simple coding problems and billing rules that should have been caught before the system went live March 31, Burns said.Concord Hospital isn't alone. "Anybody touched by Medicaid recipients is being impacted by this," Burns said.It is taking far too long to get up and running, Burns said. "It was not well-managed through this process," he said.Burns said he has tried to find out when it will be working properly. "They seem to not want to commit to a time frame to get the problems corrected," he said.Health and Human Services Commissioner Nicholas Toumpas acknowledged there have been problems since the system went live, but they were to be expected with such a huge project."We have been dealing with a major systems conversion," Toumpas said. "I'm not saying everything is perfect, but it has gone remarkably smoothly. The system is stable, and I'm pleased overall with where we are."Tina Emery, a reimbursement specialist at Seacoast Orthopedics & Sport Medicine in Somersworth, said Medicaid paid $500 for one claim that she would have expected to receive only $50. Another was paid at $300 when she expected $50.When she calls the state about the problems, they don't know the reason, Emery said. She is then forced to fill out more forms.The orthopedic surgery practice has 10 doctors and six physician assistants. Emery suspects the $500 payment for a surgical assistant was paid out as if it were a surgeon by mistake.She has also received a denial for an assistant surgeon that didn't make sense, especially since Medicaid rules haven't changed in the meantime."In my mind, it should have been paid," Emery said. "When I called to ask why it was denied, the girl couldn't tell me. They can't seem to answer questions. They don't know the answers."Emery, otherwise, praised the workers on the other end of the phone as being pleasant, even if they didn't know how to fix the problems."They need to work out the bugs," she said.Emery was told there is no billing manual, a guide that explains the rules and regulations and how to bill insurers."I was told the powers that be were still working on the billing manual," Emery said.The governor and Executive Council approved a $60 million contract to upgrade the Medicaid computer system in 2005, but the cost has risen to $90 million because of changes required by the state and federal governments, according to Toumpas."We have not paid anything more for the core system for which we contracted," Toumpas said. "What has changed is the additional costs for modifications we wanted or the federal government mandated."Kevin Lightfoot, vice president of communications for Xerox, said in an email: "... I did hear back from the state (our client) and I understand they will be addressing your questions regarding the New Hampshire MMIS contract."Gov. Maggie Hassan didn't respond to a request for an interview about the MMIS computer system. 
Her spokesman, Marc Goldberg, sent an email stating: "Implementing the new MMIS system, an innovative program that will significantly improve coordination of health care services, is a substantial undertaking."Goldberg said Health and Human Services could better respond when asked how the computer payment system would improve the coordination of health care services and copied the question via email to Health and Human Services spokesman Kris Nielsen.Nielsen said there was no one in the office Friday afternoon who could answer that question, that it would be answered next week.Goldberg's email went on to say: "Like all new systems of this scope, issues will arise that need to be worked through and addressed as quickly as possible, and we are working closely with DHHS to ensure that they are addressing issues and keeping providers informed about the implementation process."In New Hampshire, Medicaid, a government insurance plan for some low-income people, pays about $1.1 billion in claims a year for about 130,000 recipients to 14,000 providers. Medicaid is a 50-50 split of state and federal money, Toumpas said.Toumpas said there have been 15 payment cycles since the system went live, paying about $20 million in claims a week.Toumpas said he hadn't heard any concerns about overpayments or duplicative payments. The system has been very stable, with little down time, he said.But he concedes that about 40 percent of the claims have been suspended until further analysis is done."Over 60 percent of claims are paid or denied appropriately, and roughly 40 percent are going into suspense," Toumpas said, meaning no determination would be made on whether to pay without researching the claim.Providers were told at the outset they could receive contingency payments to make sure their cash flow wasn't interrupted, he said.A new call center has opened with 15 to 20 people to respond to questions, Toumpas said, adding Xerox is paying for them."If somebody has other issues and frustrations, the appropriate venue is to come talk with me," Toumpas said. "I encourage them to do that."[email protected] | 计算机 |
2014-23/0755/en_head.json.gz/1929 | Iomega Screenplay Pro
Connect directly to TV, Digital Audio, Firewire connection
Expensive, no Component/DVI output, no support for WMV
The Iomega Screenplay Pro shares many of the limitations of its predecessor, but the extra storage space, addition of Firewire and digital audio output are all significant improvements
(Selling at 1 store)
The Iomega Screenplay Pro, while incorporating some worthwhile improvements over its predecessor, also retains many of the same flaws. One of our criticisms of the previously released Screenplay was the size of the drive - a paltry 60GB. The Screenplay Pro is much larger, giving users a much more practical 200GB of storage. As a standalone external drive, we had no problems using the Screenplay Pro. It was recognised immediately by Windows XP as a drive and we could drag and drop files with ease. In our performance tests, the Screenplay Pro performed faster than the LaCie Brick and the Maxtor One Touch II, but not as fast as the Seagate 100GB. The biggest difference with using this unit compared to the Screenplay is that the Pro is permanently anchored to a power source at all times, whereas the Screenplay doesn't need mains power as it draws its power from the USB port of the PC it's attached to. In the looks department, the Screenplay Pro is much larger than the Screenplay, with the oddly shaped case punctured by holes on either side to help with cooling. It is also much heavier and not nearly as portable. All the connections are situated at the rear of the unit, with multimedia functions on the front. A power indicator light has been placed on top which changes colour whenever the remote is pressed.Unfortunately, our complaint with the regular screenplay unit is once again a problem with the Pro. Both use a composite connection to connect to a TV or give you the option of using S-Video. Composite doesn't offer particularly notable performance, nor does it make use of the superior display qualities of flat panel displays, so the picture quality is average at best. On the up side, Iomega has included a Firewire connection on this unit, making it easier to transfer content from digital video cameras.One aspect of the player we found frustrating is the rigidity of the folder structure. When you establish a connection between the Pro and your PC, three folders are displayed for music, movies and pictures respectively. In order for content to be displayed on a TV, it must be placed in the correct folder or it will not show up. You cannot create your own folders to store content as they won't show up either. You can however, create subfolders under the three main folders to store files in. The restrictions were not present in the Screenplay and we much prefer the flexibility of that model.When we connected the unit to a Sharp Aquos LCD TV, we were a little disappointed at the sharpness of the displayed image - a product of the composite connection. The S-Video output proved to be far superior. After we selected the correct TV input, three menu items were displayed on startup and you can select each of them using the supplied remote to view pictures, movies or music.The remote control provided with the Pro is infinitely superior to that provided with the Screenplay. It has a more ergonomic feel to it and also includes many more options. The problem with the remote, - and one that proved increasingly frustrating, - is that it must be pointing exactly at the front of the unit in order to work. This means you will have to align the unit precisely and stand directly in front of it to operate the remote each time you use it.We also found that while the remote contained shortcut buttons for Movies, Music or Pictures, you cannot press these when media is playing. You actually have to stop the currently playing track first and then select the shortcut to jump back to the menu. 
Ideally, users should be able to jump between content with a minimum of fuss.The Pro has undergone significant improvement in the music stakes, supporting a much larger number of file formats such as MP3, WMA, WAV, AAC, AC3 and OGG Vorbis. The unit also has a SPDIF output at the rear, allowing you to experience digital audio output if you have a compatible home theatre system. The Pro supports the same video and picture formats as the earlier Screenplay, with the addition of ISO. | 计算机 |
The Apache Software Foundation has announced that, after being in the Apache Incubator since June 2011, the Apache OpenOffice project has now graduated and become a top-level project (TLP) of the foundation. Graduation recognises that the project has, in the seventeen months it has spent in the incubator, shown that it is able to manage itself in a transparent meritocracy, gaining new volunteers and electing a Project Management Committee (PMC) to oversee the project's direction. The incubating project released Apache OpenOffice 3.4 in May 2012, with support for 20 languages, and has subsequently seen over 20 million downloads of the open source office suite. The project is currently working on new functionality with future major releases targeted for the first and fourth quarter of 2013. These will likely include the integration of some of IBM's Symphony code, which was contributed to Apache OpenOffice in May 2012.
Created in the 1990s by StarDivision, the code base was acquired in 1999 by Sun Microsystems who set out to open source it. However, the licence terms were cumbersome and the development model was not transparent with most of the work on the code taking place at Sun. When Oracle acquired Sun in 2010, a large number of developers and contributors forked OpenOffice.org, as it was known, to create the LGPLv3-licensed LibreOffice; this has attracted wide support from the free software community. Given this fork, Oracle decided to contribute the ten million lines of OpenOffice.org source code to the Apache Software Foundation. There, it has been renamed as Apache OpenOffice and re-licensed under the Apache 2.0 Licence (after removing or replacing licence-incompatible components). IBM, which developed Symphony as a hybrid Eclipse/OpenOffice package, committed itself to the Apache OpenOffice project at the start of 2012.
Apache OpenOffice can be downloaded from the Apache OpenOffice downloads site along with source code in tarballs. Current OpenOffice source can also be found in the OpenOffice Subversion repository.
OpenOffic | 计算机 |
(Redirected from Network router)
A Cisco ASM/2-32EM router deployed at CERN in 1987
A router is a device that forwards data packets between computer networks. This creates an overlay internetwork, as a router is connected to two or more data lines from different networks. When a data packet comes in one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node.[1]
The most familiar type of routers are home and small office routers that simply pass data, such as web pages, email, IM, and videos between the home computers and the Internet. An example of a router would be the owner's cable or DSL router, which connects to the Internet through an ISP. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone. Though routers are typically dedicated hardware devices, use of software-based routers has grown increasingly common.
Applications
When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections, (such as copper cables, fiber optic, or wireless transmission). It also contains firmware for different networking Communications protocol standards. Each network interface uses this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another.
Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnets addresses recorded in the router do not necessarily map directly to the physical interface connections.[2] A router has two stages of operation called planes:[3]
Control plane: A router records a routing table listing what route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured directives, called static routes, or by learning routes using a dynamic routing protocol. Static and dynamic routes are stored in the Routing Information Base (RIB). The control-plane logic then strips the RIB from non essential directives and builds a Forwarding Information Base (FIB) to be used by the forwarding-plane.
A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections
Forwarding plane: The router forwards data packets between incoming and outgoing interface connections. It routes it to the correct network type using information that the packet header contains. It uses data recorded in the routing table control plane.
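As a rough illustration of what the forwarding plane does with the FIB, the sketch below performs a longest-prefix-match lookup over a tiny, made-up IPv4 table. It is only a conceptual model — real routers use tries or TCAM hardware and populate the table from the RIB — and the prefixes and interface names are invented for the example:

    # Illustrative forwarding-table (FIB) lookup using longest-prefix match.
    import ipaddress

    fib = {
        ipaddress.ip_network("0.0.0.0/0"): "ge-0/0/0",   # default route
        ipaddress.ip_network("10.0.0.0/8"): "ge-0/0/1",
        ipaddress.ip_network("10.1.2.0/24"): "ge-0/0/2",
    }

    def next_hop_interface(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matches = [net for net in fib if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
        return fib[best]

    print(next_hop_interface("10.1.2.7"))   # ge-0/0/2 (most specific route)
    print(next_hop_interface("8.8.8.8"))    # ge-0/0/0 (falls back to default)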
Routers may provide connectivity within enterprises, between enterprises and the Internet, and between internet service providers (ISPs) networks. The largest routers (such as the Cisco CRS-1 or Juniper T1600) interconnect the various ISPs, or may be used in large enterprise networks.[4] Smaller routers usually provide connectivity for typical home and office networks. Other networking solutions may be provided by a backbone Wireless Distribution System (WDS), which avoids the costs of introducing networking cables into buildings.
All sizes of routers may be found inside enterprises.[5] The most powerful routers are usually found in ISPs, academic and research facilities. Large businesses may also need more powerful routers to cope with ever increasing demands of intranet data traffic. A three-layer model is in common use, not all of which need be present in smaller networks.[6]
Access
A screenshot of the LuCI web interface used by OpenWrt. This page configures Dynamic DNS.
Access routers, including 'small office/home office' (SOHO) models, are located at customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmwares like Tomato, OpenWrt or DD-WRT.[7]
Distribution
Distribution routers aggregate traffic from multiple access routers, either at the same site, or to collect the data streams from multiple sites to a major enterprise location. Distribution routers are often responsible for enforcing quality of service across a WAN, so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks.
Security
External networks must be carefully considered as part of the overall security strategy. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Many companies produced security-oriented routers, including Cisco Systems' PIX and ASA5500 series, Juniper's Netscreen, Watchguard's Firebox, Barracuda's variety of mail-oriented devices, and many others. Routers also commonly perform network address translation, which allows multiple devices on a network to share a single public IP address.[8][9][10]
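The network address translation mentioned above can be pictured with the following toy sketch: many private (address, port) pairs are rewritten to distinct ports on a single public address, and replies are mapped back. It is a deliberately simplified model with invented addresses — real NAT implementations also track protocol, connection state, and timeouts:

    # Toy sketch of source NAT: one public address shared by many hosts.
    PUBLIC_IP = "203.0.113.5"          # documentation address, not a real host

    nat_table = {}                     # (private_ip, private_port) -> public_port
    next_port = 40000

    def translate_outbound(private_ip, private_port):
        global next_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(public_port):
        # Reverse lookup so replies reach the right internal host.
        for (ip, port), pub in nat_table.items():
            if pub == public_port:
                return ip, port
        return None

    print(translate_outbound("192.168.1.10", 51515))  # ('203.0.113.5', 40000)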
Core
In enterprises, a core router may provide a "collapsed backbone" interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth, but lack some of the features of Edge Routers.[11] | 计算机 |
There are only 1,000 spots available in the current registry
Valve has announced that it is allowing users to apply for the Steam for Linux beta.
Valve is specifically looking for experienced users who are familiar with Linux and are running Ubuntu 12.04 or above. This is likely because the beta is in its very early stages and needs thorough debugging. Users who are newer to Linux are being asked to wait until the next beta release to apply. There are only 1,000 spots available in the current registry, so those who fit the bill can sign up through their Steam accounts. Valve will follow up with users afterward. Steam is a digital distribution, multiplayer and communications platform that distributes video games online from small developers to larger software companies. Source: Joystiq
RE: I am in favor of this...
I have tried it...multiple times.

Driver support is far from complete. If you're lucky, the distro you've downloaded will recognize everything and contain drivers for all your bits. If not, and there's a huge chance it's not, you're off on your own trying to figure out how to find drivers (and maybe no Linux drivers exist at all for some of your bits) and then how to install them.

...and the problem is that WINE, first of all, exists...as noted, it's a symptom of the problem - not a solution. The problem is that Linux has no mainstream software industry support. I really, really wish it did. But it doesn't. WINE is a band-aid on that headwound.

And then to explain to an average PC user that "well, probably a lot of the programs you want to use will work in WINE, but some won't" is an absolute death knell for the OS. NOBODY outside of a handful of enthusiasts are going to spend the slightest amount of time on an OS where there's even the slightest chance that some piece of software they want to use won't work.

And that's the state of Linux, past and present. I'm hoping that Steam will help change that in the future...your apparent assertion that that future is already here, though, is false.
2014-23/0755/en_head.json.gz/4003 | TOSS is a Linux distribution targeted especially at engineers and developers, while giving convenience and ease-of-use to laymen. It is a spin-off from Ubuntu. The core of Ubuntu has been retained with minimal changes, enabling users to retain its more popular and useful features, but providing a completely different look and feel. Despite the eye-candy offered with a variety of user-friendly interfaces, TOSS mainly targets student developers. It offers the user gcc-build-essential, OpenSSL, PHP, Java, gEda, xCircuit, KLogic, KTechlab, and a variety of other essential programs for engineering and application development.
GPLv2 | Linux Distributions | Operating System
HotSpotEngine
HotSpotEngine is a Web based software for the HotSpot Billing System and all-in-one hotspot management solutions. It supports wireless or wired networking. It is designed to run on a dedicated PC, and it is available as an installable CD image (ISO). It comes with a Linux-based OS and all required software included. Its main features include the ability to create randomly generated vouchers, prepaid user accounts with time limits or data limits, the ability to refill vouchers, and user sign-up via PayPal integration.
Commercial | Internet | Communications | Networking | Linux Distributions
BugOS
BugOS is a microkernel operating system. It has a kernel, device drivers, a file system, and an Internet module. The main concepts are that every process runs as if it had its own computer with its own console, along with security and modularization. If a process wants to read a file, it asks the kernel. The kernel forwards the request to the filesystem driver, which reads and writes through the partition handler, which operates over the idehdd driver. The kernel is around 20 KB. Processes are fully separated from hardware.
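The request-forwarding chain described above can be pictured with a short sketch. This is not BugOS code — just an illustration of the microkernel idea that a process only talks to the kernel, which relays the request down a chain of driver modules instead of letting processes touch hardware directly:

    # Illustrative message-forwarding chain (hypothetical module names).
    class Module:
        def __init__(self, name, lower=None):
            self.name, self.lower = name, lower

        def handle(self, request):
            print(f"{self.name}: handling {request}")
            if self.lower:                      # forward toward the hardware
                return self.lower.handle(request)
            return f"data for {request}"

    ide_driver = Module("idehdd driver")
    partition  = Module("partition handler", ide_driver)
    filesystem = Module("filesystem driver", partition)
    kernel     = Module("kernel", filesystem)

    # A user process only ever talks to the kernel.
    print(kernel.handle("read /etc/motd"))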
Beerware License | x86 | Operating System | MIPS | ARM
openmamba
openmamba is a fully featured GNU/Linux distribution for desktops, notebooks, netbooks, and servers. It runs on computers based on the 32-bit Intel x86 architecture, or on 64-bit AMD processors in 32-bit mode. openmamba comes with both free and closed source drivers for the most frequently used video cards. It supports compiz out of the box. It has preinstalled multimedia codecs, and can install the most frequently used closed source applications for GNU/Linux (such as Flash Player or Skype) very easily.
GPLv3 | Operating System | Linux distribution
Non-Microsoft browsers, such as the Opera browser and the Mozilla and Firefox browsers made by the Mozilla Foundation, don't have many of the vulnerable technologies and tend to focus more on just providing Internet browsing features, keeping the project size smaller, said Hakon Wium Lie, chief technology officer of Opera Software, which makes the browser of the same name. "Our code base is small, compared to other browsers, and by actively addressing problems that arise, we end up with a highly secure browser," Lie said. Such a focus differs from Microsoft, which has chosen to tightly integrate IE into the operating system, in part to sidestep antitrust issues. A representative of the software giant was not available for comment. The suggestion to use other browsers also underscores some security researchers' arguments that software diversity can improve security. Borrowing a term from agriculture and the fight against pests, software developers and security experts have warned about the hazards of "monoculture." The term refers to the widespread farming of a single variety, making the entire crop vulnerable to a single pest. Historians pin such disasters as the Irish potato famine on monoculture. Mozilla acknowledged that much of the value of using its software, or that of Opera, stemmed from the hazards of monoculture rather than any inherent security superiority. Microsoft's browser currently dominates the Internet landscape, with more than 95 percent of Web surfers using the browser, according to WebSideStory, a Web analytics firm. Mozilla, on the other hand, makes up 3.5 percent, and Opera accounts for 0.5 percent of all users of the sites monitored by WebSideStory. "Since there is such a disproportionate use of IE on the Internet right now, it does make it a very high-profile target," said Chris Hofmann, the Mozilla Foundation's director of engineering. "That's what people who are writing exploits are targeting, because that's where they get the biggest bang for the buck." Hofmann called the war against software homogeneity one of the raisons d'etre of his group. "If we were in a world where there were less of a monoculture for browsers, it would make it harder to design exploits that would affect that much of the marketplace," Hofmann said. "That's one of the driving forces of the Mozilla Foundation--to provide choices so that someone can't come up with an exploit that affects nearly the whole population." IE a sitting duck? But Mozilla claims some inherent security advantages as well. Internet Explorer is a fat target for attackers, in large part because it supports powerful, propriety Microsoft technologies that are notoriously weak on security, like ActiveX. Security experts also noted that Web surfers using non-Microsoft operating systems, such as Linux or Apple Computer's Mac OS, were not affected by last week's attack. Among security groups advising a browser switch is the U.S. Computer Emergency Readiness Team (US-CERT), the official U.S. body responsible for defending against online threats. The group on Friday advised security administrators to consider moving to a non-Microsoft browser among six possible responses. "There are a number of significant vulnerabilities in technologies relating to" IE, the advisory stated. "It is possible to reduce exposure to these vulnerabilities by using a different Web browser, especially when browsing untrusted sites." 
The advisory noted that Internet Explorer has had a great many security problems in several of its key technologies, such as Active X scripting, its zone model for security and JavaScript. However, the group pointed out that turning off certain features in IE increases the security.
"Using another Web browser is just one possibility," said Art Manion, Internet security analyst with the CERT Coordination Center, which administers US-CERT. "We don't recommend any product over another product. On the other hand, it is naive to say that that consideration should not play into your security model." CERT also noted that people who opt for non-IE browsers but who continue to run the Windows operating system are still at risk because of the degree to which the OS itself relies on IE functionality. Mozilla's Hofmann recommended that Windows users who want to ditch Internet Explorer increase their security level in Windows' Internet options to help thwart those kinds of attacks. While Windows comes by default with those options on "medium," Hofmann said that setting them to "high" would have offered sufficient protection against last week's exploit. He also encouraged Web developers to stop writing Web sites that rely on ActiveX. Game and photo-uploading sites are among the worst offenders, he said. "We encourage people not to use these proprietary technologies that we've seen security vulnerabilities associated with," Hofmann said. "ActiveX is one of the biggest areas where these exploits have occurred, and from these recent exploits, you can see that exposing users and making that technology available has some real danger. Sites need to rethink what they're doing to protect users." 12
IE is "the one"
as much as I am not a big fan of monopoly, but I have to admit that IE rules!

On my laptop, I have Opera 7.50, Netscape 7.1, and IE 6. Most companies write codes directly for IE. If you have Outlook Web Access, the choice of browser is clear: IE. If you have Bank of America account, the choice is clear: IE. Ironically, even if you use hotmail or yahoo mail, in order to take advantage of all features, you have to use IE.

Even with as many flaws as it has, Microsoft products are becoming mature and stable. I remember working and installing windows 3.11, NT 3.x, 4.0, etc... always had to reboot and gave (all of us) blue screen of death and so on. But Win2000 did and still does an amazing job. In our company where I am responsible for IT infrastructure, we didn't even see the need to upgrade to win2003.

Having said that, most of our web-based applications are IE friendly. Even those apps from a Unix background have an IE ready front-end.

Personally, I don't see, at least in the near future, people migrating from IE to Opera or Netscape.

I must add that I love Opera's interface and it's tabulated paging... but it just doesn't work the same IE does with tables and dhtml and javascripts and java.

well, that's all.
Posted by (1
the IE misnomer
it is true that many web dev teams write specifically for IE. why this has occurred is debatable but probably due to a lack of IE dev for 3+ years and the need to find creative workarounds for the deluge of issues IE has with XML, CSS, JavaScript, etc.

I have used Mozilla for over 2 years now and the latest release of FireFox 0.9 -- only 4.7MB on Win32 -- is the best thing that could happen to a web developer. Not only does it force us to use W3C standards, but it comes with excellent debugging facilities as opposed to the dizzying "Error occured on page" msgs with the occasional impotent Windows Script Debugger...

Anyway, just thought I would speak up for the little guy! By the way, I use every feature on BofA.com with greater confidence through Mozilla firefox.

more info: http://www.mozilla.org/products/firefox/
cross platform CMS that works in mozilla: http://www.enthusiastinc.com
Writing codes directly for Internet Explorer?
If HTML is so universal, if XML is so extensible, then why would anyone have to write custom code JUST for Internet Explorer?

Well, because Microsoft implements proprietary technology that only works with Explorer, doesn't support certain standards correctly, or, it quite simply renders the code WRONG and people have to code around the faults in the render engine. Since explorer is the most widely used browser, that often leads to companies writing two entirely different webpages: one that works with Explorer, and another that works with absolutely everything else.

Explorer is not the dominant browser because it's good, it's the dominant browser because Microsoft illegally used its dominant position in the market place to kill off all other competition. Having to write custom code (not because you want to or because you think the browser is great) just so you can support the "number one" browser on the market is stupid.

People need to wake up and move away from Explorer, indeed, Windows all together. There are so many better products out there from all sorts of great vendors if they're given a chance.
Posted by olePigeon
June 28, 2004 11:08 PM (PDT)
Standards?
That's because IE and pages coded for it don't adhere to standards (yes, there are some things the others don't supply - but not in 95% of the pages). IE is much better with SP2 though, until someone figures out how to bypass the security.
Posted by Stupendoussteve
June 29, 2004 10:55 AM (PDT)
Ummmm, wasn't JavaScript the vulnerability??
I agree that ActiveX is not secure, but the exploit was with Javascript technology. That was highly undermentioned in this article. I also agree with Pat's statement regarding the way alternative browsers handle the most common elements of the most popular web pages - DHTML, JavaScript, and TABLES!!! I realize that this is due to IEs popularity and IEs methods for handling these elements dictates how most developers code, but until there's consistency in that area people will continue to use IE. I also have to say that Norton Internet Security does a great job alerting users to the presence of ActiveX and Javascripts (when set up properly). It also gives you the option to allow this per website. I think that if you're connecting to the internet, I highly suggest this product or similar ones such as ZoneAlarm Pro.

As for the monoculture discussion, I love how open source advocates think that their products are so superior. I don't feel that at all. They're great if you want to sacrifice the full functionality of their proprietary counterparts. Secondly, it is easy to claim security when you're not the big target. If these products had equal or better market share, they would be exploited just the same as Microsoft's.
Posted by jamie.p.walsh
superior?
well, i try not to enter tech religious wars but had one correction here. I suppose many people might look at one software app as 'better' than the other for various reasons but ultimately it is all in progress. software is never done -- it is always developing.

this is the big win with open source (OS) software. imagine if any Microsoft partner could release a patch for the latest trojan? we would have had it within hours, not weeks...there is a large team available for this purpose on many fronts with OS software nowadays and this is good for joe customer like me. :D

http://osdl.org/about_osdl/members/
http://www.jboss.com
http://www.mysql.com
http://www.openoffice.org/
http://www.opengroupware.org/

thx.jc
No, it's Internet Explorer.
It's not JavaScript, but how Microsoft implemented the technology into Internet Explorer. The flaw does not affect any other browser, regardless of whether they're using JavaScript.

Which is probably why I use a Mac to avoid all those problems.
Netscape > IE > FireFox
I used to swear off IE until IE 4.0, when Netscape started getting bloated. IE has been great, and i've used it until Firebird 0.8 was released, and renamed FireFox. Now FireFox 0.9 is out, and it's great! Tabbed windows, easy to use extensions, Flash/Shockwave/Java support. Livejournal tie-ins, and themes, plus the pop-up blocking which is unsurpassed, and it's ease of use... I can't say enough about it! It's small, streamlined, and Mozilla.org even provides instructions on how to load Firefox onto a USB drive so you can bring your browser with you wherever you go! You can't beat that!

I've had a few security run-ins with IE, and so far, nothing with Firefox. I can't give up IE, because certain companies only support that, but overall... Firefox is awesome, I can't deny the truth that it's time to switch!

Note: Oddly enough, there must be something wrong with news.com that I can't post this with Firefox, and need to use IE!
Posted by Jahntassa
No problems with FireFox here....
Odd....no problems here.
Posted by Jonathan
June 29, 2004 6:18 AM (PDT) | 计算机 |
Linked by Thom Holwerda on Mon 20th Jul 2009 19:16 UTC The Linux desktop has come a long way. It's a fully usable, stable, and secure operating system that can be used quite easily by the masses. Not too long ago, Sun figured they could do the same by starting Project Indiana, which is supposed to deliver a complete distribution of OpenSolaris in a manner similar to GNU/Linux. After using the latest version for a while, I'm wondering: why?
RE[4]: personal impressions... by kawazu on Tue 21st Jul 2009 20:00 UTC in reply to "RE[3]: personal impressions..."
Hi there; well... you don't really have to kind of "evangelize" me regarding OpenSolaris... I spent most of my university life working with old Sun / Solaris workstations and for sure are affectionate towards Solaris and in some ways enthusiastic about the possibilities OpenSolaris does offer. And I have to admit that I am using Sun stuff (NetBeans, Glassfish, not talking about Java of course... :>) wherever possible. Personally, as well, I think many of the features provided by OpenSolaris generally are good, but then again, talking about an open source system, are they really tied to OpenSolaris? ZFS so far also does exist as a (fuse) port for GNU/Linux users. Maybe (not sure, though) DTrace also might be ported to GNU/Linux or other Unixoid systems - I'm not sure. The only thing I know is, off-hand, that Sun in many respects failed about OpenSolaris. Why on earth that strange "Java Desktop System" (basically a modified GNU/Linux) a couple of years ago? Why does it take so long to make OpenSolaris stable? Why is there no "real" developer community around OpenSolaris so far, comparing to GNU/Linux or the *BSDs? Why, talking about DTrace in example, doesn't OpenSolaris come with a straightforward, powerful GUI tooling for these features to allow (desktop/developer) users to easily get started with these tools? Why, at the moment, is the set of hardware supported by OpenSolaris (being a company-backed operating system) still felt to be years behind what the Linux kernel provides here? Why, to get back to this example, does a system like Debian cleanly and quickly install packages within a couple of seconds or minutes where OpenSolaris IPS still takes rather long to install obscurely named packages to strange places like /opt/csw/ or /usr/gnu? I think that, given some more love years ago, OpenSolaris by now could be predominant. The way it is, right now it has to compete with GNU/Linux on the operating system, not even talking about Windows or MacOS X (which, as I disturbedly had to realize, seems to be the OS of choice amongst most of my Sun contacts... so much for that). Asides this, just to add another example: When JavaFX was released, I just was into testing OpenSolaris, and I felt enthusiastic about JavaFX as well, just to figure out that - what? A technology released by Sun, in its initial release not supporting the operating system also released by Sun? That's simply dumb, from a marketing point of view, in my opinion... So, overally: I hope the Sun/Oracle merger won't affect OpenSolaris all too much, or maybe a community will be capable of dropping in keeping OpenSolaris running even without Sun being there backing the project anymore. I still see work to be done, and I won't hesitate also testing out future releases. Let's see where it's heading... | 计算机 |
This manual describes how to use Oracle BPEL Process Manager.
This preface contains the following topics:
This manual is intended for anyone who is interested in using Oracle BPEL Process Manager.
Our goal is to make Oracle products, services, and supporting documentation accessible, with good usability, to the disabled community. To that end, our documentation includes features that make information available to users of assistive technology. This documentation is available in HTML format, and contains markup to facilitate access by the disabled community. Accessibility standards will continue to evolve over time, and Oracle is actively engaged with other market-leading technology vendors to address technical obstacles so that our documentation can be accessible to all of our customers. For more information, visit the Oracle Accessibility Program Web site at
http://www.oracle.com/accessibility/
Accessibility of Code Examples in Documentation
Screen readers may not always correctly read the code examples in this document. The conventions for writing code require that closing braces should appear on an otherwise empty line; however, some screen readers may not always read a line of text that consists solely of a bracket or brace.
Accessibility of Links to External Web Sites in Documentation
This documentation may contain links to Web sites of other companies or organizations that Oracle does not own or control. Oracle neither evaluates nor makes any representations regarding the accessibility of these Web sites.
TTY Access to Oracle Support Services
Oracle provides dedicated Text Telephone (TTY) access to Oracle Support Services within the United States of America 24 hours a day, seven days a week. For TTY support, call 800.446.2398.
For more information, see the following Oracle resources:
Oracle BPEL Process Manager Quick Start Guide
Oracle BPEL Process Manager Order Booking Tutorial
Oracle BPEL Process Manager Administrator's Guide
Oracle Adapters for Files, FTP, Databases, and Enterprise Messaging User's Guide
Oracle Application Server Adapter Concepts
Oracle Application Server Adapter for Oracle Applications User's Guide
Printed documentation is available for sale in the Oracle Store at
http://oraclestore.oracle.com/
To download free release notes, installation documentation, white papers, or other collateral, visit the Oracle Technology Network (OTN). You must register online before using OTN; registration is free and can be done at
http://www.oracle.com/technology/membership/
To download Oracle BPEL Process Manager documentation, technical notes, or other collateral, visit the Oracle BPEL Process Manager site at Oracle Technology Network (OTN):
http://www.oracle.com/technology/bpel/
If you already have a username and password for OTN, then you can go directly to the documentation section of the OTN Web site at
See the Business Process Execution Language for Web Services Specification, available at the following URL:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbizspec/html/bpel1-1.asp
See the XML Path Language (XPath) Specification, available at the following URL:
http://www.w3.org/TR/1999/REC-xpath-19991116
See the Web Services Description Language (WSDL) 1.1 Specification, available at the following URL:
http://www.w3.org/TR/wsdl | 计算机 |
posted by Nicholas Blachford on Thu 11th Mar 2004 19:36 UTC "Future of computing, Page 3/3"
Now, Back to Planet Earth for the Conclusion and a New Platform...
In the future computers as we know them today will become playthings for geeks, just like steam locomotives or vintage cars are today. Computing however will surround us, in Phones in TVs and in many other areas, we may not recognise it though. There will still be workstations, the descendants of today's desktop PCs. The alternative computer manufacturers may become the only ones left serving those who want the "real thing".
Before we get to that stage an awful lot is going to happen.
In the short term the current manufacturers and technology leaders will continue to duke it out for a bigger share of the market. Wireless will become the norm but I don't expect it to have a smooth ride, people are paranoid enough about mobile phones damaging health and expect more powerful devices to cause more controversy. RFID tags will make it into everything but I can see sales of amateur radios going up as the more paranoid try to fry them. Longer term technology trends mean what we know as computers today will change over time into things we may not even consider as computers. Technology will get faster and better, easier to use, smaller and eventually as Moore's law slows down it'll last longer as well.
How we build technology will change and an infant industry will become another part of society subject to it's rules and regulations, it'll never be the same as the last 30 years, but this will be a gradual process. That's not to say innovation will die, there's a lot of technologies not yet our desktops which have yet to play their part. We have seen radical platforms arrive in the 70s and 80s and evolution playing it's part in the 90s and 00s. I think radicalism will return to the desktop but I don't know who will have the resources or for that fact, the balls to do it.
The Next Platform
The following is a description of a fictional platform. Like the radical platforms in the 80s it's based on the combination of mostly existing technologies into something better than anything that has gone before. I don't know if anyone will build it but I expect any radical new platform which comes along will incorporate at least some of the features from it:
It'll have a CPU, GPU and some smaller special purpose devices (start with what works first).
It'll have a highly stable system in the bottom OS layers and at the top it'll be advanced and user friendly (for the software as well).
The GPU will be more general purpose so it'll speed things up amazingly when programmed properly (Perhaps a Cell processor?).
There'll be an FPGA and a collection of libraries so you can use them to boost performance of some intensive operations onto another planet.
It'll run a hardware virtualising layer so you can run multiple *platforms* simultaneously.
It'll run anything you want as it'll include multiple CPU emulators and will to all intents and purposes be ISA agnostic.
The CPU will have message passing, process caching and switching in hardware so there'll be no performance loss from the micro-kernel OS, in fact these features may mean macro-kernels will be slower.
The GUI will be 3D but work in 2D as well, It'll be ready for volumetric displays when they become affordable. When they do expect to see a lot of people looking very silly as they wave their hands in the air, the mouse will then become obsolete.
It'll be really easy to program.
It will include a phone and you will be able to fit it into your pocket (though maybe not in the first version).
And Finally...
That is my view of the Future of Computing and the possibilities it will bring. I don't expect I've been right in everything but no one trying to predict the future ever is. I guess we'll find out some day.
I hope you've enjoyed reading my thoughts.
Thanks to the people who wrote comments and sent me e-mails, there were some very good comments and interesting links to follow.
So, I've enjoyed my stint as an anti-historian. What do you expect will happen? Maybe your predictions of the future are completely different, why not write them down, I look forward to reading them.
[1] A page on AI in Science fiction (Warning: may require sunglasses).
http://www.aaai.org/AITopics/html/scifi.html
[2] Some of Roger Penrose's thoughts on AI.
http://plus.maths.org/issue18/features/penrose/
[3] AI the Movie.
http://www.indelibleinc.com/kubrick/films/ai/
[4] Yours truly, 2095 from the ELO album "Time". By Jeff Lynne
http://janbraum.unas.cz/elo/ELO/diskografie/Time.htm
[5] H G Wells' The War of the Worlds
http://www.bartleby.com/1002/
Jeff Wayne's musical version of the story is one of my favourite CDs:
http://www.waroftheworldsonline.com/musical.htm
Copyright (c) Nicholas Blachford March 2004
This series is about the future and as such is nothing more than informed speculation on my part. I suggest future possibilities and actions which companies may take but this does not mean that they will take them or are even considering them.Table of contents
"Future of computing, Page 1/3"
OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods | 计算机 |
M-I-C K-E-Y M-O-U-S-E
HD version of Castle of Illusion
Nostalgia is a peculiar thing. Reconnecting with old games from our childhood sounds refreshing, even more so with the promise of updated graphics. But what was once held in high regard can be soured when revisited even when it has a fresh coat of paint. And while there can be exceptions to the rule, more often than not, these classic games suffer from being a product of their time. It is best to keep the rose-tinted glasses on rather than witnessing the flaws of a beloved classic.
For me, quarter-guzzling arcade games are the hardest to replay. Shortly after purchasing an Xbox 360, I was ecstatic to hear about the release of Teenage Mutant Ninja Turtles: The Arcade Game on XBLA. The game had a soft spot in my heart because during school field trips to our local skating rink, this was one of the two games I spent all of my money playing since I didn’t know how to skate. I was incapable of finishing the game before running out of cash, so the ability to experience it again was thrilling. And experience it I did, but with less than remarkable results.
These same results arose during my time with subsequent XBLA/PSN games such as The Simpsons, an arcade game I loved playing in Wal-Mart as my mother was in the checkout line, and X-Men; another game that swallowed my quarters at Skateland, respectively. I realized that, without that fear of running out of lives, these games were not the same. That threat of losing all progress because I no longer had funds to feed the machine drove me as a child to an unhealthy obsession with these games. I had no reason to love these games because I could never get past level two or three, even with friends. Such games were developed with the mindset to rob you of as much coins as they could. And they had me hook line and sinker. As terrible as they were, these games were mysteries to many people simply because we could never finish them. Yet here they were, in all of their glory, with unlimited lives. The challenge was gone, and so was the love for these once cherished arcade games.
Over the past few years, I’ve noticed a rising trend of releasing old arcade and Nintendo Entertainment System (NES) games with a stunning HD remake. My first foray into these games began with TMNT Turtles in Time Reshelled. But the drawback wasn’t just in the gameplay itself, it was also the updated graphics. The “reshelling” of the game gave it a beautiful radiant look that really stood out, but it also removed the charm that the old style presented. And with no ability to switch between graphics, the nostalgic factor was slightly dampened and had to be reinforced with the simplistic controls and multiplayer, which unfortunately for the latter, wasn’t well received as players couldn’t jump in and play but instead have to start at the very beginning.
There have been instances where HD games included extra content aside from better graphics. While not an arcade or NES game, Earthworm Jim HD surprised players with new levels, a new boss fight, and a multiplayer mode. I note this because the rest of the game was poorly regarded due to the advancements made in platformers since the game was released. Instead, the positivity from this game came mainly out of the cooperative multiplayer and not the game itself that people remembered it being. But Earthworm Jim HD did something that most remakes should do, and that’s not just rely on nostalgia, but also give added incentive to pick it up with extra inclusions such as new levels and a multiplayer mode.
DuckTales Remastered is the most recent example where the finished product may not live up to expectations. Despite its mediocre-to-positive reviews from a number of people and sites, there was a sense of being underwhelmed. Like other remake examples, the game looks amazing but struggles to capture those who don't mind the difficulty and outdated mechanics. But what is interesting is the adoration this game received when it was originally announced. Since its reveal back in March, fans have been craving this game. The craze grew with every bit of news that emerged: new music, returning voice actors, and new areas. The game was being set up to be disappointing due to the expectation behind it. And while the scores aren't horrible, they certainly don't match the hype that the game received.
This begs the question: does anything hurt the scores and sentiments towards these remakes more than hype? Certainly. As mentioned, most of these games face the uphill battle of surviving time. What worked over 20 years ago has evolved into something better, and gamers expect more. When a game is remastered, "HD-ified", or remade, there is a sense that the game should play better than it did in its original version. Some games have gone that route, most notably Bionic Commando Rearmed, which received some of the highest praise for an NES remake. Price factors into the enjoyment as well. Spending $10-$15 can be a bit much to play a 20-year-old game with minimal improvements. It's worse if the game is only 45 minutes to an hour long.
There isn’t anything wrong with wanting to see old games. They have their problems, but they also serve as a benchmark of what video games were like back when they were first released. The dissonance comes when they try to update the graphics to appear more current while leaving the gameplay as a broken component of a lesser realized time. One of the best things about PS1 classics and Virtual Console is that the customer knows what they are buying. They aren’t buying the promise of a better game that fulfills the same nostalgia; they are buying the exact product that created the nostalgia. There’s an understanding in place that the game you buy is old. It’s going to play old. And in turn, you may feel old playing it.
With Castle of Illusion Starring Mickey Mouse set as the next remake, anticipation needs to be realistic. People can’t play it with the same high hopes that they have with every new iteration of an old game. Like other aged titles, it will look great and play poorly. It will satisfy nostalgia while also making them question the validity of it. At some point down the road, the same discussion can be had with Nintendo 64 and PlayStation remakes (if they can’t already in some cases). Let this article be a reminder that it’s hard to go back to these games, but only in the sense of them “trying” to be new.
Author: Josh Miller. Josh spends most of his time working and being a fine husband and father. Unfortunately for his son, Josh is going to force as much geekdom down his throat as humanly possible, including 90's cartoons, comic books, and of course, his love of gaming.
2014-23/0755/en_head.json.gz/13797 | Published on Linux DevCenter (http://www.linuxdevcenter.com/)
What Is the Linux Desktop
by Jono Bacon, author of Linux Desktop Hacks
Linux Desktop
The Linux desktop is a graphical interface to the open source Linux operating system. Many distributions, such as Ubuntu, RedHat, SuSE, and Mandriva, include Linux desktop software. The desktop itself comes in two primary forms, KDE and GNOME, and there is a range of desktop applications for different tasks, such as productivity, graphics, multimedia, and development.
It's All About the Applications
What Can You Do with It?
The Ups and Downs
Refinements and Improvements
An Exciting Future
Ever since the invention of Linux and the appearance of open source, potential uses for the software have fallen into three broad categories: server, embedded, and desktop. With the widespread success of Linux in both the server room and on devices such as the Nokia 770, Zaurus, and Motorola phones, the desktop is the last remaining battlefield. Though many deem its success as inevitable, what is the Linux desktop, why should we use it, and why on earth should you care?
The story of the Linux desktop began many moons before Linux itself was invented. Back in the early '80s, the efforts by Richard Stallman to create free implementations of Unix applications under his GNU project were largely intended for rather mundane but important tools such as compilers, editors, and languages. At the time, most people did not use a graphical interface; they were simply too expensive. In 1984, boffins at MIT invented the X Window System, a framework for drawing graphics on different types of computer. Released later in 1987, X has provided an industry standard for creating graphical applications. Luckily, the free implementation of X, Xfree86, was available to support the growing free software community. XFree86 has since evolved into X.org, a cutting-edge implementation of X.
As Linux burst onto the scene and Linux distributions matured, an increasing importance was placed in the graphical interface. Early window managers such as mwm, twm, and fvwm were rather primitive and simplistic for the then-current state of the art. To provide a more complete environment, Matthias Ettrich started work on KDE, a project to create a complete desktop environment. Based on the Qt toolkit, KDE matured at quite a pace and soon provided a compelling, attractive graphical interface that many distributions shipped as the default environment. Despite the engineering success of KDE, licensing concerns about Qt (a toolkit that was not considered entirely free software) drove a number of concerned developers to create a competing desktop called GNOME. Although it was pedal to the metal for KDE and GNOME, a good desktop is more than just the environment itself; it is all about applications, applications, applications.
Linux Desktop Hacks
Tips & Tools for Customizing and Optimizing your OS
By Nicholas Petreley, Jono Bacon
Traditionally, the Linux platform was more than capable when it came to development or academic processing, but there was something of a barren wasteland of desktop applications for common needs. With KDE and GNOME providing an insight into what could happen with the open source desktop, more and more work went into creating these kinds of applications. In addition to open source efforts, some companies made concerted efforts to solve the applications problem. One of the biggest events at the time was Netscape open sourcing its Communicator suite. With Netscape Navigator as the most feature-complete browser available for Linux, this move cemented confidence in the burgeoning platform. Another major event was the open sourcing of the StarOffice office suite when Sun purchased StarDivision. StarOffice had existed for a number of years on Linux, but the suite had become rather bloated and lost. These two applications would later become Firefox and OpenOffice.org, two of the most popular open source products.
As the desktop has continued to develop, more and more support has evolved from commercial organizations. With support from major hardware manufacturers, support organizations, training companies, and application vendors, desktop Linux is edging closer to everyone' | 计算机 |
2014-23/0756/en_head.json.gz/1419 Workshop on the Application of the concept of Historic Urban Landscapes in the African context
More than half of the Earth's population now lives in an urban area, and over the past three decades, due to the sharp increase in the world's urban population, historic cities have become subject to new threats. In order to address conservation and planning issues of historic cities, the Recommendation on Historic Urban Landscapes is being prepared for possible adoption at the 36th session of the General Conference of UNESCO in 2011.
Within this framework, the overall objective of the planned workshop in Zanzibar is to consider the African context in the UNESCO Recommendation on Historic Urban Landscapes. This would be the first regional expert meeting on the subject in Africa. More specifically, the workshop should aim at:
Identifying and understanding the situation and the challenges of the World Heritage designated cities in the African Region;
Enhancing the forthcoming UNESCO Recommendation on Historic Urban Landscape through consultation with experts and managers from African cities of urban heritage value;
Developing strategies to facilitate management of the Historic Urban Landscape in the African Region.
The participants of this workshop will share definitions, challenges and tools concerning Historic Urban Landscapes in the African context, and the discussed policies and technical issues will be summarized into a set of Recommendations.
The workshop is organized by the UNESCO World Heritage Centre together with the Stone Town Conservation and Development Authority (STCDA), Ministry of Water, Construction, Energy and Land, Zanzibar, United Republic of Tanzania. Financial support is provided by the Government of the Netherlands.
Mr Issa S. Makarani, Director-General of STCDA, Zanzibar ([email protected])
Mr. Muhammad Juma Muhammad ([email protected])
When: November 2009
Where: Stone Town of Zanzibar, Tanzania
World Heritage property: Stone Town of Zanzibar
Contact: Junko Okahashi
2014-23/0756/en_head.json.gz/1430 | 1270.0.55.001 - Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas, July 2011 Latest ISSUE Released at 11:30 AM (CANBERRA TIME) 23/12/2010 First Issue
PREFACE
This publication is the first volume of a series detailing the new Australian Statistical Geography Standard (ASGS). It deals with the ASGS Main Structure (Statistical Area Levels 1 - 4) and the Greater Capital City Statistical Areas. The ASGS brings all the regions for which the ABS publishes statistics within the one framework and will be used by the ABS for the collection and dissemination of geographically classified statistics from 1 July 2011. It is the framework for understanding and interpreting the geographical context of statistics published by the ABS. The ABS also encourages the use of the ASGS by other organisations to improve the comparability and usefulness of statistics generally.
While there are superficial similarities between the ASGS and the Australian Standard Geographical Classification (ASGC), it is important to recognise that the two are fundamentally different and there are significant differences between their respective regions, both in their geographical extent and their conceptual foundation. As a whole, the ASGS represents a more comprehensive, flexible and consistent way of defining Australia's statistical geography than the ASGC. For further information to assist you to move from the ASGC to the ASGS please refer to the ABS website at http://www.abs.gov.au/geography.
The ASGS will be progressively introduced through the various ABS collections. It will replace the ASGC as the main geographical framework for the 2011 Census of Population and Housing, although data on Statistical Local Areas (SLAs) and those regions aggregated from SLAs will still be available for 2011. All ABS collections should be reporting on ASGS units by 2013.
Future volumes will detail the: Indigenous Structure, Non-ABS Geographies (including Local Government Areas), Urban Centres and Localities/Section of State and Remoteness Areas. The digital boundaries, maps, codes and labels for the regions described in this volume are available free of charge from the Australian Bureau of Statistics (ABS) website at http://www.abs.gov.au/geography.
Any enquiries regarding the ASGS, or suggestions for its improvement, can be made by emailing [email protected].
Brian Pink
Australian Statistician
PURPOSE OF THIS PUBLICATION
The purpose of this publication is to outline the conceptual basis of the ASGS Main Structure and the Greater Capital City Statistical Areas (GCCSAs) and their relationships to each other. The digital boundaries, maps, codes and labels for each of these regions are defined and can be obtained from the ABS website free of charge at http://www.abs.gov.au/geography.
This publication is the first in a series of volumes that will detail the various structures and regions of the ASGS. For more detail, please refer to ASGS Related Material and Release Timetable.
PURPOSE OF THE ASGS
The main purpose of the ASGS is for disseminating geographically classified statistics. It provides a common framework of statistical geography which enables the publication of statistics that are comparable and spatially integrated.
When the ASGS is fully implemented within the ABS, statistical units such as households and businesses will be assigned to a Mesh Block. Data collected from these statistical units will then be compiled into ASGS defined geographic regions which, subject to confidentiality restrictions, will be available for publication. | 计算机 |
2014-23/0756/en_head.json.gz/7198 1386.0 - What's New in Regional Statistics, June 2011. Released at 11:30 AM (CANBERRA TIME) 20/06/2011
Contents: Welcome from the Director; 2011 Census of Population and Housing; The New Statistical Geography: Australian Statistical Geography Standard (ASGS); The Changing Face of Wage and Salary Earners on Melbourne's Outskirts; Economy; Population and People; Industry; Environment and Energy; Other News and Contacts; About this Release

THE NEW STATISTICAL GEOGRAPHY - AUSTRALIAN STATISTICAL GEOGRAPHY STANDARD (ASGS)
Why is the ASGS being introduced?
Regions of the ASGS
What impact will the change have on time series?
What will ABS do to support time series?
Release of the ASGS
From July 2011 the ABS will progressively replace the current Australian Standard Geographical Classification (ASGC) with the new Australian Statistical Geography Standard (ASGS). The ASGS will be used for the 2011 Census of Population and Housing and progressively from 2011 in other series.
The ASGS is being introduced as the new statistical geography as it addresses some of the shortcomings of the ASGC, in that:
- it brings all the geographic regions used by the ABS into the one framework
- it is more stable: the ABS structures will remain stable between Censuses, unlike the ASGC regions, which were reviewed annually
- the regions at each level of the ASGS ABS structures are more consistent in population size
- the regions at each level of the ASGS ABS structures are optimised for the statistical data to be released for them
- the Main Structure Statistical Area (SA) units reflect gazetted localities based on the idea of a functional area, which will result in more meaningful regions
- it is based on Mesh Blocks and can therefore support more accurate statistics for a range of commonly used administrative regions such as Postcodes and electoral divisions
REGIONS OF THE ASGS
The ASGS brings all the regions used by the ABS to output data under the one umbrella. They are divided into two broad categories:
1. ABS structures, those regions which are defined and maintained by the ABS. 2. Non-ABS structures, those regions defined and maintained by other organisations, but for which the ABS releases data.
The ABS structures are a hierarchy of regions specifically developed for the release of particular ABS statistics described below.
ABS Regions
Mesh Blocks are the smallest area geographical region. There are 347,627 covering the whole of Australia. They broadly identify land use such as: residential, commercial, agriculture and parks etc. Residential and agricultural Mesh Blocks usually contain 30 to 60 households. Mesh Blocks are the building block for all the larger regions of the ASGS. Only limited Census data, total population and dwelling counts will be released at the mesh block level.
Statistical Areas Level 1 (SA1s) will be the smallest region for which a wide range of Census data will be released. They will have an average population of about 400. They will be built from whole Mesh Blocks. There are 54,805 covering the whole of Australia.
Statistical Areas Level 2 (SA2s) will have an average population of about 10,000, with a minimum population of 3,000 and a maximum of 25,000. The SA2s are the regions for which the majority of ABS sub-state non-census data, for example Estimated Resident Population, will be released. There are 2,214 SA2s, built from whole SA1s.
Statistical Areas Level 3 (SA3s) are a medium sized region with a population of 30,000 to 130,000. They represent the functional areas of regional cities and large urban transport and service hubs. They will be built from whole SA2s. Statistical Areas Level 4 (SA4s) will be used for the release of Labour Force Statistics.
Urban Centres/Localities, Section of State and Remoteness Areas will define the built up area of Australia's towns and cities and will be broadly comparable to previous Censuses. Greater Capital City Statistical Areas (GCCSA) define the Capital Cities and their socio-economic extent.
Significant Urban Areas will define the major cities and towns of Australia with a population over 10,000. This includes both the built up area, any likely medium term expansion and immediately associated peri-urban development.
Indigenous Regions, Areas and Localities are designed for the presentation of Indigenous data. At the Indigenous Locality level it is possible to identify data on particular Indigenous Communities.
The diagram below summarises the overall ABS structures of the ASGS -
Non-ABS Regions
Non-ABS structures will be approximated or built directly from Mesh Blocks, SA1s or SA2s. The Non-ABS structures include such important regions as: Local Government Areas (LGAs), postal areas, state gazetted suburbs and electoral divisions. LGAs remain part of the ASGS and the ABS will continue to support LGAs with the data it currently provides.
The diagram below summarises the overall non-ABS structures of the ASGS -
WHAT IMPACT WILL THE CHANGE HAVE ON TIME SERIES?

While the new regions will give a better platform for the analysis of time series into the future, the changeover to the ASGS will cause a break in time series for Census Collection Districts, Statistical Local Areas, Statistical Subdivisions, Statistical Divisions and Labour Force regions.
It will have a significant impact on data for 'capital cities'. In the ASGC, Capital City Statistical Divisions (SD) were used as the boundaries for capital cities. There is no equivalent region to SD in the ASGS; however, the capital cities will be defined by the new GCCSA, which represents the cities' socio-economic extent. A detailed discussion of the new design of capital cities can be found in Australian Statistical Geography Standard: Design of the Statistical Areas Level 4, Capital Cities and Statistical Areas Level 3, May 2010 (cat. no. 1216.0.55.003).
The change to ASGS will have some impact on Remoteness Areas, Urban Centres and Localities and the Indigenous Region Structure. The impact should not seriously affect comparability of data over time, but users undertaking detailed analysis of this data need to be aware of the change.
It will have very little impact on data held at the LGA level or other non-ABS structures, as these will be approximated by aggregating whole Mesh Blocks.

WHAT WILL THE ABS DO TO SUPPORT TIME SERIES?
The ABS will support time series in several ways:
A time series of population estimates will be available on the new geography. The length of the time series will depend on the geographic level and the type of estimate. For more information on sub-state population estimates on the ASGS see Regional Population Growth, Australia, 2009-10 (cat. no. 3218.0).
2006 and 2001 Census data will be available on SA2s, SA3s, SA4s, SLAs and LGAs in the Time Series Profiles along with 2011 data. Plans for Census data output are outlined in Census of Population and Housing: Outcomes from the 2011 Census Output Geography Discussion Paper, 2011 (cat. no. 2911.0.55.003).
Building Approvals, Australia, Apr 2011 (cat. no. 8731.0) includes a feature article with information on the implementation of the ASGS for Building Approvals statistics.
Information about the release of other ABS statistics will progressively become available.
Advice on the ASGS and its impact is available.
The ABS will provide a variety of correspondence files (used to transform data from one geography to another). For more information on these please go to the ABS Geography Portal.
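The mechanics of applying a correspondence file are straightforward: each record states what share of a region in the old geography should be allocated to a region in the new geography, and data are apportioned using those shares and then re-aggregated. The sketch below illustrates the idea only; the column names, region codes and ratios are invented for the example and do not reflect the layout of any actual ABS correspondence file.

```python
# Illustrative only: apportioning counts from an old geography (here "SLA")
# to a new one (here "SA2") using a correspondence table of allocation ratios.
# All codes, column names and ratios are hypothetical.
import pandas as pd

# Data published on the old geography: counts per SLA.
sla_data = pd.DataFrame({
    "SLA_CODE": ["10050", "10100", "10150"],
    "PERSONS":  [12000,   8500,    4300],
})

# Correspondence table: the share (RATIO) of each SLA allocated to each SA2.
# The ratios for any one SLA sum to 1.0.
correspondence = pd.DataFrame({
    "SLA_CODE": ["10050", "10050", "10100", "10150"],
    "SA2_CODE": ["101021007", "101021008", "101021008", "101021009"],
    "RATIO":    [0.6, 0.4, 1.0, 1.0],
})

# Apportion the SLA counts to SA2s, then sum to get SA2 totals.
merged = sla_data.merge(correspondence, on="SLA_CODE")
merged["PERSONS_SA2"] = merged["PERSONS"] * merged["RATIO"]
sa2_data = merged.groupby("SA2_CODE", as_index=False)["PERSONS_SA2"].sum()
print(sa2_data)
```

Note that data converted through allocation ratios in this way are approximations and will not be as accurate as data natively compiled on the new boundaries.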
RELEASE OF THE ASGS

The ABS published the ASGS manual with the boundaries, labels and codes for SAs 1-4 and Greater Capital City Statistical Areas in December 2010, see Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas, July 2011 (cat.no. 1270.0.55.001)
The Non-ABS structures will be released in mid 2011; this is to ensure that the Census is released on the most up to date boundaries available. Urban Centres and Localities, Section of State, Remoteness and Significant Urban Areas will be released in 2012 as they require an analysis of Census data to be developed. The regions defined in the ABS structures will not change until the next Census in 2016, although the Non-ABS structures will be updated annually.
The ASGS will come into effect on 1 July 2011.

FURTHER INFORMATION
For more information please follow the link to the ABS Geography Portal. If you have any questions regarding the ASGS please email [email protected]. | 计算机 |
2014-23/0756/en_head.json.gz/9291 | Couple fed up with broken Dell computer
October 8, 2012 10:06:26 PM PDT
Michael Finney SANTA ROSA, Calif. -- The last thing anyone wants after buying something is for it to break practically out of the box. Nothing is more frustrating than constant trips or phone calls to try to get something fixed, and that's how a Santa Rosa couple felt, until they called 7 On Your Side for help.

Kathleen Smith of Santa Rosa gestures to show us how big the crack was on her computer screen. She said, "It looked just like jagged pieces came out from all around the crack. There was no picture." She says the incident happened a month after she bought her Dell last year in May. "We closed it and the next morning we got up and opened it and it was cracked. It was not dropped. It was not mistreated, just broke," said Smith.

Dell was prompt in sending someone to her home to replace the whole screen and everything was fine until one month later when the computer would not boot up. She took it into Staples where she bought the computer, hoping to get it repaired. "It was just a big hassle for a new product, something I had just purchased," said Smith.

Those hassles continued a year later when her partner used the computer. "I turned the computer on. It started making these weird beeping noises that I've never heard before," said Charles Borg. Borg took it to a repair shop to get looked at, but was told it wasn't worth fixing. Frustrated, he packed up the computer and sent it back to Dell. "We went back and forth exchanging phone calls about a week and I basically had given up on getting this unit fixed," said Borg.

He also alerted us at 7 On Your Side about his troubles, so we called Dell and it jumped into action. "All of a sudden out of the blue I got a call from her telling me that unit, they were going to replace the unit," said Borg. And it did. Dell sent Smith and Borg a brand new computer. In an e-mailed statement, Dell told us, "We are sorry about Ms. Smith and Mr. Borg's frustrations and problems with their laptop and are glad to report that our customer engagement team did resolve the problem."

"I had just lost my job so I wasn't ready to go out and purchase a new computer. And I was very thankful it was replaced," said Smith. Dell agreed to replace the computer even though the couple did not have a warranty for the computer, and that was really good of them.
2014-23/0756/en_head.json.gz/9411 | Knowledge representation and reasoning
Knowledge representation and reasoning (KR) is the field of artificial intelligence (AI) devoted to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.
Examples of knowledge representation formalisms include semantic nets, Frames, Rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.
A classic example of how setting an appropriate formalism leads to new solutions is the adoption of Arabic over Roman numerals. Arabic numerals facilitate larger and more complex algebraic representations, thus influencing future knowledge representation.
Knowledge representation incorporates theories from psychology which look to understand how humans solve problems and represent knowledge. Early psychology researchers did not believe in a semantic basis for truth. For example, the psychological school of radical behaviorism which dominated US universities from the 1950s to the 1980s explicitly ruled out internal states as legitimate areas for scientific study or as legitimate causal contributors to human behavior.[1] Later theories on semantics support a language-based construction of meaning.
The earliest work in computerized knowledge representation was focused on general problem solvers such as the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon in 1959. These systems featured data structures for planning and decomposition. The system would begin with a goal. It would then decompose that goal into sub-goals and then set out to construct strategies that could accomplish each subgoal.
In these early days of AI, general search algorithms such as A* were also developed. However, the amorphous problem definitions for systems such as GPS meant that they worked only for very constrained toy domains (e.g. the "blocks world"). In order to tackle non-toy problems, AI researchers such as Ed Feigenbaum and Frederick Hayes-Roth realized that it was necessary to focus systems on more constrained problems.
It was the failure of these efforts that led to the cognitive revolution in psychology and to the phase of AI focused on knowledge representation that resulted in expert systems in the 1970s and 80s, production systems, frame languages, etc. Rather than general problem solvers, AI changed its focus to expert systems that could match human competence on a specific task, such as medical diagnosis.
Expert systems gave us the terminology still in use today where AI systems are divided into a Knowledge Base with facts about the world and rules and an inference engine that applies the rules to the knowledge base in order to answer questions and solve problems. In these early systems the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules.[2]
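The knowledge base and inference engine split can be made concrete with a small forward-chaining sketch. The facts and IF-THEN rules below are invented toy examples rather than anything from a real expert system, and production engines add much more (conflict resolution, explanation facilities, backward chaining), but the basic loop of firing rules against a fact base is the same.

```python
# Toy knowledge base: a set of facts plus IF-THEN rules.
facts = {"fever", "cough"}

# Each rule: (conditions that must all hold, fact to assert when they do).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly (forward chaining) until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['cough', 'fever', 'flu_suspected', 'recommend_rest']
```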
In addition to expert systems, other researchers developed the concept of Frame based languages in the mid 1980s. A frame is similar to an object class, it is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used on systems geared toward human interaction, e.g. understanding natural language and the social settings in which various default expectations such as ordering food in a restaurant narrow the search space and allow the system to choose appropriate responses to dynamic situations.
It wasn't long before the frame communities and the rule-based researchers realized that there was synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, slots (data values) with various constraints on possible values. Rules were good for representing and utilizing complex logic such as the process to make a medical diagnosis. Integrated systems were developed that combined Frames and Rules. One of the most powerful and well known was the 1983 Knowledge Engineering Environment (KEE) from Intellicorp. KEE had a complete rule engine with forward and backward chaining. It also had a complete frame based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than AI it was quickly embraced by AI researchers as well in environments such as KEE and in the operating systems for Lisp machines from Symbolics, Xerox, and Texas Instruments.[3]
The integration of Frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics spun off from various research projects. At the same time as this was occurring, there was another strain of research which was less commercially focused and was driven by mathematical logic and automated theorem proving. One of the most influential languages in this research was the KL-ONE language of the mid 80's. KL-ONE was a frame language that had a rigorous semantics, formal definitions for concepts such as an Is-A relation.[4] KL-ONE and languages that were influenced by it such as Loom had an automated reasoning engine that was based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, redefine a class to be a subclass or superclass of some other class that wasn't formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an Ontology).[5]
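A rough sense of what the classifier contributes can be given with a small sketch: if class definitions are stated formally (here, simply as slot constraints), subclass relations that were never declared can be inferred by checking subsumption. The definitions below are invented, and real description-logic classifiers in KL-ONE-style systems handle far richer constructs than plain value restrictions.

```python
# Toy class definitions: each class constrains some slots to a set of
# allowed fillers. These classes and slots are hypothetical examples.
definitions = {
    "Vehicle":     {"wheels": {2, 3, 4, 6}},
    "Car":         {"wheels": {4}, "engine": {"petrol", "diesel", "electric"}},
    "ElectricCar": {"wheels": {4}, "engine": {"electric"}},
}

def subsumes(general, specific):
    """True if every constraint of `general` is implied by `specific`."""
    for slot, allowed in general.items():
        if slot not in specific or not specific[slot] <= allowed:
            return False
    return True

# Let the "classifier" derive the hierarchy instead of declaring it by hand.
for a in definitions:
    for b in definitions:
        if a != b and subsumes(definitions[a], definitions[b]):
            print(f"{b} is classified under {a}")
# Car and ElectricCar fall under Vehicle, and ElectricCar falls under Car,
# even though no subclass links were ever stated explicitly.
```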
Another area of knowledge representation research was the problem of common sense reasoning. One of the first realizations from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent. Basic principles of common sense physics, causality, intentions, etc. An example is the Frame problem, that in an event driven logic there need to be axioms that state things maintain position from one moment to the next unless they are moved by some external force. In order to make a true artificial intelligence agent that can converse with humans using natural language and can process basic statements and questions about the world it is essential to represent this kind of knowledge. One of the most ambitious programs to tackle this problem was Doug Lenat's Cyc project. Cyc established its own Frame language and had large numbers of analysts document various areas of common sense reasoning in that language. The knowledge recorded in Cyc included common sense models of time, causality, physics, intentions, and many others.[6]
The starting point for knowledge representation is the knowledge representation hypothesis first formalized by Brian C. Smith in 1985:[7]
Any mechanically embodied intelligent process will be comprised of structural ingredients that a) we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and b) independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge.
Currently, one of the most active areas of knowledge representation research is the set of projects associated with the Semantic Web. The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large ontologies of concepts. Searching for a concept will be more effective than traditional text-only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. The automatic classification gives developers technology to provide order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. The classifier technology provides the ability to deal with the dynamic environment of the Internet.
Recent projects funded primarily by the Defense Advanced Research Projects Agency (DARPA) have integrated frame languages and classifiers with markup languages based on XML. The Resource Description Framework (RDF) provides the basic capability to define classes, subclasses, and properties of objects. The Web Ontology Language (OWL) provides additional levels of semantics and enables integration with classification engines.[8][9]
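The flavour of this triple-based modelling can be conveyed with a small hand-rolled sketch. The vocabulary below is invented, and a real system would use an RDF toolkit and a standards-based reasoner rather than this ad hoc closure, but the principle is the same: classes, subclass links, instances and properties are all just statements, and a little inference lets queries match more than was asserted.

```python
# Hypothetical (subject, predicate, object) statements in the spirit of RDF.
triples = {
    ("Camera",        "subClassOf",    "Device"),
    ("DigitalCamera", "subClassOf",    "Camera"),
    ("nikon_d90",     "type",          "DigitalCamera"),
    ("nikon_d90",     "hasResolution", "12MP"),
}

def superclasses(cls, triples):
    """All classes reachable from cls via subClassOf (transitive closure)."""
    result, frontier = set(), {cls}
    while frontier:
        step = {o for (s, p, o) in triples if p == "subClassOf" and s in frontier}
        frontier = step - result
        result |= step
    return result

def instances_of(cls, triples):
    """Instances typed as cls directly or as any subclass of cls."""
    return {s for (s, p, o) in triples
            if p == "type" and (o == cls or cls in superclasses(o, triples))}

print(instances_of("Device", triples))  # {'nikon_d90'}
```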
Knowledge-representation is the field of artificial intelligence that focuses on designing computer representations that capture information about the world that can be used to solve complex problems. The justification for knowledge representation is that conventional procedural code is not the best formalism to use to solve complex problems. Knowledge representation makes complex software easier to define and maintain than procedural code and can be used in expert systems.
For example, talking to experts in terms of business rules rather than code lessens the semantic gap between users and developers and makes development of complex systems more practical.
Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.[10]
A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is First Order Logic (FOL). There is no more powerful formalism than that used by mathematicians to define general propositions about the world. However, FOL has two drawbacks as a knowledge representation formalism: ease of use and practicality of implementation. First order logic can be intimidating even for many software developers. Languages which do not have the complete formal power of FOL can still provide close to the same expressive power with a user interface that is more practical for the average developer to understand. The issue of practicality of implementation is that FOL in some ways is too expressive. With FOL it is possible to create statements (e.g. quantification over infinite sets) that would cause a system to never terminate if it attempted to verify them.
Thus, a subset of FOL can be both easier to use and more practical to implement. This was a driving motivation behind rule-based expert systems. IF-THEN rules provide a subset of FOL but a very useful one that is also very intuitive. The history of most of the early AI knowledge representation formalisms; from databases to semantic nets to theorem provers and production systems can be viewed as various design decisions on whether to emphasize expressive power or computability and efficiency.[11]
In a key 1993 paper on the topic, Randall Davis of MIT outlined five distinct roles to analyze a knowledge representation framework:[12]
A knowledge representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation's fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
It is a medium of human expression, i.e., a language in which we say things about the world."
Knowledge representation and reasoning are a key enabling technology for the | 计算机 |
2014-23/0756/en_head.json.gz/10178 | Home > Risk Management
OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.

Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods | 计算机 |
2014-23/0756/en_head.json.gz/11657 | Crystal Space
Developer(s): Jorrit Tyberghein et al.
Stable release: 2.0 (July 3, 2012)
Type: 3D engine
License: GNU LGPL
Website: www.crystalspace3d.org
Crystal Space is a framework for developing 3D applications written in C++ by Jorrit Tyberghein and others. The first public release was on August 26, 1997.[1] It is typically used as a game engine but the framework is more general and can be used for any kind of 3D visualization. It is very portable and runs on Microsoft Windows, Linux, UNIX, and Mac OS X. It is also free software, licensed under GNU Lesser General Public License, and was SourceForge.net's Project of the Month for February 2003.[2]
Engine design
Crystal Space is programmed in object-oriented C++. It is built in a modular fashion from a number of more or less independent plugins. Client programs use the plugins, such as the OpenGL 3D renderer, by registering them via Crystal Space's Shared Class Facility (SCF).
Crystal Space has modules for 2D and 3D graphics, sound, collision detection and physics through ODE and Bullet.
OpenGL rendering
Supports hardware acceleration from all major card vendors
Allows use of shaders
Library of common shaders like normal mapping, parallax mapping and hardware skinning
Supports software rendering with limited features
Mesh objects:
Plugin-based mesh system
Triangle-based meshes with frame and bone animation support
Collision detection and dynamics:
ODE and Bullet dynamics
Simplified collision detection when full dynamic simulation is not needed
Free software portal
Game engine
^ Release history of Crystal Space from the Internet Archive. Old | 计算机 |
2014-23/0756/en_head.json.gz/12414 | Contact Advertise iXsystems Announces Acquisition of PC-BSD
Linked by Thom Holwerda on Tue 10th Oct 2006 15:14 UTC, submitted by Charles A Landemaine "iXsystems, an enterprise-class hardware solution provider, announced today its acquisition of PC-BSD, a rock solid UNIX operating system based on FreeBSD. PC-BSD is a fully functional desktop operating system running FreeBSD version 6, with a KDE desktop interface and graphical system installer. Its PBI system, developed exclusively for PC-BSD, lets users download and install their applications in a self-extracting and installing format. iXsystems' acquisition of PC-BSD will provide funding to the PC-BSD project to increase distribution of PC-BSD and develop future versions of PC-BSD. Development is currently underway for a version of PC-BSD that will allow for easy installation and operation on servers, workstations, and laptops."
RE[6]: Panic mode by Joe User on Wed 11th Oct 2006 00:02 UTC in reply to "RE[5]: Panic mode"
No... Let's put you in a cage because I feel like it. Maybe, probably not, then you will understand freedom. Are you too damn stupid to see that BSD software developers release their code under the BSD license *KNOWING* and *ACCEPTING* that their code may be closed by another company? What's wrong with Apple using FreeBSD source code and closing it? Absolutely NOTHING. If I were a software developer, I would release my code under the BSD license for the sake of freedom. I don't want to force anybody using my code to have to release source code if they make changes. Use my code, change it and release it open or closed, whichever. If you can't use open-source software for a closed-source application, for me it's close to useless. BSD is a philosophy: that you give and that you don't expect anything in return. I write for Wikipedia, and I don't expect anybody to cite their source. I wrote a number of articles on Wikipedia and I don't want anything in return for myself. I did it for people in general. This is the true meaning of giving: when you don't want anything in return for yourself.
2014-23/0756/en_head.json.gz/12792
All releases of Orchestra
Release Notes: This release delivers the first version of the new Web BPMN designer. It is now possible to design, change, and deploy processes directly from a Web browser using the popular BPMN 2.0 notation. The engine is still 100% standards-compliant, with support for WS-BPEL 2.0. This version also introduces Human Task features following the BPEL4People and WS-HumanTask standards. Finally, the Orchestra Console offers more ways to monitor and administer your processes, such as BPMN and BPEL graphical monitoring, a Reporting module, and Simulation possibilities.
Release Notes: This version ships a technology preview of the Web 2.0 BPMN process designer. The current version is already fully functional. The Web console now has a BPMN process view of the processes deployed using the designer, a BPEL process view and graphical monitoring, pagination of the lists proposed by the console, a new tab to manage pending messages and allow you to delete them, and a new tab to manage dead jobs and give the possibility to retry or delete them. Bug fixes and performance improvements on the engine have also been added.
Release Notes: Important performance improvements and bug corrections were made, especially on the clustering and versioning combined functionalities.
Release Notes: This version brings the possibility to manage process versioning.
Release Notes: The main new functionality of this version is the brand new Web Console. The console is available by default in the Tomcat packages, but it is also possible to download it standalone and connect it to an existing Orchestra engine. In addition to this console, some performance improvements have been added to the engine to support a high-workload environment.
Release Notes: Integration with Camel (http://camel.apache.org) was added. This opens a wide range of possibilities, like calling non-WS services (sending an email, a JMS message, calling a local Java class) directly from your process. It is now possible to use Orchestra in a cluster mode. This gives load balancing and fail-over capabilities. It is now possible to call sub-processes directly without going through the WS stack. Orchestra automatically detects if a call is related to another process, enhancing the performance of this type of call.
Release Notes: The main focus of this release is on the OSGi packaging. CXF and Axis are now integrated as OSGi bundles and the Tomcat packages have been modified to use the OSGi packaging. This provides a lot of flexibility in the choice of Web Service implementation you want to use. The other main feature added is related to the management of "dead" Jobs. There is now a default execution that kills the instances that have a dead job, but Orchestra also provides the possibility to interact with those jobs using the API.
Release Notes: This maintenance release does not provide any major enhancements, but is deployable on the Java EE Application Server JOnAS. It provides a few bugfixes, including a fix for management of waiting clients when the process ends with a fault, and provides major performance improvements and support for XSD importation.
Release Notes: In this version, you will be able to use the last major BPEL statement that was not yet supported. EventHandlers are now available, making Orchestra fully compatible with the BPEL 2.0 standard. To facilitate installation, a graphical installer is now available.
Release Notes: The main improvement in this version is the integration of a new Web Service framework. Orchestra is now available packaged with CXF. Aside from the small performance improvement compared to the Axis version, Orchestra-CXF has support for the WS-Addressing and WS-Reliable Messaging Web Services. There is also a new version of the Orchestra Console, as well as a preview demonstration of the future Web 2.0 Orchestra Designer. | 计算机 |
2014-23/0757/en_head.json.gz/672 IBM developerWorks
Understanding IBM Rational Automation Framework: Key concepts and history
This article introduces the basic concepts behind IBM Rational Automation Framework and documents the history of its releases.
David Brauneis ([email protected]), Senior Technical Staff Member, Rational Software Delivery, and Chief Architect, Middleware Automation , IBM
David Brauneis is a Senior Technical Staff Member for Rational Software Delivery and Chief Architect for the Middleware Automation strategy, including the Rational Automation Framework and the Advanced Middleware Configuration capabilities of IBM PureApplication System. He has been working for the last several years for Rational software in the area of software delivery automation, including technical leadership roles in the Rational Automation Framework and Rational Build Forge. Before joining Rational, David spent 8 years on the WebSphere Application Server team in a variety of architecture and development roles. He has 12 years of WebSphere Application Server development experience, more than 10 years of Java EE application development experience, and more than 15 years of Java development experience. David has been working for IBM in Research Triangle Park, North Carolina and Hoboken, New Jersey for 13 years on projects related to distributed computing, software builds, automation, user interfaces, and system administration. He holds a B.S. in Biomedical Engineering and a master's degree in Technical Communication from Rensselaer Polytechnic Institute.
Lewis Shiner, Technical Writer, IBM
Lewis Shiner is a contract technical writer in Raleigh, North Carolina. He works on Rational Automation Framework and Advanced Middleware Configuration.
Environment generation
Modes of operation
Release history | 计算机 |
2014-23/0757/en_head.json.gz/1292 | The Information Technology Agreement
Geneva, 15 September 2008
What is the ITA?
The ITA was negotiated and signed in 1996, with the goal of expanding trade in IT and telecommunication products. The ITA initially had 14 signatories representing more than 90% of world trade in information technology products. This has since grown to a total of 43 signatories, representing 70 countries or separate customs territories and more than 97% of world trade in IT products. ITA signatories have agreed to eliminate customs duties and other duties and charges on certain IT products, which are specified in Attachments A and B to the Agreement. The ITA entered into force in April 1997 and required that customs duties on covered products be reduced in stages and eventually eliminated by 1 January 2000. The commitments undertaken under the ITA are on a "most favoured nation" basis, which means that benefits must be extended to all WTO Members.

The global trading system has seen an unprecedented expansion of trade in IT products since the signature of the ITA. IT products now account for over USD 1500 billion of exports world-wide, i.e. one fifth of total world exports of manufactured products, up from USD 600 billion in 1996.
How is product coverage of the ITA defined?
The ITA product coverage is defined in attachments to the ITA. Attachment A defines products by tariff heading; Attachment B provides a positive list of specific products covered by the ITA.
Why does the EU want to update the ITA?
Despite the growing trade in IT products, a number of problems remain which the current agreement does not adequately address:
The product-based approach of the ITA has not been able to take account of changes in the industry. The ITA covers a list of specific IT products, which was agreed in 1996. IT technology, however, has continued to evolve and converge with entertainment, communication and other technologies, thereby creating a growing potential for specific IT products to fall outside the scope of the ITA. This systemic challenge to the ITA can only be addressed by revisiting the product coverage of the Agreement. Unfortunately, the mechanisms provided for in the ITA for this purpose failed to resolve this issue.
IT products continue to face non-tariff barriers, a real obstacle to market access in many countries. Examples of such "behind the border" barriers are overly burdensome practices to certify compliance with technical regulations, or the application of technical standards unnecessarily deviating from those set by internationally recognised standard-setting bodies. These imply costs equivalent to, or even greater than, customs duties, and effective disciplines to address them should become part of a revised ITA.
There are still some important actors in the manufacturing of and trade in
IT products which are not participants in the ITA. It would be commensurate with
the level of development of these countries and of the competitiveness of their
IT industries for those countries to become also members of the ITA. The
expansion of its membership should lead to a further growth of the global IT
market and of trade in IT products, for the benefits of producers – who
increasingly source their components world-wide - and consumers
alike.
The WTO dispute settlement cases launched by the US, Japan
and Chinese Taipei
On 28 May 2008 the United States and Japan requested WTO consultations with
the EC with respect to our tariff treatment of certain information technology
products. They were joined by Chinese Taipei on 12 June 2008. Consultations were
held in June and July but did not resolve the dispute. The complainants jointly
requested the establishment of a panel at the WTO Dispute Settlement Body
Meeting of 29 July 2008.
The European Commission has rejected claims that the EU is not fulfilling its
obligations under the ITA. Not only has the EU respected its ITA obligations,
but it has explicitly said it is willing to reassess the current ITA product
coverage to reflect new technology, in a negotiation with all ITA
signatories. A change in ITA scope can only be made on the basis of consensus amongst all
ITA participants, not as a result of litigation by some members. The ITA has a
review clause which can be invoked by members at any time. The complainants have
so far not demonstrated a willingness to do this.
The complaints identify classification issues with three products, namely
Flat Panel Displays, Set-Top Boxes and Multifunctional Copy Machines.
For more information on the WTO dispute, please see:
United States: http://trade.ec.europa.eu/wtodispute/show.cfm?id=408&code=2
Japan: http://trade.ec.europa.eu/wtodispute/show.cfm?id=409&code=2
Chinese Taipei: http://trade.ec.europa.eu/wtodispute/show.cfm?id=410&code=2
14 September 2012, 14:12
"Pre-loaded" PC malware leads to domain takeover
A US District Court has given Microsoft permission to take down the command and control servers and domains of over 500 strains of malware. The Eastern District of Virginia was asked by Microsoft's Digital Crimes Unit to allow them to disable these domains as part of "Operation b70", which has its roots in a study carried out by Microsoft in China.
Microsoft has found that new computers purchased by its employees in Chinese cities already had malware installed on them. In August 2011, the company began an investigation to see if there was any evidence to back up claims that counterfeit software and malware was being placed onto PCs in the supply chain in China and sent employees to buy ten desktop and ten laptop computers from "PC Malls" in various cities in China. Four of the computers were found to already have malware on them. As well as having malware which spread over USB flash drives on them, one of the four machines in particular attracted the researchers' attention because it was infected with the Nitol virus. Nitol installs a backdoor used for spam or DDoS attacks and the botnet it was connected to was hosted at 3322.org. Microsoft found that the hosting provider appeared to host around 500 different strains of malware on 70,000 sub-domains. This other malware, says Microsoft, included remote camera control and viewing backdoors and key loggers.
It appears that Microsoft didn't have any success approaching the hosting company and so it decided to apply to take over the domain through the courts and has now been given permission, through a temporary restraining order, to take over control of the 3322.org domain and block the operation of the Nitol botnet and the other malware. As there are legitimate subdomains of 3322.org, Microsoft is filtering access with the help of Nominum, and allowing traffic to them through while blocking access to malicious subdomains.
(djwm)
Enough with the freakin' trilogies already
By Mike Doolittle on October 14, 2008 - 1:45pm.
Last year when Crysis came out, I think all of us who played it were a little disappointed in the abrupt, cliffhanger ending. It felt like the ending of Halo 2, where you think you're about to get the biggest, baddest level of the game, and then the credits roll. Crytek's reason for such a lame ending? "It's a trilogy." What? Why didn't anybody say anything before? Are they sure they didn't just run out of time to put in all the levels they wanted?
Today, EA announced that Mirror's Edge will be the first part of a trilogy. What? The first one isn't even out yet. We don't know if it will be any good or if it will sell worth a spit. Need I remind everyone what happened with Too Human?
Ah yes, Too Human. Silicon Knights began development on the game sometime around the release of Space Invaders. Of course, they reminded everyone that it's just one part of what will be this big, epic trilogy. Then the game, after many, many years in development and great fanfare, garnered a whopping 69% on GameRankings.com. Guess a trilogy isn't sounding like such a good idea anymore.
And I would be remiss to neglect the news of Starcraft 2 being broken up into an episodic trilogy, possibly spaced out
Developing Development
Zac Hill | Friday, August 24, 2012
I've been at Wizards of the Coast for almost exactly three years now, and it's kind of mind-blowing to think about how many things have changed since I first showed up. In a comparatively short amount of time, we've made gigantic strides forward when it comes to process, organization, and ideology. Most of the stuff we talk about with you guys has to do with how we develop Magic sets, but we also spend a lot of time and energy "developing" the way we work internally. And that makes sense: after all, navigating all the hoops you have to jump through in order to put together a product as demanding and intricate as Magic is kind of like a game in and of itself. So I thought it might be cool to walk through a few of the ways we've shaken up our process over the last couple of years.
R&D: Research and... Design?
Perhaps the biggest change to hit R&D since I got here is that it's no longer R&D! Err... it's still R&D—but it's a different D! Maybe it should have like an asterisk or a little superscript 1 or something in the logo. I dunno. I'm sure our brilliant and capable graphic designers will get right on that.
Anyway, yeah: what was Research and Development is now Research and Design. It's a largely cosmetic change, to be sure, but I feel like it reflects a broader shift in philosophy that has a lot to do with why Magic has really taken off in recent years.
The thing is, development's job really isn't to tweak numbers and cost things. That's a part of what we do, and it's an important part, but what matters most is whether we're making Magic fun. In other words, we're not just trying to make sure stuff works—we're trying to make sure stuff works well. And the way we do that is to ensure the play experience is as good as it can possibly be. What that means is that development has started to spend more and more of its time on game-play design—to really ensure that the mechanics and themes and overall feel of a design file manage to express themselves satisfyingly when actual games are played.
In Innistrad, for example, a central pillar of the horror motif we were trying to get across involved suspense: tension, surprise, transformation, and excitement. Moreover, coming out of design we knew we wanted each tribe to actually feel like the trope it was supposed to represent—it wouldn't work, you know, if werewolves weren't both regular old dudes and giant ferocious monsters, or if zombies just kind of hung out by themselves and stayed put inside their graves when they died. The thing is, if development just sat around and re-jiggered some numbers, just playtested a bunch and found "bugs" (i.e., broken cards, overpowered strategies, dominant archetypes) and reported them, none of this feel would have ever come across. Design's specialty is to realize that, say, Vampires want to be an aggressive red-black tribe and some mechanics should exist that make you have to think about the graveyard. It's up to development to make sure that happens in-game—to encourage that aggression with the Slith ability, to have black and green care about death because of morbid, to have blue and red load up the graveyard to make use of flashback, etc., etc. They're two sides of the same coin. They're two similar kinds of game design, and I think the re-branding of "R&D" reflects an acknowledgment of that reality that only grows with time.
Call to the Kindred | Art by Jason A. Engle
Magic Digital
Different people have different answers about why Magic has been doing pretty well as of late. I like to think the most important reason is because we put a lot of passion and commitment into giving y'all something that's worth the time you invest in it, but it's also true that a big portion of that success involves breaking down the huge barriers to entry that come with figuring out how to play a game as complicated as ours. The most successful way we've managed to do that so far has been through the Duels of the Planeswalkers video game series.
When I first got here, the entire Duels team was made up of (a) interns or (b) people who could spare time from other projects. Fortunately, a lot of the people working on Duels were extremely committed to the project (in addition to being extremely talented in general), and our awesome partners over at Stainless Games really went the extra mile to make Duels extraordinary. So, despite being put together by a kind of skeleton crew, the game really managed to take off. We realized, though, that it was probably not the best idea to piece together a team for one of our most important projects out of whoever's calendar happened to have the most white space on it. Instead, we created an entirely new wing of the department: Magic Digital.
The Magic Digital R&D team shouldn't be confused with the Magic Online team, who is responsible for putting that game together. Instead, digital's job is to coordinate game design resources for our digital projects—so, doing things like building decks for Duels of the Planeswalkers or creating the list for the Magic Online Cube. Having this team is enabling us to design some really incredible media experiences for our upcoming paper releases, and also create editions of Duels that, in my opinion, are going to redefine what card games are capable of doing in the digital space.
None of this existed at all when I first showed up, but I really feel like some of the most exciting things in the pipeline are coming out of this team.
Team Spirit | Art by Terese Nielsen
We're All In This Together
Y'all have to have seen these at some point, I'm sure—Monty's done an Arcana, or they've surfaced in a Maro article, or something. R, 3. A blank white card with some text scribbled on it in Sharpie. 1U 3. W 2/1. Perhaps my favorite: UUBBBRR "Taste it!" The famous and/or infamous R&D playtesting proxies.
I've got to admit, these have a certain charm. In fact, I'm a holdout: I still can be found scribbling "1U 2/1 Flash LOLOLOLOL" onto white cardboard rectangles at my desk. But these cards have a massive problem: aside from the author, nobody has any idea what they do!
When you're dealing with super-enfranchised developer-y Magic players who play in the Future Future League regularly, such ambiguity isn't that big of a deal; we're all basically familiar with the card file, and we all basically know what everything does. The trouble comes when you try to play with anyone at all besides that. Maybe someone from Kaijudo had some spare time and designed a cool new deck. Maybe Aaron has decided to stroll by and annihilate us with Shrine of Burning Rage, just to remind us who's the boss. Maybe someone from another department wants to know how a certain set plays so they can do a better job marketing it, or talking it up to distributors, or designing eye-popping packaging, or crafting branded play experiences, or whatever. They literally cannot play, no matter how much they want to. And as fun as it is to watch someone's eyes pop as they try to parse nothing but the phrase "XWWW CAT FESTIVAL," it's not really conducive to getting work done.
Fortunately, Dave Guskin designed a super-sweet tool that takes a text decklist and converts it into printed stickers, replete with Oracle text for every card. All the considerate developers—yeah, everyone but me—now use that tool for most of their playtests, and it's managed to open up our process to the rest of the company on a frankly unparalleled scale.
Alter Reality | Art by Justin Sweet
Yeah, The Times, They Are A-Changin'
I'm doing design work now for "Huey," the fall set slated for release almost two years from now. The first set I contributed to was Rise of the Eldrazi, meaning that I've worked on six blocks' worth of sets in the span of just three years.
So it turns out that for a long time, we kind of just kicked a set over the wall when we finished it and told people, "So, yeah, you want to make some packaging for these cards now? Maybe, I don't know, try and market it a little bit? Yeah, that would be awesome. No, I am not really sure what's important, but I bet you can figure it out. I'm sure you'll do fine. Oh, you've got like four months. Thanks."
Strangely, we started getting some pushback from other departments on this. What would work a lot better for them, they told us, was (a) if we could start doing our work earlier, to give them a far more sane amount of time to do theirs, and (b) if they could get involved in the process earlier, so they could have the perspective necessary to implement bigger and more comprehensive projects (say, a campaign that spanned a whole block).
All of this was fine and dandy and totally reasonable, but it wasn't as simple as just shifting back the schedule by a couple months. For one thing, obviously, the work that needs to be done inside the window getting pushed back still needed to get done somehow. That's a surmountable problem, though. The bigger issue was that a huge amount of development work depends on results that are coming in from the outside world. If some card—say, I don't know, Bitterblossom—is tearing up the tournament scene, we both (a) want to make sure some answers exist for that strategy and (b) probably don't want the next big set's theme to involve producing 1/1 token creatures every turn.
The solution was to split development into two parts. In the first segment, we paint in very broad strokes and do a lot of the context-independent work of making a set, say, playable in Limited, or ensuring the file contains enough diversity that we can work with the art concepts should something need to change. For the second segment, then, we wait a while—that is, there's a gap to work on other sets while the real world produces a metagame. Then, we adapt to those changes as much as possible before polishing off the file for good.
Mitzy, our friendly Lobby Dragon
Bye, Bye, Miss American Pie
All of these, of course, represent just a few of the things that have changed since I first rode up the elevator to kick it with our resident dragon back in 2009. By and large, of course, things mostly remain the same. We still strive to make awesome Magic sets, we still work with awesome people, and we're still proud to have one of the most passionate, dedicated fan bases in the industry.
If I seem unusually reflective about my history at Wizards—after all, who gets all misty-eyed and nostalgic at the three year mark, of all times—it's because, well, I kind of am. This'll be the last Latest Developments you guys read from me. I recently accepted a position as the Director of Research and Development for The Future Project, a New York-based education nonprofit, and I'll be headed out that-a-way in early September. As sad as I'll be to leave this amazing environment and this amazing dream of a job, there's a lot to look forward to as well. My background is in politics and policy, actually—I worked for an organization called the Centre for Independent Journalism in Kuala Lumpur before coming to Wizards, and held a position in Memphis as an advisor to the mayor—and I'm kind of excited to be diving back into the "real world," so to speak. There's a kind of surreal time-bubble we close over our heads when we step into the office at 1600 Lind Ave SW, and it's good to be able to stop for a while and take in some of the fresh air. Moreover, I have some other stuff going on. I'm going to be working a few weekends a month as a research associate at the MIT-Singapore GAMBIT Game Lab, doing some pretty cool research into the critical theory of games. My debut short story collection comes out this October (with an audiobook slated for release later this month!) via the awesome guys at Monarch Press. And I'm currently deferring enrollment for my JD/MPP at Berkeley, whenever the Future Project gig ends. So if your hearts are just aching to get some more Zac Hill into your lives—I know, I know, I can hear the wailing from here at my desk—I won't be too hard to find.
All of that non-Magic business aside, I've been playing the game since I was eight years old, and it's not as if I can front about it like it won't be a major part of my life. I still intend to participate in the community. I still intend to write about the game. I'll still be working for Wizards in a minor role doing some external consultation on sets, and I'll still be in the booth as part of the coverage team at Pro Tours for the foreseeable future. Most importantly, even though I can't exactly jump right back into high-level competitive play for a while, I'll be able to sleeve up some decks and battle with all you guys for the first time in entirely too long. I miss that like you wouldn't believe.
As for Latest Developments, my spot'll be filled by a rotating core of developers—Billy Moreno, Dave Humpherys, Tom LaPille, Sam Stoddard, and maybe a few by Erik Lauer—talking about set-specific topics in more or less the same format you're used to. Trick'll tell y'all more about that later, I'm sure. In the meantime, I can't express how meaningful it's been to me, how life-affirmingly enriching, to have the opportunity to earn a living creating the game I love. I've been scribbling cards down for as long as I can remember, notebook after notebook, and there's no feeling quite like doing that one day and seeing a conference room full of people playing with it the next. I've been immeasurably blessed. It's been a hell of a ride.
483 projects tagged "Operating Systems"
CD-Based (114)
Floppy-Based (20)
Init (11)
DFSG (1)
MPLv1.1 (1)
deb (1)
SCons (1)
Conary (1)
i5/OS (1)
Linux Deepin
Linux Deepin is an easy-to-use Linux distribution. It aims to provide a beautiful and user-friendly experience for the average user. In the light of such philosophy, Linux Deepin pays more attention to detail and provides a better interactive experience. It features its own desktop environment, called DDE or Deepin Desktop Environment.
GPLv3, Linux Distributions, Operating Systems
Mnix
Mnix is a free, simple, and fast i686 GNU/Linux distribution, aimed at experienced users. It is a hybrid distribution; both precompiled packages and sources are supplied. The main focus is keep it simple and to be as Unix-like as possible, using "Free Software" only. Mnix is installed as a basic system with console-only tools, which forms the base which lets the user build a customized distribution (and even remaster it). Its main features are: Free Software only, BSD-style init scripts, only one shell (bash), a simple package manager (mtpkg), a ports-like repository structure called "mars" (the Mtpkg Applications Repository System), a simple filesystem hierarchy which adheres to the Unix philosophy, kernel-libre-only sources, a set of libraries and compilers for the most-used programming languages, and a complete set of shell packages (installable from the ISO) to set up in 30 minutes (more or less) a fully working console-based system. Mnix GNU/Linux is suitable for the somewhat-experienced user who prefers console admin tools to tweak the system, who prefers to compile packages with custom settings, and who wants to customize the kernel for his own system.
GPLv3, Operating Systems
uClibc++
uClibc++ is a C++ standard library targeted at the embedded systems/software market. As such, it may purposefully lack features you might normally expect to find in a full-fledged C++ standard library. The library focuses on space savings, as opposed to performance.
LGPL, Software Development, Libraries, Embedded Systems, Operating Systems
MeeGo is a Linux-based mobile and embedded operating system. It brings together the Moblin project, headed up by Intel, and Maemo, by Nokia, into a single open source activity. MeeGo currently targets platforms such as netbooks and entry-level desktops, handheld computing and communications devices, in-vehicle infotainment devices, connected TVs, and media phones. All of these platforms have common user requirements in communications, application, and Internet services in a portable or small form factor. The MeeGo project will continue to expand platform support as new features are incorporated and new form factors emerge in the market.
GPL, Linux Distributions, Qt, Embedded Systems, Mobile
Chakra Linux
Chakra Linux is a Linux distribution that combines the simplicity of Arch Linux with KDE. It is fast, user-friendly, and extremely powerful. It can be used as a live CD or installed to hard disk. Chakra is currently under heavy and active development. It features a graphical installer and automatic hardware configuration. Chakra provides a modular and tweaked package set of the KDE Software Compilation with a lot of useful additions. It features the concept of half-rolling releases and freshly cooked packages and bundles. It is designed for people who really want to learn something about Linux or don't want to deal with administrative overhead.
GPL, Desktop Environment, Linux Distributions, Operating Systems, KDE
Milos RTOS
Milos is a modular, portable, real-time operating system for embedded systems running on small microprocessors like the ARM Cortex M3.
GPLv2, Embedded Systems, Operating Systems, real-time, RTOS
Linux MangOeS
Linux MangOeS is a lifestyle operating system based on Enlightenment (e17) and openSuse 11.4. Its focus is to provide access to richer Internet content and computing needs primarily in rural and provincial areas of the Philippines.
GPLv3, Operating Systems, HTPC, Tablet, touchscreen
musl
musl is a new implementation of the standard library for Linux-based systems. It is lightweight, fast, simple, free, and strives to be correct in the sense of standards-conformance and safety. It includes a wrapper for building programs against musl in place of the system standard library (e.g. glibc), making it possible to immediately evaluate the library and build compact statically linked binaries with it.
MIT, Libraries, Embedded Systems, Operating Systems, software deployment
CloudLinux
CloudLinux is a Linux operating system designed to improve control in the shared hosting and data center arena, while simultaneously increasing control and stability, as well as improving overall performance. It employs kernel level technology called LVE to allow Web hosting companies to control QoS of each individual Web site as well as each section of the Web site. The technology can be used for any multi-tenant environment, where it is beneficial to control resource usage of individual tenant. With CloudLinux, hosting companies can make sure that a single site cannot slow down or take down other Web sites. The OS is interchangeable with CentOS.
GPLv2, Linux, Operating Systems, Hosting
Foresight Linux
Foresight Linux is a desktop operating system featuring an intuitive user interface and a showcase of the latest desktop software, giving users convenient and enjoyable access to their music, photos, videos, documents, and Internet resources. As a Linux distribution, Foresight sets itself apart by eliminating the need for the user to be familiar with Linux, combining a user-focused desktop environment on top of Conary. As the most technically innovative software management system available today, Conary ensures that users can efficiently search, install, and manage all the software on the Foresight system, including bringing in the latest features and fixes without waiting for a major release.
Freely distributable, Desktop Environment, GNOME, Linux Distributions, Software Distribution
A program that backs up and restores data.
Hashrat
A command-line or HTTP CGI hashing utility.
Understanding Microsoft Integration Technologies
Integrating Applications Directly
Integrating Applications through Queues
Integrating with Applications and Data on IBM Systems
Integrating Applications through a Broker
Integrating Data
There is no silver bullet for application integration. Different situations call for different solutions, each targeting a particular kind of problem. While a one-size-fits-all solution would be nice, the inherent diversity of integration challenges makes such a simplistic approach impossible. To address this broad set of problems, Microsoft has created several different integration technologies, each targeting a particular group of scenarios. Together, these technologies provide a comprehensive, unified, and complete integration solution.
The Microsoft integration technologies can be grouped into several categories:
Technologies for integrating applications directly, including Microsoft ASP.NET Web services (ASMX), Microsoft .NET Remoting, and Microsoft Enterprise Services. All of these will soon be subsumed by a unified foundation for service-oriented applications, code-named "Indigo."
Technologies for integrating applications through queues, including Microsoft Message Queuing (MSMQ) and SQL Service Broker. Both of these will also be usable through "Indigo."
Technologies for integrating Microsoft Windows applications with applications and data on IBM systems. This diverse set of solutions is provided by Microsoft Host Integration Server 2004.
Technologies for integrating applications through a broker, the approach taken by BizTalk Server 2006.
Technologies for integrating data, including Microsoft SQL Server Integration Services (SSIS) and SQL Server Replication.
Each of these technologies has its own distinct role to play in integrating applications. All of them also have important things in common, however. All can be used from a single development environment, Microsoft Visual Studio 2005, and all rely on a common foundation, the Microsoft .NET Framework. Given this, combining these technologies is straightforward, making it easier to solve complex integration challenges.
This article looks at these varied technologies, describing the scenarios for which each one is the best choice. The goal is to simplify the process of choosing the most appropriate Microsoft technology or technology combination for solving a particular problem.
The defining characteristic of application logic, the thing that most clearly distinguishes it from data, is that it is active–it does something. The simplest way to connect this active logic is to directly connect one part of an application to another using some kind of remote procedure call (RPC). .NET Framework applications today have three options for doing this: ASMX, .NET Remoting, and Enterprise Services. Once it's released, "Indigo" will subsume all three of these technologies, providing a common solution for this and other scenarios.
ASMX, .NET Remoting, and Enterprise Services
The communication technologies included in today's .NET Framework are well known. Still, it's worth briefly summarizing the role each one plays:
ASP.NET Web services (ASMX) enables Windows applications to communicate directly with applications running on Windows or other operating systems through Simple Object Access Protocol (SOAP), the foundation protocol for Web services.
.NET Remoting lets Windows applications communicate directly with other Windows applications using the traditional distributed objects approach.
Enterprise Services lets Windows applications communicate directly with other Windows applications, letting those applications use distributed transactions, object lifetime management, and other functions.
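To make the first of these options concrete, here is a minimal sketch of an ASMX service. The class name, namespace URI, and method are invented for illustration; a real service would be referenced from an .asmx page and expose whatever operations the application needs.

using System.Web.Services;

// Compiled into a Web application and exposed through an .asmx endpoint,
// this class becomes callable over SOAP from Windows or non-Windows clients.
[WebService(Namespace = "http://example.org/orders/")]
public class OrderStatusService : WebService
{
    // Each [WebMethod] becomes an operation in the generated WSDL.
    [WebMethod]
    public string GetOrderStatus(int orderId)
    {
        // Placeholder logic; a real implementation would query a data store.
        return orderId > 0 ? "Shipped" : "Unknown";
    }
}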
While all of these technologies are useful, why not have just one that can be used in all of these situations? As described next, this is exactly what "Indigo" provides.
"Indigo" is the code name for Microsoft's forthcoming extension to the .NET Framework for building service-oriented applications. Scheduled to be released in 2006, it will be available for the next release of Microsoft Windows Vista, formerly code-named "Longhorn," for Windows XP, and for Microsoft Windows Server 2003.
Describing Indigo
The agreement by all major vendors to standardize on Web services for application-to-application communication is a watershed in integration. Web services technologies, based on SOAP, provide a direct way to connect software running on platforms from multiple vendors. "Indigo" implements SOAP and the associated group of multi-vendor agreements known as the WS-* specifications. Other major vendors are implementing these same technologies, allowing reliable, secure, and transactional communication between applications running on diverse systems.
As the previous figure shows, applications built on "Indigo" can communicate directly with other "Indigo"-based applications or with applications built on other web services platforms, such as a J2EE application server. When communicating with non-"Indigo" applications, the wire protocol is standard text-based SOAP, perhaps with additions defined by one or more of the WS-* specifications. When communicating with other "Indigo" applications, the wire protocol can be an optimized binary version of SOAP. A single "Indigo" application can use both options simultaneously, allowing high performance for homogeneous partners together with cross-platform interoperability.
"Indigo" applications send and receive SOAP messages over one or more channels. An HTTP channel is provided, for instance, that allows communication according to the agreements defined by the Web services Interoperability (WS-I) Organization. "Indigo" also provides other channels, including channels that support MSMQ and SQL Server Broker. As described later in this article, this gives "Indigo" applications the benefits of queued messaging when they're communicating with other Windows applications.
When to Use Indigo
The primary integration scenarios for "Indigo" are:
Direct Web services communication between a Windows application and an application built on another Web services platform.
Direct communication between a Windows application and another Windows application.
Communication through message queues between a Windows application and another Windows application using "Indigo" over MSMQ or SQL Server Broker. "Indigo" provides a common programming interface for both queued and direct communication, and it also allows a single application to expose endpoints supporting both communication styles.
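As a rough sketch of what programming against these scenarios looks like, the following service contract and host use the attribute and binding names that shipped in the released product; the interface, addresses, and bindings are illustrative assumptions rather than anything prescribed by this article.

using System;
using System.ServiceModel;

// The contract is defined once and can be exposed over multiple channels.
[ServiceContract]
public interface IOrderStatus
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

public class OrderStatusService : IOrderStatus
{
    public string GetOrderStatus(int orderId)
    {
        return "Shipped";   // placeholder implementation
    }
}

public static class HostProgram
{
    public static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(OrderStatusService)))
        {
            // Interoperable text SOAP over HTTP for non-Windows callers...
            host.AddServiceEndpoint(typeof(IOrderStatus),
                new BasicHttpBinding(), "http://localhost:8000/orders");

            // ...and an optimized binary channel for other "Indigo" applications.
            host.AddServiceEndpoint(typeof(IOrderStatus),
                new NetTcpBinding(), "net.tcp://localhost:8001/orders");

            host.Open();
            Console.ReadLine();   // keep the host running
        }
    }
}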
Indigo and Other Integration Technologies
"Indigo" will subsume ASMX, .NET Remoting, and Enterprise Services, the earlier .NET Framework technologies for direct communication. (It's important to realize, however, that applications built with these earlier technologies will continue to work unchanged on systems with "Indigo" installed.) "Indigo" will also supersede System.Messaging, the .NET Framework's standard interface to MSMQ. How "Indigo" fits with queued communication is described in the next section.
Direct communication between parts of an application is simple to understand and straightforward to implement. It's not always the best solution, however. Rather than communicating directly with another application, it's often better to use queued messaging. This style of communication relies on the presence of one or more queues between the sender and receiver, with applications sending messages to and receiving messages from these queues.
Queued communication lets applications interact in a flexible, adaptable way. One major benefit of this approach is that the receiving application need not be ready to read a message from the sender at the time that message is sent. In fact, the receiver might not even be running when the message is sent. Instead, messages wait in a queue, usually stored on disk, until the receiver is ready to process them.
Microsoft Message Queuing
Microsoft Message Queuing (MSMQ) is the built-in technology in Windows for application-to-application communication using queued messaging. The current release, MSMQ 3.0, runs on Windows XP and Windows Server 2003, while earlier releases run on older versions of Windows. MSMQ is also available for mobile devices running Windows CE.
Describing MSMQ
As the following figure illustrates, MSMQ lets Windows applications communicate through message queues. The messages it sends can contain any type of information, and because they're sent asynchronously, the sender need not block waiting for a response. Using asynchronous messaging can be somewhat more complex for a developer than using RPC, but it's nonetheless the right solution in many cases.
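A minimal sketch of this pattern using System.Messaging, the .NET Framework interface to MSMQ discussed later in this article; the queue path and message text are invented, and error handling is omitted.

using System;
using System.Messaging;   // requires a reference to System.Messaging.dll

public static class OrderQueueSample
{
    private const string QueuePath = @".\Private$\NewOrders";   // hypothetical queue

    public static void Send()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            // The receiver does not need to be running when this is sent;
            // the message waits in the queue until it is read.
            queue.Send("Order 12345: 3 widgets", "NewOrder");
        }
    }

    public static void Receive()
    {
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            Message message = queue.Receive(TimeSpan.FromSeconds(10));
            Console.WriteLine((string)message.Body);
        }
    }
}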
When to Use MSMQ
The primary integration scenarios for MSMQ are:
When asynchronous communication is required between two or more Windows applications.
When the sender and receiver might not be running at the same time.
When message-level logging is required.
MSMQ and Other Integration Technologies
Once "Indigo" is released, new applications that today use MSMQ directly will typically access it through the "Indigo" programming model, as described later in this section. Also, the services provided by MSMQ are similar in some ways to those provided by SQL Server Broker. Choosing between the two requires understanding SQL Server Broker, which is described next.
SQL Service Broker
SQL Server Broker is a new communication technology provided as part of Microsoft SQL Server 2005. Scheduled to be released in late 2005, SQL Server 2005 will be available for Windows XP, Windows Server 2003, and Windows CE. SQL Server Broker is included with all versions of the product, including Express, Workgroup, Standard, and Enterprise.
Describing SQL Server Broker
Most enterprise applications use a database management system (DBMS) in some way. Some applications, such as those written as stored procedures, run within the DBMS itself. Others, such as .NET Framework applications that use ADO.NET, access the DBMS externally. If these applications need to communicate through message queues, why not rely on the DBMS itself to provide those queues? This is exactly what's done by SQL Server Broker. Rather than create a standalone queuing infrastructure, as MSMQ does, SQL Server Broker provides queued communication using SQL Server 2005. The following figure shows how this looks.
To let applications use its queues, SQL Server Broker adds several verbs to SQL Server's T-SQL language. These verbs allow applications to start a relationship called a conversation, then send and receive messages using that conversation. The SQL verbs defined by SQL Server Broker are intended to be accessed by software created in various ways, including:
Applications written as stored procedures in T-SQL.
Applications written as stored procedures in a language based on the Common Language Runtime (CLR), such as C#. This option relies on the SQL Server 2005 built-in support for the CLR.
By using SQL Server 2005 to persistently store queued messages, SQL Server Broker can provide efficient, full-featured communication, including high-performance transactional messaging, integrated backup and recovery mechanisms, and more.
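Since applications can also reach SQL Server Broker directly through ADO.NET, the following sketch sends one message that way. The service, contract, and message type names are hypothetical and would have to be created in the database beforehand (CREATE MESSAGE TYPE, CREATE CONTRACT, CREATE QUEUE, CREATE SERVICE).

using System.Data.SqlClient;

public static class OrderSubmitter
{
    public static void SendOrder()
    {
        // T-SQL batch using the conversation verbs SQL Server Broker adds.
        const string sendBatch = @"
DECLARE @dialog UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [//Orders/Submitter]        -- hypothetical initiating service
    TO SERVICE   '//Orders/Processor'        -- hypothetical target service
    ON CONTRACT  [//Orders/SubmitContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE [//Orders/NewOrder]
    (N'<order id=""12345"" quantity=""3"" />');";

        using (SqlConnection connection = new SqlConnection(
            "Data Source=.;Initial Catalog=OrdersDb;Integrated Security=SSPI"))
        using (SqlCommand command = new SqlCommand(sendBatch, connection))
        {
            connection.Open();
            // The message is stored durably inside SQL Server 2005 until the
            // target service reads it with RECEIVE.
            command.ExecuteNonQuery();
        }
    }
}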
When to Use SQL Server Broker
The primary integration scenarios for SQL Server Broker are:
Connecting logic built as stored procedures in one or more separate instances of SQL Server 2005.
Connecting logic built as a .NET Framework application using SQL Server 2005 with a stored procedure in the same or another instance of SQL Server 2005.
Before "Indigo" is available, some applications might use SQL Server Broker directly through ADO.NET. "Indigo" will become the core interface for applications using message queuing, however, and so most queued applications will use SQL Server Broker through an "Indigo" channel, as described later in this section.
Any organization that relies heavily on SQL Server 2005 for building applications, especially when those applications are implemented as stored procedures, will likely use SQL Server Broker for communication. And because it's part of SQL Server, SQL Server Broker allows users to have a single product to install, configure, and monitor, together with a single approach to failover, for both a DBMS and a queuing technology.
SQL Server Broker and Other Integration Technologies
The functionality of SQL Server Broker overlaps to some degree with what MSMQ provides. Both rely on queued messaging, and so either can be used when this style of communication is needed. Its integration with the database means that SQL Server Broker will normally be the preferred queuing option for database-oriented applications, but there are also cases where MSMQ is preferable. These include the following:
Communication between applications that need only in-memory queuing such as those that don't require messages to be stored persistently in transit. MSMQ supports memory-based queues, while SQL Server Broker does not.
Situations where the extra license cost of SQL Server 2005 isn't acceptable. Unlike SQL Server 2005, MSMQ is part of Windows, and so there's no extra charge for using it. Even though SQL Server Broker is included with the Express edition of SQL Server 2005, which doesn't require a paid license, every message sent through SQL Server Broker must traverse at least one licensed copy of SQL Server 2005.
Once "Indigo" is available, applications will typically access both MSMQ and SQL Server Broker through the "Indigo" programming model. How "Indigo" works with these queuing technologies is described next.
Indigo and Queued Communication
While "Indigo" is based on Web services, its channel architecture allows it to send SOAP messages over diverse protocols. For direct communication with non-Windows systems, "Indigo" will typically send SOAP over HTTP. For queued communication between Windows applications, "Indigo" will also be able to send SOAP messages over MSMQ and SQL Server Broker as the following figure shows.
Even though MSMQ and SQL Server Broker will be accessible through "Indigo", an application will still be able to access these queuing technologies directly. Here's how to make the choice:
"Indigo" supersedes System.Messaging, the .NET Framework's standard interface to MSMQ, and so most queued applications should use "Indigo" once it's available. A few MSMQ services aren't available through "Indigo", however. For example, "Indigo" applications that communicate through the MSMQ channel can't peek into a queue of received messages, can't use journaling or queued receipts, and aren't able to let the sending system specify the response queue that a reply message should be sent to. Applications that require these services should still access MSMQ directly using System.Messaging.
As with MSMQ, applications that access SQL Server Broker through "Indigo" will see only a subset of SQL Server Broker's capabilities. An application that needs access to all of SQL Server Broker's functions might choose to use SQL Server Broker directly.
Ultimately, the simplicity, flexibility, and ubiquity of the "Indigo" programming model will lead most developers to use it for the majority of queuing applications implemented outside the database.
Host Integration Server 2004
Host Integration Server 2004 is a set of technologies focused on connecting to applications and data on IBM mainframes and mid-range systems. The various components it includes run on a variety of systems, including Windows XP, Windows 2000, Windows Server 2003, and Windows 2000 Server.
Describing Host Integration Server 2004
Many enterprises have substantial investments in IBM mainframe and midrange systems. When new applications are written on Windows, integrating with existing applications and data on these older systems is often essential. Yet doing this can be challenging, since these environments support applications and store data in several different ways. Effectively linking Windows software to these existing IBM systems requires a variety of approaches.
As the following figure illustrates, Host Integration Server 2004 contains components that address these diverse requirements. Using various parts of the product, Windows software can access applications and data on IBM zSeries mainframes running z/OS, along with applications and data on IBM iSeries mid-range systems running OS/400. Host Integration Server 2004 also includes an MSMQ-MQSeries Bridge, allowing queued messaging between MSMQ and IBM's WebSphere MQ (formerly known as MQSeries).
When to Use Host Integration Server 2004
The primary integration scenarios for Host Integration Server 2004 are:
Connecting Windows systems to IBM zSeries mainframes and iSeries midrange systems using Systems Network Architecture (SNA) and other IBM communication technologies, including SNA over TCP/IP.
Integrating Windows security with IBM mainframe or midrange security systems, including IBM's Resource Access Control Facility (RACF) and Computer Associates' ACF2 and Top Secret.
Accessing existing Customer Information Control System (CICS) and IMS applications, either directly from .NET Framework applications using Host Integration Server 2004's Transaction Integrator or through Web services.
Creating Windows applications that access data stored on zSeries and iSeries systems, including VSAM data and relational data stored in DB2.
Connecting MSMQ to IBM's WebSphere MQ, allowing messages to be transferred between these two message queuing technologies.
Host Integration Server 2004 includes a broad range of functions. As its name suggests, however, all of them are focused on accessing application logic and data stored on IBM mainframe and midrange systems.
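As one concrete example of the data-access scenario above, a .NET Framework application can read DB2 tables through the OLE DB provider for DB2 that ships with Host Integration Server 2004. Every connection-string value below (server name, catalog, package collection, credentials) is a placeholder, and the exact keywords depend on how the provider is configured.

using System;
using System.Data.OleDb;

public static class MainframeCustomerReader
{
    public static void PrintCustomers()
    {
        // Illustrative connection string for the OLE DB Provider for DB2.
        string connectionString =
            "Provider=DB2OLEDB;" +
            "Network Transport Library=TCPIP;" +
            "Network Address=MAINFRAME01;" +          // placeholder host
            "Initial Catalog=DSN1;" +                 // placeholder DB2 catalog
            "Package Collection=MSPKG;" +
            "User ID=TSOUSER;Password=********;";

        using (OleDbConnection connection = new OleDbConnection(connectionString))
        using (OleDbCommand command = new OleDbCommand(
            "SELECT CUSTNO, NAME FROM CUSTOMER", connection))
        {
            connection.Open();
            using (OleDbDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader[0], reader[1]);
                }
            }
        }
    }
}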
Host Integration Server 2004 and Other Integration Technologies
As described later in this article, some components of Host Integration Server 2004 can be used together with other integration technologies such as BizTalk Server 2006 and SSIS. The key fact to remember is that whenever a complete solution requires integration with IBM systems, Host Integration Server 2004 has a role to play.
Rather than integrating applications directly, it sometimes makes more sense to use queued messaging... no, rather, to connect them through a broker. A broker is software that sits between the applications being integrated, interacting with all of them. By providing a common connection point, brokers avoid the complexity that can arise when several applications are connected directly to one another. Brokers can provide a range of integration services, including transformations between different message formats and support for diverse communication technologies. A broker can also act as a platform for its own application logic, providing the intelligence to control a business process. For Windows, connecting applications through a broker means using BizTalk Server 2006.[1]
BizTalk Server 2006
Like its predecessor BizTalk Server 2004, BizTalk Server 2006 is an integration and business process platform. Available for Windows Server 2003, Windows 2000 Server, and Windows XP, BizTalk Server 2006 is Microsoft's solution for brokered application-to-application integration.
Describing BizTalk Server 2006
As the following figure shows, BizTalk Server 2006 sits in the middle of a group of applications. By providing adapters for various communication mechanisms, including MSMQ, "Indigo", EDI, and many more, BizTalk Server 2006 can communicate with Windows and non-Windows applications in a number of different ways. BizTalk Server 2006 also provides other integration services, including:
The ability to graphically define orchestrations, logic that interacts with applications on other systems to drive an integrated process, together with runtime services for orchestrations, such as state management and support for long-running transactions.
Graphical definition of XML schemas for messages, along with the ability to define transformations between incoming and outgoing messages that use those schemas.
Business-to-business (B2B) integration features, including support for Electronic Data Interchange (EDI), RosettaNet, HL7, and other standard interchange formats. It also includes services for managing interactions with trading partners.
BizTalk Server 2006 is also a business process server. Viewed from this perspective, it provides features such as:
Support for graphical definition of business processes using an Orchestration Designer hosted in Visual Studio. A lightweight Microsoft Visio–hosted Orchestration Designer for Business Analysts is also included.
Business Activity Monitoring (BAM), providing real-time displays of business process information to the information workers that rely on those processes.
A Business Rules Engine (BRE), letting complex business rules be defined, accessed, and maintained in a single place.
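To give a feel for the Business Rules Engine just mentioned, here is a rough sketch of how application code can invoke a deployed policy through the engine's Policy class. The policy name and the fact type are assumptions; the actual rules would be authored separately in the Business Rule Composer.

using Microsoft.RuleEngine;   // ships with BizTalk Server

// A plain .NET class used as a fact for the rules to evaluate.
public class OrderFact
{
    public decimal Amount;
    public bool RequiresApproval;
}

public static class ApprovalCheck
{
    public static bool NeedsApproval(decimal amount)
    {
        OrderFact fact = new OrderFact();
        fact.Amount = amount;

        // "OrderApprovalPolicy" is a hypothetical policy deployed to the rule store.
        Policy policy = new Policy("OrderApprovalPolicy");
        try
        {
            // The engine evaluates the policy's rules against the fact,
            // which may set RequiresApproval as a side effect.
            policy.Execute(fact);
        }
        finally
        {
            policy.Dispose();
        }
        return fact.RequiresApproval;
    }
}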
When to Use BizTalk Server 2006
The primary integration scenarios for BizTalk Server 2006 are:
Creating brokered application-to-application message-based integration, especially when data mapping and support for diverse communication mechanisms is required.
Implementing integration processes, including long-running processes that take hours, days, or weeks to complete, and processes with complex business rules.
Addressing B2B integration, including situations with many trading partner interactions and those that require industry standards such as RosettaNet and HL7.
Creating business processes that give information workers real-time visibility into an integrated process.
Using a brokered approach can add cost to an integration solution. Still, for scenarios such as those just listed, the additional overhead of a broker is more than made up for by the value it provides. In these situations, BizTalk Server 2006 is the best approach to integrating diverse applications.
BizTalk Server 2006 and Other Integration Technologies
BizTalk Server 2006 can work with many other integration technologies from Microsoft and other vendors. For example, as mentioned earlier, adapters are available for MSMQ and "Indigo", as well as adapters for non-Microsoft integration technologies such as IBM's WebSphere MQ. BizTalk Server 2006 also supports integration using Web services, including a SOAP adapter and the ability to import and export definitions created using the Business Process Execution Language (BPEL).
It's also possible to create custom adapters. For example, an organization could build a custom adapter based on components in Host Integration Server 2004 that connects to CICS and DB2. (These adapters will be included as a standard part of Host Integration Server 2006, the product's next release.) And while an adapter for SQL Server Broker is likely to be available in the future, a custom adapter could be created today.
One of the main benefits of a brokered approach to integration is the ability to communicate with a diverse set of applications. Given this, it shouldn't be surprising that BizTalk Server 2006 is able to use virtually any integration technology that provides direct application-to-application communication.
Integrating applications requires connecting active logic. Integrating data, by contrast, means moving and manipulating passive information. While the software that integrates data from different sources is not passive, the data itself—the thing that's actually being integrated—doesn't have any intelligence of its own. Because of this, data integration happens between data stores, not applications. Accordingly, both of Microsoft's technologies for data integration are associated with SQL Server, its flagship product for data management.
SQL Server Integration Services (SSIS) provides tools for combining data from diverse data sources into a SQL Server 2005 database. The successor to the SQL Server 2000 Data Transformation Services (DTS), SSIS is the extract, transform, and load (ETL), service for SQL Server 2005. SSIS is included as part of SQL Server 2005 Standard Edition, with some advanced components for data cleansing and text mining added in the Enterprise Edition.
Describing SSIS
It's become increasingly common for organizations to create databases that contain large amounts of historical data obtained from a group of operational databases. Creating this kind of data warehouse implies integrating a broad and diverse set of information. Yet doing this effectively also requires making the information consistent, coherent, and comprehensible. A primary goal of SSIS is to make this possible.
The previous figure shows how SSIS is typically used. Data from various DBMSs, including SQL Server 2005 and others, can be combined with data from other sources, such as semi-structured data and binary files. The result, stored in SQL Server 2005, can then be used as a foundation for historical reporting, data mining, and online analytical processing (OLAP).
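Although SSIS packages are normally designed and run with the graphical tools, they can also be loaded and executed from .NET code through the SSIS runtime API. The package path below is an assumption; the package itself would be built in the designer.

using System;
using Microsoft.SqlServer.Dts.Runtime;

public static class WarehouseLoader
{
    public static void Run()
    {
        Application application = new Application();

        // Hypothetical ETL package created in the SSIS designer.
        Package package = application.LoadPackage(@"C:\ETL\LoadWarehouse.dtsx", null);

        DTSExecResult result = package.Execute();
        if (result != DTSExecResult.Success)
        {
            foreach (DtsError error in package.Errors)
            {
                Console.WriteLine(error.Description);
            }
        }
    }
}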
When to Use SSIS
The primary integration scenarios for SSIS are:
Combining information from a group of operational databases into a data warehouse. Along with powerful support for data transformations, SSIS provides graphical tools for defining the ETL process, fuzzy logic for data cleansing, error handling, and other features to make integration of diverse data easier.
Transferring data from one DBMS to one or more other DBMSs. Because SSIS supports heterogeneous data sources, the products involved might or might not be SQL Server 2005.
Loading data into SQL Server databases from flat files, spreadsheets, and other diverse data sources.
SSIS and Other Integration Technologies
Like other Microsoft integration technologies, SSIS relies on the data access components included in Host Integration Server 2004 for access to data on IBM mainframe and midrange systems. And whatever systems are involved, it's conceivable that other integration approaches could be used to solve the core problems that SSIS addresses. For example, it's possible to create an orchestration using BizTalk Server 2006 that accesses diverse data sources and builds a common integrated database from a variety of data sources. In nearly every case, however, SSIS is a better choice for solving this problem. BizTalk Server 2006 focuses on integrating application logic, not data, and so it's better suited to real time communication of information between applications and among trading partners. SSIS, by contrast, is optimized for bulk data loading from diverse data sources. Given this, SSIS is a much better choice for creating data warehouses than the application-oriented BizTalk Server 2006. (In fact, the BizTalk Server 2006 Business Activity Monitoring component uses SSIS to build the BAM data warehouse.)
SSIS can also be used to replicate identical data across different systems rather than to combine diverse data. As described next, however, SQL Server Replication is usually a better solution for this problem.
SQL Server Replication
As its name suggests, SQL Server Replication allows replicating data across two or more SQL Server databases. Available with SQL Server 2000, this technology is also contained in all versions of SQL Server 2005 and in SQL Server CE.
Describing SQL Server Replication
It's often useful to have a copy of the same data in multiple databases, and then have that data automatically kept in sync. For example, letting applications running on a group of Web servers spread their read requests across a group of identical databases, each on its own machine, can improve the application's scalability and availability. For this arrangement to work, all updates must go to a single database instance, then be propagated to the read-only copies. SQL Server Replication is designed for situations like this.
The previous figure shows a simple illustration of SQL Server Replication. As the diagram suggests, one of the most important benefits of this technology is a powerful user interface that lets database administrators easily define what data should be replicated, see differences between tables, and more. A replication conflict viewer is also provided, allowing data conflicts that occur during replication to be resolved.
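Although the graphical tools are the usual way to set up replication, the same configuration can be scripted. The sketch below runs a few of the standard replication stored procedures from ADO.NET to publish one table; the database, publication, and subscriber names are invented, and a real deployment would also configure a Distributor and the replication agents.

using System.Data.SqlClient;

public static class ReplicationSetup
{
    public static void PublishCustomers()
    {
        string[] steps =
        {
            // Enable the database for transactional publishing.
            "EXEC sp_replicationdboption @dbname = N'OrdersDb', @optname = N'publish', @value = N'true'",

            // Create a publication and add one table (article) to it.
            "EXEC sp_addpublication @publication = N'OrdersPub', @status = N'active'",
            "EXEC sp_addarticle @publication = N'OrdersPub', @article = N'Customers', " +
                "@source_owner = N'dbo', @source_object = N'Customers'",

            // Push the publication to a hypothetical subscriber server.
            "EXEC sp_addsubscription @publication = N'OrdersPub', @subscriber = N'WEBDB01', " +
                "@destination_db = N'OrdersReplica', @subscription_type = N'Push'"
        };

        using (SqlConnection connection = new SqlConnection(
            "Data Source=.;Initial Catalog=OrdersDb;Integrated Security=SSPI"))
        {
            connection.Open();
            foreach (string step in steps)
            {
                using (SqlCommand command = new SqlCommand(step, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}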
When to Use SQL Server Replication
The primary integration scenarios for SQL Server Replication are:
Replicating data between tables in one or more SQL Server instances. Those instances might be running on servers, clients, or even mobile devices that are only occasionally connected. Rather than copying entire tables, SQL Server Replication replicates incremental row-level changes, letting updates be propagated at near real-time speeds.
Using SQL Server as a source for data that is replicated to IBM and Oracle databases.
Using Oracle as a source for data that is replicated to SQL Server, IBM, and Oracle databases. In this case, data is first replicated to a SQL Server database, then replicated to the other databases.
Data replication is useful in a variety of different scenarios, and so SQL Server Replication can be applied in numerous ways. In fact, even though this technology can be correctly seen as a tool for integration, it's just as accurate to view SQL Server Replication as a solution for data synchronization.
SQL Server Replication and Other Integration Technologies
If synchronization of identical data sources in near real time is required, SQL Server Replication is the best choice. No other integration technology provides support for rapidly tracking and delivering changes as they occur. While it's possible to write a custom replication application using BizTalk Server 2006 or other integration technologies, it's challenging to do correctly. The result also wouldn't have the performance or the features that are included in SQL Server Replication.
SQL Server Replication replicates data identically using either all columns of the selected tables or a subset of the table columns, while maintaining the data in the same format. SSIS has the additional capability to transform the data as it is moved, and so it's a better choice for copying entire tables that have different structures. Yet unlike SQL Server Replication, SSIS doesn't support replicating incremental row-level changes. It copies the whole table, which means SSIS typically has lower performance. As an ETL tool, SSIS focuses on integrating diverse data rather than replicating identical data, the problem for which SQL Server Replication was designed.
Different integration problems require different solutions, and it's important to use the right tool for the job. To address the diversity of applications and data that must be connected, Microsoft has produced a range of integration products and technologies. These solutions sometimes overlap, and so more than one choice might be applicable to a given situation. While these ambiguous cases are relatively uncommon, the most straightforward way to make a decision is by examining the fundamental scenarios for each integration technology.
The goal of integration is to connect diverse pieces of software into a coherent whole. If the tools for doing this are themselves built in diverse ways, such as when products built by several different firms are grouped together under a single marketing umbrella, reaching this goal becomes even more difficult. By providing a unified set of solutions, all built on the .NET Framework and all accessible using Visual Studio 2005, Microsoft provides a comprehensive, unified, and complete solution for today's integration challenges.
Primary Integration Scenarios
ASP.NET Web services (ASMX)
Connecting Windows applications with Windows and non-Windows applications through SOAP
.NET Remoting
Connecting Windows applications with other Windows applications through distributed objects
Enterprise Services
Connecting Windows applications with other Windows applications that use distributed transactions, object lifetime management, and so on.
"Indigo"
Connecting Windows applications with Windows and non-Windows applications using Web services, distributed transactions, lifetime management, and so on (subsumes ASMX, .NET Remoting, and Enterprise Services).
Connecting Windows applications with other Windows applications using queued messaging
Connecting SQL Server 2005 applications with other SQL Server 2005 applications using queued messaging
Connecting Windows applications with other Windows applications using queued messaging (through MSMQ and/or SQL Server Broker)
Connecting Windows applications with IBM zSeries and iSeries applications and data; connecting MSMQ with IBM WebSphere MQ
Connecting Windows applications and non-Windows applications using diverse protocols; translating between different message formats; controlling business processes with graphically defined orchestrations; connecting with business partners using industry standards, such as RosettaNet and HL7; providing business process services, such as Business Activity Monitoring and a Business Rules Engine
SQL Server Integration Services
Combining and transforming data from diverse sources into SQL Server 2005 data
Synchronizing SQL Server data with copies of that data in other instances of SQL Server, Oracle, or DB2
For the latest information about Microsoft Windows Server System, visit the Windows Server System site.
1. Despite its name, SQL Service Broker doesn't provide a "broker" in this sense of the term. In the terminology of SQL Service Broker, a broker is a message queue.
2014-23/0757/en_head.json.gz/11728 | Oracle® Fusion Middleware Understanding Security for Oracle WebLogic Server
2 Overview of the WebLogic Security Service
The following sections introduce the WebLogic Security Service and its features:
Introduction to the WebLogic Security Service
Features of the WebLogic Security Service
Oracle Platform Security Services (OPSS)
Balancing Ease of Use and Customizability
New and Changed Features in This Release
Deploying, managing, and maintaining security is a huge challenge for an information technology (IT) organization that is providing new and expanded services to customers using the Web. To serve a worldwide network of Web-based users, an IT organization must address the fundamental issues of maintaining the confidentiality, integrity and availability of the system and its data. Challenges to security involve every component of the system, from the network itself to the individual client machines. Security across the infrastructure is a complex business that requires vigilance as well as established and well-communicated security policies and procedures.
WebLogic Server includes a security architecture that provides a unique and secure foundation for applications that are available via the Web. By taking advantage of the security features in WebLogic Server, enterprises benefit from a comprehensive, flexible security infrastructure designed to address the security challenges of making applications available on the Web. WebLogic security can be used standalone to secure WebLogic Server applications or as part of an enterprise-wide, security management system that represents a best-in-breed, security management solution.
The open, flexible security architecture of WebLogic Server delivers advantages to all levels of users and introduces an advanced security design for application servers. Companies now have a unique application server security solution that, together with clear and well-documented security policies and procedures, can assure the confidentiality, integrity and availability of the server and its data.
The key features of the WebLogic Security Service include:
A comprehensive and standards-based design.
End-to-end security for WebLogic Server-hosted applications, from the mainframe to the Web browser.
Legacy security schemes that integrate with WebLogic Server security, allowing companies to leverage existing investments.
Security tools that are integrated into a flexible, unified system to ease security management across the enterprise.
Easy customization of application security to business requirements through mapping of company business rules to security policies.
A consistent model for applying security policies to Java EE and application-defined resources.
Easy updates to security policies. This release includes usability enhancements to the process of creating security policies as well as additional expressions that control access to WebLogic resources.
Easy adaptability for customized security solutions.
A modularized architecture, so that security infrastructures can change over time to meet the requirements of a particular company.
Support for configuring multiple security providers, as part of a transition scheme or upgrade path.
A separation between security details and application infrastructure, making security easier to deploy, manage, maintain, and modify as requirements change.
Default WebLogic security providers that provide you with a working security scheme out of the box. This release supports additional authentication stores such as databases, and gives the option to configure an external RDBMS system as a datastore to be used by select security providers.
Customization of security schemes using custom security providers
Unified management of security rules, security policies, and security providers through the WebLogic Server Administration Console.
Support for standard Java EE security technologies such as the Java Authentication and Authorization Service (JAAS), Java Secure Sockets Extensions (JSSE), Java Cryptography Extensions (JCE), and Java Authorization Contract for Containers (JACC).
A foundation for Web Services security including support for Security Assertion Markup Language (SAML) 1.1 and 2.0.
Capabilities which allow WebLogic Server to participate in single sign-on (SSO) with web sites, web applications, and desktop clients.
A framework for managing public keys which includes certificate lookup, verification, validation, and revocation as well as a certificate registry.
Oracle Platform Security Services (OPSS) provides enterprise product development teams, systems integrators (SIs), and independent software vendors (ISVs) with a standards-based, portable, integrated, enterprise-grade security framework for Java Standard Edition (Java SE) and Java Enterprise Edition (Java EE) applications.
OPSS provides an abstraction layer in the form of standards-based application programming interfaces (APIs) that insulates developers from security and identity management implementation details. With OPSS, developers don't need to know the details of cryptographic key management or interfaces with user repositories and other identity management infrastructures. With OPSS, in-house developed applications, third-party applications, and integrated applications all benefit from the same uniform security, identity management, and audit services across the enterprise. OPSS is available as part of WebLogic Server.
For more information about OPSS, see Oracle Fusion Middleware Security Overview.
The components and services of the WebLogic Security Service seek to strike a balance between ease of use, manageability (for end users and administrators), and customizability (for application developers and security developers). The following paragraphs highlight some examples:
Easy to use: For the end user, the secure WebLogic Server environment requires only a single sign-on for user authentication (ascertaining the user's identity). Users do not have to re-authenticate within the boundaries of the WebLogic Server domain that contains application resources. Single sign-on allows users to log on to the domain once per session rather than requiring them to log on to each resource or application separately.
For the developer and the administrator, WebLogic Server provides a Domain Configuration Wizard to help with the creation of new domains with an administration server, managed servers, and optionally, a cluster, or with extending existing domains by adding individual servers. The Domain Configuration Wizard also automatically generates a config.xml file and start scripts for the servers you choose to add to the new domain.
Manageable: Administrators who configure and deploy applications in the WebLogic Server environment can use the WebLogic security providers included with the product. These default providers support all required security functions, out of the box. An administrator can store security data in the WebLogic Server-supplied security store (an embedded, special-purpose LDAP directory server) or use an external LDAP server, database, or user source. To simplify the configuration and management of security in WebLogic Server, a robust default security configuration is provided.
Customizable: For application developers, WebLogic Server supports the WebLogic security API and Java EE security standards such as JAAS, JSSE, JCE, and JACC. Using these APIs and standards, you can create a fine-grained and customized security environment for applications that connect to WebLogic Server.
For security developers, the WebLogic Server Security Service Provider Interfaces (SSPIs) support the development of custom security providers for the WebLogic Server environment.
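To give a concrete flavor of how this configuration can be driven in practice, the sketch below uses the WebLogic Scripting Tool (WLST), whose Jython-based scripts work against the same security MBeans that the Administration Console manages. It connects to a running Administration Server and adds a user through the default authentication provider; the credentials, URL, and provider names are placeholders, and the exact MBean operations available vary by WebLogic Server release, so this is an illustrative outline rather than a tested procedure.

# WLST sketch: add a user to the default security realm (all names and the URL are placeholders).
connect('weblogic', 'welcome1', 't3://localhost:7001')

# Work against the server's configuration MBean tree.
serverConfig()

# Look up the default realm and its default authentication provider.
realm = cmo.getSecurityConfiguration().getDefaultRealm()
authenticator = realm.lookupAuthenticationProvider('DefaultAuthenticator')

# Create a user in the security store backing the default provider.
authenticator.createUser('testuser', 'ChangeMe123', 'Example user created from WLST')

disconnect()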
See "What's New in Oracle WebLogic Server" for new and changed features in this release. | 计算机 |
2014-23/0758/en_head.json.gz/1663 | Print this article | Return to Article | Return to CFO.com
The Seven-Year Niche
Linux has gone mainstream sooner than some expected, and along the way has spawned a movement known as "open source." Just how far can it go?
Bob Violino, CFO IT
November 15, 2004
Anyone paying even the slightest attention to the fortunes of Linux, the "free" computer-operating system beloved by techies and intriguing to CFOs, has probably noticed that a new buzz phrase has entered the conversation. "Open source" has become a staple of computer-vendor press releases and technology-conference agendas, providing an umbrella term for Linux and a growing volume of non-Linux software that is now in the public domain and ready to be stitched into the fabric of computing infrastructures.
In the past year, vendors of many stripes have offered up once-proprietary software as eagerly as patrons once stood in line at Studio 54. Software dubbed "open source" is free to use, alter, and share, with no license fees per se, although if it is obtained via a for-profit company, support costs will likely enter in. Work continues apace on a variety of open-source efforts nearly as well known as Linux, such as the Apache Web server, the MySQL database, and the OpenOffice desktop suite. And Linux itself, still the linchpin of this movement, continues to carve an ever-widening space in Corporate America, to the point where Dan Kusnetzky, vice president, system software, at research firm IDC, acknowledges, "We published a projection in 1997 that Linux would show up as a mainstream [operating system] choice in all vertical markets around the world by the end of 2005. Our projection may have been too pessimistic. Seven years later, it's happened."
According to figures compiled by IDC, 3.4 million Linux client operating systems were shipped in 2002, and that number is forecast to grow to more than 10 million by 2007. Gartner says shipments of Linux-based servers increased about 61.6 percent in the second quarter of this year compared with the same period last year. Those figures don't include the substantial volume of free downloads that often give Linux its initial presence in corporate environments.
More important than increased sales figures may be a growing perception that the time is right for Linux to move beyond pilot projects and relatively safe duty as the underlying software for fairly mundane tasks such as print and file services, and on to center stage as a key platform for a range of business needs.
"We evaluated Linux in 1999 and didn't feel it was ready for prime time," says Mike Jones, senior vice president and CIO at retailer Circuit City Stores in Richmond, Virginia. "It has come a long way since then, and our confidence has increased." So much so, in fact, that Circuit City has launched a project to roll out Linux-based point-of-sale systems from IBM at its 600 nationwide retail outlets beginning in March. The strategy is part of a "revitalization effort" that will move its stores from customized, proprietary systems to software based on open standards, says Jones.
The growing popularity of Linux and other open-source software has grabbed the attention of leading IT vendors, which want to create open-source user communities that will help boost their revenues (or reduce their costs) by linking their commercial products and services to systems designed, at least in part, with open-source technologies. More than 70 percent of open-source development today is supported by vendors that offer commercial products that incorporate open-source software, in contrast to most people's perception of a strictly grassroots movement, staffed on a part-time or best-effort basis, as open source was 5 to 10 years ago, says Bill Weinberg, architecture specialist at Open Source Development Labs, a Beaverton, Oregon, industry group that's working to create open-source options for business. The OSDL also pays the salaries of key open- source developers, in particular Linus Torvalds and Andrew Morton.
Vendors on the Bandwagon
IBM, which garnered more than $2 billion in Linux-related revenue last year and is now putting Linux at the heart of many of its enterprise products, has made a slew of open-source announcements this year. In August the company expanded its Leaders for Linux program, which provides resources such as presales support, education, and technical support to partners — including recently signed Novell and Red Hat — that market and support Linux products on IBM platforms. At the same time, IBM announced that it is contributing more than half a million lines of relational database code to the Apache Software Foundation. IBM participates in and contributes to more than 150 open-source projects, including Linux, the Globus Alliance, Eclipse, and Apache, and has issued enough press releases on the subject to fill an enterprise-class database.
Not to be outdone, Hewlett-Packard, which had $2.5 billion in Linux-related revenue last year, has been introducing Linux options for select systems and says it now ships 100,000 desktop and notebook computers equipped with Linux each quarter. Earlier this year, it announced an agreement with Novell to certify and support the Novell SUSE Linux operating system on select HP Compaq client systems. In June HP signed agreements with MySQL and JBoss to certify, support, and jointly sell their open-source products on HP servers. The company also said it has expanded its Linux professional services team to 6,500 people. Perhaps more notable, in August HP introduced what it claimed was the first preinstalled Linux notebook PC from a major hardware vendor. Linux is well established as a server operating system, but its viability on corporate desktops (and laptops) is a fiercely debated issue, particularly within the halls of Microsoft, which has found itself on the defensive as many competitors position Linux in direct opposition to Microsoft's products.
Meanwhile, Sun Microsystems is sponsoring a long list of open-source projects and says it plans to release its Solaris operating system under an open-source license by the end of the year. And in August, Computer Associates International Inc. released its Ingres Enterprise Relational Database for Linux into the open-source community. CA says that marked the first time a major enterprise software vendor had collaborated directly with the open-source community to deliver enterprise database technology. That may be for the scholars to debate, as IBM's database announcement preceded CA's by one day.
As they knock one another over in an attempt to prove their allegiance to Linux, vendors do seem to be removing two often-voiced knocks against open source: questionable reliability and lack of customer support.
"One of the criticisms is that with open source, you're on your own; you have to get help from newsgroups or second-tier companies [that lack] the stature of bigger vendors," says Weinberg. "These larger sponsors are now putting Linux into their core strategy. The [justification] for using Linux in critical operations is on a par with other top-tier software solutions."
Furthermore, Weinberg says, enterprises are developing internal expertise in Linux by refocusing people with Unix backgrounds, and there are many sources of support within the open-source community. Time and a critical mass of support have combined to make Linux more reliable, he says, in large part because of continuous improvements by the OSDL and the larger open-source community. This, and the cost savings from the absence of licensing fees, have enterprises taking greater notice of the open-source movement than ever before.
Open to Open Source
Aerospace manufacturer The Boeing Co., based in Chicago, had been following open-source developments for years before it began purchasing Linux-based servers about three years ago. Impressed with the performance of the operating system, about a year ago the company launched a policy to migrate from proprietary Unix servers to Linux-based machines for its IT infrastructure.
Boeing uses commercial versions of Linux, such as Red Hat's, so it can get vendor support if any problems arise. As a result, the life-cycle support costs of the operating system aren't much different from those of such Unix systems as Sun's Solaris or HP's HP-UX, says Vaho Rebassoo, director and chief architect, computing and network operation, at Boeing.
The main benefit, Rebassoo says, comes from not being locked into any one hardware vendor. "With Linux, the most compelling argument is that instead of buying hardware and software packages from Sun or HP, we can put Linux onto [any] Intel equipment," which gives the company more flexibility in hardware selection and ultimately will lead to cost savings. Considering that Boeing has more than 7,000 servers, those savings can be considerable.
Rebassoo says it's a myth that open-source software is unreliable. "That being said, we would have reservations about putting our most-complex [engineering and airplane design] applications on Linux," he says. "But it will be only a matter of time before we move there."
Another Linux devotee is the nearby Chicago Mercantile Exchange, the largest futures exchange in the United States. The exchange is gradually increasing its deployment of Linux and now runs 35 percent of its Unix servers on the open-source platform, using software from Red Hat. Its goal is to reach 40 percent by the end of the year and ultimately replace 100 percent of its Unix-based servers with machines that run Linux.
Chicago Mercantile will likely never run Linux on its most computing-intensive systems, such as the mainframe databases that store massive volumes of data, says CTO Charlie Troxel. But it has been pleased with how open source has performed at the server level.
Total cost of ownership was the initial goal of the exchange's move to Linux. Troxel estimates that the Linux-based servers cost five to seven times less than the Solaris servers did before Sun lowered its pricing to close the gap somewhat. But Chicago Mercantile is also seeing performance increases compared with Unix, he says: "Performance of the Linux servers in some cases is 10 times better than on the servers we had been using. So for less money, we're getting far better results." In addition to the support it gets from Red Hat, Chicago Mercantile is retraining its Unix technical-support team for Linux.
For some newer technology vendors, putting Linux at the heart of their offerings allows them to go to market at a lower price point. InsiteOne Inc., a Wallingford, Connecticut, company that provides storage and archiving of digital medical images and other health-care data, has been an open-source user since it was founded in 1999. InsiteOne installs HP ProLiant servers at customer sites to run high-capacity imaging applications, and uses Linux on all those servers as well as in its data center. The databases that the company uses to store medical images are powered by MySQL.
InsiteOne placed a heavy bet on open-source platforms because it believed they would provide reliability and scalability — and they have, says David Cook, chairman and founder of the company. Another advantage is that Linux has become a positive selling point with clients. Cook recently met with a large government agency that is a prospective customer and is moving its entire IT infrastructure to Linux. "Because we're on Linux, we got a check mark on that question on the application," he says.
To be sure, there are still uncertainties when it comes to adopting open-source products in the enterprise. In addition to the trepidation that accompanies any technological change, a specific caveat to would-be users of Linux is the legal question that still dangles over the code. The lawsuits filed by SCO Group against a group of companies concerning portions of Linux that SCO claims infringe on part of its Unix operating system are still in progress. In August Open Source Risk Management Inc., a firm that provides insurance coverage against lawsuits involving patents and copyrights, published a report saying that 283 registered software patents could possibly figure into such lawsuits.
Concern about lawsuits "shows up in our surveys. But for the most part, it's not stopping people from starting pilot projects and seeing how Linux would fit in their environment," says IDC's Kusnetzky. At this point, nothing seems to be stopping the move to open source and its greater role in the enterprise.
"I think it's already on everyone's road map in some sort of way," says Gartner analyst Michael McLaughlin. "Companies have to be aware of the benefits. We've reached the point where deployment of [open source] for enterprise computing is very appropriate."
Bob Violino is a freelance writer based in Massapequa Park, New York.
In September the Free Standards Group announced the availability of Linux Standard Base 2.0. The San Francisco-based nonprofit organization, which develops and promotes open-source software standards, says the standard is an essential component for the long-term market success of Linux. New features in the release include an application binary interface for C and support for 32- and 64-bit hardware architectures. The standard has the support of some of the biggest players in the Linux marketplace, including IBM, Hewlett-Packard, Intel, Dell, Novell, and Red Hat. The group says the broad support is significant because it will help keep Linux from diverging, as Unix systems did in the past. The standard is available now from the group's Website, at www.freestandards.org.
Dan Kusnetzky, vice president, system software, at IDC, says the effort shows that open-source software is gaining greater momentum in the business world. "The emergence of standards is an indication that something has become mainstream. Otherwise, why would anyone care?" he says. "The fact that the group has come up with this and has [leading] vendors signed up to support it is another major milestone toward Linux as an enterprise platform."
Opening a New Frontier
The next triumph for open source may well be the corporate database. More than half a dozen options already exist, led by MySQL and PostgreSQL. Falling under the rubric of open-source databases, or OSDBs, these products are already finding their way into corporate use, sometimes for mission-critical applications. A recent report by AMR Research found that, while advanced functionality and scalability are still open questions, the performance and stability of leading OSDBs are deemed acceptable by many. The availability of commercial support contracts gives companies the confidence they need to embrace this new breed of database, and most current users of OSDBs expect them to equal commercial products in all key criteria within three years.
That doesn't mean that the world will abandon the IBM, Oracle, and other commercial databases that currently provide a major portion of IT bedrock. As AMR notes, corporate inertia is a force to be reckoned with, even when new options cost less. More to the point, companies with terabytes of data won't be happy to learn that the maximum capacity of OSDBs can be measured in gigabytes, moving business logic ("stored procedures") to OSDBs is cumbersome, and decision-support and business-intelligence systems that rely on elaborate queries are not currently a good fit with OSDBs.
None of these problems is insurmountable, and, like Linux, OSDBs are constantly being refined to meet more-complex corporate needs. But for now, OSDBs tend to be harnessed to serve new systems rather than retrofitted to underpin existing applications. AMR notes that, on a per-CPU basis, the most expensive OSDB costs less than 4 percent of the most expensive traditional database ($1,500 versus $40,000), which certainly helps the ROI calculation. And OSDBs are seen as being simpler to administer. Software companies whose products rely on databases are interested in the growing success of OSDBs — money that customers don't have to spend on the underlying technology should help drive sales of the applications that ride on top. But the catch is that many companies won't aggressively pursue OSDBs until they see a suitable supply of compatible software. OSDBs remain a leading-edge technology, but AMR expects them to be mainstream within three years.
© CFO Publishing Corporation 2009. All rights reserved.
2014-23/0758/en_head.json.gz/4605 | Home > Risk Management
Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
Overview The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
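As a toy illustration of the driver idea, and only that, the sketch below aggregates hypothetical driver ratings into a simple summary; the driver names, the rating scale, and the threshold are all invented for the example and do not represent the SEI's published MRD scoring method.

# Toy driver-based risk summary (a generic illustration, not the actual MRD algorithm).
# Each driver is rated with an analyst's judgment of how strongly it supports success (0.0 to 1.0).
drivers = {
    "Objectives are realistic and clearly stated": 0.85,
    "Plan and resources are sufficient": 0.60,
    "Process supports coordination across suppliers": 0.45,
    "Key staff have the needed skills": 0.70,
    "External dependencies are under control": 0.30,
}

THRESHOLD = 0.5  # below this, a driver is treated as pushing toward failure rather than success

def summarize(ratings, threshold=THRESHOLD):
    """Return an overall average and the drivers most likely to undermine the objectives."""
    at_risk = sorted((p, name) for name, p in ratings.items() if p < threshold)
    overall = sum(ratings.values()) / len(ratings)
    return overall, at_risk

overall, at_risk = summarize(drivers)
print(f"Average driver rating: {overall:.2f}")
for p, name in at_risk:
    print(f"  Needs attention: {name} (rated {p:.2f})")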
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
The Monitor June 2009
New Directions in Risk: A Success-Oriented Approach (2009)
A Practical Approach for Managing Risk
A Technical Overview of Risk and Opportunity Management
A Framework for Categorizing Key Drivers of Risk
Practical Risk Management: Framework and Methods
2014-23/0758/en_head.json.gz/5414 | Internet Governance Forum-USA, 2012
Case Studies: IG/ICANN, Cybersecurity,
Consumer Privacy - Lessons Learned or Not
Brief session description: Thursday, July 26, 2012 - This workshop was aimed at examining the role principles are playing in framing debates, achieving consensus and influencing change – or not. Proposals for Internet principles are popping up everywhere, from national to regional and global discussions, on a wide range of issues. In 2011, IGF-USA examined a number of principles in a session titled "A Plethora of Principles." This session follows on that one. Session planners noted that it's not enough to simply develop a set of principles; the question is: how are principles actually implemented, and how are they inspiring change? Are they new voluntary codes of conduct, new regulations, new laws? Principles can become a baseline for gaining high-level agreements. They may go beyond the expectations possible through legislation or regulation, so some argue that principles should be written to be aspirational. Some argue for legislation, regulation or enforcement mechanisms to 'hold industry accountable' to promises made in principles designed as sets of commitments. This workshop examined three case vignettes: 1) how consumer privacy principles have fared in global and national settings in terms of these points 'turning into practice'; 2) how the principles of a white paper were incorporated into ICANN's formation and what the status of these principles is today within ICANN's mission and core activities; and 3) how cybersecurity/botnet principles are faring.
Details of the session:
The moderator for this session was Shane Tews, vice president for global public policy and government relations at Verisign. Panelists included:
Becky Burr, chief privacy officer, Neustar Inc.: Turning White Paper Principles into actuality in ICANN
Meneesha Mithal, associate director of the division of privacy and identity protection, Federal Trade Commission: Consumer privacy principles
Eric Burger, director of the Georgetown University Center for Secure Communications: Cybersecurity and botnets
Carl Kalapesi, co-author of the World Economic Forum's report Rethinking Personal Data: Strengthening Trust: the World Economic Forum perspective
Before an informal agreement, policy or formal regulation is adopted, passed or approved, it takes its initial steps as an idea. The trick lies in bringing it from a formative state to something actionable; otherwise it may languish as a suggested goal, followed by and adhered to by no one.
During the IGF-USA panel titled “Turning Principles into Practice – or Not” participants shared successful case studies as examples of how to create actionable practices out of ethereal goals. Citing processes ranging from US efforts to counteract botnets to domain name system governance and to consumer privacy, three panelists and one respondent drew from their own experiences in discussing ways in which people might successfully bridge the gap between idea and action.Meneesha Mithal, associate director of the Federal Trade Commission’s Division of Privacy and Identity Protection, weighed in on the efficacy of principles versus regulation by offering a series method to act on a problem.
“It’s not really a binary thing - I think there’s a sliding scale here in how you implement principles and regulation,” she said. She cited corporate self-regulatory codes, the work of international standard-setting bodies, multistakeholder processes, safe harbors and legislation as possible means for action.
Mithal highlighted online privacy policies as an example of the need for a sliding scale. The status quo has been to adhere to the concepts of notice and choice on the part of consumers; this has resulted in corporations' creation of lengthy, complicated privacy policies that go unread by the consumers they are meant to inform. Recently, pressure has been placed on companies to provide more transparent, effective means of informing customers about privacy policies.
“If it had been in a legislative context, it would have been difficult for us to amend laws,” Mithal said, though she admitted that such flexible agreements are “sometimes not enough when you talk about having rights that are enforceable.”
And Mithal did note that, given the current climate surrounding the discussion of online privacy, it’s still the time for a degree of broad-based privacy legislation in America.Eric Burger, a professor of computer science at Georgetown University, spoke on the topic of botnets, those dangerous cyber networks that secretly invade and wrest control of computers from consumers, leaving them subservient to the whims of hackers looking for a challenge, or criminals looking for the power to distribute sizable amounts of malware.
Given the sheer number of stakeholders - ISPs concerned about the drain on their profits and the liability problems the strain of illegal information shared by the botnets, individual users concerned over whether their computers have been compromised and government agencies searching for a solution - Burger said that the swift adoption of principles is the ideal response.
Among those principles are sharing responsibility for the response to botnets, admitting that it’s a global problem, reporting and sharing lessons learned from deployed countermeasures, educating users on the problem and the preservation of flexibility to ensure innovation. But Burger did admit the process of arriving at this set of principles wasn't without its faults. “Very few of the users were involved in this,” he said, citing “heavy government and industry involvement, but very little on the user side,” creating a need to look back in a year or two to examine whether the principles had been met and whether they had been effective in responding to the swarm of botnets.Becky Burr, chief privacy officer and deputy general counsel at Neustar, previously served as the director of the Office of International Affairs at the National Telecommunications and Information Administration, where she had a hands-on role in the US recognition of ICANN (NTIA). She issued a play-by-play of the lengthy series of efforts to turn ICANN from a series of proposed responses into a legitimate governing entity, which was largely aided by a single paragraph in a framework issued by President Bill Clinton’s administration in 1997.
Written as a response to the growing need for the establishment of groundwork on Internet commerce and domain names, the paper called for a global, competitive, market-based system for registering domain names, which would encourage Internet governance to move from the bottom-up. The next day, the NTIA issued the so-called “Green Paper” which echoed many of the principles of the administration’s framework and drew extensive feedback from around the world, including negative feedback over the suggestion that the US government add up to five gTLDs during the transitional period.
After reflection on the feedback to both the white and green papers, and a series of workshops among multiple stakeholders to flesh out the principles of stability, competition, private-sector leadership, bottom-up governance and realistic representation of the affect communities, ICANN held its first public meeting Nov. 14, 1998, underwent several reforms in 2002, and ever since, in Burr’s words, “is still the best idea, or at least no one’s figured out a better idea.”
“The bottom line is to iterate, make sure you articulate your principles and try to find some built-in self-correcting model,” Burr said.While Burr’s play-by-play described how a relatively independent, formal institution was formed to offer DNS governance, Carl Kalapesi, a project manager at the World Economic Forum, offered a more informal approach, relying on the informal obligations tied to agreeing with principles to enforce adherence.
“Legislative approaches by their nature take a very, very long time," Kalapesi said. He vigorously supported the importance of principles in offering “a common vision of where we want to get to,” which leaders can sign onto in order to get the ball rolling.
He offered the example of the “Principles of Cyber Resilience,” offered to CEOs at last year’s World Economic Forum with the goal of making them more accountable for the protection of their own networks and sites while still allowing them flexibility to combat problems in a way that best suited their own work-flow and supply chains.
Central to Kalapesi’s argument in favor of principle-based solutions is their flexibility.
“Half of the uses of data didn’t exist when the data was collected – we didn’t know what they were going to do with it,” he said, alluding to the concerns over the use of private data by the likes of Google and Facebook, which accelerate and evolve at a rate with which formal legislation could never keep up.
Burr later echoed this point in theorizing that 1998′s Child Online Protection Act might soon be obsolete, but Mithal remained firm that a “government backstop” should be in place to ensure that there’s something other than the vague notion of “market forces” to respond to companies who step back from their agreements.– Morgan LittleReturn to IGF-USA 2012 Home
http://www.elon.edu/e-web/predictions/igf_usa/2012/default.xhtml
The multimedia reporting team for Imagining the Internet at IGF-USA 2012
included the following Elon University students, staff, faculty and alumni:
Jeff Ackermann, Bryan Baker, Ashley Barnas, Katie Blunt, Mary Kate Brogan, Joe Bruno, Kristen Case, Allison D'Amora, Colin Donohue, Keeley Franklin, Janae Frazier, Ryan Greene, Audrey Horwitz, Elizabeth Kantlehner, Perri Kritz, Morgan Little, Madison Margeson, Katie Maraghy, Brennan McGovern, Brian Mezerski, Julie Morse, Janna Anderson. A project of the Elon University School of Communications
All rights reserved. Contact us at [email protected]
2014-23/0758/en_head.json.gz/8589 | Oracle® OLAP
DML Reference
Part No. B10339-02
Oracle OLAP DML Reference, 10g Release 1 (10.1)
Copyright © 2003 Oracle Corporation. All rights reserved.
The Programs (which include both the software and documentation) contain proprietary information of Oracle Corporation; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent and other intellectual and industrial property laws. Reverse engineering, disassembly or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited.
The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. Oracle Corporation does not warrant that this document is error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without the express written permission of Oracle Corporation.
If the Programs are delivered to the U.S. Government or anyone licensing or using the programs on behalf of the U.S. Government, the following notice is applicable:
Restricted Rights Notice Programs delivered subject to the DOD FAR Supplement are "commercial computer software" and use, duplication, and disclosure of the Programs, including documentation, shall be subject to the licensing restrictions set forth in the applicable Oracle license agreement. Otherwise, Programs delivered subject to the Federal Acquisition Regulations are "restricted computer software" and use, duplication, and disclosure of the Programs shall be subject to the restrictions in FAR 52.227-19, Commercial Computer Software - Restricted Rights (June, 1987). Oracle Corporation, 500 Oracle Parkway, Redwood City, CA 94065.
The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup, redundancy, and other measures to ensure the safe use of such applications if the Programs are used for such purposes, and Oracle Corporation disclaims liability for any damages caused by such use of the Programs.
Oracle is a registered trademark, and Express, PL/SQL, and SQL*Plus are trademarks or registered trademarks of Oracle Corporation. Other names may be trademarks of their respective owners.
2014-23/1194/en_head.json.gz/13827 | Japanese Game Development Continues Decline
Stephen Kamizuru (Blog) - January 26, 2009 7:48 AM
Japanese market share of western game market falls to 20%
Japanese market share of the Western game market has been reduced to 20 percent according to an analysis of the Japanese gaming industry by the CESA. The decline is significant as Japanese game development used to maintain a dominant position in the industry especially for home and portable console game development. The speed of the decline is also noteworthy as the decline has become impossible to ignore during this generation of home consoles which are roughly 2 to 3 years old.
In October of last year, Square Enix president Yoichi Wada declared that Japan had "lost its position" as the leader in the video game industry. These claims are being backed by evidence which shows the market share for Western-developed titles in their own territories doubled between 2004 and 2007 while Japanese game market share has declined.
Although market share has declined, data summarized by Kotaku revealed the news is not all negative, as revenue generated from Japanese game exports overall has increased by 43 percent year on year in 2006 and by 54.3 percent in 2007. The success is attributed primarily to the success of the Nintendo DS and Wii hardware. According to a senior analyst at Nomura Finance, there are several causes for the decline in Japanese game development. He suggests that RPG games, on which Japanese developers tend to focus, are not as popular in the West. He also suggests the right to develop games in potentially lucrative areas such as sports or popular movie franchises is tightly controlled. Nomura also states that since the successful launch of the Microsoft Xbox, the quantity and size of the competition in the field of game development have increased significantly, making it more difficult for Japanese game developers to maintain significant market share.
RE: Curious
With more complex technologies, if you don't have the right tools to tone down the learning curve, there's no escaping the fact that you will have to spend years on a project and hope for its success. Only bigger companies can rely on this and only they have the money to do this. Even years after release now their full potential hasn't been tapped, including the Wii and DS.
2014-23/1194/en_head.json.gz/13862 | Where's the SOA Train Headed?
An AMR essay draws parallels between SOA and the client/server boom; ERP vendors have control of the aggregated services in the short term.
By Marshall Lager Posted Oct 7, 2005
Services-oriented architecture is the application design trend of the present and the future, but its very nature indicates that it will not be with us forever, according to an essay published today. In "Dateline 2010: A Look Back At SOAs," Bruce Richardson, chief research officer at AMR Research, considers several angles of the SOA issue, including value proposition, performance, necessity of skills, and possible vulnerabilities. He reminds us: "SOA is a journey, not a destination."
The enterprise applications market is going through a period of consolidation, which appears to be limiting the options of smaller companies. "A number of companies received funding when CRM and ERP became important issues in business, but now they're stranded with no hope of an IPO, hoping that SAP or Oracle will buy them out," Richardson says. "We got excited about client/server architecture back in 1991, and now I'm wondering about what questions we should have asked ourselves back then, and how to apply those lessons to the emerging SOA."
The questions are wide-ranging, from the increased computing infrastructure requirements and the possibility of slow performance, to governance of the spread of new services and the shortage of personnel skilled in creating, integrating, and maintaining SOA components. Intel estimates that the reuse of services and components will improve deployment efficiency by 25 to 56 percent over traditional deployments, with the attendant IT budget savings, but Richardson ponders the other side of the issue. "When we shifted from dumb terminals to PC clients, most of us did so without thinking about how much more it was going to cost to support the new desktops. Instead, we viewed it as a productivity boost. Now, many large companies have engaged in a very expensive, instance consolidation exercise designed to move us back to a more cost-effective centralized model." This, he believes, will lead to hidden costs in the form of hardware upgrades, real-time BPM and business activity monitoring tools, and consulting services fees for integrators.
Richardson also notes a lack of security due to multiple versions of existing services. "In the initial [ESA] pilot, there is no way to keep track of the various versions and changes to specific Web services." He suggests creating a metadata repository to define the services catalog and prevent loopholes from forming. He's not entirely pessimistic, however: "I see a lot of hope for analytics and handling unstructured information. The space for knowledge management and content management is getting larger," Richardson says, and that space will be filled by specialist organizations that can offer their expertise as a component in a services platform.
"The ERP platform of the future consists of a new hub-and-spoke architecture. The center consists of a much thinner, stripped-down applications hub. The spokes are a set of much richer Web services that radiates out through the enterprise and into the extended trading partner community," Richardson says. "Intel sees ESA as offering a new plug-and-play, ERP-as-a-service platform that will accelerate business changes and software upgrades. It also provides an opportunity for business process reengineering, especially for processes that extend to trading partners."
The current issue is how to cut through the clutter of SOA offerings, but "in a me-too environment, the tie goes to the ERP vendor," Richardson says, noting those vendors have control of the aggregated services in the short term. SOA requires interoperability, however, which will open the door for a new crop of best-of-breed vendors, as well as the ability for users to pick and choose from among the best platform features. But the new best-of-breed will have much shorter time to innovation.
SOA will change and evolve for the foreseeable future. "When will the future arrive? That's too hard to predict," he says. "Client/server took five to eight years to reach the promises made in 1990 and 1991. With SOAs, the trip will be longer. I think we are embarking on a 10-year journey--all aboard."
Oracle's Fused Future: Support and Interoperability
SOA Is Consulting's Bread and Butter
Integration Still Leaves Some Users Confused
2014-23/1194/en_head.json.gz/14068 | 38 Studios Hires Two Former Playwell and Zynga Executives March 15, 2012 - Kingdoms of Amalur: Reckoning maker 38 Studios has added two game industry veterans: John Blakely and Mark Hansen. John Blakely joins the company as senior vice-president of development, while Mark Hansen takes the role of senior vice-president of operations & business.
Prior to joining 38 Studios, Blakely worked at Zynga's Austin offices as its general manager. Prior to working for Zynga, he worked at Sony Online Entertainment as vice-president of development. While at SOE he oversaw development of several massively multiplayer online (MMO) games, including DC Universe Online and EverQuest II. In his new role he will oversee game development at 38 Studios' offices in Providence, RI and at Big Huge Games in Baltimore, MD.
Before joining 38 Studios, Hansen was senior director of Playwell Studios, where he oversaw the online game business for LEGO Systems Inc. While at Playwell, he led the development, launch, and operation of the LEGO Universe MMO. In his new role he will run the company's publishing efforts, including operational services, marketing, and business development.
38 Studios is currently working on its first MMO title, code named "Copernicus."
Posted in 38 Studios
2014-23/1194/en_head.json.gz/14137 | Opera Software ASA offers free course for Web deve...
Opera Software ASA offers free course for Web developers
It's looking to push standards-based Web development
Todd R. Weiss (Computerworld) on 10 July, 2008 08:31
The company behind the Opera Web browser has released a free online curriculum to encourage student and professional Web developers to create standards-based Web sites.
In an announcement Tuesday, Opera Software ASA said it launched the effort to help set the pace of Web standards education and training in secondary schools, colleges, universities and businesses.
Under the project, Opera has created an online Web Standards Curriculum, which provides detailed information and lessons about Web design and development using standards-based coding. The online curriculum so far includes 23 articles, with more than 30 more to come, according to Thomas Ford, communications manager for the Norway-based company.
"This is essentially a curriculum for teaching standards-based Web design," Ford said. Many existing materials on the subject are out-of-date or incomplete, he said, so Chris Mills, developer relations manager at Opera, created the company's own version of a training course. "We wanted something that was easy to understand. Chris saw a lack of good standards-based design materials."
By using Web browsers that are standards-based, users aren't locked into a browser from any specific vendor and content is rendered properly online, Ford said. "It's really about opening up the Web."
Anyone can use the class materials for free as long as they don't try to resell them, he said.
Browsers like Opera, Mozilla Firefox and Apple's Safari are standards-based and therefore render standards-based Web pages properly; Microsoft's Internet Explorer 7 (IE7) isn't standards-based, he said. "Internet Explorer has a ways to go," he said, noting that the upcoming IE8 apparently will be standards-based by default.
The articles are being written by a range of notable Web developers and experts, including Christian Heilmann and Mark Norman "Norm" Francis of Yahoo, Peter-Paul Koch of quirksmode.org, Jonathan Lane, Linda Goin, Paul Haine, Roger Johansson of 456bereastreet, and Jen Hanen, according to Opera. "We hope the community gets behind this and they see the value in it and they help us promote it."
In an interview, Mills said he created the course to make it easier for Web developers to get the skills they need for standards-based Web design. There are other such sites available, he said, including W3schools.com, but "none of them really cover the whole story of what you need."
Rob Enderle, principal analyst with the Enderle Group, said the project is timely, but noted that Opera isn't one of the major players in the Web browser marketplace. "I think it's a good idea, but for a small player, and Opera's a small player, it's hard to drive a change like this," Enderle said. "Opera's advantage has always been that they keep the product simple and it's fast."
The move by Microsoft to make the upcoming IE8 browser standards-based "should help" drive the effort toward standardization, he said.
The Opera browser is free for download and use but has only a 0.55 per cent share of the global browser usage market. That compares to 83.27 per cent for IE, 13.76 per cent for Firefox and 2.18 per cent for Safari, according to recent statistics from Web analytics vendor OneStat.com.
Todd R. Weiss
2014-23/1194/en_head.json.gz/14138 | Internet infrastructure groups move away from US g...
Internet infrastructure groups move away from US gov't over spying
ICANN and other groups call for an accelerated globalization of Internet domain name functions after NSA surveillance leaks
Grant Gross (IDG News Service) on 16 October, 2013 16:34
After recent revelations about the U.S. National Security Agency's widespread surveillance of Internet communications, the coordination of the Internet's technical infrastructure should move away from U.S. government oversight, said 10 groups involved in the Internet's technical governance.
The groups -- including the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Society, the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) -- said Internet groups should accelerate the "globalization" of the Internet domain name functions performed by ICANN and traditionally overseen by the U.S. government. Internet governance should move toward "an environment in which all stakeholders, including all governments, participate on an equal footing," the groups said in a joint statement released this month.
During a meeting this month in Uruguay, the 10 groups "expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance," they said in the statement.
The groups signing the statement didn't go into further detail. A spokesman for ICANN declined comment, referring to the statement, and representatives of the Internet Society and W3C didn't immediately respond to a request for comments.
But on Wednesday, Chris Disspain, CEO of the .au Domain Administration, repeated concerns about U.S. surveillance of the Internet and the NSA's Prism program that collects Internet communications worldwide. "New battle lines" are forming over who controls the Internet, Disspain said during a speech at the Australian Internet Governance Forum in Melbourne.
"Controversy over the extent of the NSA's PRISM program, the very concept of cyber-surveillance and the reactions of stakeholders ... are just the most recent developments highlighting the complexity and reach of these issues," he said, according to a transcript of the speech. "The big picture consequence is that the Internet's informal support frameworks -- those built on a bedrock of multistakeholder cooperation and trust -- have potentially been significantly weakened."
The new concerns about NSA surveillance, with leaks published since June, come after years of efforts by China, Russia and other countries to limit the influence of the U.S. government on Internet governance. Since former NSA contractor Edward Snowden leaked information about the agency's surveillance activities, the government of Brazil has questioned whether outside Internet traffic should route through the U.S.
The NSA revelations have added to a "perfect storm" that could hurt Internet users, said Steve DelBianco, executive director of NetChoice, a U.S. ecommerce trade group. Brazil, Russia and China have long called for ICANN to leave the U.S. and become affiliated with the United Nations, while ICANN leaders have wanted to "reduce their reliance on the U.S.," he said in an email.
"This storm could cause some serious damage," DelBianco added. "Considering the censorship and suppression that happens around the world, moving ICANN out of US and into the hands of foreign governments will likely reduce privacy and free expression on the Internet."
Grant Gross covers technology and telecom policy in the U.S. government for The IDG News Service. Follow Grant on Twitter at GrantGross. Grant's email address is [email protected].
A free public Wi-Fi access point
Wardriving is the act of searching for Wi-Fi wireless networks by a person in a moving vehicle, using a portable computer, smartphone or personal digital assistant (PDA).
Software for wardriving is freely available on the Internet, notably NetStumbler, InSSIDer, Vistumbler or Ekahau Heat Mapper[1] for Windows; Kismet or SWScanner for Linux, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, and Solaris; and KisMac for Macintosh. There are also homebrew wardriving applications for handheld game consoles that support Wi-fi, such as sniff jazzbox/wardive for the Nintendo DS/Android, Road Dog for the Sony PSP, WiFi-Where for the iPhone, G-MoN, Wardrive,[2] Wigle Wifi for Android, and WlanPollution[3] for Symbian NokiaS60 devices. There also exists a mode within Metal Gear Solid: Portable Ops for the Sony PSP (wherein the player is able to find new comrades by searching for wireless access points) which can be used to wardrive. Treasure World for the DS is a commercial game in which gameplay wholly revolves around wardriving.
Wardriving originated from wardialing, a method popularized by a character played by Matthew Broderick in the film WarGames, and named after that film. War dialing consists of dialing every phone number in a specific sequence in search of modems.[4]
Warbiking is similar to wardriving, but is done from a moving bicycle or motorcycle. This practice is sometimes facilitated by mounting a Wi-Fi enabled device on the vehicle.
Warwalking, or warjogging, is similar to wardriving, but is done on foot rather than from a moving vehicle. The disadvantages of this method are slower speed of travel (resulting in fewer and more infrequently discovered networks) and the absence of a convenient computing environment. Consequently, handheld devices such as pocket computers, which can perform such tasks while users are walking or standing, have dominated this practice. Technology advances and developments in the early 2000s expanded the extent of this practice. Advances include computers with integrated Wi-Fi, rather than CompactFlash (CF) or PC Card (PCMCIA) add-in cards in computers such as Dell Axim, Compaq iPAQ and Toshiba pocket computers starting in 2002. More recently, the active Nintendo DS and Sony PSP enthusiast communities gained Wi-Fi abilities on these devices. Further, many newer smartphones integrate Wi-Fi and Global Positioning System (GPS).
Warrailing, or Wartraining, is similar to wardriving, but is done on a train/tram/other rail-based vehicle rather than from a slower more controllable vehicle. The disadvantages of this method are higher speed of travel (resulting in fewer and more infrequently discovered networks), and often limited routes.
Warkitting is a combination of wardriving and rootkitting.[5] In a warkitting attack, a hacker replaces the firmware of an attacked router. This allows them to control all traffic for the victim, and could even permit them to disable SSL by replacing HTML content as it is being transmitted.
posted by Thom Holwerda on Sun 18th Nov 2007 15:46 UTC This is the sixth article in a series on common usability and graphical user interface related terms [part I | part II | part III | part IV | part V]. On the internet, and especially in forum discussions like we all have here on OSNews, it is almost certain that in any given discussion, someone will most likely bring up usability and GUI related terms - things like spatial memory, widgets, consistency, Fitts' Law, and more. The aim of this series is to explain these terms, learn something about their origins, and finally rate their importance in the field of usability and (graphical) user interface design. In part VI, we focus on the dock.
Even though many people will associate the dock firstly with Mac OS X (or, if you are a real geek, with NeXTSTEP), the concept of the dock is actually much older than that. In this installment of our usability terms series, I will detail the origins of the dock, from its first appearance all the way up to its contemporary incarnations; I will explain some of the criticisms modern-day docks are receiving, finishing it off with the usual conclusion.
Origins of the dock
As I already mentioned, many people assume that Mac OS X and its ancestor, NeXTSTEP, are the ones that first presented the idea of what we now know as a "dock". While these two certainly played a major (if not the only) role in the popularisation of the dock concept, the first appearance of what we would call a dock was made somewhere else completely - far away from Redwood City (NeXT) and Cupertino (Apple). It all started in a small shed in Cambridge, England.
Well, I am not sure if it actually started in a shed, but that is generally where cool and original stuff in England comes from (British independent car manufacturers, people!). Anyway, I am talking about Arthur, the direct precursor to RISC OS (so much even that the first actual RISC OS release had the version number "2.0"). Arthur, whose graphical user interface always reminds me of the first versions of the Amiga OS (the 'technicolour' and pixel use), was released in 1987, for the early Archimedes ARM-based machines from Acorn (the A300 and A400 series). It was actually quite a crude operating system, implemented quite quickly because it was only meant as a placeholder until the much more advanced version 2.0 (RISC OS 2.0) was ready (two years later).
That thing at the bottom of the screen is the Iconbar, the first appearance of a dock in the world of computing. The left side of the Iconbar is reserved for storage icons, and on this particular screenshot you can see a floppy disk; if you inserted a new drive into the Archimedes, the Iconbar would update itself automatically. Clicking on a drive icon would show a window with the drive's contents. The right side of the dock is reserved for applications and settings panels - here, you see the palette icon (which is used to control the interface colours), a notepad launcher, a diary launcher, the clock icon, a calculator, and the exit button.
Even though Wikipedia can be a good starting point for various computing related matters, the article entry on "Dock (computing)" is a bit, well, complete and utter rubbish; it claims that the dock in NeXTSTEP, released in 1989, was the first appearance of the dock concept (so not the Iconbar in Arthur). Further, Wikipedia claims that "a similar feature [to the NeXTSTEP/OS X dock], called the Icon Bar, has been a fundamental part of the RISC OS operating system and its predecessor Arthur since its inception, beginning in 1987, which pre-dated the NeXTSTEP dock (released in 1989). However, upon further examination the differences are quite noticeable. The Icon Bar holds icons which represent mounted their own context-sensitive menus and support drag and drop behaviour. Also, the Mac OS X Dock will scale down accordingly to accommodate expansion, whereas the Icon Bar will simply scroll. Lastly, the Icon Bar itself has a fixed size and position, which is across the bottom of the screen."
Those are minor differences of course - not differences that set the NeXTSTEP dock that much apart from the Iconbar. It is obvious to anyone that the first appearance of the dock concept was the Iconbar in Arthur. Now, this whole dock thing was of course another example of similar people coming up with similar solutions to similar problems in a similar timespan (I need a term for that) - but the fact remains that the first public appearance of the dock was the Iconbar in Arthur. Credit where it is due, please*.
* I do not edit Wikipedia articles. I do not think that "journalists" like myself should do that.
So, the Iconbar was the first dock - but the dock has changed a lot since then. Let me walk you through the various different docks since the concept was introduced. Firstly, NeXTSTEP 1.0 was released on September 18th, 1989, and it included the ancestor of Mac OS X's dock, positioned in the top-right corner of the screen. It introduced some new elements into the dock mix; applications that were not running showed an ellipsis in the bottom-left corner - contrary to what we see in docks today where usually applications that are running receive a marker. The dock in NeXTSTEP had its limitations in that it did not automatically resize when full, so you had to remove icons from the dock, or you had to put them in shelves. The NeXT dock remained fairly unchanged over its years (until Mac OS X, of course). The below image of the NeXTSTEP dock has been rotated 90 degrees for formatting issues. Clicking it will give you the proper orientation and size.
Before NeXTSTEP 1.0 was released, Acorn updated its Arthur to RISC OS 2.0 (April 1989), which included the Iconbar we already knew, but in addition, it had context sensitive menus for the various icons in the Iconbar. The colour scheme was a bit less unnerving too. In future versions of the RISC OS, the Iconbar remained fairly similar, but of course did get visual treatments. See the below shots of RISC OS 2.0 and RISC OS 4.
Other operating systems also received a dock, such as Amiga OS 3.1, but the one I want to highlight here is the dock in CDE - the Common Desktop Environment. The dock in CDE (my favourite desktop environment of all times - despite its looks) was quite the functional beast. It had drawers that opened upwards (a different play on context menus), and in the middle, you had a big workplace switcher. The dock was fully configurable, and was quite easy to use. Keep the CDE screenshot below in mind, as I will dedicate an entire Usability Terms article solely to CDE running on Solaris 9. The CDE dock evolved onwards through Xfce, also seen below.
All that legal mumbojumbo your mom warned you about.
Binary Licensing
Camino 2.0 and Camino 2.1 are distributed under the terms of the Mozilla Public License. Know your rights.
Camino versions 1.0 - 1.6 are distributed under an End-User License Agreement (EULA) that must be accepted before first use. The license can be found by following this link.
Source Code Licensing
Camino’s source code, as with all mozilla.org projects, is distributed under the MPL/LGPL/GPL Tri-License.
Trademarks & Logos
“Camino®” is a registered trademark of the Mozilla Foundation and is used with permission. All rights are reserved. Other product names may be trademarks of their respective companies.
The Camino logo is a registered trademark of the Mozilla Foundation and is used with permission.
More information about Mozilla trademarks is available on their website.
Graphics Licensing
Graphics within Camino and on the Camino website are currently licensed from their authors. Work is underway to license all graphics in Camino, excluding the Camino logo, under the Tri-License used elsewhere.
Website Licensing
All content on Camino’s website is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license. The look and feel of Camino’s website is licensed under the same license.
Graphics used on the website are covered by the policy above.
Camino Privacy Policy
We’re working hard to protect your privacy while delivering products and services that bring you the performance and protection you desire in your personal computing. The Camino Privacy Policy describes how data from users of the Camino browser is collected and used.
Copyright © 1998-2013 The Camino Project
Hi, I'm Henning, and I'm an iPhone developer.
Archive for November, 2008
Contract Expires Early
Two months ago I signed a six month contract to do iPhone and Mac development. My first project would be an iPhone one, and my second would be a Mac project that they were pursuing. So the iPhone project is now coming to a close, and yesterday I got called in to the HR department. I was told that other iPhone contracts they were pursuing didn't materialize, and that they're giving me two weeks' notice.
There was only one other permanent person working on this project (an employee, not a contractor), and he had no Mac/iPhone/Cocoa/Xcode experience whatsoever. I was the prime developer in everything but name. I did almost all the GUI for the whole project, and much of the guts as well. They are quite pleased with my work. So to be dropped this easily is a bit unnerving.
But they said that they’re pursuing other opportunities, and that if one of them gets signed, that I’d be working on a Blackberry project after the iPhone one is done. That’s all very nice and good, and I have nothing against the Blackberry, but I didn’t sign up to do Blackberry work. I love working on the iPhone! While I’ve learned a lot that about iPhone development during my time here, there’s still a lot more to learn. I want to continue expanding my skillset on the iPhone and want to become an iPhone domain expert. That said, learning a new platform like the Blackberry wouldn’t be that bad.
But most of all, I have to wonder why I bothered signing a six month contract in the first place. What good is it to sign up for six months, only to be let go halfway through the term? I’m relatively new to the world of contracting, this being my second contract. So far it’s been quite a turbulent ride, with more downs than ups. I guess this is all part of the learning process.
So anyway, now I’m now looking for work iPhone or Mac development. My resume is here.
26Nov08 No Comments
How Much is an iPhone Developer Worth? The author of this article at O’Reilly says that in his experience, good iPhone developers are making $125/hour doing contract work, sometimes more. And that because of how lucrative selling apps is, many contractors are forgoing contract work altogether in favour of creating and selling their own iPhone apps.
He rightly points out that in this market, ideas are worthless. I’ve seen this in software development before – an idea, all by itself, is worth nothing, and it’s the execution of the idea that matters. The author of this article says: “I am someone who is highly motivated by ideas. So, it pains me to say that the value of an iPhone application idea right now is pretty much zero. A great idea isn’t worth anything under these conditions. There is no shortage of great iPhone ideas, just a shortage of talent to bring these ideas to market.” For someone like me, who doesn’t seem to have a creative bone in my body (kidding!), I think that ideas are worth slightly more than that. But not much.
Which brings me to my point. I see lots of people offer to work with a developer and split the profits 50/50. In this market where a developer’s time is so precious, I hardly think that this would be any motivation for a developer at all. The idea person gets half the profit and the person that does all the work gets the other half? Hardly seems fair. Especially given the fact that the split isn’t actually 50/50. It’s 30/35/35. That is, Apple gets 30%, the “idea person” gets 35%, and the person who creates the app gets 35%. That’s even more unbalanced.
As an experienced developer who’s now doing iPhone development, I don’t see any iPhone devs making $125/hour. Maybe I just don’t know where to look? I’ve been looking for good telecommuting iPhone jobs, and they’re hard to find. I’m starting to be open to finding someone who is willing to offer me a true 50/50 split (that is, 30/20/50). But I still think full time/contract work is what I want.
24Nov08 3 Comments
© 2014 Chewy Apps | 计算机 |
Srini Devadas MIT
A Search for an Efficient Reconfigurable Computing Substrate
Computing substrates such as multicore processor chips or Field Programmable Gate Arrays (FPGAs) share the characteristic of having two-dimensional arrays of processing elements interconnected by a routing fabric. At one end of the spectrum, FPGAs have a computing element that is a single-output programmable logic function and a statically-configurable network of wires. At the other end, the computing element in a multicore is a complex 32-bit processor, and processors are interconnected using a packet-switched network. We are designing a reconfigurable substrate that shares characteristics of both FPGAs and multicores. Our substrate is configured to run one application at a time, as with FPGAs. The computing element is a processor, and processors are connected using an interconnection network with virtual channel routers that use table-based routing. Bandwidth-sensitive oblivious routing methods that statically allocate virtual channels to application flows utilize the network efficiently. To accommodate bursty flows, the network contains adaptive bidirectional links that increase bandwidth in one direction at the expense of another. We are in the process of building a compiler that compiles applications onto this architecture so as to maximize average throughput of the
applications. Our plan is to use the compiler to refine the architecture and then to build a reconfigurable processor chip.
Srini Devadas is a Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), and has been on the faculty of MIT since 1988. He currently serves as the Associate Head of Computer Science. Devadas has worked in the areas of Computer-Aided Design, testing, formal verification, compilers for embedded processors, computer architecture, computer security, and computational biology and has co-authored numerous papers and books in these areas. Devadas was elected a Fellow of the IEEE in 1998.
Mike Kistler
Petascale Computing with Accelerators
A trend is developing in high performance computing in which commodity processors are coupled to various types of computational accelerators. Such systems are commonly called hybrid systems. In this talk, I will describe our experience developing an implementation of the Linpack benchmark for a petascale hybrid system, the LANL Roadrunner cluster built by IBM for Los Alamos National Laboratory. This system combines traditional x86-64 host processors with IBM PowerXCell� 8i accelerator processors. The implementation
of Linpack we developed was the first to achieve a performance result in excess
of 1.0 PFLOPS, and made Roadrunner the #1 system on the Top500 list in June 2008. I will describe the design and implementation of hybrid Linpack, including the special optimizations we developed for this hybrid architecture.
I will show actual results for single node and multi-node executions. From this work, we conclude that it is possible to achieve high performance for certain applications on hybrid architectures when careful attention is given to efficient use of memory bandwidth, scheduling of data movement between the host and accelerator memories, and proper distribution of work between the host and accelerator processors. Biography:
Mike Kistler is a Senior Technical Staff Member in the IBM Austin Research Laboratory. He received his BA in Math and Computer Science from Susquehanna University in 1982, an MS in Computer Science from Syracuse University in 1990,
and MBA from Stern School of Business (NYU) in 1991. He joined IBM in 1982 and has held technical and management positions in MVS, OS/2, and Lotus Notes development. He joined the IBM Austin Research Laboratory in May 2000 and is currently working on design and performance analysis for IBM's PowerPC and Cell/B.E. processors and systems. His research interests are parallel and cluster computing, fault tolerance, and full system simulation of high-
performance computing systems. Lieven Eeckhout
Per-Thread Cycle Accounting in SMT Processors
Simultaneous Multi-threading (SMT) processors run multiple hardware threads
simultaneously on a single processor core. While this improves hardware utilization substantially, co-executing threads affect each other's performance
in often unpredictable ways. System software however is unaware of these performance interactions at the micro-architecture level, which may lead to unfair scheduling at the system level.
Starting from a mechanistic performance model, we derive a cycle accounting architecture for Simultaneous Multithreading (SMT) processors that estimates the execution times for each of the threads had they been executed alone, while they are running simultaneously on the SMT processor. This is done by accounting each cycle to either a base, miss event or waiting cycle component. Single-threaded alone execution time is then estimated as the sum of the base and miss event components; the waiting cycle component represents the lost cycle count due to SMT execution. The cycle accounting architecture incurs
reasonable hardware cost (around 1KB of storage) and estimates single-threaded
performance accurately with average prediction errors around 7.2% for two-
program workloads and 11.7% for four-program workloads.
The cycle accounting architecture has several important applications to system software and its interaction with SMT hardware. For one, the estimated single-
thread alone execution time provides an accurate picture to system software of the actually consumed processor cycles per thread. The alone execution time instead of the total execution time (timeslice) may make system software scheduling policies more effective. Second, a new class of thread-progress aware SMT fetch policies based on per-thread progress indicators enable system software level priorities to be enforced at the hardware level. Third, per-
thread cycle accounting enables substantially more effective symbiotic job scheduling.
Lieven Eeckhout is an assistant professor at Ghent University, Belgium, and is a postdoctoral fellow with the Fund for Scientific Research -- Flanders (FWO). He received his PhD degree in computer science and engineering from Ghent University in 2002. His main research interest include computer architecture, virtual machines, performance modeling and analysis, simulation methodology, and workload characterization. He has published papers in top conferences such as ISCA, ASPLOS, HPCA, OOPSLA, PACT, CGO, DAC and DATE; he has served on multiple program committees including ISCA, PLDI, HPCA and IEEE Micro Top Picks; and he is the program chair for ISPASS 2009. His work on hardware
performance counter architectures was selected by IEEE Micro Top Picks from 2006 Computer Architecture Conferences as one of the "most significant research
publications in computer architecture based on novelty and industry relevance".
He graduated 5 PhD students, and currently supervises one postdoctoral researcher, 4 PhD students and 3 MSc students. | 计算机 |
When it was proclaimed in an article recently that “Debian is the most influential Linux distribution ever,” it was a rare geek who didn’t sit up and take notice. Sure enough, that’s just what Datamation’s Bruce Byfield asserted in a recent article, adding that “not everyone uses Debian, but, both alone and second hand through […]
March 14th, 2011 by cj2003
Debian GNU/Linux 6.0 (squeeze)
I have been a bit slack in not writing about the release of Debian GNU/Linux 6.0, which was made over a month ago now. A new Debian stable release is always a big deal, not least because it doesn’t happen very often, and it doesn’t happen on a predictable, regular schedule.
The Spiral tribe – marking the end of Debian’s unpredictable release schedule
The launch of Debian 6.0 Squeeze may mark the end of Debian’s unpredictable release schedule, but it’s still the domain of FOSS purists, says Linux User & Develop columnist, Richard Hillesley…
More blogging systems – with Debian as a guide
Blosxom, PyBlosxom, Nanoblogger — hell, even WordPress and Movable Type are available as Debian packages. I wondered, was I missing other blogging platforms, both flat-file and database-driven?
March 9th, 2011 by cj2003
Counting derivatives
With this article, Bruce has done me quite a favor by harvesting DistroWatch to refresh the figures about the number of derivatives.
OUYA Android game console now up for pre-order on Amazon
OUYA, the Android-based home game console that took Kickstarter by storm, is now available for pre-order on Amazon for those who missed out on the campaign. The cost is $99 for the unit, which includes the OUYA console and one controller. The draw of OUYA is that anyone can develop and publish games for the console, and there's no huge financial barrier to entry for devs. This could mean that there will be just a bunch of random stuff, but it also means that you'll have more developers working on quality games--and for the first time on a home console, you'll likely see games as inexpensive as the ones you play on your iOS and other Android devices. OUYA is powered by a quad-core NVIDIA Tegra 3 processor and 1 GB RAM with 8 GB of storage and 1080p output. Pre-order it now for $99 and it'll deliver in June, and don't forget to grab an extra controller.
Read More | OUYA pre-order
Ouya Android-based indie game console takes Kickstarter by storm
Are you bored and tired of the big players in the video game space failing to innovate in truly meaningful ways? Then you'll wanna meet Ouya, the Android-powered game console that will cost just $99 with a controller that connects to your television set just like your Wii U, Xbox 360, and PS3 does. The difference? Anyone can develop games for the Ouya console, and there's no huge financial barrier to entry. That means more indie quality indie games, likely much less cheaper than you'd find on other home game consoles. The product is designed by Yves Behar and team, the same folks who dreamed up the designs for the One Laptop Per Child OLPC computer and Jawbone Jambox. On the inside it's powered by Android 4.0 Ice Cream Sandwich with a quad-core Tegra 3 processor, 1 GB RAM, and 8 GB of built-in storage. It also packs 1080p output over HDMI, Wi-Fi, and Bluetooth connectivity.
Interested? You can head over to the Ouya Kickstarter page to pre-order one now. This could turn out to be a very big deal. Check out a video explaining the project after the break.
Saturday, June 03 2006 @ 04:34 PM EDT
Groklaw member elhaard sends us a bit more detail about the Danish resolution that passed yesterday. We put the story in News Picks. The motion is called "B 103" and all material about it (even Parliament transcripts) can be found at the Parliament's home page.
It's only in Danish, though. So he helps us out again, translating the last publicly shown version of the resolution. If you understand Danish, here is an avi file of the final discussions in Parliament, and an Ogg for audio only. Both are from this site, which has a bit more detail in English.First, elhaard tells us the following:
In the list of materials, there is notation about a letter that Microsoft sent to the Parliament committee for Science and Technology. In that, the President of Microsoft, Denmark, J�rgen Bardenfleth, asks for permission to meet with the committee, and he also tells them that Microsoft's Office Open XML is an open standard that will meet the demands set out in the motion.
Shortly after that, the committee held an open council, which means that anyone could participate. This probably means that Minister never got the chance to meet with them one-on-one privately. Also, a Parliament member officially asked the Minister of Science about his stand on the letter, and the minister answered that Open XML will be examined for openness, just like any other candidate. Here is the translation of the resolution for us into English. It's not an official translation, of course, but at least it will give us an idea:
B 103 (as proposed): Motion for Parliament Resolution Regarding Use of Open Standards for Software with Public Authorities.
Proposed March 30, 2006 by Morten Helveg Petersen (Radikale Venstre (RV), Marianne Jelved (RV), Naser Khader (RV), Martin Lidegaard (RV) and Margrethe Vestager (RV)
Motion for Parliament Resolution
Regarding Use of Open Standards for Software with Public Authorities
Parliament directs the government to ensure that the use of information technology, including software, within public authorities is based upon open standards.
No later than January 1st, 2008, the government should introduce and maintain a set of open standards that can serve as inspiration for other public authorities. Hereafter, open standards should be a part of the basis for public authorities' development and purchase of IT software, with the aim of furthering competition.
The government should ensure that all digital information and data that public authorities exchange with citizens, corporations and institutions are available in formats based on open standards.
This motion for a resolution is partly a re-proposal of motion no. B 64 of the Parliament year 2004-05, first session (see Folketingstidende 2004-05, 1. samling, forhandlingerne page 3521 and appendix A pages 4786 and 4788).
Procurement of information technology by the public sector should be based on the Government service's assessment of how working and service is done most efficiently, properly and economically.
It is, however, a political task primarily to ensure that there is a determined strategy for public authorities' procurement and use of software, so that it generally is to the benefit for users, citizens and business.
Secondly, it is a political task to ensure that the use of information technology by public authorities ensures the democratic rights of all citizens to be able to freely receive digital information from public authorities and to be able to freely send digital information to them. These political goals can only be met if the public sector demands that software, that is used in the public sector and for communication with the public sector, is based on open standards.
Thirdly, it is a political task to ensure the settings for open competition.
Fourthly, an insistence on open standards is crucial in these years, when municipal and county authorities unite their IT systems as a consequence of the municipal reform. [By January 1st, 2007, the number of municipal authorities in Denmark will be reduced to approximately a third, as cities with low population are joined together to form larger municipal units, or "communes". At the same time, counties are joined to form larger "regions". - translator's note] A part of this must be that all public home pages, intra-nets and IT based tools should be accessable by persons with handicaps, according to the guidelines that are recommended by "Kompetencecenteret it for alle", a part of IT- og Telestyrelsen [The National Administration for IT and Telecom - translator's note]
Fifthly, there are important commercial-political perspectives associated with the introduction of open standards in public administration.
Sixthly, there will presumably be considerable long-term economical advantages in introducing open standards in public administration
Government IT policy should ensure the public sector the best possible software at the lowest possible price. This includes such parameters as functionality, stability and security. Government IT policy should contribute to a competetive market for software in Denmark.
Open standards means that the standard is
- well documented with its full specification publically available,
- freely implementable without economically, politically or legal limitations on implementation and use, and - standardized and maintained in an open forum (a so-called standards organisation) through an open process.
In the coming years, a substantial growth in the public sector's use of digital administration is expected, and thereby a larger general use of IT and the Internet in both the public sector and between the public sector and the private sector. In order to achieve the expected gains of digital administration, there must be openness regarding the choice of IT, and openness in communication, data exchange and electronic documents, as well as systems that can speak to each other, so that citizens, corporations and public authorities can communicate. Thus, openness is a fundemental demand as well in relation to enhancement of competition as in entertaining the democratic aspect of information technology. Therefore, the government should no later than January 1st, 2008, introduce and maintain a set of open standards.
As a starting point, the public sector's procurement and use of software must be based on open standards. Only in the event that no usable system based on open standards is available should the public sector procure and use software based on closed formats. In such events a separate reasoning must be cited in such a form that the public administration can assess whether demands for the introduction of open standards can be raised at a later time. This motion does not infer any demand that older systems are converted to open standards. But the government is directed to maintain mandatory interchange formats in continuation of the Ministry of Science's work on the so-called reference profiles, and to define the rules that will govern arguments for deviation from the demand for open standards.
As a starting point, everybody shall be able to communicate with the public sector without demands for choice of one or a few vendors' software. That is not always the case today, where the public sector's use of software from primarily one vendor can mean that persons that use other types of software from other vendors can experience difficulties communicating with the public sector without being forced to use a certain type of software. To an even higher degree, this is the case for ministries, for which reason it should be ensured that in the future, there is at least one open format in the communication with citizens.
Today, the public sector's choice of software based on closed standards supports a market without or with very small competition. A demand for openness - and thus not a demand for exclusion or inclusion of separate vendors or software types - will further encourage competition to the benefit of the entire society.
Public authorities must be better at exchanging data and information. Agreement on open standards can result in that industry and authorities can "stand on each others' shoulders" instead of the new and bigger communes and regions developing each their own systems, which cannot speak to each other. Furthermore, this strengthened cooperation can be used so that companies more easily will be able to develop new solutions for the public sector, so that open standards can bring considerable innovation opportunities for the Danish software industry.
Written proposal
Morten Helveg Petersen (RV):
As speaker for the movers, I respectfully propose:
Motion for Parliament Resolution Regarding Use of Open Standards for Software with Public Authorities
(Resolution motion no. B 103).
Additionally, I refer to the notes that accompany the motion, and recommend it to the kind reading by Parliament.
UPDATE: More just in from elhaard. First, here is the Microsoft letter [PDF], in Danish and more details:
The committee asked the Minister of Science about the letter. His answer is here and it says:
Question #10
Would the Minister please comment on the letter of May 17th, 2006, from Microsoft Denmark, in regards to UVT B 103 - appendix 2 ? [UVT B 103 is the motion for resolution, appendix 2 is the letter from Microsoft - elhaard]
Regarding the letter from Microsoft Denmark to the Commitee for Science and Technology of May 17th, 2006, Microsoft Denmark writes amongst other that "Office Open XML is an open standard which in all areas lives up to the conditions given in B 103".
Office Open XML is under approval in ECMA (a European standardization organization).
The Ministry of Science has started a re-evaluation of the recommendations regarding standards for word processor documents in the OIO catalogue [a list of approved open formats for use in public administration in Denmark. The list is not enforced yet - elhaard]. As a part of this, the openness of different standards for word processing documents, including Office Open XML, will be evaluated. | 计算机 |
With Linux matching Windows and Mac head-to-head in almost every field, indie developers are ensuring that gaming on Linux doesn't get left behind. We've covered various types of games that are available for Linux, from the best MMORPGs to the top action-packed First Person Shooters. While most of these games are free, there are a few paid games that have come out for Linux.Here's a look at the top 5 paid games that are making noise:MinecraftMinecraft is a new cross-platform indie game, which has recently gained a lot of popularity. It is a 3D sandbox game, where players must try and survive in a randomly generated world. In order to do this, they must build tools, construct buildings/shelters and harvest resources. If you're still curious, then do check out the best minecraft structures created by addicted players from around the world. Minecraft comes in two variants – Beta and Classic, both with single-player and multiplayer options. The Classic version (both single-player and multiplayer) is free. On the other hand, Minecraft Beta, which is still under heavy development, will retail at 20 Euros (that's about 28.5 USD) when finished. For the moment, the game can be pre-purchased and played as a beta for 14.95 Euros. Users who buy the beta version won't have to pay anything for the stable release once it comes out.In case you're still confused what the whole hype is about, then here's a nice video explaining the basics of the game: World of GooThis multiple award-winning game, developed by former EA employees has been one of the most popular games for the Linux platform. World of Goo is a physics-based puzzle game by 2D Boy, that works on Windows, Linux, Mac, Wii and even iOS. The game is about creating large structures using balls of goo. The main objective of the game is to get a requisite number of goo balls to a pipe representing the exit. In order to do so, the player must use the goo balls to construct bridges, towers, and other structures to overcome gravity and various terrain difficulties such as chasms, hills, spikes, or cliffs. The graphics, music, and the effects come together to provide a very Tim Burton-esque atmosphere to the game.The game consists of 5 chapters, each consisting of multiple levels. In all, there are about 48 levels, making the experience truly worthwhile. In case you've missed it, World of Goo was part of the Humble Indie Bundle 1 and 2. However, the game can still be purchased at $19.95 from the Ubuntu Software Center or from the official website.Amnesia Dark DescentWe've covered Amnesia in detail before. In this game, you play the role of Daniel, who awakens in a dark, godforsaken 19th century castle. Although he knows his name, he can hardly remember anything about his past. Daniel finds a letter, seemingly written by him and which tells him to kill someone. Now, Daniel must survive this spooky place using only his wits and skills (no knives, no guns!). Amnesia brings some amazing 3D effects along with spectacularly realistic settings making the game spookier than any Polanski movie. As of now, the game retails at as low as 10 USD. Before buying, you can also try out the demo version of the game HERE. One warning though, don't play this game with the lights turned down; it's really that scary!Vendetta OnlineVendetta Online is a science fiction MMORPG developed by Guild Software. Quoting the website “Vendetta Online is a 3D space combat MMORPG for Windows, Mac, Linux and Android. 
This MMO permits thousands of players to interact as the pilots of spaceships in a vast universe. Users may build their characters in any direction they desire, becoming rich captains of industry, military heroes, or outlaws. A fast-paced, realtime "twitch" style combat model gives intense action, coupled with the backdrop of RPG gameplay in a massive online galaxy. Three major player factions form a delicate balance of power, with several NPC sub-factions creating situations of economic struggle, political intrigue and conflict. The completely persistent universe and detailed storyline add to the depth of immersion, resulting in a unique online experience.” The game has been around since 2004, and since then, it has evolved a lot with developers claiming it as one of the most frequently updated games in the industry. Gamespot rated Vendetta as 'good', but there have been some criticisms about its limited content compared to its high subscription price. The game uses a subscription-based business model and costs about $9.99 per month to play the game. Subscribers get a discount on subscriptions for longer blocks of time bringing the monthly price down to $6.67 a month. A trial (no credit-card required) is also available for download on the official website.OsmosOsmos is a puzzle-based game developed by Canadian developer Hemisphere Games. The aim of the game is to propel yourself, a single-celled organism (Mote) into other smaller motes by absorbing them. In order to survive, the user's mote has to avoid colliding with larger motes. Motes can change course by expelling mass, which causes them to move away from the expelled mass (conservation of momentum). However, doing this also makes the mote shrink. In all there are 3 different zones of levels in Osmos, and the goal of each level is to absorb enough motes to become the largest mote on the level. With its calm, relaxing ambiance thanks to the award-winning soundtrack, Osmos creates a truly unique gaming experience.The game has received a great response so far. On Metacritic, it has a metascore of 80, based on 22 critics reviews. Apple selected Osmos as the iPad game of the year for 2010. Osmos retails at $10 for the PC, Mac and Windows versions. The game is available across Windows, Linux, Mac OS X and iOS.Why pay?In the free world of Linux and Open-source, many people argue that if everything is 'free', why should I pay for a game? Of course, the Linux world is free but free doesn't mean free as in 'free beer', the word free implies freedom. Most of the popular games for Windows, now come with SecuROM and other such DRM restrictions that restrict one's fair-use rights. This means that the user will only be able to use the software on one machine, sometimes requiring constant activations. Games and other software developed in the FOSS world don't have such absurd restrictions. Users are free to use and distribute the game, and yes there's none of that activation or cd-key nonsense. While these games respect the user's freedom, keeping them free (as in free beer) is not a viable option because developers have to devote a lot of time and money in making these games. So, shelling out a few dollars for these games will help the indie developers pay their rent as well as come up with many new games for this emerging gaming platform.
linux games, | 计算机 |
Technology Primer
Contents: Introduction Languages and Machines Compilers and Higher Level Languages Modules: Software Decomposition Combining Physical Modules: Static and Dynamic Linking Binding Conclusion
References Related Articles:
Derivative Works Introduction In this article, we describe and define a number of concepts underlying modern computer systems technology. Understanding these concepts is central to resolving a number of legal issues that arise in the context of open, hybrid, and proprietary models of software development and distribution.
Languages and Machines Machine code is a sequence of instructions expressed in the native representation of a particular computing machine. By native, we mean that the target machine can consume and execute the instructions contained in a particular sequence of machine code with a minimum of effort. A sequence of machine instructions is sometimes called a program or executable. Machines are generally defined by reference to an abstraction, called an instruction set architecture, or ISA. The ISA is an interface to the machine, meaning that it defines the format (or syntax) and meaning (or semantics) of the set of instructions that a given machine can execute. The syntax and semantics that defines the set of allowable machine codes and their meanings is sometimes called machine language.
The decision as to how best implement a given ISA is to a large degree an arbitrary one. Historically, ISAs have been implemented in hardware, meaning that the logic required to consume, decode, and execute machine codes that comply with a given ISA is implemented by means of a fixed circuit. However, this mode of implementation is by no means required. A given ISA may also be implemented by means of a software program, and such an implementation is sometimes called a virtual machine. This software program will itself of course just be a sequence of machine codes that can be understood by another machine, that may or may not also be implemented in hardware or software. For instance, it is possible (and in fact common, nowadays) to implement a given machine that can execute programs written in machine language X by writing a program in machine language Y that runs on (is consumed and executed by) a machine that is implemented in hardware. The good thing about machine languages is that they are typically relatively simple in terms of syntax and semantics. The set of allowable machine instructions is generally small, and the allowable formats for expressing machine instructions are typically few. These limitations make it relatively easy to build (particularly in hardware) a machine that implements a given machine language. The bad thing about machine languages is the same as the good thing—that is, that they are very simple. This makes them inefficient modes of communication, at least from the perspective of someone who wishes to write programs that do interesting things. Intuitively, a language containing only 100 words will be much less expressive than one containing 100,000 allowable words, because it should take fewer words to express a given concept in the latter (large) language than in the former (small) language. Also, because machine languages are designed to be easily read and decoded by mechanical means, they are often represented by way of alphabets that are not natural for humans to read or write. The most common format used is binary—that is, all machine codes are represented only by strings of ones and zeros. The choice of alphabet is again an arbitrary decision. Historically, however, binary has ruled because it is relatively straightforward to build hardware circuits that function on high and low voltages which map nicely to ones and zeros.
Compilers and Higher Level Languages These drawbacks with machine code led computer scientists to write programs (initially, painstakingly in machine code) to translate programs written in higher-level programming languages into programs consisting only of machine code. These programs are called compilers. A compiler is just a program that takes as input a code sequence in one language and translates and outputs a corresponding code sequence in second language. That second language will typically be machine code, but it need not be. The second language is often called the source language and is typically of a much higher level than machine language. That is, its alphabet is comprised of symbols that are familiar to humans, such as alphanumeric characters; its built-in commands or instructions will be represented in words (e.g. "ADD") or symbols (e.g. "+") that have conventional meanings; and they provide for the straightforward expression of useful, commonly occurring programming idioms, such as repeated execution or functional decomposition.
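For example (an illustrative sketch, not the output of any particular compiler), a compiler takes a small function written in a high-level language such as C and lowers it to a handful of machine instructions; the assembly shown in the comment is a simplified, hypothetical rendering:

/* High-level source: the programmer writes in terms of named values
 * and familiar operators. */
int add(int a, int b)
{
    return a + b;
}

/* A compiler might lower this to machine instructions roughly like:
 *
 *   add:
 *       add  r0, r0, r1    ; sum the two arguments
 *       ret                ; return to the caller
 *
 * The binary encoding of those instructions is the machine code that the
 * hardware (or a virtual machine) actually consumes. */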
Modules: Software Decomposition There are two ways to think about program decomposition. First, all modern programming languages provide some mechanism for logical decomposition. The core principle behind logical decomposition is the separation of interface (sometimes also called specification) from implementation (sometimes also called representation). The interface to a module is everything and anything that a client of that module needs to know to use the module. The interface is a description of what the module can do. The implementation of a module, on the other hand, is the inner workings (generally data structures and instructions) that actually perform the work promised by the interface description. Consider, for example, a television. The interface consists of the switches, knobs, and plugs on the outside of the television. The implementation consists of the guts of the television. As users, we need only to understand the interface: how to plug it in, turn it on, and twiddle the knobs. We don't need—indeed, probably don't want—to know how the television works its magic. By decoupling the interface from the implementation, separate parties can work on separate parts of large system by knowing only about agreed-upon interfaces. Television set designers only need to know about the interface to the electrical system—the shape of the plug and the voltage provided—in order to design a television set that any consumer will be able to plug into their wall socket. They do not know or care about how the electricity is generated, where it comes from, and so on.
The most basic form of logical decomposition is procedural or functional decomposition. Every modern, high level programming language supports functional decomposition in some manner. A function is just an abstraction for a set of instructions to perform a common task, which has been given a convenient name by the programmer. For instance, a programmer may create (define) a function called area that knows how to compute the area of a rectangle. In defining this function, that programmer has in some sense added a new word to the vocabulary of the programming language. The programmer can later invoke the function simply by using that new word in their program. Other forms of logical decomposition include modules, abstract data types, class definitions, and packages. All modern, high level programming languages support one or more of these forms of logical decomposition. While the details of these forms is beyond the scope of the article, they are all methods for creating software components that are larger than single functions or procedures.
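The separation of interface from implementation can be sketched in C using the area example above (file names and contents are invented for illustration): the header advertises what the module promises, the source file contains how the promise is kept, and clients depend only on the former:

/* rect.h -- the interface: everything a client needs to know. */
#ifndef RECT_H
#define RECT_H

/* Computes the area of a width-by-height rectangle. */
int area(int width, int height);

#endif

/* rect.c -- the implementation: how the promise is kept.
 * Clients that include rect.h neither know nor care about this body. */
#include "rect.h"

int area(int width, int height)
{
    return width * height;
}

/* client.c -- a client invokes the new "word" area() through the interface. */
#include <stdio.h>
#include "rect.h"

int main(void)
{
    printf("%d\n", area(3, 4));   /* prints 12 */
    return 0;
}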
In addition to logical decomposition, modern programming languages allow for physical decomposition. The most common mechanism for performing this kind of decomposition is by breaking a single large program into a series of files. Each file will typically contain one or more logical program components. For example, a single file might contain a group of related functions that can then be called from within other source files. Some mechanism for program decomposition is central to the ability to build large programs. Without it, it is hard to imagine how two or more programmers could work on a single program at the same time.
Combining Physical Modules: Static and Dynamic Linking
Program decomposition, however, complicates the process of source to machine code translation. In a world where every program is contained in a single source file, the compiler knows everything it needs to know to translate the program from source to machine code. In a world where a program is broken down into two or more source files, the compiler will not know how to properly translate function invocations that reference functions that are defined in other source files. At a high level, a compiler needs to translate a function invocation into a "jump" to a different instruction address within the executable program. However, the compiler cannot know the machine address of a defined function unless it knows the layout of the entire executable, which it cannot know until all the source files have been compiled and merged.
The typical solution to this problem is simply to defer the resolution of function names to machine addresses to another program, known as a static linker. In this scheme, the compiler translates a given source file into a corresponding object file. An object file contains machine code, as well as some information about functions and other names that are defined by (or functions and other names that are referenced by) the given object file. A second program, called a linker, is then used to make a final resolution of names to numerical addresses. At a high level, the linker takes all of the object files, and for each object file stitches up unresolved references to names that are defined in other object files. After stitching up these names, it concatenates the various machine code sequences into one long machine code sequence, which is now fully executable by the target machine. If it encounters names that are not defined in any object file, it typically reports an error, because this means that the programmer of some object file has attempted to invoke a function or otherwise reference names that are not defined in any object file that comprises the executable.
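A small, hypothetical illustration of this process with a Unix-style toolchain: each source file is compiled into an object file that records which names it defines and which it still needs, and the static linker stitches the unresolved references together into one executable:

/* main.c -- references a function it does not define. */
int area(int width, int height);   /* unresolved name: filled in by the linker */

int main(void)
{
    return area(3, 4);
}

/* rect.c -- defines the name that main.c references. */
int area(int width, int height)
{
    return width * height;
}

/* A typical build:
 *
 *   cc -c main.c        ->  main.o   (machine code; "area" marked undefined)
 *   cc -c rect.c        ->  rect.o   (machine code; "area" marked defined)
 *   cc main.o rect.o    ->  a.out    (linker resolves "area" to an address)
 *
 * Omitting rect.o leaves "area" unresolved, and the linker reports an
 * error such as "undefined reference to `area'". */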
Static linking works well until we consider that many programs share a considerable amount of standard code. This commonly used, shared code is typically placed into libraries, which are seldom changed, well-known repositories for useful functions that are used by a wide variety of programs. Examples include libraries for performing input and output (from keyboards and other input devices and to the screen or other output devices), mathematical functions (such as sine, cosine, logarithms, etc), and so on. One way to think about a library is as a widely-advertised, well-known object file.
Consider what happens when a number of programs are each linked against a given library. First, this approach is wasteful in terms of disk space, because each program contains a section of identical code. While this may not seem significant in today's world of large disks, it does begin to pose a problem on systems with limited storage, where hundreds or even thousands of programs are all statically linked with a given library or libraries. Second, suppose that the vendor wishes to fix a bug in a given library. The vendor can redistribute the library, but then the end-users would have to re-link that library to each of the hundreds or thousands of programs that may use it. Alternately, the vendor can relink the programs for the user, but they may not be in a position to do so if some of the programs are licensed from third parties. And even if the vendor were willing to relink, they would now need to re-distribute and install a large number of programs, rather than just a single library.
The solution to these problems is to employ dynamic linking. In a world of dynamic linking, programs are not linked until run-time—that is, external name references are left unresolved until the program is loaded, or even until a given function is referenced for the first time. Dynamically linked programs incur some cost at run time—either due to longer startup times because the linking is done when the program is loaded, or as a small cost each time an unresolved name is referenced for the first time. The benefit of dynamic linking, of course, is that multiple programs can now truly share the same library—each system will only have a single copy of each dynamically linked library shared by a plurality of running programs. For this reason, dynamically linked libraries are sometimes also called shared libraries (in the *N*X world, for instance). Another benefit is that upgrading a dynamically linked library (e.g. due to a bug-fix) is often as simple as just replacing the library on disk, and restarting the system. Because programs are automatically linked each time they are run, no tedious static re-linking of programs is required.
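On POSIX systems the demand-driven flavour of dynamic linking can be seen directly through the dlopen/dlsym interface; the library and function names below are invented for illustration, and the same shared object could later be replaced on disk without relinking this program:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load a shared library at run time (the name is illustrative). */
    void *lib = dlopen("./librect.so", RTLD_LAZY);
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the name "area" to an address only now, at run time. */
    int (*area)(int, int) = (int (*)(int, int))dlsym(lib, "area");
    if (!area) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(lib);
        return 1;
    }

    printf("%d\n", area(3, 4));
    dlclose(lib);
    return 0;
}

/* Build sketch (Unix-style):
 *   cc -shared -fPIC -o librect.so rect.c
 *   cc -o client client.c -ldl                */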
Static and dynamic linking are instances of a more general concept known as binding. Binding can be thought of as the process of associating, in some way, names with values in a software system. Systems can be distinguished based on when they perform various kinds of binding. In the case of static linking, function names are bound to numerical function addresses more or less at compile time (or more precisely, after compile time, but before run-time). In the case of dynamic linking, binding doesn't take place until run-time.
Internet domain name resolution is another example of the binding concept. The Domain Name Service (DNS) allows textual, easy-to-remember domain names (like "yahoo.com") to be translated into numerical IP addresses at run-time. Deferring binding has the benefit of making systems more flexible at the cost of some run-time overhead. In the DNS example, imagine if domain names were translated at compile time. In that case, anytime someone wanted to re-map a domain name to a new IP address, all programs that referenced that name would have to be re-compiled. Moreover, a large class of networking programs, such as web-browsers, would be dependent upon relatively static associations between domain names and IP addresses, greatly limiting their utility.
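The run-time nature of this binding is easy to see with the standard resolver interface. This is a minimal POSIX sketch (the hard-coded default host and bare-bones error handling are assumptions for brevity): the translation from name to numerical address happens only when getaddrinfo() runs, so the mapping can change without recompiling the program.

```cpp
// dns_lookup.cpp -- bind a textual domain name to an IP address at run time.
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <cstdio>

int main(int argc, char** argv) {
    const char* host = (argc > 1) ? argv[1] : "yahoo.com";   // a name, not an address

    addrinfo hints{};                 // zero-initialized query options
    hints.ai_family   = AF_INET;      // IPv4 only, to keep the sketch small
    hints.ai_socktype = SOCK_STREAM;

    addrinfo* results = nullptr;
    if (getaddrinfo(host, nullptr, &hints, &results) != 0) {  // the late-binding step
        std::fprintf(stderr, "could not resolve %s\n", host);
        return 1;
    }

    char text[INET_ADDRSTRLEN];
    auto* addr = reinterpret_cast<sockaddr_in*>(results->ai_addr);
    inet_ntop(AF_INET, &addr->sin_addr, text, sizeof text);
    std::printf("%s currently maps to %s\n", host, text);

    freeaddrinfo(results);
    return 0;
}
```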
Object oriented programming languages are another example of systems that rely on late binding. One of the hallmark features of object oriented programming languages is subtype polymorphism. A programming language that is subtype polymorphic allows a programmer to write code that will function with data objects of a type that is not known until runtime. Object oriented programming languages do not depend on knowing the particular representation of objects at compile time. This means that programmers may write code that is highly abstract, in that it works with many different (concrete) types of objects that all share common interfaces. This flexibility and expressiveness, however, comes at the cost of deferring binding until runtime. Specifically, the process of looking up and determining the correct behavior for a given object of a particular type is deferred until runtime. This process is in many ways analogous to that of dynamic linking that occurs on a "demand basis" (as opposed to dynamic linking that takes place at load-time). It differs from demand-linking, however, because object oriented systems allow for the binding to be different every time a particular function is invoked—in other words, the particular behavior for a given object must be (in the worst case) re-determined every time that behavior is invoked (because it is not known if the current object is or will be the same as the subsequent object).
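One concrete way to picture this (C++ virtual dispatch is used here purely as an illustration; the same idea applies to method lookup in other object oriented languages) is that the call site below is compiled once against an abstract interface, and the body that actually executes is re-selected through each object's runtime type:

```cpp
#include <iostream>
#include <memory>
#include <vector>

// Callers are written against this interface only.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;   // which implementation runs is bound at run time
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265 * r * r; }
};

struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
};

int main() {
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));

    // This loop knows nothing about Circle or Square; the shape->area() call
    // is re-bound to the appropriate implementation on every iteration.
    for (const auto& shape : shapes)
        std::cout << shape->area() << '\n';
}
```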
Binding, modularity, and interfaces may seem like abstract computer science concepts, but as other articles on this website will demonstrate (see GPL's treatment of derivative works, for example), they are critical to correctly resolving legal issues that arise in this context.

References
The author would take the following three books onto a desert island with him:

Aho, Alfred V. & Ullman, Jeffrey D. (1992), The Foundations of Computer Science, Computer Science Press.
Abelson, Harold & Gerald Jay Sussman with Julie Sussman (1985), Structure and Interpretation of Computer Programs, The MIT Press.
Patterson, David A. & John L. Hennessy (1998), Computer Organization and Design: The Hardware/Software Interface, Morgan Kaufmann.

Related Articles
Derivative Works
1983 - Attack of the Timelord!
A pioneering game for the Odyssey2 this may be. One of the best uses of the 70s system's sound capabilities it may have. Fast and frantic and generally very cool are other ways it can be described. It is not, however, a Doctor Who game. The Timelord in question is a green blob hell bent on taking over the galaxy. It's not even a rogue Time Lord, since the instructions refer to the titular villain as Spyrus the Deathless, Timelord of Chaos, and it's unlikely a programmer would use his favourite TV show for inspiration, then invent a character and mis-spell his title (it's "Time Lord", not "Timelord", you moron!). In any case, it was renamed Terrahawks for the European release, to tie in with the Gerry Anderson series.

1983 - Cybermen

The followup to J Morrison's Bonka (probably best that you don't ask) sees the title character of 'man in a black hat' trying to retrieve bars of Platignum from a maze patrolled by Cybermen, and was coded exclusively for the Commodore C64. From the in-game instructions:
"The object of the game is to collect as much Platignum as possible. This is scattered at random throughout the maze, which is patrolled by Cybermen. You begin with 3 lives and the scoring is as follows: Cyberman 100 Pts, Platignum Bar 500 Pts. A bonus of 100 Pts is earned if you manage to shoot a Cyberman's bullet before it hits you. The overseer appears periodically and is indestructible, the only escape is through any of the open doors. Your man can only fire when moving, in the direction of travel. Good luck. Any key to start."
It seems likely that the name is just a coincidence, as the Cybermen certainly don't look like their TV counterparts!

1983 - Zalaga

No obvious Doctor Who references here... except that three years after being released for the BBC Micro, this game would turn up in The Trial of a Time Lord episodes 9-12, as a futuristic Space Invaders clone.
2014-23/1194/en_head.json.gz/22888 | KnowledgeLake Appoints IT Marketing Veteran Heather Newman as First-ever Chief Marketing Officer Seasoned technology marketing veteran brings over 16 years of global marketing and strategy experience to KnowledgeLake’s Executive Team as Chief Marketing Officer (CMO) Heather Newman, CMO, KnowledgeLake
I’m excited that we finally found the highly results-driven, truly strategic CMO that we’ve been searching for in Heather. - Ron Cameron, President and Co-Founder, KnowledgeLake
St. Louis, Missouri (PRWEB) February 18, 2014 KnowledgeLake, an enterprise-class ECM company, officially announced today Heather Newman has joined the company as the organization’s first-ever chief marketing officer. As CMO, Newman will lead KnowledgeLake’s marketing efforts, including global strategy, corporate branding, partner development, and product and solution marketing.
“I’m excited that we finally found the highly results-driven, truly strategic CMO that we’ve been searching for in Heather. Her proven track record of enabling technology companies to achieve maximum results and deliver an unmatched customer experience is highly-regarded in our industry. I am confident her passion, entrepreneurial spirit and industry expertise will allow us to continue improving how we serve our clients,” said Ron Cameron, President and Co-Founder, KnowledgeLake.
Newman’s years of experience in building global, high-tech marketing businesses has helped drive revenue for many companies through alignment with sales, partner channels, leadership teams and clear execution of initiatives. The majority of her career has been spent working with the largest technology companies in the world, including Microsoft, Google, Amazon, Hewlett Packard, NetApp and Dell. “After a long-time professional relationship with KnowledgeLake, it is truly an honor to serve as the organization’s first CMO and to work with such a talented team. I am extremely impressed with the innovation and solutions KnowledgeLake brings to a quickly evolving industry. I am looking forward to jumping in and helping support KnowledgeLake’s existing customer and partner relationships and take KnowledgeLake’s business and brand to even greater heights,” said Heather Newman, CMO, KnowledgeLake. In 2006, Newman founded the highly-successful consulting business Creative Maven, where as CEO and CMO, she has produced hundreds of marketing campaigns and events for high-tech companies, such as Microsoft, KnowledgeLake, GimmalSoft, OmniRIM and Ascentium. Most recently, Heather served as the SVP of Global Marketing for AvePoint, the world's largest provider of enterprise-class governance solutions for collaboration platforms, where she was nominated for The CMO Club Rising Star Award - 2013.
"Heather combines great marketing instincts with a passion for data driven marketing decision making and customer engagement. I look forward to seeing great things from her in her new position," said Pete Krainik, Founder, The CMO Club.
In addition to The CMO Club, Newman is an active member of a number of professional organizations, including: The American Marketing Association (AMA), Argyle Executive Forum (CMO Membership), Meeting Professionals International (MPI), and the Professional Convention Management Association (PCMA). She has also been named to the 2014 Board of Directors for AIIM, The Global Community of Information Professionals.
“We are delighted to have Heather’s extensive marketing experience and creativity on the AIIM Board. In addition to her personal talents, we are excited to have KnowledgeLake involved at the Board level," said John F. Mancini, President and CEO, AIIM. You can connect with Heather Newman on LinkedIn and on Twitter.
About KnowledgeLake
KnowledgeLake is an innovative software and services firm specializing in helping Microsoft-driven organizations solve their document-intensive business challenges through expert guidance, council, services and enterprise software solutions. KnowledgeLake provides the business expertise and technology needed to help clients efficiently capture, store and manage process related documents as part of an over-arching Enterprise Content Management (ECM) vision.
Headquartered in St. Louis, Missouri, KnowledgeLake is a three-time Microsoft Partner of the year award winner and is recognized as the founder of the SharePoint document imaging marketplace in 2003. KnowledgeLake enables its customers to maximize and extend their already sound investments in proven Microsoft technologies, such as Microsoft SharePoint, Microsoft Office and Microsoft Office 365. Equity funded by PFU Ltd. (a wholly owned subsidiary of Fujitsu Ltd.)
Lauren Ziegler
KnowledgeLake, Inc. +1 (314) 898-0512
@KnowledgeLake
KnowledgeLake
Eric Abent | Dec 3, 2012
For years now, we've used the Crysis series to determine how good a PC is at a glance, with enthusiasts today still asking "But can it run Crysis?" when you bring up your PC's technical specifications. It looks like Crysis 3 will continue the series' trend of demanding a lot of power, as Electronic Arts has released the PC requirements for the incoming game. Luckily, EA has share the requirements two months before Crysis 3 launches, which is good since it sounds like some of us will have to spend some time upgrading.
THQ stock climbs nearly 40% after release of Humble Bundle
Eric Abent | Nov 29, 2012
Earlier in the day, we told you about the latest Humble Bundle. This Humble Bundle is more or less the same as past bundles in that players get to name their own price for it, but there's one key difference: instead of bundling together a bunch of indie titles, this latest one is a collection of THQ's biggest games. It would appear that THQ's investors like this idea a whole bunch, because THQ's stock ended the day up a whopping 37.96%.
Do we really need the Steam autumn sale?
Over the weekend, I had the pleasure of partying with a bunch of my friends. All of them are pretty big nerds, just like me. If you're a nerd too, you know that not much changes when a bunch of nerds get a few drinks in them, they just talk about nerdy things louder than usual. Therefore, it shouldn't come as much of a surprise to hear that the Steam autumn sale was among the topics that came up that night.
Levine: BioShock Infinite won’t have a multiplayer mode
These days, it seems like publishers are all about multiplayer. They're sticking multiplayer in where it doesn't belong (we're looking at you, Dead Space and God of War) simply because they think a game won't sell as well without it. It's been a major sticking point for a lot of gamers who don't want to see single player modes become less important, but thankfully, not all publishers and developers have been bitten by the multiplayer bug.
Baldur’s Gate Enhanced Edition hits hard in new gameplay trailer
After months and months of waiting, the launch of Baldur's Gate Enhanced Edition is nearly here. On November 28 - less than one week from now - Baldur's Gate Enhanced Edition will land on PC, Mac, and iPad, introducing an entirely new generation of gamers to what is widely considered to be one of the best games ever made. What better way to celebrate its impending release than with an all new gameplay trailer?
Steam autumn sale kicks off with deals on XCOM, Darksiders II
Just as the prophecy foretold, Valve has started the long-awaited Steam autumn sale. The autumn sale may traditionally be outshined by the longer and larger holiday sale, but there are still some pretty excellent deals to take advantage of right this minute. For instance, some of the featured deals include Darksiders II at $16.99 (that's 66% off) and XCOM: Enemy Unknown for $33.48 (33% off).
Borderlands 2 Torgue DLC on all platforms today
Just a friendly reminder to all of you Borderlands 2 players out there: Mr. Torgue's Campaign of Carnage is available today across all platforms. This is the second DLC expansion to Borderlands 2, following the release of Captain Scarlett and Her Pirate's Booty last month. It can be had on Xbox 360, PS3, and PC for $9.99 or 800 MSP - that is, unless you purchased a Borderlands 2 season pass, in which case you can just straight up download it.
Call of Duty: Black Ops II Review
It's the end of the year, which means it's time for the inevitable Call of Duty game. Treyarch has a lot to live up too after the reception and the success of the original Black Ops, and this time around, the studio is looking to expand upon some of the ideas laid down in the first game. Does it work, or does the latest Black Ops II installment fail to improve enough and ultimately fall flat? Read on to find out.
Skyrim Dragonborn DLC struts its stuff in new screenshots
Skyrim's new Dragonborn DLC is just a few weeks away from release, and today Bethesda is trying to build up some hype with a slew of new screenshots. We were already given a bunch of details back when the DLC was officially announced, but now it's time to see some of the environments and races that will be found on the island of Solstheim. Hit the jump to see the full collection.
Black Ops II PC discs surprise players with Mass Effect 2 data
Well this is something of a hairy situation: we're getting reports that claim some Black Ops II PC discs actually contain data for Mass Effect 2 instead of, you know, the game advertised on the front of the box. One Redditor has compiled a list of links that all lead to complaints of these Mass Effect 2 discs in disguise, and YouTube zeroiez has captured the rather confusing mix up on video for the whole world to see. Sounds like someone made a pretty big mistake. | 计算机 |
Hack to spam indictment
A Florida man has been charged with stealing a vast quantity of personal information from Acxiom, one of the world's largest database companies. Scott Levine, 45, of Boca Raton, Florida, was this week indicted for 144 offences in connection with the alleged attack including "conspiracy, unauthorized access of a protected computer, access device fraud, money laundering and obstruction of justice".The US Department of Justice said the case involves what might be the "largest illegal invasion and theft of personal data to date". Federal investigators charge that Levine and staff at Snipermail stole 8.2 gigabytes of data from Acxiom's FTP servers between April 2002 to August 2003 during the course of 137 separate intrusions. The purloined data was allegedly used in email spamming campaigns by Snipermail, a Boca Raton-based company allegedly controlled by Levine.
Although investigators allege the intrusion and theft of personal information at Acxiom resulted in losses of more than $7m there is no suggestion this data was used in the more serious offence of identity fraud. The DoJ said six other people associated with Snipermail are cooperating in its investigation.Double jeopardyEvidence of Snipermail's alleged assault was discovered by investigators probing a separate security breach at Acxiom. Daniel Baas, 25, of Cincinnati, Ohio, pleaded guilty to that attack last December.Acxiom clients include 14 of the 15 biggest credit card companies, seven of the top ten auto manufacturers and five of the top six retail banks. The company also analyses consumer databases for multinationals such as Microsoft, | 计算机 |
Tech transfer Research projects Publications
I am a principal scientist at Adobe Systems, Inc., and an affiliate assistant professor at the University of Washington's Computer Science & Engineering
department, where I completed my Ph.D. in June 2006 after five years
of study; my advisor was David
Salesin. My areas of research are computer graphics, computer vision, and computational imaging. Specifically, I research computational techniques that can help us author more expressive imagery using digital cameras. I
spent three summers during my Ph.D interning at Microsoft Research, and my
time at UW was supported by a Microsoft
fellowship. Before UW, I worked for two years as a research
scientist at the legendary but now-bankrupt Starlab,
a small research company in Belgium. I completed my Masters and Bachelors at MIT majoring in computer
science; while there I was a research assistant in the Computer Graphics Group, and an intern at the Mitsubishi Electric Research
Laboratory (MERL) . As an undergraduate I did research at the MIT Media Lab.
I also spent much of the last year building a modern house in Seattle, and documented the process in my blog, Phinney Modern. | 计算机 |
Simply save the document as a PDF file, attach it to an e-mail and generate a standard message telling the reviewers how to use the file.Comments and revisions from reviewers are then imported back into the original PDF as layers. The author can review the comments, decide which to accept, then export the comments and suggested changes back into Microsoft Word. The program is significantly more powerful than Word’s Track Changes function, since it permits multiple reviewers to work simultaneously on the same document. Note that the reviewers must have installed either the Standard or Professional Adobe Acrobat programs to annotate the files.The new program makes it much easier to undertake file creation, both within standard Microsoft Office applications, as well as on the Web. Adobe adds a toolbar that permits documents to be created in PDF, _e-mailed and sent out for review. One especially useful feature is the ability to create a single PDF from a large number of files in disparate formats. (At my firm, we are automating corporate closing books, and this feature will be invaluable.)In addition, recognizing how much business now relies on the Internet and Web-based information, the capturing of Web sites has been significantly enhanced, and the sites so captured now include flash and live links. As the Web evolves, more documents are being generated with extensible mark-up language and contain metadata. Acrobat, taking account of this development, has a better bond between the PDF and XML formats, offering PDF as a container for metadata.Neutrality and Security FactorsAt the technical end, Adobe has done a good job of preserving its neutrality toward operating systems. While Acrobat is optimized for Windows XP, the Mac version seems to work well, and it is available for other operating systems, too. Adobe has also succeeded in achieving greater compression, and file sizes are now shrunk to half the size available with Acrobat 5.0. The compression is described as a minimum of 29 percent and a maximum of 79 percent, depending on the original file’s complexity.Last but not least, Adobe continues to offer an attractive package of security features in the new version. It permits digital signatures and encryption (using Microsoft Crypto API). A nice aspect of the digital signature feature is that documents can be identified as certified, unaltered or digitally signed.Version 6.0 requires 245 megabytes of hard disk space, with a recommended 64MB to 128MB of installed RAM. (I think the program will run much better if you have more RAM than that.)Does Your Firm Need Acrobat 6.0?The drawbacks for a law firm lie in the relatively high cost. At $229 per user, it is a high price to pay—albeit for a splendid collaborative tool. My ultimate recommendation is that every law firm upgrade the Acrobat Reader on its desktops to Adobe Reader 6.0. The great things about the new version of Reader are that it provides a much more robust search capability, the ability to complete forms and the ability to digitally sign documents.Adobe has come forward with an excellent update to a great product, which all law firms would benefit from deploying. My only hesitation in recommending the product more generally is its relatively high sticker price, if it needs to be deployed in an enterprise-wide fashion or given to all lawyers in the firm. 
Since Adobe Reader is free, however, the decision to upgrade that for every user in your office should be a no-brainer.Simon Chester ( [email protected]) is a partner in the KNOWlaw Group at Toronto’s McMillan Binch LLP. Send Feedback Table of Contents | 计算机 |
Tougher Computer Crime Penalties Sought By U.S. LegislatorsDraft version of Computer Fraud and Abuse Act includes amendments largely recycled from 2011 DOJ proposals -- and running counter to leading legal experts' demands to narrow anti-hacking laws, critics say.Legal experts and privacy activists are crying foul after the House Judiciary Committee began circulating a draft bill that would amend the Computer Fraud and Abuse Act (CFAA) to impose tougher penalties for many types of computer crimes. The 22-page draft "cyber-security" legislation is currently being circulated among committee members. A House Judiciary Committee aide told The Hill that the draft is still in its early stages, and feedback is still being gathered from multiple stakeholders. But multiple legal and privacy experts have already criticized the proposed changes, with George Washington University professor Orin Kerr, a former Department of Justice computer crime prosecutor, saying that the bill's revised language appears to have been recycled from legislation proposed by Sen. Patrick Leahy (D-Vt.) in 2011, which he developed with the Department of Justice.
"This is a step backward, not a step forward," said Kerr in a blog post analyzing the draft bill. "This is a proposal to give DOJ what it wants, not to amend the CFAA in a way that would narrow it."
[ What changes should be made to current privacy and cyber abuse legislation? Read Hacking, Privacy Laws: Time To Reboot. ]
Indeed, the proposed changes would impose tougher penalties for many types of computer crimes, including making some computer crimes a form of racketeering. In addition, CFAA could be used to punish "whoever conspires to commit ... as provided for the completed offense," meaning that someone who discussed committing a computer crime could be charged with having committed the crime, reported Techdirt.
Numerous legal experts have been calling on Congress to amend the CFAA, following the death of Internet activist Aaron Swartz. He committed suicide while facing up to $1 million in fines and 35 years in prison after he used the Massachusetts Institute of Technology's network to download millions of articles from the JSTOR academic database as part of his quest to promote open access to research that had been funded by the federal government. Ultimately, Swartz issued an apology and returned the files, and JSTOR requested that the civil matter be closed. But using the CFAA, federal prosecutors continued to press charges against Swartz. Seeing a pattern of behavior, meanwhile, critics have slammed the CFAA for being overly broad and enabling Justice Department prosecutors to treat minor crimes as major felonies. The House Judiciary Committee's so-called cyber-security bill contains a hodgepodge of other recommendations, including in some cases classifying violations of a company's terms of service as being a felony charge. It would give the government greater leeway in pursuing criminal forfeiture, and assess penalties for anyone who intentionally damages "critical infrastructure" computers -- of which the vast majority are owned, secured and controlled by private businesses. It would also create a federal data breach notification law that would supersede the patchwork of regulations now in effect in virtually every state. The law would require any "covered entity" that suffered a "major security breach" -- involving "means of identification" pertaining to 10,000 or more people -- to notify the FBI or Secret Service within 72 hours and inform affected customers within 14 days, or else risk a fine of up to $500,000, which could be raised to $1 million for intentional violations.
The draft legislation does propose setting a new threshold for the charge of "exceeding authorized access," saying that it would be a crime only if the value of information compromised exceeded $5,000. But how much thought has been put into these amendments? Interestingly, the text of the bill says that "the Attorney General is authorized to establish the National Cyber Investigative Joint Task Force, which shall be charged with coordinating, integrating, and sharing information related to all domestic cyber threat investigations."
In fact, the FBI-led National Cyber Investigative Joint Task Force -- created in 2008 when President Obama established the Comprehensive National Cybersecurity Initiative -- is already coordinating intelligence and investigations into national cybersecurity intrusions across 18 intelligence and law enforcement agencies. A House Judiciary Committee spokeswoman wasn't immediately able to discuss the apparent discrepancy by phone. But the draft bill's outdated language suggests that more than one facet of the cyber-security bill, including the proposed CFAA amendments, remain -- at best -- half-baked.
re: Tougher Computer Crime Penalties Sought By U.S. Legislators While the prospect of harsher punishments may help deter certain hackers, the real focus for legislators should first be on reforming the scope of many of these cybersecurity laws. Reform laws like the CFAA so that they are clearer and more reasonable before placing an even more onerous burden on individuals who have violated those laws.
re: Tougher Computer Crime Penalties Sought By U.S. Legislators Hackers value being unseen, and unheard. As a result uncaught.If you catch them give them what they want.Make them disappear and never heard from again. Being unseen, and unheard ultimately they would be at their zenith of value, and be in a place where we can really use them.
re: Tougher Computer Crime Penalties Sought By U.S. Legislators I do think prosecutors have gone overboard regarding some recent cases, including that of Aaron Swartz. And turning the violation of Terms of Service into a felony? That sounds like overkill. This certainly bears watching.Drew Conry-MurrayEditor, Network Computing
The Original Macintosh: 116 of 122 Busy Being Born
Bill Atkinson, Bud Tribble, Steve Jobs
Origins, User Interface, Early Programs, Lisa, QuickDraw, Software Design
A visual history of the development of the Lisa/Macintosh user interface
The Macintosh User Interface wasn't designed all at once; it was actually the result of almost five years of experimentation and development at Apple, starting with graphics routines that Bill Atkinson began writing for Lisa in late 1978. Like any evolutionary process, there were lots of false starts and blind alleys along the way. It's a shame that these tend to be lost to history, since there is a lot that we can learn from them. Fortunately, the main developer of the user interface, Bill Atkinson, was an avid, lifelong photographer, and he had the foresight to document the incremental development of the Lisa User Interface (which more or less became the Mac UI after a few tweaks) with a series of photographs. He kept a Polaroid camera by his computer, and took a snapshot each time the user interface reached a new milestone, which he collected in a loose-leaf notebook. I'm excited to be able to reproduce and annotate them here, since they offer a fascinating, behind the scenes glimpse of how the Mac's breakthrough user interface was crafted.
The images are scaled so they easily fit onto a typical screen, but you can click on them for larger versions that show more detail.
The first picture in Bill's notebook is from Bill's previous project, just before starting work on the Lisa: Apple II Pascal. The high performance graphics routines that Bill wrote for Apple II Pascal in the fall of 1978 led right into his initial work on the Lisa.
The center and right photos, from the spring of 1979, were rendered on the actual Lisa Display system, featuring the 720 by 360 resolution that remained constant all the way through to the shipping product. No Lisa existed yet; these were done on a wired wrapped prototype card for the Apple II. The middle picture shows the very first characters ever displayed on a Lisa screen; note the variable-width characters. The rightmost picture has more proportional text, about the Lisa display system, rendered in a font that Bill designed by hand.
The leftmost picture illustrates the first graphics primitives that Bill wrote for LisaGraf (which was eventually renamed to QuickDraw in 1982) in the spring of 1979, rendering lines and rectangles filled with 8x8 one-bit patterns. The power and flexibility of the patterns are illustrated in the rightmost shot, which were our poor man's substitute for color, which was too expensive (at the required resolution) in the early eighties.
The middle picture depicts the initial user interface of the Lisa, based on a row of "soft-keys", drawn at the bottom of the screen, that would change as a user performed a task. These were inspired from work done at HP, where some of the early Lisa designers hailed from.
Here are some more demos of the initial graphics routines. Bill made line-drawing blindingly fast with an algorithm that plotted "slabs" of multiple pixels in a single memory access. The rightmost picture shows how non-rectangular areas could be filled with patterns, too.
Here are some scanned images, showing off Lisa's impressive resolution for the time, which Bill scanned using a modified fax machine. He was always tweaking the half-toning algorithm, which mapped gray scales into patterns of monochrome dots. Bill had made versions of these for the Apple II that Apple distributed on demo disks, but these higher resolution Lisa versions were much more impressive.
The left and middle pictures show off the first sketch program, an early ancestor of MacPaint, that allowed mouse-based drawing with patterns and a variety of brush shapes. I think these are perhaps a bit out of sequence, done in early 1980. The rightmost picture shows the final soft-key based UI, which is about to change radically...
...into a mouse/windows based user interface. This is obviously the biggest single jump in the entire set of photographs, and the place where I most wish that Bill had dated them. It's tempting to say that the change was caused by the famous Xerox PARC visit, which took place in mid-December 1979, but Bill thinks that the windows predated that, although he can't say for sure.
The leftmost picture shows different fonts in overlapping windows, but we didn't have a window manager yet, so they couldn't be dragged around. The middle window shows the first pop-up menu, which looks just like SmallTalk, as does the simple, black title bar. The rightmost picture shows that we hadn't given up on the soft-keys yet.
By now, it's the spring of 1980 and things are starting to happen fast. The leftmost picture shows the earliest text selection, using a different highlighting technique than we ended up with. It also shows a "command bar" at the bottom of the screen, and that we had started to use non-modal commands (make a selection, then perform an action, instead of the other way around).
The middle picture shows the very first scroll bar, on the left instead of the right, before the arrow scroll buttons were added. It also has a folder-tab style title bar, which would persist for a while before being dropped (Bill says that at that point, he was confusing documents and folders). The right most photo shows that we adopted the inverse selection method of text highlighting.
By the summer of 1980, we had dropped the soft-keys. The leftmost photo shows that we had mouse-based text editing going, complete with the first appearance of the clipboard, which at that point was called "the wastebasket". Later, it was called the "scrap" before we finally settled on "clipboard." There was also a Smalltalk style scrollbar, with the scroll box proportional to the size of the document. Note there are also two set of arrows, since a single scrollbar weirdly controlled both horizontal and vertical scrolling.
The next picture shows that we dropped the proportional scroll box for a simpler, fixed-size one, since we were afraid users wouldn't understand the proportionality. It also shows the I-Beam text cursor for the first time. At this point, we were finally committed to the one-button mouse, after a long, protracted internal debate.
The right most picture shows Bill playing around with splines, which are curves defined by a few draggable control points. QuickDraw didn't end up using splines, but the picture is still notable for the first appearance of the "knobbie" (a small, draggable, rectangular affordance for a point).
By now, it's the fall of 1980. The middle picture shows us experimenting with opened and closed windows, which was eventually dropped (but it made a comeback in the 1990s and is in most systems today one way or another). The right most picture shows the first window resizing, by dragging a gray outline, although it's not clear how resizing was initiated.
The middle picture shows that windows can be repositioned by dragging a gray outline. We wanted to drag the whole window, like modern user interfaces do today, but the processors weren't fast enough in those days. As far as I know, the NeXTStep was the first system to do it the modern way.
The right most picture shows the first appearance of pull-down menus, with a menu bar at the top of the window instead of the top of the screen, which is the way Windows still does things. By this point, we also gave up on using a single scroll bar for both horizontal and vertical scrolling; it's looking very much like what the Mac shipped with in 1984 now.
This set of pictures illustrates the Lisa desktop, circa the end of 1980, with a tab-shaped title, followed by a menu bar attached to the window. Windows could be reduced to tabs on the desktop. We've also changed the name of the clipboard to "the scrap", an old typesetting term.
The leftmost picture mentions the first use of double-click, to open and close windows. The middle picture represents a real breakthrough, by putting the menu bar at the top of the screen instead of the top of each window. The menu bar contains the menus of the "active folder", which is the topmost window. By this point, the grow icon found its way to the bottom right, at the intersection of the horizontal and vertical scrollbars, which stuck. This is the first picture which is really recognizable as the shipping Macintosh.
By now, it's early 1981, and things are beginning to shape up. The leftmost picture shows a window with scrollbars that look a lot like the ones that shipped. The middle folder illustrates split views, which were used by Lisa's spreadsheet application. The rightmost picture contains the first appearance of a dialog box, which at the time ran the entire length of the screen, just below the menu bar.
Now that the basic window structure was stabilizing, Bill turned his attention back to the graphics routines. He worked more on the Sketch program (the forerunner of MacPaint); the snowman drawing on the left is a clue that it's now Winter 1981. He added algorithmic text styles to the graphics, adding styles of bold (pictured on the right), as well as italic, outline and shadow (Bill took pictures of the other styles which I'm omitting to save space).
Bud Tribble was living at Bill's house now, and tended to sleep during the day and work all night, so Bill drew the phase diagram diagram on the left with the sketch program. The middle picture shows fast ovals, which were added to LisaGraf as a basic type in Spring 1981, using a clever algorithm that didn't require multiplication. They were quickly followed by rectangles with rounded corners, or "roundrects", illustrated on the right, which were suggested by Steve Jobs (see Round Rects Are Everywhere!).
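Atkinson's actual oval routine isn't shown in the story, but the kind of trick the sentence hints at — tracing a curve with additions, subtractions and compares, no multiplies — can be sketched in a few lines. This is only an illustrative modern reconstruction of the general idea (shown for one octant of a circle; the same incremental-error approach extends to axis-aligned ovals), not the LisaGraf/QuickDraw code:

```cpp
#include <cstdio>

void plot(int x, int y) { std::printf("(%d,%d)\n", x, y); }

// Step out one octant of a circle of radius r using only adds and subtracts;
// the error term t is maintained incrementally instead of evaluating x*x + y*y.
void circle_octant(int r) {
    int x = r;
    int y = 0;
    int t = r / 16;          // division by a power of two is a shift, not a multiply
    while (x >= y) {
        plot(x, y);
        ++y;
        t += y;              // error grows as y advances
        if (t - x >= 0) {    // time to step inward
            t -= x;
            --x;
        }
    }
}

int main() { circle_octant(8); }
```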
By May 1981, the Lisa user interface is beginning to solidify. The leftmost photo shows scrollable documents of different types in overlapping windows, still sporting folder tabs for titles. The middle picture shows how roundrects began to creep into various UI elements, like menus, providing a more sophisticated look, especially when combined with drop shadows. The right most photo shows how menus could be also be graphical, as well as text based. The Lisa team was worried about the closed window tabs being obscured by other windows on the desktop, so Bill added a standard menu on the extreme left called "the tray", that could show and hide opened windows. The middle and right pictures portray a prototype that Bill created for the Lisa Graphics Editor (which eventually evolved into MacDraw), to demonstrate that modes could sometimes be useful; it was the first program to select modes with a graphical palette, which eventually became the main user interface of MacPaint.
The last major change in the Lisa User Interface was moving to an icon-based file manager in March 1982. The leftmost picture was an early mock-up done in the graphics editor, using a two-level hierarchy; selecting an icon in the top pane displays its contents in the bottom one. By the middle photo, Bill arrived at something very similar to the shipping design, complete with a trash can at the lower right (see Rosing's Rascals). Note that the folder tab on windows has disappeared now, replaced by a rectangular title bar that's partially inverted when highlighted.
Finally, Bill renamed "LisaGraf" to "QuickDraw" in the spring of 1982, because he wanted a name that was suitable for the Macintosh, too. He added two related features to meet the burgeoning needs of the Lisa applications: pictures and scaling. Pictures were a way of recording graphics operations into a data structure for later playback; this became the basis of both our printing architecture and also cutting and pasting graphics. Since pictures could be drawn into an arbitrary sized rectangle, it also caused Bill to add bitmap scaling features as well.
Most users and developers only experienced the user interface as a completed whole, so they tend to think of it as static and never changing, when in fact these pictures show that it was always evolving as we gained more experience and tackled more application areas. A user interface is never good enough, and, while consistency between applications is an important virtue, the best developers will continue to innovate when faced with new problems or perhaps just when they see a much better way to accomplish something. As usual, Bob Dylan said it best when he wrote in 1965, "He not busy being born, is busy dying."
from Bill Eccles on January 30, 2004 15:05:06
I think that the devil's in the details, many of which continue to elude Windows to this day (though my experience with XP is, blessedly, minimal and it may address some of them).
For example, the rounded corners of the screen, which seem to develop between "The Dialog Box" and "Sketchpad in a Folder," were a refinement that was a result of some kind of debate, as I remember reading somewhere. That kind of subtle detail makes the Macintosh a much friendlier, more organic computer.
It isn't often that a person or group of people like you get a chance to so significantly impact the world. I'm glad you did it so well. Thanks, y'all!
from David Craig on February 20, 2004 03:38:30
Great story, I learned a lot about the development of the Lisa and Macintosh user interfaces.
Pleased to see that Bill Atkinson documented this work with at least photos. Assume these photos were a key exhibit in the Apple v. Microsoft UI lawsuit in the 1990s.
I wonder if the Lisa and Mac group printed screen shots of their progress for both the UI and the Lisa/Mac applications themselves such as the LisaWrite/MacWrite word processors? I would like to see stuff like this.
Also, AFAIK the name "QuickDraw" originated with Jef Raskin who used this as the name of his 1967 Penn State thesis on computer graphics. Atkinson and Raskin worked closely together on various Apple projects (e.g. Apple II Pascal) and assume that Atkinson saw Raskin's thesis.
-David T Craig
from Andrew Simmons on March 04, 2004 22:18:14
Not a comment, just wondering, following a discussion on another site, why the scrollbar moved from the left to the right of the window.
from Frank Ludolph on March 24, 2004 22:23:34
Thanks for posting the photos and history. A minor correction of sorts. "The middle picture shows us experimenting with opened and closed windows, which was eventually dropped." I don't believe that these were minimized windows as the text subsequent to the quote implies. The rectangles were just very simple icons for closed documents on the desktop, a concept supported by the Xerox Star and all GUI systems since. In addition, the Lisa did have a type of minimized windows, i.e., a window could be 'Set Aside' to the desktop where it took the form of the object icon. It appeared to just be a closed document, but in reality it's application was still running and would immediately display if double-clicked. It looked like a closed document because the Lisa user model did not distinguish between window and document. In fact the user model didn't distinguish between running and non-running documents. A document was either open (window form) or closed (icon form). Lisa's data model made this "set aside" document safe - nothing would be lost in the event of an application or system crash.
from Josh Osborne on April 29, 2004 05:00:49
The article says NeXTStep is the first system the author was aware of that supported the dragging of whole windows (as opposed to just an outline).
The X11 window system was able to do it, and the twm window manager had an option (I don't know its default) that would do it. I remember playing with it long before the University of Maryland got any NeXT boxes (and I think the UofMD got NeXT boxes pretty much as soon as they were available).
NeXTStep may have been the first system to ship with that behavior as default though (twm wasn't even the default window manager for X11... of course there wasn't exactly a default anything for X11 as most vendors shipped it at that time; not only was almost everything end user customizable... the end user pretty much had to bash at the configs to get anything usable)
from michael smith on June 13, 2004 03:22:22
fantastic to actually see the development of the windowing system here for the first time. glad we've been given this insight into a heretofore hidden aspect of the development. it was always nice being able to read it but to see the pictures too is a bonus.
from Eric Bariaux on December 26, 2004 12:46:10
About the dragging the whole window as opposed to just an outline part, I'm wondering if the Amiga 1000 with the Workbench software was not already able to do this. I'm not sure if this was in the first release or not either. Maybe someone can validate this.
This would mean it's been available to a wide audience in 1985.
Also in 1987, the Acorn Archimedes was, I believe, able to resize a window "in real-time" as opposed to just the outline.
from Steven Fisher on July 06, 2006 23:51:30
As I recall, the Amiga 1000 initially dragged an ugly yellow outline, not the window. But I'm not an expert on the Amiga and could easily be wrong.
from John B on January 19, 2007 19:59:58
You should be able to get a pretty good idea of when the SX-70 photos were made from the lot code on the back of the prints. Try contacting Polaroid.
from Rodolfo Cereghino on April 04, 2007 18:18:40
Picking up on a comment here. Did the Mac team ever look / consider what was going on with the Amiga or the Atari ST? They came out at (relatively) the same time frame (Amiga 1985, I think). At least from what I remember the Amiga had preemptive multitasking (though it was very unstable), showed thousands of colors, etc. This was impressive at the time.
Btw, great site!
from Anay Kulkarni on September 04, 2008 06:50:51
Andy, your "A user interface is never good enough, and, while consistency between applications is an important virtue, the best developers will continue to innovate when faced with new problems or perhaps just when they see a much better way to accomplish something. As usual, Bob Dylan said it best when he wrote in 1965, "He not busy being born, is busy dying."" is just one of the greatest thing i've ever read. A million Thanks for writing these things and also publishing the book. it's given me a new inspiration and renewed my gone_dull spirit of a computer engineer to just TRY to "Make a Dent in the Universe"
Landing Page Features
Landing Page Showcase
No coding required
Gather Lead Intelligence
Contact Inbound Now
Disclosures and Relationships
Guest Post For Inbound Now
Selling on Inbound Now
Basic Support
Home > Blog > Connecting Digital & Traditional Marketing Channels w/ Mitch Joel
Connecting Digital & Traditional Marketing Channels w/ Mitch Joel
Posted by David Wells in Inbound Now Episodes In this episode of Inbound Now, Mitch Joel joins us to share his thoughts on how companies should be bridging the gap between digital and traditional marketing.
Mitch is the president of Twist Image, an avid blogger, podcaster, and author of Six Pixels of Separation.
David: Hey, everybody. Welcome to Episode Number 11 of Inbound Now. Today I have with me a very special guest, Mr. Mitch Joel. Mitch is the President of Twist Image, a full service digital agency up in Canada. He’s a podcaster and an author of a show and book, “Six Pixels of Separation.” Dare I say, he is the next Seth Godin. He’s a marketing mastermind. Welcome to the show, Mitch.
Mitch: Thanks. Great to be here, and I’ll take all compliments like that. That’s very kind of you.
David: There you go. No, I think you put out a lot of great stuff and I hope to see you, you’ve just got to write more books. You’ve got to write a book every two months like Godin.
Mitch: I’m trying. I can’t keep pace. He keeps lapping me.
David: Yeah, exactly. I wanted to get you on the show today to talk a little bit about, you write on your blog and your podcasts about how companies are thinking about bridging the gap between digital and traditional marketing. You talk a lot about it. I wanted to dive deeper into that and your thoughts on that. You’re also a content machine, putting out all kinds of great stuff. So your methodologies behind that and some tips that our audience can take away. Then how, you specifically leveraged your podcast to grow your business and personal brand. Sound cool?
Mitch: Sure, yeah, let’s do it. I’m excited.
David: All right. Cool, cool. So on your blog and on your podcasts, you often talk about bridging the gap between digital and traditional channels. How do you see traditional channels, like TV, radio, broadcast, playing a role in marketing moving into the future?
Mitch: I think the reason we have multiple marketing channels is fundamentally because they function in different ways, and they attract different audiences at different times and different moments in their lives. We tend to look at digital as this sort of like strange catch-all where it will do everything and solve all the problems and get us away from all the other stuff. But I’ve got some strong opinions about traditional mass media in terms of its value, and I think it runs a little bit contrary to a lot of my peers.
I think that having a web environment or a social or mobile environment where people can vote things up and comment on them and share them is great. But there’s a big majority of people who come home after a long day of work and they want to plop themselves on the couch and be entertained or inspired or given content that intellectually stimulates them that doesn’t involve them being proactive. I tend to look at media from more of an interactive versus a proactive versus a reactive platform, and I think that there are places for all of that. I think within in the media we have to also understand that there are multiple ways you can market.
We mistakenly use the word marketing when what we mean is advertising. I think it’s a fundamental flaw in many of the sort of professionals I deal with day-to-day. I think that there are tons of things you can do as a marketing professional that add value to an advertising campaign. When I look at the extensions of how TV can play with the Web or mobile plays with radio, I see multiple ways in which you can market and connect. Now, the challenge is obviously, one, tying it into a strategy, one that has real business objectives and real values. The other one is fundamentally believing that what you’re doing is adding value to the consumer’s life and not just clutter. As you can probably tell, I can go on and on about the topic.
David: Yeah, definitely. So you’re really talking about traditional is not dying. There’s still that mass appeal there where basically going onto blogs, Twitter, blah, blah, that’s all proactive stuff and that’s good that brands are there. But there’s still that mass of people that still, like you said, plop down and watch TV or what have you. I think tying those two mediums together with a cohesive strategy is where things are moving into the future. Would you agree?
Mitch: I tell, my sort of wave, my one line to sum it up is just by letting people know that everything is with, not instead of. I think we in the digital social field tend to push towards instead of. Instead of TV, you should be blogging. Instead of this, and I don’t believe that to be the case. I think there are brands who have had amazing success in advertising in TV, radio, print, that can leverage these amazing new platforms to change and to add more and to do different things and to try different connections with their consumers. I don’t see it as a zero sum game. Do I see TV advertising changing because of the fragmentation and new channels that are there? Absolutely. But evolution would happen whether we had social media or not.
David: Right, totally. So, what advice would you have for companies out there that are kind of getting into the digital marketing space, kind of playing around? What kind of low hanging fruit could they be taking advantage of right now that they may not know about?
Mitch: Yeah, I don’t know if there’s a sort of platform that they may not know about that they should be running after, and I say that half jokingly only because I think that, unless you’re driven by a real strategy, understanding why you’re doing this and why you’re engaging in it, there’s no point to it. You’re just running after tactics of the latest and greatest shiny bright object. I see value in brands starting off with a fundamental strategy, and to me, that strategy needs to tie into their business objectives and all those nice ROI things that we keep hearing about in the world. But, really, brands know what they want to do, whether it’s acquiring customers, whether it’s a cost per acquisition strategy, whether they’re looking to create brand affinity or awareness, whether they want to build a loyalty and retention program.
When you look at those business objectives, you can then look towards the social channels as ways in which you can engage. At its core, though, what I tell people is the misconception of social media is that it’s about a conversation or engagement. I think that that’s part of what you get out of social. But what actually makes a media social to me is simply two things. One is that everything you do is sharable. You’re opened up so that people can share your content and you can share along with them. Two is being findable. By being in these social spaces, whether it’s a Twitter, Facebook, Quora, whatever it might be, you’re making your brand and the people who represent your brand as findable as possible. So you’re not missing opportunities within that.
We hop right away to the conversation. Conversation is a pretty tough thing to get. You have to have people who care. You have to have people who are coming frequently, people who want to talk to you, want to have a back and forth beyond a simple engagement. Looking at it from a perspective of making your brand as sharable and as findable as possible is probably the best advice I could give someone just starting out of the blocks.
David: Right, and having a face on your brand is important, and social is playing an increasing role in SEO and what have you. So, I think it's more important every single day. Cool. In a recent interview you did with Jonathan Baskin, you had an analogy where basically back in the day amps started to become cheaper and cheaper. Everyone started buying an amp, and you equated it to social media where the barrier to entry is basically non-existent for companies to get into this. All these people are buying amps, but then all of a sudden, hey, you have all these people that can't play guitar, right? I thought that was really cool. Where should these companies learn to play guitar, i.e. start using social media for their businesses? What resources would you say would be good to start out learning some of these things?
Mitch: Yeah, it’s sort of the mindset of just because everybody can blog or have a blog doesn’t mean you’ve got a lot of people who are great at blogging. I think part of it is understanding your skill sets and where you come from and what you’re trying to accomplish with all of these things. I mean, fundamentally, what we’re talking is a shift away from being a marketer and a shift more towards thinking like a publisher. That, in and of itself, is really challenging. Now, it would only be challenging if you had one way to publish text or images or video. It becomes increasingly more challenging when suddenly it’s about text, images, audio, video, instantly and pretty much for free to the world.
For me, it’s developing an appreciation for the type of content that you as a brand feel most comfortable and being able to present to the world. Also, really thinking through what does it mean to be a publisher. We look at some of the core things that make a magazine great or a TV show great, and it happens to be things like consistency and frequency and how often you publish. The relevancy, the context of it, how it plays out to your audience, understanding their reactions, the pulse. These are all things that fall very much outside of the marketer’s traditional toolbox. So we have to be able to break the change and stop calling it social media or social marketing, but look at it as a sense of, if we’re trying to make connections, the ways in which we are fundamentally making those connections is by providing this level of valuable content. This level of valuable content has to be created by somebody real and authentic, and it has to really have value to the people who are reading it. Just putting out constant reworks of your brochure or retweets of how you say things doesn’t necessarily create that level of confidence from the consumers.
My blog has been around since 2003, 2004, and I don't shill for Twist Image, the agency that I own. I don't talk about the services we offer. In fact, a lot of people are like, this guy has . . . they think it's just me. They're shocked that I have two offices and hundreds of employees and stuff like that. Part of it is because I believe that the best clients we can get here as a digital marketing agency are the ones who are walking onto the lot because they read something that inspired them to think differently about their marketing and communications.
The actual angle by which we approach our advertising and how we’re looking at it is very, very different than our competitors and peers.
David: Right. So you’re becoming the content producer. Every company should be thinking about what content can we publish to basically help solve our target market’s problems and what have you. Pull them in via organic search, social media, etc., and then they kind of see, oh, yeah, you run an agency, you have all these different services. Right? So, it’s really like. . .
Mitch: Yeah, and you have to look at it beyond that. Because the way you're saying it, we have to publish content. That's the whole amp analogy. Anyone can buy an amp and anyone can publish content. The question is how are you going to become the next Jimi Hendrix? How are you going to really rethink what it is you're doing? The thing there is that might be overwhelming to people. I wrote a blog post a while back on why you should write a book, and the net answer to the question, which I wrote in 600 words but will sum up here in one sentence, was: you should write a book because nobody else can write a book like you. I think we tend to forget that. Right? It's the sort of human perspective that we bring to it that makes it really of value.
As for my side of social media, I don't even know if I consider "Six Pixels of Separation," the book, a social media book. I think it was actually just a modern marketing book to be honest. Because I don't talk just about the social sphere, I talk about the changing landscape that's affecting marketers and businesses as we know it. It just so happens, obviously, that the social implications of it play a major role in how this shift has taken place. But brands have to really be able to understand that it's not the cold callous marketing blather that you see everywhere. It's their ability to deliver their message in a unique way that resonates with an audience. Anyone can write a song. We all have the same seven chords or however many chords there are. It takes a lot to get to Lennon-McCartney though.
David: Right. So it’s really playing to your strengths and producing content that would actually resonate with your audience and that you’re actually good at doing. Because, like you said, just putting something out there just to put it out, you’re just shouting. Everyone’s shouting into Twitter right now, but you need to stand out is what you’re saying.
Mitch: Yeah, and don’t not do it because you’re not John Lennon or you’re not Paul McCartney. That’s not the reason to sort of be afraid of it either. You just have to recognize that it is the people who bring that unique perspective and that they bring their own style and they’re not afraid to put their art out there that really win in these channels.
It’s true. If you look at a brand even like, my friend Scott Monty over at Ford and what they’ve done with that brand and the optics we now have on that brand, social media for sure changed the brand, but the brand in and of itself then becomes a part of this interesting ecosystem where it can play in different spaces because it has that permission. It’s created that value within certain segments and then on the overall sort of halo effect of the brand that have literally transitioned the company to think differently about how they see themselves. I think that people forget that.
When I’m blogging, I don’t have a formula. It’s not a simple sort of cookie cutter thing. Every day something inspires me, usually three or four things. Then I’m sort of editing down to one thing. Then even as I have the germ of an idea in my head and I’m putting my art out there in words and writing it out, that creates its own manifestation where a lot of times I’ll end the blog post and I’m like wow, that wasn’t even really where I started with the first idea, but I sort of like where it went.
What I’m trying to say is brands, individuals can give themselves the freedom to try and explore different ways to connect with people that don’t necessarily involve the direct sort of 20% off, 40% more whiter, brighter, faster, whatever.
David: Right. So yeah, speaking of your blog, where do you source ideas? Where do you pull inspiration from to create some of the great posts that you crank out on a daily basis?
Mitch: I was the Vice-Chair of the National Conference for the Canadian Marketing Association, which is like the American Marketing Association, only it’s Canadian which means it’s smaller and we’re more intimidated than Americans. There are no guns, it’s crazy. We were talking. We actually had Seth Godin come up to speak. Actually, I invited him up and he was speaking. Someone in the audience said, “Hey, Seth, where do you come up with your blog ideas?” The truth of the matter is it’s very much, I’ll give you his answer because it’s very much my answer, which is, one, I’m interested. So, I pay attention to a lot of the news and channels and feeds whether it’s Twitter and blogs and Facebook stuff and things like that. Truth of the matter is I’ve also created a bit of an audience for myself, so people send me a lot of really cool stuff that’s interesting. I’ve got 130 peers right here in my office who share great content across different channels that’s fascinating to me.
But ultimately that’s not where it comes from. Where it comes from is, as Seth says, it’s a secret sauce. I don’t know. We just don’t know. I really, honestly don’t know. Something inspires me the same way when you’re in the shower and you remember, oh, yeah, I have to do this. I get inspiration and ideas from that. I’ll see something very odd and peculiar that will inspire me to think about something.
Just the other day I wrote a post about five or six brand new business books that I haven’t read yet but I thought were really interesting, that I thought other people should know about because we’re in a world where people are talking about Seth’s new book and Gary Vaynerchuk’s next book and Guy Kawasaki’s next book. All of them are either out or coming out this week. I was just literally like glancing over at the books that I have here on my left, and I was like, wow, there’s like six books that look amazing that no one’s really talking about. There was inspiration.
I think what it comes down to though, the real sort of secret to the secret sauce, is in our ability to be open to noticing things. That’s a really strange thing to say, but it’s really powerful. I won’t read a newspaper and go, wow, that was a good consumption tactic. I’ll actually read a newspaper and see an article even on Libya or Egypt that will inspire me to think about a business problem I have. I don’t know if that’s me putting ideas together in ways in which the average person doesn’t or everyone else does. I don’t consider myself below average, that’s why I say average.
That’s the truth of it. I just keep myself open to it. I guess the best other sort of reason is they say great journalists have a nose for news. Maybe because I’m so passionate about this space, I have a nose for that space. I’m not trying to pat myself on the back, but I’m genuinely curious and interested. I’m not afraid to ask questions.
David: Right. I think you do a really good job of taking a story then playing off of it. Basically putting your own perspective on top of it instead of just regurgitating what everyone else is saying. I think that helps out a lot. Then the other thing that I kind of noticed, you post really fantastic headlines that someone might even consider link bait, kind of in a good way, though. What’s kind of your tip on coming up with really great headlines, because that’s what people see on Twitter or Facebook and that’s what grabs their attention to pull them into your blog, right?
Mitch: I really appreciate that, because I actually think that that's where I'm weakest is in my headlines, believe it or not. I used to publish a magazine, magazines, multiple magazines actually. So, even prior to the digital channels being around, and I've been a journalist since I was about 17 years old, professionally, being paid as a journalist. I think when you work with editors and you see so many different types of magazines and newspapers and things like that, you get a sort of feeling for, like, okay, if this is the topic, these are ways I can play off of it.
The real sort of funny thing I’ve done is I’ll go into magazine stores and just look at the covers and the titles of articles. You could go from like men’s health to like a parenting magazine to like a rock music magazine and it’s always the same, right. It’s the things like the ten best guitar heroes of all time, five ways to get a flat belly for summer, the six things you must know about losing weight. It’s like the classic, and I always say magazine covers have the best link bait on them. I’ve actually focused a lot in the past couple of years on trying to shy away a little bit from that. Not because I think it’s a saturated thing or it’s sort of like a trick or anything like that. I’ve just been thinking a lot about elevating the content, because my whole thing, especially if you listen to my audio podcast, is I don’t want it to sound like a radio show.
I don’t want my blog to look like a magazine article or a newspaper article. I want my blog to have content that is different for blogging. When it comes audio, the best way I can explain it is almost like my podcast, where I look at it as experiments in audio. That’s literally what I do. I’ve gone from like conversations with people to blabbing while walking on the beach. Like I try and go through different ways in which I can experiment with this, because otherwise it is, it’s just radio. Not that there’s anything wrong with radio. It’s just there’s radio. I can do a radio show then.
I’m trying to create new things, whether it’s with text or audio, that engenders people to think differently about how you can publish in this modern age.
David: Right, cool. So, I want to switch gears a little bit and talk about your podcasts. You got in kind of on the ground floor of podcasting, like nearly five years ago. You have over 200 or around 250 episodes under your belt. What have you learned on this journey, and how has podcasting helped your business?
Mitch: It’s funny, when I started podcasting, I remember there being this holy trinity of “Jaffe Juice,” Joe Jaffe’s show, “For Immediate Release” with Shel and Neville, and “Inside PR” at the time it was David Jones and Terry Fallis, and I remember thinking to myself, really seriously like, oh, I totally missed boat on this podcasting thing. I literally went to the first PodCamp in Boston which is where I met C.C. Chapman in person for the first time. We’d already been friends online. Chris Brogan, and we all became fast buddies from then for sure. But it was really interesting, I remember going to that event thinking like, oh, I totally missed the boat on this podcasting thing.
It’s funny you say that. I think it sort of goes back to what I was saying earlier. It’s just a great way to share content. I look at the B2B aspects of it as well, where like you could use podcasting as an inside channel to keep your company together. You could interview the latest people in the marketing department or HR and connect with.
What I’ve actually learned is that, if you’re willing to try to do things that are different, if you’re willing to try and press on and have real valuable conversations with people, capturing them in audio and sharing them with the world is a very powerful idea actually. That’s been my thing. As my audience builds and I have access to people, like I’ve had Steve Wozniak and Seth Godin and people like that, what I’m realizing is it’s also that journalism guy in me where it’s like I suddenly have this ability to take people who we all admire and like and ask them the questions all of us would love to ask them. I use that as a platform to do that. I really will ask them questions that I think you might want to know or even me, because I’m my own fan boy in all of this as well.
Has it grown the business? The answer is yes and no. It hasn’t in the sense of nobody’s called us in and said we’re hiring you because, man, you’re good, you have a great podcast. No, that has not happened. But I think what happens is when people do an audit of who Twist Image is, and they see the blog, they see the podcast, they see the articles, they see the speaking and the book and they see the quality of the work and the teams that we have and the way it all comes together, I think they develop an understanding that clearly, hopefully, these people understand how to use the channel. They’re not just talking about it. They’re actually living it and breathing it.
So, I think, in that instance, it’s been a tremendous motivator for people to want to work with us. I don’t think any of our clients, and we have a lot of clients, have spent all that much time listening to every single episode and downloading it every Sunday it comes out. But I think they have a cognizance, awareness of the fact that this agency has a passion and desire to not only help themselves grow but help the industry grow and that they are walking the talk essentially.
David: Right. Yeah, I think it helped your company kind of with that thought leadership, basically showing these people obviously know what they're talking about and they eat their own dog food. I think that's a good point to kind of transition into my next question, which would be: let's pretend that we're setting up the ultimate marketing education MBA program. What would be some of the marketing books that you would say would be required reading?
Mitch: Wow, put me on the spot. I'm a big fan of the stuff that Seth does, obviously. I think David Armano, when my book came out, said, "How do you feel about being compared to Seth Godin?" I said, "Wow, I'm good. I'm really good." Because I admire his work and I admire the stuff he's done, and I've known him for well over a decade at this point too. The stuff he does I think is just very, very consumable. I think he takes very complex ideas and simplifies them in a way in which you can read it. When you're done reading it, you literally whack yourself on the head and go, "Why don't I do this again?" I think that's a very powerful gift he has. The stuff he's done in books like "Linchpin" and, obviously, "Permission Marketing" and "Unleashing the Ideavirus" I think is great. But even if you're an entrepreneur, he's got an earlier book called "Survival Is Not Enough" that I think is paramount to any business. I think that area is great.
I’m a huge fan of Avinash Kaushik. Avinash is the analytics evangelist at Google. “Web Analytics: An Hour a Day.” His other book is called “Web Analytics 2.0.” It sounds, oh, god, web analytics, but I mean the way he writes and the way he explains what we measure and what we can use these channels for is absolutely astounding.
I’m a big fan of Tom Peters. I thought that the book “Reimagine” really gave me the ability to reimagine what business could be like. People will probably roll their eyes because he’s a close friend, but Joe Jaffe, I think has done some great stuff especially, for me really that first book of his, “Life After the 30-Second Spot” really, I thought, set a course for advertising and thinking about it differently. “Flip the Funnel,” his latest is great. He’s done great stuff there.
I would be lying if I didn’t say that “The Cluetrain Manifesto” is essentially one of the bibles. It’s one of those books where I could pick out any page, read it, and be like, wow, that’s just so fresh. Ten plus years on, it’s amazing that that book is over a decade old.
I’m a huge fan of the Eisenberg brothers, Bryan Eisenberg. He’s a great friend. But his book, “Call to Action,” he and his brother wrote this book a long time ago. It’s probably the bible for online marketing. He’s got a great book on conversion called “Waiting for Your Cat to Bark” that I also love a lot.
If I look over to my bookcase, what else do I see?
David: "The Cluetrain Manifesto" came out close to 11 years ago now. You wrote a blog post kind of recapping that. I don't know if it was too recently, but has anything really changed, or is it still kind of dead on in the methodologies that the authors were laying out?
Mitch: It’s dead on. It’s scary dead on. In fact, I recount a story that I was in Europe, I think last week, and I was sent this tenth anniversary edition of the book and what was really funny, there was a testimonial on the back from the Montreal Gazette. I realized, oh my god, I was the guy who wrote that testimonial. I guess I had written about the book and they put it on. It was kind of weird for me.
What happened was, I came back from Europe and I was severely jet lagged and the book was just near my night table. As I was rolling in and out of sleep, like you know when you're jet lagged there are these weird deep sleeps but then you're super awake. I started reading the book, and I got so into it because there were these epiphany moments. They wrote the book long before Twitter ever existed, but there's this section that basically explains Twitter. I sort of felt like I was in this weird, half asleep, half awake world of like "The Da Vinci Code," where I had uncovered something hidden in this book. It sounds like a crazy drug rant, but it's true. I was literally looking at the book saying to myself, "You know, I bet if I even dig in deeper, there's probably some inklings of what the next Twitter will be, like literally." So while they don't talk about Twitter or necessarily blogging or anything like that, the foundations of the strategy, thinking about how consumers think and what has changed, are not only spot on, they're fresh as a daisy.
David: Cool. So switching back to podcasting for a second, for businesses thinking about getting into podcasting, do you have any tips for them getting started? Thinking about content, different channels to kind of distribute through?
Mitch: Yeah, my general advice is always, actually, do an internal podcast. Start internally. Use it for your team. This way you can get used to the gear. You can get used to the editing, you can get used to the publishing. You can really think about things differently. I believe that doing it internally is a really, really powerful way to do it. Also, believe it or not, people don't realize this, but YouTube is the number two search engine in the world after Google. People don't know it. They think it's Bing or Yahoo. It's actually YouTube.
YouTube has so much amazing video content on how to produce a podcast that it's staggering. I actually switched from PC to Mac not too long ago, and I was really nervous about my audio software. One, because the software I used on PC wasn't really for audio editing. It was basically you record live, and off you go. It was very sort of dead easy. That software didn't exist for Mac, and I realized I've got to learn how to audio edit. I had a friend come over and show me Audacity, which people would probably laugh at because I had to actually have someone show it to me. That's how sort of naive I was with it. But after they left, I was like, oh, maybe I won't remember everything. I went onto YouTube and there were these amazing video tutorials about it that just totally give you great skills.
So if you’re really curious, hop onto You Tube and type in podcast, podcast creation, how do I podcast and just sit back and enjoy the video. It’s great.
David: It’s really easy to get set up. I would recommend getting a decent mike, because audio is definitely important. But yeah, the barrier to entry is really low. Just get up there and start creating that content. The audience will give you feedback on where to go from there, right?
Mitch: Yeah. My whole thing is, again, if you know that you're good at audio or you're good at text or you're good at video or you're good at shooting pictures, start there. Start in the media you're most comfortable with. Again, my background was in writing. It was in journalism. So blogging and Twitter were really natural and intuitive for me. I'd done some college radio, and again, I was a professional journalist for like 16 years, so I was doing four to five interviews a day. So I knew that I could have conversations with people. I didn't necessarily know the technology or how to make it work as a podcast, but I understood the sort of mechanics of getting an interview done.
You don’t hear me doing ums and uhs a lot. You won’t hear me interrupting you when you ask me question, because I know that if I had to transcribe this after, which I usually had to do, it was really annoying to listen back to audio and hear yourself going um, uh, uh-huh over someone who’s talking. So there are some really powerful interview skill sets too, and I would recommend that part of the process, especially if you’re interested in audio or podcasting, would be to read books on how to create great interviews, how to give interviews, how to conduct interviews, and even how to tell a story.
One of the big things with social media even on the Web is it’s about telling great stories. There are amazing books out there on how to tell great stories. So check them out and find the ones you like and off you go.
David: Totally. Awesome tip. So, Mitch, where can people find you online?
Mitch: Hey, I’ve heard that line before. Do we all do that at the end of a podcast?
David: I don’t know. I should start doing like . . . so you start out your show, who are you and what do you do?
Mitch: Funny story with that, somebody was like, "What are you, Robert Scoble?" I was like, "What do you mean?" Apparently, Robert Scoble starts off all his interviews with who are you and what do you do. I'm like, I didn't even know that. So yeah, we all steal from each other.
David: I think it's easier. I usually stumble through the intro. So I should just be like, "Hey, who are you and what are you doing on my show?"
Mitch: The problem with it is a lot of people don't want to self . . . they feel uncomfortable saying, "Well, I'm known as the . . ." Sometimes you have to reinforce it after. People can always find me at www.TwistImage.com/blog or just do a search for Mitch Joel. The book, blog, and podcast are called "Six Pixels of Separation." Even just doing a search for Six Pixels will help you find me.
David: Cool. Well, thanks for coming on the show, Mitch, I really appreciate it, and I’m a big fan of your show, “Six Pixels of Separation” and I definitely recommend everyone checking it out.
Mitch: Well, it’s a mutual admiration society. I love HubSpot.
David: Hey, there you go. All right, cool. Well, thanks for coming on the show.
Mitch: Cheers.
During the show we chat about:
Bridging the gap between digital and traditional marketing
How he leverages his podcast to grow his agency
And some tips on creating remarkable content
Marketing books that are a must read
Linchpin by Seth Godin
Permission Marketing by Seth Godin
Survival Is Not Enough by Seth Godin
Web Analytics: An Hour a Day by Avinash Kaushik
Reimagine by Tom Peters
Life After the 30-Second Spot by Joe Jaffe
Call to Action by Bryan Eisenberg
Waiting for Your Cat to Bark by Bryan Eisenberg
David is Founder of Inbound Now and a Fanatical WordPress Designer & Developer. He believes that the internet is a magical place where wonderful things can happen. Say hi to him @DavidWells