Internet Explorer 7 Beta 1 Review
Paul Thurrott's Supersite for Windows
Paul Thurrott
I have a checkered history with Internet Explorer, and by this point, my relationship with Microsoft's controversial browser is so tainted, I have to admit that I approached this review with some trepidation. My history with IE dates all the way back to 1995, when Windows 95 first shipped and Microsoft released the first version of this browser.
At the time, IE was sort of a joke, and Netscape Navigator ruled the Web. Netscape was pumping out new browser versions every couple of weeks, it seemed, and Microsoft's first attempt seemed a bit sad by comparison. That said, I actually liked it: IE 1.x looked a lot like the Explorer shell in Windows 95, with compact, square buttons. It's hard to appreciate this now, but at the time, the Windows 95 look and feel was brand new and IE just seemed to fit in.
I actually switched to IE full time when IE 2 was released in late 1995, though in retrospect it's unclear what advantages it offered over 1.0. By early 1996, Microsoft began publicly discussing IE 3.0 and, at a developer show that simulcast to theaters around America, Joe Belfiore showed off IE 3.0 alpha features like frames, HTML layout, and multimedia support. I was hooked, and though early builds of IE 3 were difficult to use, the final version, released in August 1996, was a watershed event in the industry. IE, for the first time, bested the feature set of Netscape Navigator. And it would never look back: From that point on, IE began gnawing away at Navigator, and would soon overcome it for good.
In late 1996, Microsoft canceled its original plans for IE 4.0 and retooled after hearing that Netscape was going to try and replace the Windows shell with an HTML-based shell codenamed Constellation. The original IE 4.0 plans called for an evolutionary update to IE 3.0 that would have included features such as Site Map and an integrated FTP client. However, catching wind of Netscape's plans, Microsoft recast IE 4.0 as a major project called "Nashville" which would combine the Windows shell with the HTML rendering engine in IE, blurring the line between local content on your PC and remote content from the Web.
Nashville resulted in two products. The first was the standalone version of IE 4.0, released in late 1997, which included the expected browser, of course, but also a new integrated version of the Windows shell, an Active Desktop that combined the Windows desktop with a Web-based layer, and other controversial features. The second was Windows 95 OSR-2, which included these new IE-based integration elements, as would every future version of Windows, including, alarmingly, those based on Windows NT.
It was here that my support for IE began to flag and then, eventually, completely unwind. Bundling IE with Windows was one thing. Integrating it deeply into the Windows core was quite another. Unlike Windows and NT, IE was new code, and adding it deeply into Windows at such an early stage--and only because of a perceived competitive threat that, frankly, never materialized anyway--was just a bad decision. The ramifications of that decision are still with us today. IE is now one of the most obvious attack vectors for malware in Windows, and the weakest technical link in the so-called shield that separates hackers from your precious data.
Anyway. The next few IE releases were relatively uninspiring updates and an unintended omen of things to come. That's because IE was starting to pull away in the market, and Microsoft had fewer reasons to improve the browser, now that Netscape was imploding. IE 5.0 was "an incremental, evolutionary upgrade to IE 4.0" (see my review and my tech showcase) that sported an alarming number of proprietary Web features. IE 5.01, included with Windows 2000, set the stage for future IE versions by offering a huge array of security fixes (see my review). And IE 5.5--designed to coincide with Windows Me--was just as unexciting, with security and bug fixes, more proprietary Web technologies, and print preview. Yawn.
The last time I reviewed a standalone IE version was in late 1999, over five years ago, and I wasn't too impressed. And though I did describe IE 6.0 as "wonderful" in my Windows XP Home and Professional Editions Review (see my review), I also noted that "it doesn't seem much different than the IE 5.x products it replaces." Since then, IE 6.0 stagnated for three years before Microsoft finally got around to updating it in XP Service Pack 2 (SP2, see my review). With SP2, IE finally got pop-up ad blocking and a simple plug-in management system, but not much else. I commented on its "laughable compliance with Web standards" and noted that I would continue using Mozilla Firefox, which I have. As a result, I have never suffered from a spyware or malware attack, a common occurrence for IE users. And, I've been beseeching people to use Firefox--which, in addition to better security, has a slew of awesome end-user features not found in IE--instead of Microsoft's buggy browser.
Then the IE 7 beta happened.
Microsoft announces IE 7
It's important to understand that Microsoft had effectively killed IE. That is, the original plan for Windows Vista, the next major version of Windows (see my Beta 1 review), called for IE to be subsumed completely into the Windows shell. There were to be no more standalone IE updates.
Two things changed those plans. First, hacker attacks on IE 6 reached record levels, with Microsoft releasing IE 6 patches constantly over a three year period. Second, the Mozilla Foundation, which rose out of the ashes of Netscape, developed the aforementioned standalone browser, Firefox (originally called Phoenix, and then briefly Firebird), which, amazingly, began eating away at IE's market share. At the time of this writing, Firefox is closing in on 10 percent of the market, with all of that market share--all of it--coming at IE's expense. In certain technology-oriented circles, Firefox's share is actually much, much higher than that, and it actually outstrips IE in some cases.
When you combine these factors with Windows Vista's constant delays--now due in late 2006, the product was first aimed at a 2003 release--it was pretty clear that Microsoft had to do something. As I documented in my first IE 7 Preview, Microsoft chairman Bill Gates announced that his company would ship IE 7 for Windows XP with Service Pack 2 (SP2; and, as it turned out, for Windows XP x64 and Windows Server 2003 with SP1).
Europe and the Middle East 1000AD to the Present, software for Windows and Mac OS X
The CENTENNIA Historical Atlas
Recent Additions and Changes: Single-user access code: lower price
New Windows edition (Windows 8, 7, Vista, XP compatible)
Macintosh OS X edition (10.5 or above, Mavericks, Lion, etc.)
Added review by Prof. Charles Ingrao
EU focus (for example, see the EU in 2008)
Read about Frank Reed, the creator of Centennia
Centennia Software's home port is now Conanicut Island USA
Here's Frank Reed, creator of the Centennia Historical Atlas, with Neil deGrasse Tyson:
CENTENNIA is a map-based guide to the history of Europe and the Middle East from the beginning of the 11th century to the present. It is a dynamic, animated historical atlas including over 9,000 border changes. The map controls evolve the map forward or backward in time, bringing the static map to life. Our maps cover every major war and territorial conflict, displaying the status of each region at intervals of a tenth of a year. The maps reflect actual "power on the ground" rather than internationally-sanctioned or "recognized" borders.
From Kevin Kelly's review of Centennia, which was published in the Whole Earth Catalog:
"As a kid I dreamed of maps that would move; I got what I wanted in
Centennia. This colorful political map of Europe and the Mid-East redraws
itself at yearly intervals from the year 1000 to present. It's a living map,
an atlas with the dimension of time. I can zoom around history, pause at
particular dates, or simply watch how nations melt away, or disintegrate
into tiny fragments, or unite! Year by year the outlines of tribes and nations
spread, retreat, and reform almost as if they were tides or infections. The
resolution of detail (almost at the "county" level) is astounding; the breadth
of time (ten centuries) thrilling. It rewards hours and hours of study."
Kevin Kelly is editor-at-large and co-founder of "Wired" magazine and an all-around prophet of the digital age.
The Centennia Historical Atlas was required reading for all beginning students at the US Naval Academy at Annapolis for over twelve years. Over 1150 copies have been purchased annually for all prospective naval officers at Annapolis. The software serves as a visual introduction to Western History from a cartographic perspective. Centennia is also licensed by hundreds of secondary schools, colleges, and universities worldwide. Editions of the Centennia Atlas are available in Greek and German, as well as English.
Individual home users also purchase the Centennia Historical Atlas. It's ideal for anyone who loves maps and history, and it's also extremely popular among genealogy enthusiasts. There's no easier way to get a long-time-scale perspective on the history of the regions of Europe and the Middle East than by watching the borders shift back and forth in Centennia.
Professor Charles Ingrao of Purdue University wrote:
The Centennia Atlas offers an instant antidote to the problem of changing frontiers. It permits
you to view any part of Europe, North Africa or the Levant from A.D. 1000 to
[the present]. You can also go forward (or backward) in time, which permits you to see
the map change in five-week intervals for the period and region of your choice.
Centennia also provides a "historical gazette" and glossary of
names/places that students might find useful. It even traces the changing
battlefronts between countries in wartime, so you can follow the inexorable
march and retreat of the Austrian armies in the Balkans and elsewhere. I was
most impressed by the developer's incredible eye for detail, which was more
precise (and often more accurate) than Magocsi's new Historical Atlas of
East Central Europe. Centennia is no less precise for Germany. Since much of my earlier work
dealt with the early modern German states, I especially appreciated the
excellent detail that Centennia provides for some of the smaller (but not
the very smallest) Kleinstaaterei.
CENTENNIA covers in detail the rise and fall of the Ottoman Empire,
the Hundred Years War, the Mongol invasions, the Napoleonic Wars,
the Unification of Italy and Germany, the First World War, the Rise of Nazi
Germany, the Arab-Israeli wars, and even recent events like the collapse
of the USSR, the wars of the former Yugoslavia, and the Chechen wars.
Some video samples from the Centennia Historical Atlas:
Some earlier non-official versions created using the Centennia Historical Atlas appeared under the titles "Ten centuries in five minutes", "Epic time-lapse of Europe", and "European time-lapse map".
The Centennia Historical Atlas software runs under Apple Macintosh OSX (Leopard, Snow Leopard, etc.), as well as Microsoft Windows (8/7/Vista/XP). The software requires 20 megabytes of hard disk space and 40 megabytes of memory. Centennia does not have any other significant system requirements, and it will run well on almost any computer made in the year 2000 or later.
The downloadable edition of the Centennia Historical Atlas is available at no charge. It covers the French Revolutionary and Napoleonic Era from 1789 to 1819. The map data and text for the full period from 1000AD to the present are already present in the download file and may be opened at any time with an access code. A single-user license access code is priced at $59.00 (plus shipping and handling, if required). We also have site license pricing and group rate pricing. We also accept purchase orders from schools, universities, and campus book stores.
Watch a short video guide to the Centennia Historical Atlas software. This video was created by Legacy Family Tree, one of our dealers. © Copyright 2002-2013 Centennia Software, Conanicut Island USA. All rights reserved. www.HistoricalAtlas.com.
Acquisition Support
The SEI works directly with federal defense and civil programs. Teams of acquirers, developers, and operators help government navigate the complexities of acquiring increasingly complex software and systems.
Increasingly, the Department of Defense (DoD) and federal agencies acquire software-intensive systems instead of building them with internal resources. However, acquisition programs frequently have difficulty meeting aggressive cost, schedule, and technical objectives.
The SEI works directly with key acquisition programs to help them achieve their objectives. Teams of SEI technical experts work in actual acquisition environments in the Army, Navy, and Air Force, as well as other DoD and civil agencies, applying SEI products and services in specific contexts.
Our vision is to facilitate the rapid establishment of agile teams composed of acquirers, developers, and operators using SEI technologies to provide evolutionary, high-quality, cutting-edge software-intensive capabilities to the warfighter. Acquisition program managers are challenged not only to grasp practical business concerns, but also to understand topics as diverse as risk identification and mitigation, selection and integration of commercial off-the-shelf (COTS) components, process capability, program management, architecture, survivability, interoperability, source selection, and contract monitoring. The SEI has spent more than two decades compiling a body of knowledge and developing solutions for these topics. The SEI is focused on direct interaction with the defense, intelligence, and federal acquisition communities by
transitioning technologies and practices to improve DoD software-intensive systems
performing diagnostics such as Independent Technical Assessments (ITAs) and Independent Expert Program Reviews (IEPRs)
helping with RFP preparation
helping with technical evaluations of proposals and deliverables
collaboratively developing acquisition technologies and practices
transitioning technologies and practices to the DoD acquisition community's collaborators
reviewing and advising the DoD on acquisition policy related to software-intensive systems
The SEI is focused on delivery, support, and integration of software-intensive systems acquisition practices to help acquisition program offices. The SEI is positioning itself as a facilitator and leader of a community of practice for the acquisition of software-intensive systems.
Spotlight on Acquisition Support
Acquisition Archetypes: Robbing Peter to Pay Paul
This April 2009 whitepaper is one in a short series on acquisition failures. This paper focuses on the problems of underspending, which can result in funds being shifted from one program to another.
Software Acquisition Survival Skills | 计算机 |
RIPE NCC Regional Meeting Moscow Sets Precedent for Internet Community in Russia, CIS and Eastern Europe
7 October 2010 - The RIPE NCC held its 7th Regional Meeting in Moscow from 29 September - 1 October. With over 380 attendees, the meeting was a huge success, with tutorials, presentations and panel sessions on topics including DDoS Attacks, DNS Security (DNSSEC), Operations of Networks and Exchange Points, Regional Connectivity and Capacity, the Evolution of the Internet and IPv6 Deployment. During the meeting, the attendees decided to further their cooperation by proposing the creation of a regional forum in which the region’s Internet experts could collaborate on issues unique to the Russian Federation, CIS and Eastern Europe. Alexey Soldatov, Russian Federation Ministry of Telecommunication and Mass Media, told the attendees in his welcome address, "The future of the Internet depends on you to establish its infrastructure and maintenance. The RIPE NCC Regional Meeting Moscow is an excellent example of self-organisation and the exchange of experiences between the Russian Federation technical community and global Internet experts enables us to be fully involved in the development of the Internet."
Paul Rendek, RIPE NCC Head of External Relations and Communications, commented, "The RIPE NCC welcomes any move to create regional forums for network engineers and other technical staff that enable them to share their experiences and knowledge and identify areas for regional cooperation. It is the next logical step for a community that has, over the last five years, established itself as a unique gathering of technical minds. The RIPE NCC offers its full support and guidance throughout the process and looks forward to facilitating this process and the resulting forum."
Founded in 1992, the RIPE NCC is an independent, not-for-profit membership organisation that supports the infrastructure of the Internet. The most prominent activity of the RIPE NCC is to act as a Regional Internet Registry (RIR) providing global Internet resources and related services to a current membership base of around 7,000 members in over 75 countries.
These members consist mainly of Internet Service Providers (ISPs), telecommunication organisations and large corporations located in Europe, the Middle East and parts of Central Asia.
As one of the world’s five RIRs, the RIPE NCC performs a range of critical functions including:
The reliable and stable allocation of Internet number resources (IPv4, IPv6 and AS Number resources)
The responsible storage and maintenance of this registration data
The provision of an open, publicly accessible database where this data can be accessed
The RIPE NCC also provides a range of technical and coordination services for the Internet community. These services include the operation of K-root (one of the 13 root name servers), the Deployment of Internet Security Infrastructure (DISI) and DNS Monitoring (DNSMON).
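For readers curious what an "open, publicly accessible database" means in practice, the short sketch below queries the RIPE Database over the standard whois protocol (TCP port 43, RFC 3912) against the publicly documented server whois.ripe.net. It is only an illustration: the queried address is a placeholder, and the RIPE NCC also offers other query interfaces that are not shown here.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Minimal sketch: look up an object in the public RIPE Database via whois.
// Assumes the standard whois service on whois.ripe.net, TCP port 43 (RFC 3912).
public class RipeWhoisLookup {
    public static void main(String[] args) throws Exception {
        // Placeholder query; pass an IP address, AS number or object name instead.
        String query = args.length > 0 ? args[0] : "193.0.6.139";
        try (Socket socket = new Socket("whois.ripe.net", 43);
             PrintWriter out = new PrintWriter(socket.getOutputStream());
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            // The whois protocol is a single query line terminated by CRLF.
            out.print(query + "\r\n");
            out.flush();
            // The server streams back the matching registry objects and then closes the connection.
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```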
As a result of its established position in the Internet industry, the RIPE NCC has played an important role in the World Summit on the Information Society (WSIS), the Internet Governance Forum (IGF), European Union (EU) workshops and government briefings on key issues in the current Internet landscape.
For media enquiries please contact
Blaise Hammond / Lucie Smith Racepoint Group UK Tel: +44 208 752 3200 Email: ripencc _at_ racepointgroup _dot_ com
2012 Evaluation Team Report: Web Browser Evaluation
Executive Summary
The 2012 Web Browser Evaluation Team was tasked with evaluating the impact of the changing web browser climate for desktop and mobile operating systems. It is important to note that the team was not tasked with providing a recommendation on a single best-in-class browser; rather, it was asked to assess the current and upcoming generation of web browsers and consider the implications for Penn’s various constituents. The team divided Penn’s constituents into three separate communities, and provides the following recommendations for each subset:
Developers: The team suggests (at a minimum) that the primary goal for developers should be to provide access from an operating system's built-in browser offering (for example, the latest Safari on OS X). The secondary goal should be to support a secondary browser for the operating system (for example, the latest Firefox or Firefox ESR). Developers should clearly communicate expected levels of functionality for each browser and application. When access to an application using a particular browser is restricted, the reasons for the limitation should be clearly provided at point of access, in a format that is comprehensible to both End Users and Local Support Providers.
Support Providers: The team suggests that Local Support Providers should utilize a "Browser + 1" model on all managed systems. On unmanaged systems, Local Support Providers should strongly encourage and promote this model. Because browser versions change frequently, LSPs should deploy frequent updates to managed systems or allow users to update browsers themselves so that security vulnerabilities are mitigated.
End Users: End Users managing their own systems should also adopt a “Browser + 1” model, and utilizing this model should provide a reasonable expectation of the availability of services.
As the environment continues to evolve, schools and centers will need to continue adopting support for a "bring your own" environment as it pertains to web browsers.
Evaluation Methodology
The 2012 Web Browser Evaluation Team considered the built-in and most popular (as of early 2012) web browsers for desktop and mobile platforms. These included Firefox and Internet Explorer (with multiple versions identified), and Safari and Chrome (with the latest version indicated). The team considered University-supported desktop and mobile platforms (with the addition of Android due to its popularity). The team began by identifying pertinent Penn-provided and Penn-affiliated websites with heavy usage by University personnel. After identification, a testing matrix was developed for each combination of browser and application (See: Mobile evaluation and desktop
evaluation sheets). For each combination, a “Yes”/”No” answer was recorded as it pertained to functionality. When an issue was encountered with a specific combination, a note was made indicating (in as much detail as possible) the cause and problem.
In addition, a separate “Pros and Cons” list was developed for each web browser platform. For each point of interest, documentation is provided from a reliable source. While the team does not make any recommendations on best-practices or use-cases for each browser, the hope is that this list can be an aid in making these decisions. (See: Pros and cons)
The team tested baseline browsers with only supported add-ons (for example, Java 6 update 33) and no major customizations to settings that would impact compatibility. The team attempted to maintain the integrity of testing, while realizing that the scope of testing had to be fairly limited (due to time and manpower restrictions). While the team endeavored to accurately and fully test every application for every browser, some of these applications have restrictions to certain features. Where an application was tested and worked for all available functionality that the tester had access to, the team made the assumption that the application was compatible. Due to time constraints, the team also asserted that browsers across similar operating systems work similarly (for example, Windows 7 and Windows Vista Firefox work similarly), except where indicated. Finally, because Android allows for heavily modified user interface (UI) overlays, the team adopted a supported stance for the built-in browser with a “use at your own risk” recommendation for overlays.
In years past, the Browser Evaluation Teams provided a recommendation on a single preferred campus browser. Rather than recommending a particular browser, this year's team developed a list of Pros and Cons for each browser on mobile and desktop platforms, to help Penn constituents make an informed decision based on preferences and use case. The list provides information (using cited resources) on browser strengths and weaknesses. The team also made a best effort to collect data for most combinations of modern web browsers, operating systems, and applications.
Browser Pros and Cons
Desktop Platform Results
Mobile Platform Results
As it pertains to developers, the team reiterates that applications should work with University-supported built-in browsers. On desktop platforms, these supported browsers are Internet Explorer 8 and 9 and the current version of Safari. On mobile platforms, these are the iOS, Android, and Windows mobile built-in browsers. Developers should work to ensure all technology End Users are equipped with a baseline operating system and built-in browser that supports access to their application. While the optimal environment would be one where access to an application was browser-agnostic, the optimistic view is that developers will strive to support their applications using the "browser + 1" model (the built-in operating system's browser plus a secondary browser). Where developers are not able to develop compatibility with a specific browser, the rationale for this limitation should be clearly communicated (for example, if a user logs into an application using Chrome and that application is not compatible, the website should disseminate specific reasons why the limitation is in place) in a format that is both comprehensible to End Users and technically informative to local support providers.
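As a rough illustration of the kind of point-of-access message described above, the sketch below shows one way a Java web application might intercept a browser it has not been certified for and explain the limitation instead of failing silently. The servlet-filter approach, the User-Agent rule and the wording are all assumptions made for the example, not anything prescribed by the evaluation team.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter that explains a browser restriction at the point of access.
public class UnsupportedBrowserFilter implements Filter {

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String userAgent = request.getHeader("User-Agent");

        // Example rule only: suppose this application has not been certified for Chrome.
        if (userAgent != null && userAgent.contains("Chrome")) {
            response.setContentType("text/html");
            response.getWriter().write(
                "<p>This application has not been certified for Chrome because it depends on a "
              + "plug-in that Chrome does not support.</p>"
              + "<p>Please use your operating system's built-in browser (Internet Explorer 8/9 "
              + "or Safari) or Firefox.</p>"
              // Comment in the page source gives LSPs the technical reason for the block.
              + "<!-- LSP note: request blocked by UnsupportedBrowserFilter on User-Agent match 'Chrome'. -->");
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}
```

In a real deployment the message would name the actual dependency and point to the compatibility information maintained for Local Support Providers.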
For local support providers, particularly where a system is manageable, emphasis is put on enforcing or allowing users to maintain an up-to-date browser where possible (particularly easy with Firefox's and Chrome's update methodology, yet challenging with Safari and Internet Explorer). To some degree, limiting customizations that would impede access ensures a fairly homogeneous and consistent user experience across the board. When a system is not manageable, communicating the appropriate requirements and limitations that have been relayed by the developers is key. When providing recommendations on browsers for particular use case scenarios, LSPs can provide recommendations based on the "Pros and Cons" list. Where browser-agnosticism isn't possible, the decision on use of browser should be based on the merits of the browser itself and user preferences. The push should also be for the "browser + 1" model espoused in this report, ensuring a fallback should developers be unable or unwilling to follow the above recommendations.
As it pertains to End Users, we increasingly see an environment in which users expect more choice about their technological experience. Where access to an application is impossible, the onus is on developers to clearly communicate the specific incompatibility. By informing users about expectations, we can limit End User frustration and the extent to which they are unable to access web-based applications.
Date Posted: June 28, 2013
Please Note: This article is the final report of a past evaluation team. The information may no longer be current and has been made available as a historical reference only.
We've gathered project and spec leads, go-to bloggers, best-selling authors and industry insiders for 3 days of information sharing. Here's a list of who is confirmed to present at TheServerSide Java Symposium 2011:
James Gosling
Adam Messinger
Bear Bibeault
Adam Bien
Andy Bosch
Jeanne Boyarsky
Bill Burke
Stephen Chin
Cliff Click
Adrian Cole
Emiliano Conde
Patrick Curran
Janeice Del Vecchio
Jerome Dochez
Johan Edstrom
Michael Ernest
Jonathan Fullam
Jeff Genender
Dan Hardiker
Iran Hutchinson
Claus Ibsen
Jevgeni Kabanov
Max Katz
Jon Kern
Mik Kersten
Heath Kesler
Tom Kincaid
Jim Knutson
Justin Lee
Cameron McKenzie
Andrew Monkhouse
Charles Nutter
Karen Tegan Padir
Kirk Pepperdine
Reza Rahman
Matt Raible
Scott Selikoff
Mark Spritzler
Craig Tataryn
Martijn Verburg
Patrycja Wegrzynowicz
Jason Whaley
Paul Wheaton
Meet the Java Experts
James Gosling, Father of Java
Presenting: Keynote: Surfing the Currents of Change and Panelist for Keynote Panel Discussion: The Java Community Process: What's Wrong and How to Fix It
While most likely everyone knows Java expert James Gosling by name recognition alone, his official bio includes a BSc in Computer Science from the University of Calgary followed by a PhD in Computer Science from Carnegie-Mellon University. Best known for the original design of the Java programming language and the implementation of its original compiler and virtual machine, he has also contributed to the Real-Time Specification for Java and was an original researcher at Sun labs where his primary interest was software development tools, prior to becoming Chief Technology Officer of Sun's Developer Products Group and most recently CTO of Sun's Client Software Group when Oracle finalized their acquisition of Sun in early 2010.
One of the computer industry's most noted programmers, he is also the recipient of Software Development's "Programming Excellence Award" in addition to co-authoring such programming bibles as The Java Language Specification and The Java Programming Language. In the past, James has been a speaker at key industry events and Java conferences, including last year's Java Symposium and JavaOne 2009. Visit James Gosling's blog: Nighthacks.
Steve Harris, Senior Vice-President, Application Server Development, Oracle
Presenting: Co-Keynote: Java in Flux: Utopia or Deuteronopia?
Java expert Steve Harris is senior vice president of application server development at Oracle. He joined Oracle in 1997 to manage development of the Java virtual machine for the Oracle8i release. Since then, his role has expanded to include the entire Java Platform, Enterprise Edition technology stack in the Oracle Application Server and WebLogic Server product lines, including EJBs, Servlets, JSPs, JDBC drivers, SQLJ, TopLink, and Web services support in both the application server and database. Prior to Oracle, Mr. Harris was vice president of engineering at Java predecessor ParcPlace-Digitalk following acquisition of a startup providing an object-oriented database for Smalltalk developers he co-founded in 1993. More than 13 years in scientific and engineering computing, consulting, document management, and systems integration
experience followed Mr. Harris's degrees from George Washington University and UC Berkeley. Steve is co-presenting the Keynote presentation Java in Flux: Utopia or Deuteronopia? with Adam Messinger.
In the past, Steve has been a speaker at key industry events and Java conferences, including EclipseCon 2010 (visit the conference website).
Rod Johnson, Creator of the Spring Framework; Author, J2EE without EJB and more
Presenting: Keynote: Driving Java Innovation to the Cloud and Cloud Keynote: Bringing Code to the Cloud and Back Again
Rod is the father of Spring, co-founder and CEO of Interface21, and one of the world's leading authorities on Java and J2EE development. Rod's best-selling Expert One-on-One J2EE Design and Development (2002) was one of the most influential books ever published on J2EE. The sequel, J2EE without EJB (July 2004, with Juergen Hoeller), has proven almost equally significant, establishing a comprehensive vision for lightweight, post-EJB J2EE development. Rod regularly speaks at conferences in the US, Europe and Asia, including the ServerSide Symposium (2003, 2004, 2005 and 2006), JavaPolis (Europe's leading Java conference) in 2004 and 2005, JavaZone (2004 and 2005) and JAOO (2004). He was awarded a prize for giving one of the top 20 presentations (by evaluation) at JavaOne in 2005.
Rod serves in the JCP on the Expert Groups defining the Servlet 2.4 and JDO 2.0 specifications. His status as a Java expert and leader in the Java community has been recognized through his invitation to Sun’s Java Champions program. Rod continues to be actively involved in client projects at Interface21, as well as Spring development, writing and evangelism.
In the past, Rod has been a speaker at key industry events and Java conferences, including previous years of TheServerSide Java Symposium and the Jasig Spring 2010 Conference, and he will speak at other upcoming events, such as OSCON 2011 (Open Source Convention). Visit this blog with contributions from Rod Johnson: SpringSource Team Blog.
Adam Messinger, Vice-President of Development in the Fusion Middleware Group, Oracle
Presenting: Co-Keynote: Java in Flux: Utopia or Deuteronopia?
Java expert Adam Messinger is Vice President of Development in the Fusion Middleware group at Oracle. He is responsible for managing the Oracle Coherence, Oracle JRockit, Oracle WebLogic Operations Control, and other web tier products. Prior to joining Oracle, he worked as a venture capitalist at Smartforest Ventures and O'Reilly AlphaTech Ventures. Adam is a graduate of the Stanford Graduate School of Business where he was a Sloan Fellow and of Willamette University where he was a G. Herbert Smith Scholar. Adam is co-presenting the Keynote presentation Java in Flux: Utopia or Deuteronopia? with Steve Harris.
In the past, Adam has been a speaker at key industry events and Java conferences, including QCon San Francisco 2010.
Bear Bibeault, Author, jQuery in Action
Presenting: How jQuery Made Bob a Happy Man
Bear Bibeault has been turning coffee into quality software since 1976, when he started programming in BASIC on a Control Data Cyber. Having managed to wrestle two Electrical Engineering degrees from the University of Massachusetts, he taught in the Graduate Computer Engineering Program of that esteemed institution for a decade or so. He has also served stints with Digital Equipment Corporation, Lightbridge Inc., Dragon Systems, and a whole slew of other companies no one has ever heard of (or that he's ashamed to admit association with). Bear has authored four books, and contributed on many others, including: Ajax in Practice, Prototype and Scriptaculous in Action, jQuery in Action, and jQuery in Action, 2nd edition.
In the past, Bear has been a speaker at key industry events and Java conferences, including the Emerging Technology for the Enterprise Conference in 2008.
Adam Bien, Author, Real World Java EE Patterns
Presenting: Java EE 6 Patterns and Best Practices: What I Learned in the Field and Lightweight Application Development with Java EE 6
Java expert, Adam Bien, is an Expert Group member for the Java EE 6, EJB 3.1, and JPA 2.0 JSRs. He has worked with Java technology since JDK 1.0 and Servlets/EJB 1.0 in several large-scale projects and is now an architect and developer in Java SE/EE/FX projects. He has edited several books about Java and J2EE / Java EE and is the author of Real World Java EE Patterns. Adam is a Java Champion, Oracle Java Developer of The Year 2010, and JavaOne 2009 Rock Star.
Andy Bosch, Independent Consultant
Presenting: JavaServer Faces in the Cloud
Andy Bosch is an independent consultant and trainer for JSF and Portlet technologies. He wrote the first German book on JavaServer Faces and recently published "Portlets and JavaServer Faces."
Andy is responsible for the website www.jsf-forum.de, a German portal for JSF related topics. Andy is a member of the Expert Group of JSR-301 and JSR-329. He regularly publishes articles in Java magazines and teaches web programming with JSF at various conferences.
Jeanne Boyarsky, Developer for a New York City bank
Presenting: Throw Away All The Rules. Now What Process Do You Follow?
Jeanne Boyarsky is a graduate of Queens College with a degree in Computer Science, and she also holds a Master's degree in Computer Information Technology from Regis University. Currently working as a Java Developer for a bank in New York City, her development
interests include databases, Web programming and testing.
Among her speaking engagements, Jeanne gave a highly regarded 'lightning talk' at the 2007 Google Test Automation Conference, and she has written a number of often cited articles on: JDBC batching, Ant Task Dependency Graphs, The Great Forum Migration Project, and
data Migration to JForum.
Jeanne is also an open-source developer, contributing to Version 1.1.0 of Classpath Suite - Running JUnit 3.8 test classes in a 4.X suite and supporting JUnit 4.4.
Bill Burke, Senior Consulting Software Engineer, Red Hat
Presenting: REST Never Sleeps (And Neither Does Your Middleware)
Bill Burke, senior consulting software engineer at Red Hat, is a JBoss Fellow. A long-time JBoss.org contributor and architect, Bill has founded projects, including JBoss clustering, EJB3, AOP, and RESTEasy, and he was Red Hat's representative for EJB 3.0, Java EE 5, and JAX-RS JCP specifications. Bill authored O'Reilly's EJB 3.0 5th Edition and RESTful Java with JAX-RS and has numerous in-print and online articles.
Stephen Chin, Chief Agile Methodologist, GXS
Presenting: Extending VisualVM with JavaFX
Stephen Chin is a technical expert in RIA technologies, and Chief Agile Methodologist at GXS where he is leading a large-scale Lean/Agile rollout with hundred of developers spread out across the globe. He coauthored the Apress Pro JavaFX Platform title, which is the current leading technical reference for JavaFX, and is lead author of the upcoming Pro Android Flash title. In addition, Stephen runs the very successful Silicon Valley JavaFX User Group, which has hundreds of members and tens of thousands of online viewers. Finally, he is a Java Champion and an internationally recognized speaker featured at Devoxx, Jazoon, and JavaOne, where he received a Rock Star Award. Stephen can be followed on twitter @steveonjava and reached via his blog.
Cliff Click, Chief JVM Architect, Azul Systems
Presenting: A JVM Does That???
With more than 25 years' experience developing compilers, Cliff serves as Azul Systems' Chief JVM Architect. Cliff joined Azul from Sun Microsystems where he was the architect and lead developer of the HotSpot Server Compiler. Previously he was with Motorola where he helped deliver industry-leading SpecInt2000 scores on PowerPC chips, and before that he researched compiler technology at HP Labs. Cliff has been writing optimizing compilers and JITs for over 15 years. Cliff holds a PhD in Computer Science from Rice University.
Adrian Cole, Founder, Cloud Conscious, LLC
Presenting: Java Power Tools: The Cloud Edition
Adrian founded the open source jclouds multi-cloud library two years ago, and is actively engaged in cloud interoperability and devops circles. Recent efforts include vCloud ecosystem engineering at VMware, Java integration at Opscode, and cloud portability efforts at Cloudsoft. Adrian is currently consulting under Cloud Conscious LLC.
Emiliano Conde, Founder and Lead Developer, jBilling Software, Ltd.
Presenting: Distributed to the Extreme: The Open Source Development Process
Emiliano Conde is the Founder and Lead Developer of jBilling Software, Ltd. He oversees the architecture and product direction of jBilling, the leader in open source enterprise billing systems. He is often working on-site with companies around the world, helping implement large, enterprise-class billing solutions on Java environments. Emiliano Conde counts 17 years of experience in software development, his last position prior to the founding of jBilling being Software Architect for HSBC Global Systems (ranked 2nd largest bank in the world). He holds a certificate in Software Engineering from the University of British Columbia (Canada). He now lives in Ottawa, Canada.
Patrick Curran, Chair of the Java Community Process
Panelist for Keynote Panel Discussion: The Java Community Process: What's Wrong and How to Fix It
Patrick Curran is Chair of the Java Community Process (JCP). In this role he oversees the activities of the JCP Program Office including driving the process, managing its membership, guiding specification leads and experts through the process, leading Executive Committee meetings, and managing the JCP.org website.
Patrick has worked in the software industry for more than 25 years, and at Sun (now Oracle) for almost 20 years. He has a long-standing record in conformance testing, and before becoming Chair of the JCP he led the Java Conformance Engineering team in Sun's Client Software Group. He was also chair of Sun's Conformance Council, which was responsible for defining Sun's policies and strategies around Java conformance and compatibility.
Patrick has participated actively in several consortia and communities including the World Wide Web Consortium (W3C) (as a member of the W3C's Quality Assurance Working Group and co-chair of the W3C Quality Assurance Interest Group), and the Organization for the Advancement of Structured Information Standards (OASIS) (as co-chair of the OASIS Test Assertions Guidelines Technical Committee). Patrick maintains a blog here: http://blogs.sun.com/pcurran/
Janeice Del Vecchio, Independent Consultant
Presenting: What's New on the Persistence Side: Getting to Know JPA 2.0
Janeice Del Vecchio, a graduate of Western Governors University's Bachelor of Science in Information Technology program, is an Oracle Certified Java Professional who is well known to the Java community through her participation as a bartender on JavaRanch. She also volunteers her time in the Beginning Java, Cattle Drive and General Computing forums.
Jerome Dochez, GlassFish Architect, Oracle
Presenting: OSGi-enabled Java EE Applications in GlassFish and Sponsored Keynote: GlassFish 3.1: Java EE 6 and beyond
Jerome Dochez is the architect of GlassFish and led the design and implementation of the GlassFish V1 and V3 application servers. He worked at Sun Microsystems for 13 years before joining Oracle as part of the acquisition. Jerome has presented at numerous conferences including 13 consecutive JavaOne conferences, as well as Devoxx and Jazoon. He is looking at the direction of the product while maintaining a stable compatible implementation, but he spends most of his time coding as it remains the fun part of this job.
He has worked on Java EE technologies since 2000, including various aspects of the application server implementation such as deployment, Web services and kernel. Before concentrating on Java EE, he worked on the Java SE team particularly on the JavaBeans team and the Java Plug-in.
Johan Edstrom, Independent Consultant, Senior SOA Architect, Savoir Technologies
Presenting: Tax Dollars and Open Source
Johan Edstrom is an open source developer, consultant and software architect. Johan divides his time between writing software, mentoring development teams and teaching people how to use Apache Servicemix, Camel, CXF and ActiveMQ effectively and scalably for enterprise installations. He is a senior SOA architect at Savoir Technologies, which specializes in guiding companies to leverage open source technologies and solutions.
Michael Ernest, Owner, Systems Architect; Education Specialist, Inkling Research
Presenting: The Premier Sun Designation: Mastering the Oracle Certified Architect Exam and Java Performance Tuning: Embrace the Whole Platform
Michael Ernest has 15 years' experience in consulting, training and writing, principally on Java development and the Unix-based systems administration. He owns and operates Inkling Research, a small group of societal misfits that found a way to teach and learn and still live indoors.
He specializes in delivering fast-track seminars to highly-experienced programming and admin teams. He is a lead instructor, technical adviser and contributor to Oracle courseware development on topics including Solaris performance management, DTrace technology, Java EE design patterns and architecture.
Michael has spoken previously at the JavaOne, Java University and CommunityOne conferences. He co-authored the Complete Java 2 Certification Study Guide and still isn't even close to finishing The Book of DTrace, not without a lot more caffeine.
Ben Evans, Technical Architect and Lead Application Developer
Presenting: Back to the Future with Java 7
Ben has been a professional developer and Open Source enthusiast since the late 90s. He has delivered world-class projects for banks, media companies and charities in that time, and currently works as a lead architect, principal engineer and in-house Java expert at one of the world's leading financial institutions.
Mark Fisher, Author, Spring Integration in Action; Engineer, VMware
Presenting: Developing a Message Driven Architecture with Spring
Mark Fisher is an engineer within the SpringSource division of VMware. He is the lead of the Spring Integration project and co-lead of the Spring AMQP project. He is also a committer on the core Spring Framework and the Spring BlazeDS Integration project. In addition to his role as an engineer, Mark spends a significant amount of time working with customers as a consultant and trainer. The focus of such engagements is primarily in the realm of enterprise integration and message-driven applications. Mark is a frequent speaker at conferences and user groups in North America and Europe, and along with other Spring Integration committers, he is an author of the forthcoming book, Spring Integration in Action, to be published in 2011 by Manning.
Jonathan Fullam, Enterprise Content Management Consultant, Micro Strategies
Presenting: How to Reap the Benefits of Agile-Based Test-Driven Development
Jonathan Fullam is an Enterprise Content Management consultant with over 10 years of experience with software development. Currently employed by Micro Strategies, Jonathan designs and implements custom ECM solutions based on the Alfresco open source Enterprise Content Management platform and has delivered presentations at Alfresco "Lunch and Learns" and the Alfresco Developer's conference. Jonathan has a passion for software development and also enjoys public speaking.
Jeff Genender, CTO, Chief Architect and Open source evangelist
Presenting: Architecture Track Keynote: ActiveMQ In The Trenches – Advanced Tips On Architectures and Implementations
Jeff has over 20 years of software architecture, team lead, and development experience in multiple industries. He is a frequent speaker at such events as TheServerSide Symposium, JavaZone, Java In Action, and numerous Java User Groups on topics pertaining to Enterprise Service Bus (ESBs), Service Oriented Architectures (SOA), and application servers.
Jeff is an active committer and Project Management Committee (PMC) member for Apache Geronimo, a committer on OpenTerracotta, OpenEJB, ServiceMix, and Mojo (Maven plugins). He is the author of Enterprise Java Servlets (Addison Wesley Longman, 2001), coauthor of Professional Apache Geronimo (2006, Wiley), and co-author of Professional Apache Tomcat (2007, Wiley). Jeff also serves as a member of the Java Community Process (JCP) expert group for JSR-316 (Java Platform, Enterprise Edition 6 (Java EE 6) Specification) as a representative of the Apache Software Foundation.
Jeff is an open source evangelist and has successfully brought open source development efforts, initiatives, and success stories into a number of Global 2000 companies, saving these organizations millions in licensing costs.
Dan Hardiker, Chief Technical Architect and Founding Member, Adaptavist.com Ltd.
Presenting: The (Not So) Dark Art of Performance Tuning
Dan Hardiker is a Chief Technical Architect and founding member of Adaptavist.com Ltd., which specializes in Confluence consultancy, support, hosting, and bespoke development. Dan has many years of Java expertise, as well as almost two decades of experience with UNIX and networking systems - focusing on infrastructure, performance, and security. He speaks regularly on these topics and has a background in event management. He works on enabling geeks to socialize throughout the UK via the GeekUp and BarCamp initiatives.
Iran Hutchinson, Product Manager, InterSystems
Presenting: Vendor Technical Session: Globals: Extreme Performance for Java
Iran Hutchinson currently serves as Product Manager at InterSystems with a focus on driving global product strategy and development on the Java platform. Prior to joining InterSystems, Hutchinson held lead roles in enterprise architecture and development in companies such as IBM, where he led the development strategy for enterprise integration and evolution of global projects using: JavaEE5, Distributed Computing, CICS, Flex, SOA and Web services. He focuses on understanding diverse architectures and technologies to lead the way to next-generation solutions surrounding high performance computing, distributed computing and complex data interactions. Hutchinson thinks the open sourcing of standards and technologies, such as Java, in concert with other best-of-breed tooling will yield a bright future. Recently, Hutchinson has taken a more active role in presenting and debating technology in the hopes of learning and spurring innovative solutions. You can find him presenting at upcoming events around the world like JavaOne and on the upcoming blog + technology series at InterSystems.com.
Claus Ibsen, Author, Camel in Action
Presenting: Apache Camel, the Integration Framework: Tales from the Leading Camel Experts
Claus Ibsen is a software engineer and integration specialist at FuseSource, project lead on the open source integration framework Apache Camel, and co-author of Camel in Action.
Claus is the most active contributor to Apache Camel and is very active in the Camel community. At FuseSource he leads the development of Camel and provides consulting and support to customers. Claus is a frequent speaker at FuseSource community day events on subjects related to Camel, including Devoxx 2010.
Jevgeni Kabanov, Founder and CTO of ZeroTurnaround
Presenting: Do You Really Get Class Loaders? and Do You Really Get Memory?
Jevgeni Kabanov is the founder and CTO of ZeroTurnaround, a development tools company that focuses on productivity. Before that he worked as the R&D director of Webmedia, Ltd., the largest custom software development company in the Baltics. As part of the effort to reduce development time tunraround, he wrote the prototype of the ZeroTurnaround flagship product, JRebel, a class reloading JVM plugin.
Jevgeni has been speaking at international conferences for over 5 years, including TheServerSide Java Symposium, JavaPolis/Devoxx, JavaZone, JAOO, QCon, JFokus and others. He also has an active research interest in programming languages, types and virtual machines, publishing several papers on topics ranging from category theoretical notions to typesafe Java DSLs. Jevgeni is a co-founder of two open-source projects - Aranea and Squill.
Max Katz, Senior Systems Engineer and Lead RIA Strategist, Exadel
Presenting: Ajax Applications with JSF 2 and RichFaces 4
Max Katz is a Senior Systems Engineer and Lead RIA Strategist at Exadel. Max is a well-known speaker, appearing at many conferences, webinars, and JUGs. Max leads Exadel's RIA and mobile strategy and Exadel open source projects such as Fiji, Flamingo and JavaFX Plug-in for Eclipse. Max is the community manager for the web-based rapid UI prototyping application Tiggr. Max has been involved with RichFaces since its inception, publishing numerous articles, providing consulting and training, and authoring the book Practical RichFaces (Apress). Max writes about RIA technologies in his blog, and can be found on Twitter as @maxkatz. Max holds a Bachelor of Science in computer science from the University of California, Davis and an MBA from Golden Gate University.
Jon Kern, Software Architect, Agile Mentor, and Co-author of the Agile Manifesto
Presenting: Agile Track Keynote: Agile Schmagile: The Backlash Against Agile
Jon Kern is a premier software architect and team leader/coach who keeps the people and the business in sharp focus. An aerospace engineer turned software expert, he is co-author of the Agile Manifesto for Software Development and of Java Design. Currently, Jon helps companies develop mission-critical software. His insights are critical factors in producing solutions with significant impact to business value, quality, budget, and schedule. He brings experts from around the world to work on the project team, works with the client's developers, and mentors them on agile and distributed development processes, techniques, and tools. Most importantly, Jon leaves behind a team that is much more valuable to the company.
Mik Kersten, CEO of Tasktop Technologies and Creator of the Eclipse Mylyn open source project
Presenting: Mylyn 3.4 and the New Face of the Java IDE and Cloud Keynote: Bringing Code to the Cloud and Back Again
Dr. Mik Kersten is the CEO of Tasktop Technologies, creator of the Eclipse Mylyn open source project and inventor of the task-focused interface. As a research scientist at Xerox PARC, Mik implemented the first aspect-oriented programming tools for AspectJ. He created Mylyn and the task-focused interface during his PhD in Computer Science at the University of British Columbia. Mik has been an Eclipse committer since 2002, is an elected member of the Eclipse Board of Directors and serves on the Eclipse Architecture Council. Mik's thought leadership on task-focused collaboration makes him a popular speaker at software conferences, and he was voted a JavaOne Rock Star speaker in 2008 and 2009. Mik has also been recognized as one of the top ten IBM developerWorks Java technology writers of the decade. He enjoys building tools that offload our brains and make it easier to get creative work done.
Heath Kesler, Consultant and Open source software evangelist
Presenting: What Riding the Camel Can Do for You
Heath Kesler is an open source software evangelist, developer and architect; he has created Java architectures utilizing open source frameworks on large scalable, high transaction load systems for such companies as LeapFrog Enterprises, AT&T, GE & GE Healthcare, and IBM. Heath has conducted training classes at companies like Verizon, Singapore Post and the Federal Aviation Administration on Apache frameworks including ActiveMQ, ServiceMix, CXF and Camel. Heath has been a team lead in many project recovery implementations, helping to rescue systems on the verge of collapse. He was recently involved with the implementation of the customer account creation and third-party integration on mission-critical systems for the largest educational products provider in the United States.
Tom Kincaid, Vice President, Professional Services, EnterpriseDB
Presenting: Vendor Technical Session: Introduction to PostgreSQL for Development and Deployment
Tom is Vice President of Professional Services at EnterpriseDB. He is responsible for the oversight and delivery of all their professional services including support, training and consulting. He has over 24 years of experience in the enterprise software industry. Prior to EnterpriseDB, he was VP of software development for Oracle's GlassFish and Web Tier products, where he helped integrate Sun's Application Server and Web Tier products into Oracle's Fusion middleware offerings. At Sun Microsystems he was part of the original Java EE architecture and management teams and played a critical role in defining and delivering the Java Platform. Tom is a veteran of the object database industry and helped build Object Design's customer service department, holding management and senior technical contributor roles. Other positions in Tom's past include Director of Quality Engineering at Red Hat and Director of Software Engineering at Unica.
Jim Knutson, Java EE Architect, WebSphere, IBM
Presenting: Core Java Track Keynote: Enterprise Java Platforms for the Next Decade
Jim Knutson, IBM WebSphere's Java EE Architect, is responsible for IBM's participation in Java EE specifications and IBM's implementations of the specifications. His involvement in Java EE goes back to before there was a J2EE platform. He is also involved in programming model evolution to support SOA and Web services.
Lasse Koskela, Author, Test Driven: Practical TDD and Acceptance TDD for Java Developers
Presenting: Test Smells in Your Code Base
Lasse Koskela works as a coach, trainer, consultant and programmer, spending his days helping clients and colleagues at Reaktor create successful software products. He has worked in the trenches of a variety of software projects ranging from enterprise applications to middleware products developed for an equally wide range of domains.
In the recent years, Lasse has spent an increasing amount of time giving training courses and mentoring client teams on-site, helping them improve their performance and establish a culture of continuous learning. Aside from consulting leaders and managers, Lasse enjoys programming and works frequently hands-on with software teams.
In 2007, he published a book on Test Driven Development and is currently working on his next book. He is one of the pioneers of the Finnish agile community and speaks frequently at international conferences.
Justin Lee, Member, GlassFish and Grizzly teams, Oracle
Presenting: Building Websockets Applications with GlassFish/Grizzly Justin has been an active Java developer since 1996. He has worked on projects ranging from Web applications to systems integration. He has spoken internationally and at local user groups and is an active member of the open source community. For the last few years, he has been a member of the GlassFish and Grizzly teams where he works on the Web tier team. Justin is also a contributor to The Basement Coders Podcast.
Cameron McKenzie, Editor, TheServerSide.com
Presenting: What’s New on the Persistence Side: Getting to Know JPA 2.0
With over ten years of development experience, Cameron McKenzie brings with him a long and storied history with the Java platform and Java EE architectures. Cameron McKenzie is the author of five best selling Java titles, including What is WebSphere?, the SCJA Certification Guide, JSR168 Portlet Programming, and the ever popular Hibernate Made Easy. Along with emceeing the TSSJS event, Cameron, together with Janeice Del Vecchio, will be speaking about what’s new with the Java Persistence API, and what we can expect from the specification in the future.
Andrew Monkhouse, Author of the Sun Certified Java Developer Guide
Presenting: The Myths and Realities of Testing and Deployment in the Cloud
Andrew is a senior software engineer at Overstock.com in Salt Lake City - a job that he thinks is one of the best you can get.
Prior to Overstock.com, Andrew worked at companies of many different sizes, dealing with many different problems in countries all over the world - from companies with only two developers all the way up to Amazon.com with its several thousand developers. During these jobs, he has worked on occupational health and safety systems, communication systems, airline systems, banking systems, and retail systems.
Andrew is best known for authoring the best-selling Sun Certified Java Developers Guide (SCJD). He has also contributed to a number of other best-selling Java titles, including Head First Servlets and JSP, Head First Design Patterns, and Head Rush Ajax.
Charles Nutter, Co-lead JRuby project, Engine Yard, Inc
Presenting: Language Track Keynote: Pump It Up: Maximizing the Value of an Existing Investment in Java with Ruby
Charles Nutter has been programming most of his life, as a Java developer for the past decade (named a Java Rock Star in 2007) and as a JRuby developer for over four years. He co-leads the JRuby project at Engine Yard, in an effort to bring the beauty of Ruby and the power of the JVM together. Along with the rest of the JRuby team, Charles recently celebrated the release of JRuby 1.5. The latest release makes it easier than ever for Java developers to take Ruby for a spin because of the seamless interaction it allows with commonly used Java components. Charles believes in open source and open standards and hopes his efforts on JRuby and other languages will help ensure that the many JVM users and enthusiasts have the best possible access to the benefits Ruby can bring.
Karen Tegan Padir, Vice President, Products & Marketing, EnterpriseDB
Presenting: Sponsored Keynote: Predicting Technology Ubiquity: What makes standards stick?
Karen is responsible for EnterpriseDB's product management and engineering as well as its global marketing initiatives, including demand generation, public relations and product marketing. She is a veteran software executive with 20 years of industry experience leading global business and engineering organizations.
Prior to joining EnterpriseDB, Karen was the vice president of MySQL and Software Infrastructure at Sun Microsystems where she was responsible for key Sun open source software GlassFish, Identity Management and SOA products. Prior to that Karen was vice president of engineering for infrastructure technology at Red Hat where she was responsible for Red Hat's Directory and Certificate server products, as well as Quality and Release Engineering of the Red Hat Enterprise Linux bundle. She is one of the founding members of the Java EE Platform at Sun.
She holds a Masters Degree in Business Administration and a Bachelors of Science Degree in Computer Science from Worcester Polytechnic Institute (WPI).
Kirk Pepperdine, Java Performance Tuning Expert
Presenting: Tools & Techniques Track Keynote: The (Not So) Dark Art of Performance Tuning, Extending VisualVM with JavaFX, and Performance Tuning with Cheap Drink and Poor Tools (Part Deux)
Kirk's career began in Biochemical Engineering, where he applied his research skills in attaching computers to sheep and cats, synthesising radioactive Tylenol and developing separation techniques using High Performance Liquid Chromatography for Ottawa University and the National Research Council of Canada. Subsequently, he became employed by the Canadian Department of Defense. Kirk admits that his work at the DoD involved programming Cray supercomputers as well as other Unix systems, but he refuses or is unable to divulge the exact nature of the applications in the department other than that they involved databases and high performance systems. After the DoD, Kirk consulted as an analyst at Florida Power & Light, then moved on to join GemStone Systems as a senior consultant. He is currently an independent consultant, and also an editor at TheServerSide.com. Kirk has been heavily involved in the performance aspects of applications since the start of his career, and has tuned applications involving a variety of languages from Cray Assembler, through C, Smalltalk and on to Java. Kirk has focused on Java since 1996. Kirk co-authored ANT Developer's Handbook, which was published in 2002.
Reza Rahman, Author, EJB 3 in Action; Member, Java EE 6 and EJB 3.1 expert groups
Presenting: A Quick Tour of the CDI Landscape, Effective Caching Across Enterprise Application Tiers, An Introduction to Seam 3, Testing Java EE 6 Applications: Tools and Techniques and Panelist for Keynote Panel Discussion: The Java Community Process: What's Wrong and How to Fix It
Reza Rahman is an independent consultant specializing in Java EE with clients across the greater Philadelphia and New York metropolitan areas. He is currently focused on the Resin EJB 3.1 Lite/Java EE 6 Web Profile implementation.
Reza is the author of EJB 3 in Action from Manning Publishing. He is a member of the Java EE 6 and EJB 3.1 expert groups. He is a frequent speaker at seminars, conferences and Java user groups including JavaOne as well as an avid contributor to TheServerSide.com.
Reza has been working with Java EE since its inception in the mid-nineties. He has developed enterprise systems in the financial, healthcare, telecommunications and publishing industries. Reza has been fortunate to have worked with EJB 2, Spring, EJB 3 and Seam.
Matt Raible, UI Consultant and Architect
Presenting: Everything You Ever Wanted To Know About Online Video and Comparing JVM Web Frameworks
Matt Raible has been building web applications for most of his adult life. He started tinkering with the web before Netscape 1.0 was even released. For the last 11 years, Matt has helped companies adopt open source technologies (Spring, Hibernate, Apache, Struts, Tapestry, Grails) and use them effectively. Matt has been a speaker at many conferences worldwide, including ApacheCon, JavaZone, Colorado Software Summit, No Fluff Just Stuff, and a host of others.
Matt is an author (Spring Live and Pro JSP), and an active "kick-ass technology" evangelist. He is the founder of AppFuse, a project which allows you to get started quickly with Java frameworks, as well as a committer on the Apache Roller and Apache Struts projects.
Scott Selikoff, Owner, Selikoff Solutions, LLC
Presenting: GWT Roundup: An Overview of Google's Web Toolkit and Hybrid Integration
Scott Selikoff is a senior Java/J2EE software developer with years of experience in Web-based database-driven architectures. He owns and operates Selikoff Solutions, a software consulting company servicing businesses in the NY/NJ/PA area.
Scott is the founder of Down Home Country Coding, a software blog that provides tools, tips and discussions for Java, GWT, and Flex developers. He is a member of the bst-player project, which provides support for integrating third-party video players into GWT applications. He is also an editor at TheDailyWTF.com, a humorous blog dedicated to "Curious Perversions in Information Technology."
Scott holds a Bachelor of Arts in Mathematics and Computer Science and a Master of Engineering in Computer Science, both from Cornell University. His master's thesis was on the effectiveness of online educational software in the classroom.
Mark Spritzler, Independent Consultant, Regular Contributor to TheServerSide.com
Presenting: Comparing, Contrasting and Differentiating Between Mobile Platforms
Mark owns Perfect World Programming, LLC, a consulting and contract training firm specializing in Java, Enterprise Java, iPhone/iPad and Android development. He currently has five iPhone and one iPad application on the Apple App Store. Most of the time, Mark is travelling the world training software developers, working for companies like SpringSource and JBoss, as well as N-Tier Training and Sum Global. He was a technical editor on Head First Design Patterns and the K&B SCJP 5.0 Exam book.
James Strachan, Creator of the Groovy language and Software Fellow at FuseSource
Presenting: Apache Camel, the Integration Framework: Tales from the Leading Camel Experts
James is heavily involved in the open source community: he's been an Apache committer for 10 years, was one of the founders of the Apache ActiveMQ, Camel and ServiceMix projects, created the Groovy programming language and a number of other open source projects including Scalate, dom4j & Jaxen and is a committer on a number of projects such as Apache Karaf, Maven, Lift and Jersey. James is currently a Software Fellow at FuseSource and has more than 20 years experience in enterprise software development with a background in finance, consulting and middleware.
Craig Tataryn, Editor, Basement Coders Podcast
Presenting: Evolving Enterprise Code with Scala & Wicket
Craig Tataryn started his career as a Visual Basic programmer, but don't hold that against him! Around the year 2000 he discovered Struts and never looked back. A professional Java developer for over a decade, Craig has honed his skills on everything from Apache Wicket and CXF to Facebook and iPhone application development. Craig also enjoys his role as Editor at The Basement Coders Podcast.
Martijn Verburg, Consultant & Community Leader for Java and Open Source software
Presenting: Back to the Future with Java 7, The Diabolical Developer: What You Need to Do to Become Awesome and How Mega-Corp Open Sourced its Internal Software & Leveraged a Volunteer Community (And How Your Corporation Can Too!!!)
Martijn Verburg is a Dutch Born Kiwi who is also a permanent resident
of a few other nations, he likes to call it being a "citizen of the
world". Martijn co-leads the London (UK) Java User Group (JUG) and
also is heavily involved in the London graduate/undergraduate
developer, CTOs and software craftsmanship communities. JavaRanch.com
kindly invited him to be a bartender in 2008 and he's been humbled by
the awesomeness of the community ever since.
He's currently working on somewhat complex JCA Connectors and an associated open source middleware platform (Ikasan) and also spends a
good deal of time herding monkeys on another open source project that
deals with creating characters for d20 based role playing games
(PCGen).
More recently he's started writing The Well-Grounded Java Developer
(Covers Java 7) for Manning publications (with Ben Evans) and can be
found speaking at conferences on a wide range of topics including open
sourcing software, software craftsmanship and the latest advancements
in the openJDK.
Patrycja Wegrzynowicz, Founder and CTO at Yon Labs and Yon Consulting
Presenting: Anti-Patterns and Best Practices for Hibernate and Static Analysis in Search for Performance Anti-Patterns
Patrycja Wegrzynowicz is Founder and CTO at Yon Labs and Yon Consulting. There, she shapes the future direction of technological research in software and acts as a chief architect and consultant on projects in the fields of automated software engineering, domain names, and Internet security. She is also associated with the Warsaw University of Technology, where she serves as Technical Manager of Passim, an intelligent search engine. She is a regular speaker at top academic (e.g., OOPSLA, ASE) as well as technical conferences (e.g., JavaOne, Devoxx, JavaZone). Patrycja holds a master's degree in Computer Science and is currently finalizing her PhD at Warsaw University. Her research interests are focused on architectural and design patterns and anti-patterns, along with automated software engineering, particularly static and dynamic analysis techniques to support program verification, comprehension, and optimization.
Jason Whaley, Founder, Brink Systems, Freelance Java Developer
Presenting: It's Your Infrastructure Now - Developing Solutions in an IaaS World
Jason Whaley is a freelance Java developer and consultant specializing in service-oriented architectures, enterprise integration, cloud computing, and continuous integration. Previously, Jason has worked in a variety of roles for both public companies and government institutions on several broad-ranging Java-based projects. He is also a contributor to The Basement Coders Podcast.
Paul Wheaton, Owner, JavaRanch.com
Presenting: SEO in the Real World: A Java Case Study in What Works and What Doesn’t
Paul is a Sun Certified Java Programmer working out of Missoula, Montana.
Paul had a website dedicated to Java discussion that he started in November of 1998. He merged his site into JavaRanch when Kathy Sierra turned it over to him. His contributions include the Saloon, the Cattle Drive, most of the bunkhouse, some of the code barn, the coop, gramps and granny.
Jim White, Director of Training, Intertech, Inc.
Presenting: Java in the Microsoft Cloud: Deploying Enterprise Applications to Windows Azure
Jim White is the director of training, a partner, and an instructor with Intertech, Inc. He is co-author of Java 2 Micro Edition (Manning) and a frequent contributor to various journals and on-line magazines, including recent articles at DevX.com. Jim also heads Intertech's Cloud Computing practice and is co-lead of the Windows Azure User Group (www.azureug.net), a national virtual user group of over 750 members. He has twenty years of software development experience, including time as a senior technical architect at Target Corporation.
2014-15/4479/en_head.json.gz/17818 | characterizing planetary systems
reintroduction
greg i-Phone snapshot of Difference Engine #2.
The systemic console started life over five years ago as a web-based applet for analyzing radial velocity data. The original version was a collaboration between Aaron Wolf (then a UCSC Undergraduate, now a Caltech Grad Student) and myself, and the Java was coded in its entirety by Aaron. Our goal was to clarify the analysis of radial velocity data — the “fitting” of extrasolar planets — by providing an interactive graphical interface. The look and feel were inspired by sound-mixing boards, in particular, the ICON Digital Console built by Digidesign:
Over the intervening years, the console has expanded greatly in scope. Stefano Meschiari has taken over as lead software developer, and has directed the long-running evolution with considerable skill. The console has been adopted by planet-hunting groups world-wide, as well as by classroom instructors and by a large community of users from the public.
Tuesday’s post pointed to our new peer-reviewed article (Meschiari et al. 2009) that describes the algorithms under the console’s hood, and now that the code base has matured, we’re developing documentation that can serve the widely varying needs of our users. We also intend to return the systemic backend collaboration to the forefront of relevance. A great deal of very interesting work has been done by the backend users, and it can be leveraged.
As the first step, we’re updating and expanding the tutorials, which have been largely gathering dust since November 2005. Following the page break, the remainder of this post updates tutorial #1. If you’ve ever had interest in using the console, now’s the time to start…
Console Tutorial #1: A Fish in a Barrel — HD 4208b
How do you use the console to find planets?
Stellar radial velocity data have been used to infer the presence of hundreds of extrasolar planetary systems, and in nearly every case, the radial velocity data have been tabulated in the papers that announce the discoveries. In this tutorial, we introduce the console, and use it to "discover" a planet orbiting the star HD 4208. (The tutorial assumes version 1.0.90 of the console, running on Mac OS X 10.5.7; if you are using a different set-up, there may be minor differences in details and appearance).
A radial velocity measurement of a star is the component of the velocity of the star along the line of sight from the Earth to the star. Most of this radial velocity stems from the natural motion of the star with respect to our solar system. The Sun orbits the center of the galaxy at a speed of approximately 250 kilometers per second. Most of the stars in the solar neighborhood are moving in roughly the same manner, but stellar orbits are not perfectly circular. Some of the stars in the solar vicinity are moving more quickly than the Sun, while others are moving more slowly. The average difference in orbital velocity between neighboring stars is about 20 kilometers per second. Part of this velocity will be in the so-called transverse direction, the rest is along the radial line connecting our solar system to the star. The Alpha Centauri system, for example, is headed toward us with a radial velocity of -21.6 kilometers per second. By carefully noting the Doppler shift of the stellar lines, it is possible (for favorable cases) to measure the line-of-sight speed of the star with a precision of order 1 meter per second.
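To make the numbers above concrete, the Doppler relation behind these measurements is simple: for speeds far below the speed of light, the radial velocity is roughly the fractional wavelength shift times c. The following short Java sketch is my own illustration, not code from the console, and the spectral line and shift in the example are hypothetical.

    public class DopplerShift {

        private static final double SPEED_OF_LIGHT = 2.99792458e8; // m/s

        /** Radial velocity in m/s; positive means the star is receding. */
        static double radialVelocity(double lambdaObs, double lambdaRest) {
            return SPEED_OF_LIGHT * (lambdaObs - lambdaRest) / lambdaRest;
        }

        public static void main(String[] args) {
            double rest = 588.995e-9; // rest wavelength in meters (a sodium line, used here only for illustration)
            // Hypothetical observation: the line blueshifted by the -21.6 km/s quoted above for Alpha Centauri.
            double observed = rest * (1.0 - 21600.0 / SPEED_OF_LIGHT);
            System.out.printf("v_r = %.1f m/s%n", radialVelocity(observed, rest));
        }
    }

Running the sketch prints -21600.0 m/s; the 1 m/s precision quoted above corresponds to measuring wavelength shifts of a few parts in a billion.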
In addition to the random motion that a given star has with respect to the Sun, there is also a small superimposed component of motion that is generated as the star wobbles back and forth in response to any planets that are in orbit around it. For the case of a single planet in a circular orbit, the situation is easily visualized by imagining that the star and the planet are attached to the opposite ends of a rigid rod. If you wish to balance the rod on a fingertip, then you must position your finger under a point on the rod that is much closer to the heavy star than it is to the less massive planet. For example, if the star is a hundred times more massive than the planet, then the point of balance lies one hundred times closer to the star than it does to the planet. As the orbit proceeds, one simply swings the star and the planet around the point of balance. The planet executes a large circle, and the star executes (in the same amount of time) a circle that is one hundred times smaller.
An analogous situation applies when the orbits are eccentric.
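The lever-arm argument translates directly into the size of the radial velocity signal. Here is a back-of-the-envelope Java sketch (again my own illustration, not console code) that scales the orbital speed by the mass ratio for a circular orbit; with rough values for the Sun and Jupiter it gives a reflex speed of roughly 12-13 meters per second.

    public class ReflexVelocity {

        private static final double G = 6.674e-11; // m^3 kg^-1 s^-2

        /** Stellar reflex speed (m/s) for a circular orbit viewed edge-on. */
        static double reflexSpeed(double starMassKg, double planetMassKg, double semiMajorAxisM) {
            double totalMass = starMassKg + planetMassKg;
            double orbitalSpeed = Math.sqrt(G * totalMass / semiMajorAxisM); // relative orbital speed (≈ planet's speed)
            return orbitalSpeed * planetMassKg / totalMass;                  // scaled down by the mass ratio, as in the balance argument
        }

        public static void main(String[] args) {
            double mSun = 1.989e30, mJup = 1.898e27, aJup = 7.78e11; // kg, kg, m
            System.out.printf("Sun's reflex speed due to Jupiter: %.1f m/s%n",
                    reflexSpeed(mSun, mJup, aJup));
        }
    }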
To get started, install the “cutting-edge” console on your computer, and double-click the Console.jar icon. When initialization is finished, you’ll see the main console window:
At the risk of sounding silly, it’s worth remarking that the console is ready for rough-and-ready experimentation. Push buttons, slide sliders, and get a feeling for how things work. While it’s not impossible to effectively hang the software, you can always force-quit and restart, and in a worst-case scenario you can always download a fresh-baked copy. It’s free! (Also, in response to a query from a well-known planet hunter, under no circumstances will the console “phone home”.)
When the console first appears, it is set by default to the radial velocity data sets for the multiple-planet-bearing star 55 Cancri. This tutorial walks you through the much less complex data set associated with the star HD 4208. You change data sets by clicking on the star icon
and then selecting HD 4208 from the ensuing pop-up window:
This selection plots the published HD 4208 radial velocity data set in the console's data window:
HD 4208 is a sunlike star lying roughly 110 light years from Earth. It’s too faint to see with the naked eye, but it can easily be spotted with binoculars or a small telescope if you know where to look. In 2002, Vogt et al. published a data set containing 35 independent radial velocity observations of the star. These measurements were accumulated at the Keck telescope over an 1,821 day (~5 year) interval starting on JD 2450366.9657 (11:10 AM on Oct. 10, 1996, Universal Time). The observations are spaced unevenly over the years because the California-Carnegie planet search team received only limited blocks of time at the telescope, and also because the star can only be easily observed from Hawaii from Ju | 计算机 |
2014-15/4479/en_head.json.gz/18774 | Previous789101112131415161718192021222324252627Next
Nurturing Entrepreneurship at Every Level
Hileman Jane
Summary: The founder and CEO of American Reading Company, Jane Hileman, has seen her company grow from a few teachers ten years ago to 111 employees today who provide books and reading goals for students to encourage a love of reading. Hileman's goals are revenue growth, profitability, and success.
Hennessy John
VideoSeries Resource
Summary: Dr. John Hennessy has been President of Stanford University since 2000. He became a Stanford faculty member in 1977. He rose through the academic ranks to full professorship in 1986 and was the inaugural Willard R. and
Inez Kerr Bell Professor of Electrical Engineering and Computer Science from 1987 to 2004. A pioneer in computer architecture, in 1981 Dr. Hennessy drew together researchers to focus on a computer architecture known as RISC (Reduced
Instruction Set Computer), a technology that has revolutionized the computer industry by increasing performance while reducing costs. In 1984, he used his sabbatical year to found MIPS Computer Systems Inc. to commercialize his research in
RISC processors. Dr. Hennessy is a recipient of the 2000 IEEE John von Neumann Medal, a 2004 NEC C&C Prize for lifetime achievement in computer science and engineering, and a 2005 Founders Award from the American Academy of Arts and
Sciences. Dr. Hennessy earned his bachelor's degree in electrical engineering from Villanova University and his master's and doctoral degrees in computer science from the State University of New York at Stony Brook.
Surviving the Lean Years
Heinmiller Robert
Summary: How do you survive personally when your business goes bust? In an article that is both realistic and compassionate, the author lays out a financial plan for the seven lean years. Stash away cash during the fat years, downsize quickly once the handwriting is on the wall, and consider moving to a lower-cost geographic area are among his suggestions.
Summary: How do you deal with things when your business is on the verge of going bust? This author lays out a financial plan for working through lean years to sustain a business. Key tips: stash away cash during good times, downsize quickly if need be, and consider relocating to a lower-cost area of the country.
Summary: Jeff Hawkins is the Founder of Numenta, but he is also well known as the co-founder of two companies, Palm and Handspring, and as the architect of many computing products, such as the PalmPilot and the Treo smartphone.
Throughout his life Hawkins has also had a deep interest in neuroscience and theories of the neocortex. His interest in the brain led him to create the non-profit Redwood Neuroscience Institute (RNI), a scientific organization focused on
understanding how the human neocortex processes information. While at RNI, Hawkins developed a theory of neocortex which appeared in his 2004 book, On Intelligence. Along with Dileep George and Donna Dubinsky, Hawkins
founded Numenta in 2005 to develop a technology platform derived from his theory. It is his hope that Numenta will play a catalytic role in creating an industry based on this theory and technology. Jeff Hawkins earned his B.S. in
electrical engineering from Cornell University in 1979. He was elected to the National Academy of Engineering in 2003.
The Three Shows
Haupt Norbert
Summary: The author asserts there are three tasks entrepreneurs need to do to attract the attention of angel investors. They are "the three shows": show up, show enthusiasm, and show humility.
Entrepreneurial Thought Leaders Lecture Series
Hansson David Heinemeier
Summary: Danish-born David Heinemeier Hansson is the programmer and creator of the popular Ruby on Rails web development framework and the Instiki wiki. He is also a partner at the Web-based software development firm 37signals,
based in Chicago. Ruby on Rails provides a "basic development environment" for programmers, according to Wikipedia.org. Based on the programming language Ruby (developed by Japanese programmer Yukihiro Matsumoto in 1995), Ruby on Rails
focuses on user interface and "convention over configuration"; meaning, developers can focus on the unique qualities of their Web site or program rather than the building blocks that every application may require. Released in 2004, Ruby on
Rails has been incorporated into many applications used by some of the biggest companies, from Twitter to Apple's 2007 release of Mac OS X v.10.5 "Leopard." Aside from his development of Ruby on Rails, Heinemeier Hansson also works as a
partner for Web-based software development firm 37signals. Joining the company in 2003, he has helped develop Basecamp, Campfire, Backpack and other Web-based applications. Working in similar ways like Web-based e-mail services like Yahoo!
e-mail and Google's Gmail, 37signals hosts a broad range of IT services for companies, including project management to information-sharing. The firm's software has been used by Kellogg's, Sun Microsystems and even Obama '08. Hansson
received his bachelor's degree from the Copenhagen Business School in 2005. In that same year, he moved to Chicago and received Hacker of the Year honors for his work on Ruby on Rails from Google and O'Reilly Media. He runs a blog called
LoudThinking.com.
Managing a Mail-Order Marriage: Building Trust With Your VC Investor
Hammer Katherine
Curle Robin Lea
Summary: Venture capitalists play a critical funding role, as entrepreneurial ventures move into the big leagues, but the price these investors extract is often too high. Entrepreneurs should consider the relationship analogous to marrying a mail-order bride and proceed accordingly, according to this comprehensive and entertaining article by two women who co-founded a software company. Tips include advising company owners to build trust with VCs and, until that is established, dealing with them in a way that allows for "a reasonable balance of power."
Endeavor's Entrepreneurs' Summit
Green Jason
Friel Tom
Frankel David
Cline Michael
Summary: J. Michael Cline is the founding Partner of Accretive LLC. Michael and other Accretive principals founded Exult, Xchanging, Fandango and Accretive Health. Before founding Accretive Michael spent 10 years as General
Partner at General Atlantic Partners helping build General Atlantic into the world's largest private investment firm focused on software and related investments. Prior to General Atlantic, Michael was an associate at McKinsey &
Company. Michael received his MBA from Harvard Business School where he was a Baker Scholar and he received a BS from Cornell University. He serves on the boards of Accretive Commerce, Fandango, Accretive Health and Willow. He is a Trustee
of the Wildlife Conservation Society (WCS) where he chairs the Tigers Forever initiative - the world's largest effort in global tiger conservation and is a Trustee of the Brunswick School. He also serves on the board of the National Fish
and Wildlife Foundation, Endeavor Global and the Harvard Business School Rock Center for Entrepreneurship.
Collecting Well: Whose Money Is It Anyway?
Green Leonard
Altman John
Summary: Entrepreneurs are apt to happen upon found money by more skillfully | 计算机 |
2014-15/4479/en_head.json.gz/18778 | ERCIM News 64
AVISPA: Automated Validation of Internet Security Protocols and Applications
by Alessandro Armando, David Basin, Jorge Cuellar, Michael Rusinowitch and Luca Viganò
AVISPA is a push-button tool for the Automated Validation of Internet Security Protocols and Applications. It provides a modular and expressive formal language for specifying protocols and their security properties, and integrates different back-ends that implement a variety of state-of-the-art automatic analysis techniques. Experimental results, carried out on a large library of Internet security protocols, indicate that the AVISPA tool is the state of the art for automatic security protocols. No other tool combines the same scope and robustness with such performance and scalability.
With the spread of the Internet and network-based services and the development of new technological possibilities, the number and scale of new security protocols under development is outpacing the human ability to rigorously analyse and validate them. This is an increasingly serious problem for standardization organizations like the Internet Engineering Task Force (IETF), the International Telecommunication Union (ITU) and the World Wide Web Consortium (W3C). It also affects companies whose products and services depend on the rapid standardization and correct functioning of these protocols, and users whose rights and freedoms (eg the right to privacy of personal data) depend on a secure infrastructure.
Designing secure protocols is a hard problem. In open networks such as the Internet, protocols should work even under worst-case assumptions, eg that messages may be seen or tampered with by an intruder (also called the attacker or spy). Severe attacks can be conducted without breaking cryptography, by exploiting weaknesses in the protocols themselves. Examples of this are 'masquerading attacks', in which an attacker impersonates an honest agent, or 'replay attacks', in which messages from one protocol session (ie execution of the protocol) are used in another session. The possibility of these attacks sometimes stems from subtle mistakes in protocol design. Typically these attacks go unnoticed, as it is difficult for humans, despite careful protocol inspection, to determine all the complex ways in which protocol sessions can be interleaved, with the possible interference of a malicious intruder.
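AVISPA reasons about formal protocol models rather than implementations, but the flavor of a replay attack is easy to see in code. The sketch below is purely illustrative, with invented class and method names, and is unrelated to AVISPA's internals; it shows the standard countermeasure of a responder tracking the nonces it has already accepted, so that a message recorded in one session is rejected when an intruder replays it in another.

    import java.util.HashSet;
    import java.util.Set;

    public class ReplayGuard {

        private final Set<String> seenNonces = new HashSet<>();

        /** Returns true if the message is fresh; false if it is a replay. */
        synchronized boolean accept(String senderId, String nonce) {
            // A real protocol would also authenticate senderId cryptographically;
            // this sketch only tracks freshness of the nonce.
            return seenNonces.add(senderId + ":" + nonce);
        }

        public static void main(String[] args) {
            ReplayGuard guard = new ReplayGuard();
            System.out.println(guard.accept("alice", "n1")); // true  - first use of the nonce
            System.out.println(guard.accept("alice", "n1")); // false - replayed message is rejected
        }
    }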
Tools that support a rigorous analysis of security protocols are thus of great importance in accelerating and improving the development of the next generation of security protocols. Ideally, these tools should be completely automated, robust, expressive and easily usable, so that they can be integrated into protocol development and standardization processes.
Although in the last decade many new techniques that can automatically analyse small and medium-scale protocols have been developed, moving up to large-scale Internet security protocols remains a challenge. The AVISPA tool is a push-button tool for the Automated Validation of Internet Security-sensitive Protocols and Applications, which rises to this challenge in a systematic way. First, it provides a modular and expressive formal language for specifying security protocols and properties. Second, it integrates different back-ends that implement a variety of automatic analysis techniques ranging from protocol falsification (by finding an attack on the input protocol) to abstraction-based verification methods for both finite and infinite numbers of sessions. To the best of our knowledge, no other tool exhibits the same scope and robustness while enjoying the same performance and scalability.
AVISPA Web-based graphical user interface.
As shown in the figure, AVISPA is equipped with a Web-based graphical user interface that supports the editing of protocol specifications and allows the user to select and configure the back-ends integrated into the tool. If an attack on a protocol is found, the tool displays it as a message-sequence chart. The interface features specialized menus for both novice and expert users. A protocol designer interacts with the tool by specifying a security problem (ie a protocol paired with a security property that the protocol is expected to achieve) in the High-Level Protocol Specification Language (HLPSL). The HLPSL is an expressive, modular, role-based, formal language that is used to specify control-flow patterns, data-structures, alternative intruder models and complex security properties, as well as different cryptographic primitives and their algebraic properties. These features make HLPSL well suited for specifying modern, industrial-scale protocols.
In order to demonstrate the effectiveness of AVISPA, we selected a substantial set of security problems associated with protocols that have recently been, or are currently being standardized by organizations like the Internet Engineering Task Force IETF. We then formalized a large subset of these protocols in HLPSL. The result of this specification effort is the AVISPA Library (publicly available on the AVISPA Web site), which at present comprises 215 security problems derived from 48 protocols. Most of the problems in the library can be solved by the AVISPA tool in a few seconds. Moreover, AVISPA detected a number of previously unknown attacks on some of the protocols analysed, eg on some protocols of the ISO-PK family, on the IKEv2-DS protocol, and on the H.530 protocol.
The AVISPA tool can be freely accessed either through its Web-based interface or by downloading and installing the software distribution. For more details, please refer to the AVISPA Web site.
AVISPA has been developed in the context of the FET Open Project IST-2001-39252 'AVISPA: Automated Validation of Internet Security Protocols and Applications', in collaboration with the University of Genova, INRIA Lorraine, ETH Zurich and Siemens Munich.
http://www.avispa-project.org
Alessandro Armando, Università di Genova, Italy
Tel: +39 010353 2216
E-mail: armando@dist.unige.it
2014-15/4479/en_head.json.gz/19961 | Goto Search
Lebanese e-Government portal: DAWLATI
Thematic Website
Electronic and Mobile Government, ICT for MDGs, Knowledge Management in Government, Citizen Engagement
DAWLATI (Arabic for “My State”) provides Lebanese citizens with the following services: information about more than 4500 administrative transactions in the Lebanese administration, presented in a simple, accurate and constantly updated manner; electronic forms for download, electronic filling and printing; online registration with a personalized space and storage of personal documents; and electronic services to be announced periodically with different administrations.
Website: www.dawlati.gov.lb
Mobile applications: DAWLATI mobile applications (ANDROID 4+ / APPLE 6+ /BLACKBERRY)
International Journal of eGovernance and Networks (IJeN)
Electronic and Mobile Government, Knowledge Management in Government, Internet Governance
International Journal of eGovernance and Networks (IJeN) is a peer-reviewed publication devoted to broadening the understanding of contemporary developments and challenges in administrative and policy practices, the promotion of international scholarly and practitioner dialogs, the encouragement of international comparisons, and the application of new techniques and approaches in electronic systems of governing. IJeN intends to fill the need for a venue in which scholars and practitioners with different viewpoints bring their substantive approaches to work on various legal, social, political, and administrative challenges related to e-Governance issues. IJeN includes cutting-edge empirical and theoretical research, opinions from leading scholars and practitioners, and case studies.
Call for Manuscripts
IJeN uses a blind peer-review process, and manuscripts should therefore be prepared in accordance with the American Psychological Association (APA) guidelines as follows: no longer than 35 pages, including all elements (abstract, endnotes, references, tables, figures, appendices, etc.), formatted in Times New Roman, 12-point type, double-spaced with one-inch margins. Please do not use automatic formatting features, and use endnotes rather than the automatic footnote feature.
Submissions should include the title of the manuscript, an abstract of approximately 150 words, an opinion for practitioners of 100 words, and a list of key words on the title page but do not include the author(s) name on the title page. Please ensure to remove any indications of authorship in the body of the manuscript. The author(s) name, affiliation, and contact information should be listed on a separate page preceding the title page of the manuscript. Please submit your manuscript for review in a widely accepted word processing format such as Microsoft Word.
Submission to IJeN implies that your article has not been simultaneously submitted to other journals and has not previously been published elsewhere.
Submissions should be directed to the attention of:
Younhee Kim
Managing Editor at [email protected]
e-Governance in Small States
Journals, Training Material
Electronic and Mobile Government, ICT for MDGs, Internet Governance
ICTs can create digital pathways between citizens and governments that are affordable, accessible and widespread. This offers the opportunity for small developing states to leapfrog generations of technology when seeking to enhance governance or to deepen democracy through promoting the participation of citizens in processes that affect their lives and welfare. For small developing countries, especially those in the early stages of building an e-Government infrastructure, it is vital that they understand their position in terms of their e-readiness, reflect upon the intrinsic components of an e-Governance action plan, and draw lessons from the successes and failures of the various e-Government initiatives undertaken by other countries, developed or developing. This book aims to strengthen the understanding of policy-makers by outlining the conditions and processes involved in the planning and execution of e-Government projects.
Going for Governance: Lessons Learned from Advisory Interventions by the Royal Tropical Institute
Knowledge Management in Government, Internet Governance
The 15 cases presented in this book illustrate the different kinds of advice and support that advisors from the Royal Tropical Institute (KIT) have delivered to help partners around the world improve people’ s lives by "going for governance.” Taken as a whole, these accounts show the range of processes and interventions that have helped strengthen governance in diverse settings and situations. Taken individually, each case study can be used as reference materials for a variety of training courses. The aim of this book is to provide ideas and inspiration for those who are asked to advise on governance issues in various kinds of development programs and sectors, or explore opportunities to use innovative and creative governance approaches and tools in KIT’ s joint initiatives with partners in the South.
Masters Degree Online - Public Administration
Public Administration Schools
Electronic and Mobile Government, ICT for MDGs, Knowledge Management in Government, Citizen Engagement, Institution and HR Management, Internet Governance
Masters Degree Online in public administration provides information to current and prospective graduate students who are pursuing a career in public administration or related fields. Its directory allows you to search schools by institution size, geographic area, tuition cost, and school type. Its primary focus is online master's degree programs, but we acknowledge that on-campus programs at traditional brick-and-mortar schools are the best options for some students. Therefore, you can search for both online and on-campus programs here.
Click here for Online Masters Degree in Public Administration.
UNCTAD Measuring ICT Website
The Measuring ICT Website provides information on the development of ICT statistics and indicators worldwide, with an emphasis on supporting ICT policies and the information economies in developing countries. The objectives of the Measuring ICT Website are to: provide information to experts and the general public on progress in the field of ICT measurement, particularly by National Statistical Offices and international organizations; promote the discussion between practitioners of ICT statistical work on best practices, experiences, methodology, presentations, theory, etc.; contribute to the follow-up to the World Summit on the Information Society (WSIS); and support the work of UNCTAD on measuring the information economy, and of the Partnership on Measuring ICT for Development.
The Measuring ICT Website is maintained by the ICT Analysis Section of UNCTAD. The Section is part of the Science, Technology and ICT Branch, in the Division on Technology and Logistics.
Galilee International Management Institute
Training Institutions, Public Administration Schools, Training Material
ICT for MDGs, Knowledge Management in Government, Citizen Engagement, Institution and HR Management
Based in beautiful northern Israel, the Galilee Institute is a leading public training institution, offering advanced leadership, management and capacity building seminars to professionals from more than 160 transitional and industrialised countries around the world. The institute enjoys a global reputation as a top management institute, and to date, more than 10,000 senior managers, administrators and planners have graduated from the international programmes at the institute. In addition to its regularly scheduled seminars, the institute also offers tailor-made training programmes, designed to meet the requirements of governments and other international organisations. All programmes are available in English, French, Spanish, Portuguese, Russian and Arabic, and other languages are available upon request.
Click here to visit Galilee International Management Institute.
Approaches to Urban Slums a Multimedia Sourcebook on Adaptive and Proactive Strategies
This sourcebook by Barjor Mehta & Arish Dastur (editors) from The World Bank, Approaches to Urban Slums: A Multimedia Sourcebook on Adaptive and Proactive Strategies, brings together the growing and rich body of knowledge on the vital issue of improving the lives of existing slum dwellers, while simultaneously planning for new urban growth in a way which ensures future urban residents are not forced to live in slums. The sourcebook's user-friendly multimedia approach and informal dialogue greatly increase the accessibility of the content, as well as the range of topics and information that are covered. Totaling over nine hours of modular viewing time, the sourcebook will be an essential resource for practitioners, policy makers, as well as students and academics. It contains the latest perspectives on the burning issues, and cutting edge approaches to dealing with the problems that afflict the living conditions of hundreds of millions of poor people. The sourcebook charts unfamiliar waters in two ways.
Education Index
Training Institutions, Public Administration Schools, Public Institutions, Statistical Databases
The Education Index at PhDs.org is the premier source of clear and educational data about undergraduate and graduate programs in the United States. We use publicly available numbers from the National Center for Education Statistics (NCES), and strive to present them in a simple and easy-to-digest way. Our desire is to make it easy for you to pick the best college you possibly can with this index: a college that fits your financial, social and educational interests and goals.
Click here to visit the Education Index.
The International Council for Caring Communities (ICCC)
The International Council for Caring Communities (ICCC) is a not-for-profit organization that has Special Consultative Status with the Economic and Social Council of the United Nations.
ICCC acts as a bridge linking government, civil society organizations, the private sector, universities and the United Nations in their efforts at sparking new ways of viewing an integrated society for all ages.
Since its inception, ICCC has been committed to the principle that private enterprises and individuals can help society improve communities and social public activities. This is one of ICCC essential goals. Twenty-three renowned world leaders since 1996 have been presented with ICCC "Caring" Awards for their contributions to society.
2014 International Student Design Competition
Music as a Global Resource: Solutions for Social and Economic Issues Compendium - Third Edition
2011 ICCC Compendium on Music As a Natural Resource
2012 International Student Design Competition Winners
2014-15/4479/en_head.json.gz/20639 | Laptop Friday, August 15, 2008
A laptop computer or laptop (also notebook computer, notebook and notepad) is a small mobile computer, typically weighing 3 to 12 pounds (1.4 to 5.4 kg), although older laptops may weigh more. Laptops usually run on a single main battery or from an external AC/DC adapter that charges the battery while it also supplies power to the computer itself, even in the event of a power failure. This very powerful main battery should not be confused with the much smaller battery nearly all computers use to run the real-time clock and backup BIOS configuration into the CMOS memory when the computer is without power. Laptops contain components that are similar to their desktop counterparts and perform the same functions, but are miniaturized and optimized for mobile use and efficient power consumption, although typically less powerful for the same price. Laptops usually have liquid crystal displays and most of them use different memory modules for their random access memory (RAM), for instance, SO-DIMM in lieu of the larger DIMMs. In addition to a built-in keyboard, they may utilize a touchpad (also known as a trackpad) or a pointing stick for input, though an external keyboard or mouse can usually be attached.
A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of some types and categories and presents the basic components of a network.Connection method Computer networks can also be classified according to the hardware technology that is used to connect the individual devices in the network such as Optical fibre, Ethernet, Wireless LAN, HomePNA, or Power line communication. Ethernet uses physical wiring to connect devices. Often deployed devices are hubs, switches, bridges, and/or routers. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves as transmission medium.Functional relationship (Network Architectures) Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., Active Networking, Client-server and Peer-to-peer (workgroup) architecture.Network topology Main article: Network Topology Computer networks may be classified according to the network topology upon which the network is based, such as Bus network, Star network, Ring network, Mesh network, Star-bus network, Tree or Hierarchical topology network, etc. Network Topology signifies the way in which devices in the network see their logical relations to one another. The use of the term "logical" here is significant. That is, network topology is independent of the "physical" layout of the network. Even if networked computers are physically placed in a linear arrangement, if they are connected via a hub, the network has a Star topology, rather than a Bus Topology. In this regard the visual and operational characteristics of a network are distinct; the logical network topology is not necessarily the same as the physical layout.Types of networks Below is a list of the most common types of computer networks in order of scale.Personal Area Network (PAN) Main article: Personal area network A personal area network (PAN) is a computer network used for communication among computer devices close to one person. Some examples of devices that are used in a PAN are printers, fax machines, telephones, PDAs or scanners. The reach of a PAN is typically within about 20-30 feet (approximately 6-9 metres). Personal area networks may be wired with computer buses such as USB[1] and FireWire. A wireless personal area network (WPAN) can also be made possible with network technologies such as IrDA and Bluetooth..Local Area Network (LAN) Main article: Local Area Network A network covering a small geographic area, like a home, office, or building. Current LANs are most likely to be based on Ethernet technology. For example, a library may have a wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and to connect to the internet. On a wired LAN, PCs in the library are typically connected by category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of interconnection devices and eventually connect to the internet. The cables to the servers are typically on Cat 5e enhanced cable, which will support IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist using a different IEEE protocol, 802.11b or 802.11g. The staff computers (bright green in the figure) can get to the color printer, checkout records, and the academic network and the Internet. All user computers can get to the Internet and the card catalog. Each workgroup can get to its local printer. 
Note that the printers are not accessible from outside their workgroup. Typical library network, in a branching tree topology and controlled access to resources All interconnected devices must understand the network layer (layer 3), because they are handling multiple subnets (the different colors). Those inside the library, which have only 10/100 Mbit/s Ethernet connections to the user device and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and academic networks' customer access routers. The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of a need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at speeds up to 10 Gbit/s. This is the data transfer rate. IEEE has projects investigating the standardization of 100 Gbit/s, and possibly 40 Gbit/s.Campus Area Network (CAN) Main article: Campus Area Network A network that connects two or more LANs but that is limited to a specific and contiguous geographical area such as a college campus, industrial complex, or a military base. A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to an area that is smaller than a typical MAN. This term is most often used to discuss the implementation of networks for a contiguous area. This should not be confused with a Controller Area Network. A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet.Metropolitan Area Network (MAN) Main article: Metropolitan Area Network A Metropolitan Area Network is a network that connects two or more Local Area Networks or Campus Area Networks together but does not extend beyond the boundaries of the immediate town/city. Routers, switches and hubs are connected to create a Metropolitan Area Network.Wide Area Network (WAN) Main article: Wide Area Network A WAN is a data communications network that covers a relatively broad geographic area (i.e. one city to another and one country to another country) and that often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.Global Area Network (GAN) Main article: Global Area Network Global area networks (GAN) specifications are in development by several groups, and there is no common definition. In general, however, a GAN is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is "handing off" the user communications from one local coverage area to the next. 
In IEEE Project 802, this involves a succession of terrestrial Wireless local area networks (WLAN).[2]Internetwork Main article: Internetwork Two or more networks or network segments connected using devices that operate at layer 3 (the 'network' layer) of the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork. In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants of internetwork, depending on who administers and who participates in them: IntranetExtranetInternet Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the intranet or extranet is normally protected from being accessed from the Internet without proper authorization. The Internet is not considered to be a part of the intranet or extranet, although it may serve as a portal for access to portions of an extranet.Intranet Main article: Intranet An intranet is a set of interconnected networks, using the Internet Protocol and uses IP-based tools such as web browsers and ftp tools, that is under the control of a single administrative entity. That administrative entity closes the intranet to the rest of the world, and allows only specific users. Most commonly, an intranet is the internal network of a company or other enterprise. A large intranet will typically have its own web server to provide users with browseable information.Extranet Main article: Extranet An extranet is a network or internetwork that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other usually, but not necessarily, trusted organizations or entities (e.g. a company's customers may be given access to some part of its intranet creating in this way an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.Internet Main article: Internet A specific internetwork, consisting of a worldwide interconnection of governmental, academic, public, and private networks based upon the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense – also home to the World Wide Web (WWW) and referred to as the 'Internet' with a capital 'I' to distinguish it from other generic internetworks. Participants in the Internet use the Internet Protocol Suite and IP Addresses allocated by address registries. Service providers and large enterprises exchange information about the reachability of their address ranges through the Border Gateway Protocol (BGP).Basic Hardware Components All networks are made up of basic hardware building blocks to interconnect network nodes, such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers. In addition, some method of connecting these building blocks is required, usually in the form of galvanic cable (most commonly Category 5 cable). 
Less common are microwave links (as in IEEE 802.11) or optical cable ("optical fiber").

Network Interface Cards
A network card, network adapter, or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.

Repeaters
A repeater is an electronic device that receives a signal and retransmits it at a higher level or higher power, or onto the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters.

Hubs
A hub contains multiple ports. When a packet arrives at one port, it is copied to all the ports of the hub for transmission. When the packets are copied, the destination address in the frame does not change to a broadcast address. The hub does this in a rudimentary way: it simply copies the data to all of the nodes connected to the hub.[3]

Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received. Bridges learn the association of ports and addresses by examining the source address of the frames they see on various ports: when a frame arrives through a port, its source address is stored and the bridge assumes that MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which the frame arrived. (A minimal sketch of this learning rule appears after the Switches entry below.) Bridges come in three basic types:

Local bridges: directly connect local area networks (LANs).
Remote bridges: can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced by routers.
Wireless bridges: can be used to join LANs or connect remote stations to LANs.

Switches
A switch is a device that performs switching. Specifically, it forwards and filters OSI layer 2 datagrams (chunks of data communication) between ports (connected cables) based on the MAC addresses in the packets.[4] This is distinct from a hub in that it only forwards the datagrams to the ports involved in the communications rather than to all ports connected. Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3), which is necessary for communicating between network segments or within a large or complex LAN. Some switches are capable of routing based on IP addresses but are still called switches as a marketing term.
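The MAC-learning rule described for bridges above (and used by switches as well) can be modeled in a few lines. This is a toy illustration only, not the behavior of any particular vendor's firmware, and the port numbers and MAC addresses are made up.

```python
class LearningBridge:
    """Toy model of the MAC-learning rule described above."""

    def __init__(self, ports):
        self.ports = ports          # e.g., [1, 2, 3, 4]
        self.mac_table = {}         # learned MAC address -> port

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: associate the frame's source address with the arrival port.
        self.mac_table[src_mac] = in_port

        if dst_mac in self.mac_table:
            # Known destination: forward only to the associated port.
            return [self.mac_table[dst_mac]]
        # Unknown destination (or a broadcast): flood to every port
        # except the one the frame arrived on.
        return [p for p in self.ports if p != in_port]


bridge = LearningBridge(ports=[1, 2, 3, 4])
print(bridge.handle_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", in_port=1))  # flooded: [2, 3, 4]
print(bridge.handle_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", in_port=2))  # [1], learned earlier
```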
A switch normally has numerous ports, with the intention that most or all of the network be connected directly to a switch, or to another switch that is in turn connected to a switch.[5] "Switch" is a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch. Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device selection is an advanced topic that may lead to selecting particular implementations, but multilayer switching is simply not a real-world design concept.

Routers
Routers are networking devices that forward data packets between networks using headers and forwarding tables to determine the best path to forward the packets. Routers work at the network layer of the TCP/IP model, or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media (RFC 1812). This is accomplished by examining the header of a data packet and making a decision on the next hop to which it should be sent (RFC 1812). They use preconfigured static routes, the status of their hardware interfaces, and routing protocols to select the best route between any two subnets. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Some DSL and cable modems, for home (and even office) use, have been integrated with routers to allow multiple home or office computers to access the Internet through the same connection. Many of these newer devices also include wireless access points (WAPs) or wireless routers to allow IEEE 802.11b/g wireless-enabled devices to connect to the network without the need for a cabled connection.
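The forwarding-table lookup that routers perform can likewise be sketched briefly. This is an illustrative longest-prefix-match toy, not an implementation of RFC 1812, and every prefix and next-hop address in it is an invented example.

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next hop (all addresses invented).
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"):   "10.255.0.1",
    ipaddress.ip_network("10.1.2.0/24"):  "10.1.2.254",
    ipaddress.ip_network("0.0.0.0/0"):    "192.0.2.1",   # default route to the ISP
}

def next_hop(destination: str) -> str:
    """Pick the longest (most specific) prefix that contains the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.33"))    # 10.1.2.254 (the /24 wins over the /8)
print(next_hop("10.9.9.9"))     # 10.255.0.1
print(next_hop("203.0.113.5"))  # 192.0.2.1 via the default route
```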
Hardware is a general term that refers to the physical artifacts of a technology. It may also mean the physical components of a computer system, in the form of computer hardware. Hardware historically meant the metal parts and fittings that were used to make wooden products stronger, more functional, longer lasting and easier to fabricate or assemble. In modern usage it includes equipment such as keys, locks, hinges, latches, corners, handles, wire, chains, plumbing supplies, tools, utensils, cutlery and machine parts, especially when they are made of metal. In the United States, this type of hardware has been traditionally sold in hardware stores, a term also used to a lesser extent in the UK. In a more colloquial sense, hardware can refer to major items of military equipment, such as tanks, aircraft or ships. In slang, the term refers to trophies and other physical representations of awards.
Computer software, or just software, is a general term used to describe a collection of computer programs, procedures and documentation that perform some tasks on a computer system. The term includes application software, such as word processors, which perform productive tasks for users; system software, such as operating systems, which interface with hardware to provide the necessary services for application software; and middleware, which controls and co-ordinates distributed systems. "Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.
An e-book (for electronic book; also ebook) is the digital media equivalent of a conventional printed book. Such documents are usually read on personal computers or on dedicated hardware devices known as e-book readers or e-book devices.
While most of the GDC speeches we cover concern game design and production, the conference also features speakers on everything from audio to art to business & legal issues. One of this year's business lectures was "Follow the Money: Understanding Console Publishers," where Bill Swartz of new publisher Mastiff Games laid out what you need to know to start a game publishing company.
For his presentation, Swartz put together various slides showing where the money goes for a hypothetical game. While not based on an actual title, Swartz said his example is based on a "pretty real game," and compared it to an equivalent of Bloody Roar.
Part of the problem with putting together an example, Swartz said, is that this is a hit-driven industry. Games will either bomb and sell 40,000 copies or less, or do extremely well with 300,000 or more sold. Because of this, an example like Bloody Roar is rare, since that is a game that will sell around 90,000 -- it hits the "average" that doesn't tend to exist most of the time. However, Swartz claims that the percentages seen in his mock-up would not change drastically for a game that sold 900,000 copies, so the example should hold true for most games.
Swartz showed a breakdown of how much publishers, wholesalers, and retailers can make, as well as what risks they face.
In his example, Swartz showed that a publisher can clear seven dollars on a game, but only one of every five games will sell enough copies to make money, since publishers have to consider things like taking back inventory that doesn't sell through to customers. They have to be smart about the number of copies they ship into the market. Wholesalers have a smaller amount they can make on a single game, and face risks such as dealing with retailer payments. Retailers can sometimes make good money on a single game, but that margin drops when the game price falls.
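Some rough arithmetic helps put those figures in perspective. The sketch below simply applies the seven-dollar-per-copy publisher take to the sales tiers mentioned earlier in the article; it is illustrative only and ignores returns, falling prices, and the four out of five titles that never earn back their costs.

```python
# Rough per-title arithmetic from the talk's numbers: the publisher's
# seven-dollar cut per copy, applied to the sales tiers mentioned above.
publisher_cut = 7.00   # dollars per copy sold

sales_tiers = {
    "bomb (40,000 copies or less)": 40_000,
    "the rare 'average' title":      90_000,
    "hit (300,000 or more)":        300_000,
}

for label, units in sales_tiers.items():
    print(f"{label:32s} -> about ${units * publisher_cut:,.0f} to the publisher")
```

Even the rare "average" 90,000-unit title only clears a few hundred thousand dollars for the publisher before any of the risks Swartz listed are taken into account.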
Swartz went on to discuss each part of the market you have to consider before publishing a product, noting the amounts you can expect to spend on distribution, retail advertising, royalties, etc. once the game is complete.
A chart of the distribution, production, and royalty costs.
A closer look at the total cost and net profit of a moderately successful game.
Beyond numbers, Swartz gave a few tips to aspiring game publishers. He said to look at Intellectual Property as nothing more than advertising, because paying for a license serves the same purpose as marketing -- it gets the word out and attracts customers. He also advised that you "don't commit more than one crime," with those crimes being 1) new engine/technology, 2) new Intellectual property, and 3) new development team. More than one of these makes the game a risk for the publisher.
Swartz finished his speech by pleading with the audience to not be "slimy" when it comes to making payments and doing business deals. Swartz has seen way too much of this type of behavior in the past, and hopes developers won't have to worry about whether their next payment will come in on time every couple of months -- comments that were met with quite a few head nods in the audience.
Forgoing features for speed has its trade-offs as these NoSQL data store shortcomings show
Peter Wayner (InfoWorld)
The NoSQL buzzword has been metastasizing for several years. The excitement about these fast data stores has been intoxicating, and we're as guilty as anyone of seeing the groundbreaking appeal of NoSQL. Yet the honeymoon is coming to an end, and it's time to start balancing our enthusiasm with some gimlet-eyed hard truths.

Don't get us wrong. We're still running to try the latest experiment in building a simple mechanism for storing data. We still find deep value in MongoDB, CouchDB, Cassandra, Riak, and other NoSQL standouts. We're still planning on tossing some of our most trusted data into these stacks of code because they're growing better and more battle-tested each day.

But we're starting to feel the chafing, as the NoSQL systems are far from a perfect fit and often rub the wrong way. The smartest developers knew this from the beginning. They didn't burn the SQL manuals and send nastygrams to the sales force of their once devoted SQL vendor. No, the smart NoSQL developers simply noted that NoSQL stood for "Not Only SQL." If the masses misinterpreted the acronym, that was their problem.

NoSQL hard truth No. 1: JOINs mean consistency
One of the first gripes people have about SQL systems is the computational cost of executing a JOIN between two tables. The idea is to store the data in one and only one place. If you're keeping a list of customers, you put their street addresses in one table and use their customer IDs in every other table. When you pull the data, the JOIN connects the IDs with the addresses and everything remains consistent.

The trouble is that JOINs can be expensive, and some DBAs have concocted complex JOIN commands that boggle the mind, turning even the fastest hardware to sludge. It was no surprise that the NoSQL developers turned their lack of JOINs into a feature: Let's just keep the customer's address in the same table as everything else! The NoSQL way is to store key-value pairs for each person. When the time comes, you retrieve them all.

Alas, people who want their tables to be consistent still need JOINs. Once you start storing customers' addresses with everything else about them, you often end up with multiple copies of those addresses in each table. And when you have multiple copies, you need to update them all at the same time. Sometimes that works, but when it doesn't, NoSQL isn't ready to help with transactions. Wait, you say, why not have a separate table with the customer's information? That way there will only be one record to change. It's a great idea, but now you get to write the JOIN yourself in your own logic.

NoSQL hard truth No. 2: Tricky transactions
Let's say you're OK to live without JOINing tables because you want the speed. It's an acceptable trade-off, and sometimes SQL DBAs denormalize tables for just this reason. The trouble is that NoSQL makes it hard to keep the various entries consistent. There are often no transactions to make sure that changes to multiple tables are made together. For that, you're on your own, and a crash could ensure that tables turn inconsistent.
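A deliberately simplified sketch of that trade-off: a plain Python dictionary stands in for an imaginary key-value store (this is not any real NoSQL API), and the "JOIN" becomes application code.

```python
# Toy key-value "tables": nothing here is a real NoSQL API.
customers = {
    "cust-1": {"name": "Ada",   "address": "12 Relay St."},
    "cust-2": {"name": "Grace", "address": "7 Harbor Ave."},
}
orders = {
    "ord-10": {"customer_id": "cust-1", "total": 42.50},
    "ord-11": {"customer_id": "cust-2", "total": 19.99},
    "ord-12": {"customer_id": "cust-1", "total":  5.00},
}

# The hand-rolled "JOIN": for every order, fetch the matching customer record.
# Keeping the address in one place preserves consistency, but the joining
# logic (and its efficiency) is now the application's problem.
report = [
    {"order": oid, "total": o["total"], "ship_to": customers[o["customer_id"]]["address"]}
    for oid, o in orders.items()
]
for row in report:
    print(row)
```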
The earliest NoSQL implementations thumbed their nose at these transactions. They would offer data listings that were consistent, except when they weren't. In other words, they went after the lowest-value data where errors wouldn't make any material difference.

Now some NoSQL implementations offer something approaching a transaction. Oracle's NoSQL product, for instance, offers transactional control over data written to one node and lets you choose a flexible amount of consistency across multiple nodes. If you want perfect consistency, you have to wait for each write to reach all nodes. Several other NoSQL data stores are experimenting with adding more structure and protection like this.

NoSQL hard truth No. 3: Databases can be smart
Many NoSQL programmers like to brag about how their lightweight code and simple mechanism work extremely quickly. They're usually right when the tasks are as simple as the insides of NoSQL, but that changes when the problems get harder.

Consider the old challenge of a JOIN. Once NoSQL programmers start generating their own JOIN commands in their own logic, they start to try to do this efficiently. SQL developers have spent decades developing sophisticated engines to handle JOIN commands as efficiently as possible. One SQL developer told me he was trying to synchronize his code with the spinning hard disk so that he would request data only when the head was just above the right spot. This may seem extreme, but SQL developers have been working on similar hacks for decades.

There's no doubt that programmers spend days pulling out their hair trying to structure their SQL queries to take advantage of all of this latent intelligence. It may not be simple to tap, but when the programmer figures it out, the databases can really sing. A sophisticated query language like SQL always has the potential to outshine an unsophisticated query language like those found in NoSQL. It may not matter with simple results, but when the action becomes complex, the SQL is being executed on the machine right next to the data. It has little overhead fetching the data and doing the work. A NoSQL server usually has to ship the data to where it's going.

NoSQL hard truth No. 4: Too many access models
In theory, SQL is supposed to be a standard language. If you use SQL for one database, you should be able to run the same query in another compliant version. This claim may work with a few simple queries, but every DBA knows that it can take years to learn the idiosyncrasies of SQL for different versions of the same database. Keywords are redefined, and queries that worked on one version won't work with another.

NoSQL is even more arcane. It's like the Tower of Babel. Since the beginning, NoSQL developers have each tried to imagine the best language possible, but they have very different imaginations. This hotbed of experimentation is good -- until you try to jump between tools. A query for CouchDB is expressed as a pair of JavaScript functions for mapping and reducing. Early versions of Cassandra used a raw, low-level API called Thrift; newer versions offer CQL, an SQL-like query language that must be parsed and understood by the server. Each one is different in its own way. Each tool doesn't just have its own idiosyncrasies, it sports an entirely different philosophy and way of expressing it. There are no easy ways to switch between data stores and you're often left writing tons of glue code just to give yourself the option of switching in the future.
This may not be too difficult when you're stuffing pairs of keys and values into the system, but it can grow increasingly aggravating the more complexity you introduce.

NoSQL hard truth No. 5: Schema flexibility is trouble waiting to happen
One of the great ideas from the NoSQL model is not requiring a schema. In other words, programmers don't need to decide in advance which columns will be available for each and every row in a table. One entry may have 20 strings attached to it, another may have 12 integers, and another might be completely blank. The programmers can make the decision whenever they need to store something. They don't need to ask permission of the DBA, and they don't need to fill out all the paperwork to add a new column.

All that freedom sounds intoxicating, and in the right hands it can speed development. But is it really a good idea for a database that might live through three teams of developers? Is it even workable for a database that might last beyond six months? In other words, the developers might want the freedom to toss any old pair into a database, but do you want to be the fifth developer to come along after four have chosen their own keys? It's easy to imagine a variety of representations of "birthday," with each developer choosing his or her own representation as a key when adding a user's birthday to an entry. A team of developers might imagine almost anything: "bday," "b-day," "birthday". The NoSQL structure offers no support to limit this problem because that would mean reimagining the schema. It doesn't want to harsh on the mellow of the totally cool developers. A schema would get in the way.

The fact is that adding a column to a table isn't a big deal, and the discipline might actually be good for the developer. Just as it helps to force developers to designate variable types, it also helps to force developers to designate the type of data attached to a column. Yes, the DBA may force the developer to fill out a form in triplicate before attaching that column, but it's not as bad as dealing with a half-dozen different keys created on the fly by a programmer.

NoSQL hard truth No. 6: No extras
Let's say you don't want all of the data in all of the rows, and you want the sum of a single column. SQL users can execute a query with the SUM operation and send one -- just one -- number back to you. NoSQL users get all of the data shipped back to them and can then do the addition themselves. The addition isn't the problem because it takes about the same amount of time to add up the numbers on any machine. However, shipping the data around is slow, and the bandwidth required to ship all that data can be expensive.

There are few extras in NoSQL databases. If you want to do anything but store and retrieve data, you're probably going to do it yourself. In many cases, you're going to do it on a different machine with a complete copy of the data. The real problem is that it can often be useful to do all of the computation on the machine holding the data because shipping the data takes time. But tough for you.

NoSQL solutions are emerging. The Map and Reduce query structure from MongoDB gives you arbitrary JavaScript structure for boiling down the data. Hadoop is a powerful mechanism for distributing computation throughout the stack of machines that also holds the data. It is a rapidly evolving structure that offers rapidly improving tools for building sophisticated analysis. It's very cool, but still new.
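Hard truth No. 5 in miniature: the records and key spellings below are invented, but they show the kind of guard code a team ends up writing once three developers have each picked their own name for the same field.

```python
# Three developers, three spellings: exactly the "bday"/"b-day"/"birthday"
# drift described above (the records and keys are invented).
users = [
    {"name": "Ada",   "bday": "1990-12-10"},
    {"name": "Grace", "b-day": "1986-12-09"},
    {"name": "Linus", "birthday": "1989-12-28"},
]

# Without a schema, reading the value back means guessing every variant.
BIRTHDAY_ALIASES = ("birthday", "bday", "b-day")

def get_birthday(record):
    for key in BIRTHDAY_ALIASES:
        if key in record:
            return record[key]
    return None

for user in users:
    print(user["name"], "->", get_birthday(user))
```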
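And hard truth No. 6 in miniature: with SQL, a query such as SELECT SUM(total) FROM orders returns a single number, whereas a bare key-value store ships every row to the client, which then does the arithmetic itself. The rows below are invented.

```python
# With SQL, one number crosses the wire:
#   SELECT SUM(total) FROM orders;
# With a bare key-value store, every row crosses the wire first.
fetched_rows = [
    {"order": "ord-10", "total": 42.50},
    {"order": "ord-11", "total": 19.99},
    {"order": "ord-12", "total":  5.00},
]

# Client-side aggregation: cheap CPU-wise, expensive bandwidth-wise,
# because the whole result set had to be shipped to this machine first.
grand_total = sum(row["total"] for row in fetched_rows)
print(f"Grand total computed at the client: {grand_total:.2f}")
```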
And technically Hadoop is an entirely different buzzword than NoSQL, though the distinction between them is fading.

NoSQL hard truth No. 7: Fewer tools
Sure, you can get your NoSQL stack up and running on your server. Sure, you can write your own custom code to push and pull your data from the stack. But what if you want to do more? What if you want to buy one of those fancy reporting packages? Or a graphing package? Or to download some open source tools for creating charts?

Sorry, most of the tools are written for SQL databases. If you want to generate reports, create graphs, or do something with all of the data in your NoSQL stack, you'll need to start coding. The standard tools come ready to snarf data from Oracle, Microsoft SQL, MySQL, and Postgres. Your data is in NoSQL? They're working on it. And they'll be laboring on it for a bit. Even if they jump through all of the hoops to get up and running with one of the NoSQL databases, they'll have to start all over again from the beginning to handle the next system. There are more than 20 different NoSQL choices, all of which sport their own philosophy and their own way of working with the data. It was hard enough for the tool makers to support the idiosyncrasies and inconsistencies in SQL, but it's even more complicated to make the tools work with every NoSQL approach.

This is a problem that will slowly go away. The developers can sense the excitement in NoSQL, and they'll be modifying their tools to work with these systems, but it will take time. Maybe then they'll start on MongoDB, which won't help you because you're running Cassandra. Standards help in situations like this, and NoSQL isn't big on standards.

NoSQL shortcomings in a nutshell
All of these NoSQL shortcomings can be reduced to one simple statement: NoSQL tosses away functionality for speed. If you don't need the functionality, you'll be fine, but if you need it in the future, you'll be sorry.

Revolutions are endemic to tech culture. A new group comes along and wonders why the last generation built something so complex, and they set out to tear down the old institutions. After a bit, they begin to realize why all of the old institutions were so complex, and they start implementing the features once again. We're seeing this in the NoSQL world, as some of the projects start adding back things that look like transactions, schemas, and standards. This is the nature of progress. We tear things down only to build them back again. NoSQL is finished with the first phase of the revolution and now it's time for the second one. The king is dead. Long live the king.
This story, "7 hard truths about the NoSQL revolution," was originally published at InfoWorld.com.
The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
New Directions in Risk: A Success-Oriented Approach (2009)
presented in San Jose, California, at the 21st Annual SEPG North America 2009 conference March 23-26, 2009
Practical Risk Management: Framework and Methods | 计算机 |
Daylight Saving Time and Sybase Server Products Summary Most Sybase Server Products do not have any direct support for handling daylight saving time. This document examines the issues and suggests ways to avoid problems. Sybase recommends, as a best practice, running Sybase Servers under the Coordinated Universal Time (UTC) standard and having all conversions to local time zones (including daylight saving time adjustments) performed by the client applications. Additionally, applications written in Java that employ unpatched versions of the Java Developer's Kit/Java Runtime Environment may be at risk of incorrectly reporting the local-time offset from UTC. How Sybase Products Use Time and Date Sybase Servers support the use of date and time data through the datetime and smalldatetime datatypes (and, with newer server, the date and time datatypes), as well as the getdate(), dateadd(), datediff(), and datepart() functions. The getutcdate() function was also added to some servers to provide the current datetime value in Coordinated Universal Time regardless of the time zone the server is otherwise running under. The datetime and smalldatetime datatypes, however, do not store time zone information and the products are entirely ignorant of the concepts of time zones and daylight saving time. Sybase Servers only recognize and store the date and time portions of the values provided by the operating system, which are based on the time zone configured at the operating system level (typically though the TZ environment variable setting in Unix or the Date/Time function of the Windows Control Panel) for the user who started the product. The calculations behind the dateadd and datediff functions are aware of leap years (using the rule of every 4th year, except for every 100th year, except for every 400th year), but do not include any adjustments for leap seconds or transitions from daylight saving time to regular time. Most Sybase Servers usually have two sources for datetime values. The getdate() and getutcdate() functions always make a call to the operating system to get the current time with the greatest accuracy. Most Servers also maintain an internal clock, which it uses to avoid the overhead of making a system call in cases where strict accuracy is less important. For example, Sybase ASE relies on its own clock for the password change date field pwdate in syslogins, creation and modification date fields in system catalogs, and begin and commit transaction times (which are visible in the syslogshold table, and used for the with until_time option of load transaction). The internal clock is initialized at start time with the current value of the operating system clock and incremented based on regular SIGALRM signals from the operating system (typically 10 per second). Once a minute, Sybase ASE polls the operating system clock to get the current time. The two clocks sometimes fall out of synchronization. When this happens, Sybase ASE speeds up or slows down the internal clock to minimize the difference with the operating system clock. The Effect of Daylight Saving Time Most UNIX systems actually run on UTC; there are no daylight saving adjustments in the UTC definition. Such adjustments are taken care of in applications, normally by calling OS library functions. If in effect, daylight saving time causes a large discontinuous jump in the time value received from the OS, typically either forward by an hour or backwards by an hour. 
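A small illustration of the recommended division of labor, with values kept in UTC on the server side and converted to local wall-clock time only by the client, using Python's standard zoneinfo data. The time zone and timestamps are arbitrary examples, not Sybase code; they happen to straddle the March 11, 2007 U.S. spring-forward transition discussed below.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")   # example client time zone

# Two UTC timestamps that straddle a US spring-forward transition.
t1 = datetime(2007, 3, 11, 6, 30, tzinfo=timezone.utc)
t2 = datetime(2007, 3, 11, 7, 30, tzinfo=timezone.utc)

# Stored and compared in UTC, the elapsed time is unambiguous:
print("Elapsed in UTC:", t2 - t1)                      # 1:00:00

# The client converts for display; the local clock jumps from
# 01:30 EST to 03:30 EDT even though only one hour really passed.
print("Local:", t1.astimezone(eastern), "->", t2.astimezone(eastern))

# Comparing the naive local wall-clock values (what a server running on
# local time effectively stores) would suggest two hours elapsed:
local_naive_1 = t1.astimezone(eastern).replace(tzinfo=None)
local_naive_2 = t2.astimezone(eastern).replace(tzinfo=None)
print("Apparent elapsed time from wall clocks:", local_naive_2 - local_naive_1)
```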
The getdate() function, which gets its information directly from the operating system, immediately picks up this change. However, the server cannot immediately synchronize its internal clock. System-generated datetime values, such as crdates in sysobjects, pwdate in syslogins, and begin tran and commit tran times in log records and the syslogshold table, will not match the new operating system clock, though the difference will decrease over time. Use of load tran with until_time is also affected by this: until the internal clock has synchronized with the operating system clock, the "until time" is not accurate. The various common date and time functions are also unaware of daylight saving time. For example, if you use the datediff() function with values that cross one or more daylight saving time boundaries, the results are not adjusted for this change.

Recent Changes to Daylight Saving Time in the USA
In the USA, a law named the Energy Policy Act was passed which altered the starting and ending dates of Daylight Saving Time by 4 weeks starting in March of 2007. Sybase Servers do not contain any built-in knowledge of daylight saving time; the adjustments are made based on OS libraries and function calls. Presuming the Sybase product is running under daylight saving time, these products will reflect these changes if the OS has been updated.

Time Conversions Done in the Java JDK/JRE
The Java Runtime Environment (JRE) contains library functions for time conversion from Coordinated Universal Time (also referred to as GMT) to local time, including any Daylight Saving Time compensation. These library functions were developed prior to the United States Energy Policy Act of 2005 and may not include the changes to Daylight Saving Time in US time zones that result from that act. Patches to the JDK/JRE will be issued as necessary to ensure these functions are updated by both Sun and Sybase Inc. The following advice is provided by Sun Microsystems: The Java Runtime Environment (JRE) stores rules about DST observance all around the globe. Older JREs will have outdated rules that will be superseded by the Energy Policy Act of 2005. As a result, applications running on an older JRE may report incorrect time from March 11, 2007 through April 2, 2007 and from October 29, 2007 through November 4, 2007.

Solutions for Java Applications: If you are concerned about application failures that may result from these DST changes, you should update your Java Runtime Environment. The following Java platform versions have correct time rules to handle the DST changes that will affect U.S. time zones in 2007. You can download any of the following Java platform versions to resolve this DST issue:

JDK 6 Project (beta)
J2SE 5.0 Update 6 or later
J2SE 1.4.2_11 or later
Testing Daylight Saving Time Changes
Testing Daylight Saving Time changes is non-trivial. Simply changing the OS clock forward or back an hour is a poor test, as it changes the root OS time while the time zone adjustments (made through calls to the OS libraries) are being bypassed. Testing should probably be done on dedicated machines where the system clock can be changed to values just before the transition time into or out of daylight saving time, and the application (i.e., the Sybase product) started and allowed to run as the OS clock advances through the transition.

Best Practice
To avoid such issues, the best practice is to run Sybase products so that they are not subject to daylight saving time, i.e., run them on a constant clock, such as Greenwich Mean Time (GMT) or Coordinated Universal Time (UTC). You can use one of these standards for all datetime values in the server and let clients be responsible for conversions to their local time zone, including adjustments for daylight saving time. The dateadd() and datediff() calculations will then be correct even if the values in question span the change in or out of daylight saving time. The choice of which time zone to run the server under is unfortunately often based on where the company's headquarters are at the time the server is first created, which usually works well for small companies that don't have operations in other time zones. However, headquarters are sometimes moved, and small companies can grow to become global companies. Establishing a company standard early on that the server runs under UTC and all clients are responsible for translating from UTC to local time can save a great deal of trouble in the future.

What Sybase Customers Should Do
Customers should contact their operating system supplier for the updates for their particular operating system. Some Sybase products bundle the JDK/JRE. Customers using these products should get the updated JDK/JRE from the supplier or get the Sybase product update that bundles the fixed JDK/JRE. Click here for a complete list of Sybase products and instructions - Last Updated 6th March 2007.

Additional Workaround
If you cannot avoid making daylight saving time adjustments, the best practice is to shut down the product before the operating system clock is reset; you can restart immediately after the clock is reset. This is precautionary; changing the clock while the product is running usually does not cause problems. However, unpredictable effects can occur, for example, WAITFOR commands or dumps hanging, or SIGALRMs not being received by the product. If a precautionary shutdown of the product while the clock is changed is not possible and such problems are subsequently seen, the product can be restarted at any convenient time to reinitialize the internal clock. To assist you, Sybase Engineers have prepared the attached matrix of products and indicated their dependency on Operating System time information. Where applicable, known operating system patches have been recommended.

Additional Recommended Reading:
Calendrical Calculations by Dershowitz and Reingold, Cambridge University Press. ISBN 0-521-56474-3
Developing Time-Oriented Database Applications in SQL by Snodgrass, Morgan Kaufmann Publishers. ISBN 1-55860-436-7
Sun Developer Article: "U.S. Daylight Saving Time Changes in 2007" by O'Conner: http://java.sun.com/developer/technicalArticles/Intl/USDST/
Unix MAN pages on "TIMEZONE(4)"
Copyright © 2006 Sybase, Inc. All rights reserved. DOCUMENT ATTRIBUTES
Last Revised: Mar 08, 2007
Product: EAServer, Open Server, Replication, Replication Server, Data Integration Suite, Adaptive Server Enterprise
Technical Topics: Troubleshooting
Content Id: 1048699 | 计算机 |
By Brad Chacos, LAPTOP Contributor
| Oct 2, 2012 11:39 AM EDT
With less than a month to go until the official launch of Windows 8, consumers don’t appear to be jumping on the Metro… oops, Modern bandwagon with as much enthusiasm as they had making the switch from Vista to Windows 7.
Only 0.33 percent — or 33 out of every 10,000 PCs — currently run a Preview version or RTM trial of Windows 8, Computerworld reports, citing statistics from metrics firm Net Applications. At the same point to the release of Windows 7, 1.64 percent of all Windows PCs were running the upcoming operating system. That’s a full five times more than Windows 8 adoptees — and the gap between early Windows 7 adoption and early Windows 8 adoption is actually increasing as October 26th draws closer.
Windows 8's current adoption numbers stand at the same percentage Windows 7 held six months before that operating system's launch. At that point, the Windows 7 Release Candidate wasn't even available yet and the final RTM version was a far-off milestone.
Microsoft hopes to stimulate sales of the new operating system with an aggressive pricing structure out of the gate for early buyers, highlighted by a Windows 8 Pro upgrade that costs just $40 for current Windows users.
Simple customer satisfaction may be part of the reason for consumer hesitation; Windows 7 was the follow up to the widely panned Windows Vista. Many people consider Windows 7 to be the best version of Windows ever released, while early reaction to Windows 8′s tiled, touch-focused interface has been decidedly mixed, with one expert going so far as to call the Modern/Desktop switching “a cognitive burden.”
The new UI is sure to be a stumbling point for adoption rates. The question is whether the mass mainstream market will be willing take the time to get used to a completely new design.
Tags: Microsoft Windows 8, Windows 8, operating systems, operating system, Windows 7, Windows, Microsoft
Agasicles Says:
October 2nd, 2012 at 12:46 pm There might be some reasons for this delta. I am only offering my own perspective, and suggesting that maybe some of the 1.3% delta might be due to some of the same reasoning.
When the Win7 pre-release was made available, it was largely preferable to the on-market version of Vista. Today, I am satisfied with Windows 7. I know eventually I am going to have to go over to Win8. But that will happen naturally for me like it will with most people…when I have to upgrade to a new PC. I do not think most people upgrade to a new PC because of a new Windows OS. Sometimes you delay an upgrade, and sometimes you just upgrade the OS itself. But most times it happens just because the market does not stock new PC’s with the previous version of the OS, and it becomes time for you to buy a new PC. So without a Windows Vista or Me to run away from, I have not been compelled to test-drive the new OS. I can wait for the first Service Pack, in fact, like I’ve done for every other Windows release other than Win7. I think the downloadable pre-release availability for Win7 was the first time that was feasible, at least for the general public. The paradigm is still somewhat new. It was popular then because of the negativity around Vista. Without that goblin to push people to download the pre-release, maybe we are just all back to our normal adoption and upgrade paradigms.
– Vr/A. Stamas
October 3rd, 2012 at 6:55 am I’ve got to agree with Agasicles, I’m surely interested in windows 8 but I don’t really need an upgrade at the moment. A lot of people tried out the Windows 7 pre-releases because that’s the first time anyone could do it, and if it meant getting away from Vista then it was probably the right move. But people are happy with 7, so there’s not going to be the same desperate rush as there was with 7. I expect it will end up being just as successful as 7 come service pack 1, and by then I will probably be in the mood for an upgrade. | 计算机 |
Questions about Windows 7? Ask @MicrosoftHelps on Twitter
Posted: Oct 21, 2009 at 12:55 PM
By: Sarah Perez
Yes, it’s official. @MicrosoftHelps is in fact a real Microsoft-owned Twitter account for Microsoft Customer Service. Recently launched but given little fanfare, several bloggers and Twitter users were questioning whether or not this account was genuine and, if so, what kind of questions it was designed to help out with. Now we have the answers.The primary purpose for the @MicrosoftHelps account is to help you find the resources you need for Microsoft products and services. At launch time, the initial scope will focus solely on Windows 7. It will also be an English-only resource for now. Before launching the account, the company spoke with members from Best Buy’s social media team in order to learn from their experience with their own @twelpforce account, a Best Buy service where users can ask questions about the hardware, software, and services Best Buy provides. While similar in spirit to @twelpforce, the @MicrosoftHelps account is different because it will only focus on Microsoft products and services and initially only Windows 7. Also, the company says they don’t anticipate being able to respond to each and every question they receive. However, the team behind the account will be providing answers to customer questions about Windows 7 and if those questions are of a complex nature, they will direct those asking to the appropriate forums where in-house experts and Microsoft MVPs will be able to help. The company’s goal with the new account is to help respond to and engage with customers on the platform where so many are now choosing to participate, Twitter. Microsoft has seen a lot of people tweeting about Microsoft products and services – both good and bad – and wanted to provide a resource where customers could get easy, accurate answers. They hope that by doing so, they’ll be able to improve customers’ experience with the company by providing support, information, and even product updates if needed. Although initially the focus is on Windows 7, we’re told that Microsoft will “continue to assess the value” of using Twitter in this way going forward. Hopefully that means they’ll expand their scope beyond OS questions in the future. Tags:
Twitter, Windows 7
| PC System: PC, PS3, X360, Wii, DS
Despite all of the added features and persistent world, The Sims 3 oddly leaves out some content that should have really been included. Add-ons like season weather, which was included in one of The Sims 2's expansion packs, aren't present here. So, while The Sims 3 does provide a lot of content, there are some details that were accounted for in previous installments that aren't represented.
Another missing component is an intimate view of your career. When players send their sim off to work, they can watch as far as the door to the building, at which point the sim enters and the player is given an aerial view of the building until the sim's shift ends. Behavior options are available while the sim is at work, which affects the productivity and chances of promotion. For example, as a police officer, the player can set their sim to "Chat with Partner," which improves their relationship with their partner, but doesn't increase their chances of promotion as quickly as say setting their behavior to "Work hard." This system does give the player some control over their sims' career on a day-to-day basis, but seems strange in comparison to the amount of depth found in other areas. Another interesting and mildly irritating limitation is how a sim can only have one career at a time, despite whether work shifts overlap or not. If one shift ends at 2 p.m. and the other begins at 3 p.m., why should you not be able to work two careers at once if you want? Sure, working two jobs in real life isn't fun at all, and maybe this was the thinking behind limiting a sim to one career, but if you want to make your sim a work-a-holic with two jobs and no social life, then the option should be there.
While The Sims 3 continues the tradition of a solely offline and single-player experience, the continuation of social networking features remains as strong as ever. The Sims 3 launcher allows players to do a variety of things such as upload their own content, including individual sims, objects, houses, public buildings, and entire towns. Players have the option of creating their own player page, which includes a blog and an area to display all their created content for sharing.
In addition to being able to share content with other players via the website, exclusive content can be downloaded from the developers in exchange for SimPoints, which are purchased with real money. While this particular system isn't very popular, it doesn't hurt the game much because players have the option of just downloading shared content instead. Moreover, while some content requires SimPoints, players will be happy to know that actual game updates remain free.
These online features greatly increase the longevity of The Sims 3. Considering how detailed the creation tools are, players should have a nearly endless source of downloadable content to choose from. Perhaps the only negative thing about the online feature is the fact that it can't all be done in a browser built into the game, forcing players to muddle around in their browser and install content prior to launching the game. So, while it may not be as integrated as it could be, it definitely isn't any less easy to use.
The Sims 3 was a huge undertaking and it shows. The core gameplay remains largely unchanged, with minor tweaks and improvements that unquestionably add to the fun. Enhanced visuals and an expectedly good soundtrack excel at creating truly immersive moments, especially when moving around the largely persistent world. Even though there are areas that lack the level of detail and depth of the game as a whole, they still provide options to the player that keep the game from running into issues, which makes them identifiable as areas for improvement rather than a complete overhaul.
If you're a fan of the series, then The Sims 3 will deliver all that you've come to expect and throw in a ton of new ideas, even if the amount of extra toppings doesn't seem as vast. Newcomers to the series take note: if you've ever thought about playing The Sims, but were overwhelmed by the many expansion and "stuff" packs on the shelves, fear not; The Sims 3 is your window of opportunity.
Derek Hidey
CCC Freelance Writer
Graphics: Improved graphics bring a new level of realism to The Sims, but also bring issues for players with less powerful computers.
Control: A familiar user interface and control scheme boil the many tiers and complexities of the game into a simple and easy-to-understand system that could still be slightly daunting to newcomers to the series.
Music / Sound FX / Voice Acting: Great music helps to enhance the experience, while recycled sound effects ensure the game maintains an intimate level of familiarity.
Value: Unchanged core gameplay mechanics are improved with a host of new features and simplified ideas, and they are brought together in a single persistent environment that is just what the series needed.
Overall Rating - Must Buy
(Not an average. See Rating legend above for a final score breakdown.)
Game Features: Create Sims with Unique Personalities: Influence the behaviors of your Sims with traits you've chosen and watch how their traits impact their relationships and the neighborhood around them.
Expand Your Game: Add to your game by downloading exclusive content or sharing content with other players via TheSims3.com!
Determine Your Sims Ultimate Destiny: Face short- and long-term challenges and reap the rewards. Your Sims can pursue random opportunities to get fast cash, get ahead in life, get even with enemies, and more.
Customize Everything: Enjoy complete customization over your Sims' appearances with the new Create a Sim. Enjoy new, easy-to-use design tools that allow you to fine-tune your Sims' facial features, hair color, eye color, and more. | 计算机 |
R. L. Alonso / H. Blair-Smith / A. L. Hopkins
Summary Some logical aspects of a digital computer for a space vehicle are described, and the evolution of its logical design is traced. The intended application and the characteristics of the computer's ancestry form a framework for the design, which is filled in by accumulation of the many decisions made by its designers. This paper deals with the choice of word length, number system, instruction set, memory addressing, and problems of multiple precision arithmetic.
The computer is a parallel, single address machine with more than 10,000 words of 16 bits. Such a short word length yields advantages of efficient storage and speed, but at a cost of logical complexity in connection with addressing, instruction selection, and multiple-precision arithmetic.
In this paper we attempt to record the reasoning that led us to certain choices in the logical design of the Apollo Guidance Computer (AGC). The AGC is an onboard computer for one of the forthcoming manned space projects, a fact which is relevant primarily because it puts a high premium on economy and modularity of equipment, and results in much specialized input and output circuitry. The AGC, however, was designed in the tradition of parallel, single-address general-purpose computers, and thus has many properties familiar to computer designers [Richards, 1955], [Beckman et al., 1961]. We will describe some of the problems of designing a short word length computer, and the way in which the word length influenced some of its characteristics. These characteristics are number system, addressing system, order code, and multiple precision arithmetic.
A secondary purpose for this paper is to indicate the role of evolution in the AGC's design. Several smaller computers with about the same structure had been designed previously. One of these, MOD 3C, was to have been the Apollo Guidance Computer, but a decision to change the means of electrical implementation (from core-transistors to integrated circuits) afforded the logical designers an unusual second chance.
It is our belief, as practitioners of logical design, that designers, computers and their applications evolve in time; that a frequent reason for a given choice is that it is the same as, or the logical next step to. a choice that was made once before.
A recent conference on airborne computers [Proc. Conf. Spaceborne Computer Eng., Anaheim, Calif., Oct. 30-31, 1962] affords a view of how other designers treated two specific problems: word length and number system. All of these computers have word lengths of the order of 22 to 28 bits, and use a two's complement system. The AGC stands in contrast in these two respects, and our reasons for choosing as we did may therefore be of interest as a minority view.
2. Description of the AGC
The AGC has three principal sections. The first is a memory, the fixed (read only) portion of which has 24,576 words, and the erasable portion of which has 1024 words. The next section may be called the central section; it includes, besides an adder and a parity computing register, an instruction decoder (SQ), a memory address decoder (S), and a number of addressable registers with either special features or special use. The third section is the sequence generator which includes a portion for generating various microprograms and a portion for processing various interrupting requests.
The backbone of the AGC is the set of 16 write busses; these are the means for transferring information between the various registers shown in Fig. 1. The arrowheads to and from the various registers show the possible directions of information flow.
In Fig. 1, the data paths are shown as solid lines; the control paths are shown as broken lines.
Memory: fixed and erasable
The Fixed Memory is made of wired-in "ropes" [Alonso and Laning, 1960], which are compact and reliable devices. The number of bits so wired is about 4 X 105. The cycle time is 12 m
The erasable memory is a coincident current system with the same cycle time as the fixed memory. Instructions can address registers in either memory, and can be stored in either memory.
[1] IEEE Trans., EC-12 (6), 687-697 (December, 1963)
New Contrast Website Launched, Images Released
Daniel Kayser | September 27, 2013 at 10:45AM
Contrast, the upcoming puzzle/platformer developed by Compulsion Games for PC, the PSN, and XBLA, has drawn the curtains back to reveal its rather slick official website and four new images to celebrate this occasion.
From information about the game, the artistry and innovation of the gameplay mechanics, to the story and details on the cast/characters, the newly launched website offers a look behind the scenes with the latest images, art, videos and articles from the developer blog. I was personally very intrigued by this game at E3, and the new website does a great job of capturing the mood and atmosphere that helped it stand out from the crowd during the show.
In addition, four new screenshots from the game were released, which I'm including throughout this post.
For those unfamiliar with what Contrast is all about, here's a description of the game's story:
In and around town, there's a rumor that Didi's father, Johnny, is back in town trying to promote something big – a circus that is as entertaining as it is ambitious. But to do this, Johnny needs the Great Vincenzo, a famous magician, to headline the show and draw big crowds. Johnny could find himself in a bad situation if Vincenzo doesn't agree, as this deal isn't exactly above board. Join Didi, the adventure-loving, spirited little girl, and her imaginary friend Dawn, as they discover the mysteries that lie under the big tents of Johnny's circus and Vincenzo's workshop through our four new images, as these places will be the setting for important scenes in Didi's story.
So, check out the game's new website and let us know in the comments below if you're interested in experiencing Contrast once it arrives on PC, PS3, and Xbox 360.
Google, meanwhile, first discovered in mid-December that it had been hit by a targeted attack out of China that resulted in the theft of some of its intellectual property. The attackers' primary goal was to access the Gmail accounts of Chinese human rights activists, according to Google: "Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves," said David Drummond, senior vice president of corporate development and chief legal officer at Google, in a blog post. Google discovered that at least 20 other large companies from the Internet, finance, technology, media, and chemical industries also had been hit by the attack, he said.
iDefense says the attacks were primarily going after source code from many of the victim firms, and that the attackers were working on behalf of or in the employment of officials for the Chinese government. "Two independent, anonymous iDefense sources in the defense contracting and intelligence consulting community confirmed that both the source IPs and drop server of the attack correspond to a single foreign entity consisting either of agents of the Chinese state or proxies thereof," iDefense said in a summary it has issued on the attacks.
Eli Jellenc, head of international cyberintelligence for iDefense, which is working with some of the victim companies, says on average the attacks had been under way for nearly a month at those companies. One source close to the investigation says this brand of targeted attack has actually been going on for about three years against U.S. companies and government agencies, involving some 10 different groups in China consisting of some 150,000 trained cyber-attackers. The attacks on Google, Adobe, and others started with spear-phishing email messages with infected attachments, some PDFs, and some Office documents that lured users within the victim companies, including Google, to open what appeared to be documents from people they knew. The documents then ran code that infected their machines, and the attackers got remote access to those organizations via the infected systems. Interestingly, the attackers used different malware payloads among the victims. "This is a pretty marked jump in sophistication," iDefense's Jellenc says. "That level of planning is unprecedented."
Mikko Hypponen, chief research officer at F-Secure, says a PDF file emailed to key people in the targeted companies started the attacks. "Once opened, the PDF exploited Adobe Reader with a zero-day vulnerability, which was patched today, and dropped a back-door [Trojan] that connected outbound from the infected machine back to the attackers," Hypponen says. That then gave the attackers full access to the infected machine as well as anywhere the user's machine went within his or her network, he says.
Other experts with knowledge of the attacks say it wasn't just PDFs, but Excel spreadsheets and other types of files employed as malicious attachments. The malware used in the attacks was custom-developed, they say, based on zero-day flaws, and investigators were able to match any "fingerprints" in the various versions of malware used in the attacks and determine that they were related. The attackers didn't cast a wide spam net to get their victims like a typical botnet or spam campaign. Sources with knowledge of the attacks say the attackers instead started out with "good intelligence" that helped them gather the appropriate names and email addresses they used in the email attacks. "The state sponsorship may not be financial, but it is backed with intelligence," says one source. "What we're seeing is a blending of intelligence work plus malicious cyberattacks."
iDefense's Jellenc says the attackers were able to successfully steal valuable intellectual property from several of the victim companies.
Kelly Jackson Higgins is Senior Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise Magazine, ...
Far Cry 2 performance in-depth
By Steven Walton on October 29, 2008
Editor: Julio Franco
If, like us, you are a fan of first-person shooters, then there is a good chance you have spent the better part of this year anticipating the arrival of Far Cry 2. Last week marked the release date for this awaited sequel, and so we immediately jumped in and bought our copy. However, rather than play the single player mission from start to finish and then go into some multiplayer action, we have been hard at work bringing you this article.
As usual our in-depth performance review takes various ATI and Nvidia graphics cards and compares them in this new first-person shooter title. Having recently completed a similar article with Crytek's Crysis Warhead, we have been keen to see if Far Cry 2 is just as demanding.
You may recall the original version of Far Cry was developed by Crytek using the CryENGINE, while it was actually published by Ubisoft. For Far Cry 2, Ubisoft's Montreal studios took over the development of the game using their own Dunia engine. This game engine has been designed for use with the PC, Xbox 360, and PlayStation 3 platforms, which resulted in last week's multi-platform release. The word Dunia means "world", "earth" or "living" in the Persian language. As current players of Far Cry 2 will discover, Dunia offers a number of impressive features like destructible environments, dynamic weather, dynamic fire propagation, full day/night cycles, and many others.
Furthermore, the Dunia engine can take advantage of DirectX 10 when running on Windows Vista, but is also capable of running on DirectX 9 platforms. Now, unlike the engine used for Crysis games, Dunia is said to be less hardware demanding, which can only come as great news for PC gamers. If you look back at our recent Crysis Warhead performance article you will see that this game requires a tremendous amount of GPU power to deliver playable performance.
Clearly only those with the most advanced gaming rigs are going to be able to enjoy Crysis Warhead in all its visual glory as we found that even a top of the line GeForce GTX 280 could struggle when pushed far enough. Far Cry 2, on the other hand, has been publicized to work perfectly on today's mid-range graphics cards in spite of the impressive eye candy.
As we move on, we will find out exactly how Far Cry 2 performs using a range of previous and current generation graphics cards. The quality presets tested include Ultra High, Very High, and High, each tested at 1280x1024, 1680x1050, and 1920x1200 resolutions. The built-in Far Cry 2 benchmark tool has been used to test the various graphics cards, so you will be able to accurately compare your system's performance to ours.
Test System Specs
Benchmarks: Ultra High
Benchmarks: Very High
Benchmarks: High | 计算机 |
Assassin's Creed 3's "Wolf Pack" co-op fails to excite
Posted by: Vito Gesualdi
Though the Assassin's Creed series is largely known for its expansive single player campaigns, the unique multiplayer modes have been a big draw for gamers looking for something outside of the traditional deathmatch experience. Thankfully, many of these fan favorite multiplayer modes are returning in Assassin's Creed III, alongside several new modes which promise even more variations on the game's backstabbing fun times.
Last week I got the chance to try out the new Wolf Pack multiplayer mode at Ubisoft's AC3 preview event in Boston, Massachusetts, the first time the Assassin's Creed series has ever featured co-op. Unfortunately, though Wolf Pack is a decently enjoyable multiplayer romp, this co-op experience wasn't deep enough to hold my attention for long.
Wolf Pack is a co-op multiplayer mode where four assassins work together to take out specific groups of targets, working against the clock to rack up kills and accrue as many points as possible. The key here is that the timer is extended each time players earn enough points to reach a new "sequence," with major bonus points awarded to teams who can successfully synchronize their kills. Simply put, if each assassin simply runs around the map taking out targets on a whim, the mode is sure to end quickly. However by effectively communicating the location of targets and taking care to make the kill at the same exact time, skilled teams will likely be able to dominate the game's leaderboards.
Wolf Pack feels a bit like an Assassin's Creed-flavored twist on Resident Evil's "Mercenaries" mode, with the frantic race against the clock leaving little time to think. Unlike Mercenaries, however, players will have to maintain some degree of stealth as they progress to harder and harder sequences, where targets not only begin to spread out around the map, but are also prepared for confrontation. Synchronizing kills becomes very difficult once targets start becoming uncomfortable with the weird cloaked figures standing behind them waiting for the signal to strike. Some of these NPCs will flee if shadowed for too long, while others will turn to do battle with potential assassins. Point is, this mode will be almost impossible to play without a headset, as teams will need to be in constant contact throughout each session.
Again, the only real problem I had with Wolf Pack is that it isn't a terribly thrilling implementation of co-op, and the seeming complexity involved in setting up tandem kills means you'll need a truly dedicated team to excel at this mode. Still, it's a welcome addition to AC3's suite of multiplayer options, and it's nice to work alongside your fellow assassins for once. We'll be looking to see how popular Wolf Pack mode proves to be when Assassin's Creed 3 drops in November.
WPS Home
Notice to employees and independent contractors of WPS and its subsidiaries: This computer system, which includes all related equipment, networks, and network devices (specifically including access to the internet), is provided only for authorized business functions. The system may be monitored by authorized personnel to ensure that your use is authorized, for management of the system, to facilitate protection against unauthorized access, and to verify security procedures. Information you place on this system is not private. Use of this computer system, authorized or unauthorized, constitutes consent to official monitoring of this system.
PRIVACY ACT WARNING! Information contained in this system with respect to Wisconsin Physicians Service Insurance Corporation and its subsidiaries is subject to The Privacy Act of 1974, 5 U.S.C. §552a, as amended. Information contained in this system may be used only by authorized persons in the conduct of official business. Any individual responsible for unauthorized disclosure or misuse of personal information may be subject to fines of up to $5,000.
Best free Mac software (Mac freeware)
By Alvin Alexander. Last updated: Sep 11, 2013
Mac freeware FAQ: Can you provide a list of the best free Mac software?
While I'm loading up some freeware on a friend's new MacBook, it hit me how much really wonderful free Mac software is available these days. Of course there are free web browsers, which everyone wants, but there are also free Mac HTML editors, mail clients, and other free Mac apps for image editing, FTP, RSS, IRC, and CD/DVD burning and ripping, and much more. You can get a lot of things done these days using only free software.
In an effort to help other Mac users find the best Mac software around, here's my list of the best "free Mac software applications" I know of (last updated July, 2010).
Free Mac software - Web browsers
One of the first things everyone needs is a web browser. In my opinion, if you just want one good Mac web browser, I'd go with Firefox, but the other browsers have their strengths as well, and these days I also use Google Chrome a lot.
Currently the leading open source web browser, Firefox has thousands of plugins to help customize your web browsing experience. (And I use it every day.)
Safari is included free with Mac OS X. Safari 4 includes a number of improvements that make it a good browser (but I prefer Firefox and Chrome).
Google Chrome is a web browser that runs web pages and applications as fast as Google can make it go. The Chrome people seem to want to make the world's fastest web browser. (June, 2010: Chrome is now my favorite Mac browser, though many other people still prefer Firefox.)
Camino is an open source web browser developed with "a focus on providing the best possible experience for Mac OS X users". Historically, Camino was developed as a native Mac OS X web browser when there weren't any other good alternatives. It is still a very good browser -- possibly better than Safari -- but I prefer Firefox.
I personally don't use Opera, so I'll take this statement from their website: "Opera’s newest Web browser introduces a new technology platform, Opera Unite, allowing you to stream music or share files, photos and more, right from the browser".
SeaMonkey -- which is discussed in the next section -- also includes a web browser. If you're an original Netscape Navigator user, the SeaMonkey browser will put a smile on your face. I use the SeaMonkey HTML editor, but I don't use their browser.
As a quick update, these days I use Firefox, Chrome, and Safari on a regular basis (in that order).
Free Mac HTML editors
If you're ever in the market for a free Mac WYSIWYG HTML editor, here are two options:
From their website: "SeaMonkey provides a web browser, advanced e-mail, newsgroup and feed client, IRC chat, and HTML editing made simple -- all in one application." As mentioned before, if you ever used the original Netscape suite of products, SeaMonkey will look very familiar.
I use the SeaMonkey Mac HTML editor (their 'Composer') for creating WYSIWYG HTML documents -- such as this document -- as described in this "Best free Mac HTML editor" article.
Amaya is a free Mac HTML editor that was originally developed to showcase web technologies. Amaya really isn't intended as a commercial product, so these days Amaya's UI looks old, and the software runs slow, but it does create very clean HTML code.
Note: I've written about Mac HTML editors in detail before, including my popular Free Mac WYSIWYG HTML editors review.
Mac freeware - Office apps (word processing, spreadsheet, presentation)
Another big need when it comes to Mac freeware is "office" applications. If you're used to creating office documents using Word, Excel, and PowerPoint on Microsoft Windows, these two free Mac office applications will help fill that void on the Mac OS X platform.
OpenOffice is a free and open office productivity suite from Sun Microsystems. It is probably the most popular free "office" suite in the world, and OpenOffice 3 provides a much better experience on Mac OS X.
NeoOffice
NeoOffice adds "improvements" to the OpenOffice project for the Mac OS X platform. (There is a little angst between the OpenOffice people and the NeoOffice people, but for a long time NeoOffice provided a better Mac experience. With the release of OpenOffice 3, I don't know if NeoOffice is much better than OpenOffice any more.)
Mac freeware - Mac OS X email clients
If you like to have an email client running on your local computer, here are two terrific, free Mac email applications.
Mac Mail
Mac OS X comes with a free email application named Mail, which can usually be found in the dock and Applications folder when you first buy a Mac. You can configure Mail to work with your email servers.
Thunderbird 3 is the latest version of Mozilla Messaging's free and open source email application. Like Mail, Thunderbird is an email client that you configure to work with your remote email servers/accounts.
Personally I don't use either of these applications. I just use the web clients that come with Yahoo Mail and Gmail, but many people I know really like Thunderbird.
Free Mac DVD ripping software
I've recently started using the free Mac software application named Handbrake, and I can recommend it:
HandBrake is "an open-source, GPL-licensed, multiplatform, multithreaded video transcoder, available for MacOS X, Linux and Windows".
I've used HandBrake for the last two months, and it's great for ripping DVDs. I've written more about this in my iPhone/iPad DVD movies article and in my When ripping a DVD movie to a digital video file makes sense article. In short, two thumbs up.
Free Mac software - CD/DVD burning
From the Burn website: "Simple but advanced burning for Mac OS X."
Unlike HandBrake, I haven't used Burn, but again, I see it referenced on many Mac websites. When I burn a CD or DVD, I do it the old-school way, as described in this article on "How to burn a DVD on Mac OS X".
Mac freeware - graphics and image/photo editing
These days everybody is editing and sharing images, and from my own experience I can tell you GIMP is a powerful image editing application that runs on Mac OS X.
GIMP is an acronym for GNU Image Manipulation Program. It is a freely distributed program for such tasks as photo retouching, image composition and image authoring.
GIMP has many capabilities. It can be used as a simple paint program, an expert quality photo retouching program, an online batch processing system, a mass production image renderer, an image format converter, etc.
GIMP is a really good free image-editing program for Mac OS X. It doesn't currently provide a "native" Mac look and feel, but it's a very powerful image-editing application, and I use it every week. It's a great free alternative to an application like Photoshop.
Blender is the free open source 3D content creation suite, available for all major operating systems under the GNU GPL.
I haven't used Blender yet, but it certainly looks like it can create really terrific 3D images.
Mac freeware - RSS readers
NetNewsWire is "an easy-to-use RSS and Atom reader for your Mac. The Eddy award-winning NetNewsWire has a familiar three-paned interface and can fetch and display news from thousands of different websites and weblogs."
Whenever I do use an RSS reader I use NetNewsWire, but for better or worse I don't use any software applications like this these days. I usually find what I want on the internet via iGoogle and similar services.
Mac freeware - FTP clients
If you're a web site developer, or just need a simple way to transfer files from one system to another, these two free Mac FTP applications might be just what you need.
FileZilla Client is a fast and reliable cross-platform FTP, FTPS and SFTP client with lots of useful features and an intuitive graphical user interface.
While Filezilla may not look like a native Mac OS X application, I use it all the time for FTP file transfers, and it's a very decent FTP client. Take a look at it before you go out and buy a Mac FTP client application.
Cyberduck
Cyberduck is an open source FTP, SFTP, WebDAV, Cloud Files and Amazon S3 browser for the Mac.
I haven't used Cyberduck yet, but I will give it a trial run soon. (Again it's one of the names I've heard about for several years, but I haven't used it myself.)
Mac freeware - Instant messaging (IM) software
If you're into instant messaging, here are two free instant messaging applications for Mac OS X.
Adium is "a free instant messaging application for Mac OS X that can connect to AIM, MSN, Jabber, Yahoo, and more". I don't use any IM clients myself, but I've seen Adium referred to for several years now.
Yahoo IM
I briefly tried a beta version of Yahoo's IM client for Mac OS X about a year ago, and it seemed fine for text messages. We had some problems with the video and audio quality, but the text messaging was just fine.
Mac freeware - IRC clients
From their website: Colloquy is an IRC, SILC & ICB client which aims to conform to Mac OS X interface conventions.
I haven't used Colloquy, but I see it referenced on many other websites.
Free Mac podcasting software
Like to create and listen to podcasts? If so, here are a couple of terrific, free Mac podcast applications.
GarageBand comes free with Mac OS X, and you can use it to create podcasts. I've used it several times, as shown in these GarageBand podcast tutorials, and I thought it was fairly easy to use.
Juice is a cross-platform podcast receiver. If you want to listen to podcasts, this program is for you. Juice is the premier podcast receiver, allowing users to capture and listen to podcasts anytime, anywhere.
Here are a few links to my GarageBand podcast tutorials:
How to create a podcast on Mac OS X with GarageBand
Create a podcast on the Mac with GarageBand, part 2
Free Mac sound recording software
Audacity is free, open source software for recording and editing sounds. You can use Audacity to record live audio; convert tapes and records into digital recordings or CDs; edit Ogg Vorbis, MP3, WAV or AIFF sound files; cut, copy, splice or mix sounds together; change the speed or pitch of a recording; and more.
I haven't used Audacity yet, but when I told a friend that I wanted to edit an audio recording, he recommended Audacity. (Again, I've heard about it for years, but haven't tried it yet.)
Free Mac OS X mind mapping software
FreeMind is a premier free mind-mapping software written in Java.
If you ever used MindManager on a Windows PC, FreeMind is a free, open-source competitor to that application. Written in Java, it runs on many computer platforms, including Mac OS X.
Mac freeware - Personal finance software
Some people have told me they can't switch to using a Mac without some form of personal finance application available on OS X. There are other commercial applications you can pay for if you'd like, but here are two free Mac personal finance applications that get you what you need.
GnuCash is personal and small-business financial-accounting software, freely licensed under the GNU GPL.
Buddi is a personal finance and budgeting program, aimed at those who have little or no financial background.
(I haven't used either of these programs.)
Free Mac software developer tools
If you're a developer you probably already know about Eclipse and NetBeans, but if not, here's a little information about them.
Eclispse is an open source IDE, originally intended for Java, but now with support for many different programming languages and frameworks.
NetBeans is a free, open-source Integrated Development Environment for software developers. You get all the tools you need to create professional desktop, enterprise, web, and mobile applications with the Java language, as well as C/C++, PHP, JavaScript, Groovy, and Ruby.
The abbreviation "MAMP" stands for: Macintosh, Apache, Mysql and PHP. With just a few mouse-clicks, you can install Apache, PHP and MySQL for Mac OS X.
Eclipse and NetBeans are competitors in the open source IDE world. Both run just fine on Mac OS X. MAMP is a cool bundle of free software applications that makes it very easy to use Apache, PHP, and MySQL on Mac OS X. I've been using MAMP for the last several months, and it really does make life easier.
Free open source Unix software for Mac OS X
If you come from a Unix background, the great news about OS X is that it is a real Unix operating system (BSD), and there are plenty of Unix tools available for Mac OS X. MacPorts and Fink are two projects that help to make it easier than ever to get Unix software installed and running on OS X.
The MacPorts project is an open-source community initiative to design an easy-to-use system for compiling, installing, and upgrading either command-line, X11 or Aqua based open-source software on the Mac OS X operating system.
The Fink project wants to bring the full world of Unix Open Source software to Darwin and Mac OS X. We modify Unix software so that it compiles and runs on Mac OS X ("port" it) and make it available for download as a coherent distribution.
MacPorts and Fink are two competing projects that help make tons of free Unix and Linux software available on the Mac OS X platform. For the most part these are command-line programs that a regular Mac user won't be interested in, but if you are a Unix or Linux user, these two projects can help you get all the free, cool, open source software you want on your Mac computer.
Many more free Mac software applications
Of course there are many more Mac "freeware" applications, but I think this is a great start, and I've now worked with many of these free Mac software applications myself.
On a related note, if you're looking for Mac backup solutions, I recently wrote a Mac online backup solutions article which details many of the available backup options. Finally, at www.apple.com/downloads you can find many more free Mac software applications (along with many other apps you can pay for).
Red Hat, Fedora servers infiltrated by attackers
Unknown attackers infiltrated Red Hat and Fedora servers but did not …
- Aug 25, 2008 3:03 pm UTC
Linux distributor Red Hat has issued a statement revealing that its servers were illegally infiltrated by unknown intruders. According to the company, internal audits have confirmed that the integrity of the Red Hat Network software deployment system was not compromised. The community-driven Fedora project, which is sponsored by Red Hat, also fell victim to a similar attack.
"Last week Red Hat detected an intrusion on certain of its computer systems and took immediate action," Red Hat said in a statement. "We remain highly confident that our systems and processes prevented the intrusion from compromising RHN or the content distributed via RHN and accordingly believe that customers who keep their systems updated using Red Hat Network are not at risk."
Although the attackers did not penetrate into Red Hat's software deployment system, they did manage to sign a handful of Red Hat Enterprise Linux OpenSSH packages. Red Hat has responded by issuing an OpenSSH update and providing a command-line tool that administrators can use to check their systems for potentially compromised OpenSSH packages.
Key pieces of Fedora's technical infrastructure were initially disabled earlier this month following a mailing list announcement which indicated only that Fedora personnel were addressing a technical issue of some kind. Fedora project leader and board chairman Paul W. Frields clarified the situation on Friday with a follow-up post in which he indicated that the outage was prompted by a security breach. Fedora source code was not tampered with, he wrote, and there are no discrepancies in any of the packages. The system used to sign Fedora packages was among those affected by the incursion, but he claims that the key itself was not compromised. The keys have been replaced anyway, as a precautionary measure.
"While there is no definitive evidence that the Fedora key has been compromised, because Fedora packages are distributed via multiple third-party mirrors and repositories, we have decided to convert to new Fedora signing keys," he wrote. "Among our other analyses, we have also done numerous checks of the Fedora package collection, and a significant amount of source verification as well, and have found no discrepancies that would indicate any loss of package integrity."
Assuming that Red Hat and Fedora are accurately conveying the scope and nature of the intrusion, the attacker was effectively prevented from causing any serious damage. Red Hat's security measures were apparently sufficient to stave off a worst-case scenario, but the intrusion itself is highly troubling. Red Hat has not disclosed the specific vulnerability that the intruders exploited to gain access to the systems.
Like the recent Debian openssl fiasco, which demonstrated the need for higher code review standards, this Red Hat intrusion reflects the importance of constant vigilance and scrutiny. When key components of open source development infrastructure are compromised, it undermines the trust of the end-user community. In this case, Red Hat has clearly dodged the bullet, but the situation could have been a lot worse.
Further reading
Red Hat: OpenSSH blacklist script
Paul Frields: Infrastructure report
An Explanation of the Heartbleed bug for Regular People
I've put this explanation together for those who want to understand the Heartbleed bug, how it fits into the bigger picture of secure internet browsing, and what you can do to mitigate its effects.
HTTPS vs HTTP (padlock vs no padlock)
When you are browsing a site securely, you use https and you see a padlock icon in the url bar. When you are browsing insecurely you use http and you do not see a padlock icon.
Firefox url bar for HTTPS site (above) and non-HTTPS (below).
HTTPS relies on something called SSL/TLS.
Understanding SSL/TLS
SSL stands for Secure Sockets Layer and TLS stands for Transport Layer Security. TLS is the later version of the original, proprietary, SSL protocol developed by Netscape. Today, when people say SSL, they generally mean TLS, the current, standard version of the protocol.
Public and private keys
The TLS protocol relies heavily on public-key or asymmetric cryptography. In this kind of cryptography, two separate but paired keys are required: a public key and a private key. The public key is, as its name suggests, shared with the world and is used to encrypt plain-text data or to verify a digital signature. (A digital signature is a way to authenticate identity.) A matching private key, on the other hand, is used to decrypt data and to generate digital signatures. A private key should be safeguarded and never shared. Many private keys are protected by pass-phrases, but merely having access to the private key means you can likely use it.
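To make the asymmetry concrete, here is a toy example using textbook RSA with deliberately tiny numbers. Real keys use numbers hundreds of digits long, and this sketch leaves out padding and other essentials, so it is only an illustration of the public/private split:

$$p = 61,\quad q = 53,\quad n = p \cdot q = 3233,\quad \varphi(n) = 60 \cdot 52 = 3120$$
$$\text{public key: } (n, e) = (3233, 17),\qquad \text{private key: } d = e^{-1} \bmod \varphi(n) = 2753$$
$$\text{encrypt } m = 65:\quad c = 65^{17} \bmod 3233 = 2790,\qquad \text{decrypt: } 2790^{2753} \bmod 3233 = 65$$

Anyone who knows the public pair (3233, 17) can encrypt, but only the holder of d = 2753 can feasibly decrypt, which is the property TLS builds on.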
Authentication and encryption
The purpose of SSL/TLS is to authenticate and encrypt web traffic.
Authenticate in this case means “verify that I am who I say I am.” This is very important because when you visit your bank’s website in your browser, you want to feel confident that you are visiting the web servers of — and thereby giving your information to — your actual bank and not another server claiming to be your bank. This authentication is achieved using something called certificates that are issued by Certificate Authorities (CA). Wikipedia explains thusly:
The digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the public key that is certified. In this model of trust relationships, a CA is a trusted third party that is trusted by both the subject (owner) of the certificate and the party relying upon the certificate.
In order to obtain a valid certificate from a CA, website owners must submit, at minimum, their server’s public key and demonstrate that they have access to the website (domain).
Encrypt in this case means "encode data such that only authorized parties may decode it." Encrypting internet traffic is important for sensitive or otherwise private data because it is trivially easy to eavesdrop on internet traffic. Information transmitted without SSL is usually sent in plain text and as such is clearly readable by anyone. This might be acceptable for general internet browsing. After all, who cares who knows which NY Times article you are reading? But it is not acceptable for a range of private data including user names, passwords and private messages.
Behind the scenes of an SSL/TLS connection
When you visit a website with HTTPS enabled, a multi-step process occurs so that a secure connection can be established. During this process, the server and client (browser) send messages back and forth in order to a) authenticate the server's (and sometimes the client's) identity and b) negotiate what encryption scheme, including which cipher and which key, they will use for the session. Identities are authenticated using the digital certificates mentioned previously.
When all of that is complete, the secure connection is established and the server and client send traffic back and forth to each other.
All of this happens without you ever knowing about it. Once you see your bank’s login screen the process is complete, assuming you see the padlock icon in your browser’s url bar.
Keepalives and Heartbeats
Even though establishing an ssl connection happens almost imperceptibly to you, it does have an overhead in terms of computer and network resources. To minimize this overhead, network connections are often kept open and active until a given timeout threshold is exceed. When that happens, the connection is closed. If the client and server wish to communicate again, they need to re-negotiate the connection and re-incur the overhead of that negotiation.
One way to forestall a connection being closed is via keepalives. A keepalive message is used to tell a server “Hey, I know I haven’t used this connection in a little while, but I’m still here and I’m planning to use it again really soon.”
Keepalive functionality was added to the TLS protocol specification via the Heartbeat Extension. Instead of “Keepalives,” they’re called “Heartbeats,” but they do basically the same thing.
Specification vs Implementation
Let’s pause for a moment to talk about specifications vs implementations. A protocol is a defined way of doing something. In this case of TLS, that something is encrypted network communications. When a protocol is standardized, it means that a lot of people have agreed upon the exact way that protocol should work and this way is outlined in a specification. The specification for TLS is collaboratively developed, maintained and promoted by the standards body Internet Engineering Task Force (IETF). A specification in and of itself does not do anything. It is a set of documents, not a program. In order for a specifications to do something, they must be implemented by programmers.
OpenSSL implementation of TLS
OpenSSL is one implementation of the TLS protocol. There are others, including the open source GnuTLS as well as proprietary implementations. OpenSSL is a library, meaning that it is not a standalone software package, but one that is used by other software packages. These include the very popular webserver Apache.
The Heartbleed bug only applies to webservers with SSL/TLS enabled, and only those using specific versions of the open source OpenSSL library, because the bug relates to an error in the code of that library, specifically the heartbeat extension code. It is not related to any errors in the TLS specification or in any of the underlying cipher suites.
Usually this would be good news. However, because OpenSSL is so widely used, particularly the affected version, this simple bug has tremendous reach in terms of the number of servers and therefore the number of users it potentially affects.
What the heartbeat extension is supposed to do
The heartbeat extension is supposed to work as follows:
A client sends a heartbeat message to the server.
The message contains two pieces of data: a payload and the size of that payload. The payload can be anything up to 64 KB.
When the server receives the heartbeat message, it is to add a bit of extra data to it (padding) and send it right back to the client.
Pretty simple, right? Heartbeat isn’t supposed to do anything other than let the server and client know they are each still there and accepting connections.
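In code terms, a well-behaved handler has to check that the stated payload size actually fits inside what arrived before echoing anything back. Here is a heavily simplified C sketch modeled loosely on the RFC 6520 message layout; the struct, field names and helper function are invented for illustration and are not OpenSSL's actual code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative layout of a heartbeat request: a type byte, a 16-bit
 * length field stating how big the payload supposedly is, then the
 * payload bytes themselves followed by at least 16 bytes of padding. */
struct hb_request {
    uint8_t  type;            /* 1 = request, 2 = response */
    uint16_t claimed_length;  /* payload size *as claimed by the sender* */
    uint8_t  data[];          /* payload + padding, as actually received */
};

#define HB_OVERHEAD (1 + 2 + 16)  /* type + length field + minimum padding */

/* record_length is the number of bytes that really arrived on the wire. */
int heartbeat_reply(const struct hb_request *req, size_t record_length,
                    uint8_t *reply, size_t *reply_length)
{
    /* The essential sanity check: the claimed payload must fit inside
     * what the sender actually transmitted. */
    if ((size_t)req->claimed_length + HB_OVERHEAD > record_length)
        return -1;            /* sender lied about the size: drop the message */

    /* Echo back only bytes the sender really supplied. */
    memcpy(reply, req->data, req->claimed_length);
    *reply_length = req->claimed_length;
    return 0;
}
```

The single comparison against record_length is the whole safety story here: the reply is never built from more bytes than the client actually sent.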
What the heartbeat code actually does
In the code for affected versions (1.0.1-1.0.1f) of the OpenSSL heartbeat extension, the programmer(s) made a simple but horrible mistake: They failed to verify the size of the received payload. Instead, they accepted what the client said was the size of the payload and returned this amount of data from memory, thinking it should be returning the same data it had received. Therefore, a client could send a payload of 1KB, say it was 64KB and receive that amount of data back, all from server memory.
If that’s confusing, try this analogy: Imagine you are my bank. I show up and make a deposit. I say the deposit is $64, but you don’t actually verify this amount. Moments later I request a withdrawal of the $64 I say I deposited. In fact, I really only deposited $1, but since you never checked, you have no choice but to give me $64, $63 of which doesn’t actually belong to me.
And, this is exactly how a someone could exploit this vulnerability. What comes back from memory doesn’t belong to the client that sent the heartbeat message, but it’s given a copy of it anyway. The data returned is random, but would be data that the OpenSSL library had been storing in memory. This should be pre-encryption (plain-text) data, including your user names and passwords. It could also technically be your server’s private key (because that is used in the securing process) and/or your server’s certificate (which is also not something you should share).
The ability to retrieve a server’s private key is very bad because that private key could be used to decrypt all past, present and future traffic to the sever. The ability to retreive a server’s certificate is also bad because it gives the ability to impersonate that server.
This, coupled with the widespread use of OpenSSL, is why this bug is so terribly bad. Oh, and it gets worse…
Taking advantage of this vulnerability leaves no trace
What’s worse is that logging isn’t part of the Heartbeat extension. Why would it be? Keepalives happen all the time and generally do not represent transmission of any significant data. There’s no reason to take up value time accessing the physical disk or taking up storage space to record that kind of information.
Because there is no logging, there is no trace left when someone takes advantage of this vulnerability.
The code that introduced this bug has been part of OpenSSL for 2+ years. This means that any data you've communicated to servers with this bug since then has the potential to be compromised, but there's no way to determine definitively if it was.
This is why most of the internet is collectively freaking out.
What do server administrators need to do?
Server (website) administrators need to, if they haven’t already:
Determine whether or not their systems are affected by the bug (test; see the version-check sketch after this list).
Patch and/or upgrade affected systems. (This will require a restart)
Revoke and reissue keys and certificates for affected systems.
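For the first step, one rough first-pass check (my own sketch, not an official tool) is to ask the OpenSSL library on the machine what version it reports. Keep in mind that many distributions backport fixes without changing the version string, so your package manager's changelog or a dedicated Heartbleed test is the more reliable source of truth:

```c
/* Build with: cc check_openssl.c -lcrypto */
#include <stdio.h>
#include <openssl/opensslv.h>  /* OPENSSL_VERSION_TEXT */
#include <openssl/crypto.h>    /* SSLeay_version(), SSLEAY_VERSION */

int main(void)
{
    /* Version this program was compiled against. */
    printf("Compiled against: %s\n", OPENSSL_VERSION_TEXT);

    /* Version of the shared library actually loaded at run time. */
    printf("Running with:     %s\n", SSLeay_version(SSLEAY_VERSION));

    /* Versions 1.0.1 through 1.0.1f shipped the vulnerable heartbeat
     * code; 1.0.1g and later (and the 0.9.8/1.0.0 branches) did not. */
    return 0;
}
```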
Furthermore, I strongly recommend you enable Perfect forward secrecy to safeguard data in the event that a private key is compromised:
When an encrypted connection uses perfect forward secrecy, that means that the session keys the server generates are truly ephemeral, and even somebody with access to the secret key can’t later derive the relevant session key that would allow her to decrypt any particular HTTPS session. So intercepted encrypted data is protected from prying eyes long into the future, even if the website’s secret key is later compromised.
What do users (like me) need to do?
The most important thing regular users need to do is change your passwords on critical sites that were vulnerable (but only after they’ve been patched). Do you need to change all of your passwords everywhere? Probably not. Read You don’t need to change all your passwords for some good tips.
Additionally, if you’re not already using a password manager, I highly recommend LastPass, which is cross-platform and works on pretty much every device. Yesterday LastPass announced they are helping users to know which passwords they need to update and when it is safe to do so.
If you do end up trying LastPass, check out my guide for setting it up with two-factor auth.
If you like visuals, check out this great video showing how the Heartbleed exploit works.
If you’re interested in learning more about networking, I highly recommend Ilya Grigorik‘s High Performance Browser Networking, which you can also read online for free.
If you want some additional technical details about Heartbleed (including actual code!), check out these posts:
Diagnosis of the OpenSSL Heartbleed Bug
Attack of the week: OpenSSL Heartbleed
Oh, and you can listen to Kevin and I talk about Heartbleed on In Beta episode 96, “A Series of Mathy Things.”
Written by Christie Koehler
March 24, 2014
On Brendan Eich as CEO of Mozilla
Today Mozilla announced a number of leadership changes, including the appointment of Brendan Eich as CEO. Amid the analysis of the change, there is a lengthy post on Hacker News specifically discussing Brendan's support of anti-LGBT Prop. 8 in 2008 and whether or not it affects his suitability as CEO.
As a single employee of Mozilla, I am not sure I can definitively determine Brendan’s suitability. I can, however, give insight as to what I experience at Mozilla as a queer woman and how I feel about the appointment.
Mozilla is a very unique organization in that it operates in a strange hybrid space between tech company and non-profit. There simply aren’t a lot of models for what we do. Wikimedia Foundation is always the one that comes closest to mind for me, but remains a very different thing. As such, people with experience relevant Mozilla, relevant enough to lead Mozilla well, are in very short supply. An organization can always choose to make an external hire and hope the person comes to understand the culture, but that is a risky bet. Internal candidates who have demonstrated they get the culture, the big picture of where we need to go and have demonstrated they can effectively lead large business units, on the other hand, present as very strong options.
And, from my limited vantage point, that’s what I see in Brendan.
Like a lot of people, I was disappointed when I found out that Brendan had donated to the anti-marriage equality Prop. 8 campaign in California. It’s hard for me to think of a scenario where someone could donate to that campaign without feeling that queer folks are less deserving of basic rights. It frustrates me when people use their economic power to further enshrine and institutionalize discrimination. (If you haven’t seen it, here’s Brendan’s response to the issue.)
However, during the intervening years, I've spent a lot of time navigating communities like Mozilla and figuring out how to get things done. I've learned that it's hard working with people but that you have to do it anyway. I've learned that it can be even harder to work with someone when you think you don't share the same fundamental beliefs, or when you think they hold opposing or contradictory beliefs, but you have to do that sometimes, too.
The key is to figure out when it’s important to walk away from interacting with a person or community because of a mis-alignment in beliefs and when you need to set aside the disagreement and commit to working together in service of the shared goal. Context is really important here. What is the purpose or mission of the community? Who is its audience? What are its guiding principles?
Mozilla’s mission is “to promote openness, innovation & opportunity on the Web.” Our audience is the global community of people connecting to the internet. Our guiding principles are numerous, but include protecting the internet as a public resource and upholding user privacy, security and choice.
At the same time, many Mozillians are themselves advocates for human rights, animal rights, prison abolition, marriage equality, racial equality, etc. As much as some of those causes might overlap with the cause of a free and open internet, they are separate causes and none of them are the focus of Mozilla the organization. Focus is important because we live in a world of limited resources. Mozilla needs to stay focused on the mission we have all come together to support and move forward.
Another factor to consider: What is their behavior within the community, where we have agreed to come together and work towards a specific mission? How much does a person’s behavior outside the scope of community affect the community itself? Does the external behavior conflict directly with the core mission of the organization?
To be clear, I’m personally disappointed about Brendan’s donation. However, aside from how it affected me emotionally, I have nothing to indicate that it’s materially hurt my work within the Mozilla community or as a Mozilla employee. Mozilla offers the best benefits I have ever had and goes out of its way to offer benefits to its employees in same-sex marriages or domestic partnerships on par with those in heterosexual marriages. Last year we finally got trans-inclusive healthcare. We didn’t have an explicit code of conduct when I started, but adopted the guidelines for participation within my first year. Progress might be slow, but it’s being made. And I don’t see Brendan standing in the way of that.
Certainly it would be problematic if Brendan’s behavior within Mozilla was explicitly discriminatory, or implicitly so in the form of repeated microagressions. I haven’t personally seen this (although to be clear, I was not part of Brendan’s reporting structure until today). To the contrary, over the years I have watched Brendan be an ally in many areas and bring clarity and leadership when needed. Furthermore, I trust the oversight Mozilla has in place in the form of our chairperson, Mitchell Baker, and our board of directors.
It’s true there might be a kind of collateral damage from Brendan’s actions in the form of some people withdrawing from participation in Mozilla or never joining in the first place. There’s a lot I could say about people’s responses to things that happen at Mozilla, but I’ll save those for another time.
For now, I’ll just say that if you’re queer and don’t feel comfortable at Mozilla, that saddens me and I’m sorry. I unde | 计算机 |
Welcome to www.Takamine.com. Any person accessing this World Wide Web Site (the "Web Site") agrees to the following:
All textual, graphical and other content appearing on this Web Site (www.Takamine.com) is the property of KMC MUSIC, INC., or its affiliates (collectively, "KMC") or its licensors. All right, title and interest in and to the site and its materials, including but not limited to, all patent rights, copyrights, trade secrets, trademarks, site marks and other inherent proprietary rights, are retained by KMC or its licensors. Except as expressly authorized by KMC herein, you agree not to make, copy, display, modify, rent, lease, license, loan, sell, distribute or create derivative works of this Web Site or its materials in whole or in part. Any modification of the Web Site or its materials for any purpose is in violation of these terms.
You may view, copy, print and use content contained on this Web Site (including recorded material) solely for your own personal use and provided that: (1) the content available from this Web Site is used for informational and non-commercial purposes only; (2) no text, graphics or other content available from this Web Site is modified or framed in any way; and (3) no graphics available from this Web Site are used, copied or distributed separate from accompanying text. The use of any such content for commercial purposes is expressly prohibited. Nothing contained herein shall be construed as conferring by implication, estoppel or otherwise any license or other grant of right to use any patent, copyright, trademark, service mark or other intellectual property of KMC or any third party, except as expressly provided herein.
Reference to any product, recording, event, process, publication, service, or offering of any third party by artist name, trade name, trademark, company name or otherwise does not necessarily constitute or imply the endorsement or recommendation of such by KMC. Any views expressed by third parties on this Web Site (including recorded interviews) are solely the views of such third party and KMC assumes no responsibility for the accuracy or veracity of any statement made by such third party.
KMC Music Trademarks: ADAMAS and the unique bridge, headstock, fingerboard inlay, and soundboard configuration designs of ADAMAS guitars, GENZ BENZ ENCLOSURES, GIBRALTAR, HAMER and the unique headstock design of HAMER guitars, KMCONLINE, LATIN PERCUSSION LP, LP and the 1 and 2 circle designs, MBT, MUSICORP, OVATION and the unique bridge and bowl-shaped designs of OVATION guitars, ROUNDBACK, TAKAMINE, TOCA, are a few of the trademarks and service marks of KMC that may appear in this Web Site, many of which are registered in the United States and other countries. This is not a comprehensive list of all trademarks of KMC. The KMC trademarks may not be displayed or otherwise used in any manner without the prior written consent of KMC. All other names and marks mentioned in this Web Site are the trade names, trademarks or service marks of their respective owners.
LINKS: THIS WEB SITE MAY CONTAIN LINKS TO OR BE ACCESSED THROUGH LINKS ON WORLD WIDE WEB SITES OF KMC DEALERS OR DISTRIBUTORS. KMC DEALERS AND DISTRIBUTORS ARE INDEPENDENT CONTRACTORS AND ARE NOT AGENTS OF KMC. KMC DOES NOT HAVE RESPONSIBILITY FOR THE CONTENT, AVAILABILITY, OPERATION OR PERFORMANCE OF WEB SITES OF KMC DEALERS OR DISTRIBUTORS, OR ANY OTHER SITES, TO WHICH THIS WEB SITE MAY BE LINKED OR FROM WHICH THIS WEB SITE MAY BE ACCESSED. YOUR USE OF SUCH SITES OR RESOURCES SHALL BE SUBJECT TO THE TERMS AND CONDITIONS SET FORTH BY THEM.
Networked KMC Music Sites: This Web Site also serves as an entry into several Web Sites operated by KMC subsidiaries and operating divisions. Please note that these sites may adopt terms of use particular to the subsidiary or operating division. While these terms of use apply to this KMC Web Site as a whole, if a KMC subsidiary or division has terms in addition to those described here, then those terms will also apply. In addition, sales made in connection with a KMC subsidiary or division site that offers products or services for sale will be subject to that subsidiary's or division's terms of sale as a condition of completion of the transaction. Those terms of sale will either be posted on the subsidiary or division Web Site or described in a separate agreement. You | 计算机 |
Big data means big IT job opportunities -- for the right people
A slew of new jobs is expected to open up in big data, but not everyone in IT will qualify. Here's what employers will be looking for.
Tam Harbert (Computerworld (US))
As big data gathers momentum, it's helping to create big career opportunities for IT professionals -- if they have the right qualifications. According to a report published in 2011 by McKinsey & Co., the U.S. could face a shortage by 2018 of 140,000 to 190,000 people with "deep analytical talent" and of 1.5 million people capable of analyzing data in ways that enable business decisions.
Companies are, and will continue to be, looking for employees with a complex set of skills to tap big data's promise of competitive advantage, market watchers say. "There's no question that the No. 1 requirement [for] enterprises that are serious about gaining a competitive advantage using data and analytics is going to be the talent to run that program," says Jack Phillips, CEO of the International Institute for Analytics (IIA), a research firm.
But what exactly constitutes "big data talent"? What are these jobs, and what skills do they require? What kind of background qualifies a person for a big data job? Computerworld took the pulse of some prominent players in the emerging field to determine an IT worker's place -- if any -- in the big data universe. Here's what they had to say.
Buckets of Skills
"There is no monolithic 'big data profession,'" says Sandeep Sacheti, former head of business risk and analytics at UBS Wealth Management, who now holds the newly created position of vice president of customer insights and operational excellence at Wolters Kluwer Corporate Legal Services.
Sacheti's new job is all about big data: using analytics to understand customers, develop new products and cut operational costs. In one project, the Wolters division that sells electronic billing services to law firms is using analytics to mine data it gathers from its customers (with their permission) to create new products, including the Real Rate Report, which benchmarks law firm rates around the country.
Sacheti is now both hiring from the outside and training internal staffers for big data work. He thinks of big data jobs in terms of four "buckets of skill sets": data scientist, data architect, data visualizer and data change agent. But there are no standard titles -- other employers likely use different buckets and value different skills. What one company calls a data analyst, for example, might be called something different elsewhere, says John Reed, senior executive director at IT staffing firm Robert Half Technology. And, as Sacheti's title demonstrates, some big data jobs contain neither the word big nor the word data.
Some companies come to the IIA for help recruiting big-data talent, Phillips says. First they ask where to look for candidates. "Then they stop in their tracks and say, 'Wait, how do I know what I'm looking for?'" he adds.
"Everybody's asking, 'How do you identify these people? What skills do you look for? What is their degree?'" says Greta Roberts, CEO of Talent Analytics, which makes software designed to help employers correlate employees' skills and innate characteristics to business performance.
Roberts, Phillips and other experts say the skills most often mentioned in connection with big data jobs include math, statistics, data analysis, business analytics and even natural language processing. And although titles aren't always consistent from employer to employer, some, such as data scientist and data architect, are becoming more common.
A Curious Mind Is Key
As companies search for big data talent, they're tending to target application developers and software engineers more than IT operations professionals, says Josh Wills, senior director of data science at Cloudera, which sells and supports a commercial version of the open-source Hadoop framework for managing big data.
That's not to say IT operations specialists aren't needed in big data. After all, they build the infrastructure and support the big data systems. "This is where the Hadoop guys come in," says D.J. Patil, data scientist in residence at Greylock Partners, a venture capital firm. "Without these guys, you can't do anything. They are building incredible infrastructure, but they are not necessarily doing the analysis." IT staffers can quickly learn Hadoop through traditional classes or by teaching themselves, he notes. Burgeoning training programs at the major Hadoop vendors are proof that many IT folks are doing so.
That said, most of the jobs emerging in big data require knowledge of programming and the ability to develop applications, as well as an understanding of how to meet business needs. The most important qualifications for these positions aren't academic degrees, certifications, job experience or titles. Rather, they seem to be soft skills: a curious mind, the ability to communicate with nontechnical people, a persistent -- even stubborn -- character and a strong creative bent.
Patil has a Ph.D. in applied mathematics. Sacheti has a Ph.D. in agricultural and resource economics. According to Patil, the qualities of curiosity and creativity matter more than one's field of study or level of academic credential. "These are people who fit at the intersection of multiple domains," he says. "They have to take ideas from one field and apply them to another field, and they have to be comfortable with ambiguity."
Wills, for example, took a circuitous path to the role of data scientist. After graduating from Duke University with a bachelor's degree in math, he pursued a graduate degree in operations research at the University of Texas on and off while working for a series of companies before dropping out to take a job at Google in 2007. (He notes that he did eventually complete that master's degree.) Wills worked at Google as a statistician and then as a software engineer before moving to Cloudera and assuming his data science title.
In short, big data folks seem to be jacks of all trades and masters of none, and their greatest skill may be the ability to serve as the "glue" in an organization, says Wills. "You can take someone who maybe is not the world's greatest software engineer [nor] the world's greatest statistician, but they have the communications skills to talk to people on both sides" as well as to the marketing team and C-level executives, he explains.
"These are people who cut across IT, software development, app development and analytics," Wills adds, noting that he thinks such professionals are rising in prominence. "I'm seeing a shift in value that companies are assigning to these people," he says.
Sacheti, too, keeps his eye out for people like that. "We are finding there are a lot more who are flexible in learning new skills, willing to do iterative design and agile thinking," he says.
Roberts agrees. "The innate characteristics of people, like a predisposition to curiosity, can be more predictive of someone's performance in a role than them having a degree in, say, IT or IS or CS," she says.
Wanted: Relentless, Scientific Temperament
Until recently, creativity, curiosity and communications skills haven't typically been emphasized in IT departments, which may be why many employers aren't looking to their IT operations staffs to find people to spearhead big data projects.
The IIA sees data science as resting on three legs: technological (IT, systems, hardware and software), quantitative (statistics, math, modeling and algorithms) and business (domain knowledge), according to Phillips. "The professionals we see who are successful come from the quantitative side," he says. "They know about the technology, but they aren't running the technology. They rely on IT to give them the tools."
Big data also demands a scientific temperament, says Wills. "When we talk about data science, it's really an experiment-driven process," he explains. "You're usually trying lots of different things, and you have to be OK with failure in a pretty big way." Wills goes on to say that there's a "certain kind of relentlessness you need in the personality of someone who does this kind of work."
Big data professionals also have to be intellectually flexible enough to quickly change their assumptions and approaches to problems, says Brian Hopkins, an analyst at Forrester Research. "You can't limit yourself to one schema but [need to be comfortable] operating in an environment with multiple schemas or even no schemas," he says. That tends to be a different approach than most IT people are used to. "IT people coming out of a strong enterprise IT shop are going to perhaps be constrained a little bit in their ability to do things quickly and move fast and be agile," Hopkins says.
But once hiring managers find the right type of person, they're usually willing to retrain that person to fill a big data role. For example, Patil used to work at LinkedIn, where, he says, "we largely trained ourselves, because so much of this is open source." He thinks the same thing can happen at most companies. "You can make these people" -- if they have the right personality, he says.
IT workers who are flexible, willing to learn new tools and have a bit of an artist somewhere within can move into data architecture or even data visualization, says Sacheti. In short, big data carries big potential for IT pros who would relish an opportunity to show their creativity.
Frequent Computerworld contributor Tam Harbert is a Washington, D.C.-based writer specializing in technology, business and public policy. This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.
Career Opportunities: Big Data Job Titles and Skills
Without conventional titles, or even standard qualifications, it's hard to know what makes someone suitable for a big data job. This listing, based on interviews with big data experts and recruiters, attempts to match up some of the most common titles with the skills required.
• Data scientists: The top dogs in big data. This role is probably closest to what a 2011 McKinsey report calls "deep analytical talent." Some companies are creating high-level management positions for data scientists. Many of these people have backgrounds in math or traditional statistics. Some have experience or degrees in artificial intelligence, natural language processing or data management.
• Data architects: Programmers who are good at working with messy data, disparate types of data, undefined data and lots of ambiguity.
They may be people with traditional programming or business intelligence backgrounds, and they're often familiar with statistics. They need the creativity and persistence to be able to harness data in new ways to create new insights.
• Data visualizers: Technologists who translate analytics into information a business can use. They harness the data and put it in context, in layman's language, exploring what the data means and how it will impact the company. They need to be able to understand and communicate with all parts of the business, including C-level executives.
• Data change agents: People who drive changes in internal operations and processes based on data analytics. They may come from a Six Sigma background, but they also need the communication skills to translate jargon into terms others can understand.
• Data engineers/operators: The designers, builders and managers of the big data infrastructure. They develop the architecture that helps analyze and process data in the way the business needs it. And they make sure those systems are performing smoothly.
"The people who do the best are those that have an intense curiosity," says D.J. Patil, data scientist in residence at Greylock Partners. Patil probably knows what he's talking about: Forbes magazine credits him and Cloudera founder Jeff Hammerbacher with coining the term data scientist. And earlier in his career, Patil helped develop the data science team and strategy at LinkedIn.
- Tam Harbert
2014-15/4479/en_head.json.gz/28878 | 10/21/201202:56 PMWendy NatherCommentaryConnect Directly0 commentsComment NowLogin50%50%
The Elephant In The Security Monitoring RoomIt's right in front of us, but is too rarely taken into account within monitoring and risk systems: the policy exceptionIf you think about it, a firewall is an exception: Just connecting to the Internet is a risk, and a firewall is there to allow in (or out) the things you need despite that risk. Even when you have a full set of policies in place that govern how your infrastructure is configured, not everything will follow the rules. For every setting, there is an equal and opposite exception.
CISOs spend a lot of time granting and tracking these exceptions -- and then explaining them to an auditor. "Yes, I know we haven't changed this account password in two years. That's because to do it across the whole network will require six weeks of dedicated effort and reconfiguration of legacy hosts and business-critical applications that rely on jobs running under this account. There is no way we're going to do this every 90 days."
There are exceptions everywhere you look, either time-based ("We'll fix this when we have more money in the next fiscal year") or permanent ("We meant to do that -- please stop bugging us about it"). And don't forget what are probably your biggest sources of exceptions: your developers, who need to try new things, and your senior management, who probably get to have whatever they want; I once caught an executive doing the exact thing that he was most vocal about preventing. So it's important to be aware of exceptions and have them centrally controlled and tracked for a more complete view of the risk you're taking on ("Who thought THAT was a good idea??"). It also saves time and effort when you are trying to troubleshoot something -- or, more importantly, when you're interpreting events in your monitoring system.
If I could wave a magic wand, I would have annotation features for every product that controls or monitors security. And I would have the annotation at the very spot where the controls are listed. This is available today with some systems: When I read firewall rules, I want to know immediately who created them, when they were created, and who authorized them. But I also want annotation for alerts: "Don't show this as a problem, but keep tracking it, and let me know if it continues beyond this date because that's when they said they would stop needing it." And I want them for logs: "These are all the entries that resulted from this change that we approved last Wednesday." I don't want to have to track them all and look them up separately in the world's most popular business intelligence tool (Excel).
Exceptions require deep institutional knowledge, not only of systems, but of business processes and risk appetite. It's critical to understand what policy exceptions you have in place so that you can identify false-positives as well as real anomalies. And in an ideal world, you would roll all of your exceptions into one place so you could tell when your exposure was reaching the limits of acceptable risk. This probably won't happen if the exception is in the process and isn't visible in technology, but it sure would be nice to move in this direction with all of our assessment and monitoring systems. Otherwise, you're only monitoring half of what your business is actually doing.
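None of this exists as a shipping product feature today, so the following is only a rough sketch of the kind of central exception registry described above -- the record fields, dates and risk scores are invented for illustration, not drawn from any real tool:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PolicyException:
    control: str             # e.g. "firewall rule 114" or "service account password age"
    reason: str              # why the exception was granted
    approved_by: str         # who accepted the risk
    expires: Optional[date]  # None means "permanent -- we meant to do that"
    risk_score: int          # 1 (minor) to 10 (severe), assigned at approval time

# The registry is simply the list of everything you have agreed to live with.
registry = [
    PolicyException("service account password unchanged for 2 years",
                    "legacy jobs break on rotation", "CISO", None, 7),
    PolicyException("developers may install unapproved tools",
                    "needed for prototyping", "Engineering VP", date(2013, 1, 1), 4),
]

def open_exposure(registry, today, risk_appetite=15):
    """Sum the risk of every exception still in force and flag the expired ones."""
    live = [e for e in registry if e.expires is None or e.expires >= today]
    expired = [e for e in registry if e.expires is not None and e.expires < today]
    total = sum(e.risk_score for e in live)
    return total, total > risk_appetite, expired

total, over_limit, expired = open_exposure(registry, date(2012, 10, 21))
print(total, over_limit, [e.control for e in expired])
```

Even something this crude makes the two questions above answerable: what have we agreed to live with, and when does it stop being acceptable?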
Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. With over 30 years of IT experience, she has worked both in financial services and in the public sector, both in the US and in Europe. You can find her on Twitter as @451wendy.
2014-15/4479/en_head.json.gz/29173 | () review
Sorry, but the review that you are trying to view is no longer accessible or the page was assigned to a review that was submitted but never approved by site staff. Reviews may be removed from the site for a variety of reasons, and reviews may also be rejected for a variety of reasons. We hope that you continue browsing the site and find other content that interests you. Thanks!
None of the material contained within this site may be reproduced in any conceivable fashion without permission from the author(s) of said material. This site is not sponsored or endorsed by Nintendo, Sega, Sony, Microsoft, or any other such party. is a registered trademark of its copyright holder. This site makes no claim to , its characters, screenshots, artwork, music, or any intellectual property contained within. Opinions expressed on this site do not necessarily represent the opinion of site staff or sponsors. | 计算机 |
2014-15/4479/en_head.json.gz/29250 | Operating Systems/Platforms
Microsoft – Windows 7 Beta review
it could be the OS that Vista should have been
£free (time-limited beta) by IT Reviews Staff
Perhaps it’s a changing of the tide within the walls of Microsoft, but the decision to allow as many people as demand dictates to download and try a free beta of its next operating system has proven to be a good one.
For two reasons, really. Firstly, it demonstrates an openness that would serve the company well were it to continue. And secondly, Windows 7 is really rather good and a sign of hope for those of us who have grown frustrated with the bloat and fussiness of Windows Vista.
The beta is downloadable from Microsoft’s website for a limited time, and runs through until August, at which point you’ll need to dig out your XP/Vista/Linux discs again. We opted, unwisely as it turned out, to upgrade an existing Vista installation rather than install from new, and paid the price for doing so.
There’s a wise proverb yet to be written about always installing an operating system afresh, and after sitting through three hours of Windows 7′s takeover of our 2GHz test laptop, we were tempted to throw in the towel. Many commentators have noted that the from-fresh installation is really quite brisk, so it was inevitable that we’d do things the hard way.
After that, we did eventually arrive at the Windows 7 log-in screen, a familiar descendant of Vista’s, and were delighted to see that even on a laptop, everything was in place. All of our old programs were compatible, the hardware was working and even the trusty old touchpad was giving us no problems. In short, it was a long yet seamless upgrade. It was really surprising just how much worked.
For things have clearly changed. The boot up time had slightly improved for starters, and – thankfully – the hanging around once the Windows desktop appears, before you can actually do anything meaningful, has also been attacked. This is clearly good news.
The look of the operating system is very much one that shows its Vista heritage: it is still the same core operating system, with some enhancements, until you get down to the taskbar. Here things have changed, with chunkier, double-sized icons following the kind of thinking that Microsoft employed with Office 2007, and Apple utilises with MacOS.
You can dock an application to the taskbar and running programs are displayed here. What’s good, though, is that if you have six or seven Firefox windows running, for example, then they’ll be docked under one icon. Clicking on said icon will then let you choose from all open windows attributed to that program. A neat idea, even if it takes some getting used to.
Also built onto the taskbar by default, on the far right, is a direct-to-desktop button. This was available in XP and Vista, but it just seems in a more logical place now.
Under the bonnet there are many changes too, and the operating system seems a bit friendlier as a result. Little things like changing the labelling of features to more sensible names makes a handy difference.
So what other changes? The gadgets that Vista brought in can now be placed all over the desktop as your heart desires, there’s greater control over user accounts, and the security side of things – arguably a strength of Vista – has been refined too. And then there are little add-ons such as the snipping tool, which allows you to take a grab of an area you select without having to battle through a graphics program to do so.
What impressed us the most about Windows 7 thus far, though, is its stability. Mixed in with a generally faster way of working, there’s real promise here, even if some holdovers from Vista are still a little annoying. This seems to be a strong step in the right direction, and you have to conclude that it’s closer to release than Microsoft is saying, given just how polished it feels.
There’s still time for things to go off in the wrong direction, but right now, Windows 7 is a product of real promise.
Company: Microsoft
Windows 7 is shaping up better than many were expecting and, while it's still early days, this could be a turning point for the modern-day Microsoft.
2014-15/4479/en_head.json.gz/30270 | (27) PowerColor X1900 XT Graphics Card Review
[05/04/2006 02:37 PM | Graphics]by Alexey Stepin ATI Radeon X1900 XT is a very successful solution from the price-to-performance standpoint: it is considerably cheaper than Radeon X1900 XTX but is practically as fast. Today we are going to introduce to you one of the graphics cards based on this chip from PowerColor Company.
Noise, Overclockability, 2D Quality
Half-Life 2: Lost Coast
Project: Snowblind
Splinter Cell: Chaos Theory
Performance in Strategy Games
Warhammer 40000: Dawn of War
Performance in Semi-Synthetic Benchmarks
Futuremark 3DMark05 build 120
As our readers should well know, the announcement of the Radeon X1900 (R580) graphics processor from ATI Technologies was accompanied with a release of three graphics cards based on it: Radeon X1900 XTX, Radeon X1900 XT and Radeon X1900 XT CrossFire Edition. The latter is a specific and narrowly-scoped product equipped with a frame-sewing Compositing Engine and meant for use as a Master in CrossFire tandems. If you want to learn more about this card, refer to our 2 Fast, 2 Furious: ATI Radeon X1900 XT CrossFire Review.The Radeon X1900 XTX debuted successfully. It outperformed the GeForce 7800 GTX 512 which had never really taken off and became the highest-performance product among consumer graphics cards. ATI Technologies had taken care to provide the new card in mass quantities, so it was available right after the announcement, even though at a steep price of $649. Well, every flagship product is expensive, and the price of this one was also lowered to $549 a month after the release.The Radeon X1900 XT in its turn only differed from its elder brother in having somewhat lower frequencies: 625/725 (1450) MHz as opposed to 650/750 (1500) MHz. Such a small difference couldn’t have a big effect on the performance in games as our tests proved (for details see our article called The Fast and Furious: ATI Radeon X1900 XTX Review). In most cases there was a negligible difference between a Radeon X1900 XTX and a Radeon X1900 XT or even none at all, although the latter came at an officially recommended price of $549, i.e. $100 cheaper than the flagship model! Since there was virtually no competition from Nvidia at the time of the announcement, the Radeon X1900 XT had the most appealing price/performance ratio. Today it has got a dangerous rival in the GeForce 7900 GTX which is officially priced at $499.We’re going to talk about PowerColor’s version of Radeon X1900 XT in this review. Products from this company have never boasted gorgeous accessories, but have usually come at a rather low price, contrary to products from ASUS, for example. Let’s see what PowerColor offers us this time. Table of contents: | 计算机 |
2014-15/4479/en_head.json.gz/30329 | AWS Case Study: Onoko Limited About Onoko Onoko is a Hong Kong-based application development company that creates social and mobile apps, primarily for the Facebook and iPhone platforms. Onoko's apps are largely focused on relational games, such as quizzes and interviews. The company reports that its apps collectively support 15 million users worldwide. Onoko also develops custom browser homepages and browser plugins. All of the company's products are created by four fulltime developers.
Onoko began operations using a variety of cloud-based services, but struggled to find an individual provider with both the flexibility and economical price-point to meet the company's growing needs. In response, Onoko turned to Amazon Web Services (AWS).
Why Amazon Web Services Janakan Arulkumarasan, Director of Onoko International, says, "We found AWS to be significantly cheaper than other hosted services, but the main appeal was the simplicity of the service, and the elegant way that various AWS services work together. Once you start using one service, it becomes easy and logical to use all the others."
The company now operates its entire application infrastructure on AWS. For example, Amazon Elastic Compute Cloud (Amazon EC2) provides all of Onoko's authentication and back-end services, while Amazon SimpleDB meets the company's requirement for scalable, non-relational data storage.
For Flash file hosting and delivery, Onoko uses Amazon Simple Storage Service (Amazon S3) in conjunction with the content delivery service, Amazon CloudFront. The company is particularly impressed with Amazon CloudFront, which, the company explains, has improved the user experience by dramatically increasing app download speeds.
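The case study doesn't publish Onoko's code, so the snippet below is only a generic sketch of the pattern it describes -- push a static Flash asset to S3, then hand clients a CloudFront URL -- written with the present-day boto3 SDK (which postdates the case study); the bucket name and distribution domain are placeholders.

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "example-flash-assets"           # placeholder bucket name
CDN_DOMAIN = "d1234abcd.cloudfront.net"   # placeholder CloudFront distribution domain

def publish_asset(local_path, key):
    """Upload a file to the S3 origin, then return the CDN URL handed to clients."""
    with open(local_path, "rb") as f:
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=f,
            ContentType="application/x-shockwave-flash",
            CacheControl="max-age=86400",  # let CloudFront edge caches hold it for a day
        )
    # CloudFront fetches the object from the S3 origin on the first request,
    # then serves later requests from edge locations close to the user.
    return "https://%s/%s" % (CDN_DOMAIN, key)

print(publish_asset("game.swf", "apps/quiz/game.swf"))
```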
The Benefits Currently, Onoko's various social and mobile applications, browser homepages, and browser plugins maintain an average of one million users per day. The company credits AWS with helping it sustain this high level of service while at the same time reducing its operating cost by more than 50 percent since switching from its previous cloud service provider. In fact, the entire operating expense of Onoko's Amazon S3 and Amazon CloudFront services is less than one percent of the company's total revenues.
In addition to the financial benefits the company has experienced with AWS, Onoko is also impressed with the support services of AWS, which help it concentrate on its own corporate goals, rather than worrying about outgrowing its infrastructure.
"AWS has allowed us to focus on building great apps and great user experiences with a very small team," Arulkumarasan says. "With AWS, scalability is never a problem, and we have survived—and profited—from massive spikes of up to 5 million additional users a day without any problems. The AWS staff has been enormously helpful and responsive, and I love how much time AWS developer relations staff spends organizing events and meeting one-on-one to understand our business model and needs."
Onoko is always exploring the possibility of adding additional features from AWS to its current service stack. "AWS keeps innovating new services that make app development easy," Arulkumarasan says. "I still get super excited whenever I see an 'AWS new feature' email."
Next Step
To learn more about how AWS can help support your web app needs, visit our Web Application details page: http://aws.amazon.com/web-mobile-social/.
2014-15/4496/en_head.json.gz/1198 | Gamestorming
from XPLANE 3 years ago /
The future of work is not about dull routine… it's about being more human.
Gamestorming is a set of best practices compiled from the world's most innovative people and companies, condensed into a lightweight, low-tech toolkit that applies tools and rules to the problems of collaboration and teamwork. The approach is a mashup of game principles, game mechanics and work. It's a set of methods for inventors, explorers, and change agents. A practice made of people, paper and passion.
It's for people who want to design the future, to change the world, to make, create and innovate. And it's been compiled into a book by Dave Gray, James Macanufo and Sunni Brown, and published by O'Reilly Media.
Video by XPLANE. Learn more about Gamestorming at gogamestorm.com. Follow | 计算机 |
2014-15/4496/en_head.json.gz/1910 | Preview: Injustice: Gods Among Us is gritty, sharp and it looks incredible
Posted by: Gregory Hutto
I was walking across the hall at E3 last year, and my brisk pace was broken by the sight of The Flash punching Super Man square in the nose. Taken away from my serious business, no doubt, I noted right away that this was no crappy Mortal Kombat vs. DC game. It was gritty, sharp and it looked incredible; this game was Injustice: Gods Among Us. Last week, I finally got to see it again.
For those of us who gave Mortal Kombat vs. DC a shot, we were left utterly disappointed by an extremely tame game with an underwhelming cast. Injustice: Gods Among Us is an opportunity to get that terrible taste out of our mouths. I’m looking for a game that’s dark, deep and fine-tuned. Injustice seems closer, so far.
I can’t really speak to the characters that I saw, save staples like Batman, Super Man, Deathstroke, Bane, Lex Luthor and a few others. But I can say that they’ve got loads of great characters to choose from, and we got a sneak peak as to what they’d look like in the game. It was amusing hearing the characters awkwardly shout each other’s name for the sake of informing non-nerds who they were. But enough teasing – on to the stuff I can speak about.
After taking stock of the potential playable characters, I really took the atmosphere of the game in. It was campier than I expected. That’s not to say it was downright cheesy, but this is no Nolan effort. I have to say, I was a bit disappointed by that. I may be on an island, though. The game features an all original story arc that many comic book fans may be interested in. If you’re a fan of DCUO’s story, this game will far from offend, and it definitely does go dark.
As I was handed the marketing one sheet, I read the famous fighter line about how the game interacts with the environment. I kind of rolled my eyes and thought, Yeah, whatever. But, it’s true! Objects in the area can be used and manipulated during the course of battle, and they’re all appropriately contextual. Ripping off a hose to Freeze’s prison cell sprays ice at enemies to slow them down, and tossing explosives through ripped off assembly lines whittles down the opponent’s life. Purists can turn these options off, but they do provide a bit of nuance and bring set pieces to life.
Fighting is very similar to Mortal Kombat, which is to be expected from the very same developer. Combat feels to move at a comfortable pace, more or less, and combos are existent but not overly complicated. The round system is tossed aside, though, favoring a one round take all that feels much more sensible. I enjoy fighters, but for the sake of being honest, I don’t know enough about them to give you the nitty gritty beyond that.
Oh, but wait, there are special moves and what not. They’re easy to trigger by clicking both, well, triggers. Batman grabs the foe, tases their neck and has the Batmobile commit a hit and run, regardless if you’re actually on a street or in Arkham Asylum. Better yet, Bane does the old backbreaker, and it feels oh so right. These moves become available as the combo meter fills, but there are other ways to utilize the meter, such as exploding batarangs on impact and more.
To get me to buy a fighter is a tough sell. The only one I can really wrap my head around is Marvel vs. Capcom, because I love how fast it is, and the roster is amazing. If Injustice: Gods Among Us delivers on the latter and tosses in a sick story that I just can’t miss, they may have a fan in me after all.
2014-15/4496/en_head.json.gz/1982 | Every Enthusiast's 127.0.0.1 Since May 2002 ATI Radeon HD 3870 X2
Why the delay, and what’s this with minimum FPS?
We are sure many of you reading will be aware that the 3870 X2 was originally due to be launched on the 23rd January but at what seemed like the last moment the launch was delayed until today, the 28th January. The reason for this has not been particularly clear but the public stance has been that ATI were working on an updated driver, which is true.
The reason for this new driver, and subsequent delay is that when we first received the 3870 X2 and press driver we found numerous performance problems, image quality issues and game crashes when testing. These were reported to ATI and to their credit they worked tirelessly with us to fix all of the issues, which were reproducible. ATI supplied us with beta drivers to test until the final driver (the one in fact used by us today) was ready for use and throughout the last week or so they have shown a great commitment to improving the situation, and as a result the end user experience. We really do need to credit them considerably for their efforts in accepting the issues and getting them quickly resolved.
Driver bugs ranged from minor flickering in Oblivion when first loading a game to Gears of War crashing when configured to use maximum settings. However with the card now performing well and all of the major titles running stable and looking great the end result is a win for everyone involved.
There is still however one outstanding issue with the 3870 X2 which cannot be put to the side. In many tests within this article you will notice that the minimum frames per second figure seems out of line with the rest of the results, ATI had the following comment on this issue:
“In most circumstances we would expect the minimum frame rate found on the ATI Radeon HD 3870X2 to be equal to, or better than an ATI Radeon HD 3870. However, there are conceivable circumstances in which the minimum frame rates of the ATI Radeon HD 3870X2 could be lower. The most obvious case where the ATI Radeon HD 3870X2 could have a lower instantaneous frame rate would be if an application uploaded new information (such as textures or vertex data) to the graphics accelerator during rendering. In this situation the ATI Radeon HD 3870X2 driver has to duplicate the uploaded data, copying the data to each of the GPU frame buffers, which naturally takes more time than uploading a single set of data for a single GPU graphics accelerator.
These circumstances should be rare in practice - as a rule applications try to avoid uploading data during rendering as much as possible as such uploads are known to cause inconsistent frame rates.
An important thing to remember is that if a slow frame is caused by data being uploaded, then the effect will be seen across all graphics accelerators (graphics accelerators with multiple cores will just be affected to a greater degree). The visual result would be a small stutter, probably for a single frame, and if this stutter is noticeable on a ATI Radeon HD 3870X2 it is likely to be noticeable on a single GPU graphics accelerator as well.
In most situations low frame rates are caused by a heavy rendering workload, resulting in low frame rates that are generally seen for an extended period of time. Under these circumstances an ATI Radeon HD 3870X2 will shine, delivering a significantly higher frame rate than a comparable single GPU graphics accelerator thanks to its superior rendering power.”
This seems a fair and honest explanation from ATI, however we do feel that it is worth pointing out that even if single GPU cards are affected by the same issue, our testing shows that it is never to as dramatic an extent as the X2. As a result, when we compare this product to a single core card this has to go down as a failing.
Missed our other reviews?
Copyright ©2002-2007 DriverHeaven.net, All rights reserved. TechHeaven design based on BlackTeal adapted by craig5320 & Zardon. Review coding Zardon.
DH logo & Artwork may NOT be used without express permission of the Administration Team, protected under Copyright Law.
DriverHeaven.net Reviews Copyright ©2002 - 2006 DriverHeaven.net | 计算机 |
2014-15/4496/en_head.json.gz/2253 | / root / Linux Books / Red Hat/Fedora
Andrew Hudson, Paul Hudson
Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security.
Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, Broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Linux Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux.
Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry-standard in the PHP community. manufacturer website
SAMS Publishing | 计算机 |
2014-15/4496/en_head.json.gz/2382 | PF Ornamental Treasures™
by Parachute
Ornamental Treasures is a very special series of font families never before released.
This is a form of artistic expression which was developed between the 9th and 15th century at the centers of the Byzantine civilization. The majority of Byzantine art is represented with wall paintings, mosaics, iconography and illuminated manuscripts. More…
Unfortunately, these historic treasures were kept from the public eye for centuries. Following an extensive research, Parachute unearthed these treasures and created an unprecedented series of ornaments with great interest to designers, architects, scholars, artists, researchers and students.
An attempt was made to create a series which works equally well for historic as well as contemporary applications. Furthermore, each package includes up to 9 fonts (layers) per glyph, which enable the user to create with an unlimited combination of colors in any program, without converting glyphs to outlines. Additionally, several glyphs connect with each other to form banners, frames and exquisite backgrounds.
It comes with a comprehensive guide which explains how to handle the ornamental fonts in seconds.
Publisher: Parachute
MyFonts debut: Apr 14, 2009
PF Ornm Treasures 1 Regular
PF Ornm Treasures 1 Layer 1
Ornamental Treasures is a trademark of Parachute and may be registered in certain jurisdictions.
2014-15/4496/en_head.json.gz/3337 | Arbiter Appointments
Created: 17 Nov 2010
The following individuals have been appointed by the RIPE NCC Executive Board to the Arbiters Panel:
Pierre Baume Conor Dufficy Ronald Duncan Alireza Ghafarallahi James Hickman Ondřej Surý Nick Williams Before they can take their place on the Arbiters Panel, these candidates must have their appointments approved by the RIPE NCC membership at the RIPE NCC General Meeting, 17 November 2010, at the Westin Excelsior Hotel in Rome.
More information on the Arbitration Process.
Pierre Baume
I was born in France and studied there, eventually securing an engineer diploma from ISEP (a Paris-based engineer school focused on electronics and related technologies). Among others, I worked with IBM (which led to my move to the Netherlands in 1995), EUnet, Qwest, kpnQwest (these last three were the same employment, the company changed ownership and name), the RIPE NCC (as a hostmaster), AMS-IX and IMC bv (where I'm still employed at the moment). I've built experience with both beginning and experienced LIRs and with an RIR. I've attended RIPE Meetings on and off since 1998, but less, recently, as few have taken place in Amsterdam and my present company uses mostly (private) IP resources handed to us by our partners.
Conor Dufficy
Conor was born on March 3rd 1965 in Dublin. He is a founder and director of Serverspace Limited (www.serverspace.co.uk), a provider of IT Services to businesses in the area of colocation, managed servers, cloud computing and related services. He is a practising barrister, specialising in commercial law and personal injury. He is also a qualified and practising mediator, with very broad experience in commercial mediation. Before becoming a barrister he had a seventeen-year career in Capital Markets at the end of which he was European head of Foreign Exchange for Bank of America. He sits on a number of company boards.
Ronald Duncan
Ronald Duncan co-founded @UK PLC in 1999. Prior to @UK PLC, he spent ten years running his own computer software consultancy, servicing projects using a range of languages and platforms. Ronald studied Physics at Cambridge and is a Chartered Physicist and Member of the Institution of Analysts and Programmers. He is a former UK downhill ski champion who competed internationally for ten years, including at two Olympics. Currently, he is developing BASDA Green XML and working on interoperability for the Hub Alliance, and is still Technical Director of @UK PLC.
James Hickman
James Hickman (1972, UK) studied Computing and Informatics at the University of Plymouth before working in various I.T. roles within the Rail industry. These including running transformational projects to deliver multi-million pound HR and Finance systems, though he most fondly remembers trying to get hundreds of shiny new ticket machines and passenger information systems to work within a few feet of 25kV overhead power lines. In 2000, he joined a mobile internet start-up called OmniSky running the infrastructure for the European offices and was part of the team designing and building PoPs to provide the service over GPRS. In 2001, James joined PSINet as a Senior Pre-Sales Consultant. He took over running the team shortly afterwards and built it into a respected part of the business. Under his leadership, many technical functions were drawn into the team. This included responsibility for support and administration of RIPE resource applications as well as providing advice to customers and partners wishing to become LIRs in their own right. In 2004 Telstra acquired PSINet UK and James was asked to run the combined Pre-Sales team. In 2008, he was invited to run a newly-formed complex customer design function and was given responsibility for the architecture of the entire UK network backbone during a challenging time of consolidation and integration of a new datacentre. Nowadays as Head of Technical Consulting he concentrates on professional services consultancy around managed services for large Enterprise customers. James is accredited by the British Computer Society (MBCS CITP) and the Engineering Council (CEng).
Ondřej Surý
Ondřej Surý (1977, Ostrava, CZ) studied Computer Science at the Charles University. While on studies he joined hosting company Globe Internet in 1998 (part of ACTIVE 24 group now). After working on several projects he became CTO of Globe Internet. In 2005 he joined CZ.NIC, the .CZ registry, as a Chief Technology Officer to build a new registry system, which was deployed in 2007. In 2008 he was involved in DNSSEC launch in .CZ. In 2009 he became a head of the CZ.NIC Labs, newly founded R&D department of CZ.NIC. He is primarily involved in DNS and DNSSEC deployment, acting as an active participant of the DNS Working Group at RIPE, and dnsop and dnsext IETF groups. He founded a KIDNS working group at IETF pushing the idea of certificates in DNS forward. He is also one of the seven Recovery Key Share Holders who participated at the first ICANN KSK Ceremony. In 2004 he founded the Czech Ubuntu Local Community Team which turned from a volunteer organisation to a official NGO after some years.
2014-15/4496/en_head.json.gz/3364 | Two big announcements.
February 5, 2014.
This year marks 37signals' 15th year in business. And today is Basecamp's 10th birthday. We have a lot to celebrate, and two exciting announcements to share. But first, let's set the scene with some history.
37signals was founded back in 1999 as a web design firm. With the release of Basecamp in 2004, we began our journey to become a software company. Once Basecamp revenue surpassed web design revenue in 2005, the transition was complete.
Since then we've launched Ta-da List (2005), Writeboard (2005), Backpack (2005), Campfire (2006), The Job Board & Gig Board (2006), Highrise (2007), Sortfolio (2009), the all new Basecamp (2012), Know Your Company (2013), and We Work Remotely (2013).
We also created and open sourced Ruby on Rails (2004), wrote a few books (Defensive Design for the Web (2004), Getting Real (2006), REWORK (2010), and REMOTE (2013)), and published thousands of blog posts on Signal vs. Noise.
Fifteen years into it, we're proud of the work we've done and the business we've built. And business has never been better.
However, because we've released so many products over the years, we've become a bit scattered, a bit diluted. Nobody does their best work when they're spread too thin. We certainly don't. We do our best work when we're all focused on one thing.
Further, we've always enjoyed being a small company. Today we're bigger than we've ever been, but we're still relatively small at 43 people. So while we could hire a bunch more people to do a bunch more things, that kind of rapid expansion is at odds with our culture. We want to maintain the kind of company where everyone knows everyone's name. That's one of the reasons why so many of the people who work at 37signals stay at 37signals.
So with that in mind, last August we conducted a thorough review of our products, our customer base, our passions, and our visions of the company for the next 20 years. When we put it all on the table, everything lined up and pointed at one clear conclusion. We all got excited. We knew it was right.
So today, February 5, 2014, exactly ten years to the day since we launched Basecamp, we have a couple of big announcements to make.
Here's the first: Moving forward, we will be a one product company. That product will be Basecamp. Our entire company will rally around Basecamp. With our whole team - from design to development to customer service to ops - focused on one thing, Basecamp will continue to get better in every direction and on every dimension.
Basecamp is our best idea and our biggest winner. Over 15 million people have Basecamp accounts, and just last week another 6,622 companies signed up for new Basecamp accounts. Ten years into it, Basecamp keeps accelerating. We've had other big hits, but nothing quite like Basecamp.
When we meet people, and they ask us what we do, we say we work for 37signals. If they aren't in the tech world, they'll squint and say "what's that?". When we say "we're the folks who make Basecamp", their eyes light up and open wide. "Basecamp! Oh I love Basecamp! My wife uses Basecamp too! Even our church uses Basecamp!" We hear this kind of response over and over. People just love their Basecamp.
So that got us thinking... While 37signals is well known in tech circles, far more people around the world actually know us for Basecamp. And since we're going to be completely focused on Basecamp moving forward, why don't we just go all in on "Basecamp".
So here's the second big announcement: We're changing our name. 37signals is now Basecamp. "37signals" goes into the history books. From now on, we are Basecamp. Basecamp the company, Basecamp the product. We're one and the same.
With this change, we renew our long-term commitment to all things Basecamp. Basecamp on the web, Basecamp on iOS, Basecamp on Android, Basecamp via email, and Basecamp wherever else it makes sense. Each one of us will be dedicated to improving Basecamp, extending Basecamp's reach, expanding Basecamp's capabilities, and making sure our Basecamp customers are treated like royalty.
And we'll never forget what made Basecamp so popular in the first place: It just works. It's simple, it's easy to use, it's easy to understand, it's clear, it's reliable, and it's dependable. We'll continue to make it more of all of those things.
The last fifteen years have been a blast, but with every future moment focused on Basecamp, the next fifteen are going to be even better. We're fired up! We've already got loads of new Basecamp stuff cooking.
Please visit our brand new site at http://basecamp.com and have a look around. If you're not a Basecamp customer yet, give us a try. We'd love to have you. If you already are, we thank you for your business.
Standing by to serve you for decades to come.
Thank you from all of us at Basecamp.
-Jason FriedFounder & CEO Basecamp
Questions & answers.
Q: If you're going all in on Basecamp, what happens to Campfire and Highrise?
In the short term, everything stays the same. Business as usual. No interruption in service, no changes that affect our customers.
In the long term, one of three things:
SCENARIO 1: We'll spin them off into separate companies where we'll retain partial ownership, but another fully-dedicated team will run the products and own the majority of the company. This would be our ideal situation as it would ensure continuity and no interruption for our customers, but we'd have to find the right entrepreneur/team with the right experience and enough financing to make it work.
SCENARIO 2: We'll sell the products outright (either separately or together). The key for us in this scenario is that the products, and our customers, are well looked after. We will not sell either of these products to a company that is planning to shut the products down. And since no one from our team goes with the sale, this is not an acqui-hire situation. We're looking to sell to a company that wants to add well-respected, well-established, profitable, growing products to their portfolio.
SCENARIO 3: If we can't find the right partner or buyer, we are committed to continuing to run the products for our existing customers forever. We won't sell the products to new customers, but existing customers can continue to use the products just as they always have. The products will shift into maintenance mode which means there will be no new development, only security updates or minor bug fixes. We did this successfully in 2012 with Ta-da List, Writeboard, and Backpack, so we know how to make it work.
If you're a company or team interested in exploring scenario 1 (spin-off) or scenario 2 (outright purchase), please get in touch. Based on current revenues, current growth rates, and a conservative multiple, Campfire will sell in the single digit millions, and Highrise will sell in the tens of millions, so serious inquires only please. Disclosure: We're currently in early discussions with a few interested parties.
Q: What about Basecamp Classic?
We are fully committed to running Basecamp Classic forever. As long as we're around, Basecamp Classic will be around. A large chunk of our customer base loves Classic and we'll make sure they'll always have their Classic. The same rigorous uptime standards of Basecamp also apply to Basecamp Classic.
Background: Basecamp Classic was the original version of Basecamp we launched in 2004. Then in 2012 we released the all new version of Basecamp. Customers had the choice to stay on the old version, transition to the new version, or use both.
Q: What about other stuff like your Signal vs. Noise Blog, your books, etc?
We will continue publishing to Signal vs. Noise, writing books, and sharing as we always have. We will also be launching another online publication this year called THE DISTANCE. More details on this later.
Q: Will you be downsizing your company as part of downsizing your product lines?
No — our entire team stays intact. There's more than enough work to go around just on Basecamp alone. In fact, we're probably short a few designers now that we're fully committing to supporting Basecamp on multiple platforms. Plus, we're working on a variety of other tools and ideas that'll expand Basecamp in all new ways. We've also begun R&Ding entirely new, future versions of Basecamp, so there's a lot of really interesting work ahead for us. We'll be posting a few open positions in the coming months.
Q: This is a really unusual strategy. Can I talk to you about it?
From the very beginning we've done things differently. From switching from being a client services company to a product company, from being one of the early pioneers of the Software as a Service model, to open-sourcing our Ruby on Rails infrastructure, to signing up thousands of customers without a single salesperson, to being bootstrapped and funded by customer revenues, to being based in Chicago instead of the valley, to having a remote workforce spread out across nearly 30 cities across the world, to eschewing over 100 VC and private equity investment offers over the years (note: we did sell a small piece of the company to Jeff Bezos in 2006), to keeping our company as small as possible when "go big or go home" is all the rage, to writing a New York Times Bestselling business book (REWORK, 2010), etc. We're used to these unusual moves.
If you're a journalist who's interested in a business story unlike any other, please get in touch with our CEO (Jason Fried) at [email protected].
Links of interest.
Our new site: http://basecamp.com
About us: http://basecamp.com/about
Meet our team: http://basecamp.com/team
The inside story of why we built Basecamp: http://basecamp.com/story
A few of the things people have made with Basecamp's help: http://basecamp.com/made
The original blog post from February 5, 2004 that launched Basecamp: Original launch post
The original 37signals site from 1999: The 37signals Manifesto | 计算机 |
2014-15/4496/en_head.json.gz/4404 | Linux and Windows: Clustering champ unclear
Special report:Can Windows and Linux peacefully co-exist? Fourth in a series.
Unless you've been living under a rock, you probably know that for several years, Microsoft has been trying to convince customers that the Windows platform is far superior to Linux. Although I have always personally liked Microsoft products, I began to wonder how Windows stacks up against Linux when it comes to high performance computing, specifically clustering.
Who uses Windows clusters?
As I started researching corporate cluster usage on the Internet, I found that Windows cluster usage is indeed widespread. Even so, it seems that there are more companies using Linux clusters at this point. I couldn't find any statistics on the percentage of market saturation for either platform's clustering solution, but informal research revealed that at least two companies use Linux clustering products for every one that uses Microsoft.
Why choose one platform over the other?
There are lots of performance comparisons available for download on the Internet, but nearly all have contradictory results. I believe that these results vary depending on the hardware being used and the tasks being performed. I don't think that either platform is clearly superior to the other. Your organization's cluster platform selection should be based on your environment and on the task at hand.
Microsoft's clustering architectures
Windows Server 2003 actually supports two different types of clustering. One is called network load balancing, which enables up to 32 clustered servers to run a high-demand application to prevent a single server from being bogged down. If one of the servers in the cluster fails, then the other servers instantly pick up the slack.
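The way all those nodes can share one workload without a central dispatcher is that each node applies the same deterministic rule to every incoming client, and only the "winning" node responds; the Python sketch below illustrates the idea (the real Windows NLB filtering algorithm differs, and the host names are invented).

```python
import hashlib

def pick_node(client_ip, live_nodes):
    """Every node runs the same deterministic rule, so exactly one of them
    answers a given client -- no central dispatcher is required."""
    bucket = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return live_nodes[bucket % len(live_nodes)]

nodes = ["web01", "web02", "web03", "web04"]
print(pick_node("203.0.113.7", nodes))

# If web02 drops out of the cluster, the survivors recompute membership and the
# same rule redistributes its clients -- the "pick up the slack" behaviour.
nodes.remove("web02")
print(pick_node("203.0.113.7", nodes))
```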
Network load balancing has been most often used with Web servers, which tend to use fairly static code and require little data replication. If a clustered
Web site needs more performance than what the cluster is currently providing, additional servers can be instantaneously added to the cluster. Once the cluster reaches the 32-server limit, you can further expand the cluster by creating a second cluster and then using round-robin DNS to divide traffic between the two clusters.
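Round-robin DNS is nothing more exotic than publishing several A records for one name and rotating the order in which they are returned; the following sketch simulates that rotation for two cluster virtual IPs (the addresses are placeholders).

```python
from itertools import cycle

# Two NLB clusters, each behind its own virtual IP. The zone for www.example.com
# simply lists both addresses, and the DNS server rotates the order it hands out.
a_records = cycle(["192.0.2.10",    # cluster A (up to 32 nodes)
                   "192.0.2.20"])   # cluster B (the overflow cluster)

for request in range(4):
    print("client request %d sent to %s" % (request, next(a_records)))
```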
The other type of clustering that Windows Server 2003 supports by default is often referred to simply as clustering. The idea behind this type of clustering is that two or more servers share a common hard disk. All of the servers in the cluster run the same application and reference the same data on the same disk. Only one of the servers actually does the work. The other servers constantly check to make sure that the primary server is online. If the primary server does not respond, then the secondary server takes over.
This type of clustering doesn't really give you any kind of performance gain. Instead, it gives you fault tolerance and enables you to perform rolling upgrades. (A server can be taken offline for upgrade without disrupting users.) In Windows 2000 Advanced Server, only two servers could be clustered together in this way (four servers in Windows 2000 Datacenter Edition). In Windows Server 2003, though, the limit has been raised to eight servers. Microsoft offers this as a solution to long-distance fault tolerance when used in conjunction with the iSCSI protocol (SCSI over IP).
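At its heart this style of clustering is a heartbeat loop: the standby node keeps checking that the active node is alive and seizes the shared disk, cluster IP address and services when the answer stops coming. A stripped-down sketch of that loop follows -- the health URL, timings and thresholds here are invented for illustration, not Microsoft's implementation.

```python
import time
import urllib.request

ACTIVE_HEALTH_URL = "http://sql-node1:8080/health"   # hypothetical probe on the primary
POLL_SECONDS = 5
MISSES_BEFORE_FAILOVER = 3

def primary_alive():
    try:
        with urllib.request.urlopen(ACTIVE_HEALTH_URL, timeout=2) as r:
            return r.status == 200
    except OSError:
        return False

def standby_loop():
    misses = 0
    while True:
        if primary_alive():
            misses = 0
        else:
            misses += 1
            if misses >= MISSES_BEFORE_FAILOVER:
                # In a real cluster this step mounts the shared disk, takes over
                # the cluster IP address and starts the application services.
                print("primary unresponsive -- taking ownership of the resource group")
                break
        time.sleep(POLL_SECONDS)
```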
So, what would make someone choose to use a Microsoft cluster? I recently spoke with several friends (who do not work for Microsoft) and asked them why they chose Microsoft clusters. I got a variety of answers. One person told me that his company had subscribed to one of Microsoft's volume licensing agreements, and going with Microsoft just seemed like the thing to do, since everything else running in the organization was based on Microsoft.
Another person told me that his corporate Web site was running on Microsoft's Internet Information Server (IIS). Since the site was coded in Active Server Pages, IIS was the only platform that could natively run the site. As the site grew, the company had no choice but to create a Microsoft-based cluster.
A third person I spoke with explained that his company had originally considered implementing a Linux-based cluster for a particular database application. The company preferred to use Microsoft products because of the level of available support, but the Microsoft platform required a separate Windows Server 2003 license for each cluster node and thousands of dollars worth of special hardware. The company was willing to spend the bucks, but its IT policy stated that a duplicate machine must be purchased to match any server put in a production environment. The duplicate machine is used in the company's lab for deployment testing. While the company was willing to spend money on a Microsoft server cluster, a duplicate system for the lab was beyond the budget. The problem was eventually solved by bending the rules a little and using VMware to create a test cluster environment on a series of virtual servers.
Linux and Beowulf
As you can see in the section above, price was the major objection to deploying a Microsoft-based cluster. Admittedly, it's hard to justify spending that kind of money for a Microsoft cluster when you can get arguably better performance on a Linux cluster for a fraction of the cost.
Price isn't the only argument for choosing Linux, though. Certain Linux clustering implementations can scale way beyond anything that Microsoft offers. As I explained earlier, Microsoft imposes an eight-server limit for a cluster and a 32-server limit for network load balancing. In comparison, two years ago, Charles Schwab was using a cluster of 50 Linux servers to perform a financial analysis. According to a company spokesman, it was able to achieve performance similar to that of a supercomputer, but for far less money.
At the moment, the prevalent type of Linux cluster is called Beowulf. Similar to Microsoft's network load balancing solution, Beowulf relies on parallel processing. Beowulf does have its differences, though.
As you will recall, Microsoft's network load balancing allows the cluster to run multiple instances of a common application. In contrast, Beowulf works best when each node in the cluster is running completely independent code rather than parallel code. In Microsoft's network load balancing, the main goal is to increase scalability so that the cluster can service more people than a single server ever could. A Beowulf cluster is better suited to mathematically intensive operations in which the processing power of multiple servers can be used to arrive at a solution faster than a single server could.
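A common way to program that kind of mathematically intensive job on a Beowulf-style cluster is with MPI. The following is a minimal sketch (assuming an MPI installation and the mpi4py package, launched with something like `mpirun -np 4 python parallel_sum.py`): each node sums its own slice of a range independently, and one node combines the partial results.

```python
# Each rank works on its own piece of the problem; rank 0 gathers the answer.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10_000_000
chunk = N // size
start = rank * chunk
end = N if rank == size - 1 else start + chunk

partial = sum(range(start, end))                 # independent work per node
total = comm.reduce(partial, op=MPI.SUM, root=0) # combine on rank 0

if rank == 0:
    print(f"sum of 0..{N-1} computed across {size} nodes: {total}")
```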
As you can see, the superiority of either platform cluster is debatable. What is clear is that you will get the best results if you choose the cluster platform best suited to the task at hand.
Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server and IIS. He has served as the CIO for a nationwide chain of hospitals and was once in charge of IT security for Fort Knox. As a freelance technical writer he has written for Microsoft, CNET, ZDNet, TechTarget, MSD2D, Relevant Technologies and other technology companies.
CSAIL News: Freeman receives Test of Time Award
The Computer Science and Artificial Intelligence Lab (CSAIL) announced on July 25, 2013 that Professor William Freeman has been honored with the Test of Time Award for his paper "Orientation Histograms for Hand Gesture Recognition," co-written by Michal Roth in 1995. The award was presented at the 2013 IEEE Automatic Face and Gesture Recognition Conference in Shanghai, China.

CSAIL News: Balakrishnan, Winstein develop Remy, an enhanced congestion control system
Researchers from CSAIL and the Center for Wireless Networks and Mobile Computing have developed a TCP congestion-control system called Remy, which they will present at the annual conference of the Association for Computing Machinery's Special Interest Group on Data Communications. Hari Balakrishnan, the Fujitsu Professor of Electrical Engineering and Computer Science, and EECS graduate student Keith Winstein are the authors of the work titled "TCP ex Machina: Computer-Generated Congestion Control".

CSAIL News: Indyk is named Simons Investigator
The Simons Foundation has announced that Professor Piotr Indyk has been selected as a Simons Investigator. Indyk is one of 13 mathematicians, theoretical physicists and computer scientists named as 2013 Simons Investigators and one of two professors at MIT selected for the honor.

CSAIL News: Using ordinary language to specify programming code
EECS Professors Regina Barzilay and Martin Rinard (and their respective graduate students Nate Kushman and Tao Lei) have demonstrated that ordinary language can be used (in specific cases) to aid in generating code for computer programs.

Game Theory used by EECS faculty - Technology Review feature
Faculty members in the Electrical Engineering and Computer Science Department at MIT are converging on a wide range of research issues through game theory, which used to be a staple of economics research in the 1950s. EECS faculty members Asuman Ozdaglar, Costis Daskalakis, Munther Dahleh, and Silvio Micali discuss their approaches in this Technology Review feature.

Thesis Defense: A 2D + 3D Rich Data Approach to Scene Understanding

CSAIL News: Devadas teams to design hardware that disguises cloud servers' memory-access patterns
Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, and members of the Computational Structures Group in the Computer Science and Artificial Intelligence Lab (CSAIL) have developed a new system that not only disguises a server's memory-access patterns, but also prevents attacks that rely on how long computations take.

CSAIL News: Algorithm to see (measure) human pulse from video
In a paper they are presenting this summer at the Institute of Electrical and Electronics Engineers' Computer Vision and Pattern Recognition conference, EECS graduate student Guha Balakrishnan and his advisors, both faculty members in the MIT EECS Department, John Guttag and Fredo Durand, describe a new algorithm they developed to measure the heart rates of people in video. The algorithm allows for analyzing the digital data for small imperceptible movements that are caused by the rush of blood from the heart's contractions. Data could ultimately aid in predicting heart disease.

Tsitsiklis and Xu show versatility's promise
Kuang Xu, a graduate student in the Department of Electrical Engineering and Computer Science, and his advisor, John Tsitsiklis, the Clarence J. Lebel Professor of Electrical Engineering, have demonstrated in a series of recent papers that a little versatility in operations management, cloud computing and even health-care delivery and manufacturing could yield an exponential reduction in delays.

Indyk, Katabi selected for top ACM Awards
The Association for Computing Machinery (ACM) has announced that it is honoring Professor Piotr Indyk and Professor Dina Katabi for their innovations in computing technology. Indyk has been named one of the recipients of the Paris Kanellakis Theory and Practice Award, which honors specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing. Katabi has been honored as one of the recipients of the Grace Murray Hopper Award, which recognizes the outstanding young computer professionals of the year.
IANA Report on the Redelegation of the .MA Top-Level Domain
The Internet Assigned Numbers Authority (IANA), as part of the administrative functions associated with management of the Domain Name System root, is responsible for receiving requests for delegation and redelegation of top-level domains, investigating the circumstances relevant to those requests, and reporting on the requests. This report gives the findings and conclusions of the IANA on its investigation of a request for redelegation of .MA, the country-code top-level domain (ccTLD) for Morocco.
Morocco is a North African country with a population of over 33 million people. It borders Algeria and Western Sahara, and has coastlines on both the Atlantic Ocean and Mediterranean Sea. The country is assigned the ISO 3166-1 alpha-2 code of “MA”.
On 3 June 1992, IANA approved a request for the delegation of the .MA ccTLD. The assigned administrative and technical contact was Amine Mounier Alaoui of Ecole Mohammadia d'Ingenieurs. The Ecole Mohammadia d'Ingenieurs (EMI) is listed as the Supporting Organisation.
In 1995, the technical management of .MA was taken over by the national telephone provider, presently known as Itissalat Al Maghrib (IAM, literally Morocco Telecom). They continue this role today, whilst EMI plays no active role in the day-to-day management of .MA.
In the intervening period, IANA has conducted a number of routine changes to the nameserver data for .MA at the administrative/technical contact's request, most recently in August 2005. In February 2005, Agence Nationale de Réglementation des Télécommunications (ANRT), the national telecommunications regulator, launched an online consultation on the management of the .MA domain. The questions posed in the consultation covered the present management of .MA, the advantages of the current operator, the difficulties experienced with .MA, and both the short-term and long-term visions for .MA management.
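As an illustrative aside, the nameserver data that such delegation changes modify is publicly visible in the DNS. A minimal sketch using the dnspython package (an assumption, not a tool referenced in this report) queries the current NS records for the .ma zone; the output reflects today's delegation rather than the 2005–2006 arrangements discussed here.

```python
# Query the public DNS for the NS records of the .ma zone.
# Requires the dnspython package.
import dns.resolver

def cctld_nameservers(zone="ma."):
    answer = dns.resolver.resolve(zone, "NS")
    return sorted(rr.target.to_text() for rr in answer)

if __name__ == "__main__":
    for ns in cctld_nameservers("ma."):
        print(ns)
```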
ANRT summarised the main criticisms of the present .MA administration, as expressed in the submissions to the consultation, as: a lack of transparent rules and procedures; an absence of WHOIS service to obtain registrant details; the lack of reseller channels; the inability to appeal the registration of a domain; an absence of procedure to modify an existing registration; and a lack of neutrality, given that the dominant stakeholder in the telecom market operates the domain registry.
In November 2005, ANRT organised an Internet conference that received over 500 participants. These participants represented telecommunications providers, ISPs, the government, the private sector, and civil society. The discussion of .MA management was said to be a key theme for the meeting, with a number of speakers supporting action in line with the result of the online consultation.
On 12 May 2006, a redelegation request was lodged with IANA. It seeks the delegation of .MA to be transferred to ANRT. It is proposed that the Director General of ANRT be listed as both the administrative and technical contact for the domain. The present Director General of ANRT is Mohamed Benchaaboun.
IANA received a letter from the Moroccan Minister of Economic and General Affairs, Rachid Talbi El Alami, approving the redelegation of .MA to ANRT. The letter notes "some weakness in the management of the Moroccan top level domain" and that the government "believe that [ANRT] is an appropriate entity for the redelegation of the management and administration of [.MA]".
Amine Mounir Alaoui, the current Administrative and Technical Contact, assented to the redelegation to ANRT, commenting that “ANRT recognises that the Internet naming system is a public resource in the sense that its functions must be administered in the public or common interest”.
In response to the initial request and supporting documents, IANA enquired on the transition plan for moving operations from the current operator to the new operator.
IANA also requested further information on the 2005 community consultation project, to which ANRT responded by providing specific details on the consultation and the responses.
IANA’s analysis of the community sentiment to the operation .MA noted that there was a weight of opinion that sought to have its operation vested in a not-for-profit o | 计算机 |
Re: Mercator Project
From: Brian Buhrow (buhrow@moria)
Date: Mon Apr 12 1993 - 23:07:32 PDT
Well, Mike, I do remember the lady and, in fact, saw her here in
California last summer. She was working as a summer researcher at Sun
MicroSystems and was going to give me a demonstration of their X-windows
project. Unfortunately, due to transportation troubles I was unable to get
there in time to actually see a demonstration. I did, however, get to talk
with her and address some of the issues of accessability and usability with
her. First, the three-dimensional sound stage had to go. Currently, (as
of September 1992), their Talking X-windows manager ran on the DECTalk
through the serial port that all Sun workstations have. The driver was
written in C and, according to her, could be easily modified to drive any
speech synthesizer. They were working on plans to put in an internal
Sun-Produced speech algorithm which would be used through the internal
audio port, but they were having a problem with the speech response time.
Beth expressed the feeling that the Sun speech wasn't going to be a reality
for a good bit of time.
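(As an illustrative aside: the driver described above was written in C, but the basic idea of talking to a serial-attached synthesizer such as a DECTalk is simple enough to sketch. The following minimal Python example uses the pyserial package; the device path and baud rate are assumptions, not values from the Mercator project.)

```python
# Send plain text to a serial-attached speech synthesizer for it to speak.
import serial  # pyserial

def say(text, port="/dev/ttyS0", baud=9600):
    with serial.Serial(port, baud, timeout=1) as synth:
        synth.write(text.encode("ascii", errors="replace"))
        synth.write(b"\r\n")   # many serial synthesizers speak on end-of-line

if __name__ == "__main__":
    say("Window manager started.")
```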
The window manager, she said, had to be re-written from the ground up,
and so that was a problem they had gotten around, but not one that they
had totally solved. One of the problems with their new window manager was
that if you opened a window, then that window spawned another window, you
couldn't go back up the window tree to get to any windows you may have had
opened on the top level. I said that this was unacceptable, considering
the fact that I can run 9 windows at a time on any Unix box that I'm
attached to and get to any one of those windows in less than three
key-strokes. She also said that they had opted to forgo any mouse
control and that mouse control was simulated with the arrow keys on the Sun
keyboard. Of course, sighted colleagues could use the mouse, and so could
we, but at a great loss of efficiency.
In terms of what they were going to support, she said that they were
going to support the Athena X window widget set, and, with the grace and
funding of Sun, their OLIT widget set. Between these two widget sets,
most of today's text-based X-windows applications should be relatively
accessible. She had hoped to let me play with xmh-mail, but as I say, I
didn't have enough time.
In terms of availability, she said that they hoped to have something
working by December. It would be primitive, but it would be something we
could hack around with on the net, considering that it was going to be
produced by Georgia Tech. I expressed an interest in trying it out and
seeing what sorts of tricks I could make it do in our Sun environment. We
use the Athena widget set on a daily basis and have access to all the
necessary compilers. Up till now, I have had no contact with her again. I've looked for
files in anonymous ftp sites with references to talking X-windows
applications, but nothing has come to light. I'm glad to see someone else
was pestering as well. Perhaps we can stage a strategically planned
approach whereby one of us will eventually get an answer. And, with any
luck, a piece of software to compile and play with to our heart's content.
P.S. I've kept in touch with Gregory Pike of IBM, who came to our New
Orleans convention to help Jim Thatcher show Screen Reader for OS/2 for the
first time and, who, as you may remember, was working on the Screen Reader
for X-Windows under AIX (IBM's version of Unix). He tells me that they
have tty access and access to the basic X screen, but that they have a good
bit of distance to go. It hasn't been getting as much attention as the
OS/2 project has, because the OS/2 thing is an actual product that needs
updating and the "need" for supporting the X-windows Unix project hasn't
come to fruition yet. With any luck, it will soon. Still, I'm rooting for
Beth because I don't have an RS-6000 and because I think there will be some
sources available from that. That means, of course, that we won't have to
wait a year to make a fix after the first bugs surface.
Bringing Rationality to Information Management
Shahzad Bashir
Shahzad Bashir most recently served as the Executive Vice President of Huron Legal; he was the Founder of Huron Legal and a co-Founder of Huron Consulting Group. While at Huron (from its inception in May 2002 to May 2013), Mr. Bashir provided guidance to law firms and law departments on how to enhance their business performance through the implementation of process improvements and technology solutions, earning a spot on Consulting Magazine's 2007 list of Top 25 Consultants. Prior to founding Huron, Mr. Bashir was a Consulting Partner at Arthur Andersen (August 1982 to May 2002). Mr. Bashir has a BA in Accounting and is a Chartered Accountant from the English Institute. Mr. Bashir has also been a member of the Board of Directors of The General Counsel Forum since its inception.
Michael McCreary
President & Chief Operating Officer
Michael McCreary has over 20 years of experience working at the intersection of business and technology in highly regulated and litigious environments. Before joining Rational Enterprise, Michael spent 12 years at Pfizer in various management positions within R&D, HR, and Legal. Most recently, he was CIO of Pfizer's Legal and Public Affairs divisions. As a member of Pfizer's IT and Legal Leadership teams, he led the corporate technology strategy for e-Discovery, IP, Security, Privacy, Records Management, and Information Risk Management. Prior to joining Pfizer, Michael was a partner in a software development and consulting firm working for various clients in the manufacturing, energy, and professional sports industries. Michael's team was honored by Law Technology News for the most innovative use of technology by a law department and he is a regular speaker and writer on topics including information life cycle management, retention risk management, auto-categorization, and e-Discovery preparedness. Michael holds dual BAs from Union College.
Mark Gianturco, PhD
Dr. Mark Gianturco is a nationally recognized technologist with over 25 years of industry experience in multiple disciplines, including technology management, software development, design, systems architecture, and information technology. In addition to successful tenures at two rapid-growth healthcare technology firms, Mark has served as the Chief Technology Officer for two well-known e-Discovery software and solutions providers. At both of those firms, he was responsible for creating and building a corporate technology subsidiary and leading the architecture, design, and implementation of multiple software products within those organizations. Mark is a regular speaker on emerging technologies such as cloud computing, and has served as an adjunct faculty member at George Mason University, teaching master level software engineering. In addition to speaking engagements and being published on several software engineering topics, he is qualified as a court appointed expert in software plagiarism, computer forensics, electronic discovery issues, and software engineering. Mark holds a BS in Computer Science from the College of William and Mary, as well as a Masters in Information Systems and a Ph.D. in Information Technology from George Mason University.
Konstantin Mertsalov, PhD
Principal Research Scientist & Director of European Software Development
Dr. Konstantin Mertsalov brings extensive experience in search, social network, and document classification technologies. Before joining Rational Enterprise, Konstantin worked at a leading e-Discovery firm and developed a scalable search system used to search hundreds of millions of documents with a rich set of metadata. His current focus is on the development of the document classification system based on Support Vector Machines, social network analysis tools, as well as algorithms for corpus structure analysis (e.g., near duplicate detection and email thread analysis). In 2009, Konstantin received a Ph.D. from Rensselaer Polytechnic Institute. His thesis work involved the study of large dynamic social networks. Konstantin's other academic interests include machine learning, information diffusion in social networks, and semantic web search. He has co-authored numerous publications on social networks analysis, search, and machine learning, which have appeared at international conferences and journals.
Michael McCutcheon
Chief Solution Officer
Michael McCutcheon is responsible for the strategy and design of the Rational Enterprise product suite. Michael brings a deep understanding of technology, specifically applied in the legal domain. Prior to joining Rational Enterprise, Michael was Chief Technology Officer of a litigation services and software company, where he pioneered a scalable web-based litigation repository. In 1999, Michael founded and held the post of Chief Technology Officer at ProductivityNet, a company focused on developing cutting edge software to control servers and network devices via web and wireless interfaces. Mike attended Rensselaer Polytechnic Institute, majoring in Computer Science.
John Fobare
Vice President of Professional Services
Mr. Fobare is an accomplished leader in the information management space, with experience directing strategy, budget, delivery, governance, project management, and program management. He has a deep understanding of the healthcare payer industry, including the drivers of cost and quality, claims adjudication, enrollment, sales, underwriting, vendor management, enterprise reporting, and systems integration and management. Mr. Fobare holds a B.S. in Mechanical Engineering from Rensselaer Polytechnic Institute and an MBA from the University at Albany.
Charlotte Kelly, Esq.
Vice President of e-Discovery Group
Charlotte Kelly oversees and advises on the company's review platform development, strategy, and service delivery. Ms. Kelly is a licensed attorney with a proven track record of successfully managing complex e-discovery matters. She maintains a deep understanding of current discovery practices and tools, including emerging technologies and applications. Ms. Kelly has over 20 years of experience in project management, over a decade of which has been dedicated to managing e-discovery for top law firms and Fortune 500 companies. Immediately prior to joining Rational, she worked at Boies, Schiller & Flexner, LLP as a senior manager with a staff of 50 discovery specialists supporting the Firm's e-discovery efforts and reviews.
Facebook's Graph Search and Google+ for Social: Does One Have to Prevail?
Amanda DiSilvestro, February 26, 2013
Facebook and Google have been seen as separate online necessities for quite some time, but it seems as though both companies are beginning to close that gap. Facebook is becoming more search oriented with the announcement of the Facebook Graph (currently in Beta testing), and Google is becoming more socially oriented as Google+ begins to have more and more of an influence on individual search results.
Although Facebook is coming from the very “social” end of the things and Google is coming from the very “search” end of things, it is clear that both are coming closer and closer together in what looks like one common goal—creating something that dominates both search AND social.
This leads many to wonder: Is this going to be a matter of who gets to the middle first, or will Google and Facebook always remain in their designated realms, unable to truly compete with the other?
The Differences Between Both Facebook and Google's Advances
The biggest difference between the two is the type of searches that will be successful. When it comes to the Facebook Graph search (which you can learn all about in my SEM-Group article here), you can't really search for general things.
If I wanted to know "How to setup a WordPress blog," the Facebook search simply isn't going to have the answer. The Facebook search will still only be good for certain searches: restaurants, both specific and location-based; location advice, such as where to go and what to do; and general companies that someone might "like." Essentially, any search that is centered around friends and the information they provide on Facebook is going to be valid for the new feature.
Below is a video from CNET that explains how the new feature will work:
On that same note, Google isn’t going to be the best when it comes to searching for similarities between your friends. Although Google+ might bring articles your Google+ friends have +1’ed to the top of your Google results page, it’s tough to search specifically for advice from your friends (not to mention Google+ doesn’t have the kind of information Facebook does in terms of your connections and friends).
When I typed “places my friends like to eat” into Google+ this is the page that I got:
As you can see, this isn’t quite as advanced or accurate as some of the search results you would get from Facebook (as shown in the video above). The moral of the story: It’s what you search for that really makes a difference. The two are a long way from closing this gap.
So Who Will Win the Fight for Social AND Search?
So, the answer to the question posed above? Unfortunately, there really isn’t one clear-cut answer. It makes sense that people are going to be stuck on Facebook for social and stuck on Google for search because it is familiar, but Web users have surprised us in the past.
Both Google and Facebook have been able to remain ahead of their rivals in their own realms, but the time to blur these lines has finally come. It seems as though we have an even match here, as far away as they both may be from meeting in the middle.
I personally believe that Facebook Graph search won't be very big (much like their BranchOut attempt) and Google+ will continue to be used for business-social networking purposes. If I had to choose one or the other, my bets go to Google because search gets less repetitive than social, and therefore I think people hold on to their search engines longer and stronger than their social networks.
What do you think about the closing gap between Google and Facebook? Could you see this being a potential fight in the future? Let us know your thoughts in the comments below.
Photo Credit: antiworldnews.wordpress.com
Amanda DiSilvestro gives small businesses and entrepreneurs SEO advice ranging from keyword density to recovering from Panda and Penguin updates. She writes for HigherVisibility, a nationally recognized SEO consulting firm that offers online marketing services to a wide range of companies across the country. Connect with Higher Visibility on Google+ and Twitter to learn more!
7 thoughts on “Facebook’s Graph Search and Google+ for Social: Does One Have to Prevail?” Jeremy says: February 26, 2013 at 5:01 pm Great article Amanda! I think the answer to your question really depends on Google’s ability to position G+ as a real competitor to Facebook. As you mention, right now it’s getting a lot of traction in relation to business use, but if they can expand that out into personal use as well I think that’s when Facebook will have some serious cause for concern. I can’t see that happening with the current G+, but Google have a massive audience so if they found a way to integrate with say their YouTube audience for example, we could see big changes happen very quickly.
CSM eMarketing and Consulting says: February 27, 2013 at 10:36 am Great Article! good to know what’s coming in both search and social. I think you’re correct by saying google+ is better for business connections…facebook really is for trash posts most of the time.. I think that as long as we have choices, we’ll still use facebook for one thing and google(plus) for the other.
Amanda DiSilvestro says: February 27, 2013 at 3:42 pm Thanks for reading! I think you both are right–it really all depends upon what Google+ can do with personal use. It doesn’t seem like Facebook is really trying to compete with Google exactly, so this whole question is really based on Google+. Until then, we’ll probably continue to use them both separately no matter how much the gap may seem to close.
Charles says: March 12, 2013 at 12:55 am This is right on…it drives me crazy when people talk about Facebook creating a search engine to compete with Google. That’s madness, and I don’t think that’s what they want to do at all. As big as Facebook is, just the logistics of what Google does from a data center pov would be prohibitive to Facebook.
Erika Barbosa says: February 28, 2013 at 11:05 am Thanks for this post! I don’t feel either company will truly close the gap effectively. They are both very good at what they do within their own space. Although they can play in one another’s territory, I don’t think either company is capable of beating the other in their own game. I do think there is opportunity for both of them to fight for positioning in the future with their own interpretations to search and social though.
Amanda DiSilvestro says: February 28, 2013 at 11:12 am I think you’re exactly right Erika–both are very good at what they do, and people seem content with using the two different sites for two different things. These two companies may not even be TRYING to close the gap, it just seemed like they were heading in that direction. Thanks for reading!
Lenny M Gomez says: May 20, 2013 at 3:47 pm Just depends – Facebook has the Users and their interests based on what they choose is relevant to follow. Google has the knowledge base for users to find information. The major problem is which one will choose the right information that the end user is looking for based on relevancy. With Facebook now entering the "Local SEO" market through their "Nearby" or whatever it is they are calling it these days. With Google+ Local you can optimize your business through citations. I still think the secret sauce is in ranking content organically through either Social Platform.
Mind vs. Machine
In the race to build computers that can think like humans, the proving ground is the Turing Test—an annual battle between the world's most advanced artificial-intelligence programs and ordinary people. The objective? To find out whether a computer can act "more human" than a person. In his own quest to beat the machines, the author discovers that the march of technology isn't just changing how we live, it's raising new questions about what it means to be human.
By Brian Christian, February 9, 2011
Brighton, England, September 2009. I wake up in a hotel room 5,000 miles from my home in Seattle. After breakfast, I step out into the salty air and walk the coastline of the country that invented my language, though I find I can't understand a good portion of the signs I pass on my way—LET AGREED, one says, prominently, in large print, and it means nothing to me. I pause, and stare dumbly at the sea for a moment, parsing and reparsing the sign. Normally these kinds of linguistic curiosities and cultural gaps intrigue me; today, though, they are mostly a cause for concern. In two hours, I will sit down at a computer and have a series of five-minute instant-message chats with several strangers. At the other end of these chats will be a psychologist, a linguist, a computer scientist, and the host of a popular British technology show. Together they form a judging panel, evaluating my ability to do one of the strangest things I've ever been asked to do. I must convince them that I'm human.
The Turing Test
Each year for the past two decades, the artificial-intelligence community has convened for the field's most anticipated and controversial event—a meeting to confer the Loebner Prize on the winner of a competition called the Turing Test. The test is named for the British mathematician Alan Turing, one of the founders of computer science, who in 1950 attempted to answer one of the field's earliest questions: can machines think? That is, would it ever be possible to construct a computer so sophisticated that it could actually be said to be thinking, to be intelligent, to have a mind? And if indeed there were, someday, such a machine: how would we know?
Instead of debating this question on purely theoretical grounds, Turing proposed an experiment. Several judges each pose questions, via computer terminal, to several pairs of unseen correspondents, one a human "confederate," the other a computer program, and attempt to discern which is which. The dialogue can range from small talk to trivia questions, from celebrity gossip to heavy-duty philosophy—the whole gamut of human conversation. Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would "be able to speak of machines thinking without expecting to be contradicted."
Turing's prediction has not come to pass; however, at the 2008 contest, the top-scoring computer program missed that mark by just a single vote. When I read the news, I realized instantly that the 2009 test in Brighton could be the decisive one. I'd never attended the event, but I felt I had to go—and not just as a spectator, but as part of the human defense. A steely voice had risen up inside me, seemingly out of nowhere: Not on my watch. I determined to become a confederate. The thought of going head-to-head (head-to-motherboard?) against some of the world's top AI programs filled me with a romantic notion that, as a confederate, I would be defending the human race, à la Garry Kasparov's chess match against Deep Blue.
During the competition, each of four judges will type a conversation with one of us for five minutes, then the other, and then will have 10 minutes to reflect and decide which one is the human. Judges will also rank all the contestants—this is used in part as a tiebreaking measure. The computer program receiving the most votes and highest ranking from the judges (regardless of whether it passes the Turing Test by fooling 30 percent of them) is awarded the title of the Most Human Computer. It is this title that the research teams are all gunning for, the one with the cash prize (usually $3,000), the one with which most everyone involved in the contest is principally concerned.
But there is also, intriguingly, another title, one given to the confederate who is most convincing: the Most Human Human award. One of the first winners, in 1994, was the journalist and science-fiction writer Charles Platt. How'd he do it? By "being moody, irritable, and obnoxious," as he explained in Wired magazine—which strikes me as not only hilarious and bleak, but, in some deeper sense, a call to arms: how, in fact, do we be the most human we can be—not only under the constraints of the test, but in life?
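For readers who want to see the scoring rule above spelled out, here is a small sketch of it: most judge votes wins the Most Human Computer title, with average ranking as the tiebreaker. The judge data is invented for illustration and is not from any actual Loebner contest.

```python
from collections import Counter

votes = ["Elbot", "Cleverbot", "Elbot", "Ultra Hal"]   # one vote per judge
rankings = {                                           # 1 = judged most human
    "Elbot": [1, 2, 1, 2],
    "Cleverbot": [2, 1, 3, 3],
    "Ultra Hal": [3, 3, 2, 1],
}

def most_human_computer(votes, rankings):
    tally = Counter(votes)
    def key(program):
        avg_rank = sum(rankings[program]) / len(rankings[program])
        return (-tally[program], avg_rank)   # more votes first; better rank breaks ties
    return min(rankings, key=key)

print("Most Human Computer:", most_human_computer(votes, rankings))
```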
Brian Christian is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive.
Research & Data Publications
Working Paper on Assessing the outcome of the World Summit on the Information Society in Asia and the Pacific, 23 April 2014 (23 Apr 2014, Working paper series). This is a working paper on a preliminary review by the ESCAP Secretariat on Assessing the outcome of the World Summit on the Information Society in Asia and the Pacific, published on 23 April 2014.
Sixth APPC Report (4 Apr 2014, Flagship publications and book series). This report contains the Asian and Pacific Ministerial Declaration on Population and Development as well as the proceedings and organization of the Sixth Asian and Pacific Population Conference, held in Bangkok in September 2013.
If you would like to request a printed copy of the report, please email [email protected].
ICPD Report in Asia and the Pacific (4 Apr 2014, Flagship publications and book series). The publication Sustaining Progress on Population and Development in Asia and the Pacific: 20 years after ICPD contains an analysis of the ICPD beyond 2014 Global Survey in Asia and the Pacific and the results of additional research on the status of implementation of the ICPD Programme of Action in the region.
The World Water Development Report 2014: Water and Energy (21 Mar 2014, Flagship publications and book series). The World Water Development Report, or WWDR, is produced by the World Water Assessment Programme, a programme of UN-Water hosted by UNESCO, and is the result of the joint efforts of the UN agencies and entities which make up UN-Water, working in partnership with governments, international organizations, non-governmental organizations and other stakeholders.

Technical paper series on ICT for Resilient Development (11 Mar 2014, Working paper series). ICT continues to grow rapidly with widespread diffusions, novel applications as well as unforeseen challenges. This policy brief series aims to increase the policy relevance underlying the secretariat's analytical work and thereby enhance the contributions that ICT can make in the shift towards more inclusive and sustainable development processes.

Discussion Paper Series on Problems and Challenges in Transit Connectivity Routes and International Gateways in Asia (4 Mar 2014, Working paper series). As the major supply lines for the Internet, the smooth functioning of the domestic and international long distance telecommunications infrastructure has never been so critical. Formerly based on older technologies such as high frequency (HF) radio links, microwave and satellite communications, this infrastructure is now heavily dependent on fiber optic technology.

Asia-Pacific Development Journal Vol. 20, No. 2, December 2013 (3 Feb 2014, Journals). The Asia-Pacific Development Journal (APDJ) is published twice a year by the Macroeconomic Policy and Development Division of the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP). The primary objective of the APDJ is to provide a platform for the exchange of knowledge, experience, ideas, information and data on all aspects of economic and social development issues and concerns facing the region and to stimulate policy debate and assist in the formulation of policy.

Inter-Regional Report on Labour Migration and Social Protection (10 Jan 2014, Flagship publications and book series). Since the 1970s in particular, the countries of Western Asia and those of the Asia-Pacific region have been closely linked to each other through highly extensive movements of people. Opportunities created by the rapid development of the countries of the Gulf Cooperation Council (GCC), but also other countries in the ESCWA region, have attracted a large number of labour migrants from the Asia-Pacific region.

Low Carbon Development Path for Asia and the Pacific: Challenges and Opportunities to the Energy Sector (9 Jan 2014, Working paper series). Climate change is one of the greatest environmental issues of our time and the Asia-Pacific region is already experiencing its adverse impacts. Studies suggest that the costs of inaction on reducing the consumption of fossil fuels, the main source of climate change, would be many times the costs of action. This report stresses the need to take decisive steps quickly to get the developing countries in this region on course to make inroads in the global effort to combat climate change and achieve sustainable development and green growth.

Green growth indicators: A practical approach for Asia and the Pacific (31 Dec 2013, Flagship publications and book series). Several countries in Asia and the Pacific have launched high-level policy initiatives and action plans to promote green growth, and the green economy.
As a consequence the demand for indicators of economic growth that supports, rather than detracts from, sustainable development, is growing. Green growth indicator frameworks developed by international organisations and partnerships of organisations share a focus on a few key dimensions.
Boomerang! Is the Pentagon Field-Testing 'Son of Stuxnet'?
When the cybersecurity firm Symantec announced they had discovered a sophisticated Trojan which shared many of the characteristics of the Stuxnet virus, I wondered: was the Pentagon and/or their Israeli partners in crime field-testing insidious new spyware?According to researchers, the malicious program was dubbed "Duqu" because it creates files with the prefix "~DQ." It is a remote access Trojan (RAT) that "is essentially the precursor to a future Stuxnet-like attack." Mark that carefully.In simple terms, a Trojan is malicious software that appears to perform a desirable function prior to its installation but in fact, steals information from users spoofed into installing it, oftentimes via viral email attachments.In the hands of enterprising security agencies, or criminals (the two are functionally synonymous), Trojans are primarily deployed for data theft, industrial or financial espionage, keystroke logging (surveillance) or the capture of screenshots which may reveal proprietary information."The threat" Symantec averred, "was written by the same authors (or those that have access to the Stuxnet source code) and appears to have been created since the last Stuxnet file was recovered."The malware, which began popping-up on the networks of several European firms, captured lists of running processes, account and domain information, network drives, user keystrokes and screenshots from active sessions and did so by using a valid, not a forged certificate, stolen from the Taipei-based firm, C-Media.Whereas Stuxnet, believed to be a co-production of U.S. and Israeli cyber-saboteurs, was a weaponized virus programmed to destroy Iran's civilian nuclear power infrastructure by targeting centrifuges that enrich uranium, Duqu is a stealthy bit of spy kit that filches data from manufacturers who produce systems that control oil pipelines, water systems and other critical infrastructure.Sergey Golovanov, a malware expert at Kaspersky Labs told Forbes that Duqu is "is likely the brainchild of a government security apparatus. And it's that government's best work yet."Speaking from Moscow, Golovanov told Forbes in a telephone interview that "right now were are pretty sure that it is the next generation of Stuxnet.""We are pretty sure that Duqu is a government cyber tool and are 70% sure it is coming from the same source as Stuxnet," Golovanov said."The victims' computer systems were infected several days ago. Whatever it is," Golovanov noted, "it is still in those systems, and still scanning for information. But what exactly it is scanning for, we don't know. It could be gathering internal information for encryption devices. We only know that it is data mining right now, but we don't know what kind of data and to what end it is collecting it."Whom, pray tell, would have "access to Stuxnet source code"?While no government has claimed ownership of Stuxnet, IT experts told Forbes "with 100% certainty it was a government agency who created it."Suspects include cryptologists at the National Security Agency, or as is more likely given the outsourcing of intelligence work by the secret state, a combination of designers drawn from NSA, "black world" privateers from large defense firms along with specialists from Israel's cryptologic division, Unit 8200, operating from the Israeli nuclear weapons lab at the Dimona complex, as The New York Times disclosed.Analyst George Smith noted: "Stuxnet was widely distributed to many computer security experts. 
Many of them do contract work for government agencies, labor that would perhaps require a variety of security clearances and which would involve doing what would be seen by others to be black hat in nature. When that happened all bets were off."Smith averred, "once a thing is in world circulation it is not protected or proprietary property."While one cannot demonstrably prove that Duqu is the product of one or another secret state satrapy, one can reasonably inquire: who has the means, motive and opportunity for launching this particular bit of nastiness into the wild?"Duqu's purpose," Symantec researchers inform us, "is to gather intelligence data and assets from entities, such as industrial control system manufacturers, in order to more easily conduct a future attack against another third party."In other words, while Stuxnet was programmed to destroy industrial systems, Duqu is an espionage tool that will enable attackers "looking for information such as design documents that could help them mount a future attack on an industrial control facility."Although it can be argued, as Smith does, that "source code for malware has never been secure," and "always becomes something coveted by many, often in direct proportion to its fame," it also can't be ruled out that military-intelligence agencies or corporate clones with more than a dog or two in the "cyberwar" hunt wouldn't be very interested in obtaining a Trojan that clips "industrial design" information from friend and foe alike.Black ProgramsThe circulation of malicious code such as Duqu's is highly destabilizing. Considering that the U.S. Defense Department now considers computer sabotage originating in another country the equivalent to an act of war for which a military response is appropriate, the world is on dangerous new ground.Speaking with MIT's Technology Review, Ronald Deibert, the director of Citizen Lab, a University of Toronto think tank that researches cyberwarfare, censorship and espionage, told the publication that "in the context of the militarization of cyberspace, policymakers around the world should be concerned."Indeed, given the fact that it is the United States that is now the biggest proliferator in the so-called cyber "arms race," and that billions of dollars are being spent by Washington to secure such weapons, recent history is not encouraging.With shades of 9/11, the anthrax mailings and the Iraq invasion as a backdrop, one cannot rule out that a provocative act assigned to an "official enemy" by ruling elites just might originate from inside the U.S. 
security complex itself and serve as a convenient pretext for some future war.A hint of what the Pentagon is up to came in the form of a controlled leak to The Washington Post.Last spring, we were informed that "the Pentagon has developed a list of cyber-weapons and -tools, including viruses that can sabotage an adversary's critical networks, to streamline how the United States engages in computer warfare."The list of "approved weapons" or "fires" are indicative of the military's intention to integrate "cyberwar" capabilities into its overall military doctrine.According to Ellen Nakashima, the "classified list of capabilities has been in use for several months and has been approved by other agencies, including the CIA."The Post reported that the new "framework clarifies, for instance, that the military needs presidential authorization to penetrate a foreign computer network and leave a cyber-virus that can be activated later."On the other hand, and here's where Duqu may enter the frame, the "military does not need such approval, however, to penetrate foreign networks for a variety of other activities. These include studying the cyber-capabilities of adversaries or examining how power plants or other networks operate."Additionally, Nakashima wrote, Pentagon cyberwarriors "can also, without presidential authorization, leave beacons to mark spots for later targeting by viruses, the official said."As part of Washington's on-going commitment to the rule of law and human rights, as the recent due process-free drone assassination of American citizen Anwar Al-Awlaki, followed by that of his teenage son and the revenge killing of former Libyan leader Muammar Qaddafi by--surprise!--Al Qaeda-linked militias funded by the CIA clearly demonstrate, the "use of any cyber-weapon would have to be proportional to the threat, not inflict undue collateral damage and avoid civilian casualties."Try selling that to the more than 3,600 people killed or injured by CIA drone strikes, as Pakistan Body Count reported, since our Nobel laureate ascended to his Oval Office throne.As George Mason University researchers Jerry Brito and Tate Watkins described in their recent paper, Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy, despite overheated "rhetoric of 'cyber doom' employed by proponents of increased federal intervention," there is a lack of "clear evidence of a serious threat that can be verified by the public."However, as Brito and Watkins warned, "the United States may be witnessing a bout of threat inflation similar to that seen in the run-up to the Iraq War," one where "a cyber-industrial complex is emerging, much like the military-industrial complex of the Cold War. This complex may serve to not only supply cybersecurity solutions to the federal government, but to drum up demand for them as well."A "demand" which will inevitably feed the production, proliferation and deployment of a host of viral attack tools (Stuxnet) and assorted spybots (Duqu) that can and will be used by America's shadow warriors and well-connected corporate spies seeking to get a leg-up on the competition.While evidence of "a serious threat" may be lacking, and while proponents of increased "cybersecurity" spending advanced "no evidence ... 
that opponents have 'mapped vulnerabilities' and 'planned attacks'," Brito and Watkins noted there is growing evidence these are precisely the policies being pursued by Washington.Why might that be the case?As a declining imperialist Empire possessing formidable military and technological capabilities, researcher Stephen Graham has pointed out in Cities Under Siege: The New Military Urbanism, the United States has embarked on a multibillion dollar program "to militarize the world's global electronic infrastructures" with a stated aim to "gain access to, and control over, any and all networked computers, anywhere on Earth."Graham writes that "the sorts of on-the-ground realities that result from attacks on ordinary civilian infrastructure are far from the abstract niceties portrayed in military theory."Indeed, as "the experiences of Iraq and Gaza forcefully remind us," robotized drone attacks and already-existent cyberwar capabilities buried in CIA and Pentagon black programs demonstrate that "the euphemisms of theory distract from the hard fact that targeting essential infrastructure in highly urbanized societies kills the weak, the old and the ill just as surely as carpet bombing."A Glimpse Inside the ComplexIn the wake of the HBGary hack by Anonymous earlier this year, the secrecy-shredding web site Public Intelligence released a 2009 Defense Department contract proposal from the firm.Among other things, it revealed that the Pentagon is standing-up offensive programs that "examine the architecture, engineering, functionality, interface and interoperability of Cyber Warfare systems, services and capabilities at the tactical, operational and strategic levels, to include all enabling technologies."HBGary, and one can assume other juiced defense contractors, are planning "operations and requirements analysis, concept formulation and development, feasibility demonstrations and operational support.""This will include," according to the leaked proposal, "efforts to analyze and engineer operational, functional and system requirements in order to establish national, theater and force level architecture and engineering plans, interface and systems specifications and definitions, implementation, including hardware acquisition for turnkey systems."Indeed, the company will "perform analyses of existing and emerging Operational and Functional Requirements at the force, theater, Combatant Commands (COCOM) and national levels to support the formulation, development and assessment of doctrine, strategy, plans, concepts of operations, and tactics, techniques and procedures in order to provide the full spectrum of Cyber Warfare and enabling capabilities to the warfighter."During the course of their analysis Symantec learned that Duqu "uses HTTP and HTTPS to communicate with a command-and-control (C&C) server that at the time of writing is still operational.""The attackers were able to download additional executables through the C&C server, including an infostealer that can perform actions such as enumerating the network, recording keystrokes, and gathering system information. 
The information is logged to a lightly encrypted and compressed local file, which then must be exfiltrated out."To where, and more importantly by whom was that information "exfiltrated" is of course, the $64,000 question.A working hypothesis may be provided by additional documents published by Public Intelligence.According to a cyberwar proposal to the Pentagon by General Dynamics and HBGary, "Project C" is described as a program for the development "of a software application targeting the Windows XP Operating System that, when executed, loads and enables a covert kernel-mode implant that will exfiltrate a file from disk (or other remotely called commands) over a connected serial port to a remote device."We're informed that Project C's "primary objectives" was the design of an implant "that is clearly able to exfiltrate an on-disk file, opening of the CD tray, blinking of the keyboard lights, opening and deleting a file, and a memory buffer exfiltration over a connected serial line to a collection station.""As part of the exploit delivery package," HBGary and General Dynamics told their prospective customers, presumably the NSA, that "a usermode trojan will assist in the loading of the implant, which will clearly demonstrate the full capability of the implant."Duqu, according to Symantec researchers, "uses a custom C&C protocol, primarily downloading or uploading what appear to be JPG files. However, in addition to transferring dummy JPG files, additional data for exfiltration is encrypted and sent, and likewise received."While we don't know which firms were involved in the design of Stuxnet and now, Duqu, we do know thanks to Anonymous that HBGary had a Stuxnet copy, shared it amongst themselves and quite plausibly, given what we've learned about Duqu, Stuxnet source code may have been related to the above-mentioned "Project C."Kevin Haley, Symantec's director of product management told The Register that "the people behind Stuxnet are not done. They've continued to do different things. This was not a one-shot deal."
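As a small, purely defensive illustration of one detail reported above, Symantec noted that Duqu creates files with the "~DQ" prefix. The sketch below is a toy indicator-of-compromise sweep for that naming pattern only; real incident response relies on proper forensic and anti-malware tooling, and the directory scanned here is an assumption.

```python
# Toy sweep for files whose names start with the "~DQ" prefix noted above.
import os

def find_dq_files(root="/tmp"):   # scan root chosen for illustration only
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.startswith("~DQ"):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    for path in find_dq_files():
        print("suspicious file name:", path)
```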
Thanks, Tom. Great little article. Are we getting the meaning of "Full Spectrum Dominance" yet?
2014-15/4496/en_head.json.gz/7036 | Home/CRM/Some Thoughts on Open
Some Thoughts on Open
Jan Sysmans — August 4, 2010
Note: This post originally appeared on CRM Outsiders. For an in-depth and insightful discussion with Larry explaining his points further, check out Larry’s NetworkWorld podcast on the topic.
By Larry Augustin, CEO, SugarCRM
The announcement of General Availability of Sugar 6 this week has prompted some questions about SugarCRM’s business model and the role of Open Source at SugarCRM. (Read about it all here: [1][2][3][4])
Open Source is at the heart of SugarCRM’s business. Well over half of our engineering effort produces code that is released under an OSI approved license. We have three versions of our Sugar CRM product: Community Edition, Professional Edition, and Enterprise Edition. The Community Edition is licensed under version 3 of the AGPL, and has been licensed under some version of the GPL or AGPL since early 2007. Prior to that it was available under several variants of the MPL.
SugarCRM does not release 100% of the code we develop under an Open Source license; Sugar Professional Edition and Enterprise Edition are distributed under a commercial license. This mix of Open Source and commercially licensed software offerings has allowed us to build a successful business while creating an innovative, award winning, affordable, and open CRM solution. From the beginning SugarCRM has always had this mixed model. We benefit from this model, and, as Marten Mickos says, believe that the world of Free and Open Source Software benefits as well.
SugarCRM always makes available full source code to all of our customers. In all cases (Community, Professional, or Enterprise), our customers receive full source code to our products. In all cases our customers have the right to run our products anywhere: in their own datacenters, in our datacenters, or at any of a variety of cloud service providers. In all cases our customers own their data and have full access to their complete database. We care deeply about those rights. They are at the heart of our differentiation as a company.
Open Source code is just part of that. “Open” to us means more than source code. It’s an entire philosophy about how we do business and how we empower our customers.
To riff on an analogy I originally heard from Red Hat founder Bob Young, would you buy a car with the hood locked shut and where only the dealer who sold you that car had the key? Imagine for a minute what that would mean. Only the dealer could perform regularly scheduled maintenance. You couldn’t modify the engine in any way, such as tuning for higher performance or modifying it to run on alternative fuels. Imagine you were on a trip and the car broke down. While you might have the skills to fix it, or might find a local mechanic who could fix it, you wouldn’t have those options. Only the dealer has the key, and only the dealer has the right to touch that engine. Imagine how frustrating that lack of control would be.
Why then would you run your business on software where you have no control? Where you are entirely at the mercy of the vendor? Where you did not control your own destiny?
At SugarCRM we are passionate about giving our customers that control. With full access to Sugar’s source code, customers can take control of their own destiny. If they so choose, they can make enhancements specific to their business needs. If something breaks, they can open the hood themselves, or have a “mechanic” of their own choosing open the hood for them.
But empowering customers means not just sharing with them our source code (under either an Open Source or commercial license), but also making sure that they have the keys to the hood so they can control their own destiny. How is this different? Consider a ‘traditional’ hosted (Software as a Service, or SaaS) CRM provider. Your data resides on their servers, under their control. If their systems go down, you go down. If it doesn’t operate the way you want it to, you’re out of luck. Even if they were to give you access to their source code, you are still not in control of your own destiny, because you wouldn’t be allowed to modify it, or even run it, if you wanted to. You might have the blueprints, but you still can’t get under the hood. Tim O’Reilly has been preaching this challenge to Free and Open Source Software for many years. Marten Mickos makes the same observation about closed web services in his recent Computer World UK article.
At SugarCRM our customers have not only full access to their data, but they have that access in the original database form so that they can truly control their own destiny. They can move that database to another cloud service provider or to servers on their own private cloud or in their own data center. As a SugarCRM customer that choice is in your control.
Further, our open model has created a vibrant partner network that allows our customers to select the level of service they want, while at the same time giving them full control and options for the future. For example, you may be the hands-on person who likes to open the hood and change your own oil. Or you may prefer to buy a complete service agreement with your car, where everything is included and the dealer takes care of everything. Our open model has enabled a network of partners that offer whatever level of service you need, from do-it-yourself to full service. As a SugarCRM customer that choice is in your control.
Our open, “run anywhere” model enables similar choice and control in where your data resides and your applications run. That may mean you choose to let us run Sugar for you out of our datacenters. Or you may choose to run it on cloud services such as Amazon, Rackspace, or Windows Azure. Or you may choose to run it on your own servers on your own private cloud. As a SugarCRM customer that choice is in your control.
Bottom line: Open is a core value for us a SugarCRM. That manifests itself in part through our commitment to our Open Source Community Edition, but is pervasive in our entire company philosophy in which our customers receive full source code to our products, have the right to run our products anywhere, and own their own data. Open is at the heart of our business.
In CRM, Open Source, SugarCRM. Tags: CRM, GPL, open source, SugarCRM.

Jan Sysmans
Jan Sysmans brings 20 years of marketing and product management experience, including particular focus in the software-as-a-service segment, to his role as Senior Director of Product Marketing at SugarCRM. In this position, Mr. Sysmans is responsible for all global product marketing programs for SugarCRM. Mr. Sysmans speaks on behalf of SugarCRM about the value, benefits and future of commercial open source solutions at customer and industry events around the world.
Prior to assuming his current role at SugarCRM, Mr. Sysmans was the Enterprise Marketing TME (Technical Marketing Engineer) at Cisco WebEx and Director of Marketing at WebEx Communications. Earlier in his career, Mr. Sysmans held product management positions at PlaceWare, Ensim, Narus, XO Communications and Concentric Network Communications. He also served as the chairperson of the Marketing Communications committee on the SaaS Executive Council of the Software Information and Industry Association (SIIA) from 2006-2007.
Jan Sysmans holds a Bachelor of Science in commercial and diplomatic relations from the HUBrussels Business School (Belgium), and a Master of Business Administration in intercultural management from ICHEC Brussels Management School (Belgium). He speaks English, Dutch, French, German and Spanish
Revision as of 12:07, 18 June 2010 by Arensb (Talk | contribs)
Wissam...
You're about two comments from being blocked and I'd really rather not do that, so let's clear a few things up.
Sign your comments. It's really easy 4 tildes at the end of your comments will add the name and date stamp. (It's very difficult to follow discussion on talk pages when there's just a wall of text with no formatting and no signature)
We don't need to have a discussion and take a vote about how to deal with a simple, obviously flawed argument
Understand the scope in which you're working. Euthyphro shouldn't be a catch all for all moral arguments - feel free to create new pages
Make sure you're familiar with posting rules, guidelines and wiki formatting. Visit the forum and/or talk to people who post regularly.
Make sure you not only know who you're talking to, but make it clear who you're directing your comments to. Several of the comments you made, responding to me, don't seem to apply to me. The first step is to look at the history for a given page. You can see who has made changes and what changes they've made...that'll keep you from saying things like "your counter-arguments" to someone who didn't make them. (Note: At first, I took this to mean 'your arguments' as in 'your wiki'...but I'm not convinced you even knew that you were talking to the site owner.)
So, let's get to your actual comments, so we can clear the air and I can get back to work:
"Furthermore, I did not insinuate that the Euthyphro should address ALL moral argument but I was giving atheists a heads-up as to the modern moral argument where Euthyphro is useless and a new swift response should be made. Doesn't this require some collaboration and agreement on the counter-argument?"
— Wissam
The argument you presented is not an argument where Euthyphro is useless, it's just a moral argument that limits the use of Euthyphro as a response. We have an entire category for moral arguments, feel free to add a page for this one if it doesn't exist. No, we don't need collaboration and agreement on counter-arguments. We tend to list the counter arguments and they are then modified or removed. A wiki is a living document, we don't need a committee before adding a page and, in the end, a committee of 1 (me) may overturn the decisions.
"And if you really care about this site, I advise you to work on the kalam argument."
Thanks for your advice. There's a reason that I opened the wiki up to the public: I simply don't have time to do this, the TV show, the podcast, my ridiculously demanding regular job, e-mail, speaking engagements, ACA business and still find time to eat, sleep and occasionally socialize. There are many articles that simply don't exist. There are many that need some serious editing...but I've had to limit my efforts here to a bare minimum.
"Kalam is one of the best theistic arguments."
Which is only slightly more impressive than being the least smelly dung pile.
"Kalam is the only one which has been constantly used in recent debates. Have you been to any recent debates?"
Clearly you have no idea who you're talking to. I say that not because I would have expected any special fawning...but because if you knew, you couldn't have said something so monumentally stupid. Whether or not I've been to a recent debate (I have) is entirely irrelevant. I'm involved in debates 7 days a week, with real theists of all stripes and Kalam isn't nearly so common as you might think. That said, it deserves a thorough response.
"No, my friend. You are NOT ready to take on counter-apologetics if you have no idea of what the kalam cosmological argument for atheists is, which has been introduced by atheist philosopher quentin smith. Search for it, please do!"
I have done...and you missed the point. Your implication was that without Kalam for atheism, one isn't ready to take on Kalam as an apologetic. This is false and it ignores the burden of proof. Kalam stands or falls on its own merits and the existence or non-existence of a Kalam-for-atheism is a secondary concern. I've read Smith's essay. It's interesting and contingent upon unproven particulars. I don't find it particularly compelling and I don't find that it is in any way superior, as a response, than simply exposing the flaws in Kalam. - HOWEVER, it is an argument that should be included, both in the counters to Kalam and as a page in the arguments for the non-existence of a god.
Your implication that one isn't ready for counter-apologetics if they don't possess an exhaustive familiarity with a particular argument is without merit. My concern was that you seemed to be confused about how to address a simple and obviously flawed argument...that concern was based on a miscommunication. You were asking for feedback on how to address it at the wiki, not feedback about how one should respond to it. The fact that you still missed the point that one has no more need of the Kalam for atheism than they do of the atheists wager in order to address the apologetic is still a minor concern.
"You also have poor articles on 'quran and science' which could be a powerful atheistic tool against islam'. There's no mentioning of the 'inimatibility of quran' argument. There are many arguments you have missed. As you see, I have my hands full and it seems that you are not ready taking on counter-apologetics from the apparent poverty of this wiki."
Well, aren't we lucky you've arrived! There was no claim that the wiki is finished (as if it ever would be) or that it even had adequate coverage of most arguments...it's a resource, a work in progress and its state is entirely dependent on volunteer participation. While the goal would be to serve as the premiere treatment for these subjects no one has said we were anywhere close to that. Your comment is akin to walking into a garage where someone is building a car from scratch and saying, "Where's the odometer? There's no headliner or carpet? If you really cared about this car, you'd have a GPS system installed. You aren't ready for Daytona..." - and it's almost enough for me to revoke your welcome.
Fortunately, I'm not quite that reactionary. Go. Edit pages, add comments, and help improve the site like many others have done. Just sign your comments and try to be clear. - Sans Deity 10:37, 2 March 2010 (CST)
1 Sign Comments
2 First person
3 Numbered / bullet formatting
4 Plagiarism and dumping
Sign Comments

Above the editing box on every page there are a bunch of icons. The second last one is a squiggly line that looks like a signature. If you click on it, it will insert your signature and the date/time. Or you can type two dashes and four tildes instead.
Sign discussion comments, but not article edits.
--Jaban 14:54, 3 March 2010 (CST)
First person

I've seen a few things that you've written in the first person. Is that the standard procedure here?--Bob M 06:04, 15 May 2010 (CDT)
No, it's not really standard procedure but I write articles for several publications on the internet in the first person. You are encouraged to change them.--wissam hemadeh 14:38, 15 May 2010 (CDT)
Thank you for your encouragement. But I've mentioned previously that arguments against the existence of gods don't do much for me - as I feel that they are about as useful as arguments against the existence of Father Christmas. Consequently I'm reluctant to start editing such articles. I was just curious about why you did it that way. Thanks for responding.--Bob M 06:33, 16 May 2010 (CDT)
Numbered / bullet formatting

Edit this section to see how to properly format lists on wikis.
For numbered lists:
1. Statement the first.
2. Statement the second.
3. Statement the third.

For bullet lists:

For non-bulleted or non-numbered lines within lists:
Corollary to the second.

Mix-and-match happy meal combo of everything:
1. Numbered statement the first.
   * Item number one.
   * Item number two.
2. Numbered statement the second.
   Corollary to item number one.
   * Item number three.
3. Numbered statement the third.
   Corollary to statement the third.
Hope that helps your future edits. :)
Plagiarism and dumping

Could you please stop just copying articles here? Iron Chariots is not intended as a dumping site for every article on atheism ever written. It's easy to link to external sites, so instead of duplicating content here, it's best to just link. The Iron Chariots Wiki:Editing guidelines say not to just copy text from Wikipedia, and I think it's safe to assume that that applies to other sites as well. In addition, a lot of your edits have been plagiarized. Aside from being immoral, that's also illegal, and I'd rather this site didn't get in trouble with the law.
There's also a fundamental difference between a traditional article and a wiki page: a published article is static, whereas a wiki page is intended to be updated as new information becomes available, or as later authors come up with better ways of expressing what earlier authors meant. --Arensb 12:07, 18 June 2010 (CDT)
Retrieved from "http://wiki.ironchariots.org/index.php?title=User_talk:Wissam_hemadeh&oldid=13745"
By Martin Sústrik, February 17, 2014
A look at how one of the most popular messaging layers was designed and implemented
ØMQ is a messaging system, or "message-oriented middleware" if you will. It is used in environments as diverse as financial services, game development, embedded systems, academic research, and aerospace.
Messaging systems work basically as instant messaging for applications. An application decides to communicate an event to another application (or multiple applications), it assembles the data to be sent, hits the "send" button, and the messaging system takes care of the rest. Unlike instant messaging, though, messaging systems have no GUI and assume no human beings at the endpoints capable of intelligent intervention when something goes wrong. Messaging systems thus have to be both fault-tolerant and much faster than common instant messaging.
ØMQ was originally conceived as an ultra-fast messaging system for stock trading and so the focus was on extreme optimization. The first year of the project was spent devising benchmarking methodology and trying to define an architecture that was as efficient as possible.
Later on, approximately in the second year of development, the focus shifted to providing a generic system for building distributed applications and supporting arbitrary messaging patterns, various transport mechanisms, arbitrary language bindings, etc.
During the third year, the focus was mainly on improving usability and flattening the learning curve. We adopted the BSD Sockets API, tried to clean up the semantics of individual messaging patterns, and so on.
This article will give insight into how the three goals above translated into the internal architecture of ØMQ, and provide some tips for those who are struggling with the same problems.
Since its third year, ØMQ has outgrown its codebase; there is an initiative to standardize the wire protocols it uses, and an experimental implementation of a ØMQ-like messaging system inside the Linux kernel, etc. These topics are not covered here. However, you can check online resources for further details.
Application vs. Library
ØMQ is a library, not a messaging server. It took us several years of working on the AMQP protocol (a financial industry attempt to standardize the wire protocol for business messaging), writing a reference implementation for it, and participating in several large-scale projects heavily based on messaging technology to realize that there's something wrong with the classic client/server model of a smart messaging server (broker) and dumb messaging clients.
Our primary concern was with the performance: If there's a server in the middle, each message has to pass the network twice (from the sender to the broker and from the broker to the receiver) inducing a penalty in terms of both latency and throughput. Moreover, if all the messages are passed through the broker, at some point, the server is bound to become the bottleneck.
A secondary concern was related to large-scale deployments: when the deployment crosses organizational boundaries the concept of a central authority managing the whole message flow doesn't apply anymore. No company is willing to cede control to a server in a different company due to trade secrets and legal liability. The result in practice is that there's one messaging server per company, with hand-written bridges to connect it to messaging systems in other companies. The whole ecosystem is thus heavily fragmented, and maintaining a large number of bridges for every company involved doesn't make the situation better. To solve this problem, we need a fully distributed architecture, an architecture where every component can be possibly governed by a different business entity. Given that the unit of management in server-based architecture is the server, we can solve the problem by installing a separate server for each component. In such a case we can further optimize the design by making the server and the component share the same processes. What we end up with is a messaging library.
ØMQ was started when we got an idea about how to make messaging work without a central server. It required turning the whole concept of messaging upside down and replacing the model of an autonomous centralized store of messages in the center of the network with a "smart endpoint, dumb network" architecture based on the end-to-end principle. The technical consequence of that decision was that ØMQ, from the very beginning, was a library, not an application.
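To make the library-not-server point concrete, here is a minimal sketch of a receiving endpoint written against the ØMQ C API. The address, port, and request/reply pattern are illustrative choices for this example rather than anything prescribed above; the point is simply that the application links the library and no broker process sits between the peers.

/* Receiving endpoint: an ordinary application that links libzmq. */
#include <zmq.h>
#include <stdio.h>

int main(void)
{
    void *ctx = zmq_ctx_new();             /* explicit, per-application state */
    void *rep = zmq_socket(ctx, ZMQ_REP);  /* reply side of a request/reply pair */
    zmq_bind(rep, "tcp://*:5555");         /* listen directly; no broker anywhere */

    char buf[256];
    int n = zmq_recv(rep, buf, sizeof(buf) - 1, 0);   /* message arrives straight from the peer */
    if (n >= 0) {
        if (n > (int)sizeof(buf) - 1)
            n = (int)sizeof(buf) - 1;      /* zmq_recv reports the full size even if truncated */
        buf[n] = '\0';
        printf("got request: %s\n", buf);
        zmq_send(rep, "ok", 2, 0);         /* reply travels back the same way */
    }

    zmq_close(rep);
    zmq_ctx_term(ctx);
    return 0;
}

The sending peer is symmetrical: it creates its own context, opens a ZMQ_REQ socket, and calls zmq_connect() with the receiver's address, so each message crosses the network exactly once, from sender to receiver.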
We've been able to prove that this architecture is both more efficient (lower latency, higher throughput) and more flexible (it's easy to build arbitrary complex topologies instead of being tied to classic hub-and-spoke model) than standard approaches.
One of the unintended consequences was that opting for the library model improved the usability of the product. Over and over again users express their happiness about the fact that they don't have to install and manage a stand-alone messaging server. It turns out that not having a server is a preferred option as it cuts operational cost (no need to have a messaging server admin) and improves time-to-market (no need to negotiate the need to run the server with the client, the management or the operations team).
The lesson learned is that when starting a new project, you should opt for the library design if at all possible. It's pretty easy to create an application from a library by invoking it from a trivial program; however, it's almost impossible to create a library from an existing executable. A library offers much more flexibility to the users, at the same time sparing them non-trivial administrative effort.
Global State
Global variables don't play well with libraries. A library may be loaded several times in the process but even then there's only a single set of global variables. Figure 1 shows a ØMQ library being used from two different and independent libraries. The application then uses both of those libraries.
Figure 1: ØMQ being used by different libraries.
When such a situation occurs, both instances of ØMQ access the same variables, resulting in race conditions, strange failures and undefined behavior. To prevent this problem, the ØMQ library has no global variables. Instead, a user of the library is responsible for creating the global state explicitly. The object containing the global state is called context. While from the user's perspective context looks more or less like a pool of worker threads, from ØMQ's perspective it's just an object to store any global state that we happen to need. In the picture above, libA would have its own context and libB would have its own as well. There would be no way for one of them to break or subvert the other one.
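As a rough sketch of how this looks in code, imagine the libA and libB of Figure 1 each wrapping ØMQ behind their own initialization routine (the function names below are invented for the illustration). Because each library creates its own context, the two instances share no state and cannot interfere with each other:

#include <zmq.h>

/* Hypothetical libA: creates and owns its own ØMQ context. */
void *libA_init(void)
{
    return zmq_ctx_new();       /* all of libA's "global" state lives in this object */
}

/* Hypothetical libB: a completely independent context of its own. */
void *libB_init(void)
{
    return zmq_ctx_new();
}

void application_startup(void)
{
    void *ctx_a = libA_init();  /* two ØMQ instances in one process... */
    void *ctx_b = libB_init();  /* ...with no shared variables between them */

    /* ... sockets created from ctx_a and ctx_b can never collide ... */

    zmq_ctx_term(ctx_a);
    zmq_ctx_term(ctx_b);
}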
The lesson here is pretty obvious: Don't use global state in libraries. If you do, the library is likely to break when it happens to be instantiated twice in the same process.
When ØMQ was started, its primary goal was to optimize performance. Performance of messaging systems is expressed using two metrics: throughput, how many messages can be passed during a given amount of time; and latency, how long it takes for a message to get from one endpoint to the other.
Which metric should we focus on? What's the relationship between the two? Isn't it obvious? Run the test, divide the overall time of the test by number of messages passed and what you get is latency. Divide the number of messages by time and what you get is throughput. In other words, latency is the inverse value of throughput. Trivial, right?
Instead of starting coding straight away we spent some weeks investigating the performance metrics in detail and we found out that the relationship between throughput and latency is much more subtle than that, and often the metrics are quite counter-intuitive.
Imagine A sending messages to B (see Figure 2). The overall time of the test is 6 seconds. There are 5 messages passed. Therefore, the throughput is 0.83 messages/sec (5/6) and the latency is 1.2 sec (6/5), right?
Figure 2: Sending messages from A to B.
Have a look at the diagram again. It takes a different time for each message to get from A to B: 2 sec, 2.5 sec, 3 sec, 3.5 sec, 4 sec. The average is 3 seconds, which is pretty far away from our original calculation of 1.2 seconds. This example shows the misconceptions people are intuitively inclined to make about performance metrics.
Now have a look at the throughput. The overall time of the test is 6 seconds. However, at A it takes just 2 seconds to send all the messages. From A's perspective the throughput is 2.5 msgs/sec (5/2). At B it takes 4 seconds to receive all messages. So from B's perspective, the throughput is 1.25 msgs/sec (5/4). Neither of these numbers matches our original calculation of 1.2 msgs/sec.
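The arithmetic is easy to get wrong, so here is a small self-contained sketch that reproduces the numbers above. The individual send timestamps are an assumption (spread evenly over the 2-second send window, which is consistent with the latencies and totals quoted in the text), since the figure itself is not reproduced here:

#include <stdio.h>

int main(void)
{
    /* Assumed send times at A and receive times at B, in seconds,
       chosen to match the example: latencies of 2, 2.5, 3, 3.5, 4 sec. */
    double sent[]  = {0.0, 0.5, 1.0, 1.5, 2.0};
    double recvd[] = {2.0, 3.0, 4.0, 5.0, 6.0};
    const int n = 5;

    /* Latency belongs to each individual message; we can only average it. */
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += recvd[i] - sent[i];
    printf("average latency : %.2f sec\n", sum / n);                          /* 3.00 */

    /* Throughput is measured at a single point of the system. */
    printf("throughput at A : %.2f msgs/sec\n", n / (sent[n-1]  - sent[0]));  /* 2.50 */
    printf("throughput at B : %.2f msgs/sec\n", n / (recvd[n-1] - recvd[0])); /* 1.25 */

    /* Neither matches the naive 5 msgs / 6 sec = 0.83 msgs/sec "throughput"
       or 6 sec / 5 msgs = 1.2 sec "latency" computed over the whole test. */
    return 0;
}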
To make a long story short, latency and throughput are two different metrics; that much is obvious. The important thing is to understand the difference between the two and their relationship. Latency can be measured only between two different points in the system; there is no such thing as latency at point A. Each message has its own latency. You can average the latencies of multiple messages; however, there's no such thing as latency of a stream of messages.
Throughput, on the other hand, can be measured only at a single point of the system. There's a throughput at the sender, there's a throughput at the receiver, there's a throughput at any intermediate point between the two, but there's no such thing as overall throughput of the whole system. And throughput make sense only for a set of messages; there's no such thing as throughput of a single message.
As for the relationship between throughput and latency, it turns out there really is a relationship; however, the formula involves integrals and we won't discuss it here. For more information, read the literature on queuing theory. There are many more pitfalls in benchmarking the messaging systems that we won't go further into. The stress should rather be placed on the lesson learned: Make sure you understand the problem you are solving. Even a problem as simple as "make it fast" can take lot of work to understand properly. What's more, if you don't understand the problem, you are likely to build implicit assumptions and popular myths into your code, making the solution either flawed or at least much more complex or much less useful than it could possibly be.
Critical Path
We discovered during the optimization process that three factors have a crucial impact on performance:
Number of memory allocations
Number of system calls
Concurrency model
However, not every memory allocation or every system call has the same effect on performance. The performance we are interested in for messaging systems is the number of messages we can transfer between two endpoints during a given amount of time. Alternatively, we may be interested in how long it takes for a message to get from one endpoint to another.
However, given that ØMQ is designed for scenarios with long-lived connections, the time it takes to establish a connection or the time needed to handle a connection error is basically irrelevant. These events happen very rarely and so their impact on overall performance is negligible.
The part of a codebase that gets used very frequently, over and over again, is called the critical path; optimization should focus on the critical path.
Let's have a look at an example: ØMQ is not extremely optimized with respect to memory allocations. For example, when manipulating strings, it often allocates a new string for each intermediate phase of the transformation. However, if we look strictly at the critical path (the actual message passing), we'll find out that it uses almost no memory allocations. If messages are small, it's just one memory allocation per 256 messages (these messages are held in a single large allocated memory chunk). If, in addition, the stream of messages is steady, without huge traffic peaks, the number of memory allocations on the critical path drops to zero (the allocated memory chunks are not returned to the system, but reused repeatedly).
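To illustrate the batching idea, here is a toy sketch of the general technique; it is not ØMQ's actual allocator, and the chunk size and struct layout are invented for the example. Small messages are carved out of larger chunks, and consumed chunks are recycled instead of freed:

#include <stdlib.h>

#define MSGS_PER_CHUNK 256   /* one malloc per 256 small messages */
#define SMALL_MSG_SIZE  64   /* illustrative maximum small-message size */

typedef struct chunk {
    struct chunk *next;                                  /* free-list link  */
    unsigned char slots[MSGS_PER_CHUNK][SMALL_MSG_SIZE]; /* message buffers */
} chunk_t;

static chunk_t *free_list = NULL;           /* fully consumed chunks, kept for reuse */
static chunk_t *current   = NULL;           /* chunk we are currently carving from   */
static int      used      = MSGS_PER_CHUNK; /* force a chunk grab on first use       */

/* Hand out a buffer for one small message. malloc is hit at most once per
   256 calls, and not at all once a steady message stream lets us recycle. */
unsigned char *alloc_small_msg(void)
{
    if (used == MSGS_PER_CHUNK) {
        if (free_list) {                    /* reuse a recycled chunk: no allocation */
            current = free_list;
            free_list = free_list->next;
        } else {
            current = malloc(sizeof(chunk_t));
            if (!current)
                return NULL;
        }
        used = 0;
    }
    return current->slots[used++];
}

/* Once every message in a chunk has been released (real code would track
   that with a per-chunk reference count), put the chunk back for reuse. */
void recycle_chunk(chunk_t *c)
{
    c->next = free_list;
    free_list = c;
}

On a steady stream the free list never runs dry, so the critical path performs no allocations at all, which is exactly the behaviour described above.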
Lesson learned: Optimize where it makes a difference. Optimizing pieces of code that are not on the critical path is wasted effort.
Posted by: Stephanie Carmichael
We chatted with Dan Pinchbeck, creative director for thechineseroom, about the studio's next two titles, Everybody's Gone to the Rapture and Amnesia: A Machine for Pigs. He talks about what we can expect from both games, what's changed this time around, and how far the developer has come since Dear Esther.
GameZone: What's your role at the company?
Dan Pinchbeck: I'm Creative Director for both games, which basically translates as Writer, Producer, Lead Designer.
GZ: Let’s start with Everybody’s Gone to the Rapture — described as the natural follow-up to Dear Esther. In what ways, and how is it similar/different?
DP: It's a pure story game, like Esther, so it's driven by exploration of both a world and a fiction. This takes prominence over traditional goal or skill-based gameplay. The big difference is that it's open-world, so it's much more non-linear than Esther. You have a single, very large environment, and the choice of how you explore it is completely up to you and has a real impact on the way the story gets delivered and what versions of events you might find. The world is also much more dynamic than Esther's island — it responds to you, your actions, and your explorations, and that affects the narrative delivery, as well. So you're much more grounded in the world for this game. Your actions have a real impact.
GZ: The game is based around six characters. Why six, and can you tell us about their roles?
DP: That's one of those organic decisions that's actually hard to answer! Six fitted the world and the story quite naturally. Once we had the basic scale-testing done, Andrew (Crawshaw, our game designer) and I spent a lot of time in the world, getting a sense of scale, travel-times, etc. I started writing concept fiction around this as well, and we ended up with six characters. They are represented in the world, so you can interact with them, but not in a traditional NPC sense, and they also have their own journeys, which you can be part of or not. You'll have to make choices about whom you engage with, and that will limit your choices in other directions. It's still pretty early. We're still working through the options here. But they are actively part of the storytelling experience — not absent like the characters in Esther.
GZ: You’re working with CryEngine 3, whereas Dear Esther used Source, correct? What possibilities has the engine opened up for you, and what was the transition like from one to the other?
DP: Oh, we're big fans of CryEngine — and Crytek have been fantastically supportive. Basically, when I was doing early work on the game, I checked out a lot of engines, and CE3 was the only one that really gave us everything we needed, particularly in the relationship between how big the game world is and the level of presentational finish I wanted the game to achieve. I've wanted to use CryEngine since the first Far Cry game. It's really powerful and open — you can do huge amounts with scripting before you get anywhere near the code, which is brilliant. It's a new art team on Rapture — Ian Maude and Dan Cordell — so they've come at the engine completely fresh and are doing amazing things with it.
GZ: Everybody’s Gone to the Rapture is set in a post-apocalyptic, open world. Can you describe it more for us? How open-world are we talking?
DP: Completely open. It's one giant environment. I was in it yesterday doing some more writing, and it keeps taking me aback at just how bloody big it is. That's a great challenge for a writer — there's so much time and space to fill with content. It's set in rural England, so expect rolling hills, farmland, a lake, a river bed, a railway line, and station. Under normal circumstances, a sleepy, peaceful little village nestled away in obscurity out in the countryside. Only something catastrophic has occurred. Or is occurring. Or is about to happen.
GZ: Will there be voice-acting in the game, or is it text-based?
DP: Oh yeah, there will be voice-acting. Getting top quality voice-work is really important to me. But we're heavily investing in environmental storytelling, as well. And music will be central to the experience, of course. Jessica Curry, who composed Dear Esther, is co-director of thechineseroom, so music is always at the core of what we do. The early soundtrack for Rapture is sounding brilliant. And it's all dynamically generated according to your explorations and the history of what you've done, without descending into naff ambient muzak. Which is testament to Jess's skill as a composer.
GZ: What can players expect, gameplay-wise?
DP: It's first-person exploration, but this time you have much more impact on the world and options for exploring. So you get the standard set of FPS controls now — jump, sprint, crawl. You can interact with objects and will be expected to engage with objects and characters to push the story forward. And the world is responding to you all the time.
GZ: Let’s talk about Amnesia: A Machine for Pigs. First off, that opening quote: "This world is a machine. A machine for pigs. Fit only for the slaughtering of pigs." What does it mean, and where did it come from?
DP: That comes from my sick, sick head. It just popped in and had to be used. I'm not telling you what it means. You can worry about that yourself.
GZ: You’re collaborating with Frictional Games, who started the very popular, very scary Amnesia series. How did you become involved with the sequel?
DP: So Frictional are working on a new game but wanted to get another Amnesia out and didn't have time to do it, so they were looking for a studio to take it on. We know each other anyway, mutual fans, and we just got talking. It cemented over a few beers at GDCE last year, and we got cracking. Frictional are executive producers for the project, and we're making the game. It's a lot of fun.
GZ: Are you still aiming for a Halloween release?
DP: No, it's just slid back to early 2013. The quality of the game is the absolute first, last, and always with this development, and we felt it could do with a few more months' work to make sure that's really going to happen.
GZ: A Machine for Pigs isn’t a direct sequel — but it does take place in the universe, 60 years or so later. What’s changed?
DP: We're in the thick of Victoriana — empire, invention, social revolution, spiritualism. Rampant industrialization along with the attendant issues of dehumanization, poverty, disease, racism. It's right at the turn of the century, so there's that panic of what's coming next, and then there are these powerful industrialists pushing society forward, without any real thought to consequences. A really exciting but very dark age. So the central question for us is how does the universe of Amnesia fit within that brave new world.
GZ: Who is Oswald Mandus, exactly?
DP: He's a wealthy industrialist who made his fortune in the livestock industry but suffered a horrendous family tragedy and has been struck down with a terrible disease while exploring the ruins of the Aztecs and Maya in Mesoamerica. He wakes up from disturbing dreams at the start of the game and begins to try and piece his life back together, to understand what has happened to him.
GZ: You’ve said before that the game will be true to its predecessor, only pushed in new and interesting directions. What’s this new vision like, and does that extend to story, gameplay — everything? Can you talk at all about what’s in store?
DP: The central thing about Amnesia: The Dark Descent is that the player's experience is at the core. It takes precedence of mechanics, over more formal gameplay features as such. So we're preserving that emotional journey — that rollercoaster of fear, awe, desperation, loneliness, sadness, disgust, and terror. There are some changes of course, to keep things fresh, but it will very much feel like an Amnesia game. But what we're hoping for is a very different experience at the same time. I'm not going to say anymore about that yet, though ...
GZ: How has working on these two games been a growing and learning experience for you and the studio?
DP: Oh yeah, if you are not growing and learning with every title then you're getting something very wrong. I'd hate to be churning out the same thing over and over. It was really important for us to make a clear break from Esther, do some different things. I'd hate to be pigeonholed.
GZ: What are you excited for players to discover?
DP: That Esther was just the start. We've got a whole lot more on the way.
Follow @wita on Twitter for tales of superheroes, plumbers in overalls, and literary adventures.
Tags: thechineseroom, Dan Pinchbeck, interviews, Everybody's Gone to the Rapture, Amnesia
There seems to be a common misconception going around about independent game developers. More often than not, you see people all over the internet praising them for their ideas while simultaneously emphatically talking about how they did so much with so little. This may refer to the technology and resources indie devs have at their disposal, the amount of time they spend making their games, or the monetary funds they have available. Ultimately, it seems as though fans are both applauding indie efforts and dishing out insults in disguise.
It’s true that most of the time indie devs don’t have the same technology and funding as major publishers and developers. Companies such as Activision, EA, and Ubisoft among others are known for their huge reveals, cinematic trailers, and costly investments. Indie devs don’t have that luxury, but they also don’t need that luxury. Sure, a few million bucks would definitely go a long way in helping to create an impressive indie game, but that’s not what that particular spectrum of the industry is about. Indie games are about personal, meaningful, and heartfelt experiences that really reflect the developers’ intentions and spirit.
Thomas Was Alone didn't have the biggest budget, but its art style, sound design, narrative, and gameplay were absolutely brilliant.
I recently watched an episode of Rev Rants, a now-defunct web series where Anthony Burch, who most recently was credited with writing Borderlands 2, discussed the topic of “indie mercy.” In his video, Burch talked about the skewed mentality of gamers who say things like, “Well, the graphics are pretty good for an indie game,” and, “The graphics aren’t that good, but it’s okay because it’s an indie game.” Those are probably two of the most insulting comments anyone could possibly ever make about an indie game. By approaching an indie endeavor with that mindset, individuals are already downplaying the efforts, abilities, and potential of said endeavor’s creator.
Indie developers don’t want to be seen as weaker game makers who make do with that they’ve got. They want to provide experiences that are unique and personal. To say that these developers “do the best they could” is insulting and harsh. What these developers do, in actuality, is work around their limitations and create experiences that they’re proud of. They don’t do the best they could; they do something that they want to do because it’s special to them and they want to share it with the world. Indie studios don’t have the same amount of resources as huge companies, but that doesn’t mean they want to be seen as inferior competitors to those companies.
A heated topic recently arose when Valve implemented a $100 fee to any developers attempting to get their game approved through Steam Greenlight. Detractors argued that asking these small studios – some of which consisted of five individuals or less – to pay up was an unfair strategy because “indie devs don’t have a lot of money.” These ignorant individuals brought classism into their argument, essentially saying that small studios shouldn’t have to pay money to get their games on Greenlight. To quote Danielle McMillen, wife of Super Meat Boy designer Edmund McMillen, “Bottom line about the Greenlight thing, if you can’t scrounge and save $100, you don’t have what it takes to achieve your goals anyway.”
While Robox boasted beautiful graphics, it suffered from poor controls and lackluster gameplay.
When people used classism as a point for their argument, they seemingly did so to come off as heroes, defending the indie devs that don’t have millions of dollars to create, launch, and sell their games. At the same time, these people brought up a topic that had absolutely no place in the matter. The Greenlight requirement of $100 wasn’t about class; it was about filtering out the garbage submissions that too many individuals had already begun submitting. Mike Bithell, the developer who brought us the brilliant Thomas Was Alone, explained his thoughts on the matter, saying, “I could understand criticisms of whether this is a good way to stop spam (I’m on the fence), but the idea that reactions to this problem define my economic politics, standing, or class makes zero sense to me.”
Bithell was one of many individuals who stated that this whole mess didn’t have anything to do with classism. A large majority of the detractors reacted by blindly saying that these folks felt that way because they were “rich” or “had a lot of money.” The funny thing is that a lot of people who were okay with the $100 fee (as well as those who didn’t make a big deal about it) were indie devs and indie game fans – people who understand that the independent side of the gaming industry isn’t exactly the wealthiest. By childishly resorting to “it’s because you’re rich” arguments, individuals took on the role of sanctimonious douchebags who thought they had a better moral understanding of the indie game industry.
Ultimately, these hypocritically pious people delivered an argument that was not only wrong, but insulting to indie devs. Studios that create independent content are well aware that they don’t have the biggest budget for their games; they don’t need people feeling sorry for them because that accomplishes absolutely nothing. If Indie Game: The Movie taught me anything, it’s that indie devs can rise to the occasion and overcome the odds. They can deliver a hell of a gaming experience that’s on par with content created by major publishers and developers.
Mighty Jill Off can be beaten in 15 minutes, but its challenging gameplay, pleasant visuals, and refreshing plot make it a great example of a quality gaming experience at its finest.
And that brings me to my next point: Not every indie game out there is good. I adore indie games and developers and proudly have a vast library of downloaded titles from studios like Die Gute Fabrik, Smudged Cat Games, VBlank Entertainment, and Zeboyd Games to name a few. The content I’ve played from these developers is some of the most entertaining I could ever ask for, with games such as Where Is My Heart? and Retro City Rampage being among my favorite indie titles. But if a game has poor graphics, weak controls, and terrible gameplay, and people dismiss that because it’s indie, it’s not helping the industry get better, and it’s not helping the indie studio in question to improve and create better content.
Gamers need to understand that indie devs shouldn’t be treated like feeble individuals. They’re competent companies (for the most part) that work around their limitations and create content that’s different from what the major organizations are doing. They do things their way, and oftentimes they struggle. But they manage to jump the hurdles placed before them to give us, their audience, something unique and entertaining. Games like Johann Sebastian Joust, Hotline Miami, and Spelunky give us experiences that are different and completely unlike Assassin’s Creed, Call of Duty, and Mario. Is endearment toward indie devs justified? Of course it is, because these folks rise above their hardships, work their asses off, and give us some great content. But by no means should any studio, no matter how big or how small, get a free ride. In the long run, any individual with a passion for making games will appreciate fair, honest, and insightful criticism.
Tags: Indie games, Steam | 计算机 |
Posted by Web Hosting Blogger on April 7, 2010 in Banter Technorati Tags: linux,GNU,open source
The Linux operating system has come a long way in a short period of time, working its way up from a mere curiosity to a real player on the world stage. As more and more businesses embrace the power of Linux and explore what this open source operating system can do, it’s interesting to look back and see where its been, and the struggle it (along with many other open source software) has in front of it.
The history of the Linux operating system dates all the way back to 1991, a short time span on the world stage but an eternity in the world of technology. It is often difficult to remember just how primitive computer technology was just a few decades ago, but consider this – at the beginning of the 90’s, DOS was still the preeminent operating system, and the GUI interface we know today was still pretty much a rarity. Users who wanted a friendly interface could turn to Apple, but the high prices of those machines generally put them out of reach of business owners and all but the wealthiest individual users.
While Bill Gates was busy working with the version of DOS he had purchased for the princely sum of $50,000, other computer enthusiasts were plugging away at an alternative computing platform. The enthusiasts of Unixworld were working on their own computer systems, but Unix itself was quite a pricey option, and not a viable one for the business community. Unix was instead the playground of computer science majors, and the source code behind the operating system was (at the time) a closely guarded secret.
One of the first forays into the world of the open source operating system was MINIX, an operating system that was written from the ground up by Professor Andrew Tanenbaum as a way to teach his students the inner workings of a computer operating system. While MINIX was not an exceptional operating system, it was unique in that its source code was freely available, allowing students and computer enthusiasts to explore how the system worked. This open source approach also allows students to tinker with the program and instantly see the results of their changes.
One of the students who enjoyed tinkering with the open source MINIX operating system was Linus Torvalds, who would later become the father of the Linux operating system. At the time Torvalds was a 21 year old student at the University of Helsinki in Finland. But even at that young age, Torvalds loved to explore the inner workings of computer systems, pushing them to their limits to see what would happen.
Another project that played a big role in the development of open source software in general, and Linux in particular, was the GNU project. Proponents of the GNU system argued that software should be freely available, and they were working to make their vision a reality. Those in the GNU camp felt that by making the source code freely available, computer enthusiasts could constantly tweak that software to make it better and more user friendly.
As the work went on more and more people jumped onto the open source bandwagon. Powered by the many newly developed programs from the GNU system, the Linux operating system Torvalds had created became more robust and much more powerful for business users.
What had begun as a mere side project had become a viable operating system alternative, making even major players like Microsoft sit up and take notice.
And Modern Confusion
Today the Linux operating system is a real threat to the dominance of proprietary systems as many companies discover how stable, reliable, and easy to work with Linux systems really are. So much so that it seems the big boys are getting a little nervous.
In February, the International Intellectual Property Alliance, an umbrella group for organizations including the MPAA and RIAA, requested the Office of the United States Trade Representative consider countries like Indonesia, Brazil and India for its “Special 301 watchlist” just because they use open source software.
The Omnibus Trade and Competitiveness Act of 1988 created the Special 301 mechanism. The Office of the United States Trade Representative (USTR) issues an annual Special 301 Report which “examines in detail the adequacy and effectiveness of intellectual property rights” in many countries around the world. If countries make it on the Watch List, it means that they appear to not be respecting copyright rights, and the IIPA appears to feel that using open source software and recommending people use open source software is enough for a country to get itself designated as a bastion of piracy.
While this is clearly a political move designed to keep the coffers of expensive software companies full of gold, it shows the attempts at the top to create confusion and fear around open source software – which can trickle down to the masses.
In 2008, Linux Austinite folks watched in something akin to horror as an Austin Independent School District teacher made the national Linux blogosphere news for confiscating Linux CDs from her student in an unnamed middle school in Austin, Texas and firing off a letter to the founder of a non-profit attacking him for “falsehoods” because “no software is free” and she believed spreading that “misconception” is “harmful” to the kids.
Austin’s nickname is “The Silicon Hills” due to our being the home of many development, manufacturing, and office centers for many technology corporations, including 3M, Apple Inc., Hewlett-Packard, Google, AMD, Applied Materials, Cirrus Logic, Cisco Systems, eBay/PayPal, Hoover’s, Intel Corporation, National Instruments, Samsung Group, Silicon Laboratories, Sun Microsystems and United Devices. Our Governor, Rick Perry, has offered 1.4 Million in incentives if Facebook will come here and set up shop. (We’ve been kinda depressed after Google broke up with us after only four months.)
Needless to say, Austinites tech view of themselves did not account for an exchange like this taking place in one of our schools, nor did we ever think we would read an Austin teacher write the following words to HeliOS‘s project founder, Ken Starks:
…observed one of my students with a group of other children gathered around his laptop. Upon looking at his computer, I saw he was giving a demonstration of some sort. The student was showing the ability of the laptop and handing out Linux disks. After confiscating the disks I called a confrence with the student and that is how I came to discover you and your organization. Mr. Starks, I am sure you strongly believe in what you are doing but I cannot either support your efforts or allow them to happen in my classroom. At this point, I am not sure what you are doing is legal. No software is free and spreading that misconception is harmful. These children look up to adults for guidance and discipline. I will research this as time allows and I want to assure you, if you are doing anything illegal, I will pursue charges as the law allows. Mr. Starks, I along with many others tried Linux during college and I assure you, the claims you make are grossly over-stated and hinge on falsehoods. I admire your attempts in getting computers in the hands of disadvantaged people but putting linux on these machines is holding our kids back.
This is a world where Windows runs on virtually every computer and putting on a carnival show for an operating system is not helping these children at all. I am sure if you contacted Microsoft, they would be more than happy to supply you with copies of an older verison of Windows and that way, your computers would actually be of service to those receiving them…”
Karen xxxxxxxxx
xxxxxxxxx Middle School
The “attempts at getting computers into the hands of disadvantaged children” would be a reference to the HeliOS Initiative, an Austin non-profit that refurbishes old computers and gives them to kids that can’t afford it (which Mr. Starks co-founded). You can read Ken’s (seriously annoyed) response to the teacher on his blog here:
http://linuxlock.blogspot.com/2008/12/linux-stop-holding-our-kids-back.html
It is a case in point for the type of misinformation there is out there about Linux. Many of our own clients seem to not know that the operating system the server runs on (CentOS) is open source and “free”. The web server the sites run on, Apache, is open source and “free”. The Courier Mail Server, Pure-FTP server and on and on.
The WordPress Software you use on your sites is open source. So’s Joomla. So’s Drupal. Apache, the open source web server, still runs more web sites than any other software (Netcraft Web Server Survey, Market Share for Top Servers Across All Domains August 1995 – February 2010).
So, if you think the history of Linux and the open source movement, and the attacks on it, are something that doesn't concern you, we hope this article will help you think again.
Internet & Telecommunications
Action 98: Support the Internet Governance Forum
Support the continuation of the Internet Governance Forum beyond 2010.
The Internet Governance Forum (IGF) is a multi-stakeholder forum to discuss policy issues on internet governance. The IGF was formally established by the United Nations Secretary-General in July 2006. The main themes addressed by the IGF are openness, security, diversity and access; ideas are discussed and tested. Participants include ministers, government officials, internet activists, industry representatives, academics and technology experts. Its value added is to provide a free platform where ideas and initiatives can feed into the policy making process.

What has the Commission done until now?
The Commission is an active supporter of the IGF and part-funds its budget. Commission services participate actively in the IGF's work, and Vice-President Neelie Kroes has personally taken part in the IGF editions of 2011 (Nairobi) and 2012 (Baku). The Commission also works within the UN framework to raise the impact of the IGF on policy discussions which take place in other fora.

What will the European Commission do?
- Participate in the debate on the role of the IGF, defending the multi-stakeholder approach, and raising the IGF's political profile to ensure it has a larger role in the policy making process.
- Support the implementation of the recommendations made by the UN working group to improve the functioning of the IGF.
- Work on the organisation of the next IGF (Bali, Autumn 2013).
- Continue sustaining the functioning of the IGF secretariat.
Press Releases
NETmundial: European Commission to take leading role at global conference on internet governance
Stopping a Digital Cold War
Policy/Legislation: Communication on Internet Policy and Governance
Policy/Legislation: Digital Agenda for Europe: Communication from the Commission (26/08/2010)
The Internet needs better governance, starting now
(Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, 23/04/2014)
A European vision for Internet governance
Taking care of the Internet
(Neelie Kroes, Vice-President of the European Commission responsible for the Digital Agenda, 27/09/2011)
The Top 10 List of Worst Business IT Decisions
By Paul MurphyLinuxInsider
Personally, I'd put DEC's failure to recognize that commercial VMS users weren't remotely like mainframers in solid second place, although I can think of some other contenders too -- including AT&T's purchase of NCR, the Defense Department's choice of staff and criteria in the development of ADA, and Intel's decision to continue 64-KB block addressing in the i80286.
About a month ago we had some people over for dinner and the discussion, at least on my side of the table, drifted to top-10 lists of the Letterman variety. What had happened to make that topical was that Canada's entry into the late-night talk-show format had just been cancelled and the local papers were snitting about an American talk show host whose "insult puppet" had taken a round out of some walking embarrassment from Quebec.
As part of that conversation, I got challenged to name the top 10 worst IT decisions ever -- something I couldn't do then and still can't do now, which is why I'm asking for your help in defining the criteria needed and then identifying examples.
The "rules," as I made them up, are that the technology component of the decision must have been subordinate to the business decision, the decision makers must have had a clear choice, the outcome must have been a business disaster, and the documentation supporting the story should be reasonably clear and comprehensive.
Thus the story about Kmart's top executive deciding to stop an early and successful supply-chain pilot from going to full implementation -- thereby handing the chain's market to Wal-Mart simply because Kmart was unable to see supply chain as more than an expensive inventory management tool -- would be a perfect example if the documentation for it were strong enough for us to be sure it really happened that way.
Unfortunately, the conditions surrounding business IT decisions big enough to make the list are usually underreported relative to their real complexity. In most cases, we don't really understand what went on and so can't fairly judge the quality of the decision. For example, a lot of people I talked to about this immediately nominated IBM's decision to use Microsoft's MS-DOS instead of CP/M as the worst such decision ever made, but the evidence on that is at best inconclusive.
In fact, I think there's a better case for considering the decision to license a new -- and very limited -- OS from Microsoft a brilliant example of creative corporate maneuvering, rather than a mistake.
Technically Indefensible Decisions
The key factor driving many of the decisions made by the PC development team at IBM was fear that their project would meet the same fate as the "future systems architecture" had in 1972.
Now sold as the iSeries but introduced seven years late as the System 38, the database architecture had been developed as a replacement for the card-based transactions model enshrined in the System 360 and was clearly better, smarter, faster and less expensive. Unfortunately, IBM's System 360 customer base rejected absolutely everything about it, and IBM's board caved in, almost literally at the last minute, to order the product shelved.
Eight years later, in 1980, the System 360 architecture was even more deeply entrenched. Knowing this, PC developers believed that anything perceived as powerful enough to threaten the mainframe would be impossible to get past the board.
As a result, they picked the 8088 -- an eight-bit compatible downgrade to Intel's already uncompetitive 8086 -- instead of the MC68000, turned down both BSD and AT&T Unix for their machine, omitted both real graphics and a hard disk from the design, killed an internal effort to port the original PC client software from the IBM 51XX series, and contracted for a very basic OS from Microsoft to gain the support of William Gates II, father to the version 3.0 we all know and at that time an influential advisor to the IBM board.
In other words, their decisions, while technically indefensible, almost certainly have no place on a top 10 list of the worst-ever business IT decisions. Weakening the product in these ways got IBM into the game and may well have been required to get the company's PC to market at all.
Destruction of DEC
The destruction of Digital Equipment Corporation (DEC) at the hands of its own management might be a much better bet, despite the ambiguities in the record. By about 1988, the company had grown to be IBM's biggest competitor by building high-quality hardware and leaving software development mainly to the research community and its commercial spin-offs. At about that time, however, some of the company's most senior executives fell victim to their own success and started talking about becoming IBM by leveraging their proprietary software to grow services revenues.
What this really meant was that they just didn't understand their own markets: Commercial VAX users simply didn't behave the way IBM's mainframe community does and were fundamentally uninterested in buying support services for packaged applications that generally already worked as advertised.
As a result, when DEC started the Open Software Foundation -- an organization founded with IBM and seven others in an attempt to bring Unix development under proprietary control -- and cut off new development funding for Unix users, it effectively cut its deepest roots and left itself with no place to go once the services experiment failed.
By the time DEC's executives understood this, however, the situation had changed. On the positive side, the Alpha CPU was an emerging success, but most of the innovators who had driven the company's growth had switched to Sun- and Intel-based Unix, while AT&T's adventure with NCR -- itself a clear candidate for this top-10 list -- had taken the strategic value out of the OSF.
In theory, the board could have admitted failure at that point and set out to rebuild developer loyalty, but instead it dithered for nearly two years before betting what was left of the company on Microsoft's promise to take over for the research and other software innovators who had been driven off.
Coming Up with a Sensible List
As a business strategy, trying to outsource innovation to Microsoft turned out to be both naive and suicidal for DEC. But you can see how the people involved were led to that decision one mistake at a time. HP's current executives, however, have no such excuse: With DEC's example in front of them, they still chose to direct their company down the same path to oblivion -- blissfully assuming that the combination of a hot new server CPU and Microsoft's historic commitment to software innovation would open the road to IBM-like services revenues.
Wrong on all counts, of course, but it does have the virtue of nailing down HP's "Itanic" partnership with Intel and Microsoft as the leading candidate for top spot on this list of the worst-ever business IT decisions.
Personally, I'd put DEC's failure to recognize that commercial VMS users weren't remotely like mainframers (in their spending or thinking) in solid second place, though I can think of some other contenders too -- including AT&T's purchase of NCR, American Microprocessor's decision to lay off the first microprocessor design team, the Defense Department's choice of staff and criteria in the development of ADA, and Intel's decision to continue 64-KB block addressing in the i80286.
On the other hand, I'll bet you have some ideas for both examples and criteria, and that's what this column is about: asking for your help in coming up with a sensible list.
Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.
More by Paul Murphy | 计算机 |
Anonymous ................The Grand Deception
by The_Joker » Sat Nov 16, 2013 9:25 pm The Anonymous-Op "Anonymous" has been discovered as a Government Psychological Security Agency creating fake hacking attacks as an excuse for Internet Control Legislation. Observe the 'Anonymous' Logo on Google Images, with the European National Council. It is also observed that "Anonymous" receives sufficient Media Air-Time on most of the 'main' Satanic tele-vision stations such as CNN (who owns CNN?), that if "Anonymous" were so anti-Government, then why does "Anonymous" receive more attention than the 9/11 Truth Movement, who have never received much attention by the Satanic Media. The channel CNN and other "news" channels are indeed thrusting "Anonymous" into the 'public' spotlight to assist the World Governments to psychologically condition the masses to accept the anti-Christ Laws being passed through their Satanic Parliaments and Synagogues/Universities.We know that 95%, or perhaps more, Internet Websites are being used by The Satanic Hierarchy to gather information data to monitor the collective consciousness within its nations. Was it not Vladimir Lenin who quote, "The best way to control the opposition is to lead it ourselves". End quote.It is also observed that 'Anonymous' are formatted in every nation where the Satanic Hierarchy are subjugating their Satanic anti-Christ Legislation's and draconian Laws. Another observation is when numerous of thousands of living souls were marching in New York saying '9/11 is an inside job", the main-stream Satanic Media was completely silent! So the Satanic Media have employed their duped sheeple employees to brag and boast about "Anonymous" in 'their' great efforts to 'shut-down-the-governments!' and continue to give "Anonymous" plenty of Media exposure! Why?Those who state on their own Websites 'the-truth' have more responsibility than ever to endeavor that what they are stating is the truth, need to make sure that it is the truth beyond reasonable doubt. Do you agree?Those who are truly spiritual born-again and regenerated by the power and assistance of the holy Ghost, through Jesus Christ, unto the LORD (yes, CAPITAL LORD, because His children are subjected unto Him), are given the true discernment to know what His truth is, and to know the lies and deceptions of Satan and his cohorts/agents.We must see the true modus operandi of Satan and his agents' agenda."Anonymous" also has artistic and professionally advanced videos with amazing sound effects and graphics. Have you noticed? The type of video editing, and sound effects you might expect to see on [H]olly-wood and Media News Broadcasts."Hello Citizens of the world, We Are Anonymous". Really!Again, please observe 'the world in a cage, with the anti-Christ United Nations olive branches in the background, which is the equivalent logo used for "Anonymous". When violence and so called terrorists have been erupting in geographical regions such as 'Anti-Islam movie sparks riots in Egypt; angry mob kills American in Libya', which was a film depicting Muhammad as a fraud, a womanizer and a madman, and financially funded by 100 Judaic donors, along with other middle-eastern countries, it has also been observed that Egyptian protestors wearing Guy Fawkes masks who represent themselves as "Anonymous" have been caught wearing 'Press-Media' Identification Passes around their necks and on their Persons. 
Interesting eh!How many "Anons" are actually members of the Satanic Media Press?The perfect excuse for Internet Censorship, is to 'shut-down' Federal Security Police/Agencies and Government Websites and alleged that the group "Anonymous" have hacked their Websites and are attacking them via the Internet. Good Hoax!We can always be certain that the major Satanic Media, such as CNN, will not be monitoring the 9/11 truth movement whatsoever, but will broadcast "Anonymous"."Anonymous" is 'legion'; We Do Not Forget! Legion was the name of a Devil that the Lord Jesus Christ cast out.It has also been observed that "Anonymous" is being used to 'shut-down' Websites not friendly to NATO!"Anonymous" is a Pro-Government Group posing as "The Opposition".The truth is, "Anonymous", is the the Government.Think about it every "Anonymous Hacker" involved in these attacks on the Government have NEVER been caught...how is that possible when the RFID chip in your Credit Card or Driver Licence can be tracked to within a yard or so of your actual location or when in today's world there is no such thing as anonymity when buying a phone, gettign a phone , getting an ISP etc.
Re: Anonymous ................The Grand Deception
by Nesaie » Wed Nov 27, 2013 7:27 pm Link? | 计算机 |
Sao Paulo 2014
Executive Briefings
Over eighty Black Hat sessions crowded into two days is a lot. So it helps to get some advance sorting and some info-distillation—those are the key missions of the Executive Briefings. We have designed the day to help parse the information deluge, guiding your team deployment strategy for team members who have accompanied you to Black Hat.
With over twenty sessions touching on Application Security, and with HTML5 evolving rapidly, Nathan Hamiel will provide an overview of what is coming in this fast-moving space.
Black Hat Review Board member Nathan Hamiel, who is chairing the Web Application Security track, serves as Principal Consultant for FishNet Security's Application Security Practice. He is also an Associate Professor of Software Engineering at the University of Advancing Technology.
Nathan Hamiel
Mobile devices were at the leading edge of the 'bring your own device' trend, and have evolved into one of the more interesting attack targets in the enterprise. It is true that the mobile track includes elements you might expect, but there are more unconventional topics emerging as well. Vincenzo, the track chair, says it all has something to do with peeling an onion.
Presentation by Black Hat Review Board member Vincenzo Iozzo, Director of Vulnerability Intelligence at Trail Of Bits Inc. He's perhaps best known for co-writing the exploits for BlackBerryOS and iPhoneOS to win Pwn2own 2010 and Pwn2own 2011.
Vincenzo Iozzo
Things Defensive
Defense doesn't always get as much airplay at Black Hat, where much of the buzz tends to focus on breaking things. Defense, always harder than offense, gets attention from some great minds at Black Hat this year. Presentation by Black Hat Review Board member Shawn Moyer, who manages the Research Consulting Practice for Accuvant Labs.
Shawn Moyer
Special Guest Speaker
An update on national issues from DHS Deputy Under Secretary for Cybersecurity Mark Weatherford.
Mark Weatherford
Analytical Response and Discussion
Rounding out the day's discussions and presentations, Black Hat has assembled a top-shelf panel to break down and discuss top concerns highlighted through the day, evaluate down-stream implications of upcoming Black Hat research, and help process what to do in response to this year's content.
This panel, composed of three leading analysts and two (crowd-elected) CSOs, is charged with working with the attendees to synthesize, challenge, and clarify what questions to carry forward into the next two days of Black Hat Briefings.
Joshua Corman is Director of Security Intelligence for Akamai Technologies and has more than a decade of experience with security and networking software. Most recently he served as Research Director for Enterprise Security at The 451 Group, following his time as Principal Security Strategist for IBM Internet Security Systems. Mr. Corman's cross-domain research highlights adversaries, game theory, and motivational structures. His analysis cuts across sectors to the core security challenges plaguing the IT industry, and helps drive evolutionary strategies toward emerging technologies and shifting incentives.
Corman can be found on twitter @joshcorman and on his blog at http://blog.cognitivedissidents.com/
Rob Joyce is the Deputy Director of the Information Assurance Directorate (IAD) at the National Security Agency. His organization is the NSA mission element charged with providing products and services critical to protecting our Nation's systems that carry classified communications, military command and control or intelligence information. IAD provides technical expertise on cyber technologies, cryptography, security architectures and other issues related to information assurance, as well as supplying deep understanding of the vulnerability and threats to national security systems. Joyce has spent more than 23 years at NSA, beginning his career as an engineer.
Rich Mogull, Analyst and CEO at Securosis, has twenty years of experience in information security, physical security, and risk management. He specializes in data security, application security, emerging security technologies, and security management. Prior to founding Securosis, Rich was a Research Vice President at Gartner on the security team where he also served as research co-chair for the Gartner Security Summit. Rich is the Security Editor of TidBITS and a frequent contributor to publications ranging from Information Security Magazine to Macworld. He is a frequent industry speaker at events including the RSA Security Conference and DefCon, and has spoken on every continent except Antarctica.
Kevin Overcash, Chief Software Architect of Accuvant, has been designing and building commercial software products and services for over fifteen years. Starting with Internet Security Systems' (ISS) Internet Scanner in the late 90's, he has designed and served as product manager for ISS RealSecure IDS, SPI Dynamics WebDefend and Assessment Management Platform (AMP), Breach Security WebDefend Web Application Firewall, and most recently the WhiteHat Sentinel Web Application Assessment Service. Mr. Overcash has been speaking at industry events for over a decade, including SANS and RSA.
Premium & Dinner Co-Sponsor
Qualys, Inc., is a pioneer and leading provider of cloud security and compliance solutions with over 5,700 customers in more than 100 countries, including a majority of each of the Forbes Global 100 and Fortune 100. The QualysGuard Cloud Platform and integrated suite of applications helps organizations simplify security operations and lower the cost of compliance by delivering critical security intelligence on demand and automating the full spectrum of auditing, compliance and protection for IT systems and web applications. Founded in 1999, Qualys has established strategic partnerships with leading managed service providers and consulting organizations including BT, Dell SecureWorks, Fujitsu, IBM, NTT, Symantec, Verizon, and Wipro. The company is also a founding member of the Cloud Security Alliance (CSA).
For more information, please visit www.qualys.com
Philippe Courtot, Chairman and CEO for Qualys
Philippe's leadership experience includes serving as Chairman and CEO of Signio, an electronic payment company later acquired by Verisign, President and CEO of Verity, and CEO of Thomson CGR Medical. Philippe holds a Masters Degree in Physics from the University of Paris.
Amer Deeba, Chief Marketing Officer for Qualys
Amer came to Qualys from VeriSign, where he was the General Manager for the Payment Services Division, and has a variety of technical and management positions at Adobe, Verity and Amdahl. Amer earned MS and BS degrees in Computer Sciences.
Wolfgang Kandek, Chief Technical Officer for Qualys
Wolfgang's more than 20 years of experience in IS management includes positions at myplay.com,iSyndicate, EDS, MCI and IBM. Wolfgang earned a Masters and a Bachelors degree in Computer Science from the Technical University of Darmstadt, Germany.
John Wilson, Executive Vice President of World Wide Field Operations for Qualys
John's more than 20 years of sales and operations leadership includes roles at Verizon Business, Ubizen, The Sayers Group, Winstar Communications, and Johnson & Johnson. John holds a Bachelor of Science degree from the U.S. Military Academy at West Point and a Master of Business Administration degree from Fordham University.
Foundation Sponsor
Adobe is changing the world though digital experiences. We help our customers develop and deliver high-impact experiences that differentiate brands, build loyalty, and drive revenue across every screen, including smartphones, computers, tablets and TVs. Adobe content solutions are used daily by millions of companies worldwide—from publishers and broadcasters, to enterprises, marketing agencies and household-name brands. Building on our established design leadership, we enable customers not only to make great content, but to manage, measure and monetize it for maximum impact.
Brad Arkin, Senior Director of Security for Adobe
Brad Arkin is the senior director of security for Adobe products and services. Arkin also oversees the Corporate Standards Group as well as the open source and accessibility teams.
David Lenoe, Adobe
David Lenoe leads the Product Security Incident Response Team (PSIRT) at Adobe, responsible for Adobe's security incident response and vulnerability information sharing programs.
Event Sponsors
Core Security Technologies enables organizations to get ahead of threats with security test and measurement solutions that continuously identify and prove real-world exposures to their most critical assets. Our customers can gain real visibility into their security standing, real validation of their security controls, and real metrics to more effectively secure their organizations.
Core Security's software solutions build on over a decade of trusted research and leading-edge threat expertise from the company's Security Consulting Services, CoreLabs and Engineering groups. Core Security Technologies can be reached at +1 (617) 399-6980 or on the Web at: http://www.coresecurity.com.
BlackBerry Security, Research in Motion (RIM), is a world class organization providing end to end security focus including: driving the BlackBerry security message globally, security accreditations, development of security products, advanced threat research, building mitigations into BlackBerry products, and by rapidly responding to security incidents. More information: www.blackberry.com/security
SAIC is a FORTUNE 500® scientific, engineering, and technology applications company that uses its deep domain knowledge to solve problems of vital importance to the nation and the world, in national security, energy and the environment, critical infrastructure, and health. The Company's approximately 41,000 employees serve customers in the U.S. Department of Defense, the intelligence community, the U.S. Department of Homeland Security, other U.S. Government civil agencies and selected commercial markets. Headquartered in McLean, Va., SAIC had annual revenues of approximately $10.6 billion for its fiscal year ended January 31, 2012. For more information, visit http://www.saic.com/. SAIC: From Science to Solutions®
Dinner Co-Sponsor
Veracode is the only independent provider of cloud-based application intelligence and security verification services. The Veracode platform provides the fastest, most comprehensive solution to improve the security of internally developed, purchased or outsourced software applications and third-party components. By combining patented static, dynamic and manual testing, extensive eLearning capabilities, and advanced application analytics, Veracode enables scalable, policy-driven application risk management programs that help identify and eradicate numerous vulnerabilities by leveraging best-in-class technologies from vulnerability scanning to penetration testing and static code analysis. Veracode delivers unbiased proof of application security to stakeholders across the software supply chain while supporting independent audit and compliance requirements for all applications no matter how they are deployed, via the web, mobile or in the cloud. Visit www.veracode.com
On Andrew Keen…
Andrew Keen makes me furious but I don’t write about him as a rule. Why not? Because you don’t feed the trolls. And I don’t think I’ve ever seen anyone so clearly acting like a troll. I mean, you only have to read his post Etes-vous elitiste in which he declares that people have labelled him an anti-Christ and then uses that as a platform to sell his speaking gigs, while the right-hand column of his website lists all his media appearances. He wants to stir up an argument to get attention. We’re not supposed to enable behaviour like this in our children. We have to be firm. He must be placed on the naughty step.
Andrew is the chap who thinks that the whole internet is full of amateurish morons and that nothing rises to the top and that professional media has become corrupted and less good as a result of all this stuff. I could agree with his comments about mainstream media losing the plot if it didn’t seem to be quite the other way around. As far as I can see in the US at least, mainstream news became about entertainment way before the bloggers came along, there’s lots of money in cinema still and Harry Potter sells by the ton. I watched a TV programme about how in the US they sold Life on Earth as basically animal on animal bestial snuff movies. Presumably also the effect of the nascent internet, even if about four people in the world were using it back then. And clearly the blunt utility of Wikipedia counts for nothing, the beautiful pictures in Flickr aren’t worth looking at, Keen’s own blog presumably yet another indication of how low you now have to stoop to make an impact in the world rather than something we should celebrate – another citizen gets to express their opinion and try and persuade the world he’s right.
The thing is about this, all this conversation is a total waste of time. I don’t understand why he gets the traction he does. I mean, what is he actually trying to accomplish? Does he think that the millions of bloggers will get bored and go home if he explains why their voices don’t count? Does he think that Wikipedia will stop being useful to people (even with its inaccuracies) or YouTube will stop being entertaining? No, of course he doesn’t. He can’t honestly think he can accomplish anything. The future comes, for good or ill, whether you like it or not. The best you can do in such a situation is try and work to fix the issues you see. No market for decent commentary and opinion? Look for a business model that could support it! No way that Encyclopedia Britannica can compete with Wikipedia? Well then why not move some of the resourcing of Britannica towards creating a trusted version of Wikipedia? Check articles every so often for factual accuracy, pull them aside and enhance them and make that your business.
The world we have as a result of technologies of the internet is not a world I find particularly troubling, because it’s a world finding its feet and its a world that has also created significant beauty. It’s a world I feel comfortable in, and there is always a market for what people want and often for what people need. I don’t doubt that journalism will survive or resurge but it will have to adapt.
People like Keen are professional complainers, stirring up fights, decrying the state of the world that we find ourselves in without facing the fact that it is where we are and wishing won't make it not so. If you don't like the way the world is, then use the tools that exist and push them further and find a way to compensate for the problems that you think the existing technology has created. I'm afraid it's a cliché but it's true. You can't put the genie back in the bottle. The world we have is the world we can work with, and anyone wanting to push it back to the fifties will fail.
And that’s what really gets to me. Because it’s pretty clear that he knows this. He’s writing his own bloody blog for a start. He knows he can’t win the battle, but he’s put himself on the side of respectability, trustworthiness, reliability and is decrying all the terrible new things in the world. As I once said of Nick Carr, this is a brilliant strategy to make yourself like a terribly intelligent and responsible, serious person without actually having to go to any of the trouble of thinking. That’s why he’s a troll – because his opinion cannot do any good, cannot change anything for the better, but in its decrying of the nascent environment of millions of people finding their voices for the first time, he can get nothing but attention, media coverage and book-sales. It’s not an appeal to better standards, it’s not an appeal to quality or tradition. It has no aspirations to honour. It’s disingenuous to the core, manipulative of the people, anti-progressive, cynical and hypocritical.
Wednesday, August 17 2011 @ 08:04 AM CDT
Contributed by: Andy Updegrove
By anyone’s measure, the World Wide Web Consortium (W3C) has been one of the most important and influential standards development organizations of the information technology age. Without its efforts, the Web would literally not exist as we know it. But times change, and with change, even venerable – indeed, especially venerable – institutions must change with it. Yesterday the W3C announced the launch of a light-weight way for non-members as well as members to initiate new development projects. It allows participants to take advantage of streamlined, off the shelf tools and policies to support their efforts, as well as the intellectual support of the W3C staff and member community. Where appropriate, a project can graduate to the formal W3C development process as well. The new programs are the result of extensive discussion and consensus building that began in two ad hoc working groups in which I was pleased to be invited to participate.
The announcement is notable not only for the resources that will now be available to a broader community, but also because it demonstrates the W3C’s willingness to adapt to remain as relevant and useful as possible, part of an over-all process of reinvention that is being led by the W3C’s new CEO, Jeff Jaffe. The new groups are briefly described at the W3C Website as follows (the full press release and links appear at the end of this blog entry): To support the rapid evolution of Web technology, W3C today announced Community Groups, an agile track for developers and businesses to create Web technology within W3C's international community of experts. Community Groups are open to all, anyone may propose a group, and there is no fee to participate….W3C also launched Business Groups today, which provide W3C Members and non-Members a vendor-neutral forum for the development of market-specific technologies and the means to have a powerful impact on the direction of Web standards. The impetus for the new programs comes in part from the proliferation over the past several years of more and more ad hoc development efforts focusing on individual protocols and other Web standards. These efforts have typically been very bare bones, utilizing the same techniques employed by open source projects (indeed, they have often been launched by the same developers). In contrast to traditional standards development, such a project involved no more than a Wiki and perhaps a few additional Web pages. Not surprisingly, they have also often been launched by teams that have had little or no prior experience with the traditional standards development process, or the reasons why that process has evolved to operate in the way that it does.
This utilitarian approach allowed for very rapid development, but it also meant that the resulting work product was created without the benefit of any of the supporting infrastructure, tools and protections that a traditional standards development project provides: The developers were not bound by the terms of an intellectual property rights policy, meaning that a participant with ill intent could set up a “submarine patent” trap without worrying about the legal consequences. There is no legal organization to own a trademark in the protocol to ensure that claims of compliance cannot be made when in fact a product is not compliant, thus jeopardizing the credibility of the protocol or standard. There may be no one to provide ongoing support for the effort if the participants later drift away. There is no organization to promote the work product, or to lend credibility to the result. There is no in-place pool of members to provide breadth of input to maximize the quality of the result, or to act as a springboard for broad and rapid adoption when the effort is complete. The result is that a given protocol or other standard may be much less likely to become broadly adopted, because vendors will be unsure of the quality of the work product and the likelihood that it will be adopted by others. Most importantly, they will also worry about the infringement risk that may result from incorporating the technology into their products.
One response to this phenomenon was the formation of the Open Web Foundation, which has worked to provide appropriate and appealing IPR frameworks which such groups can utilize to support their processes. But these tools address only part of the gap between a protocol group and a traditional standards organization.
It would be fair to ask, then, why the same individuals that have been launching protocol groups have not simply gone to the W3C’s of the world to begin with?
The answers are several, including the fact that many of these organizations accept only corporate, government and non-profit members, and many of the protocol groups have been launched by individuals. Most of the time, there are also dues to be paid. In some organizations (including the W3C), the process between idea and adoption has become lengthy and bureaucratic, contrasting poorly with the quick pivot and shoot culture of open source development.
It’s good news, then, that the W3C is rolling out the red carpet for rapid innovation. Now, individuals will be able to utilize W3C resources to launch an activity at no cost, and without hindering their efforts. If the initiative is successful and appropriate to the mission of the W3C, those that launched the effort will have the opportunity to propose further development within the more formal process of the organization. In this context, it’s worth noting that the W3C has been busy addressing both ends of the standards development pipeline. Last November, it announced that it has become an ISO/IEC PAS process submitter, and expects that it will now offer most or all of its standards for adoption by the ISO/IEC, thereby gaining an added layer of credibility in the eyes of some potential customers (e.g., the governments of some countries).
With this latest announcement, the W3C has completed the linkage from grass roots efforts through to adoption by the traditional global standards bodies, offering “one stop shopping” (if you will) for all your standards development needs.
The remaining question is this: now that the W3C has built their new community process, will the community embrace it?
Hopefully, the answer will be yes. The press release lists, and links to, eight Community Groups that have already been launched, and one Business Group.
My personal hope is that existing protocol groups will consider transferring their efforts over to the W3C, and that new efforts will preferentially begin their projects there. After all, there’s everything to gain, and nothing to lose.
Sign up for a free subscription to Standards Today here
* * * * * * * * * * * * * * * * * * * * * * * W3C Launches Agile Track to Speed Web Innovation
Community and Business Groups Encourage Broad Participation in Technology Development
Translations | Testimonials | W3C Press Release Archive
http://www.w3.org/ — 16 August 2011 — To support the rapid evolution of Web technology, W3C announced today an agile track for developers and businesses to create Web technology within W3C's international community of experts. Because innovation can come from organizations as well as individuals, W3C has designed W3C Community Groups to promote diverse participation: anyone may propose a group, and groups start quickly as soon as there is a small measure of peer support. There are no fees to participate and active groups may work indefinitely. Lightweight participation policies let groups decide most aspects of how they work. The larger implementer community benefits from specifications available under royalty-free patent terms and permissive copyright.
"Innovation and standardization build on each other," said Jeff Jaffe, W3C CEO. "The stable Web platform provided by W3C has always encouraged innovation. As the pace of innovation accelerates and more industries embrace W3C's Open Web Platform, Community Groups will accelerate incorporation of innovative technologies into the Web."
With the launch of Community Groups, W3C now offers a smooth path from innovation to open standardization to recognition as an ISO/IEC International Standard. W3C's goals differ at each of these complementary stages, but they all contribute to the organization's mission of developing interoperable standards to ensure the long-term growth of the Web.
W3C also announced today the launch of Business Groups, which provide W3C Members and non-Members a vendor-neutral forum for the development of market-specific technologies and the means to have a powerful impact on the direction of Web standards. W3C staff work with Business Groups to help them achieve their goals and to provide connectivity among groups with shared interests. For instance, a Business Group might compile industry-specific requirements or use cases as input to a W3C Working Group.
"W3C is now open for crowd-sourcing the development of Web technology," said Harry Halpin, Community Development Lead. "Through these groups, people can reach influential companies, research groups,and government agencies. Developers can propose ideas to the extensive W3C social network, and in a matter of minutes start to build mindshare using W3C's collaborative tools or their own. Creating a Community or Business Group doesn't mean giving up an existing identity; it means having an easier time promoting community-driven work for future standardization."
Inaugural Community and Business Groups
The first groups to launch reflect a varied set of interests. W3C announces eight Community Groups:
Colloquial Web
Declarative 3D for the Web Architecture
ODRL Initiative
Ontology-Lexica
Semantic News
Web Education
Web Payments
XML Performance
and one Business Group:
Oil, Gas and Chemicals
Learn more about Community and Business Groups then start your own Community Group or your own Business Group.
Exclusive: Watch as EverQuest II's upcoming dragon is brought to life
Hollander Cooper
Videogame characters aren't born, they're made. They start as a sketch, get put onto a computer, and then... a bunch of stuff happens. People mess around with 3D programs, then mess around with other 3D programs, then they give it textures, and animations, and... it's complicated. But when you see a videogame character go from sketch to finished product it's really cool, and that's just what this video does.Check it out to get a sneak peek at Honvar, a new dragon being added in EverQuest II's 63rd update. The dragon is a bit different than most of the other dragons in the game, for reasons you'll see when you check out the video. Needless to say, he's a bit, um, lumpier than the others. Topics
EverQuest mmorpg
inkyspot
- February 17, 2012 7:34 p.m.
ZBrush and maybe Maya; not sure what the animator was using, but I would think Maya for the animation. It takes a dedicated team of many talented people to make a game. I would be the concept artist slash ZBrush/Maya guy...
Where can I download this video? | 计算机 |
Highways department was first in area to introduce mainframe computer Philo T. Farnsworth. Even if the name is not familiar, the average American family owes a large part of their daily entertainment to him. His invention, the television, is center stage every evening in most homes. Computers are central to life and work now, too. Life without them is difficult to imagine. Though not quite on a par with Farnsworth, ITD also was a pioneer of sorts.
ITD, or more correctly the Idaho Department of Highways, was the first organization in the Boise area to be computerized, according to the Association of Information Technology Professionals (AITP). A plaque affixed to the old Statesman building at the southwest corner of Sixth and Main Street attests to that fact. The building was built for the Statesman in 1909, and the Idaho Division of Highways took up residence there in the late ‘50s.
The plaque, placed there last summer by the AITP, commemorates the Idaho Department of Highways as the first organization in Boise with a computer – a UNIVAC 120 from 1957.
The UNIVAC 120, a mainframe computer, was a 1953 release of the modified Remington Rand 409 Computer. The UNIVAC 120 boasted of performing 360,000 addition and subtraction calculations per hour. By contrast, today’s IBM-built Mira, used by the Department of Energy, can do 10 quadrillion calculations – per second.
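For a rough sense of that gap, it helps to put both machines on the same time scale; the back-of-the-envelope arithmetic below uses only the figures quoted above:

# Back-of-the-envelope comparison using the figures quoted above.
univac_120_per_hour = 360_000                         # additions/subtractions per hour
univac_120_per_second = univac_120_per_hour / 3600    # = 100 per second
mira_per_second = 10_000_000_000_000_000              # 10 quadrillion calculations per second

print(univac_120_per_second)                          # 100.0
print(mira_per_second / univac_120_per_second)        # 1e14, i.e. roughly 100 trillion times faster

In other words, the modern machine does in a tiny fraction of a microsecond what the UNIVAC 120 needed a full hour to grind through.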
ITD was nearly the first in the state to use computers, but that honor actually goes to Idaho National Laboratory, which was using computers from the Navy for nuclear research in the early 1950s.
The first 10 computers in Boise
Idaho Department of Highways
Boise Cascade Corporation
Albertsons Inc.
Ore-Ida Foods
Idaho Power
Idaho State Auditor
Mountain States Wholesale
Intermountain Gas
Idaho First National Bank
Boise College (now Boise State University)
Evolution of the “First 10” list
Emerson Maxson, a retired Boise State University professor in Information Technology and Supply Chain Management, stopped by ITD Headquarters early this month. As a member of the group since 1968, Maxson is the historian of the AITP Idaho chapter. Along the way, he’s also worn the hats of president, secretary, international director and CDP (Certified Data Processor) ambassador.
Maxson was heavily involved in the initial efforts to put together the “First 10” list, a research effort that started in 1990 and is ongoing.
“The plaque was an outgrowth of the ‘First 10’ research,” Maxson said. “We decided to start that activity about 18 months ago, coinciding with the AITP Idaho chapter's 50th anniversary.”
Maxson said the UNIVAC 120 was introduced in the first quarter of 1953, and was named to take advantage of the prominence of the first UNIVAC produced by the Eckert/Mauchly group acquired by Remington Rand in 1950.
More than 1,000 units were produced, with a price tag of nearly $100,000. It was the first computer used by the Internal Revenue Service and the first computer installed in Japan. The first mini computer didn't appear until 1963, and the micro-computer wasn't available until 1974.
Published 3-4-2011 | 计算机 |
Linked by Thom Holwerda on Mon 30th Mar 2009 18:43 UTC, submitted by elsewhere Any discussion about GNOME vs. KDE is sure to end in tears. It's basically impossible to discuss which of these two Free desktop environments is better than the other, mostly because they cater to different types of people, with different needs and expectatotions. As such, Bruce Byfield decided to look at the two platforms from a different perspective: if we consider their developmental processes, which of the two is most likely to be more successful in the coming years?
9 · Read More · 117 Comment(s) http://osne.ws/gdk Permalink for comment 355870
The Elephant in the Room by segedunum on Mon 30th Mar 2009 22:09 UTC Member since:
Put simply, Gnome does not have an application developers' framework and a common set of libraries underpinning it that everything uses, and it never has done. It really doesn't matter what is considered for Gnome 2.3 you can only ever be as good as the tools that you use and stand on. I'm afraid users out there just don't care about fanboys shouting about bloat, simplicity or how 'clean' a desktop is. If it doesn't have the applications and the functionality and doesn't have the tools to build that functionality then you're on the road to nowhere. Having a go at KDE 4.0 isn't going to change that either. XFCE? It sort of fills a niche, but don't make people laugh. KDE has always had a very good object oriented framework in Qt to build from, it's gone on to another level with 4.x, is squaring up to some of the things you can do visually with Vista and OS X and with Plasma we're finally getting a decent container for developers to write all those little desktop applications and applets that provide the functionality that users want. What new applications can people write with Gnome and how will they go about doing it? Right now, the best way of getting into Gnome and GTK development is with Mono, regardless of how people might feel about it. Gnome need to recognise that to be relevant and they need to either embrace it or put serious work into learning why that is and doing something about it. Alas, Jack Wallen's 'article' quoted by Byfield just seems to be another sad attempt to stop discussing the elephant in the room, or to stop people from seeing it because he doesn't actually discuss Gnome at all, quite apart from the deliberate inaccuracies. It's only going to get worse for people like him. There's a lot of things in and around KDE that need improving, but I see no one else at all in the open source desktop world looking ahead and being able to look the proprietary competition in the eye developer-wise, visually or functionally. If there ever is to be a 'Year of Desktop Linux' then open source desktops need to catch up to proprietary alternatives, not be afraid to try new ideas and make it easy to develop with for their developers. If they can't do that then we need to accept it and then all these silly articles can just end. | 计算机 |
The Foundation is committed to protecting the personal information you provide to us when visiting the Web site. By using this Web site, you consent to the Foundation's collection and use of any personal information that you submit ("Personal Information") as described in this Statement. The Foundation reserves the right, in its sole discretion, to modify this Statement at any time by posting an updated version of this Statement. If the Foundation decides it is appropriate, it may also post a notice of the update on the Web site's homepage at www.Sloan.org. Any modifications to this Statement are effective as soon as they are posted, so please review this Statement periodically. By using the Web site after we have posted changes, you agree to the terms of the updated Statement. Personal Information you submit to the Foundation is governed by the terms of the Statement in effect at the time the information is received.The Foundation collects two types of data: (A) Personal Information (defined above), and (B) anonymous data that is automatically collected from all visitors to the Web site.
1. Collection and Use of Personal InformationWe collect Personal Information only when you provide it to us voluntarily (e.g., through forms found on our Web site such as the Industry Studies Affiliate Application Form). Personal Information includes, but is not limited to, first and last names, date of birth, email and postal addresses, telephone numbers, and educational history. In some circumstances (e.g., nominations for the Alfred P. Sloan Research Fellowships), you may provide Personal Information about other individuals. The Foundation uses the Personal Information submitted through the Web site to fulfill user requests (e.g., to join the Industry Studies Affiliates Program) and in connection with Foundation programs (e.g., to manage the nomination process for Alfred P. Sloan Research Fellowships). The Foundation may disclose Personal Information to the Foundation's committees and organizations that further its mission and to the Foundation's third-party service providers (e.g., to maintain the Web site, provide listserv services, and to provide other services for the Foundation). The Foundation may also send you information from time to time that it believes may be of interest to you. If you ever wish to opt out of receiving such information from the Foundation, you may notify the Foundation as described in Section 8 below. The Foundation, however, always reserves the right to communicate with you (by email or other means) in connection with programs for which you have applied or are participating. In addition, the Foundation may disclose and use Personal Information in response to legal process (e.g., subpoena or warrant), a security risk or when the Foundation believes in good faith that disclosure is necessary or appropriate.
2. Collection and Use of Anonymous DataThe Foundation may also collect, transmit and receive anonymous, non-personal data relating to your use of the Web site (such as, Internet Protocol address, browser type, referrer site and site usage data), including through Google Analytics, a third-party service provider. The Foundation may use the anonymous data it collects, transmits and receives to analyze trends, administer the Web site, track traffic patterns, and gather any other aggregated information related to the usage of its Web site. By using this Web site, you agree to the collection of this data and its transmission to and from Google Analytics. Google's use of any non-personal data collected on the Web site is governed by the Google Analytics Terms of Use and the Google Privacy Statement. The Foundation may share the aggregated non-personal data it collects and receives from Google with third-parties such as affiliates, licensees and partners, and may use that data for research, promotional and other purposes relating to the Foundation's mission. 3. CookiesWe may use cookies on the Web site. A cookie is a file stored on your computer's hard drive. For example, the anonymous data the Foundation collects and transmits to Google Analytics requires the use of a cookie. You may disable cookies in your internet browser, but doing so may affect your use of the Web site.
4. Links to Third-Party Web sitesFor your convenience, this Web site may include links to Web sites owned or controlled by third-parties in order to provide you with information that may be of interest to you. Any Personal Information you provide to a third-party Web site is not governed by the terms of this Statement. Before providing any third-party Web site Personal Information, you should review the terms of use and privacy policy, if any, applicable to that Web site. The Foundation does not endorse any of the third-party Web sites which may be linked to using this Web site.
5. Location of ServersAny Personal Information you submit to the Foundation will be stored using servers located in the United States. If you are located outside of the United States, please be aware that any Personal Information you provide to the Foundation will be transferred to the United States. By using this Web site or providing Personal Information to the Foundation, you consent to this transfer and to the collection, storage, processing and use of that information in the United States. In addition, if you are located in the United states, our service providers may not be located in your geographic area in the United States. By using this Web site or providing Personal Information to the Foundation, you consent to the transfer of that information to these service providers. 6. Security The Foundation will use reasonable efforts to safeguard the Personal Information it receives through this Web site. Nevertheless, no transmission over the Internet can be guaranteed to be 100% secure. The Foundation makes no representation or warranty that its Web site is protected from viruses, security threats or other vulnerabilities or that the Personal Information submitted to the Foundation is always secure. Accordingly, you provide the Foundation with Personal Information at your own risk. 7. Children The Foundation does not knowingly collect information from children under the age of thirteen. If you become aware that anyone under the age of thirteen has submitted Personal Information to the Foundation through the Web site, please notify us at [email protected] or at 212.649.1692.
8. Choice/Opt-Out
If you do not wish to receive communications from the Foundation (other than necessary communications in connection with an application to or participation in a Foundation program), or if you would like to discontinue receiving communications through certain methods but not others, please inform us by sending an email to [email protected].
9. Contact
We strive to ensure that your visit to our Web site is a satisfactory one and that your privacy is respected. If you have any questions regarding this Statement or the technical functionality of the Web site, please contact: Nathan Williams, Web Content Specialist, Alfred P. Sloan Foundation, 630 Fifth Avenue, Suite 2550, New York, NY 10111, at [email protected] or at 212.649.1692.
If you wish to correct, update or delete the Foundation's record of your Personal Information, you may contact the Foundation using the information provided above.
California Civil Code Section 1798.83 permits California residents, who submit Personal Information to the Foundation, the right to ask whether we have disclosed that information to third-parties for direct marketing purposes. The Foundation does not disclose Personal Information to third parties for direct marketing purposes. However, if you are a California resident and nevertheless wish to make a request under Section 1798.83, you may do so using the contact information provided directly above.
Privacy Statement Last updated: October 10, 2008. Alfred P. Sloan Foundation.
Beyond that, it’s probably worth mentioning that I have been a Linux user since late 2005, and started out with Ubuntu, like many other people. In fact, I still volunteer as a moderator on the Ubuntu forums.
Since then though, I’ve been preoccupied with using Linux as a means of resurrecting out-of-date (and even antique) computers, and putting them to use on a daily basis.
Ubuntu isn’t my No. 1 choice any longer, but it does come into play quite often. I try to post something, interesting or not, on a daily basis. Everything is original content, and everything is a best-as-I-can account of my Linux experiences.
I am not a hub of information, so don’t look here for breaking news about Distro X. Instead, this is just a chronicle of the things I’ve done, what worked and what didn’t, and what I need to remember for next time.
If you have a question or a suggestion or want to share war stories about old computers, feel free to e-mail me with the contact form below. Cheers! (Edit, 2012-04-01: I’ve taken out the e-mail form, because of abuse. Sorry. :( )
Some fine print: The opinions expressed on this blog are mine alone, and do not represent the position of the staff or management of UbuntuForums.org, the Forums Council, Ubuntu or Canonical, Ltd. Nor do the opinions posted here have any reflection or bearing on WordPress.com, linked sites or their parent organizations. Links to external sites are not to be interpreted as endorsements of those organizations or individuals, and are included solely for information or reference purposes.
Copyright (c) 2006-2011 K.Mandla.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
GNU Free Documentation License
Version 1.3, 3 November 2008
Copyright (c) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful document “free” in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.
The “publisher” means any person or entity that distributes copies of the Document to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled “History” in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the “History” section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled “Endorsements”. Such a section may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties-for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements”.
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
11. RELICENSING
“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A “Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of copyrightable works thus published on the MMC site.
“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.
“Incorporate” means to publish or republish a Document, in whole or in part, as part of another Document.
An MMC is “eligible for relicensing” if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (C) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with … Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
tadeucz
2007/03/23 at 6:58 PM Lumela [Hello]
2007/03/23 at 9:51 PM Abuti oa me! U phela joang? Le nna, kea phela. [My brother! How are you? I am well too.]
2007/03/25 at 12:33 AM Na, ke phela hantle. U phela joang? [I am doing well. How are you?]
2007/03/25 at 7:26 PM Hey lona! Ga kea itse gore batho ba bantsi ba bua! Ke itumeletse go lo dumedisa! [Hey, everyone! I didn't know so many people spoke it! I'm delighted to greet you!]
2007/12/06 at 12:52 PM Dumela Ra!!! [Hello, sir!!!] from New Zealand, Cheers
Bram Pitoyo
2008/12/01 at 1:58 PM Hey. Thanks for adding Link En Fuego to your blogroll!
2009/05/31 at 9:29 PM Great work K.Mandla, it is a great thing to do a side forum like this as the ubuntuforums are really loaded with everything now. Keep up the good work. Abantu abanjengathi bayabulela torho umsebenzi omhle. [People like us are truly grateful for the good work.]
2009/07/05 at 7:43 PM Great blog, nice info and good How-To’s. I’ve added it to my blogroll on http://www.clarke.org.ru (Russian).
2009/11/16 at 9:57 PM Howzit,
I run a site called Floss.pro!
I would love it if you could contribute !!
p0ng
2010/05/24 at 2:57 AM Hi! I saw one of your posts that you use a notebook running rtorrent. After you download the files in it, how do you send these files to other computers on your network? I have a notebook that has a problem on the screen and I would do the same. Download torrents and then send them to my computer. Sorry for my English. I’m using Google Translate.
2010/05/24 at 6:04 AM No problem. :) Usually I connect the computers over a network with NFS, or use an external USB drive if the machine has USB ports. Sometimes that’s faster than a network connection. Other people use samba, I think. I prefer the way NFS attaches to folders. If you can set up ssh on your notebook, you can control it from another computer, but you might already know that. ;)
2010/11/19 at 9:09 PM Added to my blogroll… Actually ages ago, but worth mentioning it! ;)
http://linuxblog.darkduck.com
mascip
2014/04/08 at 4:01 AM Hi there, I’m looking for a CLI spreadsheet and I’m considering sc or teapot.
sc would be my first choice because I'm a Vim user, but it doesn't come with the Undo feature (you cannot undo an action). What do you do when you make a mistake? At the moment I close the spreadsheet and start all over again. Any better solution?
2014/04/20 at 3:51 AM I looked again and to be honest, I don’t see anything like that in the help pages. I use sc daily for small calculations but I tend to rely on the d-r and p-r keystrokes to duplicate rows. And now that I think of it, if I make a mistake I have a tendency to drop back to a saved copy, rather than try to “undo” my mistake. :(
2014/04/20 at 4:09 AM Thank you for your help :) It’s a shame really, that is not possible to undo. Well, I’m using Gnumeric now; it’s not CLI but at least it’s less heavy and slow than Libreoffice.
2014/04/20 at 4:11 AM Thanks for your answer :) I’ll keep it in mind if I want to go CLI for some very simple spreadsheets.
You may translate when ready, Grisly « Motho ke motho ka botho
High-end and low-end desktop games « Motho ke motho ka botho
Howto: Use rtorrent like a pro « Motho ke motho ka botho
GFDL 1.3 adopted « Motho ke motho ka botho
Rubbing shoulders with giants « Motho ke motho ka botho
Let’s not make a big deal out of it « Motho ke motho ka botho
An extremely minor update « Motho ke motho ka botho
Going Old School « Freed Up Thoughts
Trackback on 2011/01/23 at 7:17 AM
Freed Up Thoughts » Going Old School
"Do you speak Common?" "Of course I do! Everyone speaks Common..."
My group is starting a somewhat experimental campaign. We're using a setting that is neither canon (e.g. described in a book) nor entirely homebrewed: it's "what we could remember from the book that we read once" plus the world map from the book, marked up with some added locations and country borders, and a short political history we developed.
My problem is that according to the book (Endival) every race speaks their own language and Common. In my opinion this has two downsides. First, it does not feel right that just anyone can communicate with people from the other side of the world. Second, this strongly diminishes the value of learning another language. It means you can go from one side of the world to the other and still fluently speak the local language (i.e., Common). Also, all humans speak Human, all dwarves speak Dwarven… but there are several warring human kingdoms and yet they all share a common language?!
I would like to make the world resemble our own history a bit more. My concerns are:
You and I speak "Common" – it's called English. But this is the result of the recent globalization made possible with the advent of the Internet. Even though the setting I am talking about is more sophisticated than the historical middle ages (due to widespread powerful magic), they are far from anything equivalent to microprocessors, space exploration, freight in the millions of tons and so on. There should not be a Common language. On the other hand, suppose we decide that there is no Common language. Then we run into problems during character creation: A very basic and classic player freedom is to choose their character's race. Now we have a colorful group in which no one can speak with the others (why waste a skill to learn a language, when your squishy starting-level character could learn to swing an axe or cast spells better). Even if they agree to all invest skills in a shared language, once they begin travelling (i.e., adventuring) they quickly run into people with no shared language.
What are some ways to handle this? How can we have a realistic set of languages without making adventuring prohibitively difficult?
Tagged: gm-techniques, world-building, historical-settings, languages
Latin (and to some extent Greek) used to be the lingua franca during the middle ages. Later on, French became the language of diplomacy and nobility. Everyone that mattered [1] speaks a local variation of said language which should still be understandable by another speaker. For example, Quebecois and French or American and English. So, you could have such a language that all the PCs speak. They should be able to interact with everyone else. Now, make sure that each PC speaks the language from where they will go adventuring. If not, they will have to find a teacher and learn the language. This does not take that much time. You can learn everyday grammar and vocabulary in about three to six months of (hard) study. This is what I do for all my games.
Well, almost all my games. If the game is set in a bounded location, then only those languages that are around said location will be relevant. If I set a game in 14th century Venice, I do not need to worry about the PCs speaking Japanese. If I set a game in the Crusades, you better believe that everyone will learn Latin, French, and Arabic pretty damned quickly if they want to get anything done. If you have boogly powers (aka magic or psionics or whatever), then learning languages could be done via it. As a side note, Middle Earth started as a setting to play with the evolution of different languages yet most characters manage to communicate quite well -- and were delayed at the gates of Moria because of a translation error! Philology is just cool. And just because it is hard to implement in a game setting should not be a barrier to trying it out provided that it enhances the enjoyment of the game.
[1] Why, yes sire, I do have blue blood... What about Peasants? They don't need to speak to outsiders, they need to work harder and pay taxes.
There's two ways that I can think of.
Want a really simple solution? Declare that "Common" is a common second language. It's by no means universal - and as you move further away from major borders and trade routes it can completely disappear - but it's common enough that almost anyone could know it without straining plausibility. In mechanical terms, this means that PCs get the language for free, but that NPCs will only know it at the GM's discretion - and even then, they might not be very good at it outside the few phrases they commonly use.
The second option is to make 'common' a simplified trade tongue cobbled together from other languages with only enough detail to negotiate deals, but not enough to express complex abstract concepts. Personally, I don't like this one, since most real-world trade languages tend to freely borrow loan-words from their parent languages and so can express complex concepts just fine, but some people like it.
GMJoe
Some background: languages are shared only as far and wide as they can be communicated. Any farther than that, and variations start. Soon you have comprehensible dialects, then incomprehensible dialects and other languages. As you say, technology is what made entire countries speak the same language. Example: BSL is the British Sign Language. There's one formal lexicon and grammar, but because BSL can't be written (English is written instead), it's fiercely localised. Deaf people who learned in different schools may have different slang. Even basic signs, e.g. those for colours or numbers can differ between cities. And this is just one sign language. There are dozens out there, with nearly nil cross-intelligibility.
So, if you have easy, cheap dissemination of language to everyone (and everyone can read, if this is in writing), then Common would be largely the same, with variations depending on individual skills.
Suggestion: as GMJoe said, make Common a second language. Treat it like a pidgin language: not everyone speaks it (mostly just people who travel), and not everyone speaks it well. The grammar varies depending on the local language (because pidgin tends to be a bastardisation of two or more languages), and the lexicon includes lots of local words. The farther the player is from their particular homeland, the weirder this becomes for them. Game potential: this is perfect material for adventuring! One word might mean one thing where you're from, and another elsewhere. Some words may be inoffensive in your homeland, and mortal insults far to the South. You can use this to your advantage. Even worse, depending on the NPCs ability to understand (or the PCs' ability to learn the differences), they may just be unable to get some nuances across. Pidgin languages develop for practical reasons (usually trade).
Suggestion 2: in the times of the crusades, some crusading knights had basic phrasebooks so they could communicate in the local languages of the lands they crossed. Perhaps you can give your PCs something like that. It can be pitifully inaccurate in some cases, and/or only include such marvels as ‘me want go inn/brothel/temple’. It can miss out on some local dialects, and leave plenty of space for humorous/adventuring misunderstandings.
Remember that a complete language barrier dehumanises (in the general sense) the NPC, while anything in common (even attempts to learn some of each other's language) is a tool to bring PCs and NPCs closer together.
Alexios
Other people have already discussed keeping Common around as a 2nd language, so I'll describe another approach.
Consider modern Europe: The average person speaks their native language fluently, plus one to three more languages with anything from crude skill to fluency, depending on how often they use them. The more tightly-packed the language regions are, the more languages everyone will speak, simply so they can interact with their neighbors.
If a nation A is at peace with neighbor B, then people in A will speak B's language so they can trade with B. If there is war, or the threat of it, between A & B, then A's government will want people to learn B's language so they can interrogate POWs, translate intercepted messages, tell conquered peasants to shut up & get back to work, and so on.
This doesn't mean that everyone will speak multiple languages: people from the center of a nation/language-region who don't have much contact with foreigners won't have any reason to learn foreign languages. But everyone else will probably pick up at least a smattering of a 2nd language, especially if they travel a lot (adventurers, for example).
If your system allows varying levels of skill in a language (say, basic/intermediate/advanced/fluent/native), I would give every player native-level in one language and 2-5 ranks divided however they please among other languages, so they might be a native speaker of Elven & Sylvan, for example, or a native speaker of Elven, an intermediate speaker of Sylvan & Human, and a basic speaker of Dwarven.
It's a world with powerful magic at play. Common might exist because of that. For example, it could be the gift of some appropriate god, like a trade god or a god that's all about peace and unity. Or I suppose a god all about conquest and ruling conquered territory. If so, it can work however you want it to work, including being more limited than a real language. Maybe everyone just knows the language like they know how to walk but the vocabulary can never change - so things that are newer than when the language was given to people all have local variations and aren't really part of common. Or the common part of the language, the part people just know, is focused on that god's concerns and if you want to talk about anything else you have to express it using just those available words.
That being said, I've played in a campaign where not every character had a language in common. Usually somebody could translate, but in combat or tricky diplomatic situations it was a factor. One character was from far away and communicated through pantomime, or on rare occasion repeated a word he had heard. In my case everyone really liked how the language issue worked out, but it would depend on the group of players.
Hmm, your first paragraph sounds like the Tower of Babel in reverse. Which would be awesome. +1 for the creative setting solution.
@GMJoe - Actually, the Tower of Babel thing is interesting. Maybe the default state of the universe is that everyone speaks the same language, and the reason they don't anymore is that most of the original language was destroyed by some great event.
There are only 3 situations where I think language ought to matter in a game:
You can't communicate at all.
You can communicate, but there's a single word or phrase you don't recognize.
You can communicate without any difficulty.
Here's what we've used in our campaign. (And it's worked quite well for us.) Language names are changed, of course.
All the adventurers are from the same cultural area. They all speak English to each other and to others from their area. When they're in their home area, there are no communication problems.
In court settings and other high-culture places, French is often used. Most of the adventurers are commoners, so they only know a smattering of French, so the noblewoman in the group takes the lead when trying to make a good impression at court, since she's the only one fluent in French.
Old manuscripts and monuments are written in various ancient languages: Latin, Anglo-Saxon, and Pictish. One of the party members can read Anglo-Saxon, so those are no trouble. The priests in the region generally can read Latin, so they can help with those. The party still hasn't found anyone who can read Pictish, so those stones are still a mystery to them.
In one of the neighboring kingdoms, they speak Scots, which is close to English. Most of the time I let the party interact with people without any trouble, but every now and then I throw in someone describing something they don't understand.
There are a few settlements of horse riders in the area. These people speak Kazakh amongst themselves, which the party doesn't understand, but many of the men speak English just fine. These men act as spokesmen for the group when dealing with the party.
Occasionally they run across forest people deep in the woods. These people speak only Cree, which none of the adventurers speak. Communication is only possible with gestures, noises, and a lot of guessing.
Personally, I enjoy making up languages to fill the game world, but this is not significant in the actual roleplaying. None of the players in our group want to actually learn a new language.
In practice, the players end up recognizing which cultures different names belong to, and they learn a few words here and there, but that's it.
Realism in language diversity can become a major impediment and shift the focus from the characters & plot to the intricacies of the setting. Keep a Common language in the game to facilitate play (and fun). As noted here, it need not be fully known or ubiquitous.
As a suggested mechanic: If you're interested in linguistics and the players aren't, just add a comprehension/communications skill roll to dialogues where misunderstandings could occur. While simple and basic concepts would be assumed, this mechanic would indicate the level of nuance conveyed. With some skill on the GM's part, failed nuances can pyramid into misconceptions, many humorous but some few being crucial to the party's progress. (And keep a sharp ear for slang and colloquial expressions used by players, presenting a rich vein for your NPCs to misinterpret.)
ExTSR
I've always viewed 'Common' as the 'Human language of the area'. So not 'common' in the sense that its specifically a 'shared' language that is common among races, but 'common' in the sense that its the everyday, 'most common' language of humans in the area. (assuming a human-centric world)
So with that, the Common in one area is different from the Common in another. Common in England would be English. Common in Germany would be German. ie, basically the way the world is.
Given the typical speed of travel, that never really posed much of a problem in my campaigns. I always assumed the characters could pick up enough of the local lingo by being immersed in it as they traveled. Unless they are traveling by ship somewhere, or are entering poetry contests, they generally wouldn't travel faster than they could learn the local dialect. If that happens, they can hire translators (which itself could lead to interesting plot hooks)
As for characters that don't speak 'Common', consider that after six weeks adventuring with a party that primarily speaks it, they would be able to start communicating enough for basic conversation due to immersion. So after a level or two, they'd speak Common reasonably well, just perhaps with a cool accent. In game terms, I would take language out of the skill system (ie, no cost if they pick it up via travel), unless a character was specifically studying a language from someplace they never had been. Another possibility may be to give bards special benefits when they enter a new land, perhaps halving the time it takes for them to pick up the local dialect. That'd give a nice boost in purpose and scope to the class.
I think this is a very classic example of "Realism vs. Gameplay". Improve one of them and the other will suffer.
But here's an idea: take a look at Earth and model your setting after it, but drop the fact that everybody speaks English. Namely, there is huge regions where people speak the same language (English in North America, Spanish in South America, even German in Central Europe). Have your party agree on a language they share and focus your adventure mostly in that region. This will have some interesting effects. For example, a PC can be a foreigner and speak with an accent or grossly misunderstand the culture. Traveling to an off-language region will be exotic and fun - the PC's will need to find a translator or will be able to communicate with a small subset of the people. There will be large parts of the world that are less accessible language-wise and that's realistic even today.
However I hate how that affects the gameplay. You say that diminishes the value of learning a language. While true, I say that there is no nice mechanic I'm familiar with for learning a language. Unlike investing in your combat skills, a language ability is at the mercy of the Game Master - if I learn Dwarven and we don't go where they actually speak it, it's a wasted ability. Worse, if we go there and I'm the only one speaking it, now the other players are having a bad time. And finally, if two of us learn it, it's less valuable for each of us. Even if you have them travel to a region where they don't speak the language and have them stay there for a longer while, it would still be bad gameplay - the players won't feel that they have an advantage from learning it, they will feel forced to learn it in order to enjoy the game. And it will become wasted XP when they move away.
In "Realism vs. Gameplay", I strongly prefer gameplay. That's why I try to avoid the language issue entirely when I do my own setting. I'll have dialects and variations to be able to make (N)PCs sound foreign and exotic, I'll have foreign/ancient languages in order to have things appear "alien", but I'll go a long way not to penalize the players for not investing in a skill they cannot see clear return in.
Stefan Kanev
Related questions:
How can GMs make their game worlds more inclusive?
How do I make players more comfortable in an unusual historical setting?
World Development - A Fresh Start
Do shardminds have to share a language to communicate with an NPC/PC?
How to make rituals a part of everyday life
How do you customise swearing to a setting?
How to make players care about a community
How can we make overcoming a language problem interesting?
How can established settings contact each other, and are there rules about it?
Can a character know Druidic without having any levels of Druid?
Tagged: aes, rsa
lost_with_coding
RSA, as defined by PKCS#1, encrypts "messages" of limited size. With the commonly used "v1.5 padding" and a 2048-bit RSA key, the maximum size of data which can be encrypted with RSA is 245 bytes. No more.
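To see where that 245-byte figure comes from: a 2048-bit modulus is 256 bytes, and PKCS#1 v1.5 encryption padding consumes at least 11 of them. Here is a quick back-of-the-envelope check in plain Python; the OAEP line is shown for comparison and assumes SHA-256 as the hash.

# Maximum plaintext size for a single RSA encryption, per PKCS#1 (RFC 8017).
def max_pkcs1_v15_plaintext(key_bits):
    # v1.5 padding: 0x00 || 0x02 || at least 8 random bytes || 0x00 || message
    return key_bits // 8 - 11

def max_oaep_plaintext(key_bits, hash_len=32):
    # OAEP overhead is 2 * hash_length + 2 bytes (hash_len=32 assumes SHA-256)
    return key_bits // 8 - 2 * hash_len - 2

print(max_pkcs1_v15_plaintext(2048))  # 245
print(max_oaep_plaintext(2048))       # 190

With OAEP, the padding generally recommended today, the per-operation limit is even lower.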
When you "encrypt data with RSA", in practice, you are actually encrypting a random symmetric key with RSA, and then encrypt the data with a symmetric encryption algorithm, which is not limited in size. This is how it works in SSL, S/MIME, OpenPGP... Regularly, some people suggest doing "RSA only" by splitting the input message into 245-byte chunks and encrypting each of them more or less separately. This is a bad idea because:
There can be substantial weaknesses in how the data is split and then rebuilt. There is no well-studied standard for that.
Each chunk, when encrypted, grows a bit (with a 2048-bit key, the 245 bytes of data become 256 bytes); when processing large amounts of data, the size overhead becomes significant.
Decryption of a large message may become intolerably expensive.
When encrypting data with a symmetric block cipher, which uses blocks of n bits, some security concerns begin to appear when the amount of data encrypted with a single key comes close to 2^(n/2) blocks, i.e. n*2^(n/2) bits. With AES, n = 128 (AES-128, AES-192 and AES-256 all use 128-bit blocks). This means a limit of more than 250 million terabytes, which is sufficiently large not to be a problem. That's precisely why AES was defined with 128-bit blocks, instead of the more common (at that time) 64-bit blocks: so that data size is practically unlimited.
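To make the hybrid pattern described above concrete, here is a minimal sketch in Python using the third-party pyca/cryptography package (an assumption on my part; any comparable library works the same way): the bulk data goes through AES-256-GCM under a fresh random key, and only that small key is wrapped with RSA-OAEP.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair; in practice the public half comes from a certificate or key file.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"A payload of arbitrary length, far beyond the 245-byte RSA limit..." * 100

# 1. Encrypt the bulk data with a fresh symmetric key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)  # 96-bit nonce; never reuse it with the same key
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# 2. Wrap only the small session key with RSA-OAEP.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient side: unwrap the session key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message

Only wrapped_key, nonce and ciphertext need to travel, and the RSA operation happens exactly once regardless of how large the message is.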
Comparing the two directly is a little like comparing a tractor to a train - they're both vehicles but have completely different function and construction.
RSA is an asymmetric cipher. It is ideal for secure exchange of messages across an untrusted network, because the public key can be known by everyone - a message encrypted with the public key can only be decrypted by the private key. As such, if two parties know each other's public keys, they can exchange messages securely. This means that no secret information has to be transmitted - as long as authenticity and integrity are maintained you're safe. Thankfully, RSA provides a method of generating signatures on data, which help prove that it is authentic. Given a message signed by a private key, it is possible to verify that signature using the corresponding public key.
As a rule of thumb, you can only encrypt data as large as the RSA key length. So, if you've got a 4096-bit RSA key, you can only encrypt messages up to 4096 bits long (a little less in practice, since the padding scheme consumes part of that space). Not only that, but it's incredibly slow. RSA isn't designed as a full-speed data transport cipher.
AES is a symmetric block cipher, and is incredibly fast. The plaintext is split into chunks called blocks, and each block is encrypted in a chain. There are different ways of doing this, but a common one is called Cipher Block Chaining, or CBC for short. This allows for theoretically infinite message sizes. However, symmetric ciphers like AES require a secret key to be exchanged first. Unlike RSA, the shared key must remain unknown to attackers, so you have to provide authenticity, integrity, and secrecy. That's difficult to do directly.
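As a small illustration of the block-and-chain idea, here is a CBC sketch, again assuming the pyca/cryptography package; note that bare CBC gives confidentiality only, with no integrity protection, which is one reason authenticated modes such as GCM are usually preferred in new designs.

import os
from cryptography.hazmat.primitives import padding as sym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # 256-bit AES key, assumed to have been shared secretly already
iv = os.urandom(16)   # fresh 128-bit IV for every message

plaintext = b"Any length of data; CBC chains each 16-byte block into the next one."

# Pad up to a multiple of the 128-bit block size, then encrypt block by block.
padder = sym_padding.PKCS7(128).padder()
padded = padder.update(plaintext) + padder.finalize()
encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

# Decryption reverses the chain and strips the padding again.
decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = sym_padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
assert recovered == plaintext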
What you'll tend to find is that both schemes are implemented together, such that RSA is used to exchange the key for a block cipher like AES:
Alice and Bob know each other's RSA public keys. In general these are exchanged out-of-band, e.g. via a certificate being deployed as part of your operating system.
Alice picks a random 256-bit key for AES, and encrypts that key with Bob's public key. She signs this message with her private key, and sends it to Bob.
Bob uses Alice's public key to verify the signature. He then uses his private key to decrypt the message.
Eve (an eavesdropper) has seen the encrypted message, but cannot decrypt it without knowing Bob's private key. She also cannot alter the message, since it would render the signature incorrect. She cannot re-generate a valid signature without knowing Alice's private key.
Alice and Bob now share a secret 256-bit key, and use that to encrypt messages using a symmetric cipher, such as AES. This satisfies a few requirements:
The conversation can occur over an untrusted network without an attacker being able to read the messages.
The session key exchange can be done in a safe manner, i.e. authenticity and integrity are maintained on the key.
Performance of encryption and decryption for the actual conversation data is very good.
In terms of level of security, it doesn't really make much sense to compare RSA and AES. They do different jobs. We currently assume that 128-bit AES keys are safe, and that 2048-bit RSA keys are safe, but this is entirely dependant on individual security requirements. Using 256-bit AES and 4096-bit RSA keys should be more than enough for the next decade, assuming the implementation is sound.
Note that all of this is a simplification, as there are many caveats and details involved, and describing an RSA exchange as "encryption" isn't strictly correct, but all in all this should be a reasonable high-level overview of the way the two types of crypto work.
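To make the Alice-and-Bob exchange above a little more concrete, here is a rough sketch of the sign-and-wrap step, once more assuming the pyca/cryptography package; real protocols such as TLS add nonces, key derivation and much more, so treat this purely as an illustration of the idea.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Long-term key pairs; the public halves are assumed to be known to both parties already.
alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Alice: pick a random 256-bit session key, encrypt it for Bob, sign the result.
session_key = os.urandom(32)
wrapped = bob_key.public_key().encrypt(session_key, oaep)   # only Bob can unwrap this
signature = alice_key.sign(wrapped, pss, hashes.SHA256())   # proves it really came from Alice

# Bob: verify the signature before trusting the blob, then unwrap the session key.
try:
    alice_key.public_key().verify(signature, wrapped, pss, hashes.SHA256())
except InvalidSignature:
    raise SystemExit("tampered with, or not from Alice")
assert bob_key.decrypt(wrapped, oaep) == session_key
# From here on, both sides hold session_key and can switch to AES for the conversation.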
AES is a symmetric-key algorithm. RSA is asymmetric. While both are used for encryption, they are often used in different ways, so it is difficult to compare them in terms of efficiency or strength, since the purpose for using one versus the other would likely be a greater determinant of which one, or which class of encryption, is used.
You may want to start with this other question: Encryption - should I be using RSA or AES?
I saw that thread already and thats the reason I asked this question. I suppose the different uses for RSA and AES make my question pointless since comparing the two would be like comparing two non-related things.
– lost_with_coding
One reason that you normally don't see RSA used for larger amounts of data is because of performance - both RSA and AES are secure for large and small amounts of data (when implemented correctly), but RSA is much slower.
In cases where public-key crypto makes sense, but more performance is needed, a hybrid system can be implemented that leverages both.
Related questions:
Encryption - should I be using RSA or AES?
Decrypt from cipher text encrypted using RSA
AES in CTR mode with same random IV to create same ciphertext
Questions to hybrid encryption. RSA with AES
RSA Possible Vulnerability?
Why is RSA using fixed point type numbers?
Key size difference between AES and RSA
Help to understand secure connections and encryption using both private/public key in RSA?
RSA encryption confidentiality
Is the standard scheme of RSA CCA and CPA secure?
Questions about RSA/asymmetric key encryption?
Need help designing an encryption workflow
Digital Rights Management (DRM)
By JASON BAILEY, Sun Advocate/Progress Webmaster
In recent years, Digital Rights Management (DRM) has been the centerpiece of the technology world, particularly in the music industry. It is no doubt one of the most heated topics in computer history. As the old adage says, "there's always two sides to a story," and this is no different.
DRM is essentially a set of technologies that implement controls determining what a user can or cannot do with a digital file or device. DRM may restrict what devices your music files will play on, stop you from sharing digital audio or video, prevent you from skipping commercials on a movie, or determine who can make changes to particular electronic documents. DRM was central to the Digital Millennium Copyright Act, which was passed by Congress and signed into law by President Bill Clinton in 1998. This act, often referred to as the DMCA, makes it illegal to circumvent or thwart DRM provisions and creates stiff penalties for violators. Proponents of the bill insist it is a necessary measure to protect copyrighted works and other intellectual property from theft. Critics insist the bill was "purchased legislation" by lobbying industries and essentially makes the rights of corporations (particularly in the digital media industry) more important than the rights of individuals and consumers.
A case in point: many music enthusiasts (who often call DRM "Digital Restrictions Management") feel that DRM violates consumers' right to fair use.
For example, suppose someone wants to make a backup copy of their favorite audio CD for safekeeping, in case the original is broken or damaged. Because a DRM-enabled computer has no way of knowing whether or not this copy is an attempt at CD piracy, it may disallow the copy from being made. Or suppose someone wants to watch a movie on a DRM-enabled DVD player and wants to skip twenty minutes of previews and get on with the movie. DRM provisions built into the player may prevent them from doing that.
DRM could also prevent music stored on a portable digital audio player, like Apple's iPod, from being played on any other device (including a PC or a competing player).
Supporters of DRM and the DMCA insist that strong measures are needed to counter a blatant disregard for copyright. Many cite the once infamous peer-to-peer Napster network as an example, where thousands of users across the globe freely exchanged hundreds of thousands of music files, resulting in revenue losses in the millions of dollars within the recording industry. After the Napster network was ordered by US courts to shut down, new decentralized peer-to-peer networks quickly arose and expanded to include much more than music. Thousands of movie titles were being distributed, in addition to a myriad of hacked commercial software packages. Many industries have had losses as a result.
DRM proponents insist DRM restrictions are the only solution for a global-scale problem that is far out of hand. On the flip side, DRM critics insist that the technology will never stop the real criminals (who will never fail to circumvent DRM schemes), but will only victimize honest consumers who have no desire to break the law. Many cite Sony BMG's recent DRM blunder as an example that left hundreds of thousands of honest Sony consumers with broken PCs that were made vulnerable to serious security breaches.
Regardless of which side of the issue you may sympathize with, Digital Rights Management (DRM) is an important topic that will surely affect all of us (good or bad) for decades to come. As a result, we should all be aware of it, and involved in its legal development.
For more information, go to www.wikipedia.org or your favorite search engine and search for "digital rights management."
Have comments about this article, or suggestions for an additional Tech Tips article? Send an email to [email protected].
Submission Stats
Register for an RFG Account
Submit Game Additions / Edits
Submit Hardware Additions / Edits
You are either not logged in or not a registered member. In order to submit info to the site you must be a registered memeber and also logged in. If you are a registered member and would like to log in, you can do so via this link. If you are not a registered member and would like to register, please follow this link to register. Please note that there are perks to being logged in, such as the ability to see pending submissions and also the ability to view your submissions log.
Welcome to the RF Generation Submit Info Pages. These pages will allow you to submit info for all of the database entries, and will even allow you to submit entries to add. Use the menu to select the action that you would like to complete. If this is your first time visitng the submit pages I highly suggest that you visit the FAQ page to learn some very important info. I also suggest that you visit the FAQ if you are confused about these pages. Before you're overzealous to your home country
We know that you may or may not know whether or not the game you own is a region wide release, but we'd like for you to make a concerted effort to ensure that the title you are adding really exists. For example, you may live in the US. Therefore, you may think that all your games are US releases, right? Well, they are. But, more often than not, they are also Canadian and Mexican releases, and as such they are a North American Release. For the record, most modern releases in North America are region wide. I can actually look at the back of Mario Kart DS and see that this version of the game was not only authorized to be sold in the US but also Canada, Mexico, and Latin America! As a general rule of thumb, assume, unless you know otherwise, that the title that you are about to submit was a region wide release. Please Read the following regarding image submissions!
We appreciate all submissions that you are willing to give RF Generation, but we need to adhere to certain standards so that there is consistency in our database. As such, please take note that your scans should be 550 pixels on the short side! Your submissions could be rejected if they do not meet these size requirements! Please also note that there are exceptions to this rule, for example, you don't really need a 550 pixel wide scan of a DS or GBA game. Use proper judgement! If you have any questions please contact a staff member, as we are more than willing to help you decipher our standards. We appreciate all submissions that are made, we just want to make sure your submissions are not in vain.
Site content Copyright © rfgeneration.com unless otherwise noted. Oh, and keep it on channel three. | 计算机 |
The universal history of computing. From the abacus to the quantum computer. Transl. from the French and with notes by E. F. Harding. Assisted by Sophie Wood, Ian Monk, Elizabeth Clegg and Guido Waldman. (English) Zbl 0969.68001
Chichester: Wiley. iv, 412 p. $ 24.95 (2000).
Publisher’s description: “Suppose every instrument could by command or by anticipation of need execute its function on its own; suppose that spindles could weave of their own accord, and plectra strike the strings of zithers by themselves; then craftsmen would have no need of hand-work, and masters have no need of slaves.” – Aristotle called the Indiana Jones of arithmetic, Georges Ifrah embarked in 1974 on a ten-year quest to discover where numbers come from and what they say about us. His first book, the highly praised The universal history of numbers (Wiley, New York) (2000; Zbl 0955.01002), drew from this remarkable journey, presented the first complete account of the invention and evolution of numbers the world over – and became an international bestseller. In The universal history of computing, Ifrah continues his exhilarating exploration into the fascinating world of numbers. In this fun, engaging but no less learned book, he traces the development of computing from the invention of the abacus to the creation of the binary system three centuries ago to the incredible conceptual, scientific, and technical achievements that made the first modern computers possible. He shows us how various cultures, scientists, and industries across the world struggled to break free of the tedious labor of mental calculation and, as a result, he reveals the evolution of the human mind. Evoking the excitement and joy that accompanied the grand mathematical undertakings throughout history, Ifrah takes us along as he revisits a multitude of cultures, from Roman times and the Chinese Common Era to twentieth-century England and America. We meet mathematicians, visionaries, philosophers, and scholars from every corner of the world and from every period of history. We witness the dead ends and regressions in the computers development, as well as the advances and illuminating discoveries. We learn about the births of the pocket calculator, the adding machine, the cash register, and even automata. We find out how the origins of the computer can be found in the European Renaissance, along with how World War II influenced the development of analytical calculation. And we explore such hot topics as numerical codes and the recent discovery of new kinds of number systems, such as “surreal” numbers. Adventurous and enthralling, The universal history of computing is an astonishing achievement that not only unravels the epic tale of computing, but also tells the compelling story of human intelligence – and how much farther we still have to go. In this engaging successor to The universal history of numbers, you’ll discover the entire story of the calculation of yesteryear and the computation of today. Highly acclaimed author and mathematician Georges Ifrah provides an illuminating glimpse into humankind’s greatest intellectual tale: the story of computing.
Cited in 1 ReviewMSC:68-03Historical (computer science)01A05General histories, source books68M99Computer system organization | 计算机 |
What is HTML?
Actually, it's an acronym that stands for HyperText Markup Language. Right...that doesn't help you that much, does it? HTML is a language derived from SGML, which stands for Standard Generalized Markup Language. SGML, like its descendant HTML, was originally developed to help scientists communicate with each other in a standardized format. They were initially interested in marking up, or describing, text. Why, you ask? Well, texts vary widely, and without face-to-face communication, one has to be able to describe the thing (or text) he/she is talking about.
HTML is the language of the World Wide Web today. Tim Berners-Lee of CERN (Centre Europeen de Recherche Nucleaire) is credited with creating HTML (or rather, deriving it from SGML). He first proposed the new mark-up language in a 1989 proposal at CERN, as a knowledge management tool for physicists. It wasn't until the arrival of the first widely used graphical web browser, NCSA Mosaic, that HTML really became something that everyday people would want to use. Quickly thereafter, everyday people began to find different uses for HTML, mainly for graphical and textual web page design. Like many standards, HTML has gone through numerous iterations. Currently the W3C recommends HTML version 4.01 (released in December of 1999).
So...enough of the history lesson. How does it work? HTML uses tags to tell a web browser how to display text and graphics on a web page. The HTML code is invisible to the user for the most part. You can look at the code of most web pages with your browser by viewing the source code. To view the source code of this web page, go to your browser's menu, select "View" and tab down to "Source" or "Page Source." In some browsers, this may not be located in the same menu; you might have to play with it a little to find it. The code that you see is the HTML code that tells your web browser how to display this page. At this point, it looks almost cryptic, and well, it should. Next we'll show you how to begin writing your own HTML code.
20077 Methods for Group Communications Accuconference
In the beginning, the most popular way for groups to communicate was simply "in person". But with the advent of technology, even as early as two millennia ago, man has devised new ways for groups to communicate without actually being together in the same room.
IRC (Internet Relay Chat)
Back when the internet was young and 28.8k modems were all the rage, Internet Relay Chat (IRC) was the way for web-heads to communicate online. VOIP was still the internet equivalent of HAM radio ("I spoke with someone in Australia today!"), and ICQ was still a few years out. IRC was created by Jarkko "WiZ" Oikarinen in late August of 1988; his design was inspired by Jeff Kell's Bitnet Relay, which had been designed as a way for researchers to chat on Bitnet, mostly over mainframe servers. IRC's slash commands were inspired by Bitnet Relay, and they persist to this day in many other chat mediums. IRC's leap into the public eye came when it was used by the citizens of Kuwait to contact the outside world during the Iraqi invasion of the early 90s. While many today use more modern means of internet person-to-person communication, when it comes to text-based group chat, IRC is still king.
IM (Instant Messaging)
Instant messaging had its start in the 1970s, when it was developed to allow two unix users to chat if they were both logged into the same server. The technology would then evolve to function on closed networks and then finally the internet. The first instant messaging program to enter the public eye was the "On-Line Messages" feature of "Quantum Link" for Commodore computers in the late 80s. In 1991 "Quantum Link" would change its name to "America Online". Despite this, however, it would be a different company that would beat AOL to the modern (graphical user interface) IM market. An Israeli program known as ICQ would hit the market in 1996, followed by AOL Instant Messenger in 1997. Since then a number of other heavy hitters have joined the fray. Yahoo and Microsoft hold a heavy share of the market, and Google has recently come out with its own instant messaging service known as GTalk. Recently, these companies have begun to incorporate IRC-style chat room functionality into their IM clients for group conversations. Unlike IRC though, these conversations are restricted to the user's buddy list. This alone could be what keeps IRC as the leader in the chat room venue.
Smoke Signals
Laugh all you want, but when the electromagnetic pulse of the apocalypse hits, wiping out all electronics, you'll be glad you had a way to warn your neighboring walled-in villages of the oncoming uber-mutant invasion. Hey, it could happen. As a technology, smoke signals were created by both the Chinese and Native Americans. The technique involved using a blanket to cover a fire, then quickly removing the blanket to produce a large puff of smoke. Smoke signal codes were never standardized, as a drawback of the technique was that one's enemies could see the smoke signals as well.
COW Library : Adobe Premiere Pro : Debra Kaufman : Bill Roberts, Director of Video Product Management, Adobe Systems & Adobe's Past, Present & Future(print friendly)
Debra KaufmanSanta Monica California USA©2012 CreativeCOW.net. All rights reserved.Bill Roberts, Director of Video Product Management, Adobe Systems talks about Premiere Pro CS5.5, the acquisition of IRIDAS, 3D, HDR and NAB 2012
In this industry, you have to earn your stripes. Avid had to displace flat bed film editors. Apple had to get movies under its belt. With Adobe, it's been a different trajectory. If you look at motion graphics or visual effects, everyone thinks of After Effects and Photoshop, but Adobe has a different brand presence with Premiere Pro. Back when we first introduced Premiere Pro, there were of course some aspects that made editors happy, but other things related to how it worked or performance that, historically, got in the way. The turning point was when we introduced the Mercury Playback Engine in CS5 and all the native file types worked well. That's when the Premiere brand took a big leap upwards.
Then, of course, came the announcement of FCPX, which caused a lot of editors to start exploring different solutions. At the same time, we had just taken the power we'd delivered in Premiere Pro CS5 and layered on a ton of ease-of-use features in CS5.5, so when people came to look at it, they were pleased. We put together a fantastic promotion for those interested in switching to Premiere Pro and the response was exceptional -- we've seen tens of thousands of users make the switch from FCP and Media Composer.
I know there are some Mac users who point to the fact that Premiere Pro was off the Mac platform for several years. At the time, which was the turn of the 21st century, Apple was a very different company. Avid waffled around the Mac too; in fact, everyone at the time was questioning its viability. But Mac's stronghold has always been the creative community. The Mac platform today is a fantastic part of our business and a great place to be. In fact, our presence on the Mac grew 45 percent in 2010.
Focusing on the needs of professionals, we have so much to gain with being on the Mac and making sure it's a first class citizen. We have a great relationship with Apple, and the relationship benefits them as a partner of ours as well. After Effects is a ubiquitous tool used by an extremely large number of Mac users. You have to have good relationships in the industry and we get along well with Apple.
Another way we've seen the Premiere user base increase over the years is that Adobe has long had this big base of creative independents that have used all of Production Premium. That customer base was built over years, and now we're seeing those same people really produce amazing work with the power of Premiere Pro, along with its smooth workflows between it and the other products they already know and love in the Suite.< | 计算机 |
To understand what AT4AM means for MEPs and their staff, have a look at how amendments were filed before, and how it works now. (Vimeo. Flash required, sorry.) Parliament staffer Erik Josefsson compared the introduction of AT4AM to the arrival of version control for developers. It's been in use inside the parliament for about 18 months, and it's a pretty fundamental tool for the people working there. » Read more
Posted 18 Jul 2012 by Karsten Gerloff
5 Questions with David A. Wheeler
Meet David A. Wheeler. He's a Research Staff Member for the Institute for Defense Analyses (IDA) and a well-known speaker, author, and expert on open source software and security. He helped develop the Department of Defense's open source software policy and FAQ and has written other guidance materials to help people understand how to use and collaboratively develop open source software in government. He has a Ph.D. in Information Technology, an M.S. in Computer Science, and a B.S. in Electronics Engineering. We hope you enjoy getting to know David. » Read more
Posted 17 Jul 2012 by Melanie Chernoff (Red Hat)
We have two numbers, one integer and one floating-point, and we want to compare them.
Last week, I started discussing the problem of comparing two numbers, each of which might be integer or floating-point. I pointed out that integers are easy to compare with each other, but a program that compares two floating-point numbers must take NaN (Not a Number) into account.
White Papers Supply Chain Visibility in Business Networks Select the Right Cloud-Based ITSM Solution More >>Reports Return of the Silos Strategy: The Hybrid Enterprise Data Center More >>Webcasts Agile Service Desk: Keeping Pace or Getting out Paced by New Technology? Balancing BYOD with Security and Manageability More >>
That discussion omitted the case in which one number is an integer and the other is floating-point. As before, we must decide how to handle NaN; presumably, we shall make this decision in a way that is consistent with what we did for pure floating-point values.
Aside from dealing with NaN, the basic problem is easy to state: We have two numbers, one integer and one floating-point, and we want to compare them. For convenience, we'll refer to the integer as N and the floating-point number as X. Then there are three possibilities:
N < X.
X < N.
Neither of the above.
It's easy to write the comparisons N < X and X < N directly as C++ expressions. However, the definition of these comparisons is that N gets converted to floating-point and the comparison is done in floating-point. This language-defined comparison works only when converting N to floating-point yields an accurate result. On every computer I have ever encountered, such conversions fail whenever the "fraction" part of the floating-point number — that is, the part that is neither the sign nor the exponent — does not have enough capacity to contain the integer. In that case, one or more of the integer's low-order bits will be rounded or discarded in order to make it fit.
To make this discussion concrete, consider the floating-point format usually used for the float type these days. The fraction in this format has 24 significant bits, which means that N can be converted to floating-point only when |N| < 2^24. For larger integers, the conversion will lose one or more bits. So, for example, 2^24 and 2^24+1 might convert to the same floating-point number, or perhaps 2^24+1 and 2^24+2 might do so, depending on how the machine handles rounding. Either of these possibilities implies that there are values of N and X such that N == X, N+1 == X, and (of course) N < N+1. Such behavior clearly violates the conditions for C++ comparison operators.
In general, there will be a number — let's call it B for big — such that integers with absolute value greater than B cannot always be represented exactly as floating-point numbers. This number will usually be 2^k, where k is the number of bits in a floating-point fraction. I claim that "greater" is correct rather than "greater than or equal" because even though the actual value 2^k doesn't quite fit in k bits, it can still be accurately represented by setting the exponent so that the low-order bit of the fraction represents 2 rather than 1. So, for example, a 24-bit fraction can represent 2^24 exactly but cannot represent 2^24+1, and therefore we will say that B is 2^24 on such an implementation.
With this observation, we can say that we are safe in converting a positive integer N to floating-point unless N > B. Moreover, on implementations in which floating-point numbers have more bits in their fraction than integers have (excluding the sign bit), N > B will always be false, because there is no way to generate an integer larger than B on such an implementation.
Returning to our original problem of comparing X with N, we see that the problems arise only when N > B. In that case we cannot convert N to floating-point successfully. What can we do? The key observation is that if X is large enough that it might possibly be larger than N, the low-order bit of X must represent a power of two greater than 1. In other words, if X > B, then X must be an integer. Of course, it might be such a large integer that it is not possible to represent it in integer format; but nevertheless, the mathematical value of X is an integer.
This final observation leads us to a strategy:
If N < B, then we can safely convert N to floating-point for comparison with X; this conversion will be exact.
Otherwise, if X is larger than the largest possible integer (of the type of N), then X must be larger than N.
Otherwise, X > B, and therefore X can be represented exactly as an integer of the type of N. Therefore, we can convert X to integer and compare X and N as integers.
I noted at the beginning of this article that we still need to do something about NaN. In addition, we need to handle negative numbers: If X and N have opposite signs, we do not need to compare them further; and if they are both negative, we have to take that fact into account in our comparison. There is also the problem of determining the value of B.
However, none of these problems is particularly difficult once we have the strategy figured out. Accordingly, I'll leave the rest of the problem as an exercise, and go over the whole solution next week.
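As an illustration of the strategy (not the full solution the article defers to next week), here is a minimal sketch in C++ for one concrete pairing, long long versus double, restricted to the non-negative-N case walked through above; negative N and the final NaN policy are left out, exactly as in the exercise. The constant 2^53 plays the role of B here because an IEEE double carries a 53-bit significand, just as 2^24 does for float.

```cpp
#include <cassert>
#include <cmath>

// Three-way result of comparing integer n with floating-point x.
enum class Cmp { Less, Equal, Greater, Unordered };

// Sketch of the strategy for long long vs. double, non-negative n only.
Cmp compare_nonnegative(long long n, double x) {
    assert(n >= 0);
    if (std::isnan(x)) return Cmp::Unordered;   // NaN compares with nothing

    const long long B = 1LL << 53;   // every integer up to 2^53 fits a double exactly

    if (n <= B) {                    // step 1: converting n is exact, so just compare
        double nd = static_cast<double>(n);
        if (nd < x) return Cmp::Less;
        if (x < nd) return Cmp::Greater;
        return Cmp::Equal;
    }

    // n > B: converting n would round, so avoid it.
    if (x >= 9223372036854775808.0)  // 2^63: x exceeds every long long, hence n < x
        return Cmp::Less;
    if (x <= 0.0)                    // n is a huge positive integer, hence n > x
        return Cmp::Greater;

    // 0 < x < 2^63: if x is large enough to rival n (> 2^53), x must be an
    // integer value, so converting it to long long is exact; if x is small,
    // the truncated comparison still gives the right answer.
    long long xi = static_cast<long long>(x);
    if (n < xi) return Cmp::Less;
    if (xi < n) return Cmp::Greater;
    return Cmp::Equal;
}
```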
Cisco Networking
Laptop & Mobile Device Security
Networking For Dummies Extras
IT Disaster Recovery
Layers in the OSI Model of a Computer Network
By Doug Lowe from Networking For Dummies, 10th Edition
The OSI (Open System Interconnection) Model breaks the various aspects of a computer network into seven distinct layers. Each successive layer envelops the layer beneath it, hiding its details from the levels above.
The OSI Model isn't itself a networking standard in the same sense that Ethernet and TCP/IP are. Rather, the OSI Model is a framework into which the various networking standards can fit. The OSI Model specifies what aspects of a network's operation can be addressed by various network standards. So, in a sense, the OSI Model is sort of a standard's standard.
The first three layers are sometimes called the lower layers. They deal with the mechanics of how information is sent from one computer to another over a network. Layers 4–7 are sometimes called the upper layers. They deal with how applications relate to the network through application programming interfaces.
Layer 1: The Physical Layer
The bottom layer of the OSI Model is the Physical Layer. It addresses the physical characteristics of the network, such as the types of cables used to connect devices, the types of connectors used, how long the cables can be, and so on. For example, the Ethernet standard for 100BaseT cable specifies the electrical characteristics of the twisted-pair cables, the size and shape of the connectors, the maximum length of the cables, and so on.
Another aspect of the Physical Layer is that it specifies the electrical characteristics of the signals used to transmit data over cables from one network node to another. The Physical Layer doesn't define any particular meaning for those signals other than the basic binary values 0 and 1. The higher levels of the OSI model must assign meanings to the bits transmitted at the Physical Layer.
One type of Physical Layer device commonly used in networks is a repeater. A repeater is used to regenerate signals when you need to exceed the cable length allowed by the Physical Layer standard or when you need to redistribute a signal from one cable onto two or more cables.
An old-style 10BaseT hub is also a Physical Layer device. Technically, a hub is a multi-port repeater because its purpose is to regenerate every signal received on any port on all the hub's other ports. Repeaters and hubs don't examine the contents of the signals that they regenerate. If they did, they'd be working at the Data Link Layer, not at the Physical Layer. Layer 2: The Data Link Layer
The Data Link Layer is the lowest layer at which meaning is assigned to the bits that are transmitted over the network. Data-link protocols address things, such as the size of each packet of data to be sent, a means of addressing each packet so that it's delivered to the intended recipient, and a way to ensure that two or more nodes don't try to transmit data on the network at the same time.
The Data Link Layer also provides basic error detection and correction to ensure that the data sent is the same as the data received. If an uncorrectable error occurs, the data-link standard must specify how the node is to be informed of the error so it can retransmit the data.
At the Data Link Layer, each device on the network has an address known as the Media Access Control address, or MAC address. This is the actual hardware address, assigned to the device at the factory.
You can see the MAC address for a computer's network adapter by opening a command window and running the ipconfig /all command.
Layer 3: The Network Layer
The Network Layer handles the task of routing network messages from one computer to another. The two most popular Layer-3 protocols are IP (which is usually paired with TCP) and IPX (normally paired with SPX for use with Novell and Windows networks).
One important function of the Network Layer is logical addressing. Every network device has a physical address called a MAC address, which is assigned to the device at the factory. When you buy a network interface card to install in a computer, the MAC address of that card can't be changed. But what if you want to use some other addressing scheme to refer to the computers and other devices on your network? This is where the concept of logical addressing comes in; a logical address gives a network device a place where it can be accessed on the network — using an address that you assign.
Logical addresses are created and used by Network Layer protocols, such as IP or IPX. The Network Layer protocol translates logical addresses to MAC addresses. For example, if you use IP as the Network Layer protocol, devices on the network are assigned IP addresses, such as 207.120.67.30. Because the IP protocol must use a Data Link Layer protocol to actually send packets to devices, IP must know how to translate the IP address of a device into the correct MAC address for the device. You can use the ipconfig command to see the IP address of your computer.
Another important function of the Network layer is routing — finding an appropriate path through the network. Routing comes into play when a computer on one network needs to send a packet to a computer on another network. In this case, a Network Layer device called a router forwards the packet to the destination network. An important feature of routers is that they can be used to connect networks that use different Layer-2 protocols. For example, a router can be used to connect a local-area network that uses Ethernet to a wide-area network that runs on a different set of low-level protocols, such as T1.
Layer 4: The Transport Layer
The Transport Layer is the basic layer at which one network computer communicates with another network computer. The Transport Layer is where you'll find one of the most popular networking protocols: TCP. The main purpose of the Transport Layer is to ensure that packets move over the network reliably and without errors. The Transport Layer does this by establishing connections between network devices, acknowledging the receipt of packets, and resending packets that aren't received or are corrupted when they arrive.
In many cases, the Transport Layer protocol divides large messages into smaller packets that can be sent over the network efficiently. The Transport Layer protocol reassembles the message on the receiving end, making sure that all packets contained in a single transmission are received and no data is lost.
Layer 5: The Session Layer
The Session Layer establishes sessions (instances of communication and data exchange) between network nodes. A session must be established before data can be transmitted over the network. The Session Layer makes sure that these sessions are properly established and maintained.
Layer 6: The Presentation Layer
The Presentation Layer is responsible for converting the data sent over the network from one type of representation to another. For example, the Presentation Layer can apply sophisticated compression techniques so fewer bytes of data are required to represent the information when it's sent over the network. At the other end of the transmission, the Presentation Layer then uncompresses the data.
The Presentation Layer also can scramble the data before it's transmitted and then unscramble it at the other end, using a sophisticated encryption technique.
Layer 7: The Application Layer
The highest layer of the OSI model, the Application Layer, deals with the techniques that application programs use to communicate with the network. The name of this layer is a little confusing because application programs (such as Excel or Word) aren't actually part of the layer. Rather, the Application Layer represents the level at which application programs interact with the network, using programming interfaces to request network services. One of the most commonly used application layer protocols is HTTP, which stands for HyperText Transfer Protocol. HTTP is the basis of the World Wide Web.
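To see how the upper layers look from a program's point of view, here is a minimal sketch in C++ using POSIX sockets (so it assumes a Unix-like system; "example.com" is just a placeholder host). The program itself composes only application-layer data, HTTP; asking for SOCK_STREAM hands reliable delivery to the transport layer (TCP), while IP routing and the data link and physical layers below it are handled entirely by the operating system and network hardware.

```cpp
#include <cstdio>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    addrinfo hints{}, *res = nullptr;
    hints.ai_family   = AF_UNSPEC;      // IPv4 or IPv6: a network layer choice
    hints.ai_socktype = SOCK_STREAM;    // TCP: the transport layer service we want

    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;

    // The only thing this program actually composes is application-layer data.
    const char request[] =
        "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
    send(fd, request, sizeof(request) - 1, 0);

    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)   // TCP delivers bytes reliably
        std::fwrite(buf, 1, static_cast<std::size_t>(n), stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```

Everything below the send() and recv() calls (segmentation, acknowledgment, routing, framing) happens without the application ever seeing it.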
Cloud Computing Glossary
cloud computing: A networking solution in which everything — from computing power to computing infrastructure, applications, and business processes to personal collaboration — is delivered as a service wherever and whenever you need it.
cloud service: The delivery of software, infrastructure, or storage that has been packaged so it can be automated and delivered to customers in a consistent and repeatable manner.
deprovision: The release of cloud services that are no longer needed.
federating: Linking distributed resources together over the cloud.
hypervisor: An operating system that acts as a traffic cop, managing the various virtualization tasks in the cloud to ensure that they make things happen in an orderly manner.
multi-tenancy: The sharing of underlying resources by multiple companies over a cloud.
network attached storage (NAS): Storage that has its own network address through which it is accessed by the network's workstation users.
service level agreement (SLA): A contract that stipulates the type of service you need from providers and what type of penalties would result from an unexpected business interruption.
solution stack: An integrated set of software that provides everything a developer needs to build an application.
storage area network (SAN): A storage system that is flexible and scalable because it's available to multiple hosts at the same time.
vertical industry groups: Workgroups comprised of members from a particular industry such as technology and retail.
virtual memory: The portion of your hard drive that Windows uses to expand the available RAM.
virtualization: Using computer resources to imitate other computer resources or whole computers to maximize performance and flexibility.
banner_blue
Protect Personal Devices & Data
Protect University Data
Sensitive Data Guide
Report an IT SecurityIncident
IT Security Events
Digital CopyrightCompliance
About IIA
Home Information for U-M Faculty and Staff
Protecting University Data
Accessing University Data
Other Computing Resources
IT User Advocate - Enforces compliance with U-M information technology policies and guidelines. Frequently Asked Questions
Sensitive data refers to data whose unauthorized disclosure may have serious adverse effect on the university's reputation, resources, services, or individuals. Faculty, staff, and U-M workforce members are responsible for protecting sensitive university data to which they have authorized access. As custodians of such data, they are also responsible to comply with all U-M information security and institutional data management policies and procedures as well as applicable laws, statutes, and regulations. As a U-M faculty or staff member, you are responsible for protecting university data, and for knowing the appropriate places to store the data, how to securely dispose of the data, and how to report a breach or compromise of sensitive university data. The following Safe Computing sections address faculty and staff responsibilities related to the handling and storage of university data:
Faculty and staff who work with sensitive university data are required to follow unit-specific guidelines, which may or may not allow access to sensitive data from personally owned devices. For more information, see University Data and Personally Owned Devices.
ITS provides enterprise administrative and data systems that support U-M's core missions of teaching, research, clinical care, and administration. Before requesting access to systems that maintain sensitive institutional data, members of the university community are asked to complete an online course, Access and Compliance 101: Handling Sensitive Institutional Data at U-M (approximately 35 minutes) and then agree to and submit online the Institutional Data Access and Compliance Agreement. They can then request access. See How to Request Access.
Each school, college, institute, or central administrative unit has a designated Security Unit Liaison (SUL) who coordinates the unit's IT security activities and serves as a liaison with ITS Information and Infrastructure Assurance (IIA), the campus IT security office. The SUL, supported by additional members of the security community, is the initial point of contact for guidance on IT security-related issues, including questions about appropriately securing unit, research, and institutional information assets.
See IT Security Program (requires MToken for access) to learn more about IIA and the roles and responsibilities associated with managing and protecting the university's information resources. The IT Security Community has specific responsibility to help protect IT security information, including following the guidance for accessing and storing this kind of data provided in the Sensitive Data Guide.
The Faculty and Researcher Guide identifies Information and Technology Services' resources for general computing, teaching, and research.
Last modified: March 06 2014. Information and Technology Services
By Ian Barker
The first stage of developing an app involves no technical skills at all, it's also the hardest, and that’s coming up with an original idea. There are already thousands of apps out there so you need to make sure that what you’re proposing hasn't been done before. Or at the very least that you have a new and original twist on an idea that will make it stand out from the crowd.
It's important to note that just creating an app isn't going to make you money, research by Canalys in 2012 showed that some two-thirds of apps received fewer than 1,000 downloads in their first year. The store pages have many thousands of "zombie apps" which still appear on the websites b | 计算机 |
By Jim Hietala, The Open Group
One of two key focus areas for The Open Group Security Forum is security architecture. The Security Forum has several ongoing projects in this area, including our TOGAF® and SABSA integration project, which will produce much needed guidance on how to use these frameworks together.
When the Network Application Consortium ceased operating a few years ago, The Open Group agreed to bring the intellectual property from the organization into our Security Forum, along with extending membership to the former NAC members. While the NAC did great work in information security, one publication from the NAC stood out as a highly valuable resource. This document, Enterprise Security Acrhitecture (ESA), A Framework and Template for Policy-Driven Security, was originally published by the NAC in 2004, and provided valuable guidance to IT architects and security architects. At the time it was first published, the ESA document filled a void in the IT security community by describing important information security functions, and how they related to each other in an overall enterprise security architecture. ESA was at the time unique in describing information security architectural concepts, and in providing examples in a reference architecture format.
The IT environment has changed significantly over the past several years since the original publication of the ESA document. Major changes that have affected information security architecture in this time include the increased usage of mobile computing devices, increased need to collaborate (and federation of identities among partner organizations), and changes in the threats and attacks.
Members of the Security Forum, having realized the need to revisit the document and update its guidance to address these changes, have significantly rewritten the document to provide new and revised guidance. Significant changes to the ESA document have been made in the areas of federated identity, mobile device security, designing for malice, and new categories of security controls including data loss prevention and virtualization security.
In keeping with the many changes to our industry, The Open Group Security Forum has now updated and published a significant revision to the Enterprise Security Architecture (O-ESA), which you can access and download (for free, minimal registration required) here; or purchase a hardcover edition here.
Our thanks to the many members of the Security Forum (and former NAC members) who contributed to this work, and in particular to Stefan Wahe who guided the revision, and to Gunnar Peterson, who managed the project and provided significant updates to the content.
An IT security industry veteran, Jim is Vice President of Security at The Open Group, where he is responsible for security programs and standards activities. He holds the CISSP and GSEC certifications. Jim is based in the U.S.
Filed under Security Architecture
Tagged as architecture, NAC, O-ESA, SABSA, security architecture, Security Forum, TOGAF
Theodore Tugboat
Welcome to the Theodore Tugboat mini wiki at The Wikia Scratchpad!
Theodore Tugboat is a Canadian children's television series about a tugboat named Theodore who lives in the Big Harbour with all of his friends. The show was produced in Halifax, Nova Scotia, Canada by the CBC (Canadian Broadcasting Corporation) and the now defunct Cochran Entertainment, and was filmed on a model set using radio-controlled tugboats. Production of the show ended in 2001, but it is still televised in some countries. The show's distribution rights were later sold to Classic Media. The show premiered in Canada on CBC Television, then went to PBS (Public Broadcasting Service), and now airs on Qubo in the US.
The show deals with life-learning issues portrayed by the tugs or other ships in the harbour. Most often, the tugs have a problem, or get involved in a struggle with each other or another ship, but they always manage to help one another resolve these problems and see them through. Their main focus, however, is to always make the Big Harbour the friendliest harbour in the whole world, and to always do a good job with their work-related tasks.
Main article: Theodore Tugboat/Characters/Gallery
The Harbourmaster: Along with all the duties of a real-life harbourmaster, 'The Harbourmaster' is the narrator of the series, and provides voices for the entire cast of characters. He is the only human on the show, and is portrayed in the Canadian and US versions by the late Denny Doherty, and by other performers internationally. The Harbourmaster introduces the theme at the beginning of every episode by addressing an issue that he has in common with the tugs. He also loves to play the tuba and is good friends with a man named "Rodney" (who is never seen).
Theodore Tugboat "the Victorious": Theodore is the title character who lives in the Big Harbour with all of his friends. He's one of the smaller tugs that wears a red baseball cap, and is sometimes offended if someone calls him "cute" or "small". He and his closest friend Hank are the only two harbour tugs (tugs that stay in the harbour) that are not yet ocean tugs (tugs that are eligible to work outside of the harbour). They both share the harbour tug side of the dock and love working together. He's a kind little tugboat that is always friendly to the other ships in the harbour, with the goal of becoming friends with everyone he meets. He loves to sway back and forward to show that he's happy. His biggest dream is to become an ocean tug and to travel across the sea to different harbours. But before he does, he works as hard as he can to make the Big Harbour the friendliest harbour in the world. That's why he is always there whenever someone needs him. It is hinted that Theodore has a crush on Emily. (A life-size replica, Theodore Too, plies the waters of Halifax Harbour.)
Hank Henry "the Volcano": Hank (the Volcano, as he sometimes calls himself) is the smallest, funniest, fastest tugboat in the Big Harbour. He wears a blue tuque and loves to make funny faces and noises as a way of getting attention. He can be very sensitive too, and usually gets ignored for being the smallest. But whenever he feels down, he always turns to Theodore for help or guidance. Sometimes Hank is the one to give a good idea without even knowing it. He also loves to use the word "fresh" to describe something. But out of all the other tugboats, Hank is special because of his good humour and nature to learn and grow from his mistakes. He is performed by the vehicle and puppeteer of "Spud" from Bob the Builder, Rob Rackstraw.
Emily Annapolis "the Vigorous": Emily is the only female tug in the fleet. She wears an old turquoise fishing hat that is very special to her. She loves to travel to different countries and discover new cultures and languages. Emily loves to be admired, but hates to look silly in front of her friends because they always have high expectations for her, and look up to her as a leader. Still she always comes to find that her friends are there to help her, even if she doesn't ask for their help. She usually gets into arguments with George, but they always resolve their differences in the end. No matter how upset Emily gets, she always shows her kind spirits and female strength. It is hinted that Emily has feelings for Theodore.
George Golieth Gargantua "the Valiant": George is the largest and strongest tugboat in the Big Harbour. He wears a purple baseball cap on his head backwards. George loves to show off and can sometimes be a little rude without knowing it. He's somewhat stubborn and always struggles to admit that he is sometimes wrong. He especially loves to tell stories to the other tugs, mostly about himself. Whenever he gets irritated, he blows up a lot of smoke from his smokestack and makes loud noises with his powerful engines. But most of all, George is a hard worker, and never finishes a job until it's done, and always stands up for his friends.
Foduck Fredrick "the Vigilant": Foduck is the harbour's safety tug. He wears a red fireman's hat and is equipped with extra bright spotlights, sonar transceiver and a fire hose. Foduck is always very serious and makes sure all jobs are being performed safely. Foduck is a V tug like George and Emily, meaning he is fully qualified to make ocean voyages, but is content with staying in the harbour to keep it safe.
By Alex Pasternack — Nov 26 2012 Tweet
You might think about big Hollywood movies these days not just as stories, but increasingly as attempts to tackle tough problems. I don’t mean how to fix our educational system or our foreign policy. I mean how to make maximum returns off a multimillion dollar investment—and how to make magic look real. For the VFX whiz kids, this is actually a math problem. The movie is a kind of solution, and we decide if it’s right.
The modern search for better solutions arguably began in the 1970s, with Hollywood’s special effects heavyweights turning to computers for the most cutting-edge film scenes—like the light bike races in Tron, which, as I wrote back when, gave rise to Perlin noise, which allowed the kind of computer-generated natural-looking surfaces that trick us into thinking that we’re really hunting the Opposing Force or visiting Pandora.
That wouldn’t have been possible without a development a few years earlier. In the late 1970s, the graphics gurus at Industrial Light and Magic working on Star Trek II had to make a fly-by sequence of the Genesis planet. Filming a model would not suffice. They would have to generate it entirely with the computer. But they couldn’t just pull off the wire-frame trick, the kind that had just been used in Star Wars for that Rebel briefing on the Death Star attack. This had to look natural. They would need to rely on fractals.
By building the shapes of landscape features with fractals—from the Latin for fractured, the kind of mathematical shapes whose rough edges can mimic a great deal many of the irregularities found in nature – they would be able to generate a natural-looking landscape. This class of shapes generates lines that seem random, but which upon closer and closer inspection reveal inner patterns. Even when you can see a section of a fractal, the length you see would be infinite if you tried to measure the edges.
Fractals were the discovery and the passion of Benoit Mandelbrot, the French mathematician whose reach spread far beyond his field. His recently-released memoir, The Fractalist, writes the Times, isn’t a beautiful book, but it evokes the hard questions his work would answer: “What shape is a mountain, a coastline, a river or a dividing line between two river watersheds?”
And those questions were the same ones that the CGI wizards at ILM faced as they began their work on Star Trek. And this is the solution they came up with, one that would change the landscape of computer generated film—or at least make it look a lot more real.
This behind-the-scenes video explains the process:
Fractals would also prove crucial in generating the geography of the moons of Endor and the Death Star outline in Return of the Jedi. The NOVA documentary Hunting the Hidden Dimension begins with a section of how fractals helped Hollywood:
And if you want to do a bit of CGI on your own, the NOVA website includes a nifty “make your own fractal” application to get lost inside for hours.
[via Motherboard]
Published A year ago One of the downsides of mobile game success is the copycats hoping to grab a piece of the action; others refer to this as imitation being the sincerest form of flattery. That said, none are as blatant as What's the Word? 4 Pics 1 Word from LOTUM. For starters, half of its title is a direct rip off RedSpell's immensely popular What's the Word?, which is currently one of the biggest titles for smartphones and tablets. Here's the kicker: it plays almost exactly the same.
As the title implies, the game features a variety of puzzles, each of which places four photographs onto the screen and asks players to figure out the word that connects the images together. Gamers do this using 12 jumbled letters, tapping each to put it in place. Answer correctly, and they receive a small number of gold coins for their trouble before moving onto the next brainteaser.
How does it all work? Example: a fish, a fishhook, cooked fish and a pier. What do these pics have in common? Four letters: F-I-S-H. Yes, the answer is at times that obvious. Other times, you'll scratch your head in confusion.
Thankfully, you can spend those aforementioned coins to either remove a letter or reveal a letter, both of which cost 90 coins a piece. It's a slow climb to afford these hints, and that being the case, LOTUM will gladly sell you more through in app purchase for varying amounts.
That's really all there is to it, and quite frankly, we can't hate on What's the Word: 4 Pics 1 Word too much. Yes, it's a mirror image of What's the Word?, and we can certainly debate whether this is or is not an issue. At the same time, it's an entertaining game, and App Store reviewers appear satisfied. Considering both titles are free, that only means more puzzles for you.
Download: iOS | Android
What's Hot: Large variety of puzzles, the ability to get help through Facebook, in-app purchase for more coins.
What's Not: Shamelessly copies What's the Word?
Thematic Website
Electronic and Mobile Government, ICT for MDGs, Knowledge Management in Government, Citizen Engagement
DAWLATI (in Arabic means “ My State” ) provides Lebanese Citizens with the following services: Information about more than 4500 administrative transactions in the Lebanese administration in a simple, accurate and constantly d method, Having electronic forms for download and electronic filling and printing, online registration with personalized space and storage of personal documents, and electronic services to be announced periodically with different administrations.
Website: www.dawlati.gov.lb
Mobile applications: DAWLATI mobile applications (ANDROID 4+ / APPLE 6+ /BLACKBERRY)
0 Views | Rated 0.0 | Created On : Nov 05, 2013
Visit | More...
International Journal of eGovernance and Networks (IJeN)
Electronic and Mobile Government, Knowledge Management in Government, Internet Governance
International Journal of eGovernance and Networks (IJeN) is a peer-reviewed publication, devoted to broadening the understanding of contemporary developments and challenges in administrative and policy practices promotion of international scholarly and practitioner dialogs the encouragement of international comparisons and the application of new techniques and approaches in electronic systems of governing. IJeN intends to fill the need for a venue in which scholars and practitioners with different viewpoints bring their substantive approaches to work on various legal, social, political, and administrative challenges related to e-Governance issues. IJeN includes cutting edge empirical and theoretical research, opinions from leading scholars and practitioners, and case studies. Call for Manuscript
IJeN a uses a blind peer-review process and therefore manuscripts should be prepared in accordance with the American Psychological Association (APA) Guidelines as follows: No longer than 35 pages, including all elements (abstract, endnotes, references, tables, figures, appendices, etc.) formatted in Times New Roman, 12-point type, double-spaced with one inch margins. Please do not use the automatic features as well as the footnote feature to endnotes.
Submissions should include the title of the manuscript, an abstract of approximately 150 words, an opinion for practitioners of 100 words, and a list of key words on the title page but do not include the author(s) name on the title page. Please ensure to remove any indications of authorship in the body of the manuscript. The author(s) name, affiliation, and contact information should be listed on a separate page preceding the title page of the manuscript. Please submit your manuscript for review in a widely accepted word processing format such as Microsoft Word.
Submission to IJeN implies that your article has not been simultaneously submitted to other journals or previously has not been published elsewhere.
Submissions should be directed to the attention of:
Younhee Kim
Managing Editor at [email protected]
e-Governance in Small States
Journals, Training Material
Electronic and Mobile Government, ICT for MDGs, Internet Governance
ICTs can create digital pathways between citizens and governments which are both affordable, accessible and widespread. This offers the opportunity for developing small states to leapfrog generations of technology when seeking to enhance governance or to deepen democracy through promoting the participation of citizens in processes that affect their lives and welfare. For small developing countries, especially those in the early stages of building an e-Government infrastructure, it is vital that they understand their position in terms of their e-readiness, reflect upon the intrinsic components of an e-Governance action plan, and draw lessons from the successes and failures of the various e-Government initiatives undertaken by other countries, developed or developing. This book aims to strengthen the understanding of policy-makers by outlining the conditions and processes involved in the planning and execution of e-Government projects.
Going for Governance: Lessons Learned from Advisory Interventions by the Royal Tropical Institute
Knowledge Management in Government, Internet Governance
The 15 cases presented in this book illustrate the different kinds of advice and support that advisors from the Royal Tropical Institute (KIT) have delivered to help partners around the world improve people’ s lives by "going for governance.” Taken as a whole, these accounts show the range of processes and interventions that have helped strengthen governance in diverse settings and situations. Taken individually, each case study can be used as reference materials for a variety of training courses. The aim of this book is to provide ideas and inspiration for those who are asked to advise on governance issues in various kinds of development programs and sectors, or explore opportunities to use innovative and creative governance approaches and tools in KIT’ s joint initiatives with partners in the South.
Masters Degree Online - Public Administration
Public Administration Schools
Electronic and Mobile Government, ICT for MDGs, Knowledge Management in Government, Citizen Engagement, Institution and HR Management, Internet Governance
Masters Degree Online in public administration provides information to current and prospective graduate students who is pursuing a career in public administration or related fields. Its directory allows you to search schools by institution size, geographic area, tuition cost, and school type. Its primary focus is online master degree programs, but we acknowledge that on-campus programs at traditional brick-and-mortar schools are the best options for some students. Therefore, you can search for both online and on-campus programs here.
Click here for Online Masters Degree in Public Administration.
UNCTAD Measuring ICT Website
ICT for MDGs
The Measuring ICT Website provides information on the development of ICT statistics and indicators worldwide, with an emphasis on supporting ICT policies and the information economies in developing countries. The objectives of the Measuring ICT Website are to: provide information to experts and the general public on progress in the field of ICT measurement, particularly by National Statistical Offices and international organizations; promote the discussion between practitioners of ICT statistical work on best practices, experiences, methodology, presentations, theory, etc.; contribute to the follow-up to the World Summit on the Information Society (WSIS); and support the work of UNCTAD on measuring the information economy, and of the Partnership on Measuring ICT for Development.
The Measuring ICT Website is maintained by the ICT Analysis Section of UNCTAD. The Section is part of the Science, Technology and ICT Branch, in the Division on Technology and Logistics.
Galilee International Management Institute
Training Institutions, Public Administration Schools, Training Material
ICT for MDGs, Knowledge Management in Government, Citizen Engagement, Institution and HR Management
Based in beautiful northern Israel, the Galilee Institute is a leading public training institution, offering advanced leadership, management and capacity building seminars to professionals from more than 160 transitional and industrialised countries around the world. The institute enjoys a global reputation as a top management institute, and to date, more than 10,000 senior managers, administrators and planners have graduated from the international programmes at the institute. In addition to its regularly scheduled seminars, the institute also offers tailor-made training programmes, designed to meet the requirements of governments and other international organisations. All programmes are available in English, French, Spanish, Portuguese, Russian and Arabic, and other languages are available upon request.
Click here to visit Galilee International Management Institute.
Approaches to Urban Slums a Multimedia Sourcebook on Adaptive and Proactive Strategies
Knowledge Management in Government
This source book by Barjor Mehta & Arish Dastur (editors) from The World Bank, Approaches to Urban Slums: a Multimedia Sourcebook on Adaptive and Proactive Strategies, brings together the growing and rich body of knowledge on the vital issue of improving the lives of existing slum dwellers, while simultaneously planning for new urban growth in a way which ensures future urban residents are not forced to live in slums. The sourcebook's user-friendly multimedia approach and informal dialogue greatly increase the accessibility of the content, as well as the range of topics and information that are covered. Totaling over nine hours of modular viewing time, the sourcebook will be an essential resource for practitioners, policy makers, as well as students and academics. It contains the latest perspectives on the burning issues, and cutting edge approaches to dealing with the problems that afflict the living conditions of hundreds of millions of poor people. The sourcebook charts unfamiliar waters in two ways.
Training Institutions, Public Administration Schools, Public Institutions, Statistical Databases
The Education Index at PhDs.org is the premier source of clear and educational data about undergraduate and graduate programs in the United States. We use publicly available numbers from the National Center for Education Statistics (NCES), and strive to present them in a simple and easy-to-digest way. Our desire is to make it easy for you to pick the best college you possibly can with this index: a college that fits your financial, social and educational interests and goals.
Click here to visit the Education Index.
The International Council for Caring Communities (ICCC)
The International Council for Caring Communities (ICCC) is a not-for-profit organization that has Special Consultative Status with the Economic and Social Council of the United Nations.
ICCC acts as a bridge linking government, civil society organizations, the private sector, universities and the United Nations in their efforts at sparking new ways of viewing an integrated society for all ages.
Since its inception, ICCC has been committed to the principle that private enterprises and individuals can help society improve communities and social public activities. This is one of ICCC essential goals. Twenty-three renowned world leaders since 1996 have been presented with ICCC "Caring" Awards for their contributions to society.
2014 International Student Design Competition
Music as a Global Resource: Solutions for Social and Economic Issues Compendium - Third Edition
2011 ICCC Compendium on Music As a Natural Resource 2012 International Student Design Competition Winners
Electronic and Mobile Government
Training Institutions
UN Research Institutions
Public Service Awards Programs
16 Feb 2002 shlomif
Fixing the IP-Noise Final Report
The Final Report of the IP-Noise Project which Roy and I wrote was based on an old version of the Mid-Term Report before Lavy fixed a lot of things. Thus, Lavy was not very happy with it, and asked us to correct things before he'll take another look. He said the user's guide which we wrote was very good, OTOH.
Today, my father and I went over the final report and corrected a lot of things. Office XP was installed so it can read the report in the first place, but it still causes some minor glitches. But so far, the document is better than it was before.
The worst case scenario is that we will lose some points due to the brevity of the report. | 计算机 |
Susan left IBM in 1999 to devote more time to teaching and consulting. She co-authored one of the most popular System i Redbooks ever, Who Knew You Could Do That with RPG IV? She and partner Jon Paris make up Partner400, a consulting company focused on education and mentoring services related to application modernization. Susan is also part of System i Developer, a consortium of top educators on System i technology who produce the RPG & DB2 Summit events. Its members include Jon Paris, Paul Tuohy, Skip Marchesani, and Susan.
Jarek Miszczyk is a senior software engineer at the ISV Enablement organization in Rochester, Minnesota. His mission is to provide consulting services to independent software vendors (ISVs), large IBM clients, and other IBM organizations on issues related to DB2 for IBM i. Before joining the ISV Enablement organization in 2000, he worked for several years at the IBM International Technical Support Organization (ITSO), also in Rochester, Minnesota, where he was the leading author of several popular IBM Redbooks on databases.
Jarek holds a masters degree in computer science and has almost 20 years of experience in the computer field. His areas of expertise include cross-platform database programming, performance tuning, and database integration with emerging technologies (such as XML, Java, and Microsoft .NET). Recently, Jarek has expanded his interests into the area of virtualization and cloud computing. He can be reached by email at [email protected].
Jon Paris's IBM midrange career started when he fell in love with the System/38 while working as a consultant. This love affair led him to join IBM. In 1987, Jon was hired by the IBM Toronto Laboratory to work on the S/36 and S/38 COBOL compilers. Subsequently, Jon became involved with the AS/400 and in particular COBOL/400. In early 1989, Jon transferred to the Languages Architecture and Planning Group, with particular responsibility for the COBOL and RPG languages. There, he played a major role in the definition of the new RPG IV language and in promoting its use with IBM Business Partners and users. He was also heavily involved in producing educational and other support materials and services related to other AS/400 programming languages and development tools, such as CODE/400 and VisualAge for RPG. Jon left IBM in 1998 to develop and deliver education focused on enhancing AS/400 and iSeries application development skills.
Jon is a frequent speaker at user group meetings and conferences around the world, and he holds a number of speaker excellence awards. from COMMON.
Thomas M. Stockwell is an independent analyst and writer. He is the former editor in chief of MC Press Online and Midrange Computing magazine and has over 20 years of experience as a programmer, systems engineer, IT director, industry analyst, author, speaker, consultant, and editor.
Tom works from his home in the Napa Valley in California. He can be reached at ITincendiary.com.
Mike Tharp currently holds a position of Software Engineer at Jack Henry and Associates, Inc. Mike graduated from Missouri Southern State University in 1993 after obtaining his BS in Mathematics. He began his IT career after graduation at Contract Freighters Inc. located in Joplin, MO. Mike joined the JHA team in 2001 in the Special Projects division of the Installation Services group. In 2004, he was transferred to Research and Development where he now fulfills his developer role. Mike’s latest area of focus has been programming for performance. He has worked closely with IBM’s Lab Services group in resolving performance issues which has led to several trips to the Benchmark Center located in Rochester, MN.
Mike resides with his wife and two sons in Monett, Missouri, where they enjoy being outdoors playing baseball and attending minor league baseball games. He can be reached at [email protected].
Articles by this Author:
TNT - Most Popular MC Press Books 2012-10-26
AIX Expert 2012/05/23 Text 1
MC Video News Makers 2001/03/11a
2013-10-15 Bookstore Offer MC Press Bookstore 50% Off Books
Local code:
OH 237
University of Minnesota, Twin Cities. Charles Babbage Institute
Charles Babbage Institute Oral History Program
Winograd describes his education in computer science and introduction to linguistics at the Massachusetts Institute of Technology (MIT). He discusses the work of Marvin Minsky and other in artificial intelligence. He describes his move to the Stanford Artificial Intelligence Laboratory and his additional linguistic research at Xerox-PARC. Winograd compares the approach to artificial intelligence at MIT and Stanford. He describes his involvement with obtaining funding from the Information Processing Techniques Office of the Defense Advanced Research Projects Agency.
http://www.cbi.umn.edu/oh/pdf.phtml?id=16
Winograd, Terry Allen
Norberg, Arthur L.
North America; United States
Computer software; Electric engineering; Engineering; Information technology; North America; Oral history; Software engineering; United States
Defense Advanced Research Projects Agency (DARPA)
By using two modems at the same time, the flow of data can be doubled. It takes two modems and two phone lines. There are two methods of doing this. One is "modem bonding" where software at both ends of the modem-to-modem connection enables the paired modems to work like a single channel.
The second method is called "modem teaming". Only one end of the connection uses software to make 2 different connections to the internet. Then when a file is to be downloaded, one modem gets the first half of the file. The second modem simultaneously gets the last half of the same file by pretending that it's resuming a download that was interrupted in the middle of the file. Is there any modem teaming support in Linux?
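The split that teaming does is easy to picture with ordinary HTTP range requests. The following Python sketch is only a hypothetical illustration (it assumes the server supports Range requests and reports Content-Length); it fetches the first and last halves of a file in parallel and stitches them together. The part a real teaming product adds, forcing each request out over a different modem link, is not shown here.

    # Toy illustration of the "teaming" idea: fetch the two halves of a file in
    # parallel with HTTP Range requests, then join them. Real teaming software
    # would also route each request over a different modem/PPP link.
    import threading
    import urllib.request

    def fetch_range(url, start, end, results, index):
        # Ask the server for just the bytes start..end (inclusive).
        req = urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(req) as resp:
            results[index] = resp.read()

    def teamed_download(url):
        # Find the total size, then split the transfer into two halves.
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            size = int(resp.headers["Content-Length"])
        mid = size // 2
        results = [None, None]
        threads = [
            threading.Thread(target=fetch_range, args=(url, 0, mid - 1, results, 0)),
            threading.Thread(target=fetch_range, args=(url, mid, size - 1, results, 1)),
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results[0] + results[1]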
16.2 Modem Bonding
There are two ways to do this in Linux: EQL and multilink. These are provided as part of the Linux kernel (provided they've been selected when the kernel was compiled). For multilink the kernel must be at least v.2.4. Both ends of the connection must run them. Few (if any) ISPs provide EQL but many provide Multilink.
The way it works is something like multiplexing only it's the other way around. Thus it's called inverse-multiplexing. For the multilink case, suppose you're sending some packets. The first packet goes out on modem1 while the second packet is going out on modem2. Then the third packet follows the first packet on modem1. The fourth packet goes on modem2, etc. To keep each modem busy, it may be necessary to send out more packets on one modem than the other. Since EQL is not packet based, it doesn't split up the flow on packet boundaries.
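The scheduling idea can be shown with a toy model. The Python sketch below is purely illustrative (it is not kernel code; the real work happens inside the kernel's PPP multilink and EQL drivers). It deals a list of packets out across two links, optionally weighting one link more heavily to keep a faster modem busy.

    # Toy model of inverse-multiplexing: deal packets out across two links.
    from itertools import cycle

    def split_packets(packets, weights=(1, 1)):
        queues = ([], [])                              # one queue per modem
        schedule = cycle([0] * weights[0] + [1] * weights[1])
        for packet, link in zip(packets, schedule):
            queues[link].append(packet)
        return queues

    # Equal links: packets alternate modem1, modem2, modem1, ...
    print(split_packets(["p1", "p2", "p3", "p4", "p5"]))
    # A 2:1 weighting sends two packets to the first modem for every one
    # packet sent to the second.
    print(split_packets(["p1", "p2", "p3", "p4", "p5"], weights=(2, 1)))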
EQL
EQL is "serial line load balancing" which has been available for Linux since at least 1995. An old (1995) howto on it is in the kernel documentation (in the networking subdirectory). Unfortunately, ISPs don't seem to provide EQL.
Multilink
Starting with kernel 2.4 in 2000, experimental support is provided for multilink. It must be selected when compiling the kernel and it only works with PPP.
Increasingly, the Department of Defense (DoD) and federal agencies acquire software-intensive systems instead of building them with internal resources. However, acquisition programs frequently have difficulty meeting aggressive cost, schedule, and technical objectives.
The SEI works directly with key acquisition programs to help them achieve their objectives. Teams of SEI technical experts work in actual acquisition environments in the Army, Navy, and Air Force, as well as other DoD and civil agencies, applying SEI products and services in specific contexts.
Our vision is to facilitate the rapid establishment of agile teams composed of acquirers, developers, and operators using SEI technologies to provide evolutionary, high-quality, cutting-edge software-intensive capabilities to the warfighter. Acquisition program managers are challenged not only to grasp practical business concerns, but also to understand topics as diverse as risk identification and mitigation, selection and integration of commercial off-the-shelf (COTS) components, process capability, program management, architecture, survivability, interoperability, source selection, and contract monitoring. The SEI has spent more than two decades compiling a body of knowledge and developing solutions for these topics. The SEI is focused on direct interaction with the defense, intelligence, and federal acquisition communities by
transitioning technologies and practices to improve DoD software-intensive systems
performing diagnostics such as Independent Technical Assessments (ITAs) and Independent Expert Program Reviews (IEPRs)
helping with RFP preparation
helping with technical evaluations of proposals and deliverables
collaboratively developing acquisition technologies and practices
transitioning technologies and practices to the DoD acquisition community's collaborators
reviewing and advising the DoD on acquisition policy related to software-intensive systems
The SEI is focused on delivery, support, and integration of software-intensive systems acquisition practices to help acquisition program offices. The SEI is positioning itself as a facilitator and leader of a community of practice for the acquisition of software-intensive systems.
Spotlight on Acquisition Support
We Have All Been Here Before: Recurring Patterns Across 12 U.S. Air Force Acquisition Programs
presentation given by William Novak and Ray Williams at the 2010 Systems and Software Technology Conference (SSTC) on April 29, 2010
Practical Risk Management: Framework and Methods
14 March 2010 Our IGF build of Spectre is live! Even if you've played the game before, you should jump over to the Play page and check it out... we've made some huge improvements since January. PC and Mac versions are available. 13 March 2010 Thanks to everyone who gave us feedback, love and sandwiches at GDC... it was an amazing week, and we are happily exhausted. We'll have a new build up very soon. In the meantime, here is some wallpaper we made, just for you.
10 March 2010 Vaguely Spectacular is at GDC, and tomorrow we'll be showing off a new, much improved version of Spectre. We'll keep you posted as the week unfolds
8 February 2010 We've made one last update to our public build of Spectre before GDC. Please, download it, play it, and send us comments... we'll be spending this month refining and reworking the gameplay, and your feedback is essential so that we know what should change (and what we should keep!)
18 January 2010 Amazing news for Spectre... we've made it into the IGF Student Showcase! If you're visiting us for the first time, 1) hello and 2) the full game is available for free on the "play" page!
6 December 2009 We've made one last version of Spectre for '09, and it has a lot of major fixes and changes that we think you'll enjoy. Most importantly, though, we've tested the compression on several machines, so you should be able to download hassle-free! Thanks for your patience, we hope you enjoy the updated game.
5 October 2009 Allright folks, a new version of Spectre is live and downloadable. On the PC side, it should solve the ZIP file problems people have had... and on both sides, it should run smoother and less bugaliciously. 4 October 2009 We are just decompressing from a very successful Indiecade... both for Spectre specifically, and for The Peanut Gallery at large! Spectre received an honorable mention from the jury, and our co-conspirators/collaborators/fellow travellers won the Audience Award AND the Finalist's Award for Minor Battle!
24 Septempter 2009 Wow! Spectre is going to Indiecade!
25 August 2009 Lots of new features on the site. Check out the Spectre trailer on our media page, and leave a comment for the community on the talk page. And best of all, the game is now available for download right here. Check it out and let us know what you think!
23 August 2009 Spectre Numerology: 112 Memories. 52 Themes. 14 Overworlds. 7 Vignettes. 1 Old Man in the Snow. (Big announcement coming soon!)
9 May 2009 Tonight is the opening ceremony of the thesis show, where Spectre is currently on display. Come by if you're in Los Angeles and want to check it out. If you can't make it to USC this week, just sit tight. We'll be releasing the game as a free download within the next couple weeks.
11 April 2009 Lots of exciting things happening here at Vaguely Spectacular headquarters. Our hard work over the past several weeks is beginning to pay off, and we're very pleased with the results. A few more weeks, and we'll be ready to show the game off to everyone!
25 January 2009 There's a lot to be done! We're taking the lessons we learned from building and testing the first iteration of this game in December and overhauling the game. It's going to be a lot of work, but all of Vaguely Spectacular is excited to tackle it! (Check out a behind-the-scenes shot from one of our design sessions here!)
16 January 2009 From an email sent out to the team today: "Now that we have a solid prototype version of Spectre, what changes need to be made? ... Everything in the game is potentially up for grabs if you think there are problems that need fixing." We have a whole semester to go before the thesis deadline and IndieCade submission, so we're going to be going back to the design table and figuring out how the experience can be improved.
7 December 2008 Lots of changes have been made on the website over the past couple weeks. We've added several screenshot images to the media page and pictures of ourselves on the team page. We've also added a forum where you can go to ask questions or talk about indie games.
15 November 2008 First build of Spectre released for the IGF competition. Stay tuned for more information about the game and the development team!
© 2008 Vaguely Spectacular
fgu,
ZanazazNovember 21, 2010 at 1:34 AMWhew, good luck with that. Sounds like you're going to see how good a detective you can be, because I think that's what's going to take to find out who owns it now.ReplyDeleteZanazazNovember 21, 2010 at 1:56 AMWell, boredom and a sense of curiosity led me to perform a google search. You might check with these guys, Mystic Station: http://www.mysticstation.com/about.htmlI'm assuming it's the same C&S, but it could be something different. I didn't dig that deep... at least not yet.According to the wiki page there is also an unauthorized edition pdf floating around.ReplyDeleteRobert Saint JohnNovember 21, 2010 at 2:12 AMJames, as far as I know, Britannia Game Designs still owned the rights to C&S until very recently -- they liquidated just a few weeks ago. Between that and Wilf's recent death, the rights to C&S may very well be in limbo. You might be able to get clarification from BGD's Steve Turner, he's on Facebook and in the C&S FB group: http://tinyurl.com/2fqa8kcReplyDeleteRobert Saint JohnNovember 21, 2010 at 2:20 AMOh, and AFAIK, MSD was just a US distributor for BGD's C&S 4th Ed here in North America.ReplyDeleteReverance PavaneNovember 21, 2010 at 2:25 AMI believe, after extensive litigation, Ed got the rights back from FGU back in the early 1990s (I'd have to check my archives to find out exactly when). Because of the problems with FGU I don't believe he was willing to alienate the rights to the game any more, and the rights to the game were actually invested in Ed's own company, Maple Leaf Games.The Highlander Games production of the game was an explicit grant of licence by Ed and Maple Leaf Games.However Ed was unhappy working with Highlander, and transferred the licence to publish C&S to Britannia Game Designs (which I believe was essentially little more than a small collective of dedicated fans based in the UK). Steve Turner was the managing director. [Other important names include Colin Spiers, Dave Blewitt, and Paul Perano.]However the actual rights to the game should have remained with Maple Leaf Games, and since this was a non-public company should have passed to whoever was heir to Ed's estate, since it was essentially a holding company.This is all based on unreliable recollections of discussions on the Loyal Order of C&S (LOCS) mailing list, so may be totally wrong.However (as of 2010) Mystic Station Designs is still apparently publishing C&S and Skillscape (the 3rd, 4th and Light editions of the rule system for the game) )stuff and holding "living C&S tournaments," so they are probably a good contender for having gained the licence. <grin>ReplyDeleteRobert Saint JohnNovember 21, 2010 at 2:27 AMHmmm, I found a post from Steve Turner dated July 2010 at the C&S Forum. Obviously not current, but may give you a good idea of how difficult it might be to get the answer to this, or work with those involved:"Despite all rumours Brittannia Game Designs Ltd is still around and we are still working on the official 5th edition. As for any other versions that may be produced in Canada, we own the trademarks for the game in Canada and if any version is produced we shall request the trade department of the UK Embassy in Canada to handle the matter for us. We have spent to long on C&S to just give it away, its cost me 2 well paid jobs trying to put out products for a the fan base. After 15 years I have yet to receive any reward financial or otherwise and have invested thousands of pounds of my own money. 
Now if someone wishes to offer GBP 75,000 for the game then fine, but thats the estimate of how much time and money I have personally invested let alone others.C&S is a wonderful game but also a great way to lose money as a publisher. Now Francis is working on the product but we all have a real life. If some one can fund us to the tune of GBP 60,000 a year (that covers Sue and my salary) then we can churn out product for you. Otherwise be patient or find the above price to by the game. Regards Steve Turner"http://tinyurl.com/24xsz9wReplyDeleteNickNovember 21, 2010 at 3:35 AMThis is as per the wikipedia page for Brittania Game Design:"According to Companies House, Britania Games Design Limited has not filed accounts for 2008 or 2009, and was liquidated on 30 October 2010."ReplyDeleteFabio Milito PagliaraNovember 21, 2010 at 5:11 AMI was involved in C&S and the translation of the light version in Italian... also with playtesting of the 4th edition, at the time I had phone contacts with Ed :( and mail with Steve....sadly I don't know who is holding the rights of itwhat memory you bring back :)ReplyDeletefinarvynNovember 21, 2010 at 8:39 AMI don't have anything to contribute except to marvel at the wonderfulness of the internet to allow folks to jump in with answers like this so quickly.I played C&S and it wasn't "my thing" so I didn't pay much attention to it after that, and it's so neat that the game is still out there and that folks are still playing it!ReplyDeletePatrick TinglerNovember 21, 2010 at 11:24 AMIf you find out, please let us know. I started thinking about the game in the past few weeks too and found what the others have found (the liquidated information and post from Steve Turner). It's a shame that it appears to be dead without the possibility of it continuing with a new publisher. Since it appears that the game hasn't made any money, too bad it wasn't released as OGL or under the Creative Commons for fan support.ReplyDeleteLoneIslanderNovember 21, 2010 at 11:36 AMI got nothing, but it seems that everyone else has you covered lolReplyDeleteAnthonyNovember 21, 2010 at 4:00 PMI'd be fascinated to see what comes of this. I had both the Highlander and BGD versions of the game, and really like C&S's magic system.ReplyDeleteCaptain JackNovember 21, 2010 at 11:20 PMThis comment has been removed by the author.ReplyDeletegrayhawkfhNovember 22, 2010 at 2:04 PMFWIW, I've done a little work for Mystic Station Designs, and am good friends with the managing partners, and have asked them to contact you to try and answer your questions.Hope this helps.ReplyDeleteKoryNovember 22, 2010 at 2:12 PMHello:My name is Kory Kaese. I am one of the Managing Members of Mystic Station Designs, LLC. We are one of the companies that has a license for producing C&S materials. If you have questions concerning C&S please let me know? I can be contacted at:[email protected] MaliszewskiNovember 22, 2010 at 7:10 PMThanks to everyone who's been providing me with links and contacts regarding C&S's current status. I'm in the process of sorting through all the details and I'll make a post about it once I have a more complete sense of the situation.ReplyDeleteedNovember 30, 2011 at 9:32 PMC&S is still held by Brittanna, who did not dissolve. 
I think there was an online shop of a Similar name that did.BGD had some problems over the last few years, litigation over the trademarks and the death of Steve and Sue's son being the most prominentHoweverC&S 5th is being worked on, an updated and expanded C&S Essence was just published on Drivethrurpg.com and a C&SLight update is on the cardsReplyDeleteAdd commentLoad more... | 计算机 |
So I found a number of articles on how to do this and set up PHP code to do the verification. That took a couple of days. But when I ran it, it didn’t work. It took me another couple of days before I found out that the reason was my website host blocked port 25 where SMTP goes as a spam prevention policy. The research told me there’s no way around that. Find a host who doesn’t block it. So next I tried running it from PHP on my own machine. That would go through my ISP here in Winnipeg who gives me my internet access. Nope. Port 25 blocked here as well. Without doing the SMTP checks, only 15 of the 685 addresses could be proven wrong. The rest were unknown. I knew there were a lot more than that. So I needed something with the SMTP checks. Downloading an email verification program didn’t help. They need port 25 as well. Finally I was able to write a routine that accessed an online email checker and check the emails one by one. I was willing to pay for this service, but I could not find one that would do it in a batch online manner for me. So here goes, a one shot check of 685 addresses. Painful but necessary. As I write this, I’m about half way through. Looks like about 45% of the email addresses from back then were good. That will make about 300 of them. Was all this effort worth it? Now that I look back, possibly not. But I didn’t know it at the time.
I have to wait to hear back from the SMTP mailing service I signed up for to know how to configure phplist to use them for my newsletter, so it will still be a few days before the newsletter is sent.
So tomorrow it's back to work on Behold - let's get that log file working.
Coverity Expands Global FootprintAnnounces the opening of four new officesSAN FRANCISCO, Feb. 13, 2013 /PRNewswire/ -- Coverity, Inc., the leader in development testing, today announced the opening of four new offices, as well as the expansion of three existing offices, across Asia, Europe and North America.
The company cited the rising worldwide demand for development testing across all industries as the reason for its expansion.
To increase its customer support in two of the fastest growing economies in the world, Coverity opened new offices in Bangalore, India, and Beijing, China. The company also opened a new C# Center of Excellence in Seattle, led by veteran Microsoft developer Eric Lippert, and a new field office in Paris. In addition, Coverity is expanding its existing footprint with increased office space in London, Boston and Calgary, Canada, and increased personnel in Munich, New York and Toronto.
"The world is increasingly becoming software-driven, and companies can no longer ignore the business impact that a software failure or security breach can have on their brands or on their bottom lines," said Anthony Bettencourt, CEO for Coverity. "Coverity is committed to the continued success of our 1,100+ customers with a platform that integrates quality and security testing seamlessly into their respective development processes. With the new offices, we'll be able to better support our customers in these key regions and continue on our path of rapid growth and new customer acquisition."
Founded in 2003, Coverity has grown from its roots as a project born in the Computer Systems Laboratory at Stanford University to a global enterprise with more than 275 employees, based in 11 offices that span ten countries and three continents.
About Coverity
Coverity, Inc., (www.coverity.com), the leader in development testing, is the trusted standard for companies that need to protect their brands and bottom lines from software failures. More than 1,100 Coverity customers use Coverity's development testing platform to automatically test source code for software defects that could lead to product crashes, unexpected behavior, security breaches or catastrophic failure. Coverity is a privately held company headquartered in San Francisco. Coverity is funded by Foundation Capital and Benchmark Capital. Follow us on Twitter or check out our blog.
Joe Greco6/2/2003 Post a commentEmail ThisPrintComment
Keep it simple, stupid! That could well be the mantra for CAD developers today as they strive to make their software easier to use at the same time they add new functionality.
"Everyone knows the advantages of 3D software today, like the ability to make changes to the model and have them propagate to the 2D drawing," says Aaron Kelly, director of product management at SolidWorks (Concord, MA). "Now the task is to make it easier to use."
He's not alone in that quest.
PTC, with 300,000 commercial users and almost five times that many educational seats, spent several hundred million dollars and several years morphing Pro/ENGINEER into Pro/ENGINEER Wildfire. Autodesk developed Inventor even though it already had a powerful 3D CAD program called Mechanical Desktop. And Dassault Systemes completely changed the look and feel of CATIA from Version 4 to Version 5, risking the alienation of customers who were used to the former product. In each case, the software developers re-tooled their products at least in part to make them easier to us.
Each of those products also includes new functionality beyond ease-of-use features. After all, says Jim Gross, an aerospace/rocket engineer for the U.S. Navy, the most important thing is that the software has the power to get the job done. Even so, ease of use is still key.
Engineers who are happy with the user-interface changes making their CAD programs more intuitive can thank Bill Gates.
"During the lifespan of Pro/ENGINEER, Windows� became the standard operating system, so people now expect a certain amount of ease of use," says Michael Campbell, a vice president at PTC. In addition, the user profile changed, he says, so even though there are now more people using CAD, many are not engineers. "They don't use CAD as often, so it has to be easier," Campbell asserts.
SolidWorks' Kelly agrees, and adds another factor behind the efforts to make software easier: "Many users are coming from other systems and they want the transition to be easy." Meanwhile Andrew Anagnost, senior director of product and solution marketing at Autodesk, says most products are already functional, "so the key is making the user more productive via a better user interface." In fact, Anagnost adds, "Autodesk has made choices in terms of what kind of functionality we roll out, based on its impact on ease of use." He admits that Autodesk has not rolled out its full surface environment primarily because of user interface issues.
Most CAD developers have teams dedicated to improving ease of use in their products. "We don't have a performance team, or a 'flashy-feature' team, but we do have an ease-of-use team," notes SolidWorks' Kelly. At EDS, they've gone a step further. Dan Staples, director of the Solid Edge business unit, says they even have a cognitive psychologist who reviews user-interface design ideas and determines how humans react to them. He says they also have their developers listen in on support calls from time to time, to better understand the struggles of the everyday user. Term Limits
So what makes a CAD program easy to use, anyway? It turns out that developers and users have both similar and different thoughts about that. Developers talk about consistency�making sure the program not only looks and feels like other Windows applications, but also provides consistency within the developer's own tools, so that users can apply their knowledge of how one tool works in order to learn another. Easier job: Krebs Engineers designs sophisticated mining equipment in CATIA. Event though the software is much more functional than the company's previous software it's ease of use for the engineers has improved. Developers mention other factors as well. For example, they try to infer what the operator wants to do, and as a result, save steps. "That's what it is all about," says SolidWorks' Kelly. "Pretty icons are nice, but in the end we want to make the user more productive." Speed is an important part of usability. Robert Bou, president of Ashlar-Vellum, one of the pioneering CAD companies in terms of user interface, points to an old IBM study that found if a user has to wait more than two seconds for a command to be carried out, he starts to lose his train of thought. So one of the goals of Ashlar's software is not to disturb this thought process. Staples at EDS points to how taking advantage of improved processing speed has afforded Solid Edge new capabilities that make it easier to use by allowing dynamic editing with shaded previews. This instant visual feedback allows the design process to continue uninhibited.
But some of the user interface philosophies of the developers are 180 degrees from one another. For example, several developers, including SolidWorks, mention that a good user interface doesn't hide options, and that users shouldn't have to remember where a command is buried. However, Staples, when discussing Solid Edge, talks about the concept of progressive disclosure, where users are not shown all options at once. "Humans review options available to them and then proceed, but if you bombard them with all the options at once, you make them think a lot more than they need to at that point. It is a lot more interruptive to their workflow," he says. He adds that it's important to talk to the engineer on his terms. "That means you don't use a sweep command to create tubing and wiring. You create a separate module with specific features." Not Always Obvious
While users generally agree with these developer points, they also cite real-world examples of ease of use, some of which would not be obvious by taking a quick glance at the software. For example, CATIA 5 user Kevin Soukup, senior mechanical engineer/CAD administrator at Krebs Engineers in Tucson, AZ, points to recent changes made to a CATIA feature called Multiple Body Parts that now makes his work a lot easier, although at first glance many won't consider this an ease of use issue. The changes make the parts more stable and require less work when the components that make up the multiple body parts are reorganized. Dassault Systemes also made it easier for someone who is receiving a file with Multiple Body Parts to understand what is going on. "This saves a lot of time," says Soukup, "because the sender doesn't have to explain everything." Soukup also says missing features can make the design process more tedious, which impairs ease of use. Because CATIA doesn't have a spell checker, he has to import drawings into AutoCAD in order to check for typos, which takes time. Understand the Process
Ease of use is also directly connected to how well developers understand the processes that their users employ, without trying to change them. Do developers try to change users' processes? Krebs' Soukup says yes. He cites very complex sheet metal parts his company designs. "They are made up of hundreds of parts, but when we send it to the fabricator it gets welded and comes back as one part, and that's the way we want to track it," he explains. However, SmarTeam (Dassault Systemes' PLM software) wants to count it as 200 parts. "I would have to hire one guy just to check parts in and out of SmarTeam if that was the process we used�it just doesn't make any sense to us." He says that they are working with Dassault Systemes to find a solution to the problem.
T.J. Fisher, president of Arizona Applied Engineering in Prescott Valley, AZ, is a mold designer and SolidWorks user who says he has even had to change his actual design to make complex parting lines work. While he is generally happy with the application, sometimes he has to make a hole one-tenth bigger or smaller in size, for example, in order to have the software figure out the parting line without any problems.
So what happens when a company "moves up the CAD ladder," that is, migrates from one system to another with greater capabilities? Monaco Coach Corp. (Bend, OR) is making the transition from AutoCAD to Inventor, partly due to the ease-of-use issue. Engineer Dolan Classen notes that most users are easily adjusting to the change saying, "It's not hard, it's just different from the fact that there is no command line to how you put an assembly together."
Honeywell Inc.'s special equipment department moved to IronCAD because of ease-of-use issues with its previous CAD software. "It's made life much easier because so many things are now much more visual," says Mechanical Technician Carl Steien. Soukup at Krebs views CATIA as easier to use than AutoCAD, even though it does more. In fact, even as more features are added, it's getting even easier, he says. He cites new CATIA entities called thin fiber elements, which allow users to create open solids, thus making the creation of certain objects a lot simpler.
So the notion that more robust software will be more complex and harder to learn and use doesn't always hold water. Still, Fischer at Arizona Applied Engineering adds, "SolidWorks is starting to fall into the trap where they are trying to be everything to everybody and are getting to the point where they are potentially getting more complicated." However, he also notes that new features generally work just like existing ones. More to Do
"If I go to a computer store and buy a $30 Windows package to help me touch up photos or balance my checkbook, I expect it to be easy and I don't even read the manual," says SolidWorks' Kelly. "CAD should be the same way." Others agree.
According to Autodesk's Anagnost, a key challenge for his company is getting users to do conceptual design in 3D and then turn that into a production model. In fact, Classen at Monaco Coach still does 2D conceptual design in AutoCAD, before moving to Inventor, simply because it is what he is most comfortable with.
While PTC's Campbell believes Wildfire was a big step forward, he notes that there are still many areas where its user-interface paradigm has yet to be implemented, such as in their sheet metal module and some of their other applications. As a software reviewer, I have found this to be true. Additionally, I personally found minor inconsistencies and some lingering aspects of the old user interface. Wildfire is not alone, as I have found this to be an issue in almost every CAD application I have used. Greg Milliken, vice president at Alibre Inc. believes that his Alibre Design application is easy to use, but also notes that it can be more polished. "Half of the developmental effort of the next version will focus around user interface," he says.
What's interesting is that PTC and Alibre are companies on the opposite ends of the price scale; and while they both know the importance of a good user interface, both realize that building core functionality first is more important. Says the U.S. Navy's Gross, "When your needs get very specific, you just want something that will do the job." Gross says that he often needs to employ sophisticated custom-built analysis programs when doing his designs, and he doesn't mind working with an unintuitive user interface in order to get the needed results. Strides in ease of use aside, what seems to be missing in CAD today are major user-interface breakthroughs. A few years ago, think3 introduced the ability to enable CAD commands via spoken instructions inside its thinkdesign CAD software. I tested this and not only did it work, but it also saved a lot of time. Yet fewer than 50% of thinkdesign customers use it. Why has no other vendor committed the resources to build something similar? Some say it's a gimmick, but it boosts productivity.
Perhaps the profits in providing training are too great, so we may never see CAD software that is as easy to use as that $30 general-purpose Windows application. However, this is a line the CAD industry must consider crossing, if it really wants to appeal to the mass market.
Contributing writer Joe Greco can be reached at [email protected].
The Lightroom Experts
A 7.5 Hour Multi-Segment Training Video with Michael Reichmann and
Jeff Schewe
FREE PREVIEW Learn
Lightroom 2 From The Experts
This Tutorial Covers The Features of Lightroom 2.x
And a Fresh Look at The Original Features of v.1
Since its introduction over three years ago, Lightroom has become
"The place where photographers now live" .
The Luminous Landscape Guide to Lightroom
2 is a new and
comprehensive video tutorial. It covers the new features and tools in Lightroom
2 and also gives a fresh look at all of the original features of Lightroom
v.1x. Adobe
Photoshop Lightroom 2.6 is available from the Adobe web
site for US $299 and as an upgrade for Version 1.x owners from $99. There
is a free 30 day Trial Version available as well. ________________________________________________________________
Lightroom 3 Beta User? If you are using the latest Beta of Lightroom 3 and are new to Lightroom, get a head start on using Lightroom 3 with our Lightroom 2 Tutorial. At least 80% of the features of Lightroom 3 are in Lightroom 2.
All purchasers of our Lightroom 2 Tutorial will be eligible for a discount on the Lightroom 3 tutorial on its release.
About The Tutorial The Luminous
Landscape Guide to Lightroom 2 is comprehensive and
in-depth. It consists of nearly 8 hours of live video training. The
price is just US $39.95.
The Guide to Lightroom 2 is designed to be viewed on a computer monitor
using VLC Player, QuickTime Player or other media players. Windows, Mac, and Linux systems
are all supported. Conversions to other formats are detailed on our FAQ. The Guide to Lightroom 2 is produced in High Definition video suitable for playback on a modern computer with a powerful graphics card.
If your computer is less than two years old you should have no problem playing these videos. There is a Standard Definition version available here with a smaller demand on the CPU & graphics card for those who have older computers (Mac G4, G5 & older Wintel machines). The video can also be viewed on a television; see
our FAQ. Download Video Means: No
shipping costs, no import taxes or duties. No delays. Forty seperate files of the Lightroom V2 Tutorial are
available for download. They are enclosed in nine zipped files for download. The files average between 10 and 30 minutes each
in length and each covers a discrete topic. The files average 100MB in size and total close to 4GB for the complete 40 file tutorial. You can (and should) download each file seperately.
Start watching right away while you download additional files. The video
files can be downloaded at any time from your account. There is no time or
download limit. The files may also be transfered between hard drives. We sincerely
hope that by not locking or restricting these files in any way that we
are contributing to your ease of use, but also that you will respect
our copyright and the huge amount of effort that went into creating
these turotials. Please do not share these with others. We have made
the price low enough that everyone should be willing to purchase their
own copies.
The Contents
A comprehensive Table of Contents can be found here.
This live video tutorial covers Lightroom in depth, with the
type of detail and focus that only Michael and Jeff, two of the program's internal
alpha and beta testers, can provide. We look at each of the program's modules – Library
– Develop – Slideshow – Print – Web – and
explore their features and functions in detail. Our orientation is toward workflow,
which is what Lightroom excels at. You'll learn all of the keyboard shortcuts,
and how to take best advantage of each of Lightroom's powerful new tools.
If you're the type of person that learns best by a live demonstration
and hands-on approach, rather than from a book, then The
Luminous Landscape Guide to Lightroom 2 is for you. ________________________________________________________________
Upgrade your Lightroom Library catalog from 1.x to v.2 Free Video ________________________________________________________________
A FAQ for Download Video and QuickTime can be found here.
About This Video The video was shot in May 2008 in Toronto using late release
candidates of Lightroom 2. The video cameras used were Sony EX-1 & Sony
HDV. Equipment used: - Mac Book Pro 17" & 15" Special Thanks
Henry Wilhelm, who joined us on the photography trip to Niagara
Falls and for a day of the taping in Toronto. Credits
Shot and co-edited by Christopher Sanderson with additional editing by Mark Guertin.
All video is ©2008 Terra Luma Inc. All Rights Reserved
Note that because this tutorial is a download
there are no shipping costs, import taxes, or duties.
Learning Lightroom 2 in-Depth Now!
July, 2008 Filed Under: Videos
launchpadblog
Everything in Launchpad
Over the summer, Jono and I have been compiling a list of all the features in Launchpad.
While the help wiki and the heads of various members of the Launchpad team are a pretty good guide to everything that's in Launchpad, we haven't had a canonical, comprehensive list of Launchpad's features. Obviously, having that one page makes it easier to keep track of what's there and to think about what Launchpad is and what it isn't.
So, what do we consider to be a feature? Really, it’s anything where someone can interact with Launchpad or where bugs can live. Simple as that.
If you think something's missing from this list, or needs more explanation, please do go ahead and edit the wiki page.
Launchpad’s feature list.
This entry was posted by Matthew Revell on Friday, September 17th, 2010 at 4:05 pm and is filed under General.
One Response to “Everything in Launchpad”
gregorywissing Says:
September 30th, 2010 at 5:53 pm
Dear Matthew, and fellow Gentlemen developers, Wizardly gentlemen no doubt. To me as a compiling dummy, and I bet thousands of others like me, it is not utterly clear what Launchpad is. I do understand that it is a site for open source constructors to test and launch their magic. On the other hand I had to subscribe to find out more about open source on the USER side of things. It is adorable and brave indeed to initiate free software (who can be against that?), but the developed, and offered software seems to live in a world of its own, which is the world of programmers and/or compilers, a world totally strange and intimidating for us dummy users.
I assume there is a wish to have open source apps utilized by lay people, or even as many people as possible, it would then be useful if those people would UNDERSTAND the workings. We are not familiar with command lines, or ANY machine language at all. We can only handle apps that work with mouse clicks on visual interfaces with the machine process safely hidden in the background.
I am a music composer (classic) and getting more and more annoyed with apps on the market and Apple who seems to mess up its own Logic as a composer tool.
Hence, I, and a number of fellow composers are looking around if a WORKING tool can be found in the open source world, and get rid of Mac’s vending strategies.
Yes, there are goodies there (Lilypond, Rosegarden, fluidsynth, etc), but to make these apps work, and more so work together, one has to be a command line wizard! Of some of the offered apps it is not even clear if some form of Linux needs to be installed beforehand.
I think, virtually every routine to (let’s say) mimic the workings of Logic, Cubase, Sibelius and such can be found on the open source, but I think it would be useful to pack everything in ONE package using visual interfaces (buttons, faders, etc) to work with. I think you people can do this easily.
As a user and graphic designer I could gladly lend you people a hand on the lay-out side, or even give a clue of what makes a good composing tool. The buzzwords should be; CLARITY, WORKFLOW and USER FRIENDLINESS.
For me, things do not necessarily have to be free. If too much work is involved some sort of payment could be thought of.
Or, should I start a project on your site to try and combine all the goodies? Since I am not a programmer that would very much be a request thing, I could write a USER receipt? All in all, I have great respect for the open source world, and I do check matters and even try some every now and then, but as said I'm equally amazed by its refusal to live in the world of, well, normal people. Silly as they are, they are many.
with regards, Greg
Rotterdam Netherlands
What is the purpose of schema.org?
Why are Google, Bing, Yandex and Yahoo! collaborating? Aren't you competitors?
There are lots of schemas out there. Why create a new one?
Is schema.org a standards body like the W3C or IETF?
How does schema.org relate to Facebook Open Graph?
What's coming next? How will schema.org evolve?
Who is managing schema.org on an ongoing basis? Can other websites join schema.org as partners and help decide what new schemas to support?
Is schema.org available in multiple languages? When is that coming? What languages is the markup available in?
How do I mark up my site using this schema?
Why should I add markup? What will I get out of it? How will the data be used?
This is too much work. Why can't you just extract this data automatically?
I have already added markup in some other format (i.e. microformats, RDFa, data-vocabulary.org, etc). Do I need to change anything on my site?
My website contains content that is of a type that is unsupported. Are you going to add that type? How do I mark it up in the meantime?
Do I have to mark up every property?
Why microdata? Why not RDFa or microformats?
Why don't you support other vocabularies such as FOAF, SKOS, etc?
Where can I give feedback, report bugs, etc.?
What do you mean by "Schema Version 0.9x" that is on every schema page?
Schema.org is a joint effort, in the spirit of sitemaps.org, to improve the web by creating a structured data markup schema supported by major search engines. On-page markup helps search engines understand the information on web pages and provide richer search results. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. Search engines want to make it easier for people to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure.
Currently, there are many standards and schemas for marking up different types of information on web pages. As a result, it is difficult for webmasters to decide on the most relevant and supported markup standards to use. Creating a schema supported by all the major search engines makes it easier for webmasters to add markup, which makes it easier for search engines to create rich search features for users.
Creating a new schema with common support benefits webmasters, search engines and users.
- Webmasters: Schema.org provides webmasters with a single place to go to learn about markup, instead of having to graft together a schema from different sources, each with its own rules, conventions and learning curves.
- Search engines: Schema.org focuses on defining the item types and properties that are most valuable to search engines. This means search engines will get the structured information they need most to improve search.
- Users: When it is easier for webmasters to add markup, and search engines see more of the markup they need, users will end up with better search results and a better experience on the web.
No. Schema.org is a collaboration, in the spirit of sitemaps.org, between Bing, Google, Yahoo! and Yandex to make it easier for webmasters to provide us with data so that we may better direct users to their sites. Schema.org is not a formal standards body. Schema.org is simply a site where we document the schemas that these major search engines will support. Schema.org is a collaboration between Google, Microsoft, Yahoo! and Yandex - large search engines who will use this marked-up data from web pages. Other sites - not necessarily search engines - might later join.
Facebook Open Graph serves its purpose well, but it doesn't provide the detailed information search engines need to improve the user experience. A single web page may have many components, and it may talk about more than one thing. If search engines understand the various components of a page, we can improve our presentation of the data. Even if you mark up your content using the Facebook Open Graph protocol, schema.org provides a mechanism for providing more detail about particular entities on the page. For example, a page about a band could include any or all of the following:
- A list of albums
- A price for each album
- A list of songs for each album, along with a link to hear samples of each song
- A list of upcoming shows
- Bios of the band members
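To make that concrete, here is a rough, hypothetical sketch of what such a band page could look like in schema.org microdata. The band, album, price and event are invented, and the exact type and property names used here (MusicGroup, MusicAlbum, Offer, MusicEvent, album, offers, event) are assumptions that should be checked against the schema.org type pages rather than taken as authoritative:

  <!-- Hypothetical example; verify type and property names on schema.org -->
  <div itemscope itemtype="http://schema.org/MusicGroup">
    <h1 itemprop="name">The Example Band</h1>
    <div itemprop="album" itemscope itemtype="http://schema.org/MusicAlbum">
      <span itemprop="name">First Example Album</span>
      <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        <span itemprop="price">$9.99</span>
      </div>
    </div>
    <div itemprop="event" itemscope itemtype="http://schema.org/MusicEvent">
      <span itemprop="name">Live at Example Hall</span>
      <time itemprop="startDate" datetime="2011-09-01">September 1</time>
    </div>
  </div>

Each nested itemscope gives a search engine a separate entity (the album, its price, an upcoming show) rather than one undifferentiated block of page text, which is the extra detail described above.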
Schema.org is a work in progress which will keep evolving over the coming years. We expect the evolution to come from two major sources:
- As we identify new kinds of structured data that we can use to provide better search results, we will extend schema.org to cover these.
- We strongly encourage schema developers to develop and evangelize their schemas. As these gain traction, we will incorporate them into schema.org.
Google, Bing and Yahoo! are managing schema.org on an ongoing basis. As appropriate, we invite participation from major consumers and producers of structured data on the web.
Schema.org markup can be used on web pages written in any language. The site is currently available in English only, but we plan to translate to other languages soon. The markup, like HTML, is in English.
Take a look at the getting started guide for an overview on microdata and schema.org. Or go to the schemas page to start looking at specific item types.
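In broad strokes, microdata comes down to three HTML attributes: itemscope marks an element as describing an item, itemtype says which schema.org type it is, and itemprop labels its properties. A minimal, hypothetical example (the movie and names are invented, not taken from the guide):

  <div itemscope itemtype="http://schema.org/Movie">
    <h1 itemprop="name">Example Movie</h1>
    <span>Director: <span itemprop="director">Jane Example</span></span>
    <span itemprop="genre">Science fiction</span>
  </div>

A search engine that parses this sees a Movie item with a name, director and genre instead of plain text, which is what makes the richer result treatments discussed in the next answer possible.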
Search engines are using on-page markup in a variety of ways. These projects help you to surface your content more clearly or more prominently in search results. Not every type of information in schema.org will be surfaced in search results — you can refer to each company's documentation to find specific uses — but over time you can expect that more data will be used in more ways. In addition, since the markup is publicly accessible from your web pages, other organizations may find interesting new ways to make use of it as well.
Automated data extraction is great when it works, but it can be error prone because different sites can represent the same information in so many different ways. Markup provides a consistent way for computers to understand the data on a page, and helps search engines display information usefully in search results.
If you are already publishing structured data markup and it is already being used by Google, Microsoft, Yandex or Yahoo!, the markup format will generally continue to be supported. Changing to the new markup format could be helpful over time because you will be switching to a standard that is accepted across all of these companies, but you don't have to do it.
If you publish content of an unsupported type, you have three options:
- Do nothing (don't mark up the content in any way). However, before you decide to do this, check to see if any of the types supported by schema.org - such as reviews, comments, images, or breadcrumbs - are relevant.
- Use a less-specific markup type (sketched below). For example, schema.org has no "Professor" type. However, if you have a directory of professors in your university department, you could use the "person" type to mark up the information for every professor in the directory.
- If you are feeling ambitious, use the schema.org extension system to define a new type.
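As a sketch of the second option, a directory entry for a professor could reuse the generic Person type. The names and values here are invented, and the property spellings assumed (jobTitle, affiliation, email) should be confirmed against the Person type page:

  <div itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Dr. Jane Example</span>,
    <span itemprop="jobTitle">Professor of Computer Science</span>,
    <span itemprop="affiliation">Example University</span>,
    <a itemprop="email" href="mailto:jane@example.edu">jane@example.edu</a>
  </div>

Nothing in this markup pretends a Professor type exists; it simply describes the professor as a Person. Only a few properties are filled in, which is fine; as the next answer notes, markup is not an all-or-nothing choice.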
It is fine to mark up only some properties of an item - markup is not an all-or-nothing choice. However, marking up as much content as possible helps search engines use your information to present your page to users in the most useful way. As a general rule, you should mark up only the content that is visible to people who visit the web page and not content in hidden divs or other hidden page elements.
Focusing on microdata was a pragmatic decision. Supporting multiple syntaxes makes documentation for webmasters more complex and introduces more overhead in terms of defining new formats. Microformats are concise and easy to understand, but they don't offer an open extensibility mechanism and the reuse of the class tag can cause conflicts with website CSS. RDFa is extensible and very expressive, but the substantial complexity of the language has contributed to slower adoption. Microdata is the most recent well-known standard, created along with HTML5. It strikes a balance between extensibility and simplicity, and is most suitable for building schema.org. Google and Yahoo! have in the past supported both microformats and RDFa for certain schemas and will continue to support these syntaxes for those schemas. We will also be monitoring the web for RDFa and microformats adoption and if they pick up, we will look into supporting these syntaxes. Also read the section on the data model for more on RDFa.
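For a feel of how the three syntaxes differ, here is the same invented person expressed each way; the microformats version uses hCard class names and the RDFa version uses RDFa Lite attributes. These snippets only illustrate syntax and are not a statement about which forms the search engines actually consume for schema.org types:

  <!-- Microdata -->
  <div itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Jane Example</span>
  </div>

  <!-- RDFa Lite -->
  <div vocab="http://schema.org/" typeof="Person">
    <span property="name">Jane Example</span>
  </div>

  <!-- Microformats (hCard) -->
  <div class="vcard">
    <span class="fn">Jane Example</span>
  </div>

The class-based microformats style is the one that can collide with a site's CSS class names, which is the conflict mentioned above.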
In creating schema.org, one of our goals was to create a single place where webmasters could go to figure out how to mark up their content, with reasonable syntax and style consistency across types. This way, webmasters only need to learn one thing rather than having to understand different, often overlapping vocabularies. A lot of the vocabulary on schema.org was inspired by earlier work like Microformats, FOAF, OpenCyc, etc. Many terms in schema.org came through collaborations, and we acknowledge these on the schema.org site rather than by making our markup more complex. See also our external enumerations mechanism for handling enumerated lists of entities.
Please use our feedback form.
The schemas presented on this site right now are a draft to solicit feedback. Based on the feedback we receive, we will update the schema. We hope to make it final sometime later this year. However, we urge you to go ahead and use it. We will continue to provide support for what is here.
Developer Talks Dead Space: Extraction
Mon, June 8th, 2009 at 9:59am PDT
Updated: June 8th, 2009 at 10:54am | Video Games
Brian LeTendre, Contributing Writer
Concept art for "Dead Space: Extraction"
EA's "Dead Space" was one of the most critically acclaimed games of 2008, earning high praise for its atmospheric presentation, great gameplay, and a story that extended into a comic book series and animated feature. "Dead Space" put players in the shoes of Isaac Clarke, an engineer on a crew that is responding to a distress call from the stranded USG Ishimura, a mining ship that cracks planets and drains them of their resources. After crashing into the Ishimura, Isaac and company find out the ship has been overrun by Necromorphs, organisms that reanimate dead flesh into grotesque creatures that must be dismembered to be stopped. In addition to finding a way home, Isaac must try to find a way to stop the Necromorphs, whose origins are tied to a strange marker that was found by the mining colony Aegis VII on the planet below.
In 2009, the "Dead Space" universe will expand once again, this time on the Nintendo Wii. Coming this fall, "Dead Space: Extraction" will explore the events that preceded the original game, and answer some questions about what actually happened to the mining colony of Aegis VII. Visceral Games' Steve Papoutsis is the Executive Producer on "Dead Space: Extraction," and this week he answered some of CBR News' questions about the upcoming game.
CBR: "Dead Space: Extraction" serves as a prequel to "Dead Space." How much of what happened on Aegis VII will be explored in the game, and will we see the USG Ishimura as well?
Steve Papoutsis: "Extraction" is broken into three acts. Act 1 takes place on the Aegis VII colony. The game starts as the Marker is being extracted and follows a party of people who attempt to seek sanctuary on the USG Ishimura.
Concept art for "Dead Space: Extraction"
CBR: Isaac Clarke was the protagonist of "Dead Space." Who will players be taking on the role of for "Extraction?"
Papoutsis: Players will get to play from a variety of POVs in "Extraction." We will be discussing our characters in more depth soon. Look for info on them to show up on our "Extraction" blog soon.
CBR: You've chosen to go with a more guided experience than in the original game. How would you describe "Extraction's" gameplay?
Papoutsis: "Extraction" is played from a first person perspective. The core gameplay retains the mechanics found in "Dead Space." Strategic Dismemberment, Stasis, Telekinesis, and Zero-G areas all return in "Extraction." All of the original tools / weapons return in "Extraction" plus three all new ones: Arc Welder, Rivet Gun, and one other that we are not talking about at this time. "Extraction" is different from "Dead Space" in that this adventure follows a group of people who are forced together during the initial outbreak on the Aegis VII colony. The story unfolds as the game progresses through conversations between these people and various logs that are discovered in the game.
CBR: In some ways, it would seem the Wii controls would lend themselves better to some weapons than a gamepad would. How are you taking advantage of this?
Papoutsis: The biggest difference in "Extraction" is the way you fire your weapons. You take aim with the Wii Remote at a Necromorph and simply press the B button. As in "Dead Space" each weapon will have an alternate fire mode and this is activated by rotating your Wii Remote 90 degrees.
I'm really digging the way the Ripper works with the Wii Remote; it's great fun ripping apart Necromorphs with it, and it has a very visceral feeling.
CBR: From a tools standpoint, did you have to create the engine and assets from scratch for the Wii game, or were you able to use anything from the original game?
Papoutsis: We are creating new environments, enemies, and weapons for "Extraction" but have been able to leverage our past assets as well.
CBR: Can you give us an example of something you are doing on the Wii that hasn't been done before?
Papoutsis: One of our primary goals with "Extraction" is to nail the atmosphere and look that the original "Dead Space" had. So far we are happy with how the visuals are turning out. As far as mechanics, our Alternate Fire mode is different, as is our "Glow Worm" mechanic. In low light areas the player is able to shake their Wii Remote to charge up the "Glow Worm," a light stick-like device. This creates an interesting risk-reward opportunity for the player in that they must briefly take their reticule off screen in order to charge the "Glow Worm." They can always choose not to charge the "Glow Worm" and take their chances in the low light areas, but they may sacrifice accuracy in these circumstances.
CBR: One of the ways "Dead Space" kept the player immersed was the absence of a HUD, using Isaac's suit to give cues about health. How have you taken that a step further for "Extraction?"
Papoutsis: Given that "Extraction" is played from a first person perspective, we are working hard on retaining an in-world HUD design. Players can currently see how many stasis shots and how much ammunition they have by simply looking at their reticule. In addition to an onscreen effect when taking damage, we have introduced what we are calling our "Mini Rig." This is a UI element that can be toggled on and off by pressing a button on the Wii remote. The "Mini Rig" will contain a Rig, as well as additional weapon information such as the total amount of ammunition the player is holding, the number of upgrades on the currently selected weapon, and whether they are carrying a revival pack.
Screenshots from "Dead Space: Extraction"
CBR: A highly anticipated feature of "Extraction" is the co-op mode. What can you tell us about it?
Papoutsis: Co-op is super fun. Our goal with the mode was to make sure it was easy for people to play with a friend without sacrificing their single player campaign. When a person wants to jump into your current game, all they need to do is hit a button on the Wii Remote and they instantly jump into the game. There is no need to jump out to another menu or restart your single player game.
Screenshots from "Dead Space: Extraction"
With the unique mechanics in "Extraction," co-op also allows for a lot of strategy between players. I like to use Stasis and Telekinesis while the other player focuses on dismembering the Necromorphs. Another crucial goal with co-op was to make sure that player two has as many opportunities to interact with the game as player one does. Our puzzles and camera look-around moments alternate between players; this is one of the ways we are hoping to keep both players equally engaged throughout the game.
CBR: Some might argue that adding co-op will take away from the atmosphere of the game. What are you doing to make sure the feel of the game doesn't lose anything when another player jumps in?
Papoutsis: "Extraction" follows a group of people, so having co-op does not detract from the overall atmosphere at all.
Co-op feels very natural in the game.
CBR: Being able to upgrade weapons in the original game allowed players some ability to tailor their experience. Will you be using the same system in "Extraction?"
Papoutsis: Players will be able to upgrade their weapons on the fly in "Extraction." Throughout the game, Upgrade Nodes will be found that, when collected, are immediately applied to the appropriate weapon. Finding the Upgrade Nodes will be one of the many challenging elements in the game.
CBR: Will there be other game modes in addition to the campaign mode?
Papoutsis: I can't say much about it at this time, but "Extraction" will have a new mode we are calling "Challenge Mode." We will be talking more in depth about this in the near future. Hopefully I can show it to you very soon.
"Dead Space: Extraction" is slated to release on September 29, 2009 for the Nintendo Wii. For more information, head over to www.deadspace.ea.com.
How Bay Area Transit Survived a Site Launch in a Traffic Storm
The Bay Area Rapid Transit service launched a website redesign in only five months while also battling a 20,000-visitor traffic spike. How did they do it?
A Bay Area Rapid Transit car stops for a passenger pickup. Flickr/ykanazawa1999
It could have been a perfect recipe for disaster. Just five days after Northern California's Bay Area Rapid Transit relaunched its new Web site, BART.gov, it was hit with its second largest traffic spike of 2013 — a daunting threat, considering the site was placed on an expedited four-month development timeline and was unveiled just as BART's two largest employee unions were embroiled in a pitched labor dispute.
Oddly, however, BART's Web Services Manager Tim Moore remembers the day — at least from a Web standpoint — being fairly calm. Moore said records show that on Nov. 22, between 7 a.m. and 8 a.m., BART.gov handled more than 20,000 unique visitors due to a major service delay in transit operations. The number represented an impact to the site that was roughly 11 times greater than normal for the hour, a time that typically averages only 1,800 visitors.
This success, which Moore describes as a "trial by fire," led to a quiet celebration that day as the news media focused their attention on commuter delay updates and the ongoing union dispute. The website's strong showing and the secret behind its speedy development strategy are noteworthy, not simply within the framework of organizational accolades, but also in the way of lessons learned — lessons that began on day one.
A Surprise Announcement
At the beginning of January 2013, Moore said BART received a startling notice from Adobe, the site’s content management system provider. BART’s Web team was told that by the end of 2013, Adobe Publish, the site’s former content management system, would be phased out entirely.
“That meant that we’d lose all of our Web site publishing capabilities, our editing capabilities and maintenance capabilities in less than a year,” Moore said. “So effectively, that’s when the stopwatch started.” The tight time frame to launch a new website gave BART’s team one month to get through the process of internal evaluation on site design, budgeting, procurement and contracting, then just four months for actual Web development.
“It was no small task getting a public agency like this moving,” Moore said, and credited Ravindra Misra, BART’s CIO, for his quick response, as tasks were immediately delegated and stakeholders called in for feedback. One of the most game-changing decisions made during the first month, Moore said, was choosing a new content management system, one that was easy to use and yet customizable enough to accommodate BART’s enterprise-level needs. Tasked by Misra to spearhead site development, Moore set about evaluating content management platforms, conducting user interviews, and reaching out to like-sized transit agencies for advice.
In the end, Moore said, "Drupal emerged at the top of the list," for its ability to be managed in-house, its estimated longevity and the backing of a large open source developer community behind it — giving BART many sources for future maintenance and support.
'One Throat to Choke'
If there was one thing to be avoided under BART's strict deadline, it was finger pointing and accountability shirking. In the often labyrinthine workings of large-scale Web development projects, Moore said, it's common to see one team of developers pointing to the other when problems arise from compatibility issues, site feature requests and hosting snags. Based on BART's decision to choose Drupal, Moore said, Acquia — a commercial open-source software company providing products, services, and technical support for Drupal — appeared the most logical choice to avoid this challenge.
“That’s your classic one-throat-to-choke method,” Moore said jokingly. “On a project like this, we really wanted to minimize the number of project partners, or integrators, we were working with.”
Similarly, Moore said, he was that “throat” for Acquia’s developers. He said that considering the looming deadline, it was important that there be one point man for decision-making tasks versus a variety of stakeholders with varying levels of approvals. “We were really trying to focus on the important thing of launching the new site and getting something that we could use going forward,” Moore said.
This tactic was also included within the initial month BART had for site collaboration. Moore explained that all suggestions for site features had to be finalized within the first month so the scope of the project could be maintained. Anything that the team determined to be a “launch blocker,” a feature or process that may prevent meeting the deadline, was cut from the scope of the project.
Hard Things First
The last big lesson learned from the experience, Moore said, was doing the hard things first, including those objectives that may require more time. Many of these tasks relate to integration issues, such as making BART's trip planner, which connects to Google Maps, compatible with the new site.
Jessica Richmond, senior director of government professional services at Acquia, said this and the many other difficult tasks were handled by resourcing them simultaneously across multiple teams.
"We have a lot of expert talent both internally and through our partner network that we could tap into," she said. "So we were very, very diligent in determining the right roles to play from a development standpoint."
Worst-case scenarios were also run, such as high-volume traffic spike scenarios, to ensure preventative measures were in place. "For BART, one of the examples we used was what would happen if there was a strike and people needed to know what was going on with the trains," Richmond said. "We prepared so much for that situation, even on the short timeline, and the infrastructure of the applications were so high performance that there were absolutely no issues. The most exciting thing about that day was that there was no excitement at all."