2015-48/1913/en_head.json.gz/11475
Windows XP Now Available on OLPC XO Notebook

OLPC XO Laptop Running Windows XP (Source: Microsoft)

XO Windows XP trials to start in June

The One Laptop Per Child (OLPC) Foundation has been pushing its XO notebooks in developing nations for a while now at a price of $188 per notebook. The XO has faced stiff competition in the market from rivals such as Intel's Classmate. Many have seen one major drawback of the XO notebook: it ran the Linux operating system rather than Windows.

OLPC attempted to initiate talks with Microsoft in the past to put its Windows operating system on the XO, to no avail. At the time, Microsoft didn't want to be part of the project because the XO would also run Linux. Over time, Microsoft came around to the notion of having its Windows operating systems run on machines that also run Linux, paving the way for talks between it and OLPC to begin again.

Microsoft and OLPC announced today that they have signed an agreement to provide a customized version of Windows XP for use on the XO laptop. The agreement will even allow OLPC to build XOs that dual-boot, with both Linux and Windows installed at the same time.

Microsoft chief research and strategy officer Craig Mundie said in a statement, "Transforming education is a fundamental goal of Microsoft Unlimited Potential, our ambitious effort to bring sustained social and economic opportunity to people who currently don't enjoy the benefits of technology. By supporting a wide variety of affordable computing solutions for education that includes OLPC's XO laptop, we aim to make technology more relevant, accessible and affordable for students everywhere."

Microsoft says that customers and partners around the world have been asking for a Windows-powered XO because Windows on the low-cost machines would give educators and students access to the entire ecosystem of Windows software. Many developing nations see Windows on the XO as a way to give their children marketable technology skills with the world's most dominant operating system.

Andres Gonzalez Diaz, the governor of Cundinamarca, Colombia, says, "As I plan my region's investment in technology, I must evaluate the best way to provide quality education and prepare my citizens for the work force. Windows support on the XO device means that our students and educators will now have access to more than computer-assisted learning experiences. They will also develop marketable technology skills, which can lead to jobs and opportunities for our youth of today and the work force of tomorrow."

According to the New York Times, the addition of Windows to the XO won't add much to the cost of the machines. Under Microsoft's Unlimited Potential program, the fee for Windows is around $3 more per machine. Allowing the XO to run both Windows and Linux will add about $7 to the price tag.

Despite adding Windows to the XO's list of features, the small notebook faces a tough road ahead. As DailyTech reported before, there is more to the implementation of the XO notebook in developing nations than simply buying notebooks and handing them out.
2015-48/1913/en_head.json.gz/12206
Dive into the Query Optimizer - Undocumented Insight

This 500-level session will focus on using undocumented statements and trace flags to gain insight into how the query optimizer works and to show which operations it performs during query optimization. I will use these undocumented features to explain what the query optimizer does from the moment a query is submitted to SQL Server until an execution plan is generated, including operations like parsing, binding, simplification, trivial plan, and full optimization. Concepts like transformation rules, the memo structure, how the query optimizer generates possible alternative execution plans, how those alternatives are costed, and how the cheapest alternative is chosen will be explained as well.

Benjamin Nevarez

Benjamin Nevarez is a SQL Server MVP and independent consultant based in Los Angeles, California, who specializes in SQL Server query tuning and optimization. He is the author of "SQL Server 2014 Query Tuning & Optimization" and "Inside the SQL Server Query Optimizer" and co-author of "SQL Server 2012 Internals". With more than 20 years of experience in relational databases, Benjamin has also been a speaker at many SQL Server conferences, including the PASS Summit, SQL Server Connections and SQLBits. Benjamin's blog can be found at http://www.benjaminnevarez.com, and he can also be reached by e-mail at admin at benjaminnevarez dot com and on Twitter at @BenjaminNevarez.
2015-48/1913/en_head.json.gz/12485
Optimized WebSocket broadcast
By David Delabassee-Oracle on Nov 22, 2013

"When you have to make a choice and don't make it, that is in itself a choice." (William James)

The Java API for WebSocket (JSR 356) is one of the APIs added to Java EE 7. It is a '1.0' release, which means that the API is complete and fully functional. It is often said that premature optimisation is the root of all evil, so an initial implementation can always be optimised later. The same is also true for features: choices had to be made, as it is impossible to implement all the desired features in an initial release. Clearly, there are different aspects that could easily be improved going forward.

Tyrus serves as the JSR 356 Reference Implementation; it is also a test-bed for potential new WebSocket features and improvements. As always, a proprietary feature of an implementation doesn't necessarily mean that this particular feature will be included in a future revision of the specification. It could just remain a proprietary feature or an implementation-specific optimisation.

In this post, Pavel Bucek, a Tyrus team member, discusses how a WebSocket server endpoint could be improved when it needs to broadcast a message to its connected remote endpoints. In Tyrus 1.3, a new
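The post is truncated here, just as it introduces the Tyrus 1.3 addition. As context for what an optimised broadcast improves on, below is a minimal sketch of the baseline approach: a standard JSR 356 server endpoint that broadcasts by looping over every open session and sending the same text to each peer. The endpoint path and class name are illustrative assumptions; only the standard javax.websocket API is used here, not any Tyrus-specific broadcast support.

```java
import java.io.IOException;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Baseline broadcast with the standard JSR 356 API: re-send each incoming
// message to every currently open session, one sendText() call per client.
@ServerEndpoint("/broadcast")
public class NaiveBroadcastEndpoint {

    @OnMessage
    public void onMessage(String message, Session session) {
        // getOpenSessions() returns the open sessions associated with this
        // endpoint; each peer gets its own synchronous send.
        for (Session peer : session.getOpenSessions()) {
            if (peer.isOpen()) {
                try {
                    peer.getBasicRemote().sendText(message);
                } catch (IOException e) {
                    // One broken client should not abort the whole loop;
                    // real code would log and possibly close the peer here.
                }
            }
        }
    }
}
```

Each getBasicRemote().sendText() call pays the full per-client cost of preparing and writing the outgoing frame, which is the kind of repeated work a container-level broadcast can amortise by preparing the frame once and pushing it to all connections.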
2015-48/1913/en_head.json.gz/12496
Medium embraces CC licenses Jane Park, May 6th, 2015 Today Creative Commons is excited to announce that blogging and storytelling platform Medium now offers the entire suite of Creative Commons licenses and public domain tools. You can read more about this great news over at Medium, naturally, in stories by both Creative Commons and Medium. In just a few years Medium has grown a thriving community of highly engaged authors and storytellers, and it’s been home to some incredible pieces of journalism covering a wide range of interests. It’s no surprise that we heard from folks in the CC and Medium community asking for the licenses to be made available. The Medium community, and the folks behind Medium, really understand the power of CC and the opportunity for their stories to reach even more people. Medium users can now share their stories under any of the CC licenses or CC0, and they can also import other CC-licensed or public domain work. Medium leverages the power of photography like few other platforms, making it an ideal way to showcase and share CC licensed images, illustrations, and other media. We want to thank the team at Medium for their amazing work and dedication in making CC available to their users. From our kick-off conversations it was clear that Medium understood the importance of this decision, and it was a pleasure to help them bring it to life. Please read more about this exciting news over at Medium! Medium welcomes the Creative Commons licenses by Creative Commons Explicit post licensing — “All rights reserved” is not the only option by Medium Why I’m Excited for Medium’s Partnership with Creative Commons by Lawrence Lessig Medium joins CC’s new Platform Initiative, which works to create easy, clear, and enjoyable ways for users to contribute to the commons on community-driven content platforms. If you are a platform that would like to join this movement for the commons, please get in touch! Tags: Medium, news, platform, platform initiative, platforms Great news for the commons: Flickr now supports CC0 and the CC Public Domain Mark Ryan Merkley, March 30th, 2015 ( CC0 and Public Domain Mark) Today we’re extremely pleased to announce that Flickr now allows its users to share images under CC0, Creative Commons’ international public domain dedication. Flickr also announced they will allow users to share work in the public domain using our Public Domain Mark (PDM). Flickr is the largest repository of CC-licensed photos on the web, and CC0 and the Public Domain Mark will give creators even more ways to share their works and those in the public domain to expand the commons. Why is this big news for Flickr and Creative Commons? CC0 maximizes the potential creative use of works by dedicating them, without restrictions, to the commons. By doing so, creators enable others to freely and without condition build upon those works in ways that advance science, education, scholarship, and literature, sometimes in surprising and unexpected ways. Many Creative Commons photographers on Flickr have been asking for CC0. With this announcement Flickr users will be able to choose from among our six standard licenses, our public domain dedication, and they will also be able to mark others’ works that are in the public domain. Adding CC0 and PDM to Flickr is an unprecedented win for the commons and for free creativity and knowledge on the internet. 
(CRS-5 Falcon 9 rocket / SpaceX / CC0)

The topic of awesome public domain and CC0 imagery was in the news about a week ago when SpaceX founder and CEO Elon Musk announced that all of SpaceX's incredible photographs are dedicated to the worldwide public domain. SpaceX will be moving all of their images on Flickr to CC0. Wikimedians have also helped SpaceX to declare a gallery of images as CC0 on Wikimedia Commons.

For years, galleries, museums, and others to whom the public has entrusted important cultural heritage works have leveraged CC0 as an internationally-recognized way to share digitized copies of works and the metadata that enables search. Europeana now boasts no fewer than 26,000 images under CC0, as well as more than 3.6 million works marked as public domain worldwide using our Public Domain Mark. The availability of CC0 as a means for digitizers of works in the public domain to eliminate any "thin" copyright on public domain works they digitize, or for individuals who wish to eliminate their own copyright, allows the global public to freely create and publish the next great thing. And the availability of the Public Domain Mark to signal a work is globally free of copyright restrictions further empowers creators to stand on the shoulders of those who created before them.

What's the difference?

Using CC0, a creator enables the public to freely reuse and remix a work without limitation. This is because the author/creator waives all conditions including attribution (although citation is supported) and encourages others to reuse the work in any way, including commercially. We know that Creative Commons supporters, including many photographers in the Flickr community, have been seeking the ability to use CC0 on Flickr since it was published almost exactly 6 years ago today. This also offers remixers clear and simple terms when seeking out a work to build upon. Many "no known copyright" images are too uncertain to build upon, while CC0 offers a clear dedication to free use and re-use. Once fully implemented, users will be able to move some or all of their works on Flickr to CC0.

The Public Domain Mark is used to denote works out of copyright or in the worldwide public domain. Developed with reference to "no known copyright" statements adopted by many leading cultural heritage institutions, including contributors to Flickr Commons, the PDM is the only mark of its kind, and the only widely-adopted and globally accepted mark that communicates a work's public domain status worldwide.

Flickr's leadership

We are very happy to recognize Flickr's longstanding commitment to the Creative Commons licenses, their community of CC photographers/videographers, and to the public good that is our shared commons and heritage. Incorporating CC0 and PDM into Flickr has been a long term wish of ours, and we're happy to see it happen today. There were many who helped along the way, but special thanks to CC General Counsel Diane Peters and also to Jane Park, who now leads CC's platform engagement team. We anticipate that Flickr's stewardship of CC-licensed content and public domain materials will continue to grow now that users can take advantage of the full breadth of our legal tools.
Tags: CC0, flickr, PDM, public domain, SpaceX

Press release: Creative Commons Launches Special Edition Commemorative Tee
Jay Walsh, March 25th, 2015

Partners with Noun Project and Teespring to design and sell exclusive t-shirt celebrating "CC" logo acquisition by MoMA; Proceeds to support Creative Commons

SAN FRANCISCO - MARCH 25, 2015 - Creative Commons has partnered with crowdsourced visual dictionary Noun Project and commerce platform Teespring to release a custom t-shirt celebrating the "CC" logo's acquisition into the Museum of Modern Art's permanent collection. The special edition t-shirt will be available for a limited time only. Proceeds will benefit Creative Commons to further their work in growing and protecting the commons.

Designed by the Noun Project, the commemorative t-shirt celebrates the lasting impact and international recognition of the Creative Commons "double-c in a circle" or "CC" logo. The logo, originally designed for Creative Commons in 2002 by designer Ryan Junell, is recognized as the global standard for creative sharing, remixing, and reuse. Creators, educators, and remixers use the logo to indicate their adoption of one or more variants of the Creative Commons license. In March 2015 MoMA recognized the ubiquity and significance of the Creative Commons logo by including it in their permanent design collection. The logo can be viewed alongside other eminently recognizable marks such as the @ and recycling symbols as part of the MoMA exhibit "This Is for Everyone: Design Experiments for the Common Good," organized by senior curator, Paola Antonelli.

"On behalf of the global Creative Commons community I want to thank Teespring and Noun Project for launching this collaboration to celebrate our beloved CC logo," said Creative Commons CEO Ryan Merkley. "This commemorative design is a beautiful remix that represents what Creative Commons is all about: Noun Project's freely reusable iconography depicting a range of sharing and remixing activities within the Commons. We know fans of Creative Commons will wear it with pride."

Noun Project, a long-time supporter and proponent of Creative Commons, designed the limited edition t-shirt to celebrate this milestone using pictograms uploaded by their community. Each pictogram in the design represents an industry or type of media influenced by Creative Commons, which encompasses fields as broad as the arts, science, medicine, and law. "When opening our platform to submissions from creatives around the world, we knew we wanted to offer a clear and easy license that would enable anyone to share their work. Creative Commons was the perfect solution for helping us build and share the world's visual language," said Sofya Polyakov, CEO and Co-Founder of the Noun Project.

To bring this special edition t-shirt to life, Creative Commons and Noun Project have partnered with Teespring, the leading commerce platform for custom apparel. Launched in 2012, Teespring empowers entrepreneurs, creatives, influencers, and nonprofits to create and sell high-quality products people love, with no cost or risk. "At Teespring we strive to remove the barriers to bringing great ideas to market, which is why we have a unique respect and admiration for Creative Commons and the impact they've made for creators all over the world," said Teespring Co-Founder and CEO, Walker Williams.
“It’s an honor for us to partner with Creative Commons and Noun Project and help the community show their support for this meaningful cause and movement.” This special edition Creative Commons tee will be available until April 8, 2015 at www.teespring.com/creativecommons. You can read more about the history and origin of the Creative Commons logo at http://creativecommons.org/weblog/entry/45228. Image assets can also be downloaded via zip file. [email protected] Noun Project [email protected] [email protected] About Creative Commons Creative Commons is a globally-focused nonprofit organization dedicated to making it easier for people to share their creative works, and build upon the work of others, consistent with the rules of copyright. Creative Commons provides free licenses and other legal tools to give individuals and organizations a simple, standardized way to grant copyright permissions for creative work, ensure proper attribution, and allow others to copy, distribute, and make use of those works. There are nearly 1 billion licensed works, hosted on some of the most popular content platforms in the world, and over 9 million individual websites. About Noun Project Noun Project is a crowdsourced visual dictionary of over 100,000 pictograms anyone can download and use. Their goal is to help people communicate ideas visually by building the world’s best resource for visual language. About Teespring Teespring is a commerce platform that enables anyone to create and sell products that people love, with no cost or risk. Teespring powers all aspects of bringing merchandise to life from production and manufacturing to supply chain, logistics, and customer service. By unlocking commerce for everyone, Teespring is creating new opportunities for entrepreneurs, influencers, community organizers, and anyone who rallies communities around specific causes or passions. Comments Off on Press release: Creative Commons Launches Special Edition Commemorative Tee Tags: press release Creative Commons Names Ryan Merkley as Chief Executive Officer Elliot Harmon, May 14th, 2014 Ryan Merkley / Rannie Turingan / CC0 Download the press release (67 KB PDF) Mountain View, CA May 14, 2014: The board of directors of Creative Commons is pleased to announce the appointment of Ryan Merkley to the position of chief executive officer. Ryan is an accomplished strategist, campaigner, and communicator in the nonprofit, technology, and government sectors. Ryan was recently chief operating officer of the Mozilla Foundation, the nonprofit parent of the Mozilla Corporation and creator of the world’s most recognizable open-source software project and internet browser, Firefox. At the Mozilla Foundation, Ryan led development of open-source projects like Webmaker, Lightbeam, and Popcorn, and also kicked off the Foundation’s major online fundraising effort, resulting in over $1.8 million USD in individual donations from over 44,000 new donors. Ryan is a well-known and respected voice in the open source community, and recognized for his unwavering support to open government and open data initiatives. “As the board has gotten to know Ryan after the past several weeks, he’s articulated a strong vision to us for the future of the organization,” board chair and interim CEO Paul Brest said. “He understands that the internet has changed a lot since we first launched the CC licenses, and that our relevance requires an evolving technology strategy. 
He also recognizes that this is a crucial moment for CC and its allies: we must work together to strengthen and protect the open web."

"A public commons, enabled by the open web, is the most powerful force to foster creativity, inspire innovation, and enhance human knowledge around the world. Those who believe in its potential need to join together in a global movement to ensure its success," said Ryan Merkley. "At Creative Commons we're making that case, and supporting, inspiring, and connecting the various communities that are building the commons — from open education, to science, to film and photography — and working to provide tools, solutions, and policy on their behalf."

Creative Commons provides a set of licenses that creators can use to grant permission to reuse their work. With over half a billion openly licensed works on the internet, Creative Commons is internationally recognized as the standard in open content licensing. Ryan will lead a global team of legal and technology professionals who manage and support the licenses, as well as experts who lead CC license adoption efforts in areas like education, culture, science, and public policy.

Ryan joins Creative Commons after a career working to advance social causes and public policy in nonprofits and government. Outside of his work at Mozilla Foundation, Ryan was senior advisor to Mayor David Miller in Toronto, where he initiated Toronto's Open Data project. He was also seconded to the City of Vancouver as director of corporate communications for the 2010 Winter Games. Most recently, Ryan was managing director and senior vice president of public affairs at Vision Critical, a Vancouver-based SaaS company and market research firm.

Ryan will take up his new position on June 1, 2014. He will be based in Toronto, and will split his time between Toronto and the Bay Area.

Official biography and high-resolution images can be found at: http://creativecommons.org/staff/ryan
Bios and photos of Creative Commons board and advisory council members: http://creativecommons.org/board
Creative Commons launches Version 4.0 of its license suite: http://creativecommons.org/weblog/entry/40935

Creative Commons (http://creativecommons.org/) is a globally-focused nonprofit organization dedicated to making it easier for people to share and build upon the work of others, consistent with the rules of copyright. Creative Commons provides free licenses and other legal tools to give everyone from individual creators to large companies and institutions a simple, standardized way to grant copyright permissions and get credit for their creative work while allowing others to copy, distribute, and make specific uses of it.

For more information contact: Elliot Harmon, Communications Manager, Creative Commons, [email protected]

Tags: CEO, press release, Ryan Merkley

Creative Commons welcomes new energy and expertise onto its board
Elliot Harmon, December 16th, 2013

Download the press release (118 KB PDF)
Updated board and advisory council listing

Creative Commons, a globally-focused nonprofit that provides legal and technological tools for sharing and collaboration, welcomed eight new members to its board of directors today. It also announced a new advisory council to complement the board and provide input and feedback to CC leadership. Several alumni of the CC board will be transitioning to the advisory council. CC co-founder Lawrence Lessig will lead the advisory council and transition to emeritus status on the board.
The announcement comes on the anniversary of the first Creative Commons licenses, which were launched in 2002. Since then, thousands of creators around the world have used Creative Commons licenses, amassing a selection of over half a billion CC-licensed works, spanning the worlds of education, art, academia, data, science, and much more. “With the continuity of current board members and a fresh outlook brought by the new members, Creative Commons is well prepared to engage with a very different environment from the time of its founding,” board chair Paul Brest said. “The board includes some of the world’s foremost experts in technology, intellectual property law, the internet, and business and social entrepreneurship. It’s appropriate that we’re announcing the new board on December 16, the date when it all began eleven years ago.” The new board members reflect the broad diversity of the global Creative Commons community. Four of the new board members — Renata Avila (Guatemala), Dorothy Gordon (Ghana), Paul Keller (Netherlands), and Jongsoo Yoon (South Korea) — are Creative Commons affiliates, experts who represent Creative Commons around the world and localize CC licenses and other materials for their jurisdictions. Creative Commons also gained board members with substantial experience in technology and product development, like Ben Adida, a director of engineering at Square who previously served as CC’s first technology lead; and Christopher Thorne, a veteran technology entrepreneur and private equity investor. The new members also bring additional legal acumen to the organization, including Microsoft intellectual property counsel Thomas C. Rubin and New York University Law School professor Chris Sprigman. Together, these individuals will augment Creative Commons’ existing capacity in technology and intellectual property law. Creative Commons CEO Cathy Casserly will be leaving her role as CEO in early 2014, but will continue to serve on the newly formed advisory council. “While my role in the organization will be changing, I’m proud to continue to serve the CC community alongside this diverse and talented group of leaders,” Casserly said. “Through our collective efforts, we will continue to work toward our vision of a truly collaborative, free internet.” Comments Off on Creative Commons welcomes new energy and expertise onto its board Press release: Creative Commons launches Version 4.0 of its license suite Elliot Harmon, November 26th, 2013 Download the press release (67 KB PDF). Creative Commons launches Version 4.0 of its license suiteRefreshed copyright licenses function globally and cover new rights Mountain View, CA, November 26, 2013: Creative Commons (CC) announced today that Version 4.0 of its licensing suite is now available for use worldwide. This announcement comes at the end of a two-year development and consultation process, but in many ways, it began much earlier. Since 2007, CC has been working with legal experts around the world to adapt the 3.0 licenses to local laws in over 35 jurisdictions. In the process, CC and its affiliates learned a lot about how the licenses function internationally. As a result, the 4.0 licenses are designed to function in every jurisdiction around the world, with no need for localized adaptations. In a blog post celebrating the launch, CC general counsel Diane Peters acknowledged the role that CC’s affiliates played in developing the new licenses. 
“The 4.0 versioning process has been a truly collaborative effort between the brilliant and dedicated network of legal and public licensing experts and the active, vocal open community. The 4.0 licenses, the public license development undertaking, and the Creative Commons organization are stronger because of the steadfast commitment of all participants.” Creative Commons is a nonprofit organization that enables the sharing and use of creativity and knowledge through free legal tools. Creators and copyright holders can use its licenses to allow the general public to use and republish their content without asking for permission in advance. There are over half a billion Creative Commons–licensed works, spanning the worlds of arts and culture, science, education, business, government data, and more. The improvements in Version 4.0 reflect the needs of a diverse and growing user base. The new licenses include provisions related to database rights, personality rights, data mining, and other issues that have become more pertinent as CC’s user base has grown. “These improvements may go unnoticed by many CC users, but that doesn’t mean they aren’t important,” Peters said. “We worry about the slight nuances of the law so our users don’t have to.” Launch announcement List of new features Catherine Casserly to step down as Creative Commons CEO Elliot Harmon, September 25th, 2013 Download the press release. (63 KB PDF) Mountain View, CA, September 25, 2013: Catherine Casserly announced that she will transition out of her role as CEO of Creative Commons in early 2014. Creative Commons, a Silicon Valley nonprofit that provides legal and technological tools for sharing and collaboration, was launched in 2002. Casserly became the organization’s first full-time CEO in 2011 after serving on the board of directors. Casserly helped to secure the organization’s considerable gains from its first decade and to lay a foundation for its second. She worked with the board and staff to integrate and grow existing programs, increase public impact, articulate key priorities and outcomes, and strengthen core operations. One of Casserly’s significant accomplishments was Creative Commons’ role in the development of open education policies, both in the United States and around the world. In 2012 alone, the governments of Poland and California passed major legislation in support of open educational resources (OER) and others, like British Columbia, provided major public funding for OER. Similarly, the US Department of Labor is currently awarding $2 billion in grants for OER development through the Trade Adjustment Assistance Community College and Career Training (TAACCCT) grant program. In an email to Creative Commons’ global network of volunteers, Casserly expressed pride in three years of growth as a movement and optimism about the possibilities for the organization’s new leadership. “Together, we’ve grown our community and movement tremendously — both in size and in our ability to impact the world. For me and for the organization, the three-year mark is the right time to usher in a new generation of leadership.” Creative Commons board chair Paul Brest noted that Cathy’s tenure as CEO has brought major changes to the organization. “The focus that we’ve seen over the past three years is remarkable, and what’s even more impressive is the clarity of mission and priorities that Cathy has brought to the organization. 
Under her leadership, the growth in the use of CC licenses generally, in the field of OER, and particularly in government-adopted OER mandates, has brought us substantially closer to our vision — universal access to knowledge and culture — than ever before.” Casserly agreed, and predicted that the next CEO will play a major role in scaling Creative Commons’ achievements. “We’re currently developing products and tools with the potential to transform how sharing and collaboration work on the internet. Realizing that potential will require a CEO who deeply understands both our mission and the broader technology landscape.” The Creative Commons Board of Directors plans to formally begin the search for a new CEO in October. Edited October 2: Previous version incorrectly listed British Columbia as a government that had passed OER legislation. Read this article for information on British Columbia’s support for OER. Paul Brest Named Creative Commons Chair Cathy Casserly, November 29th, 2012 Read the full press release. (PDF) I’m delighted to announce that Paul Brest has been elected chair of the Creative Commons board. Paul will begin as chair in December, coinciding with CC’s tenth anniversary celebrations. Throughout his career, Paul has bridged the worlds of law, philanthropy, and academia, most recently as president of the William and Flora Hewlett Foundation and, before that, dean of Stanford Law School. He’s widely recognized as an expert on constitutional law, problem solving and decision making, and philanthropic strategy, having written books and taught classes at Stanford on these subjects. I can’t think of a better choice than Paul. He has that rare combination of strong instincts and the knowledge and rigor to back those instincts up. He’s the leader we need to carry CC into the next decade. I’d also like to take this opportunity to recognize Joi Ito for his years of service as chair. During Joi’s time as chair, he’s helped CC grow as an organization, both in global influence and in its relevance to a changing technology landscape. Please join me in thanking Joi and welcoming Paul. Tags: board members, board of directors, Paul Brest 500px Announces Creative Commons Licensing Options Elliot Harmon, November 16th, 2012 This morning, photo-sharing platform 500px announced that it now offers Creative Commons licensing options. 500px has become a hub for talented photographers in recent years, and it’s great to see it join the ranks of CC-enabled platforms. “While our platform still defaults to full copyright protection as it always has, we want to give our photographers as much flexibility as possible to spread their work and build their profiles and businesses,” says Oleg Gutsol, CEO, 500px. “Our move to offer Creative Commons licensing is another way we’re providing additional services and value to meet the needs of our growing community.” With tens of millions of high quality professional photos potentially now available through Creative Commons, 500px is planning for the increased traffic from bloggers, publishers and media outlets that have been clamoring to get at the content for several years. “We’ve built content searching by keywords and applicable license right into the functionality,” says Gutsol. “Our hope is that this targeted searching makes it seamless for people to find the content they’re looking for.” With this rollout, 500px joins the ranks of other prominent rich media communities such as Vimeo, SoundCloud and YouTube who already have Creative Commons in place. 
"500px is a great addition to the family of CC-compatible media platforms," Creative Commons CEO Cathy Casserly said. "500px caters to a talented and intelligent community of photographers, just the sort of users we're always excited to see licensing their work under CC. I'll be curious to see how creative people everywhere reuse and remix the work of 500px photographers."

Tags: 500px, photography, platform

Using Free and Open Educational Resources to Support Women and Girls in STEM
Cable Green, September 28th, 2012

Download press release (PDF)

Mountain View, CA and Cambridge, MA — Creative Commons and the OpenCourseWare Consortium announce the formation of a task force to determine how open educational resources (OER) can support the success of girls and women in science, technology, engineering and math (STEM) in support of the Equal Futures Partnership, announced on September 24 by U.S. Secretary of State Hillary Clinton.

"The gender gap in participation in STEM areas around the world is significant," said Cathy Casserly, CEO of Creative Commons. "We need to address the barriers to girls' success in STEM to ensure that the future is filled with bright, ambitious, well-educated people of both genders who are able to contend with future global challenges."

The OER-STEM task force will examine how OER can attract and support girls in STEM education, including additional support services necessary to ensure high levels of success. OER are high-quality educational materials that are openly licensed and shared at no cost, allowing learners and educators to use, adapt, change and add information to suit their education goals. The task force will include experts in STEM education for girls and women along with experts in OER to determine specific projects that will advance achievement in these important areas.

"We are seeking innovative support solutions for girls to succeed in STEM subjects using open educational resources," said Mary Lou Forward, Executive Director of the OpenCourseWare Consortium. "Since OER can be accessed freely by anyone, anywhere, and modified to fit different cultural contexts and learning needs around the world, we are looking at this issue from a global perspective."

Creative Commons is a globally-focused nonprofit organization dedicated to making it easier for people to share and build upon the work of others, consistent with the rules of copyright. Creative Commons provides free licenses and other legal tools to give everyone from individual creators to large companies and institutions a simple, standardized way to grant copyright permissions and get credit for their creative work while allowing others to copy, distribute and make specific uses of it.

About the OpenCourseWare Consortium

The OpenCourseWare Consortium is an international group of hundreds of institutions and organizations that support the advancement of open sharing in higher education. The OCW Consortium envisions a world in which the desire to learn is fully met by the opportunity to do so anywhere in the world, where everyone, everywhere is able to access affordable, educationally and culturally appropriate opportunities to gain whatever knowledge or training they desire.

Tags: cable green, Equal Futures Partnership, OER, OpenCourseWare, STEM
2015-48/1913/en_head.json.gz/13588
Engine Yard now offers Node.js Developers can run their Node.js applications in the cloud Joab Jackson (IDG News Service) on 21 August, 2012 18:17 Paving the way for more server-side use of JavaScript, platform-as-a-service (PaaS) provider Engine Yard has added the Node.js library to its collection of hosted Web application tools. The service, which Engine Yard first offered as a preview last November, will join Engine Yard's two other Web application-friendly offerings, Ruby on Rails and PHP. Running Node.js as a service, instead of in-house, eliminates many headaches for the developer, said Mark Gaydos, Engine Yard senior vice president of worldwide marketing. No server hardware needs to be procured, nor does the developer need to worry about maintaining Node.js itself, or the other software Node.js depends upon to run. Built on Google's V8 JavaScript engine, Node.js is a library of JavaScript functions that work under an event-driven concurrency model, meaning they are especially well-suited for distributed real-time applications. Node.js is similar to how Unix operates, in that it offers a set of stand-alone functions that can be strung together to form larger processes. "Node modules do one thing and do it well," explained Mike Amundsen, a developer for API management software vendor Layer 7 Technologies, in an introductory talk at the O'Reilly Open Source Convention in Portland, Oregon, last month. Node.js is "meant to be fast. It's optimized for the machine, not for the developer," he said. Games, interactive tools, real-time analytics and other Web applications have all benefitted from running on the Node.js platform. Although JavaScript code is traditionally run on browsers, developers are finding that running JavaScript on the server side offers a number of advantages. For one, it allows large, sprawling Web applications to run more efficiently. Organizations "could get a lot more users supported per dollar of compute resources," said Bill Platt, vice president of operations at Engine Yard. The server-side Node.js also eliminates much of the worry about tweaking JavaScript code for many end-user devices that exist. With the Engine Yard service, the user gets a dashboard, from which one can build a manifest of needed components, such as Node.js. The user can then upload the code to run against the Node.js deployment. The requested elements are placed in a virtual machine (VM). An application may need multiple VMs to run databases, and other supporting applications. Engine Yard chose to offer Node.js because it "has a passionate and growing community," Platt said. The library is the second-most-monitored project on GitHub. For its hosted service, Engine Yard picks open-source technologies with large user bases, Platt said. As with its support of Ruby on Rails and PHP, Engine Yard employs software engineers who possess the expertise to help support users and the Node.js project itself. "When we support something, we want to be very good at it," Platt said. "We learn right alongside these developers what kind of platform they need to be successful." Engine Yard is initially using Node.js version 0.8.7. The company maintains a continuous integration process for its technologies, so the latest version of the library should be available to users within days of its release. "Because it is open source, we can watch and see all of the commits and actions when they occur. Our goal is to be coincident with version releases," Platt said. 
The service bills according to the use of virtual machines, on a per-hour basis. The medium-sized Node.js implementation would run at about US$1 per hour, according to Engine Yard. Engine Yard generated $29 million in revenue in 2011 and supports about 2,200 paying customers.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is [email protected]
2015-48/1913/en_head.json.gz/13661
Catalog Marketers Eye Federal Government for New Business
Carol Krol, Senior Editor

Ever since Al Gore appeared on "The Late Show With David Letterman" six years ago, selling goods and services to the government has been more accessible for business-to-business catalog marketers. A few computer hardware and software catalogers have made significant investments in this hugely profitable niche, although many have yet to tap its potential.

Gore smashed an ashtray with a hammer on the late-night talk show to shatter the public's perception of government while at the same time publicizing the "National Performance Review," a federal program that, among other things, made it easier for the government and its employees to buy goods and services without having to fill out endless paperwork. The introduction of federal Visa credit cards within the past decade also helped streamline the procurement process.

MicroWarehouse Inc., Norwalk, CT, a computer catalog marketer, is poised to garner significant market share in the government arena. The $2.2 billion computer marketer hired Gail Bergantino earlier this year as senior government programs manager to lead that effort, and she was brought on board specifically because of her direct government experience at Unisys Corp., Blue Bell, PA, and its catalog, SelectIT. "It's difficult to get a handle on how the government works unless you live in that environment," said Bergantino. "I brought that to the table."

There is serious money in selling to the government, said Mark Amtower, partner, Amtower & Co., Ashton, MD, a consultancy and specialty compiler of federal government mailing lists. Traditional resellers to the government, such as GTSI, Chantilly, VA, are experiencing the biggest threat from the computer direct marketers, including Dell and Gateway. GTSI had a lock on doing business with government agencies in the past, acting as middleman and navigating through the complicated process of selling to the government.

Dell sold more than $500 million worth of equipment to the federal government last year, while GTSI and Gateway tied for second with about $300 million each in government sales, according to Amtower. "Dell came into the market in the 1992-1993 time frame, right at the moment that Al Gore was riding this National Performance review," Amtower said. "They were the right business model at the right time." GTSI ranked first for most of the 1990s, until Dell surpassed it in 1997. PC Connection is another catalog marketer devoted to gaining more of the government's business, according to Amtower.

"It's a huge market," agreed Bergantino of MicroWarehouse. "The Department of Defense alone spends multibillion dollars in the IT market." MicroWarehouse developed a separate customized catalog for the government to build brand awareness. "We want to get mindshare," said Bergantino. Copies are mailed to targeted lists - from end users up to contracting officers - along with a GSA schedule, which is essentially a prenegotiated price list with the government so that any agency within its ranks can buy the product at the same or similar cost. "You have to make the GSA schedule attractive to your end user," said Bergantino. Along with ease of use, she said, it's important to learn more about customers' needs and use that information in formulating the product mix. "We look at historical purchasing data," she said.

Ease in purchasing and customer satisfaction sets companies like MicroWarehouse and PC Connection apart from the traditional government resellers, who typically might take an average of six weeks to ship a product. "One of the major discriminators with the catalog marketers is the ability to deliver fast," Amtower said. These marketers are able to ship products overnight and certainly within a few days. MicroWarehouse doubled its catalog mailing to government over last year. Total dollars spent using federal Visa credit cards for fiscal 1998 was $7.95 billion, according to Amtower, compared with $4.95 billion in fiscal year 1997.

The toughest nut to crack for companies interested in doing business with the government is finding the right people to target. "You can get information from the Small Business Administration office," said Amtower. The information may be well worth tracking down, with 25 percent of the GNP tied directly to government spending.

Most business-to-business catalog marketers already have some current government customers, whether or not they're aware of them. "If you're an intelligent business-to-business marketer, in all likelihood you have a significant niche in government," said Amtower. All government credit cards have one of three exclusive prefixes: 4716, 4486 and 5568. So a simple test can be conducted of prior sales data to determine a company's government business level.
2015-48/1913/en_head.json.gz/13861
Helping to Solve Global Problems - 12/03/2008

Jack Dangermond is the co-founder and president of ESRI, a privately held geographic information systems (GIS) software company that is headquartered in Redlands (CA, USA). In 1969, he co-founded ESRI (Environmental Systems Research Institute) with his wife, Laura. Originally, the company concentrated on land use analysis, but increasingly focused on developing GIS software. ESRI became a leader in the GIS industry during the 1980s and continues to develop and support the most widely used GIS technology.

Can you describe the vision of ESRI? What are your ideas on the use of spatial information?

While our software products and services have changed greatly over the years because of user needs and advances in technology, our company goals have been very consistent. Customer service is important to ESRI and our commitment is reflected in our product development and support. Among the numerous uses of spatial information, I believe that GIS will be used increasingly to help solve the global environmental and social problems that we currently face. Many of our users share this vision and we will continue to support them in their efforts to develop sustainable solutions.

How do you consider the role of ESRI in the world of data processing, data analysis, data visualisation and GIS?

ESRI's ArcGIS product suite provides an extendable solution for data processing, analysis and visualisation that scales from the desktop to the enterprise as organisations and needs for GIS capabilities grow. Our mobile products collect field data for on-site analysis and transmittal, while our server-based systems deliver GIS capabilities over existing networks. GIS technology is used in a multitude of disciplines, providing a unique, location-based perspective of information that promotes better, timelier decisions because of its broad collaborative and analytical capabilities.

Originally, ESRI focused on the development of GIS tools for land-based analysis. Can you explain how your tools and software are used and can be used in the field of data acquisition and data processing for the hydrographic world?

We have built a number of data models that facilitate the use of ArcGIS in hydrographic applications. Our Arc Marine data model is the result of a collaboration between ESRI, the Danish Hydraulic Institute (DHI), the US National Oceanic and Atmospheric Administration, Oregon State University and Duke University. The model considers how marine and coastal data can be most effectively integrated in 3D and 4D space and time, and includes an approach towards a volumetric model to represent the multidimensional and dynamic nature of ocean data and processes. Our Arc Hydro data model is used for surface water applications and our Groundwater data model is used to represent multidimensional groundwater data. This is an area of rapid growth in the GIS community.

Is there, in your view, any difference between land-based and hydrographic surveying and data analysis?

Although the underlying data management technology is similar, the data models used and analyses performed are different. For example, the datum for bathymetric digital elevation models (DEMs) is not the same as that used by the US Geological Survey (USGS) for land-based DEMs. Also, the shoreline for the USGS DEMs is indeterminate and not the same as that used for the bathymetric DEMs. The differences in these data models permit efficient, unique analyses for each domain. Data analysis with land-based systems includes specialised analysis for planning, cadastral management and other uniquely terrestrial applications, while in the hydrographic domain other modelled data are important, such as gauge station recordings, sounding type and current.

Through its applications, Google facilitates sharing geospatial information. What is your opinion on freely accessible geospatial information?

The availability of public domain data and simple viewing software, such as Google Earth, provides a quick snapshot of a selected area. Our free ArcGIS Explorer software not only allows access to similar content, but also lets the user perform true GIS analysis using tasks including visibility, modelling and proximity search from more than 24GB of free hosted content that is worldwide in scope. Additionally, ArcGIS Explorer can connect to other published web services for visualisation and analysis. Regarding the larger issue of whether publicly collected data should be free or for sale, there really is no such thing as free data. The real issue is making maximum use of the collected information and providing sufficient funding to sustain the regular updating of that information.

What do you foresee as important developments in sharing geospatial information?

The interest and development of spatial data infrastructures (SDIs) is particularly important to the process of data sharing. The European Union's INSPIRE Directive will provide a uniform platform for sharing data throughout Europe. Standards are critical in promoting and supporting the sharing of spatial data. ESRI supports appropriate specifications as they become finalised, and participates in the development of GIS standards with active involvement in the International Organization for Standardization (ISO) and the Open Geospatial Consortium (OGC). ESRI's software also supports leading de facto industry standards such as XML, SOAP and SQL.

What do you foresee as important developments in the visualisation of this information?

While GIS visualisation capabilities are growing rapidly, strong data management and data integration functionality is critical for efficient visualisation. As hydrographers and other professionals in this area expand their use of GIS and develop new capabilities for the technology, there will also be the discovery and development of new ways to view and analyse hydrographic and geographic data.

Do you expect GIS to become more important in hydrography in the coming years? If so, how do you think this will affect the work and position of the hydrographic surveyor?

The accuracy and resolution of data collection is rapidly evolving to encompass geographic and hydrographic data of all types. Centralised data management and real-time data collection will become the norm. While these data sets continue to expand in size and content, GIS provides scalable capabilities to manage them. GIS has the tools to manage, integrate, visualise and analyse this data.

Does this imply changes in the educational system for hydrography?

As we see in many fields of study, GIS is becoming part of the core curriculum. With the ability to visualise and perform complex analysis using large and disparate data sets, new ways to teach and understand the world are becoming evident through GIS technologies and methodologies.

Is ESRI involved in educational programmes? If so, how?

Education is a cornerstone of the ESRI community. Thousands of schools, from primary to university, have included ArcGIS courses and applications in their curricula. ESRI provides comprehensive course materials and stages an annual international Education User Conference. In addition, we recently expanded our own training programmes and facilities. Today, we offer hundreds of courses at various training sites around the world, as well as online courses, live training seminars and podcasts. ESRI also supports a number of grants for GIS training.

Do you have a message for our readers?

New applications for GIS will continue to be developed at an increasing rate because of GIS' ability to provide a logical basis for data organisation and analysis in virtually all disciplines. The spatial component of the technology engenders a unique capability to communicate ideas and concepts within society in general and will allow it to play a significant part in developing sustainable solutions to our impending global challenges.
2015-48/1913/en_head.json.gz/14316
Storm Worm, E-card Spurred Growth in Spam by 17% in Two Days

A Secure Computing researcher has noted an increase of 17% in the level of spam from 15 August 2007 to 16 August 2007, bringing spam to 89% of all mail during that time. The company attributes this hike to the Storm Worm and high levels of Excel, PDF and e-card spam.

On August 21, 2007, Dmitri Alperovitch, chief researcher at Secure Computing, told SCMagazine.com that the level of spam is close to that recorded in December last year, an all-time high. He added that since the Storm Worm appeared around January, it has succeeded in creating many zombies and was used primarily to send out stock-based promotions. Further, he claimed that the spammers are making greater use of greeting cards to spread the virus, take control of machines and enlarge their botnets. Dmitri also said that they have attributed this to one of the largest spam groups in Russia. He expects spam to rise to 90% of total mail in September 2007.

In the last few weeks, Sophos has identified a resurgence in e-card spam conceived to exploit the user's PC. In a 48-hour period (third week of August), SophosLabs found that malicious e-cards, which were made to exploit recipients with the JSEcard-A Trojan, accounted for 6.3% of the total spam present in its worldwide spam trap network.

According to SCMagazine.com on August 21, 2007, Ron O'Brien, senior security analyst at Sophos, said that greeting cards are becoming more targeted in their attacks. He also said that they are continuously observing different kinds of Storm malware. The concerning part is the rate at which e-cards are being used, because they have become an important mode of infection, said Ron. These kinds of mails make use of social engineering to make it appear that the card has been sent by a friend or family member. The mail also says that the card can be viewed simply by clicking the link embedded in the spam message.

Symantec's "State of Spam" report for August 2007 shows that spam using Excel, PDF or other attachments increased in July 2007, while the use of image spam gradually declined. Also, in the third week of August, Secure Computing disclosed that information-stealing and backdoor threats have emerged as the greatest threats and are continuously rising.
Computer
2015-48/1914/en_head.json.gz/628
Taking Up The Challenge My answer to a colleague's challenge for this old dog (that's me) to blog. I hope I've proven that 'every old dog could do a good blog'. Yesterday's Office Equipment (1) In my last post, I complained about the meagre remuneration I received as a bank clerk in the late 1970s. That was only half the story... and half the suffering that I went through. Besides paying me a somewhat subsistence salary, my employers then also made me work with obsolete office equipment. Well, to be very fair, the equipment could not be considered really that outdated at that time - they were probably second or even third generation ones. However, when viewing them in the context of today's technological advances, the equipment were real antiques in every sense of the word.First and foremost, every bank clerk must be equipped with a certain machine. It was a machine that was slightly more sophisticated than this one:It was a manual adding machine:Don't you dare sniff at it though. It could be operated when there was a power failure - not by batteries, not by electricity but just by candlelight (so that you could see in the dark lah). It also sported a leading brand name in office equipment at that time - Olivetti. (I learnt from a website that the giant Italian company had since changed its business to focus on telecommunications and IT instead. In Aug 2003, the company also adopted the name 'Telecom Italia'.)The adding machine that I used had a cranking handle on the right. It was meant to be operated with the right hand only as well as by the right-handed only. (I don't know if the company specially designed left-handed models to cater for left-handed people. I certainly haven't seen any left-handed models at that time, except pretty sashaying ones, perhaps.)When you want to add two numbers, you simply punch in the first set of figures with the fingers on your right hand, punch the addition (+) sign, punch in the second set of figures, then confidently pull back the handle to see the answer appear magically on the paper roll right before your eyes. You are unlikely to mistake the grand total with any other number because the machine was equipped with a red-black ink ribbon that printed totals in red and other numbers in black:The machine could probably handle all four arithmetic operations, i.e. addition, subtraction, multiplication and division. However, I recall that it was mostly used for addition and hence its name - adding machine. I think decimal points were handled by punching one of the black buttons with a white dot on it. Some of the senior staff could operate the machine so well that their fingers seemed to be 'tap dancing' effortlessly on the keypad. The 'dancing' was interrupted only by the occasional cranking of the handle. And all these were done without so much as a glance at the machine or keypad. It was very much like touch-typing. In contrast, for a newbie like me, I couldn't calculate even half as fast as the seniors although I had my eyes glued to the keypad. I did not stay long enough in the two banks to learn to calculate as quickly as the seniors and probably because of that, I also didn't develop right biceps as big as theirs.The machine was quite noisy during operation. Every punch of a button created a more than audible 'click-clack' sound - 'click' when punching it and 'clack' when releasing it. The cranking of the handle is even noisier - somewhat like the cocking of a rifle. 
So if you have a group of clerks operating the machines at the same time, the sound generated could certainly rival that of a casino's jackpot room. Coincidentally, the cranking of the adding machine's handle also looked uncannily similar to the wrestling with the 'one-armed-bandit'.The machine was so noisy that you could never pretend that you were busy calculating while hiding behind a partition because the lack of noises would easily give you away. So don't even think of calculated (pardon the pun) risks like catching up on your sleep that way at the office after watching an early morning World Cup match.Not long after I left the banks, life was a little easier for those who stayed on. They had a improved version of the adding machine:It was the electric adding machine. But it was no less noisy. And not blackout-proof.I couldn't remember how the supermarkets of that era totalled up your purchases. There were certainly no barcode scanners nor electronic registers then even in Cold Storage which was considered upmarket at that time. The supermarkets probably used a manual cash register which looked like a giant adding machine like these:But most people were poor then and could only afford to buy from cheaper traditional grocers such as the one shown in the photo below. Her 'cash register' was two Milo cans linked by a rope which ran through two overhead pulleys. (One of the cans could be seen at the left side of the photo.) This 'cash register' operated quite like a seesaw - when one went up, the other went down. The 'closing of the cash register' was achieved by leaving both cans at mid-height, just like the one in the photo.Such grocers made use of another type of calculator:The abacus.
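For readers who never used one of these machines, the workflow described above (punch in a figure, punch the next, crank the handle, read the total printed in red) can be mimicked with a small toy program. This is purely illustrative; the class below is an invention for this post, not a model of any actual Olivetti mechanism.

# Toy simulation of the adding-machine tape described above: punched entries
# print in "black", the cranked total prints in "red". Purely illustrative.
class AddingMachine:
    def __init__(self):
        self.tape = []      # the printed paper roll
        self.pending = []   # figures punched since the last crank

    def punch(self, figure):
        self.pending.append(figure)
        self.tape.append(f"{figure:>10}   (black)")

    def crank(self):
        total = sum(self.pending)
        self.tape.append(f"{total:>10} * (red)")
        self.pending.clear()
        return total

machine = AddingMachine()
for amount in (125, 90, 305):
    machine.punch(amount)
machine.crank()
print("\n".join(machine.tape))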
Computer
2015-48/1914/en_head.json.gz/1596
Join Date Sep 2007 Posts 439 New Prince of Persia goes Open-World Late last month, Ubisoft finally officially announced Prince of Persia for the current generation, and Edge magazine was said to have the juicy scoop in an upcoming issue. That issue has now arrived, and the first details are here. As it was already known, the game is being worked on at Ubisoft Montreal, the same studio that created the Sands of Time trilogy (The Sands of Time, Warrior Within, The Two Thrones). While people are being brought in from other projects, the core team working on the new game has remained the same as it was with the trilogy, meaning the game is in experienced and very capable hands. Prince of Persia is powered by Anvil, the same game engine used in Assassin's Creed, however, the two games look quite a bit different. While Assassin's Creed went for the realistic look, the visual style in POP is inspired by Japanese movies like Princess Mononoke, and games like Okami, and the new Street Fighter. "Fantasy, but credible," as the game's visual art director Mickael Labat puts it. When it comes to any relations to the previous games, producer Ben Mattes doesn't leave any room for speculation: "The Sands of Time are dead and gone," saying that the new game is a new chapter with its own story, proved by the fact that the player won't start off as a prince this time around, but instead as a drifter and adventurer, lost in the desert. "We're starting afresh, in the same universe, and we wanted to bring something new while keeping what worked before," creative director Jean-Christophe Guyot explains, "He'll be confronted by a lot of fantasy settings." The game will throw the player into a conflict between two gods; the god of light, Ormazd, and his brother Ahriman, the god of darkness. And whenever the gods have a score to settle, it is of course the world of the mortals where any battles must take place. Ahriman has released an infection that is basically holding the world in an ever-intensifying choke grip, and it's up to the good prince to save the day. And how he goes about it is up to the player -- the new POP is going to be (more of) an open-world game. While the game world won't be free-roaming, Mattes says it will be "truly non-linear" and will allow the player to progress through the game in preferred order: "We really wanted to create a POP experience where the player has a much greater authorship over their global experience, so that it wasn't a completely linear game where they played through it once, and that was it. At the same time, we recognized you will never get the POP experience we want -- those long strings of choreographed acrobatics -- from a true sandbox. We adopted an open-world structure where the player has the macro-level choice in how it unfolds, how the story unfolds, in terms of which regions to visit and which times, and which bosses they fight and when." Speaking of fighting, don't expect the combat system to be anything like what we saw in Assassin's Creed. The player will go against only one bad guy at a time - every fight in the game will be a duel. "Combat is a game in and of itself," Mattes says, "We really want to play off the strategic advantages of the environments." "Every fight in the game should feel like a boss fight. The feeling of being against a very difficult adversary who's every bit the swordsman as you are, and you have to use all your strategy and skill just to get past him; maybe you don't kill him, just drive him off. 
That'll happen often -- you'll drive an enemy away, then they'll stalk you to return and fight again." It certainly sounds like the current generation incarnation of Prince of Persia will be considerably different from what we've seen in the previous games, all graphics-, story-, and gameplay-wise. Ubisoft is also keeping a big secret under wraps for now, something that will apparently be fundamental to everything in the game -- the puzzles, the fighting and the acrobatics. Let's just hope it won't be as 'big' as Assassin's Creed's sci-fi 'twist'. Prince of Persia will be released for the PS3, 360 and PC, along with a "complementary" DS version which is "not a sequel, not a prequel and certainly not a port," later this year.
Computer
2015-48/1914/en_head.json.gz/1987
A relational database is a digital database whose organization is based on the relational model of data, as proposed by E.F. Codd in 1970.[1] The various software systems used to maintain relational databases are known as relational database management systems (RDBMS). Virtually all relational database systems use SQL (Structured Query Language) as the language for querying and maintaining the database.

Contents: 1 Relational model, 2 Keys, 2.1 Relationships, 3 Transactions, 4 Stored procedures, 5 Terminology, 6 Relations or tables, 7 Base and derived relations, 7.1 Domain, 8 Constraints, 8.1 Primary key, 8.2 Foreign key, 8.3 Stored procedures, 8.4 Index, 9 Relational operations, 10 Normalization, 11 Distributed relational databases, 12 Watermarking of relational databases

Relational model
Main article: Relational model
This model organizes data into one or more tables (or "relations") of columns and rows, with a unique key identifying each row. Rows are also called records or tuples.[2] Generally, each table/relation represents one "entity type" (such as customer or product). The rows represent instances of that type of entity (such as "Lee" or "iPhone 6") and the columns represent values attributed to that instance (such as address or price).

Keys
Each row in a table has its own unique key. Rows in a table can be linked to rows in other tables by adding a column for the unique key of the linked row (such columns are known as foreign keys). Codd showed that data relationships of arbitrary complexity can be represented using this simple set of concepts.

Part of this processing involves consistently being able to select or modify one and only one row in a table. Therefore, most physical implementations have a system-assigned, unique primary key for each table. When a new row is written to the table, the system generates and writes the new, unique value for the primary key (PK); this is the key that the system uses primarily for accessing the table. System performance is optimized for PKs. Other, more natural keys may also be identified and defined as alternate keys (AK). Often several columns may be needed to form an AK (this is one reason why a single integer column is usually made the PK). Both PKs and AKs have the ability to uniquely identify one row within a table. Additional technology may be applied to assure a unique ID across the world, a globally unique identifier; these are used when there are broader system requirements.

The primary keys within a database are used to define the relationships among the tables. When a PK migrates to another table, it becomes a foreign key in the other table. When each cell can contain only one value and the PK migrates into a regular entity table, this design pattern can represent either a one-to-one or a one-to-many relationship. Most relational database designs resolve many-to-many relationships by creating an additional table that contains the PKs from both of the other entity tables—the relationship becomes an entity; the resolution table is then named appropriately and is often assigned its own PK while the two FKs are combined to form an AK. The migration of PKs to other tables is the second major reason why system-assigned integers are normally used as PKs; there is usually neither efficiency nor clarity in migrating other types of columns.

Relationships
Relationships exist among the tables. These relationships take three logical forms: one-to-one, one-to-many, or many-to-many.
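To make the key concepts above concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module. The table names, columns and sample values are invented for illustration; other RDBMSs express the same constraints in slightly different SQL dialects.

# Minimal sketch: primary keys, foreign keys, and a many-to-many
# resolution table, using Python's built-in sqlite3 module.
# All table and column names here are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,   -- system-assigned surrogate PK
    email       TEXT NOT NULL UNIQUE   -- a natural alternate key (AK)
);
CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
-- A many-to-many relationship resolved into its own table:
-- it carries the PKs of both entities as foreign keys.
CREATE TABLE purchase (
    purchase_id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    product_id  INTEGER NOT NULL REFERENCES product(product_id),
    UNIQUE (customer_id, product_id)   -- combined alternate key
);
""")

conn.execute("INSERT INTO customer (email) VALUES ('lee@example.com')")
conn.execute("INSERT INTO product (name) VALUES ('iPhone 6')")
conn.execute("INSERT INTO purchase (customer_id, product_id) VALUES (1, 1)")

# The FK constraint rejects rows that point at a non-existent customer.
try:
    conn.execute("INSERT INTO purchase (customer_id, product_id) VALUES (99, 1)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)

The final insert fails with a foreign-key error, which is the constraint doing its job of keeping the resolution table consistent with its parent tables.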
Most relational databases are designed so that each column in each row holds only a single value (values are "atomic").

Transactions
In order for a database management system (DBMS) to operate efficiently and accurately, it must have ACID transactions.[3][4][5]

Stored procedures
Most of the programming within an RDBMS is accomplished using stored procedures (SPs). Often procedures can be used to greatly reduce the amount of information transferred within and outside of a system. For increased security, the system design may also grant access to only the stored procedures and not directly to the tables. Fundamental stored procedures contain the logic needed to insert new data and update existing data. More complex procedures may be written to implement additional rules and logic related to processing or selecting the data.

Terminology
(Figure: Relational database terminology.)
The relational database was first defined in June 1970 by Edgar Codd, of IBM's San Jose Research Laboratory.[1] Codd's view of what qualifies as an RDBMS is summarized in Codd's 12 rules. A relational database has become the predominant type of database. Other models besides the relational model include the hierarchical database model and the network model.

The table below summarizes some of the most important relational database terms and the corresponding SQL terms:

SQL term           | Relational database term | Description
Row                | Tuple or record          | A data set representing a single item
Column             | Attribute or field       | A labeled element of a tuple, e.g. "Address" or "Date of birth"
Table              | Relation or Base relvar  | A set of tuples sharing the same attributes; a set of columns and rows
View or result set | Derived relvar           | Any set of tuples; a data report from the RDBMS in response to a query

Relations or tables
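The Transactions and Stored procedures sections above lend themselves to a small sketch as well. SQLite (bundled with Python) has no stored procedures, so only the ACID/atomicity behaviour is illustrated here; the account table, its balances and the transfer helper are invented for this example rather than anything prescribed by the article.

# Minimal sketch of atomicity: either both rows change or neither does.
# Account names and balances are invented; SQLite is used only because it
# ships with Python -- it has no stored procedures, so that part is not shown.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (
    account_id INTEGER PRIMARY KEY,
    holder     TEXT NOT NULL,
    balance    INTEGER NOT NULL CHECK (balance >= 0)
);
INSERT INTO account (holder, balance) VALUES ('alice', 100), ('bob', 20);
""")

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts inside one transaction."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute(
                "UPDATE account SET balance = balance - ? WHERE account_id = ?",
                (amount, src))
            conn.execute(
                "UPDATE account SET balance = balance + ? WHERE account_id = ?",
                (amount, dst))
    except sqlite3.IntegrityError:
        return False  # e.g. the CHECK constraint caught an overdraft
    return True

print(transfer(conn, 1, 2, 30))    # True  -> balances become 70 / 50
print(transfer(conn, 2, 1, 999))   # False -> balances unchanged
print(conn.execute("SELECT holder, balance FROM account").fetchall())

Because the whole transfer runs inside one transaction, the failed overdraft leaves both balances exactly as they were.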
Computer
2015-48/1914/en_head.json.gz/2047
nerdculturalreview I would love to change the world, but they won't give me the source code Thoughts About the Narrative Power of Video Games I’ve been listening to a lot of the Nerdist Podcasts lately. They are humorous and entertaining, and most importantly they keep my brain from destroying itself with doubt and loathing. My favorite is The Indoor Kids, where professional comedian Kumail Nanjiani and his wife Emily V. Gordon talk to various funny people about video games. One of the best episodes by far starred Film Crit Hulk, where the idea of Video Games as art was debated. It is difficult to talk about art in general, mostly because everybody has a different idea about what art is. A good baseline definition for art is: Art is the expression of feelings or ideas through a specific medium, whether it is Literary, Visual, or Audible. Whether something is Art or just artistic depends on how accurately the thought or feelings are expressed and the performance of the medium. The consensus from the podcast was that video games tell a story, much like films, and therefore are able to be art. However, the inherent gameness of most video games takes away from the story, and therefore the accuracy of the themes and ideas of the narrative get watered down, making it not art. I’ve been taking these thoughts with me as I have been thinking about how it applies to one of my favorite video games of all time, Final Fantasy VIII. I feel of all the Final Fantasy games, it has arguably the strongest narrative (Final Fantasy X and XII would both be in that discussion). The game’s story is based around a team of mercenaries who have been training since their youth to face an evil sorceress. Throughout the game, the characters learn that they have forgotten their strong ties (they were all orphans together at the same orphanage) due to their training and use of magical Guardian Forces. The B story throughout game hinges around the main characters Squall and Rinoa. Squall’s father and Rinoa’s mother were once semi-romantically intwined. But due to war they were tragically never to see each other again, and their lives were to go on elsewhere. While the team relives scenes from Squall’s father’s past (oh by the way, that is a spoiler, I don’t think you get to that revelation until the third disc, sorry) Squall and Rinoa develop romantically. The games main themes revolve, therefore, on how War takes the childhood and futures of those involved, and on how the powerful memories of the past can help us to anchor or present. The game’s main beats happen over four discs, but the story stays mostly in the first three with the fourth containing a strange metaworld where the past, present, and future happen simultaneously and the team must fight against the evil sorceress on the edge of time. The story plays pretty strongly, with reveals spread out enough to keep the story interesting. However, with all the side quests and the ability to explore a vast territory, the narrative often takes a backseat to game play. Even the great Triple Triad side game (which is better than any other FF side game with the exception of maybe blitzball) can eat up a lot of your time and take away from the story. Therefore, I would have to come to the conclusion that FFVIII can come very close to being considered and artistic game, if the game play didn’t take away from the execution of the narrative. The same could probably be said for any of the other games in the series. 
Even with the strong storyline in FFXII hinging around the idea of revenge and the responsibility involved with using force, the massiveness of the game takes away from the overall execution of the narrative, thus taking away from its ability to be called art. They are great games. I don't think anyone can argue they are bad games without having a bias against RPGs. For them to be considered art they would have to take away all the things that take away from the story line, which would make them weaker games.

Written by dcoenen, January 21, 2012 at 4:11 pm
Computer
2015-48/1914/en_head.json.gz/2069
Lake City Lake
Daniel Huffman, 19th January, 2012

Let me tell you about one of my favorite maps. I've seen it on various t-shirts around Madison, Wisconsin, the city in which I have lived for the past four years or so. It's an emblem of sorts for we who are proud of living on an isthmus. I love this map because of its simplicity, and how that simplicity exemplifies good and clever cartographic design. I like to bring this map up as an example to people because it helps explore the edge of the term "map." It certainly doesn't look like most maps (except, of course, the increasingly popular typographic maps). But to me, its unusual appearance simply brings into focus what a map is and is not. This is a representation of space, and one in which there is a correspondence between space on the page and space on the Earth. The isthmus of Madison runs roughly southwest-northeast, and to either side there are lakes. This relationship is preserved in the representation above. It's authored, like any map, and it is graphical (it functions through its appearance). Those are the four components of map-ness, to me: authored, graphic, representation, spatial. So, first off, I appreciate it because it permits me to be needlessly pedantic about what makes something a map.

Beyond letting me show off in front of others, though, I appreciate it even when no one else is around. I enjoy its simplicity and its economy. It's a very highly generalized map, breaking down the area into but two categories: lake and city. It's a reminder to me that you can still convey a message with a map that is ridiculously simplified. Adding any more detail here would get in the way. All maps tell stories. Some are short, some are long. This map is a slogan. Unfortunately, there are a lot of maps out there that don't say much, but take a lot of space to say it, and they could learn a thing or two from this one. Here, the message is paired perfectly with the level of visual complexity used to express it. I imagine there are a lot of more detailed maps out there that could stand to be distilled down into three words. I have some problems with those who worship Tufte's doctrine of minimalistic, ink-efficient design, but those ideas can certainly be instructive.

Finally, I appreciate the fact that, like typographic maps generally, it needs no legend. When you look at this map, you see what it is. The interpretation of map symbols should ideally be seamless. As mapmakers, we work in the zone of representations. People see through maps — they don't look at Google Maps and see yellow lines and blue polygons. They see roads and water. It is our job to make that representational layer transparent, so that people see what our symbols mean, not how they were constructed. A legend absolutely destroys that transparency, because it makes someone aware of the symbol's mediation, and forces them to scramble for its meaning. I'm not a big fan of legends. While they are certainly necessary and useful at times, I'd rather mark things directly on the page if I can. Too often, you find maps like this one, in which the legend serves only to waste everyone's time. A lot of people have been unfortunately indoctrinated with the notion that a map must have a legend, when they should be used sparingly. What I would be keen to see is something which expresses this economy of design, and this easy legibility, but does not use type.
I’m sure there are examples out there, and it’s something I will have to ponder in my own future work. ← Opening the Vaults Two NACIS Projects → 8 thoughts on “Lake City Lake” bmc says: 20th January, 2012 at 07:15 So what do you think of the Madison flag? http://en.wikipedia.org/wiki/Flag_of_Madison,_Wisconsin Reply Daniel Huffman says: 20th January, 2012 at 09:39 I love the city flag — there was someone who lived a few doors down from me who used to fly it, and I was impressed by its simplicity and unforced symbolism (for forced symbolism, see Chicago’s flag, where the point of each star is made to stand for something). I think it was ranked among the top flags in the country by some vexillologists, for what it’s worth. Reply RobinT says: 20th January, 2012 at 07:26 I agree – I love the simplicity of this map (and, like you, would love to see a map like this without text, but not sure how you would do it). However, the problem I see with this map is that if you aren’t from Madison or aren’t familiar with its geography, you probably wouldn’t know what it was referring to or that it was a map at all. Reply Andy Woodruff says: 20th January, 2012 at 08:55 Nicely put on why this map/slogan is so great, and “seamless” map interpretations. I’m with you on legends, and similarly in the interactive realm I’m opposed to those initial “how to use this map” screens that it seemed we were encouraged to include a few years ago. What do you imagine could be as economical and legible as this typographic map? Something pictorial, perhaps? Reply Daniel Huffman says: 20th January, 2012 at 09:45 The city flag is nice, but it’s not obvious what the stripes stand for, so it’s not quite as good as the text. I think something pictorial could work nicely, but this morning I’m having trouble putting a concept together mentally. The “how to use the map” screens do have the advantage of occupying the user while the map loads. My favorite example of “something to do while you wait” was in a soccer game (I think it was EA FIFA 2011). While it’s loading the players and setting up the match in the background, they give you a single player on an empty field to practice shooting and movement. Would be interesting to see the concept applied cartographically — give you a quick pre-map to read or interact with while the real thing is getting up to speed. Reply BAD says: 30th January, 2012 at 15:16 I dig this. Anybody know where I can find it for sale on a t-shirt? I checked CafePress and Zazzle. Reply Daniel Huffman says: 30th January, 2012 at 15:26 I’ve seen them around State Street in Madison. But I wasn’t paying a lot of attention, so I can’t say as I remember where, and/or if they were only a temporary item, so I’m afraid I can’t be of much help to you. >________________________________ Reply Pingback: Elsewhere « Visualingual Leave a Reply Cancel reply Enter your comment here...
Computer
2015-48/1914/en_head.json.gz/2413
ME3 ending What are your thoughts on this?: http://www.themarysu...ee-as-possible/ I haven't played the ME games though I know one person that does. I guess in one way, it's a compliment that people would get that riled up over a game ending, but it also seems a little weird. I think developers have created a monster with giving the player story choices in games. They seem to be taking ownership where there isn't really any. My guess is that they had all these diverging paths and a decent ending became impossible, so they tried to make the player fill in the blanks and it outraged them. This kind of compliments my thoughts that modern games are leading people into a weird form of narcissism, where they feel everything should bend to their will. We decide what the character looks like, what happens to them, and on and on. In the old days, it was simply an obstacle course that you had to traverse. Maybe someone can write a Kinect game where they get to lay down and kick their feet and pound their fists when it doesn't go the way they want. I hate the ending of ME3 It's a "end of franchise" ending. I hope this isn't going to ruin the game for anyone, but... you have a choice of "main character dies and a sequel could be possible", and "main character dies and no sequels are possible" That's just bad .... well everything. Having said that, I don't see any reason Bioware should even think about changing the ending. They spent all those months deciding on what should happen, it's their game. If they want to kill it off, it's their choice. No point crying about it. I haven't played ME3 so I don't know what all the commotion is about. The racket however did find its way into my local news outlet, which I thought was amusing that someone would file a complaint with the FCC and BBB over the ending. You crazy Americans you ME1 was epic IMO and the story for me ended right there and then. ME2 to me was like what the movie Matrix 2 is to Matrix 1. It had entertainment value, but it can't hold a candle to the original. Pretty much all sequels are like that though. Judging from what I read about the reactions to ME3, I still don't know why people don't just move on. Just look at the ending for ME2. The last boss? I honestly thought Arnold Schwarzenegger would have made a cameo appearance there. It was a good laugh, but a clear indication that the story grew wings and flew out the window. So basically, they recognized that the only way to keep moving on a story with so many threads was to kill the main character and either start over with a new one or quit all together. It's amazing that they kept it going for 2 sequels after the first, however. You crazy Americans you No kidding. I think a fairly large portion of our society has lost touch with reality. That's why I live out in the wilderness. Sorry, I just have the image of a guy with no teeth, a banjo, and a state of the art laptop sat on a porch playing deliverance. In UK we have a different problem, the people aren't too bad, but the government has completely lost it. Oh well. I think Mass Effect was a port of the Star Wars games, Knights of the old republic et al with a new rendering engine. I wonder what the code will morph into next. I just watched this video and I think I understand now, and got a good laugh out of it too. Stainless, I believe SW KOTOR used the Aurora engine, like Neverwinter Nights and Dragon Age. ME series used the Unreal Engine. I think Deliverance scared a lot of people. I know it did me. Really ruined backpacking. 
I have my teeth, but I play the guitar and no state of the art laptop, an old desktop that has some trouble with the latest games. I don't have a porch but my house looks a little tacky, I must say. I built it myself. Going to try to do some more work on it this year. I lived in the mountains of Tennessee for a while and those hill billies are some of the nicest people you could meet. I'm currently living in Northern Wisconsin. Been here for about 30 years so I'll probably stay. Edit: I saw the video: I think that guy is crazy and should find some sort of life beyond games. OK, my hobby is making games, but I don't want to end up like that. That's another think about the UK. Do you know how many rules and regulations you would have to adhere to if you wanted to build your own house? Hell you'd be a hundred before you had finished filling in the forms! In fact you would probably have to do a Shepard and die then get some mysterious organisation regrow your body (with all your memories intact) to live long enough to dig the foundations. ..... I'm just wondering if this is a designed controversy. Either a cynical attempt to boost sales and get free advertising, or a case of the development team has fallen out with the publisher and it's a feck you ending. It's kind of getting like that here now, too. The unions put people in that make laws to keep us from doing anything, for our own protection, of course. Wouldn't want us to hurt ourselves. Better that we spend our lives in debt. When I built mine they didn't have anything like that, luckily, because I'm not the best carpenter, or electrician. I know how to make it work, but it's not very pretty. They don't bother people much if you live far enough in the country though. I know someone that's living in a 10X10 shack. It's not approved living, but no one says anything. I wouldn't get along in the city with people out measuring my grass length and whatnot. I'm just wondering if this is a designed controversy. I think the lead story guy at Bioware quit recently. I wonder if it was over that? You do get in situations where the publisher wants something out way too early and they don't care if it's crap. slashandz My thoughts. But I guess we'll see when the 'extended ending' DLC comes out in summer! Hmmmm you are writing as if there will be a ME4. I don't think that's possible, unless it's set prior to the events of ME3. ME3 feels to me as a "so long and thanks for all the fish" release. Do you know what I find most interesting in all this? It's gone planetary. The whole world is annoyed (to use a gentle word) at BW/EA for this. It has become huge. And even people who have never played a ME game have come to hear of it in enough detail to make their own (and negative) opinion about it. They (BW/EA) will do something about it, or it'll have repercussions on what next big/epic/multi_installments title they plan to release. Not that I'd enjoy the downfall of anybody, but it's about time that something like this happened. It's a step forward (however small) to begin the reversal of the bad trend that so many developers and publishers have embraced. The next time a developer will caress the idea of "screwing up" (\<-- can't find a better term, sorry) on their fan base, they'll pause and think seriously about it, knowing what they risk. Lots of dissatisfied people are holding back and not expressing their dissatisfaction with ME3's endings, because they think that -once again- no amount of complains shall bring a change (and honestly: did it ever?) 
But this time is different. It's out of scale, out of control, already. It is setting an example. The next time something like this happens, a lot more people than now will make their voice heard, and it shall be reputation/financial loss for whoever's enblazoned name is the target of it. I'm sure that BW will get out of this displeasing accident in one piece. This time. I'm also sure there won't be a next, or there won't be another BW-made game after it. Specifically about the ME franchise, now, people have payd real money for the game. And they dedicated their time to it. And they've grown fond of it, its story, its characters. They had expectations (solidly based on what they were given in the previous 2 titles)... It is an investment they made. It's all a videogame, yes, pure virtual entertainment. But the money/time/emotions/expectations investment part of it is as real as it gets. You can't stomp over that, pretend you don't care, stating that it's only *your* call to decide. Facts at hand that's not how it works, or we wouldn't be having this (planetary) fuss now. I personally find the whole thing humorous and one more sign that the world is going insane. Then again, I didn't play it. I always laugh when people get upset about these kind of things. I think what they are doing borders on the impossible as far as letting people believe they are creating their own story. It finally caught up with them. How many diverging paths can they write? It's ridiculous really, and the more they actually deliver, the more people will demand. There are tens of thousands of voice acted lines in these games. It's like a hundred movies and it's not enough and people are pissed off. It's Bioware's fault for creating too good an illusion and making people believe it. Now it's spun out of control. It's like a pyramid scheme. There just aren't enough lines, and eventually there won't be enough actors to read them. @fireside I always laugh when people get upset about these kind of things. I see. Let's all be apathetic and let the world f**k us over in big AND small ways. I say to those complaining that they have a valid complaint, and they should be heard. If Bioware wants to live by the detail sword, they open themselves up to die by it too, don't they? If you want a free market, how will things correct themselves in laissez-fairy land if you aren't allowed to raise your voice in complaint? We obviously don't have to worry about that. What we have to worry about is the insatiable monster they created, or rather, what they have to worry about. Hats off to them for what they did, but I think they just found out that too many sequels = impossible to control branching. The issue of the number of required lines for the voice actors is a purely technical one. At the moment voice acting has to be done in a studio with a microphone. It's not going to be long before voice synth's get good enough for this not to be an issue any more. Hell I came close ten years ago. I got Janet Jackson to record a bunch of test phrases, parsed it into a voice print, then generated a load of spoken sentences. Almost worked, they sounded a lot like Michael Jackson. If I was doing that ten years ago, with no hardware acceleration, won't be too long before someone realises there is a market for this as middle ware and gets stuck in. So maybe you could say "Bioware were too ambitious for the current state of the technology". 
I don't think that though, I think they realised that they couldn't afford to support the development of the game that ME3 should have been. I think they reached the limit of what their current design could handle, and decided to kill it off. Now I personally think that was a very bad idea, but it's their game. They can do what they want with it. It's not going to be long before voice synth's get good enough for this not to be an issue any more. It will be a very long time before that happens, if ever. I think if you've made amateur games and gotten friends or whatever to do the lines, you know what bad acting really is. Not very many of us humans can even do it. For a computer to invoke emotions in speech is a long stretch. When it comes to voice acting, I think Skyrim takes the spotlight. It's quite often a debated issue amongst gamers, but they don't lambast the game over it. In fact one of the constantly repeated lines in the game has become an Internet meme and even made its way into a TV show (NCIS).@fireside I think if you've made amateur games and gotten friends or whatever to do the lines, you know what bad acting really is. Nah, just a bunch of friends that horse around I don't think voice acting is that complicated though, it just takes a passion for doing it right without feeling embarrassed for acting it out (who wants to yell with neighbours listening?). I'm no pro, but I can mimic a lot of the voices from Warcraft II units and often do it to lighten the mood at work. I'm sure most could do a good job given the right atmosphere. On topic, it looks like all the complaining paid off as there is work in progress for a DLC this summer to fix up the ending somehow. Wonder if this will set the mindset for future games. The beginnings of a gamers union. I think that's hyperbole in the opposite direction. ME1 and ME2 had a decent system to create a good amount of relevant alternate endings. Explain to me why they couldn't redo what they already did twice before? IOW, if it truly is a "monster" of a job, they already killed it twice. I don't think people wanted an order of magnitude more endings. They just wanted variety that made sense like the previous ones... especially when being actively sold on that bill of goods. Explain to me why they couldn't redo what they already did twice before? Because you are loading in a character which has accumulated experiences from the previous game, that makes each game more divergent from the one prior.
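The branching problem raised in this thread is easy to put rough numbers on. A back-of-the-envelope sketch follows; the figure of ten tracked decisions per game is an invented assumption for illustration, not anything BioWare has published.

# Sketch of why imported save-game choices are hard to honour: every tracked
# binary decision doubles the number of possible world states a sequel
# inherits. The numbers below are invented for illustration only.
def inherited_states(decisions_per_game, games):
    """Distinct combinations of carried-over flags after `games` titles."""
    return 2 ** (decisions_per_game * games)

for games in (1, 2, 3):
    print(games, "game(s):", inherited_states(10, games), "possible states")
# 1 game(s): 1024 ... 3 game(s): 1073741824 -- far more combinations than any
# studio could script distinct endings for, which is the trade-off discussed above.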
Computer
2015-48/1914/en_head.json.gz/2437
Windows Media Player 11
Nothing new from Windows Media Player 11. The glass design doesn't look good, it's black, and it has too many buttons scattered all around the place. It has an inline search feature that works well for large media libraries, but it doesn't list artists and songs in trees - the new media player uses a similar approach to Windows Explorer.

But there are people that say Windows Media Player 11 has many new features. Let's see: "Some of the unique features of Media Player 11 are a deeply integrated music library for both online and offline content, a new and improved interface, the ability to connect to additional hardware easily, and integrated, easy-to-use tools for following the process of any task (downloading music, burning CDs, synching music, or streaming video, just to name a few). Media Player 11 has a new integrated feel, too — one that makes online, network, and offline content indistinguishable. Many of the improvements are due to the redesigned interface, which includes simplified trees, helpful shortcuts on the menus and menu bars, and an advanced and improved media library. There are now Back and Forward buttons, giving Media Player a web-browser feel; a lightning-fast WordWheel search tool for getting through the library; and Xbox 360 support."

You can download WMP 11 from Softpedia (Windows 2000/XP - download size: 22MB).
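For what it's worth, the "inline search" / WordWheel behaviour described above boils down to re-filtering the library on every keystroke. A rough sketch follows; the toy library contents and the plain substring match are assumptions standing in for whatever matching WMP 11 actually uses.

# Rough sketch of "word wheel" style incremental search: each keystroke
# narrows the visible library to entries containing the typed text.
# Library contents and the matching rule are invented for illustration.
library = [
    "Artist A - Song One",
    "Artist A - Song Two",
    "Artist B - Another Tune",
]

def word_wheel(query, items):
    """Case-insensitive substring filter, re-run on every keystroke."""
    q = query.lower()
    return [item for item in items if q in item.lower()]

for typed in ("a", "an", "ano"):
    print(typed, "->", word_wheel(typed, library))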
Computer
2015-48/1914/en_head.json.gz/2438
Google Toolbar 8, Powered by Google Chrome
After Google released Chrome, Google Toolbar's development slowed down. That's because Google Toolbar is no longer the primary vehicle for adding browser features and Google mostly focused on improving Chrome. Google Toolbar 8 is a completely new version of Google's add-on that was available as part of Google Labs. "Google Toolbar 8 is actually built and runs on top of the Google Chrome Frame platform. This means that Toolbar 8 will run more like a web app in that it can be customized and updated much more frequently and easily. It also means that Google Chrome Frame is installed at the time of Toolbar 8 installation," explains Google. The new version of Google's toolbar only works in Internet Explorer right now and it doesn't include all the features that are currently available in the latest public version. Google included some new features: buttons for the most visited sites, Google Dictionary integration and Google Instant. "Google Toolbar displays up to seven of your most visited sites as buttons. Click on a button to go directly to its site. When you download the new Google Toolbar your toolbar will display buttons for Gmail, Google Calendar, Google Docs, Youtube, Google News, Google Reader and Google Tasks by default."
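The "most visited sites" buttons described in the quote amount to ranking history entries by visit count and keeping the top seven. Here is a hedged sketch of that idea; the sample history and tie-breaking are assumptions, and Google's real ranking is certainly more involved.

# Illustrative sketch only: picking "most visited" sites from a browsing
# history by visit count, capped at seven buttons as the post describes.
from collections import Counter

history = [
    "mail.google.com", "news.google.com", "mail.google.com",
    "youtube.com", "docs.google.com", "mail.google.com", "youtube.com",
]

def most_visited(visits, slots=7):
    """Return up to `slots` sites, most frequently visited first."""
    return [site for site, _count in Counter(visits).most_common(slots)]

print(most_visited(history))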
Computer
2015-48/1914/en_head.json.gz/2629
OpenTube | 2 comments Open Source Games List – Latest Roundup of 40 Games We had earlier covered a list of 30 open source games which has great game play. As we enjoy playing them and times are passing by, more and more developer community and game developers have been striving hard, contributing to create amazing games with great game plays and graphics for free. Amazing Bulldozer Simulator – Digger Simulator 10 Terrific Free Massively Multiplayer Online Games 12 Free and Awesome First Person Shooter Games – Part 1 12 Free and Awesome First Person Shooter Games – Part 2 Here is a list of open source games that have done recent releases: 1. O.A.D Wildfire Games, an international group of volunteer game developers have release of "0 A.D. Alpha 10 Jhelum" a free, open-source game of ancient warfare. This alpha release features Hellenic factions such as Athens, Macedonia and Sparta; technologies, civilization phases, click-and-drag walls, healing and more. The Software is available for Windows, Linux and Mac OS X. 0 A.D. 2. Angband Angband is a free, single-player dungeon exploration game where you take the role of an adventurer, exploring a deep dungeon, fighting monsters, and acquiring the best weaponry you can, in preparation for a final battle with Morgoth, the Lord of Darkness. The game is supported on Windows and Mac OS X. 3. Battle for Wesnoth The Battle for Wesnoth is a turn-based tactical strategy game with a high fantasy theme. Build up a great army, gradually turning raw recruits into hardened veterans. In later games, recall your toughest warriors and form a deadly host whom none can stand against! Choose units from a large pool of specialists, and hand-pick a force with the right strengths to fight well on different terrains against all manner of opposition. The game supports Windows, Linux and Mac OS 4. Blades of Exile Exile is a series of role-playing video games created by Jeff Vogel of Spiderweb Software. Blades of Exile is one of the four games released in the series. It consists of three short scenarios set after the main trilogy as well as an editor that allows players to create their own scenarios, which need not be set in the Exile game world at all. Several hundred custom-made scenarios have been designed since the release of the game in 1997. Blade of Exile supports Windows and Macintosh. 5.BZFlag BZFlag is a free online multiplayer 3D tank battle game. The name originates from "Battle Zone Capture The Flag". It runs on Windows, Mac OSX, Linux, BSD, and other platforms. It was one of the most popular games ever on Silicon Graphics machines and continues to be developed and improved. 6. Dungeon Crawl Stone Soup Dungeon Crawl Stone Soup is a free roguelike game of exploration and treasure-hunting in dungeons filled with dangerous and unfriendly monsters in a quest for the mystifyingly fabulous Orb of Zot. Dungeon Crawl Stone Soup has diverse species and many different character backgrounds to choose from, deep tactical game-play, sophisticated magic, religion and skill systems, and a grand variety of monsters to fight and run from, making each game unique and challenging. Dungeon Crawl Stone Soup can be played offline, or online on a public telnet/ssh server thanks to the good folks at crawl.akrasiac.org (CAO) and crawl.develz.org (CDO). These public servers allow you to meet other players’ ghosts, watch other people playing, and, in general, have a blast! The game supports Windows and Mac OS X. 7. FlightGear FlightGear is an open-source flight simulator. 
It supports a variety of popular platforms (Windows, Mac, Linux, etc.) and is developed by skilled volunteers from around the world. Source code for the entire project is available and licensed under the GNU General Public License. The goal of the FlightGear project is to create a sophisticated and open flight simulator framework for use in research or academic environments, pilot training, as an industry engineering tool, for DIY-ers to pursue their favorite interesting flight simulation idea, and last but certainly not least as a fun, realistic, and challenging desktop flight simulator. 8. Freeciv Freeciv is a Free and Open Source empire-building strategy game inspired by the history of human civilization. The game commences in prehistory and your mission is to lead your tribe from the Stone Age to the Space Age. Freeciv is supported on Windows and Mac OS X. 9. FreeCol FreeCol is a turn-based strategy game based on the old game Colonization, and similar to Civilization. The objective of the game is to create an independent nation. You start with only a few colonists defying the stormy seas in their search for new land. The FreeCol aims to create an Open Source version of Colonization. FreeCol supports Windows and Mac OS X. 10. Freedoom The Freedoom project aims to create a complete Doom-based game which is Free Software. Combined with a free source port, people will also be able to play the back catalog of extensions made to Doom by hobbyists over the last 15 years. The game is supported on GNU/Linux, BSD, Mac OS X, other POSIX, Windows. 11. FreedroidRPG FreedroidRPG aims to provide a popular reference game in the open-source world. The goal is to do so through the implementation of an immersive world with distinctive dialog and graphical styles in a format that is friendly to all ages while providing a fair amount of choice to the player. FreedroidRPG features a real time combat system with melee and ranged weapons, fairly similar to the proprietary game Diablo. There is an innovative system of programs that can be run in order to take control of enemy robots, alter their behavior, or improve one’s characteristics. You can use over 50 different kinds of items and fight countless enemies on your way to your destiny. An advanced dialog system provides story background and immersive role playing situations. It is supported on Windows, Linux and Mac OS
Computer
2015-48/1914/en_head.json.gz/2711
Home > Risk Management OverviewGetting StartedResearchTools & Methods Additional Materials ConsultingOur People Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most. The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk. Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission. The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs. Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials. The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. 
These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.

Spotlight on Risk Management
- The Monitor, June 2009
- New Directions in Risk: A Success-Oriented Approach (2009)
- A Practical Approach for Managing Risk
- A Technical Overview of Risk and Opportunity Management
- A Framework for Categorizing Key Drivers of Risk
- Practical Risk Management: Framework and Methods
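As a toy illustration of the driver-based analysis described above, the sketch below rates a handful of drivers and rolls them up into a single risk gauge. The driver names, the 0-1 scale and the simple average are assumptions made for this example; they are not the SEI's published MRD instrument.

# Toy illustration of driver-based mission risk analysis: rate each driver's
# current state, then roll the ratings up into one gauge. Drivers, scale and
# aggregation below are invented assumptions, not the SEI's actual protocol.
drivers = {
    "objectives are realistic":   0.8,  # 1.0 = strongly supports success
    "requirements are stable":    0.4,
    "staff have needed skills":   0.7,
    "schedule allows for rework": 0.3,
}

def mission_risk(driver_ratings):
    """Overall risk gauge: 0 = on track, 1 = mission very unlikely to succeed."""
    support = sum(driver_ratings.values()) / len(driver_ratings)
    return 1.0 - support

risk = mission_risk(drivers)
print(f"overall mission risk: {risk:.2f}")
for name, rating in sorted(drivers.items(), key=lambda kv: kv[1]):
    print(f"  watch: {name} ({rating:.1f})")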
Computer
2015-48/1914/en_head.json.gz/2913
You have JavaScript disabled in your web browser. This website is best viewed with CSS and JavaScript enabled, alternatively you can use the low bandwidth ACNS - Anglican News Service Home > Africa Address by Archbishop Njongonkulu Ndungane at an Interfaith Service on World AIDS Day Posted on: December 2, 2003 2:25 PM Related Categories: Southern Africa [ACNS source: Church of the Province of Southern Africa] Greetings to you all and thank you for setting aside this time to meet together on World AIDS Day. To stand together as we honour those living with and dying from AIDS and to express our ongoing concern about the present situation. Last Tuesday, November 25th , UNAIDS released its update on the global state of HIV and AIDS. The report is chilling. It shows that the global epidemic is showing no signs of abating: 5 million new infections worldwide, an estimated 40 million plus HIV positive worldwide, 3 million deaths in 2003 alone. One in five southern Africans is HIV positive. In South Africa, the number of infections has increased by half a million over last year making for some 4,3 million people living with HIV. These figures are daunting, indeed chilling. The human cost, the social and economic impact on nations such as ours is almost impossible to imagine. The epidemic is particularly devastating, says the report, on women who are more likely to be infected than men and on young women in the 15-24 age group, who have an infection rate two and a half times higher than similarly aged young men. The report goes on to note that, despite improved political action and increased spending, improvements are still far too small and slow in coming. In short, what we are doing is still inadequate. Increasingly, there is an acknowledgement that the epidemic can not be dealt with only as a health problem. Dr Peter Piot, UNAIDS Executive Director, commented in March this year: "The goal of realizing human rights is fundamental to the global fight against AIDS. And in a world facing a terrible epidemic - one that has already spread further, faster and to more devastating effect than any other in human history - winning the fight against AIDS is a precondition for achieving rights worth enjoying". The issue of stigma is directly related to the issue of human rights. Stigma discriminates, denying as it does: the protection of the dignity of people living with AIDS; freedom of expression to openly and frankly acknowledge their status which makes it difficult, if not impossible, for others to want to know their own status and freedom of movement which in turn impedes access to education and work. Stigma is about not acknowledging or respecting the human dignity of each and every person. It violates the fundamental human rights of people living with AIDS in a multitude of ways and consequently against society itself. An injury to one is indeed an injury to all! As we work as faith-based communities with others to bring all our resources to bear in this new struggle, we must recognize the need to add our weight to ensure that basic human rights of people living with AIDS are safeguarded. We are not here only to pick up the pieces of sick people's humanity or to bury the dead and look after the countless orphans left behind, critical as these interventions are. We are compelled by the great imperative 'to do unto others as we would have them do unto us'. This is fundamentally about the equal human dignity we all share and the common human rights that flow out of this. 
Respect, protection and the fulfillment of human rights is as central to all the world's great faiths as it is to the AIDS agenda. Equally therefore, we must ensure that HIV and AIDS is central to the global human rights as well all our faith's agenda. I find it almost impossible to read the Christian Gospel without hearing a powerful message speak through it into this pandemic. It is essentially a message which encourages us to live our lives in hope, to work unceasingly for a better world in the here and now, to realize God's kingdom on earth as in heaven. Because of this, I am encouraged not to give into despair as I take cognisance of the UNAIDS report. For that hope to be realised in the fight against AIDS, we must remain vigilant and steadfast, increasing our efforts. We must, in the face of the moralistic denouncing of people living with AIDS, refuse to deny who they are as human beings with equal rights. We must not, as former President Mandela reminds us, see them as overpowering numbers. People living with AIDS are at risk of being swallowed up in the anonymity of numbers. They are also at great risk of being seen as a burden on societies, on the economies of already struggling nations. This further adds to their stigmatisation and forces them out to the fringes of family, community and national life. There are still people who see them as 'deserving' of their lot, as having brought this situation on themselves. Even in the Church, there are those who shirk their duty to be compassionate and hide behind a wall of morality and judgmental attitudes. But we are not God. Our job is not to judge, lest we ourselves be judged for playing God! If we are to be like God, then we must learn to love like God and to show this love in works of mercy and compassion. We must learn to celebrate life and seek the best possible life for all and especially the oppressed and poor, the sick, the widowed and the orphaned. We must learn to see the human face of this disease. One of the great miracles that is to be found among people living with AIDS, is their discovery of the deep value of life. There are many such people who refuse to lay down in defeat or accept the label of 'victim'. They know themselves to be as deserving of life and human rights as everyone else. Experiencing life's fragility and knowing anew its value, they have learnt to value and celebrate this precious gift in all its fullness. Being HIV-positive does not exclude them from loving relationships, from raising their children or from playing their role as full and responsible citizens. It is so very critical that we raise the bar of our own contribution to include human rights. We can do this by: creating awareness of the social, theological and technical issues of HIV and AIDS that contribute to stigmatization through discussions throughout our churches, sermons and workshops with our communities; participating in advocacy programmes and adding our contribution to the shaping of government policies on HIV and AIDS; involving more people living with AIDS in progamme planning, implementation and management; promoting and upholding especially the rights of women, youth and children; networking with civil society organizations on human rights issues; and creating workplace policies in all our places of work which ensure the rights of all workers who are HIV positive. 
One of the great teachers of the Christian faith many centuries ago spoke of 'the fantastic sob of recognition' that is evoked when a person recognizes the divinity that is present in all humanity. If we are to see the transfiguration of this pandemic from the global picture painted by the UNAIDS report to one in which the tide is turned, then we must begin by recognising, 'with a fantastic sob', that in all people living with AIDS we have a common and shared humanity and destiny given to us by our Creator. Their loss is ours. Sharing in the battle for human rights for people living with AIDS holds the promise of a shared victory and the dawn of a world free from AIDS.

Madiba's concert on Saturday night put the spotlight on this country. That light must now spread into our hearts and minds, casting out all the dark shadows of ignorance and denial. It must enlighten the way ahead so that we may move forward with greater urgency and speed in rolling out the long-awaited national treatment campaign.

My call today is that we make this issue a voting issue. Unless a political party produces a clear commitment - including business plans and time frames - to fighting the disease and extending the lives of those who live with it, it is not deserving of our votes in the 2004 General Election. More particularly, political parties who want to be viewed by the electorate as serious contenders must address the catastrophe of AIDS orphans. They must put forward a clear policy plan with a clear timeline on how to deal with these forsaken children. Each and every child in this country has the right to a secure home, plenty to eat, education and a secure future. I say again that this must be a voting issue. This nation demands a deep and lasting commitment to eradicate this pandemic from those who aspire to lead it!

Finally, let this World AIDS Day mark the turning point in our land from darkness to light and herald the dawn of a new age of compassion and commitment. Let it be the beginning of a generation free from HIV and AIDS.
Computer
2015-48/1914/en_head.json.gz/3148
Posted Teenage hacker sentenced to six years without Internet or computers By Andrew Kalinchuk Cosmo the God, a 15-year-old UG Nazi hacker, was sentenced Wednesday to six years without Internet or access to a computer. The sentencing took place in Long Beach, California. Cosmo pleaded guilty to a number of felonies including credit card fraud, bomb threats, online impersonation, and identity theft. Cosmo and UG Nazi, a group he runs, started out as a group in opposition to SOPA. Together with his group, Cosmo managed to take down websites like NASDAQ, CIA.gov, and UFC.com among others. Cosmo also created custom techniques that gave him access to Amazon and PayPal accounts. According to Wired’s Mat Honan, Cosmo’s terms of his probation lasting until he is 21 will be extremely difficult for the young hacker: “He cannot use the internet without prior consent from his parole officer. Nor will he be allowed to use the Internet in an unsupervised manner, or for any purposes other than education-related ones. He is required to hand over all of his account logins and passwords. He must disclose in writing any devices that he has access to that have the capability to connect to a network. He is prohibited from having contact with any members or associates of UG Nazi or Anonymous, along with a specified list of other individuals.” Jay Leiderman, a Los Angeles attorney with experience representing individuals allegedly part of Anonymous also thinks the punishment is very extreme: “Ostensibly they could have locked him up for three years straight and then released him on juvenile parole. But to keep someone off the Internet for six years — that one term seems unduly harsh. You’re talking about a really bright, gifted kid in terms of all things Internet. And at some point after getting on the right path he could do some really good things. I feel that monitored Internet access for six years is a bit on the hefty side. It could sideline his whole life–his career path, his art, his skills. At some level it’s like taking away Mozart’s piano.” There’s no doubt that for Cosmo, a kid that spends most of his days on the Internet, this sentence seems incredibly harsh. Since he’s so gifted with hacking and computers, it would be a shame for him to lose his prowess over the next six years without a chance to redeem himself. Although it wouldn’t be surprising if he found a way to sneak online during his probation. However, that kind of action wouldn’t exactly be advisable. It’s clear the FBI are taking his offenses very seriously and a violation of probation would only fan the flames. Do you think the sentencing was harsh or appropriate punishment for Cosmo’s misdeeds?
Computer
2015-48/1914/en_head.json.gz/4448
Passive verification of the strategyproofness of mechanisms in open environments
Laura Kang, Harvard University, Cambridge, MA
David C. Parkes
ICEC '06: Proceedings of the 8th international conference on Electronic commerce: The new e-commerce: innovations for conquering current barriers, obstacles and limitations to conducting successful business on the internet, Pages 19-30, ACM New York, NY, USA ©2006

Concepts in "Passive verification of the strategyproofness of mechanisms in open environments":

Decision making: Decision making can be regarded as the mental processes resulting in the selection of a course of action among several alternative scenarios. Every decision making process produces a final choice. The output can be an action or an opinion of choice. (more from Wikipedia)

Individual: An individual is a person or a specific object. Individuality is the state or quality of being an individual; a person separate from other persons and possessing his or her own needs or goals. Being self expressive, independent. From the 15th century and earlier, and also today within the fields of statistics and metaphysics, individual meant "indivisible", typically describing any numerically singular thing, but sometimes meaning "a person." (q.v. "The problem of proper names") (more from Wikipedia)

Behavior: Behavior or behaviour refers to the actions and mannerisms made by organisms, systems, or artificial entities in conjunction with their environment, which includes the other systems or organisms around as well as the physical environment. It is the response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary. (more from Wikipedia)

Trust (social sciences): In a social context, trust has several connotations. Definitions of trust typically refer to a situation characterised by the following aspects: One party (trustor) is willing to rely on the actions of another party (trustee); the situation is directed to the future. In addition, the trustor (voluntarily or forcedly) abandons control over the actions performed by the trustee. (more from Wikipedia)

Information: Information, in its most restricted technical sense, is a sequence of symbols that can be interpreted as a message. Information can be recorded as signs, or transmitted as signals. Information is any kind of event that affects the state of a dynamic system. Conceptually, information is the message (utterance or expression) being conveyed. This concept has numerous other meanings in different contexts. (more from Wikipedia)

Strategyproof: In game theory, an asymmetric game where players have private information is said to be strategyproof (or truthful) if there is no incentive for any of the players to lie about or hide their private information from the other players. The strategyproof concept has applications in several areas of game theory and economics. For example, payment schemes for network routing. Consider a network as a graph where each edge (i.e. (more from Wikipedia)

Agent (economics): In economics, an agent is an actor and decision maker in a model. Typically, every agent makes decisions by solving a well or ill defined optimization/choice problem. The term agent can also be seen as equivalent to player in game theory. For example, buyers and sellers are two common types of agents in partial equilibrium models of a single market. (more from Wikipedia)

Keywords: constraint networks, information systems applications, strategyproofness
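The "Strategyproof" entry above gives the informal definition used in mechanism design: no player can gain by misreporting private information. As a purely illustrative sketch (it is not taken from the Kang and Parkes paper, and it does not reproduce their passive-verification method), the Python below checks numerically that a sealed-bid second-price (Vickrey) auction is truthful for one bidder: no misreport on a grid of alternative bids earns more utility than bidding the true value. The valuation and the competing bids are invented for the example.

```python
# Illustrative sketch: truthfulness of a second-price (Vickrey) auction.
# The valuations and the grid of misreports are arbitrary choices for this
# example; nothing here is taken from the paper listed above.

def second_price_auction(bids):
    """Return (winner_index, price): highest bid wins, pays second-highest."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    price = bids[order[1]] if len(bids) > 1 else 0.0
    return winner, price

def utility(valuation, my_bid, other_bids, me=0):
    """Quasi-linear utility of bidder `me` when submitting `my_bid`."""
    bids = [my_bid] + list(other_bids)
    winner, price = second_price_auction(bids)
    return valuation - price if winner == me else 0.0

if __name__ == "__main__":
    valuation = 10.0          # bidder 0's true value for the item
    other_bids = [6.0, 8.5]   # the competing (fixed) bids

    truthful = utility(valuation, valuation, other_bids)
    # Try a grid of possible misreports and confirm none beats truth-telling.
    best_deviation = max(utility(valuation, b / 10.0, other_bids)
                         for b in range(0, 201))   # bids 0.0 .. 20.0
    print("utility when bidding truthfully:", truthful)
    print("best utility over all misreports:", best_deviation)
    assert best_deviation <= truthful + 1e-9, "a profitable misreport exists"
```

In the terminology of the definitions above, no agent has an incentive to lie about its private valuation here, which is the property that the paper's title says it aims to verify passively in open environments.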
Computer
2015-48/1914/en_head.json.gz/4510
Last year, Hewlett Packard Company announced it will be separating into two industry-leading public companies as of November 1st, 2015. HP Inc. will be the leading personal systems and printing company. Hewlett Packard Enterprise will define the next generation of infrastructure, software and services.

Public Sector eCommerce is undergoing changes in preparation and support of this separation. You will still be able to purchase all the same products, but your catalogs will be split into two: Personal Systems, Printers and Services, and Servers, Storage, Networking and Services. Please select the catalog below that you would like to order from. Note: Each product catalog has separate shopping cart and checkout processes.

Personal Computers and Printers: Select here to shop for desktops, workstations, laptops and netbooks, monitors, printers and print supplies.

Server, Storage, Networking and Services: Select here to shop for Servers, Storage, Networking, Converged Systems, Services and more.
Computer
2015-48/1914/en_head.json.gz/4602
Release date: Nov. 2, 2010

The Fedora Project is a Red Hat sponsored and community-supported open source project. It is also a proving ground for new technology that may eventually make its way into Red Hat products. It is not a supported product of Red Hat, Inc. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from free software. Development will be done in a public forum. The Red Hat engineering team will continue to participate in the building of Fedora and will invite and encourage more outside participation than was possible in Red Hat Linux. By using this more open process, The Fedora Linux project hopes to provide an operating system that uses free software development practices and is more appealing to the open source community.

Fedora 14, code name 'Laughlin', is now available for download. What's new? Load and save images faster with libjpeg-turbo; Spice (Simple Protocol for Independent Computing Environments) with an enhanced remote desktop experience; support for D, a systems programming language combining the power and high performance of C and C++ with the programmer productivity of modern languages such as Ruby and Python; GNUStep, a GUI framework based on the Objective-C programming language; easy migration of Xen virtual machines to KVM virtual machines with virt-v2v.

Manufacturer website. 1 DVD for installation on the x86_64 platform.
Computer
2015-48/1914/en_head.json.gz/4812
TechnoSecurity's Guide to E-Discovery and Digital Forensics, 1st Edition: A Comprehensive Handbook
Editor: J Wiles
Imprint: Syngress
Print Book ISBN: 9781597492232
Dimensions: 229 X 178

Complete coverage of e-discovery, a growth market, from the people that run the TechnoForensics annual tradeshow.

IDC estimates that the U.S. market for computer forensics will grow from $252 million in 2004 to $630 million by 2009. Business is strong outside the United States, as well. By 2011, the estimated international market will be $1.8 billion. The Techno Forensics Conference, to which this book is linked, has increased in size by almost 50% in its second year; another example of the rapid growth in the digital forensics world.

The TechnoSecurity Guide to Digital Forensics and E-Discovery features:
* Internationally known experts in computer forensics share their years of experience at the forefront of digital forensics
* Bonus chapters on how to build your own Forensics Lab
* 50% discount to the upcoming Techno Forensics conference for everyone who purchases a book

This book provides IT security professionals with the information (hardware, software, and procedural requirements) needed to create, manage and sustain a digital forensics lab and investigative team that can accurately and effectively analyze forensic data and recover digital evidence, while preserving the integrity of the electronic evidence for discovery and trial.

For investigators, examiners, IT security managers, lawyers and academia.

Jack Wiles
Jack Wiles is a security professional with over 40 years' experience in security-related fields. This includes computer security, disaster recovery, and physical security. He is a professional speaker, and has trained federal agents, corporate attorneys, and internal auditors on a number of computer crime-related topics. He is a pioneer in presenting on a number of subjects, which are now being labeled "Homeland Security" topics. Well over 10,000 people have attended one or more of his presentations since 1988. Jack is also a co-founder and President of TheTrainingCo., and is in frequent contact with members of many state and local law enforcement agencies as well as Special Agents with the U.S. Secret Service, FBI, IRS-CID, U.S. Customs, Department of Justice, The Department of Defense, and numerous members of High-Tech Crime units. He was also appointed as the first President of the North Carolina InfraGard chapter, which is now one of the largest chapters in the country. He is also a founding member of the U.S. Secret Service South Carolina Electronic Crimes Task Force. Jack is also a Vietnam veteran who served with the 101st Airborne Division in Vietnam in 1967-68, where he was awarded two Bronze stars for his actions in combat. He recently retired from the U.S. Army Reserves as a lieutenant colonel and was assigned directly to the Pentagon for the final seven years of his career.
View additional works by Jack Wiles: Low Tech Hacking, 1st Edition. Authors: Jack Wiles & Terry Gudaitis & Jennifer Jabbusch & Russ Rogers & Sean Lowther

Table of contents: Introduction; Authentication; Email Forensics; Developing an Enterprise Digital Investigative/Electronic Discovery Capability; Advanced Training For Your Electronic Discovery Team; Digital Forensics in a Multi Operating System Environment; Digital Forensic Investigation Operations; Working Together To Build a Regional Forensics Lab; Forensic Examinations in a Terabyte World; Starting a Career In The Field of Techno Forensics - Degrees, Certifications and Networking; Standards in Digital Forensics; Selecting The Hardware For Your Forensics Computer; Death By a Thousand Cuts; Balancing Records & Information; Management through Electronic Discovery; Win or Lose - You Choose! - Inside Secrets to Presenting The Best You; Mac Forensics
Computer
2015-48/1914/en_head.json.gz/5117
CGSociety :: Artist Profile
14 November 2012, by Paul Hellard

Chris Solarski came into the game industry with a foundation in digital arts. Having graduated with a degree in CG, he says he was lucky enough to secure a job at Sony Computer Entertainment in London as a 3D environment and character artist. Solarski then took part in an art workshop organized by ConceptArt.org, where he saw artists like Andrew ‘Android’ Jones demonstrating his ability to create lifelike characters straight from his imagination. This was just part of the crew at Massive Black studio taking their knowledge of traditional and fine arts into the digital realm and mixing it right up for Solarski.

“I began to question my lack of traditional art training,” says Chris. “I saw that it was their mastery of classical art principles that placed them in an enviable position of being first to visualize characters and environments in the development process, for which artists like me would produce 3D models and textures based on their designs. I felt I had a lot of catching up to do if I wanted to be part of high level game design.”

Solarski took painting lessons with Brendan Kelly and abandoned video game development altogether, spending the next two years in an intense program of self-guided study in Poland, where he took life drawing sessions at the Warsaw Academy of Fine Art and the atelier of professor Zofia Glazer. During these years of study, Solarski developed a deep appreciation for the value of a classical art education and the techniques of the Old Masters. He realised that there were many undervalued lessons to be learned and skills missing that would be very useful to all video game artists, including himself.

Viewed from an angle, the similarities between drawing, painting and gam
Computer
2015-48/1914/en_head.json.gz/5239
Posted Minecraft: Xbox 360 Edition review By Minecraft comes to Xbox Live Arcade this week, and that development amounts to dread tidings for the Xbox 360-playing addictive personalities of the world. The experience of playing Mojang’s formerly PC-only sensation can be boiled down to a simple idea: LEGOs with monsters. Within that brief description, however, there is a literal world of possibility. It’s a slightly constrained world in its console form, but it is a no less magical one for a first-time explorer to take in. Mining and Crafting There’s a very simple concept at work in Minecraft: survive, then thrive. Any new game starts out by depositing you in the center of a world built out of cube-shaped blocks of various types and properties. Monsters come out at night (unless you have the difficulty set to Peaceful), so your first necessary steps in the world involve building shelter and some basic survival tools like torches, swords, and a front door (which keeps monsters at bay). Once you’ve got these essentials worked out, the world is pretty much at your mercy. The core gameplay conceit has you “mining” any in-range block that you target with your crosshairs, and collecting the resource that the “mined” object offers. You can then use these resources in a variety of ways. Stone and wood blocks (and a few other things) can be placed in the world, allowing you to build elaborate structures. Some blocks can be crafted into other items. For example, if you’ve got two wooden sticks and three stone blocks, you can ease the process of mining by building a stone pickaxe. Some resources can simply be placed in the world and left to do their thing. A “planted” tree sapling, for example, will eventually grow into an actual tree. There’s no “endgame” goal to all of this, other than what your imagination cooks up. See that natural rock formation off in the distance? Wouldn’t it be cool to build a castle on top of it? Well you can go gather the necessary resources and do that. It seems like a simple idea, but there’s a lot of depth here. For example, the resource redstone can be used to build rudimentary electrical systems. Or you can smelt the various raw ores you find into ingots, and then use those ingots to build a range of useful items. You hydrate soil using a nearby water source and then use any seeds you’ve gathered to build yourself a proper farm. To support your creations, you’ll need to thoroughly explore the world and dig deep underground as you search for the rarer resources. Console Strip-Mining While the Xbox 360 version of Minecraft retains the same fundamental qualities that make the PC game so popular, some necessary changes had to be made. The size of the world has been stripped down considerably, for example. If you’re a total newcomer, you likely won’t even notice. Those who have logged many tens of hours on the PC side will definitely pick up on the changes and omissions that ultimately render Minecraft: Xbox 360 Edition as a lesser product compared to its PC-based sibling. The biggest issue is the fact that 4J Studios’ console port runs on the equivalent of a much earlier PC version of Minecraft. This is largely due to the demands of developing a game for a closed platform like the Xbox 360, but the absence of things like NPC villages, abandoned mine tunnels, and elaborate jungle biomes is immediately noticeable to the seasoned fan. Everything that makes Minecraft fun is still here, but the complexity is diminished from what regular players might know. 
It’s worth mentioning there are stated plans to bring this console release up to date with the PC version of Minecraft. There’s no word on when or how that will happen yet though. It’s not fair to call Minecraft: Xbox 360 Edition an unfinished game, but it also definitely isn’t all that it could be. Fortunately, the work that 4J did on porting Minecraft wasn’t focused solely on stripping out content. The Xbox 360 release has some major tweaks built in to make the game more friendly for console gamers. There’s a tutorial world for starters, a chunk of game that walks you through both basic and advanced techniques for cultivating your Minecraft world. There’s also a re-tooled crafting interface that lays out everything you can build in the game in a categorized set of lists. If you’ve got the required materials in your inventory, you can make the object. That means there’s no more cross-referencing with the Minecraft wiki to see how to build something; the game flat-out tells you which resources you need, and in what amounts, to create anything. Apart from the tutorial, you’ll also encounter helpful on-screen pop-ups whenever your crosshairs fall on a resource that you haven’t previously discovered. The text window fills you in on what resource you’re seeing and what its basic uses are. It’s a simple addition, but it’s one that makes Minecraft‘s steep learning curve feel a bit more gentle. Perhaps the best improvement over the PC game is the console version’s online implementation. It’s not as open-ended in terms of being able to set up a server and let anyone in. You’ll only be able to join worlds belonging to your friends in Minecraft on your Xbox 360, but the “Load Game” screen will automatically list any open multiplayer worlds alongside your own saved ones. Joining is a simple matter of selecting the online world and dropping into it. There’s still a lot of room for improvement, but it wouldn’t be Minecraft if more tantalizing features weren’t awaiting in future updates. That’s always been the big shtick for Mojang’s game, and it helps to make the imbalance between the PC and console versions feel a bit more bearable. Minecraft: Xbox 360 Edition is finished, and most definitely ready to be played for hours at a time, but it could be — and hopefully will be, in time — so much more than it is as it arrives for the first time on Xbox Live Arcade. (This game was reviewed on the Xbox 360 on a copy provided by Mojang)
Computer
2015-48/1914/en_head.json.gz/5321
The Original Macintosh: 34 of 126
Calculator Construction Set
Author: Andy Hertzfeld
Characters: Chris Espinosa, Steve Jobs, Donn Denman
Summary: Chris tries to make a Steve-approved calculator
[Image: The Calculator]

Chris Espinosa was one of Apple's earliest and youngest employees, who started work for the company at the ripe age of 14. He left Apple in 1978 to go to college at UC Berkeley, but he continued to do freelance work during the school year, like writing the Apple II Reference Manual, the replacement for the legendary "Red Book". In the summer of 1981, Steve Jobs convinced Chris to drop out of school to come work on the Mac team full time, arguing that he could go back to school anytime, but there'd only be one chance to help shape the Macintosh. Chris dropped out of school to become the manager of documentation for the Macintosh, starting in August 1981. We needed technical documentation right away, since we planned to seed third party developers in only a few months. Since the most importan
Computer
2015-48/1914/en_head.json.gz/5491
Analysing Windows 8's launch and Microsoft's evasive sales figures
By Sebastian Anthony

Here's an interesting question: How many computers are actually running Windows 8 or Windows RT? How many laptops, desktops, convertibles, and tablets have made the jump to Microsoft's next-generation, touch-oriented operating system? You see, beyond Microsoft's inner circle in Redmond, no one knows.

Despite being pushed for comment by journalists, pundits, and analysts for three months, the only figure that Microsoft is publicly sharing is the number of Windows 8 licenses sold. This figure currently stands at 60 million, which according to Microsoft is similar to Windows 7's "sales trajectory." 60 million sounds very grand – which is of course Microsoft's intention – but now we have to add some caveats. How many of those 60 million copies have actually been installed? How many are sitting on store shelves, or in OEM inventories? Does that figure include Windows RT? We haven't heard a peep from Microsoft about either Windows RT or Surface RT, and the company has refused to comment when asked whether the 60 million includes both Windows 8 and RT.

What we do have, however, is a lot of data from retailers and OEMs – and when we factor in their sentiment, it perhaps illustrates why Microsoft is so reticent when it comes to exact installation figures. Over in the States, a Newegg senior vice president described Windows 8 sales as "slow going" last November, then Net Applications chimed in with some damning statistics, and now HP's PC boss Todd Bradley has said that Windows 8 has experienced "a slower start than many people expected." This is the same Bradley who trashed the Surface RT, calling it "kludgey." At this point, while we can still only guess at how many PCs are running Windows 8 or RT, it's fairly safe to assume that neither OS is doing particularly well. Perhaps a better question to ask, then, is why?

There are three likely reasons for Windows 8's lacklustre adoption rate. The first, and most obvious, is that users simply don't want – or aren't ready for – Windows 8. Whether it's the abominable "Metro" Start screen, or the relatively short list of upgrades for Windows 7 users, Windows 8 just isn't very desirable. The second reason is that the PC industry itself is in decline, and in a big way. This wasn't unexpected – tablets and smartphones have been nibbling at the PC's heels for a while now – but it was hoped that Windows 8 would somehow kick-start sales, especially with the sales boost from Black Friday and Christmas. We can either assume that this is down to Windows 8's innate lack of desirability, or that iOS and Android are already too entrenched for Windows 8 to establish a beachhead.

A third possible reason is that Windows 8 just doesn't hit the mark. It is clearly a touch-oriented OS, and yet according to NPD touchscreen laptops account for only 4.5 per cent of Windows 8 sales in the US. We've heard similar rumblings from retailers, which report that cheap laptops still dominate the sales charts, with touchscreen laptops and tablets unable to get a foot in the door. It's worth noting that the same report from NPD also says that the consumer electronics market in general, despite the boom of smartphones and tablets, is slumping. In all likelihood, it's probably a noxious combination of all three circumstances.
Moving forward, it isn't entirely clear how Microsoft intends to rectify the situation. At the end of January, the price of a Windows 8 upgrade will shoot up – and if that won't put a dent in Windows 8 sales, I don't know what will. On the hardware front, Intel's upcoming fourth-gen Ultrabook specification (and Haswell CPUs) could certainly spark some interest, but we still have no idea about whether consumers actually want a Windows 8 tablet, or a touchscreen laptop – or, in the case of transformers, both.

As we know, hardware is nothing without software. Again, Microsoft doesn't give any exact figures, but the Windows 8 Store is now up to around 30,000 apps – a fraction of the iOS or Android markets, and the quality of many apps is questionable, plus a bunch of key apps such as Facebook, Twitter, and Spotify are still missing.

If you've made the jump to Windows 8, be sure to check out our extensive collection of 50 top tips and tricks for the OS. If you're still on the fence, remember that you only have until January 31 to get Windows 8 Pro for £25.

Topics: HP, Microsoft, Surface, Ultrabook, Windows 8
Computer
2015-48/1914/en_head.json.gz/5741
OSdata.com

Basics of computer hardware

A computer is a programmable machine (or more precisely, a programmable sequential state machine). There are two basic kinds of computers: analog and digital.

Analog computers are analog devices. That is, they have continuous states rather than discrete numbered states. An analog computer can represent fractional or irrational values exactly, with no round-off. Analog computers are almost never used outside of experimental settings.

A digital computer is a programmable clocked sequential state machine. A digital computer uses discrete states. A binary digital computer uses two discrete states, such as positive/negative, high/low, on/off, used to represent the binary digits zero and one.

The French word ordinateur, meaning that which puts things in order, is a good description of the most common functionality of computers.

what are computers used for?

Computers are used for a wide variety of purposes.

Data processing is commercial and financial work. This includes such things as billing, shipping and receiving, inventory control, and similar business related functions, as well as the "electronic office".

Scientific processing is using a computer to support science. This can be as simple as gathering and analyzing raw data and as complex as modelling natural phenomena (weather and climate models, thermodynamics, nuclear engineering, etc.).

Multimedia includes content creation (composing music, performing music, recording music, editing film and video, special effects, animation, illustration, laying out print materials, etc.) and multimedia playback (games, DVDs, instructional materials, etc.).

parts of a computer

The classic crude oversimplification of a computer is that it contains three elements: processor unit, memory, and I/O (input/output). The borders between those three terms are highly ambiguous, non-contiguous, and erratically shifting.

A slightly less crude oversimplification divides a computer into five elements: arithmetic and logic subsystem, control subsystem, main storage, input subsystem, and output subsystem.

The processor is the part of the computer that actually does the computations. This is sometimes called an MPU (for main processor unit) or CPU (for central processing unit or central processor unit). A processor typically contains an arithmetic/logic unit (ALU), control unit (including processor flags, flag register, or status register), internal buses, and sometimes special function units (the most common special function unit being a floating point unit for floating point arithmetic). Some computers have more than one processor. This is called multi-processing.

The major kinds of digital processors are: CISC, RISC, DSP, and hybrid.

CISC stands for Complex Instruction Set Computer. Mainframe computers and minicomputers were CISC processors, with manufacturers competing to offer the most useful instruction sets. Many of the first two generations of microprocessors were also CISC.

RISC stands for Reduced Instruction Set Computer. RISC came about as a result of academic research that showed that a small well designed instruction set running compiled programs at high speed could perform more computing work than a CISC running the same programs (although very expensive hand optimized assembly language favored CISC).
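As a rough illustration of the CISC/RISC contrast just described (this sketch is not from OSdata.com; the machine model and instruction names are invented for the example), the same memory-to-memory addition can be expressed either as one complex instruction or as a load/add/store sequence that only touches registers:

```python
# Toy illustration of the CISC vs RISC distinction described above.
# The tiny machine model and the instruction names are invented for this sketch.

memory = {0x10: 7, 0x11: 5, 0x12: 0}   # a tiny word-addressable memory
registers = {"r0": 0, "r1": 0}

# "CISC-style": one complex instruction reads both operands from memory,
# adds them, and writes the result back to memory.
def add_mem_mem(dst, src1, src2):
    memory[dst] = memory[src1] + memory[src2]

# "RISC-style": only loads and stores touch memory; arithmetic uses registers.
def load(reg, addr):
    registers[reg] = memory[addr]

def add_reg(dst, src1, src2):
    registers[dst] = registers[src1] + registers[src2]

def store(reg, addr):
    memory[addr] = registers[reg]

if __name__ == "__main__":
    # One CISC-style instruction...
    add_mem_mem(0x12, 0x10, 0x11)
    print("CISC-style result:", memory[0x12])     # 12

    # ...versus an equivalent four-instruction RISC-style sequence.
    memory[0x12] = 0
    load("r0", 0x10)
    load("r1", 0x11)
    add_reg("r0", "r0", "r1")
    store("r0", 0x12)
    print("RISC-style result:", memory[0x12])     # 12
```

Each of the simple, uniform steps in the second sequence is easier to implement and to generate from a compiler, which is where the performance argument cited by the RISC research above comes from.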
DSP stands for Digital Signal Processing. DSP is used primarily in dedicated devices, such as MODEMs, digital cameras, graphics cards, and other specialty devices.

Hybrid processors combine elements of two or three of the major classes of processors.

For more detailed information on these classes of processors, see processors.

An arithmetic/logic unit (ALU) performs integer arithmetic and logic operations. It also performs shift and rotate operations and other specialized operations. Usually floating point arithmetic is performed by a dedicated floating point unit (FPU), which may be implemented as a co-processor.

Control units are in charge of the computer. Control units fetch and decode machine instructions. Control units may also control some external devices.

A bus is a set (group) of parallel lines that information (data, addresses, instructions, and other information) travels on inside a computer. Information travels on buses as a series of electrical pulses, each pulse representing a one bit or a zero bit (there are trinary, or three-state, buses, but they are rare). An internal bus is a bus inside the processor, moving data, addresses, instructions, and other information between registers and other internal components or units. An external bus is a bus outside of the processor (but inside the computer), moving data, addresses, and other information between major components (including cards) inside the computer. Some common kinds of buses are the system bus, a data bus, an address bus, a cache bus, a memory bus, and an I/O bus. For more information, see buses.

Main storage is also called memory or internal memory (to distinguish from external memory, such as hard drives).

RAM is Random Access Memory, and is the basic kind of internal memory. RAM is called "random access" because the processor or computer can access any location in memory (as contrasted with sequential access devices, which must be accessed in order). RAM has been made from reed relays, transistors, integrated circuits, magnetic core, or anything that can hold and store binary values (one/zero, plus/minus, open/close, positive/negative, high/low, etc.). Most modern RAM is made from integrated circuits. At one time the most common kind of memory in mainframes was magnetic core, so many older programmers will refer to main memory as core memory even when the RAM is made from more modern technology.

Static RAM is called static because it will continue to hold and store information even when power is removed. Magnetic core and reed relays are examples of static memory.

Dynamic RAM is called dynamic because it loses all data when power is removed. Transistors and integrated circuits are examples of dynamic memory. It is possible to have battery back up for devices that are normally dynamic to turn them into static memory.

ROM is Read Only Memory (it is also random access, but only for reads). ROM is typically used to store things that will never change for the life of the computer, such as low level portions of an operating system. Some processors (or variations within processor families) might have RAM and/or ROM built into the same chip as the processor (normally used for processors used in standalone devices, such as arcade video games, ATMs, microwave ovens, car ignition systems, etc.).

EPROM is Erasable Programmable Read Only Memory, a special kind of ROM that can be erased and reprogrammed with specialized equipment (but not by the processor it is connected to).
EPROMs allow makers of industrial devices (and other similar equipment) to have the benefits of ROM, yet also allow for updating or upgrading the software without having to buy new ROM and throw out the old (the EPROMs are collected, erased and rewritten centrally, then placed back into the machines).

Registers and flags are a special kind of memory that exists inside a processor. Typically a processor will have several internal registers that are much faster than main memory. These registers usually have specialized capabilities for arithmetic, logic, and other operations. Registers are usually fairly small (8, 16, 32, or 64 bits for integer data, address, and control registers; 32, 64, 96, or 128 bits for floating point registers). Some processors separate integer data and address registers, while other processors have general purpose registers that can be used for both data and address purposes. A processor will typically have one to 32 data or general purpose registers (processors with separate data and address registers typically split the register set in half). Many processors have special floating point registers (and some processors have general purpose registers that can be used for either integer or floating point arithmetic). Flags are single bit memory used for testing, comparison, and conditional operations (especially conditional branching).

For a much more advanced look at registers, see registers. For more information on memory, see memory.

External storage (also called auxiliary storage) is any storage other than main memory. In modern times this is mostly hard drives and removeable media (such as floppy disks, Zip disks, optical media, etc.). With the advent of USB and FireWire hard drives, the line between permanent hard drives and removeable media is blurred. Other kinds of external storage include tape drives, drum drives, paper tape, and punched cards.

Random access or indexed access devices (such as hard drives, removeable media, and drum drives) provide an extension of memory (although usually accessed through logical file systems). Sequential access devices (such as tape drives, paper tape punch/readers, or dumb terminals) provide for off-line storage of large amounts of information (or back ups of data) and are often called I/O devices (for input/output).

Most external devices are capable of both input and output (I/O). Some devices are inherently input-only (also called read-only) or inherently output-only (also called write-only). Regardless of whether a device is I/O, read-only, or write-only, external devices can be classified as block or character devices.

A character device is one that inputs or outputs data in a stream of characters, bytes, or bits. Character devices can further be classified as serial or parallel. Examples of character devices include printers, keyboards, and mice.

A serial device streams data as a series of bits, moving data one bit at a time. Examples of serial devices include printers and MODEMs.

A parallel device streams data in a small group of bits simultaneously. Usually the group is a single eight-bit byte (or possibly seven or nine bits, with the possibility of various control or parity bits included in the data stream). Each group usually corresponds to a single character of data. Rarely there will be a larger group of bits (word, longword, doubleword, etc.). The most common parallel device is a printer (although most modern printers have both a serial and a parallel connection, allowing greater connection flexibility).
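To picture the serial versus parallel character transfer just described, here is a small illustrative sketch (not from OSdata.com; the "devices" are just print statements invented for the example) that moves one eight-bit byte either a bit at a time or as a group of eight bits at once:

```python
# Illustration of serial vs. parallel transfer of a single byte.
# The "devices" are just print functions; the framing is invented for the example.

def to_bits(byte):
    """Most-significant-bit-first list of the 8 bits in a byte (0..255)."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def send_serial(byte):
    """Serial device: the byte crosses the link one bit per clock tick."""
    for tick, bit in enumerate(to_bits(byte)):
        print(f"tick {tick}: line = {bit}")

def send_parallel(byte):
    """Parallel device: all eight data lines are driven in the same tick."""
    print("tick 0: lines =", to_bits(byte))

if __name__ == "__main__":
    ch = ord("A")           # 0x41 = 0b01000001, one character of data
    print("serial transfer of 'A':")
    send_serial(ch)
    print("parallel transfer of 'A':")
    send_parallel(ch)
```

A real interface would also need start/stop or strobe signalling and possibly a parity bit, as the text notes, but the byte-per-group idea is the same.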
A block device moves large blocks of data at once. This may be physically implemented as a serial or parallel stream of data, but the entire block gets transferred as a single packet of data. Most block devices are random access (that is, information can be read or written from blocks anywhere on the device). Examples of random access block devices include hard disks, floppy disks, and drum drives. Examples of sequential access block devices include magnetic tape drives and high speed paper tape readers. (A small code sketch of the block-versus-character distinction appears after the reading list below.)

Input devices are devices that bring information into a computer. Pure input devices include such things as punched card readers, paper tape readers, keyboards, mice, drawing tablets, touchpads, trackballs, and game controllers. Devices that have an input component include magnetic tape drives, touchscreens, and dumb terminals.

Output devices are devices that bring information out of a computer. Pure output devices include such things as card punches, paper tape punches, LED displays (for light emitting diodes), monitors, printers, and pen plotters. Devices that have an output component include magnetic tape drives, combination paper tape reader/punches, teletypes, and dumb terminals.

For a more detailed look at the basic parts of a computer, see the following sections: CISC RISC kinds of buses bus standards memory hardware issues basic memory software approaches static and dynamic approaches absolute addressing relocatable software demand paging and swapping program counter relative base pointers indirection, pointers, and handles stack frames virtual memory OS memory services memory maps low memory

For a more detailed examination of how processors work, see assembly language (huge web page) or the subsections below: intro to assembly language data representation and number systems addressing modes executable instructions (huge web page, subdivided below) data and address movement integer arithmetic floating arithmetic binary coded decimal advanced math logical operations shift and rotate bit and bit field manipulation character and string table operations high level language support program control and condition codes system control coprocessor and multiprocessor trap generating

further reading: books: If you want your book reviewed, please send a copy to: Milo, POB 1361, Tustin, CA 92781, USA. Price listings are for courtesy purposes only and may be changed by the referenced businesses at any time without notice.

further reading: books: general

Structured Computer Organization, 4th edition; by Andrew S. Tanenbaum; Prentice Hall; October 1998; ISBN 0130959901; Paperback; 669 pages; $95.00; used by CS 308-273A (Principles of Assembly Languages) at McGill University School of Computer Science

Computers: An Introduction to Hardware and Software Design; by Larry L. Wear, James R. Pinkert (Contributor), William G. Lane (Contributor); McGraw-Hill Higher Education; February 1991; ISBN 0070686742; Hardcover; 544 pages; $98.60; used by CS 308-273A (Principles of Assembly Languages) at McGill University School of Computer Science
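Picking up the block-versus-character distinction from the start of this section: the sketch below (an illustration only, not OSdata.com's code; the file name and block size are arbitrary choices) reads the same data once in fixed-size blocks, the way a block device is addressed, and once as a stream of single bytes, the way a character device delivers data.

```python
# Illustration of block-oriented vs. character(byte)-oriented reads.
# "sample.bin" and the 4096-byte block size are arbitrary choices for the example.

BLOCK_SIZE = 4096

def read_as_blocks(path):
    """Read the file a whole block at a time (block-device style)."""
    blocks = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            blocks += 1
    return blocks

def read_as_stream(path):
    """Read the file one byte at a time (character-device style)."""
    count = 0
    with open(path, "rb") as f:
        while f.read(1):
            count += 1
    return count

if __name__ == "__main__":
    # Create a small throwaway file so the example is self-contained.
    with open("sample.bin", "wb") as f:
        f.write(bytes(range(256)) * 40)        # 10,240 bytes
    print("blocks read:", read_as_blocks("sample.bin"))   # 3 (4096 + 4096 + 2048)
    print("bytes read: ", read_as_stream("sample.bin"))   # 10240
```

The per-byte loop makes many more calls for the same data, which is a rough analogue of why bulk transfers to random access block devices are handled in whole blocks rather than character by character.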
Computer
2015-48/1914/en_head.json.gz/5765
Massachusetts Verdict: MS Office Formats Out
By DesktopLinux.com Staff
September 24, 2005 10:18am EST

The state of Massachusetts Friday made it official: It will use only nonproprietary document formats in state-affiliated offices effective Jan. 1, 2007.

Although state CIO Peter Quinn has said repeatedly that this issue does not represent "the state versus Microsoft Corp. —or any one company," adoption of the long-debated plan may result in all versions of Microsoft's Office productivity suite being phased out of use throughout the state's executive branch agencies.

Massachusetts posted the final version of its Enterprise Technical Reference Model on its Web site. As part of this new policy, the state will support the newly ratified Open Document Format for Office Applications, or OpenDocument, and PDFs (portable document format) as the standards for its office documents.

Quinn told DailyTech... wait
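The practical meaning of an "open format" in this article is that any program can read the files without a vendor's permission. As a hedged illustration (not part of the original article), an OpenDocument text file is simply a ZIP archive containing XML, so a few lines of standard-library Python can list its parts and pull out the document text; the filename "report.odt" is a placeholder.

```python
# Illustration: an OpenDocument (.odt) file is a ZIP archive of XML parts,
# so it can be inspected with nothing but the standard library.
# "report.odt" is a placeholder filename, not a file from the article.
import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}"

def dump_odt(path):
    with zipfile.ZipFile(path) as odt:
        print("parts:", odt.namelist())          # content.xml, styles.xml, ...
        root = ET.fromstring(odt.read("content.xml"))
        # Print the text content of every paragraph element.
        for para in root.iter(TEXT_NS + "p"):
            print("".join(para.itertext()))

if __name__ == "__main__":
    dump_odt("report.odt")
```

That openness, rather than any particular vendor, is what the state's policy above is trying to guarantee for documents that must remain readable decades from now.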
Computer
2015-48/1914/en_head.json.gz/6319
Deeplinks Blog posts about EFF Europe

October 22, 2012 - 2:24pm | By Peter Eckersley and Eva Galperin and Katitza Rodriguez
Dutch Government Proposes Cyberattacks Against... Everyone
Tags: International Privacy Security State-Sponsored Malware Anonymity EFF Europe
Last week, the Dutch Minister of Safety and Justice asked the Parliament of the Netherlands to pass a law allowing police to obtain warrants to do the following:
* Install malware on targets' private computers
* Conduct remote searches on local and foreign computers to collect evidence
* Delete data on remote computers in order to disable the accessibility of "illegal files."
Requesting assistance from the country where the targetted computer(s) were located would be "preferred" but possibly not required. These proposals are alarming, could have extremely problematic consequences, and may violate European human rights law. As if that wasn't troubling enough, lurking in this letter was a request for something more extreme: If the location of a particular computer cannot be determined, the Dutch police would be able to break in without ever contacting foreign authorities. What would cause the "location of a particular computer" not to be determinable? Read full post

October 15, 2012 - 8:56pm | By Katitza Rodriguez
Highest Court in the European Union To Rule On Biometrics Privacy
Tags: Privacy Biometrics Council of Europe EFF Europe International Privacy Standards Mandatory National IDs and Biometric Databases
Courts are investigating the legality of a European Union regulation requiring biometric passports in Europe. Last month, the Dutch Council of State (Raad van State, the highest Dutch administrative court) asked the European Court of Justice (ECJ) to decide if the regulation requiring fingerprints in passports and travel documents violates citizens' right to privacy. The case entered the courts when three Dutch citizens were denied passports and another citizen was denied an ID card for refusing to provide their fingerprints. The ECJ ruling will play an important role in determining the legality of including biometrics in passports and travel documents in the European Union. Read full post

September 26, 2012 - 3:15pm | By Katitza Rodriguez and Jillian York
Cleansing the Internet of Terrorism: EU-Funded Project Seeks To Erode Civil Liberties
Tags: Free Speech International Privacy Anonymity EFF Europe International Privacy Standards Section 230 of the Communications Decency Act
A new project aimed at "countering illegal use of the Internet" is making headlines this week. The project, dubbed CleanIT, is funded by the European Commission (EC) to the tune of more than $400,000 and, it would appear, aims to eradicate the Internet of terrorism. European Digital Rights, a Brussels-based organization consisting of 32 NGOs throughout Europe (and of which EFF is a member), has recently published a leaked draft document from CleanIT. Read full post

June 25, 2012 - 12:26pm | By Maira Sutton
If Europe rejects ACTA, will it actually go away?
Tags: Fair Use and Intellectual Property: Defending the Balance International Anti-Counterfeiting Trade Agreement EFF Europe
On Thursday, the fifth and final European Union Parliamentary committee voted to reject the Anti-Counterfeiting Trade Agreement (ACTA). This signifies a major blow to ACTA, but its standing in the EU still comes down to the European Parliament vote scheduled during the first week of July. After this final vote decides the agreement's adoption in Europe, however, the future of ACTA for the rest of the signatory countries unfortunately remains cloudy. Read full post

June 20, 2012 - 2:10pm | By Marcia Hofmann and Katitza Rodriguez
Coders' Rights At Risk in the European Parliament
Tags: International Coders' Rights Project EFF Europe
Coders have never been more important to the security of the Internet. By identifying and disclosing vulnerabilities, coders are able to improve security for every user who depends on information systems for their daily life and work. Yet this week, European Parliament will debate a new draft of a vague and sweeping computer crime legislation that threatens to create legal woes for researchers who expose security flaws. Read full post
Computer
2015-48/1914/en_head.json.gz/6498
ST. PAUL GASLIGHT CO. v. CITY OF ST. PAUL, (1901)
Argued: March 21, 1901
Decided: April 15, 1901

The charter of the St. Paul Gaslight Company was granted in 1856, and it expires in 1907. The corporation was empowered to construct a plant to supply the city of St. Paul and its inhabitants with illuminating gas. It may be assumed, for the purposes of the question arising on this record, that the corporation discharged its duties properly under its charter, and that from the time the charter became operative the company has lighted the city in accordance with the contracts made for that purpose from time to time with the municipal authorities. The charter did not purport to engage permanently with the company for lighting the city, but provided for agreements to be entered into on that subject with the city for successive periods, and from the beginning of the charter the parties did so stipulate for a specified time, a new contract supervening upon the termination of an expired one. It may also be assumed for the purposes of this case that the rights which the corporation asserts on this record were not foreclosed by any of the contracts which it made, at different periods, with the city. The question which here arises concerns only section 9 of the charter, which is as follows: 'Sec. 9. That it shall be the duty of the St. Paul Gaslight Company to prosecute the works necessary to the lighting the whole city and suburbs with gas, and to lay their pipes in every and all directions, whenever the board of directors shall be satisfied that the expenses thereon shall be counterbalanced by the income accruing from the sales of gas. It shall also be their duty to put the gas works into successful operation as soon as practicable: Provided, That whenever the corporation of the city of St. Paul shall, by resolution of the board of aldermen, direct lamps to b
Computer
2015-48/1914/en_head.json.gz/6694
Java Developer's Journal Exclusive: 2006 "JDJ Editors' Choice" Awards
"The editors of Java Developer's Journal are in a unique position when it comes to Java development."
By Java News Desk

The editors of SYS-CON Media's Java Developer's Journal are in a unique position when it comes to Java development. All are active coders in their "day jobs," and they have the good fortune of getting a heads-up on many of the latest and greatest software releases. They were asked to nominate three products from the last 12 months that they felt had not only made a major impact on their own development, but also on the Java community as a whole. The following is a list of each editor's selections and the reason why they chose that product.

Joe Winchester, Desktop Java Editor

SwingLabs
SwingLabs is an open source laboratory for exploring new ways to make Swing applications easier to write, with improved performance and greater visual appeal. It is an umbrella project for various open source initiatives sponsored by Sun Microsystems and is part of the java.net community. Successful code and concepts may be migrated to future versions of the Java platform.
http://swinglabs.org
Everything that has come out of SwingLabs - this is an absolutely fabulous open source project that allows skunk work-type development to occur outside of the JCP that then gets rolled back into the Java Standard Edition. It has created superb frameworks like the Timing framework to allow crisp and elegant animation effects, the SwingX project that has spawned fantastic new widgets, and APIs including JXPanel and the whole concept of painters, as well as nice high-level work like the data binding project to allow easy GUI to data connectivity.

The Eclipse Rich Client Project
While the Eclipse platform is designed to serve as an open tools platform, it is architected so that its components could be used to build just about any client application. The minimal set of plug-ins needed to build a rich client application is collectively known as the Rich Client Platform.
http://wiki.eclipse.org/index.php/Rich_Client_Platform
This is just an awesome technology that allows Java developers to leverage the core plumbings of Eclipse, namely OSGi, SWT, JFace, and other frameworks, to create their own desktop application. It's already being used very successfully by a large number of clients and goes from strength to strength, making it a powerful way for people to build extensible desktop applications. I think it has the potential to really change the way Java client applications are built.

The Java Web Start Improvements for Mustang
Using Java Web Start technology, standalone Java software applications can be deployed with a single click over the network. Java Web Start ensures the most current version of the application will be deployed, as well as the correct version of the Java Runtime Environment (JRE).
http://java.sun.com/products/javawebstart/
One of the big, possibly only, reasons why users today must suffer the poor usability of "dumb" browsers is because distributing and maintaining proper client apps is difficult. HTML makes this ridiculously easy and is a good engineering solution, but one that offers very poor end usability. JWS was always the promised savior to allow desktop distribution over HTTP but never really lived up to its expectations in previous releases.
With the Mustang work it now looks very, very good: many of the dialogs have been simplified, it is better looking, and it seems like it's finally going to allow first-class, easy, and polished large-scale distribution of Java clients to help rejuvenate Java on the desktop.

Yakov Fain, Contributing Editor

Adobe Flex 2
Adobe Flex 2 is an application development solution for creating and delivering cross-platform Rich Internet Applications (RIAs) within the enterprise and across the Web. It enables the creation of expressive and interactive web applications that can reach virtually anyone on any platform. www.adobe.com/products/flex/

Adobe Flex 2 is a very potent player in the Rich Internet Application arena. Flex 2 is a direct competitor of Java Swing and AJAX. It offers declarative programming and a rich library of cool-looking and functional components. Your compiled code runs in a Flash 9 virtual machine. Flex 2 offers fast protocols for data exchange with the server-side components, server push, data binding, easy integration with Java, JMS support, and more. I was very impressed.

IntelliJ IDEA
IntelliJ IDEA is a Java IDE focused on developer productivity. It provides a combination of enhanced development tools, including refactoring, J2EE support, Ant, JUnit, and version control integration. www.jetbrains.com/idea/

This Java IDE is the best available today. Despite the fact that it's not free (the price is very modest though), IntelliJ IDEA has a loyal following of Java experts who can appreciate the productivity gain this tool brings for a small price. Finding classes, refactoring, suggesting solutions, even a JavaScript editor for AJAX warriors... everything is at your fingertips. The upcoming version, 6.0, will include a new UI Designer and Google Web Toolkit support.

WebCharts 3D
WebCharts3D is a development toolkit that offers flexibility for all aspects of rich-client and Web-based charting requirements and provides a single-source solution for data visualization. www.gpoint.com

This is one of the best charting components available for Java applications. It's easy to learn and integrate with your Swing, JSP, and JSF applications. The product provides a rich set of charts, gauges, and maps, and can generate not only binary streams but also HTML, which makes it a good choice for AJAX applications. For Web applications, deployment consists of adding one JSP and copying one library to WEB-INF/lib.

Jason Bell, Contributing Editor

Head First Design Patterns by Elisabeth Freeman, Eric Freeman, Bert Bates, and Kathy Sierra (O'Reilly Media)
Using the latest research in neurobiology, cognitive science, and learning theory, Head First Design Patterns will load patterns into your brain in a way that sticks; in a way that lets you put them to work immediately; in a way that makes you better at solving software design problems, and better at speaking the language of patterns with others on your team. www.oreilly.com

Without doubt the most effective book I have ever read, and extremely easy to read. Don't be fooled by the comical, light-hearted way this book looks. The chapter with the intro to RMI is the best I've ever come across. All the other design pattern books fade into the distance in my opinion.

NetBeans 5
NetBeans IDE 5.0 includes comprehensive support for developing IDE plug-in modules and rich client applications based on the NetBeans platform.
NetBeans IDE 5.0 is an open source Java IDE that has everything software developers need to develop cross-platform desktop, Web, and mobile applications straight out of the box. www.netbeans.org

After a bit of a love/hate start with NetBeans I've now become a convert. It's very easy to use and the enterprise support is excellent. It would be nice to see coverage of the "other" app servers such as Orion and Resin, but that's a small price to pay. An excellent product.

A4 Journal and a Ballpoint Pen
For me everything starts on paper, whether it be sketch drawings or UML diagrams. I've never mentioned it over the years but I'd be really lost without it. I've had the delight of looking back through my journals of the past five years and seeing how I've developed and how my ideas have developed with it.

Published September 21, 2006. Copyright © 2006 SYS-CON Media, Inc. — All Rights Reserved.

More Stories By Java News Desk
JDJ News Desk monitors the world of Java to present IT professionals with updates on technology advances, business trends, new products and standards in the Java and i-technology space.

Comments

Tom Boshell 09/15/06 06:43:30 AM EDT: Java is great, but almost impossible to keep track of each and every API when you need it. The standard day-to-day or even week-to-week stuff I can do in my sleep, but it is those items that only come up maybe once a year that I tend to forget.

kudos 09/15/06 04:01:35 AM EDT: The ballpoint pen! Haha, lovely idea :-)

ranjix 09/08/06 11:30:37 AM EDT: cool, I can see the recommendation from the JDJ editors: Q1. Hi JDJ, which is the best IDE for developing Java apps? A1: well, depending on your needs, we recommend eclipse, idea or netbeans. Q2. and which platform/framework should I use for creating rich internet apps? A2. well, we would go with java web start, or maybe flex2... thanks a lot JDJ, that was really helpful.

Mike Edwards 09/06/06 05:08:15 AM EDT: Interesting that the list does not contain a product relating to Wikis, which must be one of the more active areas of new packages these days. Does this imply that Java does not play in this space? If so, I wonder why not? Yours, Mike.
2015-48/1914/en_head.json.gz/6892
Faster websites, more reliable data
October 14, 2010 | Larry Hardesty, MIT News

Today, visiting almost any major website -- checking your Facebook news feed, looking for books on Amazon, bidding for merchandise on eBay -- involves querying a database. But the databases that these sites maintain are enormous, and searching them anew every time a new user logs on would be painfully time consuming. To serve up data in a timely fashion, most big sites use a technique called caching. Their servers keep local copies of their most frequently accessed data, which they can send to users without searching the database.

But caching has an obvious problem: If any of the data in the database changes, the cached copies have to change too; moreover, any cached data that are in any way dependent on the changed data also have to change. Tracking such data dependencies is a nightmare for programmers, but even when they do their jobs well, problems can arise. For instance, says Dan Ports, a graduate student in the Computer Science and Artificial Intelligence Lab, suppose that someone is bidding on an item on eBay. The names of the bidders could be cached in one place, the value of their bids in another. Making a new bid updates the database, but as that update propagates through the network of servers, it could reach the value cache before it reaches the name cache. The bidder would see someone else's name next to her bid and think she'd been beaten to the punch. "They might see their own bid attributed to somebody else," Ports says, "and wind up in a bidding war with themselves."

MIT researchers have developed a new caching system that eliminates this type of asymmetric data retrieval while also making database caches much easier to program. Led by Ports and his thesis advisor, Institute Professor Barbara Liskov, who won the 2008 Turing Award, the highest award in computer science, the research also involves associate professor Sam Madden, PhD student Austin Clements, and former master's student Irene Zhang. Ports presented the system on Oct. 5 at the USENIX Symposium on Operating Systems Design and Implementation in Vancouver.

Transact locally

Unlike existing database caching systems, Ports and Liskov's can handle what computer scientists call transactions. A transaction is a set of computations that are treated as a block: None of them will be performed unless all of them are performed. "Suppose that you're making a plane reservation, and it has two legs," says Liskov. "You're not interested in getting one of them and not the other. If you run this as a transaction, then the underlying system will guarantee that you get either both of them or neither of them. And it does this regardless of whether there are other concurrent accesses, or other users are trying to get seats on those flights, or there are machine failures, and so forth. Transactions are a well-understood technique in computer science to achieve this kind of functionality."

Indeed, it's the idea of transactions that gives the new system its name: TxCache, where "Tx" is a shorthand for "transaction." TxCache also makes it easier for programmers to manage caches. "Existing caches have the approach that they just make this cache and tell the programmer, 'Here's a cache: You can put stuff in it if you want; you can get stuff out of it if you want,'" says Ports. "But figuring out how to do that is entirely up to you." TxCache, however, recognizes that a computer program already implicitly defines the relationships between stored data.
For instance, a line of code might say that Z = X + Y, which is an instruction to look up X, look up Y, and store their sum as Z. With TxCache, the programmer would simply specify that that line of code — Z = X + Y — should be cached, and the system would automatically ensure that, whenever any one of those variables changed, the cached copies of the other two would be updated, everywhere. And, of course, it can perform the same type of maintenance with more complicated data dependencies, represented by more complicated functions.

Bean counting

According to Liskov, the key to getting TxCache to work was "a lot of bookkeeping." The system has to track what data are cached where, and which data depend on each other. Indeed, Liskov says, it was the fear that that bookkeeping would chew up too many computing cycles that dissuaded the designers of existing caching systems from supporting transactions. But, she explains, updating the caches is necessary only when data in the database change. Modifying the data is a labor-intensive operation; the bookkeeping steps are comparatively simple. "Yes, we are doing more work, but proportionally it's very small," Liskov says. "It's on the order of 5 to 7 percent." In the researchers' experiments, websites were more than five times as fast when running TxCache as they were without it.

"The trouble with large-scale services like Bing and Amazon and Google and the like is that they operate at such a high level of scalability," says Solom Heddaya, a partner at Microsoft and infrastructure architect for Bing, Microsoft's search engine. "On a single request from the user searching for something, there are many, many applications that get invoked in real time, and they together will use tens of thousands of servers." On that scale, Heddaya says, some kind of caching system is necessary. But, he says, "until this paper came along, people building these systems said, 'Hey, we will shift the burden to the programmer of the application. We will give you the convenience of caching, so that we bring the data closer to where the computation is, but we will make you worry about whether the cache has the right data."

Heddaya cautions that, unlike some other caching systems, the MIT researchers' offers significant performance improvements only for sites where reading operations — looking up data in the database — greatly outnumber writing operations — updating data in the databases. But according to Ports, "Adding support for using caching during read/write transactions is one of the things we're thinking about now. There aren't any major technical obstacles to doing so: It's mainly a question of how we can do so without introducing unexpected effects that make life more difficult for users and programmers."

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.
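To make the dependency-tracking idea described above concrete, here is a minimal sketch in Python. It is not the actual TxCache implementation or API (the article shows no code); the class, method names, and invalidation strategy are assumptions made purely for illustration.

```python
# Hypothetical sketch of dependency-tracked caching, inspired by the
# Z = X + Y example above. Not the real TxCache API.

class DependencyCache:
    def __init__(self):
        self.values = {}        # cached results, keyed by name
        self.dependents = {}    # input key -> set of cached keys built from it

    def cached(self, name, inputs, compute):
        """Return a cached value, recording which inputs it depends on."""
        if name not in self.values:
            self.values[name] = compute()
            for key in inputs:
                self.dependents.setdefault(key, set()).add(name)
        return self.values[name]

    def invalidate(self, key):
        """Called when an input changes; drop every cached value built from it."""
        for name in self.dependents.pop(key, set()):
            self.values.pop(name, None)

db = {"X": 2, "Y": 3}
cache = DependencyCache()

z = cache.cached("Z", ["X", "Y"], lambda: db["X"] + db["Y"])   # computed once: 5
db["X"] = 10
cache.invalidate("X")                                           # Z is dropped
z = cache.cached("Z", ["X", "Y"], lambda: db["X"] + db["Y"])   # recomputed: 13
```

A real system like TxCache additionally has to make this kind of invalidation transactionally consistent across many cache servers, which is exactly the bookkeeping the article describes.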
2015-48/1914/en_head.json.gz/7269
Open Journal Systems Help > Site Administration > Site Management

OJS is designed to be a multi-journal system, and the Site Administrator is responsible for configuring site-level settings and creating new journals to be hosted within a single site. Journal sites are entirely independent, with the exception of user accounts; while a user may participate in any combination of roles and journals, the same email address and username will refer to the same user regardless of journal.

Site Settings
Basic information that is applicable to all journals hosted by the site is entered by the Site Administrator, including the site title and description, and contact information.

Journal Redirect. This setting can be used to ensure that all requests are redirected to a particular journal web site. Typically this setting is used when a site is hosting a single journal.

Hosted Journals
As OJS allows any number of individual and distinct journal web sites to be generated, new journals can be created and managed at any time. Each journal that is created can be accessed through a unique URL based on a path name entered by the Site Administrator. Journals that are currently in the process of being set up and configured can be hidden from the main site until such time that they are ready to go live.

Languages
OJS is designed to be a multilingual system, allowing journals supporting a wide variety of languages to be hosted under a single site. The Site Administrator can specify the default language of the site and install additional locales as they become available to make other languages available for use by journals. Additional language packages will typically be available for download from the Open Journal Systems web site as user-contributed translations are received. These packages can be installed into an existing OJS system to make them available to journals.

Authentication Sources
By default, OJS authenticates users against its internal database. It is possible, however, to use other methods of authentication, such as LDAP. Additional authentication sources are implemented as OJS plugins; refer to the documentation shipped with each plugin for details.
2015-48/1914/en_head.json.gz/7894
Shuttleworth Responds to Ubuntu's Critics
posted by Thom Holwerda on Tue 14th Sep 2010 22:42 UTC

If there's one consistent piece of criticism that gets lobbed in Canonical's and Mark Shuttleworth's direction, it's that they do not contribute enough code - or anything else for that matter - to the Free software world. Mark Shuttleworth has apparently had enough, and has written a very, very lengthy blog post detailing how he feels about this criticism.

Personally, I couldn't disagree more with the people criticising Ubuntu and Canonical in this way. While Ubuntu may not contribute in hard lines of code, the distribution contributes something else that is at least just as valuable: mind share. I've never had any of my friends come up to me asking questions about Linux and what it was until Ubuntu came onto the scene.

That being said, Ubuntu is of course nothing but a tiny part of a massive ecosystem of Free software, and Shuttleworth obviously recognises that. "Ubuntu, and the possibilities it creates, could not have come about without the extraordinary Linux community, which wouldn't exist without the GNU community, and couldn't have risen to prominence without the efforts of companies like IBM and Red Hat," he details, "And it would be a very different story if it weren't for the Mozilla folks and Netscape before them, and GNOME and KDE, and Google and everyone else who have exercised that stack in so many different ways, making it better along the way."

So, what does Ubuntu bring to the table, according to Shuttleworth? What does Canonical contribute to the world of Free software? "A total commitment to everyday users and use cases, the idea that free software should be 'for everyone' both economically and in ease of use, and a willingness to chase down the problems that stand between here and there," he states, "I feel that commitment is a gift back to the people who built every one of those packages. If we can bring free software to ten times the audience, we have amplified the value of your generosity by a factor of ten, we have made every hour spent fixing an issue or making something amazing, ten times as valuable."

It seems like many people are too narrow-minded to look beyond just one measurement of contribution: lines of code. If you measure contribution solely in lines of code, then yes, Ubuntu and Canonical might not contribute as much as, say, Red Hat - but is that truly the only possible way to measure contribution? "I didn't found Ubuntu as a vehicle for getting lots of code written, that didn't seem to me to be what the world needed," Shuttleworth argues, "It needed a vehicle for getting it out there, that cares about delivering the code we already have in a state of high quality and reliability. Most of the pieces of the desktop were in place - and code was flowing in - it just wasn't being delivered in a way that would take it beyond the server, or to the general public."

As much as I respect other distributions for the massive contributions that they've made and are making, there's no denying that Ubuntu has spread the idea of running Linux on your consumer desktop more than any other.
"Those who say 'but Canonical doesn't do X' may be right, but that misses all the things we do, which weren't on the map beforehand," Shuttleworth further added, "Of course, there's little that we do exclusively, and little that we do that others couldn't if they made that their mission, but I think the passion of the Ubuntu community, and the enthusiasm of its users, reflects the fact that there is something definitively new and distinctive about the project." (14) 211 Comment(s) Related Articles Ubuntu Phone review: years in the making, still not readyUbuntu Desktop to eventually switch to Snappy by defaultThe new releases of Ubuntu, Kubuntu, etc. are now available
2015-48/1914/en_head.json.gz/8352
Copyright © 2009 Red Hat, Inc. This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

This document details the Release Notes for Red Hat Enterprise Linux 5.4.

1. Virtualization Updates
2. Clustering Updates
2.1. Fencing Improvements
3. Networking Updates
4. Filesystems and Storage updates
5. Desktop Updates
5.1. Advanced Linux Sound Architecture
5.2. Graphics Drivers
5.3. Laptop Support
6. Tools Updates
7. Architecture Specific Support
7.1. i386
7.2. x86_64
7.3. PPC
7.4. s390x
8. Kernel Updates
8.1. General Kernel Feature Support
8.2. General Platform Support
8.3. Driver Updates
9. Technology Previews
A. Revision History

This document contains the Release Notes for the Red Hat Enterprise Linux 5.4 family of products including:

Red Hat Enterprise Linux 5 Advanced Platform for x86, AMD64/Intel® 64, Itanium Processor Family, System p and System z
Red Hat Enterprise Linux 5 Server for x86, AMD64/Intel® 64, Itanium Processor Family, System p and System z
Red Hat Enterprise Linux 5 Desktop for x86 and AMD64/Intel® 64

The Release Notes provide high level coverage of the improvements and additions that have been implemented in Red Hat Enterprise Linux 5.4. For detailed documentation on all changes to Red Hat Enterprise Linux for the 5.4 update, refer to the Technical Notes.

1. Virtualization Updates

Red Hat Enterprise Linux 5.4 now includes full support for the Kernel-based Virtual Machine (KVM) hypervisor on x86_64 based architectures. KVM is integrated into the Linux kernel, providing a virtualization platform that takes advantage of the stability, features, and hardware support inherent in Red Hat Enterprise Linux. Virtualization using the KVM hypervisor is supported on a wide variety of guest operating systems, including:

Red Hat Enterprise Linux 3
Red Hat Enterprise Linux 4

Xen based virtualization is fully supported. However, Xen-based virtualization requires a different version of the kernel to function. The KVM hypervisor can only be used with the regular (non-Xen) kernel.
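As a quick way to confirm which hypervisor a given host is set up for, the following shell commands may be helpful; they are not taken from the release notes themselves, and the kernel version strings shown are only examples:

```
# The Xen kernel carries an "xen" suffix; the regular kernel does not.
uname -r
# e.g. 2.6.18-164.el5xen  -> Xen kernel
# e.g. 2.6.18-164.el5     -> regular kernel, required for KVM

# Check whether the KVM modules are loaded on the regular kernel.
lsmod | grep kvm
# kvm_intel (or kvm_amd) plus kvm indicates KVM is available

# List defined guests via libvirt (works with either hypervisor).
virsh list --all
```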
While Xen and KVM may be installed on the same system, the default networking configuration for each is different. Users are strongly recommended to only install one hypervisor on a system. Xen is the default hypervisor that is shipped with Red Hat Enterprise Linux. As such all configuration defaults are tailored for use with the Xen hypervisor. For details on configuring a system for KVM, please refer to the Virtualization Guide.

Virtualization using KVM allows both 32-bit and 64-bit versions of guest operating systems to be run without modification. Paravirtualized disk and network drivers have also been included in Red Hat Enterprise Linux 5.4 for enhanced I/O performance. All the libvirt based tools (i.e. virsh, virt-install and virt-manager) have also been updated with added support for KVM. USB passthrough with the KVM hypervisor is considered to be a Technology Preview for the 5.4 release.

With resolution of various issues such as save/restore, live migration and core dumps, Xen based 32 bit paravirtualized guests on x86_64 hosts are no longer classed as a Technology Preview, and are fully supported on Red Hat Enterprise Linux 5.4.

The etherboot package has been added in this update, providing the capability to boot guest virtual machines using the Preboot eXecution Environment (PXE). This process occurs before the OS is loaded, and sometimes the OS has no knowledge that it was booted through PXE. Support for etherboot is limited to usage in the KVM context.

The qspice packages have been added to Red Hat Enterprise Linux 5.4 to support the spice protocol in qemu-kvm based virtual machines. qspice contains client, server and web browser plugin components. However, only the qspice server in the qspice-libs package is fully supported. The qspice client (supplied by the qspice package) and the qspice mozilla plugin (supplied by the qspice-mozilla package) are both included as Technology Previews. The qspice-libs package contains the server implementation that is used in conjunction with qemu-kvm and as such is fully supported. However, in Red Hat Enterprise Linux 5.4 there is no libvirt support for the spice protocol; the only supported use of spice in Red Hat Enterprise Linux 5.4 is through the use of the Red Hat Enterprise Virtualization product.

The virtio-win component is only available via the Red Hat Network, and is not included on the physical Supplementary CD for Red Hat Enterprise Linux 5.4. For more information, see the Red Hat Knowledgebase.

2. Clustering Updates

Clusters are multiple computers (nodes) working in concert to increase the reliability, scalability, and availability of critical production services. All updates to clustering in Red Hat Enterprise Linux 5.4 are detailed in the Technical Notes. Further information on clustering in Red Hat Enterprise Linux is available in the Cluster Suite Overview and the Cluster Administration documents.

Cluster Suite tools have been upgraded to support automatic hypervisor detection. However, running the Cluster Suite in conjunction with the KVM hypervisor is considered to be a Technology Preview.

OpenAIS now provides broadcast network communication in addition to multicast. This functionality is considered Technology Preview for standalone usage of OpenAIS and for usage with the Cluster Suite. Note, however, that the functionality for configuring OpenAIS to use broadcast is not integrated into the cluster management tools and must be configured manually.
SELinux in Enforcing mode is not supported with the Cluster Suite; Permissive or Disabled modes must be used.

Using Cluster Suite on bare metal PPC systems is not supported.

Guests running Cluster Suite on VMWare ESX hosts and using fence_vmware are considered a Technology Preview. Running Cluster Suite in guests on VMWare ESX hosts that are managed by Virtual Center is not supported.

Mixed architecture clusters using Cluster Suite are not supported. All nodes in the cluster must be of the same architecture. For the purposes of Cluster Suite, x86_64, x86 and ia64 are considered to be the same architecture, so running clusters with combinations of these architectures is supported.

2.1. Fencing Improvements

Fencing is the disconnection of a node from the cluster's shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. In Red Hat Enterprise Linux 5.4, fencing support on Power Systems has been added, as a Technology Preview, for IBM Logical Partition (LPAR) instances that are managed using the Hardware Management Console (HMC) (BZ#485700). Fencing support has also been added, as a Technology Preview, for Cisco MDS 9124 & Cisco MDS 9134 Multilayer Fabric Switches (BZ#480836).

The fence_virsh fence agent is provided in this release of Red Hat Enterprise Linux as a Technology Preview. fence_virsh provides the ability for one guest (running as a domU) to fence another using the libvirt protocol. However, as fence_virsh is not integrated with cluster-suite it is not supported as a fence agent in that environment.

The fence_scsi man page has been updated, detailing the following limitations: The fence_scsi fencing agent requires a minimum of three nodes in the cluster to operate. For FC connected SAN devices, these must be physical nodes. SAN devices connected via iSCSI may use virtual or physical nodes. In addition, fence_scsi cannot be used in conjunction with qdisk.

Additionally, the following new articles on fencing have been published on the Red Hat Knowledge Base:

SCSI Fencing (Persistent Reservations) with Red Hat Enterprise Linux 5 Advanced Platform Cluster Suite: http://kbase.redhat.com/faq/docs/DOC-17809
Using fence_vmware with Red Hat Enterprise Linux 5 Advanced Platform Cluster Suite: http://kbase.redhat.com/faq/docs/DOC-17345

3. Networking Updates

With this update, Generic Receive Offload (GRO) support has been implemented in both the kernel and the userspace application, ethtool (BZ#499347). The GRO system increases the performance of inbound network connections by reducing the amount of processing done by the Central Processing Unit (CPU). GRO implements the same technique as the Large Receive Offload (LRO) system, but can be applied to a wider range of transport layer protocols. GRO support has also been added to several network device drivers, including the igb driver for Intel® Gigabit Ethernet Adapters and the ixgbe driver for Intel 10 Gigabit PCI Express network devices.

The Netfilter framework (the portion of the kernel responsible for network packet filtering) has been updated with added support for Differentiated Services Code Point (DSCP) values.

The bind (Berkeley Internet Name Domain) package provides an implementation of the DNS (Domain Name System) protocols. Previously, bind did not offer a mechanism to easily distinguish between requests that will receive authoritative and non-authoritative replies. Consequently, an incorrectly configured server may have replied to requests that should have been denied.
With this update, bind has been updated, providing the new option allow-query-cache that controls access to non-authoritative data on a server (for example: cached recursive results and root zone hits). (BZ#483708)

4. Filesystems and Storage updates

In the 5.4 update, several significant additions have been made to file systems support. Base Red Hat Enterprise Linux now includes the Filesystem in Userspace (FUSE) kernel modules and user space utilities, allowing users to install and run their own FUSE file systems on an unmodified Red Hat Enterprise Linux kernel (BZ#457975). Support for the XFS file system has also been added to the kernel for future product enablement (BZ#470845).

The FIEMAP input/output control (ioctl) interface has been implemented, allowing the physical layout of files to be mapped efficiently. The FIEMAP ioctl can be used by applications to check for fragmentation of a specific file or to create an optimized copy of a sparsely allocated file (BZ#296951). Additionally, the Common Internet File System (CIFS) has been updated in the kernel (BZ#465143). The ext4 file system (included in Red Hat Enterprise Linux as a Technology Preview) has also been updated (BZ#485315).

In Red Hat Enterprise Linux 5.4, the use of the Global File System 2 (GFS2) as a single server file system (i.e. not in a clustered environment) is deprecated. Users of GFS2 that do not need high availability clustering are encouraged to look at migrating to other file systems like the ext3 or xfs offerings.
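By way of illustration, here is how the two networking changes described above are typically exercised. These snippets are not taken from the release notes; the interface name and the address match list are placeholders chosen for the example.

```
# Inspect and enable Generic Receive Offload on a NIC with ethtool
# (eth0 is a placeholder interface name).
ethtool -k eth0         # show current offload settings, including GRO
ethtool -K eth0 gro on  # turn GRO on for the interface
```

```
// Sketch of the new allow-query-cache option in named.conf.
// The access lists shown are assumptions for illustration only.
options {
    allow-query       { any; };
    allow-query-cache { localhost; localnets; };
};
```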
2015-48/1914/en_head.json.gz/8358
SharePoint: Advancing the enterprise social roadmap
by SharePoint Team, on June 25, 2013

Today's post comes from Jared Spataro, Senior Director, Microsoft Office Division. Jared leads the SharePoint business, and he works closely with Adam Pisoni and David Sacks on Yammer integration.

To celebrate the one-year anniversary of the Yammer acquisition, I wanted to take a moment to reflect on where we've come from and talk about where we're going. My last post focused on product integration, but this time I want to zoom out and look at the big picture. It has been a busy year, and it's exciting to see how our vision of "connected experiences" is taking shape.

Yammer momentum

First off, it's worth noting that Yammer has continued to grow rapidly over the last 12 months, and that's not something you see every day. Big acquisitions generally slow things down, but in this case we've actually seen the opposite. David Sacks provided his perspective in a post on the Microsoft blog, but a few of the high-level numbers bear repeating: over the last year, registered users have increased 55% to almost 8 million, user activity has roughly doubled, and paid networks are up over 200%. All in all, those are pretty impressive stats, and I'm proud of the team and the way things have gone post-acquisition.

Second, we've continued to innovate, testing and iterating our way to product enhancements that are helping people get more done. Over the last year we've shipped new features in the standalone service once a week, including:

Message translation. Real-time message translation based on Microsoft Translator. We support translation to 23 languages and can detect and translate from 37 languages.
Inbox. A consolidated view of Yammer messages across conversations you're following and threads that are most important to you.
File collaboration. Enhancements to the file directory for easy access to recent, followed, and group files, including support for multi-file drag and drop.
Mobile app enhancements. Continual improvements for our mobile apps for iPad, iPhone, Android, and Windows Phone.
Enterprise graph. A dynamically generated map of employees, content and business data based on the Open Graph standard. Using Open Graph, customers can push messages from line of business systems to the Yammer ticker.
Platform enhancements. Embeddable feeds, likes, and follow buttons for integrating Yammer with line of business systems.

In addition to innovation in the standalone product, we've also been hard at work on product integration. In my last roadmap update, I highlighted our work with Dynamics CRM and described three phases of broad Office integration: "basic integration, deeper connections, and connected experiences." Earlier this month, we delivered the first component of "basic integration" by shipping an Office 365 update that lets customers make Yammer the default social network. This summer, we'll ship a Yammer app in the SharePoint store and publish guidance for integrating Yammer with an on-prem SharePoint 2013 deployment, and this fall we'll release Office 365 single sign-on, profile picture synchronization, and user experience enhancements.

Finally, even though we're proud of what we've accomplished over the last twelve months, we recognize that we're really just getting started.
“Connected experiences” is our shorthand for saying that social should be an integrated part of the way everyone works together, and over the next year we’ll be introducing innovations designed to make Yammer a mainstream communication tool. Because of the way we develop Yammer, even we don’t know exactly what that will look like. But what we can tell you is that we have an initial set of features we’re working on today, and we’ll test and iterate our way to enhancements that will make working with others easier than ever before. This approach to product roadmap is fairly new for enterprise software, but we’re convinced it’s the only way to lead out in space that is as dynamic and fast-paced as enterprise social. To give you a sense for where we’re headed, here are a few of the projects currently under development over the next 6-8 months: SharePoint search integration. We’re enabling SharePoint search to search Yammer conversations and setting the stage for deeper, more powerful apps that combine social and search. Yammer groups in SharePoint sites. The Yammer app in the SharePoint store will allow you to manually replace a SharePoint site feed with a Yammer group feed, but we recognize that many customers will want to do this programmatically. We’re working on settings that will make Yammer feeds the default for all SharePoint sites. (See below for a mock-up of a Yammer group feed surfaced as an out-of-the-box component of a SharePoint team site.) Yammer messaging enhancements. We’re redesigning the Yammer user experience to make it easier to use as a primary communication tool. We’ll also be improving directed messaging and adding the ability to message multiple groups at once. Email interoperability. We’re making it easier than ever to use Yammer and email together. You’ll be able to follow an entire thread via email, respond to Yammer messages from email, and participate in conversations across Yammer and email. External communication. Yammer works great inside an organization, but today you have to create an external network to collaborate with people outside your domain. We’re improving the messaging infrastructure so that you can easily include external parties in Yammer conversations. Mobile apps. We’ll continue to invest in our iPad, iPhone, Android, Windows Phone 8, and Windows 8 apps as primary access points. The mobile apps are already a great way to use Yammer on the go, and we’ll continue to improve the user experience as we add new features to the service. Localization. We’re localizing the Yammer interface into new languages to meet growing demand across the world. It will take some time, and we’ll learn a lot as we go, but every new feature will help define the future–one iteration at a time. When I take a moment to look at how much has happened over the last year, I’m really proud of the team and all they’ve accomplished. An acquisition can be a big distraction for both sides, but the teams in San Francisco and Redmond have come together and delivered. And as you can see from the list of projects in flight, we’re definitely not resting on our laurels. We’re determined to lead the way forward with rapid innovation, quick-turn iterations, and connected experiences that combine the best of Yammer with the familiar tools of Office. It’s an exciting time, and we hope you’ll join us in our journey. –Jared Spataro P.S. As you may have seen, we’ll be hosting the next SharePoint Conference March 3rd through the 6th in Las Vegas. 
I’m really looking forward to getting the community back together again and hope that you’ll join us there for more details on how we’re delivering on our vision of transforming the way people work together. Look forward to seeing you there! amagnotta Will the Office 365 release this fall integrate with SharePoint Online? I only see SharePoint 2013 on-prem mentioned. If not, are there plans in the Road Map to integration with SharePoint Online at some point? Thanks. CorpSec How does Yammer relate to Lync? It seems to me there’s a lot of overlap between the 2 collaboration tools. Will this evolve over time?
2015-48/1914/en_head.json.gz/8446
What WebRTC is and why you should care
By Danny Boice

Have you ever tried to make a VoIP call or do video chat in your browser only to find that you were required to install some Flash or Java plugin? Annoying, right? Well, it may soon be time to rejoice: this is exactly what WebRTC is designed to eliminate.

Web Real Time Communication (WebRTC) enables browser-to-browser communication without plugins, meaning that the end user (e.g. you) can do real-time voice and video calls right in the browser without installing anything. At my startup, we use WebRTC as our browser-based VoIP client, allowing users to do free conference calls quickly and easily.

But WebRTC goes far beyond conference calls: WebRTC allows developers to easily embed real-time audio, video and file sharing capabilities in their products as if this functionality were a commodity. The time-to-market for features relying on real-time communication will be cut down dramatically, as will the cost to develop them. Adding a VoIP client into your app will be as easy as adding "drag and drop" functionality is with HTML5.

This will have an impact on the entire VoIP industry, as it neutralizes barriers to entry in the unified communications space. Products like WebEx and GoToMeeting, which rely on heavy 3rd party installers, will be forced to adapt or face the consequences.

Imagine being able, as a developer, to quickly throw together the same features and functionality that Google Hangouts provides without having to spend thousands of hours and millions of R&D dollars to do so. With WebRTC that may also be possible soon, as Google is one of the project's major sponsors and leverages much of the open source technology in its own products.

WebRTC was open-sourced by Google in May 2011. Since then, there has been extensive work to standardize its protocols in the IETF and browser APIs in the W3C. The project is sponsored by Google, Mozilla and Opera.

WebRTC consists of 3 major components for developers. First, "GetUserMedia" allows the browser to access the user's camera and microphone (previously, browser security prevented this, thus requiring 3rd party add-ons like Flash). Second, "PeerConnection" allows developers to set up audio and/or video calls. Last, "DataChannels" enable P2P data-sharing across browsers. (A minimal code sketch of these three pieces appears at the end of this article.)

Chrome and Firefox were the first major browsers to support WebRTC, both coming on board in early 2013.
No word on when, or even if, Internet Explorer or Safari will follow suit; however, open-source plugins are already available that enable WebRTC in these browsers.

In the world before WebRTC, if you wanted to use real-time communication functionality right inside your web browser to make a call or share a file, your experience would go something like this:

Click the link to the WebEx, GoToMeeting or audio/video conference.
Download and install a very large Java installer or Flash SWF.
Wait.
Wait some more to join your webinar, video conference or call.
Join the session, maybe in the web browser, or likely in an app outside your web browser.

It was difficult, time-consuming, and clumsy given the lack of tight browser integration -- in most cases, you had to either leave the browser to do RTC or install and/or approve a Flash file, making for a very disjointed experience.

In the WebRTC world, your experience sharing audio, video, files, or your desktop in real time goes more like this:

Click the link to go to the web app where the webinar, video conference or call is hosted.
Allow WebRTC to access your mic (first time only).
Start the session right in your browser.

This user experience is fast, easy and feels more cohesive. You never have to leave the browser or install 3rd party software to make real-time communications work. You just go to the web app and get started. It's that easy.

Because WebRTC changes the level of effort required by developers to build real-time communications features into the browser, it in many ways makes this functionality a commodity. What would have previously taken thousands of hours of developer time can now be done in a matter of days. This removes the technical barriers to entry in the real-time communications market.

What this commoditization of the real-time web means is two-fold:

First, the inherent virality of real-time communication products is more important than ever. With viral products, leads widen overnight. Defensibility in the form of technology matters less. For an example, look to the social media market. Twitter would take little more than a weekend for a good developer to build, Instagram a matter of weeks. What these products possessed that allowed them to win were features such as network effects, which made them more fun to use if you got all of your friends to use them as well, making them inherently viral. With WebRTC, expect the Unified Communications and Collaboration market to start looking a lot more like social media and less like old school telecom.

Second, expect to see a plethora of new real-time communications and collaboration products that run in the browser and offer a beautiful user experience. In the past, one of the largest barriers to great user experience within a browser-based real-time communication product was a technical one -- it was impossible to access a computer's mic or camera through a browser. You had to use 3rd party software like Flash or Java. This kept developers from implementing the great user experiences that their teams came up with. WebRTC removes this barrier.

While WebRTC is a disruptive force in real-time communications, it is still a young technology. As such, there is still a lot of growing up to do. Early adopters are leveraging it and we're seeing some of the larger players start to jump on board.
However, it's still up in the air, for example, as to whether Safari and/or Internet Explorer will ever adopt it -- requiring at least a light browser extension to be installed to "WebRTC-enable" these browsers for now.

So the future of RTC isn't here quite yet, but it's just around the corner.
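As promised above, here is a minimal sketch of the three WebRTC building blocks in browser JavaScript. It uses the modern promise-based API rather than the prefixed calls of the article's era, and the signaling step (exchanging the offer/answer and ICE candidates between peers) is deliberately a placeholder, since WebRTC itself does not define signaling.

```javascript
// Minimal WebRTC sketch: getUserMedia, RTCPeerConnection, DataChannels.
// sendToRemotePeer() stands in for your own signaling transport
// (e.g. a WebSocket); WebRTC does not specify how signaling happens.

function sendToRemotePeer(message) {
  // Deliver the message to the other peer via your signaling channel.
  console.log('signal out:', message);
}

async function startCall() {
  // 1. getUserMedia: ask the browser for mic and camera access.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

  // 2. PeerConnection: set up the audio/video call.
  const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  pc.onicecandidate = event => {
    if (event.candidate) sendToRemotePeer({ candidate: event.candidate });
  };

  // 3. DataChannels: peer-to-peer message or file sharing on the same connection.
  const channel = pc.createDataChannel('files');
  channel.onopen = () => channel.send('hello from the data channel');

  // Create an offer and hand it to the remote peer via signaling.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToRemotePeer({ sdp: pc.localDescription });
}
```

The answering side would mirror these steps with createAnswer(); the two halves are wired together by whatever signaling transport the application already has.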
2015-48/1914/en_head.json.gz/9515
ANSYS, Inc. Releases Workbench 11
Posted by: DE Editors in New Products, News, Simulate
July 2, 2007
By Al Dean

In the world of FEA, ANSYS is a well-known name. Through the development of its own products and through a pretty aggressive acquisition trail that has introduced CFX, Autodyne, and most recently, Fluent to the family, ANSYS has greatly expanded its product portfolio and technology base. While ANSYS is well known for its traditional simulation tools, it has also been a leader in the world of mainstream simulation, where the ability to accurately model a product's behavior and use that as part of the product development cycle is key — and the core part of that push is ANSYS Workbench.

The sectioning tools in ANSYS Workbench 11 allow you to inspect the internal structure of your mesh.

ANSYS Workbench is not a product as such — you'll never hear an ANSYS rep trying to sell it to you. ANSYS Workbench is a platform through which you communicate and interact with the wealth of technology that ANSYS has at its disposal. Because it can be applied to a huge range of simulation tasks, it is perhaps best to walk through a generic workflow to see how ANSYS Workbench creates a common platform that provides the basics of data access and presentation.

Looking at the UI, ANSYS Workbench displays simulation data in a hierarchical tree that separates the various groups of inputs and outputs. Your model data is stored at the highest level, and this level contains the geometry that your mesh is built on, contacts, mesh, and the environment in which you are operating. The mesh is the elemental representation of your parts (the precise nature of this mesh will change depending on your task) and all are held within this folder.

Workbench allows you to intelligently refine your mesh by starting with a coarse mesh. You can find areas of high-stress concentration, then re-mesh using a much higher density mesh to increase accuracy. The system can handle much of this automatically, but you can also dive in and interactively re-mesh with more detail.

The selection tools within Workbench allow you to ensure that you're selecting the correct geometry when applying boundary conditions.
Geometry Definition

The starting point for any simulation task is a geometry file, and this can be sourced in several ways. The first method is to use the CAD connections, which allow you to read native data — Pro/Engineer, SolidWorks, NX/Unigraphics, CATIA V5, Inventor, etc. — or, alternatively, IGES, STEP, ACIS, or Parasolid. Data is read into Workbench fairly intelligently to preserve part names. If it's Inventor, Pro/E, or NX, a sub-set of material definitions is also preserved, including Young's Modulus, Poisson Ratio, Mass Density, Specific Heat, Thermal Conductivity, and Thermal Expansion Coefficient. One thing to note is that the system doesn't replicate assembly structure information, so any nested subassemblies will be flattened out — a potential problem for those working with complex product models.

During the input process, the system will also interrogate the assembly and define any points of contact it finds. Now, it's important to note that this is not read from the assembly mates, but applied in Workbench. The system defines any contact found (within a tolerance) as static, but it's a simple case of using the Details Pane to edit their definition. As standard, you can apply static, rotational, and friction-based contact within your model with ease.

Alongside CAD import, the DesignModeler tool allows you to take data from a variety of sources, fix it if need be, integrate it into a single cohesive whole, and use that as you would native information.

Data Abstraction & Reconstruction

ANSYS Workbench has a set of tools that can be used to repurpose or abstract your CAD geometry for a number of reasons, whether to repair problems or to find and remove features (such as holes, fillets, and chamfers) that have little or no influence on the performance of your part.

Finally, the new FE Modeler tool allows you to work with legacy or third-party FEA data for which there is no CAD base. It allows you to read data decks from a wide variety of sources (ABAQUS, NASTRAN, and ANSYS CDB files), inspect the mesh, find areas in which explicit surfaces can be placed, and create them. It's ideal for finding prismatic surface types, such as planar faces, cylinders (such as holes), pockets, cuts, spherical features, and such. Those which can't be handled (such as complex surfaces) are approximated into freeform NURBS. The end result is a surface model that can be used to regenerate the mesh according to your requirements and adapted to your needs — all from a seemingly static, non-editable mesh format.

This image illustrates the resultant refined mesh, better suited to this simulation's load case.

Materials Definition and Reporting

As you would expect, ANSYS Workbench is supplied with a range of standard materials, as you would get with any FEA-based system. But as always you'll want to adapt them. The system uses a central, XML-based material database to store and distribute materials. You start a new material and add all of the standard values (Young's Modulus is the minimum requirement; Poisson's ratio is assumed if not specified) for structural, thermal, and electromagnetic performance. You can then add properties to describe more advanced material behavior. The whole process is completed with a great deal of feedback.

Report generation is common in mainstream simulation tools, but I'm not going to pretend your reports are going to just consist of a template-based output from a CAD system. The point is that they get you a large part of the way there, quickly.
ANSYS has always been a little better than others and creates reports based on a wizard-style workflow, so you define the various inputs (title, description, etc.) and the system handles the rest. It allows you to create diagrams at any point, so the graphics window and a caption are stored and then added to the report. This contains complete documentation of your analysis study (materials, inputs, as well as results), all within a neat hyperlinked format. When you hit the Generate Report button, the whole thing is compiled for you and stored in an HTML format. It can also be output to Word or PowerPoint files directly.

Mechanisms

One of the key areas for this release relates to mechanisms. While computation technology has advanced, the fact is that mechanism simulation combined with how the forces are manifested within the constituent parts is still a complex process. ANSYS works around this by allowing you to model both rigid and flexible bodies in one environment intelligently. To do this, the motion analysis is set up and run as normal. At that point, the system exercises the joints according to your inputs to gain the force load transfer conditions using the Rigid Body solver. Once that is complete, you switch the appropriate parts to flexible and carry out the Rigid Dynamics Solve to find out how those key parts perform under loading conditions using stress analysis. Results can be created at any time step and you have all manner of tools to inspect the loading and subsequent stress and strain values.

Design optimization — DesignXplorer

Alongside the basics of simulation, ANSYS has always been active in optimization. While it's possible to conduct optimization using traditional FEA, even with today's processing technology enabling the user to conduct simulation tasks more efficiently than ever before, doing this type of work manually isn't feasible. Thus, we need to work smarter, and Workbench allows you to do this.

Within your CAD system, you add a prefix to the controlling parameter within your CAD model (_DS is the default). This prefix then exposes your model to Workbench. You then create the input value ranges in which these parameters are varied — either manually or in a design of experiments manner to specific values. You then add the loads and restraints as normal. Output parameters (the things you're looking to optimize) are also added, whether that's deformation, or maximum and minimum stress or strain values. Design Points, which are effectively sensors or gauges on your mesh, can also be added so that you can track output values (such as stress, strain, deformation, etc.) as the solution is calculated. You can use Design Points to find values within the postprocessing reporting stage.

The system then connects to the CAD tool, generates the model, builds the mesh, solves, and captures all of the data within Workbench. Once completed, you have the ability to work through a whole host of tools to gauge the performance of each iteration. All manner of reporting tools within the system are available here, but adapted to allow for the simultaneous visualization and inspection of multiple datasets on screen at once (with synchronized model manipulation and all that good stuff). The system also includes a number of formalized tools to carry out additional work with this mass of data, such as What if and Deterministic studies, as well as Six Sigma and Robust Design tasks.
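To give a feel for the loop that DesignXplorer automates, here is a deliberately generic sketch in Python. It is not the ANSYS API; the parameter names, the solve() function, and the output measures are hypothetical stand-ins for the CAD regeneration, meshing, and solving steps described above.

```python
# Generic design-of-experiments sweep, illustrating the DesignXplorer idea.
# solve() is a hypothetical placeholder for "regenerate CAD, mesh, solve".
from itertools import product

def solve(thickness_mm, hole_diameter_mm):
    """Placeholder solver: returns (mass_kg, max_stress_mpa) for one design point."""
    mass = 0.5 * thickness_mm - 0.01 * hole_diameter_mm
    max_stress = 200.0 / thickness_mm + 1.5 * hole_diameter_mm
    return mass, max_stress

# Input parameter ranges (the values a _DS-prefixed CAD parameter might sweep).
thicknesses = [2.0, 3.0, 4.0]
hole_diameters = [5.0, 7.5, 10.0]

results = []
for t, d in product(thicknesses, hole_diameters):
    mass, stress = solve(t, d)
    results.append({"thickness": t, "hole": d, "mass": mass, "max_stress": stress})

# Pick the lightest design that stays under an allowable stress.
feasible = [r for r in results if r["max_stress"] <= 120.0]
best = min(feasible, key=lambda r: r["mass"])
print(best)
```

The value of a tool like DesignXplorer is that it runs this kind of sweep against the real CAD model and solver, then layers What if, Six Sigma, and robustness studies on top of the collected results.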
A Unified Environment

ANSYS Workbench provides a unified environment in which you have access to a massive amount of simulation functionality. It allows the user to take 3D-based processes to their next logical conclusion — that of taking the rich definition of a product under development and using it as the base for simulation, and perhaps just as importantly, optimization. By carrying out more holistic simulation of all areas of a product's performance and function, we can build products that are of higher quality, cost less, and are more suitable for their intended purpose.

ANSYS Corp.'s acquisitions mean it has a huge arsenal of simulation technologies ranging from its core mastery of structural FEA analysis, through various forms of CFD (both with the CFX and more recent Fluent acquisition) and into newer, less recognized fields. For example, the Autodyne technology allows the user to carry out explicit analysis over a very short timescale using a variety of meshing and solver technologies. What's more, each simulation technology has been acquired with a clear, long-term strategy in mind: to provide leading, full-spectrum simulation technologies within the fully integrated Workbench environment.

If you're looking to integrate your CAD use with simulation, then you'd be hard-pressed to find a system that provides more in terms of both functionality and the ease-of-use factor that's essential to making the technology usable as part of your product development process.

ANSYS Workbench 11
ANSYS, Inc.
Canonsburg, PA

Al Dean is technology editor at MCAD Magazine, a UK product development and manufacturing technology journal, and is editor of Prototype magazine. You can send comments about this article through e-mail to [email protected].
2015-48/1914/en_head.json.gz/9897
Mike Industries
A running commentary of occasionally interesting things.

What the Betamax Case Teaches Us About Readability
April 1st, 2012

The Betamax SL6500! I totally had this model!!!

Several really smart people in our industry are arguing very publicly right now about a company called Readability and how great and/or evil their service is. One side thinks what Readability does is wrong, and by extension, that the company's founders are immoral. The other side says Readability is providing a valuable service, and although they may not have gotten everything right yet, their intent is good.

There are two issues at the center of the controversy:

1. When you save a "cleaned" version of an article (e.g. no ads, homogenized layout) to Readability and then try to share it publicly via Readability's share tools, the shared link is to the Readability version of the article and not the source. When someone clicks over, they don't even hit the original content creator's server.

This seems quite bad to me, and it might even be illegal. By facilitating the public retransmission of an author's content in a format not authorized by the author, it would seem that Readability is committing copyright violation, en masse. When courts ruled in 1984 that it was ok for someone to make a personal copy of a television broadcast using their VCRs, they did not also rule that people (or VCR companies) could then re-transmit that copy to someone else, without commercials, or however else they saw fit. This issue seems straightforward to me, and as of this writing, the folks at Readability have apparently changed their tune and decided to do the right thing; although I just downloaded a new Readability Chrome extension and I still see the old behavior. So, that's it for the first issue. Bad for publishers? Yes. Bad for readers? Only in that it's bad for publishers.

Update: Rich from Readability tells me that the only reason I'm still seeing this behavior is that I am clicking the link when I'm already signed in to Readability and the item is already in my reading list. I then tested clicking the link from another browser and it indeed went to the original article, albeit framed with a Readability callout on top. I'm fine with this. So, this problem appears to be resolved.

2. Readability collects voluntary fees from its users (suggested amount: $5 per month) and then attempts to redistribute 70% of this revenue back to publishers, providing said publishers have signed up for their service. This is proving controversial because Readability is "collecting fees on behalf of publishers" without their consent, only distributing the fees back to the publishers if they sign up, and deciding themselves what the details of this arrangement are.

I've thought about this a bit — as someone who runs a company that also returns revenue back to content creators (90% in our case, with prior consent) — and I think detractors might be looking at this the wrong way. As I see it, Readability has no obligation to return any revenue to publishers. Unless I'm missing something, they are even within their rights to help individual users make offline, ad-free versions of articles for personal use per the same principles in the Betamax case. A VCR allows me to watch a show later, in another context, while skipping the ads, so why shouldn't Readability allow me to do the same thing?
The anger about the financial side of Readability seems to come from the opinion that the company is “keeping publishers’ money” unless they sign up, but I guess I look at it differently: I don’t think it is the publishers’ money. I think it is Readability’s money. Readability invests the time and resources into developing their service and they are the ones who physically get users to pay a subscription fee. It’s hard to get users to pay for content and they are the ones who are actually doing it. They realize that the popularity of their service is a direct result of content creators’ efforts so they are voluntarily redistributing 70% of it back to publishers in the only way it is feasible to: based on pageviews from publishers who register themselves. If you are a publisher and you don’t sign up, Readability doesn’t take your money. It’s all accounted for and available to you once you sign up. I’m not even sure if there is an expiration date on this collection, but there should be. If I were Readability, I’d probably put something like a year limit on it such that if it wasn’t claimed within that time period, it would go onto the company’s balance sheet as revenue. Readability has no universal contract with the publishing industry, nor do they need one; much as the makers of VCRs had no contract with TV or movie studios. When a reader signs up to pay their monthly fee, Readability then has a contract with the reader. That contract does not say “we will use 70% of your fee to pay your favorite publishers”. It says (paraphrased) “we will take your fee, keep 30%, and give the rest of it away to your favorite publishers, as long as they claim it.” The fact that certain publishers may not want to claim this 70% or may take umbrage as to the details of the arrangement does not change the contract between Readability and its customers. It also does not hurt the publisher any more than other competitive services like Instapaper do. I would feel very differently about this whole case if our fair use laws weren’t as they are today, but courts have told us that “personal archiving” is a legal activity. As such, it’s legal — and perfectly moral — for a company to create a service which makes personal archiving easier whilst charging a monthly fee for it. That Readability sees a future in which personal archiving may hurt publisher revenues and pushes forward an experiment to counteract those effects should be applauded. Finally, this whole episode is a good reminder that the problems of the publishing industry haven’t gone away just because the world has gone digital. In fact, personal archiving is an example of a way it’s gotten worse. You never needed a “reading layout” with a magazine or a newspaper because they were already optimized for reasonably efficient reading. Now layouts are optimized for “time on site”. You also never needed a separate service to help you “Read Later” a magazine or newspaper because you could, you know, just read it later. As digital publishing continues to try and balance profits with audience satisfaction, you can expect many more debates like this from smart people like Anil, Gruber, and Zeldman. Just as it’s important for us to defend upstarts who fight the status quo, it’s also important to hold them to as high of a standard as we hold ourselves. Like this entry? You can follow me on Twitter here, subscribe via email here, or get the RSS feed if that's how you roll. 
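For what it's worth, the arithmetic being argued about is simple enough to sketch. The snippet below is my own illustration of the arrangement described above, not Readability's actual accounting: a monthly fee, a 30% cut kept by the service, and the remaining 70% earmarked per publisher in proportion to one subscriber's pageviews, with unregistered publishers' shares held back until they are claimed. The publisher names and numbers are invented for the example.

```python
def split_fee(monthly_fee, pageviews, registered, service_cut=0.30):
    """Toy model of the earmarking scheme: one fee in, payouts and escrow out.

    pageviews maps publisher -> pageviews this subscriber gave them;
    registered is the set of publishers who have signed up to be paid.
    """
    pool = monthly_fee * (1 - service_cut)
    total_views = sum(pageviews.values())
    payouts, escrowed = {}, {}
    for publisher, views in pageviews.items():
        share = round(pool * views / total_views, 2)
        if publisher in registered:
            payouts[publisher] = share      # paid out this cycle
        else:
            escrowed[publisher] = share     # waits to be claimed (or expires)
    return monthly_fee * service_cut, payouts, escrowed

# One subscriber, $5 per month, reading three publishers; only one has signed up.
cut, paid, waiting = split_fee(
    5.00,
    {"publisher-a.example": 120, "publisher-b.example": 60, "publisher-c.example": 20},
    registered={"publisher-b.example"},
)
print(cut, paid, waiting)
# 1.5 {'publisher-b.example': 1.05} {'publisher-a.example': 2.1, 'publisher-c.example': 0.35}
```

Whether the escrowed amounts should eventually expire onto the company's balance sheet is exactly the judgment call discussed above.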
« Previous Entry | Next Entry » 34 Responses: zwei says: Quoting a comment I received on my latest blog post. (from a community liaison on the Readability team) http://zweigand.blogspot.com/2012/03/how-readability-could-nullify-naysay.html “One reason there’s a 12-month cut-off for publishers that haven’t signed up is pretty much exactly to be able to give it away and *not* to have it get stuck as in escrow for someone who will never come to collect it.” Doesn’t sound like a scumbag company to me. It sounds like they haven’t got everything figured out yet, and are just not ready to talk until they do. #1 - April 1st, 2012 at 7:04 pm Jemaleddin says: I think what bothers people about #2 is that they’re claiming that they’re collecting on behalf of sites that will never see the money. There’s a real lack of transparency there. This feels very disingenuous. And I thought I read that they fixed #1. #2 - April 1st, 2012 at 7:20 pm Bob says: This whole shebang makes me sleepy. #3 - April 1st, 2012 at 8:27 pm Kenneth Reitz says: FYI, the sharing behavior was only a problem on the mobile version of the site. On the “standard” version of the site, when you click on a Readability link, you get a “diggbar”-style view of the original page. However, if you’re logged into Readability as a user you’ll get see the Readability-hosted view. #4 - April 1st, 2012 at 9:02 pm Nick Jones says: So, what you’re saying is, Mr. Rogers made Readability possible. http://www.mentalfloss.com/blogs/archives/112878 #5 - April 1st, 2012 at 9:12 pm -b- says: Ask Readability where they’re opt-out tag was. I asked them months ago in a rant. Using your VCR analogy, Google also archives the Internet and also recognizes the disallow call in a robots.txt file…. What’s more curious to me, socially how we put aesthetics above copyright. I called it out then and also why why when Zeldman and Anil are on the board. Just last week Pinterest announced an ignore meta tag that stops their scraper from grabbing your content. Finally, Safari Reader and Readablity are like ok cause there’s no content repurposing like what Readability does. Though I’m pretty sure my VCR didn’t reformat the content. It just recorded it. #6 - April 1st, 2012 at 9:20 pm JC says: The problems I have with Readability are their “forgiveness over permission” model and their subscription fee transparency. A publisher must actively opt-out, there is no permission sought to be this arbitrary middleman between a reader’s good will (monthly subscription fee) and the publisher’s “earmarked” revenue. A paying subscriber is sold on the fact that their fees are being used to support writers/publishers they read and enjoy, while in reality a publisher may not even be aware someone is collecting money on their behalf and without their permission. Readability’s site back in mid-2011 stated (via archieve.org) “70% of all membership fees go directly to the people who make the content.” Many users still tout this as fact. The website and service has since downplayed the 70% figure (it’s no longer prominently displayed on their main page) and also placed the qualifier “earmarked”, which I personally feel is a weasel word akin to “up to”. Up to 70% of the subscription fees may potentially go to publishers, but as a subscriber, one never truly know how much of it goes where. I don’t know who’s partnered with readability, who gets any of the “earmarked” fees, what publishers are actually aware of readability, or what publishers have opted out. 
In short, a lack of transparency — saying 70% is "earmarked" gives them a lot of wiggle room to fulfill claims through technicalities.
Life at Eclipse Musings on the Eclipse Foundation, the community and the ecosystem Archive for October 2005 Whirlwind Tour So I am currently at the Hyatt Regency Burlingame — fondly remembered as the site of our most recent EclipseCon — for the Zend/PHP conference. This is my third conference in eight days. London, Paris and San Francisco in eight days. Remind me to never do that again. But the great thing is that no matter where I go, Eclipse is present in a big way. At the Symbian Smartphone show, Symbian and Nokia both made Eclipse-related announcements. Symbian announced that they were joining the Foundation, and that they will be making a significant on-going contribution to the C/C++ Development Tools (CDT) project. Nokia announced its new Carbide product family of C/C++ development tools, based on CDT. This is Nokia’s foray into Eclipse-based C/C++ tools to complement their previously announced efforts in mobile Java (J2ME). Also at the show, I got a demo of Wirelexsoft‘s visual programming tools for mobile applications. It is amazing to me what a small dedicated team can build on top of Eclipse in short order. This tool looks really powerful. Next stop was the OSGi World Congress where I was on a panel and gave a keynote. This was a smaller, more intimate conference. Lots of time and space to chat with people. A few notables I had an opportunity to meet were Richard Hall and Enrique Rodriguez from the Felix project, and Christer Larsson from Knoplerfish (thanks for the T-shirt!) Here, the Eclipse Foundation got to do its own announce that we’re ready with OSGi R4, and that the Equinox project was being “promoted” to become part of the Eclipse project. The Eclipse runtime is entirely based on the OSGi spec. I consider the OSGi and Eclipse relationship a great example of open source and open standards working well together. Although there is strong competition between multiple open source and commercial implementations, I really found the OSGi community open and friendly to a relative newcomer. Today’s stop is at the Zend/PHP conference, where Zend announced that they are joining Eclipse as a Strategic Developer. They are going to be leading a project to implement PHP development tools at Eclipse. I think I said in my first press interview upon joining Eclipse that this community is about more languages and platforms than Java. Having Zend come to build PHP tools at Eclipse is a big step in that direction. After a redeye home this evening, I don’t travel for almost ten days ;-) Written by Mike Milinkovich October 19, 2005 at 6:42 pm Posted in Foundation Ward Cunningham Joins the Eclipse Foundation My goodness Ed Burnette is fast :-) . Yes, as mentioned by Ed, I really am very pleased that Ward has decided to join the staff of the Eclipse Foundation. It’s really great to have him. I’ve had the pleasure of interacting with Ward at several points in the past, and I’ve always found him to be a truly rare bird: someone who is both brilliant but also blessed with a warm and engaging personality. I couldn’t imagine someone I would rather be working with. For those who are interested, here is the text of the email I sent to the Eclipse committer community earlier today: I am very pleased to announce that Ward Cunningham is joining the staff of the Eclipse Foundation. To date, the efforts of the Eclipse Foundation in support of the committer community have been primarily around providing infrastructure and process. 
However, a high functioning committer community is about more than just sharing servers and following a common process. A high functioning committer community is about collaboration and cooperation between the project silos. Although the Councils do an admirable job of co-ordinating the activities of the many Eclipse projects, what is needed is a culture of collaboration and cooperation. This is especially true today, as Eclipse grows rapidly with new projects and new committers. To help cultivate this committer culture, I am pleased to announce that Ward Cunningham is joining the Eclipse Foundation as Director, Committer Community Development. Ward's track record of invention in areas such as wikis, patterns and agile development are known worldwide. His current interests in open source and developing communities of developers are a perfect match for the work we need to do at Eclipse. Ward will lead the effort to create a more cohesive Eclipse committer community by working with developers in order to enhance Eclipse as "the place to be".

Written by Mike Milinkovich October 17, 2005 at 8:38 am
Kyle E. Miller E3 2013 Hands-On Impressions: DARKYes, the title is in all caps.06.13.13 - 2:57 AM Stealth tends to be an option these days, but the developers of DARK want stealth to be the best way, if not quite the only way. An action-oriented approach may be possible in this game, but I've played it, and I can tell you that it's going to be tough. The protagonist (whose voice sounds like my favorite Witcher's) has no weapons, gadgets, tools, or machines to help him sneak and slide past enemies. He is, however, a vampire.The story begins with a raid by the anti-paranormal military group M17, of which the protagonist is a member. During the routine raid, the squad is wiped out, and the protagonist is left feeling ill in a nightclub, alone. Rose, the owner and soon-to-be persistent guide (what would we do without one of those?) helps him recover and explains his new supernatural state. From there, he goes seeking the blood of an elder vampire, which must be imbibed before too long or else he will transform into a ghoul: a mindless undead beast.The RPG elements are quite light. Indeed, the gameplay is as simple as the cel-shaded graphics, but that doesn't have to be a bad thing. This could allow players to master the few abilities over the six to ten hours the game offers. The protagonist levels up (and yes, stealth kills net more experience) and earns skill points to invest in a small selection of abilities. Some of the simpler ones allow him to feed on enemies or do a Dishonored-like dash behind cover. Other abilities require blood points (read: MP) and offer more powerful alternatives such as a Vader-like choke attack.I tried two of the levels and while both were similar, I was told that certain areas differ in their parameters. One level, for example, features vampire enemies that can sense any use of vampire abilities. Thus, sneaking through that level might prove even more difficult than the others. The stealth is simple, but everything you might expect in a stealth game is featured here, including detectable corpses, corpse dragging, noise obstacles, distractions, and even an infrared-like vampire vision. Most importantly, there's a circle to indicate enemy awareness — a must in any stealth game.DARK may be light on the RPG elements, but that could work in its favor, as could the relatively simple presentation and gameplay design. The short length may ensure that no one tires of the gameplay as well. The controls could use a little work, but Realmforge Studios is running out of time, as DARK releases on July 9th. Check back next month for our review.
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases.

Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for the Btrfs file system, an Indic typing booster, a redesigned SELinux troubleshooter, better power management, the LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15."

Manufacturer website. 1 DVD for installation on an x86 platform.
Appeared on: Thursday, February 01, 2007

Vista Puts Aside PMP Restrictions For Now

Playback of Blu-ray and HD DVD movies on PCs seems to be unaffected by the release of Windows Vista, since Vista's controversial "Protected Media Path" (PMP) environment has not yet been fully applied. Currently, the available software solutions for playback of Blu-ray and HD DVD movies on Vista are built to support Windows XP. As a result, until software developers release versions designed exclusively for Vista that support the PMP feature, no restrictions are imposed on video and audio beyond those applied under Windows XP.

Vista's "Protected Media Path" (PMP) is a copyright protection for "premium" content, as Microsoft describes it. In simple terms, it is a function that prevents "stealing" of video and audio as they flow from the main memory of a PC to the video and audio cards. Microsoft sources have also confirmed that the software solutions built exclusively for Vista are very limited for now, saying that it will take a while until applications support Vista's PMP.

In addition, support for PMP has not advanced, at least for the time being, for one more reason. PMP uses AES encryption to protect the data stream. The encrypted data would ideally be processed by the graphics card. However, even the high-end graphics cards available today do not support hardware AES processing. As a result, the demanding encryption has to be done by the PC's CPU. Hence a new task is added to a processor already burdened with the reproduction of high-definition video. So, once high-end graphics cards gain such support and also become affordable to the mainstream, this limitation on PMP is expected to ease.

At least for now, there is little difference between the XP and Vista environments when it comes to supporting next-generation optical discs. The situation is expected to change soon, especially after the latest issues that have arisen concerning the "hacking" of AACS, used on Blu-ray/HD DVD.
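To put the CPU-burden argument in perspective, here is a back-of-the-envelope sketch. The numbers are my own assumptions rather than anything from the article: decoded 1080p frames in a 4:2:0 pixel format at 24 frames per second, which is roughly the stream PMP would have to encrypt in software on its way to the graphics card.

```python
# Rough estimate of the decoded-video data rate that a software AES
# implementation would need to keep up with for protected 1080p playback.
width, height = 1920, 1080
bytes_per_pixel = 1.5          # 4:2:0 sampling (e.g. NV12): 12 bits per pixel
frames_per_second = 24

bytes_per_frame = width * height * bytes_per_pixel
mib_per_second = bytes_per_frame * frames_per_second / 2**20

print(f"{bytes_per_frame / 2**20:.1f} MiB per frame")
print(f"{mib_per_second:.0f} MiB per second to encrypt in software")
# Roughly 3.0 MiB per frame and about 71 MiB/s, a nontrivial extra load for a
# CPU of that era that is simultaneously decoding H.264 or VC-1.
```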
Teen pushed adware to hundreds of thousands of PCs "Sobe" to serve time for scheme to make money by surreptitiously planting adware on large numbers of computers. Jaikumar Vijayan (Computerworld) on 19 February, 2008 08:00 A teenager identified by U.S. law enforcement officials only as B.D.H pleaded guilty last week to charges that he used botnets to illegally install adware on hundreds of thousands of computers in the U.S., including those belonging to the military. A statement from the U.S. Attorney's office in Los Angeles announcing the teenager's plea calls him a "well-known juvenile member" of the botnet underground. Officials said the teenager pleaded guilty to two counts of juvenile delinquency for conspiring to commit wire fraud, causing damage to computers and for accessing computers without authorization to commit fraud. The teen is scheduled to be sentenced May 5. Under a plea agreement, he will receive a sentence ranging from one year to 18 months in prison. Asst. U.S. Attorney Mark Krause said that most of the materials related to the case, including details about the investigation, have been sealed because it involves a juvenile under the age of 18. Krause, however, supplied a redacted version of the charging document against B.D.H, which the courts have allowed to be made public. According to the public statement and the charging document, B.D.H -- who was known online as "Sobe" -- worked with another person, Jeanson James Ancheta, in a scheme to make money by surreptitiously planting adware on large numbers of computers. Sobe and Ancheta, who was 20 at the time of his arrest in 2006 and from Downey, Calif., first enrolled as affiliates of legitimate online advertising companies so they could obtain affiliate identification numbers so they could get payments for adware installations. But the payments were supposed to be for adware programs installed with the consent of the user. The two then illegally modified the adware so it could be installed without the user's knowledge or consent and hosted it on servers they controlled. Between August 2004 and December 2005, Sobe and Ancheta broke into hundreds of thousands of computers and directed them via Internet Relay Channels (IRC) to the adware hosting servers. Once the servers then downloaded the modified adware, Sobe and Ancheta sought compensation from the online advertisers for each installation. Among the computers infected were those belonging to the Defense Information Security Agency (DISA) and the Sandia National Laboratories. To avoid getting caught, the two varied the download times and the rate of adware installations on compromised machines. In the charging documents, prosecutors offered numerous examples of chat sessions between Sobe and Ancheta that focused on ways to infect computers and how to avoid detection by network administrators and the FBI. The chats included discussions on new malware they planned to deploy, as well as methods for disabling systems. In one of these conversations Sobe noted that it was unlikely that "feds [would] bust in someones (sic) door for irc bots etc. lol", the charging documents showed. Another time, the pair used AIM to troubleshoot a botnet that kept losing bots and could not infect more than 25,000 computers at any given time. During one of these sessions, Sobe was assured that he would earn at least "2.2gs" by the end of the month. The conversations also showed that both knew that they had infected systems belonging to the Defense Department and to Sandia labs. 
Ancheta is now serving a 57-month sentence in a federal prison for his role in the scheme. He was sentenced in May 2006 after pleading guilty to using malicious code to infect thousands of computers and creating vast botnets from the compromised systems. He admitted to selling the botnets to others who used them to launch distributed denial-of-service attacks and for distributing adware. He also confessed to making US$107,000 in advertising affiliate payments for downloading adware on more than 400,000 infected computers that he controlled. Jaikumar Vijayan
Carry-lookahead adder

4-bit adder with carry lookahead

A carry-lookahead adder (CLA) is a type of adder used in digital logic. A carry-lookahead adder improves speed by reducing the amount of time required to determine carry bits. It can be contrasted with the simpler, but usually slower, ripple carry adder, for which the carry bit is calculated alongside the sum bit, and each bit must wait until the previous carry has been calculated to begin calculating its own result and carry bits (see adder for detail on ripple carry adders). The carry-lookahead adder calculates one or more carry bits before the sum, which reduces the wait time to calculate the result of the larger value bits. The Kogge-Stone adder and Brent-Kung adder are examples of this type of adder.

Charles Babbage recognized the performance penalty imposed by ripple carry and developed mechanisms for anticipating carriage in his computing engines.[1] Gerald Rosenberger of IBM filed for a patent on a modern binary carry-lookahead adder in 1957.[2]

Theory of operation

A ripple-carry adder works in the same way as pencil-and-paper methods of addition. Starting at the rightmost (least significant) digit position, the two corresponding digits are added and a result obtained. It is also possible that there may be a carry out of this digit position (for example, in pencil-and-paper methods, "9+5=4, carry 1"). Accordingly, all digit positions other than the rightmost need to take into account the possibility of having to add an extra 1, from a carry that has come in from the next position to the right.

This means that no digit position can have an absolutely final value until it has been established whether or not a carry is coming in from the right. Moreover, if the sum without a carry is 9 (in pencil-and-paper methods) or 1 (in binary arithmetic), it is not even possible to tell whether or not a given digit position is going to pass on a carry to the position on its left. At worst, when a whole sequence of sums comes to ...99999999... (in decimal) or ...11111111... (in binary), nothing can be deduced at all until the value of the carry coming in from the right is known, and that carry is then propagated to the left, one step at a time, as each digit position evaluates "9+1=0, carry 1" or "1+1=0, carry 1". It is the "rippling" of the carry from right to left that gives a ripple-carry adder its name, and its slowness. When adding 32-bit integers, for instance, allowance has to be made for the possibility that a carry could have to ripple through every one of the 32 one-bit adders.

Carry lookahead depends on two things:
Calculating, for each digit position, whether that position is going to propagate a carry if one comes in from the right.
Combining these calculated values to be able to deduce quickly whether, for each group of digits, that group is going to propagate a carry that comes in from the right.

Suppose that groups of 4 digits are chosen. Then the sequence of events goes something like this:
All 1-bit adders calculate their results. Simultaneously, the lookahead units perform their calculations.
Suppose that a carry arises in a particular group. Within at most 5 gate delays, that carry will emerge at the left-hand end of the group and start propagating through the group to its left.
If that carry is going to propagate all the way through the next group, the lookahead unit will already have deduced this. Accordingly, before the carry emerges from the next group the lookahead unit is immediately (within 1 gate delay) able to tell the next group to the left that it is going to receive a carry - and, at the same time, to tell the next lookahead unit to the left that a carry is on its way. The net effect is that the carries start by propagating slowly through each 4-bit group, just as in a ripple-carry system, but then move 4 times as fast, leaping from one lookahead carry unit to the next. Finally, within each group that receives a carry, the carry propagates slowly within the digits in that group. The more bits in a group, the more complex the lookahead carry logic becomes, and the more time is spent on the "slow roads" in each group rather than on the "fast road" between the groups (provided by the lookahead carry logic). On the other hand, the fewer bits there are in a group, the more groups have to be traversed to get from one end of a number to the other, and the less acceleration is obtained as a result. Deciding the group size to be governed by lookahead carry logic requires a detailed analysis of gate and propagation delays for the particular technology being used. It is possible to have more than one level of lookahead carry logic, and this is in fact usually done. Each lookahead carry unit already produces a signal saying "if a carry comes in from the right, I will propagate it to the left", and those signals can be combined so that each group of (let us say) four lookahead carry units becomes part of a "supergroup" governing a total of 16 bits of the numbers being added. The "supergroup" lookahead carry logic will be able to say whether a carry entering the supergroup will be propagated all the way through it, and using this information, it is able to propagate carries from right to left 16 times as fast as a naive ripple carry. With this kind of two-level implementation, a carry may first propagate through the "slow road" of individual adders, then, on reaching the left-hand end of its group, propagate through the "fast road" of 4-bit lookahead carry logic, then, on reaching the left-hand end of its supergroup, propagate through the "superfast road" of 16-bit lookahead carry logic. Again, the group sizes to be chosen depend on the exact details of how fast signals propagate within logic gates and from one logic gate to another. For very large numbers (hundreds or even thousan
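The lookahead logic described above can be stated directly in code using the usual generate and propagate signals (g[i] = a[i] AND b[i], p[i] = a[i] XOR b[i]). Below is a small Python model of a single 4-bit lookahead block. It is a behavioural sketch of the logic equations rather than a hardware description, and it also exposes the group generate and group propagate signals that a second-level "supergroup" unit would combine.

```python
def cla_4bit(a, b, c0=0):
    """4-bit carry-lookahead adder written to mirror the hardware equations."""
    a_bits = [(a >> i) & 1 for i in range(4)]
    b_bits = [(b >> i) & 1 for i in range(4)]
    g = [x & y for x, y in zip(a_bits, b_bits)]   # bit i generates a carry
    p = [x ^ y for x, y in zip(a_bits, b_bits)]   # bit i propagates a carry

    # Unrolled lookahead equations: every carry depends only on g, p and c0,
    # so no carry has to ripple through the bits below it.
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
          | (p[2] & p[1] & p[0] & c0))
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))

    # Group signals a higher-level ("supergroup") lookahead unit would use.
    group_generate = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
                      | (p[3] & p[2] & p[1] & g[0]))
    group_propagate = p[3] & p[2] & p[1] & p[0]

    s = [p[0] ^ c0, p[1] ^ c1, p[2] ^ c2, p[3] ^ c3]
    value = s[0] | (s[1] << 1) | (s[2] << 2) | (s[3] << 3)
    return value, c4, group_generate, group_propagate

# Exhaustive check against ordinary integer addition for all 4-bit inputs.
for x in range(16):
    for y in range(16):
        total, carry_out, _, _ = cla_4bit(x, y)
        assert total + (carry_out << 4) == x + y
```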
Part five of our series on the history of the Amiga covers the computer's … - Dec 10, 2007 5:52 am UTC Bad advertising Because the Amiga was years ahead of its time compared to the competition, many Commodore executives believed that the computer would sell itself. This was not—and has never been—true of any technology. When personal computers first came on the scene in the late 1970s, most people had no idea what they would be useful for. As a result, the only people who bought them initially were enthusiastic and technically skilled hobbyists—a limited market at best. It took a few killer applications, such as the spreadsheet, combined with an all-out marketing assault, to drive sales to new levels. The Amiga was in the same position in 1985. It was a multimedia computer before the term had been invented, but there were no killer applications yet. What it needed was a stellar advertising campaign, one that would drive enough sales to get software companies interested in supporting the new platform. Instead, what it got was a half-hearted series of television ads that ran over Christmas and were never seen again. The first commercial had a bunch of zombie-like people shuffling up stairs towards a pedestal, from which a computer monitor emanated a blinding light. It was a poor copy of Apple's famous 1984 advertisement, and failed to generate even a tiny amount of buzz in the industry. An Amiga ad from November 1985 Image courtesy Amiga History Guide From there, things got worse. The next ad was a rip-off of the ending of 2001: A Space Odyssey and featured an old man turning into a fetus. Some pictures of the commercial's production made their way to the Commodore engineers, and soon the "fetus on a stick" became a standard joke about their company's marketing efforts. Further advertising used black-and-white and sepia-toned footage of typical family home movies, with some vague narration: "When you were growing up, you learned you faced a world full of competition." Amiga did indeed face a world full of competition, but this kind of lifestyle avant-garde advertising was already being done—and being done much better—by Apple. What Commodore really needed at that time was some simple comparative advertising. A picture of an IBM PC running in text mode on a green monochrome screen, then a Macintosh with its tiny 9-inch monochrome monitor, then the Amiga with full color, multitasking, animation, and sound. For extra marks, you could even put prices under all three. An Amiga ad from an alternate history As a result of Commodore dropping the ball on production and marketing, the firm sold only 35,000 Amigas in 1985. This didn't help with the balance sheet, which was getting grim. Missing CES Commodore had experienced a financial crunch at the worst possible time. In the six quarters between September 1984 and March 1986, Commodore Business Machines International lost over $300 million. Money was tight, and the bean-counters were in charge. As a result, Commodore was a no-show for the January 1986 Consumer Electronics Show (CES). Ahoy! Magazine reflected on this conspicuous absence: Understand that the last four CES shows in a row, dating back to January 1984, Commodore's exhibit had been the focal point of the home computer segment of CES, the most visited computer booth at the show—as befitted the industry's leading hardware manufacturer. Their pulling out of CES seemed like Russia resigning from the Soviet Bloc. 
Commodore also missed the following computer dealer exhibition, COMDEX, as well as the June 1986 CES. The company had defaulted on its bank loans and could not get the bankers to lend any more money for trade shows. The company's advertising also slowed to a trickle. Thomas Rattigan, who was being groomed for the position as Commodore's CEO, recalled those troubling times. "Basically, the company was living hand to mouth," he said. "When I was there, they weren't doing very much advertising because they couldn't afford it." This strategic retreat from the market had a hugely negative impact on Amiga sales. In February 1986, Commodore revealed that it was moving between 10,000 and 15,000 Amiga 1000 computers a month. Jack Tramiel's Atari ST was beating the Amiga in sales figures, in signing up dealers, and worse still, in application support.
Ripdev - GOOD BYE Ripdev has been a presence in the iPhone community for a good while now. They took over Installer from Null River, and worked WITH Null River for a while before doing so. They're behind Icy, Kate, and a few more notable iPhone softwares. And now, they're shutting down. Citing only "circumstances" as their reasoning, Ripdev states they'll be closing their doors, ceasing all support, and are giving out the source to Icy to see if anyone would like to do anything with it. We've always been Cydia proponents - I don't even have Icy installed on my iPhones, and never have except a brief review period when it initially came out. With Icy gone, guess we're in the same exact place - Cydia is King, and the people have spoken. It's been a great two years, but unfortunately, the time has come for Ripdev to close its doors. There are many reasons for this, most of which we probably will never disclose (unless we are forced to). The important thing is that due to circumstances we will likely be unable to support the titles we have created over the years further. They will be perfectly operatable on the firmware versions they were created for, and you will be able to download and use them � but they will no longer be supported and updated (except for i2Reader Pro that is not being developed by us and that will be kept in sync with its App Store version). We will, of course, transfer the licenses to the new devices � just email us. Our Cydia repository will be operational until at least next year, so be assured that the products you liked and paid for will be available for you to (re)install. Icy, our lightweight DPKG installer, is now available in source form under MIT license. You're free to do whatever you want with it. It would be nice if someone picks up the project� It was a honor to be in the iPhone jailbreak community, and we are proud that we have certainly made a ripple or two. Farewell! Ripdev.com
SpaceCollective
Where forward thinking terrestrials share ideas and information about the state of the species, their planet and the universe, living the lives of science fiction. Featuring Powers of Ten by Charles and Ray Eames, based on an idea by Kees Boeke.

SpaceCollective.org is a cross-media information and entertainment channel for post-ideological, non-partisan, forward thinking terrestrials. © 2006-2015 SpaceCollective. Privacy policy / Colophon

Information Collection and Use by SpaceCollective.org

SpaceCollective.org collects user submitted information such as name and email address to authenticate users and to send notifications to those users relating to the SpaceCollective.org service. SpaceCollective.org also collects other profile data, including but not limited to gender and age, in order to assist users in finding and communicating with each other. SpaceCollective.org also logs non-personally-identifiable information, including IP address, profile information, aggregate user data, and browser type, from users and visitors to the site. This data is used to manage the Website, track usage and improve the Website services. User IP addresses are recorded for security and monitoring purposes. User Profile information, including members' pictures and first names, is displayed to people in order to facilitate user interaction with the SpaceCollective.org service. Email addresses are used to send notifications related to the service. Email addresses are not shared or displayed to people within a user's personal network or anywhere else on the website. Users within a personal network communicate on SpaceCollective.org with each other through the SpaceCollective.org service, without disclosing their email addresses. To facilitate the connection between members on the service, SpaceCollective.org allows users to search for other members using display names. We may also use a user's email address to send updates or news regarding the service.

Use of Cookies

SpaceCollective.org uses cookies to store visitors' preferences and to record session information for many purposes, including ensuring that visitors are not repeatedly offered the same Web page content based on browser type and user profile information. We do not link the information we store in cookies to any personally identifiable information you submit while on our site. You may be able to configure your browser to accept or reject all or some cookies, or notify you when a cookie is set - each browser is different, so check the "Help" menu of your browser to learn how to change your cookie preferences - however, you must enable cookies from SpaceCollective.org in order to use most functions on the site.

SpaceCollective.org contains links to other sites. SpaceCollective.org is not responsible for the privacy policies and/or practices on other sites. When linking to another site a user should read the privacy policy stated on that site. Our privacy policy only governs information collected on SpaceCollective.org.

Correcting/Updating or Removing Information

SpaceCollective.org users may modify or remove any of their personal information at any time. Members who no longer wish to receive notifications may choose not to by selecting the appropriate checkbox in their personal account settings.

Security

SpaceCollective.org member accounts are secured by member-created passwords.
SpaceCollective.org takes precautions to ensure that member account information is kept private. We use reasonable measures to protect member information that is stored within our database, and we restrict access to member information to those employees who need access to perform their job functions, such as our customer service personnel and technical staff. Please note that we cannot guarantee the security of member account information. Unauthorized entry or use, hardware or software failure, and other factors may compromise the security of member information at any time. Sharing and Disclosure of Information SpaceCollective.org Collects Except as otherwise described in this privacy statement, SpaceCollective.org will not disclose personal information to any third party unless we believe that disclosure is necessary: (1) to conform to legal requirements or to respond to a subpoena, search warrant or other legal process received by SpaceCollective.org; (2) to protect the safety of members of the public and users of the service. SpaceCollective.org reserves the right to transfer personal information to a successor in interest that acquires rights to that information as a result of the sale of SpaceCollective.org or substantially all of its assets to that successor in interest. Changes in Our Privacy Policy From time to time we may make changes to our privacy policy. If we make changes, we will post them on our site to make users aware of what the changes are so users will always be aware of what information we collect, how we use it, and when we may disclose it. A User is bound by any minor changes to the policy when she or he uses the site after those changes have been posted. If, however, we are going to use users' personally identifiable information in a manner materially different from that stated at the time of collection, we will notify by posting a notice on our Website for 30 days. SpaceCollective is a joint initiative of filmmaker Rene Daalder and designer Folkert Gorter. Daalder is the project's main author and creator of The Future of Everything. Gorter is the site's interaction designer and the curator of the Gallery. System architecture and technology created by Josh Pangell. The Future of Everything episodes are edited by Aaron Ohlmann and produced by American Scenes Inc; executive producer: Joseph Kaufman. © 2006-2015, American Scenes Inc.
Dark Souls: Prepare To Die PC Edition Gets Slammed And Patched By Community
By William Usher

You all petitioned for it, begged Namco Bandai to port over one of the most beloved, anti-propaganda, anti-mass market action-RPGs ever made. It was a small game designed to challenge players and not compromise on its appeal, and it finally arrived for PC, albeit as a shoddy consolitis port. Nevertheless, despite all the mellow raging going on right now, there are modders already hard at work fixing the game, adjusting the resolution and working on fixes for the controls.

GameSpy's Port Authority is quickly becoming one of my favorite go-to sites for information on PC ports. It's honest and hard hitting and better than any review out there (which usually succumbs to pandering to the publisher's interest). The Port Authority lays out a lot of what PC gamers ought to be afraid of when purchasing a port like Dark Souls: Prepare to Die Edition for PC, and it's that there are very limited options available for modifying the experience to suit your PC's setup.

As stated in the GameSpy article, there is a patch for Dark Souls to improve the resolution so that it's no longer an upscaled, blocky mess. The patch comes courtesy of a PC Community Champion named Durante who frequents NeoGAF and has a DLL download available for those of you who want the resolution fixed. According to Durante...

I developed an interception dll framework during this week to prepare for the job. I did the actual work to make the game render at higher res in that amount of time though -- based on the framework -- and spent a few more hours testing and adding the config file.

Leave it to the modding community to fix what pros couldn't get right the first time around. The game has received fair reviews from two sites so far, with a German site giving it a 92 out of 100 and PC Gamer settling for an 89 out of 100. As a hack-and-slash, tactical action-RPG it still seems to hold its own on PC despite some of the shortcomings brought up in the GameSpy article.

I don't know if Dark Souls was really needed for PC given that the platform already has a few games like Skyrim and (to a lesser extent) some Gothic-type titles, but this was also a perfect chance for PC gamers to prove that they are a paying bunch and that not everything that comes to PC has to fail in light of the often blamed elephant in the room...piracy. According to PC Gamer the "game isn't buggy, doesn't crash and loading times are quicker than they were on console," which should allow many PC gamers to breathe a sigh of relief.

If the game continues to prove to be mod-friendly enough that the community can expand and enhance the experience (much like how it was the community that saved that absolutely gross port of Resident Evil 4 from Ubisoft), then Dark Souls: Prepare to Die Edition could have a long and fruitful life on PC. You can pick the game up right now from Steam for only $39.99. While FromSoftware admitted to Eurogamer they weren't accustomed to working on PC titles and that people shouldn't hold their breath too much for the PC port, it looks like things turned out well enough in the end.
Chief Technology Officer (CTO)

Chief Technology Officers receive generous compensation in the United States — earning an average of $149K per year, their standard salaries shoot well into the six-figure range. With bonuses occasionally running north of $56K, profit sharing proceeds sometimes surpassing $49K, and a few commissions as high as $98K, total income for Chief Technology Officers can range between $86K and $254K according to individual performance. Career duration and the particular city each impact pay for this group, with the former having the largest influence. A large number enjoy medical coverage while a fair number get dental coverage. Vision coverage is also available to the greater part. Job satisfaction is high and work is enjoyable for most Chief Technology Officers. Respondents to the PayScale salary survey provided the data for this report.

(Related skills: IT Security & Infrastructure; Network Management / Administration; Internet Information Server (IIS). Related job: Information Technology (IT) Manager. National annualized data; all compensation figures are gross 10th to 90th percentile ranges. Bonus: $254.34 - $56,409.)

Job Description for Chief Technology Officer (CTO)

A chief technology officer is part of an executive team in a company. He or she leads the efforts of the technology development within the company. This is usually the highest position related to technology within a company. Leadership skills are needed, as the CTO will often lead teams of people in the information technology department. The CTO may decide when technologies need to be updated, so it is important to stay up to date with developments in the field. It is also important to keep an eye on the competition, in order to stay a step ahead. An abundance of research will be carried out by the CTO, and reports will be generated. That way, decisions can be made by the executive team. The CTO will generate a vision for the company and a plan to achieve it in the future. It is important to harness the best technology available to provide a good experience for the end user and to make employees' work more efficient. Since this is a senior position in a company, it is usually necessary to have many years of experience in a position relating to information technology. Since the job involves many duties, it is important to be self-motivated and to be able to problem-solve, multitask, and work well under pressure. The CTO may have a bachelor's, master's, or doctoral degree in a field such as information technology, computer programming, or another computer science discipline.

Chief Technology Officer (CTO) Tasks
Monitor management of all hardware, software, databases and licenses, maintenance, and projections of future needs.
Define technology strategies and ensure that processes meet expectations for federal, state and community privacy and security.
Contribute to the senior management team, guiding strategic decisions and resource allocation.
Lead technology teams in day-to-day operations, provide key expertise, supervise the heads of departments, and set performance goals.
Conduct technical reviews of products or solutions to compare and evaluate their applicability.

As Chief Technology Officers transition into upper-level roles like VPs of Research & Development, it's possible that they won't see a change in salary. VPs of Research & Development earn the same amount as Chief Technology Officers on average.
Chief Technology Officers most often move into Chief Information Officer or Chief Operating Officer roles. However, the former pays $13K less on average, and the latter pays $20K less.

Chief Technology Officers report using a deep pool of skills on the job. Most notably, skills in Leadership, Software Architecture, Java, and Product Development are correlated to pay that is above average, with boosts between 3 percent and 9 percent. Those listing Network Management / Administration as a skill should be prepared for drastically lower pay. Microsoft Office and Internet Information Server also typically command lower compensation. Most people skilled in IT Management are similarly competent in Leadership.

Chief Technology Officers who reported more years of relevant experience also reported higher earnings. Salaries can reach six figures almost right off the bat; beginning workers who have less than five years' experience bring in a median of $107K per year. After working for 10 to 20 years, Chief Technology Officers make a median salary of $160K. Seasoned veterans with 20 years under their belts enjoy a median income of $187K.

For those looking to make money, Chief Technology Officers in San Jose enjoy an exceptional pay rate, 32 percent above the national average. Chief Technology Officers will also find cushy salaries in Boston (+22 percent), Dallas (+19 percent), Atlanta (+18 percent), and New York (+15 percent). The smallest paychecks in the market, 13 percent south of the national average, can be found in Houston.

Chief Technology Officer (CTO) Reviews
What is it like working as a Chief Technology Officer (CTO)?
Chief Technology Officer (CTO) in Tampa: "Its Like Working 3 Jobs At Once."
Pros: Control my own schedule. Lots of learning available. Good networking.
Cons: Buried with emails, voice mail, and questions. Spending time to answer basic questions someone could easily look up on Google. Commuting over an hour every day.

Experience breakdown: less than 1 year, 1%; 1-4 years, 6%; 5-9 years, 12%; 10-19 years, 41%; 20 years or more, 41%.
APP for iOS Movable Type 6.2Is Now Available New and enhanced features that improves editing and managing an asset.And the Data API make it possible to manage content beyond web. Brochure (PDF, 3.9MB) Movable Type for AWS Start Movable Type on Amazon EC2 Start Quickly And Easily Movable Type for AWS is an Amazon Machine Image (AMI) including the OS in which Movable Type 6 was installed and available on AWS Marketplace. You can purchase and launch the latest versions of Movable Type quickly and easily. Optimized And Scalable Movable Type Environment OS, Applications, web server, PSGI server, PHP, and database are all optimized for Movable Type. Free Of Charge on a Micro Instance The software charge is $0.07 per hour or $499 per year. It is always free of charge if you launch Movable Type for AWS (nginx) on a micro instance. Easy Update Movable Type Using yum command When updating Movable Type for AWS, you only have to use yum command. You will get relief from the stress of manual updating. 7 Day Free Trial Available You can try Movable Type for AWS for 7 days free on all instance types. Movable Type for AWS (nginx) Movable Type for AWS (Apache) Software License We provide Movable Type software licenses as before.You can pay with your credit card by PayPal. This Movable Type License Agreement (hereinafter referred to as this "Agreement") is made and entered into by and between an individual, corporation, entity or organization (hereinafter referred to as the "Client") that uses Movable Type 6.x (hereinafter referred to as the "Software") and Six Apart, Ltd. (hereinafter referred to as "Six Apart"). The Client shall not download, install or use the Software unless it agrees to this Agreement. The Client shall be deemed to have agreed to this Agreement upon its download, installation or use of the Software. Article 1. Definitions In this Agreement, the following terms shall have the meanings specified below: (1) User "User" means an individual who has been assigned his/her own login name generated by the Software through the function of the Software to "add/edit blog authors." Any person using invalidated login name shall not be counted as a User. Further, it is prohibited to share a login name of any individual among more than one person. (2) Commenter "Commenter" means a User entitled only to post comments on the Software. The number of Commenters shall not be included in the number of Users. (3) Server "Server" means a computer installed with Movable Type, or a group of computers consisting of a computer installed with Movable Type and a computer or computers used for publishing web pages and a computer or computers used as database server. (4) Update "Update" of a product means a minor functional improvement over, or a bug fixing in, the current version. Release of an Update may be confirmed by a change of the figure after the decimal point of the version number. For example, a change from X.1 to X.2 represents an Update. (5) Upgrade "Upgrade" means a major-scale release of a product with introduction of a new function or improvement in the key functionality of the Software. Release of an Upgrade may be confirmed by a change of the figure before the decimal point of the version number. For example, a change from 5.X to 6.X represents an Upgrade. Designation of either the "Update" or the "Upgrade" shall be made by Six Apart. Article 2. 
Use of Software Pursuant to the provisions of this Agreement, the Client shall be granted a license to use, on a non-exclusive, non-transferable and non-sublicensable basis, the Software for the purpose of the Client's own use (if the Client is a corporation, entity or organization, use of the Software by an individual belonging to the Client or any similar individuals designated by the Client from among individuals within the scope permitted by Six Apart in accordance with the License Policy, as the use of the Software by the Client pursuant to this Agreement), and, if the Client operates a community, for the purpose of use within the community by participants therein. Except as specified in this Agreement, the Client may not provide any third party the whole or part of functions of the Software, nor may the Client receive from any third party a consideration for the use of the Software, no matter what the purpose of use is. Six Apart shall be entitled to determine whether the use by the Client is pursuant to this Agreement or not. Six Apart shall retain all rights pertaining to the Software (including all intellectual property rights), as well as all rights pertaining to the Software which are not specifically licensed under this Agreement. The use of the Software shall be limited to the number of Users and the number of Servers set forth in this Agreement. The number of the Commenters shall not be limited. Article 3. Production of Duplicates The Client may duplicate the Software in any readable forms, in the minimum number necessary only for the backup purpose; provided, however, that, such duplication of the Software shall be made in the same form as the original and with an indication of the authorized person. For the avoidance of doubt, any rights to the Software not specifically licensed hereunder shall be reserved by Six Apart. Article 4. Technical Support, Update and Upgrade (1) Technical supports for the Software shall be provided by Six Apart or a partner company of Six Apart for value. The Client shall enter into a separate agreement with Six Apart or such partner company of Six Apart on an individual basis concerning the contents of technical supports. (2) For two (2) years after the purchase of license for the Software and any period to be separately designated by Six Apart thereafter, the Client shall be entitled to receive an Update to the latest version for free. (3) If the Software is provided as an Update or Upgrade, the Client may use either the previous version or the current version and shall not use both versions concurrently; provided, however, that, if the Client chooses to use the previous version despite the Upgrade of the Software, the Client shall acknowledge that the Update set forth in the preceding Paragraph concerning such previous version may be terminated at the discretion of Six Apart. Article 5. Compliance The Client shall understand and acknowledge that it may use the Software only in compliance with all applicable laws. In addition, the Client shall use the Software in accordance with laws and other regulations relating to privacy and intellectual property rights. The Client shall cause any User to adhere to the conditions of this Agreement and acknowledge that any violation of this Agreement by such User shall be deemed as a violation by the Client. Article 6. 
Prohibited Matters The Client shall be prohibited to commit the acts specified in each Item below: (1) To distribute any software derived from the Software (provided, however, that, distribution of plug-ins and other add-ins written by using API and any other programming interfaces published by Six Apart shall be permitted); (2) To duplicate the Software otherwise than set out in this Agreement; (3) To perform reverse engineering, decompiling or disassembling concerning the Software, or otherwise try to restructure or clarify source code or algorithm for the Software; (4) To make available the Software, whole or part, or any duplicate thereof, to any third party in the form of sale, assignment, grant of a license, disclosure, distribution to such third party or otherwise; (5) To use the Software for the purpose of providing hosting services to any other party or providing services to any other individual, corporation, entity or organization which renders, as business, services relating to the Internet or systems, etc., with or without consideration; (6) To delete or modify any display authority or trademark on the Software. Article 7. Protection of Personal Information and Privacy All personal information furnished by the Client to Six Apart shall be controlled in accordance with the Privacy Policy published on the website https://www.movabletype.com/privacy/. The Client shall be deemed to have understood and accepted the Privacy Policy by using the Software. Article 8. Guarantee by Six Apart Six Apart hereby guarantees that no order is included in the Software which has been designed intentionally to alter, lose, destroy, record or transmit any information in computer, computer system or computer network, without intent or permission of the manager of the relevant information. This guarantee shall not apply to open source codes included in the Software, if any. If, during the term of this Agreement, any object which violates the guarantee hereunder, other than the open source codes, is found to be included in the Software, Six Apart shall make any reasonable commercial efforts to alter or replace the Software, at the cost of Six Apart, so that the Software may comply with the guarantee set out herein, without prejudice to any primary function of the Software, as the only legally available relief. The Client may not pursue any other legal relief in connection with the violation of guarantee set out in this Article. Article 9. Limitation on Guarantee relating to Function of Software (1) The Software shall be furnished on as-is basis and shall not provide any security or guarantee whether express or implied. Six Apart shall not provide security or guarantee of any kind whatsoever, whether express or implied, including, but not limited to, an implied security or guarantee concerning the merchantability and suitability to any specific objectives. (2) Any and all risks pertaining to the quality and performance of the Software, program errors in the installation and use, damage to devices, loss of data and software programs, nonperformance or suspension or otherwise shall be borne by the Client. The Client shall determine the suitability of use of the Software at its own responsibility, and bear any and all risks pertaining to such use. Article 10. Termination (1) If the Client violates any provision of this Agreement, Six Apart may terminate this Agreement without giving notice. 
(2) Upon the termination of this Agreement, licenses and technical supports having been granted to the Client shall all be terminated and the Client shall immediately uninstall and suspend any use of the Software, and, if instructed by Six Apart, shall delete or destroy any duplicates of the Software. In such cases, considerations having been paid for the Software and technical supports shall not be refunded for any reason whatsoever. Provisions relating to "Limitation on Guarantee relating to Function of Software," "Indemnification," "Limitation on Liability" and "General Provisions" shall survive the termination of this Agreement. Article 11. Indemnification The Client hereby agrees to indemnify Six Apart, any of its officers, employees, agencies, subsidiaries, affiliates and other partners for liabilities for any direct, indirect, contingent, exceptional, consequential or punitive damage arising from the Client's use of, or otherwise in connection with, the Software. Article 12. Limitation on Liability (1) The Client specifically understands and acknowledges that, Six Apart shall not assume liabilities for any direct, indirect, contingent, exceptional, consequential or punitive damage, including, but not limited to, those resulting from loss of profits, loss of credibility, nonperformance, unavailability of data or other causes, as well as any other unrecognized damage, not only where Six Apart has notified the possibility of such damage in advance, but in all other cases. (2) The amount of accumulated damages payable by Six Apart to the Client shall be up to the amount of fees paid by the Client to Six Apart during the latest twelve (12) months, not only where the court of competent jurisdiction rejected the limitation on liability for any contingent or indirect damage and the limitation referred to in Paragraph 1 does not apply to the Client, but in all other cases. Article 13. General Provisions (1) This Agreement shall be governed by and construed in accordance with the laws of Japan. This Agreement shall constitute an entire agreement between the Client and Six Apart, and the Client shall use the Software in accordance with the provisions of this Agreement. This Agreement shall supersede any and all agreements prior to the execution of this Agreement. (2) All disputes arising from or in connection with this Agreement shall be submitted to the exclusive agreed jurisdiction of the Tokyo District Court, for the first instance. (3) If any provision of this Agreement is determined by the competent court to conflict with any law, then such provision shall be modified or construed, to the maximum extent permitted by law, so that the expected objectives may be fulfilled, and any other provision of this Agreement shall remain full force and effect. (4) The Software shall be the "commercial item" defined in the United States 48 C.F.R. 2.101, and is comprised of the "commercial computer software" and the "commercial computer software documentation" used in the United States 48 C.F.R. 12.212. Provisions of the United States 48 C.F.R. 12.212 and from C.F.R. 227.7202-1 through 27.7202-4 shall apply concurrently and any and all United States end users shall obtain the Software within the extent of the rights stipulated in the said provisions. (5) Both parties acknowledge that the manufacturing and sale of the Software shall comply with the export control-related laws and regulations of Japan and the United States, and agree to comply with all such laws. 
(6) The Client may not assign, transfer, pledge or otherwise dispose of its contractual status or rights or obligations under this Agreement without prior written consent of Six Apart, and any assignment, transfer, pledge or other disposition contradicting the above shall be invalid. (7) Movable Type, the logo of Movable Type, and other logos and names of Movable Type, Six Apart, the logo of Six Apart and other logos and names of Six Apart shall be the trademarks of Six Apart. The Client hereby agrees that it shall not indicate or use such trademarks in any manner whatsoever without prior written consent of Six Apart. (8) To the extent it does not significantly prejudice the benefit of the Client, the Client shall acknowledge that Six Apart may amend or modify this Agreement without consent of the Client. (9) Titles and numbers of Articles, Paragraphs and Items of this Agreement shall be for convenience purposes only, and they shall not have any legal effect.

If you need technical support, please purchase our technical support service.

Standard Technical Support
Standard technical support is provided for Movable Type users (MT Users) who have additionally purchased our "Technical Support Services". Support is valid for one month or three months starting from the license registration date, and the validity period can be extended with further purchases. All queries are answered through the online ticket support system. Support by email, phone, and instant messenger is not available. General response time is about 2-3 business days. If you wish to receive support through channels other than the ticket system or need help with commissioning website construction, please contact Six Apart's support partner.

Supported Topics
Admin Screen Operations / Settings: Help with questions regarding operations and settings on the admin screen [mt.cgi].
Configuration File [mt-config.cgi]: Help with settings and editing the configuration file [mt-config.cgi].
Install / Update / Upgrade: Help with installing, updating, and upgrading Movable Type.
Errors / Malfunctions: Help with errors and malfunctions that occur while using Movable Type. It is extremely helpful to save or make note of the error message(s) received when the error occurred. Please understand that not all errors and malfunctions can be fixed. Also, support staff cannot help with errors or malfunctions caused by operations not compatible with the MT User's personal environment.
Viewing Published Blogs: Help with how a published Movable Type blog is displayed. Note that support staff cannot help solve problems caused by the MT User's environment (OS / browser).
Manual / Product Specifications: Help with Movable Type information manuals and product specifications.
Tags (MT Tags): Help with the use of Movable Type tags (MT Tags). Explanations of the basics of MT Tag notation, site construction and template creation are not covered by support. Also, support staff cannot answer general HTML and stylesheet questions.
Miscellaneous / Support: Help with questions regarding Movable Type standard support and other miscellaneous issues not mentioned in this document.

About Movable Type Support Services
It is highly recommended that the MT User read through the official Movable Type Manual prior to contacting customer support. Depending on the question, answers from customer support might be exactly the same as information provided in the manual. Questions related to Movable Type and official plugins can be answered by customer support.
Customer support can not help with problems related to non-Six Apart products, software required to run Movable Type, third party plugins, third party software that connects to Movable Type and general software required for installing and setting up Movable Type. The MT User holds the responsibility of confirming their computing environment is compatible for use with Movable Type. Customer support cannot help with questions about servers, non-Six Apart software, database construction or environment set-up. Support is offered for the latest version of Movable Type only. Many problems can be resolved by upgrading to the latest version. After a major version upgrade of Movable Type has occurred, support for previous versions will end after a certain amount of time, specified by Six Apart. Questions about modifying the Movable Type source code, as well as problems caused by the modification of Movable Type source code, can not be answered by customer support. Solutions given to the MT User from customer support should be regarded as advice. We cannot guarantee that all questions can be answered or that all solutions given will fix the submitted query. How the information given by customer support is used is up to the MT User’s discretion. Six Apart can not be held responsible for problems or damage directly resulting from MT User actions. Six Apart does not guarantee solutions to all problems sent to customer support. Also, the length of time that customer support services remains valid for any MT User can be changed at the discretion of Six Apart. Support is offered for as long as the MT User’s product license is valid. If Six Apart discovers that the MT User has invalidated their user agreement, the provided support time period can be shortened or canceled. Six Apart has the right to update the information provided in this document without prior notice. Unsupported Topics The following topics are not covered by technical support services. Site building and design Software required for Movable Type operations Third party software designed to be used with Movable Type Operation of software necessary for installation / set-up Problems related to the MT User’s personal environment Advice on computing environments General instructions on MT Tag, HTML and stylesheet notation Plugin development for Movable Type expansion Problems related to modifying or modified source code Problems with versions of Movable Type older than the latest version Questions unrelated to Movable Type Non-Six Apart products Monday - Friday 10:00 - 19:00 (PST) Closed Saturday, Sunday, public holidays Support services are available during normal Six Apart business hours. Support is closed weekends and public holidays. Although general response time can take up to 3 business days, there is a chance the wait time may take longer depending on the issue and the amount of questions received by customer support during a given time period. Queries sent after 7:00 pm on a weekday will not be processed until the following business day. If you do not receive a response after 3 business days, there is a chance an error occurred in which case please re-send your question using the contact form. Questions not sent with the contact form will not be answered. TECHNICAL SUPPORT SERVICE Movable Type for iOS “Movable Type for iOS” is the best application for managing your content in Movable Type from your iPhone.Anytime, anywhere, you can create and easily edit articles. Also, you can save articles in your smartphone as a draft. 
Requirements: Movable Type 6.1.2 or higher; iOS version 8.0 or higher on the iPhone, iPod touch or iPad (not optimized for the iPad). Built with the Movable Type Data API. Features: add, edit and delete entries and web pages; offline editing and saving; photo upload; a simple HTML text editor with HTML input support; editing of Custom Fields; and preview for mobile and desktop views. Six Apart, Ltd.
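Because the app is built on the Data API, which is an HTTP/JSON interface, the same content operations can be scripted from any client, not only iOS. The following is a rough illustrative sketch in Python only: the install URL, site ID and credentials are placeholders, and the endpoint paths and field names are assumptions based on v3 Data API conventions rather than a verified recipe.

import json
import requests

# Placeholder base URL; a real installation exposes the Data API through
# its own mt-data-api.cgi script (assumed path).
BASE = "https://blog.example.com/mt/mt-data-api.cgi/v3"

# 1. Authenticate and obtain an access token (field names assumed).
auth = requests.post(
    f"{BASE}/authentication",
    data={"username": "mt_user",
          "password": "web-services-password",
          "clientId": "demo-client"},
)
token = auth.json()["accessToken"]

# 2. Create a draft entry on site 1 (the site ID is a placeholder).
headers = {"X-MT-Authorization": f"MTAuth accessToken={token}"}
entry = {"title": "Hello from the Data API",
         "body": "<p>Posted through the API rather than the admin screen.</p>",
         "status": "Draft"}
resp = requests.post(
    f"{BASE}/sites/1/entries",
    headers=headers,
    data={"entry": json.dumps(entry)},
)
print(resp.status_code, resp.json().get("id"))

The edit and delete operations mentioned above would follow the same request pattern against the corresponding entry resource.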
Scenyx Sites Forums: The State Of HD Gaming. Started by twistedsymphony.

The State of HD Gaming: Past, Present and Future

HDMI, 1080p, ICT, HDCP, LCD, DLP, HD-DVD, BRD, YPbPr, ANA, HANA, True-HD, DTS-HD, CPPM, DSD, are you still with me? If you understand what more than half of those terms mean then kudos to you, you're paying more attention to the technical side of HD gaming than most people spending hundreds if not thousands of dollars on Audio/Video equipment. They're making purchase decisions based not on the definitions of these terms, or the implications they carry with them, but on the mistaken impression that all of these things are somehow vitally important; the more letters the merrier. Too bad that's not the way things actually work. It's OK if Joe six-pack doesn't understand all this crap, it's not like his livelihood depends on it; unfortunately, rattling off these terms to a register jockey at your local big box will land you face to face with a blank stare. Not even those who hock most of the HD wares for a living understand all of it. That's annoying, and it IS a problem, but it's not a problem the companies pumping out this tech seem to care much about. Based on most of the PR surrounding these things it's pretty clear that they'd rather have people confused, lest they understand the REAL implications of their purchase. Why? Because if more people understood most of these terms, and the real-world implications they carry with them, they'd be far less likely to spend as much hard-earned cash on it.

I often see references to the early adopters as the group that spends enormous amounts of cash to get new tech first. While it's true that enthusiasts make up a large portion of early adopters, they're never foolish enough to part with their money without first doing their homework; if they buy a widget they know exactly what that widget will deliver. Recently though these early adopters have been avoiding the orphanage; they've taken a wait-and-see approach. Meanwhile there is a growing group of adopters who think they know what they're buying but on the whole they really don't; enthusiast posers if you will. They've got the cash, they've got vague ideas about what tech terms mean, but they're severely lacking in experience and genuine depth of knowledge. This can be applied to any new Audio/Video tech, but you might be asking...

How does this apply to video games?

The current generation of video games has been lauded as the "HD era"; what that supposedly means is of no consequence, it's nothing more than a marketing gimmick. Regardless of what some shiny domed head in a posh suit tells you, HD does not carry implications of online connectivity, nor wireless accessories, nor customization. HD is a resolution, that's all it is, that's all it will ever be. How well is that thing defined? Is it highly defined? Yes? Well then it's HD. HD even gets so technical as to set actual limits. A line in the sand, if you will, where everything on one side is HD and everything on the other is not. All these other terms are just means to an end. We often look back at past generations when game companies played the numbers game with bits: 8-bit, 16-bit, 32-bit, 64-bit, stuff like blast processing and FX Chips etc.
We can all look back and have a good chuckle at such triumphs as the 64-bit Jaguar that really wasn't any better than any of the 16-bit offerings of the time. Today it's no different; the names have changed but it's the same damn game on the same damn field.

True Resolutions

Out of the factors most professionals agree make for a "beautiful" picture, resolution ranks 4th, behind contrast ratio, color saturation, and color accuracy. Putting this in video game graphics terms, it means that things like shadows, lighting effects, textures and palettes are more important to the picture than resolution is. The problem is that both Sony and Microsoft decided to play the numbers game, placing resolution on a pedestal and throwing around terms like 1080p when these machines can actually nail the first three ranks much better than the fourth. The problem is that when you shoot for those higher resolutions other things begin to suffer, and more often than not it's bullets one, two and three.

Those who had HDTVs back in the thick of the Xbox 1's days might recall that a select few games supported HD resolutions of 720p and a few even supported 1080i. Anyone who played those games could easily tell you that while HD was nice those games generally looked worse than most other games. The reason was that the graphics were much more simplistic. The resolution was higher but the rest of it usually looked like crap. It was a double-edged sword: the only reason you could get HD resolutions was because the rest of the graphics weren't all that impressive (read: system intensive).

High definition isn't needed for things to look realistic. Think for a moment: have you ever watched a news program in standard definition and thought the people didn't look real enough? HD simply allows you to see more of the details, but if you shortchange the details for the sake of the resolution then you get a close-up look at the LACK of details and the resulting picture looks even worse than if you left it at a lower resolution. This is the reason that I really hope companies like Sony (MS is off the hook because they made a commitment not to) don't force developers to use 1080p for the sake of using 1080p, because the overall quality of the graphics and other elements will actually suffer as a result.

For the best game graphics they should be placing resolution where it belongs in the list of priorities: cover the important things first or find a happy medium between resolution and the rest. There is no doubt that the PS3 and Xbox 360 can handle HD graphics, but they shouldn't pump up the resolution to the point where the rest of the graphics suffer because of it. I'd rather look at a beautiful game at a lower resolution than an ugly one at a higher resolution.

There is no question that, all other things being equal, 1080p is an improvement over 720p. The unfortunate reality is that when the graphics are being generated on the fly, moving from 720p to 1080p does not leave room for all other things to be equal. I don't care what you've been told: 1080p is not an automatic ticket to a good-looking game. I can render a steaming brown turd in 1080p; it doesn't change the fact that it will still look like crap. Based on what the PS3 and Xbox 360 have shown for capabilities so far, I sincerely recommend developers stick to the 720p resolution this generation. There will be the occasional game whose art direction or level design will be such that 1080p is possible without compromising other aspects of the gameplay or graphics.
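To put a rough number on the cost of that jump, here is a back-of-the-envelope pixel-count comparison (a small Python sketch; the 60 frames-per-second target is an assumption for illustration, and real rendering cost obviously depends on far more than raw pixel counts):

# Raw pixel counts for the two HD resolutions discussed above.
pixels_720p = 1280 * 720      # 921,600 pixels per frame
pixels_1080p = 1920 * 1080    # 2,073,600 pixels per frame

ratio = pixels_1080p / pixels_720p
print(f"1080p pushes {ratio:.2f}x the pixels of 720p per frame")  # ~2.25x

# Assumed 60 fps target, to show the per-second load.
for name, pixels in (("720p", pixels_720p), ("1080p", pixels_1080p)):
    print(f"{name}: {pixels * 60:,} pixels shaded per second")

Every one of those extra pixels has to be shaded with the same budget of shadows, lighting and texture work, which is exactly where the first three bullets start to suffer.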
All I'm suggesting is that 1080p not be on any list of priorities; it should be reserved as a nice-to-have feature implemented only after the need-to-have features are set and done. Setting 1080p on any list of priorities is simply shortchanging the remainder of your gaming experience for the sake of a number.

Connecting the Dots

1080p resolutions aren't the only overhyped piece of HD-era marketing that has been pushed on us. HDMI seems to fall into that category as well. They pretty much go hand-in-hand, considering one of the biggest selling points for HDMI is that it has better 1080p support; funny that. Another big HDMI argument is that it is digital as opposed to analog. Though in terms of real-world performance it doesn't seem to make all that much of a difference. Some argue that analog is more susceptible to interference than digital, and that's true, but that doesn't mean that digital is off the hook either. It's funny how easily some of those 1s can turn to 0s and vice versa; while analog pictures get fuzzy, digital pictures get shimmers, and both can develop their own brand of static.

HDMI also offers support for new HD audio formats, which in my personal opinion is the only real reason worth spending the extra cash for it. Though it's still not all that useful to me, for the same reason it's not all that useful to most people. Support for a format doesn't mean that your content has it, nor does it mean that the rest of your equipment supports it. Saying HDMI supports things like Dolby TrueHD or DTS-HD is like bragging that your bank account is capable of holding millions of dollars. There's a really good chance you'll never see it being used to its full extent. Today's games will probably never support these audio formats. Though you might see some HD-DVDs and Blu-ray Discs popping up with support, that doesn't help much either. All of that is useless unless you have a receiver that can decode it. And even if you have a receiver that can decode it, only those who have surround sound systems in the giggabucks range will actually notice the slight differences in the audio nuances. Plain and simple, most consumers don't have the golden ears required to tell the difference, never mind the desire to spend money on a surround sound system that would even afford them the opportunity to attempt making the distinction.

Perhaps the biggest laugh is the people who want to see MS release an HDMI adapter for existing consoles; further proving that most of the people dropping cash on this stuff know enough to understand what things like HDMI are, but not nearly enough to understand why it is important and what it actually does for you in the grand scheme of things. Those of you waiting for such an adapter, don't hold your breath. VGA/Component to DVI/HDMI converters run in the range of $200 (for those keeping track at home, that's half to three quarters of the cost of a new Xbox 360). An official MS-brand adapter would most likely be priced in the same range. If you really wanted one of these adapters there's no reason you can't just buy one right now and start using it. There are two big problems with using an external adapter for HDMI: 1. There's no HDCP, meaning that when the ICT starts getting enabled on HD-DVD movies down the road you still can't play them; and 2. Since the data was being converted to analog before being converted back into a digital format, you lose whatever added quality you might have been hoping for with an all-digital signal.
The absolute best you could hope for is as good as the analog signal you were starting with. So all in all you just added another 50-75% to the price of your console and gained a big fat NOTHING. The only people who should be using external adapters are those who simply don't have analog inputs available and only have HDMI or DVI-D. AFAIK there isn't anyone in that situation. I've never seen an HDTV void of analog inputs.

DRM for great JUSTICE!

Perhaps the most prominent feature of HDMI is the DRM it offers content providers. HDCP is a nasty little feature that encrypts the digital audio and video data traveling through a DVI-D or HDMI cable. The idea is that would-be pirates can't tap the audio and video data on the digital output and copy things that way. I always get a good chuckle when people talk about HDCP being used in video games. There's no point; I don't really think developers care if you record your gameplay. Anything you could possibly do with that would only serve to help sell more copies of the game. It's not like you could somehow tap the actual game code through the HDMI port.

HDCP is only useful to people making static HD video or HD audio content. They aren't even using it yet, but they plan to start in 2012 (that's 2012, you know, far enough in the future that the PS3 and Xbox 360 will have already been replaced by their respective successors), which is when they suspect they'll have enough market saturation that they can turn on the Image Constraint Token and instantly screw over everyone at once. This is the kind of system lockdown that consumers should really be wary of, and it's the kind of thing where if people really understood the implications they'd probably start boycotting HD-DVD, Blu-ray, HDMI and everything else that carries with it the weight of HDCP. Of course it's of no consequence to gamers. It makes sense in the PS3 because it has a Blu-ray drive, and if you're OK with DRM in your HD movies then by all means. Since the current batch of Xbox 360 owners don't have HDMI ports it begs the question:

Are existing Xbox 360 owners left out in the cold?

Much like MS caved in to the 1080p hype machine started by Sony, they've also recently caved in to the HDMI hype with the announcement of the Xbox 360 Elite. While MS was able to just turn on 1080p support for all existing owners via a software update, HDMI isn't so simple, as it's a hardware feature, not a software one. The PS3 already supports HDMI, but what does an HDMI-enabled console mean for existing Xbox 360 owners?

Nothing. It doesn't mean anything at all, because existing Xbox 360 owners don't have any HDMI ports; and they never will. That doesn't seem to stop people from complaining though. Let me be perfectly clear on this: A new version of the console doesn't mean you're being screwed. You knew what you bought when you bought it.

To use a car analogy (everybody's favorite): Honda won't replace your 2005 Civic when the 2006 model arrives with a fancy new CD player and some extra trunk space. You knew what you bought when you bought it; if you didn't think an Xbox 360 sans HDMI was worth the $300-$400 price tag then you shouldn't have bought it, plain and simple.
Similarly, if your complaint is that you have the HD-DVD drive and are worried that the big bad ICT will come and murder your non-HDCP-protected resolution, well then you're an idiot for buying an HD-DVD drive, or a cheapskate for buying the gimped Xbox 360 version as opposed to a real player; take your pick.

If your argument is that consoles shouldn't change, then you've been living under a rock; nearly every console in existence has gone through major and frequent redesigns that added or removed features. Some of the changes were more visible than others; the Elite is no different. Heck, even the PS2 had a major revision change that included a more reliable DVD drive and a built-in IR receiver for the DVD remote, among other things, and that was before the Slim PS2. If you're under the misconception that consoles don't go through these kinds of revisions then you simply haven't been paying attention; it has been happening since the invention of video games in the 70s. Products get new features and lower price tags over time; game consoles are no different.

People who are complaining about being "screwed" with the release of this new Xbox have two problems. First and foremost, they don't realize that in the grand scheme of things HDMI really doesn't offer any substantial benefits over what they're currently getting. Xbox 360 games likely won't ever support HD audio formats, most games won't ever support 1080p resolutions, and as long as you're using reasonable-quality cables you won't see much of any interference. Besides the fact that only a small portion of the game-buying population owns HDTVs, and that only a small portion of HDTV owners have HDMI ports, an even smaller number of people are capable of discerning the differences between an analog signal over component or VGA versus a digital signal over HDMI when administered the Pepsi challenge. Second of all: SHUT UP AND DEAL WITH IT! If you're really that broken up about it, grow a spine and start taking responsibility for your own actions. No one is stopping you from selling the console you have today, sitting on the money, and buying an HDMI version when it comes out. Consider the difference between the sale price of the used console and the price of a new one a rental fee for the months of gameplay you got out of it. If you can't deal with being away from your beloved console while you wait, well then HDMI must not really be all that important to you and there's no real reason for you to complain, because you're not willing to wait for what you want.

Future Proof?

Some might say that support for HDMI and HD-DVD built into the Xbox 360 from the beginning would have helped to future-proof the system. O'RLY? Has the lack of either of those features hindered the Xbox 360's performance yet? When you consider that on average a console has 5 holiday seasons before it's replaced by a newer, faster, sexier model, the idea of future-proofing starts to look pretty stupid. Not only is the timeline too short, but if a console lasts beyond its expected life it will be harder for console makers to move consumers on to the next platform.

Consider this: we've moved well past the Xbox 360's 2nd holiday season and are gearing up for its 3rd... we're halfway through the Xbox 360's expected lifespan. I honestly don't see the lack of a built-in HD-DVD drive playing much of a factor in the remainder of the Xbox 360's life, nor do I see the late entry of HDMI playing much of a factor either. This holiday there will be two games that will likely span multiple discs.
Holiday 08 will likely see a few as well. Only a small handful of these games will even support 1080p natively. By the time we get to Holiday 09 none of it will matter, because the dust will have firmly settled on this generation and whoever owns the market at that point will define it. Holiday 2010 will see people turning their attention to another new generation of consoles, at which point anything that was left out this generation is sure to be included next time.

HDMI does offer benefits, as do 1080p, blue-laser discs, HD audio and all of the other little things that make up today's home theater tech. However, for today's games, most of it is not needed. Console horsepower isn't quite up to speed with the rest of the home theater tech yet; besides, most gamers don't have the equipment at home to fully utilize it, and even if they did... most wouldn't see the difference. Those few who do have the equipment and can see the difference need to remember that they're in the vast minority, and the console world doesn't revolve around them no matter how self-important they think they are.

It's true that PS3 owners have access to Blu-ray and HDMI, but were those things worth the wait for the console, were they worth the extra cost, and for Sony fans was it worth losing countless exclusives and market share?

For Xbox 360 owners, would an HD-DVD drive really have been worth waiting another year? Would it have been worth an extra $200 on the console's base price? Would HD-DVD have been worth it just for the sake of not having to switch discs on a small handful of games? Would the extra cost of an HDMI port have been worth it for the few small games that actually support 1080p natively?

For me the answer to all of those questions is a solid, definitive, and resounding NO. You might still think differently, but I'd like to remind you not to take yourself so seriously; this is all just fun and games... maybe Nintendo is onto something after all.

-------------------
This article is an extension of the articles Xbox 360 and HDMI and True Resolutions from thoughthead.com.

Edited by twistedsymphony, 03 April 2007 - 01:42 PM.
Release date: May 2006
Andrew Hudson, Paul Hudson

Continuing with the tradition of offering the best and most comprehensive coverage of Red Hat Linux on the market, Red Hat Fedora 5 Unleashed includes new and additional material based on the latest release of Red Hat's Fedora Core Linux distribution. Incorporating an advanced approach to presenting information about Fedora, the book aims to provide the best and latest information that intermediate to advanced Linux users need to know about installation, configuration, system administration, server operations, and security. Red Hat Fedora 5 Unleashed thoroughly covers all of Fedora's software packages, including up-to-date material on new applications, Web development, peripherals, and programming languages. It also includes updated discussion of the architecture of the Linux kernel 2.6, USB, KDE, GNOME, broadband access issues, routing, gateways, firewalls, disk tuning, GCC, Perl, Python, printing services (CUPS), and security. Red Hat Linux Fedora 5 Unleashed is the most trusted and comprehensive guide to the latest version of Fedora Linux.

Paul Hudson is a recognized expert in open source technologies. He is a professional developer and full-time journalist for Future Publishing. His articles have appeared in Internet Works, Mac Format, PC Answers, PC Format and Linux Format, one of the most prestigious Linux magazines. Paul is very passionate about the free software movement, and uses Linux exclusively at work and at home. Paul's book, Practical PHP Programming, is an industry standard in the PHP community.
How do you get a new player engaged in an old campaign?

A group of three players had been going through my campaign for a year and a half when a fourth player was added around level 15. The new player entered in medias res: there was a ton of backstory, more than a hundred NPCs, and the rest of the group was running from one important quest to the next. Unfortunately, I made a huge mistake in the way this character entered the campaign: he was scooped up from the Nine Hells and brought to the group's home world to assist them in a battle that won't take place for at least another 7 levels. The thing is, neither the player nor his character has any stake in this world. They don't know any people/NPCs there, they don't know the history or the intricacies of the quests, and their relationship with the rest of the party is strained at best. Because the character is unaligned, he's not even that interested in helping the party defeat the big bad. At the moment, while the rest of the players talk to NPCs (both old and new), remember history, important clues and backstory, and discuss which quests to focus on, the new guy sits quietly in the corner and waits because he has no reason to do any of those things. What are some good ideas to get this player AND his character engaged in the story and/or just help him have fun?

Note: The player is not at all uninterested - he seems to find the story interesting and wants to play with us. However, I understand why he would be unmotivated given the (in retrospect, bad and irreversible) way he was added.

Tags: system-agnostic, players, party
Commonwealth of Massachusetts Enterprise IT Strategy

Introduction
Approach
Key Observations and Recommendations
  Governance
  IT Strategy
  Architecture and Standards
  IT Infrastructure
  Partnerships
Implementation
Moving Forward: The Enterprise IT Strategy is Just the Beginning

A. Introduction

Enterprise IT: Raising the Bar in Massachusetts

Information Technology (IT) has become a powerful tool for almost everything we want to accomplish in government. IT's utility, and how we manage it, can dramatically impact the efficiency, effectiveness, and citizen-centric focus of government services and programs. Getting IT right is becoming more critical than ever for governments in meeting the demands of citizens, businesses, and employees who are expecting the same high level of service they are receiving in the private sector. IT impacts directly on the future economic competitiveness of the Commonwealth. With the current budget crisis facing state governments, fewer funds are available and new accountability standards demand a clear economic payoff from any IT investment. Financial uncertainty is coupled with a rapidly changing technology environment, requiring new thinking and innovative approaches. An effective enterprise IT strategy requires the cooperation and collaboration of government business and IT leaders across government boundaries. For Massachusetts to "raise the bar" in the delivery of government services, it must aggressively pursue reforming the way it governs, manages, and leverages the IT enterprise throughout the Commonwealth. Citizens view the Commonwealth as "one government," not a collection of agencies, departments, and authorities. Creating that "single view of government," with a seamless service interface, will come about only when IT-based reforms are implemented and can impact how government conducts its business.

Information Technology Commission: Meeting the Enterprise Challenge

The IT Commission was established in response to Section 6 of IT Bond III,[1] which directed, "…a special commission to recommend an enterprise-wide strategy, including all 3 branches of government and the constitutional offices, for the commonwealth's information technology infrastructure, system development and governance." IT Commission members were appointed from among positions of leadership in both the public and private sectors.[2] They viewed this legislation as a "Call to Action," and experienced a sense of urgency in completing this report, which members regard as the beginning of a journey for the Commonwealth, rather than the completion of a task. After the election of Governor Romney in November 2002, IT Commission co-chairs met with the transition team to discuss the Commission's charter and membership. The transition team endorsed both, and welcomed the Commission's findings and recommendations as inputs to the transition team's work. Commission members understand the high degree to which state government depends on technology for meeting its operational needs and achieving its policy objectives. The Commission recognizes that one of the Commonwealth's primary challenges is to employ technology not only to deliver existing services faster and cheaper, but also to create new enterprise services and new roles for government that enhance social progress and foster prosperity. This task is especially challenging, given the continuing escalation in the development of technology and the fact that government operates in an environment of constant economic, political, and social change.
Without an understanding of the changing political environment, and an insight into the direction technology is moving, wrong and wasteful investment decisions will be made. Improving the effectiveness of IT investment is at the heart of what the Commission is seeking to address through enterprise IT reform in the Commonwealth. At the same time, it is important to note that IT is only the “enabler to change.” Commission members were vocal about the need to avoid automating inefficient business processes. Members knew instinctively that, “The two most common complaints in and about the public sector IT community are…the charge that money and technology are being thrown at fundamentally broken processes, and the complaint about the imposition on public organizations of foreign processes that have been automated around the structure and operational needs of private sector corporations.”[3] Responsive, innovative, cost efficient, and customer-centric government will result only when agencies examine existing business processes, and re-engineer these processes, as necessary, to create value for the end-user. Massachusetts is at the forefront of state efforts nationally to develop an enterprise IT framework that spans all branches and levels of government. The present day context for implementing this enterprise approach is as compelling as it is challenging. This report addresses a number of opportunities to reshape and improve IT resources, practices, and potential in the Commonwealth, and discusses several of the key change drivers and challenges affecting its current business environment, specifically: the increased challenges and expectations by constituents for e-government services, the heightened emphasis surrounding homeland security post-September 11th, the current economic crisis, and the transition in political leadership. Today, these change drivers are converging, offering unparalleled opportunity to strategically position the Commonwealth to address the overall management and delivery of IT services. Enterprise Vision: The Time is Now The Commission’s enterprise vision for the Commonwealth is about more than just technology; it encompasses strategic direction, organization/people, technology, and processes. Leadership is crucial in this complex environment. The IT Commission adopted the following statement as representative of members’ views on the appropriate scope of the enterprise, and the necessity to work to transcend existing governmental barriers: “Opportunities for taxpayer savings, expanded public services, and improved efficiency in the public sector, through IT reform, require us to go beyond traditional boundaries. Enterprise IT reform in Massachusetts, to the extent appropriate, should encompass all three branches of state government, state agencies, state authorities, cities and towns, and the Commonwealth’s university and research community.”[4] While no single individual has the ultimate authority for enterprise performance, the opportunity to hold the enterprise accountable for results rests most squarely with the Governor, who should lead the outreach efforts to the Legislature, the Judiciary, constitutional offices, the higher education community, and local governments in Massachusetts. 
In the first meeting of the IT Commission, Peter Quinn, the Chief Information Officer (CIO) for the Commonwealth of Massachusetts, described the timing of this legislatively mandated Enterprise IT Strategy initiative as “the perfect storm” for addressing IT governance and management issues in Massachusetts. As Mr. Quinn pointed out, the pending economic/budget crisis, the election of a new Administration, the need to expand e-government services, and the demand to address security concerns after September 11, 2001 are all converging, offering unparalleled opportunity to strategically position the Commonwealth to address the overall management and delivery of IT services. The stage is set to build the business case for the Commonwealth to make bold and significant recommendations regarding an Enterprise IT Strategy for the Commonwealth. The work of the IT Commission is not an end, but a beginning. B. Approach The IT Commission engaged IBM Business Consulting Services (IBM) to provide a “high-level assessment of the Commonwealth of Massachusetts’ information technology infrastructure, systems development, and governance.”[5] From these “as is” observations, the IBM team assisted the IT Commission in developing a high-level, strategic framework of recommendations, and a roadmap for implementing these recommendations. In conducting the “As Is” Assessment, the IBM team interviewed more than 50 individuals representing all three branches of government,[6] including many representatives from Commonwealth agencies. Additionally, the IBM team researched public and private sector best practices, utilizing information from leading market research firms (e.g., Gartner, Meta, IBM Endowment for the Business of Government), and industry organizations and periodicals (e.g., Center for Digital Government, IBM Institute for Business Value, National Association of State CIOs, IT Governance Institute, Information Systems Audit and Control Association, Massachusetts Technology Collaborative, Governing, Government Technology). Members of the IT Commission, representing industry leaders such as AMS, Cisco Systems, DSD Labs, EDS, Harvard Pilgrim Health Care, Harvard University’s Kennedy School of Government, Sun Microsystems, and Verizon, participated actively by providing valuable insight into market trends, competitive landscape, and best practices in information technology governance and strategy. As part of this engagement, the IBM team Web-enabled the Commonwealth’s existing application database, which was developed originally as a Y2K initiative, so agencies can update this information directly over the Internet. The IT Commission met six times from November 2002 through February 2003.[7] IT Commission members’ recommendations were informed by IBM’s “as is” observations, by facilitated visioning sessions, and by volumes of best practice research. The non-profit Center for Excellence in Government sponsored a daylong roundtable discussion with former government CIOs, to provide an opportunity for Commission members to dialogue directly with practitioners about governance structures and management practices that have worked successfully in state government environments, and about lessons learned. These practitioners were unanimous in their praise of Massachusetts for the inclusive, enterprise IT framework being pursued by the Commonwealth, and for the active involvement of Commission members from all branches of government, as well as the private sector. 
The Commission was diligent in looking beyond the performance of peer states, to leading industry practices in the private sector. The Commission was mindful that all private sector best practices cannot be translated exactly into the public sector, largely because of dissimilarities in public sector organizational governance models. The IT Commission adopted a set of values as guiding principles for developing its recommendations. These values represent the Commission’s ideals for the future enterprise IT environment in Massachusetts. As the Commonwealth moves forward in the development and deployment of an enterprise IT environment, the Commission recommends the continued adoption of these guiding principles as a framework within which to consider critical decisions affecting the Commonwealth’s future IT environment: Single Face of Government; Strategic Direction with a Common Vision; Business Value; Collaboration; Pragmatism; Agility; Accountability; Integrity; Equity in Access;
Keynote Speakers

Ignacio Llorente, OpenNebula. Ignacio M. Llorente, Ph.D. in Computer Science (UCM) and Executive MBA (IE Business School), is a Full Professor in Computer Architecture and Technology and the Head of the Distributed Systems Architecture Research Group at Complutense University of Madrid, and Chief Executive Advisor and co-founder of C12G Labs, a technology start-up. He has 17 years of experience in research and development of advanced distributed computing and virtualization technologies, architecture of large-scale distributed infrastructures, and resource provisioning platforms. His current research interests are mainly in the area of Infrastructure-as-a-Service (IaaS) Cloud Computing, co-leading the research and development of the OpenNebula Toolkit for Cloud Computing and coordinating the Activity on Management of Virtual Execution Environments in the RESERVOIR Project, the main EU-funded research initiative in virtualized infrastructures and cloud computing. He founded and co-chaired the Open Grid Forum Working Group on the Open Cloud Computing Interface, and participates in the European Cloud Computing Group of Experts and in the main European projects in Cloud Computing.

Bret Priatt, OpenStack.

Jean-Bernard Stefani, INRIA.

Denis Caromel, INRIA Professor and ActiveEon founder. Denis Caromel is a full professor at the University of Nice-Sophia Antipolis and CNRS-INRIA. Denis is also co-founder and scientific adviser to ActiveEon, a startup dedicated to providing support for Cloud Computing. His interests include parallel, concurrent, and distributed computing, in the framework of Grid and Cloud computing. Denis Caromel has given many invited talks on Parallel and Distributed Computing around the world (including the Jet Propulsion Laboratory, Berkeley, Stanford, ISI, USC, the Electrotechnical Laboratory in Tsukuba, Sydney, Oracle-BEA EMEA, the Digital System Research Center in Palo Alto, NASA Langley, IBM Tom Watson and IBM Zurich, Harvard Medical School in Boston, MIT, Tsinghua in Beijing, and Jiaotong in Shanghai). He acted as keynote speaker at several major conferences (including Beijing MDM, DAPSYS 2008, CGW'08, Shanghai CCGrid 2009, IEEE ICCP'09, ICPADS 2009 in Hong Kong, and WSEAS in Taiwan). Recently, he gave two important invited talks at the Sun Microsystems HPC Consortium (Austin, TX) and at Devoxx (gathering about 3,500 people). http://www-sop.inria.fr/oasis/caromel/

Conference Speakers

Cédric Carbone, CTO, Talend. Cédric Carbone is Talend's Chief Technical Officer (since the creation of Talend) and an OW2 Board Member (since the creation of OW2). He leads the technical team (100 people located in France, the USA and China), sits on the Talend steering committee and the OW2 Board, and is a member of the OW2 Cloud Expert Group and the OW2 BI Initiative. Prior to joining Talend in 2006, he managed the Java practice at Neurones, a leading systems integrator in France. Cédric has also lectured at several universities on technical topics such as XML and Web Services. He holds a master's degree in Computer Science and an advanced degree in Document Engineering.

Pierre Chatel, Thales Communications. Pierre CHATEL is an R&T Software Engineer at Thales Communications who contributes to the CHOReOS FP7 EU project. He graduated from Pierre & Marie Curie (Paris VI) University with a PhD thesis in Computer Science in 2010. During his PhD, he worked in the LIP6 laboratory of Paris VI and, at the same time, in the SC2 laboratory at Thales Land & Joint Systems under a 'CIFRE' grant from ANRT.
There, he had the opportunity to contribute to the SemEUsE ANR project. The subject of his thesis is 'A qualitative approach for decision making under non-functional constraints during agile service composition'. Pierre also taught in the Master of Computer Science program at the University of Vincennes - Saint-Denis (Paris VIII) in the field of distributed computing from 2007 to 2009, and was a research engineer at LIP6 in 2010.

Ludovic Dubost, XWiki. A graduate of PolyTech (X90) and Telecom School in Paris, Ludovic Dubost started his career as a software architect at Netscape Communications Europe. He then joined NetValue as CTO, one of the first French start-ups to go public. He left NetValue after its acquisition by Nielsen/NetRatings and launched XWiki in 2004.

Bruno Dillenseger, France Telecom. Bruno Dillenseger is a computing scientist and engineer. He has been working for the past 18 years in the area of distributed computing middleware. His contributions range from academic papers to code in open source projects. Since 2002, he has been leading OW2's (formerly ObjectWeb) CLIF project, providing a highly adaptable Java framework for load testing.

Clement Escoffier, akquinet A.G. Clement Escoffier is a Solution Architect in the Modular and Mobile Solutions competence center of akquinet AG. He is working on providing complete solutions for the development, deployment, management and evolution of modular applications crossing the IT department boundaries. His interests range from the build process, software quality and tracking management to the deployment and update process of complex systems. He is an Apache Felix PMC member and leads the Apache Felix iPOJO project and the OW2 Chameleon project. He received a PhD in software engineering from the University of Grenoble in 2008.

Rafael Ferreira, eNovance.

Kong Hao, Beijing Software Testing & QA Center. Kong Hao is the project manager of the Open Source Software Lab at BSTQC (Beijing Software Testing & QA Center, a professional third-party testing company which has devoted substantial resources to open source software testing). His main areas are open source software testing, testing tools and testing methods. He has experience in operating system testing, database testing and middleware application server testing, covering SUSE, Red Hat and Red Flag Linux, and he has also co-developed testing tools with other companies.

Petr Hnetynka, Charles University Prague. Petr Hnetynka is an assistant professor with the Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University in Prague. His research specialization includes component-based software systems and service-oriented architecture. He is the co-leader of the SOFA 2 project, which aims at providing a framework for the development and deployment of systems built from components. Recently, he has participated in the ITEA projects OSMOSE and OSIRIS and the European FP7 project Q-ImPrESS.
Christophe is currently working on European Research projects such as SOA4All (http://soa4all.eu) as main Developer and Architect of the large scale distributed SOA infrastructure involving OW2 based middleware technology such as OW2-Petals Enterprise Service Bus and OW2-ProActive Framework. Christophe is also OW2-Petals SOA product family core commiter and focus his technology interests on Distributed/Cloud Computing, Open Source and Java stuff. His next challenge : Leading PetalsLink Cloud activity to enable SOA in the Cloud. Christophe shares his technology life (and more) on http://chamerling.org and on http://twitter.com/chamerling Jeremi Joslin, Developer Evangelist at eXo Platform Jérémi Joslin is a computer science engineer based in France. He is currently a developer evangelist for eXo Platform. Previously, he was the product manager for eXo Social, the company's implementation of the OpenSocial API, and also managed eXo's office in Vietnam. He has also worked on wiki technology at XWiki and prior to that, sports training software at Enora Technologies. Jérémi organized the first Barcamp and OpenSocial Hackathon in Vietnam and has presented at multiple conferences and events such as JavaPolis, Barcamp Paris, cmf2007, Google DevFest in SEAsia and Gtug Shanghai. Jérémi has a master's degree in computer science from Dalian Institute of Light Industry (China) and is a graduate of the European Institute of Technologies. Jan Kofron, Charles University Pragues Jan Kofron is an assistant professor with the Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University in Prague. His research specialization includes specification and verification of software component behavior, and formal verification of software both at the level of models and code. Recently, he has participated in the Component reliability for the Fractal component model project funded by France Telecom and the European FP7 project Q-ImPrESS. Sandro Morasca, Università degli Studi dell'Insubria in Como and Varese, Italy. Sandro Morasca is a Professor of Computer Science at the Università degli Studi dell'Insubria in Como and Varese, Italy. In the past, he was an Associate Professor and Assistant Professor at the Politecnico di Milano in Milano, Italy. He was a Faculty Research Assistant and later a Visiting Scientist at the Department of Computer Science of the University of Maryland at College Park. Sandro Morasca has been actively carrying out research in the Software Engineering field in Empirical Software Engineering, Specification of Concurrent and Real-time Software Systems, Software Verification, and Open Source Software, and has published more than 20 journal papers and 70 conference papers. Sandro Morasca has been involved in a number of national and international projects. He currently is the Leader of the activity related to the trustworthiness of Open Source Software in the QualiPSo project, financed by the European Union. Sandro Morasca has served on the PC of a number of international software engineering conferences and on the editorial board of "Empirical Software Engineering: An International Journal," published by Springer-Verlag. Karl Pauls, akquinet A.G. Karl Pauls is the head of Mobile Applications and OSGi Development at akquinet AG. During the day he is busy with leading projects in the OSGi and mobile application space and at night he is a member of the Apache Software Foundation (ASF). 
With more than six years of experience, he is a long-time OSGi enthusiast, a committer, and a member of the PMC of Apache Felix, the open source OSGi implementation from Apache. He is currently co-authoring the book "OSGi in Action". Véronique Théault, Acpqualife. Véronique Théault is an Associate Director in charge of qualification offers. With 11 years of experience in IT companies, where she held various management and development positions, Véronique Théault specializes in software testing, a challenge and a passion that led her in 2002 to create and run, together with Marc Durupt, the company Qualife, which specializes in the testing professions. Hailong Sun, Beihang University. Hailong Sun is an Assistant Professor with the School of Computer Science and Engineering, Beihang University, Beijing, China. He received his Ph.D. in Computer Software and Theory from Beihang University, and his BS degree in Computer Science from Beijing Jiaotong University in 2001. His research interests include web services, service-oriented computing and distributed systems. He is a member of the IEEE and the ACM. Guillaume Sauthier, Bull S.A.S. Guillaume Sauthier currently holds a senior developer position at Bull, where he has been working on the OW2 JOnAS application server since 2003. He is now responsible for the JOnAS 5 OSGi architecture, ensuring the best use of this technology. Guillaume is also fluent in WSDL and has spoken XML since his involvement in Apache Axis (the first one); he is now a contributor to the Apache Felix, iPOJO and CXF projects, not to mention his daily work on OW2 projects (JOnAS, EasyBeans, ...). OSGi is also on his skills board: he has been a power user of Felix and iPOJO for at least four years. He has been involved in the OW2 community since the beginning (since ObjectWeb, in fact), being part of the technology council, helping on Opal, managing the OW2 Bamboo instance, proposing new tools for OW2, and more. Charles Souillard, Chief Technical Officer and co-founder, BonitaSoft. Charles leads the BonitaSoft product development organization. Charles coordinates with the Bonita community and users to define the product roadmap and is responsible for its execution. Prior to BonitaSoft, he was head of the Bonita core development team within Bull and has significant experience developing critical applications with Business Process Management and Service Oriented Architecture technologies. Charles holds a Master's degree in Computer Science from Polytech de Grenoble (France). Philippe Merle, Senior Researcher, INRIA. Philippe Merle is a senior researcher at INRIA in the ADAM research team. He obtained his PhD in 1997 at the University of Lille. His research covers software engineering and middleware for adaptable distributed applications. He has been involved in OW2 since its first days. He was the president of the OW2 College of Architects (formerly the Technical Council). He is the leader of three OW2 projects: FraSCAti, Fractal, and OpenCCM. Currently, his main involvement is in the OW2 FraSCAti project, which targets the next generation of SOA runtime platforms. Stefano Scamuzzo, Engineering. Stefano Scamuzzo has been working in the IT field since 1989. Initially involved in European research projects on hypertext technology, he then undertook the technical management of complex projects in several technological areas such as document and workflow applications, web-based applications, enterprise portals and business intelligence applications. 
He is presently Senior Technical Manager in the Research and Innovation Division of Engineering Ingegneria Informatica and a member of the SpagoWorld Executive Board, specializing in the domains of Service Oriented Architecture and Business Intelligence with a particular focus on open source solutions. He teaches training courses on Service Oriented Architecture at the Engineering Group ICT Training School in Italy. Wei Wang, Institute of Software, Chinese Academy of Sciences (ISCAS). Wei Wang received his Ph.D. in Computer Science (2010) from the Institute of Software, Chinese Academy of Sciences (ISCAS). Currently he is a research assistant at ISCAS. His area of research is software engineering and distributed computing, with an emphasis on middleware-based distributed software engineering. His interests include adaptive resource management in middleware, highly reconfigurable and manageable middleware architectures, and the validation of such techniques on real systems. Yasha Wang, Peking University. Yasha Wang is an associate professor at the National Engineering Research Center on Software Engineering, Peking University. Prof. Wang received his Ph.D., MS and BS degrees from Northeastern University, Shenyang, China, in 2003, 2000, and 1997 respectively. He started his research work at Peking University in April 2003, first as a postdoctoral researcher, and then as an assistant professor and associate professor. His research interests focus on software engineering, especially component-based software development and software process. He has led or participated in more than 7 national research projects under the National Basic Research Program of China (973), the High-Tech Research and Development Program of China (863), and the National Natural Science Foundation of China (NSFC). He was awarded a National Progress Prize in Science and Technology (level 2) in 2005. He has published more than 20 papers in journals and conferences. Gang Yin, National University of Defense Technology, China. Gang Yin is a researcher at the National University of Defense Technology who received his Ph.D. in computer science in 2006. He has more than 10 years of experience in distributed computing, information security and software engineering. He has participated in, as a key member, or chaired more than 10 projects under the grants of the 973 program, the NSF program and the 863 program. He also serves as the general secretary for a grand 863 project and several international projects. He has authored more than 50 papers in academic journals and international conferences. Dr. Minghui Zhou, Peking University. Dr. Minghui Zhou is interested in research on summarizing system evolution data and improving the understanding and control of such systems. She has long been leading a team working on open source middleware, looking for approaches to support globally distributed development. Currently she is an associate professor in the School of Electronics Engineering and Computer Science, Peking University.
PSP Game Reviews: Frantix Review Frantix Review Overall Rating: 5.6 Online Gameplay: Don't be fooled by its energetic title, Frantix is nothing more than a puzzle game where players have to solve one maze after another. If you're into that sort of intellectual fodder, then, by all means, track it down at the local bargain bin and see if it floats your boat. Everyone else should probably take a pass though, seeing as how a video game focused entirely on the solving of mazes isn't very fun or engaging. This one isn't, at least. The blurb on the back of the game's case describes it as a "character-based 3D puzzle-solving adventure." That's a polite way of saying that you have to guide some poor sap through 185 individual mazes spread throughout six separate worlds. Each maze is a 3D environment populated with crystals, switches, traps, and an occasional monster or two. The whole idea here is that you have to guide the character past the various puzzles and enemies, collect all of the crystals in the stage, and reach the exit as quickly as possible. Make one mistake and you'll have to start the stage over again. Thankfully, most stages take less than 30 seconds to complete once you know what you're doing, and you can re-attempt any stage as often as you like until you're satisfied with your top time. Some of the switch- and door-based puzzles are legitimate brain teasers, which should please the braniacs out there that go for games like this. By the same token, the implementation of pick-up items that allow the character to run faster or pass through enemies adds a zesty twist to the otherwise bland subject matter. Diehard maze fanatics may give up on the game anyway, despite its solid intellectual chops, because the controls are rigid and often unresponsive. The character moves in one-step increments, which takes some getting used to, and there's a very obvious input delay that frequently causes him to take an extra step instead of stopping or turning. It's a real pain to push left on the d-pad, hoping to take a turn, only to watch the character walk directly into a lake. Like most puzzle games, Frantix doesn't offer much in the way of graphics and audio. The 3D environments and characters are nicely rendered, and the animation is smooth, but there isn't much variety. Every stage is basically just a large room decorated with a few plants and pillars, populated with movable blocks and carbon-copy monsters that are re-used to the point of absurdity. The default isometric camera view affords a good look at a major portion of each stage, and you can toggle between three different zoom settings or rotate the camera in order to gain a better look as necessary. Musically, this sure ain't Lumines, but the simple vocal sound effects and jumping techno music do manage to come together into a shockingly appropriate soundtrack. For some inexplicable reason, the 2002 Academy Award-winning animated short, The ChubbChubbs! is included as an extra on the disc. It's a charming cartoon and worth a look, although it doesn't bear any relevance to the characters or events in the video game. All in all, Frantix just doesn't quite work. Seriously, ask yourself "do I find mazes fun?" For most people, the answer is no. Worse, the few diehard maze fanatics out there that actually might want a game like this will probably be turned off by its dodgy controls. Frantix... the maze-oriented puzzle game that you have to be hell-bent on liking to enjoy. 1/12/2006 Frank Provo
Risk Management. Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most. The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today, more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk. Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission. The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has applied the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs. Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials. The MSCE work on mission risk analysis (top-down, systemic analyses of risk in relation to a system's mission and objectives) is better suited to managing mission risk in complex, distributed environments. 
These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts.

Spotlight on Risk Management:
- The Monitor, June 2009
- New Directions in Risk: A Success-Oriented Approach (2009)
- A Practical Approach for Managing Risk
- A Technical Overview of Risk and Opportunity Management
- A Framework for Categorizing Key Drivers of Risk
- Practical Risk Management: Framework and Methods
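To make the driver-based analysis described above more concrete, here is a minimal sketch of how driver ratings might be rolled up into an overall gauge of mission risk. The driver names, the rating scale, and the simple averaging rule are all assumptions made for illustration; they are not the SEI's published MRD scoring procedure.

```python
# Minimal illustration of driver-based mission risk analysis.
# NOTE: the driver names, the rating scale, and the averaging rule are
# assumptions for this sketch, not the SEI's actual MRD scoring method.

from dataclasses import dataclass

# Assumed rating scale: likelihood that the driver is in its "success state".
RATINGS = {"almost certain": 0.95, "likely": 0.75, "equally likely": 0.5, "unlikely": 0.25}

@dataclass
class Driver:
    name: str    # systemic factor with a strong influence on the objectives
    rating: str  # qualitative judgement supplied by the analysis team

def mission_risk(drivers):
    """Return the overall success likelihood and the drivers sorted worst-first."""
    scored = [(d, RATINGS[d.rating]) for d in drivers]
    overall = sum(score for _, score in scored) / len(scored)  # naive average, illustration only
    worst_first = sorted(scored, key=lambda pair: pair[1])
    return overall, worst_first

if __name__ == "__main__":
    drivers = [
        Driver("Objectives are realistic and well understood", "likely"),
        Driver("Plan is sufficient to achieve the objectives", "equally likely"),
        Driver("Staff has the needed skills and experience", "unlikely"),
    ]
    overall, worst_first = mission_risk(drivers)
    print(f"Overall success likelihood: {overall:.2f}")
    for driver, score in worst_first:
        print(f"  {score:.2f}  {driver.name}")
```

In a real assessment the drivers would be weighted, each rating would carry a recorded rationale, and the lowest-rated drivers would be flagged as the areas needing attention first.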
From CrossWire Bible Society The SWORD Project In short, the SWORD Project is an effort to create a software platform for research and study of God and His Word. The open source model is the basis of development, maximizing the rapid growth and features of this project by leveraging the contributions of many developers. Components of the project include all types of Biblical texts and helps, a portable, platform-agnostic engine to access them, and a variety of front ends to bring this to as many users as possible. What is open source software? Open source software, as defined by the Open Source Definition, is software under a license that permits free distribution, requires inclusion of source code, and permits creation of derivative works. The SWORD Project is licensed under the GNU General Public License (GPL), an open source license approved by the Open Source Initiative (OSI). When software source is licensed under the GPL license, its source code must be available as well, so that any user of the product can alter it, recompile it, and reconfigure it. But any changes, fixes, or upgrades that are made to that source code must be made available to the public in source code form. Anyone who buys the product is free to recopy, alter, or redistribute the product as well, so long as the GPL is followed. With God's blessing, this project will incorporate the talents of all developers wishing to contribute their skills in the development of a software utility that will enable all people to come to a fuller understanding of God's Word and Will. So who is the leader of the SWORD Project? There are many modularized components of this project, each with their own leaders. Each leader is free to take the direction that they see fit to accomplish their goals, being held accountable only to fellow developers, users, and of course God. This is a collaboration of the masses — many parts of the Body working together for a common purpose. Our hope is that God, through the Holy Spirit, is ultimately the leader of the SWORD Project and each of its subprojects. Retrieved from "http://www.crosswire.org/wiki/Purpose_Statement" Category: CrossWire Personal tools About CrossWire Bible Society
Microsoft loophole mistakenly gives pirates free Windows 8 Pro license keys. Looking for a free copy of Windows 8 Pro? An oversight in Microsoft's Key Management System – made public by Reddit user noveleven – shows that with just a bit of work, anyone can access a Microsoft-approved product key and activate a free copy of Windows 8 Pro. The problem is in the Key Management System. Microsoft uses the KMS as part of its Volume Licensing system, which is meant to help corporate IT people remotely activate Windows 8 on a local network. The Achilles' heel of the setup, according to ExtremeTech, is that you can make your own KMS server, which can be used to partially activate the OS. That approach requires reactivation every 180 days, though, so it's not a practical system. However, the Windows 8 website has a section where you can request a Windows 8 Media Center Pack license. Media Center is currently being offered as a free upgrade until Jan. 31, 2013. Supply an email address and you'll be sent a specific product key from Microsoft. If you have a KMS-activated copy of Windows 8, with or without a legitimate license key, then going to the System screen will display a link that reads "Get more features with a new edition of Windows." If you enter your Media Center key there, the OS will become fully activated. It's a little surprising that with Microsoft's complex KMS, this type of thing could slip through the cracks, allowing people to take advantage of the system. It seems most likely that after the uproar in response to Microsoft's plans to remove Media Center from Windows 8 Pro, the company may have rushed the free upgrade, resulting in a loss for Microsoft and a gain for anyone who takes the time to acquire a free Windows 8 Pro copy. It's unclear whether or not there's a patch for this – other than removing the free Media Center download altogether. Though ending the free Media Center upgrade would be an easy fix, it wouldn't be a popular choice among customers who just bought Windows 8 computers and want the feature. We'll have to wait and see how the company responds to this latest hit.
What is beta testing? By Computer Music, Tech The ins and outs of pre-release bug squishing Shares "A bug infestation y'say? That's gonna cost ya..." When you hear that a new or updated piece of software that you're interested in is 'in beta', your immediate reaction is usually a positive one: 'Great - it's nearly ready to be released!'Once beta testing has begun, it's easy to assume that the hard programming yards have been covered and now it's just a case of dotting the 'i's and crossing the 't's. Is this really the case for every company, though? Some products seem to be in beta for just a few weeks, while others remain in this state for months - maybe more! In some cases the beta process is shielded from users, with the public only hearing about the product when it's ready for release; but in others, the great unwashed are asked to test several different versions.Also, how do developers know when beta testing should stop? Can a product be declared 'finished' once it's out of beta, or does this just mean that it's stable enough to be sent to market?To beta or not to betaLet's try to define exactly what a company means when it says that its software is 'in beta'. Scott Fisher is Communications and Marketing Manager at Image-Line, developers of FL Studio. How would he define it?"I suppose we would contrast beta with alpha," Fisher says. "In alpha, the software is still under development, is likely to have structural changes and is tested by perhaps ten to 20 people. We go to beta when we have found most of the bugs we can with the alpha group and the software is basically in release form."At this stage, it is far less likely to have any structural or design changes, but we are always open to that possibility. This phase is mainly about bug-hunting based on the wider testing and gathering feedback on workflow issues."Jonathan Hillman is Product Manager for PreSonus's Studio One. His view is that beta software "is a version that is ready to be evaluated beyond our internal group. At that point, to our knowledge, everything works as intended. The scope of the release is more or less proportional to the length of time we spend in beta; that is, a larger release requires more extensive beta testing."This is confirmed by Ohm Force, which is putting the finishing touches to its long-awaited, collaboration-friendly DAW Ohm Studio - the beta phase is currently in full effect."Ohm Studio actually needs more beta testing than other sequencers because you need to also test the server loads, bandwidth, storage and features such as collaborative freeze that deal with non-shared plug-ins with a handy bounce. That means not only a beta, but a public one.""Public testing can begin when a product has reached a certain level of quality but now needs larger-scale feedback." Ohm ForceAh yes, the public beta testing programme - this has become increasingly common in recent years, particularly as the internet has made it easier for users to get hold of software and submit feedback on its performance. What are the benefits of public beta testing versus in-house/private evaluation, though?"Public beta testing involves a huge quantity and variety of systems, which is very desirable," says Hillman. "However, the quality of feedback tends to be inversely proportional to the size of the beta group. There's also the obvious disadvantage of showing your hand to the competition, which matters when you are innovating." 
There may be pros and cons to public beta testing, then, but Image-Line's Scott Fisher is of the view that the positives far outweigh the negatives - both for developers and users."We adopted a strategy of public beta testing many years ago, simply because it means that when we do launch software formally, it's been pretty thoroughly tested by tens of thousands of people. We also have a core of customers who love to get their hands on our latest products as soon as possible. It's a win-win really."Is it the case, though, that customers are constantly being asked to use software that isn't finished? Scott Fisher doesn't think so."Using the beta is entirely optional," he says. "We always direct customers to the latest official release for time- and mission-critical work. We do our best to keep the two separate in our customers' minds."Ohm Force makes the point that if you're going to send your beta out for public testing, you need to ensure you don't do it too early in the process."If we had given access [to Ohm Studio] to anyone at the early stage of development, most people would have been disappointed, to the extent that frustration caused by technical failure would have been greater than frustration caused by staying on the pending list. Public testing can begin when a product has reached a certain level of quality but now needs larger-scale feedback."Knowing when to stopJust as importantly, how do you know when beta testing should end? "We set and track objectives for releases: address certain issues, expand or add features, rethink an interface element, and so on," says PreSonus' Jonathan Hillman."When the objectives are met and stability is solid in our in-house tests, we enter beta. When stability is reported as solid within the beta group, and we're satisfied that no workflow issues have come up, we're finished."That sounds pretty clear, but 'finished' can be a tricky concept in software development, as Ohm Force confirm."The truth is that Ohm Studio, like any DAW, will never be finished. Remember Ableton Live 1.0: it had a really innovative feature, but was also missing other big ones - it had no MIDI instruments at all! Many features that we consider essential were added in later versions."Well, the same goes with Ohm Studio. As of today, no one offers a collaborative solution comparable to ours. While that and other aspects of Ohm Studio will be improved over the years, some people are very happy to use it already. That's one key marker – not for saying it's 'finished', but for the release. The other marker is having the package of features and bug fixes without which selling Ohm Studio would feel unacceptable."What should a company do if they've reached a set deadline for the end of beta testing and know that there are still problems to be fixed? Should the beta period be extended, or is it better to release first and iron out the kinks later?"Deadlines are man-made, so they are always open to being changed," says Scott Fisher. "If it's not ready, it's not shipped. On the other hand, software will always have bugs. Every developer knows there will be issues no matter how thorough the beta test was."In other words, you can beta test for as long as you like, but there will inevitably be some bugs that slip through the net. 
The important thing is that any issues that do remain aren't fundamental to the way the software works and performs."If those issues have the potential to affect stability or functionality for a large number of users, no, [the software] should not be released," says Jonathan Hillman. "At the end of the day, users correctly value stability over everything." This article originally appeared in issue 182 of Computer Music magazine.
National Archives for Developers Digital Gov Strategy Agency Milestones Open Data Policy High Value Datasets Citizen Archivist Social Media at NARA Other Developer Hubs The National Archives promotes the innovative application of agency data in public and private sectors. Archives.gov/developer connects citizen developers with the tool they need to unlock government data. Have suggestions, ideas, or questions? Please give us feedback about these resources at the US National Archives GitHub account. GitHub Application Programming Interfaces (APIs) Datasets Crowdsourcing Tools Digitization Software Tools The National Archives on GitHub Makes available code related to the work of the U.S. National Archives. The Federal Register on GitHub Makes available code related to the Federal Register. Learn more about resources available to develoeprs on the Federal Register developer hub. Application Programming Interfaces (APIs) National Archives Catalog API The National Archives Catalog API is a read-write web API for the online catalog for the National Archives. This API can be used to perform fielded search of archival metadata, bulk export of metadata and digital media, and post contributions to records. The dataset includes archival descriptions, authorities, digital media, web pages, and public contributions (such as tags and transcriptions). The Federal Register API FederalRegister.gov is a fully open source project. The source code for the main site is available on GitHub, as well as the chef cookbooks for maintaining the servers, and the WordPress themes and configuration. Executive Orders from 1994 to 2012 The President of the United States manages the operations of the Executive branch of Government through Executive orders. After the President signs an executive order, the White House sends it to the Office of Federal Register (OFR). The OFR numbers each order consecutively as part of a series, and publishes it in the daily Federal Register shortly after receipt. This data is available as as an interactive dataset and API through Data.gov. Digital Public Library of American (DPLA) API The Digital Public Library of America (DPLA) is a universal digital public library, providing a single online access point for digital collections containing America's cultural, historical and scientific heritage. The National Archives participates as a leading content provider and has contributed 1.9 million digital images to the DPLA, including our nation’s founding documents, photos from the Documerica Photography Project of the 1970’s, World War II posters, Mathew Brady Civil War photographs, and a wide variety of documents that define our human and civil rights. The DPLA API allows you to build applications and tools for enhanced learning and content discovery. More information available at: http://dp.la/info/developers/ Flickr API The National Archives has made more than 10,000 images of records available on Flickr. As a participating institution in the Commons on Flickr, the National Archives makes available images of documents, photographs, and other records with no known copy restrictions. These records can be accessed through the Flickr API. Datasets Code of Federal Regulations This dataset contains the Code of Federal Regulations (CFR) in XML format. The CFR is the codification of the general and permanent regulations of the Federal Government published in the Federal Register. Federal Register This dataset contains the daily Federal Register in XML format. 
The Federal Register is the official legal newspaper of the United States Government.
Archival Descriptions from the Online Catalog: This dataset contains information on the permanent holdings of the Federal Government in the custody of the National Archives, in XML format.
Organization Descriptions from the Online Catalog: This dataset contains the organization descriptions from the Online Catalog, in XML format.
United States Government Manual: The U.S. Government Manual is the official handbook of the U.S. Government, available in XML format.
Public Papers of the Presidents of the United States: This dataset contains the official public Presidential writings, addresses, and remarks, in XML format.
Executive Orders of the Presidents of the United States: This dataset contains the official documents through which the President of the United States manages the operations of the Federal Government, in CSV format, and also as a combined interactive dataset and API.
Crowdsourcing Tools
Transcribr: This Drupal distribution includes all modules and themes required to emulate the National Archives Transcription Pilot Project, which allows the public to transcribe historical documents and make them more accessible.
Digitization Software Tools
AVI-MetaEdit: This software gives you the ability to perform various kinds of metadata editing on AVI files. You can use the tool to embed, edit, import, and export metadata. The tool is made available on GitHub.
File Analyzer: The File Analyzer performs filename validation and statistical analysis of file data such as checksums and file sizes.
MediaInfo: This tool offers a GUI to display stream information for video and audio files. It also provides customization of data display and export formats.
Video Frame Analyzer: This software automates the quality control process for digitized video files. It also provides analysis of video frame-level metadata.
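For developers getting started with these resources, the sketch below shows one way a citizen developer might query the Federal Register API mentioned above from Python. The endpoint path, parameter names, and response fields used here are assumptions based on that API's commonly documented conventions; check the Federal Register developer hub for the authoritative reference.

```python
# Sketch: search recent Federal Register documents for a keyword.
# The endpoint and parameter names below are assumptions based on the
# Federal Register API's documented conventions; verify them against the
# official developer documentation before relying on this.

import requests

API_URL = "https://www.federalregister.gov/api/v1/documents.json"  # assumed endpoint

def search_documents(term, per_page=5):
    """Return up to `per_page` of the newest documents matching `term`."""
    params = {
        "conditions[term]": term,  # full-text search term (assumed parameter name)
        "per_page": per_page,
        "order": "newest",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for doc in search_documents("National Archives"):
        print(doc.get("publication_date"), "-", doc.get("title"))
```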
Developer Diaries > Pirate Empires: Part 2 - Give 'Em the WIP Pirate Empires: Part 2 - Give 'Em the WIP 30th March 2009 � Give 'Em the WIP Some J Mods having a sea battle.Click here to see a larger version of this image. Recently, I have been working on getting islands to render properly during sea battles (where islands can be quite large) and in the game's world map (where they can be quite small). It's quite tricky to do with a game of this size, as I have to make sure it all works on older computers as well as reliably over a network. It means we end up doing a lot of our calculations in fixed-point arithmetic. Fixed-point arithmetic means that we use integers (whole numbers) instead of floating-point numbers. Although floating-point numbers allow for more accuracy and can represent fractions easily, they can be slower for the computer to calculate, lose precision and be unpredictable from one computer to the next. So, the key question we have to answer is: how big is 1? It might be okay for 1 to represent 1cm in sea battles, but if we used that for the world map game, we might only be able to have a game world that�s 1km square - clearly not enough space for a pirate to make a living in. If you make the scale too big, though, animations and camera movements get very choppy. It feels kind of like I've been trying to press bubbles out of wallpaper this week! Just the other day, I put the game on our internal 'work-in-progress' (WIP) version of FunOrb in order to show it to a few colleagues. I actually only intended to show it to one 'guinea pig', but when there's a new game working on someone's screen, the whole department can get a little excited. Things quickly escalated into a big battle, as other members of the FunOrb development team jumped in the game to try it out! A very early build of the tavern (with placeholder graphics).Click here to see a larger version of this image. It is always helpful to see new people playing a game you're working on - it helps you to see the things that are wrong with it, which you have so far been blind to. Even simple things like if you see someone struggling to use your interface can help you to improve the quality of the final game. This 'small' test has given me and Mod Dunk a veritable hoard of little things to fix and improve with sea battles. We have quite a way to go with this game before it meets up to our standards, but we are making progress. We've also been doing some work on the ports. We have been populating the taverns with a range of sailors - everything from stalwart bonny tars to scum-of-the-earth cut-throats - all of whom you'll be able to recruit for your ship! And we've been making sure that the booty you store in your ship's hold is as 'organised' as every self-respecting pirate would have it: in big piles! (Which reminds me somewhat of my desk...) Mod WivlaroFunOrb Developer(Current grog level: low)
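The scale trade-off Mod Wivlaro describes, deciding how much world distance the integer value 1 should represent, is easier to see in a small example. The following sketch is a generic illustration of fixed-point arithmetic, not code from the FunOrb engine; the 1/256-unit scale factor is an arbitrary choice for demonstration.

```python
# Generic fixed-point arithmetic sketch (not FunOrb's actual code).
# SCALE decides "how big 1 is": here one world unit is split into 256 steps,
# so positions are stored as plain integers.

SCALE = 256  # example scale; a larger SCALE gives finer movement but a smaller world range

def to_fixed(value):
    """Convert a floating-point world coordinate to fixed point."""
    return int(round(value * SCALE))

def to_float(fixed):
    """Convert back, e.g. for display only."""
    return fixed / SCALE

def fixed_mul(a, b):
    # The product of two scaled integers carries SCALE twice, so divide once.
    return (a * b) // SCALE

if __name__ == "__main__":
    ship_x = to_fixed(12.5)   # stored as the integer 3200
    speed = to_fixed(0.75)    # stored as the integer 192
    ship_x += speed           # plain integer addition, no rounding drift
    print(to_float(ship_x))   # 13.25
```

Because every machine performs the same integer operations, positions stay bit-for-bit identical across the network, which is exactly what makes fixed point attractive for a networked sea battle.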
Chinese team mistakenly released unpatched IE7 exp... Chinese team mistakenly released unpatched IE7 exploit Internet Explorer 7 flaw means a computer could be infected with malicious software merely by visiting a Web site. Jeremy Kirk (IDG News Service) on 12 December, 2008 08:06 iDefense said in a note that the vulnerability is "really nasty" and that computer security professionals could be in for a rough ride. Microsoft issued its biggest group of patches in five years on Tuesday, and is not due for a regular patch release until Jan. 13, although it could opt to do an emergency release. "Chances are this will be unpatched for around about a month, and that leaves plenty of time for attackers to take advantage," said Toralv Dirro, a security strategist based in Germany for McAfee's Avert Labs. "This should be taken pretty seriously." iDefense said there aren't many options for users to defend themselves, but there is an easy one. The SANS Institute, which runs computer security training courses, recommended that people use a browser other than Internet Explorer. In an advisory, Microsoft said users should put IE7 in "protected" mode, which causes warning prompts to appear if something tries to change system files or settings. But that protected mode is only available to users running Windows Vista. Another mitigating factor is the default security level setting for IE7 running on Windows Server 2003 and Windows Server 2008. It is set to "high," which blocks file downloads, Microsoft said. Generally, administrators should not browse the Web from the server. Toralv said it could be tough to get the Internet Service Providers hosting the dodgy Web sites to take them offline, since the process is time intensive and service providers can be slow to respond. The IE vulnerability compounds what looks to be a tough month for Microsoft, with the publication of another 0-day vulnerability in Microsoft's WordPad application earlier this week. That problem is somewhat less severe since a user would have be tricked into opening a maliciously-crafted document attached to an e-mail. It also does not affect computers running Windows XP Service Pack 3 and Vista. It does, however, affect Windows 2000 Service Pack 4, Windows XP Service Pack 2, Windows Server 2003 Service Pack 1 and Windows Server 2003 Service Pack 2, according to Microsoft. Tags internet explorer 7
Contact Advertise Where Is WinFS Now? posted by Thom Holwerda on Sun 18th May 2008 12:59 UTC, submitted by Adam S Back when Windows Vista was still known as Windows Longhorn, the operating system contained a very interesting and promising feature, a feature promoted as one of the 'pillars' of Longhorn: WinFS. WinFS was a storage subsystem for Windows, based on a relational database, that could contain whatever data you wanted to put in it. Thanks to the relational properties of the database, you could then create relationships between data, or let the computer do that for you.WinFS would allow programmers to do all sorts of wild and radical things with data in their applications, and back in 2003, during the Professional Developers Conference, Microsoft showed a video demonstrating what could possibly be done with WinFS. This video, called IWish [.wmv], can still be downloaded from Microsoft's website. Sadly, the video only showed what you could do with WinFS in the future, because the version of WinFS that shipped with the Longhorn test releases back then was, well, disturbingly not useful. It did not do a whole lot of useful stuff back then, and to make matters worse, it was the world's worst resource hog ever. Disabling WinFS would turn your Longhorn build from a crippled snail into a fairly usable operating system - testament to its yet unoptimised nature. We all know what happened to WinFS after its audience-woowing days of 2003. It was first slated for release after the final release of Windows Vista, as an add-on, but later on, in 2006, it was cancelled altogether. Parts of WinFS ended up in Microsoft SQL Server and ADO.Net, but it would no longer be delivered as a stand-alone product or a Windows component. Since then, we haven't heard a whole lot on whatever happened to WinFS and its associated technologies - that is, until a few days ago. Jon Udell has published an interview with Quentin Clark, who led the WinFS team from 2002 until 2006, when he joined the SQL Server team as a general manager. One of the burning questions that rang through forums everywhere was whether WinFS was a filesystem or not. Clark explains that it depends on your viewpoint: People would often ask me if WinFS was a file system, and I'd struggle with the answer to that, because, well, you know, from a certain standpoint the answer is yes. The stuff I saw in the shell, was it in the WinFS filesystem? Well, OK. But there are no streams inside the database. So from a user perspective, those files were "in" the filesystem. But from an API perspective it was more nuanced than that. I could still use the Win32 APIs, get some file, open it, and from that point forward the semantics were exactly like NTFS. Because it was NTFS at that point. Clark explains that you can look at SQL Server 2008, ADO.NET, and VS 2008 SP1 today and trace its lineage right back to WinFS. The schemas of WinFS and the required technology, which were used to store properties of objects, were shelved, because they are not needed any more. The WinFS APIs are now part of ADO.Net as the entity framework. Finally, "What's getting delivered as part of VS 2008 SP1 is an expression of that, which allows you to describe your business objects in an abstract way, using a fairly generalized entity/relationship model." Clark details a whole set of new features and ideas that try to marry the database world with the filesystem world, and how they are trying to do this in a step-by-step fashion. 
According to Clark, this is exactly what was lacking during the Longhorn days of WinFS. "That's kind of where we got tripped up in the Longhorn cycle. We were building too much of the house at once. We had guys working on the roof while we were still pouring concrete for the foundation." Clark remains hopeful about the future of integrated storage. "But I do at some point want to see that place in my heart fulfilled around the shared data ecosystem for users, because I believe the power of that is enormous," he explains, "I think we'll get there. But for now we'll let the concrete dry, and get the framing in place, and then we'll see how the rest of the house shapes up."
Top 10 Tumblr Music Sites Kick out the jams with these cutting-edge music blogs Whether you’re chasing down the next hot band, an old favorite song or an interesting twist on a musical genre, you can find it on Tumblr. The microblogging platform has become a beacon of creatively curated sites covering every angle of finding, enjoying and dissecting songs and artists. But with so much good stuff to choose from, how can you find the best Tumblr music sites for your specific sonic fix? We put our ears to the ground and came back with this guide of the top 10 Tumblr music sites for diehard fans. Make some noise Copycats Out of the top 10 Tumblr music sites listed, Copycats provides one of the most interesting paths to discovering “new” music. The site’s content consists exclusively of artists covering other artists, remixes and mash-ups. Recent posts included Gotye and The Little Stevies covering Paul Simon’s “Graceland” and an inspired mash-up of Outkast/White Stripes’ “Blue Orchid.” FreeIndie Every few days, FreeIndie posts three perfectly legal downloads from an independent artist they think might interest their readers. These guilt-free pleasures are just the thing to jazz up the soundtrack of your life. (Recently featured band Tiger Waves would be perfect for your “lying out by the pool” mix.) 2N Pronounced “tune,” this site isn’t quite as prolific as FreeIndie (only one song posted per week), and the focus is more about simple exposure rather than providing a free download. But 2N consistently posts deserving songs. It may be something new or old, highly regarded or completely under the radar; the only rule is that it’s a song worth listening to. Even accounting for subjective tastes, 2N nearly always hits the mark. If you’re looking to get hooked on 2N, recent postings from Grimes, LCD Soundsystem, Sharon Van Etten, College and Phantogram should do the trick. One Week One Band 2N and FreeIndie can 2000 get you started with a few new tracks, but if you want the full backstory on a band or artist you’ve just discovered, One Week One Band is the Tumblr for you. Each week, a trusted music aficionado will showcase an artist or musician that she or he feels is important for you to discover. It may be someone you’ve never heard of, or it could be an eye-opening history lesson involving a musician that you’ve known and loved for years. Private Noise Tumblr is as much about social networking as it is about blogging, and Private Noise is the perfect example of that. This site features “person on the street” photos of people listening to music, followed by a short interview with the subject of the photo explaining what they’re listening to and why, along with a link to the song they named. Rock & Roll Tedium This site collects normal people’s tales of banal, asinine run-ins with famous rock stars. Don’t worry; it’s funnier than it sounds. And, it totally disproves the notion that rock starts are doing crazy, wild things every second of the day. Sometimes, Thom Yorke just goes for a jog. Break Up Your Band Shockingly for those of us who grew up in the era, the 90s are back in style; and, writer John Frusciante (The Onion News Network, Cracked.com) does a fine job with this Tumblr dedicated to highlighting the best, worst and weirdest moments in 90s music history. Oh, and for all you Red Hot Chili Peppers fans out there: No, it’s not that John Frusciante. 
Lastly, for top genre-specific music Tumblr sites, give these a spin: Hip-Hop Cassette: Great hip-hop tracks, new and old, to keep your head ringin’. Both Kinds of Music: “Both kinds” refers to Country & Western, and as this site likes to point out, “This ain't your Dad's country music. It's your Granddad's!” Think Waylon, Willie, Hank and Johnny. Holy Soul: The name says it all: a digital bible of the greatest soul music ever recorded. If you visit one Tumblr-music site today, make it this one. Have music, will travel The best thing about all the new music you’ll discover through these Tumblrs is that you can load it all on your favorite mobile device and take it anywhere you go. Just make sure your mobile security is up to date so that malware and viruses don’t bring your Tumblr-inspired dance party to a screeching halt. By Jamey Bainer
Pinyin input method. Screenshot of SCIM's Smart Pinyin. The pinyin method (simplified Chinese: 拼音输入法; traditional Chinese: 拼音輸入法; pinyin: pīnyīn shūrù fǎ) refers to a family of input methods based on the pinyin method of romanization. In its most basic form, the pinyin method allows a user to input Chinese characters by entering the pinyin of a Chinese character, after which the system presents the user with a list of possible characters with that pronunciation. However, there are a number of slightly different such systems in use, and modern pinyin methods provide a number of convenient features.

Contents:
1 Advantages and disadvantages
2 Elements and features
2.1 Conversion length
2.2 Treatment of tones
2.3 Treatment of extended Latin characters (ü and ê)
2.4 Treatment of hm, hng, ng, n
2.5 Usage statistics and user dictionaries
2.6 Abbreviation
2.7 Fuzzy pinyin
2.8 Word prediction
2.9 Double pinyin
2.10 Typo correction
2.11 Language mixing
3 Implementations

Advantages and disadvantages
The obvious advantage of pinyin-based input methods is the ease of learning for Standard Chinese speakers. Those who are familiar with pinyin are able to input Chinese characters with almost no training, compared to other input methods. For people who do not speak Chinese, the main advantage of pinyin becomes its disadvantage: they will need to learn the Standard Chinese pronunciation of characters before they are able to use this input method. However, since all children in Mainland China are required to learn pinyin in school, pinyin is in fact very popular there. Unlike stroke-based input methods, the pinyin method only requires the user to know how to speak Mandarin and be able to recognize the characters. It does not require the user to be able to construct the character from scratch as one would do in writing Chinese. This is both an advantage and a disadvantage. It is an advantage in that people will be able to type all the characters they can recognize. It is a disadvantage in that it may cause language attrition and skill loss in adults, and it may be a learning barrier for written Chinese in children.[1]

Elements and features
Pinyin input methods differ in a number of possible aspects. Most pinyin input methods provide convenience features to speed up input. Some of these features can speed up typing immensely.

Conversion length
The basic idea of an input method is to have a buffer that holds the user input until it is converted into characters that would otherwise be unavailable from the keyboard. In the most basic systems, one character is converted at a time. This makes for a very time-consuming input process: not only must the user stop after every syllable to pick the intended character from a list of homophones, but the system also has little context from surrounding characters to help narrow down the candidates.
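A toy converter makes the buffer-and-candidate-list model concrete. The sketch below is a deliberately minimal, single-syllable converter of the kind described above; the three-entry dictionary is a placeholder invented for illustration, and real input methods ship dictionaries with tens of thousands of words plus usage statistics and multi-syllable context.

```python
# Toy single-syllable pinyin converter illustrating the buffer + candidate list.
# The dictionary is a tiny placeholder; no real input method is this small.

CANDIDATES = {
    "ma": ["吗", "妈", "马", "码", "骂"],
    "shi": ["是", "时", "事", "十", "世"],
    "zhong": ["中", "种", "重", "众"],
}

def convert(buffer: str) -> list[str]:
    """Return the candidate characters for the pinyin currently in the buffer."""
    return CANDIDATES.get(buffer, [])

if __name__ == "__main__":
    buffer = "shi"                 # the user has typed s-h-i; nothing is committed yet
    candidates = convert(buffer)
    print("Candidates:", " ".join(f"{i + 1}.{c}" for i, c in enumerate(candidates)))
    chosen = candidates[0]         # the user presses 1 to commit the first candidate
    print("Committed:", chosen)
```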
If Necessary Blogging But Not Necessarily Blogging Everyone has the following fundamental freedoms: (a) freedom of conscience and religion; (b) freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication; (c) freedom of peaceful assembly; and (d) freedom of association. He is "entitled to his entitlements" If you were as generally loathed as Harper is, I wouldn't let him fly commercial either. But the Conservatives led by Harper made great hay out of Don Dingwall's unfortunate English usage. The Cons were successfully able to paint him as nickel and diming the taxpayers through expense claims. The same accusation can't be leveled at Harper. He isn't a hypocrite in the sense that he made expense claims for small personal amounts. It is hypocritical to have railed against Liberal expenditures and then connived to underpay for personal flights on government jets.Recommend this Post Constant Vigilance A Benefit of a "False Majority" Lots of pearl clutching regarding how the Liberal majority isn't based on a majority of the popular vote. While that may be the case and relevant for a discussion of electoral reform, I find this story to be an indication of how probable a constitutional crisis would have been if there were a Liberal or NDP minority victory. If he’s set on swearing in his new cabinet as planned next Wednesday, Justin Trudeau may have to do something he likely thought had dropped off his to-do list forever: namely, call on Stephen Harper to resign — not publicly, necessarily, and with the greatest possible respect for the outgoing leader, but definitively.Or, if he has indeed done so, make a public announcement to that effect.Because at the moment, it doesn’t appear that Harper has formally served notice to Governor General David Johnston — or anyone else — that he will voluntarily cede power to the incoming Liberal government next week. No official notice has been released to the media, or posted to the Rideau Hall website, nor has Harper’s office issued a statement confirming that he will resign. Yes, yes, after the non-resignation story was published, there was a less than definitive commitment to resign: Shortly after this story went out, the governor general’s senior communications advisor Marie-Eve Letourneau got in touch to say that, “in keeping with Canadian practice,” Harper “signified his intention to resign when he visited the Governor General at Rideau Hall immediately following the election,” although he won’t formally do so until Nov. 4, “just prior to the swearing-in of the new ministry.”She also said that the governor-general met with Trudeau following the election as well.What we still don’t know, however, is why the process has been conducted in such a clandestine fashion, without even an after-the-fact advisory that these meetings had taken place. There is also some uncertainty around whether that secrecy is, as Letourneau put it, “in keeping with Canadian practice.” It would be a hypothetical bet, but if Harper lost to a minority, who would put money on him actually resigning without a messy fight (or at least a hissy fit). The quirkiness of our system might have saved us from a big problem.Recommend this Post
Home My Page Q&A Topics Design & Code DevOps Enterprise Lean & Kanban People & Teams Planning Process Release Management Requirements Scrum Testing Transition Resources Articles Better Software Magazine Books Guide Conference Presentations Interviews Tools & Services White Papers & Downloads Events Conferences Training Virtual Conferences Web Seminars Jobs Mobile Development and Aggressive Testing: An Interview with Josh Michaels [interview] By Jonathan Vanian - March 7, 2014 Share URL JM: The first is that as I build I test aggressively, meaning that I try to not leave a feature until I've really just fully evaluated it. Because I'm writing the code and testing it, I'm able to really think concretely about the testing matrix and where there's likely to be problems and where there isn't likely to be problems. I try as aggressively as I can when I build a feature to right then test it as thoroughly as I possibly can as one individual. Now, the challenge is that very often I'll release this stuff and people use it in ways that I don't even think about. Maybe they'll set a couple of settings and I'm like why would you ever want to setting A this way but setting B and C this way? Sure enough people come up with a reason to do it and that's where I can't test it all because there's going to be lots of different ways to set things up that are going to be ways that I don't even think of and that's where I really depend upon the fans and customers who really love the product who function as beta-testing group. When people contact me who are like, "Hey, I love your app," I always like to offer up, "Hey, do you want to join the beta testing group?" Then when I've got a new release coming out I'm able to contact them on and say, "Hey, you can try it before anyone else." Fans love that. Fans love to get a chance to try it before anyone else. What that does for me is get a lot of people using it in different ways that aren't the ways I'm going to think about, that aren't the ways that are in my list of tests that I have to do before I ship. I would say I rely very heavily on the fans of the product who I give very early copies of the app too and who give me feedback on where there are problems both from a technical point of view, bugs, things that aren't working, but also usability. I couldn't figure this out, you said this feature is there and I don't know where it is. Where did you put this feature? JV: Were these ever disgruntled fans maybe who had specific problems like, "How dare this thing doesn't show up at this point?" JM: I would say half the time these interactions start from disgruntled fans who are writing in to complain about something and it's something that I can't necessarily fix. Airplay is a perfect example of that. When I am able to make improvements or when I come up with like, "Oh crap, I didn't think about it. I could do it this way and that would be a little bit better," I always like to jump to those and say, "Hey, try this out. It's not what you asked for but it's a little bit closer." They're always enthusiastic to see any amount of progress towards what they want as fans of the product. It took me a little while to realize that when somebody's angry about the product, that passion is just as much love as anything else. It's just they're frustrated, but the fact that they're that into it shows that they really care. JV: It's like voting. The act of voting shows that there's some caring. JM: The dude showed up and made a complaint. That's doing a lot. 
JV: This is when I first heard you speak at the Mobile Web Development conference here in San Francisco and you were talking a lot about the reviews and how that affects you, so this is a good segue to get into that. It's like confronting the people who are having the negative reactions immediately but then trying to get them to go to help you and join your side. JM: At the end of the day when somebody contacts me who's angry, my end goal is not only have them leave happy but also have them leave more of a fan than they came in. If I can get them on the way out the door to go write a review, they're going to write a glowing review because they just had a great personal interaction with me and that's really, really hard to top. If you go and look through the reviews for Magic Window—not for Ow My Balls!; nobody should be subject to read those—If you look at the ones for Magic Window you'll see there are a lot of them that say, "Great customer service." "Developer responded super fast." "Developer implemented the feature I suggested." You see that stuff in the comments and in the reviews, and that's what I love to see and I think that's what other potential customers love to see because it shows that as a developer I'm going to be there to help them if something goes wrong. JV: They're more willing to spend some money on the product knowing that they're going to get some service back. JM: When you look at an app that you want to buy and you're looking at the reviews, if you see a couple reviews that are like, "This didn't work and the developer didn't even respond," that's a big warning sign because maybe it's some edge case that doesn't work, maybe it's people who use Gmail in a particular way, but maybe I'm that dude who uses Gmail in a particular way. If it doesn't work I want to know that there's going to be someone there who's respectable and responsive when I try to contract them to solve it. JV: I want to add on this interview with a real nice story that you had to share at the conference and it's involving the word Beelzebub. Can you explain that? JM: I'll try to make a long story short there without actually reading the support mail. I received a support mail from a customer who is concerned because their child was playing Ow My Balls!, which they didn't have a problem with. JV: They didn't, okay. Tags: developmentmobileprogrammingtest executiontest managementtest-driven developmenttesting Login or Join to add your comment 1 comment Madhava Verma Dantuluri Wonderful one, great experiences were shared. March 11, 2014 - 2:13am Login or Join to add your comment Jonathan VanianJonathan Vanian has worked for newspapers, websites, and a magazine, and is not as scared of the demise of the written word as others may appear to be. Software and high technology never cease to amaze him. More Like This » 5 Ways Testers Can Mitigate Practical Risks in an Agile Team [article] » Top Twelve Myths of Agile Development [article] » The Future of the Software Testing Profession: An Interview with Mike Sowers [interview] Podcast » Mob Programming: A Whole Team Approach [article] » Unit vs. System Testing-It's OK to be Different [article] » You Can't be Agile Without Automated Unit Testing [magazine] » Identifying and Improving Bad User Stories [article] Upcoming Events
Posted Ouya: ‘Over a thousand’ developers want to make Ouya games By Aaron Colter Check out our review of the Ouya Android-based gaming console. Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front. “Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013. While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be. As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields. Expect to hear more solid information on the Ouya and its games line up as we inch closer to its nebulous 2013 release. Hopefully for the system’s numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not.
Previous News Printer Friendly Version Eiffel Becomes an Ecma Standard In June 2005 the Ecma General Assembly approved ECMA-367, the first standard for the Eiffel programming language. Geneva, 21 September 2005 : Eiffel is a method of software construction and a language for use in the analysis, design, implementation and maintenance of software systems. The ECMA-367 standard covers the language, with an emphasis on implementational aspects. Originally designed by Eiffel Software under the leadership of Object Technology pioneer Bertrand Meyer, the Eiffel language is used worldwide by major corporations in mission-critical applications in finance, defence, aerospace, health care and many other fields. Eiffel is the language of choice for companies who need the best in programmer productivity and reliability of the resulting software. Long-term value – Eiffel standardisation brings to the Eiffel community a guarantee of total, line-by-line compatibility between different implementations and of long-term value of their investment. Through its award-winning product EiffelStudio – a complete development environment covering the entire software lifecycle and the leading Eiffel implementation – Eiffel Software is committed to fully supporting the Ecma Eiffel standard. The Eiffel standard is available, free for copying and distribution, from Ecma and Eiffel Software's websites. It has recently been submitted for ISO approval as part of Ecma's fast-track ISO status. "Ecma has provided an outstanding environment for developing the Eiffel standard," said Emmanuel Stapf, head of the compiler division at Eiffel Software. "The committee members are all busy software professionals immersed in large, mission-critical projects and with little patience for bureaucracy. The Ecma process is business oriented and friendly, with a minimum of overhead - exactly what we needed in order to produce an innovative and carefully worked out standard in little time." "The Eiffel standard is an exciting example of successful harmonisation," said Jan van den Beld, Secretary-General of Ecma. "It is a large intellectual achievement to develop one international standard for a language such as Eiffel, which was conceived in 1985 and has evolved ever since. Congratulations!" About Ecma International Since its inception in 1961, Ecma International (Ecma) has developed standards for Information and Communication Technology (ICT) and Consumer Electronics (CE). Ecma is a non-profit industry association of technology developers, vendors and users. Experts from industry and other organizations work together at Ecma to develop standards. Ecma submits its work for approval as ISO, IEC, ISO/IEC and ETSI standards and is the inventor and main practitioner of “fast tracking” of specifications through the standardisation process in International Standards Organisations (ISOs) such as the ISO and the IEC. Publications can be downloaded free of charge from http://www.ecma-international.org/. About Eiffel Software Eiffel Software was founded in 1985 with the mission of developing compilers and tools based on the power of pure object-oriented concepts to improve programmers’ productivity, lifecycle efficiency and the quality of the resulting applications. For twenty years, Eiffel Software has delivered to its customers the most cost-effective and advanced development tools on a large variety of platforms. Exploiting the power of the language and tools of the environment, i.e. 
EiffelStudio and EiffelEnvision, Eiffel users continuously demonstrate that they can produce between two and ten times as much software in a given amount of time as can be achieved using other IDEs and tool sets. Eiffel has thus gained prominence in challenging enterprise environments in the financial, insurance, manufacturing, and government sectors, as well as among independent development teams. http://www.eiffel.com/

Industry contacts: Jan van den Beld, Ecma Secretary General; Christa Rosatzin-Strobel, Ecma Media Relations.
Last CMS Standing: The Promise of Drupal Versus WordPress and Joomla!

By Tom Geller

The popular open-source content-management systems Drupal, Joomla!, and WordPress are all getting new versions, setting the stage for an elimination tournament of market acceptance. Tom Geller, author of Drupal 7: Visual QuickStart Guide, looks at Drupal's comparative position in two regards: developer involvement and business adoption.

You wouldn't know about Drupal's success solely from statistics. Among the web's top 10,000 sites, it still trails WordPress by a huge margin, and Google reports far more searches for Joomla!. All three are enjoying a boom. Project Founder Matt Mullenweg summarized WordPress' growth as "breathtaking"; traffic to joomla.org has increased by 50 percent in the past two years; and statistics show a (fairly) consistent increase in the number of sites running Drupal.

Extraordinary growth begets extraordinary competition. Codewise, all three camps are girding their loins, with the recent release of WordPress 3.0 and imminent releases of Joomla! 1.6 and Drupal 7.0, all considered major versions. Never before has the buzz been so great, nor signified so much at stake. For the market has a history of siding with a single winner and pushing the rest aside, as it did with OS/2 and AmigaOS. With that in mind, let's consider Drupal's chances by looking at support from developers and the business community.

Developer Energy

One measure of strength is the number of person-hours that goes into the software itself. That doesn't gauge the software's strength in technical terms, of course: You can't polish a road apple, as they say. It also doesn't tell whether the project is interesting (or worthwhile) for any audience other than that of developers themselves. But it does expose personal commitment: Developers choose projects because they believe their time will be rewarded with recognition, useful skills, and job opportunities.

The developer community for Drupal's core software appears to be considerably larger than those of Joomla! or WordPress. More than 1,000 people contributed substantially to produce Drupal 7, versus about 200 for WordPress 3.0. (While the Joomla! project hasn't published its numbers, Joomla! Production Leadership Team member Ian MacLennan extracted a current count of about 200 Joomla! 1.6 contributors from the project's tracking software.) Drupal's security team is particularly responsive, with a well-documented record of reporting and fixing vulnerabilities in both core Drupal and its third-party extensions.

In terms of functional extensions, WordPress kicks the others' butts with more than 12,000 plugins versus around 7,000 each for Drupal and Joomla!. These numbers are open to a lot of interpretation, however, as the definitions of "contributor" and "extension" vary from one camp to another, and the extension count includes obsolete versions. I believe these extensions are commonly written for one of two reasons. Rarely, they're to fill in gaps in the core software: the Wysiwyg module is a prime Drupal example, while K2 is a Joomla! counterpart. In such cases, a high count of extensions is just a sign that the core product is incomplete. More often, though, extensions do something beyond what the core software could be expected to handle, such as complex data presentation.
They're a sign that people are engaging with their CMSes in real-world situations; that they have itches to scratch, so to speak. If that's true, a high number of extensions is a sign of a project's health.

On another development front, active communities of graphic designers have sprung up to create "themes" for all three CMSes. Oddly, joomla.org refuses to host them (the Joomla! community calls them "template files"), resulting in an active constellation of third-party sites that carry them. One such site claims more than 3,000 templates, which dwarfs WordPress' official count of 1,300 and Drupal's mere 800. (Again, these numbers are somewhat questionable, and don't include the untallied collection of commercial, non-free designs.)

Add it all up, and where does Drupal stand? Well, its advantage in core development is both substantial and significant: Drupal's developers are building a strong foundation for the future. As yet that hasn't translated into equally strong non-core development, particularly among graphic designers. However, there are promising signs of third-party Drupal development in another direction. An increasing number of parties now produce free Drupal "distributions" (that is, core Drupal packaged with additional modules and other assets) for specialized purposes such as publishing and non-profit administration. We're also seeing more of what I like to call "supermodules," such as Panels, Rules, and Context, that provide a framework for further development rather than just scratch a single itch. On the design front, such tools as Skinr and Sweaver ease (or replace) parts of the theming process. Drupal development is, in short, moving from the brickmaking to the building stage.

What Businesses Are Betting On

Business organizations around the three CMSes stratify distinctly based on their target audiences. WordPress has its teeth firmly in the thick end of the wedge: the low-demand, everyday computer user who primarily wants a blog-based web site. As such, WordPress business opportunities tend to be small, but numerous: hosting, site implementation for mom-and-pop companies, theme design, and so forth. Not that all WordPress sites are small: A respectable list of Fortune 500 companies use it, along with five of the world's top 1,000 sites.

Joomla! has a strong presence among mid-size businesses, and sports a uniquely thriving market for commercial extensions along with those for implementation services, template designs, and hosting. (I've observed that Joomla!'s culture tends to be more entrepreneurial in general than Drupal's or WordPress'.) Drupal, in contrast, is increasingly being deployed for enterprise clients, and has attracted the attention of enormous consulting firms such as Accenture and CapGemini. In his presentation on The Business of Drupal, Acquia Senior Drupal Advisor Robert Douglass points out that the Drupal ecosystem now includes not only independent consultants, designers, and hosting services, but full-fledged value-added resellers as well: positions that wouldn't be possible without enough enterprise-grade components to integrate as solutions.

Hovering over all is the commercial infrastructure company Acquia, founded in part by Drupal creator Dries Buytaert to be for Drupal what Red Hat is for Linux. Funded by three rounds of venture financing totalling $23.5 million, Acquia provides support, monitoring, search, hosting, and similar services directly and through a network of partners. No similar company exists in either the Joomla!
or WordPress world, although for the latter Automattic offers enterprise-level support and hosting services on the Acquia model.

The market value of Drupal skills is high, and is expected to continue its growth. A look at indeed.com's comparison of available jobs for Drupal, WordPress, and Joomla! shows that there are currently about twice as many listings citing WordPress as Drupal, which is still a strong showing for Drupal, as there are approximately a hundred times as many WordPress sites as Drupal sites in the world. But when you look at relative growth, the picture changes: Drupal's job count is growing at five times the rate of WordPress'. (Joomla!'s job growth rate is also comparatively flat.)

Other measures suggest that Drupal's job market is superior. As I write this in December 2010, the recruitment board monster.com lists 160 Drupal jobs, versus 120 for WordPress and a mere 55 for Joomla!. Similarly, the job board on drupal.org got 50 posts in the last week, versus only nine on the WordPress job board. (Joomla.org doesn't host a formal job board; community members pointed me at joomlancers.com, which lists only a few gigs.) Having said that, there are pockets where Drupal is weak: The short-term gig board elance.com lists only 100 tasks for Drupal professionals, versus 200 for Joomla! and a staggering 515 for WordPress. It could be that Drupal positions tend to be more permanent than those for WordPress or Joomla!, which its strong position in enterprise-level businesses would support. Or this difference could simply be cultural, the same way Coca-Cola is more popular in the southern U.S.

So: Where Is Drupal Going?

Gather it all together and you get a picture of Drupal's promise as compared to WordPress and Joomla!:

- User base: On the web as a whole, WordPress' user base is overwhelmingly bigger than Drupal's and Joomla!'s, by as much as two orders of magnitude. Among the 10,000 most-trafficked sites, WordPress' lead is narrowed considerably, to about 4x. Drupal clearly leads Joomla! in this group.
- Core code: All three are currently experiencing major releases. Drupal's core is comparatively strong, secure, and well-supported.
- Third-party code: WordPress has the largest number of functional extensions of the three. Drupal is seeing a growth of distributions and "supermodules" that are platforms in themselves. Joomla! holds a strong lead in the number of graphical site templates.
- All three CMSes have vibrant and growing markets for services and consulting. Drupal has had better success in the enterprise market.
- The long-term job market appears to be favoring Drupal, although the complete picture is unclear.

In the final analysis, these distinctions only affect organizations taking a very long-term view of the matter, or considering very large implementations. For the individual webmaster, no CMS delivers a knockout punch: All three are capable of running most sites, and their support communities will certainly be solid for at least a few years.
Rn Software Officially Launches Fynch: Proficient Twitter Extension for Google Play Store

Created by a former NASA engineer, Fynch is a Twitter extension app with grouping capabilities that streamline the social media experience.

Oklahoma City, OK (PRWEB) - Rn Software, a premier mobile app development company based in the Oklahoma City area, officially launched their latest app, Fynch, in the Google Play Store today. Fynch is a Twitter extension app designed to enhance users' Twitter experience by automatically analyzing the timeline for interesting patterns of activity and decomposing it into smaller sets of tweets that are easier to consume. The app has been in beta testing since early 2013.

"Our goal is to develop simple, yet robust mobile applications that adapt to and engage the user," said Charles de Granville, Rn Software founder and former NASA engineer. "We recognize that the space of possible applications is large, but we intend to explore it rather than taking a one-dimensional approach."

Fynch identifies and groups tweets by extremely high rates of activity, trending topic mentions, and the tweets of less active users in the following list. Users can easily keep up with all the accounts on their timeline with Fynch's unique data mining technology, which compiles the timeline into three kinds of "Fynches" for easy, organized access. Find something interesting? Tap the user's "Fynch" to unveil the group of tweets they have written. Want to go even further? Tap an interesting tweet to be immediately directed to the Twitter app for the favorite, retweet, and reply options.

Updates in the official Fynch version include memory usage improvements (decreased memory usage to ensure Fynch functions smoothly on older devices), battery usage improvements, support for Falcon Pro, and an added extension for the popular DashClock widget.

To accompany the official launch, Rn Software has also released an infographic called "Top 10 Tweets that Changed the World." The infographic includes tweets from President Obama, Justin Bieber, Oprah, Shaquille O'Neal, and other tweets that changed the course of history. It is currently displayed on the blog of APPSPIRE.me, a leading mobile app marketing agency.

"I've always felt it required too much manual labor on the part of the user, but with Fynch I just sit back and let it do all the work," said Adam Smith, Rn Software CEO. "We aim to improve everyone's Twitter experience, and soon make checking your timeline manually a thing of the past."

Fynch is currently available for Android devices as a free download. For more information, download Fynch in the Google Play Store today.

Rn Software is a small startup based in the Oklahoma City area. Co-founders Charles de Granville and Adam Smith, coming from the fields of robotics and retail wireless respectively, formed the company in 2012 with the simple goal of developing a range of mobile applications intended for use by a broad spectrum of users. Charles de Granville is a graduate of the University of Oklahoma with a B.S. and an M.S. in Computer Science. Upon graduating from the University of Oklahoma, Charles joined the Machine Learning and Instrument Autonomy Group at NASA's Jet Propulsion Laboratory. Adam Smith has a combined eight years of business management experience between the restaurant and wireless industries.
Throughout his time in the wireless business, he gained valuable experience on the front lines with several of the major national carriers, learning the many ways that everyday people use their devices. Rn Software's main goal is to integrate machine learning and data mining into mobile apps to enhance everyday people's lives. Fynch is the first attempt at accomplishing this goal.

Contact: Adam Smith, CEO, Rn Software, 405-740-2825
The Importance of First Impressions

bitmob - August 28, 2012 11:02 PM

One of the most important parts of any video game is the first impression the game gives, and this is often given through menus, introductory cutscenes, or first levels. The initial experience can often make or break a game for me; games are inherently a large time investment, and the game has to convince me that I should invest the necessary time into it. Here are some of the best first impressions that I've ever gotten from video games.

Wipeout HD
I haven't played that much Wipeout HD, but the only reason that I even tried it was the incredibly slick presentation. The menus, music, and voice-overs combine to create a really cool, futuristic look for the game. I often fire up Wipeout just to show friends the cool menus. Of course, the actual game has great presentation too, especially the trippy Zone mode.

Civilization IV
I can't remember a game that I was more excited for than Civilization IV, and the opening cutscene confirmed my excitement. The epic music combined with epic visuals on a large scale set the scene for a game that allows for total control over a civilization. The cutscene deftly zooms in from space down eventually to the individual people on Earth, showing how the scale of the game ranges from simply establishing a thriving settlement to winning the space race.

Uncharted 2: Among Thieves
The first level of Uncharted 2 is the perfect way to establish the level of quality that the player will experience. The moment Drake was hanging from a train, which itself was hanging off a cliff, I realized that this game would be operating on a totally different scale than its predecessor. While the first Uncharted was often a little campy and hard to take seriously, Uncharted 2 quickly established that this game was much more serious, by telling the story in medias res, beginning the game with Drake at his lowest point.

There are certainly many other great first impressions in video games, so feel free to share your favorites in the comments below.

Originally posted on leviathyn.com
Posted Blizzard agrees with Valve: Windows 8 is bad for video game makers By The world’s biggest PC video game makers aren’t very happy about Microsoft’s new operating system, Windows 8. Speaking with one-time Microsoft Game Studios chief, Ed Fries, at a Casual Connect earlier this week, Valve’s Gabe Newell described Microsoft’s plans as a “catastrophe.” “Windows 8 is kind of a catastrophe for everybody in the PC space,” said Newell, “I think that we’re going to lose some of the top-tier PC [original equipment manufacturers]. They’ll exit the market. I think margins are going to be destroyed for a bunch of people. If that’s true, it’s going to be a good idea to have alternative to hedge against that eventuality.” Newell says this is why Valve is pushing hard to bring its games, its Source engine, and the Steam digital distribution platform to the Linux operating system. Is he alone in thinking that Windows 8 will be a disaster for the PC gaming industry? Blizzard’s Rob Pardo doesn’t think so. The StarCraft designer and current vice president of game design at Diablo III studio Blizzard said that he believes Microsoft’s new operating system will also be a thorn in his company’s side. Pardo Tweeted on Wednesday, “Nice interview with Gabe Newell—‘I think Windows 8 is a catastrophe for everyone in the PC space’—not awesome for Blizzard either.” The belief in the development community is that Microsoft will make Windows 8 a closed system, an operating system that seeks to more stringently control, much in the way that Apple does with Mac OS X and the iOS platform. This would allow Microsoft to better monitor the quality of applications running on its platform, but it will also wall off the most widely used operating system in the world from myriad developers. PC game makers use Windows because of the openness of the platform and its ubiquity. If Microsoft takes that openness away, what will developers do? Windows 8 won’t be released until October of this year, so Microsoft still has time to decide exactly how free game and application makers will be to use the system. Valve and Blizzard are, by sales and reputation, the biggest PC game makers in the world and their influence over Microsoft’s platform isn’t insignificant. If Blizzard and others follow Valve to Linux platforms, what will Microsoft do to lure them back? Will PC gaming become increasingly based on streaming and browser-based solutions?
One bus or two? Making sense of the SOA story Philip Howard Which kind of cloud database is right for you? When is a database not so relational? Regarding IBM enterprise data management Comment This question came up during Progress's recent EMEA (Europe, Middle East, Africa) user conference: at one point, a vendor representative showed a slide showing Sonic as an ESB (enterprise service bus) and DataXtend as a comparable bus operating at the data level. From subsequent discussions it emerged that whether these should be regarded as one bus or two has been the subject of much internal debate. This isn't the first time that this discussion has come up. Towards the end of last year I was commissioned by IBM to write a white paper on information as a service and in this paper I posited the use of a second bus (which I suggested should be called an Information Service Bus or ISB) for data level integration. IBM wasn't too sure about this and we eventually compromised by saying that the ISB is logically distinct from the ESB, but not physically distinct. Progress has reached the same position. It is much easier to visually explain the concept of information services working alongside application (web) services to provide a complete SOA (service-oriented architecture) environment if you use two buses rather than one. At one level (and working downwards) you have web services, legacy applications and so forth connected through the ESB to more generalised web services while those web services connect via the ISB to data services, which themselves are used to extract information from relevant data sources (data or content). Now, both buses use common communications channels, which is the argument in favour of having a single bus. However, the sort of adapters you use to connect to data sources are very different from those used in a typical ESB environment. Further, some implementations may be data-centric while others will be application-centric and, moreover, you can implement one without the other. In particular, an ISB effectively subsumes the role of data integration and, potentially, master data management, which you might easily want to implement separately from either SOA or an ESB. So, I firmly believe that, at least from a logical perspective, it makes more sense to think of a complete SOA environment as consisting of twin buses as opposed to one. However, that's not quite the end of the story. If you think about it, you have services to services integration, data to data integration and services to data integration (or vice versa) and each of these has its own characteristics, so you might actually think of three buses. However, three buses in a single diagram might be considered overkill though you could depict an ‘S’ shaped bus if you wanted to, implying three uses of a single bus. Of course, you could use a sideways ‘U’ for the same purpose with a dual bus structure but again I think these approaches are overly complex—the whole point about SOA is simplification—if you can't depict it in a simple fashion you are defeating the object of the exercise. Of course, we all know that the problem with buses is that you wait for ages for one and then several come along at once. In this particular case I think that two buses is just right: one is too few and three is too many. Copyright © 2006, IT-Analysis.com
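The twin-bus argument is easier to see in code. The sketch below is not from the article; the channel names and the SharedBus class are invented for illustration. It only shows how a single physical transport (one publish/subscribe bus) can carry two logically distinct buses, one for application-level services (the ESB) and one for data services (the ISB), which is the compromise position the article describes.

```python
# Minimal sketch of "one physical bus, two logical buses": a single transport
# carries both application-level and data-level messages, distinguished only
# by the logical channel they use. All names here are illustrative.

from collections import defaultdict
from typing import Callable, Dict, List


class SharedBus:
    """One physical message transport with named logical channels."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[channel].append(handler)

    def publish(self, channel: str, message: dict) -> None:
        for handler in self._subscribers[channel]:
            handler(message)


bus = SharedBus()

# ESB side: web-service-to-web-service integration.
bus.subscribe("esb.orders", lambda msg: print("order service got", msg))

# ISB side: data services, whose source-specific adapters are the part that
# differs most from typical ESB adapters.
bus.subscribe("isb.customer-data", lambda msg: print("data service got", msg))

bus.publish("esb.orders", {"order_id": 42})
bus.publish("isb.customer-data", {"query": "customer 42 master record"})
```

Because both kinds of traffic share one transport, the split is a naming and adapter convention rather than a second piece of infrastructure, which is one way to read the article's point that the ISB is logically, not physically, distinct.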
xorg/ XorgFoundation/ Reports/ 2010 Converted to wiki form from original report at http://foundation.x.org/pipermail/members/2010-February/000550.html The State of The X.Org Foundation 2010 Bart Massey Secretary, X.Org Foundation February 20, 2010 Abstract: 2009 has been a year of consolidation for X.Org. Software is now being released on a predictable schedule, with predictable improvement. Note: The Bylaws of the X.Org Foundation require the Secretary to prepare and deliver a State of the Organization report within 60 days of the start of the calendar year. It is my pleasure to discharge that responsibility by preparing this report. While I have prepared this report in close consultation with the X.Org Foundation Board of Directors, all views contained herein are ultimately my own. Introduction Six years ago, the X.Org Foundation was re-formed and its first officers elected. Since then, approximately one X Window System major release has occurred per year. The mission of the modern X.Org Foundation Board is to support this work through raising and allocation of funds, through recruitment and support of Foundation members, and through initiatives in community development, education, and support, and by providing a computing and communications infrastructure; in short "to develop and execute effective strategies that provide worldwide stewardship of the X Window System technology and standards." [ 1 ] In the next two sections of this report, I first review X.Org Foundation activities during 2009 and report on our successes and challenges; I then suggest something of the goals, needs, and plans for the future of the X.Org Foundation in 2010 and beyond. Finally, I draw some conclusions. X.Org Foundation 2009 In 2009 X.Org development proceeded at a steady and reasonable pace. The Foundation did not make major changes in operation in 2009. Development In keeping with the X.Org goal of about one release per year, Release 7.5 of the X Window System occurred on October 26, 2009. This release featured the first official version of Multi-Pointer X, "E-EDID support", improved pointer acceleration, an XACE-based SELinux security module, and RandR version 1.3. It also included the kernel modesetting support developed over the last several years, with the goal of moving parts of X better handled by the host operating system into it. Funded Activities Based on the limited ability to raise funds in the 2009 economic climate, and on the limited capacity of conference organizers, the decision was made in early 2009 to cut back from two conference events per year to just one. The Board fe
Sublime Blog Sublime HQ Pty Ltd Twitter: @sublimehq Sublime Text 2: Beta Since the first public Alpha at the end of January, there have been 12 new releases, and many more dev builds. On average, that's a new version every two weeks for the past five months. During this time, Sublime Text 2 has made great strides in functionality, and a correspondingly large increase in users. Sublime Text 2 has long outgrown its Alpha tag, so it's time to put a Beta label on instead. There's a new release to mark the occasion, and it's got a bigger change list than any previous version. A couple of the highlights are: The Command Palette provides a quick way to access commands that don't warrant a key binding, and would usually be hidden away in a menu. For example, turning Word Wrap on or off, or changing the syntax highlighting mode of the current file. It uses the same fuzzy matching as Goto Anything does, meaning most commands are accessible with just a few key presses. The command palette can be triggered via Ctrl+Shift+P on Windows and Linux, or Command+Shift+P on OS X. Distraction Free mode Distraction Free mode is full screen, with an extra emphasis on your content. All user interface chrome is hidden, leaving you with nothing but the file you're working on. It's a great help when you want to ignore everything else and just write. Distraction Free mode is accessible from the View menu. Now that Sublime Text 2 is in Beta, I'm planning to reduce the number of releases to around one a month, to avoid frequent update prompts. If you prefer living on the edge, the dev channel typically has a new build every 2 or 3 days. Traditionally, the Beta tag has been used on software to indicate it's feature complete, and is going through testing before the final release. That's not the case with the Sublime Text 2 Beta, which is ready to use, but subject to change. New releases will be coming out, and they'll be adding new functionality and changing how things work. People use Sublime Text 2 every day to get real work done - if you haven't tried it yet, now is a great time. Jon Skinner
Posted Firefox 10 hits the streets By Mozilla has set loose Firefox 10, the latest version of its open-source Web browser for Windows, Mac OS X, and Linux. New features in the release are largely limited to technologies aimed at Web developers, but there’s one important new feature that ought to appeal to anyone who has augmented their browser’s functionality: by default, most add-ons will be compatible with new versions of Firefox by default, and users will have an easier time managing and (if necessary) updating their add-ons to new versions of the browser. In previous versions of Firefox, Mozilla assumed add-ons weren’t compatible with new versions of the browser unless they had specifically been re-released for a new browser version; the result was that many users put off upgrades until new versions of their add-ons were available. However, Mozilla realized roughly three-quarters of all Firefox add-ons generally don’t have any compatibility issues with new releases—the biggest exceptions are binary add-ons that contain their own compiled code. So, beginning with Firefox 10, Mozilla assumes that extensions are compatible with new versions of Firefox so long as they don’t contain compiled code and were compatible with Firefox 4, the last time a major shift in architecture required add-on changes. Firefox 10 also polls for new versions of add-ons once a day, and installs them if an update is found. Most of the other new features in Firefox 10 are under-the-hood changes and features only Web developers can love. Notable for Web authors, Firefox 10 includes a Page Inspector that enables Web authors to peer into the structure of a Web page, and there’s also a style sheet inspector to look at how styling information is handled: Web developers have previously used tools like the much-loved Firebug for similar tasks, but it’s nice to see the support included. The browser also includes a ScratchPad based on the Eclipse Orion code editor, and adds a new full-screen mode site creators can use for immersive apps like games. (Game developers will also appreciate new 3D graphics capabilities and antialiasing for WebGL content.) Firefox 10 is the latest version in Firefox’s rapid-release program that’s intended to bring new features to the browser (and its users) more quickly, rather than waiting many months (or over a year) to bring out monolithic new versions. The goal of the rapid release program is to get new technologies out into the real Web more quickly; however, it has also drawn ire from both individual users and organizations. Some end users are frustrated by constant updates that don’t seem to bring much in the way of new features (for most users, Firefox 10 is almost indistinguishable from Firefox 4), while organizational users find Firefox too much of a moving target: they can’t certify a new version get it out to their user base before another one comes along and the process starts all over again. The new technology for handling add-ons should make the process of upgrading to new versions smoother for end-users, and to satisfy organizations, Firefox 10 is the first Extended Support Release (ESR) of the browser: instead of fading away after a few weeks, Mozilla will maintain Firefox 10 with security updates for a full year, even as they move on to Firefox 11, 12, 13, and more. Making Firefox 10 an ESR release means the browser is a stable target for companies and organizations.
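As a rough illustration of the add-on compatibility policy described above, here is a hedged sketch. The Addon fields and version numbers are assumptions made for the example; this is not Mozilla's actual add-on manager API.

```python
# Hypothetical sketch of the "assume compatible by default" rule described in
# the article: script-only add-ons that worked on Firefox 4 are treated as
# compatible with newer releases, while binary add-ons are not.

from dataclasses import dataclass


@dataclass
class Addon:
    name: str
    has_binary_components: bool  # compiled code must still be re-released per version
    max_tested_version: float    # highest Firefox version the author declared support for


def assumed_compatible(addon: Addon, target_version: float) -> bool:
    if addon.has_binary_components:
        # Binary add-ons are the main exception: they must explicitly support the target.
        return addon.max_tested_version >= target_version
    # Pure script add-ons are assumed compatible if they were compatible with Firefox 4.
    return addon.max_tested_version >= 4.0


print(assumed_compatible(Addon("TabTweaks", False, 4.0), 10.0))   # True
print(assumed_compatible(Addon("BinaryThing", True, 9.0), 10.0))  # False
```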
Cloud // Platform as a Service | Commentary | By Alistair Croll | 2/11/2013 10:58 AM

Lean IT: Think Big About Agility

Cloud, DevOps, Agile and Lean Startup methodologies all aim to make companies more adaptive. But alone, no one of them can deliver on that promise.

There's been a lot of talk about agility lately. Everyone, it seems, wants their company to be a circus acrobat: nimble, capable of improbable feats of derring-do. Dozens of "big ideas" have put on the mantle of agility in the hopes of convincing the world they're the answer to the sluggishness of big organizations. The problem is that none of these ideas is a panacea; rather, each is a piece of a bigger puzzle, that of transforming an organization into an organism. Let's look at a few of these agile claimants first.

Cloud Computing is an umbrella term for a variety of big changes in IT. It's the move from physical machines to virtual ones; the move from computers to computing; and the shift from on-premise to third-party pay-as-you-go IT. Clouds are IT in an era of abundance, and used properly, they make huge rearchitecting of IT systems possible with just a few mouse clicks, because they remove the friction of change.

DevOps is a portmanteau of development (the writing of software) and operations (the running of it). It's a recognition of the fact that modern developers aren't just coding the application, but also the infrastructure on which it runs, and increasingly the data on which it relies. A DevOps mentality means software that's able to adapt to the environment in which it operates, and that can scale and relocate in response to outages or increased demand.

Agile Development is, as its name implies, a more responsive way of building software. In contrast to the old "build it and they will come" model of waterfall development, which assumed that the specification was correct, Agile emphasizes short cycles of development, testing, validation, and adjustment, along with continuous deployment, in order to deal with shifting requirements.

And Lean Startup is a way of building new companies and new products through iterative learning. It focuses on identifying the riskiest part of a business, and doing just enough work to overcome that risk. Rather than selling what you can make, Lean says you should make what you can sell, and you find out what you can sell through relentless, close customer development in search of the magical fit of the right product for the right market.

[ Learn more about these and other hot topics at Cloud Connect Silicon Valley, April 2-5. There will be four days of lectures, panels, tutorials and roundtable discussions on a comprehensive selection of cloud topics taught by leading industry experts. Register for Cloud Connect now. ]

None of these ideas is brand new. Agile development owes much to notions of incremental development in the 1950s; the first clouds were probably mainframes; and Lean Startup was inspired by concepts from Japanese manufacturing. But it's only in recent years, with the widespread use and consumerization of technology, that these things have truly gone mainstream. These and related issues will be discussed in a conference track that I am moderating on "Futures and Disruptions" at Cloud Connect, April 2-5, in Silicon Valley, Calif.

From organization to organism

Cloud, DevOps, Agile and Lean all want to make us more adaptive. But alone, no one of them can deliver on that promise.
Cloud computing can reduce the coefficient of friction of changes dramatically -- but without a DevOps approach to adaptive infrastructure, it can't shine. Similarly, Lean Startup product managers can't achieve the tight iteration and continuous learning they need if the development team isn't using Agile coding. No, what companies want -- what startups want -- what everyone wants -- is to transform from an organization to an organism.

An organism is a hierarchical assembly of systems working together as a single functional unit, often thought of as a self-organizing being. To react in a controlled, agile, measured way, organizations need to behave as organisms. And since IT is the central nervous system of the modern business, change starts here. This requires a holistic approach that encompasses all of these disciplines. You can't just have some of them and expect to reap the benefits of their entirety. Much of the disappointment, disillusionment, and backlash against these otherwise noble initiatives comes from not seeing the bigger picture.

Part of a bigger whole

Integrative philosopher and author Ken Wilber has spent his life trying to find commonalities in science, systems of belief, and so on. He talks a lot about "holons," a term coined by Arthur Koestler in 1967 in The Ghost in the Machine. A holon is a thing that has both properties of self and properties of membership. A cell in your body, for example, is part of a tissue. That tissue is part of an organ. That organ is part of you. You're an organism.

As an organism, you function independently of your organs. You aren't consciously aware of muscles when you lift your arm; you don't think about nerve impulses when you notice something is hot; you don't raise your heartbeat when you need more oxygen. These things just happen. Each "whole" in your body is a thing unto itself, but sublimates itself into a greater whole.

Reader comment from Andrew Binstock (2/12/2013), re: Lean IT: Think Big About Agility: "A minor nit: DevOps is not a portmanteau. It is, however, a portmanteau word. The former is not shorthand for the latter, as it has a completely different meaning."
The Lilith: a graphical, mouse-driven workstation from 1980

Linked by Thom Holwerda on Thu 30th Aug 2012 09:16 UTC

Just driving yesterday's point home some more: "The Lilith was one of the first computer workstations worldwide with a high-resolution graphical display and a mouse. The first prototype was developed by Niklaus Wirth and his group between 1978 and 1980 with Richard Ohran as the hardware specialist. [...] The whole system software of the Lilith was written in Modula-2, a structured programming language which Wirth has developed at the same time. The programs were compiled into low-level M-Code instructions which could be executed by the hardware. The user interface was designed with windows, icons and pop-up menus. Compared with the character based systems available at that time, these were revolutionary metaphors in the interaction with a computer." Jos Dreesen, owner of one of the few remaining working Liliths, wrote a Lilith emulator for Linux.

Comment: "And..." by henderson101 on Thu 30th Aug 2012 13:25 UTC
计算机
Others Like "Crank Up the Music!"

1,058 Project Ideas

Recording on a Wire
Today magnetic recording is used in audio and video cassette recorders, and computer disk drives. Did you know that you can also use an electromagnet to record and play back from a steel wire? In fact, this is how magnetic recording got started. This project shows you how to build a simple wire recorder. Eye protection is required during construction.

Human-Powered Energy
You have probably read all about forms of alternative energy like solar and wind power. But what about human power? With the aid of a coil of wire and some magnets, you can generate electricity with nothing more than a flick of your wrist. In this project, you will build a small hand-powered electrical generator that can power a series of tiny lights. Get ready to save the planet and get some exercise at the same time! This science project requires some specialty electronic components; a kit is available. The Time Required estimate includes time for gathering specialty materials; the actual project only takes one day. Neodymium magnets are very strong and can pinch your fingers when they come together. You should keep them away from pets and small children because they can cause serious harm if ingested. As with any magnet, you should keep them away from computers, cell phones, and credit cards. Adult supervision is required when using a hobby knife.

Which Materials Are the Best Conductors?
There are two main types of materials when it comes to electricity: conductors and insulators. What are they made of? Find out by testing different materials in a circuit to see which ones conduct the most electricity. When working with electricity, take precautions and beware of electric shock.

Measure Your Magnetism
Do you know how to find the north and the south poles of a magnet? What materials are more magnetic than others? Is there a way to measure how strong a magnet is? Is there a way to measure the strength of an electromagnet? How much does the material in the core of the electromagnet affect its magnetic strength? With this project, you'll be able to answer these questions and many others. You will learn how to build and use a simple meter for measuring magnetic field intensity. Requires familiarity with using a solderless breadboard, or willingness to learn; specialty items are needed (see the Materials tab for details). Short circuits can get very hot, so double-check all of your wiring before you connect the 9 V battery.

Linear vs. Logarithmic Changes: What Works Best for Human Senses?
If you want to get your friend's attention at a crowded sporting event with lots of people cheering, you need to shout. If you're trying to do the same thing in a quiet library, a whisper works. The detection limit for each of our senses depends on the amount of "background" stimulation that is already present. This project uses an LED control circuit to investigate detection of changes in light levels, and calls for an understanding of Ohm's Law and of logarithms. Note: the biggest expense is a powered, solderless breadboard, which can be used for future explorations in electronics. A back-of-the-envelope illustration of the linear-versus-logarithmic idea follows this list.
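Here is a small numerical sketch of the linear-versus-logarithmic idea behind that last project. The 2 percent "just noticeable difference" and the brightness units are assumed round numbers chosen for illustration, not values from the project kit.

```python
# Back-of-the-envelope illustration: the smallest change you can notice tends
# to scale with the background level, so a fixed (linear) step that is obvious
# against a dim background can vanish against a bright one.

weber_fraction = 0.02  # assume a ~2% just-noticeable difference

for background in [10, 100, 1000]:  # arbitrary brightness units
    linear_step = 2.0                       # same absolute step at every level
    needed_step = weber_fraction * background  # step proportional to background
    verdict = "noticeable" if linear_step >= needed_step else "too small to notice"
    print(f"background={background:5}: fixed +{linear_step} step is {verdict}; "
          f"proportional step needed ~ {needed_step:.1f}")
```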
Relational Data Mining Prof. Saso Dzeroski Josef Stefan Institute, Ljubljana, Slovenia Relational Data Mining (RDM) is the multi-disciplinary field dealing with knowledge discovery from relational databases consisting of multiple tables (relations). To emphasize the contrast to typical data mining approaches that look for patterns in a single relation of a database, the name Multi-Relational Data Mining is often used as well. Mining data which consists of complex/structured objects also falls within the scope of this field: the normalized representation of such objects in a relational database requires multiple tables. The field aims at integrating results from existing fields such as inductive logic programming (ILP), KDD, data mining, machine learning and relational databases; producing new techniques for mining multi-relational data; and practical applications of such techniques. Present RDM approaches consider all of the main data mining tasks, including association analysis, classification, clustering, learning probabilistic models and regression. The pattern languages used by single-table data mining approaches for these data mining tasks have been extended to the multiple-table case. Relational pattern languages now include relational association rules, relational classification rules, relational decision trees, and probabilistic relational models, among others. RDM algorithms have been developed to mine for patterns expressed in relational pattern languages. Typically, data mining algorithms have been upgraded from the single-table case: for example, distance-based algorithms for prediction and clustering have been upgraded by defining distance measures between examples/instances represented in relational logic. RDM methods have been successfully applied accross many application areas, ranging from the analysis of business data, through bioinformatics (including the analysis of complete genomes) and pharmacology (drug design) to Web mining (e.g., information extraction from Web sources). The tutorial will provide a coherent introduction to the basic concepts, techniques and applications of relational data mining. Saso Dzeroski is a Senior Scientific Associate of the Department of Intelligent System, Jozef Stefan Institute, Ljubljana, Slovenia. He is also an adjunct professor of the School of Environmental Sciences, Polytechnic Nova Gorica. He received his B.Sc. in 1989, M.Sc. in 1991, and Ph.D. in 1995, all in computer science, from the Faculty of Computer and Information Science, University of Ljubljana, Slovenia. For his dissertation "Numerical costraints and learnability in inductive logic programming", he received the 1996 The Jozef Stefan Golden Emblem Prize Award, a Slovenian national prize awarded for dissertations in the area of natural and technical sciences. He has held visiting researcher positions at the Turing Institute, Glasgow, UK; Katholieke Universiteit Leuven, Belgium; and German National Research Center for Computer Science, Sankt Augustin, Germany. He has been active in the research areas of inductive logic programming (ILP) and more recently relational data mining (RDM). He was involved in several international projects related to ILP and was the scientific coordinator of ILPnet2: The Network of Excellence in ILP. He was co-chair of the Seventh and Ninth International Workshops on ILP (ILP-97 and ILP-99) and co-chair of The Sixteenth International Conference on Machine Learning (ICML-99). 
He has also co-organized a number of events related to the topic of RDM, such as the ILP&KDD Summer School in Prague in September 1997, the RDM Summer School in Helsinki in August 2002, and the Multi-Relational Data Mining Workshop at KDD-2002 in Edmonton in July 2002. He is the co-author/co-editor of three books in the areas of ILP/RDM: Inductive Logic Programming: Techniques and Applications, the first authored book on ILP; Learning Language in Logic, concerned with learning from natural language resources; and finally the book Relational Data Mining.
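To make the abstract's point about upgrading single-table algorithms to the multi-table case a bit more concrete, here is a minimal sketch of one common step in that direction: aggregating a one-to-many relation into per-example features, a simple form of what the RDM literature calls propositionalization. The tables and column names are invented toy data, not material from the tutorial.

```python
# Toy illustration of turning a two-table problem into a single-table one by
# aggregating the one-to-many "orders" relation into per-customer features,
# which an ordinary single-table learner could then consume.

customers = [
    {"id": 1, "country": "SI"},
    {"id": 2, "country": "UK"},
]
orders = [
    {"customer_id": 1, "amount": 120.0},
    {"customer_id": 1, "amount": 80.0},
    {"customer_id": 2, "amount": 15.0},
]


def propositionalize(customers, orders):
    features = []
    for c in customers:
        own = [o["amount"] for o in orders if o["customer_id"] == c["id"]]
        features.append({
            "id": c["id"],
            "country": c["country"],
            "n_orders": len(own),
            "total_spent": sum(own),
            "max_order": max(own) if own else 0.0,
        })
    return features


for row in propositionalize(customers, orders):
    print(row)
```

Distance-based relational methods, mentioned in the abstract, go further by defining distances directly over such structured examples rather than flattening them first; the sketch above only shows the flattening route.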
Facebook Inc (NASDAQ:FB) aims to be online all the time, and mostly succeeds, despite constantly rolling out changes (many invisible to users) and coding according to the maxim “Move fast and break things.” In response to a question on Quora, later published on Forbes, former Facebook Inc (NASDAQ:FB) software engineer Justin Mitchell explained the process that could funnel their manic programming style into a website that can’t afford downtime. Facebook attempts to smooth out programming warts Probably the most important technique is dividing the release into four phases so that new ideas don’t just show up in the wild, warts and all. These phases were called latest, p1, p2, and p3. The first phase, latest, is exactly what it sounds like. The latest code that developers are working on shows up here where it is completely separate from the web and free to wreak havoc on the system. This is basically the testing grounds where engineers can try out whatever they like. Once a new feature is more or less in working order it is moved to p1, where code could run for longer periods of time and engineers could watch the logs for obvious warnings or flaws to show up. At this point, the code was still very much considered to be in development. Once someone’s code moved into p2 it was running on a large section of the actual web servers, as much as 5 percent. “This offered several opportunities, including catching long tail fatals and monitoring CPU/memory/memcache fetches/DB queries/external service use along with key user metrics on the servers for any anomalies,” explains Mitchell. Bottom line, real people are using the code at this point, but not so many of them that Facebook Inc (NASDAQ:FB) as a network is in danger of crashing. Finally, once everyone is confident that code is working well, it goes live. P3 is shorthand for the entire web tier, and at that point Facebook Inc (NASDAQ:FB) has completed that particular launch. The advantage of going through all these phases is that multiple new features and products can be rolled out in parallel without having to coordinate their schedules, and without bringing the service down for maintenance. “Facebook Inc (NASDAQ:FB) evolved from the beginning with the idea of zero down time,” says Mitchell. “In my 4.5 years there, I can only remember a handful of experiences (one caused by me) where there was a widespread site disruption.” Tags: Bugs changes coding Downtime Facebook Written by Michael Ide Michael has a Bachelor's Degree in mathematics and physics from Boston University and Master's Degree in physics from University of California, San Diego. He has worked as an editor and writer for several magazines. Prior to his career in journalism, Michael Worked in the Peace Corps teaching math and science in South Africa. Copyright © 2015 ValueWalk - Privacy Policy Developed by ValueWalk Team
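A rough sketch of how the phased rollout described above might be expressed in code follows. The phase names come from the article; the host-naming scheme, the hashing trick for picking roughly 5 percent of servers, and the thresholds are assumptions for illustration, not Facebook's actual tooling.

```python
# Illustrative routing of servers into the four release phases described above:
# "latest" (developers only), "p1" (a few long-running test hosts),
# "p2" (roughly 5% of the web tier), and "p3" (everything).

import hashlib


def runs_new_code(hostname: str, release_phase: str) -> bool:
    """Return True if this server should run the new code in the given phase."""
    if release_phase == "latest":
        return hostname.startswith("dev-")            # engineers' own sandboxes
    if release_phase == "p1":
        return hostname in {"canary-01", "canary-02"}  # hand-picked test hosts
    if release_phase == "p2":
        # Deterministically select about 5% of production hosts.
        bucket = int(hashlib.md5(hostname.encode()).hexdigest(), 16) % 100
        return bucket < 5
    if release_phase == "p3":
        return True                                    # entire web tier
    return False


for host in ["dev-jmitchell", "canary-01", "web-1042", "web-0007"]:
    phases = [p for p in ("latest", "p1", "p2", "p3") if runs_new_code(host, p)]
    print(host, phases)
```

The point of a deterministic bucket (rather than a random coin flip) is that the same 5 percent of machines keeps running the p2 code across restarts, so anomalies in their logs and metrics can be compared against the untouched fleet.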
计算机
Autoweek How-To: Build a driving-game rig with stuff in your house

[Photo: Our first rig, with materials for the second build to the left. Photo by Jake Lingeman]

As driving video games have improved, so have the devices we use to play them. In our first Autoweek How-To, we'll teach you how to build a video game driving rig inexpensively and in one afternoon. Obviously, Autoweek assumes no responsibility should you injure yourself over the course of this build.

[Photo: Does that look like a #4 Phillips head? Because it is.]

First you have to acquire the materials. As the title suggests, we built our rig out of stuff we had at home, but that includes materials from a previous version of the rig. We used wood, but scrap metal, PVC pipe or any other easily worked material will do. A power drill will make things easier, as will a power saw. Wood or deck screws can be used to fasten the pieces together.

[Photo: Ladder frame with wood screws installed.]

We began by making the cuts for our ladder-frame base. The rig is designed for a driver who is five feet, 10 inches tall, though it could probably work for anyone in the five-foot-six-inch to six-foot-two-inch range. The end rails measure 42 inches, while the shorter upright pieces measure 19 inches. The height of the upright pieces will determine the height of the steering wheel mount, so they should be tailored to place the wheel at a comfortable level. The width of the rig should be governed by the width of the driving seat. The pieces we're using from the old rig measure about 22 inches, which happens to be about right for a bucket seat from a Chevrolet Cavalier.

The ladder frame is built using wood screws. We're using one-inch-by-one-inch wood here and we don't want any cracks, so make sure to use pilot holes. Using two-by-fours won't require pilot holes, but pilot holes still make things easier. The most important part of this step is the middle width rail. This piece will attach to the base and locate the pedals, so it needs to be the proper distance away. You will have some room for adjustment once it's finished, but the closer you can get to setting the proper length, the better off you'll be.

[Photo: Rig with pedal and wheel mounting platforms attached. Photo by Jake Lingeman]

Next we attach the legs for the wheel mount. We used two legs made of one-inch-by-one-inch timber on each side, but one two-by-four will work just as well. Once that's finished, we attached the small pieces of plywood that the wheel and pedals attach to.

[Photo: Height can be adjusted with books, wood or whatever else you have around the house. Photo by Jake Lingeman]

Since the pieces that run the width of the ladder are only attached with one screw, the pedals will pivot to fit your driving position. The pedal platform can also be moved on the plywood, giving you more room to work. When a comfortable position is attained, the platform can be secured temporarily or permanently.

[Photo: Cords are attached out of the way gingerly with a staple gun.]

After that, it's just a matter of attaching the cords in an out-of-the-way place (we used a staple gun) and finding a seat to use. Our earlier attempt had us sitting in a standard chair, but then we found an inexpensive bucket seat from a junkyard to use.

[Photo: The completed rig, with seat from a 2002 Chevy Cavalier. Photo by Jake Lingeman]

Of course, if you don't have a wheel-and-pedal setup, you're only halfway there. We're using a Logitech Driving Force GT, but most setups will be mounted the same way.
Prices for a good setup have decreased, and today a serviceable wheel will cost you anywhere from $50 to $100; a quick search on eBay turns up plenty of used options. Now it's time to sit down and race. Next year's Nissan GT Academy is only 10 months away.
At 20 Million Copies Sold, Skyrim Is in the Top 20 Bestselling Games of All Time @mattpeckham That's across all platforms: PlayStation 3, Windows and Xbox 360 MoreSamsung Gear VR Review: A Very Exciting Glimpse Into the FutureNow Is a Great Time to Buy a PlayStation 4 or Xbox OneSony Confirms the PS4 Is Getting a Massive Upgrade This is technically last week’s news — last Thursday’s to be precise: Skyrim has sold 20 million copies since it launched in November 2011. That figure was buried in a press release about Bethesda’s upcoming The Elder Scrolls Online, so mentioned almost offhand, but I noticed a few sites picking it up this morning, and I understand why. While something as mainstream-obvious as Grand Theft Auto V already has Skyrim by some 9 million copies, Skyrim is a roleplaying game. Make that a deeply traditional roleplaying game: the apotheosis of computer-automated realizations of the sort of thing Gary Gygax and Dave Arneson were thinking about back in the early 1970s. I’m not asking anyone to genuflect at the altar of D&D, or even saying Skyrim‘s one of the greats (for me, because of the kinds of things Skyrim has to do to be the kind of game it was, given technological limitations in 2011, its greatness inexorably diminishes — just as Oblivion‘s and Morrowind‘s and Daggerfall‘s and Arena‘s did — with time and hindsight). I’m just noting that it seems counterintuitive, after years of treatises on the death of single player gaming, the death of extremely long form gaming and the stagnation of so-called Western fantasy gaming, that a game like Skyrim exists a decade into the 21st century, much less ranks in the top 20 bestselling games, across all platforms, of all time. Bear in mind that 20 million copies comprises all the subsequent compilation editions, and a certain number of buyers (myself included) are probably double-dipping, but consider that by comparison, Nintendo’s Super Mario Bros. 3 sold 18 million copies, while Super Mario World grabbed just a tick more at 20.6 million. None of the Halos are in that list, nor any of the Gears of Wars. Not a single Zelda game’s ever come close, and the top-selling installment in Sony’s bestselling PlayStation 2-exclusive franchise, Gran Turismo 3 (and remember that the PS2 is the bestselling game console in history), couldn’t crack 15 million copies. Even on the PC, granting that the revenue model for a lower-selling game, copy-wise, like World of Warcraft, is another matter, The Sims 2 is merely a sales tie — there’s nothing better-selling. I still haven’t “finished” Bethesda’s The Elder Scrolls: Skyrim. Between all the false starts and character rejiggering, the marathon play sessions that started out with the best of intentions but fizzled around the post-Dark Brotherhood quest-line business or the cosmic chitchat atop the Throat of the World, I’ve probably played more than most. But I have yet to feel that finish line ribbon snap across my chest. Maybe I never will. That’s what I love about games like Skyrim, and that’s why I’ll keep returning to them, story problems, gameplay drudgery and all.
OME - Eclipse Reshoots?! Update: GossipCop - my hero! A report widely disseminated earlier today made big waves by claiming that "key scenes" in The Twilight Saga: Eclipse need to be reshot in Vancouver, and speculated that the development indicated production on the June release was troubled. Gossip Cop looked into it, and we have the exclusive answers. First, regarding the implication that these reshoots were unexpected, an authorized rep for the studio tells Gossip Cop, "The reshoot was planned for months, like it is with the majority of films." Let Gossip Cop settle some other false rumors. Contrary to today's inaccurate speculation, "creative differences" have not led Summit to consider bringing in another director for the reshoots. David Slade is 100% directing them, like he did the entire movie, and the studio is "very happy" with his direction of the film, which internally is believed to be the best of the series. As for the timing, speculation that there will be three rushed days of 18-hour shoots is incorrect. The Summit rep tells Gossip Cop that it's a two-and-a-half to three-day shoot, and that "very little" actually has to be reshot. But the biggest misconception concerned the specific scenes alleged to be those in need of a reshoot. The rep confirms to Gossip Cop, "None of the meadow or action scenes are being reshot."
Aseem Agarwala Home Tech transfer Research projects Publications Activities & Honors I am a research scientist at Google, and an affiliate assistant professor at the University of Washington's Computer Science & Engineering department, where I completed my Ph.D. in 2006; my advisor was David Salesin. My areas of research are computer graphics, computer vision, and computational imaging. Specifically, I research computational techniques that can help us author more expressive imagery using digital cameras. I spent nine years after my Ph.D. at Adobe Research. I also spent three summers during my Ph.D. interning at Microsoft Research, and my time at UW was supported by a Microsoft fellowship. Before UW, I worked for two years as a research scientist at the legendary but now-bankrupt Starlab, a small research company in Belgium. I completed my Masters and Bachelors at MIT majoring in computer science; while there I was a research assistant in the Computer Graphics Group, and an intern at the Mitsubishi Electric Research Laboratory (MERL) . As an undergraduate I did research at the MIT Media Lab. I also spent much of 2010 building a modern house in Seattle, and documented the process in my blog, Phinney Modern.
What the Dormouse Said How the Sixties Counterculture Shaped the Personal ComputerIndustry Most histories of the personal computer industry focus on technology or business. John Markoff’s landmark book is about the culture and consciousness behind the first PCs—the culture being counter– and the consciousness expanded, sometimes chemically. It’s a brilliant evocation of Stanford, California, in the 1960s and ’70s, where a group of visionaries set out to turn computers into a means for freeing minds and information. In these pages one encounters Ken Kesey and the phone hacker Cap’n Crunch, est and LSD, The Whole Earth Catalog and the Homebrew Computer Lab. What the Dormouse Said is a poignant, funny, and inspiring book by one of the smartest technology writers around.
Articles & Tutorials Common for all OSes QuickPath Interconnect Russell Hitchcock [Published on 17 March 2011 / Last Updated on 17 March 2011] In this article the author discusses Intel's QuickPath technology and how it relates to HyperTransport. I’ve written previously about an AMD-driven technology called HyperTransport designed to increase data transfer rates. A key feature of HyperTransport is that it’s a point to point interconnect system, as opposed to a bus system. In this article, I’ll give an overview of a competitor to HyperTrasport - Intel’s QuickPath Interconnect. Before I explain what QuickPath Interconnect is, I will take a step back and explain what the traditional architecture of chip-to-chip communications is like in computers. For many years communications between processors and memory within computers used what is commonly referred to as a Front Side Bus. All communications between the CPU and memory has to travel over the same Front Side Bus. Because all communications travel over the Front Side Bus, there must be some extra data added to the communications to ensure proper communications; such as addressing. Also, bus systems by design only allow one communication to happen at a time. This means that if something needs to communicate with the CPU it must wait until the current communication has ended in order to start its own communication. Alternatively, interrupts could be used for priority communications. Interrupts, though effective, also add a certain amount of overhead to the overall Front Side Bus communications. All of this waiting, combined with the overhead can be a performance hindrance for high speed applications. Over the last few years, as processors increased their performance significantly, the speed at which the Front Side Bus could operate was a limiting factor of overall computer performance. This is because even though the processor could do a lot of work very quickly, it needed to continually wait for the Front Side Bus to deliver the proper communications; so the processor would often be idle. The speed of the Front Side Bus also rendered meaningless the speed of RAM since the speed at which the RAM could operate was significantly higher than the maximum speed of the Front Side Bus. With the increased use of multiple processors, including powerful and capable graphics processors, and very fast memory, the limitations of the Front Side Bus are becoming a bit ridiculous. That is the impetus for the design of technologies like HyperTransport which is a point-to-point interconnect system and therefore eliminates many of the limitations of the Front Side Bus like interrupts and addressing (you don’t really need addressing when there’s only two points - if you didn’t send it, then you should be receiving it!). QuickPath Interconnect But HyperTransport, developed by AMD and now managed by the HyperTransport Consortium, isn’t the only game in town. Not surprisingly, Intel has developed its own point-to-point interconnect system optimized to work as a communications mechanism between many processors. Though they were significantly later in the design of QuickPath Interconnect, they did do a great job. Figure 1: QuickPath Architecture courtesy of www.intel.com Like HyperTransport, QuickPath Interconnect is designed to work with processors that have integrated memory controllers. Also like HyperTransport, QuickPath Interconnect is designed as a double data rate (DDR) technology. 
Normally when data is digitally transmitted between two points, the data is read as either high or low, which represents either a 1 or a 0. In a single data rate system, that data is read on only one edge of the clock signal. With DDR, data can be read on both the rising and falling edges of the clock signal. This means that in one full clock cycle, a DDR-capable link can transfer data twice, producing twice the data rate. Also like HyperTransport, QuickPath Interconnect reduces the overhead found in Front Side Bus architectures. One way it does this is by eliminating some addressing, since QuickPath Interconnect is a point-to-point technology. In fact, not only is QuickPath Interconnect a point-to-point technology, it is also a full-duplex communication channel with 20 dedicated communication lanes for each direction. QuickPath Interconnect does have some overhead, though. QuickPath Interconnect actually has more overhead than HyperTransport; to send 64 bits of data, QuickPath Interconnect requires 16 bits of overhead, where HyperTransport requires 8 or 12 bits for reads and writes respectively. Protocol Layers Intel's QuickPath Interconnect is one part of a larger architecture that Intel calls the QuickPath Architecture. The QuickPath Architecture is designed to cover five networking levels which are roughly equivalent to some of the OSI network layers. The Physical Layer of the QuickPath Architecture describes the physical wiring of the connections, including the data transmitters and receivers and the 20-lane link in each direction. The Link Layer of the QuickPath Architecture describes the actual sending and receiving of data in 72-bit sections, with 8 bits used for CRC error detection. This makes a total of 80 bits that are sent across the 20-lane link in each direction. The Routing Layer is responsible for sending a 72-bit chunk of data to the Link Layer. Within this 72-bit chunk of data is 64 bits of data and an 8-bit header. The 8-bit header consists of a destination and a message type. These 64 bits are what Intel uses to calculate the total throughput of QuickPath Interconnect (as opposed to all 80 bits). The Transport Layer is responsible for handling errors in the data transmission and will request a retransmission if errors are found. The Protocol Layer of the QuickPath Architecture handles cache coherency and is also how a higher-level program would access the data transfer mechanisms in QuickPath Interconnect. QuickPath Interconnect vs. HyperTransport So now that you've learned about QuickPath Interconnect and have reviewed my previous article on HyperTransport, you should have a good idea of how the industry is moving away from the Front Side Bus architecture - for the benefit of us all. But you're probably wondering which technology is best. As usual, that's a difficult question to answer. Currently it seems that QuickPath Interconnect has a slight overall performance advantage over HyperTransport, but HyperTransport is designed as a much more flexible technology. QuickPath Interconnect is mainly designed to connect multiple processors to each other and to the input/output controller, as shown in figure 1 above. HyperTransport does that but can also be used for add-on cards and as a data transfer mechanism in routers and switches. HyperTransport is also an open technology, which I think gives it a significant advantage over QuickPath Interconnect, which is an Intel technology. This is still early in the development of both of these technologies though, especially for QuickPath Interconnect.
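To tie the numbers above together, here is a short worked example. The payload and overhead figures come straight from the article (64 data bits plus an 8-bit header and 8-bit CRC per 80-bit QPI flit; 8 or 12 bits of overhead per 64 data bits for HyperTransport reads and writes), while the 3.2 GHz forwarded clock is an assumption based on typical QPI parts of the era, included only to show how the commonly quoted 6.4 GT/s and 12.8 GB/s per direction figures fall out.

```python
# Protocol efficiency: payload bits divided by total bits on the wire.
def efficiency(payload_bits: int, overhead_bits: int) -> float:
    return payload_bits / (payload_bits + overhead_bits)

qpi_eff = efficiency(64, 16)      # 8-bit header + 8-bit CRC per 64-bit payload
ht_read_eff = efficiency(64, 8)   # HyperTransport read overhead (per the article)
ht_write_eff = efficiency(64, 12) # HyperTransport write overhead (per the article)

print(f"QPI flit efficiency:      {qpi_eff:.1%}")      # 80.0%
print(f"HyperTransport (reads):   {ht_read_eff:.1%}")  # 88.9%
print(f"HyperTransport (writes):  {ht_write_eff:.1%}") # 84.2%

# Bandwidth of one 20-lane QPI link, assuming a 3.2 GHz forwarded clock.
clock_hz = 3.2e9
transfers_per_sec = clock_hz * 2                 # DDR: both clock edges -> 6.4 GT/s
lanes = 20
raw_bits = transfers_per_sec * lanes             # 128 Gbit/s raw, one direction
payload_bytes = raw_bits * qpi_eff / 8           # strip the header/CRC share

print(f"{transfers_per_sec/1e9:.1f} GT/s")                         # 6.4 GT/s
print(f"{payload_bytes/1e9:.1f} GB/s payload per direction")       # 12.8 GB/s
print(f"{2*payload_bytes/1e9:.1f} GB/s both directions combined")  # 25.6 GB/s
```

Whether the HyperTransport overhead figures quoted in the article include framing and CRC is not spelled out, so treat the percentages as a rough comparison rather than a definitive one.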
Over the next few years you'll start to see these technologies integrated into more and more computers, and you're likely to see some innovation in each of these products that increases their performance. For QuickPath Interconnect, I'd also expect to see some diversification of how it is used so that it will truly compete with HyperTransport as a data transfer mechanism for many uses. About the author: Russell Hitchcock is a consultant whose specialties include networked hardware, control systems, and antennae.
Dont Name Pages Thoughts Issues Ideas Or Opinions Try not to use the words "Thoughts", "Issues", "Ideas", or "Opinions" in the name of a Wiki page. Such words are almost always redundant. Are there really pages on this Wiki where ideas are off-topic? There are two notable exceptions, however: When you're engaging in meta-discussion, it's often useful to use one of these words in the title. (This page uses all four.) The use of a Discussion page, when people would choose to leave the original page alone for various reasons. (Usually to keep the first page clear, for aesthetic reasons, or to help newbies.) DontNameClassesObjectManagerHandlerOrData, which is the software analogue of this rule. CategoryWikiMaintenance
Stay Windows SIG: Application Performance Part 1 - Across the Wire Wednesday, April 07, 2004 - 07:00PM to 09:00PM Cubberley Community Center4000 Middlefield Rd., RM H-1Palo Alto, CA 94105 Software Architecture and Platform Tweet You are hereWindows SIG: Application Performance Part 1 - Across the Wire Windows SIG: Application Performance Part 1 - Across the Wire The Monthly Meeting of the Windows SIG Joe Kirsch, CEO Frank Lao, President Nitin Dwivedi, CTOLeadByte Corporation LeadByte Corporation is offering a 3 part series on Application Performance, covering the complete spectrum of how to build, measure, and monitor a high-performance, scalable Visual Studio .NET application. Network, Client and Server issues are covered in separate sessions. Application Performance Part 1 - Across the Wire In this session, you will understand how an application behaves over the network and what you can do to improve performance across the wire. This session is for designers, developers and testers who are building an application as well as engineers and analysts who are supporting the application in production. An introduction into networking concepts will be provided so that you are able to understand exactly what goes on behind the scenes of your application. You will understand what causes application delay and poor response time. In addition, .NET best practices and developer tips will be provided to help you quickly improve the performance of your current applications. LeadByte Corporation is a performance software company located in Redmond WA, dedicated to improving application performance and quality for web developers, application teams, internal business units and large-scale companies using Microsoft technologies such as the .NET Framework. LeadByte offers a suite of predictive performance tools known as NetworkSmart. Our products allow application teams to predict end user response times and determine performance bottlenecks during the design and development phases of an application. This early notification allows the application teams to make modifications to their code or design at a very minimal cost as compared to modifying the application after deployment. LeadByte also provides application performance training and services for companies who want to improve their current application quality and scalability.Prior to LeadByte, Joe Kirsch was a Senior Manager at Microsoft Corporation, responsible for developing and maintaining over 40 internal line-of-business applications, as well as internal Early Adopter Program for Visual Studio .NET, SQL Server 2000, Exchange 2000 and Windows 2000. In the early 1990s, he was the founder and President of Neverland Technologies, Inc. for seven years, a software development company that provided niche document management/data warehousing solutions to the transportation and utility industries. Joe is a technology freak at heart and specializes in the Visual Basic .NET, C# and ASP.NET programming languages.Prior to LeadByte, Frank Lao was a Production Support Manager at Microsoft Corporation, responsible for supporting over 40 internal line-of-business applications. Prior to that, Frank was a performance engineer for the Microsoft Enterprise Quality Assurance team. Frank has a MBA from the University of Washington and a Bachelors degree in Information Systems at the University of Hawaii. He is also a Microsoft Certified Systems Engineer. 
Frank's talent is centered in infrastructure, where he has a keen understanding and vast experience in TCP/IP network design, administration, and application performance testing and troubleshooting. Prior to LeadByte, Nitin Dwivedi was part of Microsoft Consulting Services and later Enterprise Application Services. He has more than 15 years of experience in the software industry, working in areas of communications, networking, application development and shrink-wrapped software development. He graduated in Computer Science from Delhi University, worked at diverse companies like Microsoft, IBM Research, Meca Software and Tata Consulting Services, and was a founding partner in Gilt Edge Solutions. Nitin has a deep understanding of application design and development using diverse technologies and patterns. His expertise is in the object-oriented design of distributed applications. Nitin's programming skills include Visual Studio .NET, VC#, VC++, VB.NET and ASP.NET, as well as Windows platforms with CLR, COM, DCOM, COM+ and network programming.
Schedule: 6:30 - 7:00 Informal Q&A with pizza; 7:00 - 7:15 Announcements; 7:15 - 9:00 Presentations.
Admission: $15 at the door for non-SDForum members; no charge for SDForum members. Please call 408.494.8378 for student memberships. No registration required.
Location: Cubberley Community Center, 4000 Middlefield Rd., RM H-1, Palo Alto, CA 94105. Wednesday, April 07, 2004 - 07:00PM to 09:00PM.
Home > Design VisualizationRelated topics: Management, Reviews Light It Up (Cadalyst Labs Review) By: Ron LaFon Visualization Software Adds Life to Your Designs The ability to take a design—be it a car, house or cell phone—and render it as the final manufactured product is one of the best uses of increased computational horsepower. What better way to help a client visualize a design than being able to say, "Let me show you"? The ability to turn a design drawing into a visualization that mimics reality is an invaluable tool for troubleshooting a design, convincing a nervous client or helping to promote a design firm's capabilities. This segment of the CAD software industry continues to grow and evolve, with rendered visualizations becoming more sophisticated. You only have to go as far as the movie screen to see how visualization has made an impact. Computer-driven animation and visualization has managed to make ideas visible in ways that affect us all in our day-to-day lives. It's amazing to have much of that technology available at the desktop level. Your design intent may not extend to enabling Spider-Man to swing through the manmade canyons of Manhattan or aiding Superman in leaping tall buildings in a single bound, but the available tools provide the means to convince, cajole and communicate ideas large and small. Understanding and visualizing modern structures and products from an underlying CAD drawing can be difficult, even for those experienced in doing so. For those who haven't made that leap in such interpretations, it can be a daunting process. The ability to make ideas and concepts visible is one of the more remarkable abilities of modern computer systems and associated software development. These days, the actual design process that results in a CAD drawing often is only one part of a process for a given project—though certainly an integral part. After a detailed and accurate drawing is produced, it's often necessary to produce visualizations. These can range from visualizations that are prepared for clients to ascertain that the design meets their needs to those used for publicity purposes or ad campaigns. Visualizations also can be used to troubleshoot for design flaws is early project phases, enabling designers to make modifications relatively inexpensively. The ability to see how a new building will look in its proposed setting, for example, is a great aid. For this review of visualization software, Cadalyst requested the latest versions of software from several vendors that specialize in visualization applications, whether as stand-alone products or applications designed to run within another design application. Cadalyst received a variety of software, though some vendors who typically appear in this annual review article didn't have new product versions that were ready for public attention. Some vendors didn't respond or were unable to provide products within our deadline restraints. The applications described in this article offer many different approaches to visualization software, with the form a specific visualization takes often being dependent upon the software resources that are used to create it. Because this article is a survey, I didn't do a nuts-and-bolts examination of every possible detail. The applications here aren't compared with one another, but the accompanying feature table (www.cadalyst.com/0707visualize-table) lets you compare features to find the products that best suit your particular needs and style. 
This table is more complex, with the elaboration of features within individual categories; for example, the depth of support for radiosity is an area to consider for increased feature depth. As might be expected with the broad array of applications, numerous approaches and concepts are available. You may want an application that provides the ability to produce a hyperrealistic, exquisitely detailed rendering that makes every perfectly lit detail visible. Or you may eschew the computer-generated look and choose an application that mimics traditional artists' tools for a more handmade result. During the past year, 64-bit versions of several major visualization applications have shipped. Because both design and visualization applications have confronted memory constraints for some time, this development is good news for those whose work tends to make extreme demands on both their hardware and software. 64-Bit support for these applications offers access to larger stores of memory and more capable management of that memory. It will be interesting to see whether visualization software will be one of the driving forces behind making 64-bit computing more mainstream or if it will remain something of a niche market for those who need it most. As always, the growth and development in this segment of the industry is interesting to watch, and end users are the beneficiaries of this process. Although faster and more capable microprocessors and improved software design speed the process somewhat, a lot of time and effort can go into creating visualizations. However, the payoff often is well worth the effort. Evolved technology makes the ability to go from a concept to an artistically created vision of that concept faster and more possible than ever before. Many design firms have found that a well-done visualization can be a deciding factor on whether a project ever reaches completion. In essence, design visualization software is about communicating ideas compellingly using visual media tools—tools such as those discussed below. AliasStudio 2008Autodesk800.440.4198www.autodesk.comPrice: $4,995Autodesk has a variety of very capable design and visualization software products, so when Cadalyst sent out its invitation, I wasn't quite sure what would come. Autodesk elected to submit its new release of AliasStudio 2008, which is designed to address the creative requirements of the entire industrial design workflow. Autodesk AliasStudio is a scalable product line that includes DesignStudio, Studio, AutoStudio and SurfaceStudio, so users can select the product most appropriate for their needs. Autodesk DesignStudio lets users develop and share design concepts and prototypes using sketches, illustrations, photorealistic renderings, animations and digital 3D models. Autodesk Studio provides features for precision surfacing with conceptual modeling and rendering tools. Finally, Autodesk SurfaceStudio provides a complete set of tools for surface model development, refinement and control, including interactive evaluation for verifying aesthetic and technical surface quality. Autodesk AliasStudio 2008 offers a wealth of new features, including an interface for the popular Wacom graphics tablets; predictive strokes that allow you to draw straight lines, circles and ellipses easily while painting; a streamlined canvas layer editor that offers layer blending; and new interactive modeling capabilities that incorporate new and enhanced rigs. 
The changes and enhancements are so extensive in this release that Autodesk offers a PDF file on its Web site that delineates all of the new features. In short, AliasStudio is more capable than ever, and it offers remarkable capabilities with relatively modest minimum hardware requirements. At present, AliasStudio 2008 runs under Windows XP Professional and Windows 2000 Professional, and minimum requirements include a system based on a 1GHz Intel Pentium III or an AMD Opteron processor with at least 512MB of RAM. You'll also need a graphics card with at least 64MB of texture memory that fully supports the OpenGL 2.0 specification to use the advanced hardware rendering features. Additionally, you'll need a CD-ROM drive, a three-button mouse and, if you plan to take advantage of the sketching capabilities, a graphics tablet. In Autodesk AliasStudio 2008, self-shadows boost the realism of a scene by adding more information about the spatial relationships of objects. They also provide information about the shape of an object as it casts shadows on itself.For more information about and specifications for the AliasStudio line of products, visit www.autodesk.com. Here you can download a copy of AliasStudio Personal Learning Edition as well. TrueSpace 7.5 with V-Ray 1.5Caligari650.390.9600www.caligari.comPrice: $595, trueSpace 7.5;$299, V-Ray render engineAs I was writing this article, Caligari was preparing to release trueSpace 7.5, the latest version of its innovative 3D design, visualization and collaboration application. When Cadalyst last looked at the then-new v7 release, there was much to like, and v7.5 is an enhanced and capable successor. Caligari trueSpace7.5 includes a brand new, state-of-the-art character editor with full body IK/FK posing.trueSpace 7.5 offers literally hundreds of modeling tools for organic or mechanical modeling, polygonal modeling, subdivision surfaces, NURBS, metaballs and implicit surfaces, all available via trueSpace's direct manipulation interface. Those who have worked with previous versions of trueSpace will find the interface in v7.x releases much easier to use and comprehend than those of the earlier versions (although trueSpace still reflects its origin on the Amiga). Version 7 introduced greatly enhanced communication capabilities that allow groups to work on a design process concurrently across the Internet, and this remains a remarkably useful feature in the newest release. The list of features and enhancements found in trueSpace 7.5 is extensive, so I'll only touch on a few of the highlights. A new character design system, complete with a hair and fur editor, will be useful to illustrators. You can change every aspect of a character, as well as draw hair or a skeleton from scratch and save parts to a library for use in other characters. Both 2D and 3D primitives are now fully parameterized to provide advanced, visual, real-time controls. The trueSpace 7.5 real-time renderer goes beyond trueSpace 7's supersampling and glows, adding realistic transparency, alpha shadows, real-time environment reflection, mirrors and video projectors, among other new features. Caligari trueSpace is extensible via plug-ins, so new capabilities can be added as needed. Among the plug-ins for this new release is the popular V-Ray render engine v1.5, which provides photorealistic rendering capabilities with tools such as global illumination, caustics, HDRI (high dynamic range imaging) and subsurface scattering. 
Caligari trueSpace 7.5 runs under Windows Vista, XP or XP Professional and requires a PC based on a Pentium 3 or equivalent AMD Athlon processor, although a Pentium 4 or equivalent AMD Athlon processor is recommended. You'll need at least 512MB of RAM on your system, with 1GB or more recommended, and 120MB of free hard disk space. A 3D graphics card with at least 64MB video memory is needed, although a card with 128MB or more that supports DirectX9 and full Pixel Shader 2.0 support is recommended for the best performance and use of available features. For more information about the many new features in trueSpace 7.5, visit the Caligari Web site at www.caligari.com, where you'll also find active user forums that provide a wealth of information and tips and tricks. Demo versions of older and current releases of Caligari trueSpace are available for download. Piranesi 5Informatix Software International+44 (1223) 246777www.informatix.co.ukPrice: $795No doubt Cadalyst readers will be familiar with Informatix's Piranesi, a 3D painting tool that allows users to start with a simple rendering of a 3D model and quickly develop it into high-quality images for client presentations. Users can create photorealistic images by painting in textures with automatic perspective and masking or by using a broad range of effects to generate nonphotorealistic images that have a hand-rendered feel. You can also use Piranesi with either 2D or elevation images, and you can create panoramas. A rendering created using Piranesi for Informatix's Tenth Birthday Image Competition. Image courtesy of Charles T. Gaushell, AIA, Paradigm Productions. Piranesi has its own native file format, but an included utility converts DXF or 3DS files to this format for use in Piranesi. An ever-growing number of design applications are incorporating support for Piranesi's EPix format, so getting a design into the application for enhancement seldom is a problem. The Piranesi design team seems to have a knack for adding and enhancing elements without obscuring the solid underlying product, and Piranesi 5 is no exception. In this release, you'll find a reorganized user interface designed to make it easier to learn and find your way around, without using up valuable screen space. Immediately apparent upon startup is the new Help Assistant, which tells you how the tool currently in use operates and provides tips about using it to the best advantage possible. Many of what were called effects in Piranesi 4 have been promoted to tools in v5, making them easier to find on the Tools toolbar. As a result, the program now has specialized tools for text creation, edge detection, restore, smudge, construct and filter. Some other tools have been combined, and two new tools have been added—a Light Tool that makes it much easier to relight a scene and a Stamp Tool that allows you to paint with one or more raster images or use them as an alpha mask to the current color. As with previous versions, Piranesi includes a stand-alone utility called Vedute. Vedute is a viewer that can produce Piranesi EPix and EPix panorama images from DXF and AutoCAD 3DS files. Vedute has been enhanced with this release, and parts of the model can be exported as 3D cutouts now. Minimum system requirements for Piranesi 5 are a system with Micro-soft Windows 2000, XP Home, XP Professional or Vista and a graphics card/monitor combination that's capable of at least 1000x750 resolution and that can display at least 65,000 colors. 
For Macintosh, Piranesi 4 is available and requires at least a 400MHz PowerPC G4 system running OS X v.10.3.4 or later and a graphics card/monitor capable of displaying at least 65,000 colors. A demo version is available for download from the Informatix Web site at www.informatix.co.uk, where you'll find extensive information about Piranesi's capabilities as well as some remarkable galleries of artwork created with the product. In addition, tutorial videos are available for viewing, as are plug-ins for a number of visualization and design applications that don't yet support Piranesi directly. Modo 203Luxology650.378.8506www.modo3d.comPrice: $895Since Cadalyst last reviewed Luxology's modo, development has proceeded through several updates of the application—including 203—which is currently available from Luxology's Web site. modo is updated fairly regularly with new features and enhancements along the way. By mid-summer, Luxology expects to be shipping a new version of modo with additional features, tentatively designated as modo 301. As I've noted before, modo is a superbly integrated application that offers modeling, painting and rendering. Each part of modo was designed to improve the capabilities and workflow possibilities across the entire product. Taken individually, modo's modeler is a very fast and capable polygonal and subdivision surface modeler; the paint tools incorporate procedurals into the layering process of extensive creative options, and the renderer doesn't sacrifice quality for speed, so you get high-quality images quickly. A visualization of an iron created in modo. Image courtesy of Chris Szetela, digital artist. Each of these components offers noteworthy features and characteristics, but it's how these components are integrated as a whole that makes modo shine. This is enhanced by the way that modo integrates into the workflow with other products often used by visualization professionals, such as Adobe Photoshop. The general system requirements for modo 203 are a system with a minimum of 1GB of RAM and 100MB of available hard disk space—3GB if you install all content and integrated training materials. You'll also need an OpenGL-enabled graphics card and a monitor capable of 1024x768 resolution or better. A DVD-ROM drive is required for support materials, and an Internet connection is needed for product activation. If you intend to use modo ImageSynth, you'll need Adobe Photoshop CS or later. For the Macintosh version of modo, you'll need a Mac with a G3, G4, G5 or Intel processor, running Mac OS X 10.3.9 or later. The Windows version of modo requires a PC with an Intel Pentium 4 or AMD Athlon processor (SSE instruction support is required) running Windows 2000 or Windows XP. Vista is not yet supported. Luxology's licensing philosophy is worth mentioning. Luxology licenses its products to individuals, not to their machines, so you are able to move back and forth between platforms as needed—the CD includes versions for both PC and Mac operating systems, with both licensed to the individual. For more information about modo, visit www.modo3d.com. You can download an evaluation version of either the Mac or PC version of modo 203, the full production version which will work for 30 days. Licenses can be extended. While you're at the Web site, be sure to check out the gallery of work created with modo and the new training division and online tutorial series.
LightWave 3D v9NewTek210.370.8000www.lightwave3d.comPrice: $795NewTek's LightWave 3D can be used for a wide range of modeling and visualization pursuits that range from game development to big-budget motion picture production. While I reviewed LightWave 3D v9 (the version evaluated in the feature table), LightWave 3D v9.2 was released just a few days before Cadalyst's editorial deadline. As a result, I was unable to evaluate the newer version. If you're interested in learning more about v9.2, visit LightWave 3d's Web site. LightWave 3D was rewritten from the ground up with v9, and it provides a base for future developments. This rewrite brought major increases in user-interface performance, dynamic systems improvements and render-speed enhancements. Both components of LightWave v9 added many new features and improved workflows. A photorealistic rendering created with LightWave 9. Copyrighted image created by Douglas Brown.Noted for its flexibility, LightWave 3D offers modeling, animation, dynamics, volumetric rendering, particle effects and a motion picture–quality rendering engine with unlimited render nodes. Little wonder that LightWave 3D is used for such a diverse range of applications throughout myriad industries. Among the extensive list of major motion pictures that used LightWave 3D are Fantastic Four, Sin City, Harry Potter and the Prisoner of Azkaban, Spider-Man 2, Lord of the Rings: The Return of the King, The Matrix Revolutions, X2: X-Men United, and Monsters, Inc.—an impressive, if only partial, listing. The general requirements for NewTek's LightWave 3D are a system with at least 512MB of RAM (1GB recommended), 230MB of hard disk space (not including content) and a graphics card (NVIDIA FX 5200 series minimum or ATI FireGL V5100 minimum) with the latest driver from the manufacturer. The graphics card will need to have at least 64MB of dedicated video RAM per display, with 128MB per display recommended. This RAM will drive a monitor with a minimum screen resolution of 1024x768 (1280x1024 recommended). LightWave 3D supports the latest generations of dual-core and multicore processors. Out of the box, LightWave 3D supports three different operating systems: Windows 32-bit, Windows 64-bit and Macintosh. For Windows 32-bit, you'll need Windows XP running on an Intel or AMD processor; the Windows 64-bit versions requires Windows XP Professional x64 Edition running on a system with an Intel EM64T or AMD64 processor and at least 1GB of system RAM. The Macintosh version requires a Mac with at least a PowerPC G4 (G5 recommended) running Mac OS X 10.3.9 Panther. For more information about LightWave 3D or to find a reseller, visit the company's Web site at www.lightwave3d.com. While you're there, be sure to visit the outstanding gallery of work created with LightWave3D, as well as the NewTek discussion forum. IRender and IRender PlusRender Plus Systems303.713.1401www.renderplus.comPrice: $189, IRender;$449, IRender PlusNew to Cadalyst are IRender and IRender Plus, both fully integrated rendering solutions for SketchUp that allow users to create photorealistic renderings from SketchUp models. For those unfamiliar with Google SketchUp, it's an easy-to-use application that simplifies 3D design. GoogleSketchUp is available in two versions: a free version, Google SketchUp, and a more feature-rich, professionally supported version, Google SketchUp Pro, which is available for $495. 
Render Plus Systems' IRender installs and integrates into either version of SketchUp to provide photorealistic renderings using the AccuRender rendering engine. All IRender functions are available while running SketchUp, and all settings are saved in your SketchUp model. IRender is a new, fully integrated rendering solution for SketchUp that creates photo-realistic renderings from SketchUp models. Image courtesy of Haynes Architecture.After you've created a model in SketchUp, you can create sophisticated renderings with attributes such as glows and spectacular reflections without having to resort to an external program. If you change your SketchUp model, you can render it again without having to redefine lights and materials. With IRender, you can create lamp fixtures for customized floor lamps, table lamps and outdoor light fixtures. Lamp components can be created easily with any wattage, beam angle or field angle, and the lights glue to the faces in your model. You also have a Create Mirror function that lets you create reflective materials or define existing materials as being reflective. You can quickly render selected items in a model to get just the effect you desire. The IRender Plus version offers a number of advanced features and capabilities, including the ability to add both plants and materials from material libraries containing more than 5,000 options. You can create SketchUp components from a library of more than 500 AccuRender plants, which will automatically render as fractal plants in IRender Plus. The AccuRender materials automatically render as high-quality materials in IRender Plus. Additionally, you can create flythrough animations of your SketchUp model with IRender Plus, complete with lights, materials, plants and reflections. The system requirements for IRender and IRender Plus are fairly modest—a system running Windows XP and one of the versions of Google SketchUp. To learn more about IRender, visit Render Plus Systems' Web site at www.renderplus.com, where you can see examples of renderings created with the company's products and download a trial version. Tutorials are available as downloads, and forums offer a wealth of information. If you don't already have it, you can download Google SketchUp from http://sketchup.google.com/download.html. Combined with the easy-to-use and popular SketchUp, IRender and IRender Plus add the ability to create sophisticated renderings easily from SketchUp, extending the application in ways that will make it a capable visualization package that may prove to be the only tool that many users will need. AccuRender 4Robert McNeel & Associates206.545.7000www.mcneel.comPrice: $495Accu-Render 4 is the latest version of the popular high-end rendering application that integrates into Autodesk's AutoCAD, Architectural Desktop or Mechanical Desktop programs. Accu-Render certainly has longevity—and deservedly so—and it doesn't require any special hardware requirements beyond those needed to run AutoCAD. AccuRender delivers exquisite renderings, animation, virtual reality panoramas, lighting analysis and network rendering. AccuRender 4 is a high-end rendering application that can be integrated into AutoCAD, Architectural Desktop or Mechanical Desktop.Continually under development, new versions of AccuRender typically go through an extended open beta-testing stage that puts new versions into users' hands for input and problem assessment. The result, over time, is a visualization tool that works very well and has a lot of depth. 
This new version of AccuRender has a host of new features including postprocessing, network rendering, HDRI lighting, an enhanced and easy-to-use-interface and numerous new materials and objects. Features of note include fractal trees, RPC (rich photorealistic content) support, postprocessing images and new soft shadows. Accu-Render can save images in the EPix file format used by the popular Piranesi paint program described in this round-up article. AccuRender has great depth of features but remains easy to use, and you can access these visualization tools from within AutoCAD. This ability greatly simplifies the process of adjusting models while you're actually in the process of creating the visualization. AccuRender is designed by architects with architects in mind, though it is useful for a broad range of visualization needs. The control and effects possible with AccuRender make for outstanding renderings. For example, you can adjust foliage density per plant to get the exact effect you want. Entourage Arts Plan View Landscape, Volume 1The system requirements for Accu-Render 4 are defined as a system that runs AutoCAD, without any additional requirements specific to the application. At the present time, AutoCAD 2008 will run under Vista, but Accu-Render runs under neither AutoCAD 2008 or Vista, though the developers expect to produce a patch in the near future to resolve this shortcoming. Autodesk has just released a patch that allows AutoCAD 2007 to run under Vista, and AccuRender will be tested with that configuration. You can download a fully functional version of AccuRender (although it has limited materials, light fixture and plant libraries and thin black lines are drawn across the final rendered images) for evaluation at www.mcneel.com. Check out the galleries of images that were created using AccuRender to get an idea of the quality possible. Find companion products that work with Accu-Render and information about active user newsgroups. Rhinoceros 4Robert McNeel & Associates206.545.7000www.mcneel.comPrice: $995Rhinoceros 4, the latest and most significant upgrade of Robert McNeel & Associates' popular modeling tool, recently began shipping. Rhino lets you model any shape you can imagine with uninhibited, freeform 3D modeling tools similar to those found in breathtakingly expensive design products. Start with a sketch, drawing, physical model, scan data or just an idea, and Rhino provides the tools to accurately model and document your designs for rendering, animation, drafting, engineering, analysis or manufacturing. Rhino offers one of the broadest ranges of geometry types and renderers available in any CAD platform and enables customers to customize their rendering tools to match their individual needs and preferences. Like Autodesk 3ds Max, you can use a broad array of rendering engines with Rhino, including such popular and capable renderers as Chaos Group's V-Ray, Next Limit Technologies' Maxwell Render and SputterFish's Brazil. Robert McNeel's Rhino 4.0 is the most significant upgrade in the history of the application with hundreds of new features and enhancements.Robert McNeel & Associates also offers several rendering products that plug directly into Rhino, including Flamingo (which offers ray-tracing) and Radiosity and Penguin, which bring freehand sketching, watercolor painting and cartoon-like rendering to Rhino. Bongo brings professional animation capabilities into Rhino. 
Rhino's moderate price, extremely capable tools and plug-in architecture have resulted in the application's use for design in a wide range of disciplines that include industrial, marine and jewelry design, as well as CAD/CAM, multimedia and graphic design. Rhino is also used for rapid prototyping and reverse-engineering. Rhino runs on ordinary Windows desktop and laptop computers with a Pentium, Celeron or higher processor and at least 512MB of RAM (1GB of RAM or more recommended). You'll need 200MB of hard disk space for the installation, and an OpenGL graphics card is recommended. Rhino runs only on Windows 2000, XP Pro, XP Home and Vista, including an iMac with BootCamp or Parallels. Robert McNeel & Associates notes that Rhino runs on Vista but is not currently recommended because of the lack of support for OpenGL—the company plans to support DirectX on Vista in a future v4 service release. Rhino 4 will not run on Windows NT, 95, 98 or ME, and it runs as a 32-bit application on Windows x64. To learn more about Rhino 4 as well as other products from McNeel, visit the company's Web site at www.mcneel.com, or go directly to the Rhino Web site at www.rhino3d.com. Ron LaFon, a contributing editor for Cadalyst, is a writer, editor and computer graphics and electronic publishing specialist from Atlanta, Georgia. He is a principal at 3Bear Productions in Atlanta.
Posted Ouya: ‘Over a thousand’ developers want to make Ouya games By Aaron Colter Check out our review of the Ouya Android-based gaming console. Even after the relatively cheap, Android-based Ouya console proved a massive success on Kickstarter (the console was able to pull in nearly $8.6 million from investors despite having an initial goal of only $960,000), pundits and prospective owners of the new gaming machine loudly wondered how well it would be able to attract developers who would otherwise be making games for the Xbox 360, iPhone or PC. Assuming you believe official statements made by the people behind the Ouya console, there is nothing to worry about on that front. “Over a thousand” developers have contacted the Ouya creators since the end of their Kickstarter campaign, according to a statement published as part of a recent announcement on who will be filling out the company’s leadership roles now that it is properly established. Likewise, the statement claims that “more than 50” companies “from all around the world” have approached the people behind Ouya to distribute the console once it is ready for its consumer debut at some as-yet-undetermined point in 2013. While this is undoubtedly good news for anyone who’s been crossing their fingers, hoping that the Ouya can make inroads into the normally insular world of console gaming, it should be noted that while these thousand-plus developers may have attempted to reach the Ouya’s creators, the company offers no solid figures on how many of them are officially committed to bringing games to the platform. That “over a thousand” figure means little if every last developer examined the terms of developing for the Ouya and quickly declined the opportunity in favor of more lucrative options. We have no official information on how these developer conversations actually went, so until we hear a more official assessment of how many gaming firms are solidly pledging support to the Ouya platform, we’ll continue to harbor a bit of cynicism over how successful this machine might possibly be. As for the aforementioned personnel acquisitions, though they’re less impressive than the possibility that thousands of firms are already tentatively working on games for the Ouya, they should offer a bit more hope that the company making the console will remain stable, guided by people intimately familiar with the gaming biz. According to the announcement, Ouya has attracted former IGN president (and the first investor in the Ouya project) Roy Bahat to serve as chairman of the Ouya board. Additionally, the company has enlisted former EA development director and senior development director for Trion Worlds’ MMO Rift, Steve Chamberlin, to serve as the company’s head of engineering. Finally, Raffi Bagdasarian, former vice president of product development and operations at Sony Pictures Television has been tapped to lead Ouya’s platform service and software product development division. Though you may be unfamiliar with these three men, trust that they’ve all proven their chops as leaders in their respective gaming-centric fields. Expect to hear more solid information on the Ouya and its games line up as we inch closer to its nebulous 2013 release. Hopefully for the system’s numerous potential buyers, that quip about the massive developer interest the console has attracted proves more tangible than not.
DHEX dhex is more than just another hex editor: It includes a diff mode, which can be used to easily and conveniently compare two binary files. Since it is based on ncurses and is themeable, it can run on any number of systems and scenarios. With its utilization of search logs, it is possible to track changes in different iterations of files easily. Release Notes: Apparently, the previous release could crash under certain circumstances while opening a file. This has been fixed. Release Notes: It is now possible to set a "base address" when loading a file. This makes working with partial memory dumps much easier. Release Notes: This version adds the ability to search for quoted strings, rather than just single words, from the command line. It is now possible to set the color of the headers from the .dhexrc file. Moreover, the first and the second run of the program will have the same color scheme now. Release Notes: This version fixes the crashes that could occur with version 0.64 on some systems. Release Notes: Correlation between files in diff mode has been added, and a new default theme has been chosen. More bugs were fixed. Typographical errors in the man pages were fixed. Comment from dettus: "i know, i know you shouldn't give yourself the thumbs up. but this little program grew out of necessity, and it became a very important tool for my job."
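To illustrate what a byte-level diff of two binary files involves, here is a minimal conceptual sketch in Python. It is not how dhex itself is implemented; it simply prints the offsets at which two files differ, which is the kind of information a hex editor's diff mode presents interactively.

```python
# Minimal byte-by-byte comparison of two binary files: print the offsets
# where they differ, similar in spirit to what a hex editor's diff mode shows.
import sys
from itertools import zip_longest

def diff_offsets(path_a: str, path_b: str):
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    for offset, (x, y) in enumerate(zip_longest(a, b)):
        if x != y:
            yield offset, x, y

if __name__ == "__main__":
    for offset, x, y in diff_offsets(sys.argv[1], sys.argv[2]):
        left = f"{x:02x}" if x is not None else "--"   # "--" marks bytes past EOF
        right = f"{y:02x}" if y is not None else "--"
        print(f"0x{offset:08x}: {left} != {right}")
```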
Deus Ex Announced For PC, Next-Gen; Human Revolution: Director's Cut Out This Month October 2, 2013 Deus Ex: Human Revolution came out two years ago for the PC, 360, and PlayStation 3, and it made a bang. Taking place before the events of the original game, Deus Ex: HR was loved by many but criticized by others for its boss fights. Now, the Director's Cut of Deus Ex: Human Revolution has a North American and European release date. A new game was announced as well. Eidos Montreal announced a new Deus Ex game for the PC, PlayStation 4, and the Xbox One. Future titles will now be a part of the "Deus Ex: Universe," which is "an ongoing, expanding and connected game world built across a generation of core games." Studio head Dave Anfossi said the following: "I'm really excited to let you know that we are working on an ambitious idea which we're calling Deus Ex: Universe. It's a commitment on our part to deliver meaningful content that expands the franchise on a regular basis and to deliver a deep conspiracy that will span several connected Deus Ex games, creating a more immersive and richer experience than ever before. "Deus Ex: Universe will include PC and console games, but also additional Deus Ex games and experiences available in other media such as tablets, smartphones, books, graphic novels, etc. "I'm pleased to confirm that we are already into production of the starting point for Deus Ex: Universe with a new game for PC and next-generation consoles. We're very excited about it at the studio and I wanted to let you know that most of the team behind Deus Ex: Human Revolution is already working hard on this new game." Also confirmed by Anfossi were the release dates for Deus Ex: Human Revolution Director's Cut, which will address some of the issues consumers had with the original game. Deus Ex: HR: DC will be out in North America on the 22nd of this month, while the European version will be out three days later. The game will release on the PC, WiiU, PS3, and Xbox 360. The game will also utilize second-screen remote play via the WiiU Game Pad, PlayStation Vita, and Xbox Smartglass. Source: VG247 Author: Allan Muir, Senior Staff Writer - Gamer, Writer, Lover of all things deemed weird and nerdy.
Advancing the enterprise social roadmap
by SharePoint Team, June 25, 2013 (updated February 17, 2015)

Today's post comes from Jared Spataro, Senior Director, Microsoft Office Division. Jared leads the SharePoint business, and he works closely with Adam Pisoni and David Sacks on Yammer integration.

To celebrate the one-year anniversary of the Yammer acquisition, I wanted to take a moment to reflect on where we've come from and talk about where we're going. My last post focused on product integration, but this time I want to zoom out and look at the big picture. It has been a busy year, and it's exciting to see how our vision of "connected experiences" is taking shape.

Yammer momentum

First off, it's worth noting that Yammer has continued to grow rapidly over the last 12 months, and that's not something you see every day. Big acquisitions generally slow things down, but in this case we've actually seen the opposite. David Sacks provided his perspective in a post on the Microsoft blog, but a few of the high-level numbers bear repeating: over the last year, registered users have increased 55% to almost 8 million, user activity has roughly doubled, and paid networks are up over 200%. All in all, those are pretty impressive stats, and I'm proud of the team and the way things have gone post-acquisition.

Second, we've continued to innovate, testing and iterating our way to product enhancements that are helping people get more done. Over the last year we've shipped new features in the standalone service once a week, including:

- Message translation. Real-time message translation based on Microsoft Translator. We support translation to 23 languages and can detect and translate from 37 languages.
- Inbox. A consolidated view of Yammer messages across conversations you're following and threads that are most important to you.
- File collaboration. Enhancements to the file directory for easy access to recent, followed, and group files, including support for multi-file drag and drop.
- Mobile app enhancements. Continual improvements for our mobile apps for iPad, iPhone, Android, and Windows Phone.
- Enterprise graph. A dynamically generated map of employees, content, and business data based on the Open Graph standard. Using Open Graph, customers can push messages from line-of-business systems to the Yammer ticker.
- Platform enhancements. Embeddable feeds, likes, and follow buttons for integrating Yammer with line-of-business systems.

In addition to innovation in the standalone product, we've also been hard at work on product integration. In my last roadmap update, I highlighted our work with Dynamics CRM and described three phases of broad Office integration: "basic integration, deeper connections, and connected experiences." Earlier this month, we delivered the first component of "basic integration" by shipping an Office 365 update that lets customers make Yammer the default social network. This summer, we'll ship a Yammer app in the SharePoint store and publish guidance for integrating Yammer with an on-prem SharePoint 2013 deployment, and this fall we'll release Office 365 single sign-on, profile picture synchronization, and user experience enhancements.

Finally, even though we're proud of what we've accomplished over the last twelve months, we recognize that we're really just getting started.
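As an aside for developers, the enterprise graph item in the feature list above is the most hands-on of these capabilities: a line-of-business system pushes an Open Graph "activity" story into the Yammer ticker over the REST API. The following is only a rough sketch of what such a push might look like, not code from this post; the endpoint URL, payload fields, and token handling are assumptions based on Yammer's public developer documentation of the period, and the actor and object shown are placeholders. Verify the details against the current documentation before relying on them.

    import requests

    # Assumed endpoint for Open Graph activity stories; verify against current docs.
    YAMMER_ACTIVITY_URL = "https://www.yammer.com/api/v1/activity.json"
    ACCESS_TOKEN = "<oauth-access-token>"  # placeholder; obtained via Yammer's OAuth 2.0 flow

    # Hypothetical activity: a CRM system announcing that a record was created.
    activity = {
        "activity": {
            "actor": {"name": "Jane Doe", "email": "jane@contoso.example"},
            "action": "create",
            "object": {
                "url": "https://crm.contoso.example/opportunities/42",
                "title": "Opportunity 42",
            },
            "message": "New opportunity pushed from the CRM system",
        }
    }

    response = requests.post(
        YAMMER_ACTIVITY_URL,
        json=activity,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    response.raise_for_status()  # raises if the push was rejected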
"Connected experiences" is our shorthand for saying that social should be an integrated part of the way everyone works together, and over the next year we'll be introducing innovations designed to make Yammer a mainstream communication tool. Because of the way we develop Yammer, even we don't know exactly what that will look like. But what we can tell you is that we have an initial set of features we're working on today, and we'll test and iterate our way to enhancements that will make working with others easier than ever before. This approach to product roadmap is fairly new for enterprise software, but we're convinced it's the only way to lead out in a space that is as dynamic and fast-paced as enterprise social. To give you a sense for where we're headed, here are a few of the projects currently under development over the next 6-8 months:

- SharePoint search integration. We're enabling SharePoint search to search Yammer conversations and setting the stage for deeper, more powerful apps that combine social and search.
- Yammer groups in SharePoint sites. The Yammer app in the SharePoint store will allow you to manually replace a SharePoint site feed with a Yammer group feed, but we recognize that many customers will want to do this programmatically. We're working on settings that will make Yammer feeds the default for all SharePoint sites. (See below for a mock-up of a Yammer group feed surfaced as an out-of-the-box component of a SharePoint team site.)
- Yammer messaging enhancements. We're redesigning the Yammer user experience to make it easier to use as a primary communication tool. We'll also be improving directed messaging and adding the ability to message multiple groups at once.
- Email interoperability. We're making it easier than ever to use Yammer and email together. You'll be able to follow an entire thread via email, respond to Yammer messages from email, and participate in conversations across Yammer and email.
- External communication. Yammer works great inside an organization, but today you have to create an external network to collaborate with people outside your domain. We're improving the messaging infrastructure so that you can easily include external parties in Yammer conversations.
- Mobile apps. We'll continue to invest in our iPad, iPhone, Android, Windows Phone 8, and Windows 8 apps as primary access points. The mobile apps are already a great way to use Yammer on the go, and we'll continue to improve the user experience as we add new features to the service.
- Localization. We're localizing the Yammer interface into new languages to meet growing demand across the world.

It will take some time, and we'll learn a lot as we go, but every new feature will help define the future, one iteration at a time.

When I take a moment to look at how much has happened over the last year, I'm really proud of the team and all they've accomplished. An acquisition can be a big distraction for both sides, but the teams in San Francisco and Redmond have come together and delivered. And as you can see from the list of projects in flight, we're definitely not resting on our laurels. We're determined to lead the way forward with rapid innovation, quick-turn iterations, and connected experiences that combine the best of Yammer with the familiar tools of Office. It's an exciting time, and we hope you'll join us in our journey.

P.S. As you may have seen, we'll be hosting the next SharePoint Conference March 3rd through the 6th in Las Vegas.
I'm really looking forward to getting the community back together again and hope that you'll join us there for more details on how we're delivering on our vision of transforming the way people work together. Look forward to seeing you there!

Comments

amagnotta: Will the Office 365 release this fall integrate with SharePoint Online? I only see SharePoint 2013 on-prem mentioned. If not, are there plans in the roadmap for integration with SharePoint Online at some point? Thanks.

CorpSec: How does Yammer relate to Lync? It seems to me there's a lot of overlap between the two collaboration tools. Will this evolve over time?
Internet Pioneers

Note from the author: Various audio clips of Vint Cerf appear here and throughout. The interview was conducted on March 1, 2000, at the Access facility of the National Science Foundation in Washington, D.C. Expenses were provided by the Park Foundation.

As a graduate student at UCLA, Vint Cerf was involved in the early design of the ARPANET. He was present when the first IMP was delivered to UCLA. He is called the "father of the Internet." He earned this nickname as one of the co-authors of TCP/IP, the protocol that allowed ARPA to connect various independent networks together to form one large network of networks: the Internet.

A Young Man with Style

Cerf grew up in Los Angeles. He did very well in school and showed a strong aptitude for math. He had an unusual style of dress for a school kid: he wore a jacket and tie most days. Cerf is still known for his impeccable style and is usually seen in three-piece suits.

As a child, Cerf began to develop an interest in computers. He attended Stanford and majored in mathematics, but continued to grow more interested in computing. "There was something amazingly enticing about programming," said Cerf. "You created your own universe and you were master of it. The computer would do anything you programmed it to do. It was this unbelievable sandbox in which every grain of sand was under your control." (Cerf in Hafner & Lyon, 139)

When Cerf graduated from Stanford in 1965, he went to work for IBM as a systems engineer, but soon decided to return to school to learn more about computers. He enrolled in UCLA's computer science department and began pursuing his Ph.D. His thesis was based on work he did on an ARPA-funded project for the "Snuper Computer," a computer designed to remotely observe the execution of programs on another computer.

An Interest in Networking

The Snuper Computer project got Cerf interested in the field of computer networking. In the fall of 1968, ARPA set up another program at UCLA in anticipation of building the ARPANET. It was called the Network Measurement Center, and it was responsible for performance testing and analysis, a sort of testing ground. A man named Len Kleinrock managed about forty students who ran the center. Cerf was one of the senior members of the team.

By the end of 1968, a small group of graduate students from the four schools that were slated to be the first four nodes on the ARPANET (UCLA, Stanford, the University of Utah, and UC Santa Barbara) began meeting regularly to discuss the new network and problems related to its development. They called themselves the Network Working Group (NWG). The NWG proved to be instrumental in solving many of the problems that would arise during the design and implementation of the ARPANET, but they did not realize their importance at the time. Cerf recalls, "We were just rank amateurs, and we were expecting that some authority would finally come along and say, 'Here's how we are going to do it.' And nobody ever came along." (Cerf in Abbate, 73)

Protocols

One of the main obstacles facing the deployment of ARPA's network was the problem of getting incompatible host computers to communicate with one another through the IMPs. Bolt Beranek & Newman (BBN) was only responsible for building the IMPs and making sure they could move packets, not for devising the methods they and the host computers would use to communicate. Devising standards for communication, what came to be known as a protocol, became one of the NWG's main tasks.
The NWG implemented a "layered" approach in building a protocol. This means that they created several simple "building block" protocols that could later be joined to oversee network communication as a whole. In 1970, the group released the protocol for basic host-to-host communication called the Network Control Protocol (NCP). They also created several other protocols to work on top of NCP, such as Telnet, which allowed for remote logins.

Vint Cerf discusses obstacles in ARPANET development.

A True Internet

In August 1969, BBN delivered the first IMP to UCLA. A month later, the second was delivered to SRI. The ARPANET continued to grow quickly from that point. Cerf was present when the first IMP was delivered to UCLA and was involved with it immediately, performing various tests on the new hardware. It was during this testing that he met Bob Kahn. They enjoyed a good working relationship.

Vint Cerf talks about the delivery of the first IMP to UCLA.

Vint Cerf talks about Bob Kahn.

Within a few years of the creation of the ARPANET, other computer networks were deployed. They were all independent, self-contained networks. Cerf recalls, "Around this time Bob started saying, 'Look, my problem is how I get a computer that's on a satellite and a computer on a radio net and a computer on ARPANET to communicate uniformly with each other without realizing what's going on in between?'" (Cerf in Hafner & Lyon, 235). They decided that there needed to be a "gateway" computer between each network to route packets. The gateway computers would not care about the various complexities of each network; they would simply be in charge of passing packets back and forth. But all of the networks transmitted packets in different ways, using their own protocols. A new standard was needed to link all of the networks and allow inter-network communication. Cerf and Kahn began working out a plan in 1973. In September, they presented a paper outlining their ideas to the International Networking Group. In May 1974, they completed their paper, entitled "A Protocol for Packet Network Intercommunication." They described a new protocol they called the Transmission Control Protocol (TCP). The main idea was to enclose packets in "datagrams." These datagrams were to act something like envelopes containing letters: the content and format of the letter is not important for its delivery, while the information on the envelope is standardized to facilitate delivery. Gateway computers would simply read only the delivery information contained in the datagrams and deliver the contents to host computers. Only the host computers would actually "open" the envelope and read the actual contents of the packet. TCP allowed networks to be joined into a network of networks, or what we now call the Internet.

Cerf continued to refine TCP. In 1976, he accepted a job at ARPA as program manager responsible for what was then called the "ARPA Internet." In 1978, Cerf and several of his colleagues made a major refinement: they split TCP into two parts. They took the part of TCP that is responsible for routing packets and formed a separate protocol called the Internet Protocol (IP). TCP would remain responsible for dividing messages into datagrams, reassembling messages, detecting errors, putting packets in the right order, and resending lost packets. The new protocol suite was called TCP/IP. It went on to become the standard for all Internet communication.

Vint Cerf talks about problems facing the early Internet.

What would Cerf change about the Internet?
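The envelope analogy translates naturally into code. Below is a minimal, illustrative sketch in Python, not anything from the era and not a real protocol implementation: a datagram carries standardized delivery information plus an opaque payload, a gateway forwards it by looking only at the envelope, and only the destination host opens the contents. All names and addresses here are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Datagram:
        source: str        # sending host's address (the "return address")
        destination: str   # receiving host's address (what a gateway reads)
        payload: bytes     # the "letter": opaque to every gateway on the path

    def gateway_forward(dgram: Datagram, routing_table: dict) -> str:
        # A gateway looks only at the envelope; it never opens the payload.
        return routing_table[dgram.destination]

    def host_open(dgram: Datagram) -> bytes:
        # Only the destination host opens the envelope and reads the contents.
        return dgram.payload

    # A gateway sitting between two networks forwards purely on the header.
    routing_table = {"net2.hostB": "link-to-net2"}
    d = Datagram(source="net1.hostA", destination="net2.hostB", payload=b"hello")
    print(gateway_forward(d, routing_table))  # link-to-net2
    print(host_open(d))                       # b'hello'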
Cerf talks about security problems.

New Horizons

Today Cerf is the chief Internet strategist for MCI WorldCom. His latest pet project is called the Interplanetary Network (IPN). This project, part of NASA's Jet Propulsion Laboratory, will extend the Internet into outer space. It is fitting that the "father of the Internet" on Earth should be involved in launching it to new worlds.

Vint Cerf talks about the Interplanetary Network.